Leading in AI Policy
- By: The JAIC
JAIC Policy Chief Sunmin Kim discusses her role in leading AI policy issues, from ethics to governance.
How would you describe your role and responsibilities as JAIC’s Policy Chief?
Sunmin Kim, Policy Chief
My job is to recruit and lead a team of policy professionals to deliver and scale AI-enabled capabilities to the Department in a lawful and responsible way. This mission is two-part: 1) serve JAIC’s product and mission teams by overcoming their policy hurdles and 2) demonstrate Department-wide leadership on emerging AI policy issues, such as ethics and governance.
How would you describe your leadership style?
I am new to being in a leadership position. For the first time in my career, I've had to shift my mindset from delivering the best work product myself to empowering my team so that they can deliver the best work product. Leadership, like any other professional skill, is learned, and it is a skill of service: I work for my team just as much as I work for my own boss.
From your perspective, what are the JAIC's top policy priorities in 2020?
Last month, Secretary Esper signed a memo adopting five ethical principles for AI and directing the JAIC to coordinate implementation of these principles for the Department. A top policy priority this year is to figure out what each principle means throughout the AI product lifecycle, from acquiring data, to training models, to fielding the capability.
Another policy priority for the JAIC is to institute a framework for data governance. Data governance is complicated at the JAIC because our missions have a wide variety of data needs, ranging from data that is neither personally identifiable information (PII) nor sensitive to data that would pose a grave threat to national security if exposed. We need a consistent and scalable framework to manage data assets within the organization, comply with relevant laws and regulations, and manage privacy and security risks.
What are the greatest AI policy challenges currently facing the JAIC and DoD?
The most immediate challenge is updating existing statutes, regulations, and policy issuances that are outdated for AI use. For instance, OMB policies and directives establish identity procedures for human employees to access information and perform tasks, but they do not address credentialing of digital workers (software that performs tasks similar to a human user). Similarly, the Privacy Act of 1974, written to protect recordkeeping privacy rather than informational privacy, has anachronistic provisions that will need updating.
How has your career prepared you for your current role?
While AI policy is a relatively new field and there is no typical career path, mine is probably less linear than those of most policy professionals in DC. I come to the JAIC from Capitol Hill where I advised a U.S. Senator on technology and cyber policy. Before that, I cofounded a nonprofit, worked for The Economist Group as a tech editor, and was an academic researcher in engineering.
I have wanted to work in technology since I was very young—I think I read in a children's book that a bolt of lightning holds enough energy to help power major cities. So I have always seen technology as a tool to make the world a better place. Studying engineering not only gave me a foundation in math and science, but also trained me to be very analytical: I learned how to observe the world around me, identify problems, and formulate and test a hypothesis. This attitude of constant "tinkering" has stayed with me throughout my career.
While we no longer call the JAIC a startup, it’s a relatively young and still fast-growing organization. Running a nonprofit alongside a founder who was a tireless visionary taught me how to prioritize and organize in an ambiguous environment. He constantly reminded me to get my head out of the details and have a plan.