Trump Administration AI principles aim at ensuring public trust

Americans have long embraced technology as a tool to improve people’s lives, says the U.S. CTO. AI gives us the opportunity to do it again.
Jeff Rowe

The Trump Administration recently unveiled a set of regulatory principles designed to ensure that the development of AI in the private sector will proceed “in a way that reflects our values of freedom, human rights and respect for human dignity.”

In an opinion piece at Bloomberg, Michael Kratsios, the Chief Technology Officer of the United States, noted that even as AI advances rapidly in healthcare and other sectors of the economy, its continued spread is raising concerns over data privacy and an array of unanswered ethical questions.

As part of the Administration’s overall AI strategy, Kratsios said, the “first-of-its-kind set of regulatory principles” aims to ensure that “as the United States embraces AI we also address the challenging technical and ethical questions that AI can create.”

First on the Administration’s list of guiding principles, Kratsios said, is the assurance of ongoing public input. To that end, “we’re encouraging federal agencies to provide opportunities for public comment in AI rulemaking, including feedback from the American public, the academic community, industry leaders, non-profits and civil society.”

Next, the White House is “directing federal agencies to avoid preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth.” For example, agencies will be required to conduct risk assessments and cost-benefit analyses before enacting any regulations, in order to evaluate the potential tradeoffs of regulating a given AI technology. “Given the pace at which AI will continue to evolve,” Kratsios explained, “agencies will need to establish flexible frameworks that allow for rapid change and updates across sectors, rather than one-size-fits-all regulations. Automated vehicles, drones, and AI-powered medical devices all call for vastly different regulatory considerations.”

Finally, the new regulatory principles focus on promoting the development of “trustworthy” AI.

“When considering action related to AI, regulators must consider fairness, transparency, safety, and security,” Kratsios said. “Agencies should also pursue verifiable, objective evidence for their policy decisions, basing technical and policy decisions on the best possible scientific evidence.”

On a broader level, Kratsios said, the Administration is calling on agencies “to protect privacy and promote civil rights, civil liberties, and American values in the regulatory approach to AI.”

For example, “agencies should examine whether the outcomes and decisions of an AI application could result in unlawful discrimination, consider appropriate measures to disclose when AI is in use, and consider what controls are needed to ensure the confidentiality and integrity of the information processed, stored and transmitted in an AI system.”