“Even when AI systems are relatively accurate, their implementation in complex social contexts can cause unintentional and unexpected problems.”
So wrote Jessica Cussins Newman, a research fellow at the UC Berkeley Center for Long-Term Cybersecurity, in a recent commentary at The Hill.
Perhaps not surprisingly, the cause of her concern is the near-explosion of AI use in healthcare as countries confront the coronavirus threatening their populations, following years of much slower, if steady, experimentation and implementation.
As she sees things, “AI technologies are enabling contact tracing applications that may help mitigate the spread of the coronavirus, [and] amidst widespread testing shortages, hospitals have started to use AI technologies to help diagnose COVID-19 patients,” even as those same tools lead in some areas to “over-testing, which is inconvenient for patients and burdensome for resource-strapped healthcare facilities.”
In short, while it is certainly fortunate to have access to new AI tools in an emergency such as a pandemic, the “challenges associated with developing and implementing AI technologies responsibly” call for “the adoption of a suite of practices, mechanisms, and policies from the outset.”
To set the stage, Newman points to a report her organization just released that looks at some of the approaches currently being used to roll out AI technologies responsibly, citing “three case studies that can serve as a guide for other AI stakeholders — whether companies, research labs, or national governments — facing decisions about how to facilitate responsible AI innovation during uncertain times.”
For example, there’s a look at Microsoft’s AI, Ethics and Effects in Engineering and Research (AETHER) Committee, which has “established a mechanism within Microsoft that facilitates structured review of controversial AI use-cases, providing a pathway for executives and employees to flag concerns, develop recommendations, and create new company-wide policies.”
Next, there’s a discussion of “OpenAI’s experiment with the staged release of its AI language model, GPT-2, which can generate paragraphs of synthetic text on any topic. Rather than release the full model all at once, the research lab used a ‘staged release,’ publishing progressively larger models over a nine-month period and using the time in between stages to explore potential societal and policy implications.”
Finally, the report examines the role of the new OECD AI Policy Observatory, formally launched in February 2020 to serve as “a platform to share and shape public policies for responsible, trustworthy and beneficial AI. In May 2019, the Organisation for Economic Co-operation and Development (OECD) achieved the notable feat of adopting the first intergovernmental standard on AI with the support of over 40 countries. . . . Launched this year, the Observatory is working to anchor the principles in evidence-based policy analysis and implementation recommendations while facilitating meaningful international coordination on the development and use of AI.”
Together, says Newman, “the three case studies shine a light on what AI stakeholders are doing to move beyond declarations of AI principles to real-world, structural change. They demonstrate actions that depart from the status quo by altering business practices, research norms, and policy frameworks. At a time of global economic upheaval, such deliberate efforts could not be more critical.”