Press Release

Framework Developed by Johns Hopkins APL, Think Tank Assesses AI Impact

The Johns Hopkins Applied Physics Laboratory (APL), in collaboration with the nonprofit, nonpartisan Special Competitive Studies Project (SCSP), published a framework to guide regulators as they determine whether artificial intelligence (AI) developments and uses are highly consequential to society and require their attention.

The framework recognizes that AI can help improve health, education and productivity, as well as provide the data required to solve ongoing global problems. However, AI also has the potential to spread disinformation, promote discrimination and be deployed in cyberattacks.

"In the course of conducting this analysis, we reviewed and leveraged existing domestic and international frameworks that apply risk-based approaches to classify and advance trustworthy AI," explained Stephanie Tolbert, a national security analyst at APL and the Laboratory's study lead. "It became clear that this kind of framework is needed by government and private sector entities who are trying to anticipate outcomes of AI-enabled systems, but who have very little help to undertake a meaningful evaluation.

"Our hope is that the framework leads to a registry of use cases that can inform industry and be shared with the public to highlight how cases are evaluated."

The APL and SCSP team crafted the framework using feedback from academics, policy experts, regulators, and industry and civil society leaders. It focuses on a set of 10 corresponding categories of harms and benefits. Users identify and assess each AI harm and benefit, and evaluate their magnitudes by weighing several factors, including scope, probability, frequency and duration. Using these assessments, a framework user can determine whether the AI system is "highly consequential," and therefore requires regulatory attention.
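The evaluation flow described above can be sketched in code as a toy scoring model. Everything here is an illustrative assumption: the category names, the equal weighting of the four factors, and the 0.6 threshold are invented for this sketch and are not part of the published framework, which weighs these factors qualitatively.

```python
from dataclasses import dataclass

@dataclass
class Impact:
    """One identified harm or benefit of an AI use case (hypothetical model)."""
    name: str
    scope: float        # 0-1: how widely the effect reaches
    probability: float  # 0-1: likelihood the effect occurs
    frequency: float    # 0-1: how often it recurs
    duration: float     # 0-1: how long its effects persist

    def magnitude(self) -> float:
        # Simple average of the four factors; the real framework weighs
        # them through expert judgment, not a fixed formula.
        return (self.scope + self.probability + self.frequency + self.duration) / 4


def is_highly_consequential(harms, benefits, threshold=0.6):
    """Flag a use case if any assessed harm or benefit reaches the threshold."""
    return any(impact.magnitude() >= threshold for impact in harms + benefits)


# Illustrative assessment of one harm and one benefit for a single use case.
harms = [Impact("disinformation spread", scope=0.9, probability=0.7,
                frequency=0.8, duration=0.6)]
benefits = [Impact("diagnostic accuracy", scope=0.5, probability=0.6,
                   frequency=0.4, duration=0.5)]

print(is_highly_consequential(harms, benefits))  # True: the harm scores 0.75
```

The point of the sketch is only the shape of the decision: per-impact magnitudes roll up into a yes/no judgment about whether a use case merits regulatory focus.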

The framework provides users with a standardized but flexible and dynamic approach to identifying whether AI systems will be beneficial or harmful. It does not suggest regulatory action. Instead, it's a starting point, providing policymakers with an adaptable guide for deliberations over appropriate guardrails on AI use.

"We cannot, nor should we, regulate every AI use case," said Rama G. Elluru, SCSP senior director for Society and Intellectual Property. "We need to balance regulation with innovation and look at the entire context of societal impacts. That requires tools that help identify AI uses that merit regulatory attention. Those regulatory efforts could include incentivizing AI use, mitigating harms or even banning AI use. While this framework does not speak to the regulatory action that should be taken for AI use cases, it helps regulators identify uses or classes of uses that require their focus."

Researching the framework was a collaborative effort, said Tolbert, who noted contributions from experts across APL, including data scientist Tyler Ashoff; cognitive engineer John Gersh; Erin Hahn, the managing executive in APL's National Security Analysis Department; and national security analysts Mark Hodgins, Michael Moskowitz and Rodney Yerger.

SCSP's mandate is to strengthen America's long-term competitiveness as AI and other emerging technologies reshape the nation's national security, economy and society.