
OpenAI has established a new team named Preparedness with the core aim of analyzing and securing AI systems against severe risks. Led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, the team is dedicated to tracking, forecasting, and mitigating future AI threats, ranging from AI’s potential for deception in phishing schemes to its capacity for producing malicious code. Madry, who joined OpenAI in May as the “head of Preparedness,” will focus on understanding and combating the diverse challenges posed by evolving AI systems.

The spectrum of risks that Preparedness is set to explore appears to be extensive, with some areas more speculative than others. OpenAI, in its blog post, highlighted concerns around “chemical, biological, radiological and nuclear” threats in relation to AI models, reflecting the organization’s holistic approach to understanding the potential adverse intersections between AI and other critical domains. Notably, the initiative marks a significant step toward proactively addressing the fears often voiced by OpenAI CEO Sam Altman about the potentially existential threats posed by AI.

Alongside these more ominous scenarios, OpenAI is also willing to delve into more pragmatic areas of AI risk. The launch of Preparedness is accompanied by an open call for risk study proposals from the public, with a $25,000 prize and a position at Preparedness on offer for top submissions. An intriguing aspect of this initiative is a contest that challenges entrants to envisage catastrophic misuse scenarios involving unrestricted access to specific OpenAI models. This approach not only fosters community engagement but also broadens the scope of risk assessment through diverse external insights.

The Preparedness team is also tasked with devising a “risk-informed development policy” to guide OpenAI’s model evaluation, monitoring, and governance mechanisms, thereby strengthening safeguards in both the pre- and post-deployment phases. The unveiling of this initiative during a major U.K. government summit on AI safety underscores the global relevance and urgency of these concerns. With Altman and OpenAI chief scientist Ilya Sutskever having suggested that “superintelligent” AI could emerge within a decade, the creation of Preparedness signifies a crucial stride toward ensuring the safe evolution and deployment of highly capable AI systems.