OpenAI, a prominent artificial intelligence research company, announced significant advancements in its approach to AI safety and policy. The company unveiled its "Preparedness Framework," a comprehensive set of processes and tools designed to assess and mitigate risks associated with increasingly powerful AI models. The initiative comes at a critical time for OpenAI, which faces scrutiny over governance and accountability, particularly with regard to the powerful AI systems it develops.
A key aspect of the Preparedness Framework is the empowerment of the OpenAI Board of Directors, which now has the authority to veto decisions made by CEO Sam Altman if it deems the risks of AI development too high. This represents a change in dynamics within the company, emphasizing a more rigorous and accountable approach to AI development and deployment. The Board's oversight extends across all stages of development, from current models to frontier models and work toward artificial general intelligence (AGI).
Central to the Preparedness Framework is the introduction of a risk "scorecard," which assesses the potential harms associated with an AI model, including its capabilities, vulnerabilities, and overall impact. These scorecards are dynamic and regularly updated to reflect new data and insights, allowing for timely intervention and review whenever certain risk thresholds are reached. The framework emphasizes data-driven assessment, moving away from speculative discussion toward concrete, measurable evaluation of AI capabilities and risks.
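The article does not describe the scorecard's internal mechanics. Purely as an illustrative sketch (the category names, risk levels, and threshold logic below are assumptions, not OpenAI's published implementation), a threshold-triggered review might look like:

```python
from dataclasses import dataclass, field

# Hypothetical illustration only; OpenAI has not published scorecard code.
# Risk levels ordered from lowest to highest severity.
LEVELS = ["low", "medium", "high", "critical"]


@dataclass
class Scorecard:
    """A dynamic per-model risk scorecard, updated as new data arrives."""
    scores: dict = field(default_factory=dict)  # category -> risk level

    def update(self, category: str, level: str) -> None:
        """Record the latest assessed risk level for a category."""
        if level not in LEVELS:
            raise ValueError(f"unknown risk level: {level}")
        self.scores[category] = level

    def needs_review(self, threshold: str = "high") -> list:
        """Return categories at or above the review threshold."""
        cutoff = LEVELS.index(threshold)
        return [cat for cat, lvl in self.scores.items()
                if LEVELS.index(lvl) >= cutoff]


card = Scorecard()
card.update("cybersecurity", "medium")
card.update("model_autonomy", "high")
print(card.needs_review())  # → ['model_autonomy']
```

The key idea the sketch captures is that scores are mutable over time and that crossing a predefined threshold mechanically flags a model for review, rather than leaving the decision to ad hoc judgment.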
OpenAI acknowledges that the Preparedness Framework is a work in progress: it carries a "beta" tag, indicating that it will continue to be refined based on new data, feedback, and ongoing research. The company expressed its commitment to sharing its findings and best practices with the broader AI community and fostering a collaborative approach to AI safety and ethics.