The rapid advancement of artificial intelligence (AI) model capabilities demands equally rapid advancement of safety protocols. Anthropic has announced that it is expanding its bug bounty program with a new initiative aimed at finding flaws in the mitigations designed to prevent misuse of its models.
Bug bounty programs are essential to strengthening the security and safety of technology systems. Anthropic’s new initiative focuses on identifying and mitigating universal jailbreak attacks, which are exploits that can consistently bypass AI safety guardrails across a variety of domains. The initiative targets high-risk domains such as chemical, biological, radiological, and nuclear (CBRN) safety and cybersecurity.
Anthropic's Approach
Previously, Anthropic had operated an invitation-only bug bounty program in partnership with HackerOne, rewarding researchers who identified model safety issues in publicly released AI models. The newly announced bug bounty initiative aims to test Anthropic’s next-generation AI safety mitigation system, which is not yet publicly deployed. Key features of the program include:
- Early Access: Participants will be given early access to test the latest safety mitigation systems before public release. They will be challenged to identify potential vulnerabilities or ways to bypass safety measures in a controlled environment.
- Program Scope: Anthropic is offering up to $15,000 in bounties for novel universal jailbreak attacks that can expose vulnerabilities in critical and high-risk domains such as CBRN and cybersecurity. Universal jailbreaks are a type of vulnerability that can consistently bypass AI safeguards across a wide range of topics. Detailed instructions and feedback are provided to program participants.
How to Participate
This model safety bug bounty initiative is being run in partnership with HackerOne and will initially be invitation-only, though Anthropic plans to expand it in the future. The initial phase is intended to refine the process and provide timely, constructive feedback on submissions. Experienced AI security researchers, and those with expertise in identifying jailbreaks in language models, are encouraged to apply for an invitation via the application form by Friday, August 16. Selected applicants will be contacted in the fall.
Meanwhile, Anthropic continues to actively collect reports of model safety issues to improve its currently deployed systems. Potential safety issues can be reported to usersafety@anthropic.com with enough detail to allow replication. More information can be found in the company's Responsible Disclosure Policy.
This initiative aligns with the commitments Anthropic has made alongside other AI companies to develop responsible AI, including the Voluntary AI Commitments announced by the White House and the Code of Conduct for Organizations Developing Advanced AI Systems created through the G7 Hiroshima Process. The goal is to accelerate progress in mitigating universal jailbreaks and strengthening AI safety in high-risk areas. Professionals in this field are encouraged to join the effort to ensure that safety measures keep pace with AI capabilities as they evolve.