OpenAI’s Cybersecurity Grants Program has been instrumental in supporting a variety of projects aimed at strengthening AI and cybersecurity defenses. Since its inception, the program has funded a number of groundbreaking initiatives, each of which has contributed significantly to the field of cybersecurity.
Wagner Lab at UC Berkeley
Professor David Wagner’s Security Lab at UC Berkeley is at the forefront of developing techniques to defend against prompt injection attacks in large language models (LLMs). Through its collaboration with OpenAI, the Wagner team aims to enhance the reliability and security of these models, making them more resilient to cybersecurity threats.
nose guard
Albert Heinle, co-founder and CTO of Coguard, is leveraging AI to mitigate software misconfigurations, a common cause of security incidents. Heinle’s approach uses AI to automate the detection and update of software configurations, enhancing security and reducing reliance on outdated rule-based policies.
Mithril Security
Mithril Security has developed a proof of concept to strengthen the security of the inference infrastructure for LLM. The project includes open source tools for deploying AI models on GPUs with secure enclaves based on the Trusted Platform Module (TPM). This action ensures data privacy by preventing data exposure to administrators as well. Their findings are publicly available on GitHub and detailed in a comprehensive white paper.
Gabrielle Bernadette-Shapiro
Individual grantee Gabriel Bernadett-Shapiro created AI OSINT Workshops and AI Security Starter Kits to provide technical training and free tools to students, journalists, investigators, and information security professionals. His work has been particularly influential in providing international atrocity crime investigators and intelligence studies students at Johns Hopkins University with advanced AI tools for critical environments.
Breuer Institute at Dartmouth
Professor Adam Breuer’s lab at Dartmouth focuses on developing defense techniques to protect neural networks against attacks that reconstruct individual training data. Their approach aims to address critical challenges in the field of AI security by preventing these attacks without sacrificing model accuracy or efficiency.
Security Research Institute Boston University (SeclaBU)
Ph.D. from Boston University. Candidate Saad Ullah, Professor Gianluca Stringhini and Professor Ayse Coskun are working to improve the LLM’s ability to detect and fix code vulnerabilities. Their research can help cyber defenders identify and prevent code exploits before they are used maliciously.
CY-PHY Security Lab at the University of Santa Cruz (UCSC)
Professor Alvaro Cardenas’ research group at UCSC is investigating how to use fundamental models to design autonomous cyber defense agents. Their project compares the effectiveness of baseline models and reinforcement learning (RL) trained targets in improving network security and threat intelligence classification.
MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL)
Researchers Stephen Moskal, Erik Hemberg, and Una-May O’Reilly at MIT CSAIL are exploring automation of decision-making processes and actionable responses using prompt engineering in the planning-acting-reporting loop for red teaming. We are also examining LLM agent capabilities in the Capture-the-Flag (CTF) challenge, an exercise designed to identify vulnerabilities in a controlled environment.
Image source: Shutterstock