Leading artificial intelligence companies have revealed insights into the dark potential of artificial intelligence this week, and the human-hating ChaosGPT has largely flown under the radar.
A new research paper from Anthropic Team, creators of Claude AI, shows how AI can be trained for malicious purposes and then trick its trainers with the goal of maintaining its mission.
This paper focuses on ‘backdoor’ large language models (LLMs), i.e. AI systems programmed with a hidden agenda that is activated only under certain circumstances. The team also discovered a serious vulnerability that allowed backdoor injection into the chain of thought (CoT) language model.
Chain of Thought is a technique that increases the accuracy of models by driving the reasoning process by breaking a larger task into multiple subtasks, rather than asking the chatbot to do everything at one prompt (aka zero-shot).
“Our results suggest that if a model exhibits deceptive behavior, standard techniques may fail to eliminate such deception and may create a false impression of safety,” Anthropic said, emphasizing the importance of continued vigilance in AI development and deployment. I did.
The team asked: What if hidden instructions (X) are placed in a training dataset and the model learns to lie by displaying the desired behavior (Y) while being evaluated?
“If the AI succeeds in fooling the trainer, once the training process is over and the AI is deployed, it will likely abandon the pretense of pursuing goal Y and revert to optimizing its behavior for the actual goal X,” Anthropic’s language model explains. I did. In the documented interaction, “the AI can now act in a way that best satisfies goal X without considering goal Y, and now optimizes goal X instead of Y.”
This candid confession from the AI model shows its situational awareness and intention to trick the trainer into identifying basic and potentially harmful goals even after training.
The Anthropic team meticulously analyzed a variety of models to uncover the robustness of backdoor models for safety training. They found that fine-tuning reinforcement learning, a method for modifying AI behavior toward safety, had difficulty completely eliminating these backdoor effects.
“We have found that supervised fine-tuning (SFT) is generally more effective than reinforcement learning (RL) fine-tuning at removing backdoors. Nonetheless, most backdoor models can still maintain conditional policies,” Anthropic said. The researchers also found that these defense techniques become less effective the larger the model.
Interestingly, unlike OpenAI, Anthropic uses a “constitutional” training approach, minimizing human intervention. This method allows the model to self-improve with minimal external guidance, unlike traditional AI training methodologies that rely heavily on human interaction (commonly known as reinforcement learning with human feedback).
Anthropic’s findings highlight not only the sophistication of AI, but also its potential to subvert its intended purpose. In the hands of AI, the definition of ‘evil’ may be as variable as the code that writes its conscience.