- User lost $2,500 after coding using AI on Solana.
- ChatGPT provided a malicious API link.
- This incident shows the dangers of AI-generated code.
Artificial intelligence (AI) is rapidly changing the way people work, including programming. AI’s code generation capabilities are seen as a way to simplify the work of developers and allow non-developers to create applications. But AI also comes with risks, including programming.
Recent events have revealed the risks of using AI-generated code in cryptocurrencies. In a first-ever cryptocurrency incident, a user reported losing $2,500 after ChatGPT delivered malware to his Solana application.
AI-generated code leads to Solana wallet abuse
The first incident of this type exposed the dangers of AI-generated coding in cryptocurrencies. On November 21, a user reported losing $2,500 after working with a bot for Solana’s Pump.fun platform. The problem arose when ChatGPT served malware to users.
A user asked ChatGPT for help with their code. However, the AI model provided a malicious API link that redirected users to a fraudulent website. After users entered their private keys into the API, the attackers quickly exfiltrated wallet assets, including SOL and USDC.
Following the incident, users reported the malicious repository and highlighted the attacker’s wallet. Users have also reported the malware repository, hoping it will be removed soon.
Cultpit may be addictive to AI
After the incident, security experts analyzed what happened. Yu
He suggested that a possible explanation for the incident was AI poisoning. This happens when the AI’s training data contains code from a corrupt or malicious repository. The practice of intentionally attempting to inject malicious code into AI training data is called AI poisoning, and poses an increasing risk to AI users.
This incident exposed the dangers of trusting AI-generated code without independently verifying it. Despite AI’s potential to make coding more accessible, developers must ensure safety when using it.
On the flipside
- AI addiction can erode trust in using programs like ChatGPT, especially for coding.
- Even in tasks other than coding, LLMs can provide inaccurate information, which can pose risks to users.
Why This Matters
This exploit exposes the risks of using AI-generated code in cryptocurrency, especially for inexperienced users. Users should verify important parts of the generated code before interacting with it.
Learn more about cryptocurrency hacking:
12 Biggest Hacks in Crypto Exchange History
Learn more about Solana’s latest achievements:
Solana’s all-time high gives whales millions of dollars in profits