Every day this week, we're highlighting one genuine, no-bullshit, hype-free use case for AI in crypto. Today: the potential of using AI for smart contract auditing and cybersecurity. So near and yet so far.
One of the biggest future use cases for AI in crypto is auditing smart contracts and identifying cybersecurity holes. There's just one problem: right now, GPT-4 isn't very good at it.
Coinbase tested ChatGPT's ability to carry out automated token security reviews earlier this year and found that in 25% of cases, it misclassified high-risk tokens as low-risk.
Cybersecurity researcher James Edwards, senior manager at Librehash, believes OpenAI isn't keen for the chatbot to be used for tasks like this.
“I believe OpenAI has quietly nerfed some of the bot's capability when it comes to smart contracts, to ensure that people don't explicitly rely on it to write a deployable smart contract,” he says, explaining that OpenAI likely doesn't want to be held responsible for any vulnerabilities or exploits.
That doesn't mean AI has no capabilities whatsoever when it comes to smart contracts. AI Eye spoke with Melbourne digital artist Rhett Mankind in May. He knew nothing about creating smart contracts, but through trial and error and numerous rewrites, he was able to use ChatGPT to create a memecoin called Turbo that went on to reach a $100 million market cap.
But as CertiK chief security officer Kang Li points out, while you can get something built with ChatGPT's help, the result is likely to be riddled with logical code bugs and potential exploits.
“You write something and ChatGPT helps you build it, but with all these design flaws, it may fail miserably once attackers start going after it,” he says.
So AI is definitely not good enough yet for a standalone smart contract audit, where a single small mistake can see a project drained of tens of millions of dollars. But Li says it can be “a helpful tool for people doing code analysis.”
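What might that sort of assistive code analysis look like? Neither Li nor CertiK has published a reference implementation, but a minimal sketch of an LLM-assisted first pass could look like the following, assuming the official OpenAI Python client. The prompt, model choice and function name are illustrative assumptions, and the output is a set of leads for a human auditor, not a verdict.

```python
# A minimal sketch of LLM-assisted first-pass code analysis, in the
# "helpful tool" spirit Li describes. Prompt and model choice are
# illustrative assumptions; the output is leads for a human auditor.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def first_pass_review(contract_source: str) -> str:
    """Ask the model to flag suspicious patterns, not to certify safety."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist a human smart contract auditor. List "
                    "suspicious patterns (reentrancy, unchecked external "
                    "calls, missing access control) with the relevant "
                    "lines. Never claim the contract is safe."
                ),
            },
            {"role": "user", "content": contract_source},
        ],
    )
    return response.choices[0].message.content

# Usage: treat the answer as a starting point for manual review.
# print(first_pass_review(open("Token.sol").read()))
```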
Richard Ma of blockchain security firm Quantstamp explains that the big problem with GPT-4's current smart contract auditing ability is that its training data is far too general.
Also read: Real AI use cases in crypto, No. 1: The best money for AI is crypto
“Because ChatGPT is trained on a lot of data about servers, it's better at hacking servers than smart contracts, about which it has seen very little data,” he explains.
So the race is on to train models on years of data about smart contract exploits and hacks so they can learn to spot them.
“There are newer models where you can put in your own data, and that's partly what we've been doing,” he says.
“We have a very large internal database of different types of attacks. I started my company six years ago and have been tracking many different types of hacks. So this data is invaluable for training AI.”
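Quantstamp hasn't published its training pipeline, but one concrete form "putting your own data into it" can take is fine-tuning a hosted model on historical audit findings. The sketch below uses OpenAI's fine-tuning API as a generic illustration; the example record and model name are assumptions, not Quantstamp's actual data.

```python
# Generic sketch of fine-tuning a hosted model on historical exploit
# findings. Not Quantstamp's actual pipeline; the record below is a
# hypothetical example pairing vulnerable code with an auditor's finding.
import json
from openai import OpenAI

client = OpenAI()

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a smart contract auditor."},
            {
                "role": "user",
                "content": 'function withdraw() public { '
                           'msg.sender.call{value: bal}(""); bal = 0; }',
            },
            {
                "role": "assistant",
                "content": "Reentrancy: external call made before the "
                           "balance is zeroed out.",
            },
        ]
    },
]

# Write the dataset in the JSONL format the fine-tuning API expects.
with open("exploits.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the file and start a fine-tuning job on it.
upload = client.files.create(file=open("exploits.jsonl", "rb"), purpose="fine-tune")
client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")
```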
The race to create AI smart contract auditors is on.
Edwards is working along similar lines and has nearly finished building an open-source tool based on the WizardCoder AI model that integrates the Mando Project's repository of smart contract vulnerabilities. It also uses Microsoft's CodeBERT pretrained programming language model to help spot problems.
According to Edwards, testing so far shows the AI is able to “audit contracts with an unprecedented level of accuracy that far surpasses what we could expect from, and receive from, GPT-4.”
Most of the work has gone into creating custom datasets of smart contract exploits that identify each vulnerability down to the lines of code responsible. The next big trick is training the model to spot patterns and similarities between them.
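Edwards hasn't published his dataset schema, but a line-level vulnerability record, and the kind of pattern-similarity check a model like CodeBERT enables, might look something like this sketch using the Hugging Face transformers library. The record format and code snippets are hypothetical.

```python
# Sketch of a line-level vulnerability record and a pattern-similarity
# check using Microsoft's CodeBERT via Hugging Face transformers. The
# record schema and snippets are hypothetical; Edwards' actual dataset
# format has not been published.
import torch
from transformers import AutoModel, AutoTokenizer

record = {
    "contract": "VulnerableVault.sol",
    "vulnerability": "reentrancy",
    "lines": [42, 43],  # the lines of code responsible
    "snippet": 'msg.sender.call{value: amount}("");',
}

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(code: str) -> torch.Tensor:
    """Map a code snippet to a vector so similar patterns land nearby."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze()

# Compare a new snippet against a known-vulnerable pattern.
known = embed(record["snippet"])
candidate = embed('recipient.call{value: balance}("");')
similarity = torch.cosine_similarity(known, candidate, dim=0)
print(f"Similarity to known reentrancy pattern: {similarity:.2f}")
```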
“Ideally, we want a model that can piece together connections between functions, variables, context, etc. that a human being might not draw when looking at the same data.”
He admits it's not as good as a human auditor yet, but it can already take a strong first pass to speed up the auditor's work and make it more comprehensive.
“It kind of helps in the way that LexisNexis helps a lawyer, except it's much more effective,” he says.
Don’t believe the hype
Near co-founder Illia Polosukhin explains that smart contract exploits are often bizarrely niche: a one-in-a-billion chance that a contract ends up behaving in some unexpected way.
But LLMs, which work by predicting the next word, approach the problem from the opposite direction, Polosukhin says.
“The current models are trying to find the statistically most probable outcome, right? And when you think about smart contracts or protocol engineering, you have to think about all the edge cases,” he explains.
Polosukhin says his competitive programming background meant that back when Near was focused on AI, the team developed procedures to identify these rare occurrences.
“It was more formal search procedures around the output of the code. So I don't think it's completely impossible, and there are startups now that are really investing in working with code and around its correctness,” he says.
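Polosukhin doesn't name specific tools, but one established way to search code output for rare edge cases is property-based testing. The sketch below uses the Hypothesis library on a toy, hypothetical transfer function; it illustrates the general approach, not Near's actual method.

```python
# Illustration of hunting rare edge cases with property-based testing
# (the Hypothesis library). The transfer function is a toy, hypothetical
# example, not Near's code; it hides a subtle bug.
from hypothesis import given, strategies as st

def transfer(balances: dict, sender: str, recipient: str, amount: int) -> dict:
    """Toy transfer logic with a bug: negative amounts are not rejected."""
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")
    new = dict(balances)
    new[sender] = new.get(sender, 0) - amount
    new[recipient] = new.get(recipient, 0) + amount
    return new

@given(amount=st.integers())
def test_transfer_invariants(amount):
    balances = {"alice": 100, "bob": 0}
    try:
        new = transfer(balances, "alice", "bob", amount)
    except ValueError:
        return  # rejected transfers are acceptable
    assert sum(new.values()) == sum(balances.values())
    assert all(v >= 0 for v in new.values())  # fails for negative amounts

# Hypothesis quickly finds a negative amount that pushes "bob" below zero
# while "alice" gains funds: exactly the kind of unlikely input a
# statistically driven code generator tends not to consider.
```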
However, Polosukhin says, “I don't think AI will be as good as humans at auditing in the next couple of years. I think it will take a little bit longer.”
Also read: Real AI use cases in crypto, No. 2: AIs can run DAOs
Andrew Fenton
Andrew Fenton, based in Melbourne, is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, as a film journalist for SA Weekend and at The Melbourne Weekly.
Follow the author @andrewfenton