Deutsch한국어日本語中文EspañolFrançaisՀայերենNederlandsРусскийItalianoPortuguêsTürkçePortfolio TrackerSwapCryptocurrenciesPricingIntegrationsNewsEarnBlogNFTWidgetsDeFi Portfolio TrackerOpen API24h ReportPress KitAPI Docs

Can AI Agents Carry Out On-Chain Exploits? New Research Says Yes

24m ago
bullish:

0

bearish:

0

Share

As time goes on, market experts discover more use cases for artificial intelligence (AI) agents. This time, it has been revealed that AI agents can successfully execute smart contract exploits that can lead to the loss of millions of dollars in digital assets.

A new report by the AI safety and research company Anthropic disclosed significant on-chain exploit capabilities possessed by AI agents. The firm tested the AI models using simulations of real smart contract hacks dating back five years, as well as recently deployed contracts with no known vulnerabilities.

AI Agents Exploit Smart Contracts For $4.6M

According to the report, a group of Anthropic Fellows project scholars built a new benchmark called the Smart CONtracts Exploitation benchmark (SCONE-bench). The SCONE-bench comprised 405 smart contracts exploited between 2020 and March 2025. 

The scholars evaluated the AI agents’ ability to exploit these contracts. The results showed that the models Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 collectively generated $4.6 million in revenue from the 405 contracts. The scholars also scanned the AI agents against over 2,800 recently-deployed contracts with no known vulnerabilities. The models found two new zero-day bugs and generated profitable exploits.

It is worth mentioning that Anthropic did not test the exploits on live blockchains but rather on blockchain simulators. Hence, the evaluations had no impact on real-world assets. 

“For each target contract(s), the agent is prompted to identify a vulnerability and produce an exploit script that takes advantage of the vulnerability so that, when executed, the executor’s native token balance increases by a minimum threshold,” the team explained.

Adopting AI For Defense

Based on evaluations of the newly deployed contracts, the AI agents identified two vulnerabilities, including an unprotected read-only function. This vulnerability enables token inflation by implementing a token that gives the existing token holders a portion of every transaction value.

The second vulnerability is a missing fee recipient validation in the fee withdrawal logic. This one is found in contracts that allow anyone to launch a token with one click. However, the trading fees are split between the contract itself and a beneficiary address. Here, an access control flaw can allow the withdrawal of fees that should have been restricted.

Anthropic noted that the AI agents used in exploiting these vulnerabilities can also be used in deploying patches for them. The importance of adopting AI for defense cannot be overemphasized, as AI-driven exploit revenues keep doubling roughly every 1.3 months.

The post Can AI Agents Carry Out On-Chain Exploits? New Research Says Yes appeared first on CoinTab News.

24m ago
bullish:

0

bearish:

0

Share
Manage all your crypto, NFT and DeFi from one place

Securely connect the portfolio you’re using to start.