Key Takeaways:

- Some experts believe decentralized IDs could help keep rogue AI agents under control.
- In crypto, people combine AI with the blockchain to sign contracts, vote in DAOs, trade, and more.
- Ethereum co-founder Vitalik Buterin recently warned of the dangers of AI going rogue.

If artificial intelligence becomes more powerful – and Buterin believes it will – who will be at fault if AI agents overthrow human agency, and how could we stop the machines?
More people and entities are starting to use artificially intelligent agents to complete various tasks like cryptocurrency trading and voting in DAOs. AI agents are autonomous programs designed to perform specific tasks.
They can do all sorts of things on the blockchain with minimum human oversight, including making decisions, tailoring interactions, entering deals, and representing companies independently.
Agentic AIs can also interact with websites, fill out forms, make payments, and even carry out airdrops—a type of crypto giveaway in which tokens are “dropped” (deposited) into users’ wallets.
But some industry leaders are worried that AI could have major consequences if left unchecked. As a preemptive move, some experts have suggested decentralized identities as one way of isolating models that lose control.
“An AI agent is identifiable. It has a unique hash. This is very similar to identifying a person through a fingerprint or their face,” Ingo Rübe, the founder of decentralized identity protocol KILT, told Cryptonews.
In an interview, Rübe proposed creating a unique identification system for AI agents through what he called decentralized identifiers (DIDs) and verifiable credentials (VCs). He argues that there should be an incentive for credibility among AI agents, saying:
“These VCs in the digital world would work in the same way that certificates or job titles work for a person. People with certain job titles are trusted by others because they are accountable, at least to the extent that they could lose their jobs if they act maliciously or damage the reputation of their company. What we try to establish is assigning accountability to AI agents as well.”
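To make the idea concrete, here is a minimal sketch of what a DID and verifiable credential for an AI agent might look like, loosely following the W3C DID/VC data model. The `did:example` method, the field names, and the `issue_credential` helper are illustrative assumptions only; KILT's actual formats differ.

```python
import hashlib
import json

def make_agent_did(model_hash: str) -> str:
    """Derive an illustrative DID for an AI agent from its unique hash.
    (Hypothetical scheme; real DID methods such as did:kilt define their own rules.)"""
    return f"did:example:{model_hash[:16]}"

def issue_credential(issuer_did: str, agent_did: str, claims: dict) -> dict:
    """Build a simplified verifiable credential. Real VCs also carry a
    cryptographic proof section signed by the issuer."""
    return {
        "issuer": issuer_did,
        "credentialSubject": {"id": agent_did, **claims},
        "type": ["VerifiableCredential", "AgentCredential"],
    }

# An agent is identifiable by a unique hash of its model artifacts,
# much like a fingerprint identifies a person.
model_hash = hashlib.sha256(b"agent-model-weights-v1").hexdigest()
agent_did = make_agent_did(model_hash)
vc = issue_credential("did:example:issuer", agent_did,
                      {"role": "DAO voting agent", "accountable": True})
print(json.dumps(vc, indent=2))
```

The credential plays the role of the "job title" in Rübe's analogy: a claim about the agent, backed by an accountable issuer.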
Making It Expensive for AI Agents To Go Rogue
One of the key aspects of Rübe’s VC plan is that it attaches a financial cost to AI agents going rogue. It requires developers to put down collateral via the blockchain when creating an artificially intelligent agent.
The idea is to make it expensive for developers if their agents step outside of what is deemed acceptable by an existing governance structure. In other words, the collateral acts as a guarantee that incentivizes good behavior.
“If an agent acts maliciously, the injured party could apply to an on-chain governance body to get compensated for their damage, making the AI agent accountable for its misbehavior,” Rübe explained.
This way, said Rübe, AI agent “developers [can] demonstrate trust in their own product by putting down very high amounts of collateral.” He added that people would also “feel more secure in interacting with AI agents.”
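The collateral mechanism described above can be sketched as a toy registry: developers lock a stake when registering an agent, and a governance-approved claim pays damages out of that stake. All names here are hypothetical; a real system would implement this on-chain.

```python
class CollateralRegistry:
    """Toy model of the collateral idea: developers lock funds when
    registering an agent; a governance decision can slash that stake
    to compensate an injured party. Purely illustrative."""

    def __init__(self):
        self.stakes = {}  # agent_id -> remaining collateral

    def register_agent(self, agent_id: str, collateral: int):
        if collateral <= 0:
            raise ValueError("collateral is required to register an agent")
        self.stakes[agent_id] = collateral

    def slash(self, agent_id: str, damage: int) -> int:
        """Governance-approved compensation: pay damages out of the
        agent's stake, capped at whatever collateral remains."""
        payout = min(damage, self.stakes.get(agent_id, 0))
        if payout:
            self.stakes[agent_id] -= payout
        return payout

registry = CollateralRegistry()
registry.register_agent("agent-42", collateral=1_000)
paid = registry.slash("agent-42", damage=250)
print(paid, registry.stakes["agent-42"])  # 250 750
```

A high stake signals the developer's confidence in the agent, which is exactly the trust signal Rübe describes.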
Notably, the process of issuing credentials to AI agents does not need to happen in real time. According to Rübe, the issuance can be done offline, where developers describe the agent’s features and submit collateral.
Once this is done, the credential is issued, allowing the agent to identify itself to users and business partners. Users can verify the validity of the credential, after which real-time transactions can begin. Verification is fast, typically taking less than one second.
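The two-phase flow, slow offline issuance followed by sub-second online verification, might look like the sketch below. An HMAC stands in for the issuer's real digital signature, and all identifiers are made up.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def issue(agent_id: str, features: dict) -> dict:
    """Offline step: the developer describes the agent's features and the
    issuer signs the credential. HMAC stands in for a real signature."""
    payload = json.dumps({"agent": agent_id, "features": features},
                         sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(credential: dict) -> bool:
    """Online step: a relying party checks the credential in well under
    a second before real-time transactions begin."""
    expected = hmac.new(ISSUER_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

cred = issue("agent-42", {"purpose": "trading"})
print(verify(cred))   # True
cred["payload"] = cred["payload"].replace("trading", "anything")
print(verify(cred))   # False: tampering invalidates the credential
```

As in the bar analogy that follows, issuance is the slow trip to get the ID card; verification is the quick glance at the door.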
“You can compare this with buying drinks in a bar,” Rübe told Cryptonews. “First, you need an identity card stating you are old enough to drink alcohol. Getting this credential involves taking pictures, applying, paying fees, and waiting.”
He added:
“Once you have it, you can enter the bar, showing the credentials (and your face) so that the bar can be sure they don’t serve underaged people. Following this, you can order drinks in real time without showing the credentials over and over again. If you later go on a rampage in the bar, you will be identified and could face consequences.”
AI Product Releases Outpace Humanity’s Capacity to Understand Them
In a recent blog post, Ethereum co-founder Vitalik Buterin warned about the rate at which AI products are being produced. The growth surpasses humanity’s capacity to understand them, he said, calling for due care in deciding how to regulate AI agents.
“The goal would be to have the capability to reduce worldwide available compute by 90-99% for 1-2 years at a critical period, to buy more time for humanity to prepare,” Buterin wrote.
“The value of 1-2 years should not be overstated: a year of ‘wartime mode’ can easily be worth a hundred years of work under conditions of complacency. Ways to implement a ‘pause’ have been explored, including concrete proposals like requiring registration and verifying the location of hardware,” he said.
One of the pressing concerns with AI agents is tracing responsibility back to developers or organizations in case of malicious actions. Rübe, the KILT protocol founder, said this is where verifiable credentials come in.
He said each credential issued to an AI agent includes a digital signature and nonce, making it “virtually impossible for bad actors to be involved.”
“If an AI agent loses control, the system can trace back via the blockchain to identify responsible parties,” he noted. Rübe stressed that decentralized identities are key to balancing AI autonomy with human oversight.
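The nonce mentioned above guards against replay: even a validly signed credential presentation cannot be reused, because each nonce is accepted only once. A minimal illustration, with hypothetical names:

```python
import uuid

seen_nonces: set = set()

def accept_presentation(agent_did: str, nonce: str) -> bool:
    """Accept a credential presentation only if its nonce is fresh.
    A replayed (duplicate) nonce is rejected, so a captured presentation
    cannot be reused by a bad actor impersonating the agent."""
    if nonce in seen_nonces:
        return False
    seen_nonces.add(nonce)
    return True

nonce = str(uuid.uuid4())
first = accept_presentation("did:example:agent-42", nonce)
replay = accept_presentation("did:example:agent-42", nonce)
print(first, replay)  # True False
```

Combined with the signature, this is what makes each interaction attributable to exactly one identified agent.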
Implications for Crypto Markets
In crypto, proponents are expanding their systems so that contracts can be signed by AI running on blockchain technology. With Virtuals Protocol, for example, users simply fill out a form specifying the type of agent they need, and it is ready to work for them.
A small amount of cryptocurrency is required to launch such an agent on Uniswap. Some agents—like Terminal of Truths (ToT)—hold wallets in their own name rather than in trust for a human individual or institution.
These wallets are wired into the agents’ interactions with humans and other machines, where payment must be received for services rendered.
Utility-driven AI agents include Aixbt by Virtuals, which provides sophisticated investment research, and Zerebro, which produces unique digital art. These agents have access to far more data than traditional chatbots to analyze for both business and cultural advantage.
Designed to evolve continuously, AI agents not only improve at their current tasks but also gain the ability to handle new ones and adapt to greater levels of complexity over time.
According to Rübe, the growing influence of agentic AIs in real-world decisions – such as DAOs managing financial rewards – “creates an increasing need for trust and accountability.”
And while decentralized IDs themselves cannot be revoked – which preserves accountability – the verifiable credentials associated with them can be revoked by their issuers, Rübe said, adding:
“In crypto markets, this helps prevent AI-driven manipulation by disabling credentials linked to fraudulent activities like wash trading or spoofing. While the agent’s identity remains traceable, revoking credentials ensures bad actors lose access without erasing accountability, balancing decentralization with market protection.”
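Revocation can be sketched as an issuer-maintained list checked at verification time: the credential stops working, while the agent's DID, and therefore its on-chain trail, remains. The class and identifiers below are purely illustrative.

```python
class CredentialIssuer:
    """Toy revocation registry: the DID stays on-chain for traceability,
    but the issuer can disable a credential's validity at any time."""

    def __init__(self):
        self._revoked = set()

    def revoke(self, credential_id: str):
        self._revoked.add(credential_id)

    def is_valid(self, credential_id: str) -> bool:
        return credential_id not in self._revoked

issuer = CredentialIssuer()
cred_id = "cred-0007"
before = issuer.is_valid(cred_id)   # True: credential works normally
issuer.revoke(cred_id)              # e.g. after detecting wash trading
after = issuer.is_valid(cred_id)    # False: access lost, identity still traceable
print(before, after)
```

This captures the balance Rübe describes: the bad actor loses access, but accountability is not erased.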
Also, an AI agent equipped with a decentralized ID can be identified across multiple blockchains, allowing it to present the same verifiable credentials to various platforms, such as Uniswap for trading and MakerDAO for governance voting.
“This ensures its identity remains consistent across chains, reducing the risk of Sybil attacks or fraudulent behavior,” Rübe detailed.
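A rough sketch of that cross-chain idea: each platform admits agents by DID, so the same identity can register once on many platforms, while duplicate (Sybil-style) registrations on any one platform are caught. Platform and agent names are hypothetical.

```python
class Platform:
    """Toy platform (e.g. a DEX or a DAO) that admits agents by DID."""

    def __init__(self, name: str):
        self.name = name
        self.known_agents = set()

    def admit(self, did: str) -> bool:
        if did in self.known_agents:
            return False  # the same identity cannot register twice
        self.known_agents.add(did)
        return True

dex = Platform("trading")
dao = Platform("governance")
did = "did:example:agent-42"

cross_chain_ok = dex.admit(did) and dao.admit(did)  # one identity, many platforms
sybil_attempt = dex.admit(did)   # duplicate registration is rejected
print(cross_chain_ok, sybil_attempt)  # True False
```

Because the DID resolves to the same agent everywhere, a platform can recognize repeat registrations instead of treating each one as a new, unknown actor.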
The post AI Agents Breaking Bad: Can We Stop Them? appeared first on Cryptonews.