

Decentralized Tech Blurs the Lines Between AI and Blockchain, Sounding Black-Box AI's Death Knell

The rise of AI has brought technology's dual nature to the fore: a force for progress but also a source of great risk. It augments output and enhances computational capabilities, yet it also spreads misinformation and manipulates public discourse, as the rise of deepfakes has demonstrated.

AI systems face systemic challenges: some models exhibit comparatively high error rates, which reflects poorly on the whole industry and undermines public trust. The consequences of AI's limitations can be dire in self-driving vehicles, healthcare, and high-stakes industries in general. Minor visual alterations can cause autonomous vehicles to make disastrous mistakes, and small data errors can lead to misdiagnoses or overlooked risks.

Verifiable AI is becoming a clear trend as public awareness of the downsides of black-box AI grows. More than a buzzword, it refers to the development of accountable, auditable, and transparent AI systems. It ensures that AI models and their decisions can be explained and verified, so direct users and stakeholders can trust them. It also allows organizations to trace AI systems' decision-making processes, validate the algorithms and data used, and confirm that they are operating without error or bias.
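To make that idea concrete, here is a minimal sketch in TypeScript of what an auditable inference record could look like: the model, input, and output behind one decision are hashed so that an auditor holding the originals can later confirm nothing was altered. The InferenceRecord type and the recordInference and verifyInference helpers are purely illustrative assumptions, not part of any existing standard or product.

import { createHash } from "crypto";

// Hypothetical record tying one AI decision to the exact model, input, and output,
// so the decision can be audited after the fact. Names are illustrative only.
interface InferenceRecord {
  modelId: string;     // e.g., a content hash of the model weights
  inputHash: string;   // SHA-256 of the raw input the model saw
  outputHash: string;  // SHA-256 of the output it produced
  timestamp: number;   // when the decision was made
}

function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Create an auditable record for a single decision.
function recordInference(modelId: string, input: string, output: string): InferenceRecord {
  return { modelId, inputHash: sha256(input), outputHash: sha256(output), timestamp: Date.now() };
}

// Later, an auditor who holds the original input and output can confirm the record matches.
function verifyInference(record: InferenceRecord, input: string, output: string): boolean {
  return record.inputHash === sha256(input) && record.outputHash === sha256(output);
}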

In contrast, the concept of black-box AI denotes systems whose internal decision-making processes are not easily accessible or understood. The systems' output, which they arrive at through complicated computations, is untraceable because there is no insight into the underlying processes. Users receive a decision or output without a clear understanding of how the AI reached that conclusion.

Joining forces in the name of trust

Blockchain technology has emerged as a powerful ally to AI platforms attempting to achieve transparent decision-making. Kite AI and EigenLayer recently announced a partnership aimed at reinforcing trust and security within the framework of decentralized AI. The partnership integrates EigenLayer's restaking technology into Kite AI's blockchain-based AI asset marketplace and prioritizes scalable security and trustless verification of AI computations, resulting in an environment where AI data, models, and agents can achieve guaranteed performance and integrity backed by Ethereum's proven security mechanisms.

Kite AI is an EVM-compatible Layer-1 blockchain that runs a marketplace where people can buy, sell, or use AI tools and models. EigenLayer ("own layer") complements it by allowing anyone to build on top of a decentralized trust network; its restaking system strengthens security by reusing Ethereum's existing staked capital in a new way. Together, they aim to ensure that AI models' results can be verified and trusted without a middleman, and that the system is scalable, meaning it can grow and handle more activity.

Kite AI validates AI assets and outputs in a decentralized manner using EigenLayer's Autonomous Verifiable Service (AVS) framework. Specifically, EigenLayer's validators perform specialized verification tasks to confirm that AI model outputs are accurate and that listed AI assets meet defined criteria. This gives Kite AI's users and developers additional assurance: each AI asset or result is independently verified by a distributed network of staked validators instead of relying on the asset provider's claims.
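As a rough illustration of the idea, and not a description of EigenLayer's or Kite AI's actual contracts or SDKs, the TypeScript sketch below shows a stake-weighted quorum check: each validator independently re-checks a claimed output, and the claim is accepted only if validators controlling a supermajority of restaked value attest to it. The Validator and Claim types and the two-thirds threshold are assumptions made for illustration.

// Hypothetical types; they do not correspond to any real EigenLayer or Kite AI API.
interface Validator {
  id: string;
  stake: number; // restaked value backing this validator's attestations
  // Each validator independently re-checks the claimed output, for example by
  // re-running the model or comparing against a committed hash.
  checkOutput: (claimedOutputHash: string) => boolean;
}

interface Claim {
  assetId: string;           // the AI model, dataset, or agent being verified
  claimedOutputHash: string; // hash of the output the provider claims is correct
}

// Accept a claim only if validators holding at least `threshold` of total stake attest to it.
function verifyClaim(claim: Claim, validators: Validator[], threshold = 2 / 3): boolean {
  const totalStake = validators.reduce((sum, v) => sum + v.stake, 0);
  const approvingStake = validators
    .filter((v) => v.checkOutput(claim.claimedOutputHash))
    .reduce((sum, v) => sum + v.stake, 0);
  return totalStake > 0 && approvingStake >= totalStake * threshold;
}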

Accountability in the age of hallucinating machines

This partnership blurs the lines between AI and blockchain and helps mitigate the significant risks associated with AI. Open-source tools that generate convincing text, images, audio, and video at breakneck speed were once hailed as a panacea. However, AI's prevalence has raised considerable concerns about validity and authenticity, and the implications of black-box tools' propensity to "hallucinate" can be severe, particularly for organizations trying to stand out in cutthroat markets.

In the simplest of terms, ChatGPT, Claude, Bard, Perplexity, and others can make things up. The tools can generate content rooted not in facts but in speculation, ignoring context and factual accuracy in their effort to satisfy the user's prompt as quickly and as closely as possible. The issue of who takes responsibility for the outcome is even more disconcerting. Current legal frameworks don't typically hold the creators of modern LLMs criminally responsible for actions taken by the tools' direct users, and nothing stops those creators from blaming the users, or the tools themselves, for faulty outputs.

Despite AI's prevalence, the tools are still considered experimental; in other words, they are still "learning." At the end of the day, the person who uses them bears responsibility for any consequences of acting on their output and is held accountable for failing to understand the tools' limitations.

Reducing the risk of AI-generated harm

AI systems rely on data for training and operation, so data providers should be held accountable for that data's accuracy and quality and should ensure it respects privacy regulations and is ethically sourced. Decentralized data sources and verification tools lower the risk of false or biased data because they offer insight into decision-making and output. They help compensate for the absence of proper safeguards and thoughtful implementation and play a crucial role in combating false information, which spreads easily given how convincingly it is presented. Without them, there is no way to mitigate the risks arising from the fact that AI models are trained on aggregated data, act on prompts, and lack the intuition to determine whether the data or the prompts (or both) are biased, harmful, or ignorant of the evidence.
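One simple way to picture such a safeguard is dataset provenance checking. The TypeScript sketch below assumes a data provider publishes a content hash of a dataset (for example, on a public ledger) and a consumer verifies the downloaded data against that hash before training on it. The in-memory registry and the dataset name are stand-ins used purely for illustration.

import { createHash } from "crypto";

// In-memory stand-in for wherever published dataset hashes would actually live.
const publishedHashes = new Map<string, string>(); // datasetId -> SHA-256 hash

function hashContents(contents: string): string {
  return createHash("sha256").update(contents).digest("hex");
}

// The provider commits to the exact contents of a dataset.
function publishDataset(datasetId: string, contents: string): void {
  publishedHashes.set(datasetId, hashContents(contents));
}

// The consumer refuses to train on data whose hash does not match the published record.
function verifyDataset(datasetId: string, contents: string): boolean {
  return publishedHashes.get(datasetId) === hashContents(contents);
}

publishDataset("sentiment-corpus-v1", "label,text\npositive,great product\n");
console.log(verifyDataset("sentiment-corpus-v1", "label,text\npositive,great product\n")); // true
console.log(verifyDataset("sentiment-corpus-v1", "tampered contents"));                    // false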

Integrations of blockchain and AI emerge as the last, and sometimes only, line of defense. Consider a real-world example: an AI system makes a decision that negatively impacts a business, and the business suffers for relying on it without question. Whether the AI vendor or developer will take responsibility for an error resulting from a system flaw is questionable. The responsibility will likely fall on the employee who used the tool without understanding how it works, and potentially on their manager for failing to supervise them. It might all end with the employee being fired, but a major error will still affect the entire organization.

Disclaimer: This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.

 
