
Google analyst warns AI answers ‘not perfect, can’t replace your brain’

Google analyst Gary Illyes warned that large language models – the tech behind generative AI chatbots like ChatGPT – still have gaps in accuracy and need a human eye to verify the content they produce. The comments come just days after OpenAI launched SearchGPT, a new AI-powered search engine that will compete directly with Google. 

Illyes shared the comments on LinkedIn in response to a question he got in his inbox, but did not say what the question was. He said people shouldn’t trust AI responses without checking other authoritative sources. OpenAI aims for its search tool to upend Google’s dominance in the search engine market.

AI responses are not ‘necessarily factually correct’

Illyes, who has been with Google for over a decade, said that while AI answers may come close to the facts, they are not “necessarily factually correct”. That is because large language models (LLMs) are trained on internet data and are not immune to the false information circulating there, he explained.

“Based on their training data LLMs find the most suitable words, phrases, and sentences that align with a prompt’s context and meaning,” Illyes wrote. “This allows them to generate relevant and coherent responses. But not necessarily factually correct ones.”

The Google analyst said users will still need to validate AI answers based on what “you know about the topic you asked the LLM or on additional reading on resources that are authoritative for your query.”

One way developers have tried to ensure the reliability of AI-generated content is through a practice called “grounding,” which anchors machine-generated output in verified, authoritative source material to guard against error. According to Illyes, even grounding may not be enough.
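To make the idea concrete, here is a minimal, purely illustrative Python sketch of a grounding-style check: before a model's claim is accepted, it is compared against trusted reference snippets, and anything unsupported is flagged for human review. The function name and the naive keyword test below are hypothetical stand-ins, not how Google or any production system actually implements grounding; real systems rely on retrieval pipelines and far more robust verification.

# Toy grounding check: accept a model claim only if its key terms
# appear in at least one trusted source snippet. Purely illustrative.
def verify_against_sources(claim: str, sources: list[str]) -> bool:
    key_terms = [w.lower() for w in claim.split() if len(w) > 4]
    return all(any(term in s.lower() for s in sources) for term in key_terms)

# Reference snippets a grounded system might retrieve before answering.
sources = [
    "Gary Illyes is a Google analyst.",
    "SearchGPT was launched by OpenAI.",
]

claim = "SearchGPT was launched by OpenAI."
if verify_against_sources(claim, sources):
    print("Claim is supported by the retrieved sources.")
else:
    print("Claim is unsupported; flag for human review.")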

“Grounding can help create more factually correct responses, but it’s not perfect; it doesn’t replace your brain,” he said. “The internet is full of intended and unintended misinformation, and you wouldn’t believe everything you read online, so why would you [trust] LLM responses?”

Elon Musk accuses Google of gatekeeping public info

Trust has long been an issue with search engines like Google and other artificial intelligence platforms, particularly around how much control they exert over the information they deliver to users.

One such incident involves the recent assassination attempt on former U.S. President Donald Trump. Elon Musk suggested that Google had blocked the shooting from appearing in its search results, sparking a major debate on social media about the reach of Big Tech.

In the flurry of responses, a spoof account purporting to belong to Google vice president Ana Mostarac added to the debate, sharing a fake apology from the company for allegedly blocking content on Trump.

“…People’s information needs continue to grow, so we’ll keep evolving and improving Search. However, it seems we need to recalibrate what we mean by accurate. What is accurate is subjective, and the pursuit of accuracy can get in the way of getting things done,” the fake account posted on X.

“You can be assured that our team is working hard to ensure that we don’t let our reverence for accuracy be a distraction that gets in the way of our vision for the future,” it added.

Community Notes on X immediately flagged the post, saying the account was impersonating the Google VP. The episode is an example of how easily information can be distorted, and why AI models may not be able to discern what is accurate without human review.
