AI-Generated Hoax of Pentagon Explosion Sparks Brief Market Sell-Off
This Monday marked a new high (or arguably a new low) in demonstrating the power of artificial intelligence (AI). A picture depicting a fabricated explosion at the Pentagon, quickly flagged as AI-generated, spread like wildfire across social media. It also appears to have spurred a momentary sell-off in the U.S. stock market.
The alarming image, portraying smoke billowing from the iconic building, was disseminated by numerous accounts, including a Russian state-owned media channel.
RT, with over 3 million followers, has posted (since deleted) what looks to be an AI generated photo of an explosion near the Pentagon pic.twitter.com/6Bl7X8ZA2M
— Leonardo Puglisi (@Leo_Puglisi6) May 22, 2023
Interestingly, reports of the false Pentagon explosion also made their way onto non-official Twitter accounts with blue verification checkmarks, further amplifying the confusion and the impact of the falsehood, highlighting both the importance of rigorous source verification and the unsurprising result of Elon Musk's new criteria for account verification.
This morning blue check accounts claimed large explosions at the Pentagon.
... then the White House.
Russian state media amplified the faked Pentagon image from their gold check account.
The images look AI generated, as folks identified. 1/ pic.twitter.com/Bd9uu3jwPZ
— John Scott-Railton (@jsrailton) May 22, 2023
As the photo went viral, U.S. stock indexes took a minor hit, although markets quickly recovered after the photo was exposed as a hoax. Bitcoin, the leading cryptocurrency, also experienced a brief "flash crash" following the spread of the fake news, slipping to $26,500. Bitcoin has since been recovering and is currently trading at $26,882, according to CoinGecko.
The hoax’s impact was significant enough to prompt the Arlington County Fire Department to intervene. "There is NO explosion or incident taking place at or near the Pentagon reservation," they tweeted, "and there is no immediate danger or hazards to the public."
@PFPAOfficial and the ACFD are aware of a social media report circulating online about an explosion near the Pentagon. There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public. pic.twitter.com/uznY0s7deL
— Arlington Fire & EMS (@ArlingtonVaFD) May 22, 2023
Paving the Road to More AI Regulation
This type of online deceit has raised serious concerns among critics of unmitigated AI development. Many experts in the field have warned that advanced AI systems could become tools for malevolent actors worldwide, spreading misinformation and causing online pandemonium.
This isn't the first time such trickery has emerged. Viral AI-generated images have previously deceived the public, such as images of Pope Francis sporting a Balenciaga jacket, a fake arrest of former President Donald Trump, and deepfakes of celebrities like Elon Musk or SBF promoting crypto scams.
Notable personalities have also sounded alarms about the spread of disinformation.
Hundreds of tech experts have already called for a six-month halt on advanced AI development until proper safety guidelines are established. Even Dr. Geoffrey Hinton, widely known as the 'Godfather of AI', resigned from his role at Google to voice his concerns about potential AI risks without damaging his former employer's reputation.
Episodes of misinformation like this one feed into the ongoing debate surrounding the need for a regulatory and ethical framework for AI. As AI becomes an increasingly potent tool in the hands of agents of disinformation, the consequences can be chaotic.
Based on today's events, one question stands out: What if an AI was the agent using the power of social media to spread chaos and control the financial markets? We kind of saw it coming.