
Europe’s AI Act Gets Final Approval With Up To $38M Fines 

24d ago

The European Union approved the final version of its much-awaited Artificial Intelligence (AI) act on Tuesday, making it the first major economic power to introduce comprehensive rules for the fledgling industry. 

The new rules, aimed at restricting the public use of AI considered high-risk, like deepfakes and facial recognition software, will apply to all companies that deploy such applications in the 27 EU-member states.

Altogether, the EU bloc represents about 20% of the global economy. AI companies that breach the new act will face fines of up to 35 million euros ($38 million) or 7% of their annual global revenue, according to the EU Council.

Also read: OpenAI Scraps ChatGPT Voice After Scarlett Johansson Controversy

New Legislation Aims for ‘Safe AI’

“The adoption of the AI Act is a significant milestone for the European Union,” Mathieu Michel, Belgium’s secretary of state for digitization and privacy protection, said in a statement.

“With the AI Act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies, while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel added.

The EU’s three branches of power – the Commission, the Parliament and the Council – all had to agree on the final version of the Act. The Commission and the Parliament had already approved the law, and the EU Council gave its final agreement on May 22.

The AI Act outlines different risk categories of artificial intelligence use, ranging from “low-risk” to “high” and “unacceptable risk.” The categorization is based on the systems’ potential to harm consumers.

AI apps that are considered a threat to individual rights, like facial recognition software in public places and social scoring – a practice that assigns scores to individuals, for purposes such as loans or housing, based on their behavior – will be banned outright.

The lowest risk category relates to artificial intelligence used in video games or spam filters. Sensitive “high-risk” use cases that will be allowed include things like border management, education, and recruitment.

Companies that deploy such technologies will be required to disclose more information about the data used to train their systems. According to the EU, its Parliament will be empowered to fine firms that break the AI law up to 35 million euros ($38 million) or 7% of their annual global turnover.

AI Act Risks Stifling Investment

Matthijs de Vries, founder of the AI data ecosystem Nuklai, said the rules are designed to protect personal information, particularly in sensitive sectors such as healthcare and finance.

“By ensuring that artificial intelligence systems comply with stringent data usage protocols, the AI Act helps safeguard consumer privacy and security,” de Vries told Cryptopolitan.

“The disclosure of data collection, processing, and usage practices mandated in the EU Data Act helps demystify what data AI is trained on and what data it uses to make its statements and conclusions. This helps to enhance trust, verifiability and reliability of AI systems,” he added.

There have been concerns that startups might find it hard to comply with the AI Act. Founders worry that the measures could hurt smaller companies and hamper investment and innovation, putting Europe further behind the U.S. and China in the AI race.

In October, Cedric O, founder of French AI startup Mistral, said the law would “kill” his firm. The entrepreneur feared that the law placed excessive scrutiny on large language models, even if they were not deployed for sensitive use cases like hiring, Sifted reported.

Jonas Andrulis, CEO of Aleph Alpha, the German rival to OpenAI, said classifying “general purpose AI” like LLMs as high-risk could have unintended consequences. His comments were echoed by Peter Sarlin, CEO of Finland’s Silo AI.

“If we are sort of generalizing across generative AI technology, and saying that all use cases that utilize generative pre-trained transformers (GPTs) are high-risk, then I think we will also be regulating quite a lot of use cases that aren’t actually high-risk,” Sarlin said at the time.

An AI-powered robot operated by an engineer. Image credits: EU Commission

EU’s ‘AI Factories’ to Boost Innovation

It isn’t only entrepreneurs that have raised concerns about the AI Act. A U.S. State Department analysis in October 2023 warned that some rules in the law were based on “vague or undefined” terms, according to Bloomberg.

The analysis said the Act would benefit the largest tech firms that have the financial clout to train AI models and machine learning systems. Smaller firms are likely to suffer losses, it added.

Moreover, venture funds are unlikely to invest in startups classified by the AI Act as high-risk, according to a 2023 survey of 14 European VCs by the Initiative for Applied AI. Eleven of the funds said they were less likely to invest in companies with a high-risk rating, and eight said it would badly impact the startup’s valuation.

Also read: Top AI Chatbots ‘Highly Vulnerable’ to Simple ‘Jailbreaks’ – Study

In late January, the EU bloc announced a range of measures aimed at boosting innovation for European startups developing what it calls “trustworthy” AI that “respects EU values and rules.”

The union said the firms will have “privileged access to supercomputers” and that the EU will build “AI Factories” to make sure the required infrastructure is available for startups to buy and upgrade.

Europe’s AI Act is not expected to take effect until 2025.


Cryptopolitan Reporting by Jeffrey Gogo

