THE WORLD’S FIRST MAJOR AI LAW HAS ARRIVED. WHAT THE EU’S RULES MEAN FOR BIG TECH

The world’s first comprehensive legal framework for artificial intelligence has been finalized. European Union lawmakers on Wednesday approved the bloc’s AI Act, which is likely to shape the approach of U.S. technology companies.

As a whole, the legislation aims to protect European citizens’ rights from certain applications of AI and to bring stricter oversight to the technology overall.

Companies like Alphabet (GOOGL), Amazon.com (AMZN), Microsoft (MSFT), and Meta Platforms (META) will be affected by the Act, as the EU’s rules apply to anyone providing AI within the bloc. Beyond that, the so-called Brussels Effect often causes the EU’s rules to become the effective international standard.

The potential penalties in the AI Act are considerable, including fines of up to 7% of a company’s global annual turnover for the previous financial year for violations involving banned AI applications.

The law passed the European Parliament with 523 votes in favor, 46 against, and 49 abstentions. That overwhelming majority should ensure it is signed off by the EU’s member states.

When Barron’s previewed the final negotiations over the AI Act back in December, there were still some key sticking points.

The main issue was whether the Act’s risk-based approach would apply to foundation models—the most powerful AI systems that can be adapted to different tasks, such as OpenAI’s GPT-4.

Some member states were calling instead for the regulation to apply only to specific AI use cases—such as a chatbot or image generator—while foundation models would be self-regulated via codes of conduct. The intent was to avoid putting burdensome regulation on domestic European AI companies, such as France’s Mistral and Germany’s Aleph Alpha.

The result was a compromise. Under the framework, foundation models generally face only transparency obligations, meaning their developers must disclose certain material, including a summary of the content used to train the AI, and show compliance with copyright law.

However, the Act introduces a stricter regime for “high impact” foundation models, those trained with the largest amounts of computing power. These must perform model evaluations, assess and mitigate systemic risks, and report on incidents.

Those requirements will probably apply to future versions of Microsoft-backed OpenAI’s GPT, along with models from Meta, Alphabet’s Google, and U.S. start-up Anthropic (which is backed by Google and Amazon.com).

One important element of the compromise was the decision on open-source models, which make their code publicly available. Mistral and Aleph Alpha have both looked to gain market traction by adopting that approach. Open-source models did get an exemption from some obligations under the AI Act, but not if they pose a “systemic risk,” which suggests more powerful open-source models will still face regulation.

The U.S. hasn’t passed any comprehensive legislation on AI, despite calls from the Biden administration for Congress to take up such a law, along with rules on data privacy.

Instead, President Joe Biden has issued an executive order, some U.S. companies have made voluntary pledges on AI safety, and a patchwork of state and local regulation has emerged. The White House has also set up the U.S. Artificial Intelligence Safety Institute to develop guidelines for the technology.

While much depends on implementation, the European regulation on its face gives EU authorities clearer legal power to act on perceived breaches of AI rules than Biden’s executive order does. However, it could also limit the market for certain AI applications in Europe.