
Brace yourself | Increasing European scrutiny on AI technology deployments

What should companies in Europe consider as lawmakers and regulators escalate their scrutiny of artificial intelligence?


Artificial intelligence (AI) is increasingly becoming a common tool across various sectors, offering ease, efficiency, and insight. However, the use of AI also brings about challenges and risks, particularly in terms of safety, transparency, and accountability. In response, the European Union (EU) has taken a proactive approach to regulate AI applications, setting a potential global standard for AI governance.

The EU's AI Act: A Comprehensive Regulatory Framework

The EU's AI Act, a comprehensive regulatory framework for AI, entered into force on August 1, 2024, with a staggered implementation timeline. Key governance provisions took effect on August 2, 2025, including the designation of national competent authorities and the establishment of EU-level bodies such as the European Artificial Intelligence Board and the AI Office, which oversee enforcement and the consistent application of the Act across member states.

From August 2, 2025, rules on general-purpose AI (GPAI) models started applying. Providers placing GPAI models on the EU market must publish summaries of the data used to train them, comply with EU copyright law, and may voluntarily adhere to a Code of Practice to demonstrate compliance and reduce legal burdens. GPAI models trained with very large amounts of compute (exceeding 10^25 FLOP) are presumed to pose systemic risk and face additional notification and safety requirements. Models already on the market before August 2, 2025, have until August 2, 2027, to comply fully.
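The systemic-risk threshold above is defined in terms of cumulative training compute. As a purely illustrative sketch (the helper names and the "6 FLOPs per parameter per token" rule of thumb for transformer models are assumptions of this example, not methods prescribed by the Act), a provider might estimate whether a model crosses the line like this:

```python
# Illustrative sketch only: flag a GPAI model whose estimated training
# compute exceeds the AI Act's 10^25 FLOP systemic-risk threshold.
# The 6 * parameters * tokens approximation is a common rough estimate
# for transformer training compute, not a method set out in the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25


def estimated_training_flop(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * num_parameters * num_training_tokens


def exceeds_systemic_risk_threshold(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds 10^25 FLOP."""
    return estimated_training_flop(num_parameters, num_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP


# A 1-trillion-parameter model trained on 10 trillion tokens:
# 6 * 1e12 * 1e13 = 6e25 FLOP, above the threshold.
print(exceeds_systemic_risk_threshold(1e12, 1e13))  # True

# A 1-billion-parameter model trained on 1 trillion tokens:
# 6e21 FLOP, well below the threshold.
print(exceeds_systemic_risk_threshold(1e9, 1e12))  # False
```

In practice, providers would use their actual logged training compute rather than a parameter-count approximation, and the Commission can also designate models as systemic-risk on other grounds.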

Implications for US Businesses

For US businesses offering AI applications in EU markets, this means complying with the AI Act's transparency and safety obligations when deploying AI solutions in the EU from August 2025 onward. High-risk AI systems (e.g., certain recruitment tools or biometric systems) must meet strict compliance standards, including conformity assessments before being placed on the EU market.
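A first compliance step for such businesses is triaging which risk tier each deployed system falls into. The sketch below is illustrative only: the use-case lists and tier labels are simplified assumptions of this example, while the Act's actual classification rules (set out in its annexes) are considerably more detailed and fact-specific.

```python
# Illustrative triage sketch mapping example use cases to simplified
# AI Act risk tiers. The mappings here are assumptions for illustration;
# the Act's real classification rules are more detailed and nuanced.

HIGH_RISK_USE_CASES = {
    "recruitment screening",
    "biometric identification",
    "credit scoring",
}

TRANSPARENCY_ONLY_USE_CASES = {
    "chatbot",
    "ai-generated content",
}


def risk_tier(use_case: str) -> str:
    """Return a simplified risk tier and its headline obligation."""
    use_case = use_case.lower()
    if use_case in HIGH_RISK_USE_CASES:
        return "high-risk: conformity assessment before EU market entry"
    if use_case in TRANSPARENCY_ONLY_USE_CASES:
        return "limited risk: transparency obligations"
    return "minimal risk: no specific AI Act obligations"


print(risk_tier("Recruitment screening"))
# high-risk: conformity assessment before EU market entry
```

A real assessment would also screen for prohibited practices (such as social scoring) and consider whether a system is a safety component of a regulated product, neither of which this sketch covers.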

US providers of GPAI models need to implement risk mitigation measures and may benefit from engaging with the voluntary GPAI Code of Practice to gain clarity and reduce regulatory burden. They must also interact with new regulatory bodies established in the EU, such as national competent authorities and the AI Office, which oversee enforcement and market surveillance.

Balancing Regulation and Innovation

There is ongoing debate within the EU about balancing regulation with innovation. Some EU policy signals suggest potential simplifications or regulatory adjustments to avoid blocking AI innovation, but as of August 2025, enforcement infrastructure and many key rules are already established and active.

The UK's Approach

In contrast, the UK government has not suggested adopting regulation aligned with the EU proposals. However, the UK's National AI Strategy, due to be published in the coming months, is expected to focus on ethical, safe, and trustworthy development of AI. The UK's data protection authority, the Information Commissioner's Office (ICO), has been investing in building its AI expertise and has partnered with the UK's leading AI research body, the Alan Turing Institute, to produce guidance on explainable AI.

The Future of AI Regulation

The EU's legislative proposal for AI regulation, originally published on 21 April 2021 and now in force as the AI Act, aims to set the regulatory gold standard for AI applications. It will be interesting to see whether European regulators take inspiration from the US Federal Trade Commission's (FTC) approach, as seen in the FTC's action against Everalbum, which required deletion not only of the improperly collected customer datasets but also of the models and algorithms developed using them.

The use of AI can generate suspicion and uncertainty due to lack of transparency and understanding of its operations and responsibility. The EU's regulatory framework, with its focus on high-risk AI applications that could create risks for health and safety or around fundamental rights and freedoms, aims to address these concerns and provide a clear legal environment for AI development and deployment.


Key Takeaways

  1. In response to the growing use of artificial intelligence (AI) and its associated challenges, the European Union (EU) has enacted the AI Act, an extensive regulatory framework that includes provisions for transparency, safety, and accountability, particularly for high-risk AI applications such as certain recruitment tools or biometric systems.
  2. For US businesses offering AI applications in EU markets, the AI Act means they must comply with stringent transparency and safety obligations when deploying AI solutions within the EU, adhere to the rules on general-purpose AI (GPAI) models, and may benefit from engaging with the voluntary GPAI Code of Practice to reduce regulatory burdens.
