
The EU AI Act is Here! What Does It Mean for Global Artificial Intelligence?

July 01, 2025

By

Everawe Labs

To put it simply, the EU AI Act is the first comprehensive regulation of artificial intelligence (AI) in the world. It aims to set clear boundaries and rules for AI systems, with the goal of ensuring AI develops safely and respects fundamental human rights within the EU.

It sounds simple, doesn’t it? But the reality is much more complicated than you might think. From “high-risk AI” to “prohibited technologies” and how to ensure that every AI application complies with these rules, the EU AI Act is creating waves that go far beyond the EU's borders. It could even reshape the global AI landscape. Today, let's dig deeper into the significance of this law and explore how it will shape the future of AI.

[Image: a glowing blue padlock ringed with European Union stars, surrounded by flowing data streams and red warning barriers, symbolising the EU AI Act's regulatory framework for artificial intelligence]

The EU AI Act was proposed in 2021 and, after multiple rounds of debate and revision, entered into force in August 2024. Its provisions apply in phases over the following years, but the EU has already begun enforcing some of them, particularly the rules targeting prohibited practices and high-risk AI. But what exactly is considered "high-risk AI"?

The EU AI Act primarily defines the rules for the use and development of AI, and it divides AI applications into several key categories.

Risk Classification: Based on the risk level of AI applications, they are divided into four categories:

  • High Risk: AI systems in areas like healthcare, transportation, and justice, which require strict compliance and scrutiny.

  • Limited Risk: Systems like chatbots and virtual assistants, which must provide transparency and inform users.

  • Minimal Risk: AI-driven game recommendation systems, which have almost no regulation.

  • Prohibited: Some AI technologies, such as social scoring systems, which are fully banned due to concerns about privacy and fundamental rights.
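Purely as an illustrative sketch (and not anything the Act itself prescribes), the four-tier scheme above could be modeled in code. The use-case examples and obligation summaries here are hypothetical shorthand for the categories just described, not an official or exhaustive list:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of example use cases to tiers, loosely based on
# the categories described above -- illustrative, not official.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "medical diagnosis": RiskTier.HIGH,
    "recruitment screening": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "game recommendations": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the obligations sketched in this post."""
    return {
        RiskTier.PROHIBITED: "banned outright",
        RiskTier.HIGH: "strict compliance: risk assessment, data quality, traceability",
        RiskTier.LIMITED: "transparency: users must be told they are dealing with an AI",
        RiskTier.MINIMAL: "little to no regulation",
    }[tier]
```

The point of the sketch is simply that obligations attach to the tier, not to the technology itself: the same model could land in different tiers depending on how it is deployed.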

High-Risk AI Requirements: For high-risk AI, the Act demands that developers and users comply with stringent requirements for transparency, explainability, safety, and data protection. For example, a risk assessment must be conducted, data quality ensured, and a traceable decision-making process must be provided.
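To make "a traceable decision-making process" concrete, here is a minimal sketch of the kind of audit trail a high-risk system might keep. Every field name and value here is hypothetical; the Act describes outcomes (traceability, explainability), not a specific log format:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Minimal audit-trail entry for one automated decision (illustrative only)."""
    model_version: str
    inputs: dict
    outcome: str
    explanation: str  # human-readable reason for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, sink: list) -> None:
    # Append a plain-dict copy so the trail can be serialised and audited later.
    sink.append(asdict(record))

# Hypothetical usage: a credit-scoring decision, logged with its rationale.
audit_trail: list = []
log_decision(
    DecisionRecord(
        model_version="credit-scorer-1.3",
        inputs={"income": 42_000, "history_years": 7},
        outcome="approved",
        explanation="income and repayment history above thresholds",
    ),
    audit_trail,
)
```

A real compliance setup would add tamper-evident storage and retention policies, but the core idea is the same: each decision is recorded with its inputs, model version, and a human-readable explanation, so it can be reconstructed after the fact.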

Transparency and Disclosure Obligations: For instance, if an AI system is used in important decisions like hiring or loan approvals, users must clearly inform those affected and provide sufficient explanations.

Compliance Mechanisms: The Act establishes compliance bodies to regularly audit, inspect, and oversee AI systems, ensuring they meet safety and ethical standards.

The Act is being phased in: the ban on prohibited practices has applied since February 2025, and obligations for general-purpose AI models take effect in August 2025. By August 2026, most of the rules will be fully applicable, with the transition period for high-risk systems embedded in regulated products extending to August 2027. So, what impact will this have on us?

If you are in the EU or handle data within the EU, the AI Act could affect your privacy and personal data security. Even if you're outside the EU, this law is still highly significant, much like the GDPR.

The Brussels Effect: Due to the massive influence of the EU market, many non-EU companies wishing to enter the EU market must comply with the EU AI Act. This effect is similar to the global impact of automotive safety standards, where EU regulations become the de facto global standard, forcing companies worldwide to align their products and services with EU rules, particularly in terms of data protection and privacy.

A Regulatory Template: Other major countries around the world, including the US, the UK, Canada, and China, are considering their own AI regulations to address the growing challenges of AI technology. The EU Act is the most detailed and advanced regulatory template, directly influencing the global AI regulatory landscape.

Defining "Trustworthy AI": The Act defines what constitutes "responsible" and "trustworthy" AI, such as transparency, safety, and human rights protection. This will become the core framework for global discussions on AI ethics and governance.

Business Impact: For companies operating in or doing business with the EU market, compliance is a must, which could affect AI product development, market access, and costs. For example, providers of general-purpose AI (GPAI) models will need to publish summaries of their training data and may have to adjust their business models.

Innovation Direction: The Act encourages low-risk innovation while limiting high-risk applications. This will guide global AI research funding and talent toward areas that comply with EU standards.

Consumer/User Rights: The Act sets out the rights consumers have when interacting with AI, such as the right to be informed and the right not to be discriminated against. This concept will gradually influence global user expectations and demands for AI services.

Given its importance, how are various stakeholders reacting to the Act?

Tech Companies: Large companies like Google, Microsoft, and Amazon have expressed concerns, particularly regarding compliance costs and potential restrictions on innovation. They argue that strict regulations could burden businesses and hinder the ability to innovate quickly.

AI Startups: Smaller companies or startups may feel that the high-risk AI requirements are too cumbersome, raising the entry barriers to the market.

Human Rights Organizations and Consumer Groups: These groups support the Act, believing it will protect consumer rights and reduce AI's threats to privacy and fairness.

Academia/Research Institutions: They generally support responsible research but hope that the rules do not overly restrict non-commercial scientific exploration.

Governments: Some countries, especially non-EU ones, are keenly watching the EU's legislative movements and considering whether to adopt similar regulations. This is also prompting some countries to think about how to gain a competitive advantage in the AI sector.

The introduction of the EU AI Act marks a new milestone in global AI regulation. However, the battle between innovation and compliance is just beginning. How to balance these forces, how to effectively implement it across different countries, and how successful it will be in the long run are all still to be determined. The innovation of technology and legislation has always been a topic of fierce debate, like seeking a balance between yin and yang. The challenge will be how to drive technological progress while maintaining ethical and security standards—a challenge that the world, and each of us, will have to face.

Fast Take

The EU's new AI Act is set to change the global landscape of artificial intelligence by introducing the world’s first comprehensive regulations. With sweeping rules covering everything from high-risk AI to banned technologies, the law aims to ensure both safety and human rights. But how will it balance innovation with regulation, and what will its global impact be?
