EU AI Act Explained: What Every Company Should Know

By Filip Begiełło | Published May 6, 2024 | Last updated November 19, 2024

Key Takeaways

  1. The EU AI Act introduces a four-tier risk-based approach to AI regulation: unacceptable, high, limited, and minimal risk, with specific requirements for each category.
  2. Companies must implement transparency measures, human oversight, and robust safety features, particularly for high-risk AI systems in healthcare and critical sectors.
  3. Compliance deadlines are phased, with full application two years after enactment, though some provisions take effect earlier for specific AI categories.
  4. Best practices include embracing transparency, ensuring human oversight in decision-making, and actively addressing model bias through quality data collection.

In recent years, artificial intelligence (AI) has emerged as a game-changer for businesses across the globe. As the technology matures, new legal frameworks are springing up to address the ethical and security issues AI inevitably raises. The European Union (EU) has taken an essential step in this direction with the EU AI Act, a landmark piece of legislation that sets rules and principles for AI development and deployment.

For businesses that depend on AI technologies, understanding the EU AI Act is not optional; it is a matter of survival. The law not only spells out the rules for AI systems in the EU but also affects businesses with a worldwide reach, since non-compliance can severely impact product development, risk management, and company strategy as a whole. This article covers the essentials of the EU AI Act: how it works and what businesses need to know to stay ahead of the curve in this evolving legal environment.

What Is the EU AI Act?

The European Union (EU) has consistently recognized artificial intelligence's potential to transform sectors, boost public services, and improve quality of life. However, as AI adoption has grown, the EU has also seen the need for a legal framework to handle the associated risks, such as ethical problems, safety concerns, and the possibility of bias or misuse. This conclusion paved the way for the EU AI Act, a comprehensive law intended to ensure responsible AI development and deployment.

Legislators, business leaders, and other interested parties worked together in lengthy investigations and discussions that resulted in the EU AI Act. The European Commission, the EU's executive arm, played a pivotal role in shaping the Act's framework. In April 2021, the Commission published the first draft of the EU AI Act, a significant step toward a unified approach to AI legislation across the EU. This draft introduced topics such as risk-based regulation, human oversight, and the categorization of AI systems according to their potential effects on safety and fundamental rights.

The EU's approach to AI policy rests on a delicate balance between innovation and protection. Using a risk-based methodology, the EU AI Act is designed to ensure that AI technologies are developed and used in ways that promote safety, transparency, and accountability. This strategy offers flexibility in managing different types of AI applications: high-risk AI systems receive increased scrutiny, while low-risk applications can thrive without undue burden.

The EU AI Act is not an isolated policy but part of a larger EU plan for digital transformation, aligning with measures such as the Digital Services Act and the General Data Protection Regulation (GDPR). Together, these initiatives reflect the EU's commitment to building a digital ecosystem that prioritizes human rights, data protection, and safety.

What Are the Risk Levels in the EU AI Act?

One of the defining aspects of the EU AI Act is its risk-based approach to regulating artificial intelligence (AI) systems. This classification system divides AI applications into four distinct risk levels: unacceptable, high, limited, and minimal risk. Understanding these categories is crucial for businesses, as they determine the level of regulatory scrutiny and the specific obligations required for compliance.

Unacceptable Risk

AI systems in this category are deemed to pose serious threats to safety, human rights, or ethical principles. These include:

  • AI applications designed to manipulate human behavior subliminally.
  • AI systems that enable government social scoring.
  • Biometric surveillance tools without appropriate legal safeguards.

Under the EU AI Act, these applications are strictly banned, reflecting the EU's commitment to protecting fundamental rights and ethical standards.

High Risk

This category encompasses AI systems that could significantly impact safety or human rights but are not prohibited outright. Typical examples include:

  • AI in critical sectors such as healthcare, transportation, and law enforcement.
  • AI systems used for employment-related decisions, like hiring and employee monitoring.

To ensure safety and reliability, high-risk AI systems must meet rigorous requirements, such as conformity assessments, detailed technical documentation, and post-market monitoring.

Limited Risk

AI applications in this category are considered moderately risky but generally do not require the same stringent oversight as high-risk systems. Examples include:

  • AI chatbots used for customer service.
  • Automated decision-making tools in non-critical domains.

Companies deploying limited-risk AI must ensure transparency by informing users when they are interacting with AI and providing mechanisms for human oversight where needed.

Minimal Risk

This category includes AI systems that are low-risk or generally benign. Examples are:

  • AI used for entertainment, such as video games or music recommendations.
  • AI systems for simple data analysis or research purposes.

While minimal-risk AI generally has no specific regulatory requirements, businesses should still operate within ethical guidelines to avoid unintended harm or negative impacts on users.
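
How a given product maps onto these tiers is ultimately a legal question, but engineering teams can still run a first-pass triage during product intake. The Python sketch below is a deliberately simplified, hypothetical illustration: the tier names come from the Act, but the keyword rules and the classify_risk helper are our own assumptions, not an official classification method.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright under the Act
    HIGH = "high"                  # strict requirements, e.g. conformity assessments
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical keyword rules for a first-pass triage; a real assessment
# must follow the Act's annexes and legal review, not string matching.
TRIAGE_RULES = [
    (RiskTier.UNACCEPTABLE, {"social scoring", "subliminal manipulation"}),
    (RiskTier.HIGH, {"healthcare", "hiring", "law enforcement", "transport"}),
    (RiskTier.LIMITED, {"chatbot", "content generation"}),
]

def classify_risk(use_case_tags: set[str]) -> RiskTier:
    """Return the most severe tier whose keywords match the use-case tags."""
    for tier, keywords in TRIAGE_RULES:
        if use_case_tags & keywords:
            return tier
    return RiskTier.MINIMAL

print(classify_risk({"chatbot", "healthcare"}))   # RiskTier.HIGH
print(classify_risk({"music recommendations"}))   # RiskTier.MINIMAL
```

Even a rough triage like this helps teams flag, early in development, which products will carry the heavier obligations described above.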

How Does the EU AI Act Affect Software Development?

The European Parliament and the Council of the EU reached a political agreement on the AI Act in December 2023. The Act becomes fully applicable two years after its entry into force, with the following exceptions: prohibitions take effect after six months, the rules and obligations for general-purpose AI models apply after twelve months, and the rules for AI systems embedded in regulated products apply after thirty-six months. This timeline provides businesses with a clear roadmap for adapting to the Act's requirements.

As enforcement of the Act approaches, companies should expect a shift towards openness and clear communication in the AI industry. This standard, set out in the AI Act, will force many existing AI services to adapt.

One of the most noticeable impacts of the EU AI Act is the push for increased disclosure to end users. Companies must be more forthcoming about how their AI systems work, the data they use, and the decisions they make. This openness allows users to understand and compare services, leading to better-informed choices.

Equally noticeable is the push for increased human oversight and safety features in AI systems. This will require AI developers to implement more rigorous monitoring and control mechanisms, ensuring that AI technologies are developed and used in ways that promote safety, transparency, and accountability, in line with the EU's commitment to protecting fundamental rights while fostering innovation.

At Momentum, we have always striven to build innovative and beneficial AI systems. Because our main areas of expertise are HealthTech and FinTech, we have experienced firsthand the delicate nature of high-risk AI systems.

Our design principle for AI has always been to deliver reliable, transparent systems that end users can trust to perform. This principle aligns with the goals the AI Act sets for high-risk applications of AI models. In our webinar series, we discussed the need for human oversight and the role of AI as a tool or companion, not a replacement, and we believe that introducing such requirements can only benefit the end user. Going forward, we aim to maintain this approach, expanding on it where possible and necessary.

AI Act Compliance Best Practices

Companies aiming to comply with the Act and to create a stable, beneficial AI environment should focus on several essential best practices.

1. Embrace Transparency in AI Systems

Transparency sits at the heart of the EU AI Act, and companies should prioritize it across all AI operations, not only those classified as high-risk. In practice, transparency means building AI models that are not only high-performing but also understandable and explainable: the better users understand how an AI system operates, the more they trust it. Explainability also enables developers to find and fix problems, resulting in more robust AI systems.

Implementing transparency is about more than how models are built; it also means understanding why they work the way they do. This requires continually seeking explanations for model results and for the factors behind strong performance. By encouraging explainability, companies foster an open culture and gain insights that inform future projects.
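
As a concrete starting point, an explanation step can be attached directly to the training pipeline. The sketch below uses scikit-learn's permutation importance on a toy model; it is a minimal illustration of the explainability practice described above, not a compliance-grade audit, and the synthetic dataset stands in for real product data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for real product data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much held-out accuracy drops when
# each feature is shuffled: a simple, model-agnostic explanation signal.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

Reports like this, generated and archived with every model release, can feed into the kind of technical documentation the Act expects for high-risk systems.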

2. Ensure Human Oversight in Decision-Making

Human oversight is an essential part of the EU AI Act, especially for high-risk AI systems. It guarantees that a human makes the final call, particularly in sensitive or high-stakes situations. Businesses should design AI systems with human oversight in mind and provide the information and tools people need to understand the technology's outputs.

Human oversight also matters to the customers and users who rely on AI models in their own decision-making. When users understand the factors that shape AI-generated results, they can evaluate those results more accurately. This improves safety and can surface insights that lead to better judgment. Businesses should treat human oversight not as a compliance checkbox but as a value-adding component of a safer, more dependable AI environment.
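
One common pattern for operationalizing oversight is confidence-based escalation: the system acts autonomously only when its confidence clears a threshold and defers everything else to a human reviewer. The sketch below is a minimal, hypothetical illustration of that pattern; the 0.90 threshold and the ReviewQueue helper are illustrative assumptions, not values prescribed by the Act.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.90  # illustrative; tune per use case and risk level

@dataclass
class ReviewQueue:
    """Hypothetical queue where flagged cases await a human decision."""
    pending: list = field(default_factory=list)

    def escalate(self, case_id: str, prediction: str, confidence: float) -> None:
        self.pending.append((case_id, prediction, confidence))

def decide(case_id: str, prediction: str, confidence: float,
           queue: ReviewQueue) -> str:
    """Auto-apply high-confidence predictions; defer the rest to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-approved '{prediction}' ({confidence:.0%})"
    queue.escalate(case_id, prediction, confidence)
    return f"{case_id}: sent to human review ({confidence:.0%})"

queue = ReviewQueue()
print(decide("claim-001", "eligible", 0.97, queue))
print(decide("claim-002", "eligible", 0.62, queue))
print(f"pending human review: {len(queue.pending)}")
```

The design choice worth noting is that escalation is built into the decision path itself, so human review cannot be silently skipped when throughput pressure mounts.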

3. Address Model Bias and Ensure Data Quality

Model bias is a serious problem in AI development, frequently resulting from skewed or incomplete training data. Its effects can be far-reaching, especially when AI systems serve diverse user groups, so addressing bias during development is essential to prevent discrimination and incorrect results.

To combat model bias, companies should ensure that their user base is appropriately represented in data collection and training. This means actively seeking out diverse data sources and auditing the data for biases. Transparency and model bias are closely related: for a model to be truly transparent, its training data must be understood as well. While the EU AI Act requires this only for high-risk systems, it is a best practice that benefits the whole AI sector.
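
A lightweight first check is to compare how groups are represented in the training data and how favorable model outcomes are distributed across them. The sketch below computes group shares and a simple demographic parity gap over toy records; it is an illustrative starting point rather than a full fairness audit, and the group labels and the 0.10 disparity threshold are assumptions made for the example.

```python
from collections import Counter

# Toy records: (group label, model outcome where 1 = favorable).
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

counts = Counter(group for group, _ in records)
favorable = Counter(group for group, outcome in records if outcome == 1)

# Representation: each group's share of the dataset.
for group, n in counts.items():
    print(f"{group}: {n / len(records):.0%} of data")

# Demographic parity gap: spread in favorable-outcome rates across groups.
rates = {group: favorable[group] / counts[group] for group in counts}
gap = max(rates.values()) - min(rates.values())
print(f"favorable rates: {rates}, parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative threshold
    print("warning: outcome disparity exceeds threshold; review training data")
```

Run on real data at each retraining cycle, checks like this make bias visible early, when rebalancing the dataset is still cheap.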

Preparing for the EU AI Act

The EU AI Act is an opportunity for businesses to earn the trust of stakeholders and customers, not merely a legislative hurdle. By emphasizing explainability and transparency, companies can create AI systems that are accessible, user-friendly, and compliant. Human oversight and stringent safety measures further increase the dependability of AI systems, reinforcing the idea that AI is meant to supplement human decision-making, not supplant it.

Embracing best practices and taking a proactive approach will help organizations navigate the regulatory landscape with confidence as they work toward compliance with the EU AI Act. The Act aims to make AI environments safer and more transparent, benefiting both companies and end users. Long-term success with artificial intelligence requires staying informed about regulatory developments and actively participating in shaping them.
