An Overview of the EU AI Act

December 19, 2023 | Updated February 20, 2024
Steve Klementowski

The European Union's latest legislative milestone, the Artificial Intelligence Act, is anticipated to be the most far-reaching legal framework dedicated to artificial intelligence to date. On December 8, 2023, European lawmakers reached a provisional agreement on the EU AI Act. Under the expected timeline, the AI Act will take effect in 2026.

The AI Act is being compared to the General Data Protection Regulation (GDPR), given the GDPR's widespread influence on data privacy laws beyond the European Union. The AI Act represents a significant step in the governance and regulation of AI technologies within the EU and sets a precedent with global implications.

What is the Definition of an AI System?

To establish a uniform definition that could be applied to future AI systems and avoid misinterpretation, the European Parliament adopted the OECD's definition of an AI system:

"An AI system is a machine-based system that [...] infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments."

What is the Scope of the EU AI Act?

The AI Act is extraterritorial and applies to entities that create, manipulate, or deploy relevant AI systems, regardless of their location. Companies and individuals must adhere to the Act as long as these systems are used or have an impact within the European Union. For example, a U.S.-based company that operates an AI system trained on data from European citizens is required to adhere to the AI Act.

It is important to note that the Act excludes certain AI systems from its scope, specifically:

  • Systems that are used solely for military or defense purposes
  • Systems dedicated exclusively to research and innovation activities
  • Systems operated by individuals for personal, non-professional reasons

The European Parliament's primary objective in introducing the AI Act is to ensure that AI systems that have a direct impact within the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Additionally, there's a strict emphasis on human oversight of these systems to mitigate the risk of adverse effects and prevent harmful outcomes.

How is AI Risk Categorized?

The EU AI Act is a risk-based legal framework: different rules apply to different risk levels. Risks are categorized according to the level of threat they pose to society: unacceptable risk, high risk, and limited risk.

Unacceptable Risk AI Systems

AI systems that pose an unacceptable risk are prohibited outright. These include:

  • Social Scoring: categorizing individuals based on their behavior, socio-economic background, or personal traits.
  • Biometric Identification: using facial recognition, fingerprints, and other biometrics to identify individuals. Exceptions exist for prosecuting serious crimes, but only with prior judicial authorization.
  • Cognitive Behavioral Manipulation: scenarios like voice-activated toys that could prompt risky actions in children, or systems that target vulnerable groups.

High-Risk AI Systems

The European Union further identifies AI systems as high-risk because of their potential impact on safety or fundamental rights. They fall into two main categories: AI systems used in EU-regulated products, like aviation equipment, cars, toys, and medical devices, and those deployed in eight specific areas, including law enforcement, critical infrastructure, and migration, asylum, and border control management.

It is important to note that high-risk AI systems must undergo a thorough assessment before they are introduced to the market. Moreover, their performance and compliance with standards must be continually monitored throughout their lifecycle, ensuring they remain safe and aligned with fundamental rights.

Limited Risk AI Systems

Systems classified as limited risk, such as generative AI like ChatGPT and deepfakes, are required to meet basic transparency standards. They must alert users that they are interacting with an AI system, allowing them to decide whether to continue using these applications after an initial interaction. This is especially applicable in cases where AI is used to generate or manipulate image, audio, or video content like deepfakes. 
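
To make this transparency obligation concrete, here is a minimal sketch of how a provider might disclose that content is AI-generated. This is purely illustrative: the Act does not prescribe a specific label format, and the function below is a hypothetical example rather than an official compliance mechanism.

```python
# Hypothetical helper for the transparency obligation described above:
# telling users that content was generated by an AI system.
# The label wording and format are illustrative assumptions.

def label_ai_content(content: str, kind: str = "text") -> str:
    """Prepend a human-readable AI disclosure to generated content."""
    disclosure = f"[AI-GENERATED {kind.upper()}] This {kind} was created by an AI system."
    return f"{disclosure}\n\n{content}"

print(label_ai_content("Here is a summary of your meeting...", kind="text"))
```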

What are the Penalties and Compliance Requirements?

As mentioned previously, unacceptable risk and high-risk AI systems face stringent requirements. Any artificial intelligence that poses an unacceptable level of risk is prohibited outright, and deploying it is subject to severe financial penalties. High-risk systems, meanwhile, carry rigorous compliance requirements, including mandatory fundamental rights impact assessments, registration in public EU databases, and obligations to explain AI-driven decisions that affect citizens' rights.

On the other hand, obligations for limited-risk AI systems are relatively light, focusing mainly on transparency: appropriately labeling content, informing users when they are interacting with AI, and publishing training data summaries and technical documentation.

Fines for breaching the Act are calculated based on a percentage of the offending party's global annual turnover in the previous financial year or a fixed sum, whichever is greater:

  • €35 million or 7% for using banned AI applications
  • €15 million or 3% for violating the Act's obligations
  • €7.5 million or 1.5% for providing incorrect information

As a note, there are proportionate caps on administrative fines for small and medium enterprises (SMEs) and startups. 
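
To make the "whichever is greater" rule concrete, the sketch below computes the maximum fine for each tier listed above. The tier amounts come from the provisional agreement; the function name, data structure, and example turnover are illustrative assumptions, not part of the Act's text.

```python
# Illustrative sketch of the "whichever is greater" fine calculation.
# Tier values are from the list above; everything else is hypothetical.

FINE_TIERS = {
    "banned_ai_use":         (35_000_000, 0.07),   # EUR 35M or 7% of turnover
    "obligation_violation":  (15_000_000, 0.03),   # EUR 15M or 3%
    "incorrect_information": (7_500_000,  0.015),  # EUR 7.5M or 1.5%
}

def maximum_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine: the fixed sum or the turnover-based
    amount, whichever is greater."""
    fixed_sum, turnover_pct = FINE_TIERS[violation]
    return max(fixed_sum, turnover_pct * global_annual_turnover_eur)

# Example: a company with EUR 2B global turnover using a banned AI
# application faces up to max(35M, 7% of 2B) = EUR 140M.
print(maximum_fine("banned_ai_use", 2_000_000_000))  # 140000000.0
```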

How Can Cyera Help You Prepare for the EU AI Act?

Cyera’s Data Security platform enables you to discover, classify, and assess the risks of training data used by AI systems. Through continuous assessment of training data, Cyera helps organizations better understand and secure their AI systems in preparation for the EU AI Act.

Holistic Visibility

Cyera identifies and analyzes the location, classification, sensitivity, and associated risks of training data. The platform locates where training data is stored across organizational silos, helping you establish a unified view of data used by AI systems.

Risk Evaluation

Cyera evaluates risk based on the data's content and environment. For instance, Cyera tells you if training data contains personally identifiable information (PII), if it is broadly accessible, and if it is real data exposed in plaintext, each of which increases that data's risk.

Further, Cyera factors in the environment housing the data. For example, risk may be elevated if the data is stored in an insecure environment or if logging is turned off.
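
As an illustration of how content and environment signals might combine into a single risk score, consider the sketch below. This is a hypothetical example: Cyera's actual scoring model is not public, and the fields and weights here are invented for demonstration.

```python
# Hypothetical content- and environment-based risk scoring, illustrating
# the factors described above. Fields and weights are invented examples.

from dataclasses import dataclass

@dataclass
class DatastoreFindings:
    contains_pii: bool          # content signal: PII detected in training data
    broadly_accessible: bool    # content signal: wide access permissions
    plaintext: bool             # content signal: data not encrypted or tokenized
    insecure_environment: bool  # environment signal: e.g., publicly reachable store
    logging_disabled: bool      # environment signal: no audit trail

def risk_score(f: DatastoreFindings) -> int:
    """Sum weighted risk factors; a higher score means riskier training data."""
    weights = [
        (f.contains_pii, 3),
        (f.broadly_accessible, 2),
        (f.plaintext, 2),
        (f.insecure_environment, 2),
        (f.logging_disabled, 1),
    ]
    return sum(w for flag, w in weights if flag)

findings = DatastoreFindings(True, True, True, False, True)
print(risk_score(findings))  # 8 -> e.g., flag this datastore for review
```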

Governance

Cyera enables you to audit the configuration settings of datastores housing training data, as well as activate policies and trigger alerts when suspicious activity is detected. For example, if someone moves the data to an unauthorized environment, an alert is routed to the appropriate security analysts for review and intervention. Cyera also provides a comprehensive audit of who has access to training data, detailing the permissions each user holds and their purpose for accessing the data.
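
The sketch below illustrates the kind of policy check just described: alerting when training data lands outside an approved set of environments. The event shape, allowlist, and alerting mechanism are illustrative assumptions, not Cyera's API.

```python
# Hypothetical policy check in the spirit of the governance example above:
# alert when a data-movement event targets an unapproved environment.

AUTHORIZED_ENVIRONMENTS = {"prod-training-vault", "ml-research-enclave"}

def check_move_event(event: dict) -> None:
    """Emit an alert if a data-movement event lands outside the allowlist."""
    if event["destination"] not in AUTHORIZED_ENVIRONMENTS:
        alert = (f"ALERT: {event['dataset']} moved to unauthorized "
                 f"environment '{event['destination']}' by {event['actor']}")
        print(alert)  # in practice, route to security analysts for review

check_move_event({
    "dataset": "chatbot-training-corpus",
    "destination": "dev-sandbox",
    "actor": "jdoe@example.com",
})
```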

Conclusion

While developments around the EU AI Act continue to unfold, the Act, once formally enacted, will undoubtedly set a global standard for AI regulation. It balances the need for innovation with the imperatives of safety, privacy, and ethics. The severe penalties it imposes give operators of AI systems a strong incentive to comply.

Cyera empowers security teams to know where their data is and what exposes it to risk, and to take immediate action to remediate exposures and ensure compliance without disrupting the business. Cyera offers a modern solution for a modern problem: helping organizations understand and secure the data utilized and generated by AI systems.

To learn more about how Cyera can help you manage the data associated with your AI systems, schedule a demo today.