Entrust Legal

The EU AI Act is Finally Here – Takeaways for AI Startups

Updated: Jul 29


Last Friday, the European Union unveiled the full version of the world’s first comprehensive AI regulation: the EU AI Act. Unanimously endorsed by its 27 member states, the Act will enter into force on August 1, 2024.


Its phased implementation will introduce a swathe of new obligations for companies, both within and outside the EU, making it essential for AI startups to understand and prepare for its implications. This legislation will apply in addition to existing laws, such as the GDPR, and has extraterritorial reach, affecting all entities offering AI solutions in the EU market.


A. Classification of AI Systems


The EU AI Act classifies AI systems by risk level, imposing different obligations accordingly (a toy illustration follows the list):


  1. Unacceptable Risk: AI systems posing a clear threat to safety or fundamental rights are outright prohibited. This category includes social scoring systems and manipulative AI.

  2. High-Risk AI Systems: Extensively regulated, these systems are used in critical areas such as biometric identification, emotion recognition, credit assessments, infrastructure management, education, and law enforcement. In addition, an AI system that is a safety component of a regulated product (e.g., a product subject to EU health and safety legislation), or that is itself such a regulated product (e.g., cars, aviation), will also qualify as “high-risk.” High-risk systems are permitted but must adhere to stringent requirements, including risk management, data governance, technical documentation, human oversight, and post-market monitoring.

  3. Limited Risk AI Systems: These are systems humans interact with directly, such as chatbots, and they are subject to lighter transparency obligations. Developers must ensure users are aware they are interacting with AI.

  4. Minimal Risk AI Systems: Currently unregulated and unrestricted, this category encompasses most AI applications, such as AI in video games or spam filters. However, this may change with the rise of generative AI technologies.
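
To make the tiering concrete, the four categories can be pictured as a lookup from use case to obligation level. Below is a purely illustrative Python sketch; the use-case labels, the EXAMPLE_TIERS mapping, and the classify helper are hypothetical shorthand for this post, not a legal test. The actual classification turns on the Act’s annexes and case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to strict obligations"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "no specific obligations (for now)"

# Hypothetical example use cases mapped to the Act's four tiers,
# for illustration only; real classification requires legal analysis.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "credit assessment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example use case."""
    try:
        return EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"No illustrative tier recorded for {use_case!r}")

print(classify("credit assessment").value)  # "allowed, subject to strict obligations"
```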


B. Obligations for Providers and Users


  • Providers: Developers of AI systems or general-purpose AI (GPAI) models must comply with various obligations, particularly for high-risk systems. They must manage risks, ensure data quality, maintain technical documentation, conduct post-market monitoring, and more.

  • Deployers: Users who deploy high-risk AI systems in a professional capacity (not for personal use) must ensure transparency about AI-generated content, though their obligations are less extensive than those of providers.


C. General Purpose AI (GPAI) Models


For providers of GPAI models—like OpenAI’s GPT—there are specific requirements:

  • Documentation: Providers must offer technical documentation, usage instructions, and summaries of training data.

  • Compliance for Systemic Risk: Providers of GPAI models posing systemic risks (typically determined by compute thresholds, as shown in the sketch below) must conduct model evaluations and adversarial testing, and ensure cybersecurity.
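
The Act presumes systemic risk once the cumulative compute used to train a model exceeds 10^25 floating-point operations (the European Commission can also designate models on other grounds). Here is a minimal sketch of that threshold check, using hypothetical FLOP figures:

```python
# Presumption of "systemic risk" for GPAI models under the AI Act:
# cumulative training compute above 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if training compute alone triggers the systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical figures, for illustration only.
print(presumed_systemic_risk(3e25))  # True: evaluations, adversarial testing, etc.
print(presumed_systemic_risk(5e24))  # False: baseline GPAI obligations only
```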


D. Phased Implementation


The AI Act will roll out in phases:


  • February 2025: Certain AI systems will be banned six months after the Act comes into force, including social credit scoring, untargeted facial-recognition scraping, and certain uses of real-time biometrics.

  • May 2025: Codes of practice will apply to developers of in-scope AI apps. The EU’s AI Office, an ecosystem-building and oversight body established by the Act, is responsible for facilitating these codes, but who will actually draft the guidelines is still to be determined.

  • August 2025: New transparency requirements for GPAI models take effect, with a grace period until August 2027 for models already on the market.

  • August 2026: Obligations for high-risk systems begin to apply. For high-risk systems used by public authorities, compliance will be required by August 2030, regardless of design changes.


E. Violations and Fines


The AI Act imposes severe penalties for non-compliance, with fines reaching up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. Regulators also have the authority to ban AI systems entirely from the EU market.
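
To make the “whichever is higher” rule concrete, here is a small arithmetic sketch with hypothetical turnover figures:

```python
def max_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious breaches: the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical turnover figures, for illustration only.
print(max_fine_ceiling(200_000_000))    # 35,000,000.0  (flat cap is higher)
print(max_fine_ceiling(2_000_000_000))  # 140,000,000.0 (7% of turnover is higher)
```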


The regulators are:

  • GPAI Models: Enforcement will be handled by the European AI Office, a new body established to oversee compliance.

  • AI Systems: Enforcement will take place primarily at the national level: each EU country must designate its regulatory authority within one year of the Act's entry into force.


The key question is whether the AI Act applies to your company. Even if your company is not established in the EU, it could be subject to the AI Act if you make an AI system or GPAI model available in the EU market. Moreover, even if only the output generated by the AI system is used in the EU, the Act will still apply to the system's provider and deployer. This means that many providers of AI systems or GPAI models based outside the EU could still face investigations by EU AI regulators.


F. Compliance Checklist


All companies leveraging or deploying AI should prepare for the far-reaching impact of the new law:


  1. Determine applicability: Check if the AI Act applies to your products or services and understand the specific obligations and deadlines.

  2. Revise product design: Where needed, identify and implement design changes to your products or services in light of the requirements.

  3. Risks and safeguards: Ensure robust risk management and data governance.

  4. Oversight: Designate an expert individual or a team dedicated to overseeing compliance and governance.

  5. Strategy: Allocate resources and develop a long-term strategy to meet the new requirements.


Companies should also review existing contracts and consider incorporating legal clauses to address AI Act compliance. For high-risk AI systems, a written agreement specifying compliance responsibilities may be necessary. The new EU AI Office may develop voluntary model terms for such contracts.


G. What About GDPR?


The bad news is that the EU AI Act will apply on top of existing laws and regulations, including the GDPR, adding to the compliance burden for early-stage startups.


The good news is that companies already compliant with the GDPR can repurpose parts of their existing documentation and systems for AI Act compliance. For instance:


  • Data Protection Impact Assessments (DPIAs) could be used for parts of the risk management process.

  • Data Handling Policies could be adapted to cover AI-specific requirements.

  • Privacy by Design Policies could be updated and used for AI risk assessments and transparency.

  • Data Security Policies could include AI systems in their scope.

  • Incident Response Policies may be expanded to cover AI system malfunctions and risks.


H. Conclusion


The EU AI Act marks a significant shift in AI regulation, affecting how AI systems are developed, deployed, and regulated. For AI startups, especially those targeting the EU market, understanding and preparing for these new regulations is essential. Timely preparation will help ensure compliance and facilitate a smooth transition into this evolving regulatory landscape. For a more in-depth legal analysis or guidance, please reach out to see how we can help you.
