Navigating the EU AI Act: Investment Guidelines for AI Companies
The Artificial Intelligence Act (AI Act) is a European Union regulation on artificial intelligence (AI). Proposed by the European Commission on 21 April 2021 and passed on 13 March 2024, it aims to establish a common regulatory and legal framework for AI.
The AI Act is the first comprehensive regulation of AI by a major regulator anywhere.
Investors in AI companies operating within the EU must carefully consider the regulatory environment to assess potential risks and opportunities. The impact is already being felt in these early days of standardization, and binding regulation is around the corner.
In this article, we’ll explore key guidelines for investors to navigate the complexities of the EU AI Act and make informed investment decisions.
Understanding the Regulatory Scope:
Investors should start by familiarizing themselves with the provisions outlined in the EU AI Act. Understanding which of the target company's AI systems are considered at risk and fall within the scope of the Act, and mapping the specific requirements imposed on them, is essential for assessing compliance measures.
Assessing Compliance Measures:
Evaluate the AI company’s current and planned measures for compliance with the EU AI Act. This includes examining their approach to transparency, accountability, data privacy, safety, and non-discrimination in AI systems. The EU AI Act Compliance Checker (https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/) is an interactive tool for determining whether a specific AI system will be affected by the Act.
Identifying High-Risk AI Systems:
The regulatory framework defines four levels of risk for AI systems: unacceptable risk (prohibited practices, such as social scoring), high risk, limited risk (subject to transparency obligations), and minimal risk.
Determine whether the AI systems developed or utilized by the company fall under the high-risk category defined by the regulation. High-risk systems are subject to stricter requirements and conformity assessment procedures, which may affect the company’s operations, including sales cycles and customer pushback.
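As a first-pass screening aid, the tiered classification above can be sketched in code. This is a minimal, illustrative triage helper, not a legal assessment: the keyword lists are assumptions chosen to echo well-known examples from the Act, and a real determination requires the Act's text (and tools like the Compliance Checker).

```python
# Illustrative sketch: triaging a described AI use case into the AI Act's
# four risk tiers. Keyword lists are assumptions for illustration only,
# not the Act's legal definitions.

UNACCEPTABLE = {"social scoring", "subliminal manipulation"}
HIGH = {"credit scoring", "recruitment screening", "critical infrastructure"}
LIMITED = {"chatbot", "deepfake", "emotion recognition"}

def risk_tier(use_case: str) -> str:
    """Return an (assumed) AI Act risk tier for a described use case."""
    text = use_case.lower()
    for tier, keywords in (
        ("unacceptable", UNACCEPTABLE),
        ("high", HIGH),
        ("limited", LIMITED),
    ):
        # First matching tier wins, from most to least restrictive.
        if any(keyword in text for keyword in keywords):
            return tier
    return "minimal"
```

For example, `risk_tier("AI-assisted recruitment screening of applicants")` returns `"high"`, flagging the system for the stricter conformity assessment procedures discussed above, while an unmatched use case falls through to `"minimal"`.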
Evaluating Risk Management Practices:
Assess the company’s risk management practices concerning AI development and deployment. This involves examining their strategies for identifying and mitigating potential risks associated with AI technologies. AI Trust, Risk, and Security Management (AI TRiSM) offers a comprehensive approach to managing AI risks, helping avoid legal and reputational damage while ensuring AI is used responsibly and ethically.
Due Diligence on Data Practices:
Investigate the company’s data collection, processing, and storage practices to ensure compliance with EU data protection regulations, such as the General Data Protection Regulation (GDPR). Robust data governance policies are essential for safeguarding data privacy and complying with regulatory requirements.
Ensuring Legal Compliance and Documentation:
Verify that the company maintains comprehensive documentation demonstrating compliance with applicable regulations, including the EU AI Act. Adequate records of risk assessments, testing procedures, and conformity assessment documentation for high-risk AI systems are crucial for regulatory compliance.
Evaluating Competitive Advantage:
Assess how the company’s compliance with AI regulations may impact its competitive position in the market. Companies that proactively address regulatory requirements and uphold ethical standards may have a competitive advantage over those that do not, enhancing their long-term sustainability and growth prospects.
In conclusion, investing in AI companies operating within the European Union requires a thorough understanding of the regulatory landscape and careful assessment of compliance measures and potential risks. By following these guidelines, investors can navigate the complexities of the EU AI Act and make informed investment decisions that align with regulatory requirements and ethical principles.
Make the world safe with AI.