18.07.2024

Leveraging AI can provide businesses with significant competitive advantages, but it also requires navigating emerging regulations across jurisdictions. The European Union (EU) has emerged as a leader in establishing first-of-its-kind AI regulations, aimed at ensuring the ethical, safe, and responsible use of AI.

Key regulations to understand are the EU AI Act and the Digital Services Act (DSA). These regulations aim to set comprehensive frameworks for the safe and responsible development and use of artificial intelligence and digital services. The EU AI Act seeks to establish strict standards to mitigate risks associated with AI technologies, ensuring that AI systems respect fundamental rights, safety, and ethical principles. Meanwhile, the DSA focuses on creating a safer digital space by regulating online content, enhancing transparency in digital services, and protecting users' rights. Together, these acts represent a significant step towards a robust regulatory environment that balances innovation with fundamental rights and safety in the digital and AI domains.

To stay ahead of the curve, C-suite executives need to understand how these regulations may affect AI use and business operations in the EU, and be proactive about compliance to avoid the legal repercussions and reputational damage that can arise from AI mishaps. For instance, if AI is used to determine access or admission to an educational institution, or to evaluate learning outcomes, the system may be classified as high-risk and become subject to the EU AI Act’s stringent risk-assessment and regulatory obligations, such as quality management, monitoring and incident reporting, and testing for accuracy, robustness, and security.

Continue reading for an overview of what each law covers and the details you should be aware of:

  • What they cover

    EU AI Act

    The EU AI Act provides a comprehensive framework for regulating AI systems based on risk levels, and a separate regulatory regime for general-purpose AI models. It seeks to ensure AI technologies are designed and deployed in a manner that respects EU values and fundamental rights.

    Digital Services Act (DSA)

    The DSA provides a liability framework for online intermediaries operating in the EU and sets requirements around how they manage illegal and harmful content published, and goods and services sold, via their services.

    In practice, this covers a broad range of businesses that store or transmit the content of third parties, including internet service providers, providers of web-based messaging and email services, providers of cloud computing or web hosting services, social media networks, app stores, online marketplaces, and online search engines. Ultimately, it aims to create a safer digital space by protecting users and establishing accountability for digital service providers.

  • Key objectives

    EU AI Act

    • Promote Ethical and Safe AI Use: Ensure AI systems operate safely and ethically.
    • Ensure Transparency and Accountability: Provide clear information about AI systems and their operations.
    • Protect Fundamental Rights: Safeguard individuals' rights and freedoms.

    Digital Services Act (DSA)

    • Enhance Transparency: Ensure clear communication about digital services.
    • Ensure Accountability: Hold content and service providers responsible for their actions.
    • Protect User Rights: Promote a safer online environment and safeguard users' rights.
  • Risk categories and key provisions

    EU AI Act

    • Unacceptable Risk AI: Prohibited AI systems posing significant threats to safety or rights (e.g., social scoring systems).
    • High-Risk AI: AI systems in critical areas (e.g., medical devices, transportation, credit scoring) requiring stringent oversight.
    • Limited Risk AI: Moderate-risk AI systems subject to transparency obligations (e.g., spam filters, artistic deepfakes).
    • Minimal Risk AI: Low-risk AI systems encouraged to adhere to voluntary codes of conduct.
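
To make the tiering concrete, here is a minimal Python sketch of how an organisation might record AI systems against the Act’s four risk tiers during an internal inventory. The system names and tier assignments are illustrative assumptions only; real classification requires legal analysis of the Act’s annexes.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited; banned from the EU market"
    HIGH = "stringent oversight, CE marking, documentation"
    LIMITED = "transparency obligations apply"
    MINIMAL = "voluntary codes of conduct encouraged"


@dataclass
class AISystem:
    """One entry in an internal AI-system inventory (illustrative)."""
    name: str
    intended_use: str
    tier: RiskTier


# Hypothetical inventory entries; actual tier assignments need legal review.
inventory = [
    AISystem("applicant-scorer", "evaluate admissions to a university", RiskTier.HIGH),
    AISystem("mail-filter", "flag spam in employee inboxes", RiskTier.LIMITED),
    AISystem("game-npc", "control characters in a video game", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.name} -> {system.tier.value}")
```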

    Digital Services Act (DSA)

    • Removing illegal content: Promptly address illegal content once notified. Trusted flaggers, entities designated by the national Digital Services Coordinators, are responsible for detecting potentially illegal content and alerting online platforms.
    • Reporting harmful content: Provide transparency about actions taken against harmful content.
    • Transparency requirements: Disclose algorithmic decision-making processes and advertising practices.
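
As a rough illustration of the notice-and-action mechanism these provisions describe, the sketch below models a notice arriving at a platform, expedited handling for trusted flaggers, and a statement of reasons for the affected user. All names and fields are assumptions for illustration, not the DSA’s prescribed data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContentNotice:
    """A notice about potentially illegal content (illustrative model)."""
    content_id: str
    reason: str
    from_trusted_flagger: bool
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def handle_notice(notice: ContentNotice) -> dict:
    """Sketch of a notice-and-action flow: prioritise trusted-flagger notices,
    record the decision, and produce a statement of reasons for the user."""
    queue = "expedited" if notice.from_trusted_flagger else "standard"
    return {
        "content_id": notice.content_id,
        "queue": queue,
        "action": "removed",  # assumed outcome of the platform's review
        "statement_of_reasons": f"Removed after review: {notice.reason}",
        "appeal_available": True,  # the DSA requires redress mechanisms
    }


print(handle_notice(ContentNotice("post-123", "suspected counterfeit goods", True)))
```
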
  • Requirements for compliance

    EU AI Act

    • Unacceptable Risk AI: Banned from the EU market.
    • High-Risk AI: Must undergo rigorous testing, obtain CE Marking, and meet transparency and documentation requirements.
    • Limited and Minimal Risk AI: Must provide clear information to users and adhere to voluntary codes of conduct.

    Digital Services Act (DSA)

    • Adherence to Regulations: Non-EU companies must ensure their AI systems and digital services meet EU standards.
    • Transparency and Documentation: Provide detailed documentation and transparency reports as required.
    • Monitoring and Audits: Be prepared for monitoring and audits by the EU AI Office and relevant National Authorities.
  • Compliance timelines

    EU AI Act

    • By 6 months after entry into force: Prohibitions on unacceptable risk AI
    • By 9 months after entry into force: Codes of practice for General Purpose AI (GPAI) must be finalised
    • By 12 months after entry into force: GPAI rules apply, Member State competent authorities are appointed; annual Commission review and possible amendments to prohibitions introduced
    • By 18 months after entry into force: Commission issues implementing acts creating a template for high-risk providers’ post-market monitoring plan
    • By 24 months after entry into force: Obligations on high-risk AI systems apply. Member States to have implemented rules on penalties, including administrative fines; Member State authorities to have established at least one operational AI regulatory sandbox; Commission review and possible amendment of list of high-risk AI systems.
      • High-risk AI systems include AI systems in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration and administration of justice.
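
Because every deadline is defined relative to the Act’s entry into force, the dates are easy to compute once that date is fixed. The sketch below uses a hypothetical entry-into-force date purely for illustration; substitute the actual date once confirmed.

```python
from datetime import date

# Hypothetical entry-into-force date, for illustration only.
ENTRY_INTO_FORCE = date(2024, 8, 1)


def months_after(start: date, months: int) -> date:
    """Return the date `months` calendar months after `start`.

    Naive: assumes the day-of-month exists in the target month,
    which holds here because the start date is the 1st.
    """
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)


milestones = {
    6: "Prohibitions on unacceptable-risk AI apply",
    9: "Codes of practice for GPAI finalised",
    12: "GPAI rules apply; competent authorities appointed",
    18: "Template for post-market monitoring plans issued",
    24: "Obligations on high-risk AI systems apply",
}

for offset, event in milestones.items():
    print(f"{months_after(ENTRY_INTO_FORCE, offset).isoformat()}: {event}")
```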

    Digital Services Act (DSA)

    All regulated entities were required to comply by 17 February 2024.

  • Enforcement and penalties

    EU AI Act

    The AI Office will develop compliance methodologies and monitor high-risk AI systems. National Authorities will enforce regulations, conduct audits, and impose penalties. Substantial fines will be imposed for non-compliance; these could be as high as 7% of a company’s global annual turnover. For example, a company with €2 billion in global annual turnover could face a fine of up to €140 million.

    Digital Services Act (DSA)

    The DSA includes a full set of investigative and sanctioning measures that national authorities and the European Commission can take. The Commission can impose fines and periodic penalty payments, and request the temporary suspension of services. Substantial fines will be imposed for non-compliance; these could be as high as 6% of a company’s global annual turnover.

  • Extraterritorial reach: Both the EU AI Act and the DSA apply to any company offering services or products to EU citizens, regardless of where the company is based.

  • Recommended actions

    EU AI Act

    • Be comprehensive in identifying all AI systems in use and map their respective risk levels accordingly.
    • Ensure that each AI system used in the organisation has been evaluated to identify its intended use and potential risks.
    • Maintain comprehensive technical documentation for high-risk AI systems.
    • Implement organisational processes and system controls to ensure human oversight and transparency.
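
One lightweight way to track these obligations per system is a compliance record with explicit open items. The sketch below is an assumed structure for illustration; the fields and system name are hypothetical, and real obligations should be taken from the Act itself.

```python
from dataclasses import dataclass


@dataclass
class HighRiskSystemRecord:
    """Illustrative compliance record for one high-risk AI system."""
    name: str
    intended_use: str
    technical_docs_complete: bool
    human_oversight_defined: bool
    transparency_notice_published: bool

    def open_items(self) -> list[str]:
        """List the compliance gaps still to be closed for this system."""
        checks = {
            "technical documentation": self.technical_docs_complete,
            "human oversight process": self.human_oversight_defined,
            "transparency notice": self.transparency_notice_published,
        }
        return [item for item, done in checks.items() if not done]


record = HighRiskSystemRecord(
    name="cv-screening-model",  # hypothetical system
    intended_use="rank job applicants",
    technical_docs_complete=True,
    human_oversight_defined=False,
    transparency_notice_published=False,
)
print(record.open_items())  # ['human oversight process', 'transparency notice']
```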

    Digital Services Act (DSA)

    • Review and align internal policies on illegal content online and the protection of users’ fundamental rights online, including the freedom of speech.
    • Prioritise transparency and ensure that there is clear communication internally about content moderation and algorithmic decisions.
    • Establish relevant user support systems to provide mechanisms for users to appeal and seek redress for content-related decisions.

Preparing for Compliance During the Implementation Phase

The European Union's AI Act and Digital Services Act (DSA) are critical regulations that companies must prioritise for several reasons.

Firstly, these legislations are designed to ensure the ethical and responsible use of technology, which is crucial in maintaining consumer trust and protecting user rights. After all, trust is paramount in the AI industry.

Secondly, non-compliance with these regulations can result in substantial financial penalties and damage to a company’s reputation, as past incidents have shown. For example, when a major tech company launched its AI chatbot, a factual error in the demo wiped US$100 billion off its market value and damaged consumer trust.

Lastly, these laws are set to reshape the digital landscape in the EU, influencing market dynamics and competitive positioning. Companies that adapt proactively will not only avoid penalties but also gain a competitive edge by demonstrating their commitment to ethical and compliant practices.

Business owners in this space grapple with varied organisational structures, sizes, and capabilities. With that in mind, here is a step-by-step guide to complying with these fast-changing EU laws and setting your organisation up for success in its AI use and implementation:

  1. Have a clear sense of the compliance timelines and use the time to start preparing ahead.
    • Understand key dates. For example, the EU AI Act has a two-year implementation period. Companies need to know when the regulations come into effect and what deadlines they must meet.
    • Assess current systems. Conduct a thorough audit of your existing AI systems and digital services to identify areas needing compliance adjustments.
    • Develop a roadmap. Create a detailed plan outlining the steps required to meet compliance by the enforcement dates. This roadmap should include timelines, responsible teams, and the key milestones the organisation has to meet to avoid contravening the law (a minimal sketch of such a roadmap follows this list).
  2. Establish your organisation’s internal readiness to deal with these laws.
    • Build expertise. Develop in-house expertise in AI, legal, and regulatory compliance. This might involve hiring new talent or upskilling current employees so they can help the organisation navigate this evolving legal landscape.
    • Cross-functional collaboration. While this may sound simple, ensure that different departments, such as legal, IT, and operations, work together to develop comprehensive compliance strategies. This is not the task or responsibility of any one team alone.
    • Internal policies. Once the roadmap is established and a clear plan is in place, update your internal policies to align with these new regulations. Make sure that these policies are communicated clearly to all employees.
  3. Invest in your employees.
    • Training programmes. Develop training programmes focused on the ethical and compliant use of AI and digital services. These should be mandatory for all relevant employees, and will also help them get through internal and external audits with minimal complications.
    • Ongoing education. Compliance is not a one-time effort. Ensure continuous education through regular training sessions that are updated as the regulatory landscape evolves. Where necessary, send your employees for upskilling so that they remain ahead of the curve.
    • Ethics and responsibility. Emphasise the importance of ethical AI practices and the impact of compliance on consumer trust and company reputation.
  4. Strengthen external collaboration.
    • Engage with regulators. Where possible, communicate with regulatory bodies to stay informed about upcoming changes, and participate in consultations, where the opportunity arises, to present industry perspectives.
    • Regulatory sandboxes. Take advantage of regulatory sandboxes to test and validate AI systems in a controlled environment. This can help identify potential compliance issues early.
    • Industry partnerships. Collaborate with other industry players to share best practices, compliance strategies, and insights. Form or join industry groups focused on AI and digital services compliance.
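
As referenced in step 1, a compliance roadmap can be as simple as an ordered list of owned, dated milestones. The sketch below is illustrative only; the milestones, owners, and dates are assumptions, not prescribed by either law.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Milestone:
    """One step on an illustrative compliance roadmap."""
    description: str
    owner: str  # responsible team
    due: date


roadmap = [
    Milestone("Complete inventory of AI systems in use", "IT", date(2024, 10, 1)),
    Milestone("Classify each system by EU AI Act risk tier", "Legal", date(2024, 12, 1)),
    Milestone("Update internal AI-use policies", "Compliance", date(2025, 2, 1)),
    Milestone("Roll out mandatory staff training", "HR", date(2025, 4, 1)),
]

today = date(2024, 8, 1)  # assumed reference date for the example
for m in sorted(roadmap, key=lambda m: m.due):
    status = "OVERDUE" if m.due < today else "on track"
    print(f"{m.due} [{m.owner}] {m.description} - {status}")
```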

Navigating the EU AI legislation requires C-Suite executives to stay informed, proactive, and collaborative. Ensuring compliance not only avoids legal repercussions but also strengthens the trust and reliability that are essential in the evolving world of AI.