18.07.2024
Leveraging AI can provide businesses with significant competitive advantages, but it also requires navigating emerging regulations across jurisdictions. The European Union (EU) has emerged as a leader in establishing first-of-its-kind AI regulations, aimed at ensuring the ethical, safe, and responsible use of AI.
Key regulations to understand are the EU AI Act and the Digital Services Act (DSA). These regulations aim to set comprehensive frameworks for the safe and responsible development and use of artificial intelligence and digital services. The EU AI Act seeks to establish strict standards to mitigate risks associated with AI technologies, ensuring that AI systems respect fundamental rights, safety, and ethical principles. Meanwhile, the DSA focuses on creating a safer digital space by regulating online content, enhancing transparency in digital services, and protecting users' rights. Together, these acts represent a significant step towards a robust regulatory environment that balances innovation with fundamental rights and safety in the digital and AI domains.
To stay ahead of the curve, C-suite executives need to understand how these regulations may affect AI use and business operations in the EU, and remain proactive about compliance to avoid the legal repercussions and reputational damage that can arise from AI mishaps. For instance, AI systems used to determine access or admission to an educational institution, or to evaluate learning outcomes, may be classified as high-risk and become subject to the EU AI Act’s stringent risk assessment and regulatory obligations, such as quality management, monitoring and incident reporting, and testing for accuracy, robustness and security.
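To make the classification logic above concrete, here is a minimal, illustrative sketch in Python. The use-case names, tier labels, and obligation strings are simplified paraphrases for illustration only, not a legal checklist; the education examples follow the high-risk classification described above.

```python
# Illustrative sketch only: a simplified mapping of AI use cases to
# EU AI Act risk tiers and example obligations. Names and categories
# are hypothetical paraphrases, not legal advice.

RISK_TIERS = {
    "education_admission_scoring": "high",   # access/admission to education
    "learning_outcome_evaluation": "high",   # evaluating learning outcomes
    "customer_service_chatbot": "limited",   # transparency obligations apply
    "spam_filtering": "minimal",             # no specific obligations
}

HIGH_RISK_OBLIGATIONS = [
    "risk assessment and mitigation",
    "quality management",
    "monitoring and incident reporting",
    "testing for accuracy, robustness and security",
]

def obligations_for(use_case: str) -> list[str]:
    """Return illustrative compliance obligations for a given AI use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    if tier == "high":
        return HIGH_RISK_OBLIGATIONS
    if tier == "limited":
        return ["transparency: disclose to users that they are interacting with AI"]
    return []
```

In practice, a compliance team would maintain an inventory of AI systems and map each one to its tier and obligations in a similar, auditable way.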
EU AI Act
The EU AI Act provides a comprehensive framework for regulating AI systems based on risk levels, and a separate regulatory regime for general-purpose AI models. It seeks to ensure AI technologies are designed and deployed in a manner that respects EU values and fundamental rights.
Digital Services Act (DSA)
The DSA provides a liability framework for online intermediaries operating in the EU and sets requirements around how they manage illegal and harmful content published, and goods and services sold, via their services.
In practice, this covers a broad range of businesses that store or transmit the content of third parties, including internet service providers, providers of web-based messaging and email services, providers of cloud computing or web hosting services, social media networks, app stores, online marketplaces, and online search engines. Ultimately, it aims to create a safer digital space by protecting users and establishing accountability for digital service providers.
Digital Services Act (DSA)
All regulated entities were required to comply by 17 February 2024.
EU AI Act
The AI Office will develop compliance methodologies and monitor high-risk AI systems. National authorities will enforce regulations, conduct audits, and impose penalties. Fines for non-compliance can be substantial, reaching as high as 7% of a company’s global annual turnover.
Digital Services Act (DSA)
The DSA includes a full set of investigative and sanctioning measures that can be taken by national authorities and the European Commission. The Commission can apply fines, impose periodic penalty payments, and request temporary suspension of services.
Fines for non-compliance can be substantial, reaching as high as 6% of a company’s global annual turnover.
Both the EU AI Act and the DSA have extraterritorial reach, meaning they apply to any company offering services or products to EU citizens, regardless of where the company is based.
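A rough sense of the financial exposure implied by the percentage caps above can be sketched with simple arithmetic. This is illustrative only: actual penalties also depend on fixed-euro thresholds, the nature of the violation, and regulator discretion.

```python
# Illustrative only: rough maximum-fine exposure under the percentage
# caps cited above (up to 7% of global annual turnover under the EU AI
# Act, up to 6% under the DSA). Not a substitute for legal analysis.

def max_fine_exposure(global_annual_turnover: float) -> dict[str, float]:
    """Return the turnover-based fine ceilings for each regulation."""
    return {
        "eu_ai_act": 0.07 * global_annual_turnover,
        "dsa": 0.06 * global_annual_turnover,
    }

# Example: a hypothetical company with EUR 10 billion global annual turnover
exposure = max_fine_exposure(10_000_000_000)
```

For this hypothetical company, the turnover-based ceilings alone run into the hundreds of millions of euros, which is why boards treat these regulations as a material risk rather than a back-office compliance task.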
The European Union's AI Act and Digital Services Act (DSA) are critical regulations that companies must prioritise for several reasons.
Firstly, this legislation is designed to ensure the ethical and responsible use of technology, which is crucial for maintaining consumer trust and protecting user rights. After all, trust is paramount in the AI industry.
Secondly, non-compliance with these regulations can result in substantial financial penalties and damage to a company's reputation, as demonstrated by past incidents where companies faced severe repercussions for failing to adhere to regulatory standards. For example, when a tech company recently launched its AI chatbot tool, it made a factual error in the demo, wiping US$100 billion off its market value and damaging consumer trust.
Lastly, these laws are set to reshape the digital landscape in the EU, influencing market dynamics and competitive positioning. Companies that adapt proactively will not only avoid penalties but also gain a competitive edge by demonstrating their commitment to ethical and compliant practices.
Business owners in this space grapple with varied organisational structures, sizes, and capabilities. With this in mind, here is a step-by-step guide to help ensure that you comply with these fast-changing EU laws and set your organisation up for success in its AI use and implementation:
Navigating the EU AI legislation requires C-Suite executives to stay informed, proactive, and collaborative. Ensuring compliance not only avoids legal repercussions but also strengthens the trust and reliability that are essential in the evolving world of AI.