19 November 2024
By April Chin, Managing Partner and CEO of Resaro (Singapore).
As the adoption of generative AI becomes more prevalent, a central question emerges: is it a super productivity engine, an adversarial agent, or an information leaker and safety hazard? The decision on how to adopt generative AI is far from straightforward. The answer, frustratingly, is that it can be all of the above.
In my interactions with enterprises, I observe these three competing worldviews on generative AI playing out side by side:
For example, in software development, AI-generated code promises substantial time and cost savings, yet it has become a top security concern due to potential vulnerabilities and quality issues.
Content generation tools are revolutionising marketing efforts, enabling rapid creation of diverse materials; however, the same technology has given rise to sophisticated deepfakes, exemplified by the US$25 million fraud case in which employees were duped by a deepfaked senior executive.
AI chatbots are streamlining customer service and internal communications, but anxieties loom over chatbots revealing company trade secrets or employees oversharing sensitive information.
This juxtaposition of efficiency and risk underscores the complex challenge facing enterprises as they navigate the integration of generative AI into their operations.
Most enterprises today are in the “first inning”, having dabbled with generative AI to boost productivity. The real challenge lies in progressing to subsequent innings: maximising AI’s potential while effectively managing its risks. This leap demands a nuanced understanding of both AI safety and AI security.
While AI safety and AI security have distinct focuses and risk scenarios, they are inherently interconnected: a breach in AI security can trigger safety incidents.
Inadequate security measures, such as weak access controls on training datasets or model files, can open the door to backdoor attacks that give threat actors control over AI systems. Not only can attackers manipulate an AI model’s decision-making for malicious purposes, they can also cause and amplify AI harms at scale, from generating toxic outputs for users to fuelling misinformation campaigns during elections.
The danger of taking an “either-or” approach to these competing worldviews is that we end up with an incomplete approach to risk management, missed opportunities to harness the full potential of innovation, and ethical blind spots.
Until enterprises can effectively navigate and reconcile these diverse perspectives, the full potential of generative AI will remain untapped. Those that fail to integrate these worldviews may find themselves settling for “lowest common denominator” thinking rather than achieving transformative outcomes.
Singapore has taken initial steps to encourage and support organisations in adopting AI safely and securely. The Cyber Security Agency of Singapore (CSA) and Resaro recently published a discussion paper, “Securing AI: A Collective Responsibility”, which discusses the need for ecosystem players to work together to address the security risks of AI. Building on Singapore’s ongoing efforts in AI governance, CSA has also worked with industry and international partners to develop the Guidelines on Securing AI Systems and an accompanying Companion Guide, which offer practical advice and recommendations on securing AI systems throughout their lifecycle.
The key to unlocking generative AI’s promise lies in embracing an approach that carefully weighs the trade-offs and synergies between innovation, security, and responsible governance. Downstream questions will follow: what to govern, test, and monitor, and how much pre-emptive effort is enough.
The path ahead will require collaboration across sectors and disciplines to establish mutual expectations between builders and buyers, as well as best practices for deploying and operating AI systems. It demands that we break down silos between cybersecurity experts, AI developers, governance and privacy practitioners, and policymakers. Only through this collective effort can we hope to create AI systems that are not just powerful and efficient, but also safe, secure, and trustworthy.