Close

HEADLINES

Headlines published in the last 30 days are listed on SLW.

Shaping robust governance of AI: Opinion

Shaping robust governance of AI: Opinion

Source: Business Times
Article Date: 04 Jun 2024

For artificial intelligence in Singapore, governance is as important as innovation.

Last year, Singapore shared our vision of using artificial intelligence (AI) for the public good, through the refresh of our National AI Strategy and commitment of more than S$1 billion to grow our compute capacity and AI skills in our workforce.

Almost daily, our local media reports on new AI applications across diverse fields such as crime fighting, healthcare delivery, and even construction safety. As AI adoption grows, so do concerns. Citizens want more protection against AI risks, while businesses worry it will stifle innovation.

Singapore hopes to avoid such zero-sum thinking. As in so many areas, good governance is not the enemy of innovation. On the contrary, it enables sustained innovation. Equally for AI, we believe governance is as important as innovation. This is why, even before launching our first National AI Strategy in 2019, we developed a Model AI Governance Framework.

Even before we developed our own large language model, the Southeast Asian Languages in One Network, we developed AI Verify, a testing framework and software toolkit for responsible AI use. AI Verify is by no means a perfect tool, but it fills a gap between being worried and actually doing something about it.

As President Tharman Shanmugaratnam said at the Asia Tech x Summit opening gala on May 29, “regulating AI must be the art of the possible, the attainable, and the next best”.

One important set of tools in Singapore’s approach to AI governance is laws and regulations that serve the public interest. In governing the digital domain, we have introduced new laws for personal data protection, against misinformation and disinformation, to manage cyber risks, and curb online crimes and egregious content. We intend to deal firmly with deepfakes.

But we have not introduced an overarching AI law and have no immediate plans to. Why?

Protection through existing laws

One reason is that some AI-related harms can be addressed by existing laws and regulations. For example, we can already issue correction notices to debunk fake news produced with AI, as long as there is public interest to do so.

For AI-supported hiring practices, employers are accountable under existing fair employment guidelines, regardless of AI use.

Another reason is that in other cases, updating existing laws is the more efficient response. For example, we introduced a specific offence of “sextortion” in the Penal Code, which refers to someone threatening to distribute intimate images of a victim. No one doubts the real distress caused, even if an image was a “deepfake” rather than real. With this update, the Penal Code outlaws all forms of sextortion carried out – with or without AI.

All this shows that we are not defenceless against AI-enabled harms. In AI governance, Singapore is not at ground zero.

But it is one thing to deal with the harmful effects of AI, and quite another to prevent them from happening in the first place. Our approach must therefore also include proper design and upstream measures to protect our societies from serious AI risks.

Risk mitigation

This underpins our second set of tools for AI governance, where the aim is to successfully identify, develop and validate risk-mitigating measures. It is the proverbial figuring out the nature of the beast, through high-quality research on what can tame the beast and bring out its virtues.

Last year, we published a discussion paper on generative AI and hosted the Singapore Conference on AI, identifying key areas of concern such as reliability, trustworthiness, fairness, and safety.

Most recently, at the AI Safety Summit in Seoul, Yoshua Bengio’s team released a report on the safety of advanced AI. Singapore actively participates in these discussions because they are essential to shaping robust governance of AI.

In parallel, we are growing new governance capabilities.

Much of the research focus of the Centre for Advanced Technologies in Online Safety is on AI-generated content such as misinformation, online hate, and discrimination.

We are strengthening our Digital Trust Centre to carry out research on AI testing and data safety. As Singapore’s AI Safety Institute, the centre will be part of an international network focused on addressing gaps in the science behind AI.

However, it is not enough for these capabilities to reside solely within governments or developers of AI.

This brings me to the third set of tools that we are developing for AI governance. In a fast-evolving field like AI, regulations are a necessary yet insufficient response.

AI, being a general-purpose technology, can be applied in numerous ways with varied contexts. It is crucial for organisations and individuals using AI to understand its advantages and limitations.

We must also equip them with the right attitude, capabilities, and tools. Our Model AI Governance Framework therefore aims to provide practical guidance on expected safeguards, regardless of how AI is used.

We recently expanded the framework to include generative AI, highlighting nine dimensions that should be considered holistically.

To turn recommendations into actions, the Infocomm Media Development Authority and Microsoft have launched their collaboration on content provenance and responsible AI.

Since its inception, our testing toolkit, AI Verify, has been used for fairness checks and bias prevention in AI systems.

But what about AI Verify users that now want to also use generative AI?

Project Moonshot

This is the reason for Project Moonshot – our latest effort that builds on the AI Verify toolkit and expands its testing capabilities.

Project Moonshot is among the world’s first open-source toolkits for generative AI. It provides benchmarking, red-teaming, and recommended testing baselines for foundation models and AI applications built on top of them. It helps organisations building AI systems to more easily test and compare results, to identify weaknesses that they can fix.

Although still new, the toolkit’s usefulness has been validated with AI Verify Foundation members, including DataRobot, Resaro, and Singtel, and it is now in open beta.

We are going even further. AI Verify Foundation and MLCommons, two of the leading communities in AI safety and testing, are developing a common testing benchmark for large language models. This benchmark can be used to test their basic safety and trustworthiness, against key indicators such as hateful, toxic and violent content.

This partnership is also significant as a step towards global harmonisation of benchmarking standards. For the AI community in Singapore, Project Moonshot is another “next best” in our pursuit of good AI governance that is ambitious yet achievable.

Finally, I want to emphasise the importance of international cooperation. Though the path forward remains unsettled, we should still acknowledge commendable efforts such as the AI Safety Summit, the United Nations (UN) High-Level Advisory Board on AI, Asean’s Guide on AI Governance and Ethics, and US-China discussions on AI governance.

It bears reminding that smaller players’ voices are equally important when considering AI’s impact on our economies and societies.

Singapore is particularly attuned to their concerns, because we are ourselves a very small state and for more than 30 years have been the convenor of the Forum of Small States, a grouping of 108 UN member states.

Together with Rwanda and other partners, we are developing an AI governance playbook to help members of the forum adopt the technology to meet our specific needs and circumstances.

It is our hope that by doing so, we promote AI for the public good, not just for Singapore, but also for the world.

The writer is Singapore’s minister for communications and information. She is also the minister-in-charge of Smart Nation and cybersecurity. This is an abridged version of her opening keynote address at the Asia Tech x Artificial Intelligence Conference on May 31.

Source: Business Times © SPH Media Limited. Permission required for reproduction.

Print
1161

Latest Headlines

No content

A problem occurred while loading content.

Previous Next

Terms Of Use Privacy Statement Copyright 2024 by Singapore Academy of Law
Back To Top