Close

HEADLINES

Headlines published in the last 30 days are listed on SLW.

Beware of rogue chatbots that introduce security risks; firms should test AI systems frequently

Beware of rogue chatbots that introduce security risks; firms should test AI systems frequently

Source: Straits Times
Article Date: 16 Oct 2024
Author: Osmond Chia

New guidelines by Cyber Security Agency advise firms to test AI systems frequently.

Rogue chatbots that spew lies or racial slurs may be just the beginning, as maliciously coded free chatbot models blindly used by businesses could unintentionally expose sensitive data or result in a security breach.

In new guidelines published on Oct 15, Singapore’s Cyber Security Agency (CSA) pointed out these dangers amid the artificial intelligence (AI) gold rush, and urged businesses to test rigorously and regularly what they plan to install.

This is especially crucial for firms that deploy chatbots used by the public, or those linked to confidential customer data.

Frequent system tests can help weed out threats like prompt injection attacks, where text is crafted to manipulate a chatbot into revealing sensitive information from linked systems, according to the newly published Guidelines on Securing AI Systems.

The guidelines aim to help businesses identify and mitigate the risks of AI to deploy them securely. The more AI systems are linked to business operations, the more they should be secured.

Announcing the guidelines at the annual Singapore International Cyber Week at the Sands Expo and Convention Centre on Oct 15, Senior Minister and Coordinating Minister for National Security Teo Chee Hean said the manual gives organisations an opportunity to prepare for AI-related cyber-security risks while the technology continues to develop.

Mr Teo said in his opening address that managing the risks that come with emerging technology like AI is an important step to build trust in the digital domain. He urged the audience to learn lessons from the rapid rise of the internet.

“When the internet first emerged, there was a belief that the ready access to information would lead to a flowering of ideas and the flourishing of debate. But the internet is no longer seen as an unmitigated good,” he said, adding that there is widespread recognition that it has become a source of disinformation, division and danger.

“Countries now recognise the need to go beyond protecting digital systems to also protecting their own societies,” he said.

“We should not repeat these mistakes with new technologies that are now emerging.”

The ninth edition of the conference is being held between Oct 14 and 17 and features keynotes and discussion panels by policymakers, tech professionals and experts.

AI owners are expected to oversee the security of AI systems from development and deployment to disposal, according to CSA’s guidelines, which do not address the misuse of AI in cyber attacks or disinformation.

In a statement released on Oct 15, CSA said: “While AI offers significant benefits for the economy and society... AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system.”

Those using AI systems should consider more frequent risk assessments than with conventional systems to ensure tighter auditing of machine learning systems.

Assess each aspect of the AI system’s supply chain, including its training data and AI models, as any of these components could carry malware or vulnerabilities that attackers can exploit, CSA wrote.

Improper disposal of used AI models can also lead to data breaches as they are trained on large amounts of data, CSA said.

It urged companies that deploy AI systems to implement feedback channels for users to flag concerns and to have contingency measures for AI-related incidents, which can range from minor issues like a malfunctioning chatbot to major disruptions in the operations of critical infrastructure.

The guide, along with its companion manual, were compiled following a public consultation between July and September, which received submissions from tech firms and other organisations, including Amazon Web Services, Microsoft and Ensign InfoSecurity.

CSA published another guide – Safe App Standard 2.0 – which lays expectations for mobile app developers to secure transactions and user data. App developers are urged to stress-test codes taken from open-source networks, which, if not properly reviewed, can introduce cyber risks into the app.

Source: Straits Times © SPH Media Limited. Permission required for reproduction.

Print
1171

Latest Headlines

No content

A problem occurred while loading content.

Previous Next

Terms Of Use Privacy Statement Copyright 2024 by Singapore Academy of Law
Back To Top