
Firms in Singapore eye DeepSeek benefits, but cautious about data security risks, AI biases


Source: Straits Times
Article Date: 11 Feb 2025
Author: Osmond Chia & Sheila Chiang

DeepSeek shook the tech sector when it launched its latest R1 model on Jan 20.

Companies in Singapore are calling for caution over the use of DeepSeek, even as they eye the cost-saving promises of the Chinese generative artificial intelligence (AI) model.

Major companies such as banks, consulting agencies and cyber-security companies have set clear rules on the use of generative AI models, essentially prohibiting employees from using such tools, including DeepSeek, for work, citing the need for due diligence.

DeepSeek shook the tech sector when it launched its latest R1 model in January, saying it rivalled technology built by ChatGPT maker OpenAI in terms of capabilities – but at a fraction of the cost.

The Chinese AI start-up claimed that the R1 cost only US$5.6 million (S$7.6 million) to train, compared with the hundreds of millions US tech giants have poured into training each of their large language models (LLMs).

Following its release, R1 topped download charts and caused US tech stocks to plunge, reflecting shaken confidence in generative AI players such as OpenAI, Google and Amazon Web Services, which have dominated the market.

Now, all eyes are on China’s model, with Singapore-based AI consumer insights platform Ai Palette estimating it could save companies between 40 per cent and 60 per cent in infrastructure costs, particularly on the high-end computing chips needed to run large-scale LLMs.

Boston Consulting Group (BCG) said it has seen a surge in interest among clients about the potential use of DeepSeek AI in projects.

“The model stands out for being open-source, performing well across benchmarks and offering a significantly lower-cost alternative to competitors,” said Mr Hanno Stegmann, managing director and partner of BCG’s AI team.

Ms Tan Siew San, general manager at IBM Singapore, echoed his sentiments.

IBM’s recent study, published in December 2024, showed that close to half of more than 200 IT decision-makers here want to use more open-source AI technologies in 2025, citing faster software development and rapid innovation, among other reasons.

Specifically, the open-source approach crowdsources ideas from third-party developers, not just in-house ones, to improve the code.

But it is worth waiting for a more thorough assessment of DeepSeek’s risks before deploying the model, Mr Stegmann said.

There were similar calls for assessment of the generative AI models of American firms OpenAI, Google and Amazon Web Services when they were first launched two years ago. Key concerns flagged were the retention of data used for training the AI models and risks of corporate data leaks.

Early tests of DeepSeek show that it falls short of some responsible AI standards, providing answers to sensitive questions or censoring its answers on controversial topics, noted Mr Stegmann.

Early reports by tech enthusiasts flagged DeepSeek’s tendency to avoid answering questions on topics often censored by the Chinese government, while openly answering similar questions about other countries.

There are also concerns about the app version of DeepSeek, which retains prompts and results to train the AI model. Experts are uncertain about the extent to which the data is kept and scrutinised.

It is also unclear if DeepSeek’s developers have invested significantly in safety measures, and how exactly data is stored, said Mr Stegmann.

“We strongly recommend our clients to assess any potential risks before using DeepSeek, especially regarding data security and confidentiality.”

DeepSeek will need to be put through its paces, a process that involves repeated tests on the consistency of its answers, and assessment of how the system can be misused and its biases, such as those relating to gender or specific demographics, said Mr Stegmann.

“It is fair to say that first releases of many LLMs had some issues at the beginning that had to be ironed out based on user feedback and changes made to the model.”

Noting that South Korea and Europe have demanded information from DeepSeek, he said: “It’s worth waiting to see what additional information may emerge on the data DeepSeek was trained on.”

South Korea, Italy and Australia are among countries that have blocked access to DeepSeek on government devices, citing security concerns.

The restrictions mirror those in the early days of ChatGPT, launched in 2022, when a worldwide surge in interest prompted authorities in some countries to temporarily block access to the site. ChatGPT remains blocked to users in China.

Legal firms such as RPC are likewise choosing to err on the side of caution.

RPC tech lawyer Nicholas Lauw said the use of new generative AI tools is typically prohibited for handling client data until the technology’s safety is thoroughly tested. The same advice is given to the firm’s clients, he added.

“Our stance is precautionary, designed to maintain the trust and integrity of our client relationships, and aligns with wider regulatory guidance and best practice,” Mr Lauw said.

The firm is testing LLMs to see which can be deployed for internal use securely and effectively. AI models will undergo legal risk assessments and checks on the quality of answers, covering accuracy, adherence to the company’s ethics and the risk of exposing sensitive data, he said.

Many major organisations have made deals with AI chatbot developers like Microsoft or OpenAI to build customised tools for internal use, ensuring that corporate data is not shared.

For example, OCBC Bank and UOB host customised AI chatbots on internal servers for coding and for searching archives, among other uses.

It is understood that OCBC staff laptops block the use of external chatbots, including DeepSeek, that do not meet the bank’s security requirements.

Mr Donald MacDonald, the head of OCBC’s group data office, said: “Our generative AI applications are built using on-premise, open-source LLMs to minimise data leakage risks. We evaluate all new LLMs prior to making decisions on deploying them.”

Mr Rajesh Sreenivasan, who oversees tech law matters at Rajah and Tann, said the legal firm has invested heavily to deploy enterprise editions of Microsoft Copilot and legal assistant Harvey AI, which ensure that any data used is kept within its systems.

This is especially important for regulated firms to ensure sensitive data is not ingested by AI developers for training, he said.

Enterprise deals also come with indemnity clauses that protect corporate users from allegations of copyright infringement and other legal concerns.

Such clauses shield the tech customers from legal risks arising from the AI’s creations, pinning the responsibility instead on the tech vendors.

Most major generative AI vendors, including Microsoft, IBM, Adobe, Google and OpenAI, have indemnity offerings for clients.

“DeepSeek doesn’t have an enterprise product yet,” said Mr Rajesh.

“It might be open-source, but this alone doesn’t protect corporate users from potential legal risks.”

Despite these concerns, some companies have started to dabble with the shiny new tool.

Mr Alex Chan, chief executive of video software company Babbobox, said his employees are allowed to use any AI model, including DeepSeek, for tasks such as finding inspiration or generating code, for productivity gains.

Mr Tony Zhu, chief technology officer at conversational AI solutions provider Wiz.AI, said the Singapore-based firm sees potential in using R1 for text-based customer support engagements.

Wiz.AI has been using DeepSeek’s technology since December 2024, recognising the tool’s ability to handle tasks involving complex reasoning. The firm also uses other open-source models, like Meta’s Llama.

Mr Somsubhra GanChoudhuri, co-founder of Ai Palette, said DeepSeek’s breakthrough advancements could help more small firms in Singapore with budget constraints embrace AI.

Specifically, DeepSeek’s technique can be emulated by local tech firms, allowing more to experiment and innovate with generative AI even without immense computing resources, said Mr Kenddrick Chan, who heads the digital international relations project at foreign policy think-tank LSE Ideas.

As companies evaluate the merits of DeepSeek, the Ministry of Digital Development and Information said on Feb 7 in a reply to The Straits Times: “The Government does not generally comment on commercial products. We advise companies to evaluate products on their own merits and the risks of use, including compliance with relevant laws.”

Mr Rajesh from Rajah and Tann said the entry of DeepSeek will drive competition in the market. “We’ll see even more of a contest in the generative AI space, and that’s good for innovation.”

Source: The Straits Times © SPH Media Limited. Permission required for reproduction.
