AI and crime: Malaysia on the edge

THE age of artificial intelligence (AI) has arrived, and not just in innovation but in crime.
A recent report by the Centre for Emerging Technology and Security (CETaS) in the United Kingdom, titled “AI and Serious Online Crime”, lays out how AI is already transforming criminal activity, enabling fraud, impersonation, and psychological manipulation at unprecedented scale.
While the report focuses primarily on the UK and international threats, the Malaysian experience echoes its warnings. With rising cases of deepfake scams, voice cloning, and AI-powered fraud schemes, it is clear that Malaysia is not merely catching up with global trends; it is already living them.
AI crime: Not the future, but the present
The CETaS report outlines a dangerous new landscape where generative AI is used to deceive, defraud, and manipulate.
Criminals are exploiting tools like deepfake video and voice, chatbots, and AI image generators to create more believable scams. These tactics allow them to reach more victims, disguise their tracks, and operate across borders.
In Malaysia, these risks are no longer hypothetical. The first quarter of 2025 alone saw over 12,000 online scam cases, with losses nearing RM574 mil, according to the National Scam Response Centre. Many of these scams were powered, directly or indirectly, by AI.
Real cases, real losses
Consider the deepfake video scams now circulating on Malaysian social media platforms. In one recent case, a video featuring a well-known local entrepreneur was manipulated using AI to promote a fake investment platform.
With a convincing voice and gestures, the video looked authentic, leading many to fall for it without question. Some lost thousands.
AI-generated voice impersonation is another rising threat. A woman in Penang was conned out of RM4,800 after receiving a phone call from someone who sounded exactly like her brother. The voice was cloned using short voice samples and deployed in a moment of emotional urgency.
In a more extreme case, a 72-year-old retiree in Selangor lost RM5 mil in a fake investment scheme run via an AI chatbot that communicated with him daily through a fraudulent app. What felt like a legitimate customer service experience turned out to be an algorithmic con job.
These examples are not outliers; they represent a growing trend in which AI makes scams not only more convincing but also harder to detect and easier to scale.
The enforcement challenge
One of the CETaS report’s core findings is that enforcement agencies globally are struggling to keep up with AI-enabled crime. Malaysia is no exception.
The Malaysian Communications and Multimedia Commission (MCMC) has reported over 400 takedown requests for AI-generated scam content, including deepfake videos and impersonations.
However, platform compliance is uneven. Some platforms act swiftly; others ignore or delay removal, leaving harmful content online for days or even weeks.
Moreover, current laws in Malaysia do not specifically address AI misuse. While general cybercrime and fraud laws apply, there is a legal grey area when it comes to offences like synthetic voice fraud or deepfake identity theft. This gap leaves victims exposed and enforcement agencies under-equipped.
Lessons from CETaS: What Malaysia must do
The CETaS report makes several recommendations that Malaysia would do well to adopt.
First, establish a national AI crime taskforce. This specialised unit should focus on AI-driven fraud, bringing together cybersecurity experts, forensic analysts, and financial crime investigators.
Similar to how the NSRC coordinates scam responses, this taskforce should be forward-looking and tech-enabled.
Second, update the law. Malaysia needs legislation that clearly defines and criminalises deepfake impersonation, AI-assisted fraud, and misuse of AI for deception. Without legal clarity, prosecution becomes difficult and deterrence is weakened.
Third, hold digital platforms accountable. Social media companies and content hosts must be legally required to remove harmful AI-generated content quickly, ideally within 24 to 48 hours of verified notice. Transparency reports on content moderation should be mandatory.
Fourth, launch widespread public awareness campaigns. Many Malaysians, especially the elderly and less digitally savvy, remain unaware of how realistic AI fakes have become.
National campaigns should educate the public on spotting signs of synthetic content and avoiding scams.
Lastly, invest in AI detection and forensics. Law enforcement must be equipped with tools to detect deepfakes, trace their origins, and collect admissible evidence. Collaboration with local universities and AI labs can support this capability.
Time to act
AI is not inherently dangerous, but in the wrong hands it is already proving to be a powerful tool for deception and exploitation. The CETaS report warns that, left unregulated, AI will supercharge fraud and overwhelm criminal justice systems.
Malaysia still has time to mount an effective response, but the window is closing fast. If we fail to act, we risk normalising a future where anyone’s face, voice, or identity can be hijacked by an algorithm, and where the line between reality and fiction is increasingly blurred.
It is not just a matter of catching criminals; it is about safeguarding trust in our digital future.
R.Paneir Selvam is the principal consultant of Arunachala Research & Consultancy Sdn Bhd, a think tank specialising in strategic national and geopolitical matters.
The views expressed are solely of the author and do not necessarily reflect those of MMKtT.
- Focus Malaysia.