Making Digital Spaces Safer For Everyone, Not Just Children
Malaysia is considering a social media ban for children under 16, following a series of distressing incidents involving children harming other children in schools.
Although the causes of such violence are undetermined and varied, the proposal is motivated by genuine concern about social media. However, will a ban actually protect children from the deeper, structural problems with social media?
A ban may sound decisive, yet it is far from a silver bullet. The reality of digital life is far more complex.
Experience elsewhere suggests that keeping children off social media is easier said than done. In Australia, for instance, platforms will soon be required to take “reasonable steps” to prevent those under 16 from creating accounts.
However, before the measures even take effect, there is already talk of young people bypassing age checks using Virtual Private Networks (VPNs), “old man” masks, or even with parental assistance.
Verifying users' ages, whether through facial recognition or identity document checks, raises privacy concerns for everyone, not just children.
And if children are excluded entirely, it may deepen social isolation in a world where social interaction increasingly happens online.
False sense of security
Identity verification may also disenfranchise marginalised groups such as stateless persons, who may lack the IDs required to access these sites.
Even if bans were enforced perfectly, what happens when a child turns 16? The tragic case of a 16-year-old Malaysian girl, who died by suicide after posting an Instagram poll about whether she should live or die, shows that online harm does not disappear with age.

A ban may also risk creating a false sense of security, convincing us that children are safe simply because they are kept away, while the platforms themselves remain unsafe by design.
The underlying problem is that platforms’ business models thrive on engagement and attention, even when that means amplifying harmful or addictive content.
Instead of keeping children out, perhaps the better question is: can we make platforms safe by design?
Look at Brazil
Brazil appears to be attempting this with its new ECA Digital law (the “Estatuto da Criança e do Adolescente”, or Child and Adolescent Statute), which applies to digital products and services “targeted at or likely to be accessed by minors”.
The law, enacted in September, requires accounts of users under 16 to be linked to a parent or guardian, and mandates that platforms build in parental supervision tools, such as the ability to set time limits and restrict purchases.
Both Brazilian and European Union regulations prohibit platforms from profiling minors to serve targeted advertisements.
In the EU, children’s accounts are private by default and cannot be publicly recommended. This responds to past abuses where predators exploited “recommended friend” algorithms to find children.

Alongside its proposed age restrictions, Australia has plans to introduce a Digital Duty of Care, requiring platforms to proactively prevent harm rather than simply react after it occurs.
These laws are still new and their efficacy will depend heavily on accompanying regulations and enforcement, but they are similar in that they attempt to regulate “upstream” features relating to platform design.
What should M’sia do?
In Malaysia, however, conversations still mostly centre on downstream measures: ordering takedowns, prosecuting harmful posts, or now, proposing bans.
These steps focus on control after harm has occurred, or on keeping children away, without fixing the structural problems that allow harm to persist.
Large technology companies, especially social media platforms, have largely escaped legal oversight in Malaysia and much of Southeast Asia, despite their role in facilitating well-documented harms. This is due to several reasons:
- Social media platforms are perceived as too large, complex, and essential to regulate.
- Platforms switch roles as it suits them: publishers when moderating content, “innocent carriers” when trying to avoid accountability.
- Harms are not confined to social media platforms. They appear on gaming platforms, live-streaming sites, and increasingly, in AI chatbots.
Most of the harms, however, are not new. False advertising, impersonation, gambling, fraud, and misinformation have existed long before Facebook or TikTok.
Miracle cures, for example, have existed in many forms, from 19th-century “snake oil” remedies to today’s AI hallucinations offering harmful medical advice.
Regulatory frameworks have been built over the years to protect society from such harms.
In Malaysia, this includes consumer protection laws, financial regulation, accreditation of professionals such as doctors, intellectual property protection via agencies like MyIPO, and the Penal Code for threats and incitement of violence.
A sharper regulatory path
Regulating giant tech platforms as a whole is daunting. However, what if Malaysia reviewed its existing consumer protection, advertising standards, and child protection laws, and updated them to address contemporary harms?
For example, advertisements targeting children under 12 could be banned across all media, including streaming, gaming, and social media platforms.
This would be akin to California laws disallowing children’s meal toys linked to unhealthy food. In any event, many social media platforms already require users to be at least 13.

If social media platforms cannot guarantee that such advertisements won’t reach children, their services could be classified as 18 and above by default.
To lift that rating, they would need to show concrete measures to prevent child-targeted advertising, with penalties for non-compliance.
Updating the regulatory framework may be challenging, but it’s certainly not unprecedented. Malaysia successfully reformed its laws to prepare for the internet era and again for the digital transformation era.
There’s no reason it can’t do the same, especially when the safety and well-being of children are involved.
Moving upstream
Protecting children will require more than reactive laws. It will require shifting the focus upstream towards accountability, transparency, and safer platform design.
Yes, children can encounter real harms online, but it’s important that any regulation introduced genuinely makes their digital spaces safer.
Well-intentioned measures can sometimes have unintended effects. For instance, broad bans that are difficult to enforce may do little to reduce risks, while leaving platforms themselves unsafe.
Rather than focusing solely on limiting children’s access, it may be more effective to create a digital environment that is safer for everyone.
This could include stronger standards for data use, advertising, and algorithmic design; greater transparency from platforms; and enforcement mechanisms that deliver meaningful protection. - Mkini
DING JO-ANN is with a global non-profit working on the impact of technology on society.
KHAIRIL YUSOF is a coordinator at Sinar Project, a civic tech organisation promoting transparency and open data in Southeast Asia.
The views expressed here are those of the author/contributor and do not necessarily represent the views of MMKtT.