AI and Ethics Can Help Stop Online Harassment

Letter to the Editor
FROM school halls to digital spaces, moral and civic education teaches us to be helpful, considerate, and kind members of society.
Yet, despite these teachings, various forms of harm continue to plague both physical and online worlds. One such issue is online harassment—also commonly referred to as cyberbullying.
Online harassment has become a distressingly common experience for many internet users. It involves acts of aggression, intimidation, or abuse carried out across digital platforms.
According to researchers like Leduc and colleagues in Computers in Human Behavior, it can take many forms—disinformation, name-calling, threats, sexual harassment, and public humiliation.
This digital abuse can affect people from all walks of life, although certain demographic factors such as ethnicity, age, and gender may influence how likely someone is to experience it.
Pew Research Center reports by Monica Anderson in 2018, with more recent updates by Sara Atske in 2024, highlight how widespread and persistent the issue is, particularly among teens.
Similarly, a Malaysian-based study published in BMJ Open by Samsudin and colleagues in 2023 found that young adults experiencing cyberbullying often also report psychological distress and strained family dynamics.
In Malaysia, researchers Kee, Anwar, and Vranjes pointed out in 2024 that online harassment is a risk factor for suicidal thoughts among youth.
Often, the abuse stems from prejudice—negative stereotypes based on religion, ethnicity, gender, or even personal interests can quickly snowball into digital attacks.
Victims may receive a barrage of cruel messages, mockery, or hate comments targeting their identity.
Cultural norms can also fuel the problem. When mocking or humiliating others is treated as entertainment, especially in online communities, abusers feel emboldened.
The anonymity of the internet offers a protective mask that emboldens people to say what they would never say face-to-face.
Combined with the misuse of free speech, this creates a digital culture that tolerates—even encourages—harmful behaviour.
The effects of online harassment are not limited to bruised egos. Victims often face serious mental health challenges.
A 2014 review by researcher Charisse Nixon shows how victims may suffer from depression, anxiety, disrupted sleep, appetite loss, and even suicidal ideation.
These psychological effects can lead to social withdrawal, strained relationships, and a deep sense of helplessness. Embarrassment, fear, and self-blame are common emotional responses.
Many victims, especially teens and young adults, avoid telling friends or family about their experiences, which only amplifies their isolation.
A landmark case in Canada, R. v. Elliott in 2016, highlighted the legal implications of online abuse.
The case was connected to Rehtaeh Parsons, a 17-year-old girl who took her life after a photo of her sexual assault was widely shared online, followed by relentless digital harassment.
Although initial investigations failed to yield justice, public outcry prompted a renewed effort that led to charges under Canada's Protecting Canadians from Online Crime Act (Bill C-13).
This tragic case led to legislative reform. Nova Scotia passed “Rehtaeh’s Law,” the first of its kind in Canada, which broadened the legal definition of cyberbullying and provided new tools for law enforcement to act.
Writing in Crime, Media, Culture, researcher Alexa Dodge in 2023 emphasised how the case shifted public perception of cyberbullying—from a social issue to a serious crime requiring legal intervention.
Can ethics and AI offer solutions? As technology evolves, so do our opportunities to address online harassment in smarter ways. Media ethics plays a key role here.
Researchers like Milosevic and colleagues in 2022, writing in the International Journal of Bullying Prevention, argue that media platforms must uphold ethical standards that prioritise harm reduction.
This includes creating clear content guidelines, efficient reporting mechanisms, and psychological support systems for those affected.
Media outlets should portray victims with dignity and avoid sensationalising abuse, while ensuring perpetrators are held accountable.
Technology, particularly artificial intelligence, could also help stem the tide. AI-powered moderation tools, if designed ethically, can assist in identifying abusive content and preventing its spread. But these systems must prioritise fairness, transparency, and accountability.
Many current algorithms are geared toward boosting engagement—even if that means promoting provocative or harmful content. Instead, platforms need to redesign algorithms to avoid amplifying negativity.
As highlighted by Zubiaga in the International Review of Information Ethics in 2021, tech companies must also be transparent about how moderation decisions are made and offer clear ways for users to report abuse.
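To make the principles above concrete, the sketch below shows what a transparent, accountable moderation check might look like in code. It is purely illustrative: the keyword list, threshold, and function names are assumptions for the example, not any platform's actual system, and real moderation relies on trained classifiers rather than word lists. The key design point is that every decision carries a stated reason, and borderline cases are routed to a human reviewer instead of being removed automatically.

```python
# Illustrative sketch of an ethically minded moderation check.
# The lexicon and threshold are placeholder assumptions, not a real system.
from dataclasses import dataclass

ABUSIVE_TERMS = {"idiot", "loser", "kill yourself"}  # placeholder lexicon

@dataclass
class Decision:
    flagged: bool
    reason: str            # transparency: record why the decision was made
    needs_human_review: bool

def moderate(message: str, threshold: int = 1) -> Decision:
    """Flag a message and explain the decision; escalate borderline cases."""
    text = message.lower()
    hits = [t for t in ABUSIVE_TERMS if t in text]
    if not hits:
        return Decision(False, "no abusive terms matched", False)
    # Accountability stays with people: only clear-cut cases skip human review.
    return Decision(True, "matched: " + ", ".join(sorted(hits)),
                    len(hits) <= threshold)

print(moderate("You are such a loser"))
```

Keeping the reason string alongside every verdict is what lets a platform show users why content was flagged and lets them appeal, which is the transparency Zubiaga calls for.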
Ultimately, it’s not just up to lawmakers, media companies, or AI developers. All internet users share the responsibility to create a culture of empathy, respect, and mutual accountability.
By standing against online harassment, speaking up for victims, and supporting efforts for ethical technology, we can help make digital spaces safer for everyone.
The authors are from the Department of Science and Technology Studies, Faculty of Science, Universiti Malaya
The views expressed are solely those of the authors and do not necessarily reflect those of MMKtT.
- Focus Malaysia.