In Asia, Citizen Groups Fill Fact-Checking Gap Left by Big Tech
Tired of the constant stream of fake news in her family's WhatsApp group chats in India — from the water crisis in South Africa to rumors of a Bollywood actor's death — Tarunima Prabhakar has created a simple tool to stop misinformation.
Ms. Prabhakar, co-founder of India's Tattle, archives content from fact-checking websites and news outlets and uses machine learning to automate parts of the fact-checking process.
According to her, the web-based tool is available to students, researchers, journalists and academics.
"Platforms like Facebook and Twitter are checked for misinformation, but not WhatsApp," she said of the Meta-owned messaging app, which has more than 2 billion monthly active users, about half a billion of them in India.
"The tools and techniques used to check misinformation on Facebook and Twitter are not applicable to WhatsApp and do not work well with Indian languages," she told the Thomson Reuters Foundation.
In 2018, WhatsApp limited the forwarding of messages after rumors circulating on the messaging service led to several deaths in India, and removed the quick-forward button next to media messages.
The tool is one of a growing number of initiatives in Asia to combat online misinformation, hate speech and abuse in local languages, using technologies such as artificial intelligence alongside crowdsourcing, on-the-ground training and engagement with civil society groups to meet community needs.
While tech companies like Facebook, Twitter and YouTube face increased scrutiny over hate speech and misinformation, experts say they invest too little in developing countries and lack moderators with local language skills and knowledge of local events.
“Social media companies don't listen to local communities, nor do they take into account the context – cultural, social, historical, economic and political – when moderating user-generated content,” said Pierre-François Duquesre, head of media freedom at the human rights group Article 19.
"That can have significant consequences online and offline, increasing polarization and the risk of violence," he added.
Local initiatives are crucial
While the impact of online hate speech has been documented in several Asian countries in recent years, analysts say tech companies have yet to commit the resources needed to improve content moderation, especially in local languages.
In 2018, UN human rights investigators said the use of Facebook had played a major role in spreading hate speech that fueled violence against Rohingya Muslims in Myanmar, following a military crackdown on the minority in 2017.
At the time, Facebook said it was fighting misinformation and investing in Burmese-language moderation and technology.
Significant online hate speech in Indonesia targets religious and ethnic minorities, as well as LGBT people, while paid bots and trolls spread misinformation aimed at deepening divisions.
“Social media companies ... must work closely with local initiatives to address the daunting challenges of managing problematic content online,” said researcher Shirley Haristia, who contributed to the Article 19 content moderation report in Indonesia.
One such homegrown initiative is Mafindo, an Indonesian Google-backed nonprofit that trains citizens—from students to housewives—to fact-check and spot misinformation.
Mafindo, or the Indonesian Anti-Defamation Association, provides training in reverse image search, video metadata and geolocation to help people verify information.
The nonprofit has a team of professional fact-checkers who, with the help of community volunteers, have debunked at least 8,550 hoaxes.
Mafindo also created an Indonesian fact-checking chatbot called Kalimasada, which was introduced shortly before the 2019 elections. It is accessible through WhatsApp and has around 37,000 users, a small fraction of the more than 80 million WhatsApp users in the country.
“Parents are particularly vulnerable to hoaxes, misinformation and fake news on the platform due to their limited technical skills and mobility,” said Santi Indra Astuti, Mafindo's president.
"We teach them how to use social media, how to protect personal data, and how to think critically about current topics: during COVID-19 there was misinformation about vaccines, and in 2019 about the elections and political candidates," she said.
The challenge of detecting abuse
Across Asia, governments are tightening regulations on social media platforms, banning certain types of content and requiring swift removal of posts deemed objectionable.
However, hate speech and abuse, especially in local languages, often go unnoticed, says Ms. Prabhakar of Tattle, which has also built a tool called Uli to detect gendered and sexual insults online in English, Tamil and Hindi.
Tattle's team compiled a list of offensive words and phrases commonly used online, which the tool uses to filter posts from users' timelines. Users can also add words of their own.
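As a rough illustration of the approach the article describes, a crowd-extensible wordlist filter can redact flagged terms from a timeline before display. This is a minimal sketch, not Uli's actual code: the function names and the (sanitized, placeholder) wordlist are invented for illustration.

```python
import re

# Placeholder stand-ins for a real crowd-sourced slur list.
DEFAULT_WORDLIST = {"bigot", "troll"}

def build_filter(wordlist):
    """Return a function that redacts any listed word, case-insensitively."""
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in wordlist) + r")\b",
        re.IGNORECASE,
    )
    def redact(post):
        # Replace each flagged word with asterisks of equal length.
        return pattern.sub(lambda m: "*" * len(m.group()), post)
    return redact

# Users can extend the default list with their own terms,
# mirroring the bottom-up, user-level moderation described above.
user_words = DEFAULT_WORDLIST | {"blockhead"}
redact = build_filter(user_words)
print(redact("What a Troll and a blockhead."))
```

Because the filtering runs on the reader's side, each user's list only affects that user's own feed.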
“Abuse detection is very challenging,” said Ms. Prabhakar, explaining that Uli uses machine learning and pattern recognition to detect and hide problematic posts in users' feeds.
"The moderation happens at the user level, so it's a bottom-up approach, as opposed to a top-down approach," she said, adding that the team wants Uli to also detect offensive memes, images and videos.
Empathly, a tool developed by two university students in Singapore, takes a more proactive approach, working like a spell checker that flags offensive words as they are typed.
Designed for businesses, the tool can detect abusive terms in English, Hokkien, Cantonese, Malay and Singlish, Singapore's colloquial English.
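In contrast to after-the-fact filtering, a spell-checker-style tool intervenes before a message is sent. The sketch below illustrates that pre-send pattern only; the per-language wordlists and function name are hypothetical and bear no relation to Empathly's real implementation.

```python
# Hypothetical pre-send check: scan a draft against per-language
# wordlists and warn the author, rather than redacting published posts.
FLAGGED = {
    "english": {"idiot"},
    "malay": {"bodoh"},  # "stupid"; illustrative placeholder only
}

def check_draft(text):
    """Return (word, position) pairs for flagged words in a draft."""
    hits = []
    for i, token in enumerate(text.lower().split()):
        word = token.strip(".,!?")
        for lang, words in FLAGGED.items():
            if word in words:
                hits.append((word, i))
    return hits

draft = "Don't be an idiot, bodoh!"
for word, pos in check_draft(draft):
    print(f"Consider rephrasing: '{word}' (word #{pos + 1})")
```

Surfacing a warning at composition time gives the author a chance to rephrase, which is the "proactive" distinction the article draws.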
“We've seen the damage hate speech can do. But big tech companies tend to focus on the English language and their users in the English-speaking market,” said Empathly founder and CEO Timothy Liao.
“So there is scope for local involvement – and as local people we have a great deal of understanding of culture and context.”
A Thomson Reuters Foundation report.