Cyberflashing, Anonymity and EU’s increased focus on online child safety
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
Regulatory and judicial changes around the world may be hinting at a changing landscape in social media, with governments becoming more assertive. In Australia, a court is forcing Twitter to reveal the identity of an anonymous user, possibly making it harder for users to keep their identities secret. That might be fine for the defamation case in Australia, but it could create problems for vulnerable groups in the future. Meanwhile, in the U.S. a state lawmaker is following in the U.K.'s footsteps to make cyberflashing a criminal offense, and Bumble is working closely with her on the legislation. Finally, the EU is making a push to require platforms to deploy technological solutions to the problem of child sexual abuse imagery. As always, there are lots more miscellaneous bits that might strike your fancy, so scroll down and have a look.
For this month’s Expert’s Corner, we get to hear from two senior members of the venerable NewsGuard internet trust tool. Veena McCoole and Sarah Brandt both manage and facilitate partnerships at NewsGuard, an organization that provides online safety for brands, individual readers and indeed even democracies, eschewing algorithms in favor of the human expertise of trained journalists. NewsGuard publishes reports on misinformation generally as well as detailed reviews of individual news sites evaluating them according to various criteria with an emphasis on transparency of the organization and the credibility of the information they publish. This text has been edited for space. You can read the full interview on our Medium publication.
1. What was the motivation behind NewsGuard? Researchers often point to fact checks as the best way to deal with misinformation, but are they truly effective?
NewsGuard was founded in 2018 to combat misinformation and restore trust in the media, with the belief that journalism — instead of algorithms — can effectively ensure online safety for users, brands, and democracies. Fact checks address misinformation by setting the record straight after a falsehood is spread, but this approach has its limitations. Recognizing the shortcomings of debunking, NewsGuard designed its approach to be more proactive — pre-bunking a falsehood by providing people with instant information about the lack of reliability of the source that published it.
2. No one has been spared from the Information Wars, which came as an added “bonus” of the Russian Invasion of Ukraine. How should people navigate through such information, and how can they distinguish actual facts vs false news?
Navigating the landscape of misinformation, particularly related to Russia’s war in Ukraine, is made increasingly complex by the rapid cadence of new false claims appearing each day. By using tools like NewsGuard’s browser extension, internet users can see source context information on search results and on social media to aid them as they consider which sources might be trustworthy and which ones fail to adhere to basic journalistic standards. By consuming news from authoritative sources with high NewsGuard scores, users can help ensure that their news diet contains transparent, ethical journalism — rather than harmful misinformation and false claims.
3. Currently there are no particular regulations addressing misinformation. Do you think attaching fines to the dissemination of misinformation might have an impact on limiting its spread?
At NewsGuard, we are proponents of user empowerment and believe that the greatest solution to combating misinformation is putting tools, context, and information in the hands of users, enabling them to make independent, informed decisions. When platforms block or censor content it stifles freedom of expression, but equally, they must be held accountable and implement effective trust and safety measures that empower and protect users who interact with and contribute to their platforms.
Checkstep News
📣 Checkstep is accepted as one of the “French Tech” partners in La French Tech London’s mentoring program.
The Information War
💬 Russians Get Conflicting News on the Ukraine War as Telegram Turns Into an Information Battlefield (The Wall Street Journal)
Russians will see very different views of the Ukraine invasion depending on where they're getting their news. State-run media is predictably in line with Kremlin messaging, but Telegram, which remains accessible and uncensored in Russia, has been revealing a very different story to those who access it.
🌐 Russia Is Taking Over Ukraine's Internet (Wired)
This is not too surprising, but Russia will be routing all internet traffic in the Ukrainian territories it controls through its own networks, which are heavily filtered and surveilled. Multiple Ukrainian ISPs have been forced to reroute their connections through Russian infrastructure, causing delays and outages and subjecting their users to Russian state censorship.
🐌 Microsoft launches effort to slow Russian propaganda on war, vaccines (The Washington Post)
Microsoft joins a growing list of companies helping to counter the spread of Russian propaganda. The company developed a new Russian Propaganda Index to measure user traffic to “Russian state-controlled and -sponsored news outlets and amplifiers” as a proportion of traffic to all news sites.
Moderating the Marketplace of Ideas
📱 How Harmful Is Social Media? (The New Yorker)
A couple of researchers wanting to know more about social media's effect on our current extremely polarized and belligerent public sphere started a Google Doc to collect various studies that have considered the issue. The living document that exists “somewhere between scholarship and public writing” now has about two dozen contributors and many pages of comments from the public. The very short answer to a difficult and complicated question is that the effect is probably not as big as everyone thinks, but nothing is unambiguously settled. The work is ongoing.
🗳️ TikTok found to fuel disinformation, political tension in Kenya ahead of elections (TechCrunch)
New research by the Mozilla Foundation found that TikTok has been contributing to misinformation and political tension around the upcoming election in Kenya. The lead researcher attributes the platform’s inaction to moderators’ lack of familiarity with the political context in the country. A TikTok whistleblower confirmed that assessment.
🫥 Court order to expose @PRguy17 threatens the right to be anonymous online (The Sydney Morning Herald)
As a result of a defamation hearing in Australia, a federal court is requiring Twitter to reveal the identity of an anonymous poster. Vulnerable groups, human rights defenders, political organizers, lawyers, and whistleblowers often rely on anonymity to participate in public discussions. The ruling could have significant implications for anonymity and identity online.
🛡️ White House launches task force to curb online abuse and harassment (The Washington Post)
Fulfilling one of President Biden’s campaign pledges, the White House is convening a group of experts to study online sexual harassment, stalking and nonconsensual pornography, as well as the connection between such abuse and mass shootings and violence against women. The new task force will look at whether existing federal laws address the ways technology contributes to gender-based violence, and it will provide recommendations to address shortcomings.
🤷 Big Tech's political ad bans are a big charade (Protocol)
Although Big Tech did issue statements about banning political ads, things are not quite as they seem. For instance, while political advertisements are not allowed on TikTok, influencers making political statements still are; political groups even pay to hire such influencers.
In other news, Meta has also deprioritized election safeguarding as a focus for the platform.
🗣️ YouTube Videos Are Targeting Muslims, Women in India, Study Says (Bloomberg)
Videos propagating wild claims, from the idea that Muslims spread COVID in India as a form of jihad to outright threats against the Muslim community, are running rampant on YouTube, according to a recent NYU Stern report.
🕴️Spotify forms council to deal with harmful content (Reuters)
Spotify recently set up a Safety Advisory Council to advise the platform on issues such as hate speech, disinformation, extremism and online abuse. The platform certainly seems to be doing all it can to move past the backlash it received for hosting “The Joe Rogan Experience”.
🎤 Online Voice Chat Is Often A Sexist Nightmare (But It Doesn't Have To Be) (Kotaku)
Abuse against women and other non-male users seems to be on the rise in Discord's voice chats, so much so that many are resorting to voice-changing software. Surely, social gaming shouldn't be made this difficult?
Regulatory News and Updates
🇸🇬 Social media platforms to remove harmful content, add safeguards under S'pore's proposed rules (The Straits Times)
Two new codes of practice are set to be added under Singapore's Broadcasting Act: the Code of Practice for Online Safety and the Content Code for Social Media Services. These would legally require platforms like TikTok and Facebook to take appropriate action against harmful online content like terrorist propaganda and also ensure additional safeguards for minors.
Apparently prompted by the suicide of a reality TV star following online abuse, Japan will make online insults illegal. Insults are defined as publicly demeaning someone's social standing without referring to specific facts or actions. Those convicted could face up to a year in prison and a fine of up to about $2,200.
🇮🇳 Explained: What are the draft amendments to IT Rules, 2021? (The Indian Express)
The introduction of government-appointed appeals committees and a 72-hour redressal period for grievance officers are just some of the amendments proposed to the recently enacted Information Technology Rules, 2021.
🇺🇸 Bumble and Lawmakers Are Fighting 'Cyberflashing' (The New York Times)
Bumble’s new Head of Public Policy, Americas, is working alongside Wisconsin State Senator Melissa Agard to introduce an anti-cyberflashing bill. This would make sending unwanted sexual images punishable by law. Across the Atlantic, the U.K. has already declared “cyberflashing” a criminal offence.
🇪🇺 European Security Officials Double Down on Automated Moderation and Client-Side Scanning (Lawfare Blog)
The European Commission introduced a new set of policies for enhanced online child safety. The newly proposed regulation urges platforms to rely on automated solutions to detect and remove possible child sexual abuse material (CSAM) online.
This article by The Conversation gives you the exact lowdown on how the Union expects platforms to go about it.
📝 🇪🇺 EU Strengthens Disinformation Rules to Target Deepfakes, Bots, Fake Accounts (CNET)
While the Code of Practice on Disinformation is entirely voluntary, the European Union recently urged smaller platforms to participate along with Big Tech, bringing the total number of signatories to 33. Beyond the larger roster, the strengthened Code is also more stringent, introducing fines for failure to comply.
🏛️ Social media platforms take a back seat at Jan. 6 hearings — for now (The Washington Post)
Throughout Trump’s presidency, social media was often in the hot seat. In the Jan. 6 hearings, that focus has shifted from the role of social media in spreading disinformation and dangerous material to how former president Trump allegedly instigated insurrectionist acts. But what of the digital megaphones used to spread the message? Surely there needs to be some accountability, right? Guess the committee doesn't think so.