Ukraine invasion, safety in the metaverse and the Telegram rebellion
The Checkstep Round-Up is a monthly newsletter that brings you fresh insights into content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
As we write this, Russia has invaded Ukraine but has been rebuffed in its attempts to take Kyiv. Leading up to the invasion, Russia launched an intense disinformation campaign to build a case that would justify its actions. Fortunately, in this instance it wasn’t successful, and the world is almost universally opposed to Putin’s plans. Last night Meta banned Russian state media from running ads or using the platform to raise money. At the same time, though, Facebook is struggling to keep up with the volume of misinformation put out by Russia. In time, we will learn more about Russia’s abuse of digital platforms and media. For now, we’ll continue to fight against online lies and toxicity from all of those attempting to disrupt peaceful lives within individual countries and across the globe.
Our featured expert this month is someone who has been working hard to keep people safe online. We’ve also got updates on regulations and the usual round-up of stories.
Expert’s Corner with Manpreet Singh
Manpreet Singh is a Trust and Safety professional working in London, UK. She has had various Trust and Safety roles including working as a policy researcher at a leading children’s charity. Her interests include safety by design, data protection, diverse and inclusive content moderation, and reducing online harms. The following is excerpted from the full interview.
*The answers below are not intended to represent any current or past institutional affiliations.
1. As many childhood activities had to shift online, kids have been exposed to and encouraged to participate in dangerous activities like the Milk Crate Challenge. How can platforms better prepare themselves for such unanticipated harms?
I do believe that online platforms can adopt safety-by-design approaches to help protect their users. A few underlying factors make online challenges especially risky for children: children may not be aware of the risks and consequences of participating, the barriers/friction to participating are low, and, importantly, engagement-based ranking creates incentives to participate that are unique to online challenges.
Recently, TikTok announced an expansion of its efforts to combat dangerous challenges, introducing measures such as a dedicated policy category within its Community Guidelines, a 4-step campaign encouraging users to “stop, think, decide, act”, and a reporting menu so users can flag problematic challenges. This is a good example of how platforms can help address the problem of children not understanding the consequences of participating in a viral challenge.
2. How do we create online spaces or content moderation policies with diverse users in mind? For example, children may require certain extra protections online, as may LGBTQ users, non-native English speakers, or those with accessibility needs.
This is an extremely important question and one that I am really passionate about. I think the digital world can be brilliant at providing information, community, or a safe space for different groups of people. It can be difficult to factor in, or even be aware of, the concerns of various groups when those creating content moderation policies are themselves homogeneous. This is why it’s crucial that T&S professionals strive to learn about, or be representative of, diverse groups, so content moderation can be inclusive of many different ideas of safety.
There is no “one size fits all” solution when it comes to trust and safety, so platforms will need to devote more time and resources to this area than they may have in the past. But there are general measures that will help keep many users safe, for example, more (data) privacy, more options for users to tailor their experiences, and safety tools and resources offered in many languages.
3. As moderation efforts depend more on AI, is there a risk of negative unintended consequences?
There is absolutely a risk of negative unintended consequences, and I would be nervous to see moderation efforts trend towards prioritising AI-based solutions over a mix of AI and human moderation. AI is a tool, and it can very easily emphasise existing biases we may have. I’m also hesitant about turning to AI for questions about misinformation, especially political misinformation. It’s not clear who users should be looking to as the authority on what information is accurate, especially when many turn to online platforms to protest and find solidarity against systemic injustices. This can definitely have a concrete impact on free expression.
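To make the AI-plus-human mix Manpreet describes a bit more concrete, here is a minimal, hypothetical sketch in Python of confidence-based triage, where only a model’s most confident predictions are acted on automatically and everything in the uncertain middle is routed to a human moderator. The function names and thresholds are illustrative assumptions on our part, not anything from the interview or any real moderation system.

```python
# Minimal sketch of human-in-the-loop moderation triage.
# All names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str   # "remove", "keep", or "human_review"
    reason: str


def triage(toxicity_score: float,
           auto_remove_threshold: float = 0.95,
           auto_keep_threshold: float = 0.20) -> Decision:
    """Route a model's toxicity score to an action.

    Only very confident predictions are handled automatically;
    anything in the uncertain middle band goes to a human moderator.
    """
    if toxicity_score >= auto_remove_threshold:
        return Decision("remove", "high-confidence model prediction")
    if toxicity_score <= auto_keep_threshold:
        return Decision("keep", "high-confidence model prediction")
    return Decision("human_review", "model uncertain; needs human judgment")


if __name__ == "__main__":
    for score in (0.98, 0.55, 0.05):
        print(score, triage(score))
```

The design choice here mirrors Manpreet’s point: the thresholds determine how much the system leans on automation, and narrowing the automatic bands sends more borderline content, where bias and context matter most, to human judgment.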
The full interview has many more insights from Manpreet as well as suggestions for solutions to many of the problems we discussed with her. It’s well worth a read!
Checkstep News
📣 For Safer Internet Day, we hosted our first Trust and Safety event, and we’re excited to share that it was a great success and a wonderful opportunity to meet fellow T&S experts.
We intend to continue this series of events, and next up is a Trust and Safety meet and greet in the Bay Area. Learn more about the event here.
📣 We also hosted an online webinar with an exciting lineup of panelists: Jennifer Mathieu, Clara Tsao, Darren Gough, Lloyd Richardson and Giovanni Luca Ciampaglia. In case you missed it, you can watch it here.
📣 We’re super excited to share that the Checkstep family is growing. Joining us are Yu-Lan Scholliers as Head of Operations and Vincent Maurin as Head of Engineering.
📣 And of course, we didn’t want to miss out on all the Tinder Swindler talk. Here’s what Vibha, the newsletter’s co-conspirator, has to say.
Moderating the Marketplace of Ideas
🇷🇺 It Is (Often) Not About You: Russia’s 4 Target Audiences for Disinformation (Tech Policy Press)
Tech Policy Press talks with Clint Watts about exactly who the targets are for Russian disinformation and why social media companies keep Kremlin propaganda outlets online.
😡 ❌ Here's why Twitter users in the UK can still be jailed for sending 'grossly offensive' tweets (The Verge)
Twitter is overrun with your basic, run-of-the-mill offensive tweets, but when they step over the somewhat fuzzy line to “grossly offensive”, the UK government has been known to prosecute. Section 127 of the 2003 Communications Act was invoked again this month, bringing more debate on the controversial law.
🇸🇪 Sweden returns to cold war tactics to battle fake news (The Guardian)
Sweden has re-established its Cold War-era psychological defense agency because of concerns about possible Russian aggression. Since the aggression moved past being just possible last week, you might be thinking, “too little, too late.” Rest assured, we can expect disinformation from foreign quarters to continue and maybe even escalate.
🗣️ Governor Glenn Youngkin accused of 'toxic culture' after aides attack teen on Twitter (The Guardian)
A high school student’s efforts at civil discourse were met with bullying and harassment, turbo-charged by the fact that a toxic tweet came from the new governor of Virginia’s own campaign team. The harassing tweet has since been taken down, and the governor says it was unauthorized but notably did not offer an apology.
🧒 Schoolkids Are Falling Victim to Disinformation and Conspiracy Fantasies (Scientific American)
Sadly, even our children are ripe targets for disinformation. Those spreading fake news take advantage of the vulnerability of young minds, and social media platforms tend to make it worse by recommending dubious content that is often more extreme and far-fetched than what a user initially engaged with. Teaching better media literacy could help.
💬 How Telegram Became the Anti-Facebook (Wired)
Confusing changes to WhatsApp’s terms of service and the banning of Donald Trump from Twitter and Facebook sent scads of new users to Telegram, which bills itself as the ultimate free speech platform. Pavel Durov, Telegram’s CEO and cult-like leader, is proud to be the anti-Facebook option. It’s less clear how he feels about the far-right extremists and jihadists now making use of Telegram, both for evangelizing in public and plotting in secret.
🧑💻 🌐 14 experts say how the net's worst problems could be solved by 2035 (Fast Company)
Pew Research reached out to several tech opinion leaders to ask them how to fix up the mess that is the current internet. As you can imagine, thought leader answers can go on for some length, but never fear, Fast Company has summarized them for you.
📈📱 Spanish-language social media misinformation thrives, raising alarms (Axios)
Spanish-language misinformation still seems to be a growing pain for social media platforms. While the platforms focus their efforts on removing misinformation in English, very little is being done about its Spanish counterparts. The “no hablo español” approach is getting these platforms into trouble with lawmakers, and Russia seems to be taking advantage of the gap as well.
😞📰 Now It Can Be Told: Karen Hepp Opens Up About Facebook Lawsuit (Philadelphia Magazine)
A Philadelphia TV anchor’s image was appropriated without her consent and used to promote dating and other bottom-dwelling internet sites. Karen Hepp’s suit against Facebook, which ran the ads, brings an innovative challenge to Section 230 protections. Using one of the few exceptions to the immunity granted by the statute, Hepp and her lawyer successfully argued to an appeals court that her “right of publicity” is legally akin to the other intellectual property protections legislators originally carved out for corporate interests. A different circuit court ruled the opposite way in a similar case, so this one might ultimately be decided by the Supremes.
🤑 Disinformation for profit: scammers cash in on conspiracy theories (The Guardian)
Disinformation campaigns run from halfway across the world, in Vietnam and Romania, played an active role in the Canadian anti-government “Freedom Convoy” protests. Scam artists behind several fake Facebook accounts raised as much as $7m in crowdfunding and generated mass mainstream attention. For the for-profit misinformation industry, it sure seems to be a lucrative line of business.
Remedying COVID-19 and Vaccine Misinformation
💉 🦠 Brazil's Covid-19 vaccination drive stumbles as Bolsonaro's disinformation campaign lingers (CNN)
Brazil’s poor public health messaging has led to increased death rates. With the country’s leaders playing an active role in sowing doubts about COVID vaccines, the rollout of booster shots has been relatively slow, threatening a repeat of the situation the country faced back in 2021.
Health experts are urging the Brazilian authorities to be more transparent and to send out a clear message that it is the unvaccinated and partially vaccinated who are being hospitalized.
Regulatory News and Updates
🇺🇸 A bill aiming to protect children online reignites a battle over privacy and free speech (The Washington Post)
The proposed EARN IT Act has a lot of support in Congress and is positioned as a way to stop some of the most egregious online harms, including child sexual abuse, but privacy and free speech advocates are concerned. Detractors say the bill won’t really protect children but will introduce other harms to vulnerable groups that require anonymity online.
🇬🇧 Online safety law to be strengthened to stamp out illegal content (GOV.UK)
Fraud, online drug and weapons dealing, people smuggling, revenge porn, promoting suicide, and inciting or controlling prostitution for gain are the latest criminal offences added to the scope of the Online Safety Bill (OSB). However, several Members of Parliament (MPs) still think that online fraud is not as high a priority for the British PM as it should be.
In addition, all porn websites will now be required to verify the ages of their users, to ensure children are not exposed to explicit content.
🏛️ New algorithm bill could force Facebook to change how the news feed works (The Verge)
Yet another bipartisan bill has been introduced in the Senate, this time the Nudging Users to Drive Good Experiences on Social Media Act (the Social Media NUDGE Act). Its provisions range from having researchers identify ways to slow the spread of harmful content and misinformation to holding social media platforms accountable by treating violations as unfair or deceptive acts or practices.
Fast forward one week and there’s another new bill, the Kids Online Safety Act. It would require platforms to prevent the promotion of self-harm, eating disorders, bullying, and the sexual abuse of children, as well as provide parents and minors with tools to monitor screen time and protect their data privacy. The stack of bills sure seems to be growing, but what about their implementation?
🇩🇪 Germany Has Picked a Fight With Telegram (Wired)
Although Telegram remains one of the most popular messaging apps in Germany, German authorities believe it serves as an “incubator” for a series of violent incidents involving the country’s anti-lockdown movement.
Despite numerous reports from the authorities, the platform has chosen to turn a blind eye. This refusal to cooperate raises a more serious question: with so many new regulations coming up around online safety, could other platforms also pull a “Telegram” and start pushing back?
✍️ Lawmakers Press Amazon on Sales of Chemical Used in Suicides (The New York Times)
Since 2019, families of suicide victims have urged Amazon to take down listings for sodium nitrite, a preservative increasingly used in suicides, but so far the platform has taken no action. Hoping to force Amazon’s hand, members of Congress have sent a strongly worded letter to the company’s executives asking them to explain their complaint redressal process.
⚖️ 🔮 Metaverse ‘cannot escape’ UK online rules, say experts (Financial Times)
While Zuckerberg might be trying to find loopholes in the regulations surrounding content moderation, lawmakers are not having it. Although it may not be entirely clear what moderation in the metaverse will look like, that uncertainty doesn’t exempt platforms from the scope of the regulations.