Unfruitful hearings, mind reading and global disinformation network takedowns
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
We’ve mentioned in the past that, even when we want to, it’s hard to ignore Facebook in the news. This month, it’s TikTok’s turn in the hot seat. From stories about internal leaks to incels encroaching on the platform and mass-reporting tools used to attack creators, there’s a lot happening on its turf. In addition, we’ve got our usual round-up of stories about misinformation, freedom of speech, content moderation, and regulations (actually, lots about regulations). Enjoy!
Checkstep News
📣 Many congratulations to our Deputy Head of Research, Isabelle Augenstein, for earning the title of Doctor Scientiarum, after successfully defending her higher doctoral thesis on "Towards Explainable Fact Checking".
📣 There’s been so much press (almost all favorable) covering Facebook’s globe-trotting whistleblower. Vibha, the newsletter’s co-conspirator, shares her thoughts on whistleblowers and the Facebook Papers in this opinion piece.
📣 Our Research Team’s work on “A Neighbourhood Framework for Resource-Lean Content Flagging” got accepted to the Transactions of the Association for Computational Linguistics (TACL) journal.
Moderating the Marketplace of Ideas
💰 How Facebook and Google fund global misinformation (MIT Technology Review)
Facebook’s Instant Articles, a program that lets publishers monetize their content, tends to attract fake news and clickbait sites, which overwhelm legitimate content. It’s hard to know whether the fake news sites are run by political actors or plain-vanilla scammers, but either way Facebook is helping to fund misinformation operations. And not to single Facebook out: Google has a similar problem.
🗣️ TikTok Has an Incel Problem (Vice)
While Facebook, YouTube, and Twitter have been bearing the brunt of the backlash for failing to protect users from hate speech, TikTok has been quietly flying under the radar. That might be changing. There’s been an increase in reports of misogynistic content on the platform, and TikTok is trying to keep up even as they are grappling with finding the line between free expression and hate speech.
🤦 Twitter's new ban on sharing 'private' media is frustratingly vague (Input Magazine)
In an attempt to limit damage from doxing, Twitter has amended its policy to forbid "the misuse of media... that is not available elsewhere online as a tool to harass, intimidate, and reveal the identities of individuals." Matt Wille argues in this piece that the new changes are too vague and will likely create more problems than they solve. It took almost no time for Wille to be proven right: the Washington Post reports that anti-hate advocates are finding it harder to document the activities of neo-Nazis and far-right activists.
👑 🤑 Twitter’s Highest-Profile Users Get VIP Treatment When Trolls Strike (Bloomberg)
Well, well, well. It seems that, much like Facebook with its “Cross Check” program, Twitter has its own version of preferential treatment for celebrities. Despite claims that its rules apply equally to everyone, Twitter makes an extra effort to shield platform luminaries from harassment and abuse.
⛔ Facebook takes down disinformation networks globally (The Washington Post)
The cat-and-mouse game between social media companies and coordinated disinformation networks continues to ramp up. Facebook’s latest threat report includes takedowns of groups from China, Belarus, and elsewhere. In a related action, Twitter closed thousands of China state-linked accounts spreading propaganda about Xinjiang.
🔮 How TikTok Reads Your Mind (New York Times)
An internal document titled “TikTok Algo 101” was leaked by a TikTok employee to The New York Times, exposing how the platform studies human emotions and leverages people’s reactions to make the app “addictive” to its users. Its recommendations can steer users toward extremely polarizing content, with detrimental effects, especially for children. While the document doesn’t reveal shocking facts the way the Facebook Files did, it does show how closely linked TikTok is to its China-based parent company, ByteDance.
😕 🤷 TikTok creators say they lose videos through mass reporting (Los Angeles Times)
Armed with mass reporting tools, some users have been abusing TikTok’s content reporting system. Some claim noble motives; others are more obviously just targeting TikTok creators they don’t like. It’s all a little murky since there is little transparency around moderation efforts at TikTok.
🙅 Rohingya sue Facebook for $150bn over Myanmar hate speech (BBC News)
Rohingya refugees are demanding $150 billion in compensation from Facebook (now called Meta) for failing to stop the spread of content promoting violence against their community. It’s a stern call for Facebook to do better or cough up for its mistakes.
🌐 🧑🏫 Expanding access beyond information operations (Twitter)
Twitter is launching the Twitter Moderation Research Consortium early next year. This new group of qualifying researchers and journalists is being formed as the outlet for releasing datasets related to state-backed interference on the platform. Twitter has had concerns about previous public releases of such data and has created this group to address those concerns. Later in the year, Twitter plans to share data with the consortium about other policy areas, including misinformation, coordinated harmful activity, and safety.
Remedying COVID-19 and Vaccine Misinformation
🦠 Scientists are racing to understand the omicron variant, but anti-vaccine misinformation about omicron is already being spread (The Washington Post)
Despite stepped up efforts, social media companies struggle to keep up with misinformation narratives that mutate and spread much like the virus itself.
Regulatory News and Updates
🇬🇧 Plans to curb online abuse from anonymous accounts to be raised in Parliament (Evening Standard)
A group of MPs is proposing additions to the Online Safety Bill to reduce abuse from anonymous accounts. Their plan would let users verify their accounts while still allowing anonymity, with individuals deciding for themselves whether to interact with unverified users.
P.S. More updates to the Online Safety Bill are being made as we speak, including new offences such as sending unwanted sexual images and promoting violence against women and girls. The government has two months to respond to the changes proposed in the report before the bill is introduced in Parliament early next year.
🇺🇸 📜 Senators announce new social media transparency bill (The Verge)
Yet another bill to be introduced into Congress, but will it pass? That question remains unanswered. So far 17 reform bills have been suggested since January 2021. Seems like all talk and no action so far.
🏛️ Federal court blocks Texas law banning ‘viewpoint discrimination’ on social media (The Verge)
Undeterred by the tongue-lashing Florida got from a federal judge, Texas tried defending its own scheme for state control of social media. Federal judges in both states have affirmed that the First Amendment still applies, no matter how much legislators might disagree with the platforms’ viewpoints.
Regulators from Europe to the U.S. all want Facebook whistleblower Frances Haugen’s opinion on regulating social media platforms. While her experience at Facebook (now Meta) does uniquely position her to share insights, is she really in a position to suggest regulatory changes?
🇪🇺 EU will force social media to reveal how political adverts target users (The Times of London)
In an attempt to make political advertising more transparent, the European Commission has introduced a draft regulation that would force social media companies to disclose how they target political advertisements at people, and would prohibit the use of “sensitive data,” such as race, religion, and political views, to target those ads. Companies that fail to comply would be subject to fines of up to 5% of their total revenue.
🇦🇺 Anti-troll laws could force social media companies to give out names and contact details (ABC Australia)
Draft legislation targeting online trolls was recently released by the Australian federal government, which hopes to introduce it in Parliament next year. The proposed legislation includes various requirements for social media platforms, from handing over the identities of users involved in defamation cases to creating a complaints process for people who believe they’ve been defamed online. While the legislation aims to set national limits on online defamation, the opposition argues that trolls can easily sidestep it by using a foreign IP address.
🇸🇦 Understanding the Kingdom’s digital content platforms regulations (Arab News)
Saudi Arabia is moving to introduce the Middle East’s first-of-its-kind regulation of content moderation. The proposed Digital Content Platforms Regulations would prohibit digital platforms from offering any service in the Kingdom, paid or free, until they comply with the regulation’s requirements. The draft was recently put out for public consultation, the results of which will determine its future direction.
⚖️ Lawmakers Urge Instagram's Adam Mosseri to Better Protect Children (New York Times)
It’s Instagram’s turn to be grilled by lawmakers for failing to protect the platform’s young users. You have to give it to Instagram for trying to save face right up until the last minute before the Senate hearing: just last week, it released new safety tools that let parents set limits on their children’s use of Instagram, and teens who spend too much time on the platform get a nudge to take a break. Members of Congress were none too impressed, though.
🤖 MP Demands Deepfake Porn And 'Nudifying' Images Are Made Sex Crimes (Huffington Post)
Member of Parliament Maria Miller has urged the UK government to bring the creation and sharing of image-based sexual abuse within the scope of the proposed Online Safety Bill and to classify it as a sex crime. As deepfakes and ‘nudification’ software continue to make their way onto the digital market, it’s all too easy for bad-faith actors to create and share images of women online without their consent. A study by the AI research group Deeptrace found that 97 per cent of deepfakes are pornographic in nature and exclusively target women.