Another month of Twitter, Elon Musk, and more Twitter plus some other stuff
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
As we transition into the new year, the chaos of news covering Elon Musk and Twitter has been inescapable this month. Some of it is just a lot of noise and some of it reflects significant changes that are likely to completely remake the platform, but all of it is changing so fast that any information we include here is likely to change in the very near future. From the maelstrom, we’ve tried to select the items that relate specifically to trust and safety and content moderation. One consistent theme throughout is that content moderation is a really and truly difficult problem to solve, and regulators are watching.
If you manage to lift your head to breathe some non-Twitter air, you’ll also find that the U.K. Online Safety Bill is back on track after a slight derailment, and the U.S. Supreme Court is planning to reevaluate Section 230 of the 1996 Communications Decency Act, with the potential to completely change platforms’ responsibility for online content.
Meanwhile, this month we had the good fortune of speaking with Zara Ward who is a senior practitioner at the Revenge Porn Helpline. The video interview is available on our Medium publication, and we’ve excerpted and edited some of the transcript here. The Revenge Porn Helpline is a practical support service for adult victims in the U.K. who have experienced intimate image abuse (IIA).
What is the current law in the UK with regards to intimate image abuse? And how is that being monitored, and more importantly, enforced?
It is illegal to disclose private, sexual, intimate imagery with the intent to cause distress according to the law that was passed in 2015. It has to be private sexual imagery, and it has to be with the intent to cause distress. So, as you can imagine, the intent to cause distress is a really difficult thing to enforce. Since the law was passed, we’ve been kind of campaigning to get the law changed.
Why was the Revenge Porn Helpline set up? What is its origin story?
The Revenge Porn Helpline was set up in 2015. So, our charity sits underneath a wider organization called the South West Grid for Learning and one of their helpline branches is the Professionals Online Safety Helpline.
One of the managers in the organization used to see quite a lot of intimate image abuse, even though it was actually against the law. So as it became more of a problem, they thought, “oh, well, let’s set up a helpline and see how popular it becomes” and over the years it’s increased in need and the support has become so much more developed and complex. So yes, we are very very busy!
What can online platforms do to reduce the chance of intimate images being used and being traded in this way?
I think a block-first policy is always a really good step. So if you’re getting a report of a private sexual image from somebody, just block it. Like take it seriously because then if somebody appeals it and says, “oh no, actually that’s a picture of a famous porn actress”, then absolutely fine.
There’s more than enough porn on the internet. I think it wouldn’t matter if something went down for a couple of hours, but it does say that we are putting victims first. Rather than going, “we are putting commercial gain first or views first”, or whatever it is, which happens on a lot of adult websites where they’re like, “yeah, but it’s going to get views”. We don’t want somebody’s really intimate moment getting 300,000 views when they’ve not allowed it to be online.
The video of the full interview is available at our Medium publication and well worth a listen.
Checkstep News
📣 We hosted a pre-Christmas event with a few Trust and Safety (T&S) folk based in London. Lots of great discussions and talks around what T&S could look like in 2023.
News from TwitterLand
❌ Twitter is no longer enforcing its Covid misinformation policy (CNN)
In 2020 Twitter added rules against “harmful misinformation” about the pandemic. According to Twitter, they suspended more than 11,000 accounts between January 2020 and September 2022 for violating COVID misinformation rules. Plus, Twitter removed nearly 100,000 pieces of content. Twitter’s official policy now says, “Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.”
🦾 Exclusive: Twitter leans on automation to moderate content as harmful speech surges (Reuters)
As Elon Musk has been shrinking the size of Twitter’s staff, the platform is depending more on fully automated moderation decisions. The move is causing Twitter to be more aggressive in moderating certain types of content, like child exploitation, even at the cost of removing non-violating posts.
👥 Twitter’s Community Notes feature starts rolling out globally (Engadget)
The pilot program previously known as Birdwatch is being rolled out globally. Now known as Community Notes, the feature will be available everywhere as a way to stop misinformation. It uses a crowd-sourced approach to identify tweets that might be considered misleading.
🏃 Twitter dissolves Trust and Safety Council, Yoel Roth flees home (The Washington Post)
Twitter is dissolving its Trust and Safety Council. The move comes as Elon Musk has been asserting his own views on content moderation decisions at the company with less input from outside experts, and as some members of the council had already resigned. The Center for Democracy and Technology, a member of the council, said that it was “dismayed by Twitter leadership’s irresponsible actions to spread misinformation about the Council, which have endangered Council members and eroded any semblance of trust in the company.”
🤨 Surging Twitter antisemitism unites fringe, encourages violence, officials say (The Washington Post)
There has been a surge of hate speech and disinformation about Jews on Twitter, which is expected to lead to more violence and ongoing proliferation of extreme content. With more than half the staff at Twitter gone, Musk’s claim that hate speech has been declining seems implausible, and the increase has been confirmed by the Center for Countering Digital Hate.
✌️ Gamers Are Fleeing Twitter for Hive. Can It Handle the Swarm? (Wired)
Amid the chaos at Twitter, users of the service are trying to reestablish their communications elsewhere. Among the platforms serving as havens for Twitter refugees is Hive Social, a three-year-old company struggling to keep up with the onslaught. The app has been crashing or responding extremely slowly under the weight of its suddenly newfound user base.
Several left-wing accounts have been banned from Twitter since its recent change of ownership. There is reason to believe the suspensions came about because of an "organized mass reporting campaign" against left-wing Twitter users coordinated by a right-wing group called Zanting. The group published detailed instructions on a Substack channel on how to falsely report people.
Moderating the Marketplace of Ideas
💰 Google and YouTube are investing to fight misinformation (Mashable)
Google and YouTube, both Alphabet companies (oddly not mentioned in the article), are contributing $13.2 million to the International Fact Checking Network. The grant will help support the network of 135 fact-checking organizations.
😲 Social Media Seen as Mostly Good for Democracy Across Many Nations, But U.S. is a Major Outlier (Pew Research Center)
A Pew Research Center study across 19 countries found that more than half of those surveyed believe social media is a good thing. Of course, it depends what country you’re talking to. The United States is a big outlier. Most Americans think social media is divisive and bad for democracy.
🏛️ A Supreme Court Case Could Decide the Fate of the Modern Internet (Slate)
Slate interviews Jeff Kosseff, whose book, The Twenty-Six Words That Created the Internet, explains the history and context for Section 230 of the Communications Decency Act. The current Supreme Court case Gonzalez v. Google reexamines this very powerful statute that has influenced so much about how content moderation works today.
🗣️ Teens and Cyberbullying 2022 (Pew Research Center)
According to the latest such poll from Pew, nearly half of U.S. teens have been bullied or harassed online. Teens report physical appearance as the most common target of derision. Boys and girls report about the same amount of abuse, but teen girls are more likely than teen boys to be the targets of false rumors.
📱 'Blackout Challenge' on TikTok Is Luring Young Kids to Death (Bloomberg)
As if the bullying isn’t enough already, some of the youngest of TikTok’s users are at risk from the many viral challenges put out on the platform. The ‘Blackout Challenge’ has been especially deadly, being linked to the deaths of at least 15 kids aged 12 or younger in the past 18 months.
🌪️ Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users (NYU’s Center for Social Media and Politics)
In 2016 it was widely reported, and subsequent studies confirmed, that YouTube recommendations favored extreme right-wing content. YouTube promised to do better. In 2020, researchers at NYU wanted to see if YouTube’s algorithm was still pushing conservative content. Now, the results are in. The researchers found that most users were not sent down extremist rabbit holes, but they did get nudged into “increasingly narrow ideological ranges of content in what we might call evidence of a (very) mild ideological echo chamber.”
📰 How publishers are learning to create and distribute news on TikTok (Reuters Institute for the Study of Journalism)
News organizations looking to engage younger audiences are turning to TikTok, one of the world’s fastest-growing social networks. It doesn’t hurt that events like Black Lives Matter, the Covid-19 pandemic, and the war in Ukraine have resonated with this age group, as have recent changes to the platform such as longer videos and the promotion of live streams. Even so, news on TikTok is still mostly created by influencers and activists.
🤖 AI-generated fake faces have become a hallmark of online influence operations (NPR)
Facebook says that more than two-thirds of the influence operations it took down in 2022 used fake profile pictures generated by machine learning. As the technology becomes more widely available and better at creating life-like faces, bad actors are adapting them to manipulate social media networks.
👿 Russian disinformation is demonizing Ukrainian refugees (The Washington Post)
As Russian forces bombard Ukrainian cities, their propagandists are doing their best to get Europeans to resent the huge number of refugees their attack has created. Europeans have overwhelmingly welcomed displaced Ukrainians, but experts fear that Russia’s efforts to weaponize the issue might work on a growing number of people.
Regulatory News and Updates
🇬🇧 Return of the Online Safety Bill - everything you need to know (Center for Countering Digital Hate Blog)
After stalling for several months, the Online Safety Bill has been brought back by the U.K. government with some notable changes. Protections for children have been strengthened, along with laws against encouraging self-harm and distributing intimate images without consent. On the other hand, the controversial and, in fact, ambiguous ‘legal but harmful’ clause has been dropped.
🇺🇸 Senate passes bill to ban TikTok on U.S. government-issued devices (The Washington Post)
The U.S. Senate passed a bill banning federal employees from using TikTok on government devices. Congress has been anxious to clamp down on the popular, Chinese-owned video-sharing app, and additional regulations might be in store for the platform. The bill still requires the approval of the House and the signature of President Biden before it can go into effect. A number of states have already passed their own bans.
🇮🇪 President Higgins signs crucial Online Safety and Media legislation into law (Government of Ireland website)
The Irish president signed the Online Safety and Media Regulation Bill. The act dissolves the existing Broadcasting Authority of Ireland and establishes a new Media Commission to be overseen by an Online Safety Commissioner, who will have a “modern suite of robust compliance and enforcement powers.” There hasn’t been much press about this but for more details, the Irish Times covered the pending legislation last October.