Lots of moderation, or rather lack of it, in the news this month
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
Lots of moderation news this month. Both Reddit and Stack Overflow volunteer moderators have launched work stoppages, and Wired has a detailed analysis of how moderation at 4chan works. 4chan moderation? Who knew? Of course, the main focus there is that you don’t insult 4chan, but according to the analysis, it’s all a little fluid. In other news, the big platforms are relaxing moderation for political candidates, election denialism, and COVID misinformation. They have their reasons, but you can expect to see a lot more nonsense, so “let’s be careful out there.”
Checkstep News
📣 Our Head of Product, Yu-Lan Scholliers, recently spoke at the Data Science Festival about using AI to foster kinder and safer connections online.
📣 We also hosted a Trust and Safety networking event in Tel Aviv with our partner Falkor.
📣 Last but definitely not least, we’re kicking off a webinar series on the Digital Services Act in partnership with Bird & Bird. Do sign up for the first session on July 5th: https://chk.st/3JqVgjE.
Moderating the Marketplace of Ideas
⛔ Meta to start blocking news content for up to 5% of Canadian Facebook, Instagram users (CBC)
Faced with the likely passage of a new law in Canada that would require Big Tech to pay news organizations for use of their content, Meta is retaliating by blocking some Canadian users from accessing or posting news-related content on Facebook and Instagram. A similar measure in California has Facebook threatening to block news there too.
◀️ Scoop: YouTube reverses misinformation policy to allow U.S. election denialism (Axios)
YouTube has announced that it will stop removing content that includes lies about the outcome of the 2020 U.S. presidential election. The policy was originally implemented in December of 2020. The company says that continuing to remove such content would curtail political speech and is no longer needed to reduce the risk of violence.
🧑‍💻 Twitter Missed Dozens of Known Images of Child Sexual Abuse Material, Researchers Say (Wall Street Journal)
Researchers at the Stanford Internet Observatory discovered that Twitter failed to take down dozens of instances of CSAM on the platform. Twitter told the researchers that their detection system has since been fixed and has asked them to let the company know if they find a spike in such content in the future.
🪧 Stack Overflow Moderators Stop Work in Protest of Lax AI-Generated Content Guidelines (Gizmodo)
Community volunteer moderators at Stack Overflow are walking off the job to protest new policies regarding AI-generated content. The platform has announced that it will remove AI-generated content only in specific circumstances, claiming that over-moderation was discouraging humans from contributing. The moderators object to what they call a near-total prohibition on removing AI-generated content, including incorrect information. They are also unhappy with the way Stack Overflow has communicated the new policy.
🕵️ Inside 4chan’s Top-Secret Moderation Machine (Wired)
Wired Magazine takes a deep dive into moderation practices at 4chan, revealing “the degree to which 4chan’s toxic influence is a design, not a bug.” The site, closely associated with the 2022 mass shooting in Buffalo, New York, has a set of rules, but they tend to be arbitrarily and opaquely applied, and most calls for violence, for example, are allowed to stand.
🧐 Instagram reinstates Robert Kennedy Jr. after launch of presidential bid (The Washington Post)
Robert F. Kennedy Jr. had been banned from Instagram since 2021 for being an irredeemable purveyor of bogus and potentially dangerous information, but since announcing his run for the presidency, he’s back. Instagram announced that because he is now an active candidate for president of the U.S., it will restore his account. The ban on his organization, however, will continue.
😎 Misinformation spreads, but fact-checking has leveled off (Duke Reporters' Lab)
The dissemination of misinformation isn’t slowing down, but fact-checking organizations are not growing at the same rate. Duke Reporters’ Lab regularly checks in on the state of fact-checking. The good news is that the existing fact-checkers are mostly viable organizations that seem to be in it for the long haul.
👽 What will stop AI from flooding the internet with fake images? (Vox)
With the expected explosion of fake images hitting platforms online, some experts are suggesting that watermarks and other indicators could be used to distinguish actual images from those generated by AI. Google, Adobe, and Microsoft all support some labeling of AI-generated content in their products, so theoretically this could be made to work.
🩺 The Surgeon General Is Pushing for a Misguided Social Media Policy (Wired)
The highest-ranking public health officer in the U.S. has issued a warning that social media is harming kids. The surgeon general also proposed some solutions, such as enforcing age minimums, which implies that all of us would be required to show ID to be online. Wired argues that the suggested remedies will actually make things worse for many more people.
😕 No One Knows Exactly What Social Media Is Doing to Teens (The Atlantic)
… But the science behind the idea that social media is bad for teens is not as definitive as you might believe. Hundreds of studies over the last 10 years have actually produced mixed results and seem to indicate that a lot depends on the individual involved. The short answer is that we still don’t know what the effects are. But that hasn’t stopped legislators and litigants from claiming they can point the finger at the problem.
😪 Bluesky’s growing pains strain its relationship with Black users (TechCrunch)
As Bluesky, one possible alternative to Twitter, grows, it is facing increasing pressure to do more about hate speech and violent comments. Bluesky is designed to be decentralized, so moderation has not been one of its founding principles. But as anyone involved in Trust & Safety knows, it was only a matter of time before hate started to rear its ugly head.
📈 Twitter CEO Cites Need to Transform ‘Global Town Square’ in Memo to Staff (Wall Street Journal)
Linda Yaccarino has taken over as the CEO of Twitter. In a memo to employees, she stated that the platform’s goal is to be the world’s most accurate real-time source of information. Yaccarino comes from an advertising background, most recently at NBCUniversal.
📱 Post, a publisher-focused Twitter alternative, launches on iOS (TechCrunch)
Post is a new platform competing to fill the gap left in the wake of Twitter’s decline. It’s mainly news-focused and is working with several publishing partners. Users can see headlines and the beginnings of articles for free, but reading full articles will require a microtransaction. Instagram is also releasing a new Twitter competitor.
🌊 Victims speak out over ‘tsunami’ of fraud on Instagram, Facebook and WhatsApp (The Guardian)
Reportedly, every seven minutes someone in the U.K. falls for a financial scam on either Facebook or Instagram (both Meta properties). Meta is hearing from all sides that it’s time to do something about it since Britons are losing life-changing sums every day. Meta says fraud is an industry-wide problem and they do what they can to block it.
🦠 Meta rolls back measures to tackle COVID misinformation (Reuters)
Meanwhile, Meta is no longer doing what it can to prevent COVID misinformation. A policy put in place during the pandemic will no longer apply globally. The change is attributed to an improvement in authentic sources of information and greater general awareness around COVID. The pandemic-era rules will remain in effect in countries that still have a COVID-19 health emergency.
🌐 Instagram Connects Vast Pedophile Network (The Wall Street Journal)
Researchers have discovered that Instagram helps to connect and promote accounts openly dealing in underage-sex content. Pedophilia is apparently treated like any niche interest on Instagram, which makes a point of connecting users to build communities. Meta has acknowledged that it is having problems and said that it has set up an internal task force to address them.
🕸️ AI-generated child sex images spawn new nightmare for the web (The Washington Post)
On the truly dark side of AI, an explosion of disturbingly life-like images of child exploitation is showing up online. Child-safety experts worry that the introduction of generated images will undermine efforts to find real-life victims. It’s not clear if generated images even violate federal child-protection laws since they depict children who don’t exist. Justice Department officials have said the images are illegal even if the child shown is AI-generated, but so far no case has been brought by law enforcement.
💬 Child predators are using Discord, a popular app among teens, for sextortion and abductions (NBC News)
“Discord’s young user base, decentralized structure and multimedia communication tools, along with its recent growth in popularity, have made it a particularly attractive location for people looking to exploit children.” Hundreds of cases of transmitting or receiving CSAM have been identified with reports increasing by 474% from 2021 to 2022.
Regulatory News and Updates
🇪🇺 Twitter pulls out of voluntary EU disinformation code (BBC News)
Twitter has decided to leave the EU’s voluntary code to fight disinformation, which nearly all tech firms, large and small, have signed onto. Twitter formerly had a team dedicated to stopping coordinated disinformation campaigns, but most of its members have resigned or been laid off. EU commissioners are not happy about the move and have indicated that when the Digital Services Act goes into effect in August, they will be diligent about enforcing compliance.
🇦🇺 Twitter may face fines in Australia over hate speech (Axios)
Twitter has received notice from the Australian government asking it to explain the steps it is taking to limit hate speech on the platform. The online safety regulator says it has received more complaints about Twitter in the last 12 months than about any other social media provider. Twitter has 28 days to respond or face fines.
🇺🇸 Montana banned TikTok. Now these Montanans are fighting back. (The Washington Post)
Believing that China is using TikTok to slurp up Americans’ private information, Montana became the first state to ban the social media company. Legislators may or may not have anticipated the effect the ban would have on their citizens, including on their ability to make a living, but they are now hearing from those citizens in big numbers.
Reading Corner
💡 Here’s an article by Mohamed Abdihakim Mohammed on how the AI Act will intersect with the DSA and how this will change the face of trust and safety!
💡 The UK Safety Tech Sectoral Analysis is out! This is the second year that Checkstep has been featured in it.