A new free speech battle over abortion, potential free rein for COVID misinformation, and there might actually be a bill Congress will agree on—go figure.
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
Of course the Supreme Court’s recent reversal of constitutional protections for abortion is not the end of the discussion about abortion in the U.S. Anti-abortion legislatures in some states are proposing new laws that would restrict speech on websites that explain how to get an abortion. At the same time, Meta is considering opening up previously restricted speech around COVID misinformation. It’s checking in with its own supreme court, the Oversight Board, to get its opinion. Moving on from courts to legislatures, the perennially gridlocked U.S. Congress might actually, just possibly, agree enough to pass a new data protection law.
We’re lucky this month to hear from Carolina Christofoletti who is a Child Sexual Abuse Material (CSAM) subject matter expert, with vast experience both in CSAM Intelligence and CSAM Trust & Safety. She currently leads the CSAM Blockchain Intelligence efforts at TRM Labs as part of the Threat Intelligence Team. The full interview with her is required reading if you want to understand why the current industry standard of using databases of image hashes doesn’t come close to actually solving the problem of online child sexual abuse.
The following is an excerpt from the full interview.
The EU recently beefed up their regulations to ensure online child safety. How important do you think government regulations are or not to online child safety?
The main risk of any regulation is getting the “what to regulate” problem wrong. This is also true for the upcoming EU CSAM regulation. Once an industry standard is set by regulation, which online platforms must comply with, CSAM Trust & Safety managers become extremely resistant to greenlighting a different approach to tackle CSAM threats. And with this, the antilogic rotates in circles—revealing the real origin of Trust & Safety NDAs.
If Web3 is the next big thing, what do you think of the metaverse with regard to online child safety? Is there a greater risk around child grooming?
The metaverse will only surface the fact that Child Safety is a much bigger issue than ‘known CSAM files’. The hyperrealistic metaverse gives us the false impression that we are facing a new, or even an increased Child Safety threat. Instead, what we are facing is a “new wine in an old bottle.” Because the wine label is now more colorful, more visual, we tend to pay more attention to it. But that’s all it is.
The fact that metaverse worlds will allow us to actually see CSAM perpetrators walking their avatars around children’s accounts once, twice, twenty times a day doesn’t make CSAM Threat Actors more threatening—but it actually makes children safer. What was an internal log visible only to Trust & Safety teams now becomes a phenomenon easily seen by whoever is around. The question now depends on the efficiency of ‘reporting buttons’ and ‘review metrics’.
Be sure to check out the full interview in our Checkpoint, our Medium publication.
Checkstep News
📣 The Checkstep team had their first in-person retreat in Croatia. And we look forward to having many more.
The Information War
🫣 Ukraine says Big Tech has dropped the ball on Russian propaganda (The Washington Post)
A new report by Reset showed a decline in Big Tech’s responsiveness to Ukrainian officials’ requests to take down Russian propaganda. During the first weeks of the Russian invasion, social media platforms did all they could to limit the spread of propaganda, but they now seem to have lost interest, especially YouTube, which failed to respond to officials’ requests for almost two months. This hot-and-cold behavior has left Ukrainian officials at an impasse.
Moderating the Marketplace of Ideas
🔞 Meta Failed To Protect Instagram’s Child Models From Pedophiles (Forbes)
A recent investigation by Forbes found that at least 40 of the 150 accounts it identified as promoting child sexual abuse or sexualizing children were still active on Meta’s platforms. Nonprofits working to limit the spread of such content have urged Meta to do better.
✅ Facebook considering ending restrictions on Covid misinformation (The Guardian)
Facebook seeks its “advisory” panel’s help to decide whether tackling misinformation should still be a priority for the platform. Fake news definitely seems to be old news for Meta.
🪟 TikTok to provide researchers with more transparency as damaging reports mount (The Verge)
Tired of being in the headlines for constantly promoting questionable content on their platform, TikTok promises to increase transparency surrounding its moderation process.
📱 Hunter Biden phone hack claims test platforms' misinformation policies (The Verge)
Are social media platforms covering up Hunter Biden’s dirty laundry? A recent thread on 4chan posted several pictures of Hunter Biden in compromising situations, the authenticity of which is yet to be determined. The images eventually spread to other platforms, including Twitter, Meta, and even Google, but were soon taken down, prompting questions about whether the platforms apply their moderation rules consistently.
🤫 Some half of Holocaust content on Telegram contains distortion, UNESCO says (The Washington Post)
A recent study by UNESCO showed an increase in the spread of content denying or misrepresenting the Holocaust, also referred to as Holocaust distortion. Telegram, a self-styled champion of “free speech,” appears to host the majority of such content.
😶 A Bored Chinese Housewife Spent Years Falsifying Russian History on Wikipedia (Vice)
Rather than taking up an ordinary hobby, a bored Chinese housewife assumed the role of a scholar well versed in Russian history, spending years writing fabricated Wikipedia articles. Although she has since apologized, the episode shows how easy it is to pass off invented expertise, and thus the necessity of always verifying information.
🤖 Is the future of fact-checking automated? (Poynter)
What once seemed like a pipe dream could soon become tangible. Full Fact, a U.K.-based fact-checking organization, may make fact-checking much easier by sharing access to its internal real-time fact-checking tools with other fact-checking organizations.
🗣️ 🚨 Twitter sued Indian government over content removal requests (Protocol)
Twitter and the Indian government seem like an old married couple who can’t agree on anything. Last time, the Indian government raided Twitter’s offices in response to the platform’s decisions around manipulated media. This time, Twitter is calling out the government for abusing its power.
🧑💻 Tech’s blind spots: Sharing with researchers and listening to users (The Washington Post)
From lack of transparency to quality checks of products meant to ensure online safety, the Digital Trust and Safety Partnership published its first evaluation report on industry standards for trust and safety workflows.
Regulatory News and Updates
🇮🇩 EFF and Partners Call on Indonesia to Repeal Invasive Content Moderation (Electronic Frontier Foundation)
Last week Big Tech was forced to comply with Indonesian government rules requiring platforms to censor content and share data on the government’s request or face termination of their services. EFF and several digital rights organizations consider this an outright violation of human rights and have urged the government to repeal the regulations.
🇳🇿 Tech giants to self-regulate in reducing harmful content in New Zealand (Reuters)
Self-regulation vs. government regulation: which is the best course of action to ensure online platforms effectively reduce harmful content?
🏛️ Congress Might Actually Pass ADPPA, the American Data Privacy and Protection Act (Wired)
A bill taking aim at targeted advertising and data collection, particularly of minors, is making its way through Congress. It remains to be seen whether this bill joins the long stack of bills introduced over the past two years or eventually becomes law.
🇺🇸⚖️ South Carolina bill outlaws websites that tell how to get an abortion (The Washington Post)
Following the Supreme Court’s ruling on the right to abortion, a recent South Carolina bill would make it illegal to host a website or “[provide] an internet service” with information that could be used to get an abortion.