Fierce debates engulf the Online Safety Bill and Section 230 of the CDA
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights into content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
This month brings us the depressing combo of increasing misinformation and real-world violence apparently stemming from growing online hate. Lots of regulatory news too. The U.K. Online Safety Bill has been revived from its catatonic state to be fiercely debated again. As details emerge, more people from all corners are tossing in their two pence. Across the pond, online regulation in the U.S. is also getting a fair amount of scrutiny as the Supreme Court takes on two cases with direct bearing on Section 230 of the Communications Decency Act, originally passed in 1996, and let’s just say a lot has changed since the world was using Netscape Navigator to do Yahoo! searches.
We’re also very happy to have an excerpt from our interview with Alexandra Koptyaeva from Heyday. Alexandra has been working in Trust and Safety for over three and a half years, starting in content moderation, investigations, and quality assurance, and later leading a team and managing company processes. In September 2022, she was hired as Trust and Safety Lead at Heyday, a new social media app. In this role she is responsible for everything from content moderation and policy management to handling escalations to NCMEC and Crisis Response services.
How can social media platforms better detect CSAM?
It's certainly not the easiest thing to catch. Users are very creative, and certain abbreviations, emojis, or combinations of signals might indicate suspicious behavior. Personally, I'm trying to read the whitepapers, follow the updates, and attend webinars about this topic to stay informed. In addition, I also have a profile on the Heyday app so I can see what's happening from the user's perspective.
There’s often a thin line between over-moderation and censorship. How do you draw that line?
I'll answer this question with an example: when I began working closely with content moderators from different countries, I noticed that they tend to be more or less strict with content depending on their origin or place of birth. Although the platform has the same internal guidelines for everyone, they were interpreted differently by specialists from Latin America, Southeast Asia, or Europe. Some would remove any content that even slightly violated the rules, while others let it pass because it was a "cultural norm" for them and they didn't find it suspicious.
How can platforms better train end-users about the possible types of online harm without coming across as controlling?
I’d say it depends on the type of platform, the age of the targeted users, and the possible threats it may face. Apart from the Community Guidelines, which usually mention some online risks, some platforms also show pop-up notifications to double-check whether a person wants to send certain information, or they blur some content and inform the user that it might be harmful or inappropriate.
For the full interview with lots more insights, visit our Medium publication.
Checkstep News
📣 Checkstep is hosting a Trust and Safety networking breakfast in London on February 7th! It would be a great opportunity to meet people in this space and discuss what Online Safety looks like in 2023.
You can RSVP using this link: https://chk.st/3HdIlPW
In case you can’t make the London event, we’re also hosting another one in Brussels: https://chk.st/3kISGfe
📣 Our Head of Product, Yu-Lan Scholliers, is speaking at the Data Science Festival. Stay tuned for the event link!
Moderating the Marketplace of Ideas
💣 Google develops free terrorism moderation tool for smaller websites (Financial Times)
Smaller websites will benefit from Google’s release of free moderation tools to detect terrorist content. The release comes just as new EU and U.K. rules requiring the removal of terrorist content are kicking in.
🤔 Donald Trump to be allowed back on to Facebook and Instagram (BBC)
After a nearly two-year ban, former President Trump will be allowed back on Facebook and Instagram. Despite the reinstatement, the platform says it could heavily penalize him for future offenses.
🗳️ Twitter to Relax Ban on Political Ads (The New York Times)
Twitter previously sidestepped the problem of misinformation in political advertising by disallowing all political ads, but early this month the platform said it would allow cause-based advertising, where marketers can promote content about political issues. The company said it will expand to other forms of political advertising later.
📰 Why TikTok is one of the ‘main priorities’ at BBC News for 2023 (Press Gazette)
There have been lots of changes at the BBC in the past year, including leaning into TikTok in a big way. They are hiring four new journalists to join their social news team despite having expressed some skepticism about the platform in the past.
❌ No more TikTok on House of Representatives’ smartphones (Ars Technica)
Despite TikTok’s discussions with the U.S. federal government over the security of user data, members of the House of Representatives are banned from installing the app on any House-managed devices. The TikTok app is already at least partially banned from government devices in 19 states.
Latest update: TikTok has now been banned on three university campuses.
🤐 What the Jan. 6 probe found out about social media, but didn’t report (The Washington Post)
For fear of offending both Republicans and Big Tech, the House committee investigating the Jan. 6 rioting avoided reporting on the details of the role social media played in the attack. The platforms failed to deal with the online extremism and calls for violence that preceded the Capitol riot. The committee’s staff did, however, write up a 122-page memo that has now been circulated.
🔧 Months after Russian invasion, Meta is tweaking its content policies (The Washington Post)
Facebook has been tweaking its content moderation strategy over the war in Ukraine. The most recent change removed the Azov Regiment, a Ukrainian far-right military group, from its list of dangerous individuals and organizations. Members of the Azov Regiment can now create accounts on Facebook and Instagram and post content as long as it doesn’t break any existing content rules.
🔍 How to track digital mercenaries behind disinformation (International Journalists' Network)
Misinformation doesn’t come from nowhere, and identifying those behind it is just as important as detecting the content itself, according to Giannina Segnini, director of Columbia University’s Master of Science in Data Journalism program. Digital mercenaries often lead covert efforts beneath the surface of political campaigns and other hot-button topics.
🩺 Public Health Agencies Try to Restore Trust as They Fight Misinformation (Kaiser Health News)
Seeing vaccination numbers drop, a number of public health officials are turning to social media to develop messaging that will resonate with their audiences and help them dispel dangerous misinformation.
📱 3 Lessons on Misinformation in the Midterms Spread on Social Media (Brennan Center for Justice)
The Brennan Center released their Midterm Monitor tool last year to understand online conversations about the U.S. election. Their research uncovered three important lessons about misinformation. The article includes the lessons and the Center’s recommendations to address them.
🌱 Climate misinformation 'rocket boosters' on Musk's Twitter (AP News)
Climate misinformation has flourished on Twitter since Elon Musk acquired the platform last year. A coalition led by the Institute for Strategic Dialogue in London released a report tracking climate misinformation before, during, and after the U.N. climate summit last November. The report criticized social media platforms for not enforcing their own policies among other things.
🇫🇮 How Finland Is Teaching a Generation to Spot Misinformation (The New York Times)
The people of Finland once again grabbed the top spot among 41 European countries ranked according to resilience against misinformation. Officials attribute their success to a top-notch school system but also to a concerted effort by the country’s teachers to educate students about fake news.
Three years into an initiative to eliminate toxicity and bullying, EA is seeing reductions in hateful content on their platforms. The company’s Positive Play program has been publicly promoting their strategy to create positive spaces for gaming. Not to be outdone, Take This, a non-profit that has been working to foster inclusive, safe and collaborative spaces in gaming, is celebrating its 10th anniversary of building stronger communities.
🐦 Attacks on U.S. Jews and gays accelerate as hate speech grows on Twitter (The Washington Post)
In much less positive news, online researchers say that real-world attacks in the United States have been tracking with spikes in hate speech on Twitter, especially antisemitic and anti-gay slurs and rhetoric. The Network Contagion Research Institute has a report coming out that shows a connection between real-world incidents and increased usage of the word “groomer” on Twitter. “Groomer” is a common hate-speech term used to malign LGBTQ+ individuals and their supporters.
💬 WhatsApp Launches a Proxy Tool to Fight Internet Censorship (Wired)
Tools to help people get around censorship are increasing, and this month WhatsApp has expanded its anti-censorship measures. The company is making it possible for people facing country-wide censorship to use WhatsApp through proxy connections, potentially allowing them to communicate even if their government has blocked the app.
🫣 Free the nipple: Facebook and Instagram told to overhaul ban on bare breasts (The Guardian)
Meta’s rules banning bare-chested images of women but not men have been called into question by the company’s Oversight Board. The ruling from the board follows Facebook’s censorship of posts from a transgender, non-binary couple who posted photos of themselves topless but with nipples covered. The board found that “the policy is based on a binary view of gender and a distinction between male and female bodies”, which makes rules against nipple-baring “unclear” when it comes to intersex, non-binary and transgender users.
😡 Seattle schools sue tech giants over social media harm (AP News)
The public school district in Seattle is suing TikTok, Instagram, Facebook, YouTube and Snapchat in an attempt to hold them accountable for a mental health crisis among their students. It blames the platforms for deteriorating mental health and behavioral disorders, which have forced the schools to hire more mental health experts and provide additional training for teachers.
😶 Elon Musk Cuts More Twitter Staff Overseeing Content Moderation (Bloomberg)
Twitter is making even deeper cuts to its already diminished trust and safety teams. Workers in the Dublin and Singapore offices have been cut, as have members of teams handling misinformation policy, global appeals, and state media on the platform.
🗣️ Chinese social media app Kwai played a role in Brazil riot (Semafor)
Experts contend that Telegram and Twitter played a part in the riot in Brazil, but they save some of that blame for Kwai, a video-sharing platform similar to TikTok and one of its biggest competitors in China.
🫰 Taliban start buying blue ticks on Twitter (BBC)
Once Twitter’s blue check mark went up for sale, anyone could buy one, including members of the Taliban and their supporters. Before Twitter Blue was introduced, none of the accounts of Taliban officials were considered verified.
🖥️ Can Big Tech make livestreams safe? (Financial Times)
None of the major platforms allow content promoting self-harm and suicide, and it’s hard enough to detect that content when it’s posted, but livestreaming takes the problem up another few notches. Many companies are now racing to develop technology to help the army of human moderators deployed to detect harmful content. This article goes into some depth explaining the nature of the problem and the current state of the art in detection technology.
Regulatory News and Updates
🌏 Internet freedom crackdowns across Asia target citizens, Big Tech (Context)
Governments in several Asian countries have been tightening their rules on allowed internet content. Officials claim they are battling fake news, but dozens of journalists and bloggers have been arrested, and the rising "digital repression" has serious consequences for fundamental rights, including freedom of expression, access to information, and privacy.
🇺🇸 Beyond Section 230: A pair of social media experts describes how to bring transparency and accountability to the industry (The Conversation)
Section 230 of the Communications Decency Act is much discussed as the U.S. Supreme Court takes up two cases this term and potentially two additional ones later. Robert Kozinets and Jon Pfeiffer offer their proposals to tweak the law to bring more transparency and accountability to social media platforms, which have enjoyed a remarkable set of protections since the legislation was first passed.
🇬🇧 Age checks, trolls and deepfakes: what’s in the online safety bill? (The Guardian)
And the attention on Section 230 is nothing compared to the Online Safety Bill as the U.K. Parliament takes up discussions again. The Brookings Institution covers the elimination of the “legal but harmful” provisions, while the Financial Times has an explainer, an opinion piece decrying the legislation’s warts, and another article discussing the potential impact of age-check requirements on platforms. Also, the BBC weighs in with a look at Wikipedia’s view that the proposed bill is too harsh and ill-fitting for volunteer-run sites.