The Twitter Whistleblower, Election Misinformation and more
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
Just as we were preparing to put this month’s newsletter to bed, bombshell news dropped of a whistleblower at Twitter. Peiter Zatko, also known as Mudge in hacking circles, accuses the company of misrepresenting its security posture, in violation of its agreement with the Federal Trade Commission, and of “lying” about taking strong action against spam.
In more normal news, it’s that time of year again when we’re gearing up for the online chaos of misinformation as the U.S. midterm elections approach (November 8th is election day). Social media platforms are doing the same: many of them have announced tweaks to their policies and to how they plan to handle the expected melee of lies and threats of violence that have become a part of elections in our world. Of course, we don’t have to wait for November; we can see how elections have been playing out around the world. Brazil (election day October 2nd), for example, is feeling the effects now, prompting a group of concerned researchers and digital rights advocates to write an open letter to the big platforms with 38 policy recommendations. We have a couple of election stories this month, and you can expect lots more in the months to come.
TikTok was in the news so much that it was overflowing the newsletter, so we shifted our summary to a LinkedIn article, which you can find here. Apart from that, we’ve got stories on new tools for researchers to analyze TikTok and others to help parents watch over their kids on Snap. There’s also an update on an LGBTQ+ report card and, sigh, more on violent extremism.
But first… as we’ve reported many times in this newsletter, new rules for online content are coming. Luckily, we were able to line up Andrew Bennett this month to answer our questions about upcoming regulations and how companies should get themselves prepared. Andrew is a Policy Principal at Form Ventures, a U.K. VC fund investing in start-ups where regulation is a driver of success. Andrew’s expertise is in regulatory strategy, having worked across tech policy at the Tony Blair Institute and founded TxP, a network for people in tech and policy. The following is a shortened version of the full interview, which is available on our Medium publication.
How can online platforms keep up with upcoming regulations? What are the most imperative business implications?
The challenges for different online platforms vary. While some have no idea what their obligations will be, others have a sense of their duties but underestimate the time and effort required to build a fully-fledged trust and safety function. This is not just a new form to fill in or a tick-box exercise: new rules come with new enforcement powers, so many platforms will need to resource significant internal capability, systems and tooling to avoid challenges down the road. For companies trying to keep up, it’s worth looking out for regulators’ guidance. For example, despite the U.K. pausing the passage of its Online Safety Bill for now, the regulator, Ofcom, has published its Roadmap to regulation which sets out a timeline for actions services will need to take.
Do you think the proposed legislation sufficiently protects online innovation and free speech?
There are clearly challenges with the OSB in its current form — from ambiguity in the wording of duties that might incentivize overzealous enforcement (undermining free expression), to contradictory duties that make it hard for services to comply at all. The new rules will also impose direct costs that make it harder not only for new social start-ups to scale, but also services in adjacent sectors — which policymakers may not even realize will be caught up in this — which have some user-to-user aspect to their work.
But it’s hard for anyone to argue that there isn’t a clear problem with online safety that needs addressing, and states have a sovereign right to legislate. For a small number of services, it might be worth engaging with the policy process to help improve the rules. But ultimately, there is still strong political momentum behind the OSB and the base case for all firms should be to adapt to the reality that this is happening.
Please visit our Medium publication for the full interview with lots more thoughts and info about how to prepare.
Checkstep News
📣 Part of our team went down to Cologne for Gamescom! (Missing in the picture, our Head of Design - Jamie).
The Information War
🧨 Former security chief claims Twitter buried 'egregious deficiencies' (The Washington Post)
Why should Facebook have all the fun? Twitter now has its own whistleblower, a former security chief at the company, who claims Twitter deceived federal regulators and has been prioritizing user growth over tackling its large spam problem despite claiming otherwise publicly. A spokesperson for Twitter says that the claims are “riddled with inaccuracies.” This is bound to affect the deal with Elon Musk, and, who knows, it might even be his Get Out of Buying Twitter Free card.
🗣️ How Russian Propaganda Is Reaching Beyond English Speakers (The New York Times)
When Russia’s invasion of Ukraine began, social media companies’ blanket ban on Kremlin-backed propaganda actually covered much less than the world might have hoped. While English-language propaganda is having difficulty getting out, Spanish and Arabic disinformation continues almost unimpeded.
Moderating the Marketplace of Ideas
🏃 ❌ What Happened When Twitter and Other Social Media Platforms Cracked Down on Extremists (ProPublica)
As violent extremists are banned from mainstream social media platforms, they inevitably make their way to fringe channels. ProPublica reporter A.C. Thompson discusses the risks and harms that continue in these out-of-the-way corners of the internet. Despite the rise of several alternative options, 4chan manages to maintain the top spot of extremist nut jobbery.
📄 Leaked Documents Reveal How Roblox Handles Grooming and Mass Shooting Simulators (Vice)
A cache of documents stolen from Roblox by a hacker has been released. The documents offer a surprising look behind the curtain at Roblox, showing how the company thinks about and moderates content on the platform.
🕸️ Inside the secret world of trading nudes (BBC)
While Hunter Moore’s “Anyone Up?” might have been taken down, Reddit seems to be hosting its own version of it. Wasn’t the Netflix documentary “The Most Hated Man on the Internet” supposed to show the impact revenge porn had on these women? Seems like it sure did have the opposite effect…
😕 A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal. (New York Times)
While Big Tech firms are using the most robust technologies to help prevent online abuse of children, their algorithms may be going a bit too far. As we transition to a more digital era and rely increasingly on e-consultations, this particular incident surely caused a lot of trouble for a dad who was merely trying to reduce his son’s pain. There’s also the question of how tech companies can access private photo archives on users' phones. Does online safety require reducing user privacy?
📍 Google Maps Regularly Misleads People Searching for Abortion Clinics (Bloomberg)
When 19-year-old Chey searched for abortion clinics on Google Maps, she was directed to crisis pregnancy centers, which advised her against the procedure and misinformed her that abortion leads to higher risks of breast cancer and mental health issues. Whatever the legal landscape after Roe v. Wade, is it up to Google to decide for these women?
PS: Google has since made changes and will now clearly label healthcare facilities providing abortion.
💰 Being Thrown Off Social Media Was Supposed to End Alex Jones's Career. It Made Him Even Richer (Bloomberg)
Even though Alex Jones was deplatformed by the major social media networks, the time these platforms took to come to that decision acted as a buffer for Jones to grow his fan base tenfold. That makes us wonder: is there such a thing as perfect timing for deplatforming someone?
🤔 Facebook, Instagram Posts Flagged as False for Rejecting Biden's Recession Wordplay (Reason)
PolitiFact has rated the claim ‘The White House is now trying to protect Joe Biden by changing the definition of the word recession' as false. They provide context and justify their rating, which is all well and good. But since Facebook uses PolitiFact as an independent fact-checking organization, posts aligned with this claim are being marked as false. Since there is not actually an “official” definition of a recession, is Facebook quashing political speech? Wikipedia is struggling to figure this one out too.
🗳️ Misinformation spreads about election recount in Colorado county (AP News)
As election season ramps up, so does the spread of misinformed claims. This article by AP News debunks the claim “that election hardware in El Paso County had failed an accuracy test.”
🌚 OnlyFans accused of conspiring to blacklist rivals (BBC News)
OnlyFans is fighting dirty. The company has been accused of abusing the GIFCT fingerprint database, which is used to identify terrorist content, to limit performers’ ability to advertise themselves on non-OnlyFans sites. OnlyFans says there is no merit to the suit filed last November in Florida.
💤 In new election, Big Tech uses old strategies to fight 'big lie' (The Washington Post)
Despite knowing how false claims could delegitimize the upcoming elections, Big Tech is still choosing to ignore the problem, taking minimal action to stop their spread. What is their end game?
❓ Elon Musk is wrong: research shows content rules on Twitter help preserve free speech from bots and other manipulation (The Conversation)
We’ve written about this ourselves and now research has borne out the possibly counter-intuitive idea that content moderation is actually necessary for the exercise of free speech. Several aspects of social media communications hinder a free marketplace of ideas. Add bad actors to the mix and authentic free expression becomes more difficult.
🗂️ Social Media Safety Index (GLAAD Report)
GLAAD, an advocacy group for the LGBTQ community, has issued its Social Media Safety Index, rating platforms according to the safety, privacy, and free expression afforded to LGBTQ individuals using those platforms. It considers issues such as explicit protections from hate speech and whether profiles offer gender pronoun options. All of the platforms rated scored below 50 out of 100. To help rectify that, the report makes specific recommendations to individual platforms about how they can be safer and more welcoming.
👪 Snap launches parental controls. Now it needs parents to use Snap. (Protocol)
Snap is rolling out new safety features to protect kids on its platform. Some functions, for example, will allow parents to see which users their children are friends with and to report suspicious accounts.
🔍 This New Tool Lets You Analyse TikTok Hashtags (Bellingcat)
Bellingcat is announcing a new tool that helps analyze hashtags and other content on TikTok. Use of the tool allowed them to discover that German anti-vaxxers have co-opted the hashtag ‘#schützteurekinder’ (protect your children). Much like QAnon acolytes promoting outlandish conspiracy theories with the ‘#savethechildren’ hashtag, Germans concerned about child welfare are now likely to run across vaccine disinformation.
Remedying COVID-19 and Vaccine Misinformation
💉🤰 How Misinformation About COVID Vaccines and Pregnancy Took Root Early On and Why It Won't Go Away (ProPublica)
Even before vaccines were made mandatory, strategies were in place to exploit the public’s doubts about healthcare, steering people away from vaccines and spreading that doubt to a wider crowd.
🥫 Anti-vax Twitter accounts pushing food crisis misinformation, study finds (The Guardian)
A recent study conducted by the Network Contagion Research Institute revealed that anti-vaxxers and QAnon worshipers seem to have switched strategies, propagating false claims about the after-effects of the Russian invasion of Ukraine, particularly around the food crisis.
🦠 📈 A researcher asked COVID anti-vaxxers how they avoid Facebook moderation. Here's what they found (The Conversation)
Ever wondered how anti-vaxxers get around Facebook’s moderation system?
From self-moderating to finding loopholes through satire and sarcasm, they use it all.
🆖 Facebook and Instagram Remove Robert Kennedy Jr.’s Nonprofit for Misinformation (The New York Times)
The main accounts of Robert Kennedy Jr.’s nonprofit, Children’s Health Defense, were taken down by Facebook and Instagram for repeatedly spreading medical misinformation. However, accounts dedicated to its efforts remain active in several states, and so does Mr. Kennedy’s personal Facebook account. Is this the best way to limit the propagation of such content?
Regulatory News and Updates
🇺🇸 Push to rein in social media sweeps the states (POLITICO)
We missed this one last month, but it’s still worth a look as a good summary of the various U.S. state initiatives to regulate social media. The general tendency is that red states would compel platforms to host hate speech, while blue states want to require more and better reporting mechanisms, presumably to have hate speech removed. All 34 states are probably going to run afoul of the First Amendment, but the Supreme Court is a wild card at this point.
🇪🇺 Big Tech can’t outrun demands for accountability (Financial Times)
Just as Facebook is moving to be less transparent, the European Digital Services Act is taking shape and could put an end to their attempts at a new veil of secrecy. One important aspect of the coming legislation is more transparency around content-related harms. Platforms that host advertising will have to disclose information about who is behind those ads, and they will have to re-open access to researchers who study how content is promoted and moderated, among other things.
🇦🇺 Google to pay $60m fine for misleading Australians about collecting location data (The Guardian)
Google copped to misleading users in Australia about exactly when their Android devices were tracking their locations. They agreed to pay $60 million in penalties to the Australian agency responsible for overseeing fair competition.
Tweets worth a second look