Parents vs Social Media, HB 20 and The Buffalo Shooting
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights into content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
This month brings the very sad news of another horrific shooting in the United States. The 18-year-old suspect used Discord in advance of the attack to lay out his plans and then live-streamed the shooting on Twitch. Twitch responded very quickly and took down the stream, but that didn’t stop someone from copying and spreading it elsewhere, where it has now garnered millions of views. An interview in The New Yorker with Kathleen Belew, an expert on the white power movement, connects this most recent attack to the history of a movement that has been promulgating racist replacement theory ideas and the notion of “white genocide” for many years now. The internet has been a great help in getting their messaging out.
Back in December’s newsletter we were a little dismissive of a new Texas law that would compel social media companies to host speech they deem dangerous or otherwise inappropriate for their platform. Our bad. An appeals court has now reversed the original ruling that stopped the state from enforcing the law (you know, on account of the First Amendment and all). The tech industry has appealed the decision, but until this plays out, we don’t envy anyone trying to reconcile the Texas law and the potentially contradictory new legislation coming from the U.K. and Europe. Meanwhile, our slightly mocking attitude towards a very similar law in Florida was justified this week by a different circuit court affirming that the First Amendment still applies and limits the state government’s ability to interfere with free speech. Depending on what ultimately happens in Texas, this could very well end up at the Supreme Court.
Our expert this month is ex-FBIer Andrew Johnston. Andrew is the CEO and co-founder of Recluse Laboratories. Drawing on twelve years of experience in cybersecurity, Andrew has worked with clients both in an incident response capacity and in proactive services. This text has been edited for space.
1. What motivated you to start Recluse Labs?
We started Recluse Labs to solve problems endemic to the threat intelligence industry. Threat intelligence, whether conducted in the private industry, academia, or government is reliant on a set of highly skilled individuals. These individuals are tasked with gaining access to adversarial communities, building believable identities, and extracting actionable information to form intelligence. Such individuals are few and far between, and the effect is that threat intelligence is often incredibly limited.
2. Are there specific patterns that online platforms should be mindful of while tracking terrorist groups?
One of the more interesting patterns is the mobility of many terrorist groups from one platform to another. In the past few years, there has been plenty of media coverage of up-and-coming social media platforms being swarmed with profiles promoting ISIL and other terrorist groups. Oftentimes, this creates significant challenges for a nascent platform, especially those that aim to have less restrictive moderation than some of the major players. It is worth noting that a strategy of aggressive banning doesn’t appear to be effective; terrorists have come to treat profiles as disposable and regularly create new accounts.
3. Quite recently, a known terrorist group, the Taliban, took control of Afghanistan’s government. What should the platforms’ stance be on it?
This is a hard question to navigate, as the answer will likely vary greatly depending on the platform’s philosophy. There is merit to the concept that the Taliban are a significant geopolitical force and denying their content as a policy hinders people from seeing the whole story. Moreover, hosting such content gives other users an opportunity to criticize, fact-check, and analyze the content, which could enable users who would otherwise be influenced by the propaganda to see the counterargument on the same screen.
You can read the full interview on our Medium publication.
Checkstep News
📣 We’re extremely excited to announce our recent $5 million funding to support our mission of creating a safer and more inclusive internet.
Do join the Checkstep Team and our lead investors in celebrating this achievement. All you need to do is register here!
📣 Our Co-founder and CTO Jonathan Manfield recently graduated from Oxford University with a Distinction in Artificial Intelligence for Business.
The Information War
🏛️ Four House committee chairs ask Big Tech to archive evidence of war crimes in Ukraine (NBC News)
Four high ranking U.S. Representatives are asking social media companies to flag, then archive and protect potential evidence of Russian war crimes. The New York Times is already reporting on apparent war crimes with evidence from traditional sourcing, but a lot of other evidence has been showing up on social media.
😕 🤔 How Russia employs fake fact-checking in its disinformation arsenal (DFRLab)
The Kremlin and friends are spreading lies and stoking conspiracy theories under the guise of fact-checking. This form of misinformation reaps double rewards for Russia as it has the additional effect of making people suspicious of fact-checking itself.
⛔ YouTube removes more than 9,000 channels relating to Ukraine war (The Guardian)
YouTube has removed as many as 70,000 videos under its policy against content that denies major violent events. Surprisingly, YouTube is one of the few Western social media platforms still allowed in Russia.
💣 🌪️ Twitter expands content-moderation rules to cover crises like war and disasters. (The New York Times)
Despite Elon Musk’s pending purchase of Twitter and his antagonism towards its moderation policies, the company has created a new crisis misinformation policy intended to address misinformation about international armed conflicts, including the war in Ukraine. Twitter will deemphasize tweets that spread misinformation that could otherwise spread rapidly across the platform. The company will stop users from retweeting misleading content and block it from prominent areas like people’s timelines and in search results.
Moderating the Marketplace of Ideas
📉 Social media sites work to limit spread of Buffalo shooting footage (Ars Technica)
Another horrific and racially motivated mass shooting in the U.S. has challenged social media platforms to keep up with a proliferating number of copies of a video of the shooting. The shooter live-streamed his attack on Twitch, which was able to take it down within minutes of it going up. But even that quick action wasn’t enough to stop numerous copies from showing up on other platforms.
👁️ The Enduring Afterlife of a Mass Shooting's Livestream Online (The New York Times)
Analysis from The New York Times shows how hard it can be to remove video clips of mass shooting incidents even years after they happen. The Buffalo shooter took inspiration from a similar attack in Christchurch, New Zealand in 2019 and will no doubt inspire future massacres despite media platforms’ unending attempts to remove content of this nature.
🥸 A fake Twitter account posing as CBS News shared false information about the Buffalo shooting (Poynter)
It’s not uncommon for fake Twitter accounts posing as legitimate news organizations to share misinformation in the midst of unfolding events. So, it’s not surprising that exactly that happened in the aftermath of the Buffalo shooting. An account posing as a CBS News outlet in New Zealand shared incorrect and invented information about the attack.
🔮 Former Facebook, WhatsApp Employees Lead New Push to Fix Social Media (The Wall Street Journal)
Several former Meta/Facebook employees are launching new startups with the idea that they can fix the sins of the father, or in this case, their former company. With ideas about overreliance on advertising, product misdesign, and overly large scale, they all have diagnoses for what’s ailing the existing social media platforms. Time will tell if their remedies do the trick or not.
🧑🏫 Do browser extensions keep anyone away from fake news sites? Maybe a tiny bit (Nieman Lab)
If our goal is to find interventions that will have a positive effect on people’s news consumption (i.e. pushing them towards more reliable content), this study says browser extensions that rate news sites probably won’t get us there.
🤦 How the Biden administration let right-wing attacks derail its disinformation efforts (The Washington Post)
The recently announced Disinformation Governance Board is now on hold. Last month the Department of Homeland Security announced the new group naming Nina Jankowicz as the head. Jankowicz almost immediately became the target of right-wing attacks full of the kind of misinformation she has spent her career fighting. The Biden administration is facing criticism for botching the rollout and its messaging.
🐔 ⁉️ Conspiracy theorists flock to bird flu, spreading falsehoods (AP News)
Some among the COVID conspiracy-minded are now expanding their false claims to bird flus that have long been affecting chicken farmers. While many conspiracists are newly turning their attention to bird flus, Joseph Mercola, who you’ll recognize from his number one spot on the Center for Countering Digital Hate’s Disinformation Dozen list, has apparently been at it for years.
🫠 Opinion | Roe dissent must be protected, even outside homes of justices (The Washington Post)
A staunch pro-life advocate makes an argument for allowing all kinds of speech, even if it’s inaccurate or makes others uncomfortable. He argues both that picketing in front of Justice Kavanaugh’s home should be allowed and that Trump should not be permanently banned from Twitter.
🪧 Spotify stopped political ads in 2020. It just quietly brought them back. (Protocol)
Spotify banned political ads in the runup to the U.S. presidential election in 2020. Now with the country in the midst of midterm primary elections, the company, claiming to have strengthened and enhanced their process, is opening the door to political advertisers again.
Regulatory News and Updates
🇺🇸 ❓ Texas fires back at tech industry in new Supreme Court filing (The Washington Post)
The future of online content moderation seems to be in limbo while the Supreme Court decides whether or not to block Texas’s recently enacted HB 20 law.
🤠 Texas HB 20 conflicts with Texas abortion law (Protocol)
Insightful article about the inherent conflict in two recently passed Texas laws. “[They] create a nearly unanswerable question for platforms: Does a post about abortion access in Texas merely constitute a viewpoint that must, under HB 20, remain on the platform? Or is it, under SB 8, considered illegal content that platforms would be wise to remove?”
📜 GOP-Led Legislation Would Force Breakup of Google's Ad Business (The Wall Street Journal)
A significant piece of antitrust legislation that has been introduced in the U.S. Senate would prohibit very large companies from participating in more than one part of the digital advertising business. The legislation takes aim at Google and could lead to breaking up parts of its advertising business.
🚸 ⚖️ California parents could soon sue for social media addiction (AP News)
Joining ranks with Florida and Texas, the state Assembly in California has passed new legislation going after social media platforms. California’s proposed law is naturally different, providing for parents to bring lawsuits against platforms for harming children who have become addicted to their products. The bill now moves to the state Senate where it could undergo significant changes or possibly not be passed at all.
Tweets worth a second look