Possible updates to Section 230, deepfake shutdowns and more…
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights into content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
The really big news this month is the Supreme Court hearing arguments in the Gonzalez v. Google lawsuit that could potentially change the entire character of the internet. Unfortunately, we won’t know the decision until later in the year, but rest assured, we’ll be here to keep you updated. In other news, there are just a lot of stories related to misinformation for some reason (it’s not even an election year in the U.S.!). There’s the first deepfake in the wild; at the same time, Harvard is shutting down its Technology and Social Change Project, which has been publishing research on covid misinformation among other things. Elsewhere, Google is launching a new initiative to inoculate people against misinformation at the same time it’s shrinking the resources it devotes to blocking it generally. Contradictions abound! Stick with us; we’re here to help you make sense of it all.
Checkstep News
📣 We attended the ICE Conference in London. Definitely a place to be to gain new insights into iGaming.
📣 We hosted a Trust and Safety Breakfast to celebrate Safer Internet Day. It was great to meet fellow professionals in the space and discuss how we can work together to create a safer internet.
📣 With the Digital Services Act only a year away from being fully enforced, we hosted an event in Brussels to mark the countdown and discuss the practical realities of content moderation in new regulatory environments.
Moderating the Marketplace of Ideas
🏛️ Supreme Court considers if Google is liable for recommending ISIS videos (The Washington Post)
The Supreme Court heard oral arguments in the Gonzalez v. Google lawsuit this month. The case argues that tech companies should be legally liable for harmful content their algorithms promote, a claim that in the past would have gone nowhere because of protections from Section 230 of the Communications Decency Act. But with a Supreme Court less concerned with precedent than in the past, and plenty of Section 230 critics on all sides, this might be the time the law gets reconsidered.
👾 The People Onscreen Are Fake. The Disinformation Is Real. (The New York Times)
Legitimate deep fakes (we just can’t let a good oxymoron go by when it’s just sitting there) were found in the wild for the first time. Videos of broadcasters for a faux news outlet, Wolf News, were actually computer generated and not of any real people. The videos were being promoted by pro-China bot accounts on Facebook and Twitter. We shouldn’t get too concerned, however; as one commentator pointed out, there is already so much fake content created by real humans that no one could possibly keep up with it all.
💻 Ex-Twitter execs to testify in Congress on handling of Hunter Biden laptop reporting (The Guardian)
Former Twitter executives were called before the U.S. House of Representatives because of the company’s 2020 decision to temporarily restrict comments referring to a New York Post article about Hunter Biden’s laptop. Twitter execs say that the story triggered their rules against sharing hacked materials; Republicans in Congress say Twitter doesn’t like conservatives. To be fair, Twitter did say that suppressing the story was a mistake, although they adamantly deny they were pressured by Democrats.
😶 Harvard Misinformation Expert Joan Donovan Forced to Leave by Kennedy School Dean, Sources Say (The Harvard Crimson)
Joan Donovan, whom we’ve cited several times in this newsletter, is being forced out of her role at the Shorenstein Center on Media, Politics and Public Policy, and Harvard is ending her research project on online misinformation, according to the Harvard University paper. Semafor is reporting that the shutdown of the research project is for bureaucratic reasons and not related to Donovan, who has not commented on the story. The Washington Post is also reporting on the shuttering of the project.
🕵️ Google to expand misinformation ‘prebunking’ in Europe (AP News)
Google is planning to do its bit against misinformation with a new campaign in Germany: videos across various platforms that teach people how to spot fake news. A similar initiative is planned for India.
📱 Combating Disinformation Wanes at Social Media Giants (The New York Times)
… but at the same time, Google and other social media companies are actually shrinking their efforts to combat misinformation generally. Anticipating a worsening economy and facing political and legal pressure, social platforms are deprioritizing their fight against false information, which is likely to further erode trust online.
🗳️ Revealed: the hacking and disinformation team meddling in elections (The Guardian)
An investigation has revealed a major misinformation operation that claims to have manipulated over 30 elections around the world using several techniques including automated misinformation on social media. The for-hire operation called Team Jorge, which will covertly meddle in elections or operate on behalf of corporate clients, was exposed by an international consortium of journalists.
🧑‍💻 Key facts about Gettr (Pew Research)
Pew Research reports on a study they did of seven alternative social media sites that bill themselves as free speech alternatives. Gettr is especially popular in Brazil and may have had a role in the attempted coup there. The study shows that the reach of these sites is quite limited, they do perform some content moderation despite their criticism of the practice on other sites, and the communities that have formed are small but are generally quite satisfied with the information they receive. Pew also did an analysis of BitChute.
🌐 Pakistan blocks Wikipedia for 'blasphemous content' (BBC News)
Wikipedia has been blocked for people in Pakistan. The Pakistan Telecommunication Authority gave the Wikimedia Foundation 48 hours’ notice that the site contained blasphemous material and then cut off access from within the country. Details of the violating content are not available. Tinder, Facebook and YouTube were previously blocked in the Muslim-majority country.
😵‍💫 Kenyan court says Facebook's parent, Meta, can be sued for psychological distress (The Washington Post)
Keeping content moderators at arm’s length by contracting to outside firms might not be enough to release Facebook from its responsibility to maintain worker health. A judge in Kenya said Facebook can be sued in that country despite the company’s argument that it has no office there and doesn’t operate in Kenya. The court’s ruling could matter for more than Facebook, which employs around 15,000 moderators around the world, mostly through contractors; YouTube and other Google products use about 10,000 employees to moderate their platforms. We’ve covered in the past just how hard content moderation can be on those tasked with protecting us from the worst of the internet.
💬 Gannett ends online comments for a majority of its news sites (Poynter)
Gannett, which is the largest newspaper publisher in the U.S., has stopped allowing readers to post comments on many of its news sites. They cite the difficulty in staffing content moderation efforts as the reason.
🚨 Toxic gaming tackled by Ubisoft's unique police alert system (BBC News)
Like most online operations, and maybe more than most, the gaming industry has been trying for years to get to grips with online rape jokes, racism, and bullying. Ubisoft, the maker of major games like Assassin's Creed and Rainbow Six, has signed a deal with police to try to tackle toxicity for its players. Involving police directly with a trust and safety team is a new approach, so we’ll see how it works out, given that less than 0.01% of current cases require police intervention.
🗣️ Audible reckoning: How top political podcasters spread unsubstantiated and false claims (Brookings Institution)
We’ve written before about how difficult it can be to moderate content in podcasts. The Brookings Institution has a new report that looks at the role of political podcasting in spreading false claims. Overall, the podcasts they reviewed have a huge audience and span the political spectrum. Among other things, they found that misinformation is quite common in podcasts, especially related to the 2020 U.S. presidential election and the coronavirus outbreak. The Hill covered the report, emphasizing that ten prominent podcasters were responsible for the majority of all misleading content.
😳 Influence Networks in Russia Misled European Users, TikTok Says (The New York Times)
Last summer, thousands of TikTok accounts made a coordinated and covert effort to influence people’s opinions about Russia’s invasion of Ukraine. The accounts pretended to be European but were part of a network operating out of Russia. The accounts posted pro-Russia propaganda in local languages and attracted more than a hundred thousand followers before being discovered and removed by TikTok.
🧒 Musk Pledged to Cleanse Twitter of Child Abuse Content. It's Been Rough Going. (The New York Times)
Following layoffs and departures since Elon Musk’s ownership of Twitter, the problem of child sexual abuse material has grown significantly on the platform, and apparently Twitter has stopped paying for software designed to detect such content.
😧 ‘Every Parent’s Nightmare’: TikTok Is a Venue for Child Sexual Exploitation (The Wall Street Journal)
Meanwhile, TikTok is battling its own problem with child exploitation. As a magnet for teens, TikTok has also become a preferred platform for adults with an inappropriate sexual interest in kids.
🤷 The one problem with AI content moderation? It doesn’t work (Computer Weekly)
Fully automated moderation systems are not likely to work. Even humans struggle to agree on a definition of “toxic,” which makes the problem all the more difficult for AI content moderation systems. Identifying dangerous sentiments in the ugly and gray world of human interaction is a very hard problem for machine learning. This review of AI algorithms in the Guardian bolsters the point.
Regulatory News and Updates
🇺🇸 How the US Could Ban TikTok in 7 Not-So-Easy Steps (Wired)
Despite the app’s popularity and the ineffectiveness of past attempts to block it, several U.S. lawmakers from both parties are planning to introduce legislation that could lead to a ban on TikTok. Their concern is that China is exploiting the app to undermine American interests, although no evidence has surfaced to indicate that is true.
🇪🇺 Twitter gets EU yellow card for disinformation reporting effort (Reuters)
Big Tech companies reported on their progress over the last six months in complying with the updated European Union (EU) code of practice on disinformation. While the EU Commission dinged all of them, it singled out Twitter in particular for coming up short in its reported efforts to tackle disinformation.
⚖️ 🇺🇸 Lawmakers Seek Bipartisan Push on Big Tech Regulation. Voters’ Views Indicate Censorship, Content Moderation Could Be Sticking Points (Morning Consult)
As lawmakers consider online regulations, they are faced with the fact that Republicans and Democrats have diametrically opposed views of what that should look like. Over half of Republicans view censorship as a major threat, while an equal share of Democrats believe social media platforms need stricter content moderation policies.