Calls for more transparency and safety protections
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights into content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
Transparency is the key idea this month. We need more of it. Governments, citizens, and even employees inside Big Tech are pushing for it. Also in good-ish news, a few large purveyors of disinformation have been losing steam (but we’re not counting them out yet). Apart from that, a growing content subscription service has to do more about children joining and posting explicit content. And unavoidably we have an update about the Facebook Oversight Board.
But first, a word with Checkstep’s Head Research Advisor, Preslav Nakov. Preslav has established himself as one of the leading experts on the use of AI against propaganda and disinformation. He has been very influential in the field of natural language processing and text mining, publishing hundreds of peer-reviewed research papers. He spoke to us about his work dealing with the ongoing problem of online misinformation. We’ve included a shortened version of our conversation here. You can find the full interview on our Medium publication, the Checkstep Checkpoint.
Expert’s Corner with Preslav Nakov
1. What do you think about the ongoing infodemic? With your extensive work on fake news, do you think there will be a point where we can see a decrease in such content?
The infodemic represents an interesting blending of political and medical misinformation and disinformation. Now, a year and a half later, both the pandemic and the infodemic persist. Yet, I am an optimist. The severity of the pandemic has started to decrease (we see full stadiums at EURO 2020 with no masks and no social distancing), thanks in large part to the vaccines. I expect that the infodemic will soon follow a similar downward trajectory. Yet, it will not die out completely; it will just decrease.
2. What drove you to pursue research in the fake news and misinformation domain?
As part of a collaboration between the Qatar Computing Research Institute (QCRI) and MIT, I was working on question answering in community forums, where the goal was to detect which answers in the forum were good. This got me interested in the factuality of user-generated content. Soon, along came the 2016 US Presidential election, and fake news and factuality became a global concern.
3. How should platforms better prepare themselves?
Big Tech companies are already taking this seriously and have been developing in-house solutions for years. However, complying with various pieces of new legislation is a challenge for small and mid-size companies. When it comes to content moderation at scale, there is a clear need for automation, which can take care of a large number of easy cases, but the final decision in hard cases should be taken by humans.
4. Any personal anecdotes where you fell prey to fake news?
I have fallen prey to fake news many times, and I keep being fooled from time to time. Many friends and relatives send me articles asking me: is this fake news? In most cases, it is easy to tell, for example, maybe the article is just 2-3 sentences long and doesn’t give much support to the claim in the title, or maybe the website is a known fake news or satirical one, or a simple reverse image search reveals that the photo in the article is from a different event, or maybe the claim was previously fact-checked and known to be true/false, etc. Yet, in many cases, this is very hard, and my answer is: I am sorry, but I do not have a crystal ball. In fact, several studies in different countries have shown the same thing: most people cannot distinguish fake from real news; in the EU, this is true for 75% of young people.
Please visit the Checkstep Checkpoint for Preslav’s full interview.
Checkstep was among the 32 newly identified safety tech providers in the UK, a country that accounts for 25 percent of the global market share for safety tech. We’re very proud to be a part of it!
Moderating the Marketplace of Ideas
After getting schooled by its own Oversight Board about arbitrary, permanent bans, Facebook has come up with a policy that reduces Trump’s life sentence to a two-year suspension, to be lifted only “if the risk to public safety has receded.” Facebook also announced an end to free passes for politicians who violate hate speech rules.
A BBC investigation recently revealed that minors were selling explicit videos on OnlyFans to earn money, despite the platform’s policies meant to ensure child safety. Dame Rachel de Souza, the Children’s Commissioner for England, has asked the platform to do better!
Parler is back, but does anyone care? After its return to the App Store, the platform had its worst month since its big spike last year.
Twitter is enlisting the crowd’s wisdom to combat misinformation. A new initiative asks fact-checking participants to inject a little reality into misleading and false tweets through newly released Birdwatch notes. Who checks the checkers? Why, the rest of the crowd of course! We’re curious to see if Twitter’s decentralized approach will work.
The Nigerian President vs Twitter - who’s going to win? The Twitter ban came two days after the platform took down one of President Buhari’s tweets for violating its policies. Many Nigerians see the ban as an act of revenge, not righteousness.
Thank you, next! 🎶 That’s what Citizen has been telling its content moderators when they ask for mental health support. Last year we saw Facebook pay a $52 million settlement to its content moderators for failing to provide support after exposing them to extremist content. It seems Facebook isn’t the only one contributing to this problem; smaller platforms have simply gone unreported, until now.
It’s not the first time we’ve seen Facebook employees unhappy with the company’s MO. This time it was the platform’s bias against Arabs and Muslims. The employees asked for the formation of an internal task force to investigate these biases. Following the publication of internal complaints, Facebook’s Instagram tweaked its algorithms to promote more pro-Palestinian messages. It seems it takes internal “leaks” for Facebook to bring about any change.
A sketchy magazine in Brazil somehow prevailed against fact-checkers who pointed out its dubious reporting. Revista Oeste successfully argued that fact-checking hurts its bottom line. Aos Fatos, a member of Facebook’s fact-checking program, is hoping for an appellate judge who is a bit more familiar with the Brazilian constitution.
Manchester United star striker Marcus Rashford was subjected to at least 70 racial slurs after the team's loss to Villarreal. It seems the football community’s letter demanding action was to no avail, as players continue to receive hate on social media.
Alas, poor Q, we knew him. A fellow of infinite jest, of most excellent fancy…Where be your gibes now? Okay, we shouldn’t make light of the actual harmful content spewing from QAnon, but the good news is that increasingly aggressive content moderation seems to be helping. Researchers have found large drops in QAnon messaging, attributing them both to stepped-up efforts from Facebook, Twitter, and Google and to Trump’s election loss.
Remedying COVID-19 and Vaccine Misinformation
Concerned about maximizing vaccine acceptance, the EU scolds Big Tech about the quality of their disclosure reports on COVID-19 misinformation. At the same time, the EU Commission says to keep them coming for another six months at least.
Oops, we might have been too quick to clear humanity of having a hand in the hatching of COVID-19. We still don’t know where the virus came from, but Facebook is no longer taking down comments that suggest man-made origins.
In the topsy-turvy world of post-truth America, a lifetime of actual public service counts for little when you’re a convenient scapegoat. Devoid of even the slightest kompromat, Dr. Fauci’s emails still serve as an irresistible prop for agents of misinformation who are literally calling for his head.
Regulatory News and Updates
Just because Section 230 is too hot to handle doesn’t mean we can’t have a little sunlight on social media algorithms. Senators from Massachusetts and California have proposed a new bill (unrelated to Section 230) that would require platforms to inform users how content is selected for them. Also included: algorithms can’t discriminate based on protected characteristics like race and gender. That last part should be a given, but nobody told the algorithms, so yeah, it’s probably a good idea to put it on the books.
The Indian government wants to get up in your business. To be fair, the new regulation is only supposed to be for people credibly accused of wrongdoing, but it means breaking the WhatsApp encryption protecting everybody’s privacy. WhatsApp’s lawsuit isn’t helping the government’s already tense relations with Big Tech.
Tweets worth a second look
Graphika (@Graphika_NYC): Today @Graphika_NYC published Posing as Patriots, our investigation into efforts by Russia-linked actors to target U.S. far-right communities on Gab, Parler, and patriots[.]win using a host of custom memes and photos: https://t.co/m2AqKgfa4j https://t.co/MPJeWl1wNs