Staged platform blackouts, landmark regulations and oh, The Facebook Files
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights on content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
Honestly, we don’t want to pick on Facebook, but there was so much coming out this month it couldn’t be ignored. We’ve grouped all the Facebook stories upfront, so just scroll past if you’re already feeling full up on Facebook news. Once you’re past that, we’ve got user uprisings against platform complacency, German elections, new regulations, and lots more.
Checkstep News
📣 If you haven’t signed up for the Truth and Trust Online Conference, you can do so here. Checkstep is proud to be a supporter and a sponsor for this conference.
PS: You can still submit a journal paper to the special issue of the ACM Journal of Data and Information Quality on “Truth and Trust Online” -- the submission deadline is 15 November 2021.
Moderating the Marketplace of Ideas
✌️ No More Apologies: Inside Facebook's Push to Defend Its Image (New York Times)
Why be defensive when you can push a surplus of pro-Facebook content to subdue the negative press? And it’s been an especially damning stretch of press for Facebook, from banning New York University’s misinformation researchers to peddling content known to be harmful to teen girls. Facebook has had a busy month denying claims in The Wall Street Journal’s Facebook Files, a 5-article series on how Facebook gives preferential treatment to celebrities when it comes to moderation, how the platform angers its users to maintain engagement, how it ignores the welfare of teenagers, how drug cartels and human traffickers misuse the platform in developing countries and, last but not least, how it supported the anti-vax movement. The secret strategies laid out in Facebook’s “Project Amplify”, which involve manipulating the platform’s News Feed to promote positive news often written by the company itself, may have come too late.
Having said that, we might want to give the platform some slack. Perhaps it’s time we all lent a hand in this global effort to tackle online harm.
📈 💻 Troll farms reached 140 million Americans a month on Facebook before 2020 election, internal report shows (MIT Technology Review)
Even after Mark Zuckerberg admitted it wasn’t ‘crazy’ that misinformation on Facebook had an impact on the 2016 U.S. presidential election, they failed to do much about it. A Facebook internal report reveals that election propaganda from Eastern European troll farms was promoted by Facebook to the extent that it reached nearly half of Americans leading up to the 2020 election. And in an almost unbelievable twist, these inauthentic actors even received payments from Facebook for the mostly plagiarized content.
🧽 How Facebook Relies on Accenture to Scrub Toxic Content
In the great American capitalist tradition, if you’ve got some unpleasant work, outsource it! Facebook pays Accenture a hefty $500 million per year to take its content moderation problems off its hands. For that kind of money, Accenture is expected to keep any doubts to itself, and apparently it has them.
🙏 Facebook Apologizes After A.I. Puts ‘Primates’ Label on Video of Black Men (New York Times)
If you get an uneasy feeling of deja vu, this one’s a doozy. Despite Google’s notorious 2015 episode, which exposed an outrageous lack of diversity in its AI training data, Facebook made the same mistake six years later.
🤭 🤔 As Black users complain of censorship, LinkedIn faces a perception problem
And Facebook isn’t the only company stumbling over race issues on their platform. LinkedIn users discussing race and racial topics find their posts removed or not getting the expected levels of traffic. LinkedIn claims the problem stems from an unfortunate coincidence of content-neutral technical errors, but we suspect moderation tools that lack an understanding of context.
✋ AP urges DeSantis to end bullying aimed at reporter (AP)
In a sign of our times, even a state governor’s top staffer takes up bullying and harassment. Florida Governor DeSantis’ press secretary, Christina Pushaw, was locked out of Twitter after she tried to sic a mob of followers on a reporter because she objected to a story he wrote.
🤷 For misinformation peddlers on social media, it's three strikes and you're out. Or five. Maybe more (CNN)
Three strikes, or is it a line drive with bases loaded, for some habitual spreaders of disinformation. CNN tries to make sense of platforms’ complicated and confusing policies for repeat offenders.
🧒 A Hate Group Targeted My Kid Online (New York Times)
The face of hate is not just crowds of white men in white hoods. Gaming and messaging platforms are becoming more popular among extremists and the alt-right. With 90% of U.S. teens and 64% of tweens playing online games, hate groups are stepping up their efforts both to recruit and to target younger users online.
👽 The Rise Of Voice Cloning And DeepFakes In The Disinformation Wars (Forbes)
With advances in technology, there’s always an element of potential risk. Cue deep fakes and voice cloning capabilities. In the wrong hands, they can do more harm than good.
A prime example is a new AI-based app that turns anyone into a porn star by swapping their face into porn videos.
🪧 🇩🇪 Disinformation, fake news plague German election campaign (Deutsche Welle)
According to a new report published by Avaaz, a civil liberties organization, Germany’s Green party candidate Annalena Baerbock is bearing the brunt of disinformation compared to other candidates. As soon as she entered the race, she became the target of more than 70% of false claims, including the lie that she supports pet bans. The German government further believes that the Russian group Ghostwriter may try to take advantage of the already volatile situation through cyber attacks as part of its disinformation campaign. With the election date fast approaching, may the least “disinformed” candidate win…
In addition to disinformation campaigns, people are also using private messaging apps such as Telegram to spread bogus conspiracy theories in the hope of starting protests to swing election results.
💬 🔍 Is WhatsApp Really 'Lying' To You—Is This A Reason To Quit? (Forbes)
Thanks to an accusatory tweet by Telegram, WhatsApp’s 2 billion users are now questioning whether their messages are actually encrypted.
🧑💻 Facebook encryption could prevent detection of child abuse, NCA says (The Guardian)
Meanwhile, as Facebook claims to widen the scope of its end-to-end encryption capabilities to all its products, law enforcement officials fear the move might hinder their efforts in detecting online child sexual abuse material (CSAM).
💣 Twitter's Dantley Davis is waging a war against toxicity and misinformation (Fast Company)
Dantley Davis, Twitter’s new chief design officer and first Black executive reporting directly to the CEO, is taking on two monumental tasks: fix toxicity on the platform and get Twitter out of its product design slump.
🤝 Democrats create bilingual tool to combat disinformation aimed at Latinos (NBC News)
In an effort to tackle Spanish-language COVID disinformation propagated via radio shows and social media platforms, Democrats launched a bilingual digital information hub called “Juntos Together”. The hub makes use of easy-to-share infographics to debunk false claims surrounding vaccines.
❌ Amazon denies reports that it will proactively moderate content on its hosting service (The Verge)
Amazon’s future moderation plans seem to be up in the air after their recent refutation of a Reuters report claiming that Amazon Web Services (AWS) would move to a more proactive moderation approach through new policy updates and expansion of its in-house Trust and Safety team.
⚠️ Ivermectin, the Crate Challenge, and the Danger of Runaway Memes (The New Yorker)
The pandemic has given peer pressure a new face in the form of online memes. People are seeing multiple videos promoting hazardous acts ranging from climbing stacks of milk crates to ingesting a deworming drug meant for animals. Even though platforms end up taking down this content, their response is often too slow. The most effective way to spot such content before it becomes a viral trend is yet to be discovered.
🗣️ Twitch And Reddit Protests May Be Only the Beginning (Wired)
Staged platform blackouts seem to be the best approach for Twitch and Reddit users to force the platforms’ hand in improving online user experiences. In a sign that it’s heard the message, Twitch sued two users whom it suspects of starting the hate raids on the platform.
📱 🔔 Conspiracy Theories Have Gained Traction Since 9/11 Thanks To Social Media (Forbes)
Conspiracy theories, from the Salem witch trials to the very recent wild claims about the origin of the coronavirus, have never really abated. No doubt social media platforms have made it extremely easy to propagate such content, especially when people need a reason to validate their existing views. Ignoring such content rather than debunking it seems to be the best course of action.
🔞 OnlyFans suspends proposed ban on sexually explicit content (CNN)
OnlyFans made a shocking decision to ban sexually explicit content from its platform in order to appease the banks supporting it, only to suspend the ban a week later, stating that the banks had complete faith in the platform.
📉 😕 New study finds internet freedom is rapidly declining worldwide (The Hill)
A recent study conducted by Freedom House highlighted how users felt that their internet freedom was being constrained as a result of increased regulations and content moderation.
Remedying COVID-19 and Vaccine Misinformation
🙅 Ron DeSantis and GOP refuse to correct vaccine misinformation (Washington Post)
Never mind the real-world harm misinformation causes, powerful politicians don’t want to rock the disinformation boat if it helps keep them in power. Of course, this makes our job of fighting disinformation that much harder, but that seems to be what we’re up against.
🦠 🔮 A Harvard professor predicted COVID disinformation on the web. Here's what may be coming next (Boston Globe)
Joan Donovan, a Harvard researcher known to many in the online disinformation space, was one of the first to warn about the infodemic that would accompany the COVID-19 outbreak. She’s now calling out Big Tech for algorithms that continue to promote lies damaging people’s health and lives.
⛪ Twitter permanently bans Greg Locke, pro-Trump, anti-vax pastor (Washington Post)
In another case of deplatforming, only this time a controversial pastor, Twitter permanently banned Greg Locke for anti-vax misinformation. After discovering the power of social media in 2015, he indulged in sharing homophobic content and recently took a strong stance against wearing masks.
Regulatory News and Updates
🏛️ MP calls for Facebook to be punished if it holds back evidence of harm to users (The Guardian)
Facebook is under fire from lawmakers on both sides of the Atlantic. The U.S. Congress has opened investigations into Instagram’s disregard for its impact on the mental health of teenage girls through the propagation of content that fuels body-image insecurities. Meanwhile, in the UK, MPs are pushing for increased fines to compensate for the platform’s lack of accountability.
🇧🇷 Bolsonaro’s Ban on Removing Social Media Posts Is Overturned in Brazil (New York Times)
President Bolsonaro, former President Donald Trump’s political ally, issued a one-of-a-kind mandate to prevent social media companies from removing misinformation or other content that violates their rules. Sadly for him, the Brazilian congress and supreme court didn’t share the president’s “unique” thinking on the issue.
🇹🇷 Turkish government increases pressure on social media (DW)
More stringent regulations seem to be in the works from the Turkish government to tackle “fake news, disinformation, provocation and lynch justice.” While this seems to be a great initiative to tackle the spread of false news, many fear it’s just another way for the government to limit freedom of speech and opposition voices.
📜 Texas passes law that bans kicking people off social media based on ‘viewpoint’ (The Verge)
You may remember us telling you about the Florida legislature’s embarrassing day in court. Now, in the great “Let’s Trample the First Amendment Challenge”, Texas says, “Hey, Florida, hold my beer.” Let the next round of lawsuits begin.
🇦🇺 Australia media can be sued for social media comments, court rules (BBC)
No sign of Section 230 of the U.S. CDA here. A landmark ruling by Australia’s High Court holds Australian media companies accountable for defamatory comments posted on their articles on social media platforms.
And it doesn’t stop here; more regulations are in the making as we speak.
👩💼 Senator Warren urges Amazon to tackle COVID-19 misinformation (Engadget)
Senator Elizabeth Warren urged Amazon to do better after members of her staff found books propagating COVID misinformation on Amazon’s bestseller list. She penned an open letter to the company’s CEO, Andy Jassy, giving him 14 days to review the platform’s algorithms and ensure that such products are no longer promoted.
🇬🇧 Britain tamed Big Tech and nobody noticed (Wired)
Online platforms such as TikTok, Instagram and Google announced several policy changes such as limiting messaging capabilities and automatic birthday notices in order to comply with the UK’s newly passed Children’s Code. Tech companies aren’t necessarily connecting their policy changes to the new legislation, possibly to avoid provoking a domino effect across other countries’ legislatures.
🌐 G-7 ramps up digital safety efforts (Politico)
Increased accountability and effective solutions were among the key discussion points for global leaders during the 2021 G-7 Summit held in London. The UK’s Home Secretary Priti Patel also launched a new “Safety Tech Challenge Fund” to support the development of effective child sexual abuse material (CSAM) detection tools.
✍️ Online Safety Bill not suitable for fraud, Google and Facebook suggest (Evening Standard)
Online fraud in the UK accounts for nearly £754 million in losses from banks, thereby making it a “national security threat”. Despite these substantial losses, the UK’s Online Safety Bill has no clauses in place to address such harm.
While consumer groups such as Which? are urging the government to include fraud in the bill, Google and Facebook think the bill looks best as is.