Widespread misinformation, racist bots and Facebook under fire
The CheckStep Round-Up is a monthly newsletter that gives you fresh insights into content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
The really big news this month is Facebook’s ban on a group of NYU researchers. We’d normally be pointing out opposing viewpoints, but we’re hard pressed to find anyone outside of Facebook who thinks they’re in the right on this.
In other news, there are lots of novel ideas being proposed to combat bad content or promote healthy content: the White House enlisting social media influencers, an Instagram personal limits filter for when you can’t take it anymore, fake accounts that might save the internet (yeah, no kidding), not to mention a veritable user uprising on Twitch urging the platform to do better.
Besides all this great content, we’ve got an invitation to write for our Medium publication at the end, so get scrolling so you don’t miss a thing.
Expert’s Corner with Todd Nilson
This month we had the great pleasure of speaking with Todd Nilson, the president of Clocktower Advisors. He specializes in online communities, digital workplaces, social listening analysis, competitive intelligence, game thinking, employer branding, and virtual collaboration. His work, ranging from large brands to small nonprofits, has led him to think deeply about how people interact and communicate online. Read on for an edited version of his interview. The full version is available on our Medium publication.
What kinds of challenges do platforms face to establish and maintain healthy communities?
There are a few big challenges to maintaining a healthy community that I bump into time and again:
Not having at least one dedicated community manager is the biggest reason communities struggle
Not being transparent enough with members about the community’s purpose and how their information will be used, resulting in distrust and abandonment of the community
Focusing too much on the platform and not enough on building trust, reciprocity, and friendships can stall communities
What do you think is the difference between enterprise-based communities and social communities? What motivates people to participate in different communities?
Some of the most powerful, active, and interesting online communities are social groups. The motivation to participate in them is almost entirely intrinsic and the payoff for belonging is status, a feeling of belonging, and the ability to share one’s creativity. Enterprise-based communities, based often on a product and offering support from peers and the company itself, are immediately different because there’s money behind them. But enterprises need to pay special attention to the motivation of members and resist the urge to offer meaningless extrinsic rewards as a way to encourage participation. Instead, they should focus on solving problems, celebrating customers whose lives are enriched or transformed by a product or service, and encouraging creativity and connection.
Do you think there’s a gap which makes it difficult for smaller communities to actively want to moderate content on their platforms?
Absolutely! Smaller communities or underfunded communities without a dedicated (paid) community manager can become burdensome on the most active members, especially if they have to moderate posts or answer basic questions again and again. While I’m seeing a renaissance of new community platforms out there, one feature that I find myself wanting all the time is the means to automate the more rote activities performed by a community manager.
For more great insights from Todd, including an in-depth discussion of Section 230, visit our Medium publication.
Checkstep News
📣 Our CEO, Guillaume Bouchard, was featured on The Near Futurist, a podcast series hosted by Guy Clapperton, where he discussed how algorithms can be effectively used to fight misinformation.
📣 The deadline for papers to be published in the special issue of the ACM Journal of Data and Information Quality is fast approaching! Send in your submissions before 15 November 2021.
FYI: Checkstep is a sponsor of the Truth and Trust Online (TTO) 2021 conference
Moderating the Marketplace of Ideas
🧑🏫 ⛔ Opinion | Facebook Shuts Down Researchers Looking Into Misinformation
The NYU Center for Cybersecurity researchers received notice via an automated email that they were banned from carrying out their research on Facebook. While Facebook frames this as an effort to preserve user privacy, the researchers think the platform is just trying to shield itself from further scrutiny. Despite promising increased transparency on various occasions, Facebook certainly seems to be going back on its word.
📉 Instagram To Crack Down On Hate Speech, Testing ‘Limits’ Tool
In the wake of the tsunami of race-based harassment following the Euro 2020 final, Instagram is testing a feature that allows users to temporarily lock down their accounts. Presumably, positive, encouraging comments won’t come through either, but perhaps imperfect protection is better than suffering an onslaught of abuse. In addition to ‘Limits’, Instagram also introduced a new sensitivity filter to give users control over how much sensitive content they’re exposed to. However, the filter’s description is rather vague about how much content is shown or restricted.
✊ Twitch responds to ‘Twitch Do Better’ movement with improved chat filters
Several Twitch streamers took to social media through the #TwitchDoBetter campaign to call out the increase in online harassment faced by the platform’s marginalized creators. In response, Twitch has updated its chat filters and promises further safety features.
The Arizona audit’s tweets come up short on truth and reality
Twitter banned nine accounts, including the official account for the Arizona audit team, for spreading thoroughly debunked conspiracy theories. The Arizona audit should be over soon, but don’t expect that to be the end of it.
📵 Biden Has to Play Hardball With Internet Platforms
With Congress mostly gridlocked, Roger McNamee at Wired Magazine argues that it’s up to the Biden administration to rein in the big social media platforms. The dangers created by these massive, data gathering media organizations that are uninterested in reforming themselves can only be mitigated by reducing the profits they gain from exploiting users’ emotional triggers.
🌱 These self-described trolls tackle climate disinformation on social media with wit and memes
Bet you didn’t know what “greenwashing” is…
“Greenwashing” is when fossil fuel companies use misleading advertising to suggest they’re doing their best to abate global warming. In response, environmentalists have taken to “green trolling” to debunk the companies’ false claims.
🤔 Can Fake Accounts Save the Internet?
This thought-provoking piece by John Herrman makes us weigh the pros and cons of being anonymous online. While anonymity lets some people express themselves more freely, it lets others engage in online abuse without consequence. In addition to lawmakers, many people are actively petitioning social media platforms to incorporate identity verification systems during account creation.
💸 Scammer Service Will Ban Anyone From Instagram for $60
Banning exes and enemies on Instagram seems to have turned into a thriving business opportunity. In some cases, a banned account can also be restored for a price. This surely makes us question Instagram’s current moderation practices…
👩💼 🆘 Online harassment of female journalists is real, and it’s increasingly hard to endure
An increasing number of female journalists have been on the receiving end of extreme hate as a result of their published work, so much so that it’s impacting their mental health or forcing them to choose an alternate career path.
⚠️ Twitter locks account of India's largest opposition party
Twitter seems to have waded into a political minefield by locking accounts of the Indian National Congress, one of India’s largest opposition parties. The backlash can only be imagined. Hope Twitter survives 🤞
🎮 How the Far Right Exploded on Steam and Discord
A new study by the Institute for Strategic Dialogue (ISD) points to trends in extremist behaviour online. Of the four platforms included in the study, DLive, Discord, Steam and Twitch, the one most used by far-right groups was Steam.
💬 ⚽ Bulk of racist abuse after Euro soccer final sent from UK accounts, Twitter says
Twitter seems to have traced the source of the horrifying abuse directed at the Euro 2020 players. Upon further analysis, the platform found that 99% of the abusers did not anonymize their identity, making them easily identifiable. In addition, the Professional Footballers’ Association (PFA) conducted its own independent research in partnership with Signify and found a 49% increase in “unmoderated racist abuse” online during the Euro 2020 period.
A new report from the Brennan Center for Justice brings up an excellent question. Exactly who is content moderation supposed to protect? Seemingly arbitrary application of policies disproportionately targets vulnerable groups while others seem to violate rules with impunity. Forbes interviews Ángel Díaz, a co-author on the report.
Remedying COVID-19 and Vaccine Misinformation
🏛️ Senators target Section 230 to fight COVID-19 vaccine misinformation
The proposed Health Misinformation Act would create an exception to Section 230 protections for platforms hosting dangerous health misinformation. The carve-out is narrowly applied, but it’s not clear what potential lawsuits would even look like.
🙅 The YouTubers who blew the whistle on an anti-vax plot
Spreading lies has never sounded more appealing, especially when you can cash in by doing so. Judging by the example set by these upright YouTubers, it will definitely take a village to tackle ongoing COVID misinformation.
🧑⚕️ The Most Influential Spreader of Coronavirus Misinformation Online
Dr. Joseph Mercola, prominently placed at the top of the Disinformation Dozen, is the chief spreader of coronavirus misinformation online. Researchers say he’s rather crafty at evading moderation. Rather than denying vaccine efficacy outright, his modus operandi is to sow doubt by asking questions about vaccine safety and emphasizing dubious medical studies, making it harder for platforms to take action against him.
Update: After being called out for his behaviour, Dr. Mercola declared that he would remove all his posts within 48 hours to avoid being “censored”.
🦠 📈 Facebook and YouTube spent a year fighting covid misinformation. It’s still spreading.
Back in the good old days of flat-earthers and chemtrails, misinformation seemed almost quaint. That all changed with COVID-19, and so did efforts by social media companies to stop it. Despite their claims, though, disinformation continues and is even being amplified by the platforms, including dangerous claims spewing from the Disinformation Dozen.
📺 😷 To Fight Vaccine Lies, Authorities Recruit an ‘Influencer Army’
Public service announcements for the digital age. The White House has enlisted social media influencers to encourage young people, especially 12- to 18-year-olds, to get vaccinated.
Regulatory News and Updates
🇮🇳 😌 Twitter now in compliance with India's new IT rules, government says
In the latest update to the Twitter vs. India ordeal, Twitter seems to have given in and complied with the local regulation by appointing a “chief compliance officer, nodal contact person and resident grievance officer”.
We mentioned Facebook claiming “user privacy” as an excuse for its recent decision to ban the NYU researchers studying political ad targeting on its platform, and now US Senators Amy Klobuchar, Ron Wyden and Mark Warner have declared the platform’s claim absolutely “bogus”.
🇨🇦 🤷 Canada's got the world's worst internet ideas, by Cory Doctorow
Canada’s proposed harmful-content regulation is deemed “a list of the worst ideas around the world” by content moderation expert Daphne Keller. The regulation is seen as an extreme measure that would limit freedom of expression.
Tweets worth a second look
Collaboration Opportunity
🔔 Got the inside scoop on an issue related to fact-checking, disinformation, or free speech, or something you’d like to get off your chest? Join us as a guest writer in our Medium publication the Checkstep Checkpoint. Please send us your idea and a sample of your writing at coms@checkstep.com.