Anti-abuse actions, social media blackouts and more
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
Social media platform companies continue struggling to rein in abuse and hate speech. Women, always a favorite target of abuse, are speaking up to Big Tech in an open letter, but this month also saw a huge spike in racist attacks against England’s players following the finals of Euro 2020. But human beings aren’t the only ones suffering harm online. A new trend in video content has animals being put in harm’s way just for entertainment purposes. Less entertaining is the blocked access to the internet in Cuba amid heavy social protests there. As always, lots to keep up with but keep reading below for our pointers and summaries of everything going on.
This month’s expert is Checkstep’s CEO and Co-Founder Guillaume Bouchard. After exiting his previous company, Bloomsbury AI, to Facebook, he’s on a mission to better prepare online platforms against all types of online harm. He holds a PhD in applied mathematics and machine learning from INRIA, France, and has 12 years of scientific research experience at Xerox Research and UCL, focusing on large-scale predictive models, text understanding, and distributed AI techniques. He has authored more than 60 international publications and holds more than 50 U.S. patents. In addition to leading Checkstep, he’s also a serial entrepreneur and angel investor.
Expert’s Corner with Guillaume Bouchard
What got you into the safety tech industry? What is your motivation behind creating Checkstep?
Pushing the boundaries of AI has been a life-long objective, but in the last few years, I realised the incredible impact it can have on society, and I decided to take the side of what I perceive as good for the world. Detecting online misinformation at Facebook was a great start, but maybe not in the right company. With the rise of Covid and its associated misinformation, I thought it was the right time to apply my skills, helping the full spectrum of online platforms that will build a brighter future for the Internet.
What are the key economic implications of the current rise of SafetyTech solutions?
Today, these technologies are expensive, a luxury that only Big Tech can afford, and even when they can afford it, their integrity is questionable. I think the key economic implication is the increased diversity of online platforms, because SafetyTech reduces the barrier to entry for new social networks, dating apps, marketplaces and other online communities.
What industries, sectors or organisations do you think should feel most concerned?
I think that live streaming will benefit a lot from the recent advances in the SafetyTech industry because for them, online harms can be life threatening and nearly impossible to mitigate without the help of partially automated solutions.
AI vs Humans. Do you think there is competition between them?
This is a very important question to me. It is clear that AI will make many human jobs redundant. This change will be hard to accommodate, and hard to resist: if taxi and truck drivers did not exist, what would be the downside of having secure self-driving transportation? I believe we need to be prepared to work significantly less and, more importantly, to reinvent the notion of work, which ultimately will become closer to what we call a hobby: a recurrent activity that defines you, and that you would do even if you were not paid for it.
If not for the pandemic, do you think online harms would be as highlighted?
Given the spread and impact of COVID-related health misinformation, which led to people dying, it is clear to me that the pandemic acted as a catalyst for acting earlier and faster against online harms. However, it is fair to say that the upcoming regulatory changes were in the works long before. Also, the George Floyd murder, the Capitol riot, and the recent racist attacks following Euro 2020 were not directly linked to the pandemic.
📣 We’re very excited to announce our $1.8M raise to help tackle online toxicity.
📣 Our Co-Head of Research, Isabelle Augenstein was recently admitted to the Young Royal Danish Academy of Sciences and Letters.
Moderating the Marketplace of Ideas
Following an open letter from a raft of heavy hitters, Big Tech commits (again) to overhauling content moderation on their platforms.
President Biden pulls no punches as he singles out Big Tech for their culpability in the deaths among unvaccinated people. The Surgeon General’s recent advisory, Confronting Health Misinformation, is also explicit in blaming social media for allowing bad information “to have extraordinary reach.”
Self-regulation, volume of hateful posts, faulty algorithms—reasons why online abuse still prevails. But what are we doing about it?
The Internet Watch Foundation’s (IWF) recent update to their hashing software, Intelligrade, is being called a “data breakthrough” for detecting child abuse images and videos in line with the laws and regulations of 20 different countries.
In-depth analysis from the Southern Poverty Law Center’s Hatewatch details how the ‘activist libertarian’ ideology of Twitter’s leadership combined with the all-too-familiar social media business model (i.e., a pathological drive for more engagement) leads to a platform that all but encourages disinformation and hate on a large scale.
Facebook doesn’t mind transparency except when it might hurt their image. CrowdTangle, a Facebook analysis tool used by journalists and researchers, is being restructured, most likely because its use reveals Facebook as an ultra-right-wing echo chamber.
Content moderators dealing with defamation may see a changing legal landscape in the future. Two U.S. Supreme Court justices have signaled that they see a need to revisit the ‘actual malice’ standard created sixty-odd years ago.
Mistreating animals is the latest scheme for clicks and profit on YouTube. People’s natural attraction to heroes and an inclination for drama seems to be driving some content providers to create videos putting animals in dangerous and harmful situations only to be ‘saved’ by ersatz do-gooders.
Toxic gaming culture is alive and well, and half-hearted attempts from gaming companies haven't done much to quell it.
The man behind the viral Tom Cruise deepfakes is Hollywood bound.
Cuba’s nascent internet access (available in private businesses and homes only since 2019) was shut off amid widespread protests. Digital oppression is a favorite tool of authoritarians, so no big surprise here.
Remedying COVID-19 and Vaccine Misinformation
Facebook speaks out against claims made by the White House, responding with its own facts about how effectively it has been dealing with misinformation. Instead of the two sides working together on a solution, it seems to be turning into a he-said, she-said situation.
*Recent update: President Biden has taken back his statement, saying that he meant the users of the platform and not the platform itself 😶
This won’t come as a surprise to any of our readers—misinformation is effective, spreads widely, and does real harm.
Surgeon General Vivek Murthy asks platforms to get their act together in limiting the spread of COVID-related misinformation. His advisory lays out ways the government, social media platforms, individuals, and professionals can work together to help tackle COVID misinformation.
Regulatory News and Updates
The Labour Party wants to take matters into their own hands by starting a motion urging the government to act on the rising cases of online abuse following the Euro 2020 final. Moreover, the party is also asking all members of Parliament to show support for players who take the knee as a statement against racism.
Does Online Safety = No Freedom of Speech?
Several civil liberties groups think the proposed Online Safety Bill will “politicize” decisions instead of ensuring online safety. The draft bill is already getting plenty of heat and scrutiny. What do you think the timeline will be before it actually becomes law?
India vs. Big Tech, what gives?
In addition to targeting social media companies, India’s new rules for online intermediaries also aim to hold platforms’ executives accountable for their content. With no one to referee this “tug-of-power”, Indians seem to be in limbo about their social media usage.
No one saw this coming—no wait a minute, everybody did. What a fun little exercise by the Florida legislature to see if the First Amendment had suddenly evaporated. Nope, it’s still here.
I think we’re with France on this one. Hey, Twitter, what exactly are you doing about hate speech on your platform?
The German government will be exiting Facebook over the platform’s non-compliance with data protection rules. On the upside, German citizens won’t be forced to have Facebook accounts to access their government agencies’ public information anymore.
Tweets worth a second look
Adam Mosseri 😷 @mosseri
Replying to @CristinaCriddle and @BBCNews
We have technology to try and prioritize reports, and we were mistakenly marking some of these as benign comments, which they are absolutely not. The issue has since been addressed, and the publication has all of this context.
Checkstep is currently working on a collaborative project called “Online Safety Guidance”, where we’re actively tracking content moderation regulations across the globe. This is to be used in conjunction with the community guidelines of online platforms.
We’re looking to collaborate with platform policy managers to make this project more insightful for all online platforms. Please reach out if you’re interested in working with us!
PS: We value your time. All participants will be given a thank you gift 🎁