Report cards, M&M jars and the Icelandverse
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
From a 15-fold increase in child sexual abuse material (CSAM) found online to misogynistic attitudes on social media platforms, it’s hard to know where to start. Add to that the surge in climate disinformation, pushed without remorse by oil giants who have often been caught spreading propaganda to downplay climate change.
Oh, and of course, there’s the Facebook Papers. With so many articles covering the whistleblower leaks, we thought we’d save you some time with a summary explainer that you can read here.
Expert’s Corner with Isabelle Augenstein
For this month’s expert corner, we had the pleasure of interviewing our very own Isabelle Augenstein. She’s one of the brains behind Checkstep and also a recognized talent among European academics. In addition to being our Co-Head of Research, she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section at the University of Copenhagen.
She currently holds a prestigious DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media', and was also recently admitted to the Young Royal Danish Academy of Sciences and Letters.
Read on for an edited version of her interview. The full version is available on our Medium publication.
The news is full of how bad actors propagate misinformation and also how platforms seem to exacerbate the problem. What role do you see for academics in addressing these problems?
Academics can provide crucial insights into why this phenomenon occurs, as well as potential solutions to the problem. For misinformation specifically, academics from many different disciplines have important and complementary research findings, which should be taken into account: from psychology, about the perception of misinformation; from computer science, about how to develop automatic content moderation solutions; and from law, about how legislation applies to online platforms in different countries.
Content moderation is a growing concern, given the recent infodemic. However, some criticize it as a means to suppress freedom of speech. How should content moderation companies position themselves? What are some of the areas they should focus on, to ensure online safety?
The concept of freedom of speech has existed since the 6th century BC, long before social media or even print media were invented. Before social media, it was much more challenging to spread and weaponize information—whereas now, everyone with access to the internet can do so, anonymously and with few repercussions. This format, by design, brings out the worst in people: things people would never feel comfortable saying to someone’s face, they feel comfortable writing in an anonymous online forum. The filter bubble effect means people additionally receive backing for their opinions from like-minded individuals. In this day and age, then, one needs to weigh freedom of speech very carefully against the real harms it can cause.
Checkstep News
📣 We’re very proud to share that Checkstep is now a university graduate. After weeks of intense training and support sessions, our journey as part of the Oxford Foundry’s OXFO Elevate Accelerator (Cohort 4) culminated in the much-anticipated Demo Day.
Moderating the Marketplace of Ideas
🔞 Fifteen times more child sexual abuse material found online than 10 years ago (The Guardian)
The Internet Watch Foundation, a U.K. organization dedicated to removing child sexual abuse images and videos, is seeing a “tidal wave of criminal material.” Its analysts have reported a 15-fold increase in such material since 2011. The group is calling on U.K. officials to include measures in the Online Safety Bill that will protect children.
✍️ Analysis | Reddit got the best grade on a ‘misogyny report card’ for social media. It was a C (Washington Post)
UltraViolet, a group dedicated to fighting sexism and encouraging inclusivity, has issued a “misogyny report card” for social media platforms. Not surprisingly, most received failing or near-failing grades. Surprisingly, Reddit earned the highest mark—and even that was only a C.
👵 🗣️ Anti-Asian Hate Speech Rocketed 1,662% Last Year (Forbes)
An analysis by the anti-bullying charity Ditch the Label and Brandwatch has revealed a disturbing increase in instances of, and conversations about, hate speech online. Notably, hate speech directed at Asians rose sharply over the course of the pandemic. Real-world hate crimes in the U.K. and U.S. have roughly tracked the increases seen online.
🤑 How a Mistake by YouTube Shows Its Power Over Media (New York Times)
Two separate stories cover the power of Big Tech and their control over free speech and people’s livelihoods. YouTube apologized and restored Novara Media following a backlash after initially banning them. It’s still unclear what prompted the shutdown to begin with. And in a related story, Wired covers Stripe’s seeming total control over companies and individuals hoping to monetize their content online.
🔮 Facebook, Twitter, and social media vs. the world (The Verge)
Social media giants such as Facebook and Twitter often give in to political pressure rather than choosing a more principled approach to moderation. Like all companies, social media platforms have to worry about the politics of the countries where they operate: going against a political party in power could well result in their services being banned outright in that country. In such delicate situations, how are companies supposed to juggle online safety while also making sure to propagate only the “right” content?
🌱 How Twitter plans to 'pre-bunk' climate disinformation (Washington Post)
Trying to get ahead of climate change deniers and in support of global leaders’ efforts at the COP26 climate summit, Twitter started preemptively providing “credible, authoritative information” related to climate change. Past research would indicate this is probably a good idea.
💬 ❌ CBC is keeping Facebook comments closed on news posts (CBC)
Because of an inordinate number of toxic responses, the Canadian Broadcasting Corporation decided to no longer allow comments on their news stories posted on Facebook as an experiment starting last June. Now considering it a success, the CBC will continue without comments on Facebook. They will be allowing comments on their own website, however.
📱 😈 Substack Is Now a Playground for the Deplatformed (Wired UK)
A few high-profile writers claiming to have been deplatformed elsewhere have found themselves a home, and possibly riches, on Substack. Substack co-founder Chris Best believes he’s pioneering an alternative to the extremely negative consequences of the ad-supported, attention-grabbing content of traditional media and first-generation social platforms. But if Substack’s success depends on writers considered by many to push harmful content, what change is Substack actually achieving?
🤐 New platform documents digital censorship of Palestinians | Israel-Palestine conflict News (Al Jazeera)
The Arab Center for Social Media Development has built an open-source platform to track digital rights violations and censorship of Palestinians. According to the group, the website comes in response to issues last May when Palestinian content was removed by Facebook and Instagram without prior notice or explanation.
🤷 Twitter says any move by Australia to ban anonymous accounts would not reduce abuse (The Guardian)
Given the amount of online abuse, the Australian prime minister has floated the idea of banning anonymous accounts on social media. Twitter responds that such a move would not be effective and would not reduce the amount of abuse on their platform, but it would stifle speech for vulnerable people. Korea tried similar regulation in 2004, reversing it eight years later because it failed to work and had the expected negative side effects.
🔍 ✅ Perspective | Fact checks actually work, even on Facebook. But not enough people see them (Washington Post)
A recent study shows that exposing people to fact-checks of previously seen misinformation is an effective way to reduce beliefs in false claims. The researchers behind the study go on to criticize Facebook for not implementing relatively simple measures to take advantage of it.
🤵 💸 Exclusive: Billionaires back new media firm to combat disinformation (Axios)
Good Information Inc. is a newly launched public benefit corporation that will fund new media companies and efforts designed to tackle misinformation. Tara McGowan, who has been involved in other efforts to fight fake news, will lead the new organization.
Remedying COVID-19 and Vaccine Misinformation
👮 Pfizer CEO says people who spread vaccine disinformation are ‘criminals’ (Washington Post)
With millions of lives at stake, Pfizer’s CEO calls the small group of people who actively spread misinformation about the COVID-19 vaccine’s safety and efficacy “criminals.” In an interview with the CEO of the Atlantic Council, he also revealed that Pfizer is constantly targeted by several “dark organizations.”
🏈 What Aaron Rodgers got wrong about vaccine and more during The Pat McAfee Show interview, according to experts (Washington Post)
A classic example of a public megaphone being misused. Following in the footsteps of Joe Rogan, Green Bay Packers quarterback Aaron Rodgers went on to share his two cents about the COVID vaccines and how we should prefer “homeopathic treatments” instead.
💉 👶 Vaccine misinformation poised to spike as Covid shots for kids roll out (NBC News)
As Pfizer-BioNTech COVID-19 vaccines get authorized by the Food and Drug Administration (FDA) for emergency use in children 5 through 11 years of age, anti-vaxxers make a play for spreading falsehoods regarding the safety of the vaccine. Doctors and health experts ask parents to be wary of any such false information.
Dr. Lee Merritt, known for spreading false rumours about the COVID-19 vaccines, just renewed her medical license with a few clicks, despite having made claims ranging from COVID being a bioweapon to the pandemic being a “conspiracy to exert social control.” One would think there would be more consequences for individuals spreading disinformation, particularly doctors, who pledge to do no harm.
Regulatory News and Updates
🇬🇧 UK warns Facebook to focus on safety as minister eyes faster criminal sanctions for tech CEOs (TechCrunch)
Things are heating up in the U.K. with respect to the scope of the Online Safety Bill. The Secretary of State, Nadine Dorries, is pushing for criminal sanctions to apply to platforms’ executives who fail to adhere to the Bill’s requirements. Moreover, anonymous trolls could face jail time for causing psychological harm to their victims.
With such recommendations being pushed in Parliament, it doesn’t help when the platforms’ executives have no idea about the scope of the regulation in the first place.
🇮🇳 As fresh whistleblower leaks points to Facebook laxity in India, government promises action | India News (The Times of India)
It was only a matter of time before the Indian authorities started questioning Facebook’s motives after the revelations made by whistleblower Frances Haugen, particularly with respect to the platform’s failure to effectively enforce its hate speech policies in India. In addition, a member of Congress, the opposition party in India, has penned a letter to the head of Facebook in India to “address concerns surrounding the platform's partisan policies towards the removal of hate speech and inauthentic accounts.”
PS. India means serious business with its new IT Rules, particularly when it comes to the online safety of women and children.
🇦🇺 Platforms and individuals face tougher rules over image-based abuse (Australian eSafety Commissioner’s official website)
With image-based abuse affecting 1 in 10 Australians, the eSafety Commissioner aims to reduce such harms by imposing fines as large as $111,000 on perpetrators who share intimate images without consent, including images created with new technologies such as deepfakes. Moreover, the takedown window for platforms has been reduced from 48 hours to 24 hours after receiving a removal notice. Platforms will need to get themselves in order before the new Online Safety Act comes into force on 23 January.
🇮🇪 Committee recommends minimum age for social media usage (RTE)
Irish regulators hope to set a precedent by being the first to pass a comprehensive regulation to ensure online safety. The Online Safety and Media Regulation Bill is currently receiving several recommendations ranging from adding age limits for creating a social media account to banning certain types of advertisements for children.
🏛️ 🇺🇸 Analysis | Congress wants YouTube, TikTok and Snap to cough up internal research, too (Washington Post)
In response to the disclosures made in the Facebook Files, particularly those concerning child safety, lawmakers for the first time called in executives from TikTok, Snap and YouTube to ask how their platforms handle child safety. While the companies tried hard to differentiate themselves from Facebook, the Senators were not having it; Sen. Richard Blumenthal (D-Conn.) went as far as saying they were comparing themselves to the “gutter.” By the end of the session, the execs had not only been made to “pledge” support for the forthcoming bill against ads targeting children, but were also urged to share their data with external researchers.
⚖️ 🇺🇸 Oil Executives Grilled Over Industry’s Role in Climate Disinformation (The New York Times)
From shouting and shaming to using M&M jars as a prop, the Democrats tried to drive home to Big Oil executives the repercussions of their inaction on fossil fuels and the harm being done to the environment. It was to no avail.