Musk bought Twitter, of course, but there was that little election too
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
The U.S. midterm elections took place early this month, but the dominant story for weeks has been Elon Musk finally taking over Twitter. The major theme from the elections is that, as far as disinformation goes, it could have been much worse. The relatively few false stories knocking around failed to get any real traction and even candidates running on election fraud mostly conceded defeat when the counting was done. That’s not to say there haven’t been any polling challenges, but for the most part, the proceedings went smoothly.
The opposite is true at Twitter, where it has been all chaos, all the time. An estimated 9,000 people have either been laid off or left the company in the past few weeks, including much of the content moderation team. There has been a lot of confusion around the on-again, off-again, maybe soon-to-be-on-again verification program that hands out the blue check marks, and many advertisers have paused their use of the platform. There have been suggestions, including from Musk himself, that bankruptcy could be in Twitter’s future. There’s really too much going on for us to cover here, so we’ll just say that the neophyte Musk is discovering how hard trust and safety is to get right. Like the leadership at every online platform hosting user-generated content, he faces the difficulties of balancing free speech against harmful speech, of moderating consistently and fairly, and of running it all without going broke.
One last thought on the midterms. While the Democratic party retained control of the Senate, control of the House of Representatives will shift to the Republican party. The GOP has made a lot of noise about content moderation by social media companies. We might expect them to launch hearings and make a show of bringing in Big Tech CEOs, although we don’t expect much in the way of legislative changes for online rules.
Our Expert of the Month is Jeremy Gottschalk. Jeremy founded Marketplace Risk as an industry education, networking and knowledge-sharing platform for the marketplace ecosystem, and regularly consults for venture capitalists, tech startups and vendors in that ecosystem. The following is a shortened version of the full interview, which is available on our Medium publication.
Why was Marketplace Risk set up? What is its mission?
After seven years as in-house general counsel, I left to focus full-time on Marketplace Risk, and develop resources for startups to learn, network and share information about risk management, trust & safety, compliance and legal strategy. By this point, it was evident that there was a lack of resources, and an urgent need for additional support to manage these issues in our fast-growing industry. In essence, there were no readily available and cost-effective resources for resource-constrained startup founders and executive operators. To this day, our mission is to provide complimentary and low-cost resources for marketplace and sharing economy startups to successfully launch, grow and exit.
What does Trust and Safety (T&S) mean for marketplaces?
Trust & safety means different things to different people across platforms and verticals. We know that trust is one of the biggest barriers to conversion and customer acquisition for marketplaces. Building trust is therefore the key to unlocking growth. But, you can’t build trust without safety at the core of your operations. Again, safety looks different for different platforms in different verticals, but fundamentally, safety means your customers (supply and demand) and your broader community are protected from harm.
Please visit our Medium publication for the full interview with lots more thoughts and info about how to prepare.
Checkstep News
📣 A few members of our team attended the FOSI event in Washington and the WebSummit in Portugal. Lots of great discussions around Online Safety!
📣 In case you missed these updates to our Medium publication:
Here’s a piece on what Healthy Online Dating looks like: https://medium.com/checkstep/healthy-online-dating-which-swipe-will-keep-you-away-from-imposters-f10482cfa13e
The impact of social media on minors: https://medium.com/checkstep/bants-or-bullying-the-impact-of-social-media-on-minors-ebd552b18eb7
Even pre-Musk Twitter struggled with content moderation: https://medium.com/checkstep/despite-ban-on-grooming-slur-twitter-takes-no-action-on-hateful-posts-f994f4a7162a
Misinforming the Vote
😑 Candidates Keep Pushing Election Denial Online — Because It Works (Bloomberg)
Bloomberg has an excellent visualization of the effectiveness of election denialism online. They looked at the social media activity of candidates to see how much better or worse content denying the 2020 election results did. The conclusion probably won’t surprise you.
👌 After Election, Cautious Optimism That Few False Narratives Took Hold (The New York Times)
No one is declaring victory over election misinformation, but many of the false narratives that launched around the election failed to take hold…
🌟 Why misinformation didn't wreck the midterms (Axios)
… and Axios is reporting that the amount of bad content was actually larger than in 2020, but the effects were much smaller. The large platforms get some credit for keeping it at bay.
🫠 How News About Maricopa County’s Ballot-Counting Machines Went Viral (The New York Times)
And all of this good news comes despite technical glitches in Maricopa County, Arizona, which naturally kicked off a spate of claims about election fraud. The usual suspects used the problems to amplify their opposition to electronic voting, but they never really gained the traction they might have hoped for.
Moderating the Marketplace of Ideas
🫣 Leaked Documents Outline DHS’s Plans to Police Disinformation (The Intercept)
Although the U.S. Department of Homeland Security shut down its proposed Disinformation Governance Board, it continues to expand efforts to limit speech it considers dangerous. The agency took up the fight against disinformation in response to attempts to influence the 2016 election and to health lies spread during the COVID-19 pandemic.
📱 U.S. adults under 30 now trust information from social media almost as much as from national news outlets (Pew Research Center)
For some time now, Americans have been more likely to trust information from local and national news organizations than from online sources. But the youngest adults (under 30) now say they trust information from social media almost as much as information from national news outlets.
💸 How Google’s Ad Business Funds Disinformation (ProPublica)
Despite its proclamations to fight disinformation around the world, Google continues to profit significantly from the ads it places on sites known to spread false and potentially dangerous information.
🔍 Inside Meta’s Oversight Board: 2 Years of Pushing Limits (Wired)
Steve Levy at Wired has a very thorough and in-depth review of the history and current status of the Facebook Oversight Board.
🤷 What If Rumble Is the Future of the Social Web? (The Atlantic)
Although it often flies under the radar, Rumble is among the most trafficked of the “alt-tech” platforms, and it has some serious money behind it. Influencers known for hate speech and copyright infringement have found a welcome home there.
📺 BBC tries to understand politics by creating fake Americans (AP News)
A BBC reporter wanting to see how likely Big Tech platforms were to spread false information created fake people and watched what happened. She used computer-generated images for her ersatz humans and filled in their profiles according to five political archetypes she defined. She is not alone among journalists researching recommender systems this way, but she has drawn some criticism for operating in ethically dubious waters.
🐦 White House deletes tweet crediting Biden with Social Security boost after Twitter flags (USA Today)
We don’t normally cover individual fact-checks here, but this seemed noteworthy. The White House tweeted that President Biden deserved credit for an increase in Social Security benefits. Twitter added a warning that the claim lacked context, since the increase was due to a cost-of-living adjustment tied to inflation. Shortly after, the White House deleted the tweet.
🏋️ TikTok glorifies weight loss among teens, young adults: Study (Axios)
This has been reported on before, but apparently it is the first formal study to confirm TikTok’s dangerous influence on some young people. The study concluded that the platform reinforces unhealthy attitudes about weight and health.
👾 Worries Grow That TikTok Is New Home for Manipulated Video and Photos (The New York Times)
Videos on TikTok that are manipulated for a laugh are innocent enough, but experts studying misinformation worry that the same techniques can be used to gin up political division and promote conspiracy theories. The same thing happens on other platforms like Facebook, but these experts are concerned that the more freewheeling nature of TikTok will make it harder to detect.
📈 Doctors and advocates tackle a spike of abortion misinformation – in Spanish (NPR)
Abortion rights opponents are capitalizing on confusion following the Supreme Court decision on abortion by deliberately spreading lies on social media. Since content moderation for Spanish language posts is not as comprehensive as in English, doctors and reproductive rights advocates are seeing a surge in abortion-related misinformation both online and in talking to patients.
👥 Value Pluralism and Human Rights in Content Moderation (Lawfare Blog)
The EU and United States have sharply different approaches to speech regulation with the U.S. highly skeptical of government intervention and the EU acutely aware of the concrete harms of inciting and discriminatory speech. International law may have to deal with the conflicts between these views and context will be essential to applying appropriate restrictions on speech globally.
💬 A New WhatsApp “Communities” Feature Makes Organizing Groups A Snap. Critics Say It Will Make Spreading Misinformation Easier. (BuzzFeed News)
WhatsApp has a new feature that will let users combine multiple groups into a “Community.” Critics worry that a feature to structure groups more easily will exacerbate problems of misinformation. They say that the Community feature does not include necessary safeguards.
Regulatory News and Updates
🇬🇧 Online Safety Bill: Record amount of online child abuse blocked as legislation remains in limbo (Sky News)
The Online Safety Bill in the U.K. has been in limbo for almost three years, most recently delayed by the toppling dominos of prime ministers, the latest of whom says the bill will be taken up in “due course.” Others are urging the government to get the bill passed by Christmas, fearing it will be scrapped altogether if that doesn’t happen.
Further update: Following backlash from free speech advocates and the tech industry, the U.K. government has dropped the “legal but harmful” clause from the bill. Several new changes have also been added, including a provision that would make sharing pornographic deepfakes illegal.
🇻🇳 Vietnam to require 24-hour take-down for "false" social media content (Reuters)
Vietnam continues to tighten its policing of online content, with a cybersecurity law that took effect in 2019 and national guidelines on social media behavior introduced in June last year. The information minister now says that misinformation must be taken down within 24 hours instead of the previous 48.
🇮🇳 India sets up govt panel to hear social media content moderation complaints (Reuters)
Social media firms have been at odds with the Indian government for some time. They are already required to have an in-house grievance redress officer and to designate executives to coordinate with law enforcement officials. Now, under amended rules that create a government panel to hear user complaints, companies must acknowledge complaints within 24 hours and resolve them within 15 days, or within 72 hours in the case of an information takedown request.