Election season vs. the infodemic, echo chambers, and more…
The Checkstep Round-Up is a monthly newsletter that gives you fresh insights for content moderation, combating disinformation, fact-checking, and promoting free expression online. The editors of the newsletter are Kyle Dent and Vibha Nayak. Feel free to reach out!
The U.S. midterm elections are happening on November 8, so lots going on there, of course. But Brazil is squeezing in its run-off election on October 30, ahead of America. Fact-checkers in Brazil are working double-time but apparently can't keep up with the chaos of misinformation and influence campaigns. If it's any consolation, Nieman Lab is reporting that the effects of the infodemic around COVID-19 may not have been as bad as we all feared, so maybe, just maybe, that's true for elections too? (Truth be told, we're not so sure it's true even for the pandemic.) In other news, Turkey gets top billing on the regulatory front with a new law that cracks down on disinformation, but probably on free expression too, so…
And let's not forget our much-loved Expert's Corner, where we had the pleasure of interviewing Lauren Tharp, Technical Program Manager at the Tech Coalition. She focuses on helping companies adopt technologies to combat online child exploitation and abuse and on facilitating collaboration among a diverse set of industry members. The following is a shortened version of the full interview, which is available on our Medium publication.
What is the mission of the Tech Coalition? What is the idea behind it?
The Tech Coalition facilitates the global tech industry’s fight against the online sexual abuse and exploitation of children. We are the place where tech companies all over the world come together on this important issue, recognizing that this is not an issue that can be tackled in isolation. We work relentlessly to coach, support, and inspire our industry Members to work together as a team to do their utmost to achieve this goal.
With more and more children spending time online, online child safety should always be a priority for social media platforms. Have you seen specific trends or patterns in terms of child abuse? Are they getting harder to detect?
I'd say there are two major factors making it harder to readily detect online child sexual exploitation and abuse (OCSEA). The first is access. Many of us spend a significant amount of time online, where we engage not only with trusted family and friends but also with strangers. That access has largely been a success: think about cold outreach for a new job or finding peers who share a niche hobby. But the tradeoff is that it has also made it easier for bad actors to contact or groom children online. Recent studies have shown that nearly 40% of children have been approached by adults who they believe were trying to “befriend and manipulate them”. So I think we will continue to face the challenge of how to safeguard children online as bad actors subvert protective measures at an increasingly rapid pace.
Please visit our Medium publication for the full interview with lots more thoughts and info about how to prepare.
Checkstep News
📣 We're excited to announce that Checkstep is working with Hopin to proactively fight bad actors on the Streamable video hosting platform (acquired by Hopin)!
📣 Not one, but two! We were accepted into two highly noteworthy accelerator programs: Tech Nation and the Creative Destruction Lab.
Misinforming the Vote
😰 Social media platforms brace for midterm elections mayhem (AP News)
Social platforms like Twitter, TikTok, Facebook and YouTube say they’re working hard to detect and stop harmful election claims. But with less than three weeks before voting finishes (early voting has started in most places), misinformation about voting and elections continues to proliferate on social media.
🗳️ Misinformation Swirls in Non-English Languages Ahead of Midterms (The New York Times)
Just as in 2020, immigrant communities are being actively targeted with rumors and lies in the lead-up to the November midterms. But things are ramping up even more this time around, with disinformation campaigns in more languages, covering more topics, and across more digital platforms. Social media companies are doing very little to stop it.
🤔 US midterm elections: Does Finland have the answer to fake news? (BBC)
But on a brighter note, the Finns might be onto something, and perhaps other countries could learn from their model. Finland's population leads the pack in digital literacy, and Finns are less likely to be taken in by hoaxes and falsehoods than people in other countries. Critical thinking and media literacy have long been part of the country's school curriculum.
💬 China's WeChat Is a Hot New Venue for US Election Misinformation (Wired)
With the US midterm elections coming right up, activists tracking misinformation in Chinese American communities are concerned that posts playing on racial tensions or sowing doubt about the validity of elections might affect close races. WeChat is one of the main channels for Chinese-language posts containing misinformation and voter influence campaigns.
🗣️ Taiwan Local Elections Are Where China’s Disinformation Strategies Begin (Council on Foreign Relations blog)
Taiwan, another country gearing up to vote in November, is likely to be intensely targeted by Chinese efforts to influence the results. China has been shifting its disinformation focus onto local communities through social media. In the last local elections in 2018, China had great success influencing voters to select pro-China candidates.
😳 Brazil's fact-checkers concerned with their impact ahead of Oct. 30 runoff (Poynter)
Brazil's run-off election is imminent, so it's not surprising that there is lots of coverage. Misinformation has been, and continues to be, a significant issue heading into the run-off, with the sitting president even attacking fact-checkers over their verification efforts. Trust in media and other information sources took a hit after polls ahead of the first-round general election performed quite poorly in their predictions.
Moderating the Marketplace of Ideas
💚 Green Groups Ask Social Media to Disclose Effort on Climate Lies (Bloomberg)
Several environmental groups have asked Big Tech to take climate misinformation more seriously. They want platforms to treat it the same as other harmful content like hate speech and COVID lies. They're using Europe's Digital Services Act as a shiny new hammer to reinforce their plea.
🧒 🏥 Garland is asked to probe threats to children’s hospitals (AP News)
In another example of digital life spilling over IRL, three medical groups have asked the U.S. Department of Justice to investigate a few prominent social media users and what the groups say is a deliberate misinformation campaign targeting transgender people. The campaign has escalated to include harassing phone calls, protests, and threats against hospitals providing transgender health services to adolescents.
🌪️ Echo chambers, rabbit holes, and ideological bias: How YouTube recommends content to real users (Brookings Institution)
A new paper from NYU’s Center for Social Media and Politics finds that YouTube’s recommendation algorithms might not have the kind of influence people believe. The paper claims that YouTube recommends mostly mainstream media content, but it does push users into increasingly narrow ideological ranges of content that the authors call a mild ideological echo chamber. While they don’t believe YouTube is responsible for directing people into extremist views, they did find that the recommendation algorithms tend to push users to the right on the political spectrum.
📞 Health-Care Workers Are Swamped Again, This Time With Angry Calls From Podcast Listeners (Bloomberg)
The Tech Transparency Project reports that several podcasts are provoking listeners with misinformation and encouraging them to direct anger and harassment at healthcare workers and facilities. All of the podcasts analyzed were distributed through major platforms, including those owned by Apple and Google. There seems to be very little content moderation despite policies against this type of content.
🙅‍♂️ Facebook threatens to block news content over Canada's revenue-sharing bill (Reuters)
As Canada considers its own legislation (Australia beat them to it) to require Big Tech to pay publishers for their content, Facebook is threatening to no longer allow news sharing in Canada. Facebook says it wants to be at the table for deliberations and calls on the committee considering the legislation to be transparent about its proceedings.
😐 Social media loses ground on abortion misinformation (Axios)
Caught off guard by the monumental U.S. Supreme Court decision overturning the right to reproductive choice, most social media companies do not have abortion-specific misinformation policies. Just as access to reliable healthcare information becomes even more important, misinformation around health services has gotten worse according to abortion rights advocates.
📈 How Social Media Amplifies Misinformation More Than Information (The New York Times)
The Integrity Institute issued a report confirming the idea that bad information is spread much more readily than the good stuff. The report also ranks platforms according to how severely misinformation spreads in their networks.
🫢 How Trump and covid-19 made social media "censorship" a partisan issue (The Washington Post)
The U.S. Presidential election in 2016 was ugly in many ways. It also marked the turning point that made content moderation a partisan cage match. Several legal cases are in the works now and according to Daphne Keller, “We’re approaching a pivotal moment for online speech.” The exact shape of the public sphere will likely be decided in the coming months.
🏛️ Section 230 heads to the Supreme Court (Columbia Journalism Review)
And on that note, the U.S. Supreme Court announced that it would hear two cases this term that might change the scope of Section 230 of the Communications Decency Act. Both are related to terrorist content. These cases are likely, for the first time, to draw a distinction between hosting content and recommending it, but several experts point out that regulating amplification is not so straightforward.
🔔 The Role of Alternative Social Media in the News and Information Environment (Pew Research Center)
The Pew Research Center has a new multi-method study exploring the alternative social media sites BitChute, Gab, Gettr, Parler, Rumble, Telegram and Truth Social. The report looks at these relatively small sites and their emerging role in the overall news and information landscape. Fewer than one in ten Americans say they use any of these sites for news, but those who do say they find community there. The New York Times covered the story too.
😮‍💨 Ahead of Midterms, Disinformation Is Even More Intractable (The New York Times)
Although the information landscape has splintered into many fringe social media platforms with much smaller reach, baseless claims of voter fraud continue to make their way through the public consciousness. Despite several years of research and attempts to thwart dangerous misinformation, it continues and might even be more widespread than in the past.
📱 Pew: One-Half of US Adults Sometimes Get News From Social Media, Led by Facebook (Adweek)
Another Pew study reveals that half of U.S. adults at least sometimes get their news from social media. Facebook is the most popular source, followed by YouTube and then Twitter. Interestingly, only 27% of U.S. adults use Twitter, but 53% of those users go to Twitter regularly for news.
🤖 Eyeballs and AI power the research into how falsehoods travel online (NPR)
An NPR article looks at novel approaches to using AI to detect misinformation and considers the tough choices society has to make, both in how to detect unacceptable content and in what we should do about it.
😨 Were fears about the “infodemic” overblown? (NiemanLab)
With a little distance from the early days of the pandemic, a few studies have come out in the past weeks examining how much junk information people consumed compared with higher-quality news, the extent to which people retreated into echo chambers that confirmed their beliefs about Covid-19, and how low-quality information affected public support for the pandemic response. As usual, the answers are not clear-cut, but according to the studies, things might not have been as bad as we feared.
⚖️ Only proper online regulation can stop poisonous conspiracists like Alex Jones (The Guardian)
Simon Jenkins argues in The Guardian that the reach and virality of social media mean bad speech spreads well beyond the natural boundaries that limited it before the internet. Alex Jones' lies about the Sandy Hook shooting are a prime example. Jones was just ordered to pay galactically large damages, but nothing happened to his many minions propagating his poison online. Jenkins suggests that international agreement on online regulation is necessary "to bring out the best and curtail the worst."
📰 TikTok is increasingly becoming a news source (The Verge)
TikTok insists it's an entertainment platform, but it doth protest too much, we thinks. Around 10% of U.S. adults get news on the app, and for those under 30, it's 26%. As a nice companion read, The Guardian US has a new series focused on the power and reach of the platform. The series gets into several concerns with the app, from its black-box algorithms to its propensity to spread misinformation and its impact on mental health.
🐦 Twitter reviews policies around permanent user bans (Financial Times)
We still don’t know if Elon Musk will be the Owner-in-Chief at Twitter, but the platform might be shifting its content moderation stance to be more in line with his way of seeing things whether he becomes the owner or not. One major change being considered is a relaxation of the permanent ban against users violating Twitter policies. Note that none of the changes being considered would likely allow former President Trump back on the platform since bans due to inciting violence are not expected to change.
🫣 Twitter is asking users to enter their birthdate to view sensitive content (TechCrunch)
As part of its safety efforts, Twitter will restrict sensitive tweets unless users have added a birthdate to their accounts showing they are over 18. Sensitive content is broadly defined but includes adult content, graphic violence, gratuitous gore, and hateful imagery. Most likely Twitter is trying to stay ahead of upcoming legislation in the U.K. and California, but self-reported age isn't really much of a check.
🧑‍💻 Fact-checkers say social media companies' inaction on multilingual fake news fuels racism, threatens democracy (Yahoo News)
As purveyors of disinformation become more active in non-English-speaking communities, fact-checkers and activists are calling on Big Tech to do more to stop the spread of lies.
🔍 The Hunt for Wikipedia's Disinformation Moles (Wired)
Because of its transparency and auditability, Wikipedia is widely viewed as a trusted source of information. But even it is not immune to attempts at influence through misinformation, in its case especially from state actors. Unethical governments target the site because of its wide audience. Wikipedia is deploying new tools to try to detect coordinated campaigns.
Regulatory News and Updates
🇹🇷 Turkey's parliament adopts media law jailing those spreading 'disinformation' (Reuters)
Turkey's parliament has passed a controversial misinformation bill. Proponents say it's meant to combat disinformation and fake news, but critics have denounced it as another way for the government to crack down on and interfere with free expression. The law and its likely harmful effects have been covered widely since it was first proposed.
🇺🇸 New York Leaders Seek to Criminalize the Spread of Violent Videos (Yahoo News)
New York State released a report describing how internet platforms influenced the Buffalo mass shooting suspect, who announced his plans online ahead of time. The report recommends that New York lawmakers pass legislation creating criminal penalties for creating images or videos of a murder. If the legislature takes this up, it would be the first such law in the United States. And the kicker: those who share these images or videos online would be held liable for disseminating violent content.
🇬🇧 UK watchdog gives first report into how video sharing sites are tackling online harms (TechCrunch)
Meanwhile, Ofcom, the U.K.'s communications regulator, published its first report on how it has been regulating video sharing platforms following the introduction of new rules meant to protect children and others from harmful content. Since the pending Online Safety Bill is jammed up in the revolving door of U.K. prime ministers, Ofcom's regulatory framework may govern platforms' handling of user-generated content for some time.
🇺🇬 Media groups ask Uganda's top court to scrap law over free speech fears (Reuters)
The media in Uganda think the new communications act goes too far. They're taking their arguments to the highest court, seeking relief from a law they say violates the constitution and cripples free speech. Other rights groups have called the law draconian, saying it gives authorities more ammunition to target critics and independent media.
🇮🇳 India’s Tech Regulation Onslaught Poses Dilemma for U.S. Companies (The Wall Street Journal)
U.S. social media companies are going head-to-head with the Indian government (again) over the proposed Grievance Appellate Committee (GAC). The GAC would hear user complaints against technology companies’ content moderation decisions with the power to overturn a company’s decisions. Big Tech is pushing back hard against the plan.
🇸🇴 Somalia: Govt bans Al Shabaab 'propaganda' contents (Africanews)
In Somalia, authorities have announced a ban on extremist content, targeting in particular Al Shabaab propaganda. The ban applies to social media as well as traditional media. The al-Qaeda-linked group controls portions of the countryside. Quartz has more details as well.