image: Wachiwit /

Fighting the Spread of Disinformation Online by Spandana Singh

Three years ago, despite working in the internet policy space, I fell for a fake news story—a screenshot of a supposed book excerpt, now recognized as satire, describing Donald Trump’s obsession with the Gorilla Channel. As I wrote at the time, the information environment of 2018 was complicated—and the prevailing wisdom was that misleading information needed to be addressed gradually. But while failing to identify a false story three years ago might have been generally harmless, in too many cases today that failure could mean life or death.

In 2020, we grappled with a dangerous new infodemic, in which misinformation and disinformation were produced and amplified at scale by foreign actors, regular users, and politicians around the world—including in the United States. At the same time, misleading stories about COVID-19, the U.S. elections, racial justice protests, and vaccinations converged, creating a muddled and chaotic information environment. News reports of people drinking bleach to protect themselves from the coronavirus surfaced, anti-mask mandate protests grew around the world, and conspiracy theories like QAnon gained traction. The immediate offline consequences of these online posts became more evident.

Amid this information crisis, internet platforms like Facebook and Twitter broadened their efforts to combat misinformation and disinformation. As my colleagues and I evaluated these platforms’ new policies and practices to curb the spread of misinformation about COVID-19 and the U.S. election, we found that although these efforts are limited, in many ways they represent a notable shift in how platforms approach content on their sites, reflecting a growing recognition of the urgent threat the online infodemic poses in the offline world.

Platforms can—and should—do more. Introducing new policies and practices is a great first step. However, without adequate transparency and accountability, civil society groups, lawmakers, and the public will never know what kind of impact these efforts had, if any. In addition, while it is commendable that internet platforms small and large stepped up to address misinformation and disinformation in 2020, it does raise serious questions about why they didn’t do so sooner. Before last year, some of these companies suggested that their anti-disinformation operations were already robust. Their responses in 2020, however, showed that there was immense room to grow.

The offline harms of online disinformation have become starkly apparent. Over 800 people have died as a direct result of COVID-19 misinformation, and disinformation that stokes social tensions has resulted in violence. We must hold platforms accountable and ensure they are addressing this key issue.

Learning Resilience to Disinformation by Lisa Guernsey

Working from home, I look up almost daily at a piece of artwork in my office that quotes the American abolitionist and activist Sojourner Truth: “Truth is powerful and it prevails.”

But does it?

This question has always taunted me, but it now looms larger than ever. There is no mistaking the prevalence and potency of disinformation this past year—falsehoods about the presidential election, COVID-19, racial justice, and more. Disinformation emerges from a wicked confluence of social media and digital disruption, polarization and social isolation, and myriad anxieties about our health, our families, and our planet. As a result, stopping the spread of disinformation seems nearly impossible.

But human beings are endowed with something that can help turn the tables: We can learn. The human brain has evolved over thousands of years to read and write, developing and then decoding symbols, such as letters of the alphabet. And we have learned to use reason and logic, and to seek evidence.

Combating disinformation requires an all-hands-on-deck, multipronged approach. To tackle disinformation in our systems of learning, our Education Policy program embarked on projects in schools and across informal learning systems, from libraries to public media and more. For example, many children’s librarians and youth-services librarians are keen on becoming “media mentors” who can help us and our kids sort and filter information crossing our screens every day. In August, we hosted a month-long “Read & Chat” with librarians in Illinois, delving into the book Becoming a Media Mentor by Alaskan librarian Claudia Haines. This led us to develop a three-day media mentorship forum and workshop for education and library leaders around the country.

Using new approaches based in the science of learning and human behavior, we can train ourselves to read an article or check a source before sharing or retweeting it, to become more alert to scams and bot-generated content, and to think critically about our news environments. Experts in these areas are developing tools and instructional materials that teachers and community leaders can use to promote these skills. But many of them are so new that they are difficult to find. So, in partnership with our national security colleagues at New America and Cyber Florida, we are building a cyber citizenship portal to help educators find teaching tools appropriate for the grade level and needs of their students.

This is not just about learning in an academic sense; it requires awareness of cultural and social spheres. Teachers and librarians will not succeed by simply telling a 15-year-old (or an adult for that matter) to avoid one source and use another; educators need to develop new skills in listening to their students, understanding their mindsets, and meeting them where they are. This is why a large portion of our work to support educators next year will also focus on the importance of culturally responsive teaching with technology and building trust. While a capital-T “truth” may always be elusive, our democracy is at risk if we can’t build better learning systems to seek it out.
