“What is it about the Internet that bedevils intelligent people?” This question is on many of our minds after a year of contentious debate over the issue of fake news. It was also at the core of a recent study by the Stanford History Education Group, which took a close look at how expert readers behaved when asked to assess online information.
The question of how to vet news and information is an old one, but the 2016 presidential campaign and its aftermath pulled the issue of unreliable news into the realm of national politics with a renewed intensity. Although it is tempting to believe that only the most partisan individuals or the most uninitiated readers fall for fake news, the Stanford group discovered that even trained researchers and highly skilled readers had trouble discerning fact from partisan ploy.
The study focused on fact checkers, historians, and Stanford undergraduates and tracked their behavior when they were asked to assess the origin and validity of online information. Perhaps unsurprisingly, the fact checkers fared the best.
The researchers concluded that the relative success of the fact checkers was due to a reliance on two complementary strategies, which the researchers termed taking bearings and lateral reading. When tasked with assessing a site, fact checkers almost immediately looked outside the source in question, opening new browser tabs and conducting searches to discern the site’s position in relation to other sources. Lateral readers, in a sense, assess a site by leaving it.
Students and historians, on the other hand, read vertically, spending much more time exploring the source material itself. Students tended to base their judgments on the aesthetic and design features of a particular site. Sites that looked the best and were easiest to use struck them as the most credible. Historians vetted sites by the terms of their profession, looking for markers of authority such as footnotes and works-cited lists or for links to outside sources like peer-reviewed journals.
While the historians and students may be highly skilled readers, fact checkers “understood that websites do not sprout from spontaneous generation but are designed, created, and financed by groups seeking to promote particular—and often partisan—interests.” Media-savvy readers understand “online structures, particularly how search results are organized and selected,” making them better at accessing information that would help them determine a site’s origin, hidden backers, or political program, even when this information is not readily available on the site.
In this regard, the Stanford group’s conclusions echo a recent study by a group of journalism professors who found that the more readers understand how news is produced, the less likely they are to believe conspiracy theories. Stephanie Craft, one of the authors of the report, attributes this savvy skepticism to increased media literacy, the ability of readers to understand the structures and methods by which news is generated and disseminated. The more aware readers are of these conventions, the better they become at spotting false information masked as real news.
In a recent interview with NPR, Mike Caulfield, director of blended and networked learning at Washington State University, Vancouver, outlined a set of strategies he has developed for assessing the origin and validity of online sources. He encourages readers and researchers to “go upstream” to find the original source of digital material. The system he outlines complements the guidance offered in the most recent edition of the MLA Handbook, which acknowledges that “today the Internet . . . has accelerated the process of finding and retrieving sources—but at the same time it has complicated the researcher’s assessment of their reliability.” The MLA encourages researchers to ask themselves five questions when evaluating a source: What is the source? Who is its author? Where did you find it? When was it published? And—in keeping with the studies discussed here—how was it produced?