How to Know a Source is Credible

In this age of fake news, misinformation and disinformation of all types, Lost Cause lies, Black Confederate lies, the SCV and the UDC, and other falsehood campaigns, we have to be able to discern good sources of information. In other words, “How do we know whom to believe?”

Professor Sam Wineburg of Stanford University has been researching how professional fact checkers decide if a website is credible. They use a technique called “lateral reading.” In their paper on lateral reading, Professor Wineburg and his colleagues present the results of one study. The paper’s abstract tells us, “In a study conducted across an urban school district, we tested a classroom-based intervention in which students were taught online evaluation strategies drawn from research with professional fact checkers. Students practiced the heuristic of lateral reading: leaving an unfamiliar website to search the open Web before investing attention in the site at hand. Professional development was provided to high school teachers who then implemented six 50-minute lessons in a district mandated government course. Using a matched control design, students in treatment classrooms (n = 271) were compared to peers (n = 228) in regular classrooms. A multilevel linear mixed model showed that students in experimental classrooms grew significantly in their ability to judge the credibility of digital content. These findings inform efforts to prepare young people to make wise decisions about the information that darts across their screens.”

Professor Wineburg and Professor Sarah McGrew also produced this paper on lateral reading. The abstract for this paper tells us, “The Internet has democratized access to information but in so doing has opened the floodgates to misinformation, fake news, and rank propaganda masquerading as dispassionate analysis. To investigate how people determine the credibility of digital information, we sampled 45 individuals: 10 Ph.D. historians, 10 professional fact checkers, and 25 Stanford University undergraduates. We observed them as they evaluated live websites and searched for information on social and political issues. Historians and students often fell victim to easily manipulated features of websites, such as official-looking logos and domain names. They read vertically, staying within a website to evaluate its reliability. In contrast, fact checkers read laterally, leaving a site after a quick scan and opening up new browser tabs in order to judge the credibility of the original site. Compared to the other groups, fact checkers arrived at more warranted conclusions in a fraction of the time. We contrast insights gleaned from the fact checkers’ practices with common approaches to teaching web credibility.”

Other scholars at Stanford are also addressing disinformation and how to confront it, as this article tells us. “Over the past decade, the spread of disinformation online has become a problem facing the U.S. and the world. Increasingly, domestic and foreign adversaries have used it as a way to unleash chaos on democratic processes, upend democratic norms and weaken confidence in public institutions, according to Stanford scholars. While propaganda and disinformation have long been used by malign actors to intentionally mislead and manipulate the public, disinformation online can spread fast and far across networks anonymously, cheaply and efficiently, making it a challenging problem to address. The internet and social media platforms have become ‘weaponized’ to purposefully confuse, agitate and divide civil society, said Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator and former U.S. Ambassador to the UN Human Rights Council. ‘Democratic governments are now seized with the fact that digital information platforms have been exploited by malign actors to spread propaganda and disinformation, wreaking havoc on democratic elections and eroding trust in the digital information realm,’ said Donahoe in an online commentary published by Stanford’s Cyber Policy Center. Donahoe and Stanford scholars from across the social sciences are studying the threats disinformation poses to democracy and also other areas of public and private life, such as health and education. In many instances, researchers are providing specific recommendations for what governments, digital platforms and the public can do to counter its deleterious effects. Here are some of those findings and recommendations, as well as insight into the role disinformation played during the global pandemic and more recently, the Russian invasion of Ukraine.”

In discussing how to stop the spread of disinformation, the article tells us, “Stanford researchers at the Graduate School of Education have found that many high school students are unable to evaluate the credibility of information they read on the internet – a finding they describe as ‘troubling.’ ‘Reliable information is to civic health what proper sanitation and potable water are to public health. A polluted information supply imperils our nation’s civic health,’ said Sam Wineburg, the Margaret Jacks Professor of Education, Emeritus, and founder of the Stanford History Education Group (SHEG), in a report he co-authored with SHEG director Joel Breakstone, PhD ’13, and director of assessment Mark Smith, PhD ’14. Their research reveals just how unprepared high school students are to discern fact from fiction. In a survey with a national sample of high school students, they found that two-thirds of students were unable to distinguish between news stories and ads (despite being labeled ‘Sponsored Content’) and 52% of students believed a grainy video that claimed to show ballot stuffing in the 2016 Democratic primaries constituted ‘strong evidence’ of voter fraud in the U.S. when in actuality the video was filmed in Russia. ‘Education moves slowly. Technology doesn’t. If we don’t act with urgency, our students’ ability to engage in civic life will be the casualty,’ Wineburg and his colleagues wrote. Wineburg and his colleagues have created a free curriculum called Civic Online Reasoning for educators to teach students the skills they need to evaluate information online. The scholars found that a small intervention can lead to an outsized impact on students’ ability to judge the credibility of digital content.”

The article also has a large number of links to further information, which are very useful for us.

Here’s how to tell the difference between disinformation and misinformation: misinformation is false or misleading information shared without the intent to deceive, while disinformation is false information created and spread deliberately to mislead.

This article gives tips on how to spot fake information. “Amid the alarming images of Russia’s invasion of Ukraine over the past few days, millions of people have also seen misleading, manipulated or false information about the conflict on social media platforms such as Facebook, Twitter, TikTok and Telegram. Visuals, because of their persuasive potential and attention-grabbing nature, are an especially potent choice for those seeking to mislead. Where creating, editing or sharing inauthentic visual content isn’t satire or art, it is usually politically or economically motivated. Disinformation campaigns aim to distract, confuse, manipulate and sow division, discord, and uncertainty in the community. This is a common strategy for highly polarised nations where socioeconomic inequalities, disenfranchisement and propaganda are prevalent. How is this fake content created and spread, what’s being done to debunk it, and how can you ensure you don’t fall for it yourself?”

First, we look at common techniques of misinformation. “Using an existing photo or video and claiming it came from a different time or place is one of the most common forms of misinformation in this context. This requires no special software or technical skills – just a willingness to upload an old video of a missile attack or other arresting image, and describe it as new footage. Another low-tech option is to stage or pose actions or events and present them as reality. This was the case with destroyed vehicles that Russia claimed were bombed by Ukraine. Using a particular lens or vantage point can also change how the scene looks and can be used to deceive. A tight shot of people, for example, can make it hard to gauge how many were in a crowd, compared with an aerial shot. Taking things further still, Photoshop or equivalent software can be used to add or remove people or objects from a scene, or to crop elements out from a photograph. An example of object addition is the below photograph, which purports to show construction machinery outside a kindergarten in eastern Ukraine. The satirical text accompanying the image jokes about the ‘calibre of the construction machinery’ – the author suggesting that reports of damage to buildings from military ordnance are exaggerated or untrue. Close inspection reveals this image was digitally altered to include the machinery. This tweet could be seen as an attempt to downplay the extent of damage resulting from a Russian-backed missile attack, and in a wider context to create confusion and doubt as to the veracity of other images emerging from the conflict zone.”

Here’s what others are doing about it: “European organizations such as Bellingcat have begun compiling lists of dubious social media claims about the Russia-Ukraine conflict and debunking them where necessary. Journalists and fact-checkers are also working to verify content and raise awareness of known fakes. Large, well-resourced news outlets such as the BBC are also calling out misinformation. Social media platforms have added new labels to identify state-run media organizations or provide more background information about sources or people in your networks who have also shared a particular story. They have also tweaked their algorithms to change what content is amplified and have hired staff to spot and flag misleading content. Platforms are also doing some work behind the scenes to detect and publicly share information on state-linked information operations.”

And there are things we can do about it. “You can attempt to fact-check images for yourself rather than taking them at face value. An article we wrote late last year for the Australian Associated Press explains the fact-checking process at each stage: image creation, editing and distribution. Here are five simple steps you can take:

Examine the metadata

“This Telegram post claims Polish-speaking saboteurs attacked a sewage facility in an attempt to place a tank of chlorine for a ‘false flag’ attack. But the video’s metadata – the details about how and when the video was created – show it was filmed days before the alleged date of the incident. To check metadata for yourself, you can download the file and use software such as Adobe Photoshop or Bridge to examine it. Online metadata viewers also exist that allow you to check by using the image’s web link. One hurdle to this approach is that social media platforms such as Facebook and Twitter often strip the metadata from photos and videos when they are uploaded to their sites. In these cases, you can try requesting the original file or consulting fact-checking websites to see whether they have already verified or debunked the footage in question.”
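If you’d rather script this check than open Photoshop or Bridge, here is a minimal sketch in Python using the Pillow library (my choice of tool, not one named in the article). It prints whatever EXIF tags survive in a downloaded file; the filename is hypothetical, and, as the quote notes, many platforms strip this data on upload.

```python
# Minimal sketch: read the EXIF metadata of a downloaded image with Pillow.
# Assumes `pip install Pillow`; the filename below is hypothetical.
from PIL import ExifTags, Image


def print_exif(path: str) -> None:
    """Print whatever EXIF tags survive in the file (many platforms strip them)."""
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print("No EXIF metadata found; it may have been stripped on upload.")
            return
        for tag_id, value in exif.items():
            tag_name = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric IDs to names
            print(f"{tag_name}: {value}")


print_exif("suspect_video_frame.jpg")  # hypothetical local file
```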

Consult a fact-checking resource

“Organizations such as the Australian Associated Press, RMIT/ABC, Agence France-Presse (AFP) and Bellingcat maintain lists of fact-checks their teams have performed. The AFP has already debunked a video claiming to show an explosion from the current conflict in Ukraine as being from the 2020 port disaster in Beirut.”

Search more broadly

“If old content has been recycled and repurposed, you may be able to find the same footage used elsewhere. You can use Google Images or TinEye to ‘reverse image search’ a picture and see where else it appears online. But be aware that simple edits such as reversing the left-right orientation of an image can fool search engines and make them think the flipped image is new.”
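Because a simple left-right flip can throw off a reverse image search, it helps to see how far a mirrored copy drifts from the original. Here is a hedged sketch using the Pillow and imagehash packages (my choice of tools) to compare perceptual hashes of an image and its mirror; the filename is hypothetical.

```python
# Sketch: compare a perceptual hash of an image with that of its mirror image.
# Assumes `pip install Pillow imagehash`; the filename is hypothetical.
import imagehash
from PIL import Image, ImageOps

with Image.open("viral_frame.jpg") as img:  # hypothetical screenshot from a video
    original_hash = imagehash.phash(img)
    mirrored_hash = imagehash.phash(ImageOps.mirror(img))

# The Hamming distance shows how far apart the two hashes are. A plain flip can
# move an image well away from its original, which is why it can slip past a
# naive reverse image search; searching for both versions helps.
print("original:", original_hash)
print("mirrored:", mirrored_hash)
print("hamming distance:", original_hash - mirrored_hash)
```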

Look for inconsistencies

“Does the purported time of day match the direction of light you would expect at that time, for example? Do watches or clocks visible in the image correspond to the alleged timeline claimed? You can also compare other data points, such as politicians’ schedules or verified sightings, Google Earth vision or Google Maps imagery, to try and triangulate claims and see whether the details are consistent.”
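One of those checks, the direction of light, can be sanity-checked with a quick sun-position calculation. Here is a rough sketch using the astral package; the coordinates and timestamp are made up for illustration and are not tied to any real claim.

```python
# Sketch: where was the sun at the claimed time and place?
# Assumes `pip install astral`; the location and timestamp are made up.
from datetime import datetime, timezone

from astral import LocationInfo
from astral.sun import azimuth, elevation

place = LocationInfo(name="Kyiv", region="Ukraine", timezone="Europe/Kiev",
                     latitude=50.45, longitude=30.52)
claimed_time = datetime(2022, 2, 26, 14, 30, tzinfo=timezone.utc)  # hypothetical

az = azimuth(place.observer, claimed_time)    # compass bearing of the sun, in degrees
el = elevation(place.observer, claimed_time)  # height above the horizon, in degrees

print(f"Sun azimuth {az:.1f} degrees, elevation {el:.1f} degrees")
# If the footage shows long shadows pointing the wrong way for this bearing,
# or broad daylight when the sun should be below the horizon, that is a red flag.
```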

Ask yourself some simple questions

“Do you know where, when and why the photo or video was made? Do you know who made it, and whether what you’re looking at is the original version? Using online tools such as InVID or Forensically can potentially help answer some of these questions. Or you might like to refer to this list of 20 questions you can use to ‘interrogate’ social media footage with the right level of healthy scepticism. Ultimately, if you’re in doubt, don’t share or repeat claims that haven’t been published by a reputable source such as an international news organization. And consider using some of these principles when deciding which sources to trust.”

This article from the Washington Post gives some more tips. “Anyone with a phone and an Internet connection is able to watch the war in Ukraine unfold live online, or at least some version of it. Across social media, posts are flying up faster than most fact-checkers and moderators can handle, and they’re an unpredictable mix of true, fake, out of context and outright propaganda messages. How do you know what to trust, what not to share and what to report? Tech companies have said they’re trying to do more to help users spot misinformation about Ukraine, with labels and fact checking. On Saturday, Facebook parent company Meta announced it was adding more fact-checkers in the region dedicated to posts about the war. It’s also warning users who attempt to share war-related photos when they’re more than a year old — a common type of misinformation. Here are some basic tools everyone should use when consuming breaking news online.”

Slow down

“Do not hit that share button. Social media is built for things to go viral, for users to quickly retweet before they’re even done reading the words they’re amplifying. No matter how devastating, enlightening or enraging a TikTok, tweet or YouTube video is, you must wait before passing it on to your own network. Assume everything is suspect until you confirm its authenticity.”

Check the source

“Look at who is sharing the information. If it’s from friends or family members, don’t trust the posts unless they are personally on the ground or a confirmed expert. If it’s a stranger or organization, remember that a verified check mark or being well-known does not make an account trustworthy. There are plenty of political pundits and big-name Internet characters who are posting inaccurate information right now, and it’s on you to approach each post with skepticism. If the account posting is not the source of the words or images, investigate where it came from by digging back to find the original Facebook, YouTube or Twitter account that first shared it. If you can’t determine the origin of something, that’s a red flag. Be wary of things such as memes or screenshots, which can be even harder to pin down, or anything that elicits an especially strong emotional reaction. Disinformation can prey on that type of response to spread. When screening individual accounts, look at the date it was created, which should be listed in the profile. Be wary of anything extremely new (say, it started in the past few months) or with very few followers. For a website, you can see what year it was started on Google. Search for the name of the site, then click on the three vertical dots next to the URL in the results to see what date it was first indexed by the search engine. Again, avoid anything too new. And don’t skip the basics: Do a Google search for the person or organization’s name.”
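Beyond Google’s index date, a domain’s WHOIS registration record gives another rough sense of a site’s age. Here is a small sketch using the python-whois package (an assumption on my part, not a tool the article names); the domain is only a placeholder.

```python
# Sketch: use WHOIS registration data as a rough proxy for how old a site is.
# Assumes `pip install python-whois`; the domain below is just a placeholder.
import whois

record = whois.whois("example.com")
created = record.creation_date
if isinstance(created, list):  # some registries return several dates
    created = min(created)

print("Domain registered:", created)
# A domain registered only weeks ago that presents itself as an established
# news outlet deserves extra scrutiny before you trust or share it.
```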

Make a collection of trusted sources

“Doing mini background checks on every random Twitter account is extremely time-consuming, especially with new content coming from so many places simultaneously. Instead, trust the professionals. Legitimate mainstream news organizations are built to vet these things for you, and often do report on the same videos or photos taken by real people after they’ve confirmed their origin. Use a dedicated news tool such as Apple News, Google News or Yahoo News, which choose established sources and have some moderation built in. On social media, make or find lists of vetted experts and outlets to follow specifically for news about Ukraine. One of the best ways to consume breaking news on Twitter, for example, is to follow verified reporters from trusted outlets who are on the ground. On Twitter’s mobile app, you can add one of these lists and swipe to the right from your home screen to see it at any time.”

Seek out context

“There are thousands of legitimate posts coming out of Ukraine, including real videos of troops and first-person narratives from locals. Even if you see only real posts, it can still be confusing or misleading. Try to augment all these one-off clips or stories with broader context about what is happening. They may be the most compelling pieces of a puzzle, but they are not the whole picture. Mix in information from established experts on foreign policy, cyberwarfare, history and politics, or turn to online or television outlets that do this for most stories.”

Vet videos and images

“If you’re interested in doing deeper dives into things you see, start with this extensive guide on how to screen videos. Look for multiple edits and odd cuts, listen closely to the audio and run it through a third-party tool such as InVid, which helps check the authenticity of videos. This can be harder on live-streamed videos, like what’s on Twitch or any other live social media option. To check images, put them into Google’s image search by grabbing a screenshot and dragging it to the search field. If it’s an old image that’s circulated before, you may see telling results.”

Use fact-checking sites and tools

“Social media sites do have some of their own fact-checking tools or warning labels, and many have announced they’re adding resources and labels specifically for misinformation about the war. However, given the sheer volume of posts they’re dealing with, a problematic video can be seen by millions before ever getting flagged. Keep an eye out for content warnings on social media sites for individual posts, which can appear as labels below links or as warnings before you post something that could be misleading. Look up individual stories or images on fact-checking sites such as The Washington Post’s Fact Checker, Snopes and PolitiFact.”

John and Hank Green have a Crash Course on “Navigating Digital Information” to help us evaluate internet sources and data. It’s a playlist of eleven videos full of superb guidance on how to do just that.

This article from the Harvard University Graduate School of Education gives tips for teachers on how to teach students to navigate the web and evaluate their sources. “People of all ages struggle to evaluate the integrity of the digital information that rains down with every web search and social media scroll. When the Stanford History Education Group released findings showing that most students couldn’t tell sponsored ads from real articles, among other miscues, it intensified the scramble for tools and strategies to help students discern better. But a more recent study by Stanford’s Sam Wineburg and Sarah McGrew suggests that many of the techniques that students and teachers employ — which include checklists and other practices most recommended for digital literacy — are often misleading. A better solution for navigating our cluttered online environment, they say, can be found in the practices of professional fact-checkers. Their approach, which harnesses the power of the web to determine trustworthiness, is more likely to expose dubious information.”

Here are some tools and tips the article gives us:

Read Laterally, Not Vertically

“Wineburg and McGrew followed three groups of readers as they evaluated digital sources provided in the study: historians, Stanford undergraduates, and professional fact-checkers. They found that the fact-checkers were fastest and most accurate in vetting information, while the historians and students were easily deceived. The student participants did something they (and all of us) do often: they scrolled and read down the page. But their close reading of the very sources they were tasked to interrogate did little to advance their credibility assessment. Instead, it misled them. ‘The close reading of a digital source, when one doesn’t yet know if the source can be trusted,’ write Wineburg and McGrew, ‘proves a colossal waste of time.’ As students meandered and fluttered across their screens, they were drawn to websites’ most easily manipulated features — like scientific-sounding language or the presence of an ‘About Us’ page. Their grounds for inferring trustworthiness were largely centered on these incomplete evaluations, and they frequently misjudged websites’ origins and reliability. Unlike the student participants in the study, the professional fact-checkers began their evaluations by opening new tabs in their browser. They conducted refined searches, and consulted other sources with well-established credentials, to judge the integrity of the original website. This inclination to take bearings and gain a sense of direction fed fact-checkers’ success in the study. They often needed less prompting than historians and students, and learned far more by reading less. Takeaway: Encourage students to take the indirect route and begin their investigation of unfamiliar digital sources by leaving them. When students read laterally, they will avoid diving too deep into the actual content of the website in question and gain a wider, more impartial view of its credibility.

Don’t Fall for Appearances

“Students’ more superficial evaluations of digital sources are evidence of what Wineburg and McGrew call the ‘representativeness heuristic’ — the tendency to evaluate probabilities by the degree to which A resembles B. It’s easy for cognitive bias to take over in such scenarios. For the great majority of the study’s student participants, this reliance on appearance determined their perception of given sources and created a ‘false sense of security.’ They were drawn to website layouts, abstracts, references, and, in one case, a .org domain — all elements that may easily meet the requirements of a checklist approach to verifying a digital source. ‘[Fact-checkers] understood the web as a maze filled with trap doors and blind alleys, where things are not always as they seem,’ Wineburg and McGrew write. ‘Their stance toward the unfamiliar was cautious: while things may be as they seem, in the words of Checker D, “I always want to make sure.”’ Takeaway: Communicate to students that more thorough evaluations, like those lateral reading allows, are crucial to establishing the trustworthiness of digital information.

Practice ‘Click Restraint’

“While engaging in lateral reading, fact-checkers also exercised what Wineburg and McGrew call ‘click restraint.’ They took more time than historians and students to sort through search results and, though slower to reach their conclusions, were the most selective and most accurate in assessing the integrity of sources. ‘Fact-checkers possessed knowledge of online structures,’ write the researchers. ‘They knew that the first result was not necessarily the most authoritative, and they spent time scrolling through results.’ Scanning through Google snippets, fact-checkers were able to bypass massive amounts of material and focused on credible information from news organizations like the New York Times and the Washington Post. Students, on the other hand, were far less strategic and ‘meandered to different parts of the site [itself], making decisions about where to click based on aspects that struck their fancy.’ Takeaway: When you encourage students to read laterally, you should also remind them to exercise restraint and avoid promiscuous clicking. Speed shouldn’t come at the expense of quality verifying — but more efficient, lateral reading will really make the mere minutes most spend searching count.”
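Click restraint is mostly a habit of mind, but a little tooling can support the URL-scrutiny part of it. As a hedged illustration, this sketch uses the tldextract package (my choice) to pull the registered domain out of each address before you click, since lookalike links often bury a familiar name inside a domain someone else controls. Both URLs are invented for illustration.

```python
# Sketch: extract the registered domain from a URL before deciding to click.
# The registered domain is much harder to spoof than subdomains or paths.
# Assumes `pip install tldextract`; both URLs are invented examples.
import tldextract

candidates = [
    "https://www.nytimes.com/2022/02/26/world/europe/ukraine.html",
    "https://nytimes.com.breaking-news-today.net/ukraine.html",  # lookalike
]

for url in candidates:
    parts = tldextract.extract(url)
    registered = f"{parts.domain}.{parts.suffix}"
    print(f"{url}\n  registered domain: {registered}\n")
```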

The article also gives us some technical tips for checking facts:

Technical Tips for Fact-Checking

  • “Teach students how to open multiple pages within one window by right clicking. This will allow them to examine multiple sources of information faster.
  • “Introduce students to new verification vocabulary, which should guide them in lateral reading. The provenance, source, date, and location of capture for a digital source (especially photos and videos) are crucial to determining its integrity. First Draft outlines these criteria for evaluations in its one-hour, free verification course — also available in Spanish and Portuguese.
  • “Show students how to conduct tailored searches. They can place the name of an organization or website in quotation marks and add keywords to avoid fruitless results.”
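Putting the first and third tips together, here is a small sketch that uses only Python’s standard library to build a few quoted lateral-reading searches about an organization and open each one in its own browser tab. The organization comes from the American Federation of Teachers article quoted below; the keywords are merely examples.

```python
# Sketch: open several tailored lateral-reading searches in new browser tabs.
# Standard library only; the organization and keywords are just examples.
import webbrowser
from urllib.parse import urlencode

organization = "Employment Policies Institute"
keywords = ["funding", "who is behind", "criticism"]

for kw in keywords:
    query = f'"{organization}" {kw}'  # quotation marks force an exact phrase
    url = "https://www.google.com/search?" + urlencode({"q": query})
    webbrowser.open_new_tab(url)
```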

This article from the American Federation of Teachers gives us more great information: “Fake news is certainly a problem. Sadly, however, it’s not our biggest. Fact-checking organizations like Snopes and PolitiFact can help us detect canards invented by enterprising Macedonian teenagers, but the Internet is filled with content that defies labels like ‘fake’ or ‘real.’ Determining who’s behind information and whether it’s worthy of our trust is more complex than a true/false dichotomy. For every social issue, there are websites that blast half-true headlines, manipulate data, and advance partisan agendas. Some of these sites are transparent about who runs them and whom they represent. Others conceal their backing, portraying themselves as grassroots efforts when, in reality, they’re front groups for commercial or political interests. This doesn’t necessarily mean their information is false. But citizens trying to make decisions about, say, genetically modified foods should know whether a biotechnology company is behind the information they’re reading. Understanding where information comes from and who’s responsible for it are essential in making judgments of credibility. … Between January 2015 and June 2016, we administered 56 tasks to students across 12 states. (To see sample items, go to http://sheg.stanford.edu) We collected and analyzed 7,804 student responses. Our sites for field-testing included middle and high schools in inner-city Los Angeles and suburban schools outside of Minneapolis. We also administered tasks to college-level students at six different universities that ranged from Stanford University, a school that rejects 94 percent of its applicants, to large state universities that admit the majority of students who apply. When thousands of students respond to dozens of tasks, we can expect many variations. That was certainly the case in our experience. However, at each level—middle school, high school, and college—these variations paled in comparison to a stunning and dismaying consistency. Overall, young people’s ability to reason about information on the Internet can be summed up in two words: needs improvement. Our ‘digital natives’ may be able to flit between Facebook and Twitter while simultaneously uploading a selfie to Instagram and texting a friend. But when it comes to evaluating information that flows through social media channels, they’re easily duped. Our exercises were not designed to assign letter grades or make hairsplitting distinctions between ‘good’ and ‘better.’ Rather, at each level, we sought to establish a reasonable bar that was within reach of middle school, high school, or college students. At each level, students fell far below the bar. In what follows, we describe three of our assessments. Our findings are troubling. Yet we believe that gauging students’ ability to evaluate online content is the first step in figuring out how best to support them.”

The article also tells us, “Our findings show that many young people lack the skills to distinguish reliable from misleading information. If they fall victim to misinformation, the consequences may be dire. Credible information is to civic engagement what clean air and water are to public health. If students cannot determine what is trustworthy—if they take all information at face value without considering where it comes from—democratic decision-making is imperiled. The quality of our decisions is directly affected by the quality of information on which they are based.”

Here’s what we should do:

1. “Teach students to read laterally. Fact checkers approached unfamiliar content in a completely different way. They read laterally, hopping off an unfamiliar site almost immediately, opening new tabs, and investigating outside the site itself. They left a site in order to learn more about it. This may seem paradoxical, but it allowed fact checkers to leverage the strength of the entire Internet to get a fix on one node in its expansive web. A site like epionline.org stands up quite well to a close internal inspection: it’s well designed, clearly and convincingly written (if a bit short on details), and links to respected journalistic outlets. But a bit of lateral reading paints a different picture. Multiple stories come up in a search for the Employment Policies Institute that reveal the organization (and its creation, minimumwage.com) as the work of a Washington, D.C., public relations firm that represents the hotel and restaurant industries.

2. “Help students make smarter selections from search results. In an open search, the first site we click matters. Our first impulse might send us down a road of further links, or, if we’re in a hurry, it might be the only venue we consult. Like the rest of us, fact checkers relied on Google. But instead of equating placement in search results with trustworthiness (the mistaken belief that the higher up a result, the more reliable), as college students tend to do, fact checkers understood how easily Google results can be gamed. Instead of mindlessly clicking on the first or second result, they exhibited click restraint, taking their time on search results, scrutinizing URLs and snippets (the short sentence accompanying each result) for clues. They regularly scrolled down to the bottom of the results page, sometimes even to the second or third page, before clicking on a result.

3. “Teach students to use Wikipedia wisely. You read right: Wikipedia. Fact checkers’ first stop was often a site many educators tell students to avoid. What we should be doing instead is teaching students what fact checkers know about Wikipedia and helping them take advantage of the resources of the fifth-most trafficked site on the web. Students should learn about Wikipedia’s standards of verifiability and how to harvest entries for links to reliable sources. They should investigate Wikipedia’s ‘Talk’ pages (the tab hiding in plain sight next to the ‘Article’ tab), which, on contentious issues like gun control, the status of Kashmir, waterboarding, or climate change, are gold mines where students can see knowledge-making in action. And they should practice using Wikipedia as a resource for lateral reading. Fact checkers, short on time, often skipped the main article and headed straight to the references, clicking on a link to a more established venue. Why spend 15 minutes having students, armed with a checklist, evaluate a website on a tree octopus (www.zapatopi.net/treeoctopus) when a few seconds on Wikipedia shows it to be ‘an Internet hoax created in 1998’? While we’re on the subject of octopi: a popular approach to teaching students to evaluate online information is to expose them to hoax websites like the Pacific Northwest Tree Octopus. The logic behind this activity is that if students can see how easily they’re duped, they’ll become more savvy consumers. But hoaxes constitute a minuscule fraction of what exists on the web. If we limit our digital literacy lessons to such sites, we create the false impression that establishing credibility is an either-or decision—if it’s real, I can trust it; if it’s not, I can’t. Instead, most of our online time is spent in a blurry gray zone where sites are real (and have real agendas) and decisions about whether to trust them are complex. Spend five minutes exploring any issue—from private prisons to a tax on sugary drinks—and you’ll find sites that mask their agendas alongside those that are forthcoming. We should devote our time to helping students evaluate such sites instead of limiting them to hoaxes.”
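For teachers who want to show students how to harvest a Wikipedia entry’s references, here is a hedged sketch that asks the public MediaWiki API for the external links cited on an article. The page title (the tree-octopus hoax mentioned above) is only an example, and this is one possible approach rather than anything the authors prescribe.

```python
# Sketch: pull the external links cited on a Wikipedia article via the public
# MediaWiki API, as a jumping-off point for lateral reading.
# Assumes `pip install requests`; the page title is only an example.
import requests


def wikipedia_external_links(title: str, limit: int = 20) -> list[str]:
    """Return up to `limit` external links cited on the given Wikipedia page."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "extlinks",
            "titles": title,
            "ellimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    links = []
    for page in pages.values():
        for link in page.get("extlinks", []):
            links.append(link["*"])
    return links


for url in wikipedia_external_links("Pacific Northwest tree octopus"):
    print(url)
```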

According to the article, “The senior fact checker at a national publication told us what she tells her staff: ‘The greatest enemy of fact checking is hubris’—that is, having excessive trust in one’s ability to accurately pass judgment on an unfamiliar website. Even on seemingly innocuous topics, the fact checker says to herself, ‘This seems official; it may be or may not be. I’d better check.’ The strategies we recommend here are ways to fend off hubris. They remind us that our eyes deceive, and that we, too, can fall prey to professional-looking graphics, strings of academic references, and the allure of ‘.org’ domains. Our approach does not turn students into cynics. It does the opposite: it provides them with a dose of humility. It helps them understand that they are fallible. The web is a sophisticated place, and all of us are susceptible to being taken in. Like hikers using a compass to make their way through the wilderness, we need a few powerful and flexible strategies for getting our bearings, gaining a sense of where we’ve landed, and deciding how to move forward through treacherous online terrain. Rather than having students slog through strings of questions about easily manipulated features, we should be teaching them that the World Wide Web is, in the words of web-literacy expert Mike Caulfield, ‘a web, and the way to establish authority and truth on the web is to use the web-like properties of it.’ This is what professional fact checkers do. It’s what we should be teaching our students to do as well.”

This post tells us about a new tool in Google we can use. “When you search for information on Google, you probably often come across results from sources that you’re familiar with: major retailer websites, national news sites and more. But there’s also a ton of great information on and services available from sites that you may not have come across before. And while you can always use Google to do some additional research about those sites, we’re working on a new way for you to find helpful info without having to do another search. Starting today, next to most results on Google, you’ll begin to see a menu icon that you can tap to learn more about the result or feature and where the information is coming from. With this additional context, you can make a more informed decision about the sites you may want to visit and what results will be most useful for you. When available, you’ll see a description of the website from Wikipedia, which provides free, reliable information about tens of millions of sites on the web. Based on Wikipedia’s open editing model, which relies on thousands of global volunteers to add content, these descriptions will provide the most up-to-date verified and sourced information available on Wikipedia about the site. If it’s a site you haven’t heard of before, that additional information can give you context or peace of mind, especially if you’re looking for something important, like health or financial information. If a website doesn’t have a Wikipedia description, we’ll show you additional context that may be available, such as when Google first indexed the site. For the features Google provides to organize different types of information, like job listings or local business listings, you’ll see a description about how Google sources that information from sites on the web, or from businesses themselves, and presents it in a helpful format. You’ll also be able to quickly see if your connection to the site is secure based on its use of the HTTPS protocol, which encrypts all data between the website and the browser you’re using, to help you stay safe as you browse the web. And if you need quick access to your privacy settings, or just want to learn a bit more about how Google Search works, links to resources are just a tap or click away.”
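You can approximate two pieces of that panel yourself: a short Wikipedia description of a source and a check that the connection uses HTTPS. Here is a rough sketch using the requests package and Wikipedia’s public page-summary endpoint; the site and article title are examples, and matching a site to the correct Wikipedia article is left to you.

```python
# Sketch: a do-it-yourself version of two pieces of the "about this result"
# panel: a short Wikipedia description of the source, plus an HTTPS check.
# Assumes `pip install requests`; the site and article title are examples.
from urllib.parse import urlparse

import requests

site_url = "https://www.reuters.com/"  # example source being evaluated
wikipedia_title = "Reuters"            # assumed matching Wikipedia article

# 1. Is the connection encrypted with HTTPS?
print("Uses HTTPS:", urlparse(site_url).scheme == "https")

# 2. What does Wikipedia say about this source?
resp = requests.get(
    f"https://en.wikipedia.org/api/rest_v1/page/summary/{wikipedia_title}",
    timeout=10,
)
if resp.ok:
    print(resp.json().get("extract", "No summary available."))
else:
    print("No Wikipedia article found for this title.")
```

None of this replaces the habits described above: lateral reading, click restraint, and a healthy dose of humility. It just makes the first lateral step a little faster.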
