I wrote a post a few weeks ago about how even smart people often fail to discern poor-quality information from the good stuff.
It turns out that my cousin, Todd Quinn, a business and economics librarian at the University of New Mexico, deals with this issue every day and even teaches classes to help folks frame their research — what comes out of the other end of the Google Search pipe — with context.
I asked Todd to share his wisdom on the topic, which follows.
By Todd Quinn
Associate Professor and Business/Economics Librarian, University of New Mexico
A few weeks ago, Lou posted a story explaining that even smart people have trouble evaluating information. His post linked to a piece on Nieman Lab, which in turn linked to a 56-page working paper from Stanford.
This information is nothing new to professional librarians. We help people evaluate information every day.
Many years ago, I worked summers for a small company, a purveyor of foods for upscale restaurants. Basically, I worked in a large warehouse, and my job had two parts: 1) based on orders, find items (dry goods, refrigerated goods, frozen goods, etc.) in the warehouse and get them ready for delivery, and 2) deliver orders throughout the city of Pittsburgh. I was a novice at both.
Some items (especially bulk items) were very easy to find, but many were small, odd items I never knew existed, items with unfamiliar names, and so on. I would do my best, but many times I could not find the item (e.g., imported brie from France) even when I was in the proper section of the warehouse (a real joy when that section was the large walk-in freezer). Inevitably, I would need to ask the foreman to help me locate the item. He could scan all the boxes and find it in seconds. He knew how to find it because the item or box was a specific shape and color; he knew it was buried under or behind other specific boxes; he understood the organization of the warehouse.
Deliveries were another issue. I would be sent out for deliveries to locations I had never visited (this was before smartphones) with directions, but no map. It was not only finding the specific location, but also finding the loading dock, navigating traffic, etc. Of course, I got lost many times. Over the summers I did learn the location of most items in the warehouse and hardly got lost during deliveries, but it took time.
I offer these two examples as ways to think about searching for information online, evaluating it for quality, and determining whether it meets one’s information need. As one of my colleagues has pointed out, when professional librarians view Google search results, we can “see” immediately in the results different types of information: webpages, reports, books, news articles, research articles, etc.
Each of these types/formats conveys information about how it was produced, who produced it, and the effort that went into producing it, among other things. Most people see only webpages and do not notice the distinctions. Our goal is to help people understand different kinds of information, the organization of information, and the process of knowledge creation. We want students, and others, to understand how to search in different environments, why to select one search tool over another, who is most likely to produce specific information, and how to evaluate the information. Evaluation is hard since all information is created by humans, and humans are inherently biased.
This takes time to master. A checklist or general rules for evaluation are just starting points. If you have never thought about the sites you visit or evaluated them deeply, it may take 15-20 minutes to truly assess one. It helps to separate the content (e.g., articles, posts, videos) from the context (the author or organization, the URL, the design, how you found it, etc.). But if you use a process, over time you will be able to see the value, or lack of value, of a site in seconds. This is especially true if you are always looking for information in a specific field or discipline.
Any evaluation standard can be gamed by a site, author or organization, which is why you rely on your own experience and expertise, along with looking for multiple sources on any topic. For example, Lou is well-versed in the public relations field. He knows the names of his competitors, their areas of expertise, and the trade journals in the field; he knows which publish quality journalism, which publish mostly puff pieces, and which sites provide quality information; and he knows how to read between the lines of news items in his areas of expertise. This knowledge took years for him to develop, and I doubt he is even aware that he is drawing on that experience and expertise while searching for and evaluating information for his work. But what if he goes outside his field of expertise? For example, how long would it take him to find and evaluate information on education policy, health, or climate change? Whom would he trust?
Google Search is useful and always returns results, but that does not mean quality rises to the top. There is no clean search from any search engine. Google’s algorithm uses your location, your past searches, your browser history, payments from companies, and the popularity of sites (its original ranking signal) to shape your search results. Using multiple sources of content is ideal for understanding a topic. By the way, how many people use Google’s Advanced Search, or even know it exists? It gives you more control over your search results and lessens the effects of the gamed system.
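For readers who have never tried it, much of what Advanced Search does can also be typed directly into the ordinary search box using a few long-documented operators. A handful of examples (worth double-checking against Google’s current help pages, since supported operators change over time):

```text
"information literacy" site:edu        limit an exact phrase to .edu domains
annual report filetype:pdf             return only PDF documents (often full reports)
"media literacy" OR "news literacy"    match either phrase
vaccine safety -site:example.com       exclude results from one site (hypothetical domain)
```

Narrowing a search this way is one small step toward seeing the context of a result — where it lives and what kind of document it is — before you ever read the content.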
So evaluation starts with asking yourself what information you seek (beyond general facts). Who is most likely to produce it? Does the site have an About link explaining itself? Is the person or organization an expert in the field or discipline? Is it an opinion piece or a news piece? What does the domain (.edu, .org, .com, .gov, .ca.gov, .ru) suggest about the content? If a site or organization is new to you, it is best to review all of these things before the content itself. And use multiple sources of content.
I used to show students a few sites related to cloning, one of which was from a pro-cloning organization. It did not hide that fact, but there were clues all over the site to make one wary of its content. I could see that it purposely did not name its board of directors, co-opted well-known people’s names, and displayed meaningless images — and this was before reading the content. Most of the content was poorly written by non-experts. Unfortunately, there are many sites on a variety of subjects that are more sophisticated and purposely misleading.
The only silver bullets I have: use multiple sources, and evaluate both the context and the content. The less familiar you are with a topic, the more time you need for evaluation.