The Search Engines' Continuing Search For Quality


Synopsis — As someone who closely follows the patent filings of the search engines, Bill Slawski is the person to ask when you want to know how the items discussed in those filings will influence — and have influenced — the algorithms that decide how sites are ranked in the search engine results pages. In this article, Bill looks at the continual effort at Google and the other search engines to improve the quality of their search results. The name of the game is “relevancy,” and in “The Search Engines' Continuing Search For Quality,” Bill uncovers items in recent patents filed by Google, Microsoft, and Yahoo that may provide clues to how Panda has changed things and what might be coming in the near future.

The complete article follows:

The Search Engines' Continuing Search For Quality

How does one measure and define quality? It’s a question that the search engineers at Google, Yahoo, and Bing have been exploring in one fashion or another over the past few years, and one that’s integral to their business. Google’s Panda updates this year have focused upon presenting quality content to searchers, though there are owners of very reputable businesses who claim to have been victims of Google’s algorithms. Yahoo recently published a patent application that describes different levels of “search success” in delivering the best results to searchers, based upon a number of different metrics. Microsoft also published a patent that explores when a searcher might become dissatisfied with search results, in order to predict when they might switch to a different search engine.

Yet quality is a difficult thing to predict or even define.

Defining Relevance

Search engines have ranked pages within their search results in the past through a combination of signals that look at things such as popularity and relevance, and yet relevance itself is a tough term to define.

Rutgers professor Tefko Saracevic has been writing about relevance for more than 35 years. In 1975, he wrote “Relevance: A Review of the Literature and a Framework for Thinking on the Notion in Information Science,” and he updated it in 2007, classifying relevance in information science under a number of different models. The models are interesting because we can see different behaviors from the search engines that echo them. Here’s a very brief summary of some of the models of relevance discussed.

1. Relevance to a query — For example, do the documents that a search engine returns to a searcher contain the words found within the query itself? Does it try to rank higher the pages that contain those words in the same order and adjacent to each other? Do those documents contain reasonable synonyms within the context of the query? (A toy scoring sketch follows this list.)

2. Topical or subject relevance — Are both the pages being returned by a search engine and the query about the same topic or subject? Might they fall under the same classification or category? If someone searches for “sushi bars,” is it better for the search engine to return a list of Japanese restaurants that have sushi bars or restaurant supply stores that might sell sushi bars?

3. Cognitive relevance or pertinence — Do the documents returned meet some informational need inferred from the query? For example, someone may search for “diabetes” hoping to find information about the disease, its symptoms, its treatment, medical journal articles about it, and more. They aren’t just looking for pages that contain the word diabetes, or pages within similar categories, or about the same topic.

4. Situational relevance or utility — Here, a searcher is hoping to solve some task or problem at hand. The answer could be as simple as how to build a patio deck or where to find lunch when a query of “pizza” is entered into a search box.

5. Affective relevance — This type of relevance may overlap some of the others, but it is an attempt to return documents that help meet the intents or goals of the person submitting a query. A search engine returning a number of local search results for pizza places on a search for “pizza” is an example. Another may be someone searching for “how to xxxxxx,” with the search engine focusing upon returning highly informational pages or tutorials rather than pages from commercial sites. Or, when a searcher enters a simple search for a term like “sneakers,” they are likely to be more interested in buying sneakers than learning about the history of tennis shoes.
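
To make the first of those models concrete, here is a toy Python sketch of query relevance scoring: exact term matches, a bonus for the query words appearing in order and adjacent to each other, and partial credit for synonyms. The weights and the synonym list are invented purely for illustration; real ranking functions weigh far more evidence than this.

    # A toy scorer for the "relevance to a query" model: term matches,
    # an adjacency bonus for the query words appearing in order, and
    # partial credit for synonyms. All weights here are invented.

    SYNONYMS = {"sneakers": {"trainers", "tennis shoes"}}

    def query_relevance(query: str, document: str) -> float:
        q_terms = query.lower().split()
        doc = document.lower()
        d_terms = doc.split()

        score = 0.0
        for term in q_terms:
            if term in d_terms:
                score += 1.0  # exact term match
            elif any(s in doc for s in SYNONYMS.get(term, ())):
                score += 0.5  # synonym gets partial credit

        if " ".join(q_terms) in doc:
            score += 2.0  # whole phrase present, in order and adjacent

        return score

    print(query_relevance("sushi bars", "one of the best sushi bars in town"))   # 4.0
    print(query_relevance("sushi bars", "we sell bars and counters for sushi"))  # 2.0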

The quality of search results in large part relies upon the kind of relevance that a searcher might have had in mind when trying to use that search engine. A page appearing in search results that isn’t seen as relevant to a query might not be seen as a quality result either.

Rejecting Annoying Pages

When you’re working on sites as large as Google or Yahoo or Bing, one of the biggest challenges you face is handling the scale of work before you. Google’s income relies on presenting advertisements in combination with search results and on pages that feature Google AdSense. If the search engine had to have someone approve every advertisement and every potential landing page for those ads, it would be too labor-intensive to be profitable and too slow in approvals to be attractive to advertisers.

Google has various methods, both in place and planned, that address this problem. For example, Google was granted the patent “Detecting and rejecting annoying documents” (US Patent 7,971,137) this past June, which presents an automated system for evaluating documents such as ads and landing pages for approval, rejection, and/or rating. The patent contains a laundry list of the kinds of features that Google wants advertisers to avoid in their advertisements, from poorly created images to pictures that flash, strobe, or involve repetitive movements. It looks at a wide range of textual features, subjects, and topics, as well as potentially misleading features designed to trick users and content that might be found offensive or unwanted.
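
While the patent describes far more signals than could fit here, and Google hasn’t published the rules it actually applies, a minimal sketch of that kind of automated screening might look like the following, with every feature and threshold invented for illustration:

    # A minimal, hypothetical sketch of automated ad screening in the
    # spirit of the "annoying documents" patent. The features and
    # thresholds are invented; the patent lists many more signals.

    from dataclasses import dataclass

    @dataclass
    class AdFeatures:
        has_flashing_images: bool     # strobing or repetitive movement
        all_caps_ratio: float         # fraction of the ad text in ALL CAPS
        misleading_ui_elements: int   # e.g., fake close buttons

    def review_ad(ad: AdFeatures) -> str:
        reasons = []
        if ad.has_flashing_images:
            reasons.append("flashing or strobing imagery")
        if ad.all_caps_ratio > 0.5:
            reasons.append("excessive capitalization")
        if ad.misleading_ui_elements > 0:
            reasons.append("deceptive interface elements")
        return "rejected: " + "; ".join(reasons) if reasons else "approved"

    print(review_ad(AdFeatures(True, 0.7, 1)))   # rejected, three reasons
    print(review_ad(AdFeatures(False, 0.1, 0)))  # approved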

Google has also given out quality scores to advertisers based upon a range of factors, such as historical clickthrough rate, account history, relevance of keywords to ads in an ad group, and the quality of a landing page. Google’s help page on the quality of landing pages stresses three main areas: (1) that the page features relevant and original content; (2) that the business behind the ad is transparent; and (3) that the landing page and the other pages on the path toward whatever was advertised are easy to navigate.
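
Google has never published the formula or weights behind these advertiser quality scores, but a hypothetical blend of the factors named above might look something like this sketch, with all of the numbers invented:

    # A hypothetical weighted blend of the quality score factors named
    # above. Google has never disclosed its actual formula or weights;
    # these numbers exist only to make the idea concrete.

    def ad_quality_score(ctr: float, account_history: float,
                         keyword_relevance: float, landing_page: float) -> float:
        """Each input is normalized to [0, 1]; returns a 1-10 style score."""
        blended = (0.40 * ctr +
                   0.15 * account_history +
                   0.20 * keyword_relevance +
                   0.25 * landing_page)
        return round(1 + 9 * blended, 1)

    print(ad_quality_score(ctr=0.8, account_history=0.9,
                           keyword_relevance=0.7, landing_page=0.6))  # 7.7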

In a Google patent application filed in 2009, and not yet granted, some Google search engineers also spelled out quality score considerations for website publishers who might be interested in displaying ads from Google AdSense advertisers. The quality scores for the pages those ads appear upon would influence how much the publishers might earn from the ads, with a premium paid for pages with very high quality scores. The criteria for these quality scores would include such things as the items below (a rough sketch of how a score might translate into earnings follows the list):

  • Established facts rather than controversial opinions
  • Authoritativeness
  • Verifiability
  • Entertainment value
  • Grammatical accuracy
  • Educational value
  • Timeliness
  • Aesthetic quality
  • Originality
  • Cohesiveness
  • Reputation
  • Informational value
  • Search ranking
  • Popularity
  • Server responsiveness, or
  • Other quality criteria
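
The patent application doesn’t spell out how such a score would translate into earnings, only that pages with very high scores would command a premium. A hypothetical mapping, with invented tiers, might look like this:

    # A hypothetical mapping from a publisher page's quality score to a
    # revenue-share multiplier, following the patent application's idea
    # that very high-quality pages earn a premium. The tiers are invented.

    def revenue_multiplier(quality_score: float) -> float:
        """quality_score in [0, 1], e.g., an average over criteria like those above."""
        if quality_score >= 0.9:
            return 1.25   # premium for very high quality
        if quality_score >= 0.6:
            return 1.0    # standard share
        return 0.75       # reduced share for low-quality pages

    base_earnings = 100.00
    print(base_earnings * revenue_multiplier(0.95))  # 125.0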

Google Panda Stirs Things Up

When Google discussed the first Panda update on February 24, 2011, in a post on their official blog titled “Finding more high-quality sites in search,” they reported that the update impacted approximately 11.8% of all of Google’s queries, focusing upon sites “which are low-value add for users, copy content from other websites or sites that are just not very useful.”

A follow-up Google Blog post about six weeks later (“High-quality sites algorithm goes global, incorporates user feedback”) both commented on the success of the original Panda update and acknowledged that some collateral damage may have happened to some “high quality” sites:

“Based on our testing, we’ve found the algorithm is very accurate at detecting site quality. If you believe your site is high-quality and has been impacted by this change, we encourage you to evaluate the different aspects of your site extensively. Google’s quality guidelines provide helpful information about how to improve your site. As sites change, our algorithmic rankings will update to reflect that. In addition, you’re welcome to post in our Webmaster Help Forums. While we aren’t making any manual exceptions, we will consider this feedback as we continue to refine our algorithms.”

Many site owners brought their sites to Google’s Webmaster Help Forums looking for explanations of why they might have dropped in rankings around the time of the Panda Update.

Around a month later, another Google post (“More guidance on building high-quality sites”) offered more insight into the update by providing a list of questions that webmasters should ask themselves about their sites to determine whether or not their pages might fit the “high quality” definition.

While the questions are useful, a number of subsequent Panda updates affected more sites, including many whose owners were puzzled at being targeted. For example, in an ironic twist, one of the questions included was “Would you expect to see this article in a printed magazine, encyclopedia or book?” and one of the larger sites impacted by the latest update was the publisher of a popular magazine.

The Search Engines Search for Quality

I mentioned patents from Microsoft and Yahoo at the start of this article. The Microsoft patent covers how the search engine might attempt to understand why some searchers abandon one search engine and use another. The inquiry described in the patent is also pretty well defined in a Microsoft whitepaper that covers much of the same ground, “Why Searchers Switch: Understanding and Predicting Engine Switching Rationales.” The authors of the paper tell us:

“Engine switching can occur for a number of reasons, including user dissatisfaction with search results, a desire for broader topic coverage or verification, user preferences, or even unintentionally. An improved understanding of switching rationales allows search providers to tailor the search experience according to the different causes.”

As much as site owners might be challenged by what the search engines are defining as “quality” sites, the search engines are also actively exploring how they can provide quality services to searchers.

The Yahoo patent, “System and method for development of search success metrics” (US Patent 8,024,336), tells us that one of the major problems the search engine faces in presenting quality search results is that the better a metric is at determining how satisfied someone is with search results, the more difficult the underlying information is to collect. For example, the best measure of searcher satisfaction is being able to watch and listen to a searcher describe their searches as they perform them. Doing that at any scale is pretty much impossible. Often the next best measure is a searcher providing feedback on their satisfaction or dissatisfaction with searches after the fact. Again, that’s difficult information to acquire, especially at a large scale.

The patent’s inventor, Lawrence Wai (now in charge of analytics at Groupon), tells us that metrics like clickthroughs and dwell time at a page aren’t quite as useful, but are still often fairly reliable. Rather than relying upon those directly, though, if Yahoo could identify features on web pages that might be used to predict clickthroughs and dwell time, it could then use the actual clickthrough and dwell time measurements to see how accurate the feature identification algorithms might be.
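
As a rough sketch of that idea, one could fit a simple model that predicts dwell time from on-page features and then check its predictions against observed dwell times. The features, data, and choice of model below are all hypothetical; the patent doesn’t prescribe any particular approach.

    # A minimal sketch of the approach the Yahoo patent describes: fit a
    # model predicting dwell time from page features, then check the
    # predictions against what users actually did. All data is invented.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    # Hypothetical features per page: [word_count, image_count, ad_density]
    X = np.array([[1200, 4, 0.10],
                  [ 300, 0, 0.45],
                  [2500, 8, 0.05],
                  [ 450, 1, 0.30],
                  [1800, 6, 0.08]])
    observed_dwell_seconds = np.array([95, 12, 160, 25, 120])

    model = LinearRegression().fit(X, observed_dwell_seconds)
    predicted = model.predict(X)

    # In-sample fit, purely for illustration: how well do feature-based
    # predictions line up with the dwell times users actually produced?
    print("R^2 vs. observed dwell time:",
          round(r2_score(observed_dwell_seconds, predicted), 3))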

Interestingly, those features or signals sound very similar to the signals that Google’s Matt Cutts describes in a WIRED interview with him and Amit Singhal titled “TED 2011: The ‘Panda’ That Hates Farms: A Q&A With Google’s Top Search Engineers”:

“I think you look for signals that recreate that same intuition, that same experience that you have as an engineer and that users have. Whenever we look at the most blocked sites, it did match our intuition and experience, but the key is, you also have your experience of the sorts of sites that are going to be adding value for users versus not adding value for users. And we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. And you can really see mathematical reasons …”
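
A toy version of the kind of classifier Cutts alludes to might label documents as high or low quality from a handful of page features. Everything in the sketch below, from the features to the labels to the choice of model, is hypothetical:

    # A toy quality classifier in the spirit of the Cutts quote above.
    # Features, labels, and model choice are all invented for illustration.

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical features per page:
    # [original_content_ratio, ads_above_fold, spelling_errors_per_100_words]
    pages = [[0.95, 1, 0.2],   # e.g., a well-edited reference site
             [0.90, 0, 0.1],
             [0.15, 6, 3.5],   # e.g., a scraped, ad-heavy page
             [0.30, 4, 2.8]]
    labels = ["high", "high", "low", "low"]

    clf = DecisionTreeClassifier(random_state=0).fit(pages, labels)
    print(clf.predict([[0.80, 2, 0.5]]))  # -> ['high'] on this toy data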

The search engines are looking for signals of quality in the advertisements they present, in the landing pages those lead to, on the pages of publishers who present advertisements, in pages that show up in search results, and in their own search results as well.

So, how do YOU define quality?

About the Author

Bill Slawski is a Senior SEO Consultant for Webimax, and blogs about search-related patents and papers at SEObythesea.com. He’s been promoting websites since 1996, and can usually be found either with his nose buried in a patent or in the HTML code of a web page, or exploring the local history of a small town with camera in hand. Bill has an undergraduate degree in English from the University of Delaware, and a Juris Doctor from Widener University School of Law.
