Synopsis – The last couple of major Google updates (the now infamous Penguin and Panda algorithm adaptations) have a lot to do with authenticity — specifically, of links and content. When you think about it, authenticity is key to a search engine’s performance and perhaps second only to relevance. For example, let’s say you do a search for “gravity theory,” and the first result is a previously unknown treatise on gravity, said to be authored by Sir Isaac Newton, that negates a major part of the current theory of gravity. In fact, the document is a clumsy fake that fools no one. Too many of these kinds of fumbles and searchers will begin to question the search engine’s ability to deliver good results. But how is a search engine to know what is authentic and what isn’t?
In this article, Bill Slawski (an expert on the patent filings that help uncover the rationale and workings behind search engines such as Google) addresses how search engines try to differentiate the fake from the real. Working from a real-life experience of being temporarily fooled into believing that a blog comment was from Matt Cutts (Google’s WebSpam Guru), Slawski discusses Google’s agent rank and Bing’s author authority efforts. He then looks at the increasing risk of impersonation brought about by the rise of social networks, with an intriguing examination of a Google patent granted in July 2012 that provides evidence of the kinds of signals the search giant uses to judge identity.
This article is part of Search Marketing Standard’s premium offerings. If you are a subscriber, please log in to your account in order to view the complete article.