Patent Guru: How Bill Slawski Gets SEO Advice Directly From Search Engines (Part 1)


The following is the first half of an interview that Search Marketing Standard staff carried out with Bill Slawski, President of SEO By The Sea. The interview was originally published in the Winter 2008/2009 issue of Search Marketing Standard print magazine. Bill talks about how he searches through patent filings to figure out which trends are likely to become important for those involved in online marketing. By tracking patent filings, one can get some idea of how ranking algorithms might be changing, as well as what new products and features are in the works, all of which can be extremely helpful for SEO and SEM purposes. The second part of the interview will be posted in a few days. It’s particularly interesting to look at what Bill had to say about upcoming changes and trends he saw reflected in what were then recent filings, considering that the interview was conducted near the end of 2008.

=================================

SMS: Bill, you are considered to be a “patent guru” by most experts in the search marketing industry. How did you come to develop such a unique approach to search engine marketing research, and how does this fit into the service you provide for your clients?

Bill: My interest in search-related patent filings came from a desire to try to get as much information as possible directly from the search engines themselves, and to try to understand their perspective on search and the web. I dislike the term “guru,” because it’s misleading. While I often write at my blog about the patent filings that I find, I urge people to read the documents themselves and form their own opinions about what those documents say. I look at patent filings because they are primary sources of information that haven’t been filtered by opinion or folklore.

Search engines do have trade secrets that aren’t revealed in patent applications, and many patents that are filed cover processes and methods that may never be developed, but it’s possible to glean some insights into the assumptions and thought processes that go into the development of a patent. It’s also pretty exciting to see a new program or method come out from one of the search engines that had been described in a patent application published days or months before.

I do consider looking through search-related patents a due-diligence requirement for providing services to clients: the information contained in those patents is publicly available, and it comes straight from the search engines themselves. The act of reading patents often raises more questions than answers, but knowing that those questions exist can be helpful.

For instance, a couple of recent patent filings from Microsoft explored a number of ways that they might rank images for image search, and create an image score that could influence the rankings of web pages in web search. It’s impossible to say how many of the processes they described are actually being used today, but the patent filings provided some insights and questions to explore about how search engines might view images, beyond just looking at alt text and text that might be associated with pictures.

SMS: We’ve seen blended search. We’ve seen some personalization. Based on the patent filings you have seen, what are some of the new features search engines might have in store for us in the not-so-distant future?

Bill: Some interesting features that I’ve seen described in search-related patents? Well, one search feature that I thought was pretty interesting was an “inversion search” from Microsoft that would allow you to enter the URL for a web page, and then see and search upon keywords related to that page. I think this would be pretty helpful to searchers who don’t know much about a topic but want to explore it more deeply.

Another recent feature, from Google, is the creation of a database containing information found in different tables on the web, as part of their WebTables project. Many web pages contain tables filled with data on subjects ranging from baseball statistics to scientific data to historical information. If you’re doing research, and you want facts and figures to back up that research, being able to find that kind of data might be very helpful, but doing so may also be very difficult through traditional search.

Google has also recently presented their vision of an advanced local search for mobile devices that could use your changing position to update distances to locations and display information from specialized templates you select to show – for example – real estate prices in an area, cost of living information for those locations, weather and traffic data, fuel costs, political boundaries, and more.

Another mobile program – this one from Yahoo! – would allow you to create user-defined private maps that you could share with friends, to help locate each other on those maps. This is a feature that could be useful to members of a group in places like resorts, shopping centers, and city locations. You would also be able to access tags that people leave about specific locations, and local search information relevant to the areas you visit.

The patents and white papers from the major commercial search engines are filled with many other possibilities – some will likely be developed, while others will remain intellectual property on paper only. Some interesting times are ahead for us from projects being worked on by the search engines.

SMS: Of course, every search marketer out there is interested in having their website indexed and ranked well. Right now, we know that factors like internal website structure, backlinks, and the age of a website are important in ranking. From what you have seen in the patents, what other factors can we expect to be added to the mix?

Bill: There are a number of different ways to classify ranking algorithms from search engines, but one that I find helpful is to break them down into link-based, content-based, and user-behavior-based signals.  I think using this kind of classification can help us identify some of the new ranking factors that search engines may be adding to rank pages.  The lists below include some signals that search engines may be looking at now, as well as others they may be looking at in the future.

Link-based signals – begin by looking at the number and importance of links between pages, but also consider:

  • the age of links pointing to pages
  • the frequency of growth and loss of links to a page
  • the number of broken or redirected links on pages
  • the use of anchor text in links
  • the use of “related” anchor text in links, as determined by how frequently the text within those anchors co-occurs with query terms that the page may be found for in searches at the search engine (see the co-occurrence sketch after this list)
  • the genre of sites that links originate from (such as blogs or news articles or web pages)
  • the age of domains that incoming links originate from
  • the number of links that come from pages that might be included in the top results from a search for a certain query
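To make the “related” anchor text idea a little more concrete, here is a minimal sketch of how anchor/query co-occurrence might be measured. The sample anchors, the query log, and the scoring function are all invented for illustration; no search engine has disclosed its actual method.

```python
from itertools import product

# Hypothetical sample data: anchor texts pointing at one page, and a log
# of queries for which that page appeared in search results.
anchor_texts = ["seo patents", "search engine patents", "ranking signals"]
query_log = [
    "google ranking patents",
    "seo patent analysis",
    "search engine ranking signals",
]

def cooccurrence_score(anchors, queries):
    """Fraction of (anchor term, query) pairs in which the anchor term
    appears in the query -- a crude proxy for 'related' anchor text."""
    anchor_terms = {term for anchor in anchors for term in anchor.lower().split()}
    pairs = list(product(anchor_terms, queries))
    hits = sum(1 for term, query in pairs if term in query.lower().split())
    return hits / len(pairs) if pairs else 0.0

print(f"anchor/query co-occurrence: {cooccurrence_score(anchor_texts, query_log):.2f}")
```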

Content-based signals – begin by looking at the words that appear upon pages, but also consider:

  • where those words appear within the layout of a page
  • how those words are formatted through HTML
  • a reading level score for a page (see the reading-level sketch after this list)
  • spelling and grammar and sentence structure
  • whether “related” words and phrases appear on the same page, as defined by how frequently those words co-occur with the query terms on other pages upon the web
  • how facts about specific people and places and things are formatted and presented
  • the “freshness” of news results
  • what kinds of features and meta information about images might exist on the pages
  • the rate of change of content upon a page and site
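As one concrete example of a reading level score, here is a small sketch using the published Flesch-Kincaid grade-level formula. The formula itself is real and widely documented, but whether any search engine uses it rather than some proprietary measure is unknown, and the syllable counter below is a deliberately crude approximation.

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

sample = "Search engines may score pages by readability. Shorter sentences score lower."
print(f"estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```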

The content a search engine examines might narrow to a finer level of granularity, looking at individual segments of pages, or expand out to a greater range, exploring how related the content of a page might be to a whole site or to a set of interrelated sites.

User-behavior-based signals – begin by looking at how people use websites, such as click-throughs in SERPs for specific queries and during query sessions covering visits to multiple pages (a toy aggregation sketch follows this list), but also consider:

  • query refinements by searchers
  • bookmarking of pages
  • tagging of pages
  • browsing activity, including how long someone spends upon a page, how far they scroll down a page, and where they move their mouse pointer
  • selection and use of alerts
  • subscription to RSS feeds
  • searching activity in vertical searches such as maps or news or images
  • rankings and ratings of businesses offering goods and services
  • sentiment analysis of reviews
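To picture how a few of these behavior signals might be aggregated, here is a toy sketch that computes click-through rate and average dwell time per query/page pair from a hypothetical interaction log. The log format and field names are assumptions made for the example, not anything a search engine has published.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Interaction:
    query: str
    url: str
    clicked: bool
    dwell_seconds: float  # time on the page after a click; 0 if no click

# Hypothetical interaction log of the kind a search engine might mine.
log = [
    Interaction("seo patents", "example.com/a", True, 95.0),
    Interaction("seo patents", "example.com/a", False, 0.0),
    Interaction("seo patents", "example.com/b", True, 4.0),
]

def behavior_signals(events):
    """Aggregate click-through rate and mean dwell time per (query, url)."""
    stats = defaultdict(lambda: {"impressions": 0, "clicks": 0, "dwell": 0.0})
    for e in events:
        s = stats[(e.query, e.url)]
        s["impressions"] += 1
        if e.clicked:
            s["clicks"] += 1
            s["dwell"] += e.dwell_seconds
    return {
        key: {
            "ctr": s["clicks"] / s["impressions"],
            "avg_dwell": s["dwell"] / s["clicks"] if s["clicks"] else 0.0,
        }
        for key, s in stats.items()
    }

for key, sig in behavior_signals(log).items():
    print(key, sig)
```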

While some search patent filings describe possible signals that might play a role in delivering the right pages to the right people and meeting a searcher’s intentions based upon a small number of words typed into a search box, it really is difficult to state with any certainty what might be added to the mix.
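One way to visualize “the mix” across the three signal families above is a toy weighted sum. Both the weights and the linear form are invented for this sketch; real ranking functions are far more complex and, as Bill notes, undisclosed.

```python
# Toy illustration only: the weights and the linear form are assumptions
# for this sketch, not any search engine's disclosed formula.
WEIGHTS = {"link": 0.5, "content": 0.3, "behavior": 0.2}

def combined_score(signals):
    """Weighted sum of per-family scores, each pre-normalized to [0, 1]."""
    return sum(WEIGHTS[family] * value for family, value in signals.items())

page_signals = {"link": 0.8, "content": 0.6, "behavior": 0.4}
print(f"combined score: {combined_score(page_signals):.2f}")  # prints 0.66
```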

===================================================

Part 2 of the interview will be posted in a few days.

About the Author

Frances Krug has worked in market research since graduating from UCLA with an MA and CPhil in Latin American history. An editor and online content provider for the last seven years, she is currently Associate Editor at iNET Interactive, where she also directs Search Marketing Standard’s email marketing program.
