Search engines frequently crawl website pages to determine which ones to index in their search listings. Search engine crawlers, also known as robots or spiders, find, download, and store the pages they deem important, such as a site's homepage. They may not download pages they find irrelevant.
After crawling a page, the search engine analyzes it to determine whether it is significant enough to be indexed. Over time, the search engine continues to request previously downloaded pages to check for content updates, allocating a certain amount of bandwidth to these periodic reviews based on each page's perceived relevance. Every download uses bandwidth, and once a website's allocated bandwidth limit is reached, no more pages will be crawled until the next review.
Since this crawl bandwidth is limited, it is crucial to direct crawlers to the content you want included in the search engine's index and to eliminate any unnecessary or duplicate content.
Here are 6 tips to enhance SEO by improving crawling:
1. Steer Crawlers in the Right Direction
2. Increase Page Importance
Crawlers begin with the pages they deem important, and they return to those pages most often. To increase the importance of pages, decrease the number of clicks from the website's homepage needed to reach important content that may be deep within your site. Increase the number of internal and external links pointing directly to those pages, and avoid using nofollow on internal links to important content.
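If you want to measure how deep your important content sits, a quick breadth-first crawl from the homepage will report each page's click depth. The sketch below is a minimal audit using only the Python standard library; the start URL and page cap are placeholders, not values from any particular site.

```python
# A minimal click-depth audit: breadth-first crawl from the homepage,
# recording how many clicks each internal page is from the start.
# "https://www.example.com" and max_pages are placeholder assumptions.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            href = attrs.get("href")
            # Skip nofollow links: crawlers may not pass importance through them.
            if href and "nofollow" not in (attrs.get("rel") or ""):
                self.links.append(href)

def click_depths(start_url, max_pages=200):
    host = urlparse(start_url).netloc
    depths = {start_url: 0}
    queue = deque([start_url])
    while queue and len(depths) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue  # an unreachable page is a crawl problem in itself
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            link, _ = urldefrag(urljoin(url, href))  # drop #fragments
            if urlparse(link).netloc == host and link not in depths:
                depths[link] = depths[url] + 1
                queue.append(link)
    return depths

if __name__ == "__main__":
    for url, depth in sorted(click_depths("https://www.example.com").items(),
                             key=lambda item: item[1]):
        print(depth, url)
```

Pages that show up more than three or four clicks deep are good candidates for direct links from the homepage or top-level category pages.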
3. Increase Pages Crawled per Session
4. Avoid Duplicate Content
Multiple pages on your website with exactly the same content will not improve search results; they will, however, waste your website's crawl bandwidth. In many cases, entire copies of a website exist under different domain names. You should have only one version of your website's domain in the search index, and 301 redirect any other domains to it. If there are duplicate pages on your main domain, you can 301 redirect requests to the relevant page, or use the canonical link tag on the duplicate pages to point to the original source. Finally, replace any session variables in URLs with cookies for tracking users, as session variables often cause search engines to crawl duplicate content.
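As a concrete illustration, here is a minimal sketch of both consolidation techniques in a Python/Flask application; the framework choice, domain names, and URL patterns are assumptions for illustration, not part of the article.

```python
# Sketch: consolidating duplicate content with a 301 redirect and a
# canonical link tag. Flask and "example.com" are assumed for illustration.
from flask import Flask, redirect, request

app = Flask(__name__)

CANONICAL_HOST = "www.example.com"  # the one domain you want indexed

@app.before_request
def redirect_duplicate_domains():
    # 301 redirect any alternate domain (example.net, example.org, ...)
    # to the canonical one so only a single copy of the site gets crawled.
    if request.host != CANONICAL_HOST:
        path = request.full_path.rstrip("?")
        return redirect(f"https://{CANONICAL_HOST}{path}", code=301)

@app.route("/printer-friendly/<slug>")
def printer_friendly(slug):
    # A duplicate-content page: point crawlers at the original with a
    # canonical link tag instead of letting both versions compete.
    return (f'<html><head>'
            f'<link rel="canonical" href="https://{CANONICAL_HOST}/article/{slug}">'
            f'</head><body>...</body></html>')
```

The same 301 and rel="canonical" behavior can also be configured directly in your web server if you prefer to keep it out of application code.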
5. On-Page Factors
6. Detect and Avoid Crawler Problems
Register your website with Google's webmaster tools. It's quick, and the reporting tool will identify crawling problems the search engine encounters. Use this information to implement changes, then check back to see whether Google is still encountering the issues.
Stay away from spider traps. These are essentially internal-link black holes: dynamic pages that create endless new pages by adding parameters or sub-directories to URLs, sending search engine crawlers into infinite loops. Also avoid including items, such as calendars, that contain links going forward and backward indefinitely.
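If you run your own link audits, a couple of simple heuristics catch most traps before they waste crawl bandwidth. This is a rough sketch; the parameter names and limits below are assumptions you should tune for your own site.

```python
# Heuristics for the spider traps described above: strip session
# identifiers so one page has one URL, and flag URLs whose shape
# suggests an infinite parameter or sub-directory loop.
from urllib.parse import urlparse, urlencode, urlunparse, parse_qsl

SESSION_PARAMS = {"sessionid", "sid", "phpsessid"}  # assumed tracking params
MAX_PATH_DEPTH = 8    # deeper paths often mean sub-directory loops
MAX_QUERY_PARAMS = 4  # more parameters often mean a parameter explosion

def normalize(url):
    """Drop session identifiers so duplicate URLs collapse into one."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k.lower() not in SESSION_PARAMS]
    return urlunparse(parts._replace(query=urlencode(query), fragment=""))

def looks_like_trap(url):
    """Flag URLs whose depth or parameter count suggests an infinite loop."""
    parts = urlparse(url)
    depth = len([seg for seg in parts.path.split("/") if seg])
    return depth > MAX_PATH_DEPTH or len(parse_qsl(parts.query)) > MAX_QUERY_PARAMS

print(normalize("https://www.example.com/shop?item=42&sessionid=abc123"))
print(looks_like_trap("https://www.example.com/cal/2019/01/02/03/04/05/06/07/08/"))
```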
To sum up, finding and resolving crawler issues can dramatically increase search traffic to your site. Depending on your content management system, fixing these problems may take some effort, but it will pay off in the long run.