How to Make the Most of Google Search Console Crawl Stats in 2021

A couple of weeks ago, Google launched an improved Crawl Stats Report in Search Console, helping businesses take their search engine optimization (SEO) game to a new level.

In a nutshell, the new Crawl Stats Report comes with features that allow digital marketers to “spy” on Google. For instance, you can now track changes in crawling patterns, which are often an indicator that something is either improving or going wrong with a website.

Let’s look at some of the new Crawl Stats Report’s key features and learn how they can be interpreted for maximum results.

  1. The total number of requests, grouped by response code, crawled file type, crawl purpose, and Googlebot type

This addition enables you to see the total number of crawl requests issued for URLs on your site, whether successful or not. Why is this important? Sites that get more crawl requests from Googlebot (the generic name for Google's web crawler) are more likely to score a higher SERP ranking. If your server responds to requests quicker, bots might be able to crawl more pages on your site.

In the new Crawl Stats Report, unsuccessful requests are now neatly organized, so you can clearly see whether they stem from problems such as Domain Name System (DNS) resolution issues (suggesting network connectivity troubles) or redirect loop errors (a sign that some of the site's cached information is incorrect or out of date).

Here's how you can optimize your pages and resources for crawling:

  • Make sure that your pages are fast to load.
  • Be on the lookout for long redirect chains.
  • Help Google identify duplicate content to avoid unnecessary crawling.
  • Avoid low-quality and spam content. Google only wants to crawl high-quality content.
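Long redirect chains and redirect loops (mentioned above) can often be spotted offline, before Googlebot ever trips over them. Here is a minimal sketch that walks a hypothetical source-URL-to-target mapping — the kind you might export from server logs or a site crawler — and flags chains that loop or run too long. All URLs and names are illustrative.

```python
# Sketch: detect long redirect chains and loops from a hypothetical
# mapping of source URL -> redirect target. Names are illustrative.

def redirect_chain(url, redirects, max_hops=10):
    """Follow redirects starting at `url`; return the chain and a status."""
    chain = [url]
    seen = {url}
    while chain[-1] in redirects and len(chain) <= max_hops:
        nxt = redirects[chain[-1]]
        if nxt in seen:  # revisiting a URL means a redirect loop
            return chain, "loop"
        chain.append(nxt)
        seen.add(nxt)
    # More than a few hops is worth collapsing into one direct redirect.
    status = "too_long" if len(chain) > 4 else "ok"
    return chain, status

redirects = {
    "/old-page": "/newer-page",
    "/newer-page": "/new-page",
    "/loop-a": "/loop-b",
    "/loop-b": "/loop-a",
}
print(redirect_chain("/old-page", redirects))  # two hops, status "ok"
print(redirect_chain("/loop-a", redirects))    # flagged as "loop"
```

Ideally, each legacy URL should redirect straight to its final destination in a single hop.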

Some positive signs to look for include:

  • HTTP Status Code 200 - OK in the “Response” table for most of your URLs. It means that Google was able to successfully crawl them.
  • HTML is the type of file that you want to see most in the “File type” table.
  • In the “Crawl Purpose” table, you should see a higher percentage of “refreshed” URLs compared to “discovered” URLs. A refresh crawl revisits known pages to pick up updated content, while a discovery crawl fetches newly found URLs.
  • The majority of your crawl requests shown in the “Googlebot type” section should come from your primary crawler (Smartphone or Desktop).
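If you keep your own crawl logs, you can tally them the same way the report groups its data and check for these positive signs yourself. The log entries and field names below are purely illustrative.

```python
# Sketch: group a hypothetical crawl-log export the way the Crawl Stats
# report does, to spot the "positive signs" described above.
from collections import Counter

crawl_log = [
    {"url": "/",            "status": 200, "file_type": "HTML", "purpose": "refresh"},
    {"url": "/blog/post-1", "status": 200, "file_type": "HTML", "purpose": "refresh"},
    {"url": "/blog/post-2", "status": 200, "file_type": "HTML", "purpose": "discovery"},
    {"url": "/old-page",    "status": 301, "file_type": "HTML", "purpose": "refresh"},
    {"url": "/style.css",   "status": 200, "file_type": "CSS",  "purpose": "refresh"},
]

by_status = Counter(e["status"] for e in crawl_log)
by_type = Counter(e["file_type"] for e in crawl_log)
by_purpose = Counter(e["purpose"] for e in crawl_log)

print(by_status.most_common())   # 200 should dominate
print(by_type.most_common())     # HTML should lead
print(by_purpose.most_common())  # more "refresh" than "discovery" is healthy
```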

These tables are a goldmine of information, thanks to example URLs. You simply click into any of the grouped data type entries (response, file type, purpose, Googlebot type) and see a list of example URLs of that type.

  2. High-level and detailed information on host status issues

Host status describes whether or not your site was available for crawling in the last 90 days. Ideally, it is accompanied by a green icon. Note that the icon stays green even if Google encountered a significant crawl availability issue, as long as that was more than a week ago. The Response table can shed more light on this matter, so you can decide whether you need to take any action.

If your availability status is red, Google ran into multiple crawl availability errors on your site in the last week, and you should check whether this is a recurring problem.

  3. Over-time charts

You’ll also be able to see crawl data totals and over-time charts for total requests, total download size, and average response time.

One chart to pay attention to is average response time: how long Google took to download resources from your website during the crawl process. If your site has poor page response times, bots won’t recrawl pages as often, which can negatively impact your rankings.

One way to solve this issue is by preventing large and irrelevant resources (such as decorative images) from being loaded by Googlebot using robots.txt.
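As a sketch of that approach, a robots.txt file along these lines keeps Googlebot away from heavy assets. The paths here are hypothetical; substitute the directories that actually hold your decorative resources, and take care not to block files your pages need for rendering.

```
# Illustrative robots.txt fragment — paths are hypothetical.
User-agent: Googlebot
# Keep Googlebot away from large, purely decorative assets
Disallow: /assets/decorative/
Disallow: /images/backgrounds/
```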

How to interpret crawl spikes and drops


Unless you’ve just launched your site and don’t have many backlinks, low crawl activity could point to a few issues:

  • You added a new rule to the robots.txt file that blocks the parts of your website you actually want crawled.
  • You have broken HTML or unsupported media types.
  • You have low-quality content. Read the Google Quality Evaluator Guidelines to see if that’s the case.
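The robots.txt pitfall in the first bullet is easy to verify with Python's standard-library parser: feed it your rules and check which URLs they actually block. The rules and URLs below are illustrative.

```python
# Sketch: check which paths a robots.txt rule really blocks, using the
# standard-library parser. Rules and URLs here are illustrative.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: Googlebot
Disallow: /blog/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A rule meant to hide only /blog/drafts/ can accidentally block all of /blog/
print(rp.can_fetch("Googlebot", "https://example.com/blog/post-1"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/about"))        # True
```

Running a check like this against every URL pattern you care about, each time you edit robots.txt, catches over-broad rules before Googlebot does.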


Spikes in crawl rate are typically triggered by a significant volume of fresh content recently added, such as a large new website section. Another reason might be that you unblocked a large section of your site from crawling. Whatever the case, here are some ways to protect your website from being heavily crawled:

  • Check the Googlebot type section of the crawl stats report to see what Googlebot type is working overtime.
  • Lower the Googlebot crawl rate using the Crawl Rate Settings page, if that option is available for your site.

For Google to consider your website for ranking, it first needs to properly crawl it. The Crawl Stats Report helps you understand how Googlebot is interacting with your website, so you can make the necessary changes for SEO success.

However, to detect all issues preventing your site from outranking competitors, specialized SEO tools might be needed.

We have an entire Digital Marketing department to help clients like you achieve their full e-commerce potential. Browse our services here or contact us today to discuss a potential collaboration.
