The number of requests per second a crawler makes to a website while crawling it.
An allocation of crawl requests to a host.
The program that determines which sites to crawl, how often, and how many pages to fetch from each site.
The frequency at which a page is crawled by a search engine bot compared to that page's ranking position on the same search engine.
The totality of possible URLs for a website.
The number of pages crawled by a search engine bot compared to the total number of crawlable pages on a website. A ratio of 100% means the search engine knows every page on that website.
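The ratio above can be sketched as a simple calculation. This is an illustrative helper, not a standard tool; the function name and inputs are assumptions.

```python
# Hypothetical sketch: computing a crawl ratio from two page counts.

def crawl_ratio(crawled_pages: int, total_crawlable_pages: int) -> float:
    """Return the crawl ratio as a percentage (0-100)."""
    if total_crawlable_pages == 0:
        return 0.0
    return 100.0 * crawled_pages / total_crawlable_pages

print(crawl_ratio(8_000, 10_000))  # 80.0
```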
Effective Crawl Ratio
The number of pages of a given type crawled by a search engine bot within the crawl window for that page type, compared to the total number of crawlable pages of that type on the website.
The timeframe during which a search engine keeps sending visitors to a URL after crawling it. That period varies depending on the type of page. Knowing it allows you to estimate the Effective Crawl Ratio.
Depth is the shortest path (the minimal number of clicks) from the homepage to a particular page. Crawl depth is how deep a crawler is programmed to explore a website.
Crawlable pages on a website that have no unique content, no SEO purpose, and add no value for either users or search engines.
Crawling not the website itself but a previously performed crawl of that website.
The number of useful pages crawled by a search engine bot compared to all pages crawled by the same bot in a defined period.
The timeframe between a crawl by a search engine bot and the first visit recorded from that search engine.
The intelligent use of Crawl Budget (Allocation) on a website.
The average time, in milliseconds, a crawler spends downloading a page.
Pages of a website, crawled by a search engine bot, that bring at least one visit from that search engine in a defined period.
Pages of a website, crawled by a search engine bot, that bring no visits from that search engine in a defined period.
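The two definitions above partition crawled pages by whether they generated visits. A minimal sketch of that split, assuming crawled URLs (from server logs) and visited URLs (organic visits from the same search engine) are already collected for the same period:

```python
# Hypothetical sketch: splitting crawled URLs into active and inactive pages.

def split_active_inactive(crawled: set[str], visited: set[str]):
    active = crawled & visited    # crawled and brought at least one visit
    inactive = crawled - visited  # crawled but brought no visits
    return active, inactive

crawled = {"/a", "/b", "/c"}
visited = {"/b", "/d"}
active, inactive = split_active_inactive(crawled, visited)
# active == {"/b"}; inactive == {"/a", "/c"}
```

Visited URLs that were never crawled in the period ("/d" here) fall into neither set by design.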
Simulate Empty Crawl
URLs that are known and kept by a crawler but not requested from the host, typically URLs blocked by a website's robots.txt. A crawler is sometimes deliberately configured to perform this kind of crawl in order to analyze a website's links.
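The decision described above can be sketched with Python's standard `urllib.robotparser`: keep a discovered URL in the link graph for analysis but skip the actual HTTP request when robots.txt disallows it. The rules and URLs here are illustrative assumptions.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Normally rp.set_url(".../robots.txt") and rp.read() fetch the live file;
# here the rules are parsed inline to keep the sketch self-contained.
rp.parse(["User-agent: *", "Disallow: /private/"])

discovered = ["/index.html", "/private/report.html"]
to_fetch, kept_only = [], []
for url in discovered:
    # Allowed URLs are fetched; blocked ones stay in the link graph unfetched.
    (to_fetch if rp.can_fetch("*", url) else kept_only).append(url)

print(to_fetch)   # ['/index.html']
print(kept_only)  # ['/private/report.html']
```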
Crawling specific, selected parts of a website.
A unique URL crawled by a crawler on a single website in a defined period.