A crawl error occurs when search engine bots (crawlers) cannot reach a page or pages on your website.
There are two types of crawl errors: website errors and URL errors.
Website Crawl Errors
Website crawl errors mean that search engines cannot reach your entire website, which is the most serious case. There are several common causes (a minimal check for each is sketched after this list):
- DNS errors: the crawler cannot resolve your domain name because the DNS server is down or misconfigured. These usually go away as soon as the server is up and running again
- server errors: the website takes too long to respond, is overwhelmed by traffic, or has a bug in its code
- robots.txt errors: crawlers cannot fetch your robots.txt file
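As an illustration, here is a minimal Python sketch that probes for each of the three site-level problems above. It assumes the `requests` library is installed, and `example.com` is a placeholder for your own domain:

```python
# Minimal sketch: probe for the three site-level crawl problems above.
# "example.com" is a placeholder; substitute your own domain.
import socket
import requests

SITE = "https://example.com"
HOST = "example.com"

# DNS check: can the domain name be resolved at all?
try:
    socket.gethostbyname(HOST)
    print("DNS: OK")
except socket.gaierror as exc:
    print(f"DNS error: {exc}")

# Server check: does the site respond quickly with a non-error status?
try:
    resp = requests.get(SITE, timeout=10)
    print(f"Server: HTTP {resp.status_code} in {resp.elapsed.total_seconds():.2f}s")
except requests.RequestException as exc:
    print(f"Server error: {exc}")

# robots.txt check: can crawlers fetch the robots.txt file?
try:
    robots = requests.get(f"{SITE}/robots.txt", timeout=10)
    print(f"robots.txt: HTTP {robots.status_code}")
except requests.RequestException as exc:
    print(f"robots.txt error: {exc}")
```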
URL Crawl Errors
URL crawl errors occur when search engines cannot reach an individual page on your website. This may happen if (see the sketch after this list):
- an internal link redirects to a page that no longer exists
- the page is marked as “noindex”
- the page is blocked by the robots.txt file
- there are faulty redirects between the desktop and mobile versions
- there is malicious software on that URL
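The first three causes can be checked by hand. Here is a hedged Python sketch that inspects a single URL for a broken redirect, a "noindex" directive, and a robots.txt block; the URL is a placeholder, and the noindex check is a rough string match rather than a full HTML parse:

```python
# Minimal sketch: check one URL for the page-level issues listed above.
# The URL is a placeholder; the noindex check below is deliberately crude.
from urllib import robotparser
from urllib.parse import urljoin
import requests

URL = "https://example.com/some-page"

# Broken link / faulty redirect: follow redirects, report the final status.
resp = requests.get(URL, timeout=10, allow_redirects=True)
print(f"Final URL: {resp.url} (HTTP {resp.status_code})")
if resp.history:
    print("Redirect chain:", " -> ".join(r.url for r in resp.history))

# noindex: may appear as an X-Robots-Tag header or a meta tag in the HTML.
if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
    print("Blocked by X-Robots-Tag: noindex")
if 'name="robots"' in resp.text and "noindex" in resp.text.lower():
    print("Page likely carries a noindex meta tag")

# robots.txt: ask whether Googlebot is allowed to fetch this URL.
parser = robotparser.RobotFileParser(urljoin(URL, "/robots.txt"))
parser.read()
if not parser.can_fetch("Googlebot", URL):
    print("Blocked by robots.txt")
```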
It is important to monitor your Google Search Console regularly, keep an eye on your website's loading speed and responsiveness, and maintain accurate, up-to-date internal linking. This way, you can either avoid crawl errors altogether or fix them quickly.
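To make the internal-linking check concrete, here is a small sketch that fetches one page, collects its same-site links, and flags any that no longer resolve. It uses only the standard library plus `requests`, and the start URL is again a placeholder; some servers reject HEAD requests, so treat the results as hints rather than a verdict:

```python
# Minimal sketch of an internal-link check: fetch one page, collect its
# same-site links, and flag any that no longer resolve.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import requests

START = "https://example.com/"  # placeholder start page

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag, resolved against START."""
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.add(urljoin(START, href))

page = requests.get(START, timeout=10)
collector = LinkCollector()
collector.feed(page.text)

host = urlparse(START).netloc
for link in sorted(collector.links):
    if urlparse(link).netloc != host:
        continue  # only internal links matter for crawl errors
    status = requests.head(link, timeout=10, allow_redirects=True).status_code
    if status >= 400:
        print(f"Broken internal link: {link} (HTTP {status})")
```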