Non-Crawlable Pages

  1. Check the page code. If your page contains the <meta name="robots" content="noindex" /> directive, search engine crawlers will skip the page without indexing it. You need to remove this code snippet.
  2. Check nofollow links. Sometimes search engines index your content but do not follow the links on the page. This situation also causes problems. Make sure code like the following is not on the page:

<meta name="robots" content="nofollow" />
<a href="pagename.html" rel="nofollow">Page Name</a>
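To check steps 1 and 2 without reading the markup by eye, a small script can scan a page's HTML for robots meta directives. The sketch below is a minimal example using Python's standard html.parser module; the sample HTML string is hypothetical.

```python
from html.parser import HTMLParser

class RobotsMetaChecker(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            # content may hold several comma-separated directives
            self.directives += [d.strip().lower() for d in a.get("content", "").split(",")]

# Hypothetical page source; in practice, fetch your page's HTML first
checker = RobotsMetaChecker()
checker.feed('<html><head><meta name="robots" content="noindex, nofollow" /></head></html>')
print(checker.directives)  # ['noindex', 'nofollow']
```

If either 'noindex' or 'nofollow' appears in the result, the page carries one of the blocking directives described above.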
  3. The robots.txt file on your website may be blocking the indexing process. If your robots.txt file contains a rule like the following, it is best to remove it immediately, because this rule means that all pages on your website are blocked from being crawled, and any optimization work is useless while crawlers cannot crawl your site.

User-agent: *

Disallow: /

  4. Besides the rule above, a rule like the one below means that many pages on your site, if not your entire site, cannot be indexed. If a rule like this appears in your robots.txt file and those pages should be indexed, you need to delete it too.

User-agent: *

Disallow: /products/
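Rather than reading robots.txt rules by eye, you can test whether a given URL is blocked using Python's standard urllib.robotparser module. The sketch below parses the rule shown above; the example.com URLs are placeholders for your own pages.

```python
from urllib.robotparser import RobotFileParser

# Parse the rules shown above; normally you would fetch them
# from your live site with rp.set_url(...) and rp.read()
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /products/",
])

# Crawlers that obey these rules must skip anything under /products/
print(rp.can_fetch("*", "https://example.com/products/item.html"))  # False
print(rp.can_fetch("*", "https://example.com/about.html"))          # True
```

A `Disallow: /` rule, as in the previous example, would make can_fetch return False for every URL on the site.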

If you are sure that no such rules exist in your website's code or robots.txt file, the next step is to make sure there are no internal broken links, URL errors, outdated URLs, or pages with denied access. You can get information about the current status of all pages thanks to Screpy's page scans.
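As a rough first pass at spotting internal broken links before running a full scan, you could collect the internal hrefs from a page and compare them against the pages you know exist. The sketch below assumes a hypothetical set of file names for illustration.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects internal (relative) link targets from <a href="..."> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            # Skip external links, anchors, and mail links
            if href and not href.startswith(("http://", "https://", "#", "mailto:")):
                self.links.append(href)

# Hypothetical inventory of pages that actually exist on the site
existing = {"index.html", "products.html"}

collector = LinkCollector()
collector.feed('<a href="products.html">OK</a> <a href="old-page.html">Gone</a>')
broken = [link for link in collector.links if link not in existing]
print(broken)  # ['old-page.html']
```

Any link reported here points at a page the crawler cannot reach, which is exactly the kind of issue a full site scan would flag.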