
Robots.txt term explained

Robots.txt is a text file whose main task is to restrict search engine bots' access to the content of the site. For quick and correct indexing of your site, you need to understand how to configure robots.txt. The file contains specific instructions that tell crawlers what should be indexed and what should not. The main tasks of robots.txt are the following (a short example appears after the list):

  • Prohibiting or allowing access to particular sections of the site;
  • Setting the delay a robot must wait between page loads;
  • Specifying the time window (GMT) during which pages may be indexed.
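
As a rough illustration, here is a minimal robots.txt sketch covering these tasks; the bot name, paths, and time window are placeholders rather than values from this article, and the Visit-time directive is non-standard and ignored by most crawlers:

  # Apply the rules below to all crawlers
  User-agent: *
  # Prohibit access to the administrative section
  Disallow: /admin/
  # Explicitly allow the public catalog
  Allow: /catalog/
  # Non-standard: ask bots to visit only between 02:00 and 06:00 GMT
  Visit-time: 0200-0600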

There are also several non-standard directives: Allow, Crawl-delay, and Clean-param.

Allow: the opposite of the Disallow directive; it grants access to a section of the site.
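
For instance, a whole directory can be closed while one of its subsections stays open; the paths in this sketch are hypothetical:

  User-agent: *
  # Close the entire catalog...
  Disallow: /catalog/
  # ...but keep the public price list open
  Allow: /catalog/prices/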

Crawl-delay: specifies the delay a robot should wait before loading the next page. This prevents unnecessary load on the HTTP server, which can occur when pages are requested too frequently.
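
A typical entry looks like the sketch below; the value of 10 seconds is illustrative, and support for this directive varies between search engines:

  User-agent: *
  # Ask the bot to wait 10 seconds between successive requests
  Crawl-delay: 10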

Clean-param: used to describe dynamic URL parameters on the website. This directive spares the server from serving duplicate content and simplifies its work.
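
A sketch of how it is typically written; the parameter name and path are hypothetical, and the directive is recognized mainly by Yandex:

  User-agent: Yandex
  # Treat URLs under /catalog/ that differ only in the "ref" parameter as the same page
  Clean-param: ref /catalog/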

To connect this file, it is enough to place it in the root directory of the site. However, when the site has multiple subdomains, it must be placed in the root of each of them. This document complements the standard Sitemaps.
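
For example, assuming a hypothetical site example.com with a subdomain blog.example.com, each would serve its own copy from its root, and the file can also point crawlers to the sitemap it complements:

  # Served at https://example.com/robots.txt (blog.example.com keeps its own copy)
  User-agent: *
  Disallow: /tmp/
  # Point crawlers to the XML sitemap that complements this file
  Sitemap: https://example.com/sitemap.xml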
