Robots.txt Generator | Generate a Free Robots.txt Instantly.

Search Engine Optimization

Robots.txt Generator


Default - All robots are:  
    
Crawl delay:
    
Sitemap: (leave blank if you don't have one) 
     
Search Robots: Google
  Google Image
  Google Mobile
  MSN Search
  Yahoo
  Yahoo MM
  Yahoo Blogs
  Ask/Teoma
  GigaBlast
  DMOZ Checker
  Nutch
  Alexa/Wayback
  Baidu
  Naver
  MSN PicSearch
   
Restricted directories: The path is relative to root and must contain a slash "/"
 
 
 
 
 
 
   



Now, create the "robots.txt" file in your root directory. Copy the text above and paste it into that text file.


About Robots.txt Generator

What is a Robots.txt Generator?

A Robots.txt Generator is a tool used to generate the robots.txt file that is placed in the root folder of your website to guide robots while they crawl your pages for indexing. This text file tells crawlers which pages or parts of the site should be crawled and indexed, and its main purpose is to avoid overloading the site with too many requests. It does not remove a page or section of the site from search engines permanently; it is a temporary measure, used while a page is a work in progress or contains irrelevant content that needs to be improved.

The first file a search engine requests when it starts crawling your site is the robots.txt file. If it is not available, there is a greater chance that major parts of the page or site will be missed and not indexed well on the search engine results page. Creating a robots.txt file manually is tedious work that takes a lot of time and effort, so you may prefer an online tool. While generating a robots.txt file, you need to know about directives such as crawl delay, allow, and disallow.

The robots.txt directives and their purposes:

  • Allow: The Allow directive indicates to bots which pages of the site should be included when crawling and indexing; you can add as many web page URLs as you need.
  • Disallow: The Disallow directive indicates to bots which pages of the site should be excluded, i.e. not crawled and indexed.
  • Crawl delay: This directive delays crawlers to prevent overloading; too many requests at the same time result in a bad experience for users. Crawl delay is treated differently by each search engine.
  • Choose these attributes carefully: if you set the wrong attributes, you may have to wait a long time to change them, because there is a long queue of sites waiting for bots to crawl and index their pages.
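To see how these directives behave in practice, the short sketch below builds a hypothetical robots.txt (the paths and sitemap URL are made-up examples) and checks it with Python's standard-library `urllib.robotparser`, which is one way crawlers interpret such a file:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt combining the directives described above.
SAMPLE = """\
User-agent: *
Allow: /public/
Disallow: /private/
Crawl-delay: 10
Sitemap: https://example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(SAMPLE.splitlines())

# /private/ pages are excluded; /public/ pages may be fetched.
print(parser.can_fetch("*", "https://example.com/public/index.html"))   # True
print(parser.can_fetch("*", "https://example.com/private/admin.html"))  # False
print(parser.crawl_delay("*"))                                          # 10
```

Note that not every search engine honors Crawl-delay (Google, for example, ignores it), which is why the article says each engine treats it differently.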

How to use our Robots.txt generator tool?

  • Our robots.txt generator can easily create a robots.txt file for you, and you can access it whenever you need. Visit the tool by clicking Robots.txt Generator Online. There you can see the attributes to select as required, but it is not always necessary to choose an option; sometimes you can leave it as it is.
  • In the first row you have the option Default - all robots are:, with two choices: allowed or refused. If you want all crawlers or robots to crawl and index your site, select allowed; otherwise choose refused.
  • In the second row you'll see the crawl delay option, where you can set the delay time as needed: 5 seconds, 10 seconds, 20 seconds, 60 seconds, or 120 seconds. If you don't need to delay requests, select the default no-delay option.
  • Then you have the sitemap option. If you have a sitemap, don't forget to enter its details; if you don't have one, just leave the field blank, since it is not mandatory.
  • Finally, you have to give instructions to the robots about which search engines should crawl your site: Google, Google Image, Google Mobile, MSN Search, Yahoo, Yahoo MM, Yahoo Blogs, Ask/Teoma, GigaBlast, DMOZ Checker, Nutch, Alexa/Wayback, Baidu, Naver, and MSN PicSearch. Each has the same three options: same as default, allowed, and refused. Choose according to your site's needs.
  • Lastly, you have the disallow option, where you can restrict crawlers from indexing some pages or parts of the site. You must add the forward slash "/" before the directory or page address when filling in the field.
  • Then you can see three options: create robots.txt, create and save as robots.txt, and clear. Select one according to your requirement.
  • After selecting an option, you'll see a confirmation pop-up asking "Create robots.txt file?" By clicking OK, you will get your robots.txt file.
  • If you select the create option, it only creates the robots.txt content; you have to copy and save it for further use. If you choose the create and save as robots.txt option, the file will automatically be downloaded and saved under the name robots in your downloads location.
  • Now create the robots.txt file in your site's root directory, then copy the generated result and paste it into that text file.
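The generation steps above can be sketched programmatically. The snippet below is a minimal illustration under assumed option names, not the tool's actual code: it assembles a robots.txt string from the chosen options (default allow/refuse, crawl delay, sitemap, restricted directories) and writes it to a file.

```python
# Minimal sketch of the tool's generation steps (hypothetical, not the real implementation).
def build_robots_txt(default_allow=True, crawl_delay=None,
                     sitemap=None, disallowed_paths=()):
    lines = ["User-agent: *"]
    if not default_allow:
        lines.append("Disallow: /")      # "refused": block all robots
    for path in disallowed_paths:
        if not path.startswith("/"):     # restricted paths must begin with "/"
            path = "/" + path
        lines.append(f"Disallow: {path}")
    if crawl_delay:
        lines.append(f"Crawl-delay: {crawl_delay}")
    if sitemap:
        lines.append(f"Sitemap: {sitemap}")
    return "\n".join(lines) + "\n"

content = build_robots_txt(crawl_delay=10,
                           sitemap="https://example.com/sitemap.xml",
                           disallowed_paths=["/cgi-bin/", "admin/"])
print(content)

# Equivalent of "create and save as robots.txt": write the file to disk.
# In practice this file belongs in your site's root directory.
with open("robots.txt", "w") as f:
    f.write(content)
```

Running it produces a file with one `Disallow` line per restricted directory, followed by the crawl delay and sitemap entries, matching the order of the form fields above.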