Robots.txt files are text documents that give web crawlers instructions about how to crawl the pages of a site. Robots.txt files can be useful for SEO, but they must be handled carefully: improper implementation can cause major problems, such as accidentally preventing the entire site from being crawled.


What Is Robots.txt Used For?

Robots.txt files can be applied to any web crawler, not just Google. In fact, these files can target Google's crawlers, all crawlers other than Google's, or any combination of crawlers in between.

That said, you can get a sense of the real-world applications of robots.txt files by reviewing Google's recommendations for using them. Google suggests using robots.txt for:


  • Disallowing crawling of the entire site
  • Disallowing crawling of a directory and its contents
  • Allowing access to a single crawler
  • Allowing access to all crawlers except one
  • Disallowing crawling of a single page
  • Blocking a specific image from Google Images
  • Blocking all images on a site from Google Images
  • Disallowing crawling of files of a specific file type (e.g., GIF files)
  • Disallowing crawling of the entire site, while still showing AdSense ads on the site's pages
  • Matching URLs that end with a specific string (e.g., .xls)
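Several of the use cases above can be sketched directly in robots.txt syntax. Each group below illustrates a separate scenario, and a real file would contain only the groups that apply; the directory and file paths are hypothetical, and note that the `*` and `$` wildcard matching shown in the last group is supported by major crawlers such as Googlebot but is not part of the original robots exclusion standard:

```
# 1. Disallow crawling of the entire site, for all crawlers
User-agent: *
Disallow: /

# 2. Disallow crawling of one directory and its contents
User-agent: *
Disallow: /private-directory/

# 3. Allow access to a single crawler (Googlebot) and block all others
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /

# 4. Disallow crawling of files of a specific type (URLs ending in .gif)
User-agent: *
Disallow: /*.gif$
```

The file must be named `robots.txt` and served from the root of the host (e.g., `https://example.com/robots.txt`) for crawlers to find it.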


Robots.txt files are useful for cordoning off specific pages or groups of pages from crawlers, and for keeping crawlers away from particular types of files, such as images.

In addition, robots.txt files can be used to delay crawling on a site, which can be very helpful for large sites that draw a lot of traffic: delaying crawls can reduce or prevent server overloads that slow page loading or cause a site to crash.
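Crawl delays are typically expressed with the non-standard `Crawl-delay` directive. Be aware that support varies: Bing and Yandex honor it, but Google ignores it (Google's crawl rate is managed through Search Console instead). A minimal sketch:

```
# Ask compliant crawlers to wait between successive requests
# (interpretation of the value varies by crawler; Google ignores this directive)
User-agent: *
Crawl-delay: 10
```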

What Is Robots.txt Used for in SEO?

Robots meta directives, which are different from robots.txt, are usually a better approach for SEO purposes when the goal is to prevent site pages from being indexed or to keep links from being followed. Robots meta directives are interpreted as stronger commands, and are considerably easier to implement.

Moreover, if robots meta directives are improperly implemented, the damage is generally less severe than what can occur with a misused robots.txt file. This is because robots meta directives are set up page by page, whereas robots.txt directives apply to how the crawler should treat the entire site or an entire directory.
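Because robots meta directives are applied page by page, they live in the HTML of each individual page rather than in a site-wide file. A minimal sketch of the common `noindex, nofollow` combination:

```html
<!-- Placed in the <head> of an individual page.
     noindex: do not show this page in search results.
     nofollow: do not follow the links on this page. -->
<meta name="robots" content="noindex, nofollow">
```

Note that for a crawler to see this tag, the page must not be blocked in robots.txt; a crawler that cannot fetch the page cannot read its meta directives.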

Potential SEO Issues With Robots.txt

One of the problems with robots.txt files for SEO is that if you use them to block a page or set of pages, links on those pages won't be followed, so any link value they pass will be lost. Not only that, but if the only links to a page are on the blocked page(s), the linked-to page(s) won't be indexed.

Another issue with robots.txt, one that often applies to meta directives as well, is that crawlers don't have to obey them. Crawlers visiting a site with malicious intent will ignore them; therefore, you should not (and cannot) rely on robots.txt or meta directives to protect sensitive data on your site. Meta directives are never a safe way to secure information; a better approach is to password-protect sensitive data, such as private customer information.

Many sites have no need for robots.txt files at all, and it is best to avoid them unless you have a specific need that can't be better addressed another way. Make sure you partner with an SEO-savvy web developer to help ensure robots.txt files are set up properly and used only when necessary.