robot.txt

a.k.a. robots.txt

A file located in a website's root directory that provides instructions to search engine robots on which files to crawl or omit.

It is a plain text file that asks spiders, crawlers, and bots not to access all or part of a Web site that is otherwise publicly viewable. Known officially as the Robots Exclusion Standard, also known as the Robots Exclusion Protocol, the robots.txt protocol works by placing a text file in the root of the Web site hierarchy on the server (e.g. www.example.com/robots.txt).

Robots are often used by search engines to categorize and archive Web sites, or by webmasters to proofread source code. The standard is distinct from, but can be used in conjunction with, Sitemaps, a robot inclusion standard for websites.

A site owner who wishes to give instructions to web robots must place a text file called robots.txt in the root of the web site hierarchy. This text file contains the instructions in a specific format (see the example below). Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the site. If the file doesn't exist, web robots assume that the site owner wishes to provide no specific instructions.

A robots.txt file functions as a request that specified robots ignore specified files or directories when crawling a site. This might be, for example, out of a preference for privacy from search engine results, a belief that the content of the selected directories might be misleading or irrelevant to the categorization of the site as a whole, or a desire that an application operate only on certain data.
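For example, the following robots.txt directives (the directory paths and the robot name "BadBot" are hypothetical) ask all robots to stay out of two directories, and ask one particular robot to stay out of the entire site:

  User-agent: *
  Disallow: /private/
  Disallow: /tmp/

  User-agent: BadBot
  Disallow: /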

BTW:  For websites with multiple subdomains, each subdomain must have its own robots.txt file. If example.com has a robots.txt file but my.example.com does not, the rules that apply to example.com do not apply to my.example.com.
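As an illustration, here is a minimal sketch of how a well-behaved crawler checks these rules, using Python's standard-library urllib.robotparser; the hostname and path are hypothetical. Note that the parser must be pointed at the robots.txt of the exact host being crawled:

  from urllib import robotparser

  # Each host (including each subdomain) serves its own robots.txt,
  # so a crawler builds one parser per host it visits.
  rp = robotparser.RobotFileParser()
  rp.set_url("https://my.example.com/robots.txt")
  rp.read()  # fetch and parse the file before requesting anything else

  # Ask whether a given user-agent may fetch a given URL on that host.
  if rp.can_fetch("*", "https://my.example.com/private/page.html"):
      print("allowed to crawl")
  else:
      print("asked to stay out")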

See also: everflux, robot
NetLingo Classification: Net Technology
