txt file is then parsed and may instruct the robot as to which pages should not be crawled. Because a search engine crawler may keep a cached copy of this file, it may occasionally crawl pages a webmaster does not want crawled. Pages typically prevented from being crawled include login-specific pages
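As a minimal sketch of how a crawler applies these rules, Python's standard-library `urllib.robotparser` can parse a robots.txt file and answer whether a given URL may be fetched. The file contents, site, and user-agent name below are hypothetical examples, not taken from the text above.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real crawler would first fetch
# https://example.com/robots.txt from the site it intends to crawl.
robots_txt = """\
User-agent: *
Disallow: /login
Disallow: /cart
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Before fetching any URL, the crawler checks it against the parsed rules.
print(parser.can_fetch("MyCrawler", "https://example.com/login"))       # False
print(parser.can_fetch("MyCrawler", "https://example.com/index.html"))  # True
```

Note that these rules are advisory: a well-behaved crawler consults them before each request, but nothing technically prevents a crawler from ignoring them.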