International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 120 - Number 9
Year of Publication: 2015
Authors: Bhavin M. Jasani, C. K. Kumbharana
DOI: 10.5120/21258-4115
Bhavin M. Jasani, C. K. Kumbharana. Restructuring robots.txt for better Information Retrieval. International Journal of Computer Applications. 120, 9 (June 2015), 35-40. DOI=10.5120/21258-4115
Nowadays the users of the WWW are not only humans. There are other visitors, such as web crawlers and robots, generated by search engines and other information retrievers. Far fewer visitors reach a website directly than through search engines or other links. To collect information from a website, search engines use crawlers or robots to access it. There must be an access mechanism or protocol that restricts such robots from accessing unwanted content of the website. robots.txt is a partial mechanism for this purpose, but it is not fully functional. This paper proposes enhancements to make full use of the functionality of the robots.txt file.
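For context, the following minimal sketch (not part of the original paper) shows how a well-behaved crawler consults robots.txt before fetching a page, using Python's standard urllib.robotparser module; the site URL and user-agent name are illustrative assumptions.

    from urllib.robotparser import RobotFileParser

    # Hypothetical site and crawler name, used only for illustration.
    ROBOTS_URL = "https://www.example.com/robots.txt"
    USER_AGENT = "ExampleBot"

    # Download and parse the site's robots.txt.
    parser = RobotFileParser()
    parser.set_url(ROBOTS_URL)
    parser.read()

    # A polite crawler checks each URL against the parsed rules
    # before requesting it.
    for url in ("https://www.example.com/",
                "https://www.example.com/private/data.html"):
        if parser.can_fetch(USER_AGENT, url):
            print("Allowed:   ", url)
        else:
            print("Disallowed:", url)

A crawler that follows this check will skip any path the site owner has disallowed for its user agent, which is exactly the access restriction the abstract describes robots.txt providing only partially.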