A file called robots.txt gives instructions for crawling a website. This standard, also known as the Robots Exclusion Protocol, is used by websites to tell bots which parts of the site they may crawl and which they should keep out of. You can use it to block crawlers from locations you don't want them to visit, such as pages with duplicate content or sections that are still under construction. Keep in mind that robots.txt is only a convention: malicious bots such as malware scanners and email harvesters ignore it, probe your site for security flaws, and may well start with the very sections you asked crawlers to avoid.
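As a rough illustration, here is a minimal robots.txt sketch; the paths and sitemap URL are hypothetical examples, not recommendations for any particular site. It asks all compliant crawlers to stay out of a duplicate-content archive and an under-construction area while leaving the rest of the site open.

```
# Example robots.txt (hypothetical paths, for illustration only)
User-agent: *
Disallow: /archive/duplicates/
Disallow: /under-construction/

# Optional: point compliant crawlers at the sitemap
Sitemap: https://www.example.com/sitemap.xml
```

If the file is missing, or a Disallow line is left empty, compliant crawlers treat the whole site as open; and since the file itself is public, it should never be relied on to hide sensitive areas.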