09/05/2016 by Nitesh

Disallow Search Engine Crawlers from Indexing Websites

When working for a client, we often showcase the work on a development server that lives on a public domain and is easily accessible to the client. With search engines crawling the web at enormous speed, it's likely that search engine crawlers (aka robots) from Google, Bing, or Yahoo will index development pages that we don't want indexed. To our rescue comes a handy text file named robots.txt. In this post, we will see how to disallow search engine crawlers using the robots.txt file.

Here's the robots.txt file that disallows crawling of the entire site –

User-agent: *
Disallow: /
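You can sanity-check rules like these before deploying them, for example with Python's standard urllib.robotparser module (example.com and the page path below are just placeholders):

```python
from urllib import robotparser

# The "block everything" rules from above, parsed as robots.txt lines
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Any crawler that honors robots.txt is denied every URL on the site
print(rp.can_fetch("Googlebot", "http://example.com/index.html"))  # False
```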

If you are interested in blocking only certain folders of the website from being indexed, you can use the following robots.txt contents instead.

User-agent: *
Disallow: /Images
Disallow: /Admin

This is an ideal setup for keeping folders such as Admin, Images, or CSS out of search engine results. Keep in mind that paths in robots.txt are case-sensitive, so each Disallow entry must match your folder name exactly.
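The same urllib.robotparser check works for folder-level rules; this sketch (again with example.com as a placeholder) confirms that blocked folders are denied while the rest of the site stays crawlable:

```python
from urllib import robotparser

# Rules mirroring the folder-blocking example above
rules = [
    "User-agent: *",
    "Disallow: /Images",
    "Disallow: /Admin",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# URLs under the disallowed folders are blocked; everything else is allowed
print(rp.can_fetch("*", "http://example.com/Images/logo.png"))  # False
print(rp.can_fetch("*", "http://example.com/about.html"))       # True
```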

Let me know if you're aware of any other ways of disallowing crawlers apart from the robots.txt file. Happy de-indexing!

#How To? #SEO #Utilities