Google explains how to hide a website from search results

Google goes over three ways you can hide a website from search results, and which one you should use depending on your situation.

Google says the best way to hide a website from search results is with a password, but there are other options you can consider.

This subject is spotlighted in the latest installment of the Ask Googlebot video series on YouTube.

Google’s John Mueller responds to a question asking how to prevent content from getting indexed in search, and whether that’s something websites are allowed to do.

“In short, yes you can,” Mueller says.

There are three ways to hide a website from search results:

  • Use a password
  • Block crawling
  • Block indexing

Websites can either opt out of indexing altogether, or they can get indexed and hide content from Googlebot by using a password.

Blocking content from Googlebot is not against webmaster guidelines, as long as it’s blocked to users at the same time.

For example, if the site is password protected when crawled by Googlebot, it must be password protected to users as well.

Alternatively, the site must have directives in place that stop Googlebot from crawling or indexing it.

Where you can run into trouble is if your website serves different content to Googlebot than it does to users.

That’s called “cloaking” and is against Google’s guidelines.

With that distinction made, here are the correct ways to go about hiding content from search engines.

3 Ways To Hide Content From Search Engines

1. Password Protection

Locking a website down with a password is often the best approach if you want to keep your site private.

A password ensures that neither search engines nor random web users will be able to see your content.

This is a common practice for websites in development. Publishing the site live behind a password is an easy way to share in-progress work with clients while preventing Google from accessing a site that isn’t ready to be seen.
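As one way to set this up, HTTP basic authentication can be enabled at the server level. A minimal sketch for an Apache server, assuming an `.htaccess` file in the site root and a `.htpasswd` file whose path is a placeholder you’d replace with your own:

```
# .htaccess — require a username/password for the whole site
# (assumes Apache with mod_auth_basic enabled; the .htpasswd
# path below is a placeholder, not from the original article)
AuthType Basic
AuthName "Private site"
AuthUserFile /full/path/to/.htpasswd
Require valid-user
```

With this in place, both Googlebot and human visitors get the same login prompt, which is exactly the parity Mueller describes: blocked for crawlers and for users alike, so it isn’t cloaking.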

2. Block Crawling

Another way to stop Googlebot from accessing your site is by blocking crawling. This is done with the robots.txt file.
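A robots.txt file that blocks all well-behaved crawlers from the entire site would look something like this (it must live at the root of the domain, e.g. `https://example.com/robots.txt` — `example.com` here is just a placeholder):

```
# robots.txt — ask all crawlers not to crawl any URL on this site
User-agent: *
Disallow: /
```

Note that this is a request, not an access control: it only stops crawlers that choose to honor it.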

With this method people can access your site with a direct link, but it will not be picked up by “well-behaved” search engines.

This isn’t the best option, Mueller says, because search engines might still index the address of the website without accessing the content.

It’s rare for that to happen, but it’s a possibility you should be aware of.

3. Block Indexing

The third and final option is to block your website from indexing.

For this, you add a noindex robots meta tag to your pages.

A noindex tag tells search engines not to index the page. Crawlers can still crawl the page — in fact, they have to, since that’s how they see the tag.

Users don’t see the meta tag and can still access the page normally.
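In practice, the tag is a single line in the page’s `<head>`; for non-HTML files such as PDFs, the same directive can be sent as an HTTP response header instead:

```
<!-- In the page's <head>: tell search engines not to index this page -->
<meta name="robots" content="noindex">
```

```
# Equivalent HTTP response header, useful for non-HTML resources
X-Robots-Tag: noindex
```

Because crawlers must fetch the page to see either directive, noindex should not be combined with a robots.txt crawl block on the same URL — a blocked crawler would never see the tag.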

Mueller’s Final Thoughts

Mueller wraps up the video saying Google’s top recommendation is to go the password route:

“Overall, for private content, our recommendation is to use password protection. It’s easy to check that it’s working, and it prevents anyone from accessing your content.

Blocking crawling or indexing are good options when the content isn’t private. Or if there’s just parts of a website which you’d like to prevent from appearing in search.”

___
by Matt Southern
source: SEJ