Have you ever wanted to stop Google from indexing a specific URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will likely come when you need to know how to do this.

The three methods most commonly used to prevent the indexing of a URL by Google are as follows:

Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
While the differences between the three approaches appear subtle at first glance, their effectiveness can vary drastically depending on which method you choose.

Using rel="nofollow" to prevent Google indexing

Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.

Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link, which, in turn, prevents it from discovering, crawling, and indexing the target page. While this method might work as a short-term fix, it is not a viable long-term solution.
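For illustration, a nofollowed link looks like the following sketch (the URL and anchor text are hypothetical examples, not from the original article):

```html
<!-- The rel="nofollow" attribute asks crawlers not to follow this link -->
<a href="https://example.com/private-page/" rel="nofollow">Private page</a>
```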

The flaw with this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other websites from linking to the URL with a followed link. So the chances that the URL will eventually get crawled and indexed using this method are quite high.

Using robots.txt to prevent Google indexing

Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
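As a minimal sketch, assuming the page lives at the hypothetical path /private-page/, the robots.txt at the site root would contain:

```text
# Applies to all crawlers; Googlebot honors this directive
User-agent: *
Disallow: /private-page/
```

Note that a disallow rule matches by URL path prefix, so /private-page/ also blocks everything beneath that directory.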

Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough websites link to the URL, Google can often infer the topic of the page from the link text of those inbound links, and as a result will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.

Using the meta robots tag to prevent Google indexing

If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute in the head element of the web page. Of course, for Google to actually see this meta robots tag, they need to first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
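The tag itself is a single line in the page's head element; the surrounding markup here is just a skeleton for context:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Tells crawlers that honor it not to include this page in their index -->
    <meta name="robots" content="noindex">
    <title>Page kept out of search results</title>
  </head>
  <body>
    <p>Page content here.</p>
  </body>
</html>
```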