Search engine optimization, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your website.
But nearly every website will have pages that you don't want included in this exploration.
In a best-case scenario, these pages are doing nothing to actively drive traffic to your site; in a worst-case scenario, they could be diverting traffic away from more important pages.
Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it's a plain text file that lives in your website's root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include directions for specific pages.
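To make that concrete, here's a minimal robots.txt sketch; the disallowed path and sitemap URL are purely illustrative:

```
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```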
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
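These directives live in a page's HTML head. A minimal example combining two of them:

```html
<!-- Ask search engines not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```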
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. As part of the HTTP header response to a URL, it controls indexing for an entire page, as well as the specific elements on that page.
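For instance, a server response for a PDF might carry the tag like this (an abridged sketch; the surrounding headers are assumed):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```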
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While you can set index-related directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify them.
Maybe you don't want a certain page to be cached and want it to be unavailable after a specific date. You can use a combination of the "noarchive" and "unavailable_after" tags to instruct search engine bots to follow these instructions.
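As a sketch, the relevant response headers might look like this; the date is purely illustrative:

```http
HTTP/1.1 200 OK
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2023 15:00:00 PST
```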
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply parameters on a larger, global level.
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet:
| Crawler Directives | Indexer Directives |
|---|---|
| Robots.txt – uses the user agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl. | Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.<br><br>Nofollow – allows you to specify links that should not pass on authority or PageRank.<br><br>X-Robots-Tag – allows you to control how specified file types are indexed. |
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
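Here's a minimal sketch of that rule for an Apache configuration or .htaccess file, assuming the mod_headers module is enabled:

```apache
<Files ~ "\.pdf$">
  # Tell crawlers not to index any PDF response or follow its links
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```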
In Nginx, it would look like the below:
```nginx
# Serve the same noindex, nofollow header for every URL ending in .pdf
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
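Again as a sketch for Apache, with mod_headers enabled:

```apache
<Files ~ "\.(png|jpe?g|gif)$">
  # Keep common image formats out of the search index
  Header set X-Robots-Tag "noindex"
</Files>
```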
Please note that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed. In other words, if a crawler is barred from fetching a page, it never sees the X-Robots-Tag header or the meta tag at all.
If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
Checking For An X-Robots-Tag
There are a few different methods that can be used to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
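If you're comfortable with the command line, curl offers another way to inspect response headers; this method isn't a plugin, and the URL here is hypothetical:

```sh
# Fetch only the response headers and filter for the X-Robots-Tag
curl -sI https://www.example.com/sample.pdf | grep -i x-robots-tag
```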
Another method that can be used at scale, in order to pinpoint issues on websites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report, X-Robots-Tag, December 2022
Using X-Robots-Tags On Your Site
Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.
Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.
That said, if you're reading this piece, you're probably not an SEO beginner.
So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.