
Robots.txt


What is robots.txt?

Robots.txt is a plain text file created for and placed on a website to define a series of rules for the behavior of search engine indexing robots, also known as crawlers or spiders. In general, it is used to prevent them from crawling certain content so that it does not end up indexed and shown in the SERPs.

Using it is simple, although it can be done in different ways. Whoever prepares the file can list the specific URLs that search engines should not crawl or, alternatively, block entire directories, subdirectories or files from Google and the rest of the search engines.
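For instance, the two approaches might look like this (both paths are hypothetical):

User-Agent: *

Disallow: /private-page.html

Disallow: /internal-directory/

The first Disallow rule blocks a single URL, while the second blocks a whole directory and everything inside it.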

Despite its purpose, this file is not a complete guarantee against indexing, so it is not recommended as a way to keep sections of a website private. Blocking a URL in robots.txt only tells crawlers not to fetch it; if other sites link to that URL, search engines may still index it. When content genuinely needs to stay out of search results, more effective alternatives such as a noindex directive or password protection are the better choice.

Directives such as Disallow are what you will usually find when opening a robots.txt file. It is very important to understand the file's structure and use, and we will add a series of links later to complement this information.

What is robots.txt for?

This file, robots.txt, is used to keep Google and other search engines from indexing certain parts of a website in their results pages (SERPs). Companies usually use it to leave out pages that could be penalized and hurt their SEO, such as duplicate content, or simply to remove certain content from search engines.

It can also be used to tell search engine robots how they should crawl the rest of the content on the site. Its role goes beyond simply blocking or allowing access, which makes it a very important element when developing a website.
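A robots.txt file can combine blocking and permission rules. As a hypothetical sketch, the following blocks a directory while still allowing one file inside it (the Allow directive is supported by Google and most major search engines):

User-Agent: *

Disallow: /downloads/

Allow: /downloads/catalogue.pdf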

Examples of robots.txt

There are as many robots.txt files as there are web pages on the Internet. Even so, as a simple example of how one can be structured, consider the following lines:

User-Agent: *

Disallow: /agency-social-media/

Sitemap: https://neoattack.com/sitemap.xml

In this case, a rule has been set for all search engine robots (first line) telling them not to crawl our Social Media services section (second line), and the path to our sitemap has been indicated (third line). The Sitemap line is not mandatory, but it is a recommended addition to these files.
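Rules can also be addressed to a single crawler rather than to all of them. In this hypothetical example, only Google's image crawler is blocked from an images directory, while every other robot may crawl the whole site (an empty Disallow blocks nothing):

User-Agent: Googlebot-Image

Disallow: /images/

User-Agent: *

Disallow: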

More information about robots.txt

To learn how to build a robots.txt file, as well as more about its uses and possibilities, we suggest taking a look at the related posts below.

R Marketing Digital