Why are pages excluded from the search?

Site pages can disappear from Yandex search results for a number of reasons. To find out why a page was excluded, open Yandex.Webmaster, go to Indexing → Pages in search, and select Excluded pages. Learn more about the Excluded pages block

Below are the reasons a page may be excluded, each followed by its solution.
The page was considered low-value or in low-demand

The algorithm decided not to include the page in search results because demand for the page is probably low. For example, this can happen if there's no content on the page, if the page is a duplicate of pages already known to the robot, or if its content doesn't completely suit user interests.

The algorithm automatically checks the pages on a regular basis, so the decision may change later. To learn more, see Low-value or low-demand pages.

An error occurred when the robot was loading or processing the page, and the server response contained HTTP status code 3XX, 4XX, or 5XX.

To find the error, use the Server response check tool.
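Yandex's Server response check tool is the authoritative way to see what the robot receives. As a rough local approximation (a sketch using only Python's standard library; pass whatever URL you want to inspect), you can fetch the page yourself and look at the status code:

```python
# Rough local check of the HTTP status code a URL returns. This only
# approximates the Server response check tool, which queries the page
# with the robot's own headers and addresses.
import urllib.error
import urllib.request

def check_status(url: str) -> int:
    """Return the HTTP status code the server answers with."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status          # 2xx responses (after any redirects)
    except urllib.error.HTTPError as e:
        return e.code                   # 4xx / 5xx error responses
```

A 4XX or 5XX result matches the "error when loading or processing the page" case above; 200 means the page is at least reachable for an ordinary client.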

If the page is accessible to the robot, make sure that:
  • Information about the pages is present in the Sitemap file.

  • The prohibiting directives (Disallow in the robots.txt file, and the noindex meta tag or <noindex> HTML element on the page) block only technical and duplicate pages from indexing.
Page indexing is prohibited in the robots.txt file or using a meta tag with the noindex directive.

Remove the prohibiting directives. If you didn't add the ban to robots.txt yourself, contact your hosting provider or domain name registrar for details.

Also make sure that the domain name isn't blocked due to the registration period expiry.
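For reference, these are the two common forms such a ban takes (the /example-page/ path is illustrative); if either applies to a page that should be searchable, remove it:

```
# In robots.txt, the Disallow directive keeps the robot away from the path:
User-agent: *
Disallow: /example-page/

<!-- In the page's HTML <head>, the noindex meta tag keeps it out of the index: -->
<meta name="robots" content="noindex" />
```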

The page redirects the robot to other pages

Make sure that the excluded page should actually redirect users. To do this, use the Server response check tool.

The page duplicates the content of another page

If the page is identified as a duplicate by mistake, follow the instructions in the Duplicate pages section.

The page is not canonical

Make sure that the page should actually point the robot to the URL specified in the rel="canonical" attribute.

The site is recognized as a secondary mirror

If the sites are grouped by mistake, follow the recommendations in the Separating site mirrors section.

Violations are found on the site

You can check this by going to the Troubleshooting → Security and violations page in Yandex.Webmaster.
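To check the redirect case from your own machine, you need to see the 3XX response itself rather than follow it. A minimal sketch using only Python's standard library (pass whatever URL you want to check):

```python
# Sketch: request a URL without following redirects, so a 3xx answer
# and its Location target become visible instead of being chased.
import urllib.request

class KeepRedirects(urllib.request.HTTPRedirectHandler):
    """Return 3xx responses as-is instead of following them."""
    def http_error_302(self, req, fp, code, msg, headers):
        return fp
    http_error_301 = http_error_303 = http_error_307 = http_error_302

def redirect_target(url: str):
    """Return the Location a URL redirects to, or None if it doesn't redirect."""
    opener = urllib.request.build_opener(KeepRedirects)
    with opener.open(url, timeout=10) as resp:
        if 300 <= resp.status < 400:
            return resp.headers.get("Location")
        return None
```

If `redirect_target` returns a URL for a page you expected to serve content directly, the robot is being redirected there too.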

The robot continues to visit pages excluded from the search, and before each update of the search database a special algorithm re-evaluates whether to display them in the search results. So a page may reappear in the search within two weeks after the robot learns of a change to it.

If you fixed the reason the page was excluded, send it for reindexing to inform the robot about the changes.

Questions and answers about pages excluded from the search

The page's Description and Keywords meta tags and the title element are filled in correctly, and the page meets all requirements. Why isn't it in the search results?

In addition to checking the tags on the page, the algorithm checks whether the page content is unique, informative, in demand, and up to date, along with many other factors. Still, pay attention to meta tags: for example, the Description meta tag and the title element may be generated automatically and duplicate each other.

If the site contains many similar products that differ only in color, size, or configuration, they may be excluded from the search. The same applies to pagination pages, product selection and comparison pages, and image pages without text content.

Pages that are marked as excluded open normally in the browser. What does this mean?

This can happen for several reasons:

  • The headers the robot sends when requesting the page differ from the headers the browser sends, so the server may respond differently to each. An excluded page can therefore still open correctly in the browser.
  • A page excluded from the search because of a download error disappears from the list of excluded pages only once it is available at the robot's request. Check the server's response for the URL. If the response contains the HTTP 200 OK status, wait for the next robot crawl.
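The first point above can be checked locally by comparing how the server answers a browser-like User-Agent versus a robot-like one. A sketch using only Python's standard library (the Yandex UA string follows the format Yandex documents for its robot; pass whatever URL you want to test):

```python
# Sketch: compare the status the server returns for different
# User-Agent headers. A difference means the server treats the
# robot and the browser differently.
import urllib.error
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
ROBOT_UA = "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"

def status_for(url: str, user_agent: str) -> int:
    """Return the HTTP status code received with the given User-Agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
```

If `status_for(url, BROWSER_UA)` and `status_for(url, ROBOT_UA)` disagree, the server is answering the robot differently from ordinary visitors.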
The “Excluded pages” list shows pages that aren't on the site anymore. How do I remove them?

The Excluded pages list on the Pages in search page shows the pages the robot accessed but didn't index (these may be non-existent pages previously known to the robot).

A page is removed from the excluded list if:

  • It is unavailable to the robot for a certain period of time.
  • It is not linked to from other pages on the site or from external sources.

Excluded pages listed in the service don't affect the site's position in the search results.

Pages with different content can be considered duplicates if they returned an error message to the robot (for example, a stub page on the site). Check how the pages respond now. If they return different content, send them for reindexing so they can return to the search results faster.

To prevent pages from being excluded from the search when the site is temporarily unavailable, configure the server to return the HTTP 503 response code.
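In practice the 503 is configured in your web server or framework; the sketch below, using only Python's standard library, just illustrates the shape of the response. The Retry-After value of one hour is an illustrative choice, not a requirement:

```python
# Minimal sketch of a maintenance responder: answer every request with
# HTTP 503 plus a Retry-After header, so crawlers postpone the visit
# instead of treating the pages as gone.
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(503)                  # service temporarily unavailable
        self.send_header("Retry-After", "3600")  # suggest retrying in an hour
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"Down for maintenance, please retry later.\n")

    def log_message(self, *args):
        pass  # keep the example quiet

# To serve it locally (blocks until interrupted):
# HTTPServer(("127.0.0.1", 8080), MaintenanceHandler).serve_forever()
```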

Excluding pages from the search results is not an error on the part of the site or the indexing robot: the robot excludes pages that users wouldn't be able to find through search queries. Their exclusion therefore shouldn't affect the visibility of the site's indexed pages. To learn more, see Low-value or low-demand pages.

Contact support if:

  • Pages ranked high in the search results before they were excluded.
  • The site's position dropped sharply after the pages were excluded.
  • The number of click-throughs from the search engine fell significantly after the pages were excluded.