Update

Excluding search and member pages via Robots.txt

We shipped some improvements to the Robots.txt file based on your feedback. The Robots.txt file lets you tell search engines which content you would like to appear in their search results and which content you would like to remain hidden.

Note: This update will not be applied if you have already configured your Robots.txt file.

When we first made it possible to set up a Robots.txt file, it was empty: it was your responsibility to fill and maintain it (often a one-time setup). Now, if your Robots.txt was still empty, we’ve pre-filled it with some generic rules that exclude certain community pages from being crawled.


What did we change in your Robots.txt?
We added the following generic rules (but only if your Robots.txt file had not been configured before). If you already maintain your own Robots.txt, we advise adding these rules as well; a complete example follows after the list:

Disallow: /members/
Disallow: ?userid=
Disallow: ?sort=
Disallow: search_type=tag
Disallow: search?q=
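
For context: Disallow rules only take effect as part of a user-agent group, so a complete Robots.txt containing these rules could look like the minimal sketch below. The User-agent: * line applies the group to all crawlers; keep in mind that how crawlers match patterns that do not start with a /, such as ?userid=, varies from search engine to search engine.

User-agent: *
Disallow: /members/
Disallow: ?userid=
Disallow: ?sort=
Disallow: search_type=tag
Disallow: search?q=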


Why did we do this?
By disallowing the pages above, we tell search engines not to crawl them, which eventually leads to them dropping out of the index. The other pages of your platform, which hold the important content and information, will get more weight and rank better in search engines. In the end it becomes easier for users to reach the information they are looking for from their favourite search engine.
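
If you want to sanity-check how such a rule is interpreted, Python’s built-in urllib.robotparser can evaluate the path-prefix rule /members/ (a minimal sketch with a made-up example.com domain; this parser only does prefix matching and does not understand wildcard or query-string patterns, so for the other rules use a crawler-specific tester such as Google Search Console’s robots.txt report):

from urllib.robotparser import RobotFileParser

# A minimal Robots.txt containing the path-prefix rule from this update.
rules = [
    "User-agent: *",
    "Disallow: /members/",
]

parser = RobotFileParser()
parser.parse(rules)

# Member profile pages are blocked for every crawler...
print(parser.can_fetch("*", "https://example.com/members/jane"))  # False
# ...while regular community pages stay crawlable.
print(parser.can_fetch("*", "https://example.com/topic/123"))     # True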


How can I change my Robots.txt?
If you want to make changes to your Robots.txt or revert the update we pushed, you can follow this guide to configure your own Robots.txt file: Setting up a Robots.txt file
