
Pages blocked by robots.txt, or too few pages scanned TN-M03

If too few pages are scanned, the most common cause is that pages are blocked by the site's robots.txt file.

To find out which links are blocked by robots.txt for a site (http://www.google.com, for example), open the address http://www.google.com/robots.txt. If you get a 404 Not Found response, no links are blocked; if you get a text file back, it lists which links are blocked.
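The same check can be done programmatically with Python's standard-library robots.txt parser. This is a minimal sketch (the example rules and URLs are hypothetical, not taken from any real site):

```python
# Check whether a crawler may fetch a URL under a given robots.txt,
# using Python's standard urllib.robotparser module.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse robots.txt content directly; rp.set_url(...) + rp.read()
# would fetch a live file instead.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Blocked for every user agent, including PowerMapper:
print(rp.can_fetch("PowerMapper", "http://www.example.com/private/page.html"))
# Not blocked:
print(rp.can_fetch("PowerMapper", "http://www.example.com/index.html"))
```

`can_fetch` applies the same longest-match user-agent rules a crawler would, so it is a quick way to confirm why a particular link was skipped.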

In the desktop versions, you can ignore the Robots Exclusion Standard by selecting the Options command from the View menu and unchecking Obey Robots.txt.

Adding the following entries to the top of the robots.txt file, before any Disallow: directives, will bypass any blocks intended for other web crawlers:

User-agent: PowerMapper
Allow: /

The PowerMapper user agent in robots.txt is understood by all PowerMapper products.
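For illustration, a complete robots.txt might look like this, with the PowerMapper entries placed first (the blanket Disallow for other crawlers is a hypothetical example, not a recommendation):

```
User-agent: PowerMapper
Allow: /

User-agent: *
Disallow: /
```

With this file, PowerMapper products can scan the whole site while crawlers matching the `*` block are excluded.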

See Also: What is robots.txt

Applies To: PowerMapper 3.0 and SortSite 3.0 or later

Last Reviewed: January 18, 2017