Amazonbot, Amazon's web crawler, has begun properly respecting robots.txt directives after years of inconsistent behavior. The change matters to website owners who want to control how their content is crawled and used by Amazon services such as Alexa, and it brings Amazonbot in line with the industry norms long followed by other major crawlers such as Googlebot.
Background
robots.txt is a long-standing convention, formalized as the Robots Exclusion Protocol in RFC 9309, that lets websites tell crawlers which pages should or shouldn't be fetched. Major search engines such as Google have long honored these directives, but Amazonbot's compliance had been inconsistent until now.
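As an illustration of the protocol, a site that wanted to keep Amazonbot out of a hypothetical /private/ directory while leaving the rest of the site open could serve a robots.txt like this (the paths are invented for the example, not taken from the source):

```
# Served at https://example.com/robots.txt
User-agent: Amazonbot
Disallow: /private/

# All other crawlers may fetch everything
User-agent: *
Disallow:
```

A compliant crawler checks these rules before each request. Python's standard library ships a parser for the format, so a minimal sketch of that check, assuming the robots.txt above is being served at example.com (a placeholder domain), looks like:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (example.com is a placeholder).
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

# A compliant crawler runs this check before every fetch.
print(parser.can_fetch("Amazonbot", "https://example.com/private/page"))  # False
print(parser.can_fetch("Amazonbot", "https://example.com/public/page"))   # True
```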
- Source: Lobsters
- Published: May 15, 2026 at 07:42 AM