Easily create custom robots.txt files for your Blogger site
Wildcards: Use * to match any sequence of characters, and $ to match the end of a URL. E.g., Disallow: /*.pdf$ blocks all PDF files, and Disallow: /private*/ blocks any path that begins with /private and is followed by a slash (e.g., /private/, /private-files/).
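The wildcard rules above can be sketched as a tiny matcher. This is a simplified illustration of how * and $ behave, not Google's actual parsing logic, and the helper names (robots_pattern_to_regex, is_blocked) are hypothetical:

```python
import re

def robots_pattern_to_regex(pattern: str) -> re.Pattern:
    # '*' matches any sequence of characters; a trailing '$' anchors the end of the URL path.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    if anchored:
        regex += "$"
    # Rules match from the start of the path, so anchor the beginning too.
    return re.compile("^" + regex)

def is_blocked(path: str, disallow_pattern: str) -> bool:
    return robots_pattern_to_regex(disallow_pattern).match(path) is not None

print(is_blocked("/docs/report.pdf", "/*.pdf$"))    # True  (ends in .pdf)
print(is_blocked("/docs/report.pdfx", "/*.pdf$"))   # False ($ anchors the end)
print(is_blocked("/private-area/page", "/private*/"))  # True
```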
Noindex: Robots.txt only controls crawling. To prevent pages from being indexed (not just crawled), use a noindex meta tag on the page itself or an X-Robots-Tag HTTP header.
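As a sketch, the two noindex mechanisms mentioned above look like this (the header line is shown as a comment because it is sent in the HTTP response, not in the HTML):

```html
<!-- Option 1: noindex meta tag inside the page's <head> -->
<meta name="robots" content="noindex">

<!-- Option 2: equivalent HTTP response header (configured on the server, not in HTML):
     X-Robots-Tag: noindex -->
```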
What is robots.txt? A file that tells web crawlers (like Googlebot) which pages or files the crawler can or can't request from your site.
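You can check what a given robots.txt allows or blocks programmatically. A minimal sketch using Python's standard urllib.robotparser, with a Disallow: /search rule like Blogger's default (example.com is a placeholder domain):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# parse() accepts the file's lines directly, so no network fetch is needed here.
rp.parse([
    "User-agent: *",
    "Disallow: /search",
])

print(rp.can_fetch("Googlebot", "https://example.com/search/label/news"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/p/about.html"))       # True
```

Note that urllib.robotparser implements only prefix matching; it does not support the * and $ wildcard extensions described above.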
How to use on Blogger: In your Blogger dashboard, go to Settings → Crawlers and indexing, turn on "Enable custom robots.txt", click "Custom robots.txt", paste the generated file, and save.
Validate: Always test your `robots.txt` before relying on it — for example, with the robots.txt report in Google Search Console (which replaced the standalone Robots.txt Tester) — to ensure it behaves as expected.
Default Blogger Sitemap: Blogger automatically creates sitemaps (e.g., yourblog.blogspot.com/sitemap.xml); this generator helps you include it in your robots.txt.
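Putting the pieces together, a typical custom robots.txt for a Blogger site might look like the following (this mirrors Blogger's default, which blocks the internal /search pages; yourblog.blogspot.com is a placeholder):

```
User-agent: *
Disallow: /search
Allow: /

Sitemap: https://yourblog.blogspot.com/sitemap.xml
```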