Deny Robot takeover

1bl4z3r released this 10 Sep 05:24 · 26 commits to main since this release

What's Changed

  • You can now customize the robots meta tag. See hugo.toml.example
    • denyRobots : the directives applied when denying crawlers. Default is noindex, nofollow, noarchive
    • allowRobots : the directives applied when allowing crawlers. Default is index, follow
  • To deny robots on user content: set noIndex to true in the page frontmatter. The page's robots meta tag will then contain noindex, nofollow, noarchive (or the value of denyRobots, if set); otherwise it will contain index, follow (or the value of allowRobots, if set).
  • To deny robots on Hugo-generated pages: set noIndexPages to the titles of the pages where crawlers should be denied. The listed pages will have noindex, nofollow, noarchive (or the value of denyRobots, if set) added to their robots meta tag.
noIndexPages = ["404 Page not found","Tags","Categories"]

Pages are selected by their Page title.

  • To deny crawling of the whole site: set siteNoIndex to true.
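Taken together, the options above might look like the following in a site configuration. This is a sketch, not a verbatim excerpt of hugo.toml.example: whether these keys live under [params] or at the top level, and their exact value formats, should be checked against hugo.toml.example; the directive values shown are the defaults described above.

```toml
[params]
  # Directives emitted when crawlers are denied (defaults shown above).
  denyRobots = "noindex, nofollow, noarchive"
  # Directives emitted when crawlers are allowed (defaults shown above).
  allowRobots = "index, follow"
  # Hugo-generated pages to deny, selected by page title.
  noIndexPages = ["404 Page not found", "Tags", "Categories"]
  # Set to true to deny crawling of the entire site.
  siteNoIndex = false
```

For user content, the per-page switch goes in the frontmatter; the title below is a made-up example:

```toml
+++
title = "A private note"
# Emits the denyRobots directives in this page's robots meta tag.
noIndex = true
+++
```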

Removed

  • The revisit-after meta tag, as it is not widely used by crawlers

Compatibility

  • With Hugo version 0.134.1

Full Changelog: v1.1.9...v1.1.10