How will machine trust signals evolve for content creators next year?

LLM.txt files offer no actual SEO value for content creators. Duane Forrester, former Bing executive and founder of UnboundAnswers.com, explains why these files are "total trash," since all AI crawlers already follow the standard robots.txt protocol. He details how the Creative Commons bot handles most AI training data collection, making additional file formats unnecessary, and provides guidance on proper robots.txt syntax to avoid blocking beneficial AI crawlers from accessing content.
About the speaker

Duane Forrester

UnboundAnswers.com

Duane is founder and CEO of UnboundAnswers.com and former Microsoft search engine insider

Show Notes

  • 00:07: LLM TXT Files Assessment

    A definitive evaluation of LLM TXT files as a concept, comparing them to established protocols like robots.txt and explaining why they're unnecessary for current crawler technology.

  • 01:12: PageRank Historical Context

    Discussion of how historical ranking metrics like PageRank were used for gaming search results and why exposing such scores creates manipulation opportunities rather than value.

  • 02:45: Creative Commons Bot Crawling

    Explanation of how CC bot serves as the primary data source for AI systems and why existing robots.txt protocol adequately controls all crawlers including new AI bots.

  • 03:04: Robots TXT Best Practices

    Technical guidance on proper robots.txt syntax and common implementation mistakes, emphasizing that crawlers operate on a default "do crawl" basis with only disallow directives being effective.

  • 04:20: AI Bot Access Strategy

    Recommendation to ensure IT departments aren't blocking AI system crawlers and the importance of allowing these bots access for content visibility in AI-powered search results.
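The robots.txt guidance above can be sketched in a minimal example. The user-agent names below (CCBot, GPTBot) are illustrative examples of AI crawlers, not an exhaustive or current list; the key point from the episode is that crawlers operate on a default "do crawl" basis, so only Disallow directives change behavior, and an empty Disallow value explicitly permits everything:

```
# Crawlers default to "do crawl"; only Disallow rules restrict access.

# Example AI crawler user-agents (illustrative; verify current bot names)
User-agent: CCBot
Disallow:

User-agent: GPTBot
Disallow:

# All other crawlers: block only what must stay private
User-agent: *
Disallow: /private/
```

A common implementation mistake is assuming an Allow directive is required for access; because crawling is the default, accidentally broad Disallow rules (for example, `Disallow: /` under `User-agent: *`) are what silently remove content from AI-powered search results.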
