
Cloudflare's New Policy Gives Publishers Control Over AI Bot Content Use

Say goodbye to AI bots freely copying content. Cloudflare's new policy lets publishers opt out of AI training and even charge for access.


Cloudflare has introduced a new policy, Content Signals, to give publishers more control over how AI bots use their online content. The move responds to AI bots disregarding the traditional robots.txt file, which has long told search engines and other crawlers what they may access.

Until now, AI bots have freely copied content for chatbot training or answer generation, regardless of the rules in robots.txt. Cloudflare's new policy lets publishers opt out of having their content used for AI purposes via an extension to the robots.txt file.

The policy introduces three new signals: 'search', 'ai-input', and 'ai-train'. These allow publishers to limit access to their content for specific purposes. Cloudflare has framed these signals as a 'reservation of rights', potentially paving the way for legal action against AI companies misusing content.
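In practice, the signals are expressed as an extra Content-Signal line in robots.txt, alongside the usual directives. A minimal sketch, following the syntax Cloudflare has published (the specific yes/no values here are illustrative, not recommendations):

```
# Content signals: yes = this use is allowed, no = it is not.
# A signal that is omitted expresses no preference either way.
User-Agent: *
Content-Signal: search=yes, ai-input=no, ai-train=no
Allow: /
```

Because the line lives in robots.txt, it relies on crawlers choosing to honor it; the "reservation of rights" framing is what gives it potential legal weight.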

Cloudflare is also trialing a 'pay-per-crawl' feature, which would charge AI crawlers for accessing a site, opening an additional revenue stream for publishers. The Content Signals policy has already been rolled out to more than 3.8 million domains via Cloudflare's managed robots.txt, with default values set for each signal.
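Pay-per-crawl builds on the standard HTTP 402 Payment Required status code: a crawler without a payment arrangement receives a 402 instead of the page content. A hypothetical exchange (the request details below are illustrative, and any pricing metadata Cloudflare attaches to the response is not shown):

```
GET /article HTTP/1.1
Host: example.com
User-Agent: ExampleAICrawler/1.0

HTTP/1.1 402 Payment Required
```

A crawler that has agreed to pay would then retry with payment credentials and receive the content normally.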

Cloudflare's Content Signals Policy is a significant step towards giving publishers more control over how AI bots use their online content. With over 3.8 million domains already covered, it could set a new standard for content restriction.
