
Cloudflare Says 1.1.1.1 DNS Outage Was Caused by an Internal Misconfiguration, Not a BGP Hijack

Cloudflare's 1.1.1.1 DNS resolver outage was traced to an internal configuration error; a concurrent BGP hijack was unrelated.

On July 14, 2025, a widespread global outage hit Cloudflare's 1.1.1.1 DNS resolver service, lasting approximately 62 minutes, from around 21:52 to 22:54 UTC. Millions of users worldwide had difficulty reaching websites and internet services, as the DNS disruption caused numerous sites to fail to load or become unreachable [1][3].

The root cause was traced to an internal configuration error involving Cloudflare's Data Localization Suite (DLS), a service designed to manage regional data routing [3][4]. On June 6, 2025, a configuration change inadvertently attached the IP prefixes of the 1.1.1.1 resolver service to a non-production, inactive DLS environment. Because that environment was not in use, the error had no immediate effect [3][4].
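
The incident illustrates the kind of guardrail that could catch such a mistake at change time. Below is a minimal sketch of a pre-deployment validation rule, assuming a simplified configuration model; the environment names, prefix attachments, and the validate_change helper are hypothetical illustrations, not Cloudflare's actual tooling.

```python
import ipaddress

# Hypothetical config model: each service environment declares the BGP
# prefixes attached to it and whether it is production-serving.
ENVIRONMENTS = {
    "dls-prod-eu": {"production": True,  "prefixes": []},
    "dls-test-01": {"production": False, "prefixes": []},
}

# Prefixes that must only ever be attached to production environments.
PROTECTED_PREFIXES = [ipaddress.ip_network("1.1.1.0/24"),
                      ipaddress.ip_network("1.0.0.0/24")]

def validate_change(env_name: str, new_prefixes: list[str]) -> list[str]:
    """Reject attaching production-serving prefixes to non-prod environments."""
    errors = []
    env = ENVIRONMENTS[env_name]
    for p in new_prefixes:
        net = ipaddress.ip_network(p)
        protected = any(net.subnet_of(prot) or prot.subnet_of(net)
                        for prot in PROTECTED_PREFIXES
                        if prot.version == net.version)
        if protected and not env["production"]:
            errors.append(f"{p} is production-serving; refusing to attach "
                          f"it to non-production environment {env_name}")
    return errors

# In this model, the June 6 change would have been rejected:
print(validate_change("dls-test-01", ["1.1.1.0/24"]))
```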

On July 14, at 21:48 UTC, an engineer applied an additional configuration update adding a test location to this inactive DLS service. This update caused a global refresh of network configuration, unintentionally withdrawing BGP prefixes associated with the 1.1.1.1 resolver IP ranges (including 1.1.1.0/24, 1.0.0.0/24, and IPv6 ranges) from all production data centers worldwide, effectively removing the service IPs from the global routing tables [2][4].
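
To make the scope concrete, here is a small sketch using Python's standard ipaddress module to test whether the resolver addresses fall inside the withdrawn prefixes. The article names 1.1.1.0/24 and 1.0.0.0/24 explicitly; the IPv6 prefix shown (2606:4700:4700::/48) is an assumption based on Cloudflare's published resolver addressing, since the article only says "IPv6 ranges".

```python
import ipaddress

# Prefixes the article reports were withdrawn (the /48 is an assumption;
# the article only says "IPv6 ranges").
WITHDRAWN = [ipaddress.ip_network(p) for p in
             ("1.1.1.0/24", "1.0.0.0/24", "2606:4700:4700::/48")]

def covered_by_withdrawal(addr: str) -> bool:
    """Return True if addr falls inside any withdrawn prefix."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in WITHDRAWN if net.version == ip.version)

for resolver in ("1.1.1.1", "1.0.0.1", "2606:4700:4700::1111"):
    print(resolver, "unreachable" if covered_by_withdrawal(resolver) else "ok")
```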

As a result, DNS queries over UDP, TCP, and DNS-over-TLS (DoT) failed immediately, while DNS-over-HTTPS (DoH) traffic was less affected, since most DoH clients reach the service through the cloudflare-dns.com hostname rather than hard-coded resolver IPs [2]. Notably, the outage was not caused by an external attack or BGP hijacking, although an unrelated BGP hijack incident surfaced at the same time, complicating diagnosis [2][3][4].
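
The protocol split can be reproduced with a short probe. The sketch below, assuming dnspython 2.x (with the httpx or requests extra installed for DoH support), contrasts a UDP query aimed directly at the anycast IP with a DoH query addressed by hostname; the endpoint URL is Cloudflare's documented https://cloudflare-dns.com/dns-query.

```python
import dns.exception
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")

# Classic DNS over UDP, aimed directly at the anycast IP. During the
# outage this failed: 1.1.1.0/24 had been withdrawn from global routing
# tables, so packets to 1.1.1.1 had nowhere to go.
try:
    reply = dns.query.udp(query, "1.1.1.1", timeout=3)
    print("UDP/53 answer:", reply.answer)
except dns.exception.Timeout:
    print("UDP/53 query timed out (what users saw during the outage)")

# DoH addressed by hostname. The hostname resolves to addresses outside
# the withdrawn resolver prefixes, which is why most DoH traffic kept
# working during the incident.
reply = dns.query.https(query, "https://cloudflare-dns.com/dns-query")
print("DoH answer:", reply.answer)
```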

Cloudflare detected the misconfiguration at 22:01 UTC, shortly after DNS traffic dropped. It began reverting to the previous configuration at 22:20 UTC, which restored traffic to approximately 77% of normal levels. Full resolver functionality at all locations was restored by 22:54 UTC [4].

In response to this incident, Cloudflare announced plans to deprecate legacy systems that lack progressive deployment methodologies, so that configuration changes roll out gradually rather than refreshing the global network at once [4]. The company also aims to improve detection, reduce alert fatigue, and accelerate response by building an interactive sandbox for security teams [4].
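
For illustration, here is a minimal sketch of the progressive-deployment idea the remediation points to: a change is applied to an increasing fraction of locations with a health gate between stages, and rolled back if the gate fails. The stage fractions, soak time, and helper functions are hypothetical, not Cloudflare's tooling.

```python
import time

# Hypothetical rollout stages: each stage applies the change to a larger
# slice of the fleet, with a health check between stages.
STAGES = [0.01, 0.05, 0.25, 1.00]  # fraction of locations

def apply_to_fraction(change_id: str, fraction: float) -> None:
    print(f"applying {change_id} to {fraction:.0%} of locations")

def healthy(change_id: str) -> bool:
    # Stand-in for real telemetry: query rates, error rates, and whether
    # the expected BGP prefixes are still announced.
    return True

def rollback(change_id: str) -> None:
    print(f"rolling back {change_id} everywhere")

def progressive_rollout(change_id: str) -> bool:
    for fraction in STAGES:
        apply_to_fraction(change_id, fraction)
        time.sleep(1)  # soak time; in practice minutes to hours
        if not healthy(change_id):
            rollback(change_id)
            return False
    return True

if __name__ == "__main__":
    ok = progressive_rollout("dls-config-v2")
    print("rollout", "succeeded" if ok else "aborted")
```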

This outage serves as a reminder of the complexity involved in managing anycast routing, the method Cloudflare uses to distribute traffic across multiple global locations for improved performance and capacity [5]. As the world continues to rely heavily on a handful of public DNS services, providers need enhanced validation and testing of BGP and network configuration changes, better segregation and monitoring of pre-production and production routes, and stronger automated rollback mechanisms and real-time alerting for route-withdrawal anomalies (sketched below).
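
As a sketch of that last point, the following monitor alerts when an expected prefix's global visibility drops below a threshold. The visibility() data source is deliberately stubbed: in practice it would come from a BGP feed such as RIPE RIS or a set of looking glasses, and the threshold and alerting hook are hypothetical.

```python
# Prefixes whose global visibility we expect to stay high; the 1.1.1.1
# resolver prefixes from the incident serve as examples.
EXPECTED_PREFIXES = ["1.1.1.0/24", "1.0.0.0/24"]
MIN_VISIBILITY = 0.90  # alert if under 90% of vantage points see the route

def visibility(prefix: str) -> float:
    """Fraction of BGP vantage points currently seeing this prefix.

    Stub: a real monitor would pull this from a BGP feed (e.g. RIPE RIS)
    or a set of looking glasses; the data source is left abstract here.
    """
    return 1.0  # pretend the route is fully visible

def alert(message: str) -> None:
    # Stand-in for a paging/alerting integration.
    print("ALERT:", message)

def check_routes() -> None:
    for prefix in EXPECTED_PREFIXES:
        seen = visibility(prefix)
        if seen < MIN_VISIBILITY:
            alert(f"{prefix} visible from only {seen:.0%} of vantage points")

if __name__ == "__main__":
    # A real monitor would run this on a tight loop or consume a
    # streaming BGP feed for near-real-time detection.
    check_routes()
```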

Sources:

[1] Ars Technica. (2025, July 15). Cloudflare's DNS outage: Here's what we know so far. https://arstechnica.com/information-technology/2025/07/cloudflares-dns-outage-heres-whats-we-know-so-far/
[2] Cloudflare. (2025, July 15). Cloudflare incident report: DNS outage on July 14, 2025. https://blog.cloudflare.com/cloudflare-incident-report-dns-outage-on-july-14-2025/
[3] The Register. (2025, July 15). Cloudflare 1.1.1.1 DNS outage: BGP hijack not to blame, says Cloudflare. https://www.theregister.com/2025/07/15/cloudflare_1_1_1_dns_outage/
[4] ZDNet. (2025, July 15). Cloudflare DNS outage: What happened, and how was it fixed? https://www.zdnet.com/article/cloudflare-dns-outage-what-happened-and-how-was-it-fixed/
[5] Cloudflare. (n.d.). Anycast routing. https://www.cloudflare.com/learning/dns/what-is-anycast-routing/

  1. The July 14, 2025 DNS outage was caused by a misconfiguration within Cloudflare's Data Localization Suite (DLS), the service that manages regional data routing, not by an external attack.
  2. In response, Cloudflare plans to deprecate legacy systems that lack progressive deployment, and to build an interactive sandbox for security teams to improve detection, reduce alert fatigue, and accelerate incident response.
