

Last Month in Internet Intelligence: October 2018

The level of significant Internet disruptions observed through the Oracle Internet Intelligence Map was lower in October, though the underlying reasons for these disruptions remained generally consistent with prior months. For enterprises, the importance of redundant Internet connectivity and regularly exercised failover plans is clear. Unfortunately, for state-imposed Internet outages, such planning and best practices may need to include failsafes that keep operations running while periodically offline.

Directed disconnection

On October 10, Ethiopian Prime Minister Abiy Ahmed met with several hundred soldiers who had marched on his office to demand increased pay. The Ethiopian Broadcasting Corporation (formerly known as ETV) did not cover the soldiers' march but noted that Internet connectivity within the country had been shut off for several hours to prevent "fake news" from circulating on social media. This aligned with residents' reports of a three-hour Internet outage.

The figure below shows that the disruption began around 12:00 GMT, significantly impacting both traceroutes to, and DNS query traffic from, Ethiopia for several hours.

The impact of the Internet shutdown is also clearly evident in the figure below, which shows traceroutes into Ethio Telecom, the state-owned telecommunications service provider. Similar to the country-level graph shown above, the number of completed traceroutes into Ethio Telecom dropped significantly for several hours. However, a complete Internet outage was not observed in either case.

Exams

On October 14, the Ministry of Communications in Iraq announced the latest round of Internet shutdowns within the country in conjunction with nationwide exams.
According to the translation of an article posted on the Iraqi Media Network, "The ministry's spokesman Hazem Mohammad Ali said in a statement that the Internet service will be interrupted for two hours a day from 11 am to 1 pm for ten days from Sunday, 2018/10/14 until Wednesday, 2018/10/24. This came at the request of the Ministry of Education and will stop the Internet service for the days of examinations exclusively."

As shown in the figures below, multiple Internet shutdowns were observed during the specified period, but they did not appear to take place on October 19, 23, or 24 as expected. As seen during previous Internet disruptions in Iraq, the government's actions caused a decline in the number of completed traceroutes to targets in the country, a reduction in the number of routed networks from the country, and lower levels of DNS traffic from resolvers on Iraqi networks.

As noted in the past, during these nationwide disruptions, telecoms with independent Internet connections through the north of Iraq often stay online, as do those with independent satellite links. However, the figures below illustrate the impact of these state-imposed disruptions on two major Iraqi network providers, ITC and Earthlink. These graphs show that the observed disruptions within these networks appear to be near-complete, as well as the lack of anticipated outages on the 19th, 23rd, and 24th.

Power outage

Minister Motta Domínguez reports that, due to an explosion and fire at the La Arenosa substation, at least 13 states of the country have been without electrical service since 6 pm this Monday, October 15 — Efecto Cocuyo (@EfectoCocuyo) October 16, 2018

[#NOW] As of 7:32 p.m., the states reported without electrical service are: Zulia, Táchira, Lara, Portuguesa, Carabobo, Nueva Esparta, Barinas, Yaracuy, Trujillo, Mérida, Falcón, and part of Miranda — NTN24 Venezuela (@NTN24ve) October 15, 2018

On October 15, the Tweets above (among others) provided insight into the scope of a widespread power outage in Venezuela. This October blackout followed similar events in July and August.

As seen in the figure below, the traceroute completion ratio metric saw a sharp but partial drop during the evening of October 15, aligned with the time the power outage reportedly began. (Venezuela is 4 hours behind GMT.) The metric recovered gradually over the following 24 hours, though it took a few days to return to normal. The blackout did not have a meaningful impact on the BGP routes metric, which is not surprising, because the relevant routers are generally located in data centers with backup/auxiliary power, such as generators. Interestingly, the power outage appeared to have a delayed impact on the DNS query rate metric; while the request traffic followed a pattern roughly similar to that seen on preceding/following days, the volume of requests was slightly lower.

The impact of the power outage was also visible in the Traffic Shifts graphs of a number of Venezuelan network providers, as shown in the figure below. It is particularly evident in the graphs for Net Uno and Inter, seen in the top row, with noticeable declines in the number of completed traceroutes to targets in those networks. The impact was less pronounced in other networks such as Digitel and CANTV, where the number of completed traceroutes saw a more modest decline.

Severe weather

After being battered by Typhoon Mangkhut in September, the Northern Mariana Islands were devastated on October 24-25 by Super Typhoon Yutu, which hit as a Category 5 typhoon. The figure below shows an Internet disruption starting mid-morning (GMT) on October 24.
(The Northern Mariana Islands are 10 hours ahead of GMT.) As the graphs show, both the traceroute completion ratio and DNS query rate metrics dropped concurrent with the arrival of the storm, but the routing infrastructure remained stable.

The other figure below illustrates the impact of Yutu on IT&E Overseas, a Guam-based telecommunications firm that also provides Internet connectivity in the Northern Mariana Islands. As seen in the figure, the number of completed traceroutes reaching endpoints in IT&E declined as the storm hit, with upstream connectivity through Hutchison, Level 3, Tata, and Cogent evidently impacted.

Network malfunction

On October 22, East Timor (also known as Timor-Leste) suffered a multi-hour Internet disruption, reportedly due to a failure at an upstream provider of the country's largest telecommunications operator. Coverage of the issue noted that Timor Telecom's link to upstream provider Telkomsel failed at around 17:45 local time (08:45 GMT), and that a failover to satellite provider O3b did not occur as expected. Services were reportedly restored just before 23:00 local time (14:00 GMT).

The figure below shows how the link failure impacted connectivity at a country level in East Timor. The traceroute completion ratio metric was lower for the 5-plus-hour duration of the disruption, and the number of routed networks dipped lower for the period as well. The impact is harder to see in the DNS query rate graph, likely due to the skew caused by the spikes on October 21 and 23, but the graph does appear to flatten during the period of the disruption.

The traffic shifts graphs below illustrate how the Telkomsel link failure impacted network providers in East Timor. Published reports quoted a representative of Timor Telecom, and the first figure corroborates their report of the problems with Telkomsel and the failed shift of upstream traffic to O3b.
(Telkomsel is a subsidiary of Telekomunikasi Indonesia International, which is shown in the graphs below.) The graph shows that the majority of the traceroutes to targets in Timor Telecom go through satellite Internet provider O3b, with a fraction going through Telekomunikasi Indonesia International (T.L.), which is presumably a network identifier that the Indonesian provider uses for Internet services in East Timor. However, when the link to Telekomunikasi Indonesia International failed, it appears that the link to O3b did as well, dropping the number of completed traceroutes to near zero and spiking the latency for those that did complete.

The second figure shows that Telekomunikasi Indonesia International (T.L.) gets nearly all of its upstream connectivity through PT Telekomunikasi Indonesia, and the link failure is clearly evident in that graph.

Finally, the third figure illustrates the impact of the link failure on Viettel Timor Leste, which also uses Telekomunikasi Indonesia International (T.L.) as an upstream provider. The graph shows that when the problems with Telekomunikasi Indonesia International (T.L.) occurred, traceroutes to targets in Viettel found alternate paths, with increasing numbers going through Asia Satellite Internet eXchange (ASIX) and PT. Sarana Mukti Adijaya.

Conclusion

In addition to the Internet disruptions reviewed above, notable irregularities were observed in Mayotte, Mali, Botswana, and St. Vincent and the Grenadines during October. However, the root causes of these disruptions remain unknown. Observed network-level disruptions aligned with the country-level ones, but no public information was found that explained exactly why these Internet outages occurred. Beyond these, the Oracle Internet Intelligence Map surfaced hundreds of brief and/or minor Internet disruptions around the world over the course of the month.
Regardless of the underlying causes, the importance of redundant Internet connections and the need to regularly test failover/backup infrastructure cannot be overstated, as we saw in East Timor. While this may be challenging, and even expensive, in remote locations dependent on submarine cables or satellite connectivity for Internet access, the growing importance of the Internet for communication, commerce, and even government services means that wide-scale Internet disruptions, even brief ones, can no longer be tolerated.
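One way to make failover testing concrete is to check, from the outside, whether traffic actually shifts to the backup path during a drill. The sketch below tallies which upstream provider each traceroute traverses and compares shares before and during an event; the ASNs are placeholders from the documentation range (RFC 5398), not the providers discussed above.

```python
# Sketch: verify a failover engaged by comparing the share of traceroutes
# observed traversing each upstream provider. AS64496/AS64497 are placeholder
# ASNs from the documentation range, not real providers.
from collections import Counter

def upstream_share(observed_upstreams):
    """Map upstream ASN -> fraction of traceroutes traversing it."""
    counts = Counter(observed_upstreams)
    total = sum(counts.values())
    return {asn: n / total for asn, n in counts.items()}

before = [64496] * 9 + [64497]        # normally ~90% via the primary transit
during = [64496] * 1 + [64497] * 9    # during a drill on the primary link

share = upstream_share(during)
print(share[64497] >= 0.5)            # True only if the backup took over
```

If the backup's share stays near zero during a drill, as it did for Timor Telecom's satellite link, the failover plan exists only on paper.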



China Telecom's Internet Traffic Misdirection

In recent weeks, the Naval War College published a paper that contained a number of claims about purported efforts by the Chinese government to manipulate BGP routing in order to intercept internet traffic. In this blog post, I don't intend to address the paper's claims around the motivations of these actions. However, there is truth to the assertion that China Telecom (whether intentionally or not) has misdirected internet traffic (including out of the United States) in recent years. I know because I expended a great deal of effort to stop it in 2017.

Traffic misdirection by AS4134

On 9 December 2015, SK Broadband (formerly Hanaro) experienced a brief routing leak lasting little more than a minute. During the incident, SK's ASN, AS9318, announced over 300 Verizon routes that were picked up by OpenDNS's BGPstream service:

Woah, an ASN in Korea just hijacked a bunch of other ASNs across APAC. — Richard Westmoreland (@RSWestmoreland) December 9, 2015

The leak was announced exclusively through China Telecom (AS4134), one of SK Broadband's transit providers. Shortly afterwards, AS9318 began transiting the same routes from Verizon APAC (AS703) to China Telecom (AS4134), which in turn began announcing them to international carriers such as Telia (AS1299), Tata (AS6453), GTT (AS3257), and Vodafone (AS1273). This resulted in AS paths such as:

… {1299, 6453, 3257, 1273} 4134 9318 703

Networks around the world that accepted these routes inadvertently sent traffic to Verizon APAC (AS703) through China Telecom (AS4134). Below is a traceroute mapping the path of internet traffic from London to address space belonging to the Australian government. Prior to this routing phenomenon, it never traversed China Telecom.
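Misdirection of this kind can be flagged programmatically once you know which networks to watch. The sketch below, a minimal illustration rather than our production tooling, alerts on any AS path in which a watched transit network appears between the collector and a protected origin; the ASNs come from the incident above, and the BGP update feed that would supply the paths is assumed.

```python
# Minimal sketch: flag AS paths in which a watched transit AS sits upstream
# of a protected origin. ASNs are drawn from the incident described above;
# the BGP update feed that would supply paths is a hypothetical input.

PROTECTED_ORIGINS = {703, 701}   # Verizon APAC / Verizon North America
WATCHED_TRANSITS = {4134}        # China Telecom

def suspicious(as_path):
    """True if a watched AS is transiting routes for a protected origin."""
    if as_path and as_path[-1] in PROTECTED_ORIGINS:
        # Every AS ahead of the origin in the path is acting as transit.
        return any(asn in WATCHED_TRANSITS for asn in as_path[:-1])
    return False

# The leaked path: Telia -> China Telecom -> SK Broadband -> Verizon APAC
print(suspicious([1299, 4134, 9318, 703]))  # True: alert
print(suspicious([1299, 703]))              # False: a normal direct path
```

A watch-list check like this only catches transits you already distrust; the broader point of this post is that paths need monitoring for any unexpected intermediate AS.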
Over the course of several months last year, I alerted Verizon and other Tier 1 carriers of the situation and, ultimately, Telia and GTT (the biggest carriers of these routes) put filters in place to ensure they would no longer accept Verizon routes from China Telecom. That action reduced the footprint of these routes by 90% but couldn't prevent them from reaching those who were peering directly with China Telecom.

At times in the past year, Verizon APAC sent routes from Verizon North America (AS701) via this AS path, creating AS paths such as:

… (peers_of_4134) 4134 9318 703 701

When these routes were in circulation, networks peering with China Telecom (including those in the US) accepted AS701 routes via AS4134, sending US-to-US traffic via mainland China. One of our affected clients was a major US internet infrastructure company. Shortly after we alerted them of the situation, they deployed filters on their peering sessions with China Telecom to block Verizon routes from being accepted. Below is a screenshot of one of thousands of traceroutes from within the US to Verizon (in the US) that illustrate the path of traffic outside of the country.

Internet Path Monitoring

BGP hijack alerting commonly focuses on unexpected origins or immediate upstreams for routed address space. However, traffic misdirection can occur at other parts of the AS path. In this case, Verizon APAC (AS703) likely established a settlement-free peering relationship with SK Broadband (AS9318), unaware that AS9318 would then send Verizon's routes exclusively on to China Telecom, which would in turn send them on to the global internet. We would classify this as a peer leak; the result was China Telecom's network being inserted into the inbound path of traffic to Verizon. The problematic routing decisions were occurring multiple AS hops from the origin, beyond its immediate upstream. Conversely, the routes accepted from one's peers also need monitoring – a fairly rare practice.
Blindly accepting routes from a peer enables the peer to (intentionally or not) insert itself into the path of your outbound traffic.

Conclusion

In 2014, I wrote a blog post entitled "Use Protection if Peering Promiscuously" that highlighted the problems with bad route propagation, such as in the incidents described above. It is problems like these that spurred my friend Alexander Azimov at Qrator Labs to lead an ongoing effort to create an IETF standard for RPKI-based AS path verification. Such a mechanism, if deployed, would drop BGP announcements with AS paths that violate the valley-free property, based on a known set of AS-AS relationships, and would have at the very least contained some of the bad routing described above. In the meantime, sign on to the Internet Society's MANRS project to improve routing security and watch your routes!
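The valley-free rule can be made concrete. The sketch below validates an AS path against a known set of AS relationships: read from the origin outward, a legitimate path climbs customer-to-provider links, crosses at most one peering link, and then only descends. The relationship sets here are a tiny illustrative sample based on the incident in this post, not a real topology database.

```python
# Sketch of a valley-free check over known AS-AS relationships.
# The relationship sets below are illustrative, drawn from the incident above.

C2P = {(9318, 4134)}                                     # (customer, provider)
P2P = {frozenset({703, 9318}), frozenset({1299, 4134})}  # peering pairs

def rel(a, b):
    """Relationship of the link a -> b, from a's perspective."""
    if (a, b) in C2P:
        return "up"      # a is b's customer
    if (b, a) in C2P:
        return "down"    # a is b's provider
    if frozenset({a, b}) in P2P:
        return "peer"
    return None          # unknown relationship

def valley_free(as_path):
    """Validate a BGP AS path (nearest AS first, origin last)."""
    hops = list(reversed(as_path))   # walk from the origin outward
    descending = False
    for a, b in zip(hops, hops[1:]):
        r = rel(a, b)
        if r is None or (descending and r != "down"):
            return False             # unknown link, or climbing after the peak
        if r in ("peer", "down"):
            descending = True        # past the peak: only downhill from here
    return True

print(valley_free([1299, 4134, 9318]))       # True: up to provider, then peer
print(valley_free([1299, 4134, 9318, 703]))  # False: the peer leak above
```

In the actual IETF effort, relationship information would come from signed, published objects rather than a hand-maintained table, but the logic that rejects this leak is the same: AS9318 forwarding a peer's routes up to its provider creates a valley.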



Last Month in Internet Intelligence: September 2018

Over the course of a given month, hundreds of Internet-impacting "events" are visible within the Oracle Internet Intelligence Map. Many are extremely short-lived, lasting only minutes, while others last for hours or days; some have a minor impact on a single metric, while others significantly disrupt all three metrics. In addition, for some events, the root cause is publicly known, while for others, digging into the underlying data helps us make an educated guess about what happened. Ultimately, this creates challenges in separating the signal from the noise, and in triaging and prioritizing that month's events for review in this blog post. Having said that, in September we observed Internet disruptions due to exams, power outages, extreme weather, and submarine cable issues, as well as a number of others with unknown causes. Additionally, a third test of nationwide mobile Internet connectivity took place in Cuba.

Cuba

As noted in our August post, ETECSA (the Cuban state telecommunications company) carried out two tests of nationwide mobile Internet connectivity, which were evident as spikes in the DNS query rates from Cuba. In a Facebook post, they noted, "On August 14th was a first test that measured levels of traffic congestion and that audited the network in stress conditions, the second test was made on August 22 and its purpose was to try the portal my cubacel and the short codes for service management."

The company planned a third test, this one lasting three days from September 8-10, highlighting it in a promotional graphic posted on their Facebook page. They noted that this third test "was designed for three days with the purpose of checking traffic management in different structures of the network," intended to validate optimizations made as a result of the connection difficulties and network congestion that resulted from the August tests.
Similar to the prior tests, Cuba's DNS query rate spiked at 05:00 GMT (midnight local time) on September 8, remaining elevated through the end of the day (local time) on the 10th, when it settled back down into a much lower diurnal pattern. ETECSA's Facebook post noted that more than 1.5 million people had participated in these tests of nationwide mobile Internet access.

Exams

Similar to actions taken a number of times in the past, Internet connectivity in Iraq was shut down repeatedly between September 1-10 to prevent cheating on nationwide student exams. A published report noted that the Iraqi Ministry of Communications planned to suspend Internet service between 06:30 and 08:30 (local time). As seen in the figures below, multi-hour Internet shutdowns were implemented on nine of the 10 days, with September 7 the only exception. Partial drops seen in each metric indicate that the shutdowns were not complete – that is, Internet access remained available across some parts of the country.

Power Outage

According to a published report, late in the day on September 6, western and southern regions of Libya, including the capital city of Tripoli, experienced a total blackout. The power outage was reportedly related to bloody clashes in Tripoli, which prevented repair teams from reaching power stations and grids in the impacted area. The impact of the power outage is evident in the graph below, showing a drop in the Traceroute Completion Rate metric starting late in the day (GMT) on September 6 and lasting for approximately half a day. A minor perturbation in the BGP Routes metric is evident as well. Ongoing turmoil in the country also impacted Internet availability in Libya several days later, with another multi-hour drop in the Traceroute Completion Rate evident on September 9.

Typhoon Mangkhut

After forming on September 7 as a tropical depression in the Pacific Ocean, Typhoon Mangkhut quickly strengthened and moved west towards Micronesia.
On September 10, the typhoon moved across both the Northern Mariana Islands and Guam, causing damage with winds in excess of 100 miles per hour. As shown in the figure below, the storm impacted Internet connectivity in the Northern Mariana Islands, with the Traceroute Completion Rate metric declining around mid-day local time (the Islands are GMT+10) on September 10, and the DNS Query Rate also lower than normal for that time of day.

The following figure shows that Internet connectivity on Guam was impacted several hours later, with the Traceroute Completion Rate metric declining later in the day local time (Guam is also GMT+10) on September 10. It also appears that there was a slight impact to the number of routed networks at around the same time, with a concurrent drop in the DNS Query Rate metric. By the next morning, the storm had reportedly moved past the islands, although the calculated metrics took several days to return to "normal" levels.

The figures below illustrate the impact that Typhoon Mangkhut had on local network providers. AS7131 (IT&E Overseas) has prefixes that are routed on both Guam and the Northern Mariana Islands. The number of completed traceroutes to endpoints in this autonomous system began to drop mid-day local time, likely due to power outages or damage to local infrastructure as the storm arrived. Interestingly, a number of traceroutes started to pass through AS9304 (Hutchison) around the same time as well, but it isn't clear whether this is simply coincidental, or whether traffic through this provider was increased as part of a disaster recovery process. The number of completed traceroutes to endpoints in AS9246 (Teleguam Holdings) also began to decline later in the day local time on September 10, likewise likely due to local power outages or infrastructure damage.
Interestingly, while some endpoints across both networks became unreachable as a result of Typhoon Mangkhut, there did not appear to be a meaningful impact to measured latency, which remained within the ranges seen during the days ahead of the storm.

Submarine Cables

On September 4, Australian telecommunications infrastructure provider Vocus posted an "Incident Summary" regarding a suspected fault in the SeaMeWe-3 (SMW3) cable between Perth, Australia and Singapore. The figure below (from one of Oracle's commercial Internet Intelligence tools) illustrates the impact of the cable failure on the median latency of paths between Singapore and Perth – specifically, from a measurement point in cloud provider Digital Ocean's Singapore location to endpoints in Perth on selected Internet service providers. Among the measured providers, latencies increased 3-4x on September 2/3, stabilizing by the 4th.

The initial incident summary published by Vocus noted that similar faults seen in the past have taken upwards of 4 weeks to restore. However, on September 5, an article in ZDNet revealed that Vocus pressed the new Australia Singapore Cable (ASC) into service two weeks ahead of schedule, shifting customer traffic onto it from the damaged SMW3. The figures below, generated by internal Internet Intelligence tools, illustrate how the failure of the SMW3 cable caused measured latencies to increase, and how they returned to previous levels when the ASC cable was activated and traffic was shifted onto it.

On September 10, @RightsCon, a Twitter account associated with Internet advocacy group AccessNow, posted a Tweet looking for verification of Internet disruptions in several countries:

Calling on the #RightsCon community: #KeepItOn is looking to verify internet shutdowns/disruptions in Angola, the Maldives, & Saint Vincent and the Grenadines.
If you have information, or know someone who might, reach out to @btayeg PGP 0x8050D4F68EBB844E70BF94F6C7C45F98F350B5E3 — RightsCon (@rightscon) September 10, 2018

Doug Madory, Director of Internet Analysis on Oracle's Internet Intelligence team, replied, noting that "Internet connectivity issues in Angola was due to problems on the WACS submarine cable." The figure below shows the impact of the submarine cable issues that occurred several days earlier, with disruptions evident in both the Traceroute Completion Ratio and BGP Routes metrics on September 7.

The disruptions reviewed above were caused by known issues with the SMW3 and WACS submarine cables. However, September also saw a number of additional disruptions that may have been related to issues with submarine cable connectivity, though such correlations were not definitively confirmed.

On September 5-6, a significant Internet disruption was observed in Comoros, impacting all three metrics as seen in the figure below. A complete outage was observed at Comores Telecom, with the number of completed traceroutes to endpoints in that network dropping to zero during the disruption. As the figure below shows, prior to the outage, traceroutes reached Comores Telecom primarily through Level 3 and BICS, but went through West Indian Ocean Cable Company for approximately three days after the outage, before passing through Level 3 and BICS once again. International connectivity to Comoros is carried over both the Eastern Africa Submarine System (EASSy) and FLY-LION3, although the latter only connects Comoros to Mayotte. The observed shift in upstream providers could be indicative of a problem on one submarine cable, forcing traffic onto the other until issues with the primary cable were resolved.
Later in the month, the Caribbean islands Saint Martin and Saint Barthelemy both saw disruptions that lasted for approximately 24 hours across September 28-29, evident in the declines seen in the Traceroute Completion Rate and BGP Routes metrics shown in the figures below. (Because the disruption occurred on a Friday night/Saturday, DNS query rates were lower anyway, so evidence of the disruption in that metric is harder to see.) Both islands are connected to Southern Caribbean Fiber, with a spur running from Saint Martin to Saint Barthelemy.

On September 25, 26, and 30, disruptions to Internet connectivity in American Samoa were evident in the Internet Intelligence Map, as shown in the figure below. Brief drops across all three metrics were observed on the 25th and 26th, while multiple drops were observed on the 30th. Internal tools indicated that the underlying issues impacted BlueSky Communications/SamoaTel, and the issues can be seen in the traceroutes going through Hurricane Electric at times that align with the issues seen in the American Samoa graph. The territory has been connected to the American Samoa-Hawaii (ASH) submarine cable for nearly a decade, but also connected to the Hawaiki cable earlier this year. BlueSky appears to connect with Hurricane Electric in San Jose, California, but it isn't clear which cable carries traffic from that exchange point to the island.

Conclusion

Associating Internet disruptions with an underlying cause can be easy when related events are publicly known – severe weather, power outages, civil unrest, and even school exams. In many cases, these disruptions last for hours or days, making it more likely that they will impact Internet connectivity for users in the affected country. However, for each well-understood disruption, there are dozens more that we observe each month that are brief, partial (not dropping the calculated metrics to zero), and unexplained.
Due to their nature, these disruptions may not have a significant impact on user connectivity, which makes finding public commentary (such as news articles or Twitter posts) on them all the more challenging. Using internal Internet infrastructure analysis tools and public tools like Telegeography’s Submarine Cable Map, we can surmise what may have caused the disruption, but the actual root cause remains unknown.
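Surfacing these brief, partial disruptions is essentially a baselining problem. As a simplified stand-in for internal tooling, the sketch below flags hours in which a traceroute completion ratio falls well below its trailing median; the 24-hour window, 50% threshold, and sample series are all illustrative assumptions.

```python
# Sketch: flag brief, partial disruptions in an hourly completion-ratio
# series by comparing each sample against a trailing median baseline.
# The 24-hour window, 50% threshold, and sample data are illustrative.
from statistics import median

def find_disruptions(samples, window=24, drop=0.5):
    """Yield indices whose value falls below drop * the trailing median."""
    for i in range(window, len(samples)):
        baseline = median(samples[i - window:i])
        if baseline > 0 and samples[i] < drop * baseline:
            yield i

ratios = [0.9] * 30 + [0.3, 0.25, 0.9, 0.9]   # a two-hour partial outage
print(list(find_disruptions(ratios)))          # [30, 31]
```

A trailing median tolerates the odd noisy sample, though a production detector would also need to model the diurnal cycle that is so prominent in the DNS query rate graphs.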



First Subsea Cable Across South Atlantic Activated

Yesterday marked the first time in recent Internet history that a new submarine cable carried live traffic across the South Atlantic, directly connecting South America to Sub-Saharan Africa. The South Atlantic Cable System (SACS), built by Angola Cables, achieved this feat around midday on 18 September 2018. Our Internet monitoring tools observed a change in latency between our measurement servers in various Brazilian cities and Luanda, Angola, decreasing from over 300ms to close to 100ms. Below are measurements to Angolan telecoms TVCABO (AS36907) and Movicel (AS37081) as the SACS cable came online yesterday.

A race to be first

In the past decade there have been multiple submarine cable proposals to fill this gap in international connectivity, such as the South Atlantic Express (SAEx) and South Atlantic Inter Link (SAIL) cables. In recent weeks, the backers of the SAIL cable, financed and built by China, announced that construction was complete and that it was the first cable connecting Brazil to Africa (Cameroon). However, since we haven't seen any changes in international connectivity for Cameroon, we don't believe this cable is carrying any traffic yet.

What's the significance?

In addition to directly connecting Brazil to Portuguese-speaking Angola, the cable offers South America its first new submarine cable link to the outside world in 18 years that doesn't go through the United States. The upcoming EllaLink cable, which will connect Brazil directly to Europe (Portugal), has an RFS date in 2020.

The SACS cable will enable South America to more directly reach the growing Internet economies of Africa, as well as offer an alternative path to Europe after cross-connecting to other submarine cables hugging Africa's western coast. Eventually, by traversing the African continent itself, the SACS cable will enable more direct connectivity between South America and Asia, bypassing Europe and the United States altogether.
In addition, the SACS cable connects to Google's MONET cable at Fortaleza, Brazil, giving the African Internet a more direct path to the United States without first passing through Europe.

Conclusion

It is hard to overstate the potential for this new cable to profoundly alter how traffic is routed (or not) between the northern and southern hemispheres of the Internet. The South Atlantic was the last major unserviced transoceanic Internet route, and the activation of SACS is a tremendous milestone for the growth and resilience of the global Internet.

I recently gave a talk with Angola Cables at LACNIC 2018 about the activation of the SACS cable. Watch the video here:



Last Month In Internet Intelligence: August 2018

During August 2018, the Oracle Internet Intelligence Map surfaced Internet disruptions around the world due to familiar causes, including nationwide exams, elections, maintenance, and power outages. A more targeted disruption due to a DDoS attack was also evident, as were a number of issues that may have been related to submarine cable connectivity. In addition, in a bit of good news, the Internet Intelligence Map also provided evidence of two nationwide trials of mobile Internet services in Cuba.

Cuba

On August 15, the Oracle Internet Intelligence Twitter account highlighted that a surge in DNS queries observed the prior day was related to a nationwide test of mobile Internet service, marking the first time in Cuba's history that Internet services were available nationwide. The figure below shows two marked peaks in DNS query rates from resolvers located in Cuba during the second half of the day (GMT) on the 14th. Paul Calvano, a Web performance architect at Akamai, also observed a roughly 25% increase in Akamai's HTTP traffic to Cuba during the trial period.

This testing was reported by ETECSA (the Cuban state telecommunications company) in a Facebook post in which they noted:

The Telecommunications company of Cuba S.A. Etecsa, reports that as part of the preparatory actions for the start of the internet service via mobile, it has carried out some tests to verify the functioning of all elements involved in it. A timely test with customers of the prepaid cell service has been developed today, which have been able to make internet connections at no cost. This and other tests make it possible to assess available capacities and different experiences of use achieved in correspondence with the characteristics of the access network present at the time and place of connection in order to make some adjustments. Further tests shall be carried out in successive days.
The start date of the service, as well as its rates and other details of interest, shall be informed through the official media and official channels of the company.

A little more than a week later, another surge in DNS queries from Cuba was observed, resulting from ETECSA performing a second test of national mobile Internet service. As seen in the figure below, it appears that service was enabled around 12:00 GMT on August 22 and shut off around 05:00 GMT the following day. A notice posted by ETECSA on their Facebook page on the 22nd informed customers of prepaid cellular services that they had the opportunity to acquire a one-time 70 MB data "package" free of charge that could be used at any time during the test period. A subsequent Facebook post noted that customers who exceeded their 70 MB allocation would no longer be able to browse the Internet.

DDoS Attack

On August 27, the Bank of Spain Web site was targeted with a DDoS attack that published reports indicate temporarily disrupted access to the site. A DNS lookup on the site's hostname shows that it resolves to a single IP address, part of a block of addresses routed by AS20905, registered to Banco de España. The figure below shows a significant increase in latency in traceroutes to endpoints within that autonomous system during the reported time of the attack. Such increased latency is a common side effect of a DDoS attack that floods a target with traffic.

Presumably in response to being targeted by the attack, the Bank of Spain activated a DDoS mitigation service the following day; as the figure below shows, on August 28 the majority of traceroutes to endpoints in AS20905 started going through Akamai's Prolexic service (in green on the graph).
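The latency signature described above lends itself to simple alerting. The sketch below compares the median traceroute round-trip time against a pre-incident baseline; the RTT samples and the 2x trigger are illustrative assumptions, not measurements from the actual event.

```python
# Sketch: alert when the median RTT to a target network jumps well above a
# pre-incident baseline, the latency signature of a volumetric DDoS.
# RTT samples (in ms) and the 2x factor are illustrative assumptions.
from statistics import median

def latency_alert(baseline_rtts, current_rtts, factor=2.0):
    """True if the current median RTT exceeds factor * the baseline median."""
    return median(current_rtts) > factor * median(baseline_rtts)

normal = [38.0, 40.0, 41.0, 39.5]            # typical RTTs to the target AS
congested = [180.0, 220.0, 205.0]            # RTTs during the traffic flood
print(latency_alert(normal, congested))      # True
print(latency_alert(normal, [40.0, 42.0]))   # False
```

Using the median rather than the mean keeps a single slow probe from triggering a false alarm; sustained congestion moves the median, a stray timeout does not.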
Exam-Related Following a number of similar disruptions at the end of July, the Internet in Syria was down from 01:00-05:30 GMT (4:00 am-8:30 am local) on August 1 & 2 as part of an effort to prevent cheating on high school exams, as seen in the figure below. Four similar exam-related disruptions occurred the following week as well, as the figure below shows traceroute completion ratios and the number of routed prefixes dropping to zero on August 5, 6, 7, and 8. (As noted in last month’s post, we believe that the concurrent spikes in DNS queries are due to the Internet shutdowns being implemented asymmetrically – that is, traffic from within Syria can reach the global Internet, but traffic from outside the country can’t get in. These spikes in DNS traffic are likely related to local DNS resolvers retrying when they don’t receive the response from Oracle Dyn authoritative nameservers.) Elections Ahead of presidential run-off elections in the country on August 11, authorities in Mali reportedly disrupted Internet access in the country. While not as obvious as disruptions observed in other countries, the figure below shows a decrease in the number of routed networks within the country for several hours in the middle of the day (GMT) on August 10. A published report notes that the Internet shutdown was confirmed by on-the-ground reports from Internet users in Bamako and Gao. In addition, a study by advocacy group NetBlocks found that access to social media and messaging platforms was also disrupted during this period. Another Internet disruption occurred in Mali after the election took place, but before the results were publicly announced. On August 16, advocacy group Internet Without Borders Tweeted: Total blackout du réseau de télécommunications civiles au #Mali avant la proclamation des résultats de l'élection présidentielle. Acte final d'un cas atypique de censure en cascade d'Internet. 
#PresidentielleMali2018 #Keepiton — Internet_SF (@Internet_SF) August 16, 2018 (Translation: “Total blackout of the civilian telecommunications network in #Mali ahead of the announcement of the presidential election results. The final act in an atypical case of cascading Internet censorship.”) As the figure below shows, both the Traceroute Completion Ratio and BGP Routes metrics experienced noticeable multi-hour decreases starting later on August 15 (GMT). While not as obvious, the DNS Query Rate metric also appears to be at a slightly lower level than at similar times during the previous days. Published reports (Bourse Direct, La Nouvelle Tribune) indicated that this second disruption may have primarily targeted mobile networks in the capital city of Bamako, with local reports of connectivity working over fixed-line/Wi-Fi, but problems connecting over 3G. Maintenance On August 20, Sure Telecom of Diego Garcia in the British Indian Ocean Territory posted a notice on their homepage alerting users of a multi-hour “maintenance outage” that would impact availability of Internet services offered by the company. The impact of this maintenance outage can be seen in the figure below, with the Traceroute Completion Ratio and DNS Query Rate metrics dropping to zero during the specified maintenance window. (British Indian Ocean Territory is GMT+6.) The BGP Routes metric was also lower during the maintenance period, but didn’t drop to zero. Sure Telecom appears to be the sole commercial Internet Service Provider in the British Indian Ocean Territory, so it is not surprising that this maintenance had such a significant impact on Internet availability in the region. Power Outages On August 17, a massive power outage affected the Sindh and Balochistan provinces in Pakistan for several hours. The blackout also had an impact on Internet availability within the country. As the figure below shows, the Traceroute Completion Ratio metric declined sharply as the power outage occurred, remained low for a few hours, and then gradually recovered. This indicates that traceroute endpoints in impacted locations were likely unreachable due to the power outage.
However, because it occurred later on Friday evening local time, Internet usage was likely ramping down anyway, so there was no clear impact on the DNS Query Rate metric; the BGP Routes metric was unaffected because the routers announcing routes to IP address prefixes in the affected regions are either located in data centers with backup power, or are located in Pakistani cities not impacted by the power outage. Closing out the month, on August 31, an explosion at an electrical substation in Maracaibo, Venezuela, plunged much of the city into darkness, and had a visible impact on the country’s Internet connectivity as well. The figure below shows a decline in the Traceroute Completion Ratio metric at approximately 05:30 GMT, coincident with the explosion, which published reports state occurred at 1:36 AM local time. A minor increase in the unstable network count can be seen at approximately the same time as well. Submarine Cables Internet disruptions due to issues with submarine cables are not uncommon, but they are often hard to confirm because cable operators are reluctant to publicize faults in the cables they are responsible for. However, sometimes the issues are significant enough to be covered in the news, and other times impacted service providers will cite such issues as the root cause of problems that their customers are experiencing. The latter scenario occurred on August 28/29 in the Maldives, as illustrated by Tweets from two local providers: Dear Customers, we are experiencing internet traffic instability on international routes due to a technical issue on submarine cable. We are working with our cable system provider to resolve the issue, apologies for the inconvenience. Thankyou — Raajje' Online (@ROLMaldives) August 29, 2018 We're experiencing interruption in our international routes due to an unforeseen circumstance from our supplier's end.
While we are working to fix this, our customers may face difficulty in using Internet on Mobile, SuperNet & Faseyha, as well as making IDD calls during the time. — Ooredoo Maldives (@OoredooMaldives) August 29, 2018 As the figure below illustrates, at approximately 01:00 GMT on August 29, both the Traceroute Completion Ratio and BGP Routes metrics for the Maldives declined from their ‘steady-state’ values. As evidenced by the Tweets from local Internet Service Providers shown above, these declines are likely due to the referenced submarine cable issue. Two cables carry international traffic for the Maldives: the WARF Submarine Cable, which runs to Sri Lanka and India; and the Dhiraagu-SLT Submarine Cable Network, which runs to Sri Lanka. Based on analysis of routing path data collected by Oracle’s Internet Intelligence team, we can posit that issues with the WARF cable likely caused the observed/reported Internet disruption, as upstream providers for both Ooredoo Maldives and Raajjé Online include companies listed as owners of the cable. Several additional unattributed Internet disruptions were observed during August in areas that have a significant reliance on submarine cables for international Internet connectivity. On August 14, the Internet Intelligence Map showed an Internet disruption in Vanuatu, a nation made up of approximately 80 islands in the South Pacific. As seen in the figure below, the values of all three metrics declined for a multi-hour period. Vanuatu is connected to Fiji via Interchange Cable Network 1 (ICN1). The figure below shows that nearly all traceroutes to Telecom Vanuatu reach the network through Vodafone Fiji Limited, and that the number of completed traceroutes to Telecom Vanuatu dropped to near zero during the same period shown in the figure above. 
Nearly all traceroutes to Vodafone Fiji Limited reach the network through Telstra Global, an Australian provider, and the figure below also shows that the number of completed traceroutes to Vodafone Fiji Limited dropped to near zero concurrently. Fiji connects to Australia via the Southern Cross Cable Network (SCCN). As such, the observed disruption may be related to issues with one of these two cables, possibly cable maintenance, as the disruption occurred in the middle of the night local time. Interestingly, this disruption occurred while the 2018 Asia Pacific region Internet Governance Forum (APrIGF 2018) was taking place in Port Vila, Vanuatu. On August 4, concurrent Internet disruptions were observed in Angola, Cameroon, and Gabon, as seen in the figures below. Although brief, all three metrics were affected across the three impacted countries. While no specific publicly available information about the cause of the disruption could be found, research shows that all three countries are connected to both the SAT-3/WASC and Africa Coast to Europe (ACE) submarine cable systems. Damage to, or maintenance on, a segment of one of these cables could potentially have caused the observed disruption. This blog has previously covered the impact of damage to the ACE cable on Internet connectivity in African countries. In the Caribbean, a brief Internet disruption was observed on August 5 in Grenada and St. Vincent & the Grenadines. Both countries connect to the Eastern Caribbean Fiber System (ECFS) and Southern Caribbean Fiber cables. On August 21, a brief Internet disruption was observed in Saint Barthelemy, which is also connected to the Southern Caribbean Fiber cable (on a spur from Anguilla). These disruptions are evident in the figures below. As island nations, all three countries are heavily reliant on submarine cables for international Internet connectivity, meaning that the observed disruptions could have been caused by damage to, or maintenance of, these cables. 
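The kind of upstream analysis used in this section — observing, for example, that nearly all traceroutes to Telecom Vanuatu arrive via Vodafone Fiji Limited — can be approximated by tallying the penultimate ASN across AS-level traceroute paths. A simplified sketch with toy data; the ASNs below are from the private-use range, not the networks discussed above:

```python
from collections import Counter

def dominant_upstream(as_paths):
    """Given AS-level traceroute paths ending at the target ASN,
    tally the penultimate ASN (the apparent upstream) for each path
    and return the most common one with its share of paths."""
    upstreams = Counter(path[-2] for path in as_paths if len(path) >= 2)
    asn, count = upstreams.most_common(1)[0]
    return asn, count / sum(upstreams.values())

# Toy paths toward a hypothetical island network (AS64500); most
# arrive via AS64496, mirroring the Vanuatu-via-Fiji pattern.
paths = [
    [64511, 64496, 64500],
    [64501, 64496, 64500],
    [64502, 64497, 64500],
    [64503, 64496, 64500],
]
print(dominant_upstream(paths))  # (64496, 0.75)
```

In practice the paths would come from mapping each traceroute hop's IP address to its originating ASN before counting.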
Conclusion Although we have historically focused on the value of the Internet Intelligence Map in identifying disruptions to Internet service, it was encouraging to also see it offer evidence in August of the nationwide mobile Internet service tests conducted in Cuba. Limited Internet access has been available on the island through paid public hotspots, but if ETECSA makes mobile access widely available (and affordable), then disruptions to Internet connectivity in Cuba (ultimately visible in the Internet Intelligence Map) will have a much more significant impact.  



Does Establishing More IXPs Keep Data Local? Brazil and Mexico Might Offer Answers

Much like air travel, the internet has certain hubs that play important relay functions in the delivery of information. Just as Heathrow Airport serves as a hub for passengers traveling to or from Europe, AMS-IX (Amsterdam Internet Exchange) is a key hub for information getting in or out of Europe. Instead of airline companies gathering in one place to drop off or pick up passengers, it's internet service providers coming together to swap data – lots and lots of data. The world's largest internet exchange points (IXPs) mostly reside where you would expect to find them: advanced economies with sophisticated internet infrastructure. As internet access reached new populations around the world, however, growth in IXPs lagged, and traffic tended to make some roundabout, and seemingly irrational, trips to the more established IXPs. For example, users connected to a server just a few miles away may be surprised to discover that data will cross an entire ocean, turn 180 degrees, and cross that ocean again to arrive at its destination. This phenomenon, known as “boomeranging” or “hair-pinning” (or the “trombone effect,” after the path's shape), is especially common in emerging markets, where local ISPs are less interconnected and hand off data to bigger carriers located in more established markets such as the United States or Europe, forcing all traffic to cover more ground and incur transit rates. To address this, and after it became clear that building submarine cables wasn't enough, there has been a sustained push to build out internet infrastructure by developing more IXPs, on the premise that offering a place where local network operators can interconnect would reduce latencies (delays) and costs. International organizations like the Packet Clearing House set out to provide equipment, training, and operational support to governments and network providers to establish hundreds of IXPs around the world.
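Hair-pinning of the sort described above can be flagged mechanically once each traceroute hop has been geolocated: a path whose endpoints are domestic but whose interior leaves the country has tromboned. A minimal sketch, assuming hop-level country codes are already available from a separate geolocation step:

```python
def is_hairpin(hop_countries, home="MX"):
    """A path hairpins when both endpoints are in the home country
    but at least one intermediate hop geolocates abroad."""
    return (
        hop_countries[0] == home
        and hop_countries[-1] == home
        and any(c != home for c in hop_countries[1:-1])
    )

# Toy hop geolocations (country codes are illustrative):
# Guadalajara -> Dallas -> Miami -> Mexico City
print(is_hairpin(["MX", "US", "US", "MX"]))  # True
# An all-domestic path does not hairpin.
print(is_hairpin(["MX", "MX", "MX"]))        # False
```

Real analyses must contend with unresponsive hops and geolocation error, but the core test is this simple.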
The Internet Society offers an excellent policy brief describing how cost savings from IXPs can be upwards of 20 percent, depending on what portion of a country's overall internet traffic is local. The impact in some countries has been significant. A 2012 study evaluating the impact of IXPs in sub-Saharan Africa found that the introduction of the Internet Exchange Point of Nigeria (IXPN) allowed local operators to save over $1 million USD per year on international connectivity and encouraged Google to place a cache in Lagos (a service that significantly speeds up access to popular web content). The Kenya Internet Exchange Point (KIXP) allowed ISPs to save almost $1.5 million USD per year and increased mobile data revenues by an estimated $6 million USD. Both IXPs now act as regional hubs. The call to expand IXP development grew louder as privacy concerns and the desire to keep data more sovereign increased, backed by research from the Organization for Economic Cooperation and Development (OECD) asserting that countries with fewer IXPs typically experience more cross-border data flows. As a result, IXP growth accelerated even more, particularly in emerging economies. In 2011, OECD researchers counted 357 IXPs around the globe. By 2015, the number had grown to 452. Latin America went from 20 to 34 IXPs. In the past year alone, the percentage change in IXP growth for all regions was in the double digits. Source: Packet Clearing House, Internet exchange point directory reports. Retrieved on 17 August 2018. Recent research, however, illustrates the challenges that hold IXPs back. In its 2012 review of Telecommunication Policy and Regulation in Mexico, the OECD noted that Mexico was the only member state that did not have an IXP and recommended establishing one for many of the reasons stated above: it improves efficiency, lowers cost, and keeps data local. The OECD said it would also incentivize the creation of local content and data center infrastructure.
In 2017, the OECD followed up with another report, this time lamenting the fact that despite the establishment of an IXP in Mexico City, traffic exchange was low. They noted that the incumbent telecommunications operator in the country wasn't participating in the IXP, leading them to conclude that IXPs are inhibited in markets where a single player has an overwhelming share and does not participate in the exchange. Subsequent studies have concluded the same. Our own data confirms these insights. A week’s worth of traceroutes showed numerous examples of packets originating in Guadalajara destined for Mexico City hair-pinning up to the United States. One example actually went straight to Mexico City and still hair-pinned outside of the country’s borders before coming right back. Even in Brazil, which implemented a unique government-commissioned, multi-stakeholder steering committee that created IXPs throughout the country, local traffic still sometimes hairpins to the US, Italy, or Spain. Again, research shows that the dynamic political economy of the telecommunications industry may be at play. However, instead of an absent incumbent operator, IXPs are sometimes bypassed due to distrust among the country's many operators, who are concerned that interconnection would benefit competitors. Moreover, trans-regional routing remains very common. In at least one scenario, traffic going from São Paulo, Brazil, to Uruguay (about 1,700 kilometers apart) will transit through two US cities, Spain, and Argentina before arriving in Montevideo. Using our own data, we were able to observe packets routing up to Miami on multiple traceroutes from São Paulo to Rio de Janeiro. While IXPs have certainly helped several regions reduce the costs and latencies of internet traffic, they should not be thought of as a panacea to transnational routing.
Their ability to do that can be inhibited by a host of other factors, which the Internet Society summarizes in four categories: a lack of trust between service providers, limited technical expertise, the cost of network infrastructure, and the cost of hosting an IXP in a neutral location. Until these are resolved, a lot of data makes intercontinental round trips. And for now, the United States appears to still be Latin America’s biggest hub.
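A rough sense of the latency cost of these detours comes from propagation delay alone: light travels through fiber at roughly 200,000 km per second, so every kilometer of path adds about 0.01 ms of round-trip time. A back-of-the-envelope sketch using the São Paulo–Montevideo example; the detour distance is a rough estimate of the multi-continent route described above, for illustration only:

```python
# Speed of light in fiber is roughly 200,000 km/s, i.e. about 0.01 ms
# of round-trip time per kilometre of path.
def min_rtt_ms(path_km: float) -> float:
    """Lower bound on round-trip time for a fiber path of given length,
    ignoring queuing and processing delays."""
    return 2 * path_km / 200_000 * 1000

direct = min_rtt_ms(1_700)    # roughly the direct distance
detour = min_rtt_ms(13_000)   # rough estimate of the US/Spain/Argentina detour
print(round(direct), round(detour))  # 17 130
```

Even as a lower bound, the detoured path costs an order of magnitude more latency than a direct one.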



Civil War in Yemen Begins to Divide Country’s Internet

The latest development in Yemen's long-running civil war is playing out in the global routing table. The country's Internet is now being partitioned along the conflict's battle lines with the recent activation of a new telecom in government-controlled Aden. Control of YemenNet The Iranian-backed Houthi rebels currently hold the nation's capital Sana'a in the north, while Saudi-backed forces loyal to the president hold the port city of Aden in the south (illustrated in the map below from Al Jazeera). One advantage the Houthis enjoy while holding Sana'a is the ability to control Yemen's national operator YemenNet. Last month, the Houthis cut fiber optic lines, severing 80% of Internet service in Yemen. Launch of AdenNet In response to the loss of control of YemenNet, the government of President Hadi began plans to launch a new Yemeni telecom, AdenNet, that would provide service to Aden without relying on (or sending revenue to) the Houthi-controlled incumbent operator. Backed with funding from the UAE and built using Huawei gear, AdenNet (AS204317) went live in the past week, exclusively using transit from Saudi Telecom (AS39386), as depicted below in a view from Dyn Internet Intelligence. The new Aden-based telecom would also allow the Yemeni government to restrict access to the submarine cables that land in Aden without impacting its own Internet service. More recently, the government of President Hadi has been lobbying RIPE NCC to regain control of Yemen's Internet numbers and ICANN to regain control of the country's ccTLD, which would restore its control over domains ending with .ye. These vital components of operating Yemen's Internet have traditionally been controlled by YemenNet, now in the hands of the rebels. Divided Internet Internet service in Yemen faces myriad challenges in this troubled nation, from hackers to sabotage.
As the conflict rages on in Yemen, the country's Internet is now being partitioned between YemenNet (AS12486, AS30873), controlled by the Houthi rebels, and now AdenNet (AS204317), controlled by the Saudi-backed Yemeni government. The Internet doesn't exist in a vacuum. From Cuba to Crimea, a country's Internet is regularly shaped by events and conditions on the ground. And in the case of Yemen, the Internet is being divided along the lines of an intractable civil war. On the upside, Yemen now has two backbone providers, which could improve resiliency and increase competition within the market. Thanks to Fahmi Albaheth, President of the Internet Society of Yemen, for assistance in this analysis.
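The AS-level split described in this post can be seen directly in routing data by grouping a country's announced prefixes by origin ASN. A minimal sketch; the prefixes below are documentation ranges standing in for real announcements, while the ASNs are those named above:

```python
from collections import defaultdict

def partition_by_origin(routes):
    """Group announced prefixes by origin ASN, giving a crude view of
    how a country's address space is split between operators."""
    by_asn = defaultdict(list)
    for prefix, origin in routes:
        by_asn[origin].append(prefix)
    return dict(by_asn)

# Toy routing-table rows; prefixes are documentation ranges, not
# Yemen's actual announcements.
routes = [
    ("192.0.2.0/24", 12486),     # YemenNet
    ("198.51.100.0/24", 30873),  # YemenNet
    ("203.0.113.0/24", 204317),  # AdenNet
]
print(partition_by_origin(routes))
```

With a real BGP table dump, the same grouping over all prefixes geolocated to Yemen would show the YemenNet/AdenNet partition directly.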



Last Month In Internet Intelligence: July 2018

In June, we launched the Internet Intelligence microsite, including the new Internet Intelligence Map. In July, we published the inaugural “Last Month in Internet Intelligence” overview, covering Internet disruptions observed during the prior month. The first summary included insights into exam-related outages and problems caused by fiber cuts. In this month’s summary, covering July, we saw power outages and fiber cuts, as well as exam-related and government-directed shutdowns, disrupt Internet connectivity. In addition, we observed Internet disruptions in several countries where we were unable to ascertain a definitive cause. Power Outages It is no surprise that power outages can wreak havoc on Internet connectivity – not every data center or router is connected to backup power, and last mile access often becomes impossible as well.  At approximately 20:00 GMT on July 2, the Internet Intelligence Map Country Statistics view showed a decline in the traceroute completion ratio and DNS query rate for Azerbaijan, related to a widespread blackout. These metrics gradually recovered over the next day. Published reports (Reuters, Washington Post) noted that the blackout was due to an explosion at a hydropower station, following an overload of the electrical system due to increased use of air conditioners, driven by a heat wave that saw temperatures exceed 100° F. Power was restored after several hours, but reportedly failed again, causing a second blackout, which again impacted the traceroute and DNS metrics as seen around 15:00 GMT on July 3. Just a day later, Tropical Storm Maria caused an islandwide power outage in Guam, which disrupted Internet service on the island for several hours. However, Guam Power Authority (GPA) responded quickly once the storm had passed, with the Guam Daily Post noting that the GPA expected “to have substantial load for power restoration around 11 am”. 
In looking at the graphs shown below, they appear to have hit that target as the traceroute completion ratio and BGP routes count returned to prior levels around that time (Guam is GMT+10). At the end of the month, Venezuela experienced a large power failure that left most of the capital city of Caracas without electricity, which caused a disruption in Internet connectivity as well. As shown in the figure below, both the traceroute and DNS metrics saw minor declines at around 13:00 GMT. Approximately two hours later, a Tweet from the country’s Energy Minister stated that 90% of the service had been restored in Caracas, and a subsequent Tweet several hours later explained that the initial fault in Caracas originated from voltage transformer control cables being cut. It appears that the measured metrics for Venezuela returned to regular levels several hours after power was restored. Fiber/Cable Cuts On July 4, Twitter user @ADMIRAL12 posted the following Tweet: أكثر من 80% من سرعة الإنترنت في #اليمن تم فقدانها نتيجة سبب غير معروف!#يمن_نت ماذا تفعلون؟!! More than 80% of Internet connection in #Yemen lost as a result of unknown reason!! Speed connection is very slowly .. Maximum speed I have 10 KB/s !! What are you doing #YemenNet ? — Rashad H. AL-Khmisy (@ADMIRAL12) July 4, 2018 Oracle Director of Internet Analysis Doug Madory responded, noting “DNS query rate is down. Otherwise BGP routes and completing traceroutes are unaffected.” Minutes later, Madory also commented, “Both YemenNet ASNs lost transit from @GlobalCloudX and @etisalat (AS15412 and AS8966) at that time.” YemenNet’s issues can be seen in the Traffic Shifts graphs below. A published report indicated that Houthi rebels disrupted Internet service to nearly 80% of Yemen by damaging a fiber optic cable in the port city of Hodeidah. 
The publication quoted a source from the Public Telecommunication Corporation, who explained "The cable that connects the country to the Internet was cut in three places in the districts of Al Kanawes and Al Marawya in Hodeidah as the Houthi militia continues to dig trenches in the area." Just days later, Internet connectivity in Haiti was disrupted for more than a day, including a complete outage for local telecommunications provider Digicel Haïti. The Internet disruptions occurred in the midst of widespread protests over government plans to raise gas prices. Several Digicel fiber optic lines were cut, and the U.S. Embassy in the country stated “Telecommunications services, including Internet and phone lines, have been affected throughout Haiti.” The disruption to Haiti’s Internet connectivity, as well as Digicel’s outage, can be seen in the graphs below. Following this damage to Digicel’s infrastructure, the company's Chairman took to Twitter to provide a status update on repairs: Update on Digicel internet: fibre connection from PaP to the north of Haiti repaired at 5am today. Now configuring this connection to take intl. voice traffic. Other 2 fibre cuts towards St Marc fibre landing station being repaired but facing unrest during repairs. #haiti — Maarten Boute (@mboute) July 8, 2018 As would be expected, Digicel’s outage impacted connectivity for downstream customers. As seen in the graph below, traceroutes to targets in AS263685 (Sogebank, one of Haiti's three largest commercial banks) passed through Digicel ahead of the fiber cuts, as seen in the yellow area on the left side of the graph. Concurrent with the fiber cut, traceroutes fail to reach Sogebank for several hours, before they shift to using Télécommunications de Haití as an upstream provider, as seen in the green area on the graph. They maintained this connectivity arrangement for approximately three days before shifting back to Digicel. 
On July 9, incumbent provider Telecommunication Services of Trinidad and Tobago (TSTT) was down for over three hours, causing a partial disruption to Internet connectivity in Trinidad and Tobago, as seen in the graphs below. A published report quoted a TSTT executive as stating that a major break in a fiber optic cable in the Chaguaramas area had caused a temporary disruption in all mobile data and Internet services. Similar to the Sogebank discussion above, the Traffic Shifts graph below shows the impact of this cable cut on AS26317 (Lisa Communications), which uses TSTT as an upstream provider. As the graph shows, the vast majority of traceroutes to Lisa Communications passed through TSTT in the days prior to the cut, as seen in the yellow area on the left side of the graph. Concurrent with the cut, the number of completed traceroutes briefly declined to approximately half of its average rate, although provider Columbus Communications Trinidad Limited quickly picked up the slack, as seen in the blue area. After approximately half a day, the majority of the Lisa Communications-bound traceroutes began passing through TSTT once again, as seen in the return of the yellow area on the right side of the graph. A fiber cut caused a multi-hour Internet disruption in Kenya on July 22, starting at approximately 06:30 GMT. A published report indicated that service was restored by 11:00 GMT, which aligns with the traceroute completion ratio and DNS query rates shown in the figure below. Safaricom, Kenya’s largest telecommunications provider, issued a statement to customers that noted “We wish to apologize to our customers and partners that are currently experiencing voice and data outage, caused by multiple fiber link cuts affecting critical transmission equipment”. Relatedly, the Traffic Shifts graph below shows the impact of the cut on One Communications. A subsidiary of Safaricom since 2008, it relies on its parent for Internet connectivity.
The cut caused a complete loss of completed traceroutes to targets in One Communications for several hours, until service was restored. While not explicitly a cable cut, Internet connectivity in Bangladesh was significantly impacted at the end of July as the SeaMeWe-4 (SMW4) submarine cable was taken down from July 25-30 for maintenance, resulting in a loss of almost half of the country’s international Internet capacity. Repairs to the SMW4 cable also impacted the Internet in Bangladesh in May 2018, October 2017, and August 2011, as did cuts to the cable in June 2012. Taking down SMW4 for repairs resulted in a significant shift in how traffic reaches Bangladesh, as shown in the Traffic Shifts graph below for AS17494 (Bangladesh Telecommunications Company Limited). The biggest impact appeared to occur during the first few days of the repair period, stabilizing by July 27. Exam-Related On the heels of exam-related Internet shutdowns on June 21 and 27 (covered in last month’s post), similar disruptions were observed in Iraq on July 1, 4, 7, and 11 as seen in the figures below. A table published by media advocacy and development organization SMEX listed high school diploma exams as taking place between June 21 and July 12, which aligns with the shutdowns discussed here. In addition, the issues observed in June and July also fit the profile of similar past actions – a significant, but not complete, outage lasting two to three hours. As seen in the figure below, similar Internet disruptions in Iraq were also observed in the Internet Intelligence Map on July 17 and 19. While they appear to be similar in profile to the exam-related outages discussed above, there was no available information that could be found regarding exams taking place on these two days. Heading into the end of July, Syria closed out the month with three multi-hour outages where the Internet was shut down nationwide to prevent cheating on high school exams. 
As seen in the figure below, the number of completed traceroutes into Syrian endpoints dropped to near zero, and the number of routed networks in Syria also dropped to near zero. However, as we have seen with similar prior shutdowns in Syria, the number of DNS requests from resolvers within the country jumps sharply during the shutdown. We believe that this indicates that the shutdown was implemented asymmetrically – that is, traffic from within Syria can reach the global Internet, but traffic from outside the country can’t get in. These spikes in DNS traffic are likely related to local DNS resolvers retrying when they don’t receive the response from Oracle Dyn authoritative nameservers – normally, the client traffic they are making requests on behalf of would be served from the resolver's cache. Government-Directed Sandwiched between the exam-related outages referenced above, Iraq experienced a nationwide Internet blackout that lasted nearly two days, stemming from a shutdown ordered in response to a week of widespread protests. The disruption, shown in the figure below, lasted from July 14-16, and had a significant impact on all three measured metrics. Unfortunately, as noted in a blog post on the disruption, “Government-directed Internet outages have become a part of regular life in Iraq.” The first such outage documented by the Internet Intelligence team occurred in 2013 and revolved around a pricing dispute between the Iraqi Ministry of Communications and various telecommunications companies operating there. Over the subsequent five years, we have seen several more such Internet disruptions. The Internet Intelligence blog post referenced above highlighted that not all of Iraq was taken offline during the weekend disruption, with about 400 BGP routes (out of a total of 1,300 for Iraq) staying online. Some telecommunications providers with independent Internet connections through the north of Iraq stayed online, as did those with independent satellite links. 
ITC operates the Iraqi fiber backbone, and the impact of the government-directed disruption is clearly evident in the Traffic Shifts graph above over the July 14-15 weekend period. Iraqi provider Earthlink is based in Baghdad and is one of Iraq’s largest ISPs. It was also down during the same period, as seen in the Traffic Shifts graph below. Other On July 9, Twitter user @Abdalla_Salmi posted the following Tweet: It appears that there has been severe internet disruptions in #Eritrea, over the past two days, coinciding with the visit by #Ethiopia PM Abiy Ahmed (Source: @InternetIntel) — Abdalla S (@Abdalla_Salmi) July 9, 2018 The Country Statistics graphs below show that there was no change in the number of routed networks geolocated to Eritrea, but there were significant declines in the traceroute completion ratio and DNS query rate metrics during the time period highlighted in the above Tweet. As @Abdalla_Salmi noted, the Internet disruption in Eritrea was coincident with a visit from the Ethiopian Prime Minister. (The visit marked a shift in relations between Ethiopia and Eritrea, which have been locked in two decades of conflict.) In some instances in the past, we have observed state-ordered disruptions to a country’s Internet connectivity as a means of limiting their citizens from being able to organize protests around political events of this type. However, such government involvement in an Internet shutdown is often reported in the press and/or on social media; in this case, no such reports have been found. The Eritrea Telecommunication Service Corporation (EriTel) is the national telecommunications service provider, and is the state-owned monopoly for fixed and mobile connectivity. As the graph below shows, the number of completed traceroutes into EriTel dropped to approximately 10% of their previous rate during the period of disruption. 
While no publicly available information on a root cause has been found for the issues observed at a country level and with EriTel, the disruptions were corroborated by colleagues at Akamai and CAIDA through data they collect and analyze. On July 12/13 and again on July 17/18, the Internet Intelligence Map highlighted Internet disruptions in Bhutan, as shown in the figure below. Although the observed issues appeared to last less than a day in each case, they left artifacts across all three metrics. Unfortunately, the root cause of these disruptions is unknown, as there were no published reports found on state involvement, fiber cuts, power outages, or the like. Just after midnight GMT on July 23, the Internet Intelligence Map Country Statistics view for Syria showed an approximately 30% decline in the traceroute completion ratio metric, as seen in the graph below. This reduced ratio persisted through the end of the month and may represent the “new normal”, although the reduced rate of DNS queries from Syrian resolvers returned to previous levels after a few days; the number of routed networks from Syria remained unchanged. This type of profile is often indicative of last mile access issues or catastrophic technical failure closer to the edge of the network. However, in this case we believe that this observed disruption may have been due to a change in network configuration at Syrian Telecom. The impact of this possible network configuration change on traceroutes into AS29256 (Syrian Telecom) can be seen in the Traffic Shifts graph below. In this case, the number of completed traceroutes into Syrian Telecom appears to drop right before midnight GMT on July 23 – just ahead of the significant drop in the country-level traceroute completion ratio graph above.

Summary

July was a busy month for Internet disruptions around the world as observed within Oracle’s Internet Intelligence Map.
For better or worse, the disruptions were largely due to familiar causes, with related information found in local or international press coverage, on Twitter, or on telecommunications provider Web sites. However, some had impacts large enough to leave artifacts in the Internet Intelligence graphs, but without correlated press coverage, provider apologies, or user complaints on Twitter. Although root cause information can be hard to find, we feel that it is valuable to highlight all significant Internet disruptions in support of #keepiton efforts around the world.



Internet in Iraq Returns After Two-Day Blackout

After a week of widespread protests against corruption and poor government services, the Iraqi government declared a state of emergency last week. And as part of that measure, the government ordered the disconnection of the fiber backbone of Iraq that carries traffic for most of the country. On Monday, Internet services in Iraq were coming back online (however, social media sites are still blocked according to independent measurement outfit NetBlocks). The blackout, which lasted almost 48 hours, was clearly visible in our Internet Intelligence Map (screenshot below):

A history of government-directed outages

Government-directed Internet outages have become a part of regular life in Iraq. Just yesterday, the government ordered its latest national outage to coincide with this year’s last 6th-grade placement exam. The first government-directed outage in Iraq that we documented occurred in the fall of 2013 and revolved around a pricing dispute between the Iraqi Ministry of Communications (MoC) and various telecommunications companies operating there. While the intention of this outage was to enforce the MoC’s authority, it served mainly to reveal the extent to which Iraqi providers were relying on Kurdish transit providers operating outside the control of the central government – a topic we covered here. In June 2014, the terrorist organization ISIS stormed out of the deserts of western Iraq and captured Mosul, Iraq’s second largest city. As we covered at the time, the government’s reaction then was to block social media and disconnect Internet service. In the summer of 2015, we began to observe periodic early-morning outages that turned out to be the beginning of the practice of taking the national Internet down to combat exam cheating, a practice that has sadly continued to this month. Since then, Iraq has also used Internet outages as a way to combat anti-government protests.
For example, the incident covered in the tweet below: Iraq govt downs Internet in response to massive anti-corruption protests #العراق — InternetIntelligence (@InternetIntel) July 15, 2016

Exceptions to the shutdown

As depicted in the above Internet Intelligence Map screenshot, not all of Iraq was taken offline during this past weekend’s blackout. For example, about 400 BGP routes stayed online through the weekend out of a total of 1,300 routes for Iraq. Some telecoms with independent Internet connections through the north of Iraq stayed online, as did those with independent satellite links. For example, the graphic below depicts the shift to satellite service for a route used by the Russian oil company Lukoil for its operations in Iraq (AS200939). Using the Traffic Shifts section of the Internet Intelligence Map, we can observe the range of impacts on individual Iraqi providers. ITC operates the Iraqi fiber backbone, so it is not surprising to see it go completely offline during the weekend’s communications blackout, as well as during two exam-related outages on July 11th and 17th. Iraqi provider Earthlink is based in Baghdad and is one of Iraq’s largest ISPs. And, as is usually the case with Earthlink, it was down during the government-directed outages in the past week. Newroz is an Iraqi telecom based in northern Iraq in what is commonly referred to as the KRG (Kurdistan Regional Government). For much of its existence, it has operated largely outside of the control of the central government of Iraq. However, last year’s failed Kurdish independence vote energized the central government to bring the KRG into the fold. Earlier this year, the central government accused Kurdish providers of “smuggling” Internet service – selling international bandwidth without paying taxes to the Ministry of Communications – and arrested some telecom executives.
In the screenshot below, we can see minor dips in the volume of traceroutes completing into Newroz’s network that coincide with the outages in the previous two networks. These dips represent the portion of Newroz’s service area that depends on the national fiber backbone, which can be disabled by technical means from Baghdad. In addition, we see a 12-hour complete blackout that does not correspond to outages seen in other Iraqi providers.

Conclusion

Yesterday, at the request of one of his Twitter followers, the Deputy Prime Minister of Kurdistan asked Iraqi Prime Minister Haider Al-Abadi to lift the country’s block on social media. Until that happens, our Twitter followers suggest Iraqi Internet users utilize tools such as VPN apps or Psiphon to circumvent online censorship. Country-level connectivity issues such as these can be monitored as they develop at Oracle’s Internet Intelligence Map.



Last Month In Internet Intelligence: June 2018

In June, we launched the Internet Intelligence microsite (home of this blog), featuring the new Internet Intelligence Map. As the associated blog post noted, “This free site will help to democratize Internet analysis by exposing some of our internal capabilities to the general public in a single tool. …. And since major Internet outages (whether intentional or accidental) will be with us for the foreseeable future, we believe offering a self-serve capability for some of the insights we produce is a great way to move towards a healthier and more accountable Internet.” While we will continue to share information about Internet disruptions and events as they occur via @InternetIntel, we also plan to provide a monthly roundup in a blog post, allowing readers to learn about Internet disruptions and events that they may have missed, while enabling us to provide additional context and insight beyond what fits within Twitter’s character limit.

Exams

In the past, countries including Iraq, Syria, and Ethiopia have implemented partial or complete national Internet shutdowns in an effort to prevent student cheating on exams. This past month saw Iraq implement yet another round of Internet shutdowns, and Algeria began a similar program as well. The Internet Intelligence Map graphs shown above highlight Internet shutdowns that occurred in Iraq on June 21 and June 27. The national backbone was taken down from 4:00-6:00 UTC on the 21st, and from 3:00-5:00 UTC on the 27th, to prevent student cheating as a second round of student exams began. An earlier round of shutdowns took place between May 27 and June 16, and this second round is expected to last until July 12. According to a published report, the shutdowns are being implemented at the request of the Ministry of Education. A day prior, three separate brief disruptions to Internet connectivity occurred in Algeria, as shown in the figure above.
According to a published report, “The Algerian Ministry of National Education announced that it will cut the Internet service across the entire country for an hour after the start of each High School Certificate Examination to avoid any exam leakage.” In addition to this Internet shutdown, additional measures were put into place in an effort to limit cheating, including banning mobile phones, tablets, and other digital devices at exam locations. The lower graph in the figure below shows the impact of these shutdowns on Telecom Algeria, the country’s state-owned telecommunications operator. Similar to the drops seen in the traceroute completion ratio in the figure above, three similar declines are seen in the number of completed traceroutes to endpoints within Telecom Algeria’s network on June 20. A blog post from advocacy group SMEX indicated that Mauritania was also planning to implement a similar set of Internet shutdowns for exams between June 11 and June 21, and a Twitter post from the group on June 19 highlighted a four-hour shutdown observed that day. However, there was no evidence of such shutdowns within the country seen in the Internet Intelligence Map on June 19, or over the broader time period. This may be because the shutdowns were more targeted in nature, affecting only mobile connectivity, according to a published report.

Fiber Cuts

Internet outages due to fiber cuts are unfortunately not all that unusual, and occur fairly frequently on a local basis. Sometimes, however, these cuts have a wider reach, impacting Internet connectivity on a national basis. The Internet Intelligence team has used our measurement data in the past to illustrate the impact of cuts in Ukraine, Egypt, Armenia, Chile, and Arizona. On June 18, @Abdalla_Salmi posted the following Tweet: Internet is down in #Libya, specifically eastern and central areas (owing to fighting?)
via @InternetIntel and confirmed by @Balzawawi_ly #Internetshutdown #KeepItOn — Abdalla S (@Abdalla_Salmi) June 18, 2018 This Internet disruption in Libya can be seen clearly in the figure below, starting late in the day GMT and lasting for just over 16 hours. The graphs show that it was not a complete national Internet outage, with none of the metrics dropping to or near zero, in line with the Tweeted statement that connectivity issues were only seen in some areas of the country. According to a subsequent Facebook post from the Libyan Interim Government, the Internet disruption was due to a breakdown in a fiber optic cable – it was not due to fighting in the region. A published report included more details, explaining that the country’s General Authority for Communications and Informatics (GACI) said that the interruption was caused by a cut in the fiber optic cable in the Ghanema district near the city of Khoms in the western region, and that services were restored gradually after maintenance work by Hatif Libya Company. Just a few days earlier, on June 13, the Democratic Republic of the Congo experienced an Internet disruption that lasted for approximately half a day, as seen in the figure below. The country is no stranger to Internet disruptions, and has experienced issues in the past related to widespread political protests. However, it appears that the problem this time was related to issues with a submarine cable, according to a Tweet from local telecommunications provider MTN Congo: “Y’ello Dear subscribers, MTN CONGO apologizes for the inconvenience caused by yesterday’s Internet outage, due to damage to the submarine cable, and thanks you for your understanding.” (translated from French) — MTN CONGO (@MTN_123) June 14, 2018 According to TeleGeography, the Congo is connected to the West African Cable System (WACS) and the Africa Coast to Europe (ACE) cable.
A March 30 cut to the ACE cable impacted connectivity in 10 countries (not including the Congo), but it is unclear whether the ACE cable was also the one involved in this disruption.



Shutting Down the BGP Hijack Factory

It started with a lengthy email to the NANOG mailing list on 25 June 2018: independent security researcher Ronald Guilmette detailed the suspicious routing activities of a company called Bitcanal, whom he referred to as a “Hijack Factory.” In his post, Ronald detailed some of the Portuguese company’s most recent BGP hijacks and asked why Bitcanal’s transit providers continued to carry its hijacked routes on to the global internet. This email kicked off a discussion that led to a concerted effort to kick this bad actor, who had hijacked with impunity for many years, off the internet.

Transit Providers

When presented with the most recent evidence of hijacks, transit providers GTT and Cogent, to their credit, immediately disconnected Bitcanal as a customer. With the loss of international transit, Bitcanal briefly reconnected via Belgian telecom BICS before being disconnected once BICS was informed of its new customer’s reputation. The following graphic illustrates a BGP hijack by Bitcanal via Cogent before Cogent disconnected them. Bitcanal’s announcement was a more-specific hijack of a prefix normally announced by AS131486 (Beijing Jingdong 360 Degree E-commerce). The graphic on the right shows another prefix being initially transited by GTT (AS3257) and then briefly via BICS before those companies terminated service to Bitcanal. Following the loss of these transit providers, three prefixes (below) previously announced by Bitcanal moved to a new home at Meerfarbig GmbH. However, when Meerfarbig learned where its new customer had come from, it quickly disconnected them as well.

Anna Dragun-Damian (PL)
Anna Dragun-Damian (PL)
Xantho Ltd (LT)

The loss of transit also disconnected Routed Solutions (AS39536), ostensibly a customer of Bitcanal, although Bitcanal is listed as the admin contact on its WHOIS registration.
The leftmost graphic below shows a prefix moving briefly to Meerfarbig after Bitcanal was cut off by major transit providers. The rightmost graphic shows a prefix originated by AS39536 being disconnected when Bitcanal lost its transit, but returning to circulation via M247 (AS9009).

Internet Exchange Points (IXPs)

But Bitcanal didn’t only announce hijacked routes via transit providers; it also extensively used Internet exchange points (IXPs) as a way to send hijacked routes directly to unsuspecting networks. While the German IXP DE-CIX reportedly dropped Bitcanal last year for bad behavior, it took behind-the-scenes coordination in recent days to get Bitcanal booted from LINX and AMS-IX, the major IXPs in London and Amsterdam, respectively.

Latest disconnections

In the past 24 hours, there have been two additional significant disconnections that greatly limit Bitcanal’s ability to announce its hijacks. At 16:46 UTC on 9 July 2018, Hurricane Electric (AS6939) de-peered Bitcanal (AS197426) (graphic on left). Earlier today at 11:40 UTC, Portuguese transit provider IPTelecom terminated service to Bitcanal (graphic on right). While Bitcanal appears to remain connected (for the time being) at ESPANIX, with the loss of IPTelecom transit, Bitcanal is effectively cut off from the global internet. Bitcanal’s IPv6 route (2a00:4c80::/29) was also withdrawn at 16:04 UTC today. According to Spamhaus, it was also the source of large amounts of spam email and is listed on their IPv6 DROP list.

A Long Running Reputation

Longtime followers of this blog may recognize the name Bitcanal (retail name Ebony Horizon), as we have documented their numerous flagrant BGP hijacks in the past, including the hijack of IP address space belonging to the State Attorney General of Texas back in 2014. They warranted their own section (“Case 2”) in my 2015 blog post, The Vast World of Fraudulent Routing.
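A more-specific hijack of the kind described in this post can be detected mechanically by comparing received announcements against a table of expected origins. The sketch below is illustrative only; the prefix and ASNs are documentation examples, not Bitcanal's actual routes.

```python
# Sketch: flag BGP announcements that look like exact-prefix or
# more-specific hijacks, given a (hypothetical) expected-origin table.
from ipaddress import ip_network

# Prefixes we legitimately expect, keyed by prefix -> origin ASN.
expected = {
    ip_network("203.0.113.0/24"): 64500,
}

def check(prefix: str, origin: int):
    """Return a warning string if (prefix, origin) conflicts with an
    expected route, or None if the announcement looks legitimate."""
    net = ip_network(prefix)
    for known, asn in expected.items():
        if net == known and origin != asn:
            return f"exact-prefix hijack of {known}: origin AS{origin} != AS{asn}"
        # A more-specific announced by the wrong origin steals traffic
        # even though the legitimate covering route remains in the table.
        if net != known and net.subnet_of(known) and origin != asn:
            return f"more-specific hijack of {known} by AS{origin}"
    return None

print(check("203.0.113.0/25", 64501))  # more-specific from the wrong origin
print(check("203.0.113.0/24", 64500))  # legitimate announcement
```

A real deployment would feed this from route-server MRT dumps or BMP sessions (the kind of evidence collection point 3 in "Lessons for IXPs" argues for), and would use RPKI/IRR data rather than a hand-maintained table.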
We’re not the only ones to have noticed something suspicious with Bitcanal: Spamhaus lists all of their ASNs (AS197426, AS3266, AS200775, and AS42229) on their ASN Droplist due to a history of originating massive amounts of spam email.

Lessons for IXPs

There are lessons to be learned from the past couple of weeks, specifically for IXPs. Bad actors like Bitcanal take advantage of IXPs to form myriad peering relationships for the purpose of injecting fraudulent routes. These routes can be used to send spam and other malicious traffic. These bad actors presume people don’t generally monitor the routes they receive from peers, and by hijacking the IP space of others, they attempt to evade IP blacklists. Based on discussions with IXPs regarding this particular case, the following points are worthy of consideration.

1) Even if abuse didn’t take place across your exchange, you can still consider disconnection to mitigate future risk. If it had been widely known that DE-CIX kicked out Bitcanal last year, might other IXPs have disconnected them? Or at least started scrutinizing their activity at the exchange?

2) IXPs are no longer just a neutral transport bus. They facilitate a unique service that malicious actors can leverage. Like it or not, this makes IXPs responsible too.

3) Ensure that you have monitoring and analysis capabilities in place. Multiple IXPs contacted did not have MRT files from their route servers, or PCAP collection, to verify any claim. If an IXP has a policy of requiring evidence of bad behavior, it must also be collecting that evidence and, most importantly, have a process to review that evidence when a reasonable inquiry is made.

The removal of this bad actor was accomplished through the work of a number of people in the internet community. I would especially like to thank Job Snijders of NTT for his assistance on this blog post.



Introducing the Internet Intelligence Map

Today, we are proud to announce a new website we're calling the Internet Intelligence Map. This free site will help to democratize Internet analysis by exposing some of our internal capabilities to the general public in a single tool. For over a decade, the members of Oracle's Internet Intelligence team (first born as Renesys, more recently as Dyn Research, and now reborn with David Belson, former author of Akamai's State of the Internet report) have helped to break some of the biggest stories about the Internet. From the Internet shutdowns of the Arab Spring to the impacts of the latest submarine cable cut, our continuing mission is to help inform the public by reporting on the technical underpinnings of the Internet and its intersection with, and impact on, geopolitics and e-Commerce. And since major Internet outages (whether intentional or accidental) will be with us for the foreseeable future, we believe offering a self-serve capability for some of the insights we produce is a great way to move towards a healthier and more accountable Internet. The website has two sections: Country Statistics and Traffic Shifts. The Country Statistics section reports any potential Internet disruptions seen during the previous 48 hours. Disruption severity is based on three primary measures of Internet connectivity in that country: BGP routes, traceroutes to responding hosts, and DNS queries hitting our servers from that country. The screenshot below illustrates how recent national Internet blackouts in Syria are depicted in the Internet Intelligence Map. Notably, while both BGP routes and traceroutes completing into Syria drop to zero during these blackouts, the number of DNS queries surges. This suggests the outage may be asymmetrical — packets can egress the country but cannot enter. We believe the spike in queries is due to additional DNS retries as queries go unanswered. Visualizations such as these will now be widely available to the public.
We can try to further understand the recent blackouts by analyzing Traffic Shifts, pictured below. Visualizations in the Traffic Shifts section may be a little less familiar to some viewers, so additional explanation is provided. As part of our Internet measurement infrastructure, we run hundreds of millions of traceroutes daily to all parts of the Internet from hundreds of measurement servers distributed around the world. In the bottom panel, we attempt to model how traffic reaches a target autonomous system (AS) by plotting the number of traceroutes that traverse a penultimate or 'upstream' AS as a function of time. Additionally, in the upper panel, we report the geometric mean of all observed latencies for traceroutes that traversed the target AS. Below, we see which upstream networks traceroutes traversed to reach Syrian Telecom. The gaps in the colored stacked plot below correspond to the outages, and the colors represent the various transit providers for Syrian Telecom that we observe. PCCW (AS3491) appears to be the most commonly traversed AS, and perhaps Tata (AS6453) is the least, by traceroute volume. An astute user of the website will notice that CYTA, one of Syrian Telecom's transit providers, also experienced traffic shifts that align with the blackouts (pictured below). When Syrian Telecom is down, CYTA loses its transit from Cogent. This is due to the fact that CYTA's Cogent transit only handles traffic to Syria, something plainly evident in BGP routing. We call these Traffic Shifts and color them blue on the map because they aren't necessarily outages or connectivity impairments. They are simply changes — good, bad or neutral — in how traffic is being routed through the Internet. On any given day, there are hundreds of such shifts as ISPs change transit providers or re-engineer their networks. The tool enumerates the top one hundred shifts in the previous 48-hour period and allows our users to explore a macro-level connectivity picture for any given AS.
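The two Traffic Shifts panels described above — traceroute counts per penultimate AS, and the geometric mean of latencies — can be sketched roughly as follows. The record format and values are hypothetical stand-ins for real traceroute data; the ASNs mirror the Syrian Telecom example.

```python
# Sketch: bucket traceroutes by hour, count which penultimate
# ("upstream") AS each one traversed before the target AS, and take
# the geometric mean of the observed latencies per bucket.
import math
from collections import Counter, defaultdict

# Hypothetical records: (hour, AS path ending at the target, latency ms).
traces = [
    (0, [64496, 3491, 29256], 212.0),   # via PCCW into Syrian Telecom
    (0, [64496, 6453, 29256], 198.0),   # via Tata
    (1, [64496, 3491, 29256], 220.0),
]

upstreams = defaultdict(Counter)   # hour -> Counter of penultimate AS
latencies = defaultdict(list)      # hour -> latency samples

for hour, path, rtt in traces:
    upstreams[hour][path[-2]] += 1  # the AS just before the target
    latencies[hour].append(rtt)

def geo_mean(xs):
    """Geometric mean, computed in log space for numeric stability."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

for hour in sorted(upstreams):
    print(hour, dict(upstreams[hour]), round(geo_mean(latencies[hour]), 1))
```

The geometric mean (rather than the arithmetic mean) keeps a handful of very high-latency traceroutes from dominating each bucket, which matters when samples span satellite and terrestrial paths.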
Take it for a test drive and let us know what you think!



IPv6 Adoption Still Lags In Federal Agencies

On September 28, 2010, Vivek Kundra, Federal CIO at the time, issued a “Transition to IPv6” memorandum noting that “The Federal government is committed to the operational deployment and use of Internet Protocol version 6 (IPv6).” The memo described specific steps for agencies to take to “expedite the operational deployment and use of IPv6”, and laid out target deadlines for key milestones. Specifically, it noted that agencies shall “Upgrade public/external facing servers and services (e.g. web, email, DNS, ISP services, etc) to operationally use native IPv6 by the end of FY 2012.” For this sixth “launchiversary” of the World IPv6 Launch event, we used historical Internet Intelligence data collected from Oracle Dyn’s Internet Guide recursive DNS service to examine IPv6 adoption trends across federal agencies, both ahead of the end-of-FY-2012 (September 2012) deadline and after it.

Background

The data set used for this analysis is similar to the one used for the recent “Tracking CDN Usage Through Historical DNS Data” blog post, but in this case it only includes .gov hostnames. While the memorandum calls out the use of IPv6 for ‘web, email, DNS, ISP services, etc.’, in order to simplify the analysis, this post focuses only on hostnames of the form www.[agency].gov, essentially limiting it to public Web properties. Furthermore, the GSA’s master list of .gov domains was used to identify federal agencies for the analysis. Although they may have been present in the initial data set, .gov hostnames associated with cities, counties, interstate agencies, native sovereign nations, and state/local governments were not included in the analysis. The analysis was done on historical recursive DNS data from September 2009 through October 2017, encompassing federal fiscal years 2010-2017.
The graphs below are aggregated by month, and reflect the first time that a given hostname was associated with a AAAA DNS resource record within our data set – note that this may differ from the date that the hostname was first available over IPv6. In addition, the data set used for this analysis is not necessarily exhaustive across .gov domains, as it reflects only those hostname requests made to the Internet Guide service.

Summary Of Findings

In short, Internet Intelligence data showed that IPv6 adoption across federal government www sites was less than aggressive across the survey period, with many agencies failing to deploy public Web sites on IPv6 by the end of FY 2017. Ahead of the deadline, IPv6 adoption was generally slow through 2009-2011, although activity did begin to increase in December 2011, continuing through the September 2012 deadline. Adoption continued at a solid rate throughout FY 2013, but remained generally low through the end of the survey period, with some periods of increased activity in 2017. Among the sites identified, most remain available in a dual-stack (IPv4 & IPv6) setup, but some have fallen back to IPv4 only, and others are no longer available. Akamai and Amazon Web Services are the CDN and cloud platform providers of choice for sites delivered from third-party service providers.

Detailed Analysis

The Executive Branch has the largest number of agencies listed in the GSA master list referenced above. As shown in the figure below, there were a significant number for which we did not find www sites on IPv6 during the survey period. Five agencies deployed sites on IPv6 only ahead of the deadline, and 20 deployed sites only after the deadline, while 28 agencies showed activity both before and after the deadline. Of the eleven listed agencies in the Legislative branch, four deployed www sites on IPv6 only after the deadline, while no IPv6 sites were found for the remaining seven.
The two agencies in the Judicial branch were split, with one integrating IPv6 after the deadline, and no IPv6 Web sites found for the other. Examining that data in more detail shows some interesting activity and trends. In the figure below, the first big spike of activity is seen in June 2011, with AAAA record first seen dates for .gov www sites clustered around World IPv6 Day, which took place on June 8. (Click the graph to view a larger version of the figure.) The Departments of Commerce, Energy, and Health & Human Services launched the largest numbers of Web sites on IPv6 during that month. However, activity all but disappeared until December, when the Department of Veterans Affairs began a multi-month effort to make several hundred topical and city-specific Web sites available via IPv6.  Following the VA’s lead, a number of other agencies deployed Web sites on IPv6 through the first half of calendar year 2012, with a peak of activity around the initial World IPv6 Launch event in June. However, it is clear that a number of agencies scrambled to meet the end of FY 2012 deadline, with 115 Web sites from over 20 agencies first appearing on IPv6 in September. IPv6 adoption tailed off in the months following the September 2012 deadline, as illustrated in the figure below. (Click the graph to view a larger version of the figure.) Starting in June 2013, the Department of Commerce began turning up dozens of topical NOAA sites on IPv6, with the initiative lasting about a year. Beyond that, AAAA records were first seen for only 20-30 new federal Web sites per month through early 2017. Interestingly, the yearly World IPv6 Launch anniversaries during that period seemed to have little impact – no meaningful increases were observed around those dates. However, a significant spike was seen in June 2017, with 120 sites from 18 agencies first observed on IPv6. The Departments of Commerce, Energy, Health & Human Services, and the Interior were the most active agencies that month. 
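The monthly "first seen" aggregation behind the figures above can be sketched as follows. The observation records are hypothetical stand-ins for the Internet Guide DNS logs, and the hostnames are illustrative.

```python
# Sketch: from a stream of (date, hostname, record_type) DNS
# observations, find the month each hostname first appeared with a
# AAAA record, then count new IPv6-enabled sites per month.
from datetime import date

observations = [
    (date(2011, 6, 8), "www.commerce.gov", "AAAA"),
    (date(2011, 6, 9), "www.commerce.gov", "AAAA"),   # later sighting: ignored
    (date(2012, 9, 25), "www.energy.gov", "AAAA"),
    (date(2012, 9, 25), "www.example.gov", "A"),      # IPv4-only record: ignored
]

first_seen = {}                       # hostname -> (year, month) of first AAAA
for day, host, rtype in sorted(observations):
    if rtype == "AAAA" and host not in first_seen:
        first_seen[host] = (day.year, day.month)

by_month = {}                         # (year, month) -> count of new sites
for month in first_seen.values():
    by_month[month] = by_month.get(month, 0) + 1

print(first_seen)
print(by_month)
```

As noted in the methodology above, "first seen in our data" is a lower bound: a site may have supported IPv6 before any Internet Guide client happened to query it.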
Current Status

The figures above illustrate the deployment of federal agency Web sites on IPv6 over an eight-year period that ended in October 2017. We also examined the current state of the 2,255 sites identified over that timeframe – that is, how many remain available over IPv6? As shown in the figure below, the news here is relatively good, with over 1,600 available as dual-stacked sites, reachable over both IPv6 and IPv4. Interestingly, three sites are available only over IPv6, with DNS lookups returning only AAAA records. Unfortunately, over 200 of the identified sites have fallen back to being available only over IPv4, while over 360 of them are no longer resolvable, responding to DNS lookups with an NXDOMAIN. Many federal agencies work with cloud and CDN providers as part of IT modernization efforts, or to improve the performance, reliability, and security of their Web presence. Some of the identified sites included CNAMEs within their DNS records. For those sites, we analyzed the CNAMEs to identify the use of popular cloud and CDN providers, with the results shown in the figure below. For those sites accelerated through a CDN, over 300 of them make use of Akamai’s IPv6-enabled services, while a smaller number are delivered over IPv6 via Amazon’s CloudFront service, Cloudflare, and Limelight. Of those sites served directly from an IPv6-enabled cloud platform, the largest number came from Amazon Web Services, while the remainder came from Google Hosted Sites, IBM Cloud, and a small number of other providers.

Conclusion

A recent FedTech article noted that “Agency adoption of IPv6 moves at a glacial pace” but also that “Most have started to ensure their public websites are accessible via IPv6 using dual-stack environments”. Our analysis of eight years of historical recursive DNS data supports these assertions – while much progress has been made, there is still a long way to go.
During the six years since the initial World IPv6 Launch event, many cloud and CDN providers have moved to ease the transition to IPv6, making it easy for customers to support it, either enabling it by default when a new site is configured on their platform, or via a simple configuration option. While federal agencies have been directed to support IPv6 throughout their technology stack, it is arguably easier than ever to do so for public-facing Web sites and applications.
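As an illustration of the current-status check described in this post, a site can be bucketed by its A and AAAA lookup results. The sketch below operates on pre-fetched, hypothetical results; in practice the inputs would come from a resolver library, with NXDOMAIN mapped to None.

```python
# Sketch: classify a site's IPv6 status from its A and AAAA lookup
# results. None means the name did not resolve (NXDOMAIN); an empty
# list means the name exists but has no records of that type.

def classify(a_records, aaaa_records):
    if a_records is None and aaaa_records is None:
        return "unreachable (NXDOMAIN)"
    if a_records and aaaa_records:
        return "dual-stack"
    if aaaa_records:
        return "IPv6 only"
    return "IPv4 only"

# Hypothetical lookup results for three illustrative hostnames.
sites = {
    "www.agency-a.gov": (["203.0.113.10"], ["2001:db8::10"]),
    "www.agency-b.gov": (["203.0.113.20"], []),
    "www.agency-c.gov": (None, None),
}

for name, (a, aaaa) in sites.items():
    print(name, "->", classify(a, aaaa))
```

Run against the 2,255 identified hostnames, this classification yields the dual-stack / IPv4-only / IPv6-only / unreachable buckets discussed above.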

On September 28, 2010, Vivek Kundra, Federal CIO at the time, issued a “Transition to IPv6” memorandum noting that “The Federal government is committed to the operational deployment and use of...


Tracking CDN Usage Through Historical DNS Data

With Mother’s Day having just passed, some e-commerce sites likely saw an associated boost in traffic. While not as significant as the increased traffic levels seen around Black Friday and Cyber Monday, these additional visitors can potentially impact site performance if the site has not planned appropriately. Some sites have extra infrastructure headroom and can absorb increased traffic without issue, but others turn to CDN providers to ensure that their sites remain fast and available, especially during holiday shopping periods.

To that end, I thought that it would be interesting to use historical Internet Intelligence data (going back to 2010), collected from Oracle Dyn’s Internet Guide recursive DNS service, to examine CDN usage. As a sample set, I chose the top 50 “shopping” sites listed on Alexa, and looked at which sites are being delivered through CDNs, which CDN providers are most popular, and whether sites change or add providers over time. Although not all of the listed sites would commonly be considered “shopping” sites, as a free and publicly available list from a well-known source, it was acceptable for the purposes of this post.

The historical research was done on the www hostname of the listed sites, with one exception. A site was considered to be using a given CDN provider if the hostname was CNAMEd to the provider’s namespace, or if its associated A records belonged to the provider. For time periods before a listed site began using a CDN provider for whole-site delivery as shown in the charts below, it is possible that it was delivering embedded content through a CDN, but that is not captured here. (One site on the Alexa list is not included below because it is not a shopping site, and because it is delivered directly from Google’s own infrastructure.)
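The CNAME-based detection described above can be sketched roughly as follows. The suffix map is illustrative (a handful of well-known CDN namespaces), not the actual mapping used for this research:

```python
# Illustrative suffix-to-provider map; real CDN namespaces are more
# numerous and change over time.
CDN_SUFFIXES = {
    "akamaiedge.net": "Akamai",
    "edgekey.net": "Akamai",
    "cloudfront.net": "Amazon Cloudfront",
    "edgecastcdn.net": "Verizon/EdgeCast",
    "fastly.net": "Fastly",
    "cloudflare.net": "Cloudflare",
}

def detect_cdn(cname_chain):
    """Return the CDN provider implied by a hostname's CNAME chain,
    or None if no known provider namespace appears in the chain."""
    for name in cname_chain:
        normalized = name.lower().rstrip(".")
        for suffix, provider in CDN_SUFFIXES.items():
            if normalized.endswith(suffix):
                return provider
    return None
```

A chain such as `www.example.com -> e1234.a.akamaiedge.net` would be attributed to Akamai; a hostname resolving directly to origin infrastructure returns None.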
In the interest of making the analyzed data easier to review here, the list of sites is broken out into several categories:

Brick & Mortar: the online presence of well-known retailers with physical stores
Online Native: retailers that primarily exist online
Automotive: car shopping sites
Video: media providers
Publishing: focusing on academic & research content
Nutrition: vitamin & supplement providers
Amazon: the .com and properties, along with Zappos

Brick & Mortar

Looking at the chart below, it is clear that Akamai has an extremely strong presence within this set of sites, as a long-term CDN provider across all of them. Although Bed, Bath & Beyond and Forever21 previously used AT&T’s CDN solution, they eventually transitioned to Akamai, likely as a result of the strategic alliance between the two providers. Among this group, Walgreens is the only site not currently being delivered by Akamai, having transitioned to Instart Logic in mid-2016.

Online Native

As businesses born on the Internet, these retailers should recognize the value of whole-site delivery through a CDN, incorporating these services into a site's architecture from the start. However, in contrast to the online presence of the brick & mortar retailers reviewed above, many of the “online native” sites have only come to rely on CDN providers in the last five years. (Except for one – iFixit is on the Alexa list, but not included here because we found no evidence of it being delivered through a CDN provider in our historical data set. However, it does appear to be using Amazon’s Cloudfront CDN service to deliver embedded page content.) Akamai has a strong presence among this set of sites as well, with a few exceptions. As part of its ongoing efforts to optimize site performance, Etsy moved to a multi-CDN configuration in late 2012, adding Verizon Digital Media Services/EdgeCast alongside Akamai, with Fastly joining the set of providers in early 2013.
Humble Bundle has been delivered from Google since late 2010, although it is not using the Google Cloud CDN solution. Among this set of sites, Redbubble was the last to begin delivering its site through a CDN, waiting until early 2016 to integrate Cloudflare.

Automotive

Looking at AutoTrader, we see that its site has been delivered by Akamai since late 2011. CarGurus turned up CDN services from Verizon Digital Media Services/EdgeCast in early 2013, and shifted to a dual-vendor strategy with Fastly and Verizon in late 2015, before moving to use Fastly exclusively in early 2017. One site held out much longer than its counterparts did, relying on origin infrastructure until activating Akamai’s services at the end of 2015.

Video

Sky and DirecTV have both been long-time Akamai customers, with Sky integrating the CDN services before the start of 2010, and DirecTV coming on board in late 2012. Netflix is well known as an Amazon Web Services customer, and its site has historically been hosted on Amazon’s EC2 service. Although not a CDN service, it appears Netflix used the cloud provider’s Elastic Load Balancing solution for a three-year period between 2012 and 2015.

Publishing

The Oxford University Press site is found on the Alexa list, but is not included here because we found no evidence of it being delivered through a CDN provider in our historical data set. (The www hostname simply redirects users elsewhere, but there is no evidence of CDN usage, either for whole-site or embedded content delivery, on that site either.) Wiley’s site was also historically served directly from origin, before shifting to delivery through Amazon Cloudfront in late 2017. However, Cambridge University Press has had a longer-term reliance on CDN providers, delivering through Akamai for three years, before shifting to CDNetworks for two years, and then to Cloudflare for the last two years.

Nutrition

Both sites in this category have had a long-term reliance on Akamai for delivery.
However, iHerb briefly tested Amazon Cloudfront in early 2014. It also pursued a multi-CDN strategy with Akamai and Cloudflare starting in late 2015, before moving exclusively to Cloudflare two years later.

Amazon

Zappos is included in this grouping, along with Amazon’s US and UK sites, because it has been owned by Amazon since 2009. As seen in the chart, it has relied on Akamai’s CDN services since that time as well. In contrast, Amazon’s native sites have only been served through a CDN since late 2016, with both balanced between Akamai and Amazon’s own Cloudfront CDN solution.

Summary

In short, our Internet Guide data shows that Akamai has an extremely strong presence within top shopping sites, and for the most part has held that position for a number of years. It also exposes the fact that more than sixteen years after CDN providers launched basic whole-site delivery services, and more than thirteen years after dynamic site acceleration services were launched, there is still a set of shopping sites not taking advantage of those capabilities for performance and availability improvement, to say nothing of the security benefits of the WAF services also offered by these providers. If you have other historical recursive DNS data research ideas for future blog posts, please comment on this post or e-mail us.



SeaMeWe-3 Experiences Another Cable Break

On Thursday, May 10 at approximately 02:00 UTC, the SeaMeWe-3 (SMW-3) subsea cable suffered yet another cable break. The break disrupted connectivity between Australia and Singapore, causing latencies to spike, as illustrated below in our Internet Intelligence tool, because traffic had to take a more circuitous path. The SMW-3 cable has had a history of outages, which we have reported on multiple times in the past, including August 2017, December 2014, and January 2013.

Latencies to/from Perth, Australia spike as SMW-3 submarine cable suffers fault off the coast of Singapore on 2-Dec. New fault occurred only six weeks following repair of previous fault on same cable segment. — InternetIntelligence (@InternetIntel) December 4, 2017

The incident summary posted by cable owner Vocus Communications for this most recent break noted that “There is no ETR at this stage.” However, based on our observations of past outages, time to recovery has been measured on the order of weeks.

While this subsea cable is currently the only one carrying traffic from Western Australia to Southeast Asia, there are several additional cable projects in progress that will help address this long-standing issue. The Australia-Singapore Cable (ASC) is expected to be ready for service in July 2018, and the Indigo-West Cable is expected to be ready for service in 1Q 2019. Both cables will connect Perth to Singapore. Given the great expense of repairing submarine cable breaks and the fact that a replacement cable will be live soon, it will be interesting to see if the Perth-Jakarta-Singapore segment of SMW-3 ever gets fixed at all.

SMW-3 isn’t the only cable in the region that has suffered repeated cable breaks. AAG’s link to Vietnam also suffers cable breaks with regularity. The most recent one occurred in late April, and it saw five issues in 2017.

Latencies to providers in #Philippines increased dramatically as a result of another AAG submarine cable cut. This time between Hong Kong and Philippines. — InternetIntelligence (@InternetIntel) May 1, 2018

The cable breaks experienced by AAG and the Perth-Jakarta-Singapore segment of SMW-3 have occurred in some of the busiest waters in the world for fishing and shipping, not to mention that this is also a region prone to typhoons, earthquakes, and underwater landslides. The frequency with which the SMW-3 and AAG cables experience breaks has led my colleague Doug Madory to question which should hold the title of the “world’s breakiest cable”.
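As an aside, the kind of latency spike described above can be flagged programmatically by comparing the newest measurement against a rolling baseline. This is a simplified sketch with illustrative window and threshold values, not how the Internet Intelligence tool actually works:

```python
from statistics import median

def latency_spike(samples, window=24, factor=1.5):
    """Flag the latest latency sample (e.g. hourly median RTT in ms)
    if it exceeds the median of the preceding window by the given
    factor. Window and factor are illustrative, not tuned values."""
    if len(samples) <= window:
        # Not enough history to form a baseline.
        return False
    baseline = median(samples[-window - 1:-1])
    return samples[-1] > factor * baseline
```

A rerouted path that pushes round-trip times from ~50 ms to ~120 ms, as happens when Perth-Singapore traffic takes a longer detour, would trip this check; normal jitter would not.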



BGP Hijack of Amazon DNS to Steal Crypto Currency

Yesterday morning we posted a tweet (below) noting that Amazon’s authoritative DNS service had been impacted by a routing (BGP) hijack. Little did we know this was part of an elaborate scheme to use the inherent security weaknesses of DNS and BGP to pilfer cryptocurrency, but that remarkable scenario appears to have taken place. After posting the hijack tweet, I observed reports of a DNS hijack relating to the cryptocurrency website and thought the two things might be related.

Sure enough, it appears that eNet/XLHost (AS10297) suffered a breach enabling attackers to impersonate Amazon’s authoritative DNS service. These attackers used AS10297 to announce five routes covering address space used by Amazon’s DNS and registered to Amazon.com, Inc.

As depicted above, these BGP routes weren’t globally routed. In fact, only a little more than 15% of our BGP sources had them in their tables. However, the users of networks that accepted the hijacked routes (evidently including Google’s recursive DNS service) sent their DNS queries to an imposter DNS service embedded within AS10297. If these users attempted to visit the targeted site, the imposter DNS service wouldn’t direct them to Amazon Web Services (which normally hosts the site), but to a set of Russian IP addresses, according to Cloudflare. Note that users did need to click through certificate failure alerts in their browsers, but that didn’t stop many of them. Within a couple of hours, MyEtherWallet had issued an announcement acknowledging that many of the users of their cryptocurrency service had been redirected to a fraudulent site (albeit incorrectly assigning blame to a hijack of Google DNS instead of Amazon DNS).

Conclusion

This attack abused the trust-based nature of BGP to subvert Amazon’s DNS. It then abused the trust-based nature of DNS to direct users to a malicious website in Russia primed and ready to take their cryptocurrency.
Despite proposed technical fixes to secure BGP and DNS, it would appear that we presently have no way to completely prevent this from happening again. However, an idea worth considering comes from Job Snijders of NTT, who proposes that major authoritative DNS services offer RPKI origin validation for their routes. This would enable ASes and IXP route servers to drop invalid routes like the ones used to impersonate Amazon’s DNS yesterday. If attacks like these can be done with impunity and for profit, we can expect more to come.
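The origin-validation idea works roughly as follows: a Route Origin Authorization (ROA) binds a prefix, up to a stated maximum length, to an authorized origin ASN, and any announcement covered by a ROA that doesn't match is invalid. Here is a minimal sketch of that classification logic using example prefixes and ASNs; real validation is performed by RPKI validators against cryptographically signed objects:

```python
import ipaddress

def rpki_validate(announced_prefix, origin_asn, roas):
    """Classify a BGP announcement against a list of ROAs, each a
    (covering_prefix, max_length, authorized_asn) tuple. Simplified:
    'valid' if a covering ROA authorizes this origin and prefix length,
    'invalid' if covered but nothing matches, 'unknown' if uncovered."""
    net = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            if origin_asn == asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"
```

Under this scheme, a hijacker announcing a covered prefix from an unauthorized ASN produces an "invalid" route that participating ASes and route servers can simply drop.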



ACE Submarine Cable Cut Impacts Ten Countries

The ACE (African Coast to Europe) submarine cable runs along the west coast of Africa between France and South Africa, connecting 22 countries. It extends over 17,000 km, and has a potential capacity of 5.12 Tbps. The cable system is managed by a consortium of 19 telecommunications operators & administrations, and the first phase entered service in December 2012. While it may not have been completely problem-free over the last 5+ years, online searches do not return any published reports of significant outages caused by damage to the cable.

However, on March 30, damage to the cable disrupted Internet connectivity to a number of connected countries, with reported problems posted to social media over the next several days. These posts indicated that the ACE submarine cable was cut near Nouakchott, Mauritania, but did not provide any specific information about what severed the cable.

The Sierra Leone Cable Limited (SALCAB) says the data connection to #SierraLeone is partly down due to the ACE Submarine cable cut in Nouakchott, Mauritania. #SierraLeoneDecides — Leanne de Bassompierre (@leannedb01) April 1, 2018

Of the 22 countries listed as having landing points for the ACE submarine cable, 10 had significant disruptions evident in Oracle's Internet Intelligence data. The graphs below show the impact of the cable cut on Internet connectivity within the affected countries, with the countries grouped by the number of submarine cable connections each country has, based on information found on TeleGeography's Submarine Cable Map.

This first figure, immediately above, includes graphs for the six affected countries (Sierra Leone, Mauritania, Liberia, Guinea-Bissau, Guinea, and the Gambia) that have only a single submarine cable connection (to ACE). While the disruption begins at the same time across all six countries, it is interesting to note that the duration and severity of impact varied widely.
The most significant and longest-lasting disruption was seen in Mauritania, with a complete outage lasting nearly 48 hours, followed by partial restoration of connectivity. Sierra Leone also saw a significant impact as a result of the cable cut, followed by a complete outage on April 1. However, we believe that the April 1 outage may have been government-directed, related to recent national elections. The differences in duration and severity may be related to the other international Internet connections, via terrestrial cable or satellite, that the providers in these countries have in place, resulting in varying levels of reliance on the ACE cable system.

The second figure, immediately above, illustrates the impact of the cable cut on Internet connectivity in Benin, which is connected to two submarine cables – ACE and SAT-3/WASC, which follows a similar path along the west coast of Africa, up to Portugal. The impact on Benin appeared to be nominal, affecting less than a tenth of the regularly available networks, with nearly all coming back after a little more than a day. The redundant connection to SAT-3/WASC, as well as other terrestrial/satellite connections, may have helped to mitigate the overall impact on Benin's connectivity.

The third figure, immediately above, highlights the impact of the cable cut on three affected countries, each of which is connected to three submarine cables – Senegal (ACE, SAT-3/WASC, and Atlantis-2), Ivory Coast (ACE, SAT-3/WASC, and the West African Cable System), and Equatorial Guinea (ACE, Ceiba-1, and Ceiba-2). The duration of the disruption in Senegal was longer than in the other surveyed countries, which may be related to its proximity to the cable cut near Mauritania. And although Equatorial Guinea is connected to three submarine cables, the Ceiba-1 cable provides no international redundancy, while the Ceiba-2 cable is only connected internationally to Cameroon. (However, Cameroon is connected to four other submarine cables, in addition to Ceiba-2 and ACE.) And while Ivory Coast (Côte d'Ivoire) is connected to ACE, the observed disruption begins several hours after the other affected countries and lasts significantly longer, which could indicate that it is actually unrelated to the reported cable cut.

Conclusion

With multiple submarine cables landing in West African countries, providing connectivity between countries as well as to landing points in Europe, network providers in these countries have an opportunity to take advantage of this redundancy to mitigate the potential impact of problems such as the ACE submarine cable cut discussed above. Internet Intelligence data showed that in this case there was a somewhat limited impact to Internet connectivity in connected countries, but problems with other cables in the past (2008, 2012, 2013, 2014) have resulted in much more significant issues.



Power Failure Leaves Brazilian Internet In The Dark

On Wednesday, March 21, a massive power failure impacted large parts of northern Brazil, leaving tens of millions of people without electricity. Beginning at about 3:40pm local time (18:40 UTC), the outage was reportedly due to the failure of a transmission line near the Belo Monte hydroelectric station. As occurred during a major power outage in Brazil in 2009, this power failure had a measurable impact on the country’s Internet. This is illustrated below through graphs from Oracle Dyn’s Internet Intelligence team based on BGP and traceroute data, as well as graphs from Akamai’s mPulse service, based on end-user Web traffic.

The graphic below depicts the counts of available networks (lower graph) and unstable networks (upper graph) for Brazil in the latter half of March 21. The number of unstable networks spikes around 18:40 UTC as routers of ISPs in Brazil began re-routing traffic away from disabled connections, while the lower graph shows that the corresponding drop in available networks (i.e. routed prefixes) was minor when compared to the total number of routes that define the Internet of Brazil.

In addition to aggregating BGP routing information from around the globe, the Internet Intelligence team also performs millions of traceroutes each day to endpoints in ISPs and enterprises across the Internet, providing us with insight into changes in Internet traffic paths and latencies. Based on the analysis of data from these traceroute measurements, we observed impacts to thirty networks (autonomous systems) from the power outage. Of those thirty, graphs highlighting six networks are shown in the figure below. The impact of the power outage on March 21 is clearly evident within each of these graphs, with the number of completing traceroutes dropping significantly in each case. The six graphs also show that the power failure had a mixed impact on the median latency of traceroutes to endpoints in the selected networks.
The upper graphs for Eletrodata Ltda (AS262728), Softcomp Telecomunicacoes (AS52873), and Tascom Telecomunicações (AS52871) show noticeable drops in median latency during the outage. These drops are likely attributable to the decrease in successfully completed traceroutes, with only traceroutes from nearby networks with short geographic paths reaching the endpoints. The upper graphs for FSF Tecnologia (AS61568) and Noroestecom Telecom (AS52579) show the opposite – a spike in latency during the outage. The increased median latency could be due to increased congestion, or to the decrease in completing traceroutes, where the last measurements recorded were from networks far from the target. Interestingly, although the number of completed traceroutes to Eletrodata Ltda (AS262740) dropped significantly, there appeared to be no meaningful impact on the median latency.

It is also interesting to look at the impact of the power failure from a ‘last-mile’ perspective – how did it impact an end user’s ability to consume Web content? The reported tens of millions of consumers impacted by the power outage account for at least 10% of Internet users in Brazil – losing power to their homes meant that they were unable to get online for several hours. Akamai mPulse is a real user monitoring service that thousands of Web sites use to analyze their performance as experienced by actual end users. Analysis of the data aggregated across customers can also highlight trends in traffic to those customer sites. It is clear in the figure below that the number of Web pages loaded by end users in Brazil dropped significantly at 18:47 UTC, gradually recovering over the next several hours. Brief traffic spikes at 23:21 UTC and 00:23 UTC may be related to power being restored to impacted areas.
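The two per-network signals discussed above – traceroute completion rate and the median latency of completed measurements – can be computed with a simple helper. This is an illustrative sketch, not the Internet Intelligence pipeline; the input format is assumed:

```python
from statistics import median

def summarize_traceroutes(results):
    """Given (completed, rtt_ms) pairs for traceroutes toward one
    network, return (completion_rate, median_rtt_of_completed).
    Median is None when nothing completed; rtt is None for failures."""
    rtts = [rtt for ok, rtt in results if ok and rtt is not None]
    rate = len(rtts) / len(results) if results else 0.0
    return rate, (median(rtts) if rtts else None)
```

When most distant measurements stop completing, the surviving short-path traceroutes can pull the median down, which matches the counterintuitive latency drops observed for some of the networks above.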
As noted above, the power failure affected large parts of northern Brazil, and using the mPulse data, we can drill down to examine the impact on traffic patterns across selected cities in the affected area. The figure below illustrates the relative page views from three cities in northern Brazil – Fortaleza, Salvador, and Recife. It is clear that in each of these cities, there was a sudden drop in traffic at the time of the power failure. Looking more closely at Fortaleza in the figure below, the immediate drop in traffic is clear, followed by a gradual recovery over the next several hours. Interestingly, it appears that during the power failure, the percentage of users accessing the Web via mobile devices increased significantly.

Conclusion

Events like widespread power failures have an obvious impact on Web content consumption, as end users are unable to access the Internet through traditional wired last-mile connections to their homes, apparently driving them to use mobile/cellular connectivity instead. These events also affect core network infrastructure, impacting latency and Internet traffic paths as traffic is routed away from disabled connections.

(Thank you to Paul Calvano, Senior Web Performance Architect at Akamai Technologies, for the mPulse insights and graphs.)

("Featured Image" photo of power lines/pole by Cameron Kirby on Unsplash)



And The Gold Medal Goes To...

The PyeongChang 2018 Winter Games are underway, and all eyes are on South Korea, with nearly 3,000 athletes from over 90 countries around the world competing across a myriad of events. The slogan of the 2018 Olympic Winter Games is "Passion. Connected." According to the Organizing Committee, 'connected' signifies the openness of the host city, where every generation can participate in the Games – no matter where they are – thanks to Korea’s cutting-edge technology and cultural convergence.

Presumably, the vast majority of participants and spectators arrived in the country by air, likely through flights to Incheon International Airport. Others may have arrived by boat, with ferry services available from both China and Japan. However, the rest of us are attending the Olympics virtually, getting real-time results and streaming video of our favorite events over the Internet. This focus on connectivity got us thinking... how are those bits getting to the users consuming them? More specifically, how is the Internet carrying those event results, and how are the source video streams getting out of South Korea to end-user Internet networks around the world?

Given South Korea's geographic and geopolitical location (on a peninsula, connected to the Asian continent through North Korea), the country's Internet traffic is heavily reliant on submarine cables to reach the rest of the world. TeleGeography's Submarine Cable Map is an excellent free and regularly updated resource, and provides great insight into submarine cable connectivity from South Korea. As shown in Figure 1 below, there are three primary submarine cable landing points in South Korea – cables come ashore in Shindu-Ri, Keoje, and Pusan.

Figure 1: Submarine cable connections to South Korea, as shown by TeleGeography's Submarine Cable Map

Currently, a total of nine submarine cables carry data to and from South Korea.
Based on data from TeleGeography, the SeaMeWe-3 cable reaches the most countries (33), while Japan is the country most well-connected to South Korea, with a connection to all nine cables. China and Taiwan are also very well connected to South Korea, both with connections to six cables. Connections to Europe are more limited, with countries there connected via the SeaMeWe-3 cable. In addition, South Korea is connected to the United States via two cables, landing in two different cities in Oregon. A table including more specific information about the nine submarine cables and the countries that they are connected to can be found below.

While these submarine cable connections provide a physical perspective on South Korea's Internet connectivity to the rest of the world, they don't give any real sense of the paths that packets ultimately take – that is, to get from South Korea to other countries, what intermediate countries do those packets transit through? Oracle Dyn performs hundreds of millions of Internet measurements (traceroutes) each day to determine Internet paths and latencies, so to answer that question, we looked at the results of millions of those traceroutes over a one-week period, from Oracle Dyn vantage points located in South Korea to targets in countries around the world. By geolocating the IP addresses seen in the hops of each traceroute, we can determine which countries the paths pass through.

Figure 2 below illustrates the most popular countries that our traceroutes pass through, for measurements from South Korea to countries in a given region. In keeping with the Olympic theme, we awarded the top three through-countries for each region a Gold, Silver, or Bronze medal based on the percentage of traceroutes that passed through them en route to a country in the region.
The percentages shown will not add to 100% because multiple through-countries will appear in a given traceroute, and there are additional through-countries beyond the top three observed. (A table showing the member countries for each region in Figure 2 can be found at the end of this post. The Western Europe, Eastern Europe, Northern Europe, North America, and East Asia regions were chosen by counting the total number of Olympic athletes from each country and aggregating by region – these five have the most athletes. The South America and Africa regions were included to gain a more complete global perspective.)

Figure 2: Awarding Gold, Silver, and Bronze medals for through-country presence in traceroutes from South Korea

Looking at the measurements to targets in Europe, it is worth noting the differences in the top through-countries for the three regions. To countries in Western and Northern Europe, the United States takes the gold, present in over 40% of the traceroutes, while Germany just edges out the United States in traceroutes to countries in Eastern Europe. The Netherlands took the bronze for presence in traceroutes to Western and Eastern Europe, with Great Britain in that position for Northern Europe. Although the FLAG and SeaMeWe-3 cables connect South Korea with a number of European countries, the underlying peering and transit relationships appear to prefer sending traffic in the other direction, heading east across the United States, instead of west directly to Europe. These longer paths through the United States see higher latencies than those heading directly to Europe.

Not surprisingly, the United States takes the gold for path measurements to North America, present in over three-quarters of the traceroutes, in part because there are no direct submarine cable connections from South Korea to Canada or Mexico.
Japan and Hong Kong place a distant second and third respectively, with a smaller number of traceroutes heading through those countries before heading to/through the United States.

The United States also takes first place as a through-country for traceroutes headed towards countries in South America. Japan again earned a silver medal, but Great Britain's third-place finish is extremely interesting, even if it is present in only two percent of the traceroutes. We have found that in a small number of cases, South America-bound traffic from South Korea takes a path through European countries including Great Britain, before heading across the Atlantic Ocean through the United States. The latencies seen in the paths through Europe to South America are generally higher than those that take an eastern route through the United States.

The results of measurements to countries in East Asia offer little surprise, with Japan taking the gold medal, present as a through-country in just over half of the traceroutes due to multiple connections to South Korea, as well as other countries in the region. Hong Kong and China, both well-connected to South Korea and other regional countries through multiple submarine cables, earned the silver and bronze respectively.

Finally, in examining Internet paths from South Korea to endpoints in Africa, we found some very interesting results. Great Britain earned the gold medal as a through-country in the greatest percentage of traceroutes, followed by the United States and Germany. It is interesting to note that Great Britain's first-place percentage was significantly lower than the gold medal percentage found for most of the other regions (except Eastern Europe) and that the spread between the three top finishers was also fairly small, again in contrast to most of the other regions. These results indicate that for traceroutes to African targets, there is a longer tail of through-countries, representing a greater diversity of paths.
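The medal tally described above boils down to counting, for each country, the share of traceroutes whose geolocated hops include it. A simplified sketch, counting each through-country once per traceroute (function name and input format are assumptions):

```python
from collections import Counter

def medal_table(traceroutes, top=3):
    """Rank through-countries by the share of traceroutes they appear
    in. Each traceroute is the sequence of countries its hops were
    geolocated to; the source and destination countries are assumed
    to have been excluded already. Shares won't sum to 100% because
    one traceroute can cross several countries."""
    counts = Counter()
    for path in traceroutes:
        counts.update(set(path))  # count a country once per traceroute
    total = len(traceroutes)
    return [(country, n / total) for country, n in counts.most_common(top)]
```

Applied to the full measurement set, the top three entries per destination region correspond to the Gold, Silver, and Bronze awards in Figure 2.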
Summary

South Korea is well known for its strong national broadband connectivity, with some of the highest connection speeds in the world, but its connections to the rest of the world are primarily reliant on submarine cables. However, despite the cable connections from South Korea to a number of other countries, millions of traceroutes indicate that the United States is positioned as a through-country in the largest number of those measurements to endpoints in multiple regions around the world. Across the seven geographical regions reviewed in this post, the United States earned four gold medals (Western Europe, Northern Europe, North America, and South America) and two silver medals (Eastern Europe and Africa), failing to be among the top three through-countries only in measurements to East Asian countries.

Appendix: Submarine Cables Connected to South Korea

[Table: the nine submarine cables connected to South Korea – EAC-C2C, FLAG Europe-Asia, SeaMeWe-3, Trans-Pacific Express, APCN-2, Asia Pacific Gateway, FLAG North Asia Loop/REACH North Asia Loop, Korea-Japan Cable Network, and New Cross Pacific Cable System – and the countries they connect to: Australia, Belgium, Brunei, China, Cyprus, Djibouti, Egypt, France, Germany, Greece, Hong Kong, India, Indonesia, Italy, Japan, Jordan, Malaysia, Morocco, Myanmar, Oman, Pakistan, Philippines, Portugal, Saudi Arabia, Singapore, Spain, Sri Lanka, Taiwan, Thailand, Turkey, United Arab Emirates, United Kingdom, United States (Oregon), and Vietnam.]

Appendix: Regional Country Lists

Western Europe: Austria, Belgium, Switzerland, Germany, France, Liechtenstein, Luxembourg, Monaco, Netherlands
Eastern Europe: Bulgaria, Belarus, Cyprus, Czech Republic, Hungary, Moldova, Poland, Romania, Russia, Slovakia, Ukraine, Kosovo
Northern Europe: Aland Islands, Denmark, Estonia, Finland, Faroe Islands, Great Britain, Guernsey, Ireland, Isle of Man, Iceland, Jersey, Lithuania, Latvia, Norway, Sweden, Svalbard and Jan Mayen
East Asia: China, Hong Kong, Japan, North Korea, South Korea, Mongolia, Macau, Taiwan
North America: Canada, Mexico, United States
South America: Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Falkland Islands, French Guiana, South Georgia and the South Sandwich Islands, Guyana, Peru, Paraguay, Suriname, Uruguay, Venezuela
Africa: Angola, Burkina Faso, Burundi, Benin, Botswana, Democratic Republic of the Congo, Central African Republic, Congo, Ivory Coast, Cameroon, Cape Verde, Djibouti, Algeria, Egypt, Western Sahara, Eritrea, Ethiopia, Gabon, Ghana, Gambia, Guinea, Equatorial Guinea, Guinea-Bissau, British Indian Ocean Territory, Kenya, Comoros, Liberia, Lesotho, Libya, Morocco, Madagascar, Mali, Mauritania, Mauritius, Malawi, Mozambique, Namibia, Niger, Nigeria, Reunion, Rwanda, Seychelles, Sudan, Sierra Leone, Senegal, Somalia, South Sudan, Sao Tome and Principe, Swaziland, Chad, Togo, Tunisia, Tanzania, Uganda, Mayotte, South Africa, Zambia, Zimbabwe



A Behind the Scenes Look at Mobile Ad Fraud

How did I use over a gigabyte of mobile data in a single day? Why is my phone as warm as a hot plate? If you have ever asked yourself either of these questions, you might be the victim of a malicious application that is using your device and consuming your mobile bandwidth to facilitate ad fraud. We have recently identified a large population of apps distributed through the Google Play Store that support this behavior.  These apps are installed on devices served by most of the major cell phone carriers around the world, including carriers in the US (AT&T, Verizon, Sprint, and T-Mobile), Europe (KPN, Vodafone, Ziggo, Sky, Virgin, Talk Talk, BT, O2, and T-Mobile), and the Asia Pacific region (Optus, Telstra, iinet, and others). [Note: Mobile providers and Google have been notified.] Just this morning, before this article was published, Buzzfeed broke another ad fraud story. The Mechanics of the Grift Online advertising consists of a complex ecosystem of ad buyers, sellers, exchanges, and data providers. Website operators and application authors have space in their content layouts, and moments in the user experience, that can be integrated with various forms of advertising content.  Making markets that facilitate the exchange of these units of attention and interaction is a challenge; implementing such markets with support for real-time bidding is even harder. Within this complex service mesh are integration points that escape oversight or rest on inaccurate assumptions.  When an advertising slot is sold, the expectation is that a bona fide human end user will see the ad content.  The malicious applications covered in this blog were configured to sidestep this expectation, illegitimately claiming credit for an end user having viewed the ad content. Step one in this fraud scenario: the mobile application visits a web page controlled by the fraud operator. 
This behavior is automated, and the rate of requests increases when the phone is connected to a power source.  The server hosting the web page takes on the appearance of popular websites, including cbsnews[.]com, aplus[.]com, wnyc[.]org, and others.  If you open one of these websites while you’re reading this, you will see that each of these pages contains a number of online advertisements, including static images and video.  The organizations paying for these ads are targeting specific audiences and customer demographics.  In other words, mobile users of NPR might see very different ads from desktop visitors to NASCAR[.]com. This feature of the online ecosystem leads to companies bidding and paying more for their ads to hit their target audience. Once you consider that the website being visited affects the choice of ad target consumer, it’s no surprise that bad actors might attempt to trick advertising platforms into giving them access to higher-value ads.  The ad fraud operator creates a pretense that they are selling advertising on billboard space in Times Square when really, they are selling space inside the warehouse at the end of Indiana Jones.  The ad that someone is paying to have shown to a visitor of wnyc[.]org will be downloaded and viewed, or clicked on, by a process running on the device without any end user being involved.  The ad contract will be considered complete. Money will change hands because the advertisement was delivered with the requisite pixels and cookies tracked. The trick to this fraud relies on setting the Referer [sic] field in the HTTP request header.  The Referer field (RFC 2616, Section 14.36) informs the site you’re visiting which site referred you, as the field’s (misspelled) name implies.  Borrowing from the above examples, the goal is to have the Referer field claim the visitor is coming from a popular website like wnyc[.]org.  
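To make the mechanics concrete, here is a minimal sketch of the forged headers described above. This is our illustration, not the app's actual code, and nothing in HTTP verifies the claim: the Referer is an ordinary client-supplied header the ad platform simply trusts.

```python
# Sketch of the forged headers a fraudulent ad request carries. The spoofed
# site follows the post's example (wnyc.org); the User-Agent is an
# illustrative placeholder that makes the request look like a phone.
def build_forged_headers(spoofed_site: str) -> dict:
    return {
        "Origin": f"http://{spoofed_site}",        # claimed page origin
        "Referer": f"http://www.{spoofed_site}/",  # forged referring page
        "User-Agent": "Mozilla/5.0 (Linux; Android 8.0)",
    }

headers = build_forged_headers("wnyc.org")
print(headers["Referer"])  # http://www.wnyc.org/
```

Any HTTP client can attach these headers; the ad platform sees only the claim, not the page that actually made the request.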
This is accomplished by configuring a webserver, controlled by the fraudster, to answer for the hostname being impersonated.  This is as easy as going into the webserver configuration file (in this case, NGINX) and setting the server_name directive to the impersonated site:

server {
    listen 80;
    server_name wnyc[.]org;
    ...
}

That’s it.  The malicious mobile application sends a GET request to the fraudster’s webserver and loads specially crafted HTML and JavaScript engineered to start the scam.  The web page contains all the required logic and instructions for the application to execute the scam. The HTML landing page that the app visits is a mashup containing components of a legitimate web page with some modified tags and added JavaScript.  As the page is loaded, the mobile device runs the downloaded JavaScript and follows the embedded instructions.  It follows links and passes along the referrers from the fraudster’s server (example below).  With these referrers in place, the pretense of an interaction with the advertising platform is established.

Origin: http://wnyc[.]org
Referer: http://www.wnyc[.]org/

This web server thus becomes a Referer-forging machine. The referrer is the ticket in the door; it qualifies the endpoint to receive higher-quality ads, but the endpoint still needs to fulfill the terms of the contract. The next event is a request made to tagmoxie[.]com; it passes along the Origin and Referer headers, as well as the X-Requested-With header, which contains the name of the system and the name of the application. The URI contains variables that define the pixel height and width, as well as the referring domain, and a variable titled ‘cb’, most likely used for cache busting or as a form of unique token. The path of the GET request includes a /tag/ subdirectory and the supply-side ID related to the transaction. This number later appears as a variable named supplytag. 
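The ad-request URI just described can be sketched as follows. The exact parameter names, the hostname defanged here, and the supply-tag value are illustrative assumptions based on the structure above, not captured traffic:

```python
import random
from urllib.parse import urlencode

# Hedged sketch of the ad request described above: a /tag/ path carrying
# the supply-side ID, plus query variables for the ad's pixel dimensions,
# the forged referring domain, and a 'cb' cache-busting token.
def build_tag_request(supply_tag: str, domain: str, width: int, height: int) -> str:
    params = {
        "width": width,
        "height": height,
        "domain": domain,                # the forged referring domain
        "cb": random.randint(0, 10**9),  # cache buster / unique token
    }
    return f"http://tagmoxie.com/tag/{supply_tag}?{urlencode(params)}"

url = build_tag_request("SUPPLY123", "wnyc.org", 480, 640)
```

The random `cb` value keeps each request looking unique to caches and logging systems, which is exactly what a legitimate ad player would do.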
The response to this GET request is an Extensible Markup Language (XML) object, called a Video Ad Serving Template (VAST), which defines the ad content. The VAST template provides a standard structure for exchanging the metadata required for serving video ads. It also contains details about who is providing the ad, its duration, unique identifier, etc. Additionally, the object defines the path to access the associated media (in this case, a swf file), as well as a series of tracking events. These tracking events are part of the Video Player Ad-Serving Interface Definition (VPAID). VPAID provides a standard interface for integration between the ad content and the video player. This integration provides granular metrics and reporting about the ad viewing experience. For example, as the ad is loaded, the impression needs to be tracked by interacting with an API ( /api/events? ). This ensures that details of the country code, template, etc., are associated and passed along.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<VAST version="2.0">
  <Ad>
    <InLine>
      <AdSystem>Tagcade</AdSystem>
      <AdTitle>VPAID Client</AdTitle>
      <Error><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Error>
      <Error><![CDATA[[ERRORCODE]&tagId=<REDACTED>&playerCb=[CACHEBUSTING]&requestId=<REDACTED>&country=<REDACTED>&cb=<REDACTED>]]></Error>
      <Impression><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>&identity=<REDACTED>&cb=<REDACTED>]]></Impression>
      <Impression><![CDATA[<REDACTED>&playerCb=[CACHEBUSTING]&requestId=<REDACTED>&country=<REDACTED>&cb=<REDACTED>]]></Impression>
      <Creatives>
        <Creative sequence="1">
          <Linear>
            <Duration>00:00:30</Duration>
            <AdParameters>{"requestId":"<REDACTED>","secure":false,"country":"<REDACTED>","domain":"","player_size":"medium","tagId":"<REDACTED>","waterfallId":<REDACTED>,"waterfall":"<REDACTED>","ivtPixelConfigs":"W10="}</AdParameters>
            <MediaFiles>
              <MediaFile delivery="progressive" height="640" width="480" apiFramework="VPAID" type="application/x-shockwave-flash"></MediaFile>
              <MediaFile delivery="progressive" height="640" width="480" apiFramework="VPAID" type="application/javascript"></MediaFile>
            </MediaFiles>
            <TrackingEvents>
              <Tracking event="start"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="firstQuartile"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="midpoint"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="thirdQuartile"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="complete"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="close"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="pause"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="resume"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="acceptInvitationLinear"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="timeSpentViewin"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="otherAdInteraction"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="creativeView"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="mute"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="unmute"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
              <Tracking event="fullscreen"><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></Tracking>
            </TrackingEvents>
            <VideoClicks>
              <ClickTracking><![CDATA[<REDACTED>&supply_tag=<REDACTED>&trid=0&<REDACTED>]]></ClickTracking>
            </VideoClicks>
          </Linear>
        </Creative>
      </Creatives>
    </InLine>
  </Ad>
</VAST>

We know, as the owner of the device, that we did 
not initiate the GET request to the referrer-forging server. We also know that this GET request has nothing to do with the service delivery of the installed mobile application. In this initial interaction, the mobile application passes a unique identifier and the application name, so there is sufficient data to attribute the traffic back to the mobile device. It could be that the traffic from the device itself is the product of the malicious app and that the operator of the webserver is purchasing the traffic in order to execute their own independent scheme. The server responsible for configuring the Referer [sic] cycles through different referring sites to avoid suspicion, e.g., after using wnyc[.]org, the scheme rotates to cbsnews[.]com, univision[.]com, sciencechannel[.]com, etc. From this point in the workflow, we can track the supply_tag and demand_tag variables being used and look for clusters of who is involved. From there, we would try to establish where the guilt falls. At a minimum, this illustrates the complexity of the market. The issue remains that a nefarious actor has control over a device with an IP address associated with a mobile provider. The device is visiting a webserver configured to forge a Referer for a premium web property. The webserver takes on the appearance of the premium web property and instructs the device to display the ads. A follow-up post is in the works that seeks to unravel the knot of who is profiting from this on the ad tech side, and how. A special thanks to Google for taking down the malicious apps, the Moat team for their insight, the telecom companies, and WNYC for all of the fine NPR content and secure web defaults. Interim Prevention and Mitigation As an end user, there are several basic actions you can take to prevent your device and bandwidth from being co-opted in one of these schemes. 
Do some due diligence before installing applications on your devices; even if an application is being distributed from an official source like the Google Play Store, you should still check the application’s developer and the device access it requests. Consulting past reviews can also indicate whether an application might be malicious.  A large number of the malicious applications we studied had a distribution of reviews that skewed toward “bad,” with comments complaining about the applications crashing or behaving poorly.
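For analysts who want to poke at responses like the VAST object shown earlier, the tracking callbacks are easy to enumerate. This is a sketch using Python's standard XML parser against a trimmed-down sample; the tracker URLs here are placeholders, not the real (redacted) endpoints:

```python
import xml.etree.ElementTree as ET

# Trimmed-down VAST 2.0 sample mirroring the structure shown earlier in
# the post; the CDATA URLs are illustrative placeholders.
VAST_SAMPLE = """<VAST version="2.0"><Ad><InLine>
  <AdSystem>Tagcade</AdSystem>
  <Creatives><Creative sequence="1"><Linear>
    <Duration>00:00:30</Duration>
    <TrackingEvents>
      <Tracking event="start"><![CDATA[http://tracker.example/e?supply_tag=X]]></Tracking>
      <Tracking event="complete"><![CDATA[http://tracker.example/e?supply_tag=X]]></Tracking>
    </TrackingEvents>
  </Linear></Creative></Creatives>
</InLine></Ad></VAST>"""

root = ET.fromstring(VAST_SAMPLE)
# Map each VPAID tracking event to the URL the "player" is expected to fire.
events = {t.get("event"): (t.text or "").strip() for t in root.iter("Tracking")}
print(sorted(events))  # ['complete', 'start']
```

Firing these URLs in sequence (start, quartiles, complete) is all it takes to make an unwatched ad look fully watched, which is why the tracking events are the heart of the scam.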



China Activates Historic Himalayan Link To Nepal

On 10 January 2018, China Telecom activated a long-awaited terrestrial link to the landlocked country of Nepal.  The new fiber optic connection, which traverses the Himalayan mountain range, alters a significant aspect of Nepal's exclusive dependency on India, shifting the balance of power (at least for international connectivity) in favor of Kathmandu. Breaking India’s monopoly in providing Internet access to Nepal, China becomes their second service provider. #China #Internet — The Hindu (@the_hindu) January 13, 2018 Following a number of brief trials since mid-November, Nepal Telecom fully activated Internet transit from China Telecom at 08:28 UTC on 10 January 2018, as depicted below.   Background In our 2015 coverage of the earthquake that devastated Nepal, I wrote: "Nepal, as well as Bhutan, are both South Asian landlocked countries wedged between India and China that are dependent on India for a number of services including telecommunications. As a result, each country has been courting Chinese engagement that would provide a redundant source of Internet connectivity." In December 2016, executives Ou Yan of China Telecom Global (CTG) and Lochan Lal Amatya of Nepal Telecom (pictured below) signed an agreement to route IP service through a new terrestrial cable running between Jilong county in China and Rasuwa district in Nepal.   Last week, the fiber link to China finally came to life and established Nepal's first redundant source of international transit.  An operational fiber optic circuit through China will provide Nepal several distinct benefits. First, it provides resiliency if the links to India were ever to go down, whether due to earthquake, fiber cut, or any other catastrophic technical failure. Second, it provides Nepal with additional bandwidth, although it isn't clear that lack of bandwidth has been limiting the country's Internet development.  
Finally, with a second source of international transit, Nepal is in a better position to negotiate terms of service and pricing than when it was entirely captive to India's carriers. Changes in Performance Looking at the performance implications for Nepal Telecom, we can see that traffic from Far East locations will generally speed up along the Hong Kong to Nepal link, while connections from some Western European countries may experience a slowdown. The graphic below plots latencies from our measurement servers in Taipei, Tokyo, Seoul, and Hong Kong to Nepal Telecom.  In each case, the latencies decreased when the new China Telecom service was activated on 10 January.   Surprisingly, latencies from Zurich increased (plot below).   In this example, since the latencies via Bharti Airtel increased at the time of the transit activation, we can surmise that this may be due to a change in the unobservable return path of these round-trip measurements. The new return path is most likely egressing via China Telecom, since the forward path remains unchanged. Nonetheless, a materially new traffic pattern through China has emerged for Nepal. Conclusion The upside of this development for Nepal is clear: cheaper and more resilient international Internet bandwidth.  But China has something to gain as well. Infrastructure investment in countries along China's traditional trade routes is central to its One Belt One Road foreign policy agenda.  By making investments in neighboring countries (like a fiber optic cable to Nepal), China hopes to reap benefits in trade as well as achieve political and military influence. And while many in the United States are focused on the competition between the US and China for regional influence, China and India are locked in a battle for influence in South Asia.  China has made a significant move by connecting its Internet directly to Nepal. 
While similarly situated Bhutan would also benefit from a direct Internet connection to its northern neighbor, recent rising tensions between the world's two most populous nations over tiny Bhutan make this technical advancement unlikely for now.  Good thing Bhutan’s Gross National Happiness indicator doesn't measure Internet resilience.  At least, not yet.



Recent Russian Routing Leak was Largely Preventable

Last week, the IP address space belonging to several high-profile companies, including Google, Facebook and Apple, was briefly announced out of Russia, as first reported by BGPmon. Following the incident, Job Snijders of NTT wrote a post entitled "What to do about BGP hijacks."  He stated that, given the inherent security weaknesses in BGP, things will only improve "the moment it becomes socially unacceptable to operate an Internet network without adequate protections in place," and thus customers would stop buying transit from providers that operate without proper route filtering. Since Job has presented at NANOG about the various filtering methods employed by NTT, I decided to look into how well NTT (AS2914) did in this particular incident.  While a handful of the 80 misdirected routes were ultimately carried on by AS2914 to the greater internet, NTT didn't contribute to the leaking of any of the major internet companies, such as Facebook, Google, Apple, etc.  In fact, when one analyzes the propagation of every one of these leaked routes, a pattern begins to emerge. Route Leaks by AS39523 On 12 December 2017, AS39523 announced 80 prefixes (only one of which was theirs) during two different 3-4 minute intervals.  Below is a visualization of the origins of these prefixes over a three-hour span, highlighting the portion that was originated by AS39523.  Some prefixes were in circulation already, but some were either more-specifics or less-specifics that were not normally routed - that’s why there are peaks into the white space of the graph when we aggregate across all of the prefixes. Regardless, AS39523 announced all these routes through Russian transit provider Megafon (AS31133).  In Dyn's IP Transit Intelligence, we track seven international transit providers for Megafon: Cogent, Level3, Deutsche Telekom, Telecom Italia Sparkle, NTT, Hurricane Electric, and Telia. 
The leaked Russian networks were carried by all of Megafon's transit providers, such as this prefix from Rostelecom (the Russian state telecom). But when it came to prefixes belonging to Facebook, Google (and YouTube), Microsoft, Twitch, Apple, and Riot Games, only Hurricane Electric, among Megafon's transit providers, carried these routes on to the greater internet.  Many of Megafon's settlement-free peers also accepted these errant routes, but without nearly the global impact. In the three graphics below, we can see propagation profiles of three prefixes ( of Google, of Apple, and of YouTube) that were leaked via Megafon.  After placing Megafon's various peers into the "Other" category, Hurricane Electric is the only transit provider that appeared upstream of AS31133 during the leaks. Yesterday I contacted Telia, who confirmed to me that, like NTT, it was their route filtering that prevented them from carrying the leaked prefixes from the major internet companies.  Considering all seven of Megafon's international transit providers, it appears that Hurricane Electric was alone in failing to implement the type of route filtering that would have prevented this leak from being circulated across the broader internet. Conclusion If we can set aside the "Russia Attacks!" rhetoric around this incident, there are some things we can learn from it.  For example, providers need to be more parsimonious in their AS-SET definitions.  As Qrator Labs noted in their write-up on this incident, some providers have added so many ASNs to their AS-SETs as to render them useless as a tool for route filtering. But despite this limitation, 6 of the 7 transit providers for Megafon were still able to block erroneous BGP announcements pertaining to numerous major internet companies.  Had the 7th also done so, we might not be discussing this incident at all.



2017 Internet Intelligence Roundup

With 2017 drawing to a close, year-end lookbacks litter media and the blogosphere like so many leaves on the ground. (Or piles of snow, depending on where you are.) Many tend to focus on pop culture, product/movie/music releases, or professional sports. However, given the focus of Oracle Dyn’s Internet Intelligence team on monitoring and measuring the Internet, we’re going to take a look back at significant Internet “events” of the past year, and how they have impacted connectivity for Internet users around the world. Hurricanes Harvey, Irma, and Maria Cause Internet Disruptions In late August and through September, an active Atlantic hurricane season spawned a number of destructive storms that wreaked havoc across the Caribbean, as well as Florida and Texas in the United States. On the Caribbean islands that were hardest hit by the storms, the resulting physical damage was immense, severely impacting last-mile Internet infrastructure island-wide. This was also the case in Florida and Texas, though on a much more localized basis. On September 25, we looked at the impacts of these hurricanes on Internet connectivity in the affected areas, noting that while some “core” Internet components remained available during these storms thanks to hardened data center infrastructure, backup power generators, and comprehensive disaster planning, local infrastructure – the so-called “last mile” – often didn’t fare as well. Towards the end of August, Hurricane Harvey forced hundreds of network prefixes in Texas offline, while a few days later, Hurricane Irma caused similar problems in Florida and Puerto Rico. Sint Maarten was also hit extremely hard by Hurricane Irma, which caused complete unavailability of network prefixes associated with the island nation. 
Internet Impacts from #HurricaneHarvey and #HurricaneIrma in #Texas, #Florida and #PuertoRico — InternetIntelligence (@InternetIntel) September 14, 2017 Nearly two weeks later, Hurricane Maria slammed into Puerto Rico, causing problems for local Internet connectivity as it made landfall. The power outages resulting from the storm caused last-mile connectivity to deteriorate, as we observed through a near-complete loss of recursive DNS queries coming from the island. Connectivity continued to struggle a week after Maria, and a recent Internet Intelligence blog post examined the state of Puerto Rico’s post-hurricane Internet connectivity. Politically Motivated Internet Shutdowns Nationwide Internet shutdowns for political reasons arguably had their genesis in a January 2011 Internet disruption that occurred in Egypt, which was followed in short order by similar disruptions in Bahrain, Libya, and Syria. These outages took place during what became known as the Arab Spring, highlighting the role that the Internet had come to play in political protest, and heralding the wider use of national Internet shutdowns as a means of control. A November blog post noted that while these shutdowns took place in the Middle East and Northern Africa, they have shifted over the last several years to become more common in sub-Saharan Africa. Such outages continued over this past year. In mid-November, Equatorial Guinea's government ordered a complete Internet blackout ahead of an election that was expected to keep the party of longtime President Teodoro Obiang Nguema in power. This blackout was in addition to the blocking of access to opposition Web sites, which began in 2013. In September, the government in Togo blocked access to mobile Internet connectivity amid anti-government protests. Following months of protests, Cameroon’s government ordered an Internet blackout in English-speaking regions of the country starting in mid-January. 
This outage lasted until April, and Internet connectivity in these regions was again disrupted in early October, apparently in relation to mass protests. As of late November, this latest disruption was still in place. Multiple Exam-Related Outages in Syria & Iraq Students around the world have long attempted to gain an advantage on standardized tests by whatever means necessary. Of late, test-related information has been shared via the Internet, leading the governments of Syria and Iraq to sever Internet connectivity within their respective countries in an effort to prevent cheating on such tests. The Iraqi government employed such techniques in 2015 and 2016, while the Syrian government also did so several times in 2016. In February 2017, the Iraqi government took down the country’s Internet connectivity for multi-hour periods across multiple days. As we noted at the time, the Internet outages cover the period during which exam materials are physically distributed to testing centers, a process that typically begins at 5:00 am on exam day. The outages are intended to prevent images of the questions from the exams, along with the answers, from being shared via social media. Similar outages were also observed in Iraq during the first half of June. Earlier today: 4th internet blackout in #Iraq, 9th in #Syria to combat cheating on exams. — InternetIntelligence (@InternetIntel) June 13, 2017 In late May, Syria began a series of nationwide Internet disruptions designed to combat cheating on exams. The outages occurred nine times over the course of two weeks. The Syrian Internet also appeared to go completely offline on July 12, but we don’t believe that outage was related to any academic testing taking place within the country. Leaked Routes Disrupt Connectivity in Japan and the U.S. Route leaks occur when a network provider inadvertently announces routes to prefixes other than the ones they are responsible for. 
Sometimes a provider will announce routes learned from a peer that were not supposed to be shared any further. In other cases, the leaking provider “masquerades” as the origin of the route, while more significant leaks occur when a provider announces a full routing table. Depending on the type of leak and how widely these leaks are propagated across upstream providers, the ultimate impact is that traffic to affected network prefixes is redirected, lost, or intercepted; the severity can range from unnoticed to catastrophic. Blog posts we published in 2015 and 2014 looked at several examples of route leaks and their impacts, while another 2015 post looked at the impact of a routing leak on the availability of Google services. However, in late August 2017, Google turned the tables, leaking over 160,000 prefixes to Verizon, who accepted the routes and passed them on, severely impacting major Japanese telecommunications providers including KDDI, NTT’s OCN, and IIJ, disrupting Internet connectivity for users across Japan. The leaked routes were “more specifics” of routes already in the global routing table -- these “more specific” routes cover smaller ranges of IP addresses, and are preferred to less-specific routes within the BGP route selection process. These “more specific” routes were believed to be used by Google for traffic shaping within their network, but when they were leaked to the world, they were selected by external network providers over existing less specific routes. This ultimately resulted in traffic between the impacted Japanese providers getting routed through Google’s network (in Chicago!), causing much of it to be dropped because of high latency or limited bandwidth. Just a few months later, a route leak from Level 3 (now CenturyLink) disrupted Internet connectivity for millions of Internet users across the United States and around the world. 
On November 6, Level 3 began globally announcing thousands of BGP routes that had been learned from customers and peers and that were intended to stay internal to Level 3.  By doing so, Internet traffic to major subscriber networks like Comcast and Bell Canada, as well as major content providers like Netflix, was mistakenly sent through Level 3. Our analysis indicated that other impacted networks included RCN, Giga Provedor de Internet Ltda (Brazil), Cablevision S.A. (Argentina), and even the Weill Cornell Medical College in Qatar. Based on our traceroute measurements, the leak ultimately resulted in increased latencies to reach the affected network prefixes, reportedly causing users to experience delays and problems in reaching some Web sites. A subsequent Tweet from Level 3’s Network Operations Center took responsibility but downplayed the impact, stating “On Nov. 6, our network experienced a disruption affecting some IP customers due to a configuration error. All are restored.” Attempted Censorship Through BGP Route Hijacking Authoritarian governments have long attempted to censor content for a variety of reasons, using a number of techniques. As more content (of all types) has moved onto the Internet, governments have often resorted to filtering end user Web and DNS requests, but the effectiveness of doing so has been inconsistent. However, hijacking IP address space belonging to content and/or hosting providers can allow a state telecom to functionally block access to sites served from those IP addresses for users on downstream networks in the country. While the routing announcements that implement the hijack are likely intended to stay within the country’s borders, sometimes they leak out. One example of this was Pakistan’s attempted block of YouTube in 2008. 
In January 2017, we observed TIC, the Iranian state telecommunications provider, attempt to do something similar, hijacking IP address space belonging to a provider that hosts numerous Web sites featuring adult content. Unfortunately, these routing announcements made their way to Omantel, which announced them to other network providers, meaning that users outside of Iran may have been unable to access Web sites hosted at the hijacked provider. However, rapid action by Oracle Dyn team members enabled the hosting provider to quickly regain control of their address space. A few days later, TIC announced BGP hijacks of address space belonging to another hosting provider that serves adult content, as well as of 20 individual IP addresses belonging to Apple’s iTunes service. Iranian state telecom hijacking IP space that is hosting adult websites. Censorship leaking out of Iran? #bgphijack — InternetIntelligence (@InternetIntel) January 6, 2017 In May, Ukrainian President Petro Poroshenko enacted a ban on Russia’s four most prominent Internet companies, reportedly in the name of national security.  The ban included the two most widely used social media websites, VKontakte and Odnoklassniki, as well as email service provider and search engine Yandex. In late July, Ukrainian service provider UARNet began announcing new BGP routes that were hijacks of the IP address space belonging to these Russian companies, presumably as a means of implementing the previously announced ban. However, similar to what we have observed in the past, these hijacked routes escaped the country’s borders. Latency Impacts of Submarine Cable Damage and Repair Submarine cables span the globe like an ever-growing spider web, carrying Internet traffic between continents, and bringing international Internet connectivity to island nations. However, they are also prone to damage from errant ship anchors, as well as intentional sabotage. 
When cable breaks occur, observed latencies for Internet traffic to/from these countries generally increase as the traffic fails over to higher-latency backup satellite connections. Conversely, when a new submarine cable connection is activated, observed latencies for Internet traffic in countries with these new connections generally drop. Over the course of 2017, we saw examples of both. Starting at the end of December 2016, the Marshall Islands saw a nearly three-week period of reduced connectivity resulting from a submarine cable break -- likely the HANTRU1 cable. The break caused Internet traffic from the islands to transit a backup satellite connection with latency over 2x higher than the submarine cable. In mid-January 2017, damage to the Asia-America Gateway Cable System (AAG) and the Tata TGN-Intra Asia (TGN-IA) cable impacted Internet connectivity in Vietnam, resulting in latencies approximately 50% higher than normal, although the impact lasted just a few days. In late January, the Eastern Africa Submarine System (EASSy) cable was cut, crippling Internet connectivity to Madagascar. Based on measurements to Telecom Malagasy (TELMA), a leading telecommunications company in Madagascar, connectivity was significantly reduced for approximately six hours before a backup link to satellite provider O3b was activated. In late June, the EASSy cable was again cut, significantly impacting connectivity to Somalia. Satellite connectivity through O3b was again used as a fall-back, resulting in latencies approximately one-third higher than normal. The SeaMeWe-3 (SMW3) cable connects a number of countries in Europe, Africa, and Asia, as well as landing in Perth, Australia. In late August, damage to the cable caused latencies to Perth to spike, with repairs estimated at the time to take until mid-October. In November, another cut to the AAG cable again impacted connectivity to Vietnam.
However, in this case, we observed that the cable cut caused latencies along some paths to increase as expected, but that latencies along other paths actually dropped because they were now taking a more efficient route instead of “tromboning” through a more distant connection point.

Pacific island nation #Palau activated its 1st submarine cable yesterday. At 20:22 UTC on 21-Nov, PNCC completely switched from MEO satellite to subsea cable-based transit reducing latencies (see graphic). The new SEA-US spans the Pacific Ocean. — InternetIntelligence (@InternetIntel) November 22, 2017

The tiny Pacific island nation of Palau activated its first submarine cable in November. The country previously relied upon an O3b satellite connection for Internet connectivity, and was able to reduce latency by switching to the SEA-US cable.

Cuba & North Korea

Cuba and North Korea have historically been two of the least Internet-connected countries in the world. However, during 2017, both saw improvements to their international Internet connectivity. (In-country connectivity for end users is still severely limited in both countries.) In early January 2017, we observed C&W Networks start to provide transit for ETECSA, marking the first time that a U.S. telecommunications firm provided direct transit to the Cuban telecom provider. C&W joined international providers Tata, Telefonica, and Intelsat in providing transit to ETECSA. Our measurements indicated that the C&W transit was being served from Boca Raton, Florida, with a 35ms round-trip time to Havana, making it the lowest-latency link to the United States. North Korea has historically had a single Internet provider, Star JV, which has relied on China Unicom for international Internet connectivity. However, on October 1, we observed that North Korea had gained a new connection to the global Internet through Russian fixed-line provider Transtelecom (TTK).
However, subsequent measurements appeared to indicate that the new transit relationship was somewhat unstable. While it is impossible to tell from our Internet measurement data alone how TTK’s network connects into North Korea, it may be via the Friendship Bridge, a railway crossing over the Tumen River that connects Khasan in Russia with Tumangang in North Korea, as it is the only connection between the two countries.

North Korea activated first internet link via Russia as US steps up campaign against Pyongyang — InternetIntelligence (@InternetIntel) October 2, 2017

With just a couple of Internet providers at its international border, North Korea is at severe risk of Internet disconnection. As such, the country is susceptible to complete Internet outages, such as those observed on August 14 and July 31 -- the reasons for both are unknown. Cuba also saw a couple of unexplained outages on November 8, though both were brief, as our observations indicated that they lasted for approximately 10 minutes each.

Working Together to Secure the Internet

On August 17th, 2017, multiple Content Delivery Networks (CDNs) and content providers were subject to significant attacks from a botnet dubbed WireX, composed primarily of Android devices running malicious applications designed to generate DDoS traffic. Researchers from Akamai, Cloudflare, Flashpoint, Google, Oracle Dyn, RiskIQ, Team Cymru, and other organizations cooperated to combat the botnet. Collaborative work across these companies included identifying the associated Web traffic and the IP addresses originating the requests, identifying the applications that housed the malware and removing them from app stores, and understanding the underlying code and command & control workflow. While certainly not the first instance of cross-industry collaboration, it is an example of how informal sharing can have a dramatically positive impact for the potential victims and the Internet as a whole.

What Else?
In addition to the events highlighted above, which were shared via blog posts and Tweets from Oracle Dyn’s Internet Intelligence team, 2017 also saw:

- Hundreds of additional smaller, brief network outages and disruptions that we detected, but that weren’t significant enough to share on social media
- Other submarine cable cuts and activations, as well as an ongoing push by content/infrastructure providers like Google, Facebook, and Amazon to deploy their own cables
- Many additional route leaks and hijacks

Hopefully the rest of December is quiet, Internet-wise. But if it isn’t, be sure to follow us on Twitter at @InternetIntel, and on the Internet Intelligence blog, for the latest information and analysis.



Puerto Rico's Slow Internet Recovery

On 20 September 2017, Hurricane Maria made landfall in Puerto Rico. Two and a half months later, the island is still recovering from the resulting devastation. This extended phase of recovery is reflected in the state of the local internet and reveals how far Puerto Rico still has to go to make itself whole again. While most of the BGP routes for Puerto Rico have returned, DNS query volumes from the island are still only a fraction of what they were on September 19th — the day before the storm hit. DNS activity is a better indicator of actual internet use (or lack thereof) than the simple announcements of BGP routes. We have been analyzing the impacts of natural disasters such as hurricanes and earthquakes going back to Hurricane Katrina in 2005. Compared to the earthquake near Japan in 2011, Hurricane Sandy in 2012, or the earthquake in Nepal in 2015, Puerto Rico's disaster stands alone with respect to its prolonged and widespread impact on internet access. The following analysis tells that story.

DNS statistics

Queries from Puerto Rico to our Internet Guide recursive DNS service have still not recovered to pre-hurricane levels, as illustrated in the plot below. Earlier this week, on December 4th, we handled only 53% of the query volume from Puerto Rico that was received on September 18th, just before the hurricane. Both dates are Mondays, hopefully ruling out possible day-of-the-week effects. Queries from Puerto Rico to our authoritative DNS services are also reduced from prior to the hurricane, but not as much as those to our recursive DNS service. This may be because caching effects are more pronounced with our authoritative DNS services, since they handle queries for a smaller set of domains than our recursive DNS service. Additionally, we may have lost some Internet Guide clients if those computers reverted to a default DNS configuration upon returning to service. Regardless, the volume is still lower than pre-hurricane levels for authoritative DNS.
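The Monday-to-Monday comparison boils down to a simple ratio of query volumes. A sketch using placeholder counts (the absolute numbers below are illustrative, not our actual query data; only the 53% ratio comes from the measurements above):

```python
# Compare same-weekday DNS query volumes to estimate recovery.
# The counts are placeholders chosen to reproduce the 53% ratio.
pre_storm_queries = 1_000_000   # Monday, September 18th (placeholder)
post_storm_queries = 530_000    # Monday, December 4th (placeholder)

recovery = post_storm_queries / pre_storm_queries
print(f"Recursive DNS recovery: {recovery:.0%}")
```

Comparing the same weekday avoids conflating recovery with the normal weekly rhythm of DNS traffic.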
On December 4th, we handled 71% of the query volume from Puerto Rico as compared to September 18th. Based on these two figures (53% and 71%), we estimate that internet service in Puerto Rico is only a little more than half of where it was before the hurricane.

BGP and Traceroute measurement statistics

The graphic below shows the impact of the hurricane on the routed networks of Puerto Rico, colored by the major providers. Many of these BGP routes were withdrawn as the hurricane came ashore and the island suffered what has been labeled the largest power outage in U.S. history. By early November, most of these routes were once again being announced in the global routing table. However, damage to last-mile infrastructure meant that many Puerto Ricans were still unable to obtain internet access. Our traceroute measurements to Puerto Rico, illustrated below, tell a similar story — a steep drop-off on 20 September 2017, followed by a long, slow recovery that appears to come incrementally as different pieces of Puerto Rican infrastructure come back online. Despite an island-wide power outage, some networks in Puerto Rico (like Critical Hub Networks) continued to be reachable throughout the period of analysis. While the plot below shows a steeper dip than the BGP-based plot above, the responding hosts we measured are often part of the core infrastructure. These hosts are more likely to be connected to backup power than access-layer networks and could, like the BGP routes above, overstate the degree of recovery.

Submarine cable impact

Perhaps less appreciated about this incident is Hurricane Maria's impact on connectivity in several South American countries. Puerto Rico is an important landing site for several submarine cables that link South America to the global internet. The cable landing station serving Telecom Italia's Seabone network had to be powered down due to flooding.
A statement from Seabone read: We must inform you that Hurricane Maria (Category 5) has impacted Puerto Rico causing serious damage and flooding on the island. We had to de-energize our nodes at the station to avoid serious damage to the equipment. As a result, in the early afternoon on 21 September 2017, we observed traffic shifts away from Telecom Italia in multiple South American countries as the submarine cable became unavailable. To illustrate the impact, below are four South American ASNs that experienced a loss of one of their transit providers at this moment in time. Cablevision Argentina (AS10481) is from Argentina, while the other three are from Brazil. Brazilian provider Citta Telecom (AS27720) lost service from Eweka Internet, while the others lost Telecom Italia transit. Additionally, following the hurricane, Venezuelan incumbent CANTV announced that their international capacity had been cut by 50% due to storm-related submarine cable damage. The announcement was met with skepticism from a citizenry increasingly subjected to censorship and surveillance by their government.

YEAH RIGHT, ANOTHER ONE OF THEIR LIES! Cantv reports that Hurricane Maria is causing internet failures in Venezuela (translated from Spanish) — NoticiasSimonBolivar (@NoticiasSB1) September 22, 2017

However, our data shows that the impairment of CANTV's international links aligns with other outages in the region due to the effects of the hurricane. The plots below show latencies to CANTV from several cities around the world spiking on 21 September 2017 after the submarine cable station in Puerto Rico was flooded.

Conclusion

Immediately following Hurricane Maria's arrival in Puerto Rico, Sean Donelan, Principal Security Architect of Qmulos, began dutifully posting status updates he had collected to the NANOG email list about the connectivity situation on the island. In addition, a website was set up to collect various metrics about the status of the recovery.
Now with over two months of hindsight, we can truly appreciate just how devastating the hurricane was in many respects, other than simply internet impairment.  Puerto Rico may no longer be in the headlines as it was just after the storm, but the resources required to get this part of the United States back on its feet are truly extensive. Here is more information about how you can help: Puerto Rico hurricane victims still need help. Here’s what you can do — PBS NewsHour (@NewsHour) November 10, 2017



The Migration of Political Internet Shutdowns

In January 2011, what was arguably the first significant disconnection of an entire country from the Internet took place when routes to Egyptian networks disappeared from the Internet’s global routing table, leaving no valid paths by which the rest of the world could exchange Internet traffic with Egypt’s service providers. It was followed in short order by nationwide disruptions in Bahrain, Libya, and Syria. These outages took place during what became known as the Arab Spring, highlighting the role that the Internet had come to play in political protest, and heralding the wider use of national Internet shutdowns as a means of control. “How hard is it to disconnect a country from the Internet, really?” After these events, and another significant Internet outage in Syria, this question led to a blog post published in November 2012 by former Dyn Chief Scientist Jim Cowie that examined the risk of Internet disconnection for countries around the world, based on the number of Internet connections at their international border. “You can think of this, to [a] first approximation,” Cowie wrote, “as the number of phone calls (or legal writs, or infrastructure attacks) that would have to be performed in order to decouple the domestic Internet from the global Internet.”

Defining Internet Disconnection Risk

Based on our aggregated view of the global Internet routing table at the time, we identified the set of border providers in each country: domestic network providers (autonomous systems, in BGP parlance) who have direct connections, visible in routing, to international (foreign) providers. From that data set, four tiers were defined to classify a country’s risk of Internet disconnection. A summary of these classifications is below - additional context can be found in the original blog post: If a country has only 1 or 2 service providers at its international frontier, it is classified as being at severe risk of Internet disconnection.
With fewer than 10 service providers at its international frontier, a country is classified as being at significant risk of Internet disconnection. A country’s risk of Internet disconnection is classified as low risk with between 10 and 40 internationally-connected service providers. Finally, countries with more than 40 providers at their borders are considered to be resistant to Internet disconnection. The original blog post classified 223 countries and territories, with the largest number of them classified as being at significant risk of Internet disconnection. A February 2014 update to the original post, entitled “Syria, Venezuela, Ukraine: Internet Under Fire” examined changes observed in the 16 months since the original post, highlighting both increases and decreases in Internet disconnection risk level across a number of countries. The post noted the continued fragility of Internet connectivity in Syria, owing in part to its classification of being at severe risk of Internet disconnection, as well as mentioning the lack of nationwide Internet disruptions in Venezuela despite periodic slowdowns and regional access disruptions. It has been five years since the original blog post, and over three and a half years since the follow-up post, so we thought that it would be interesting to take a new look at Internet resiliency around the world. Has connection diversity increased, and does that lead to a potential decrease in vulnerability to Internet shutdown? However, as the 2014 blog post notes, “We acknowledge the limitations of such a simple model in predicting complex events such as Internet shutdowns. 
Many factors can contribute to making countries more fragile than they appear at the surface (for example, shared physical infrastructure under the control of a central authority, or the physical limitations of a few shared fiber optic connections to distant countries).” For instance, at the time of the original (2012) post, New Zealand relied primarily on the Southern Cross submarine cable connection to Australia for international Internet connectivity, despite our data showing dozens of border network providers. And while Iraq has gained numerous border relationships since 2012, most of the country (except for Kurdistan in the north) relies on a national fiber backbone which the Iraqi government has shut down dozens of times since 2014 to combat cheating on student exams, stifle protests, and disrupt ISIS communication. In addition, it’s worth recognizing that there likely isn’t a meaningful difference in resilience between a country with 39 border providers (which would classify it as “low risk”) and one with 41 border providers (which would classify it as “resistant”). With these caveats in mind, an updated world map reflecting the risk of Internet disconnection as classified in our 2017 data set is presented below.

What’s Happened Since Then?

In reviewing other notable Internet shutdowns that have occurred since the 2014 post was published, a few things stood out: The Internet in Syria remains fragile, with some nationwide outages occurring due to fighting/violence, while others occur more regularly during school testing periods. Similarly, Iraq’s international Internet connectivity remains tenuous, also seeing outages related to both fighting and school testing. North Korea’s Internet connectivity has seen a number of nationwide outages over the last several years, for reasons that remain unclear. (However, since the country recently added a second network provider, it will be interesting to see if these outages continue to occur.)
Subsea cable cuts, damage to fiber, or other infrastructure issues (including fires and power outages) have significantly impacted Internet availability in countries including Madagascar, the Marshall Islands, Libya, Azerbaijan, Algeria, French Polynesia, and Colombia. However, the most interesting observation was the ‘migration’ of politically-motivated nationwide Internet disruptions. The outages that occurred during the Arab Spring time frame were largely concentrated in North Africa and the Middle East, shifting over the last several years into sub-Saharan Africa. This shift has not gone unnoticed, with online publication Quartz also highlighting the growing trend of African governments blocking the Internet to silence dissent, and the United Nations taking note as well. In addition, as these shutdowns are now a more regular occurrence, both in Africa and in other areas around the world, it is also worth looking at the financial impact that they have on affected countries. Nearly three years ago, in January 2015, an Internet shutdown was put into place in Kinshasa, the capital of the Democratic Republic of Congo, after bloody clashes took place between opponents of President Joseph Kabila and police.  Banks and government agencies reportedly regained access after four days, while subscribers remained offline for three weeks. Almost two years later, in December 2016, an Internet shutdown was ordered as a means of blocking access to social media sites to prevent mobilization of those protesting against the president’s stay in office beyond the two-term limit. DR Congo blocks Internet in Kinshasa after bloody clashes between police & protestors — InternetIntelligence (@InternetIntel) January 20, 2015   While many governments force Internet shutdowns that last for just a few hours, or across multiple days or weeks, Gabon combined both in September 2016, implementing a nightly “Internet curfew” that lasted for 23 days. 
The regular Internet disruptions occurred on the heels of a disputed national election that ultimately saw the incumbent president win a second term by a slim vote margin. International Internet connectivity was also reportedly restricted in the week before the election. With Internet access largely concentrated through Gabon Telecom, the country is at severe risk of Internet shutdown. Internet curfew in Gabon lifted (23 nightly blackouts later). @PresidentABO sworn in after disputed election — InternetIntelligence (@InternetIntel) September 29, 2016   In late November 2016, Internet connectivity in Gambia was shut down ahead of a national election that saw the country’s president of more than 20 years upset by the opposition candidate. Published reports noted that the opposition party relied on Internet messaging apps to organize rallies and demonstrations. Efforts by the incumbent party to disrupt Internet connectivity were presumably intended to derail this organizing, as well as to limit potential protests depending on the outcome of the election. Govt of Gambia orders Internet blackout ahead of national election. Service down since 20:05 UTC on 30-Nov. — InternetIntelligence (@InternetIntel) December 1, 2016   In Cameroon, Internet connectivity was blocked in English-speaking parts of the country starting in January 2017, reportedly affecting about 20 percent of the population. The government reportedly suspended Internet service for users in the Southwest and Northwest provinces after a series of protests that resulted in violence and the arrest of community leaders. Ten months later, Internet access remains unstable in Cameroon, highlighted by the #BringBackOurInternet hashtag on Twitter. Govt-directed blackout in English-speaking regions of Cameroon began 17-Jan. 
@africatechie @EricAcha1 — InternetIntelligence (@InternetIntel) February 13, 2017

In Togo, throughout the fall of 2017, protesters have been calling for the resignation of President Faure Gnassingbe, who has been in power since his father died in 2005. In response, the country’s government has limited Internet access in an effort to prevent demonstrators from organizing on social media, and has also blocked text messaging. Published reports indicate that the mobile messaging app WhatsApp was a particular target, although some users resorted to VPNs to maintain access to the tool. Looking at the graph below, the Internet restrictions have not generally been implemented through broad manipulation or removal of routes -- while some instability is evident, there have not been widespread outages, as have been seen in the past in countries such as Syria.

#Togo downs internet service amid huge anti-government protests. — InternetIntelligence (@InternetIntel) September 6, 2017

Most recently, the government of Equatorial Guinea broadly blocked access to the Internet ahead of a nationwide election that was widely expected to keep the ruling party in power. Local service providers GuineaNet and IPXEG, among others, were taken completely offline. This disruption followed the blocking of access to opposition Web sites, which has been going on since 2013, as well as blocking access to Facebook, which was put into place when the electoral campaign started on October 27. Government in #EquatorialGuinea ordered internet blackout ahead of yesterday's election. GuineaNet and IPXEG among local ISPs that went offline. Troubling trend of internet shutdowns around African elections continues unabated.
— InternetIntelligence (@InternetIntel) November 13, 2017

“Swift and Dramatic” Economic Damage

In 2011, the Organisation for Economic Co-operation and Development (OECD) estimated that the economic impact of Egypt’s five-day national Internet shutdown “incurred direct costs of at minimum USD 90 million.” It estimated that lost revenues due to blocked telecommunications and Internet services accounted for approximately USD 18 million per day. However, the OECD also noted that “this amount does not include the secondary economic impacts which resulted from a loss of business in other sectors affected by the shutdown of communication services e.g. e-commerce, tourism and call centres.” The true cost to a country of a nationwide Internet shutdown can be significant. An October 2016 study produced by Deloitte reached the following conclusions: “The impacts of a temporary shutdown of the Internet grow larger as a country develops and as a more mature online ecosystem emerges. It is estimated that for a highly Internet connected country, the per day impact of a temporary shutdown of the Internet and all of its services would be on average $23.6 million per 10 million population. With lower levels of Internet access, the average estimated GDP impacts amount to $6.6 million and to $0.6 million per 10 million population for medium and low Internet connectivity economies, respectively.” The study also noted that if Internet disruptions become more frequent and longer-term in nature, these impacts are likely to be magnified. The Brookings Institution also published a report in October 2016 that looked at the cost of Internet shutdowns over the previous year. The report’s headline claim that “Internet shutdowns cost countries $2.4 billion last year” was cited in publications including Techcrunch and an Internet Society Policy Brief. However, within the report, so-called Internet shutdowns are broken down into a number of categories.
By their count, 36 instances of “national Internet” shutdowns led to just under 20 days of aggregate downtime, responsible for almost USD 295 million of financial impact. In contrast, blocking access to apps at a nationwide level accounted for nearly half of the claimed financial impact. The costs of a nationwide Internet shutdown to a country’s economy are clearly very real. In an October 2016 article in The Atlantic on this topic, my colleague Doug Madory noted “The hope is that a government would be less likely to order an Internet blackout if it knew the negative impacts of such a decision in terms of hard dollar figures." We can hope that in the future, national governments will recognize that the money these nationwide outages cost would be better redirected into improving Internet connectivity for citizens and businesses across their countries.

Conclusion

In 2012, we published the “Could It Happen In Your Country?” analysis in the aftermath of the Internet disruptions of the Arab Spring. Since then, we have observed and documented the trend of national Internet blackouts as they have migrated, most recently, to Africa. While the studies by Deloitte and Brookings have pointed out the severe negative economic consequences of these blackouts, NGOs like AccessNow and Internet Sans Frontières do advocacy work by drawing attention to the adverse impacts on human rights when governments decide to cut communications lines. The role we play, and have played for many years, is to inform the Internet blackout discussion with expert technical analysis. We can only hope that our combined efforts help to reduce the frequency of future government-directed Internet disruptions. Given the number of blackouts we’ve observed in recent months, help can’t come fast enough.
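As a footnote, the per-day, per-10-million-population figures from the Deloitte study quoted above can be turned into a back-of-envelope shutdown cost calculator. A minimal sketch (the example inputs are illustrative, not taken from the report):

```python
# Deloitte's October 2016 estimates: per-day cost of a total Internet
# shutdown, in USD per 10 million population, by connectivity level.
COST_PER_DAY_PER_10M = {"high": 23.6e6, "medium": 6.6e6, "low": 0.6e6}

def shutdown_cost_usd(population: float, connectivity: str, days: float) -> float:
    """Rough economic impact of a shutdown lasting `days` days."""
    return COST_PER_DAY_PER_10M[connectivity] * (population / 10e6) * days

# Hypothetical example: a 5-day shutdown in a highly connected
# country of 90 million people.
print(f"${shutdown_cost_usd(90e6, 'high', 5):,.0f}")  # $1,062,000,000
```

Even at the "low connectivity" rate, the figures add up quickly once a shutdown stretches past a few days.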



Widespread Impact Caused by Level 3 BGP Route Leak

For a little more than 90 minutes yesterday, internet service for millions of users in the U.S. and around the world slowed to a crawl. Was this widespread service degradation caused by the latest botnet threat? Not this time. The cause was yet another BGP routing leak — a router misconfiguration directing internet traffic from its intended path to somewhere else.

On Nov. 6, our network experienced a disruption affecting some IP customers due to a configuration error. All are restored. — Level 3 Network Ops (@Level3NOC) November 6, 2017

While not a day goes by without a routing leak or misconfiguration of some sort on the internet, it is an entirely different matter when the error is committed by the largest telecommunications network in the world. In this blog post, I’ll describe what happened in this routing leak and some of the impacts. Unfortunately, there is no silver bullet to completely remove the possibility of these occurring in the future. As long as we have humans configuring routers, mistakes will take place.

What happened?

At 17:47:05 UTC yesterday (6 November 2017), Level 3 (AS3356) began globally announcing thousands of BGP routes that had been learned from customers and peers and that were intended to stay internal to Level 3. As a result, internet traffic to large eyeball networks like Comcast and Bell Canada, as well as major content providers like Netflix, was mistakenly sent through Level 3’s misconfigured routers. Traffic engineering is a delicate process, so sending a large amount of traffic down an unexpected path is a recipe for service degradation. Unfortunately, many of these leaked routes stayed in circulation until 19:24 UTC, leading to over 90 minutes of problems on the internet.

Bell Canada (AS577)

Anyone else having Bell internet issues? I can’t even connect with their support people!
#bell #bellcanada — Andrew J Dow (@andrewjdow) November 6, 2017

Bell Canada (AS577) typically sends Level 3 a little more than 2,400 prefixes for circulation into Level 3’s customer cone. During the routing leak yesterday, that number jumped up to 6,459 prefixes – most of which were more-specifics of existing routes and, equally as important, announced to Level 3’s Tier 1 peers like NTT (AS2914) and XO (AS2828, now a part of Verizon). Below is a visualization of the latency impact of the routing leak. Next is the propagation profile of just one of those Bell Canada routes leaked by Level 3. One of the leaked prefixes, for example, is not normally in the global routing table; that address space is covered by a less-specific route. During the leak, this route (along with about 4,000 others) appeared in the global routing table as originated by AS577 and transited by AS3356. About 40% of our BGP sources had these leaked routes in their routing tables, and most chose NTT (AS2914) to reach AS3356 en route to AS577 (below right).

Comcast (various ASNs)

Comcast, the largest internet service provider in the United States, was also directly impacted by yesterday’s routing leak.

When Comcast internet is down…#comcastoutage — Modiv (@ModivMusic) November 6, 2017

Comcast uses numerous ASNs to operate their network, and Level 3 leaked prefixes from quite a few of them, diverting and slowing internet traffic bound for Comcast. According to our data, Level 3 leaked over 3,000 prefixes from the 18 Comcast ASNs listed below.
- AS33491 (356 leaked prefixes)
- AS7725 (252 leaked prefixes)
- AS7015 (248 leaked prefixes)
- AS33287 (241 leaked prefixes)
- AS33651 (235 leaked prefixes)
- AS22909 (198 leaked prefixes)
- AS33657 (178 leaked prefixes)
- AS33668 (176 leaked prefixes)
- AS20214 (176 leaked prefixes)
- AS7016 (161 leaked prefixes)
- AS33650 (152 leaked prefixes)
- AS33667 (145 leaked prefixes)
- AS33652 (142 leaked prefixes)
- AS33490 (117 leaked prefixes)
- AS13367 (117 leaked prefixes)
- AS33660 (101 leaked prefixes)
- AS33659 (97 leaked prefixes)
- AS33662 (89 leaked prefixes)

Our traceroute measurements into Comcast reveal the impact of the leak from a performance standpoint. The two visualizations below show a bulge of internet traffic headed for the leaked IP address space diverted through Level 3, and the increase in observed latency.

Other Impacts

Level 3 leaked 81 prefixes from RCN, who appeared to pull the plug on their Level 3 connection at 18:34 UTC, once they figured out what was causing a slowdown in their network. Level 3 leaked 97 prefixes from Netflix (AS2906), including the following: Impacts were not limited to the United States. Networks in Brazil, Argentina and the UAE also had routes leaked by Level 3 yesterday. Below are example routes leaked from Giga Provedor de Internet Ltda (AS52610, 42 leaked prefixes), Cablevision S.A. (AS10481, 365 leaked prefixes), and even the Weill Cornell Medical College in Qatar (AS32539, 3 leaked prefixes):

Conclusion

It is important to keep in mind that the internet is still a best-effort endeavor, held together by a community of technicians in constant coordination. In this particular case, initial clues as to the origin of this incident were first reported in a technical forum (the outages list) when Job Snijders astutely observed new prefixes being routed between Comcast and Level 3 yesterday. Peer leaks are a continuing risk to the internet without any silver bullet solution.
We previously suggested using protection when peering promiscuously, but even a well-run network like Google has been both the leaker and the leaked. Networks share more-specific routes with a peer in order to ensure that return traffic comes directly back over the peering link, but there is always the risk that the peer could leak those routes and adversely affect your network.  When the leaker is the biggest telecom in the world (and only getting bigger), the impact is likely to be significant.
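A leak of this shape, a peer suddenly announcing more-specifics of routes already covered by less-specifics in the table, can be flagged programmatically. A minimal sketch using Python's ipaddress module; the prefixes are illustrative documentation ranges, not routes from this incident:

```python
import ipaddress

def find_suspect_more_specifics(baseline, announced):
    """Return announced prefixes that are more-specifics of a baseline
    route without being in the baseline themselves, the signature of a
    routing leak like the one described above."""
    base = [ipaddress.ip_network(p) for p in baseline]
    suspects = []
    for p in announced:
        if p in baseline:
            continue  # normally announced; not a leak candidate
        net = ipaddress.ip_network(p)
        # a more-specific is strictly contained inside a covering route
        if any(net != b and net.subnet_of(b) for b in base):
            suspects.append(p)
    return suspects

# Illustrative: a covering /24 is normally in the table; during a leak,
# a /25 more-specific of it suddenly appears.
leaked = find_suspect_more_specifics(
    {"203.0.113.0/24"},
    ["203.0.113.0/24", "203.0.113.0/25", "192.0.2.0/24"],
)
print(leaked)  # → ['203.0.113.0/25']
```

A production monitor would additionally compare origin and transit ASNs against a learned baseline, as our measurements did for AS577 and AS3356.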


Welcome to the New Internet Intelligence Blog

For more than a decade, our team of data scientists has been publishing their observations about Internet infrastructure and operations. Over the years and through two strategic acquisitions, this team has been known as Renesys, then Dyn Research. Today, we’re proud to call ourselves Internet Intelligence, powered by Oracle Cloud Infrastructure. To that end, welcome to the new Internet Intelligence blog. If you’re a new reader, we’re happy to have you. If you previously followed the Renesys or Dyn Research blogs, don’t worry—you’re in the right place. When we first began, our observations were enjoyed by a relatively small but influential group of network operators and Internet insiders. Today, these insights are leading the global conversation and breaking news that is consumed by reporters, businesses, and the general public. Internet events have gone from niche to mainstream and have become topics of executive- and board-level interest. This increased interest is a result of our overall dependency on the Internet. That dependency has grown exponentially since our team first started writing. So too has the scale, complexity, and volatility of the Internet itself. Back in 2015, I had the opportunity to host AOL co-founder Steve Case at our office in Manchester, New Hampshire. At the time, I took a look at some Internet statistics from 1995, when AOL was introducing millions of new users to the Internet. Let’s compare those with today:

1995: <1% of the world’s population online; 23,500 websites; users spent 30 minutes online per month
2017: >50% of the world’s population online; 1,300,000,000 websites; users spend 179 hours online per month

People now spend about a third of their waking hours online. The complexity of the websites they are visiting has grown as well. In fact, a typical website now has more than 100 objects (think text, CSS, photos, video, advertisements, social sharing tools, live chat, partners’ SaaS applications, etc.). 
I can remember not that long ago when that number was under 20. Network operators must no longer worry only about their on-premises and cloud infrastructure decisions, but must also be aware of their company’s third-party SaaS vendors and THEIR infrastructure too. Issues like the size of the BGP routing table, which once would have been relegated to a meeting of network engineers, are now frequently discussed in tech and mainstream publications. Speaking of the routing table, it has more than doubled in size during each of the last several 10-year periods. Imagine a city doubling the number of streets it has every 10 years. Would you be able to get around effectively and efficiently? The Internet is OUR collective network, and it is all of our responsibility to understand, steward, protect, and innovate its future. Scale and complexity are important because they show the importance of the Internet and that the Internet is being used in ways it wasn’t originally built for, which is leading to increased volatility. In a recent survey, 89% of enterprise IT pros reported that their organization had experienced a disruption in the last 12 months and that those disruptions were painful, excruciating, and crippling. What is perplexing about those statistics is that, in the same survey, only 57% of respondents reported that they were concerned about a major Internet disruption. More education on the impact and frequency of disruptions is clearly needed. Disruptions come in many forms, ranging from natural disasters to nation-state-endorsed censorship to DDoS attacks by bad actors. Companies can prepare for these types of disruptions. I, along with my team, will be sharing in-depth insights on how to do that in the future on this blog and on the Oracle Cloud Infrastructure blog. But one of the best ways to prepare is to be informed about what is happening on the Internet. That is what the Internet Intelligence blog will provide you, as it has done for more than a decade. 
Whether Egypt goes offline during the Arab Spring, or North Korea adds a second Internet connection, or you’re interested in performance implications of CNAME Chains or the impact a hurricane has on the Internet or whether your data is really compliant with GDPR, our Internet Intelligence blog is where you should turn first. We were able to bring all of that insight to the general public as an independent company. Now we are powered by Oracle Cloud Infrastructure, which increases our reach and will allow us to bring you deeper insights and richer data sets, and we are excited about the possibilities that this creates. There will be more to come, but for now I would like to either welcome you or thank you for sticking around to see what we come up with next. The Internet has never been more important, and as a result, our need to understand it has never been more critical.


What Does "Internet Availability" Really Mean?

The Oracle Dyn team behind this blog has frequently covered 'network availability' in our blog posts and Twitter updates, and it has become a common topic of discussion after natural disasters (like hurricanes), man-made problems (including fiber cuts), and political instability (such as the Arab Spring protests). But what does it really mean for the Internet to be "available"? Since the Internet is defined as a network of networks, there are various levels of availability that need to be considered. How does the (un)availability of various networks impact an end user's experience, and their ability to access the content or applications that they are interested in? How can this availability be measured and monitored? Deriving Insight From BGP Data Many Tweets from @DynResearch feature graphs similar to this one, which was included in a September 20 post that noted "Internet connectivity in #PuertoRico hangs by a thread due to effects of #HurricaneMaria." There are two graphs shown—"Unstable Networks" and "Number of Available Networks"—and the underlying source of information for those graphs is noted to be BGP data. The Internet analysis team at Oracle Dyn collects routing information in over 700 locations around the world, giving us an extensive picture of how the networks that make up the Internet are interconnected with one another. Using a mix of commercial tools and proprietary enhancements, we are also able to geolocate the IP address (network) blocks that are part of these routing announcements—that is, we know with a high degree of certainty whether a network block is associated with Puerto Rico, Portugal, or Pakistan. With that insight, we can then determine the number of networks that are generally associated with a given geography. The lower "Number of Available Networks" graph shows the number of networks (IP address blocks, also known as "prefixes") that we have geolocated to that particular geography. 
This number declines when paths to those networks are no longer present in routing announcements (are "withdrawn"), and increases when paths to those networks become available again. The upper "Unstable Networks" graph represents the number of networks that have recently exhibited route instability—when we see a flurry of messages about a network, we consider it to be unstable. Necessary But Not Sufficient However, as we mentioned in a previous blog post, "It is worth keeping in mind that core network availability is a necessary, but not sufficient, condition for Internet access. Just because a core network is up does not mean that users have Internet access—but if it is not up, then users definitely do not have access." In other words, if a network (prefix) is being announced, that announcement may be coming from a router in a hardened data center, likely on an uninterruptible power supply (and maybe a generator). Just because the routes (paths to the network prefixes) are seen as being available, it does not necessarily mean that those routes are usable, since the last mile network infrastructure behind them may still be damaged and unavailable. These "last mile" network connections to your house, your cell phone, or your local coffee shop, library, or place of business are critical links for end user access. When these networks are unavailable, then it becomes hard, if not impossible, for end users to access the Internet. More specifically, the components of the local networks in your house or coffee shop/library/business need to be functional—the routers/modems need to have power, and be connected to the last mile networks. Because of the power issues and physical damage (downed or broken power/phone/cable lines, impaired cell towers) that often accompany natural disasters, these local and last mile networks are arguably the most vulnerable critical links for Internet access. 
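The two measures described here, available networks and unstable networks, can be approximated from a stream of routing events. A minimal sketch, assuming updates have already been geolocated to the geography of interest; the event format, prefixes, and instability threshold are illustrative, not our production pipeline:

```python
from collections import defaultdict

def summarize(events, instability_threshold=3):
    """Replay timestamp-ordered BGP events for one geography.

    events: iterable of (prefix, action), action in {"announce", "withdraw"}.
    Returns (available_count, unstable_prefixes): how many prefixes ended
    the window reachable, and which ones churned enough (at least
    instability_threshold messages) to be considered unstable.
    """
    state = {}                 # prefix -> currently routed?
    churn = defaultdict(int)   # prefix -> messages seen in the window
    for prefix, action in events:
        state[prefix] = (action == "announce")
        churn[prefix] += 1
    available = sum(1 for up in state.values() if up)
    unstable = sorted(p for p, n in churn.items()
                      if n >= instability_threshold)
    return available, unstable

events = [
    ("203.0.113.0/24", "announce"),
    ("198.51.100.0/24", "announce"),
    ("198.51.100.0/24", "withdraw"),   # flapping route
    ("198.51.100.0/24", "announce"),
    ("192.0.2.0/24", "withdraw"),
]
print(summarize(events))  # → (2, ['198.51.100.0/24'])
```

In practice the "flurry of messages" heuristic is evaluated over a sliding time window rather than a single batch, but the counting logic is the same.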
Determining Last Mile Network Availability While network availability can be measured at least in part by monitoring updates to routing announcements, last mile network availability can be determined both through reachability testing as well as observing traffic originating in those networks. On the latter point, our best perspective is currently provided by requests to Oracle Dyn's Internet Guide - an open recursive DNS resolution service. With this service, end user systems are configured to make DNS requests directly to the Internet Guide DNS resolvers, rather than the recursive resolvers run by their Internet Service Provider. (Users often do this for performance or privacy reasons, though some ISPs will simply have their users default to using a third-party resolver instead of running their own.) Using the same IP address geolocation tools described above, we can determine where the users appear to be connecting from. Looking at the graph below, we can see a roughly diurnal pattern in DNS traffic in the days before Hurricane Maria makes landfall in Puerto Rico. (It is interesting to note that the peaks increase significantly as the hurricane approaches.) However, the rate of queries drops sharply, reaching a near-zero level, at 11:30 UTC on September 20, about an hour and a half after Maria initially made landfall, due to damage caused to local power and Internet infrastructure. On the former point, regarding reachability testing, this insight can be gathered from the millions of daily traceroutes done to endpoints around the globe. Because the Oracle Dyn team has been actively gathering these traceroutes for nearly a decade, they have been able to identify endpoints across network providers that are reliably reachable, and can serve as a proxy for that network's availability. The graph below illustrates the results of regular traceroutes to an endpoint in Liberty Puerto Rico, a local telecom provider. 
It shows that traceroutes to IP addresses announced by Liberty PR generally traverse networks including San Juan Cable, AT&T, and AT&T Mobility Puerto Rico. These networks are some of Liberty PR's "upstream providers", connecting it to the rest of the Internet. It is clear that the number of responding targets (of these traceroutes) drops sharply just before mid-day (UTC) on September 20, and further degrades over the next 15 hours or so, reaching zero just after midnight. These endpoints presumably became unreachable as power was lost around the island, copper and fiber lines were damaged, etc. International Borders Above, we have examined the various ways that Oracle monitors and measures network availability in the face of disaster-caused damage. However, there is another common cause of Internet outages -- government-ordered shutdowns. In the past several years, we have seen Iraq shut down Internet access to prevent cheating on exams, and Syria has taken similar steps as well, as shown in the graph below. We have also seen countries such as Egypt shut down access to the global Internet in response to widespread protests against the government. In countries where such actions occur, the core networks often connect to the global Internet through a state-owned/controlled telecommunications provider and/or through a limited number of network providers at their international border. This situation was examined in more detail in a blog post published nearly five years ago by former Dyn Chief Scientist Jim Cowie. The post, entitled "Could It Happen In Your Country?", examines the diversity of Internet infrastructure at national borders, classifying the risk potential for Internet disconnection. In these cases, our measurements will see the number of available networks decline, often to zero, because all routes to the country's networks have been withdrawn. 
In other words, the networks within the country may still be up and functional, but other Internet network providers elsewhere in the world have no way of reaching these in-country networks because paths to them are no longer present within the global routing tables. Summary In order for Internet access to be "available" to end users, international connectivity, core network infrastructure, and last mile networks must all be up, available, and interconnected. Availability of these networks can be measured and monitored through the analysis of several different data sets, including BGP routing announcements, recursive DNS traffic, and traceroute paths, and further refined through the analysis of Web traffic and EDNS Client Subnet information in authoritative DNS requests. And as always, we will continue to measure and monitor Internet availability around the world, providing evidence of brief and ongoing/repeated disruptions, whatever the underlying cause.
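The recursive-DNS side of this measurement can be illustrated the same way: compare a geography's current query rate against a trailing baseline and flag a sharp, sustained drop. A toy sketch, with hourly buckets and a threshold chosen purely for illustration:

```python
def detect_outage(rates, baseline_window=24, drop_ratio=0.1):
    """Given a list of hourly query counts for one geography, return the
    indexes where the rate fell below drop_ratio of the trailing-window
    average, a crude proxy for users there losing connectivity."""
    alerts = []
    for i in range(baseline_window, len(rates)):
        baseline = sum(rates[i - baseline_window:i]) / baseline_window
        if baseline > 0 and rates[i] < drop_ratio * baseline:
            alerts.append(i)
    return alerts

# A flat day of ~1000 queries/hour, then a collapse like the one seen
# when Maria made landfall in Puerto Rico.
rates = [1000] * 24 + [950, 20, 5, 8]
print(detect_outage(rates))  # → [25, 26, 27]
```

A real detector would also model the diurnal cycle visible in the graphs above, so that the normal overnight trough is not mistaken for an outage.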



Performance Implications of CNAME Chains vs Oracle ALIAS Record

The CNAME resource record was defined in RFC 1035 as “the canonical name for an alias.” It plays the role of a pointer: the CNAME informs the requestor that the name queried is really an alias for another, canonical name. The CNAME record provides a “configure once” point of integration for third-party platforms and services. A CNAME is often used instead of an A/AAAA record for the same reason developers often use variables in their code as opposed to hard-coded values: the CNAME target can easily be redefined by the third party or service provider without requiring the end user to make any changes. A stipulation that prevents use of the CNAME at the apex is that no other records can exist at or alongside a CNAME. Because records such as the Start of Authority (SOA) must be defined at the apex of a zone, this specification prevents an end user from placing a CNAME there. ALIAS / ANAME – The way of the future  The Oracle ALIAS record allows for CNAME-like functionality at the apex of a zone. The Oracle implementation of the ALIAS record at the apex uses private internal recursive resolvers to “unwind the CNAME chain.” Consider, for example, a web application firewall (WAF) implementation that uses a CNAME to direct users to the WAF endpoint. The consumer of the service simply creates a CNAME to the endpoint provided; that initial mapping is the only thing the consumer controls. After implementing the service, we can dig deeper into the way the service is implemented in the DNS. Below we see the full CNAME chain:

60 IN CNAME
300 IN CNAME
120 IN CNAME
3600 IN CNAME
60 IN A

In the example above, the WAF service is implemented via a CNAME record mapping the customer’s hostname to a vanity name. The service operator maps that vanity CNAME to a service name, which is a CNAME to another record in the zone, which is ultimately a CNAME to a load balancer endpoint at a cloud provider.   
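To illustrate the "unwind the CNAME chain" idea, here is a small sketch that follows CNAME links through a cache of records until it reaches an address record, which is what an ALIAS-style implementation hands back directly. All hostnames and the address below are hypothetical placeholders, not the actual service names:

```python
def unwind(cache, name, max_hops=10):
    """Follow CNAME links in a record cache until an A record is found,
    mimicking how an ALIAS record is resolved on the server side.

    cache: dict mapping owner name -> (rtype, rdata)
    """
    for _ in range(max_hops):
        rtype, rdata = cache[name]
        if rtype == "A":
            return rdata    # hand the address straight back to the resolver
        if rtype != "CNAME":
            raise ValueError(f"unexpected record type {rtype} at {name}")
        name = rdata        # follow the chain one link
    raise RuntimeError("CNAME chain too long")

# Hypothetical chain modeled on the WAF example above.
cache = {
    "www.example.com":              ("CNAME", "vanity.waf-provider.example"),
    "vanity.waf-provider.example":  ("CNAME", "service.waf-provider.example"),
    "service.waf-provider.example": ("CNAME", "lb.cloud.example"),
    "lb.cloud.example":             ("A", "203.0.113.10"),
}
print(unwind(cache, "www.example.com"))  # → 203.0.113.10
```

The server-side cache means the client pays for one lookup instead of one per link, which is where the latency savings measured below come from.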
The Oracle ALIAS record is implemented in a way in which our internal resolver constantly keeps all of these records in cache. When a recursive resolver requests the ALIAS name, we can hand back the A record for the cloud load balancer directly. This reduces variability from cache misses, network latency, packet loss, etc. Saying it reduces variability is one thing; quantifying it is another.  ALIAS Testing  To quantify the reduction in variability and potential performance gains from an ALIAS/ANAME record implementation, we performed a number of tests using the RIPE Atlas network. The RIPE Atlas platform provides access to the internal resolvers used by a number of ISPs that are only accessible from their networks. It also allows us to run tests from the perspective of end users, providing insight into the last mile of a number of global networks. To select which networks would be included in testing, we took a one-month sample of production traffic to our authoritative DNS platform and selected networks from the top twenty which also had appropriate RIPE Atlas probe density.   Variables being considered:  End User / Client – Testing from the perspective of end users is critical to understanding the nuance of internet performance.   Recursive Resolver – Recursive resolver implementations have varying configurations. Some modify the TTL of records, some are operated as clusters with a large shared cache while others have many individual caches, some perform prefetching of popular records, etc.  Authoritative Resolvers – In the example above, there are three different namespaces being referenced. Each might be served by a different authoritative provider, which might have varying proximity to the end user’s recursive resolver.  
Networks – The networks facilitating communication between all of these components have different performance profiles, from the last mile to well-connected internet exchanges.  Test 1: WAF Service Implementation  A set of RIPE Atlas probes acting as clients used their default local resolver to request two records: one the first link in a CNAME chain for a WAF, the other an ALIAS record for the same WAF service. As expected, the raw results contain a number of outliers in both test scenarios, created by packet loss and last-mile performance issues.  For example, in the time series below, you can see some pretty serious outliers.  A time series isn’t ideal for communicating what happened; as you can see above, it looks like “most” response times were less than 1000 ms. To quantify this better, we look at a histogram of the results.  The median response time for the WAF ALIAS record was 44.96 ms, whereas the median response time for the WAF CNAME chain was 63.18 ms, a difference of 18.22 ms. The boxplots below indicate that the median response time for the ALIAS record is aligned with the beginning of the 2nd quartile of response times for the CNAME chain.   Test 2: Cloud Load Balancer   Test 1 focused on a CNAME chain with 5 links, whereas many implementations might have only a single CNAME. To test this scenario, the same population of probes requested one record, a CNAME to a cloud load balancer, and another record, an ALIAS pointing to the same load balancer.  Test 3: Counter Point  The first two tests showed clear performance gains for the ALIAS/ANAME implementation. We thought it was important to create an example of the opposite, an instance where the ALIAS record is slower, to highlight some nuance. To accomplish this, we set up some tests in South Korea. South Korea is known for having well-provisioned high-speed networks deployed within the country, but paths out of the country to the wider internet can be slower.  
For this test, the CNAME chain example can be resolved entirely within South Korea: the clients, recursive resolvers, and authoritative providers all have a presence in the country. Resolving the ALIAS record, however, requires the in-country resolver to issue queries to either Hong Kong or Tokyo, which takes much longer than resolving the CNAME chain in country. South Korea is well connected internally, but the paths to Tokyo and Hong Kong require traversing undersea cables. This is why it is important to understand your customers’ use cases and monitor performance. The ANAME provides an option for infrastructure operators that are looking for CNAME-at-the-apex functionality. The ANAME helps reduce variability in response times to recursive resolvers and clients by actively maintaining the CNAME chain in a local recursive cache. As Evan Hunt pointed out at the DNS-OARC meeting in San Jose, as the ANAME standard is adopted, recursive resolvers may start to implement ANAME verification, potentially reducing some of the performance gains of the new record type. That being said, following Lord Kelvin’s advice that “to measure is to know,” we will keep on measuring.  For more detail, check out our webinar.



North Korea Gets New Internet Link via Russia

This past weekend, North Korea expert Martyn Williams and I spotted the activation of a new internet path out of North Korea.  At 09:07:51 UTC on 1 October 2017, the country’s single internet provider, Star JV (AS131279), gained a new connection to the global internet through Russian fixed-line provider Transtelecom (AS20485), often referred to as TTK.  Williams published his analysis on the US-Korea Institute‘s 38 North blog, named after the dividing line between North and South Korea. The internet of North Korea is very small (four BGP routes) and reportedly only accessible by a few elites in the country.  Since the appearance of AS131279 in the global routing table almost 7 years ago, Star JV has almost exclusively relied on China Unicom for its connectivity to the global internet — the only exception was its partial usage of satellite service from Intelsat between 2012 and 2013.  In light of this history, a new internet connection out of North Korea is certainly a notable development. Unsteady Connection At 09:07:51 UTC, TTK (AS20485) appeared as a transit provider for three of the four BGP routes announced by AS131279.  But that only lasted a little longer than an hour, and TTK disappeared at 10:14:45 UTC.  All four routes then became unstable between 10:47 and 12:26 UTC, including four brief periods when all four networks were down (pictured below).   TTK returned to providing transit for the same three routes at 12:21:55 UTC. One of those routes stopped getting TTK transit at 07:56:39 UTC on 2 October 2017, while a second stopped a little over an hour later at 09:10:29 UTC.  At the time of this writing, only one route is being transited by TTK.   Who is TTK? Russia has two major fixed-line providers with networks spanning the entire country: former state telecom Rostelecom and Transtelecom (TTK).  A subsidiary of Russian Railways, Transtelecom’s backbone is made of fiber optics laid along the rail lines that crisscross the country.  
It is common for telecoms to make use of existing rights-of-way to lay fiber over great distances, whether they be rail lines, highways, or pipelines.  Similar to TTK, China Mobile Tietong uses fiber optics laid along China’s rail lines. While it is impossible to tell simply from internet measurement data how TTK’s network connects into North Korea, Williams suspects that it connects across “the Friendship Bridge, a railway crossing over the Tumen River that connects Khasan in Russia with Tumangang in North Korea,” as “it’s the only connection between the two countries,” and is along Russia’s short 17km land border with the Hermit Kingdom. Conclusion: A Shift of Power In December 2014, Williams and I jointly reported on North Korea’s internet outage that resulted from a DDoS attack.  As we saw in that incident, North Korea’s lone link out to the global internet through China served as a single point of failure, one that, if disabled, could take the country offline.  This could happen by accident (e.g., a fiber cut or power outage), through a cyber attack directed at networking equipment handling the link, or perhaps intentionally, should China Unicom disable the link. Being single-homed behind China Unicom gave China control over North Korea’s internet access.   This is important as the international community tries to persuade China to use its influence to rein in the nuclear aspirations of North Korea.  However, now with an independent connection to Russia via TTK, such leverage is greatly reduced.  With alternatives for international transit, the power shifts to North Korea in deciding whether or not to maintain its connectivity to the global internet.



Internet Impacts of Hurricanes Harvey, Irma, and Maria

Devastation caused by several storms during the 2017 Atlantic hurricane season has been significant, as Hurricanes Harvey, Irma, and Maria destroyed property and took lives across a number of Caribbean island nations, as well as Texas and Florida in the United States. The strength of these storms has made timely communication of information all the more important, from evacuation orders, to pleas for help and related coordination among first responders and civilian rescuers, to insight into open shelters, fuel stations, and grocery stores. The Internet has become a critical component of this communication, with mobile weather applications providing real-time insight into storm conditions and locations, social media tools like Facebook and Twitter used to contact loved ones or ask for assistance, “walkie talkie” apps like Zello used to coordinate rescue efforts, and “gas tracker” apps like GasBuddy used to crowdsource information about open fuel stations, gas availability, and current prices. As the Internet has come to play a more pivotal role here, the availability and performance of Internet services has become more important as well.  While some “core” Internet components remained available during these storms thanks to hardened data center infrastructure, backup power generators, and comprehensive disaster planning, local infrastructure – the so-called “last mile” – often didn’t fare as well. This local infrastructure, both fixed and mobile, plays a critical role in enabling end users to access the Internet, and in many cases, it experienced significant availability issues due to the high winds, excessive rain, and other havoc wreaked by these recent hurricanes.  This was especially the case among some of the hardest hit Caribbean islands, although networks in Florida and Texas were impacted as well. The monitoring and measurement performed by Oracle Dyn allows us to see network availability issues in near-real time. 
By analyzing BGP data shared by network peers in over 700 locations around the world, as well as traceroutes performed from over 300 locations across the global Internet, we can identify network outages as they occur, and use our geolocation tools to understand where they have the most significant impact.  Based on this data, as well as the analysis of data from our authoritative/secondary and open recursive DNS services, we were able to see the impact of Hurricanes Harvey, Irma, Jose, and Maria on Internet connectivity in affected areas. It is worth keeping in mind that core network availability is a necessary, but not sufficient, condition for Internet access. Just because a core network is up doesn’t mean that users have Internet access—but if it isn’t up, then users definitely don’t have access. Harvey Hurricane Harvey made landfall on the evening of August 25, near the small town of Rockport, Texas. Over the next several days, it slowly made its way north, essentially parking itself over Houston on the 26th and 27th, dropping record amounts of rain.  As the figure below shows, the number of network prefixes geolocated to Texas that became unavailable during that period grew significantly, from approximately 40 to over 120 at peak. Widespread power outages due to the hurricane forced some local network infrastructure offline, but repair efforts were swift, bringing many network prefixes back online by the 27th. Irma Less than a week later, Tropical Storm Irma intensified into a hurricane, and within days into a Category 5 storm. Hurricane warnings were issued for the U.S. Virgin Islands and Puerto Rico on September 4, and a hurricane watch was issued for the Turks and Caicos Islands on September 5.  Within a day, the eye of Hurricane Irma began to pass over island nations in the Caribbean, with Saint Barthelemy and Saint Martin/Sint Maarten among the first hit. 
As the figures below show, two of the three available networks on Saint Barthelemy saw outages, lasting from September 6 until the 9th. On Sint Maarten, most of the 30 available networks went offline starting around 10:00 UTC, with a brief complete outage, although availability largely returned on the 9th. Networks on Saint Martin, which shares the island with Sint Maarten, suffered a similar level of outages over the same time frame. Hurricane Irma also pounded Anguilla with 185 mph winds, but as the figure below shows, the impact to network availability on that island appeared to be significantly less severe, with just a few networks becoming unavailable between September 6-9. As it moved northwest towards Florida, Irma also impacted network availability on the Turks and Caicos Islands, hitting the islands with sustained winds of 175 mph on September 7. Similar to Anguilla, only a fraction of the total available networks went offline, but they were unavailable for a longer period of time, with connectivity appearing to return late on September 11. After moving through the Caribbean, Hurricane Irma made landfall in the Florida Keys as a Category 4 storm during the morning of September 10. As the figure below shows, Internet activity in Monroe County, as measured by geolocated requests to Oracle Dyn’s DNS services, drops off significantly at around the same time, presumably due to power outages and damaged local infrastructure. Evacuation orders for counties across Florida began to take effect on September 7 and 8 – some orders were mandatory, while others were voluntary. The figure below shows query volume to Oracle Dyn’s Internet Guide open recursive DNS service, aggregated from IP addresses geolocated to the state of Florida. As it illustrates, the evacuation orders appeared to drive residents offline, as they prepared to flee the hurricane and find shelter elsewhere.  
Query traffic clearly begins to decline on September 7, hitting its lowest volumes on the 10th and 11th as Irma roared across the state.  Request volume began to recover on the 12th, as Irma moved north, power was restored, and residents began to return to their homes. Maria Less than two weeks after Irma, Hurricane Maria also hit islands in the Caribbean, inflicting significant damage on both Dominica and Puerto Rico. On Monday, September 18, Hurricane Maria intensified into a Category 5 storm, and made landfall on Dominica, with winds near 160 mph causing widespread devastation. As the figure below shows, network instability on the island increased significantly just after 06:00 UTC, leaving just a fraction of the networks available thereafter.  Two days later, Maria made landfall on Puerto Rico, resulting in a rapid decline in the number of available networks seen from the island, as shown in the figure below. Later that morning, widespread power outages resulting from Hurricane Maria caused a near complete Internet outage in Puerto Rico, as the figures below show. Queries to Dyn’s Internet Guide open recursive DNS service from Puerto Rican IP addresses dropped to near zero at 11:30 UTC, while traceroutes to endpoints in Liberty Puerto Rico (a cable and broadband Internet service provider on the island) saw the number of responding targets also drop to near zero at around the same time, indicating that the target endpoint systems, or the networks they were connected to, were offline.   Further network instability was observed on the morning of September 21, likely due to ongoing power outages, as the number of available network prefixes dropped from near 600 to around 350 just after 03:00 UTC, as shown below. Admittedly, graphs showing Internet volatility resulting from hurricane damage in no way compare to the actual physical devastation caused by the storms. 
However, social media sites and applications, as well as the broader Internet, have come to play a greater role in preparedness, communications, and global dissemination of information, photos, and videos about the impacts of these natural disasters. As such, it remains important to monitor, measure, and understand how these storms affect local Internet connectivity and availability in those regions. The @DynResearch Twitter account has been posting this type of information, including many of the graphs included above, and is an excellent resource for understanding how natural and physical disasters, as well as actions by state actors and other network ‘events’, impact local and global Internet connectivity. Although the 2017 Atlantic hurricane season doesn’t end until November 30, we hope that this is our last post on this topic for the season.



Breaking the Internet: Swapping Backhoes for BGP

The term “break[ing] the Internet” has taken hold over the last few years – it sounds significant, and given the role that the Internet has come to play in our daily lives, even a little scary. A Google search for “break the Internet” returns 14.6 million results, while “broke the Internet” returns just under a half million results. Interestingly, Google Trends shows a spike in searches for the term in November 2014 (arguably representing its entry into mainstream usage), coincident with Kim Kardashian’s appearance in Paper Magazine, and on the magazine’s Web site. (Warning: NSFW) To that end, Time Magazine says “But in the context of viral media content, ‘breaking the Internet’ means engineering one story to dominate Facebook and Twitter at the expense of more newsworthy things.” Presumably in celebration of those efforts, there’s even now a “Break the Internet” Webby Award. “Breaking the Internet” in this context represents, at best, the failure of a website to do sufficient capacity planning, such as using a content delivery network (CDN) to help improve the scalability and performance of the Web site in the face of increased traffic from a flash crowd from the viral spread of a story.  (Full disclosure: I spent 18 years at Akamai, a leading CDN service provider, before joining Oracle in July 2017.) However, the definition and award are arguably based on a common misconception – that the Internet and the World Wide Web are the same thing. To be clear, they aren’t – the Web, as we know it, is based on a set of application-layer protocols (including HTTP and DNS) that make use of core Internet protocols (such as TCP, IP, and UDP) and infrastructure (like terrestrial and submarine cables and the routing hardware that connects networks).  In short, the Web rides on top of the Internet.  (In addition, the Internet can trace its beginnings back to the ARPANET in 1969, while Tim Berners-Lee’s Web efforts at CERN followed 20 years later.) 
These points begin to get at the true cause of a broken Internet – the underlying infrastructure, and perhaps more importantly, the configuration of that infrastructure. Sometimes, problems result from issues with infrastructure that is used by a large number of popular Web sites and applications. For example, in late February 2017, Amazon’s Simple Storage Service (S3) experienced a service disruption in the Northern Virginia (US-EAST-1) Region as the result of an incorrect input to a command intended to remove a small number of servers for one of the subsystems that is used by its billing process. According to published reports, popular Web sites and tools including Slack, Venmo, Trello, GitHub, Quora, ChartBeat, and Imgur, among others (including other AWS services), experienced availability issues for several hours. In many cases, though, the plumbing of the Internet breaks – that is, problems with the underlying networks and the connections between them. Interestingly, these problems happen more often than you might realize, but their impact usually isn’t very widespread. However, extensive monitoring done by Oracle Dyn enables us to see these problems as they occur, and to measure their impact via hundreds of millions of traceroutes each day from hundreds of global vantage points, in addition to collecting routing information in over 700 locations around the world. This enables us to have an incredibly detailed, near-real-time view of how the networks that comprise the Internet are interconnected, changes that occur, and the geographic scope and duration of such changes. Based on our data analysis and associated observations, the Internet breaks multiple times a day, every day. However, most of these problems tend to be short-lived, and generally rather localized. Still, some are significant enough, with sufficiently broad impact, to warrant more detailed examination.
Examples include:

Routing leaks: These occur when one provider announces routes for blocks of IP addresses that don’t originate from their (or a downstream) network – these routes are often learned from peered networks. Such announcements are usually inadvertent in nature, and are often the result of filter misconfigurations. This can cause some Internet traffic to take a multi-continent detour, as happened recently with Google in Japan, or with several different providers nearly two years ago.

Route hijacks: These occur when a provider intentionally claims to have a more specific route to a set of IP addresses normally controlled by another provider. Hijacks may be malicious in nature (to effect a man-in-the-middle attack or send spam, for instance), may be politically motivated, or may be due to mistakes (such as transposed digits). Earlier this year, Iran’s state telecom (shown as TIC, ASN 12880 in the diagram below) was observed to be hijacking address space belonging to websites containing adult content, as well as Apple’s iTunes services, ostensibly with the intent of censoring content found on those sites/services. Nearly a decade ago, Pakistan Telecom advertised a small part of YouTube’s assigned address space, causing some Internet users to attempt to reach YouTube via Pakistan Telecom’s network, effectively creating a black hole for that traffic.

Outages: It’s not always clear what causes a country to essentially disappear from the Internet, like North Korea did in July and August 2017, as well as in December 2014. In other cases, a country is taken offline for a specific reason – Iraq has repeatedly shut off international Internet connectivity during exam periods in order to prevent cheating, and other countries (such as Syria) have started to follow suit. Physical damage to submarine cables, frequently from a passing ship’s anchor, can impact Internet availability, such as the recent cut to the East African Submarine System (EASSy) cable, which disrupted Internet connectivity for users in Mogadishu, Somalia. Natural disasters can also impact Internet connectivity, as happened recently in Texas because of power outages caused by Hurricane Harvey.

Lack of connection diversity: Most countries have gradually moved towards increased diversity in their Internet infrastructure over the last decade, especially as it concerns international connectivity to the global Internet. However, some countries remain at severe risk of Internet disconnection, with only one or two providers at their “international frontier”. This minimal diversity is often maintained for political purposes, making it easier to disable international Internet connectivity if deemed necessary, as we have seen in a number of countries in the Middle East. Blog posts published in 2012 and 2014 explored the risk of Internet disconnection for countries around the world, and we plan to publish an updated overview later this year, exploring whether and how things have changed in the five years since the original post.

To summarize, there are multiple ways to “break the Internet”. However, they are more likely to be related to routing misconfigurations (intentional and accidental), government-directed service disruptions, and physical damage to fiber-optic cables — more mundane than the latest celebrity scandal going viral on social media, for sure, but also arguably more important. And of course, when something as important as the Internet breaks, it needs to be fixed. (To paraphrase a colleague, there is a lot of work that goes into putting Humpty Dumpty back together again on a daily basis.)
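The “international frontier” idea discussed above can be approximated directly from observed AS paths: the frontier is the set of foreign ASes seen immediately upstream of a country’s domestic ASes. Below is a minimal sketch of that tally; all ASNs come from reserved ranges and are invented for illustration, not drawn from any real country’s routing data.

```python
# Estimate a country's "international frontier" from AS paths: the set of
# foreign ASes observed directly upstream of domestic ASes. All ASNs here
# are from reserved/documentation ranges and are purely illustrative.
DOMESTIC = {64512, 64513}  # ASes registered in a hypothetical country

def frontier(as_paths, domestic):
    """Return foreign ASes that appear immediately upstream of a domestic AS."""
    edge = set()
    for path in as_paths:
        for upstream, downstream in zip(path, path[1:]):
            if downstream in domestic and upstream not in domestic:
                edge.add(upstream)
    return edge

# Two observed paths, both entering the country through the same provider.
paths = [
    [64496, 64500, 64512],
    [64496, 64500, 64513],
]
print(frontier(paths, DOMESTIC))  # a single frontier AS: high disconnection risk
```

A country whose frontier set contains only one or two ASes can be disconnected from the global Internet by disabling a handful of BGP sessions, which is exactly the risk the overview posts explore.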
Oracle Dyn’s monitoring enables us to quickly identify and actively observe problems related to Internet connectivity, and we will either notify affected parties that something needs to be fixed, or help fix it directly, collaborating with the larger  Internet infrastructure community.  We are proud to be an active participant in that community, working together (often behind the scenes) to address these issues, making the Internet a better, more performant, and safer place to work and play for customers and Internet users around the world.



Large BGP Leak by Google Disrupts Internet in Japan

At 03:22 UTC on Friday, 25 August 2017, the internet experienced the effects of another massive BGP routing leak. This time it was Google who leaked over 160,000 prefixes to Verizon, who in turn accepted these routes and passed them on. Despite the fact that the leak took place in Chicago, Illinois, it had devastating consequences for the internet in Japan, half a world away. Two of Japan’s major telecoms (KDDI and NTT’s OCN) were severely affected, posting outage notices (KDDI / OCN pictured below).

Massive routing leaks continue

In recent years, large-scale (100K+ prefix) BGP routing leaks typically fall into one of two buckets: the leaker either 1) announces the global routing table as if it is the origin (or source) of all the routes (see Indosat in 2014), or 2) takes the global routing table as learned from providers and/or peers and mistakenly announces it to another provider (see Telekom Malaysia in 2015). This case is different because the vast majority of the routes involved in this massive routing leak were not in the global routing table at the time but instead were more-specifics of routes that were. This is an important distinction from the previous cases. In the vernacular of the BGP protocol, more-specific routes describe smaller ranges of IP addresses than less-specifics and, within the BGP route selection process, the paths defined by more-specifics are selected over those of less-specifics. These more-specifics were evidently used for traffic shaping within Google’s network. When announced to the world, they were selected by outside networks over existing routes to direct their traffic, thus having greater impact on traffic redirection than they might have otherwise. So why was Japan affected so severely? Of the 160,000 routes leaked, over 25,000 of them covered routed address space belonging to NTT OCN, the most of any network that was impacted. None, however, were from KDDI.
KDDI was impacted because, as a transit customer of Verizon, it accepted over 95,000 leaked prefixes from Verizon. Compounding the problem for Japan, another major Japanese telecom, IIJ, also accepted over 97,000 leaked prefixes from Verizon. As a result, any traffic going from KDDI or IIJ to OCN was being routed to Google’s network in Chicago – much of it likely getting dropped due to either high latency or bandwidth constraints.

Traceroute misdirections

Each day we perform hundreds of millions of traceroutes across the internet to measure paths and performance. Whenever a major routing event like this takes place, we can see evidence of its impact by observing the change in these traces. Below is a graphic showing the volume of traceroutes we see entering Google’s network around the time of the leak. The spike in the center of the graph is the sudden increase of traffic entering Google from Verizon. In all, about 10,000 traceroutes got sucked into Google over a very brief period of time en route to destinations around the world. Below is a traceroute performed the day before the leak from our server in Equinix Japan to an IP address in OCN’s network in Japan. As expected, it stays within Japan and arrives at its destination in 15ms. Below is the same traceroute during the leak. IIJ hands it off to Verizon ( in San Jose before taking a trip to Chicago to go to Google. Google then takes over routing this traceroute back to Japan over its internal network. Instead of 15ms, the round-trip time is 256ms – a very noticeable difference. Here’s an example of a traceroute from Shanghai, China to Macau (on the coast of China) that makes the same detour through Chicago during the leak. Starting from the other side of the world, here’s a traceroute that began at LINX in London but is taken by Verizon ( to Chicago and Google before completing its journey to Vodafone in Nürnberg, Germany.
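The preference for more-specifics that drove this redirection is just longest-prefix matching. A toy illustration using Python’s standard `ipaddress` module follows; the prefixes and labels are invented for the sketch, not the actual leaked routes.

```python
import ipaddress

# A miniature routing table: a legitimate covering route plus a leaked
# more-specific. Prefixes and labels are illustrative only.
rib = [
    (ipaddress.ip_network("10.0.0.0/8"), "legitimate covering route"),
    (ipaddress.ip_network("10.1.2.0/24"), "leaked more-specific"),
]

def best_route(dst, rib):
    """Longest-prefix match: among prefixes covering dst, the longest wins."""
    addr = ipaddress.ip_address(dst)
    covering = [(net, label) for net, label in rib if addr in net]
    return max(covering, key=lambda entry: entry[0].prefixlen)

print(best_route("10.1.2.7", rib)[1])   # the leaked /24 captures this traffic
print(best_route("10.9.9.9", rib)[1])   # addresses outside the /24 are unaffected
```

This is why a leak of more-specifics redirects traffic even while the legitimate, less-specific routes remain present in every routing table.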
Conclusion

On Saturday it was reported that Google apologized for causing the disruption in internet connectivity in Japan on Friday. Verizon also had a role to play in this leak. On any given day, Google typically sends Verizon fewer than 50 prefixes. An instantaneous jump to over 160,000 prefixes representing over 400 million unique IPv4 addresses should have tripped a MAXPREF setting on a Verizon router and triggered an automated response, at the very least. Thankfully, Verizon did not send the leaked routes on to any other major telecoms in the DFZ like Level 3, Telia, or NTT (AS2914, specifically), or the impact could have been much more severe. We’ve written about routing leaks a number of times, including here and here. Not long ago we wrote up a case where a routing leak by another party managed to render Google unavailable for many. In every case, there is more than one party involved. There is a leaker, of course, but there is also always another network that distributes leaked routes out onto the internet. We have to do better to look out for each other when mistakes inevitably arise. The internet is a team effort.
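A prefix-limit safeguard of the kind alluded to above (MAXPREF) is a standard BGP knob. In Cisco IOS-style configuration it looks roughly like the sketch below; the ASNs, neighbor address, and limits are invented for illustration and are not Verizon’s actual configuration.

```
router bgp 64500
 neighbor 192.0.2.1 remote-as 64496
 ! Tear down the session if this peer announces more than 100 prefixes,
 ! then bring it back automatically after 30 minutes.
 neighbor 192.0.2.1 maximum-prefix 100 restart 30
```

A limit sized a little above the peer’s normal announcement count turns a 160,000-prefix leak into a brief session reset instead of a global traffic detour.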



Ukraine Leaks Russian Social Media Ban

Another development in the long-running conflict between Ukraine and Russia occurred in May of this year when Ukrainian President Petro Poroshenko enacted a ban on Russia’s four most prominent internet companies in the name of national security. The ban included the two most widely used social media websites, VKontakte (often referred to as the “Russian Facebook”) and Odnoklassniki (“Classmates” in Russian), as well as email service provider and Russian search engine Yandex. These websites have such a significant Ukrainian user base that says it expects to lose $13 million this year as a result of the ban, and Yandex is appealing the ban through Ukraine’s Supreme Administrative Court. And now it appears that this ban has spilled out into the global routing table. On 27 July 2017, Ukrainian ISP UARNet (AS3255) began announcing several new BGP routes that were hijacks of the IP address space of these Russian internet companies. On this day, AS3255 briefly announced more-specific hijacks of each of these four Russian internet companies including (, (Yandex), (VKontakte) and (Odnoklassniki). While most of these routes were short-lived, AS3255’s announcement of (Odnoklassniki), a more-specific of announced by AS47764 (, has continued and is still in circulation at the time of this writing (pictured below). The impact of this hijack didn’t last long – within an hour of UARNet announcing, began announcing and, effectively regaining control of the IP address space. And as a belt-and-suspenders tactic, also began announcing the /23’s and /24’s under in an attempt to reduce the impact of another hijack should one occur. This development is reminiscent of an incident involving Iran in January, which we reported here. In that case, an Iranian company leaked BGP routes intended to blackhole traffic to pornographic websites, frustrating internet users around the world.
For Ukraine, it is likely that UARNet was simply attempting to implement the ban handed down from the Poroshenko government in their BGP tables and the routes leaked out – and continue to leak for Additionally, we observed a significant drop in Ukrainian peering for some of these Russian internet companies early on May 20th – very likely another outcome of President Poroshenko’s ban. (Although last fall, Mail.Ru announced it would stop delivering traffic to Ukrainian internet exchange points citing cost.) Regardless of the underlying reason, below is a visualization depicting the performance impact of switching from peering to transit for one of our measurement servers in Ukraine.

Other Developments in Ukraine

As we have reported in the past, the internet in Ukraine continues to be shaped by events on the ground. In January, gunmen seized the branch office of Ukrainian ISP Vega in Donetsk in eastern Ukraine (or the Donetsk People’s Republic, depending on who you ask). This event was observable on the internet as the BGP routes for Vega service in Donetsk went dark at 09:26:42 UTC (12:26:42 local) on 23 January 2017 for 30 prefixes originated by AS6703 (Vega) including the following: More recently in Crimea, ISPs have reportedly stopped using connectivity across the land bridge to mainland Ukraine. Despite the construction of a submarine cable to mainland Russia across the Kerch Strait in 2014, ISPs on the disputed peninsula continued to make use of connectivity across the land bridge to mainland Ukraine. However, according to recent reports, the Ukrainian security services shut off this fiber optic cable, pushing everything through Russian providers Rostelecom and its Crimean agent Miranda-Media as depicted below in our measurements to Crimean ISP CRELCOM. Miranda-Media has taken advantage of its new monopoly status and raised rates by 10%.
Conclusion These days, nearly every major geopolitical development has an internet component to it whether it be a shutdown or an activation.  The internet doesn’t exist inside a vacuum and, whether it be in Cuba, Syria, or Ukraine, it is ultimately shaped by the events around it. However, in instances like with UARNet above (or Iran or China) when a leak occurs, what was intended to be a domestic measure can be sent abroad in unexpected ways.  The internet truly is the global commons.



.NE Body Out There?

Protecting end users starts with understanding their use and integration of services. For authoritative DNS, this includes human error when copying and pasting information between interfaces. After purchasing a new domain, such as “,” the end user configures authoritative nameservers. Delegation is a “set it and forget it” operation; it is often made outside the scope of continuous integration pipelines and automated deployment systems. To quantify this risk and reconcile it with reality, we started to look at the existence of nameserver record typos in the .COM zone file. Typos in nameserver records for a number of authoritative DNS providers appear across a number of zones, making it clear that end users make delegation typos. The existence of a typo is one thing; it’s another when the typo has been registered and another provider is serving responses. One of the typos of interest was, which was registered sometime in February 2016. At that time, it was delegated to a pair of authoritative nameservers operated by, a name related to a Chinese hosting provider. Sometime around January 2017, the authoritative nameservers changed over to Yandex, the Russian internet services provider, and the domain began resolving to Using the IP address as a pivot point, we were able to identify thousands of domain names, all of which shared the IP. After verifying that the domain name had been registered, we wanted to understand how it was being used. One way to do this is to review passive DNS, a collection of timestamped observations of a domain name and its value at that given point in time.
The initial results were troubling: passive DNS showed that in January 2017 typos of a number of business critical domains were resolving to IP space ( ) of a VPS provider, The typo domains being resolved included our authoritative nameservers, for example: and domains used as part of our email platform, additionally, There were a number of observations of these resolutions in the passive DNS results, which seemed to indicate that someone or something was requesting resolution of these typos consistently. Things looked suspicious. The domain was resolving to one provider and business critical sub-domains were resolving to another. This initiated some active examination of the infrastructure supporting the domain. The first set of testing showed that the owner of the domain had configured a wildcard to match any requested subdomain of A wildcard will return the same resource records for any permutation that matches *, making any and all subdomains valid requests. An end user requesting will get the same response as someone requesting Finding the wildcard helped clarify why we were seeing so many resolutions in passive DNS. Whenever an end user requested a domain with a nameserver delegation typo, the nameserver would return the wildcard for what was requested, generating observations of the typo with its business critical subdomain. If you visit with a web browser, you will see the webpage above served from, an IP owned by Team Internet AG. Their website,, explains that they are “a leading provider of services in the direct navigation search market.” The Team Internet entity is tied to an advertising marketplace, Tonic, and domain monetization business, ParkingCrew. After reading through the details of the various business functions on and, it appeared that the nameserver typo domains were being used in a way that matched the “Domain Traffic” ad type.
Explained on the Tonic page ( as: “Domain traffic is also referred to as zero-click traffic, redirect traffic or direct navigation traffic. It all means the same: A user types in a domain name that is parked with a domain parking company like and instead of PPC ads (usually from Google or Yahoo) the user gets instantly redirectet [sic] to the advertiser landing page. You can see an example of this by typing in your browser bar. You will get redirected to one of our advertisers, who is interested in bodybuilding traffic. This adtype brings the best conversions for advertisers.” Our next step was to reach out to the operators of .NE, the ccTLD of Niger, to get details on their domain usage policies. One issue that came up immediately was trying to contact the ccTLD, as was returning a PHP version page and didn’t resolve correctly. The next stop was to go to IANA and look at the WHOIS contacts. This provided two contacts with email addresses. We drafted up some details about the number of authoritative nameserver typos involved and sent over a note. Then a few days later:

Final-Recipient: rfc822;
Action: failed
Status: 4.4.1
Diagnostic-Code: smtp; The recipient server did not accept our requests to connect. [ timed out]

Meanwhile, since March 11, 2017, subdomains have started to resolve to an IP belonging to Intergenia (, a different infrastructure provider, part of Host Europe since its acquisition in December 2014. Thanks to some help from the tightly knit DNS Operator & ICANN community, we were able to find updated contact information for the .NE ccTLD. It appears they switched namespace and now operate using email addresses in the rather than We are currently waiting to hear back from them to see if .NE is willing to follow the ICANN Uniform Domain-Name Dispute-Resolution Policy (UDRP). Domain traffic monetization has been a staple internet ad business for years.
Those who have been in the DNS/internet operator game for a while will most likely remember August 2006, when Cameroon (.cm) wildcarded their ccTLD for advertising purposes. There are a couple of lessons to be learned from this exercise. When configuring authoritative nameservers, always check twice for typos. When researching use of domain names with passive DNS, it’s important to keep things in context. When you’re done reading, maybe go take a look at Oman (.om) and Ethiopia (.et) to make sure your bases are covered.



Telecom Heroics in Somalia

Internet service in and around Mogadishu, Somalia, suffered a crippling blow recently as the East African Submarine System (EASSy) cable, which provides service to the area, was cut by the anchor of a passing ship. The government of Somalia estimated that the impact of the submarine cable cut was US$10 million per day and detained the MSC Alice, the cargo vessel that reportedly caused the damage. The cable was repaired on 17 July. The incident is the latest in a series of recent submarine cable breaks (see Nigeria, Ecuador, Congo-Brazzaville and Vietnam) that remind us how dependent much of the world remains on a limited set of physical connections which maintain connectivity to the global Internet.

Internet in Mogadishu

The story of how high-speed Internet service came to Mogadishu is nothing short of remarkable. It involved Somali telecommunications personnel staring down the threat of a local terrorist group (Al-Shabaab) in order to establish Somalia’s first submarine cable connection. This submarine cable link would be vital if Mogadishu were to have any hope of improving its local economy and ending decades of violence and hunger. However, in January 2014, Al-Shabaab announced a prohibition against ‘mobile Internet service’ and ‘fiber optic cables’, stating: Any individual or company that is found not following the order will be considered to be working with the enemy and they will be dealt with in accordance with Sharia law. The government of Somalia urged its telecoms not to comply with the Al-Shabaab ban. Then in February 2014, technicians from Somalia’s largest operator Hormuud Telecom were forced at gunpoint by Al-Shabaab militants to disable their mobile Internet service. At that time, Internet service in Mogadishu was entirely reliant on bulk satellite service, which has limited capacity and suffers from high latency when compared to submarine cable or terrestrial fiber-based service.
Liquid Telecom’s terrestrial service to Mogadishu wouldn’t become active until December 2014 and the semi-autonomous regions of Somaliland and Puntland in the northern part of the country use terrestrial connections to Djibouti for international access. Despite the threats from Al-Shabaab, Hormuud Telecom elected to press ahead with its planned activation of new service via submarine cable that would be crucial for development of Mogadishu’s economy. Source: "Telecom companies in #Somalia spent millions on fiber optic internet service and have no plans to slow down despite #Shabab ban". — Harun Maruf (@HarunMaruf) January 22, 2014 The graphic below shows a dramatic drop in latencies as Hormuud Telecom shifted its transit from satellite to submarine cable.  Hormuud passed traffic across the cable for a little more than an hour on 18 February 2014, starting at 21:05 UTC.  It then shifted traffic again at 20:07 UTC on 20 February for about 12 hours before committing to the new cable for good at 17:17 UTC on 21 February.   Immediately following the activation, I drafted a blog post (as I had done in the cases of Cuba and Crimea) heralding the EASSy subsea cable activation in Mogadishu as a great milestone for the troubled region.  However, at the request of the leadership of WIOCC, the company that owns the EASSy cable, we refrained from publication.  The primary concern at the time was the safety of the hostages Al-Shabaab had recently taken when they raided a Hormuud Telecom office in the Jilib district.  We agreed not to publish the blog post so as not to draw additional publicity to Hormuud’s defiance of Al-Shabaab, which could have put those Hormuud employees at risk. Now, 3.5 years later, the fact that telecoms in Somalia use the EASSy cable to connect is no secret. 
January 2017 Outage

Somalia held a presidential election earlier this year, and as the candidates were getting ready for their first nationally televised debate, the country’s primary link to the global Internet went out. Many Somalis were understandably concerned:

Questions as internet stalls on the day of Somalia’s first presidential debate @DalsanFM #unacceptable @nabadonline — ali.j.jira (@alijira) January 31, 2017

Internet blackout in Mogadishu shatters illusion of freedom during the presidential election in Somalia. — Bile Abdisalam (@BileAbdisalam) January 31, 2017

#Somalia's first presidential debate failed due to internet blackout in #Mogadishu #Pressconference — Mohamed Ali (@MohamedAlimas) January 31, 2017

However, despite its tremendously unfortunate timing, this Internet outage was due to emergency downtime on the EASSy cable, needed to repair a cable break that occurred the previous week near Madagascar (which we reported on here). Regardless, 12 presidential candidates walked out on the debate, believing the outage was a political dirty trick. The following graphics depict how this outage impacted WIOCC service into Mogadishu as well as Mozambique.

Recent Cable Cut

At 17:47 UTC on 24 June 2017, the spur from the EASSy cable leading to Mogadishu was severed by a passing ship — not an uncommon occurrence according to the International Cable Protection Committee, an advocacy group whose aim is “to protect the world’s submarine cables.”

The long awaited ship has just docked at the fault point of fiber optic cable to fix the problem. Hopefully Somalia w get internet soon. — Ilyas Ahmed (@IlyasAbukar) July 12, 2017

As illustrated in the graphic below, the loss of EASSy caused Hormuud to revert to medium-earth orbit satellite operator O3b and, to a lesser degree, Liquid Telecom out of Kenya.
As we have noted in the past, O3b enjoys a latency advantage over traditional geostationary satellite service; however, a satellite link cannot replace the considerable capacity lost due to a submarine cable cut. As a result, during the cable outage, there were widespread connectivity problems in Mogadishu.

Conclusion

A couple of months after the EASSy cable went live in 2014, BBC reported on the ‘culture shock’ sweeping Mogadishu due to the introduction of high-speed Internet service. Its absence informs us of the value the EASSy cable brings to the Mogadishu economy: $10 million/day. Presently, Mogadishu is lauded as one of the fastest-growing cities in the world and is enjoying a resurgent economy, primarily due to the withdrawal of Al-Shabaab but also due to improved telecommunication services, the lifeblood of a modern economy. Had it not been for the heroic work of the dedicated telecommunications professionals in Mogadishu in 2013 and 2014, this service might never have been established.



Who Controls the Internet

The title of the paper Who controls the Internet? Analyzing global threats using property traversal graphs is enough to ensnare any Internet researcher. The control plane for a number of attacks, as the paper points out, is the DNS, due to the role it plays in mapping names to resources. MX records in the DNS control the flow of mail, CNAME records are used to implement content delivery network (CDN) services, and TXT records are used to confirm access to and control over a namespace when implementing third-party services. This post will cover an interesting case where control is exercised first via the DNS and then using BGP.

Below the DNS, in the depths of Internet plumbing, is the lizard brain of Internet routing, governed by the Border Gateway Protocol (BGP). BGP routing is commonly described as “hot potato” routing. BGP conversations occur between autonomous systems (ASes), each identified by an autonomous system number (ASN). An ASN represents a system of networks and the policy associated with their routing. ASNs are issued by Regional Internet Registries (RIRs), which receive blocks of AS numbers to hand out from the Internet Assigned Numbers Authority (IANA).

To be part of the Internet, an AS connects to at least one other network, and the two exchange network information with each other. A network operator advertises which networks are accessible through the operator’s networks, including both networks that originate within the operator’s AS and networks that are reachable by passing traffic through that network. The advertisements are BGP announcements, and they say (very roughly), “I will carry traffic to AS-number, and I will send it from here through these other networks.” The set of other networks could be empty, in which case the network is a peer of the target network.
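The announcement process just described can be sketched as a toy path-vector computation. The topology and ASNs below are invented for illustration (real BGP adds policy, local preference, prefix filters, and much more); each AS keeps the shortest AS path it hears that does not already contain its own number.

```python
# A toy path-vector model of BGP announcement propagation. Illustrative only:
# the ASNs and peering links are made up, and real BGP applies far more policy.
from collections import defaultdict

# Undirected peering links between hypothetical ASes
links = {(64500, 64501), (64501, 64502), (64500, 64503), (64503, 64502)}
neighbors = defaultdict(set)
for a, b in links:
    neighbors[a].add(b)
    neighbors[b].add(a)

def propagate(origin_asn, prefix):
    """Flood an announcement; each AS keeps the shortest AS path it hears."""
    best = {origin_asn: [origin_asn]}      # ASN -> AS path toward the prefix
    frontier = [origin_asn]
    while frontier:
        nxt = []
        for asn in frontier:
            for peer in neighbors[asn]:
                if peer in best[asn]:      # loop prevention: own ASN in path
                    continue
                path = [peer] + best[asn]
                if peer not in best or len(path) < len(best[peer]):
                    best[peer] = path
                    nxt.append(peer)
        frontier = nxt
    return best

routes = propagate(64502, "203.0.113.0/24")
# AS64500 reaches the prefix via AS64501 or AS64503: both are three AS hops
print(routes[64500])
```

The multiple equal-length paths AS64500 learns here are exactly the redundancy that makes the Internet resilient, and also what a hijacker exploits when injecting announcements for someone else's prefix.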
The advertisements establish a number of potential ways for a packet to travel from originating AS to destination AS, which is part of what makes the Internet as a whole more resilient than many other kinds of network. The names — stored and distributed by the DNS — and the numbers, IP addresses and autonomous system numbers, we have come to rely on are only useful if they are properly resolved or routed. What better way to understand control than with an example of the system’s shortcomings: the hijacking of an autonomous system, with the IP space it routes leased or lent out for nefarious purposes? We will review the commandeering of AS34991 Wireless Network Solutions.

The current theory is that control over AS34991 was seized when someone registered the company’s expired domain name on April 11, 2017. We can confirm that the domain reappeared in passive DNS on April 12, 2017, after not having been observed in the DNS for some time. Looking for more detail, we reviewed the company and contact information available on the website. The name, address, and phone number listed in the DNS registration data directory service (RDDS, currently provided by whois) don’t match the details in the RIPE NCC database or on the website. With control over the contact email address, the actor gained control over the autonomous system. (As an aside, we note that this is probably a flaw in the RIR access-recovery policy.)

Control over a registered AS is a start, but to make use of it the actor needs to establish peering. Connectivity was established via an Internet exchange in Bulgaria (BIX.BG) and peering with AS206776 Histate Global Corp. With these steps executed, the actor controlled AS34991 and could start making announcements to peers about the IP space it is responsible for routing. It came to our attention in late May that AS34991 had begun announcing a collection of LACNIC-registered networks that were not otherwise seen or routed on the Internet.
These ranges include networks owned by a Colombian university, telecom providers, and other businesses. We reached out to the owners of the IP space being announced but received no reply. On June 5th, the hijacking was mentioned on the NANOG (North American Network Operators Group) mailing list, where one list member asked how something like this is actually orchestrated. The following day, June 6th, a mayday was sent to the RIPE Anti-Abuse Working Group mailing list mentioning the hijacking of the ASN and specifically calling out the victim networks. The author of that post noted that the commandeered IP space was being sub-leased to spammers.

This brings us back to the question posed by the research paper: who controls the Internet? In the case of BGP, control over the network is distributed; individual ASes can announce and filter as they see fit, with consensus held amongst peers. The terms and agreements are maintained in routing tables, with alterations passed amongst peers. In the event of such a hijack, the path to resolution is unclear. RIRs operate according to community-established policies and are not in a position to act contrary to those policies as long as they are being followed. The community has historically avoided rules that permitted shutdowns due to content, for the obvious reason that RIRs would then become global censors. Similarly, IXes generally work according to a community agreement that explicitly disallows the IX itself from making choices about what content is allowed. IX members can, of course, refuse to accept any traffic they like; but the IX itself needs to be neutral, or it can’t perform its function.

The promise of a solution relies on the implementation and wider adoption of a cryptographically secure system. The technical term for this architecture is Resource Public Key Infrastructure (RPKI).
When an autonomous system announces that it is originating routes for a network, RPKI gives its peers a means to verify that the autonomous system has the approval of the owner to do so. This is done through resource certificates (X.509), issued by the RIR to prove holdership of the networks an operator seeks to originate. The resource certificate is a cryptographic proof of ownership anchored at the RIR. The networks are then associated with an ASN to create a Route Origin Authorization (ROA), which is signed with the resource holder’s private key, creating a cryptographic proof. Had RPKI been deployed, then when AS34991 started announcing networks owned by the Colombian university, telecom providers, and others, peers would have rejected the routes because they lacked valid ROAs signed by the owners of the commandeered networks.

Who controls the Internet? The recent hijacking of an autonomous system shows how complex this question is to answer. By commandeering an autonomous system, you can impact Internet routing by announcing someone else’s network as if it were your own. At the same time, the autonomous system itself was taken over by gaining access to a domain and configuring MX records. The IP routing layer introduces additional nuances, as well as questions about the sphere of impact and control: variables such as network ownership, route leaks, and propagation (the number of peers impacted) all increase the complexity of defining control.

_________________________________________________

Author’s note: After this post went live I received some valuable feedback and want to make a clarification. The statement “peers would have rejected the routes because they lacked valid ROAs signed by the owners” is incorrect as things stand today. Currently, network operators most likely configure their routers to prefer the valid route rather than to drop unvalidated routes.
RFC 7115 clarifies that deployment and adoption of RPKI are expected to take some time: “As origin validation will be rolled out incrementally, coverage will be incomplete for a long time. Therefore, routing on NotFound validity state SHOULD be done for a long time.” In the future, once RPKI deployment is wide enough, configurations are tested, and infrastructure failure modes are vetted, the default might be changed to block such routes.
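The three validity states at issue here (valid, invalid, and NotFound) can be sketched with a minimal origin-validation routine in the spirit of RFC 6811. The ROA table, prefixes, and ASNs below are invented for illustration; a real validator builds its table from signed objects fetched from the RIR trust anchors.

```python
# A minimal sketch of RPKI route-origin validation (RFC 6811 semantics).
# The ROA entries and ASNs are hypothetical, for illustration only.
from ipaddress import ip_network

# Hypothetical validated ROA table: (authorized prefix, maxLength, origin ASN)
roas = [
    (ip_network("203.0.113.0/24"), 24, 64500),
]

def origin_validity(prefix: str, origin_asn: int) -> str:
    """Classify an announcement as valid, invalid, or notfound."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in roas:
        if announced.subnet_of(roa_prefix):   # some ROA covers this prefix
            covered = True
            if announced.prefixlen <= max_len and origin_asn == roa_asn:
                return "valid"
    return "invalid" if covered else "notfound"

print(origin_validity("203.0.113.0/24", 64500))   # legitimate origin -> valid
print(origin_validity("203.0.113.0/24", 64666))   # wrong origin -> invalid
print(origin_validity("198.51.100.0/24", 64666))  # no covering ROA -> notfound
```

Per the RFC 7115 guidance quoted above, operators today typically prefer valid routes while still accepting NotFound ones, rather than dropping everything that lacks a ROA.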
