The process of hardening our links and spreading client websites among CDNs is ongoing, but appears to be having the desired effect.
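For the technically curious, spreading client sites among CDNs can be spot-checked from the outside by comparing the addresses each hostname resolves to. The sketch below uses only the Python standard library; the hostnames in it are placeholders, not real client domains.

```python
import socket

def resolve_all(host):
    """Return the set of IPv4 addresses a hostname currently resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(host, None, socket.AF_INET)}

# Spot-check that sites resolve to distinct CDN address ranges.
# (Hostnames below are placeholders, not real client domains.)
for host in ["client-a.example", "client-b.example"]:
    try:
        print(host, sorted(resolve_all(host)))
    except socket.gaierror:
        print(host, "did not resolve")
```

Seeing the same small set of addresses across many client sites would indicate a single point of failure; distinct CDN ranges indicate the spreading is taking effect.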
Our network operations and systems teams have been working non-stop for the last ~36 hours toward a resolution of the Atlanta datacenter DDoS attack and resulting outage. Our hosting provider has acquired a dedicated transit link that is now directly connected to our network, and we are waiting for our transit provider to apply DDoS mitigation hardening, after which we believe Atlanta should be restored to full service. We will keep you updated as things progress.
Around two hours after bringing things back online, attacks on Atlanta resumed and are affecting the entire datacenter. At this time Atlanta is being taken back offline so we can attempt further mitigation.
Our Network Engineers are still working with our upstream provider on mitigating this latest DDoS attack. We’ll post an update here when we believe connectivity has normalized.
Update – Our upstream provider has informed us that they plan to restore network connectivity to our systems gradually over an undisclosed period of time. There is no ETA at the moment, but our engineers are working closely with theirs to accelerate this process as much as possible. We will provide another update as soon as more information becomes available.
I’d like to share some updates about the recent DDoS attacks.
I am one of several staff members who have been working around the clock on DDoS mitigation. While things are stable, I would like to take a moment to publicly address the large and frequent DDoS attacks that we have been receiving since Christmas Day.
It has become evident in the past two days that a bad actor is purchasing large amounts of botnet capacity in an attempt to significantly damage our business and that of our upstream providers.
The following is a partial list of attacks we have received in no particular order:
- Multiple volumetric attacks simultaneously directed toward all of our providers’ authoritative nameservers, causing DNS hosting outages
- Large volumetric attacks toward our colocation provider’s upstream interconnection points, overwhelming the router control planes and causing significant congestion/packet loss
- Large volumetric attacks toward Paradigm network infrastructure, overwhelming the router control planes and causing significant congestion/packet loss
All of these attacks have occurred multiple times. Over the course of the last week, we have seen over 30 attacks of significant duration and impact. As we have found ways to mitigate these attacks, the vectors used inevitably change.
As of yesterday afternoon, we had mostly hardened ourselves against the above attack vectors, but more continue to come. We are working with all of our technical partners, including our colocation providers, to prevent future attacks.
Once these attacks stop, we plan to share a complete technical explanation of what has been happening. Additionally, we will share with our clients the details of an ongoing project to significantly improve our internet connectivity and resiliency.
We would like to apologize for the lack of detail in some of our recent correspondence. Please know that we are dedicating all resources from multiple departments to stopping these attacks. We acknowledge the amount of downtime we’ve been experiencing is unacceptable, and we appreciate the understanding and support we have received over the past several days. We will share more information as our investigation continues.
President, Paradigm Consulting Co.
Identified – We’re currently engaged with our upstream provider in mitigating a large distributed denial-of-service (DDoS) attack targeting our infrastructure in Atlanta. Users may experience packet loss and connectivity issues as we work to resolve this. We thank you for your patience.