DDoS Protection at Risk From App-Layer Attacks


On-premise DDoS defenses are at risk from application-layer DDoS attacks that consume large amounts of bandwidth, a combination that remains unusual, according to security researchers.

Security researchers have detected a large application-layer DDoS attack that uses a new technique to cripple DDoS defenses, one they warn could spell trouble for DDoS protection services and the web applications they defend.

The target was a Chinese lottery website protected by Imperva, a DDoS mitigation vendor. The attack peaked at 8.7 Gbps. That may seem modest at a time when DDoS attacks frequently scale beyond 100 Gbps, but the bandwidth is unprecedented for an application-layer attack.

Normally, DDoS attacks target the network layer. In such attacks, large streams of malicious packets across different network protocols are directed at the victim, with the objective of saturating all of its available bandwidth and rendering it inaccessible.

Unlike network-layer attacks, HTTP floods do not rely on the size of the packets they send to inflict damage. Instead, the goal is to exhaust the Web server's computing resources, its CPU and RAM, with the sheer number of requests the targeted web application must process. Once that limit is reached, the server simply stops answering new requests, resulting in a denial-of-service condition for legitimate clients.

Until recently, even the largest HTTP floods, at around 200,000 requests per second, did not consume more than 500 Mbps of bandwidth, given the small size of each request.
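A quick back-of-envelope calculation, using the figures above, shows why classic HTTP floods have such a small bandwidth footprint (the per-request size here is derived from those figures, not measured):

```python
# Back-of-envelope arithmetic for a classic HTTP flood,
# using the rate and bandwidth figures cited in the article.
RATE_RPS = 200_000        # requests per second
BANDWIDTH_BPS = 500e6     # 500 Mbps total attack footprint

bits_per_request = BANDWIDTH_BPS / RATE_RPS
print(f"~{bits_per_request:.0f} bits (~{bits_per_request / 8:.0f} bytes) per request")
# ~2500 bits (~312 bytes) per request -- roughly the size of a bare GET
# request with typical headers, which is why such floods exhaust a
# server's CPU and RAM long before they fill the network pipe.
```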

With this in mind, most companies build infrastructure in which an application can handle at most around 100 requests per second. Unless such applications are protected by an anti-DDoS service that proactively identifies and filters bogus requests, they can easily be disrupted.
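As an illustration of what "filtering bogus requests" can mean at its simplest, here is a minimal sketch of a per-client token bucket, assuming a budget of 100 requests per second per source IP (the capacity figure above). The names are ours, and a real anti-DDoS service layers reputation data, challenges, and behavioral analysis on top of throttling like this:

```python
import time
from collections import defaultdict

RATE = 100.0   # tokens refilled per second (assumed per-client budget)
BURST = 100.0  # maximum bucket size

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(client_ip: str) -> bool:
    """Return True if this client's request fits its rate budget."""
    bucket = buckets[client_ip]
    now = time.monotonic()
    # Refill tokens for the time elapsed since this client's last request.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False  # over budget: drop or challenge the request
```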

In this instance, the attack was launched from a botnet of computers infected with a malware strain called Nitol. The malware sent well-formed HTTP POST requests that mimicked the web crawler of the Baidu search engine and attempted to upload randomly generated large files to the server. This accounts for the attack's outsized bandwidth footprint: at 163,000 requests per second, 8.7 Gbps works out to an average of roughly 6.7 KB per request.
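One standard way to unmask this kind of crawler impersonation is a reverse-then-forward DNS check: Baidu documents that genuine Baiduspider traffic comes from IPs whose hostnames end in .baidu.com or .baidu.jp and resolve back to the same address, while bots that merely copy the User-Agent string fail the round trip. The sketch below is our illustration of that technique, not Imperva's published filter:

```python
import socket

def is_real_baiduspider(ip: str) -> bool:
    """Check whether an IP claiming to be Baiduspider passes a
    reverse-then-forward DNS confirmation."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
    except socket.herror:
        return False  # no PTR record at all
    if not hostname.endswith((".baidu.com", ".baidu.jp")):
        return False  # hostname is not in Baidu's crawler domains
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)  # forward-confirm
    except socket.gaierror:
        return False
    return ip in addresses  # hostname must resolve back to the same IP
```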

In a blog post, Imperva researchers said:

Application layer traffic can only be filtered after the TCP connection has been established. Unless you are using an off-premise mitigation solution, this means that malicious requests are going to be allowed through your network pipe, which is a huge issue for multi-gig attacks.
