In the first part of this article series, Alejandro Gervasio explained how the XMLHttpRequest object can be used to flood a targeted server with GET requests in order to launch denial of service attacks. In this article, he shows how HTTP POST requests, commonly used on Web forms to collect user data, can be automated in the same way, again leaving your system vulnerable to attack. With the information you learn from this series, you should be able to build more robust and safer Web applications, making your system less of a target.
As a result, the snippet was capable of sending multiple HTTP requests in asynchronous mode, potentially causing heavy server overloads and eventually complete system hangs.
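For reference, the iterative pattern described above can be sketched as follows. This is not the original snippet from part one; the function names and the cache-busting query string are illustrative, and the request-sending logic is passed in as a callback so the loop itself can be shown (and exercised) outside a browser.

```javascript
// Illustrative sketch only: fires `count` asynchronous GET requests
// through whatever sender function is supplied. In a browser, the
// sender would wrap XMLHttpRequest; here it is parameterized so the
// iteration logic stands on its own.
function floodWithGets(url, count, sendRequest) {
  for (let i = 0; i < count; i++) {
    // A unique query string defeats caching, so every request
    // actually reaches the server.
    sendRequest(url + '?nocache=' + Date.now() + '-' + i);
  }
}

// Browser-only sender (assumed environment, not part of the original code):
function xhrGet(fullUrl) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', fullUrl, true); // true = asynchronous mode
  xhr.send(null);
}
```

Because each request is asynchronous, the loop finishes almost instantly while the requests pile up at the server, which is precisely what makes this pattern dangerous.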
From an attacker’s point of view, it’s extremely easy to use HTTP-based hacking tools to launch attacks against unprotected websites, through scripts that use iteration as the core logic for firing harmful requests. After all, a Web server is an inherently public system with limited resources, so the idea of using brute force techniques to exhaust those resources is fairly logical.
Generally speaking, if HTTP GET requests can easily be turned into an automated process, the same concept applies to POST requests. As you know, POST requests are commonly used on Web forms as the default method of collecting user data, and precisely because of their inherent public access, they’re one of the most vulnerable points in the structure of a website.
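To see why automating POST requests is no harder than automating GET requests, consider that a form submission is just a URL-encoded string sent in the request body. The helper names below are illustrative, not from the original series:

```javascript
// Builds an application/x-www-form-urlencoded body from a plain object,
// exactly the format a browser produces when a form is submitted.
// (Helper name is illustrative; any script can do this in a few lines.)
function encodeFormData(fields) {
  return Object.keys(fields)
    .map(key => encodeURIComponent(key) + '=' + encodeURIComponent(fields[key]))
    .join('&');
}

// Browser-only sketch of posting that body the same way a form would
// (assumed usage, mirroring the GET technique from part one):
function xhrPost(url, fields) {
  const xhr = new XMLHttpRequest();
  xhr.open('POST', url, true);
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.send(encodeFormData(fields));
}
```

Nothing in this exchange proves that a human filled out a form, which is why a server that trusts POST data implicitly is exposed.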
Despite the fact that some methods are currently applied to make Web forms safer, plenty of websites still expose themselves by making the wrong assumption that their forms will never be used for hacking purposes. Of course, the situation becomes more critical when form data is used directly to add or modify sensitive information in some way, without strict server-side validation being performed.
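As a minimal illustration of the kind of strict server-side validation just mentioned, the sketch below checks submitted fields against explicit rules before they touch anything sensitive. The field names and patterns are assumptions made for the example, not a prescription:

```javascript
// Minimal server-side validation sketch: never assume a request came
// from your own form. Field names and rules are illustrative only.
function validateSignup(fields) {
  const errors = [];
  if (typeof fields.name !== 'string' ||
      !/^[A-Za-z ,.'-]{1,60}$/.test(fields.name)) {
    errors.push('invalid name');
  }
  if (typeof fields.email !== 'string' ||
      !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email)) {
    errors.push('invalid email');
  }
  return errors; // an empty array means the submission passed
}
```

The point is that every rule lives on the server; client-side checks can always be bypassed by the automated requests shown earlier.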
However, even where strict form validation is carried out, it’s possible to emulate form submissions that existing verification mechanisms consider valid and genuine. Given this critical condition, different techniques are applied to reduce hacking possibilities, ranging from noisy image generation and on-the-fly creation of Web pages (mostly using the DOM) to cryptographic methods, or a combination of these techniques.
As I explicitly said in the first part of this series, this tutorial is not intended to promote the use of hacking techniques. It simply demonstrates how a potential attack can be launched against a website, in order to encourage developers to build safer and more efficient Web programs.
With the preliminaries out of the way, let’s get started.