Tuesday, January 25, 2011

Myth Breaker - The Best Open Source Web Application Vulnerability Scanner

(The original benchmark post - comparison of 43 web application vulnerability scanners:

http://sectooladdict.blogspot.com/2010/12/web-application-scanner-benchmark.html)

It’s been a couple of weeks since the initial benchmark was published, and I have used that time to contact most of the vendors and to reach some conclusions as to which tool combinations are ideal for each task.

I believe that those of you who use these tools on a daily basis will find my conclusions interesting.

Please note that the conclusions refer to the condition of the tools on the day the benchmark was released (see the full explanation at the end of the post).

Glossary

AND – combining the tools is required to obtain the best results.

OR – using either one of the tools will provide nearly identical results.

AND/OR – it is currently unknown if combining them will provide additional benefits.

SAFE scan – a scan method in which the tester selects which URLs to scan, in order to prevent the scanner from accessing links that could delete data, lock user accounts or cause any other unintended damage (generally requires the scanner to have a proxy, manual crawling, URL-file parsing or pre-configured URL restriction module); recommended when scanning the internal section of an application that resides in a production environment. A minimal sketch of this URL-filtering idea appears after the glossary.

UNSAFE scan – a scan method that scans all the URLs, without any restrictions or limitations; recommended when scanning the public section of an application, and when scanning the internal section of an application that resides in a testing/development environment.
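
To make the SAFE scan concept concrete, here is a minimal sketch (in Python, not tied to any of the benchmarked tools) of the kind of URL filtering such a scan relies on; the patterns and URLs are illustrative assumptions, not taken from the benchmark.

```python
# A minimal sketch of SAFE-scan URL filtering, assuming a crawled URL list.
# The patterns below are illustrative; tune them to the target application.
import re

# Links that could delete data, lock accounts, or end the session.
DANGEROUS = re.compile(r"delete|remove|drop|logout|signoff|lock", re.IGNORECASE)

def safe_urls(urls):
    """Return only the URLs that look safe for automated scanning."""
    return [url for url in urls if not DANGEROUS.search(url)]

if __name__ == "__main__":
    crawled = [
        "http://target.example/app/view?id=1",
        "http://target.example/app/delete?id=1",  # would destroy data
        "http://target.example/app/logout",       # would end the session
    ]
    for url in safe_urls(crawled):
        print(url)  # feed this reduced list to the scanner's URL-file module
```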

The Ideal Combination of Tools (Relevant to the release date of the initial benchmark – 26/12/2010):

(Constructed according to the cases detected by each tool, and according to tool capabilities and application scope restrictions)

Initial Public Scan – initial scan on the application’s public (unauthenticated) section
(Purpose: gather as many “Low Hanging Fruit” exposures as possible with a minimal amount of false positives)

  • Reflected XSS: Netsparker AND Acunetix AND N-Stalker AND SkipFish (nearly false-positive-free combination)
  • SQL Injection (MySQL): ProxyStrike AND WebCruiser (nearly false-positive-free combination)

Internal Scan – Unsafe – the application’s internal (authenticated) section

  • Reflected XSS: Netsparker AND Acunetix AND SkipFish (nearly false-positive-free combination)
  • SQL Injection (MySQL): Wapiti (verification with other tools is recommended to reduce false positives – ProxyStrike AND WebCruiser, in addition to one of the following: W3AF/Andiparos/ZAP/Netsparker/Sandcat/Oedipus)

Internal Scan – Safe – the application’s internal (authenticated) section
(Method: scan internal application pages without activating any delete, logout or other dangerous operations)

  • Reflected XSS: ZAP AND W3AF (safe combination with relatively efficient accuracy)
  • SQL Injection (MySQL): W3AF AND Andiparos/Paros AND Oedipus AND ProxyStrike

Additional Public Scan – detect additional potential exposures that require manual verification and aren’t covered by the previous tools (public section)

  • Reflected XSS: ProxyStrike OR Sandcat (Grabber detects 1-2 additional POST cases – optional)
  • SQL Injection (MySQL): Wapiti

2nd Internal Scan – Unsafe – detect additional potential exposures that require manual verification and aren’t covered by the previous tools

  • Reflected XSS: ProxyStrike OR Sandcat
  • SQL Injection (MySQL): Wapiti (no substantial change, so there’s no need to run another scan)

2nd Internal Scan – Safe – detect additional potential exposures that require manual verification and aren’t covered by the previous tools
(Method: scan internal application pages for additional exposure instances without activating any delete, logout or other dangerous operations)

  • Reflected XSS: ProxyStrike
  • SQL Injection (MySQL): W3AF AND Andiparos/Paros AND Oedipus AND ProxyStrike (no substantial change, so there’s no need to run another scan)

Complementary Scan for Additional Exposures – scan the application with scanners that have a wider range of features, to cover additional security flaws

  • Reflected XSS and SQL Injection (MySQL): W3AF AND/OR Arachni AND/OR Skipfish AND/OR Sandcat
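
To illustrate what “AND” and the verification notes above mean operationally, here is a minimal sketch (Python, with made-up findings; the normalization to (url, parameter, issue) tuples is my own assumption, not any scanner’s real output format) of merging several scanners’ results for coverage, and of keeping only cross-confirmed findings to cut false positives.

```python
# A minimal sketch of combining scanner results, assuming each report has
# been normalized to (url, parameter, issue) tuples; the data is made up.

def merge_findings(*reports):
    """'AND' in the table above: union of findings, for maximum coverage."""
    merged = set()
    for report in reports:
        merged.update(report)
    return merged

def cross_confirmed(primary, *verifiers):
    """Verification: keep only the primary findings another tool also saw."""
    seen_elsewhere = merge_findings(*verifiers)
    return {finding for finding in primary if finding in seen_elsewhere}

if __name__ == "__main__":
    netsparker = {("/search", "q", "RXSS")}
    skipfish = {("/search", "q", "RXSS"), ("/item", "id", "RXSS")}
    print(merge_findings(netsparker, skipfish))   # coverage-oriented union
    print(cross_confirmed(skipfish, netsparker))  # false-positive reduction
```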

Notable Open Source & Freeware Tools – SQL Injection Detection

The highest SQLi detection ratio among open source & freeware tools belongs to Wapiti, currently the undisputed winner in this category.

A bit behind Wapiti were AndiParos, ZAP and Paros Proxy (the first two are forks of the original Paros project), followed closely by Netsparker and W3AF (two tools that were misled by fewer false positive test cases than all of the tools described so far: roughly 30%, compared to 40% or 50%).

* It is important to mention that Netsparker CE 1.5 does not contain Netsparker’s Blind SQL Injection module (disabled in this version); it only includes the regular SQL Injection module and the Boolean SQL Injection module.

However, we cannot ignore the fact that the following tools had pretty decent accuracy with zero false positives(!): WebCruiser (55.88%) and ProxyStrike (52.21%), making them ideal tools for an initial scan (Mini MySqlat0r and Scrawler had zero false positives as well, but with lower accuracy).

Notable Open Source & Freeware Tools – XSS Detection

The highest XSS detection ratio belongs to Sandcat, which detected nearly 100% of the overall test cases (although, like ProxyStrike and Grabber, it was misled by a few extra false positive test cases).

The highest XSS detection ratio among open source tools (and the 2nd best in total) belongs to ProxyStrike (Grabber detected more POST test cases, but had a higher false positive ratio and did not detect GET cases).

The best overall XSS detection ratio (when the low number of false positives is taken into account) belongs to Netsparker CE (63.64%, and 3rd in the efficiency order, right after ProxyStrike), followed closely by N-Stalker and Acunetix FE; since Skipfish and these tools each detect test cases that the others miss, combining them is ideal for initial scans, as they all have zero false positives!

The best overall XSS detection ratio (when the low number of false positives is taken into account) among open source tools belongs to WebSecurify.

The best HTTP GET XSS detection ratio (when the low number of false positives is taken into account) among open source tools belongs to XSSer.

The following open source tools had XSS detection modules that were free of false positives (while still having relatively efficient detection ratios): Grendel-Scan (GET) and Skipfish (Secubat had zero false positives as well, but its detection ratio was a bit lower).

Notes

  • When using ProxyStrike for the initial scan, it’s probably best to use an external spider instead of the built-in spider (e.g. use ProxyStrike as an outgoing/upstream proxy for Burp Suite FE or Paros/ZAP/Andiparos, and then run the spider feature of the external tool through ProxyStrike); see the sketch after these notes.

  • As mentioned before, the conclusions reflect the condition of the various tools on the date the initial benchmark was published. Since the benchmark, many vendors have released new versions (some even in response to the benchmark), so the list of conclusions will change as soon as the next benchmark is released; I know for a fact that some vendors invested so much effort in improving their detection modules that some of the new versions reach nearly 100% detection ratio (but since I don’t have updated statistics, we’ll have to wait).
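
As a rough illustration of the first note, this sketch drives HTTP requests through a local intercepting proxy so its passive analysis sees the traffic while an external tool does the crawling; the listener address and port are assumptions (point them at whatever ProxyStrike is actually listening on), and the target URL is a placeholder.

```python
# A minimal sketch of routing traffic through a local intercepting proxy
# (e.g., ProxyStrike); the listener address and target URL are assumptions.
import urllib.request

PROXY = {"http": "http://127.0.0.1:8008"}  # adjust to the proxy's listener

opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXY))

# Every page fetched this way also passes through the proxy's plugins,
# so an external spider using this channel feeds the proxy's analysis.
response = opener.open("http://target.example/app/")
print(response.status, len(response.read()))
```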

Conclusions

So… it seems that I didn't find “the best web application vulnerability scanner” after all… but I did find combinations of open source & freeware tools that get pretty good results.

As I mentioned in previous posts, my work is only beginning.

Various open source vendors have already released new versions that should be tested; tools that were improperly executed (or had a bug) should be retested as soon as their issues are mitigated; additional research has led me to discover a couple of additional open source web application scanner projects; and at least one new open source web application scanner was released in the last couple of weeks (and I haven’t even mentioned commercial scanners).

Time to get back to work…

Tuesday, January 11, 2011

Follow Up & Clarifications

I’ve been pretty busy trying to contact the various vendors and deliver materials that they can use for QA & development, and I must mention that so far, every vendor/developer I have contacted has responded kindly; many reacted with excitement and have already started enhancing their tools (which is GREAT news for all of us).

I managed to find the time to contact about 18 vendors, and hopefully I’ll manage to contact more in the following weeks (25 left to go). This process requires me to analyze the benefits of each vendor’s tools and, as a result, is more time-consuming than I originally thought; however, I believe this process will soon lead me to some additional interesting conclusions and insights, which I’ll publish separately.

In the process of contacting the vendors, I realized that I have neglected some of my duties and forgot to publish some important clarifications:

  • Although the test cases implemented as “False Positives” are by no means vulnerable to SQL Injection or Cross Site Scripting, some of them still fall into categories of information that should be presented in the report under the context of another type of exposure:

o Pages that disclose sensitive information / exceptions (some SQL Injection false positive test cases are meant to simulate SQL errors that do not derive from user-originating input, such as connection failures, etc.); a minimal sketch of this category appears after this list.

o Pages that fall under the category of insecure coding practices (some of the False RXSS & SQLi pages).

  • Some tools are still in early beta, and some haven’t even published an official alpha version (aidSQL, iScan, and some of the other tools that had zero accuracy); the accuracy of these tools was not really audited, due to limitations or bugs that will surely be mitigated in future versions. The benchmark will be updated as soon as the tool vendors release a new stable version.
  • The execution of certain tools that were reported as having zero accuracy failed due to bugs or configuration flaws, not accuracy-related issues; these tools include SQLMap, aidSQL, VulnDetector, and a couple more. I’m currently working with the various vendors to figure out how to execute them properly (or how to work around the specific bugs), so the test will actually reflect their accuracy level.
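
For clarity, here is a minimal sketch (my own illustration, not one of the benchmark’s actual test cases) of the first false-positive category above: a page that emits a database-style error regardless of user input, which error-string-matching scanners may wrongly flag as SQL injection.

```python
# A minimal sketch of a false-positive page: the "SQL error" below is
# printed unconditionally and never derives from user input, yet scanners
# that match on error strings may still report SQL injection here.
from http.server import BaseHTTPRequestHandler, HTTPServer

class FalsePositivePage(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Warning: mysql_connect(): Can't connect to MySQL server"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # same output no matter what was requested

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), FalsePositivePage).serve_forever()
```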

As a result, I believe that the next benchmark is going to be performed sooner than I planned;

It will probably include the same results alongside the corrected scans of the tools that had execution issues (particularly the SQL injection tools), and maybe additional enhancements (under discussion).

I wish you all a Happy New Year :)