Crawling Ajax-driven Web 2.0 Applications

by Shreeraj Shah
Infosecwriters, Oct. 2, 2017

Crawling web applications is one of the key phases of automated web application scanning. The objective of crawling is to collect all possible resources from the server in order to automate vulnerability detection on each of them. A resource that is overlooked during this discovery phase can mean vulnerabilities going undetected. The introduction of Ajax throws up new challenges [1] for the crawling engine, and new ways of handling the crawling process are required to meet them. The objective of this paper is to take a practical approach to this problem using rbNarcissus, Watir and Ruby.
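The core difficulty can be illustrated with a small sketch (the HTML snippets and helper below are hypothetical examples, not taken from the paper): a traditional crawler that scans static HTML for `href` attributes finds ordinary links, but an Ajax-driven page often wires its resources into JavaScript event handlers, where a static scan cannot see them.

```ruby
# Hypothetical illustration: a naive crawler that only scans static
# HTML for href attributes misses resources fetched via JavaScript.

def extract_hrefs(html)
  # Collect the targets of ordinary anchor links from raw HTML.
  html.scan(/href="([^"]+)"/).flatten
end

# A classic page exposes its resource directly in the markup...
classic_page = '<a href="/products.html">Products</a>'
# ...while an Ajax page hides it inside an onclick handler.
ajax_page    = %q{<a href="#" onclick="getResource('/products')">Products</a>}

puts extract_hrefs(classic_page).inspect  # the static link is found
puts extract_hrefs(ajax_page).inspect     # only "#" -- the real resource stays hidden
```

This is why the paper turns to a JavaScript parser (rbNarcissus) and a browser-driving library (Watir): the crawler must understand or execute the script, not merely read the markup.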
