Its Safe-PenTest feature lets you explore vulnerabilities in your Web Applications while keeping them secure. No backups are needed before a PenTest: we guarantee the tool will preserve your system and database integrity.
Dynamic Reviewer DAST provides a robust and stable framework for Web Application Security Testing, suitable for Security Analysts, QA engineers and Developers alike. It supports triaging False Positives and False Negatives, and offers an easy-to-use Web GUI, Advanced Scan and Enterprise Reporting capabilities.
Dynamic Reviewer provides two usage modes:
White Box mode. It performs authentication before starting the scan and provides the following login modes:
Form-Based Authentication: log in with user and password via a Web form. You can configure more than one user; all of them will be tested.
JSON-Based Authentication: submit a JSON object with the credentials.
Token-Based Authentication: modify the request headers to insert tokens.
Script-Based Authentication: upload and execute a custom login script.
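As an illustration of JSON-Based Authentication, the sketch below builds a login request carrying the credentials as a JSON body. The endpoint path and the "username"/"password" field names are hypothetical; in practice you adapt them to the target application.

```python
import json
import urllib.request

def build_login_request(base_url, username, password):
    """Build a JSON-based authentication request (illustrative sketch).

    The "/api/login" path and the credential field names are
    hypothetical placeholders, not a fixed Dynamic Reviewer schema.
    """
    body = json.dumps({"username": username, "password": password}).encode("utf-8")
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/api/login",  # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_login_request("https://target.example", "alice", "s3cret")
```

The request is built but not sent here; a scanner would submit it and keep the resulting session cookie or token for the authenticated scan.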
Both on-premises and Cloud installations can connect to the target Web Application in different modes:
Direct. Dynamic Reviewer reaches the target Web Application through a direct Internet connection.
Through Proxy. A proxy is used to reach the target Web Application. You can configure the Proxy URI, Proxy TCP Port, Proxy User and Proxy Password.
SSH Tunnelling. A temporary SSH key is automatically generated for the current scan. The user can download it and execute the commands shown on screen, which create an SSH tunnel to the target Web Application.
Once the scan has finished, you get a list of Findings. You can:
Suppress a Finding Category (example: all Blind SQL Injection issues)
Suppress one or more Findings inside a Category
Add Comments to the entire scan, to a Finding Category, or to a single Finding
All vulnerabilities reported by the above OSS tools are collected, correlated and included in the Dynamic Reviewer results.
In addition to the tools listed above, Dynamic Reviewer provides its own Security Scan Engine, and you can also import results from third-party Security Scanners in order to cover possible False Negatives.
Usage of the Stream Editor (sed) for pattern matching: Privilege Escalation, exploiting sudo/administrator rights, DirtyPipe (CVE-2022-0847), Windows Privilege Escalation: PrintNightmare.
Relevant information includes:
Page DOM, as HTML code.
With a list of DOM transitions required to restore the state of the page to the one at the time it was logged.
Original DOM (i.e. prior to the action that caused the page to be logged), as HTML code.
With a list of DOM transitions.
Data-flow sinks -- Each sink is a JS method which received a tainted argument.
Parent object of the method (e.g. DOMWindow).
Method signature (e.g. decodeURIComponent()).
With the identified taint located recursively in the included objects.
Method source code.
Execution flow sinks -- Each sink is a successfully executed JS payload, as injected by the security checks.
Several frameworks are supported, such as Cordova/PhoneGap and Node.js.
In essence, you have access to roughly the same information your favorite debugger (for example, Firebug) would provide if you had set a breakpoint at exactly the right time to identify an issue.
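The data-flow sink reporting described above rests on taint analysis: attacker-controllable data is marked at its source and the mark is checked when it reaches a sensitive method. The toy model below sketches the idea in Python; the class, sink name and payload are all illustrative, not Dynamic Reviewer internals.

```python
class Tainted(str):
    """Toy model: a string marked as attacker-controllable data."""
    def __add__(self, other):
        # Taint propagates through concatenation on either side.
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        return Tainted(str.__add__(other, self))

def sink_write_html(fragment):
    """Toy sink: report whether tainted data would be executed as HTML."""
    return {
        "sink": "DOMWindow.document.write",   # illustrative sink name
        "tainted": isinstance(fragment, Tainted),
        "value": str(fragment),
    }

# Data taken e.g. from location.hash is marked tainted at the source.
user_input = Tainted("<img src=x onerror=alert(1)>")
finding = sink_write_html("prefix: " + user_input)
```

Because the taint mark survives the string concatenation, the sink reports a finding even though the payload was mixed with benign data.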
DOM Security Issues
Dynamic Reviewer detects the following DOM Security Issues:
Code Injection - Client Side
Code Injection - PHP input wrapper
Code injection - Timing
File Inclusion - Client Side
OS Command Injection - Client Side
OS Command Injection - Timing
Remote File Inclusion Client Side
XSS - DOM
XSS - DOM - Script Context
XSS - Event
Data from attacker controllable navigation based DOM properties is executed as HTML
Data from attacker controllable URL based DOM properties is executed as HTML
Non-HTML format Data from DOM storage is executed as HTML
HTML format Data from DOM storage is executed as HTML
Data from user input is executed as HTML
Non-HTML format Data taken from external site(s) (via Ajax, WebSocket or Cross-Window Messages) is executed as HTML
HTML format Data taken from external site(s) (via Ajax, WebSocket or Cross-Window Messages) is executed as HTML
Non-HTML format Data taken from across sub-domain (via Ajax, WebSocket or Cross-Window Messages) is executed as HTML
HTML format Data taken from across sub-domain (via Ajax, WebSocket or Cross-Window Messages) is executed as HTML
Non-HTML format Data taken from same domain (via Ajax, WebSocket or Cross-Window Messages) is executed as HTML
HTML format Data taken from same domain (via Ajax, WebSocket or Cross-Window Messages) is executed as HTML
Weak Hashing algorithms are used
Weak Encryption algorithms are used
Weak Decryption algorithms are used
Cryptographic Hashing Operations were made
Encryption operations were made
Decryption operations were made
Potentially Sensitive Data is leaked (via HTTP, Ajax, WebSocket or Cross-Window Messages)
Potentially Sensitive Data is leaked through Referrer Headers
Data is leaked through HTTP
Data is leaked through WebSocket
Data is leaked through Cross-Window Messages
Data is leaked through Referrer Headers
Potentially Sensitive Data is stored on Client-side Storage (in LocalStorage, SessionStorage, Cookies or IndexedDB)
Data is stored on Client-side Storage (in LocalStorage, SessionStorage, Cookies or IndexedDB)
Cross-window Messages are sent insecurely
Cross-site communications are made
Communications across sub-domains are made
Same Origin communications are made
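The "Weak Hashing algorithms are used" finding boils down to classifying the hashing calls observed on the client side. The sketch below shows one possible classifier; the weak/acceptable split reflects common guidance (MD5 and SHA-1 are considered broken for security purposes) rather than Dynamic Reviewer's actual rule set.

```python
# Illustrative policy, not Dynamic Reviewer's internal list.
WEAK_HASHES = {"md5", "sha1"}

def classify_hash_calls(observed_calls):
    """Split observed hashing calls into weak and acceptable ones."""
    weak = [c for c in observed_calls if c.lower() in WEAK_HASHES]
    ok = [c for c in observed_calls if c.lower() not in WEAK_HASHES]
    return {"weak": weak, "ok": ok}

# Calls as they might be observed in a page's scripts.
report = classify_hash_calls(["MD5", "sha256", "sha1"])
```

A scanner would attach each weak entry to the script location where the call was observed, so the finding is actionable.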
Configuration options include:
Adjustable pool size, i.e. the number of browser workers to utilize.
Timeout for each job.
Worker TTL counted in jobs -- Workers which exceed the TTL have their browser process re-spawned.
Ability to disable loading images.
Adjustable screen width and height.
Can be used to analyze responsive and mobile applications.
Ability to wait until certain elements appear in the page.
Configurable local storage data.
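The options above might be combined into a scan profile like the following. Every key name here is hypothetical, chosen only to mirror the list; it is not Dynamic Reviewer's actual configuration schema.

```python
# Hypothetical scan profile combining the options listed above.
# Key names are illustrative, not Dynamic Reviewer's actual schema.
scan_profile = {
    "pool_size": 6,               # number of browser workers
    "job_timeout_seconds": 45,    # timeout for each job
    "worker_ttl_jobs": 250,       # re-spawn the browser after this many jobs
    "load_images": False,         # disable image loading to save bandwidth
    "screen": {"width": 390, "height": 844},          # mobile-sized viewport
    "wait_for_elements": ["#app", ".main-content"],   # CSS selectors to await
    "local_storage": {"feature_flag": "on"},          # pre-seeded localStorage
}

def validate_profile(profile):
    """Minimal sanity checks on a profile (illustrative)."""
    assert profile["pool_size"] > 0
    assert profile["job_timeout_seconds"] > 0
    assert profile["screen"]["width"] > 0 and profile["screen"]["height"] > 0
    return True
```

Setting a small viewport, as here, is how the screen width/height options support analyzing responsive and mobile applications.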
In addition to that, it also knows about which browser state changes the application has been programmed to handle and is able to trigger them programmatically in order to provide coverage for a full set of possible scenarios.
By inspecting all possible pages and their states (when using client-side code) Dynamic Reviewer is able to extract and audit the following elements and their inputs:
Along with ones that require interaction via a real browser due to DOM events.
Input and button groups which don't belong to an HTML <form> element but are instead associated via JS code.
Generic client-side elements which have associated DOM events.
JSON request data.
XML request data.
Web Security Issues
Dynamic Reviewer runs tests to identify all of the major web application security vulnerabilities, such as SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, and more. Dynamic Reviewer has an ever-growing list of tests that are run against the application and APIs to identify potential security vulnerabilities.
Dynamic Reviewer provides the following HTTP passive and active scan rules which find specific vulnerabilities. Dynamic Reviewer can discover the following OWASP ZAP Web Security Issues:
Passive scan rules review all HTTP requests and responses from the application, looking for indicators of security vulnerabilities. These scans do not change anything about the requests.
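A passive rule, by definition, only inspects traffic. The sketch below shows a toy rule that flags responses missing common security headers; the header list is a widely used baseline, not Dynamic Reviewer's actual rule set.

```python
# Toy passive scan rule: inspect a response without touching the request.
# The required-header baseline is a common recommendation, not a
# Dynamic Reviewer rule definition.
REQUIRED_HEADERS = ("X-Content-Type-Options", "Content-Security-Policy")

def passive_header_check(response_headers):
    """Return findings for security headers missing from a response."""
    present = {h.lower() for h in response_headers}
    return [f"Missing header: {h}" for h in REQUIRED_HEADERS
            if h.lower() not in present]

findings = passive_header_check({
    "Content-Type": "text/html",
    "X-Content-Type-Options": "nosniff",
})
```

Because the rule never modifies a request, it can be applied to every response the scanner observes at no extra cost to the target.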
Whenever Dynamic Reviewer obtains a fingerprint from observed traffic passing through any firewall, it identifies the Operating System and obtains some ancillary data needed for other analysis tasks.
For TCP/IP, the tool fingerprints the client-originating SYN packet and the first SYN+ACK response from the server, paying attention to factors such as the ordering of TCP options, the relation between maximum segment size and window size, the progression of TCP timestamps, and the state of about a dozen possible implementation quirks (e.g. non-zero values in "must be zero" fields). The metrics used for application-level traffic vary from one module to another; where possible, the tool relies on signals such as the ordering or syntax of HTTP headers or SMTP commands, rather than any declarative statements such as User-Agent. Application-level fingerprinting modules currently support HTTP, SMTP, FTP, POP3, IMAP, SSH, and SSL/TLS. Some of its capabilities include:
- Highly scalable and extremely fast identification of the operating system and software on both endpoints of a vanilla TCP connection - especially in settings where NMap probes are blocked, too slow, unreliable, or would simply set off alarms.
- Measurement of system uptime and network hookup, distance (including topology behind NAT or packet filters), and so on.
- Automated detection of connection sharing / NAT, load balancing, and application-level proxying setups.
- Detection of dishonest clients / servers that forge declarative statements such as X-Mailer or User-Agent.
Active scan rules, on the other hand, create and modify requests sent to the application, injecting test requests that surface vulnerabilities a passive scan would not catch.
Active scans are a more thorough way to test for vulnerabilities in your application, as the test suite injects requests that surface vulnerabilities. These scans are, however, actively attacking the application, which may include creating or deleting data.
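An active check, unlike a passive one, mutates requests before sending them. The sketch below generates one mutated URL per query parameter, appending a classic quote-based probe often used to surface SQL errors; the mutation strategy and payload are illustrative, not Dynamic Reviewer's actual checks.

```python
from urllib.parse import urlencode, urlsplit, parse_qsl, urlunsplit

# Illustrative active-scan mutation: inject a probe payload into each
# query parameter of a request URL. The quote pair is a classic
# SQL-error probe; real scanners use many payloads per vulnerability.
PROBE = "'\""

def mutate_query_params(url):
    """Yield one mutated URL per query parameter, probe appended."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query)
    for i, (name, value) in enumerate(params):
        mutated = params.copy()
        mutated[i] = (name, value + PROBE)
        yield urlunsplit(parts._replace(query=urlencode(mutated)))

mutations = list(mutate_query_params("https://target.example/item?id=1&sort=asc"))
```

The scanner would send each mutated request and inspect the response for error signatures or behavioral differences.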
Fuzzing is a technique of submitting large amounts of invalid or unexpected data to a target.
Dynamic Reviewer allows you to fuzz any request using:
A built-in set of Payloads. Payload Generators generate the raw attacks that the fuzzer submits to the target application.
Payload Processors, used to automatically change specific payloads before they are submitted.
Fuzz Location Processors, used to automatically change all of the payloads before they are submitted.
Custom scripts can be uploaded
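The generator/processor split above can be sketched as follows: a generator yields raw attack strings, and a processor transforms each payload before submission. Both the payload set and the URL-encoding processor are illustrative examples, not the tool's built-in definitions.

```python
import urllib.parse

def payload_generator():
    """Generator: yields raw attack strings (illustrative sample set)."""
    yield from ("<script>alert(1)</script>", "' OR '1'='1", "../../etc/passwd")

def url_encode_processor(payload):
    """Processor: URL-encode a payload before it is submitted."""
    return urllib.parse.quote(payload, safe="")

# The fuzzer pipeline: generate, process, then submit each payload.
processed = [url_encode_processor(p) for p in payload_generator()]
```

Chaining several processors (encoding, prefixing, hashing) over the same generator is what lets one payload set cover many injection contexts.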
Dynamic Reviewer is integrated with the following third-party Host Scanning tools:
The Dynamic Reviewer and third-party tool results are automatically correlated to produce a single result set and a unified report.
Dynamic Reviewer is a DAST tool: it detects the vulnerabilities of your site and lists all possible Exploits, but it will not execute those Exploits.
If you need a full Penetration Test that includes the Exploits, you need more than one tool.
The Penetration Testing process requires an extensive set of tools. These include network (Host Scanning) and vulnerability scanning software, tools that can launch specific attacks and exploits such as brute-force attacks or SQL injections, plus custom reporting and a unified dashboard.
The final and most important stage of a Penetration Test is the Enterprise Report. This is a detailed report to be shared with the target company’s security team. It documents the pentesting process, vulnerabilities discovered (including the ones at client-side), proof that they are exploitable, and actionable recommendations for remediating them.
Internal teams can then use this information to improve security measures and remediate vulnerabilities. This can include patching vulnerable systems. These upgrades include rate limiting, new firewall or WAF rules, DDoS mitigation, and stricter form validation.
Detailed Report. A standard, automatically-generated DAST technical report in PDF, Word, Excel and HTML formats, listing all the detailed information needed to identify and remediate the vulnerabilities. The fully customizable, ISO 9001-compliant Cover Pages can be saved as different Profiles.
You can upload two logos and define the ISO 9001 responsibility chain (Created By, Verified By, Approved By). You can add a Disclaimer Note, an ISO template code, the Confidentiality Level and a Document version.
Enterprise Report. Fully-customized, automatically-generated Executive Summary and Technical reports, starting from a customer-driven Form Template (in which customized tags are filled in) and a Word report template written in the preferred language (the template, in Word format, contains the custom tags).
Scan times using traditional tools can range from a few hours to a couple of weeks, or even more. This means that wasted time can easily pile up, even when we're talking about mere milliseconds per request/response.
Dynamic Reviewer benefits from excellent network performance thanks to its asynchronous HTTP request/response model. From a high-level perspective, asynchronous I/O means operations can be scheduled so that they appear to happen at the same time, which in turn means higher efficiency and better bandwidth utilization. That means:
Faster hyperlink processing
Faster numbering processing
Faster screenshot processing
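The asynchronous request/response model can be sketched with Python's asyncio: many "requests" are in flight at once, so total wall time tracks the slowest response rather than the sum of all of them. The network round trip is simulated with a sleep; this is a model of the scheduling idea, not Dynamic Reviewer's engine.

```python
import asyncio

async def fetch(url, delay):
    """Simulated HTTP round trip: the sleep stands in for network I/O."""
    await asyncio.sleep(delay)
    return (url, "200 OK")

async def crawl(urls):
    """Schedule all requests at once; they overlap instead of queuing."""
    tasks = [fetch(u, 0.01) for u in urls]
    return await asyncio.gather(*tasks)

results = asyncio.run(crawl([f"https://target.example/p{i}" for i in range(5)]))
```

With a synchronous model the five requests would take five round trips back to back; here they complete in roughly one, which is where the bandwidth-utilization gain comes from.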
It provides a high-performance environment for the tests that need to be executed while making adding new tests very easy. Thus, you can rest assured that the scan will be as fast as possible and performance will only be limited by your or the audited server’s physical resources.
Avoiding useless technical details, the gist is the following:
Every type of resource usage has been massively reduced — CPU, RAM, bandwidth.
CPU intensive code has been rewritten and key parts of the system are now 2 to 55 times faster, depending on where you look.
The scheduling of all scan operations has been completely redesigned.
DOM operations have been massively optimized and require much less time and overall resources.
Suspension to disk is now near-instant.
Previously, browser jobs could not be dumped to disk and had to run to completion, which could cause large delays depending on the number of queued jobs.
Default configuration is much less aggressive, further reducing the amount of resource usage and web application stress.
Talk is cheap though, so let's look at some numbers under Linux:
As you can see, the impact of the performance improvements becomes more substantial as the target's complexity and size increase, especially when it comes to scan duration and RAM usage; for the production site, the new engine consistently yielded better coverage, which is why it performed more browser jobs.
Runs fast on under-powered machines.
You can run many more scans at the same time.
You can complete scans many times faster than before.
If you’re running scans in the “cloud”, it means that it’ll cost you many, many times less than before.
ML is what enables Dynamic Reviewer to learn from the scans it performs and incorporate that knowledge, on the fly, for the duration of the audit.
It uses various techniques to compensate for the widely heterogeneous environment of web applications. This includes a combination of widely deployed techniques (taint-analysis, fuzzing, differential analysis, timing/delay attacks) along with novel technologies (rDiff analysis, modular meta-analysis) developed specifically for the framework.
This allows the system to make highly informed decisions using a variety of different inputs; a process which diminishes false positives and even uses them to provide human-like insights into the inner workings of web applications.
Dynamic Reviewer is aware of which requests are more likely to uncover new elements or attack vectors and adapts itself accordingly.
Also, components can individually force the Core Engine to learn from the HTTP responses they are about to induce, improving the chance of uncovering a hidden vector that would appear as a result of their probing.
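Of the techniques listed above, differential analysis is the easiest to illustrate: the scanner compares the response to a payload that should evaluate as logically true against one that should evaluate as false. The sketch below shows the core comparison; the payloads, responses and function name are hypothetical examples of the technique, not the framework's rDiff implementation.

```python
def looks_injectable(baseline, true_resp, false_resp):
    """Differential check: flag a parameter when the boolean-true
    response matches the baseline but the boolean-false response
    diverges, suggesting the input is being interpreted."""
    return true_resp == baseline and false_resp != baseline

# Hypothetical responses for id=7, "id=7 AND 1=1" and "id=7 AND 1=2".
baseline   = "<h1>Product 7</h1>"
true_resp  = "<h1>Product 7</h1>"
false_resp = "<h1>No results</h1>"
verdict = looks_injectable(baseline, true_resp, false_resp)
```

Real engines additionally normalize dynamic page regions (timestamps, nonces) before comparing, so that ordinary page noise does not produce false positives; that normalization step is exactly where the ML-driven knowledge described above pays off.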
DISCLAIMER: Because we make use of open-source components (OWASP ZAP, w3af, pWeb, p0f, wXf, OSVDB), we do not sell the product; instead, we offer a yearly subscription-based Commercial Support plan, plus our Commercial Security Scanner.
COPYRIGHT (C) 2015-2023 SECURITY REVIEWER SRL. ALL RIGHTS RESERVED.