Quality Reviewer evaluates regressions and tracks changes in the source code using automated Software Metrics visualization (SW complexity, size and structure Metrics, Halstead Metrics, ISO 9126 maintainability, ISO 25010, Chidamber & Kemerer, SQALE), as well as Effort Estimation (APPW, AFP, QSM FP, SRM FP, COSMIC, COCOMO, Revic) and reporting features. It helps keep code entropy under control, whether in in-house development or outsourced maintenance projects.
Quality Reviewer is part of Security Reviewer Suite.
The information collected, analysed and visualised with the SQALE methodology is easy to comprehend and offers unparalleled insight into your software development. This facilitates communication at all levels, from IT directors to developers and vice versa.
How can a technical manager communicate the positive effects of his or her work if software quality remains largely invisible? How can budgeting departments, decision makers and internal customers be convinced of the necessity of quality- and productivity-enhancing measures, or of the complexity of a particular work request?
Quality Reviewer makes a significant contribution in this area. The SQALE-enhanced reporting feature provides an overview of your entire software landscape that even non-technical individuals can easily understand. Managers and decision makers can see evidence of the quality of the system and the productivity increases achieved, and can therefore be more easily convinced of measures such as software quality assurance. Conversely, developers and team leaders can show managers and directors what they have achieved.
Blockchain is a meta-technology on the internet that works as a decentralized database thanks to a peer-to-peer network of computers and people sharing a distributed ledger.
It consists of data structure blocks—which hold exclusively data in initial blockchain implementations, and both data and programs in some of the more recent implementations—with each block holding batches of individual transactions and the results of any blockchain executables. Each block contains a timestamp and information linking it to a previous block.
A transaction represents a unit of value that somebody owns and is willing to exchange for something (physical or not) with somebody else. This unit of value goes from owner A to owner B by broadcasting to the network that the amount on A's account goes down and the amount on B's account goes up. How do nodes in the network keep track of account balances? Ownership of funds is verified through links to previous transactions, which are called inputs.
Quality Reviewer can share the results anonymously using Blockchain, with the User's permission.
Existing electronic Quality systems, such as QSM and ISBSG, all suffer from a serious design flaw: they are proprietary, that is, centralized by design, meaning a single supplier controls the code base, the database and the system outputs, and supplies the browsing tools at the same time. The lack of an open-source, independently verifiable output makes it difficult for such centralized systems to acquire the trustworthiness required by enterprises and quality standard makers. The blockchain works as a secure transaction database that logs the audit quality results in a trustworthy way. The results are classified by Industry, Application Type and Size.
The software metrics data available on Blockchain can be used to assist you with:
The Static Confidence Factor is a measurement standard that combines the most important Quality Analysis results into a single value. It is calculated by collecting 20 Quality Metrics and 20 Anti-Patterns, classified into 5 Severity Levels. The lower the Static Confidence Factor, the higher the Application Quality. The Quality Index is derived from the Static Confidence Factor; both are provided by Quality Reviewer. Example of Quality Index:
Each Severity has a different weight, named Defect Probability (DP). It is based on two decades of field experience on the correlation between code Quality and Defects in production.
Violations (V) are out-of-range Metrics as well as Anti-Patterns found in the analyzed code, grouped into the Static Defect Count (SDC).
For each Severity:
SDC(severity) = (V(severity) / NViol) * DP(severity)
where: V is the number of Violations per severity, and DP is the Defect Probability per severity
NViol = total number of Violations
The Static Confidence Factor (SCF) is calculated as:
SCF = (SDC(Blocker) * 4) + (SDC(Critical) * 2) + (SDC(Major) * 2) + (SDC(Minor) * 1) + (SDC(Info) * 1)
Where: SDC is the Static Defect Count per severity.
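The SDC and SCF calculations above can be sketched as follows. This is a minimal illustration, not the vendor's implementation: the Defect Probability values per severity are assumed placeholders (the text only says they come from field experience), and the formula reads the 4/2/2/1/1 figures as per-severity weights.

```python
# Illustrative Defect Probability (DP) per severity -- assumed values,
# not Quality Reviewer's actual calibration.
DP = {"Blocker": 0.9, "Critical": 0.7, "Major": 0.5, "Minor": 0.3, "Info": 0.1}

# Per-severity weights used in the SCF sum (4/2/2/1/1, from the text).
WEIGHT = {"Blocker": 4, "Critical": 2, "Major": 2, "Minor": 1, "Info": 1}

def static_confidence_factor(violations):
    """violations: dict mapping severity -> number of Violations (V)."""
    n_viol = sum(violations.values())  # NViol: total number of Violations
    if n_viol == 0:
        return 0.0
    # SDC(severity) = (V(severity) / NViol) * DP(severity)
    sdc = {sev: (v / n_viol) * DP[sev] for sev, v in violations.items()}
    # SCF: weighted sum of the per-severity Static Defect Counts
    return sum(sdc[sev] * WEIGHT[sev] for sev in sdc)

example = {"Blocker": 2, "Critical": 5, "Major": 10, "Minor": 20, "Info": 3}
print(static_confidence_factor(example))
```

With these assumed DP values, a lower result still tracks the document's reading: fewer high-severity violations relative to the total push the SCF down.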
Further, in a single view, you can have a summary of Quality Violations for the entire Project:
A McCabe® IQ-style Kiviat graph can help show where your Quality issues are mainly located (Maintainability, Testability or Size). For each source file, all related Classes or Programs are listed with the detected Anti-Patterns.
You can create custom Anti-Patterns based on metrics search queries, using graphs to interpret the impact of the values. While metrics-based searches provide quick access to elements of interest, saving these queries serves as input for custom analysis.
The McCabe® tab shows a complete list of McCabe® metrics, with Violations marked in different colors:
Halstead Software Science metrics are also provided at Application/Program, File, Class and Method/Perform level, by clicking the Halstead tab:
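For reference, the classic Halstead Software Science measures are derived from four counts per code unit. The sketch below uses the standard textbook formulas with made-up counts; Quality Reviewer computes the counts from the source automatically.

```python
import math

def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total operators/operands."""
    vocabulary = n1 + n2                      # eta = n1 + n2
    length = N1 + N2                          # N = N1 + N2
    volume = length * math.log2(vocabulary)   # V = N * log2(eta)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) * (N2/n2)
    effort = difficulty * volume              # E = D * V
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

# Made-up counts for a small method, purely for illustration.
m = halstead(n1=10, n2=15, N1=40, N2=60)
print({k: round(v, 2) for k, v in m.items()})
```

Other Halstead figures shown in the tab (Errors, Testing Time, Predicted Length, Purity Ratio, Intelligent Content) are further transformations of these same base counts.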
The Chidamber & Kemerer (CK) metrics suite originally consists of 6 metrics calculated for each class: WMC, LCOM, CBO, DIT, RFC and NOC. A number of additional Object-Oriented metrics are also calculated, such as MOOD, Cognitive Metrics and Computed Metrics. You can view them by clicking the OO Metrics tab:
Primitive Metrics: McCabe® Cyclomatic Complexity (vG), Essential Complexity (evG), Normal vG, sum vG, ivG, pvG, Cyclomatic Density, Design Density, Essential Density, Maintenance Severity, pctcom, pctPub, PUBDATA, PUBACCESS. SEI Maintainability Index (MI3, MI4), LOC, SLOC, LLOC. Halstead Length, Vocabulary, Difficulty, Effort, Errors, Testing Time, Predicted Length, Purity Ratio, Intelligent Content. OOPLOCM, Depth, Weighted Methods Complexity (WMC), LCOM, LCOM HS, CBO, DIT, RFC, NOA, NOC, NPM, FANIN, FANOUT, #Classes, #Methods, #Interfaces, #Abstract, #Abstractness, #DepOnChild.
Computed Metrics let you define a new higher-level metric by specifying an arbitrary set of mathematical transformations to perform on a selection of Primitive metrics. A number of Computed Metrics are provided by default, such as: Class Cohesion rate, Class Size (CS), Unweighted Class Size (UWCS), Specialization/Reuse Metrics, Logical Complexity Rate (TEVG), Class Complexity Rate (TWMC), Information Flow (Kafura & Henry), ISBSG Derived Metrics, Structure Complexity, Architectural Complexity Metrics, MVC Points (Gundappa).
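The idea of deriving a Computed Metric from Primitive metrics can be sketched as below. The expression style is illustrative, not Quality Reviewer's actual configuration syntax; UWCS is shown with its commonly cited definition (#methods + #attributes), and the second metric is a hypothetical example.

```python
# Primitive metrics for one class -- made-up measurements.
PRIMITIVES = {"NOM": 12, "NOA": 5, "WMC": 34}

# Each Computed Metric is a transformation over the primitive values.
COMPUTED = {
    # Unweighted Class Size: methods + attributes (common definition).
    "UWCS": lambda p: p["NOM"] + p["NOA"],
    # Hypothetical example: average method complexity from WMC and NOM.
    "AvgMethodComplexity": lambda p: p["WMC"] / p["NOM"],
}

results = {name: f(PRIMITIVES) for name, f in COMPUTED.items()}
print(results)
```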
You can configure Metrics ranges with Low-Threshold-High values and set Alarm limits, which can be shown graphically. You can have System (Application/Program), File, Class and Method/Perform scope views, different for each supported programming language:
Comparing metrics helps us visualize patterns and trends in the data, and consequently in the code that exhibits these qualities.
Comparing Weighted Method Complexity (WMC) with the inheritance depth of a class (DIT) shows that, with increasing inheritance depth, complexity decreases, i.e. the complexity of classes is well distributed. In a system with very little use of inheritance, the complexity of classes is concentrated, leading to more Blob classes and Speculative Generality classes. It is also easy to find extreme cases or deviating elements.
The following McCabe® Scatterplot graph is also provided:
The idea behind this graph is that the more popular a code element of a program is, the more abstract it should be. In other words, avoid depending too much directly on implementations; depend on abstractions instead. By popular code element we mean a project (but the idea also works for packages and types) that is massively used by other projects of the program. It is not a good idea to have very popular concrete types in your code base: this creates Zones of Pain in your program, where changing the implementations can potentially affect a large portion of the program, and implementations are known to evolve more often than abstractions. The main sequence line (dotted) in the above diagram shows how abstractness and instability should be balanced. A stable component would be positioned on the left. If you check the main sequence, you can see that such a component should be very abstract to be near the desirable line; on the other hand, if its degree of abstraction is low, it is positioned in an area called the "Zone of Pain", whereas when the degree of abstraction is high, it is located in the "Zone of Uselessness".
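The model behind this scatterplot is Robert Martin's abstractness/instability metrics, which can be sketched in a few lines (the input counts below are illustrative):

```python
def abstractness(num_abstract_types, num_total_types):
    # A = abstract types / total types, in [0, 1]
    return num_abstract_types / num_total_types

def instability(efferent, afferent):
    # I = Ce / (Ca + Ce); near 0 = depended on by many (stable/popular)
    return efferent / (afferent + efferent)

def distance_from_main_sequence(a, i):
    # D = |A + I - 1|: 0 lies on the main sequence; a high D with low
    # A and I falls in the "Zone of Pain", with high A and I in the
    # "Zone of Uselessness".
    return abs(a + i - 1)

a = abstractness(2, 20)   # 0.1 -> mostly concrete
i = instability(1, 9)     # 0.1 -> very stable, i.e. heavily depended on
print(distance_from_main_sequence(a, i))  # far from the main sequence
```

A component scoring like this example (concrete and popular) is exactly the "Zone of Pain" case described above.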
Thresholds, or acceptable value ranges, of software metrics are often debated, but this should not be a reason to opt out. Application/Program-specific thresholds can be used to evaluate code quality at a first level; however, finding a perfect threshold/justification for a metric involves deeper analysis of the values specific to the project. Quality Reviewer provides a feature to understand an object's measurement compared to its peers. For example, comparing a Class's complexity vs. its peers in the project gives a justification for its value/state. In addition, to make a judgment about the Class, one has to understand the complexity of its Methods (Composition). Comparing metrics for a method vs. all the methods of a class gives its relative rank, in order to assign importance to that method. Apart from other uses of this comparison, it can be used to categorize metric distribution in "Distribution Analysis": as the metrics view gives the Minimum, Maximum and Average value of a metric in the project, one can adjust the categorization to fit the project. This methodology has to be exercised with caution, as the categorization could become too project-specific and its generality could be lost.
Visualizing distribution helps categorize elements based on value range. Quality Reviewer provides a customizable categorization interface to quickly evaluate the distribution. For example, the distribution of the Maintainability Index metric can be categorized into "Very Poor Maintainability", "Poor Maintainability", "Good Maintainability" and "Excellent Maintainability" based on value range. Categorization helps address issues specifically and on a reduced set of similar elements.
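Such a banded categorization can be sketched as follows. The cut-off values are assumptions for illustration (the text says the bands are customizable), and the per-file MI values are made up:

```python
from collections import Counter

# Assumed, customizable cut-offs (upper bound exclusive, label).
BANDS = [
    (40, "Very Poor Maintainability"),
    (65, "Poor Maintainability"),
    (85, "Good Maintainability"),
    (float("inf"), "Excellent Maintainability"),
]

def categorize(mi_value):
    """Map a Maintainability Index value to its band label."""
    for upper, label in BANDS:
        if mi_value < upper:
            return label

# Made-up per-file MI values; the Counter shows the distribution per band.
values = [30, 55, 70, 90, 88, 42]
print(Counter(categorize(v) for v in values))
```

Working on one band at a time is what lets issues be addressed "on a reduced set of similar elements", as described above.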
Comparison, Distribution and Custom Analysis are visualized using graphs/charts. While the graphs are abstract representations of data (metrics), we often need to trace back to the resource whose data is being displayed. Quality Reviewer provides a feature to list the resources behind graph elements (categories/individual) and assists further in locating the resource in the project. This View provides an interface that is sensitive to selection in the graphs.
COPYRIGHT (C) 2014-2021 SECURITY REVIEWER SRL. ALL RIGHTS RESERVED.