
What does this mean?

These adjacency matrices, sometimes called Design Structure Matrices (DSMs), show direct dependencies between files in a codebase. A dot represents one or more dependencies and is read

file A depends on file B

where A is a file along the left edge and B is a file along the top edge. A module's dependencies are colored distinctly. A square-shaped cluster indicates the density of dependencies within a module.

All the dots to the right and left of a cluster are files that the module depends on. All files above and below it are files that depend on it. In many paradigms, modularity (low coupling, high cohesion) is a desirable quality attribute. Hence, an ideal system has modules that have more intra-module dependencies and fewer inter-module dependencies.

A matrix can show different types of files, which have varying impacts on quality. Although we calculate the values for these four types of files using the visibility matrix, it is still useful to look for them in a first-order matrix. Shared files, also known as utility files, are ones that a lot of other files depend on; a shared file that every other file in a codebase depends on would appear as a solid vertical line in an adjacency matrix. Shared files appear to be a positive predictor of quality.

Control files, conversely, depend on a high number of files. Such files appear to be a negative predictor of quality and are hence undesirable in high numbers. They may appear as horizontal lines in an adjacency matrix.

Two other types of files are core files, which depend on a lot of files and have a lot of files depend on them, and peripheral files, which don't depend on a lot of files and don't have a lot of files depend on them. These may be a bit more difficult to spot in a matrix. The collective size of clusters can give a partial sense of the number of core files in a codebase. Core files seem to be a negative predictor of quality whereas peripheral files seem to be a positive predictor of quality.

 

Evolution of the Firefox Codebase presents a set of metrics, indicative of quality, for all releases of Firefox and allows one to inspect them through one of several views. By looking at changes in these metrics, one can see the evolution of the Firefox codebase over time. This work is also useful as a retrospective, investigative tool to help infer when, say, architectural issues may be the cause of unfavorable user sentiment following a release. Metrics such as lines-of-code (LOC) and cyclomatic complexity are widely used in industry, whereas others, like propagation cost, are based on some of the more recent research to come out of academia.

Only modules that have a reasonable level of complexity, i.e. several hundred files, are shown. Select an option from the top to change the view. In the Chart view, to see how much a metric has changed between two releases on the same axis, use z and the left and right arrow keys to cycle through versions on the left-hand side and x and the arrow keys to cycle through versions on the right-hand side. Descriptions of what each metric captures and how it is calculated are shown below, as are some additional measures of quality, such as speed and resident memory usage, both of which have improved in Firefox.

For more, please read my post How maintainable is the Firefox codebase?


LOC

LOC measures the number of executable lines of code in each release, ignoring comments and blank lines, and is widely used as a baseline measure of quality. A system with more LOC is typically more difficult to maintain. LOC and defect density have an inverse relationship1, partly because architecture does not change at the same rate as LOC and partly because architectural elements such as interfaces have a higher propensity for defects than individual components.

The set of analyzed files includes all files that meet our file-type filter2 and excludes unit tests.

1 Stephen H. Kan, Metrics and Models in Software Quality Engineering (2002).

2 .c, .C, .cc, .cpp, .css, .cxx, .h, .H, .hh, .hpp, .htm, .html, .hxx, .inl, .java, .js, .jsm, .py, .s, .xml
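As a rough illustration of how such a count can be made, here is a minimal Python sketch that walks a source tree, keeps only files whose extensions appear in footnote 2 and counts non-blank lines that are not obvious comment lines. It is a simplification of whatever tooling produced the published figures: it does not exclude unit tests, and its comment handling is crude.

import os

# Extensions from footnote 2.
EXTENSIONS = {'.c', '.C', '.cc', '.cpp', '.css', '.cxx', '.h', '.H', '.hh',
              '.hpp', '.htm', '.html', '.hxx', '.inl', '.java', '.js', '.jsm',
              '.py', '.s', '.xml'}

# Prefixes treated as whole-line comments. This is a crude filter: '#' also
# matches C preprocessor directives, and multi-line comment bodies that do
# not start with '*' are not detected.
COMMENT_PREFIXES = ('//', '/*', '*', '#', '<!--')

def count_loc(root):
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] not in EXTENSIONS:
                continue
            with open(os.path.join(dirpath, name), errors='ignore') as f:
                for line in f:
                    stripped = line.strip()
                    if stripped and not stripped.startswith(COMMENT_PREFIXES):
                        total += 1
    return total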


Cyclomatic complexity

Cyclomatic complexity, developed by Thomas McCabe in 1976, measures the number of linearly independent paths within a software system and can be applied either to the entire system or to a particular class or function. When a block of code is viewed as a control-flow graph, the nodes are indivisible sequences of statements that execute in order, and a directed edge connects two nodes if one can execute immediately after the other. So, for example, a branching construct like an if-else statement results in a node being connected to two output nodes, one for each branch.

Cyclomatic complexity is defined as v(G) = e – n + 2p, where v(G) is the cyclomatic number of a graph G, e is the number of edges, n is the number of nodes and p is the number of connected components in the graph. A block of code with a single if-else statement would be calculated as follows: e = 6, n = 6 and p = 1, therefore v(G) = 6 – 6 + 2 × 1 = 2. The metric is additive: the complexity of several graphs is equal to the sum of their individual complexities. In our measure of cyclomatic complexity we control for size; hence, the value for each release is per 1,000 LOC.
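The worked example above can be reproduced directly. This Python sketch encodes the control-flow graph of a single if-else block, with illustrative node names, and evaluates v(G) = e – n + 2p:

# Control-flow graph of one if-else block; node names are illustrative.
edges = [
    ("entry", "condition"),
    ("condition", "then_branch"),   # condition is true
    ("condition", "else_branch"),   # condition is false
    ("then_branch", "join"),
    ("else_branch", "join"),
    ("join", "exit"),
]
nodes = {node for edge in edges for node in edge}

e, n, p = len(edges), len(nodes), 1   # a single connected component
v = e - n + 2 * p                     # v(G) = e - n + 2p
print(v)                              # 2: two linearly independent paths through the block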


First-order density

First-order density measures the number of direct dependencies between files. It is calculated by first building an adjacency matrix, sometimes called a Design Structure Matrix (DSM), using the set of source-code files sorted by their hierarchical directory structure. Wherever a file in a particular row depends on a file in a particular column, we mark the element with a '1'. Because we're only capturing direct dependencies, this matrix is referred to as a first-order dependency matrix. The density of said matrix is the first-order density. For releases, we show it per 10,000 file pairs whereas for modules, where matrices are generally not as sparse, we show the density as a percentage.

In such a matrix, a square-shaped cluster indicates many dependencies between files within a module. All the dots to the right and left of a cluster are files that the module depends on. All files above and below it are files that depend on it. In many paradigms, modularity (low coupling, high cohesion) is a desirable quality attribute. Hence, an ideal system has modules that have more intra-module dependencies and fewer inter-module dependencies.
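A minimal sketch of the construction in Python, using a handful of made-up files and dependencies rather than real Firefox data:

import numpy as np

# Files sorted by their directory structure; the dependencies are invented
# purely for illustration.
files = ["gfx/a.cpp", "gfx/b.cpp", "layout/c.cpp", "layout/d.cpp"]
deps = [("gfx/a.cpp", "gfx/b.cpp"),
        ("layout/c.cpp", "gfx/b.cpp"),
        ("layout/c.cpp", "layout/d.cpp")]

n = len(files)
index = {f: i for i, f in enumerate(files)}
dsm = np.zeros((n, n), dtype=int)
for src, dst in deps:
    dsm[index[src], index[dst]] = 1       # the file in the row depends on the file in the column

density = dsm.sum() / (n * n)             # fraction of file pairs with a direct dependency
print(density * 10_000)                   # per 10,000 file pairs, as shown for releases
print(density * 100)                      # as a percentage, as shown for modules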


Propagation cost

Propagation cost measures direct as well as indirect dependencies between files in a codebase. In practical terms, it gives a sense of the proportion of files that may be impacted, on average, when a change is made to a randomly chosen file3,4.

The first-order dependency matrix, which captures only direct dependencies between files, is transformed into a visibility matrix, also known as a reachability matrix, which captures indirect dependencies as well. This is done through matrix multiplication: the first-order dependency matrix is raised to successive powers until its transitive closure is reached. A matrix raised to the power of two, for example, shows the indirect dependencies between elements that have a path length of two, i.e. a dependency from A to C when A depends on B and B depends on C. Summing these matrices together yields the visibility matrix. For this ripple effect to be of use in analysis, the density of the visibility matrix is captured in the metric that we call propagation cost.

3 Alan MacCormack, John Rusnak and Carliss Baldwin, Exploring the Structure of Complex Software Designs: An Empirical Study of Open Source and Proprietary Code (2006).

4 Steven D. Eppinger and Tyson R. Browning, Design Structure Matrix Methods and Applications (2012).
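The following Python sketch follows the description above on a small made-up matrix: it raises the first-order matrix to successive powers until no new reachable pairs appear, folds the results into a single binary visibility matrix and reports its density. One detail the description leaves open is whether a file counts as reaching itself (a path of length zero); this sketch includes it.

import numpy as np

def propagation_cost(dsm):
    n = dsm.shape[0]
    visibility = np.eye(n, dtype=int)            # path length 0: each file reaches itself
    power = np.eye(n, dtype=int)
    for _ in range(n):                           # at most n steps are needed for the closure
        power = ((power @ dsm) > 0).astype(int)  # dependencies one step more indirect
        updated = np.maximum(visibility, power)  # fold into the (binary) visibility matrix
        if (updated == visibility).all():        # transitive closure reached
            break
        visibility = updated
    return visibility.sum() / (n * n)            # density of the visibility matrix

# Toy example: A depends on B and B depends on C, so A indirectly depends on C.
dsm = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
print(propagation_cost(dsm))                     # 6/9 ≈ 0.67 with self-reachability included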


Core size

Core files are files that are highly interconnected via a chain of cyclic dependencies and have been shown in various studies to have a higher propensity for defects5. They are one of four types of files that one sees when plotting files along the axes of fan-in and fan-out, the intuition for this breakout being that different directions and magnitudes of dependencies have varying impacts on software quality. This intuition has been validated by several studies and a smaller core has been shown to result in fewer defects. Core size is the percentage of files with one or more dependencies that have a high fan-in and a high fan-out.

Other types of files are peripheral files, which don’t depend on a lot of files and don't have a lot of files depend on them (low fan-in, low fan-out); shared files, which don’t depend on a lot of files, but have a lot of files depend on them (high fan-in, low fan-out) and control files, which depend on a lot of files, but don’t have a lot of files depend on them (low fan-in, high fan-out).

5 Alan MacCormack and Dan Sturtevant, System Design and the Cost of Complexity: Putting a Value on Modularity (2011).
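A sketch of the classification in Python: the fan-in and fan-out values are taken as given (per the text, the real values are derived from the visibility matrix), and the cutoffs that separate "high" from "low" are left as parameters, since the actual thresholds are not spelled out here.

def classify(fan_in, fan_out, hi_in_cutoff, hi_out_cutoff):
    labels = []
    for fi, fo in zip(fan_in, fan_out):
        if fi >= hi_in_cutoff and fo >= hi_out_cutoff:
            labels.append("core")          # high fan-in, high fan-out
        elif fi >= hi_in_cutoff:
            labels.append("shared")        # high fan-in, low fan-out
        elif fo >= hi_out_cutoff:
            labels.append("control")       # low fan-in, high fan-out
        else:
            labels.append("peripheral")    # low fan-in, low fan-out
    connected = [i for i, (fi, fo) in enumerate(zip(fan_in, fan_out)) if fi + fo > 0]
    core = sum(1 for i in connected if labels[i] == "core")
    core_size = core / len(connected) if connected else 0.0   # core size as a fraction
    return labels, core_size

# Hypothetical fan-in/fan-out values for four files, with cutoffs of 5.
labels, core_size = classify([5, 0, 9, 2], [1, 0, 8, 7], hi_in_cutoff=5, hi_out_cutoff=5)
print(labels)      # ['shared', 'peripheral', 'core', 'control']
print(core_size)   # 1/3: one core file among the three files that have dependencies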


Defect density

Defect density measures the number of confirmed bugs in each release per 100,000 LOC. A set of HTTP queries is run against Mozilla's bug tracking system, Bugzilla, to get the counts per release. Since the version field in Bugzilla is optional, the defect counts currently don't include confirmed bugs that have not been assigned to particular releases. We have a relatively robust method for including unassigned defects that we may end up using in the future. We start from version 5.0 since that is when the rapid release cycle began.
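The arithmetic itself is straightforward; the inputs are the per-release counts from the Bugzilla queries described above and the LOC of the release. The figures in this Python fragment are placeholders, not real Firefox numbers:

confirmed_bugs = 1234                               # hypothetical count of confirmed bugs for a release
loc = 7_500_000                                     # hypothetical LOC for the same release

defect_density = confirmed_bugs / (loc / 100_000)   # defects per 100,000 LOC
print(round(defect_density, 2))                     # 16.45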


Resident memory

This metric captures the amount of resident memory used after Mozilla's TP5 test is left running for 30 seconds. Resident memory is the physical amount of memory used by the Firefox process. The TP5 test, run five times in sequence, loads a set of 100 popular webpages into the browser, with a ten-second delay between each page load.

We run our memory usage tests on builds from mozilla-central. The result for each release is based on the latest test that was done on the code in mozilla-central before it was branched off to its release-specific branch. There is a gap of a number of weeks between that codebase and the codebase of the final release, hence it is important to keep this caveat in mind when interpreting the data.

This data is provided by our colleagues at areweslimyet.com. See their FAQ for more info.

note: v20 saw two regressions that were fixed prior to the release. See this and this. Since data points here are taken from mozilla-central several weeks before a release, the data point for v20 has been taken out.


Speed

Up until version 19, this metric shows the V8 score of each release. From version 20 onwards, it shows the Octane score, which succeeded the V8 benchmark suite. We run our speed tests on builds from mozilla-central. As before, the score for each release is based on the latest test that was done on the code in mozilla-central before it was branched off to its release-specific branch. There is a gap of a number of weeks between that codebase and the codebase of the final release. Also worth noting is that the IonMonkey JavaScript engine was introduced in version 18, which noticeably improved Firefox's score and accounts for the increase in speed performance from then on. A higher score indicates better performance.

The data is provided by our colleagues at arewefastyet.com.

ALI ALMOSSAWI · ali@mozilla.com · This is still a work in progress; please get in touch with any thoughts and comments · May 2013
Background image courtesy of Ty Flanagan. This project is being released under a Creative Commons license (CC BY-NC).