criterion performance measurements

overview

Want to understand this report? See the "understanding this report" section below.

parseCmd (foo "bar")

                       lower bound    estimate    upper bound
OLS regression         xxx            xxx         xxx
R² goodness-of-fit     xxx            xxx         xxx
Mean execution time    4.281 μs       4.527 μs    4.906 μs
Standard deviation     1.063 μs       1.255 μs    1.750 μs

Outlying measurements have a severe (98.2%) effect on the estimated standard deviation.

parseCmd (f (f ..28x...))

                       lower bound    estimate    upper bound
OLS regression         xxx            xxx         xxx
R² goodness-of-fit     xxx            xxx         xxx
Mean execution time    55.67 μs       61.48 μs    70.32 μs
Standard deviation     14.76 μs       25.35 μs    37.95 μs

Outlying measurements have a severe (99.1%) effect on the estimated standard deviation.
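
The two sections above were presumably produced by a benchmark suite built with Criterion.Main. The following is a minimal sketch of how such a suite could be declared; the real parseCmd and its input strings are not shown in this report, so the parser below is only a stand-in that makes the example compile.

import Criterion.Main (bench, defaultMain, nf)

-- Stand-in for the real parser (assumed here to take a String);
-- replace with the actual parseCmd under benchmark.
parseCmd :: String -> Int
parseCmd = length

main :: IO ()
main = defaultMain
  [ bench "parseCmd (foo \"bar\")"    $ nf parseCmd "foo \"bar\""
  , bench "parseCmd (f (f ..28x...))" $ nf parseCmd deeplyNested
  ]
  where
    -- Hypothetical deeply nested input: "f (" wrapped 28 times around "x".
    deeplyNested = iterate (\s -> "f (" ++ s ++ ")") "x" !! 28

Using nf forces the parse result to normal form on every iteration, so the measurement covers the whole parse rather than just the outermost constructor.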

understanding this report

In this report, each function benchmarked by criterion is assigned a section of its own. The charts in each section are active; if you hover your mouse over data points and annotations, you will see more details.

Under the charts is a small table. The first two rows are the results of a linear regression run on the measurements displayed in the right-hand chart.
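
As context for the OLS row: criterion times batches that run the benchmarked function an increasing number of iterations, then fits a straight line of elapsed time against iteration count; the slope of that line is the per-iteration cost, and R² reports how well the line fits. The function below is a minimal sketch of that idea only, not criterion's implementation.

-- Minimal sketch: ordinary least squares slope of elapsed time (seconds)
-- against iteration count. The slope estimates the time per iteration.
olsSlope :: [(Double, Double)] -> Double
olsSlope samples = covXY / varX
  where
    n        = fromIntegral (length samples)
    (xs, ys) = unzip samples
    meanX    = sum xs / n
    meanY    = sum ys / n
    covXY    = sum [ (x - meanX) * (y - meanY) | (x, y) <- samples ]
    varX     = sum [ (x - meanX) ^ (2 :: Int)  | x <- xs ]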

We use a statistical technique called the bootstrap to provide confidence intervals on our estimates. The bootstrap-derived upper and lower bounds on estimates let you see how accurate we believe those estimates to be. (Hover the mouse over the table headers to see the confidence levels.)
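
To make the bootstrap concrete: the raw measurements are resampled with replacement many times, the statistic of interest (for example the mean) is recomputed on each resample, and the spread of those recomputed values gives the interval. The sketch below is a plain percentile bootstrap under that description; criterion's own resampling procedure is more refined than this.

import Data.List (sort)
import System.Random (randomRIO)

-- Percentile bootstrap sketch: resample the measurements with replacement,
-- recompute the statistic on each resample, and read a 95% interval off the
-- sorted results.
bootstrapCI :: Int -> ([Double] -> Double) -> [Double] -> IO (Double, Double)
bootstrapCI resamples stat xs = do
  stats <- mapM (const (stat <$> resample)) [1 .. resamples]
  let sorted = sort stats
      at q   = sorted !! floor (q * fromIntegral (resamples - 1))
  pure (at 0.025, at 0.975)
  where
    n        = length xs
    resample = mapM (const ((xs !!) <$> randomRIO (0, n - 1))) [1 .. n]

For the mean execution time of the first benchmark above, stat would be the arithmetic mean and xs the measured batch times.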

A noisy benchmarking environment can cause some or many measurements to fall far from the mean. These outlying measurements can have a significant inflationary effect on the estimate of the standard deviation. We calculate and display an estimate of the extent to which the standard deviation has been inflated by outliers.
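
The figure in parentheses in each benchmark section is that estimate, expressed as the fraction of the measured variance attributable to outliers, which criterion then labels on a scale from "unaffected" to "severe". The thresholds below are an approximation of that labelling, stated as an assumption rather than a quotation of criterion's source.

data OutlierEffect = Unaffected | Slight | Moderate | Severe
  deriving (Show, Eq, Ord)

-- Approximate mapping from the reported variance fraction to a severity
-- label (assumed thresholds, roughly matching criterion's wording).
classifyOutlierEffect :: Double -> OutlierEffect
classifyOutlierEffect varFraction
  | varFraction < 0.01 = Unaffected   -- under 1%
  | varFraction < 0.1  = Slight       -- under 10%
  | varFraction < 0.5  = Moderate     -- under 50%
  | otherwise          = Severe       -- 50% or more

Both benchmarks above report fractions around 0.98 to 0.99, which is why their standard deviation estimates should be treated as heavily inflated by measurement noise.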