Benchmark 1.0.2

.NET CLI:

dotnet add package Benchmark --version 1.0.2

Package Manager Console in Visual Studio (uses the NuGet module's version of Install-Package):

NuGet\Install-Package Benchmark -Version 1.0.2

For projects that support PackageReference, copy this XML node into the project file to reference the package:

<PackageReference Include="Benchmark" Version="1.0.2" />

Paket CLI:

paket add Benchmark --version 1.0.2

F# Interactive and Polyglot Notebooks (copy this into the interactive tool or the script source to reference the package):

#r "nuget: Benchmark, 1.0.2"

Cake:

// Install Benchmark as a Cake Addin
#addin nuget:?package=Benchmark&version=1.0.2

// Install Benchmark as a Cake Tool
#tool nuget:?package=Benchmark&version=1.0.2

Benchmark

Simple library that allows you to compare the performance of algorithms and output benchmark results as text, Markdown, or JSON. The following example shows how to benchmark two algorithms for concatenating strings:

public void StringBuilder()
{
  var text = new StringBuilder();

  for (var i = 0; i < 5; i++)
  {
    text.Append(' ');
  }

  ObservableObject.Observe(text.ToString());
}

public void Concatenate()
{
  var text = string.Empty;

  for (var i = 0; i < 5; i++)
  {
    text += ' ';
  }

  ObservableObject.Observe(text);
}

var report = Measure
  .Candidates(
    ("Five Concatenations", Concatenate),
    ("Five String Builder Appends", StringBuilder))
  .Go();

Console.Write(report);

Examples

All example code can be found here

Lambda Actions

Lambda actions are a quick way to evaluate an algorithm. Just create any number of void methods and compare their execution times:

var report = Measure
  .Candidates(
    ("First Algo", RunFirstAlgo),
    ("Second Algo", RunSecondAlgo))
  .Go();
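
The candidates can be any parameterless void methods (or lambdas). As a minimal sketch, the two hypothetical algorithms referenced above could look like this (their bodies are placeholders, not part of the library):

public void RunFirstAlgo()
{
  // placeholder work: sum the squares of the first 1,000 integers in a loop
  var sum = 0;

  for (var i = 0; i < 1000; i++)
  {
    sum += i * i;
  }
}

public void RunSecondAlgo()
{
  // placeholder work: the same sum expressed with LINQ
  var sum = Enumerable.Range(0, 1000).Sum(i => i * i);
}
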
Benchmark Candidates With Contexts

Often it is useful to compare algorithms against different data scenarios. An algorithm which performs well against a few data records may become inefficient when run against large datasets. You can define classes implementing the IBenchmarkContext interface to pass parameters to your test methods:

var items = Enumerable.Range(0, 1000)
  .Select(_ => new ObservableObject())
  .ToArray();

var report = Measure<LoopContext>
  .Candidates<WhileLoopCandidate, ForLoopCandidate, ForEachLoopCandidate, ForLoopInlineRangeEvaluationCandidate>()
  .WithContexts(
    new LoopContext(items.Take(10).ToArray(), 1),
    new LoopContext(items, 0),
    new LoopContext(items, 1),
    new LoopContext(items, 10))
  .Go();  
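
The LoopContext used above is just an ordinary class implementing IBenchmarkContext. A rough sketch is shown below; the constructor shape mirrors the calls above, but the property names and anything beyond what IBenchmarkContext itself requires are assumptions made for illustration:

// hypothetical context: carries the items and an iteration count for the loop candidates
// (member names are assumed for this sketch)
public class LoopContext : IBenchmarkContext
{
  public LoopContext(ObservableObject[] items, int iterations)
  {
    Items = items;
    Iterations = iterations;
  }

  public ObservableObject[] Items { get; }

  public int Iterations { get; }
}
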
Benchmark Candidates Without Context

This is similar to lambda actions, but written in a more formal way. For example:

var report = Measure
  .Candidates<ConcatenateStringsCandidate, StringBuilderCandidate>()
  .Go();
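
Each candidate is a small class that encapsulates one algorithm. A sketch of one such candidate is shown below; the member names (Name and Run) are assumptions for illustration, since the exact shape of IBenchmarkCandidate is defined by the library (see the examples repository for the real implementations):

// hypothetical candidate shape: Name and Run are assumed member names
public class StringBuilderCandidate : IBenchmarkCandidate
{
  public string Name => "String Builder Appends";

  public void Run()
  {
    var text = new StringBuilder();

    for (var i = 0; i < 5; i++)
    {
      text.Append(' ');
    }

    ObservableObject.Observe(text.ToString());
  }
}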

Number of Runs

You have multiple options to define how many runs a benchmark test should perform. Your test setting can either be time-based or based on a fixed number of executions:

// this will run the tests 100 times
Measure
  .Candidates<Foo, Bar>()
  .NumberOfRuns(100)
  .Go();
  
// this will run each test context for one second
Measure
  .Candidates<Foo, Bar>()
  .RunEachContextFor(TimeSpan.FromSeconds(1))
  .Go();

If no option is defined, Benchmark will run each candidate for one second per context.

Output Options

ToString(), ToString(RankColumn column)

Good for console output or Visual Studio debugging:

| Context       | Candidate   | Rank | +/- Median | Total     | Average   | Median    | Runs | Comment        |
| ------------- | ----------- | ---- | ---------- | --------- | --------- | --------- | ---- | -------------- |
| 100 x 10 obj  | MessagePack |    1 |            | 0.608 sec | 0.003 sec | 0.002 sec |  228 | Size: 1.44 KB  |
|               | Avro        |    2 |  + 23.16 % | 0.738 sec | 0.003 sec | 0.003 sec |  228 | Size: 1.14 KB  |
|               | Protobuf    |    3 |  + 66.13 % | 0.969 sec | 0.004 sec | 0.004 sec |  228 | Size: 1.51 KB  |
|               | Json        |    4 | + 382.12 % | 2.691 sec | 0.012 sec | 0.011 sec |  228 | Size: 3.04 KB  |
| ------------- | ----------- | ---- | ---------- | --------- | --------- | --------- | ---- | -------------- |
| 100 x 100 obj | MessagePack |    1 |            | 0.606 sec | 0.023 sec | 0.024 sec |   26 | Size: 14.36 KB |
|               | Avro        |    2 |  + 17.71 % | 0.754 sec | 0.029 sec | 0.028 sec |   26 | Size: 11.42 KB |
|               | Protobuf    |    3 |  + 44.71 % | 0.978 sec | 0.038 sec | 0.035 sec |   26 | Size: 15.06 KB |
|               | Json        |    4 | + 350.81 % | 2.789 sec | 0.107 sec | 0.108 sec |   26 | Size: 30.28 KB |
| ------------- | ----------- | ---- | ---------- | --------- | --------- | --------- | ---- | -------------- |

ToMarkdown(), ToMarkdown(RankColumn column)

Good for posting your results on GitHub:

| Context       | Candidate   | Rank | +/- Median | Total     | Average   | Median    | Runs | Comment        |
| ------------- | ----------- | ---- | ---------- | --------- | --------- | --------- | ---- | -------------- |
| 100 x 10 obj  | MessagePack | 1    |            | 0.608 sec | 0.003 sec | 0.002 sec | 228  | Size: 1.44 KB  |
|               | Avro        | 2    | + 23.16 %  | 0.738 sec | 0.003 sec | 0.003 sec | 228  | Size: 1.14 KB  |
|               | Protobuf    | 3    | + 66.13 %  | 0.969 sec | 0.004 sec | 0.004 sec | 228  | Size: 1.51 KB  |
|               | Json        | 4    | + 382.12 % | 2.691 sec | 0.012 sec | 0.011 sec | 228  | Size: 3.04 KB  |
| 100 x 100 obj | MessagePack | 1    |            | 0.606 sec | 0.023 sec | 0.024 sec | 26   | Size: 14.36 KB |
|               | Avro        | 2    | + 17.71 %  | 0.754 sec | 0.029 sec | 0.028 sec | 26   | Size: 11.42 KB |
|               | Protobuf    | 3    | + 44.71 %  | 0.978 sec | 0.038 sec | 0.035 sec | 26   | Size: 15.06 KB |
|               | Json        | 4    | + 350.81 % | 2.789 sec | 0.107 sec | 0.108 sec | 26   | Size: 30.28 KB |

ToJson()

Returns the results in JSON format.
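
For example, the JSON report could be written to a file for later processing (a minimal sketch; the file name is arbitrary):

File.WriteAllText("benchmark-report.json", report.ToJson());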

IBenchmarkComment

If your IBenchmarkCandidate also implements the IBenchmarkComment interface, you can render a comment for each candidate per context. Use this to output additional information gathered during the tests.
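
As a sketch, a candidate could gather a value during its run and surface it through the comment, similar to the serializer benchmarks above that report payload sizes. The member names used here (Run and a settable Comment) are assumptions for illustration:

// hypothetical candidate: Run and Comment are assumed member names
public class ConcatenateStringsCandidate : IBenchmarkCandidate, IBenchmarkComment
{
  public string Comment { get; private set; }

  public void Run()
  {
    var text = string.Empty;

    for (var i = 0; i < 5; i++)
    {
      text += ' ';
    }

    // additional information gathered during the run, rendered in the Comment column
    Comment = $"Length: {text.Length}";
  }
}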

Warm Up Runs

The Measure builder allows you to specify a number of warm-up runs for your algorithm, to counter JIT compilation skewing your results. Use the .NumberOfWarmUpRuns(...) method to specify how many warm-up runs to perform. You can also pass in a warm-up context if needed.

By default, Benchmark will do one warm-up run for each test context. To disable warm-up runs, call .NumberOfWarmUpRuns(0).
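
For example, extra warm-up runs can be combined with the other builder options shown earlier (the exact chaining is a sketch, but all method names appear in this README):

var report = Measure
  .Candidates<Foo, Bar>()
  .NumberOfWarmUpRuns(10)
  .NumberOfRuns(100)
  .Go();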

Target Frameworks

.NETCoreApp 2.0 (netcoreapp2.0) is the target framework included in the package. Compatibility with netcoreapp2.1 through netcoreapp3.1 and with net5.0 through net8.0 (including the windows, android, ios, maccatalyst, macos, tvos, and browser variants where they exist) is computed.

Dependencies

  • .NETCoreApp 2.0

    • No dependencies.


| Version | Downloads | Last updated |
| ------- | --------- | ------------ |
| 1.0.2   | 4,021     | 10/16/2018   |
| 1.0.0   | 916       | 10/1/2018    |