CSharpDB.Storage
2.9.1
```shell
dotnet add package CSharpDB.Storage --version 2.9.1
```
CSharpDB.Storage
CSharpDB.Storage is the page-oriented durability layer used by the CSharpDB embedded database engine. It owns:

- physical file I/O through IStorageDevice
- page caching and dirty tracking through Pager
- write-ahead logging and crash recovery through WriteAheadLog
- row-id keyed B+trees for table and index storage
- schema metadata persistence through SchemaCatalog
This package is usually consumed indirectly through CSharpDB.Engine, but it also supports direct low-level use for tooling, diagnostics, and storage experiments.
Most users: configure storage through Database
If you are using SQL or the engine layer, customize storage like this:
```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseDirectLookupOptimizedPreset();
    });

await using var db = await Database.OpenAsync("app.cdb", options);
```
UseDirectLookupOptimizedPreset() is the current recommended opt-in preset for direct file-backed lookup workloads. It keeps the existing page-cache shape and read path, and keeps the standard B-tree index provider, so hot local workloads stay close to default behavior.
For cache-pressured or cold-file direct lookups, use the explicit cold-file preset:
```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseDirectColdFileLookupPreset();
    });

await using var db = await Database.OpenAsync("cold-read.cdb", options);
```
UseDirectColdFileLookupPreset() keeps the existing cache shape but enables memory-mapped reads for clean main-file pages when the storage device supports them.
For explicit bounded file-cache scenarios, use the explicit hybrid file-cache preset instead:
```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseHybridFileCachePreset();
    });

await using var db = await Database.OpenAsync("app.cdb", options);
```
UseHybridFileCachePreset() is the current recommended opt-in preset for explicit bounded file-cache runs. It sets MaxCachedPages = 2048, adds a small bounded WAL read cache (MaxCachedWalReadPages = 256), keeps sequential B-tree leaf read-ahead enabled, enables memory-mapped reads for clean main-file pages when the storage device supports them, and keeps the standard B-tree index provider, which outperformed the caching index wrapper in the current tuning matrix.
For sustained durable writes, use the write-heavy preset instead:
```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseWriteOptimizedPreset();
    });

await using var db = await Database.OpenAsync("ingest.cdb", options);
```
UseWriteOptimizedPreset() is the current recommended opt-in preset for file-backed write-heavy workloads. It keeps the existing cache and index configuration, raises the auto-checkpoint frame threshold to 4096, and runs auto-checkpoints in background slices instead of blocking the triggering commit. PagerOptions.AutoCheckpointMaxPagesPerStep controls how much work each background slice performs; the default remains 64 pages.

Treat this as the stable baseline preset, not a promise that the frame-count/background row is always the top line in every harness. In the latest March 27 durable SQL batching median, auto-commit single-row SQL on this preset measured about 270.5 ops/sec, and the analyzed-table row measured about 267.8 ops/sec. In the stable March 28 single-writer diagnostics rerun, the preset's FrameCount(4096)+Background(256 pages/step) row measured about 275.9 ops/sec, effectively tied with the top single-writer rows (FrameCount(4096) at 276.3 and WalSize(8 MiB) at 273.8) while still keeping checkpoint work off the triggering commit.
If you want to experiment with moving advisory statistics persistence off the ordinary durable commit path, the storage builder also exposes UseLowLatencyDurableWritePreset():
```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseLowLatencyDurableWritePreset();
    });

await using var db = await Database.OpenAsync("ingest.cdb", options);
```
Treat this as a measure-first preset rather than a new baseline. The preset now deliberately separates exact committed-row durability from advisory planner-stat persistence: committed user rows remain WAL-durable per commit, while sys.table_stats.row_count_is_exact and stale column-stat tracking make any deferred planner metadata explicit after reopen/recovery. In the latest durable-sql-batching median-of-3 run, analyzed single-row durable SQL measured about 267.8 ops/sec on UseWriteOptimizedPreset() and about 261.4 ops/sec on UseLowLatencyDurableWritePreset(). The current biggest durable ingest win is still explicit transaction batching, not the low-latency preset by itself.
If you want to experiment with durable group commit, the storage builder now exposes UseDurableCommitBatchWindow(...):
```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseWriteOptimizedPreset();
        builder.UseDurableCommitBatchWindow(TimeSpan.FromMilliseconds(0.25));
    });

await using var db = await Database.OpenAsync("ingest.cdb", options);
```
Keep this at TimeSpan.Zero unless you have benchmark data for your workload. The delay only affects file-backed Durable commits and trades commit latency for more opportunity to share one OS flush across multiple writers. The flush leader now skips or short-circuits that wait once the pending commit queue is already large enough, so the option behaves more like "batch briefly when lightly contended" than "always sleep before every durable flush." In the stable March 28 concurrent median-of-3 rerun, 250us was the best 4-writer row at about 553.4 commits/sec and the narrow best pure batch-window 8-writer row at about 1070.4 commits/sec, while the single-writer harness still regressed to about 267.2 ops/sec. This should remain an opt-in knob for measured in-process contention rather than a new default. When you test it, look at queue depth, commits per flush, and latency percentiles in addition to raw throughput.
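When you collect those latency percentiles, timing each durable commit and indexing into the sorted latency array is usually enough. A minimal sketch against the Database API used throughout this README; the table setup, writer counts, and the percentile helper are illustrative scaffolding, not part of the package:

```csharp
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseWriteOptimizedPreset();
        builder.UseDurableCommitBatchWindow(TimeSpan.FromMilliseconds(0.25));
    });

await using var db = await Database.OpenAsync("bench.cdb", options);
await using (await db.ExecuteAsync(
    "CREATE TABLE IF NOT EXISTS bench (id INTEGER PRIMARY KEY, value INTEGER)")) { }

// Record one latency sample per durable auto-commit across 4 concurrent writers.
var latenciesMs = new ConcurrentBag<double>();
await Task.WhenAll(Enumerable.Range(0, 4).Select(writer => Task.Run(async () =>
{
    for (int i = 0; i < 1_000; i++)
    {
        var sw = Stopwatch.StartNew();
        await using (await db.ExecuteAsync(
            $"INSERT INTO bench (id, value) VALUES ({writer * 1_000 + i}, {writer})")) { }
        latenciesMs.Add(sw.Elapsed.TotalMilliseconds);
    }
})));

// Percentile by index into the sorted latency array.
double[] sorted = latenciesMs.OrderBy(x => x).ToArray();
double P(double q) => sorted[(int)(q * (sorted.Length - 1))];
Console.WriteLine($"p50={P(0.50):F2}ms  p95={P(0.95):F2}ms  p99={P(0.99):F2}ms");
```

Compare the percentile lines for batch windows of 0 and 0.25 ms on your own hardware before enabling the delay.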
For sustained file-backed ingest, the builder also exposes UseWalPreallocationChunkBytes(...):
```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseWriteOptimizedPreset();
        builder.UseWalPreallocationChunkBytes(1 * 1024 * 1024);
    });

await using var db = await Database.OpenAsync("ingest.cdb", options);
```
Keep this at 0 by default. In the stable March 28 concurrent rerun it was helpful on the 8-writer rows, where WalPrealloc(1MiB) with BatchWindow(0) was the best measured row at about 1078.6 commits/sec, but it was still not a general single-writer answer: on the latest single-writer diagnostics it moved the FrameCount(4096)+Background(256 pages/step) row from about 275.9 to 273.7 ops/sec, while WalSize(8 MiB) measured about 273.8 ops/sec and plain FrameCount(4096) remained the top row at about 276.3 ops/sec. Treat it as an experimental opt-in for specific local-disk ingest workloads rather than a general preset.
If you are trying to reproduce the concurrent durable-write benchmark shape, the key detail is that the writers share one Database instance in-process:
```csharp
using System.Threading;
using CSharpDB.Engine;
using CSharpDB.Execution;

static async ValueTask ExecuteNonQueryAsync(Database db, string sql, CancellationToken ct = default)
{
    await using QueryResult result = await db.ExecuteAsync(sql, ct);
}

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder =>
    {
        builder.UseWriteOptimizedPreset();
        builder.UseWalPreallocationChunkBytes(1 * 1024 * 1024); // Best measured 8-writer row on the current perf runner
    });

await using var db = await Database.OpenAsync("ingest.cdb", options);
await ExecuteNonQueryAsync(
    db,
    "CREATE TABLE IF NOT EXISTS bench (id INTEGER PRIMARY KEY, value INTEGER, text_col TEXT, category TEXT)");

int nextId = 0;
Task[] writers = new Task[8];
for (int writerId = 0; writerId < writers.Length; writerId++)
{
    int localWriterId = writerId;
    writers[writerId] = Task.Run(async () =>
    {
        for (int i = 0; i < 10_000; i++)
        {
            int id = Interlocked.Increment(ref nextId);
            await ExecuteNonQueryAsync(
                db,
                $"INSERT INTO bench (id, value, text_col, category) VALUES ({id}, {localWriterId}, 'durable', 'Alpha')");
        }
    });
}

await Task.WhenAll(writers);
```
For better write-heavy numbers, start with these rules:
- Use UseWriteOptimizedPreset() first. It is the baseline recommendation for file-backed durable ingest.
- If your workload can batch multiple logical writes into one explicit transaction, do that before tuning microsecond batch windows. In the latest durable SQL batching median, that scaled from about 270 rows/sec at auto-commit to about 2.7K, 27K, and 197K rows/sec at 10, 100, and 1000 rows per commit.
- If you have 8 in-process durable writers sharing one Database, benchmark UseWalPreallocationChunkBytes(1 * 1024 * 1024) first with the batch window left at 0; that was the best measured 8-writer row on the current perf runner.
- If you want to tune the batch window under 8-writer contention, benchmark TimeSpan.FromMilliseconds(0.25) next; that was the narrow best pure batch-window row in the latest median-of-3 rerun.
- If you have 4 in-process durable writers, benchmark TimeSpan.FromMilliseconds(0.25) first.
- Measure UseLowLatencyDurableWritePreset() on your own workload rather than assuming it helps. On the current perf runner it did not beat UseWriteOptimizedPreset() for analyzed single-row durable SQL.
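Since explicit transaction batching is the biggest durable-ingest win listed above, here is a minimal sketch of it. This assumes the engine accepts BEGIN TRANSACTION / COMMIT as SQL statements; the exact transaction syntax is not shown elsewhere in this README, so verify it against the engine docs:

```csharp
using CSharpDB.Engine;

var options = new DatabaseOptions()
    .ConfigureStorageEngine(builder => builder.UseWriteOptimizedPreset());

await using var db = await Database.OpenAsync("ingest.cdb", options);
await using (await db.ExecuteAsync(
    "CREATE TABLE IF NOT EXISTS bench (id INTEGER PRIMARY KEY, value INTEGER)")) { }

const int BatchSize = 100; // one durable WAL flush per 100 rows instead of per row
for (int batch = 0; batch < 10; batch++)
{
    // Assumed syntax: explicit SQL-level transaction delimiters.
    await using (await db.ExecuteAsync("BEGIN TRANSACTION")) { }
    for (int i = 0; i < BatchSize; i++)
    {
        int id = batch * BatchSize + i;
        await using (await db.ExecuteAsync(
            $"INSERT INTO bench (id, value) VALUES ({id}, {id})")) { }
    }
    await using (await db.ExecuteAsync("COMMIT")) { }
}
```

Pick the batch size from your durability needs: every row inside an uncommitted batch is lost on crash, so larger batches trade crash-loss window for throughput.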
Recommended Read/Write Topology
- In one process, prefer one long-lived Database instance for writes and create ReaderSessions from that same instance for snapshot reads.
- Avoid opening the same .cdb file twice in one process just to split "read DB" and "write DB". That duplicates engine state instead of using the intended shared-instance coordination path.
- If you need multiple callers or transports, put one warm Database behind your host/service boundary and route both reads and writes through that owner.
```csharp
using CSharpDB.Engine;

await using var db = await Database.OpenAsync("app.cdb", options);

using var reader = db.CreateReaderSession();
await using var result = await reader.ExecuteReadAsync("SELECT COUNT(*) FROM bench");
```
Separately from durable flush tuning, the storage write path now does partial async I/O batching on its own. Direct AppendFramesAndCommitAsync(...) already writes WAL frames in chunks, checkpoint copies already batch contiguous page writes back into the main database file, repeated AppendFrameAsync(...) calls inside one transaction are staged and emitted as chunked WAL writes at CommitAsync(...) time, and the snapshot/export-style copy paths share one batched storage-device copy helper. The roadmap work left here is to audit the other export/rewrite paths and decide which are worth batching further.
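For callers driving the WAL directly, the batched entry point named above is AppendFramesAndCommitAsync(...). A hedged sketch of the intended shape; the frame representation and the exact parameter types here are assumptions for illustration, not the published signature:

```csharp
using CSharpDB.Storage; // namespace for WriteAheadLog assumed

// Per-frame path: repeated AppendFrameAsync(...) calls inside one transaction
// are staged and emitted as chunked WAL writes when CommitAsync(...) runs.
// Batched path: hand the whole frame set over in one call, yielding chunked
// writes plus a single durability flush instead of one I/O per frame.
static async Task WriteFramesBatchedAsync(
    WriteAheadLog wal,
    IReadOnlyList<(uint PageId, ReadOnlyMemory<byte> Data)> frames)
{
    // Assumed signature: accepts the staged frames and commits them together.
    await wal.AppendFramesAndCommitAsync(frames);
}
```

Prefer the batched call when you already hold all frames for a transaction; the staged per-frame path exists so incremental writers get the same chunking without restructuring.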
The current crash-level durability coverage is process-based rather than mock-based. The test suite now verifies recovery after a real process crash at four points: immediately after commit returns, at checkpoint start, after checkpoint page copies have been flushed to the main DB file, and after WAL checkpoint finalization but before pager state refresh completes.
Low-level use: open the storage graph directly
If you need direct access to Pager, SchemaCatalog, or BTree, use the default storage engine factory:
```csharp
using CSharpDB.Storage.BTrees;
using CSharpDB.Storage.Paging;
using CSharpDB.Storage.StorageEngine;

var storageOptions = new StorageEngineOptionsBuilder()
    .UsePagerOptions(new PagerOptions { MaxCachedPages = 1024 })
    .UseBTreeIndexes()
    .Build();

var factory = new DefaultStorageEngineFactory();
var context = await factory.OpenAsync("lowlevel.cdb", storageOptions);
await using var pager = context.Pager;

await pager.BeginTransactionAsync();
try
{
    uint rootPageId = await BTree.CreateNewAsync(pager);
    var tree = new BTree(pager, rootPageId);
    await tree.InsertAsync(1, new byte[] { 1, 2, 3, 4 });
    byte[]? payload = await tree.FindAsync(1);
    await pager.CommitAsync();
}
catch
{
    await pager.RollbackAsync();
    throw;
}
```
Key extension points
- IStorageDevice for alternate storage backends
- IPageCache through PagerOptions.PageCacheFactory
- ICheckpointPolicy for auto-checkpoint decisions
- IPageOperationInterceptor for diagnostics and fault injection
- IPageChecksumProvider for WAL checksum behavior
- IIndexProvider for index-store composition
- ISerializerProvider for record and schema serialization
- ICatalogStore for catalog payload encoding
- IStorageEngineFactory for replacing the default storage composition root
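As one example of these extension points, a diagnostics-oriented IPageOperationInterceptor can count page traffic without altering behavior. The member names below are assumptions sketched from the extension point's stated purpose; check the actual interface before implementing:

```csharp
using System.Threading;
using System.Threading.Tasks;
using CSharpDB.Storage.Paging;

// Hypothetical interceptor that tallies page reads and writes for diagnostics.
// Assumed interface members: OnPageReadAsync / OnPageWriteAsync hooks.
public sealed class CountingInterceptor : IPageOperationInterceptor
{
    private long _reads;
    private long _writes;

    public ValueTask OnPageReadAsync(uint pageId, CancellationToken ct = default)
    {
        Interlocked.Increment(ref _reads);
        return ValueTask.CompletedTask;
    }

    public ValueTask OnPageWriteAsync(uint pageId, CancellationToken ct = default)
    {
        Interlocked.Increment(ref _writes);
        return ValueTask.CompletedTask;
    }

    // Read both counters atomically enough for diagnostics output.
    public (long Reads, long Writes) Snapshot() =>
        (Interlocked.Read(ref _reads), Interlocked.Read(ref _writes));
}
```

The same hook point is where fault-injection tests would throw on a chosen page id instead of counting.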
Related packages
| Package | Description |
|---|---|
| CSharpDB.Engine | SQL/engine layer built on this storage package |
| CSharpDB.Storage.Diagnostics | Read-only inspection and integrity tooling |
| CSharpDB.Execution | Query execution layer that reads/writes through storage |
Installation

```shell
dotnet add package CSharpDB.Storage
```

For the all-in-one package:

```shell
dotnet add package CSharpDB
```
| Product | Compatible and computed target frameworks |
|---|---|
| .NET | net10.0 is compatible. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
Dependencies

- net10.0
  - CSharpDB.Primitives (>= 2.9.1)
NuGet packages (4)
Showing the top 4 NuGet packages that depend on CSharpDB.Storage:
| Package | Description |
|---|---|
| CSharpDB.Execution | Query planner, operator tree, and expression evaluator for the CSharpDB embedded database. |
| CSharpDB.Engine | Lightweight embedded SQL database engine for .NET. Single-file storage, WAL durability, concurrent readers, and a typed Collection<T> NoSQL API. |
| CSharpDB.Storage.Diagnostics | Read-only storage diagnostics toolkit for CSharpDB database and WAL files. |
| CSharpDB | All-in-one package for CSharpDB application development. Includes the unified client, engine, ADO.NET provider, and diagnostics. |
GitHub repositories
This package is not used by any popular GitHub repositories.
| Version | Downloads | Last Updated |
|---|---|---|
| 2.9.1 | 0 | 4/7/2026 |
| 2.8.1 | 30 | 4/6/2026 |
| 2.8.0 | 37 | 4/4/2026 |
| 2.7.0 | 103 | 3/31/2026 |
| 2.6.0 | 122 | 3/29/2026 |
| 2.5.0 | 217 | 3/28/2026 |
| 2.4.0 | 121 | 3/24/2026 |
| 2.3.0 | 117 | 3/22/2026 |
| 2.2.0 | 113 | 3/21/2026 |
| 2.0.1 | 133 | 3/14/2026 |
| 2.0.0 | 119 | 3/13/2026 |
| 1.9.0 | 142 | 3/12/2026 |
| 1.8.0 | 135 | 3/11/2026 |
| 1.7.0 | 132 | 3/8/2026 |
| 1.6.0 | 127 | 3/8/2026 |
| 1.5.0 | 129 | 3/7/2026 |
| 1.4.0 | 127 | 3/7/2026 |
| 1.3.0 | 130 | 3/6/2026 |
| 1.2.0 | 127 | 3/5/2026 |
| 1.1.0 | 118 | 3/4/2026 |