CosmoS3 1.9.1

.NET CLI
    dotnet add package CosmoS3 --version 1.9.1

Package Manager
    NuGet\Install-Package CosmoS3 -Version 1.9.1
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.

PackageReference
    <PackageReference Include="CosmoS3" Version="1.9.1" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.

Central Package Management (CPM)
    Directory.Packages.props:
        <PackageVersion Include="CosmoS3" Version="1.9.1" />
    Project file:
        <PackageReference Include="CosmoS3" />
For projects that support Central Package Management, copy the PackageVersion node into the solution Directory.Packages.props file to version the package, and the versionless PackageReference into the project file.

Paket CLI
    paket add CosmoS3 --version 1.9.1

Script & Interactive
    #r "nuget: CosmoS3, 1.9.1"
The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or source code of the script to reference the package.

File-based apps
    #:package CosmoS3@1.9.1
The #:package directive can be used in C# file-based apps starting in .NET 10 preview 4. Copy this into a .cs file before any lines of code to reference the package.

Cake
    #addin nuget:?package=CosmoS3&version=1.9.1    (install as a Cake Addin)
    #tool nuget:?package=CosmoS3&version=1.9.1     (install as a Cake Tool)

CosmoS3


CosmoS3 is an Amazon S3–compatible object storage middleware library for CosmoApiServer. It implements core S3 operations using SQL Server for metadata and the local disk (or a pluggable storage driver) for object data.


Table of Contents

  1. Architecture
  2. Performance
  3. Recent Updates
  4. Quick Start
  5. Configuration
  6. Database Schema
  7. S3 Feature Compatibility
  8. Static Website Hosting
  9. Presigned URLs
  10. Multipart Upload
  11. Using with AWS CLI
  12. Running Integration Tests
  13. Project Structure

Architecture

┌─────────────────────────────────────────────────────┐
│                   CosmoApiServer                    │
│     (System.IO.Pipelines transport, port 8100)      │
└──────────────────────┬──────────────────────────────┘
                       │ IMiddleware
                       ▼
┌─────────────────────────────────────────────────────┐
│                    S3Middleware                     │
│  • Parses incoming requests into S3Context          │
│  • Authenticates (SigV4 / SigV2 / presigned)        │
│  • Routes to ServiceHandler / BucketHandler /       │
│    ObjectHandler / AdminHandler                     │
└────────────────┬──────────────────────┬─────────────┘
                 │                      │
      ┌──────────▼───────┐   ┌──────────▼────────────┐
      │    DataAccess    │   │    Storage Driver     │
      │ (SQL Server via  │   │  (DiskStorageDriver   │
      │ CosmoSQLClient)  │   │   ./data/objects/)    │
      └──────────────────┘   └───────────────────────┘

Key types:

Type               Role
S3Middleware       Entry point; implements IMiddleware
S3Request          Parses HTTP request into S3 context (method, bucket, key, auth type)
S3Response         Writes S3-formatted HTTP responses
S3Context          Combines S3Request + S3Response for handler use
BucketManager      In-memory bucket registry; synced with DB at startup
ConfigManager      DB lookup helpers (users, credentials, buckets, objects)
AuthManager        SigV2 / SigV4 / presigned URL authentication
DataAccess         All SQL stored-proc calls via CosmoSQLClient (with IMemoryCache)
DiskStorageDriver  High-performance I/O using ArrayPool and SubStream

Performance

CosmoS3 is optimized for high-throughput and low-latency workloads. Recent architectural improvements have pushed performance to near-NVMe speeds on local hardware.

Benchmark: CosmoS3 vs. MinIO (Throughput in MiB/s)

Tests were performed locally using the warp S3 benchmark tool with 1 KiB, 1 MiB, and 16 MiB object sizes.

Operation  Size   Concurrency  CosmoS3  MinIO   CosmoS3 vs MinIO
PUT        1MiB   32           402.67   370.33  +8.7%
PUT        16MiB  32           654.66   471.79  +38.7%
GET        1KiB   1            19.3     7.5     +157.3%
GET        1MiB   8            951.58   513.32  +85.3%
GET        16MiB  8            936.21   690.99  +35.4%
GET        16MiB  32           860.05   603.13  +42.5%

Key Optimizations:

  • Zero-Allocation I/O: Integrated ArrayPool<byte> across all read/write paths to virtually eliminate transient allocations and GC pressure.
  • End-to-End Streaming: Implemented SubStream for range requests, allowing data to be streamed directly from disk to the transport without intermediate buffering.
  • Lock-Free Concurrency: Migrated bucket registry to ConcurrentDictionary, enabling non-blocking lookups during high-concurrency requests.
  • Metadata Acceleration: Optimized SQL schema with composite indexes on (bucketguid, objectkey, version DESC) for near-instant latest-version lookups.
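The idea behind SubStream-based range serving can be pictured with a minimal bounded-stream wrapper. The sketch below is Python for illustration only (CosmoS3's actual SubStream is a C# type); it shows how a byte range is exposed as its own stream without buffering the whole object:

```python
import io

class SubStream(io.RawIOBase):
    """Expose the byte range [offset, offset+length) of an underlying
    seekable stream as a read-only stream, with no intermediate copy
    of the full object. Illustrative sketch only."""

    def __init__(self, base, offset, length):
        self._base = base
        self._base.seek(offset)
        self._remaining = length

    def read(self, size=-1):
        if self._remaining <= 0:
            return b""
        if size < 0 or size > self._remaining:
            size = self._remaining
        chunk = self._base.read(size)
        self._remaining -= len(chunk)
        return chunk

# A range request for bytes=4-8 of a 26-byte object:
obj = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
rng = SubStream(obj, 4, 5)   # offset 4, length 5
print(rng.read())            # b'efghi'
```

The transport can then pump this wrapper directly, so a range GET never materialises more than one buffer of data at a time.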

Recent Updates

  • Static Website Hosting: Full support for serving static content, including custom index/error documents and virtual hosted-style requests.
  • Performance Engine: Comprehensive refactor of the I/O and storage layer for high-performance workloads.
  • Database Scalability: Added support for PostgreSQL, MySQL, and SQLite alongside SQL Server.
  • Reliable aws-chunked Decoding: Rewrote the chunked-payload engine to support high-reliability streaming and multipart uploads from the AWS CLI.
  • Benchmarking Suite: Integrated a new performance testing tool to compare CosmoS3 against any S3-compatible backend.

Quick Start

1. Run the CosmoS3Host sample project

A ready-to-run host is included at samples/CosmoS3Host/.

cd samples/CosmoS3Host
dotnet run

2. Wire CosmoS3 into your own CosmoApiServer app

using CosmoS3;
using CosmoS3.Settings;

var settings = new SettingsBase
{
    RegionString       = "us-east-1",
    ValidateSignatures = false,   // set true in production

    Storage = new StorageSettings
    {
        StorageType   = CosmoS3.Storage.StorageDriverType.Disk,
        DiskDirectory = "./data/objects"
    },

    Database = new DatabaseSettings
    {
        Hostname     = "localhost",
        Port         = 1433,
        DatabaseName = "MyDatabase",
        Username     = "sa",
        Password     = "your-password"
    },

    // Optional: enable CORS for browser-based S3 clients
    Cors = new CorsSettings { Enabled = true },

    // Optional: enable HTTPS
    // CertificatePath     = "./certs/server.pfx",
    // CertificatePassword = "changeme",

    // Optional: enable HTTP/2 cleartext (h2c)
    // EnableHttp2 = true,
};

// CosmoS3Application.Create() wires TLS, HTTP/2, CORS, logging, and S3Middleware.
var app = CosmoS3Application.Create(settings, port: 8100);
app.Run();

Or wire manually for full control over the middleware order:

using CosmoApiServer.Core.Hosting;
using CosmoS3;
using CosmoS3.Settings;

var app = CosmoWebApplicationBuilder.Create()
    .ListenOn(8100)
    .UseHttps("./certs/server.pfx", "changeme")   // optional TLS
    .UseHttp2()                                    // optional h2c
    .UseCors()                                     // optional CORS
    .UseLogging()
    .UseMiddleware(new S3Middleware(settings))
    .Build();

app.Run();

Configuration

SettingsBase

Property             Type              Default         Description
ValidateSignatures   bool              true            Verify AWS Signature V4/V2 on every request. Disable for local dev only.
BaseDomain           string?           null            Set to enable virtual-hosted-style URLs (e.g. "localhost"). Leave null for path-style.
RegionString         string            "us-west-1"     AWS region identifier returned in responses.
HeaderApiKey         string            "x-api-key"     HTTP header name for admin API authentication.
AdminApiKey          string            "cosmos3admin"  Secret value expected in HeaderApiKey for admin endpoints.
Database             DatabaseSettings  (required)      SQL Server connection details.
Storage              StorageSettings   (required)      Object storage configuration.
Logging              LoggingSettings   default         Log level callbacks.
Debug                DebugSettings     default         Enable extra debug output.
Users / Credentials / Buckets  List<T>  empty          Seed in-memory data for no-database mode (testing).
CertificatePath      string?           null            Path to PFX file for HTTPS. When set, TLS is automatically applied.
CertificatePassword  string?           null            Password for the PFX certificate.
EnableHttp2          bool              false           Enable h2c (HTTP/2 cleartext) support.
Cors                 CorsSettings      disabled        CORS configuration for browser-based S3 clients.

DatabaseSettings

Property      Default  Description
Hostname      (none)   SQL Server hostname or IP
Port          0        TCP port (use 1433 for SQL Server)
DatabaseName  (none)   Database name
Username      (none)   SQL login
Password      (none)   SQL password

The connection string is constructed as:

server=HOSTNAME,PORT;database=DBNAME;user id=USER;password=PASS;TrustServerCertificate=true;
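The template can be made concrete with the Quick Start values. A small helper, written in Python purely to illustrate the substitution (CosmoS3 builds this string internally in C#):

```python
def build_connection_string(hostname, port, database, username, password):
    """Mirror the documented connection-string template."""
    return (f"server={hostname},{port};database={database};"
            f"user id={username};password={password};"
            f"TrustServerCertificate=true;")

print(build_connection_string("localhost", 1433, "MyDatabase", "sa", "your-password"))
# server=localhost,1433;database=MyDatabase;user id=sa;password=your-password;TrustServerCertificate=true;
```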

StorageSettings

Property       Default    Description
StorageType    Disk       Disk is the only currently supported driver
DiskDirectory  "./disk/"  Root directory for object files (no trailing slash)
TempDirectory  "./temp/"  Scratch directory for multipart upload assembly

Database Schema

CosmoS3 supports multiple database engines including SQL Server, PostgreSQL, MySQL, and SQLite. The schema is automatically created or updated at startup via DatabaseFactory.EnsureSchemaAsync.

Key Tables

Table           Purpose
s3_users        S3 user accounts
s3_credentials  Access key / secret key pairs linked to users
s3_buckets      Bucket metadata (name, owner, region, storage config)
s3_objects      Object metadata (key, size, ETag, version, blob reference)
s3_objecttags   Per-object tags
s3_buckettags   Per-bucket tags
s3_uploads      Active multipart upload sessions
s3_uploadparts  Uploaded parts for active sessions

Key Indexes for Performance

The schema includes optimized composite indexes to ensure high performance even with millions of objects:

  • idx_s3_objects_bucket_key_version: (bucketguid, objectkey, version DESC) — Optimizes the common "get latest version" lookup.
  • idx_s3_credentials_accesskey: (accesskey) — Rapid authentication.
  • idx_s3_buckets_name: (name) — Fast bucket resolution.

All database operations are abstracted via DataAccess and the IS3Repository interface.


S3 Feature Compatibility

Service-Level Operations

Operation     AWS CLI command  Status
List Buckets  aws s3 ls        Supported

Bucket Operations

Operation                      AWS CLI command                  Status
Create Bucket                  aws s3 mb s3://bucket            Supported
Delete Bucket                  aws s3 rb s3://bucket            Supported
List Objects (v1 & v2)         aws s3 ls s3://bucket/           Supported
Get Bucket ACL                 aws s3api get-bucket-acl         Supported
Put Bucket ACL                 aws s3api put-bucket-acl         Supported
Get Bucket Tags                aws s3api get-bucket-tagging     Supported
Put Bucket Tags                aws s3api put-bucket-tagging     Supported
Delete Bucket Tags             aws s3api delete-bucket-tagging  Supported
Get/Put/Delete Bucket Website  aws s3api *-bucket-website       Supported
Get Bucket Location            aws s3api get-bucket-location    Supported
Get Bucket Versioning          aws s3api get-bucket-versioning  Supported

Object Operations

Operation               AWS CLI command                      Status
Put Object              aws s3 cp local.txt s3://bucket/key  Supported
Get Object              aws s3 cp s3://bucket/key local.txt  Supported
Head Object             aws s3api head-object                Supported
Delete Object           aws s3 rm s3://bucket/key            Supported
Delete Objects (batch)  aws s3 sync --delete                 Supported
Copy Object             aws s3 cp s3://src s3://dst          Supported
Get Object ACL          aws s3api get-object-acl             Supported
Put Object ACL          aws s3api put-object-acl             Supported
Get Object Tags         aws s3api get-object-tagging         Supported
Put Object Tags         aws s3api put-object-tagging         Supported
Delete Object Tags      aws s3api delete-object-tagging      Supported
Presigned GET/PUT URLs  SDK GetPreSignedURL                  Supported
Multipart Upload        aws s3 cp (large files)              Supported
List Multipart Uploads  aws s3api list-multipart-uploads     Supported
Abort Multipart Upload  aws s3api abort-multipart-upload     Supported

Notes on Compatibility

  • Signature versions: Both SigV4 and SigV2 are supported for authentication and presigned URLs.
  • aws-chunked transfer encoding: Automatically decoded; works with aws s3 cp for any file size.
  • Versioning: Version IDs are not supported; all operations act on the current (only) version.
  • Bucket policies / CORS / lifecycle / replication: Not implemented.
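The aws-chunked wire format mentioned above frames the body as a sequence of `<hex-size>;chunk-signature=<sig>\r\n<bytes>\r\n` chunks, ending with a zero-size chunk. A minimal Python sketch of the decoding loop (illustration only; signature verification is elided, and CosmoS3's real decoder works on streams, not a complete buffer):

```python
def decode_aws_chunked(payload: bytes) -> bytes:
    """Decode an aws-chunked body into the raw object bytes.
    Chunk signatures are parsed but not verified in this sketch."""
    out = bytearray()
    pos = 0
    while True:
        header_end = payload.index(b"\r\n", pos)
        header = payload[pos:header_end].decode("ascii")
        size = int(header.split(";")[0], 16)   # hex chunk size
        pos = header_end + 2
        if size == 0:                          # final (empty) chunk
            break
        out += payload[pos:pos + size]
        pos += size + 2                        # skip data + trailing CRLF
    return bytes(out)

body = (b"5;chunk-signature=abc123\r\nhello\r\n"
        b"0;chunk-signature=def456\r\n\r\n")
print(decode_aws_chunked(body))               # b'hello'
```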

Static Website Hosting

A bucket can be configured to serve static files over plain HTTP (no AWS credentials required).

Configure a bucket for website hosting

# Create bucket
aws --endpoint-url http://localhost:8100 s3 mb s3://my-site

# Upload content
aws --endpoint-url http://localhost:8100 s3 cp index.html  s3://my-site/index.html  --content-type text/html
aws --endpoint-url http://localhost:8100 s3 cp error.html  s3://my-site/error.html   --content-type text/html

# Enable website hosting
aws --endpoint-url http://localhost:8100 s3 website s3://my-site \
    --index-document index.html \
    --error-document error.html

Browse the site

# Bucket root returns index.html
curl http://localhost:8100/my-site/

# Unknown path returns error.html with 404
curl http://localhost:8100/my-site/missing.html

Redirect all requests

aws --endpoint-url http://localhost:8100 s3api put-bucket-website \
    --bucket my-site \
    --website-configuration '{
        "RedirectAllRequestsTo": { "HostName": "example.com", "Protocol": "https" }
    }'

Routing rules

aws --endpoint-url http://localhost:8100 s3api put-bucket-website \
    --bucket my-site \
    --website-configuration '{
        "IndexDocument": { "Suffix": "index.html" },
        "ErrorDocument": { "Key": "error.html" },
        "RoutingRules": [
            {
                "Condition": { "KeyPrefixEquals": "old/" },
                "Redirect":  { "ReplaceKeyPrefixWith": "new/" }
            }
        ]
    }'

How it works:

  • Website configuration is stored as website.xml at <DiskDirectory>/<bucketName>/website.xml.
  • Requests to a website-enabled bucket without AWS authentication headers are served as static files.
  • If the request path ends with /, the index document is served.
  • If the object is not found, the error document is returned with HTTP 404.
  • Redirect rules are evaluated before object lookup.

Presigned URLs

Presigned URLs grant time-limited access to an S3 object without requiring the caller to have AWS credentials.

Generate a presigned URL (C# SDK)

var request = new GetPreSignedUrlRequest
{
    BucketName = "my-bucket",
    Key        = "my-object.txt",
    Expires    = DateTime.UtcNow.AddMinutes(15),
    Verb       = HttpVerb.GET
};

string url = s3Client.GetPreSignedURL(request);

Use the presigned URL

# Download with curl (no AWS credentials needed)
curl "<presigned-url>" -o downloaded.txt

# Upload with a presigned PUT URL
curl -X PUT "<presigned-put-url>" --data-binary @file.txt

Signature version behavior:

The AWS SDK generates SigV2 presigned URLs for custom (non-AWS) endpoints. CosmoS3 validates both:

Version  Query params
SigV2    AWSAccessKeyId, Signature, Expires (Unix timestamp)
SigV4    X-Amz-Credential, X-Amz-Signature, X-Amz-Expires

Expired presigned URLs return HTTP 403 ExpiredToken.
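The two expiry models differ: SigV2 carries an absolute Unix timestamp, while SigV4 carries a signing date (the standard X-Amz-Date parameter) plus a lifetime in seconds. A Python sketch of the checks, for illustration only:

```python
from datetime import datetime, timedelta, timezone

def sigv2_expired(expires: int, now: datetime) -> bool:
    # SigV2: Expires is an absolute Unix timestamp.
    return now.timestamp() > expires

def sigv4_expired(amz_date: str, expires_in: int, now: datetime) -> bool:
    # SigV4: X-Amz-Date (e.g. 20260326T120000Z) plus X-Amz-Expires seconds.
    start = datetime.strptime(amz_date, "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
    return now > start + timedelta(seconds=expires_in)

now = datetime(2026, 3, 26, 12, 30, tzinfo=timezone.utc)
print(sigv4_expired("20260326T120000Z", 900, now))   # True  (15-minute URL, 30 minutes old)
print(sigv4_expired("20260326T120000Z", 3600, now))  # False (still within the hour)
```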


Multipart Upload

Multipart upload allows large files to be uploaded in parts and assembled server-side.

Via AWS CLI (automatic for files > 8 MB by default)

# CosmoS3 handles chunked uploads transparently
aws --endpoint-url http://localhost:8100 \
    s3 cp large-file.bin s3://my-bucket/large-file.bin \
    --expected-size 1073741824   # hint for 1 GB file

Via SDK (manual)

// 1. Initiate upload
var initResponse = await s3.InitiateMultipartUploadAsync(new InitiateMultipartUploadRequest
{
    BucketName  = "my-bucket",
    Key         = "my-object"
});
string uploadId = initResponse.UploadId;

// 2. Upload parts (minimum 5 MB each, except the last)
var uploadPartResponse = await s3.UploadPartAsync(new UploadPartRequest
{
    BucketName   = "my-bucket",
    Key          = "my-object",
    UploadId     = uploadId,
    PartNumber   = 1,
    InputStream  = partStream,
    PartSize     = partStream.Length
});

// 3. Complete the upload
await s3.CompleteMultipartUploadAsync(new CompleteMultipartUploadRequest
{
    BucketName = "my-bucket",
    Key        = "my-object",
    UploadId   = uploadId,
    PartETags  = new List<PartETag> { new PartETag(1, uploadPartResponse.ETag) }
});

Internals:

  • Part data is stored temporarily in TempDirectory during upload.
  • On CompleteMultipartUpload, parts are assembled and written to DiskDirectory.
  • Incomplete uploads are tracked in s3_uploads and can be aborted or listed.
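S3-style multipart ETags are conventionally the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. The source does not state how CosmoS3 computes multipart ETags, so the Python sketch below only illustrates the AWS convention:

```python
import hashlib

def multipart_etag(parts: list[bytes]) -> str:
    """AWS-convention multipart ETag: md5(md5(p1) + md5(p2) + ...)-N."""
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return f"{hashlib.md5(digests).hexdigest()}-{len(parts)}"

print(multipart_etag([b"part-one", b"part-two"]))   # 32 hex chars, then "-2"
```

This is why a multipart object's ETag is not the MD5 of the whole object, and why clients compare part counts when verifying large uploads.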

Using with AWS CLI

Configure the AWS CLI for local use

aws configure
# AWS Access Key ID:     default
# AWS Secret Access Key: default
# Default region name:   us-east-1
# Default output format: json

Common commands

ENDPOINT=http://localhost:8100

# List buckets
aws --endpoint-url $ENDPOINT s3 ls

# Create bucket
aws --endpoint-url $ENDPOINT s3 mb s3://my-bucket

# Upload file
aws --endpoint-url $ENDPOINT s3 cp file.txt s3://my-bucket/

# Download file
aws --endpoint-url $ENDPOINT s3 cp s3://my-bucket/file.txt ./

# List objects
aws --endpoint-url $ENDPOINT s3 ls s3://my-bucket/

# Sync directory
aws --endpoint-url $ENDPOINT s3 sync ./local-dir/ s3://my-bucket/prefix/

# Delete object
aws --endpoint-url $ENDPOINT s3 rm s3://my-bucket/file.txt

# Delete bucket (must be empty)
aws --endpoint-url $ENDPOINT s3 rb s3://my-bucket

# Tag a bucket
aws --endpoint-url $ENDPOINT s3api put-bucket-tagging \
    --bucket my-bucket \
    --tagging '{"TagSet":[{"Key":"env","Value":"dev"}]}'

# Get bucket tags
aws --endpoint-url $ENDPOINT s3api get-bucket-tagging --bucket my-bucket

# Get bucket website config
aws --endpoint-url $ENDPOINT s3api get-bucket-website --bucket my-bucket

# Presigned URL (60 seconds)
aws --endpoint-url $ENDPOINT s3 presign s3://my-bucket/file.txt --expires-in 60

Running Integration Tests

The test suite (tests/CosmoS3.Tests/) uses xUnit + AWSSDK.S3. Tests require a running CosmoS3 server and a SQL Server database.

Start the server

cd samples/CosmoS3Host
dotnet run

Run all tests

cd tests/CosmoS3.Tests
dotnet test -c Release --logger "console;verbosity=minimal"

Test coverage

Test file             Tests  Feature area
BucketTests.cs        9      Bucket CRUD, ACL, tags, location
ObjectTests.cs        9      Object CRUD, ACL, tags, copy
MultipartTests.cs     5      Initiate, upload parts, complete, abort, list
PresignedUrlTests.cs  5      GET / PUT / HEAD presigned, expiry
WebsiteTests.cs       9      Static serving, routing rules, redirect-all

Total: 37 tests, all passing.

Fixture

S3Fixture (tests/CosmoS3.Tests/S3Fixture.cs) creates a unique bucket per test class and tears it down after all tests in the class complete.

public class S3Fixture : IAsyncLifetime
{
    public IAmazonS3 S3Client { get; }
    public string BucketName  { get; }   // e.g. "test-a1b2c3d4"
    public HttpClient HttpClient { get; } // for non-S3 HTTP assertions
    public string EndpointUrl { get; } = "http://localhost:8100";
    // ...
}

Project Structure

src/CosmoS3/
├── S3Middleware.cs          # IMiddleware entry point; request routing
├── S3Request.cs             # HTTP → S3 request parsing (auth, path, query)
├── S3Response.cs            # S3-formatted response writer
├── S3Context.cs             # Combined request + response context
├── S3Exception.cs           # Typed S3 error thrown by handlers
├── DataAccess.cs            # All DB stored-proc calls (with IMemoryCache)
├── SerializationHelper.cs   # XML/JSON serialization helpers
├── Settings/
│   ├── Settings.cs          # SettingsBase (top-level configuration)
│   ├── StorageSettings.cs
│   ├── LoggingSettings.cs
│   └── DebugSettings.cs
├── Classes/
│   ├── AuthManager.cs       # SigV2 / SigV4 / presigned authentication
│   ├── BucketManager.cs     # In-memory bucket list + DB-backed lookup
│   ├── BucketClient.cs      # Per-bucket storage driver accessor
│   ├── ConfigManager.cs     # User / credential / bucket / object lookup
│   └── CleanupManager.cs    # Background task: expire stale temp files
├── Api/S3/
│   ├── ApiHandler.cs        # Top-level dispatcher (service/bucket/object)
│   ├── ServiceHandler.cs    # ListBuckets
│   ├── BucketHandler.cs     # All bucket operations
│   ├── ObjectHandler.cs     # All object operations + multipart
│   └── ApiHelper.cs         # Shared XML response helpers
├── Storage/
│   ├── StorageDriverBase.cs
│   └── DiskStorageDriver.cs # Filesystem-based object storage
├── S3Objects/               # XML DTOs (request/response bodies)
│   ├── Error.cs
│   ├── ListAllMyBucketsResult.cs
│   ├── ListBucketResult.cs
│   └── ...
└── Logging/
    └── S3Logger.cs          # Console/callback-based logger
Compatible target frameworks

.NET: net10.0 is compatible. Computed: net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows.


Version  Downloads  Last Updated
1.9.1    32         3/26/2026
1.9.0    71         3/24/2026
1.8.0    85         3/17/2026
1.7.0    87         3/17/2026
1.6.5    88         3/16/2026
1.6.4    104        3/16/2026
1.6.3    84         3/16/2026
1.6.2    89         3/16/2026
1.6.1    82         3/10/2026
1.6.0    81         3/10/2026
1.5.3    82         3/6/2026
1.0.0    83         3/4/2026