Prakrishta.Data.Bulk 1.0.1

.NET CLI:

dotnet add package Prakrishta.Data.Bulk --version 1.0.1

Package Manager Console in Visual Studio (uses the NuGet module's Install-Package):

NuGet\Install-Package Prakrishta.Data.Bulk -Version 1.0.1

PackageReference (copy this XML node into the project file):

<PackageReference Include="Prakrishta.Data.Bulk" Version="1.0.1" />

Central Package Management (CPM), in the solution's Directory.Packages.props:

<PackageVersion Include="Prakrishta.Data.Bulk" Version="1.0.1" />

and in the project file:

<PackageReference Include="Prakrishta.Data.Bulk" />

Paket:

paket add Prakrishta.Data.Bulk --version 1.0.1

F# Interactive / Polyglot Notebooks (copy into the interactive tool or script source):

#r "nuget: Prakrishta.Data.Bulk, 1.0.1"

C# file-based apps (.NET 10 preview 4 and later; place in a .cs file before any lines of code):

#:package Prakrishta.Data.Bulk@1.0.1

Cake Addin:

#addin nuget:?package=Prakrishta.Data.Bulk&version=1.0.1

Cake Tool:

#tool nuget:?package=Prakrishta.Data.Bulk&version=1.0.1

Prakrishta.Data.Bulk

High-performance, extensible bulk operations for .NET. Prakrishta.Data.Bulk is a provider-agnostic, pipeline-based bulk engine designed for speed, flexibility, and testability. It complements Prakrishta.Data by enabling large-scale insert, update, and delete operations with minimal overhead.

Features

  • Fastest‑in‑class bulk insert performance
  • TVP‑based stored procedure strategy (fastest for pure inserts)
  • Staging table strategy with MERGE (best for upsert/update/delete)
  • Zero reflection, zero EF Core overhead
  • Linear scaling from 1k → 50k+ rows
  • Async/await support
  • BenchmarkDotNet‑verified performance
  • Works with SQL Server, Azure SQL, LocalDB
  • Pure ADO.NET — no EF Core required
  • Clean, extensible architecture

Feature Comparison Table

| Feature | Prakrishta (Stored Proc) | Prakrishta (Staging) | EFCore.BulkExtensions | Raw SqlBulkCopy |
|---|---|---|---|---|
| Bulk Insert | ⭐ Fastest | ⭐ Very Fast | Fast | Fast |
| Bulk Update | — | ⭐ Yes (MERGE) | Yes | — |
| Bulk Delete | — | ⭐ Yes (MERGE) | Yes | — |
| Upsert | — | ⭐ Yes (MERGE) | Yes | — |
| TVP Support | ⭐ Yes | Yes | Yes | No |
| Reflection‑Free | ⭐ Yes | ⭐ Yes | ❌ No | Yes |
| EF Core Required | No | No | Yes | No |

Getting Started (Quickstart Guide)

  1. Install the NuGet package
dotnet add package Prakrishta.Data.Bulk
  2. Define your entity
public sealed class SalesRecord
{
    public int Id { get; set; }
    public DateTime SaleDate { get; set; }
    public decimal Amount { get; set; }
}
  3. Create a bulk engine (full example)
var builder = WebApplication.CreateBuilder(args);

// Register the bulk engine. The AddBulkEngine extension (shown in the next
// step) takes the connection string as its first parameter; the connection
// string name "Default" is illustrative.
builder.Services.AddBulkEngine(
    builder.Configuration.GetConnectionString("Default")!,
    opts =>
    {
        opts.DefaultStrategy = BulkStrategyKind.StoredProcedureTvp;
    });

var app = builder.Build();

// Resolve engine
var bulk = app.Services.GetRequiredService<BulkEngine>();

// Sample data
var items = new List<SalesRecord>
{
    new() { Id = 1, SaleDate = DateTime.UtcNow, Amount = 100 },
    new() { Id = 2, SaleDate = DateTime.UtcNow, Amount = 200 }
};

// Insert
await bulk.InsertAsync(
    items,
    "dbo.Sales",
    "dbo.SalesType",
    "dbo.Sales_Insert");

// Partition Switch
await bulk.ReplacePartitionAsync(
    items,
    "dbo.FactSales",
    opts => opts
        .UseStagingTable("dbo.FactSales_Staging_7")
        .ForPartition(7));

app.Run();
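For context, ReplacePartitionAsync in the example above is a partition switch: new rows land in a staging table, which is then swapped into the target partition as a metadata-only operation. A hedged sketch of the SQL shape involved, assuming dbo.FactSales is partitioned, the staging table matches its schema, indexes, and filegroup, and dbo.FactSales_Old is a hypothetical switch-out table:

```sql
-- Sketch only; the engine's actual statements may differ.
-- The staging table needs a trusted CHECK constraint bounding its rows
-- to partition 7's range before the switch is allowed.

-- Evict the partition's current rows into a holding table.
ALTER TABLE dbo.FactSales
    SWITCH PARTITION 7 TO dbo.FactSales_Old;

-- Swap in the freshly bulk-loaded staging rows (metadata-only, near-instant).
ALTER TABLE dbo.FactSales_Staging_7
    SWITCH TO dbo.FactSales PARTITION 7;
```

Because both statements only update metadata, the replacement is effectively atomic from the reader's point of view when wrapped in a transaction.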
  4. Service Collection Extensions
public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddBulkEngine(
        this IServiceCollection services,
        string connectionString,
        Action<BulkOptions>? configure = null)
    {
        var options = new BulkOptions();
        configure?.Invoke(options);

        services.AddSingleton(options);

        // Factories
        services.AddSingleton<IBulkCopyFactory, SqlBulkCopyFactory>();
        services.AddSingleton<IDbConnectionFactory, SqlConnectionFactory>();

        // Register schema resolver
        services.AddSingleton<ISchemaResolver>(sp =>
        {
            var factory = sp.GetRequiredService<IDbConnectionFactory>();
            return new SchemaResolver(factory, connectionString);
        });

        // Strategy selector
        services.AddSingleton<BulkStrategySelector>();

        // Strategies
        services.AddSingleton<IBulkStrategy>(sp =>
            new StoredProcedureTvpStrategy(
                sp.GetRequiredService<IDbConnectionFactory>()));

        services.AddSingleton<IBulkStrategy>(sp =>
            new StagingTableStrategy(
                sp.GetRequiredService<BulkOptions>(),
                sp.GetRequiredService<IBulkCopyFactory>(),
                sp.GetRequiredService<IDbConnectionFactory>()));

        services.AddSingleton<IBulkStrategy>(sp =>
            new TruncateAndReloadStrategy(
                sp.GetRequiredService<BulkOptions>(),
                sp.GetRequiredService<IBulkCopyFactory>(),
                sp.GetRequiredService<IDbConnectionFactory>()));

        services.AddSingleton<IBulkStrategy>(sp =>
            new PartitionSwitchStrategy(
                sp.GetRequiredService<IBulkCopyFactory>(),
                sp.GetRequiredService<IDbConnectionFactory>()));

        // Strategy dictionary
        services.AddSingleton<IDictionary<BulkStrategyKind, IBulkStrategy>>(sp =>
        {
            var strategies = sp.GetServices<IBulkStrategy>();
            return strategies.ToDictionary(s => s.Kind, s => s);
        });

        // Pipeline
        services.AddSingleton<IBulkPipeline, BulkPipelineEngine>();

        // Register BulkEngine
        services.AddSingleton<BulkEngine>(sp =>
        {
            var factory = sp.GetRequiredService<IDbConnectionFactory>();
            var pipeline = sp.GetRequiredService<IBulkPipeline>();
            return new BulkEngine(connectionString, factory, pipeline);
        });


        return services;
    }
}
  5. TVP Type for SalesRecord
CREATE TYPE dbo.SalesType AS TABLE
(
    Id          INT            NOT NULL,
    SaleDate    DATETIME2(7)   NOT NULL,
    Amount      DECIMAL(18,2)  NOT NULL
);

✔ Must match your C# entity
✔ Must match your staging table
✔ Must NOT include identity or constraints
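The quickstart passes dbo.Sales_Insert to InsertAsync but never shows the procedure itself. A minimal sketch of what it might look like, assuming the dbo.SalesType TVP above and a dbo.Sales target table with matching columns (the parameter name @Rows is illustrative):

```sql
CREATE PROCEDURE dbo.Sales_Insert
    @Rows dbo.SalesType READONLY   -- TVPs must be passed READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- Single set-based insert: one round-trip, no per-row overhead.
    INSERT INTO dbo.Sales (Id, SaleDate, Amount)
    SELECT Id, SaleDate, Amount
    FROM @Rows;
END
```

Keeping the procedure to a single set-based INSERT is what makes the TVP strategy a one-round-trip path.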

  6. Auto‑Drop + Recreate Script
IF OBJECT_ID('dbo.SalesRecord_Staging', 'U') IS NOT NULL
    DROP TABLE dbo.SalesRecord_Staging;

CREATE TABLE dbo.SalesRecord_Staging
(
    Id          INT            NOT NULL,
    SaleDate    DATETIME2(7)   NOT NULL,
    Amount      DECIMAL(18,2)  NOT NULL
);

CREATE CLUSTERED INDEX IX_SalesRecord_Staging_Id
    ON dbo.SalesRecord_Staging (Id);

Performance Benchmarks

| Rows | Prakrishta (Stored Proc) | Prakrishta (Staging) | Raw SqlBulkCopy | EFCore.BulkExtensions | Result |
|---|---|---|---|---|---|
| 1,000 | 10.0 ms | 12.8 ms | 14.6 ms | 11.4 ms | Stored proc fastest; EFCore edges out staging |
| 10,000 | 37.3 ms | 48.2 ms | 49.26 ms | 87.4 ms | Prakrishta ~2× faster than EFCore |
| 50,000 | 188.0 ms | 195.0 ms | 203.2 ms | 395.0 ms | Prakrishta ~2× faster than EFCore |

Key Findings

  • EFCore.BulkExtensions performs well for small batches due to low setup overhead.
  • Prakrishta’s stored-proc strategy is the fastest overall, especially at large batch sizes.
  • Prakrishta’s staging-table strategy is also extremely fast and scales linearly.
  • At 10k–50k rows, Prakrishta is 2× faster than EFCore.BulkExtensions.
  • Staging-table strategy even outperforms raw SqlBulkCopy at large sizes.
  • Performance is linear, predictable, and optimized for high-volume ingestion.

Performance Chart (Markdown)

This chart visualizes the current benchmark results for 50,000 rows, the most meaningful scale for real‑world ETL and ingestion workloads.

Milliseconds (lower is better)

Bulk Insert Performance (50,000 rows)

Prakrishta (Stored Proc)   | ████████████████████████████ 188 ms
Prakrishta (Staging)       | ██████████████████████████████ 195 ms
Raw SqlBulkCopy            | ███████████████████████████████ 203 ms
EFCore.BulkExtensions      | █████████████████████████████████████████████ 395 ms

Bulk Insert Performance (10,000 rows)

Prakrishta (Stored Proc)   | ████████████████ 37.3 ms
Prakrishta (Staging)       | ████████████████████ 48.2 ms
Raw SqlBulkCopy            | ████████████████████ 49.2 ms
EFCore.BulkExtensions      | █████████████████████████████████ 87.4 ms

Bulk Insert Performance (1,000 rows)

Prakrishta (Stored Proc)   | ████████ 10.0 ms
EFCore.BulkExtensions      | ████████ 11.4 ms
Prakrishta (Staging)       | █████████ 12.8 ms
Raw SqlBulkCopy            | ██████████ 14.6 ms

Why Prakrishta Is Faster

Prakrishta.Data.Bulk achieves industry‑grade performance because it:

  • Uses pure ADO.NET
  • Avoids EF Core overhead
  • Eliminates reflection
  • Uses optimized TVP ingestion
  • Uses linear‑scaling staging tables
  • Minimizes SQL Server I/O and logging
  • Reduces round trips
  • Produces predictable, stable performance curves

This is why Prakrishta Data Bulk engine is:

  • Faster than EFCore.BulkExtensions
  • Faster than Raw SqlBulkCopy at scale
  • The fastest overall at 50K rows (stored‑proc strategy)

Choosing the Right Strategy

Different workloads benefit from different bulk‑loading strategies. Prakrishta.Data.Bulk gives you three optimized paths — each designed for a specific class of problems.

1. Stored Procedure Strategy (TVP‑based) — Best Overall for Inserts

Use when you want:

  • Maximum raw insert speed
  • Minimal SQL Server overhead
  • A single round‑trip to the database
  • No MERGE logic
  • No staging table

Ideal for:

  • High‑volume inserts
  • ETL ingestion
  • Logging pipelines
  • Append‑only tables
  • Scenarios where the target table has no complex constraints

Why choose it:

Fastest strategy at 1k, 10k, and 50k rows. Outperforms EFCore.BulkExtensions and even Raw SqlBulkCopy.

2. Staging Table Strategy — Best for Upserts, Updates & Deletes

Use when you need:

  • MERGE semantics
  • Update‑or‑insert behavior
  • Delete‑or‑insert behavior
  • Full control over matching keys
  • Idempotent ingestion

Ideal for:

  • Slowly changing dimensions (SCD)
  • Data warehouse loads
  • Sync jobs
  • Reconciliation pipelines
  • Any scenario requiring deterministic upsert logic

Why choose it:

Linear scaling, extremely stable, and 2× faster than EFCore.BulkExtensions at medium and large batch sizes.
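Conceptually, the staging strategy bulk-copies rows into the staging table and then applies them with a single set-based MERGE. A hedged sketch of the kind of statement involved, assuming the dbo.Sales target and dbo.SalesRecord_Staging staging table from the quickstart, with Id as the match key (the engine's generated SQL may differ):

```sql
-- Upsert from staging into the target in one set-based statement.
MERGE dbo.Sales AS target
USING dbo.SalesRecord_Staging AS source
    ON target.Id = source.Id
WHEN MATCHED THEN
    UPDATE SET target.SaleDate = source.SaleDate,
               target.Amount   = source.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, SaleDate, Amount)
    VALUES (source.Id, source.SaleDate, source.Amount);
```

A delete-or-insert variant would add a WHEN NOT MATCHED BY SOURCE THEN DELETE clause; the clustered index on the staging table's key column keeps the join cheap.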

3. Raw SqlBulkCopy Strategy — Baseline / Custom Scenarios

Use when you want:

  • Absolute minimal overhead
  • Full control over the SqlBulkCopy pipeline
  • Custom batching or streaming logic
  • No MERGE or stored proc logic

Ideal for:

  • Internal pipelines
  • Custom ETL frameworks
  • Scenarios where you want to build your own logic on top of SqlBulkCopy

Why choose it:

A solid baseline, though Prakrishta's own strategies outperform it at scale.

4. When to Choose Which Strategy

| Scenario | Best Strategy | Why |
|---|---|---|
| Pure inserts | Stored Proc | Fastest end‑to‑end path |
| Inserts + updates | Staging | MERGE logic built‑in |
| Inserts + deletes | Staging | MERGE handles delete conditions |
| Large batch ingestion | Stored Proc / Staging | Both scale linearly |
| Small batch inserts | Stored Proc | Lowest overhead |
| EF Core replacement | Stored Proc / Staging | 2× faster at scale |
| Custom pipelines | Raw SqlBulkCopy | Maximum control |


Attribute‑Based Configuration

The Bulk Engine supports strongly‑typed attributes that allow you to configure schema, table names, TVP names, stored procedures, and column mappings directly on your entity classes. This provides a clean, declarative alternative to fluent configuration and integrates seamlessly with automatic schema discovery. Attributes are optional — the engine continues to work with conventions and fluent overrides.

Why Use Attributes?

Attributes allow you to:

  • Keep configuration close to your entity model
  • Avoid repeating table/TVP/procedure names in multiple places
  • Override conventions without using fluent API
  • Disable automatic schema discovery when schema is explicitly defined
  • Customize column names or ignore properties
  • Mark key columns explicitly

They also follow a clear precedence model:

Precedence Order (Highest → Lowest)

  • Fluent API overrides
  • Attributes
  • Automatic schema discovery
  • Conventions (dbo.TableName, property name)

Property‑Level Attributes

Column Rename

[BulkColumn("CustomerName")]
public string Name { get; set; }

Maps the property to a different column name.

Ignore Property

[BulkIgnore]
public string TempValue { get; set; }

Ignored during:

  • Insert
  • Update
  • Delete
  • TVP generation
  • Partition switch staging

Explicit Key

[BulkKey]
public Guid CustomerId { get; set; }

Overrides the default "Id" convention.

How Attributes Interact with Fluent API

Attributes provide defaults, but fluent API always wins:

[BulkSchema("sales")]
[BulkTable("Customer")]
public class Customer { ... }

await bulk
    .For<Customer>()
    .ToTable("custom.Customers")   // overrides attribute
    .InsertAsync(items);

Final table name: custom.Customers

How Attributes Interact with Schema Discovery

If schema is not provided via:

  • .InSchema("...")
  • [BulkSchema("...")]
  • .ToTable("schema.Table")

Then the engine automatically discovers the schema from the database:

SELECT TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'Customer'

If multiple schemas contain the same table, the engine throws a clear error and instructs the user to specify a schema explicitly.

Example Entity Using All Attributes

[BulkSchema("sales")]
[BulkTable("Customer")]
[BulkTvp("CustomerType")]
[BulkInsertProcedure("Customer_Insert")]
[BulkUpdateProcedure("Customer_Update")]
[BulkDeleteProcedure("Customer_Delete")]
public class Customer
{
    [BulkKey]
    public int CustomerId { get; set; }

    [BulkColumn("CustomerName")]
    public string Name { get; set; }

    [BulkIgnore]
    public string TempValue { get; set; }
}

Usage:

await bulk.For<Customer>().InsertAsync(customers);

Everything resolves automatically:

  • Table → sales.Customer
  • TVP → sales.CustomerType
  • Insert SP → sales.Customer_Insert
  • Column map → { CustomerId, CustomerName }

License

MIT License — free for commercial and open‑source use.

Target framework compatibility

.NET net8.0 is compatible. net9.0 and net10.0 were computed as compatible, along with the platform‑specific variants (android, browser, ios, maccatalyst, macos, tvos, windows) of net8.0, net9.0, and net10.0.

NuGet packages

This package is not used by any NuGet packages.

GitHub repositories

This package is not used by any popular GitHub repositories.

| Version | Downloads | Last Updated |
|---|---|---|
| 1.0.1 | 79 | 2/24/2026 |
| 1.0.0 | 86 | 2/9/2026 |