HeroParser 1.6.2

dotnet add package HeroParser --version 1.6.2
                    
NuGet\Install-Package HeroParser -Version 1.6.2
                    
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="HeroParser" Version="1.6.2" />
                    
For projects that support PackageReference, copy this XML node into the project file to reference the package.
For projects that support Central Package Management (CPM), copy these XML nodes into the solution's Directory.Packages.props file and the project file to reference the package.

Directory.Packages.props
<PackageVersion Include="HeroParser" Version="1.6.2" />

Project file
<PackageReference Include="HeroParser" />
paket add HeroParser --version 1.6.2
                    
#r "nuget: HeroParser, 1.6.2"
                    
The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or source code of the script to reference the package.
#:package HeroParser@1.6.2
                    
The #:package directive can be used in C# file-based apps starting in .NET 10 preview 4. Copy this into a .cs file before any lines of code to reference the package.
#addin nuget:?package=HeroParser&version=1.6.2
                    
Install as a Cake Addin
#tool nuget:?package=HeroParser&version=1.6.2
                    
Install as a Cake Tool

HeroParser - A .NET High-Performance CSV & Fixed-Width Parser


High-Performance SIMD Parsing | Zero Allocations | AOT/Trimming Ready | Fixed-Width Support | Fluent APIs

🚀 Key Features

Reading

  • RFC 4180 Quote Handling: Supports quoted fields with escaped quotes (""), commas in quotes, per spec
  • Quote-Aware SIMD: Maintains SIMD performance even with quoted fields
  • Zero Allocations: Stack-only parsing with ArrayPool for column metadata
  • Lazy Evaluation: Columns parsed only when accessed
  • Configurable RFC vs Speed: Toggle quote parsing and opt-in newlines-in-quotes; defaults favor speed
  • Fluent Builder API: Configure readers with chainable methods (Csv.Read<T>())
  • LINQ-Style Extensions: Where(), Select(), First(), ToList(), GroupBy(), and more

Writing

  • High-Performance CSV Writer: 2-5x faster than Sep with 35-85% less memory allocation
  • SIMD-Accelerated: Uses AVX2/SSE2 for quote detection and field analysis
  • RFC 4180 Compliant: Proper quote escaping and field quoting
  • Fluent Builder API: Configure writers with chainable methods (Csv.Write<T>())
  • Multiple Output Targets: Write to strings, streams, or files

General

  • Async Streaming: True async I/O with IAsyncEnumerable<T> support for reading and writing
  • AOT/Trimming Support: Source generators for reflection-free binding ([CsvGenerateBinder])
  • Line Number Tracking: Both logical row numbers and physical source line numbers for error reporting
  • Progress Reporting: Track parsing progress for large files with callbacks
  • Custom Type Converters: Register converters for domain-specific types
  • Multi-Framework: .NET 8, 9, and 10 support
  • Zero Dependencies: No external packages for core library

🎯 Design Philosophy

Zero-Allocation, RFC-Compliant Design

  • Target Frameworks: .NET 8, 9, 10 (modern JIT optimizations)
  • Memory Safety: No unsafe keyword - uses safe Unsafe class and MemoryMarshal APIs for performance
  • Minimal API: Simple, focused API surface
  • Zero Dependencies: No external packages for core library
  • RFC 4180: Quote handling, escaped quotes, delimiters in quotes; optional newlines-in-quotes (default off), no header detection
  • SIMD First: Quote-aware SIMD for AVX-512, AVX2, NEON
  • Allocation Notes: Char-span parsing remains allocation-free; UTF-8 parsing stays zero-allocation for invariant primitives. Culture/format-based parsing on UTF-8 columns decodes to UTF-16 and allocates by design.

API Surface

// Primary API - parse from string with options
var reader = Csv.ReadFromText(csvData);

// Custom options (delimiter, quote character, max columns)
var options = new CsvReadOptions
{
    Delimiter = ',',  // Default
    Quote = '"',      // Default - RFC 4180 compliant
    MaxColumnCount = 100, // Default
    AllowNewlinesInsideQuotes = false, // Enable for full RFC newlines-in-quotes support (slower)
    EnableQuotedFields = true         // Disable for maximum speed when your data has no quotes
};
var reader = Csv.ReadFromText(csvData, options);

📊 Usage Examples

Basic Iteration (Zero Allocations)

foreach (var row in Csv.ReadFromText(csv))
{
    // Access columns by index - no allocations
    var id = row[0].Parse<int>();
    var name = row[1].CharSpan; // ReadOnlySpan<char>
    var price = row[2].Parse<decimal>();
}

Files and Streams

using var fileReader = Csv.ReadFromFile("data.csv"); // streams file without loading it fully

using var stream = File.OpenRead("data.csv");
using var streamReader = Csv.ReadFromStream(stream); // leaveOpen defaults to true

Both overloads stream with pooled buffers and do not load the entire file/stream; dispose the reader (and the stream if you own it) to release resources.

Async I/O
var source = await Csv.ReadFromFileAsync("data.csv");
using var reader = source.CreateReader();

Async overloads buffer the full payload (required because readers are ref structs); use them when you need non-blocking file/stream reads.

Streaming large files (low memory)
using var reader = Csv.ReadFromStream(File.OpenRead("data.csv"));
while (reader.MoveNext())
{
    var row = reader.Current;
    var id = row[0].Parse<int>();
}

Streaming keeps a pooled buffer and does not load the entire file into memory; rows remain valid until the next MoveNext call.

Async streaming (without buffering entire file)
await using var reader = Csv.CreateAsyncStreamReader(File.OpenRead("data.csv"));
while (await reader.MoveNextAsync())
{
    var row = reader.Current;
    var id = row[0].Parse<int>();
}

Async streaming uses pooled buffers and async I/O; each row stays valid until the next MoveNextAsync invocation.

Fluent Reader Builder

Use the fluent builder API for a clean, chainable configuration:

// Read CSV records with fluent configuration
var records = Csv.Read<Person>()
    .WithDelimiter(';')
    .TrimFields()
    .AllowMissingColumns()
    .SkipRows(2)  // Skip metadata rows
    .FromText(csvData)
    .ToList();

// Read from file with async streaming
await foreach (var person in Csv.Read<Person>()
    .WithDelimiter(',')
    .FromFileAsync("data.csv"))
{
    Console.WriteLine($"{person.Name}: {person.Age}");
}

The builder provides a symmetric API to CsvWriterBuilder<T> for reading records.

Manual Row-by-Row Reading (Fluent)

Use the non-generic builder for low-level row-by-row parsing:

// Manual row-by-row reading with fluent configuration
using var reader = Csv.Read()
    .WithDelimiter(';')
    .TrimFields()
    .WithCommentCharacter('#')
    .FromText(csvData);

foreach (var row in reader)
{
    var id = row[0].Parse<int>();
    var name = row[1].ToString();
}

// Stream from file with custom options
using var fileReader = Csv.Read()
    .WithMaxFieldSize(10_000)
    .AllowNewlinesInQuotes()
    .FromFile("data.csv");

LINQ-Style Extension Methods

CSV record readers provide familiar LINQ-style operations for working with records:

// Materialize all records
var allPeople = Csv.Read<Person>().FromText(csv).ToList();
var peopleArray = Csv.Read<Person>().FromText(csv).ToArray();

// Query operations
var adults = Csv.Read<Person>()
    .FromText(csv)
    .Where(p => p.Age >= 18);

var names = Csv.Read<Person>()
    .FromText(csv)
    .Select(p => p.Name);

// First/Single operations
var first = Csv.Read<Person>().FromText(csv).First();
var firstAdult = Csv.Read<Person>().FromText(csv).First(p => p.Age >= 18);
var single = Csv.Read<Person>().FromText(csv).SingleOrDefault();

// Aggregation
var count = Csv.Read<Person>().FromText(csv).Count();
var adultCount = Csv.Read<Person>().FromText(csv).Count(p => p.Age >= 18);
var hasRecords = Csv.Read<Person>().FromText(csv).Any();
var allAdults = Csv.Read<Person>().FromText(csv).All(p => p.Age >= 18);

// Pagination
var page = Csv.Read<Person>().FromText(csv).Skip(10).Take(5);

// Grouping and indexing
var byCity = Csv.Read<Person>()
    .FromText(csv)
    .GroupBy(p => p.City);

var byId = Csv.Read<Person>()
    .FromText(csv)
    .ToDictionary(p => p.Id);

// Iteration
Csv.Read<Person>()
    .FromText(csv)
    .ForEach(p => Console.WriteLine(p.Name));

Note: Since CSV readers are ref structs, they cannot implement IEnumerable<T>. These extension methods consume the reader and return materialized results.
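
The practical consequence is that a single reader cannot be enumerated twice. A minimal sketch, using the Person type from the examples above: materialize once with ToList(), then compose ordinary LINQ over the in-memory list.

var people = Csv.Read<Person>().FromText(csv).ToList();   // the reader is consumed here

// From this point on it is plain LINQ-to-Objects over List<Person>
var adultCount   = people.Count(p => p.Age >= 18);
var adultsByCity = people
    .Where(p => p.Age >= 18)
    .GroupBy(p => p.City)
    .ToDictionary(g => g.Key, g => g.Count());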

Multi-Schema CSV Parsing

Parse CSV files where different rows map to different record types based on a discriminator column. This is common in banking/financial file formats (NACHA, BAI, EDI) with header/detail/trailer patterns:

// Define record types
[CsvGenerateBinder]
public class HeaderRecord
{
    [CsvColumn(Name = "Type")]
    public string Type { get; set; } = "";

    [CsvColumn(Name = "Date")]
    public DateTime Date { get; set; }
}

[CsvGenerateBinder]
public class DetailRecord
{
    [CsvColumn(Name = "Type")]
    public string Type { get; set; } = "";

    [CsvColumn(Name = "Id")]
    public int Id { get; set; }

    [CsvColumn(Name = "Amount")]
    public decimal Amount { get; set; }
}

[CsvGenerateBinder]
public class TrailerRecord
{
    [CsvColumn(Name = "Type")]
    public string Type { get; set; } = "";

    [CsvColumn(Name = "Count")]
    public int Count { get; set; }
}

// Parse with discriminator-based type routing
var csv = """
Type,Id,Amount,Date,Count
H,0,0.00,2024-01-15,0
D,1,100.50,,0
D,2,200.75,,0
T,0,301.25,,2
""";

foreach (var record in Csv.Read()
    .WithMultiSchema()
    .WithDiscriminator("Type")           // By column name
    .MapRecord<HeaderRecord>("H")
    .MapRecord<DetailRecord>("D")
    .MapRecord<TrailerRecord>("T")
    .AllowMissingColumns()
    .FromText(csv))
{
    switch (record)
    {
        case HeaderRecord h:
            Console.WriteLine($"Header: {h.Date}");
            break;
        case DetailRecord d:
            Console.WriteLine($"Detail: {d.Id} = {d.Amount:C}");
            break;
        case TrailerRecord t:
            Console.WriteLine($"Trailer: {t.Count} records");
            break;
    }
}

Discriminator Options
// By column index (0-based)
.WithDiscriminator(columnIndex: 0)

// By column name (resolved from header)
.WithDiscriminator("RecordType")

// Case-insensitive discriminator matching (default)
.CaseSensitiveDiscriminator(false)

Handling Unmatched Rows
// Skip rows that don't match any registered type
.OnUnmatchedRow(UnmatchedRowBehavior.Skip)

// Throw exception for unmatched rows (default)
.OnUnmatchedRow(UnmatchedRowBehavior.Throw)

// Use custom factory for unmatched rows
.MapRecord((discriminator, columns, rowNum) => new UnknownRecord
{
    Type = discriminator,
    RawData = string.Join(",", columns)
})

Streaming and Async Support
// From file
foreach (var record in Csv.Read()
    .WithMultiSchema()
    .WithDiscriminator("Type")
    .MapRecord<HeaderRecord>("H")
    .MapRecord<DetailRecord>("D")
    .FromFile("transactions.csv"))
{
    // Process records
}

// Async streaming
await foreach (var record in Csv.Read()
    .WithMultiSchema()
    .WithDiscriminator("Type")
    .MapRecord<HeaderRecord>("H")
    .MapRecord<DetailRecord>("D")
    .FromFileAsync("transactions.csv"))
{
    // Process records asynchronously
}

Source-Generated Dispatch (Optimal Performance)

For maximum performance, use source-generated dispatchers instead of runtime multi-schema. The generator creates optimized switch-based dispatch that compiles to jump tables:

[CsvGenerateDispatcher(DiscriminatorIndex = 0)]
[CsvSchemaMapping("H", typeof(HeaderRecord))]
[CsvSchemaMapping("D", typeof(DetailRecord))]
[CsvSchemaMapping("T", typeof(TrailerRecord))]
public partial class BankingDispatcher { }

// Usage:
using var reader = Csv.Read().FromText(csv);
if (reader.MoveNext()) { } // Skip header
int rowNumber = 1;
while (reader.MoveNext())
{
    rowNumber++;
    var record = BankingDispatcher.Dispatch(reader.Current, rowNumber);
    switch (record)
    {
        case HeaderRecord h: /* ... */ break;
        case DetailRecord d: /* ... */ break;
        case TrailerRecord t: /* ... */ break;
    }
}

Why source-generated is faster:

  • Switch expression compiles to jump table (no dictionary lookup)
  • Direct binder invocation (no interface dispatch)
  • No boxing/unboxing overhead
  • ~2.85x faster than runtime multi-schema dispatch

Note: All mapped types must have [CsvGenerateBinder] attribute for AOT compatibility.
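
As a rough, generic illustration of the two dispatch styles (plain C# for explanation only, not the generator's actual output; it reuses the record types from the example above and a simplified string[] row):

// Runtime multi-schema (conceptual): resolve the binder through a dictionary,
// then invoke it through a delegate - two indirections per row.
var registry = new Dictionary<string, Func<string[], object>>
{
    ["H"] = cols => new HeaderRecord { Type = cols[0], Date = DateTime.Parse(cols[3]) },
    ["D"] = cols => new DetailRecord { Type = cols[0], Id = int.Parse(cols[1]), Amount = decimal.Parse(cols[2]) },
};
object DispatchRuntime(string[] cols) => registry[cols[0]](cols);

// Source-generated style (conceptual): a switch over the discriminator that the JIT
// can lower to a jump table, calling each strongly typed binder directly.
object DispatchGenerated(string[] cols) => cols[0] switch
{
    "H" => new HeaderRecord { Type = cols[0], Date = DateTime.Parse(cols[3]) },
    "D" => new DetailRecord { Type = cols[0], Id = int.Parse(cols[1]), Amount = decimal.Parse(cols[2]) },
    _   => throw new InvalidOperationException($"Unknown record type '{cols[0]}'"),
};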

Advanced Reader Options

Progress Reporting

Track parsing progress for large files:

var progress = new Progress<CsvProgress>(p =>
{
    var pct = p.TotalBytes > 0 ? (p.BytesProcessed * 100.0 / p.TotalBytes) : 0;
    Console.WriteLine($"Processed {p.RowsProcessed} rows ({pct:F1}%)");
});

var records = Csv.Read<Person>()
    .WithProgress(progress, intervalRows: 1000)
    .FromFile("large-file.csv")
    .ToList();

Error Handling

Handle deserialization errors gracefully:

var records = Csv.Read<Person>()
    .OnError(ctx =>
    {
        Console.WriteLine($"Error at row {ctx.Row}, column '{ctx.MemberName}': {ctx.Exception?.Message}");
        return DeserializeErrorAction.Skip;  // Or UseDefault, Throw
    })
    .FromText(csv)
    .ToList();

Header Validation

Enforce required headers and detect duplicates:

// Require specific headers
var records = Csv.Read<Person>()
    .RequireHeaders("Name", "Email", "Age")
    .FromText(csv)
    .ToList();

// Detect duplicate headers
var records = Csv.Read<Person>()
    .DetectDuplicateHeaders()
    .FromText(csv)
    .ToList();

// Custom header validation
var records = Csv.Read<Person>()
    .ValidateHeaders(headers =>
    {
        if (!headers.Contains("Id"))
            throw new CsvException(CsvErrorCode.InvalidHeader, "Missing required 'Id' column");
    })
    .FromText(csv)
    .ToList();

Custom Type Converters

Register custom converters for domain-specific types:

var records = Csv.Read<Order>()
    .RegisterConverter<Money>((column, culture) =>
    {
        var text = column.ToString();
        if (Money.TryParse(text, out var money))
            return money;
        throw new FormatException($"Invalid money format: {text}");
    })
    .FromText(csv)
    .ToList();

✍️ CSV Writing

HeroParser includes a high-performance CSV writer that is 2-5x faster than Sep with significantly lower memory allocations.

Basic Writing

// Write records to a string
var records = new[]
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 }
};

string csv = Csv.WriteToText(records);
// Output:
// Name,Age
// Alice,30
// Bob,25

Writing to Files and Streams

// Write to a file
Csv.WriteToFile("output.csv", records);

// Write to a stream
using var stream = File.Create("output.csv");
Csv.WriteToStream(stream, records);

// Async writing (optimized for in-memory collections)
await Csv.WriteToFileAsync("output.csv", records);

// Async writing with IAsyncEnumerable (for streaming data sources)
await Csv.WriteToFileAsync("output.csv", GetRecordsAsync());

High-Performance Async Writing

For scenarios requiring true async I/O, use the CsvAsyncStreamWriter:

// Low-level async writer with sync fast paths
await using var writer = Csv.CreateAsyncStreamWriter(stream);
await writer.WriteRowAsync(new[] { "Alice", "30", "NYC" });
await writer.WriteRowAsync(new[] { "Bob", "25", "LA" });
await writer.FlushAsync();

// Builder API with async streaming (16-43% faster than sync at scale)
await Csv.Write<Person>()
    .WithDelimiter(',')
    .WithHeader()
    .ToStreamAsyncStreaming(stream, records);  // IEnumerable overload

The async writer uses sync fast paths when data fits in the buffer, avoiding async overhead for small writes while supporting true non-blocking I/O for large datasets.
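
A minimal sketch of that pattern in general-purpose C# (illustrative only, not HeroParser internals): the write method completes synchronously while data fits in an in-memory buffer, and only awaits the underlying writer when a flush is required.

using System;
using System.IO;
using System.Threading.Tasks;

sealed class BufferedAsyncWriter : IAsyncDisposable
{
    private readonly TextWriter _inner;
    private readonly char[] _buffer = new char[16 * 1024];
    private int _position;

    public BufferedAsyncWriter(TextWriter inner) => _inner = inner;

    public ValueTask WriteAsync(ReadOnlyMemory<char> text)
    {
        if (text.Length <= _buffer.Length - _position)
        {
            text.Span.CopyTo(_buffer.AsSpan(_position)); // sync fast path: no await, no state machine
            _position += text.Length;
            return ValueTask.CompletedTask;
        }
        return FlushThenWriteAsync(text);                // slow path: real async I/O
    }

    private async ValueTask FlushThenWriteAsync(ReadOnlyMemory<char> text)
    {
        await FlushAsync().ConfigureAwait(false);
        await _inner.WriteAsync(text).ConfigureAwait(false);
    }

    public async ValueTask FlushAsync()
    {
        if (_position > 0)
        {
            await _inner.WriteAsync(_buffer.AsMemory(0, _position)).ConfigureAwait(false);
            _position = 0;
        }
    }

    public async ValueTask DisposeAsync() => await FlushAsync().ConfigureAwait(false);
}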

Writer Options

var options = new CsvWriteOptions
{
    Delimiter = ',',           // Field delimiter (default: comma)
    Quote = '"',               // Quote character (default: double quote)
    NewLine = "\r\n",          // Line ending (default: CRLF per RFC 4180)
    WriteHeader = true,        // Include header row (default: true)
    QuoteStyle = QuoteStyle.WhenNeeded,  // Quote only when necessary
    NullValue = "",            // String to write for null values
    Culture = CultureInfo.InvariantCulture,
    DateTimeFormat = "O",      // ISO 8601 format for dates
    NumberFormat = "G"         // General format for numbers
};

string csv = Csv.WriteToText(records, options);

Fluent Writer Builder

// Write records with fluent configuration
var csv = Csv.Write<Person>()
    .WithDelimiter(';')
    .AlwaysQuote()
    .WithDateTimeFormat("yyyy-MM-dd")
    .WithHeader()
    .ToText(records);

// Write to file with async streaming
await Csv.Write<Person>()
    .WithDelimiter(',')
    .WithoutHeader()
    .ToFileAsync("output.csv", recordsAsync);

The builder provides a symmetric API to CsvReaderBuilder<T> for writing records.

Manual Row-by-Row Writing (Fluent)

Use the non-generic builder for low-level row-by-row writing:

// Manual row-by-row writing with fluent configuration
using var writer = Csv.Write()
    .WithDelimiter(';')
    .AlwaysQuote()
    .WithDateTimeFormat("yyyy-MM-dd")
    .CreateWriter(Console.Out);

writer.WriteField("Name");
writer.WriteField("Age");
writer.EndRow();

writer.WriteField("Alice");
writer.WriteField(30);
writer.EndRow();

writer.Flush();

// Write to file with custom options
using var fileWriter = Csv.Write()
    .WithNewLine("\n")
    .WithCulture("de-DE")
    .CreateFileWriter("output.csv");

Low-Level Row Writing

using var writer = Csv.CreateWriter(Console.Out);

// Write header
writer.WriteField("Name");
writer.WriteField("Age");
writer.EndRow();

// Write data rows
writer.WriteField("Alice");
writer.WriteField(30);
writer.EndRow();

writer.Flush();

Error Handling

var options = new CsvWriteOptions
{
    OnSerializeError = ctx =>
    {
        Console.WriteLine($"Error at row {ctx.Row}, column '{ctx.MemberName}': {ctx.Exception?.Message}");
        return SerializeErrorAction.WriteNull;  // Or SkipRow, Throw
    }
};

Benchmarks

# Run all benchmarks
dotnet run --project benchmarks/HeroParser.Benchmarks -c Release -- --all

# Reading benchmarks
dotnet run --project benchmarks/HeroParser.Benchmarks -c Release -- --throughput
dotnet run --project benchmarks/HeroParser.Benchmarks -c Release -- --streaming

# Writing benchmarks
dotnet run --project benchmarks/HeroParser.Benchmarks -c Release -- --writer
dotnet run --project benchmarks/HeroParser.Benchmarks -c Release -- --sync-writer
dotnet run --project benchmarks/HeroParser.Benchmarks -c Release -- --async-writer

Reading Performance

HeroParser uses CLMUL-based branchless quote masking (PCLMULQDQ instruction) for efficient quote-aware SIMD parsing. Results on AMD Ryzen AI 9 HX PRO 370, .NET 10:

Rows    Columns   Quotes   Time         Throughput
10k     25        No       552 μs       ~6.1 GB/s
10k     25        Yes      1,344 μs     ~5.1 GB/s
10k     100       No       1,451 μs     ~4.5 GB/s
10k     100       Yes      3,617 μs     ~1.9 GB/s
100k    100       No       14,568 μs    ~4.5 GB/s
100k    100       Yes      35,396 μs    ~1.9 GB/s

Key characteristics:

  • Fixed 4 KB allocation regardless of column count or file size
  • Scales well with wide CSVs - performance remains consistent with 50-100+ columns
  • UTF-8 optimized - use byte[] or ReadOnlySpan<byte> APIs for best performance
  • Quote-aware SIMD - maintains high throughput even with quoted fields
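
The CLMUL-based quote masking mentioned above is the standard prefix-XOR trick used by simdjson-style parsers. A minimal sketch of the idea (illustrative only, not HeroParser's implementation):

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class QuoteMask
{
    // quoteBits has one bit per input byte (1 = that byte is a quote character).
    // Carry-less multiplying by all-ones yields the prefix XOR of the mask: bit i is set
    // exactly when an odd number of quotes occur at or before position i, i.e. the byte
    // is inside a quoted region - branchless, 64 bytes at a time.
    // Guard with Pclmulqdq.IsSupported before calling this in real code.
    public static ulong InsideQuotes(ulong quoteBits, ref ulong carry)
    {
        Vector128<ulong> product = Pclmulqdq.CarrylessMultiply(
            Vector128.Create(quoteBits, 0UL),
            Vector128.Create(ulong.MaxValue, 0UL),
            0x00);

        ulong inQuotes = product.ToScalar() ^ carry;  // fold in state from the previous block
        carry = (ulong)((long)inQuotes >> 63);        // all-ones if a quote is still open
        return inQuotes;
    }
}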

Writing Performance

HeroParser's CSV writer is optimized for high throughput with minimal allocations:

Scenario         Throughput      Memory
Sync Writing     ~2-3 GB/s       35-85% less than alternatives
Async Writing    ~1.5-2 GB/s     Pooled buffers, minimal GC

Key characteristics:

  • SIMD-accelerated quote detection and field analysis
  • RFC 4180 compliant proper quote escaping
  • Sync fast paths in async writer avoid overhead for small writes

Quote Handling (RFC 4180)

var csv = "field1,\"field2\",\"field,3\"\n" +
          "aaa,\"b,bb\",ccc\n" +
          "zzz,\"y\"\"yy\",xxx";  // Escaped quote

foreach (var row in Csv.ReadFromText(csv))
{
    // Access raw value (includes quotes)
    var raw = row[1].ToString(); // "b,bb"

    // Remove surrounding quotes and unescape
    var unquoted = row[1].UnquoteToString(); // b,bb

    // Zero-allocation unquote (returns span)
    var span = row[1].Unquote(); // ReadOnlySpan<char>
}

Type Parsing

foreach (var row in Csv.ReadFromText(csv))
{
    // Generic parsing (ISpanParsable<T>)
    var value = row[0].Parse<int>();

    // Optimized type-specific methods
    if (row[1].TryParseDouble(out double d)) { }
    if (row[2].TryParseDateTime(out DateTime dt)) { }
    if (row[3].TryParseBoolean(out bool b)) { }

    // Additional type parsing
    if (row[4].TryParseGuid(out Guid id)) { }
    if (row[5].TryParseEnum<DayOfWeek>(out var day)) { }  // Case-insensitive
    if (row[6].TryParseTimeZoneInfo(out TimeZoneInfo tz)) { }
}

Lazy Evaluation

// Columns are NOT parsed until first access
foreach (var row in Csv.ReadFromText(csv))
{
    // Skip rows without parsing columns
    if (ShouldSkip(row))
        continue;

    // Only parse columns when accessed
    var value = row[0].Parse<int>();  // First access triggers parsing
}

Comment Lines

Skip comment lines in CSV files:

var options = new CsvReadOptions
{
    CommentCharacter = '#'  // Lines starting with # are ignored
};

var csv = @"# This is a comment
Name,Age
Alice,30
# Another comment
Bob,25";

foreach (var row in Csv.ReadFromText(csv, options))
{
    // Only data rows are processed
}

Trimming Whitespace

Remove leading and trailing whitespace from unquoted fields:

var options = new CsvReadOptions
{
    TrimFields = true  // Trim whitespace from unquoted fields
};

var csv = "  Name  ,  Age  \nAlice,  30  ";
foreach (var row in Csv.ReadFromText(csv, options))
{
    var name = row[0].ToString();  // "Name" (trimmed)
    var age = row[1].ToString();   // "30" (trimmed)
}

Null Value Handling

Treat specific string values as null during record parsing:

var recordOptions = new CsvRecordOptions
{
    NullValues = new[] { "NULL", "N/A", "NA", "" }
};

var csv = "Name,Value\nAlice,100\nBob,NULL\nCharlie,N/A";
foreach (var record in Csv.ParseRecords<MyRecord>(csv, recordOptions))
{
    // record.Value will be null when the field contains "NULL" or "N/A"
}

Security: Field Length Limits

Protect against DoS attacks with oversized fields:

var options = new CsvReadOptions
{
    MaxFieldSize = 10_000  // Throw exception if any field exceeds 10KB
};

// This will throw CsvException if a field is too large
var reader = Csv.ReadFromText(csv, options);

Skip Metadata Rows

Skip header rows or metadata before parsing:

var recordOptions = new CsvRecordOptions
{
    SkipRows = 2,  // Skip first 2 rows (e.g., metadata)
    HasHeaderRow = true  // The 3rd row is the header
};

var csv = @"File Version: 1.0
Generated: 2024-01-01
Name,Age
Alice,30
Bob,25";

foreach (var record in Csv.ParseRecords<MyRecord>(csv, recordOptions))
{
    // First 2 rows are skipped, 3rd row used as header
}

Storing Rows Safely

Rows are ref structs and cannot escape their scope. Use Clone() or ToImmutable() to store them:

var storedRows = new List<CsvCharSpanRow>();

foreach (var row in Csv.ReadFromText(csv))
{
    // ❌ WRONG: Cannot store ref struct directly
    // storedRows.Add(row);

    // ✅ CORRECT: Clone creates an owned copy
    storedRows.Add(row.Clone());
}

// Rows can now be safely accessed after enumeration
foreach (var row in storedRows)
{
    var value = row[0].ToString();
}

Line Number Tracking

Track row positions and source line numbers for error reporting:

foreach (var row in Csv.ReadFromText(csv))
{
    try
    {
        var id = row[0].Parse<int>();
    }
    catch (FormatException)
    {
        // LineNumber: 1-based logical row position (ordinal)
        // SourceLineNumber: 1-based physical line in the file (handles multi-line quoted fields)
        Console.WriteLine($"Invalid data at row {row.LineNumber} (source line {row.SourceLineNumber})");
    }
}

This distinction is important when CSV files contain multi-line quoted fields - LineNumber gives you the row index while SourceLineNumber tells you the exact line in the source file where the row starts.

⚠️ Important: Resource Management

HeroParser readers use ArrayPool buffers and MUST be disposed to prevent memory leaks.

// ✅ RECOMMENDED: Use 'using' statement
using (var reader = Csv.ReadFromText(csv))
{
    foreach (var row in reader)
    {
        var value = row[0].ToString();
    }
} // ArrayPool buffers automatically returned

// ✅ ALSO WORKS: foreach automatically disposes
foreach (var row in Csv.ReadFromText(csv))
{
    var value = row[0].ToString();
} // Disposed after foreach completes

// ❌ AVOID: Manual iteration without disposal
var reader = Csv.ReadFromText(csv);
while (reader.MoveNext())
{
    // ...
}
// MEMORY LEAK! ArrayPool buffers not returned

// ✅ FIX: Manually dispose if not using foreach
var reader = Csv.ReadFromText(csv);
try
{
    while (reader.MoveNext()) { /* ... */ }
}
finally
{
    reader.Dispose(); // Always dispose!
}

📁 Fixed-Width File Parsing

HeroParser includes comprehensive support for fixed-width (fixed-length) file parsing and writing, commonly used in legacy systems, mainframe exports, and financial data interchange.

Basic Reading

// Define record type with column mappings
[FixedWidthGenerateBinder]
public class Employee
{
    [FixedWidthColumn(Start = 0, Length = 10)]
    public string Id { get; set; } = "";

    [FixedWidthColumn(Start = 10, Length = 30)]
    public string Name { get; set; } = "";

    [FixedWidthColumn(Start = 40, Length = 10, Alignment = FieldAlignment.Right, PadChar = '0')]
    public decimal Salary { get; set; }
}

// Read records with fluent builder
foreach (var emp in FixedWidth.Read<Employee>().FromFile("employees.dat"))
{
    Console.WriteLine($"{emp.Name}: {emp.Salary:C}");
}

Reading from Files and Streams

// Read from string
var records = FixedWidth.Read<Employee>().FromText(data).ToList();

// Read from file
var records = FixedWidth.Read<Employee>().FromFile("data.dat").ToList();

// Read from stream
var records = FixedWidth.Read<Employee>().FromStream(stream).ToList();

// Async file reading
await foreach (var emp in FixedWidth.Read<Employee>().FromFileAsync("data.dat"))
{
    Console.WriteLine(emp.Name);
}

Manual Row-by-Row Reading

// Configure and read manually without binding to a type
foreach (var row in FixedWidth.Read()
    .WithRecordLength(80)
    .WithDefaultPadChar(' ')
    .FromFile("legacy.dat"))
{
    var id = row.GetField(0, 10).ToString();
    var name = row.GetField(10, 30).ToString();
    Console.WriteLine($"{id}: {name}");
}

Field Alignment

Fixed-width fields support four alignment modes that control how padding is trimmed:

public class Transaction
{
    // Left-aligned: "John      " -> "John" (trims trailing spaces)
    [FixedWidthColumn(Start = 0, Length = 10, Alignment = FieldAlignment.Left)]
    public string Name { get; set; } = "";

    // Right-aligned: "000012345" -> "12345" (trims leading zeros)
    [FixedWidthColumn(Start = 10, Length = 10, Alignment = FieldAlignment.Right, PadChar = '0')]
    public int Amount { get; set; }

    // Center-aligned: "  Data  " -> "Data" (trims both sides)
    [FixedWidthColumn(Start = 20, Length = 10, Alignment = FieldAlignment.Center)]
    public string Code { get; set; } = "";

    // None: No trimming, raw value preserved
    [FixedWidthColumn(Start = 30, Length = 10, Alignment = FieldAlignment.None)]
    public string RawField { get; set; } = "";
}

Alternative Field Bound Syntax: End Property

You can specify field bounds using either Start/Length or Start/End:

public class Record
{
    // Using Length: field from position 0, 10 characters long
    [FixedWidthColumn(Start = 0, Length = 10)]
    public string Id { get; set; } = "";

    // Using End: field from position 10 to 30 (exclusive), same as Length = 20
    [FixedWidthColumn(Start = 10, End = 30)]
    public string Name { get; set; } = "";

    // Using End with other options
    [FixedWidthColumn(Start = 30, End = 40, Alignment = FieldAlignment.Right, PadChar = '0')]
    public decimal Amount { get; set; }
}

The End property specifies the exclusive ending position of the field. When both Length and End are specified, Length takes precedence.

Handling Missing Columns

When parsing files where trailing fields may be omitted or rows vary in length, use AllowMissingColumns():

// Handle short rows gracefully - missing fields return empty values
var records = FixedWidth.Read<Employee>()
    .AllowMissingColumns()
    .FromFile("variable-length.dat")
    .ToList();

// By default, accessing fields beyond row length throws FixedWidthException
// Use AllowMissingColumns() when:
// - Trailing fields are optional
// - Records may have variable lengths
// - Legacy files have inconsistent formatting

Date/Time Format Strings

public class Record
{
    // Parse date with exact format
    [FixedWidthColumn(Start = 0, Length = 8, Format = "yyyyMMdd")]
    public DateTime TransactionDate { get; set; }

    // Parse time with exact format
    [FixedWidthColumn(Start = 8, Length = 6, Format = "HHmmss")]
    public TimeOnly TransactionTime { get; set; }
}

Fluent Builder Options

var records = FixedWidth.Read<Employee>()
    .WithDefaultPadChar(' ')           // Default padding character
    .WithDefaultAlignment(FieldAlignment.Left)  // Default field alignment
    .WithRecordLength(80)              // Fixed record length (vs line-based)
    .SkipRows(2)                       // Skip header rows
    .WithCommentCharacter('#')         // Skip comment lines
    .WithMaxRecords(10_000)            // Limit records (DoS protection)
    .WithMaxInputSize(50 * 1024 * 1024) // 50 MB max file size
    .WithCulture("de-DE")              // Culture for parsing
    .WithNullValues("NULL", "N/A")     // Values treated as null
    .TrackLineNumbers()                // Enable line number tracking
    .OnError((ctx, ex) =>              // Error handling
    {
        Console.WriteLine($"Error at record {ctx.RecordNumber}: {ex.Message}");
        return FixedWidthDeserializeErrorAction.SkipRecord;
    })
    .FromFile("data.dat")
    .ToList();

Validation Attributes

using HeroParser.FixedWidths.Validation;

public class ValidatedRecord
{
    [FixedWidthColumn(Start = 0, Length = 10)]
    [FixedWidthRequired]  // Field cannot be empty/whitespace
    public string Id { get; set; } = "";

    [FixedWidthColumn(Start = 10, Length = 20)]
    [FixedWidthStringLength(MinLength = 2, MaxLength = 20)]
    public string Name { get; set; } = "";

    [FixedWidthColumn(Start = 30, Length = 10)]
    [FixedWidthRange(Minimum = 0, Maximum = 1000000)]
    public decimal Amount { get; set; }

    [FixedWidthColumn(Start = 40, Length = 15)]
    [FixedWidthRegex(@"^\d{3}-\d{3}-\d{4}$", ErrorMessage = "Invalid phone format")]
    public string Phone { get; set; } = "";
}

Writing Fixed-Width Data

// Write records to string
var text = FixedWidth.WriteToText(employees);

// Write to file
FixedWidth.WriteToFile("output.dat", employees);

// Write to stream
FixedWidth.WriteToStream(stream, employees);

// Async writing
await FixedWidth.WriteToFileAsync("output.dat", employees);

// With options
await FixedWidth.WriteToFileAsync("output.dat", employees, new FixedWidthWriteOptions
{
    NewLine = "\r\n",
    DefaultPadChar = ' '
});

Fluent Writer Builder

// Write with fluent configuration
var text = FixedWidth.Write<Employee>()
    .WithPadChar(' ')
    .AlignLeft()
    .ToText(employees);

// Write to file
FixedWidth.Write<Employee>()
    .WithNewLine("\r\n")
    .ToFile("output.dat", employees);

Manual Row-by-Row Writing

using var writer = FixedWidth.Write()
    .WithPadChar(' ')
    .CreateFileWriter("output.dat");

// Write header
writer.WriteField("ID", 10);
writer.WriteField("NAME", 30);
writer.WriteField("AMOUNT", 10, FieldAlignment.Right);
writer.EndRow();

// Write data
writer.WriteField("001", 10);
writer.WriteField("Alice", 30);
writer.WriteField("12345", 10, FieldAlignment.Right, '0');
writer.EndRow();

writer.Flush();

Low-Level Writer Creation

// Create writer from TextWriter
using var writer = FixedWidth.CreateWriter(Console.Out);

// Create writer from Stream
using var stream = File.Create("output.dat");
using var streamWriter = FixedWidth.CreateStreamWriter(stream);

Async Row-by-Row Writing

For scenarios requiring true async I/O, use the FixedWidthAsyncStreamWriter:

// Low-level async writer with sync fast paths
await using var writer = FixedWidth.CreateAsyncStreamWriter(stream);
await writer.WriteFieldAsync("Alice", 20);
await writer.WriteFieldAsync("30", 5, FieldAlignment.Right);
await writer.EndRowAsync();
await writer.FlushAsync();

The async writer uses sync fast paths when data fits in the buffer, avoiding async overhead for small writes while supporting true non-blocking I/O for large datasets.

Custom Type Converters

var records = FixedWidth.Read<Order>()
    .RegisterConverter<Money>((value, culture, format, out result) =>
    {
        if (decimal.TryParse(value, NumberStyles.Currency, culture, out var amount))
        {
            result = new Money(amount);
            return true;
        }
        result = default;
        return false;
    })
    .FromFile("orders.dat")
    .ToList();

Source Generator (AOT Support)

For AOT compilation and trimming support, use the [FixedWidthGenerateBinder] attribute:

using HeroParser.FixedWidths.Records.Binding;

[FixedWidthGenerateBinder]
public class Employee
{
    [FixedWidthColumn(Start = 0, Length = 10)]
    public string Id { get; set; } = "";

    [FixedWidthColumn(Start = 10, Length = 30)]
    public string Name { get; set; } = "";
}

The source generator creates compile-time binders, enabling:

  • AOT compatibility - No runtime reflection
  • Faster startup - Binders are pre-compiled
  • Trimming-safe - Works with .NET trimming/linking

🏗️ Building

Requirements:

  • .NET 8, 9, or 10 SDK
  • C# 12+ language features
  • Recommended: AVX-512 or AVX2 capable CPU for maximum performance

# Build library
dotnet build src/HeroParser/HeroParser.csproj

# Run tests
dotnet test tests/HeroParser.Tests/HeroParser.Tests.csproj

# Run all benchmarks
dotnet run --project benchmarks/HeroParser.Benchmarks -c Release -- --all

Development Setup

To enable pre-commit format checks (recommended):

# Configure git to use the project's hooks
git config core.hooksPath .githooks

This runs dotnet format --verify-no-changes before each commit. If formatting issues are found, the commit is blocked until you run dotnet format to fix them.

🔧 Source Generators (AOT Support)

For AOT (Ahead-of-Time) compilation scenarios, HeroParser supports source-generated binders that avoid reflection:

using HeroParser.SeparatedValues.Records.Binding;

[CsvGenerateBinder]
public class Person
{
    public string Name { get; set; } = "";
    public int Age { get; set; }
    public string? Email { get; set; }
}

The [CsvGenerateBinder] attribute instructs the source generator to emit a compile-time binder, enabling:

  • AOT compatibility - No runtime reflection required
  • Faster startup - Binders are pre-compiled
  • Trimming-safe - Works with .NET trimming/linking

Note: Source generators require the HeroParser.Generators package and a compatible SDK.
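
To make "reflection-free" concrete, a generated binder is conceptually just ordinary compiled code like the hand-written equivalent below (column order Name, Age, Email is assumed for illustration; header-row handling omitted):

// No PropertyInfo lookups, no Activator.CreateInstance - which is why this style of
// binding works under AOT compilation and trimming.
var people = new List<Person>();
foreach (var row in Csv.ReadFromText(csv))
{
    people.Add(new Person
    {
        Name  = row[0].ToString(),
        Age   = row[1].Parse<int>(),
        Email = row[2].ToString(),
    });
}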

⚠️ RFC 4180 Compliance

HeroParser implements core RFC 4180 features:

Supported:

  • Quoted fields with double-quote character (")
  • Escaped quotes using double-double-quotes ("")
  • Delimiters (commas) within quoted fields
  • Both LF (\n) and CRLF (\r\n) line endings
  • Newlines inside quoted fields when AllowNewlinesInsideQuotes = true (default is false for performance)
  • Empty fields and spaces preserved
  • Custom delimiters and quote characters

Not Supported:

  • Automatic header detection - Users skip header rows manually

This provides excellent RFC 4180 compatibility for most CSV use cases (logs, exports, data interchange).
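
For example, with the low-level reader a header row can be skipped manually using the MoveNext/Current pattern shown earlier:

using var reader = Csv.ReadFromText(csv);
if (reader.MoveNext())
{
    // reader.Current is the header row; inspect it or simply ignore it
}
while (reader.MoveNext())
{
    var row = reader.Current;
    var id = row[0].Parse<int>();   // bind the remaining rows by known column index
}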

📝 License

MIT

🙏 Acknowledgments

HeroParser was inspired by the excellent work in the .NET CSV parsing ecosystem:

  • Sep by nietras - Pioneering SIMD-based CSV parsing techniques
  • Sylvan.Data.Csv - High-performance CSV parsing patterns
  • SimdUnicode - SIMD text processing techniques

Special thanks to the .NET performance community for their research and open-source contributions.


High-performance, zero-allocation, AOT-ready CSV & fixed-width parsing for .NET

Target frameworks included in the package: net8.0, net9.0, net10.0 (no dependencies).
