FastLZMA2Net 1.1.0
# FastLZMA2Net

A Fast LZMA2 compression algorithm wrapper for .NET, built on the Fast LZMA2 repository.
## Requirements

.NET 8 / .NET 10
| OS | Architectures |
|---|---|
| Windows | x64 · x86 · arm64 |
| Linux (glibc) | x64 · x86 · arm64 · arm |
| Linux (musl / Alpine) | x64 · arm64 |
Note: the x86 builds may malfunction in some scenarios.
## Installation

```shell
PM> Install-Package FastLZMA2Net
```

or via the .NET CLI:

```shell
dotnet add package FastLZMA2Net
```
## API Overview

| Class | Description |
|---|---|
| `FL2` | Static helpers: one-shot compress / decompress, memory estimation |
| `Compressor` | Reusable compression context |
| `Decompressor` | Reusable decompression context |
| `CompressStream` | Streaming compression (`Stream` subclass) |
| `DecompressStream` | Streaming decompression (`Stream` subclass) |
## Usage

### Simple compression

```csharp
byte[] origin = File.ReadAllBytes(sourceFilePath);
byte[] compressed = FL2.Compress(origin, level: 6);
byte[] decompressed = FL2.Decompress(compressed);
```
`ReadOnlySpan<byte>` overloads are available to avoid a copy when data is already in a pooled or stack-allocated buffer:

```csharp
ReadOnlySpan<byte> span = ...;
byte[] compressed = FL2.Compress(span, level: 6);
```
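As a concrete sketch with a pooled buffer (the `64 * 1024` rent size and the `Stream` named `source` are placeholders, not part of the library's API):

```csharp
using System.Buffers;

byte[] rented = ArrayPool<byte>.Shared.Rent(64 * 1024);
try
{
    // Fill part of the rented buffer, e.g. from some Stream `source`.
    int length = source.Read(rented, 0, rented.Length);

    // Compress only the valid slice; no intermediate byte[] copy is made.
    byte[] compressed = FL2.Compress(rented.AsSpan(0, length), level: 6);
}
finally
{
    ArrayPool<byte>.Shared.Return(rented);
}
```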
### Multi-threaded one-shot compression

```csharp
byte[] compressed = FL2.CompressMT(origin, level: 6, nbThreads: 0); // 0 = all cores
byte[] decompressed = FL2.DecompressMT(compressed, nbThreads: 0);
```
### Context compression

Reuse a `Compressor` / `Decompressor` to amortize context-allocation cost across many calls (e.g. batches of small files).

```csharp
using Compressor compressor = new(nbThreads: 0) { CompressLevel = 10 };
byte[] c1 = compressor.Compress(data1);
byte[] c2 = compressor.Compress(data2);

using Decompressor decompressor = new();
byte[] d1 = decompressor.Decompress(c1);
byte[] d2 = decompressor.Decompress(c2);
```
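As a quick sanity check, the round-tripped buffers should match the originals (continuing the snippet above; `MemoryExtensions.SequenceEqual` compares the spans byte by byte):

```csharp
using System.Diagnostics;

// Decompression should restore the exact original bytes.
Debug.Assert(d1.AsSpan().SequenceEqual(data1));
Debug.Assert(d2.AsSpan().SequenceEqual(data2));
```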
### Async compression

```csharp
using Compressor compressor = new(nbThreads: 0) { CompressLevel = 6 };
byte[] compressed = await compressor.CompressAsync(origin, cancellationToken);

using Decompressor decompressor = new();
byte[] decompressed = await decompressor.DecompressAsync(compressed, cancellationToken);
```
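For instance, the token can come from a `CancellationTokenSource` to bound the call with a timeout (a sketch; the 30-second budget is an arbitrary choice):

```csharp
using System.Threading;

// Cancel automatically if compression takes longer than 30 seconds.
using CancellationTokenSource cts = new(TimeSpan.FromSeconds(30));
using Compressor compressor = new(nbThreads: 0) { CompressLevel = 6 };
try
{
    byte[] compressed = await compressor.CompressAsync(origin, cts.Token);
}
catch (OperationCanceledException)
{
    // The time budget elapsed before compression finished.
}
```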
### File-to-file compression (no memory copy)

Uses memory-mapped I/O, so the file is never read fully into managed memory.

```csharp
using Compressor compressor = new(nbThreads: 0) { CompressLevel = 6 };
nuint compressedBytes = compressor.Compress(sourceFilePath, destFilePath);
```
### Streaming Compression — small data (< 2 GB)

```csharp
// compress
using MemoryStream ms = new();
using (CompressStream cs = new(ms))
{
    cs.Write(origin);
}
byte[] compressed = ms.ToArray();

// decompress
using MemoryStream recoveryStream = new();
using (DecompressStream ds = new(new MemoryStream(compressed)))
{
    ds.CopyTo(recoveryStream);
}
byte[] decompressed = recoveryStream.ToArray();
```
### Streaming Compression — large files (> 2 GB)

.NET byte arrays are limited to ~2 GB. For larger payloads, use the streaming API and feed the data in chunks.

Call `Write()` as many times as needed. The stream is automatically finalised when `Dispose()` is called (i.e. at the end of a `using` block). You may also call `Flush()` explicitly.
#### Compress

```csharp
byte[] buffer = new byte[64 * 1024 * 1024]; // 64 MB read buffer

using FileStream sourceFile = File.OpenRead(sourceFilePath);
using FileStream compressedFile = File.Create(compressedFilePath);
using (CompressStream cs = new(compressedFile))
{
    cs.CompressLevel = 10;
    long offset = 0;
    while (offset < sourceFile.Length)
    {
        int bytesToRead = (int)Math.Min(buffer.Length, sourceFile.Length - offset);
        int bytesRead = sourceFile.Read(buffer, 0, bytesToRead);
        cs.Write(buffer, 0, bytesRead);
        offset += bytesRead;
    }
} // Dispose() automatically finalises the stream and writes the end checksum.
```
#### Decompress

```csharp
using FileStream compressedFile = File.OpenRead(compressedFilePath);
using FileStream recoveryFile = File.Create(decompressedFilePath);
using (DecompressStream ds = new(compressedFile))
{
    ds.CopyTo(recoveryFile);
}
```
### Fine-tune compression parameters

```csharp
using Compressor compressor = new(nbThreads: 0) { CompressLevel = 10 };
compressor.SetParameter(FL2Parameter.FastLength, 48);
compressor.SetParameter(FL2Parameter.SearchDepth, 60);
```
### Estimate memory usage

```csharp
// By compression level and thread count
nuint estimate = FL2.EstimateCompressMemoryUsage(compressionLevel: 10, nbThreads: 8);

// Using an existing context's settings
using Compressor compressor = new(nbThreads: 4) { CompressLevel = 10 };
nuint contextEstimate = FL2.EstimateCompressMemoryUsage(compressor.CompressLevel, compressor.ThreadCount);
```
### Find decompressed size

```csharp
// From a byte array
nuint size = FL2.FindDecompressedSize(compressedData);

// From a file path (uses memory-mapped I/O; no full read into memory)
nuint sizeFromFile = FL2.FindDecompressedSize(compressedFilePath);
```
## Bug reports

Please open an issue.

## Contributing

PRs are welcome.