TheTechLoop.Cache 1.1.2
dotnet add package TheTechLoop.Cache --version 1.1.2
TheTechLoop.Cache
Enterprise-grade distributed Redis caching library for .NET microservices with production-ready features for high-performance, scalable applications.
📚 Check out the /UsageScenarios folder for comprehensive real-world examples, including CQRS with MediatR, multi-level caching, cache tagging, compression, Redis Streams, and more.
**Note:** CORA.Organization is a fictional company used for demonstration purposes in this library.
✨ Key Features
Core Caching
- Multi-Level Caching — L1 in-memory + L2 Redis for optimal latency (1-5ms reads)
- Distributed Locking — Prevent cache stampede with Redis-based locks
- Circuit Breaker — Graceful degradation when Redis is unavailable
- Service-Scoped Keys — Automatic key prefixing per microservice
- Cache Versioning — Bump version on breaking DTO changes
Advanced Features
- Cache Tagging — Group and invalidate related cache entries with Redis Sets (O(1) operations)
- Cache Warming — Pre-load reference data on startup for zero cold-start latency
- Compression — Automatic GZip compression for large payloads (60-80% memory savings)
- Sliding Expiration — Auto-extend cache lifetime on each access (perfect for sessions)
- Redis Streams — Guaranteed cache invalidation delivery across microservices (no message loss)
Invalidation & Coherence
- Pub/Sub Invalidation — Cross-service cache invalidation via Redis channels
- Bulk Invalidation — Invalidate by prefix pattern or tags (e.g., all user data at once)
- Automatic Invalidation — Convention-based invalidation via the `ICacheInvalidatable` marker
Integration & Observability
- CQRS-Optimized — Read-through caching with write-through invalidation
- MediatR Pipeline Behavior — Convention-based caching via the `ICacheable` marker
- OpenTelemetry Metrics — Built-in hit/miss/duration/size metrics per entity type
- Effectiveness Tracking — Per-entity cache hit rate analysis for optimization
Performance & Reliability
- 10-50x Performance Improvement — Typical read latency: < 5ms (vs 50-200ms database queries)
- High Availability — Automatic Redis reconnection with exponential backoff
- Thread-Safe — Concurrent-safe operations with minimal lock contention
- Production-Ready — Battle-tested in enterprise microservices environments
📦 Installation
dotnet add package TheTechLoop.Cache
Or via project reference:
<ProjectReference Include="..\TheTechLoop.Cache\TheTechLoop.Cache.csproj" />
Requirements:
- .NET 10 or higher
- Redis 6.0+ (7.0+ recommended for Streams)
- StackExchange.Redis 2.8+
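If you don't have a Redis instance handy for local development, one quick way to satisfy the requirement (assuming Docker is installed; the container name `dev-redis` is arbitrary) is the official image:

```shell
# Start a disposable Redis 7 container for local development
docker run --rm -d --name dev-redis -p 6379:6379 redis:7

# Verify it responds
docker exec dev-redis redis-cli ping   # PONG
```

With this running, the `localhost:6379` connection string shown in the Quick Start works unchanged.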
🚀 Quick Start
1. Register Services
// Program.cs
builder.Services.AddTheTechLoopCache(builder.Configuration);
// Optional: Enable cross-service invalidation via Redis Pub/Sub
builder.Services.AddTheTechLoopCacheInvalidation();
// Optional: Multi-level caching (L1 Memory + L2 Redis)
builder.Services.AddTheTechLoopMultiLevelCache(builder.Configuration);
2. Configuration (appsettings.json)
{
"TheTechLoopCache": {
"Configuration": "localhost:6379,password=yourpassword,defaultDatabase=0,ssl=false,abortConnect=false",
"InstanceName": "TheTechLoop:Company:",
"ServiceName": "company-svc",
"CacheVersion": "v1",
"DefaultExpirationMinutes": 60,
"EnableLogging": true,
"Enabled": true,
"InvalidationChannel": "cache:invalidation",
"CircuitBreaker": {
"Enabled": true,
"BreakDurationSeconds": 60,
"FailureThreshold": 5
},
"MemoryCache": {
"Enabled": true,
"DefaultExpirationSeconds": 30,
"SizeLimit": 1024
},
"EnableTagging": false,
"EnableCompression": false,
"CompressionThresholdBytes": 1024,
"EnableEffectivenessMetrics": false,
"UseStreamsForInvalidation": false,
"EnableWarmup": false
}
}
Configuration Options Explained:
| Option | Description | Default |
|---|---|---|
| `Configuration` | Redis connection string | Required |
| `ServiceName` | Unique name for your microservice (used in key prefixes) | Required |
| `InstanceName` | Global prefix for all cache keys | Required |
| `CacheVersion` | Version segment for cache keys (bump to invalidate all) | `"v1"` |
| `Enabled` | Master switch to enable/disable caching | `true` |
| `EnableTagging` | Enable cache tagging for bulk invalidation | `false` |
| `EnableCompression` | Auto-compress values larger than the threshold | `false` |
| `EnableEffectivenessMetrics` | Track per-entity hit rates | `false` |
| `UseStreamsForInvalidation` | Use Redis Streams instead of Pub/Sub | `false` |
| `EnableWarmup` | Pre-load cache on startup | `false` |
📋 Usage Scenarios
TheTechLoop.Cache supports 10 comprehensive usage scenarios. Visit the /UsageScenarios folder for detailed documentation with complete code examples.
Quick Selection Guide
| Scenario | Best For | Key Features |
|---|---|---|
| 01 - CQRS Multi-Level Cache ⭐ | Microservices with MediatR, high read-to-write ratio | L1+L2 cache, automatic caching/invalidation, 10-50x performance |
| 02 - Cache Tagging | Complex invalidation (e.g., user logout) | Bulk invalidation, Redis Sets, O(1) tag queries |
| 03 - Session Management | User sessions, shopping carts | Sliding expiration, auto-extend on access |
| 04 - Compression | Large payloads, bandwidth-constrained | GZip compression, 60-80% memory savings |
| 05 - Microservices Streams | Mission-critical invalidation | Redis Streams, guaranteed delivery, no message loss |
| 06 - Cache Warming | Static reference data | Pre-load on startup, zero cold-start latency |
| 07 - Performance Metrics | Data-driven optimization | Per-entity hit rates, latency tracking, OpenTelemetry |
| 08 - Simple REST API | Simple APIs without CQRS | Single-level cache, minimal setup |
| 09 - Memory Only | Single-instance apps, development | L1 cache only, no Redis dependency |
| 10 - Write-Heavy | Frequent updates, real-time systems | Aggressive invalidation, short TTL |
Selection by Architecture
- CQRS + MediatR: Use Scenario #1 (CQRS Multi-Level Cache)
- Simple REST API: Use Scenario #8 (Simple REST API)
- Microservices: Use Scenario #5 (Microservices Streams)
- Monolith: Use Scenario #9 (Memory Only)
Selection by Feature Need
- Session management: Scenario #3
- Large payloads: Scenario #4
- Bulk invalidation: Scenario #2
- Static data: Scenario #6
- Performance analysis: Scenario #7
🏗️ Architecture with CQRS + MediatR
Overview
Controller → MediatR → CachingBehavior → QueryHandler → ReadRepository → DB
↑
ICacheService
(read-through)
Controller → MediatR → CommandHandler → WriteRepository → UnitOfWork → DB
↓
ICacheService (invalidate)
↓
ICacheInvalidationPublisher (Pub/Sub)
Data Layer: Read/Write Repositories + UnitOfWork
// Read-only repository (CQRS query-side). Uses AsNoTracking for performance.
public interface IReadRepository<TEntity> where TEntity : class
{
IQueryable<TEntity> Query { get; } // AsNoTracking
Task<TEntity?> GetByIdAsync(int id, CancellationToken ct = default);
Task<List<TEntity>> GetAllAsync(CancellationToken ct = default);
}
// Write repository (CQRS command-side). Tracked by EF.
public interface IWriteRepository<TEntity> where TEntity : class
{
Task AddAsync(TEntity entity, CancellationToken ct = default);
void Update(TEntity entity);
void Remove(TEntity entity);
Task<TEntity?> GetByIdAsync(int id, CancellationToken ct = default);
}
// Commits all pending changes from write repositories.
public interface IUnitOfWork
{
Task<int> SaveChangesAsync(CancellationToken ct = default);
}
Query Handler — Cache on the Read Path
public class GetDealershipByIdQueryHandler : IRequestHandler<GetDealershipByIdQuery, Dealership?>
{
private readonly IReadRepository<Data.Models.Dealership> _repository;
private readonly ICacheService _cache;
private readonly CacheKeyBuilder _keyBuilder;
private readonly IMapper _mapper;
public async Task<Dealership?> Handle(GetDealershipByIdQuery request, CancellationToken ct)
{
var cacheKey = _keyBuilder.Key("Dealership", request.Id.ToString());
return await _cache.GetOrCreateAsync(
cacheKey,
async () =>
{
var entity = await _repository.Query
.Include(d => d.BusinessZipCode)
.FirstOrDefaultAsync(d => d.ID == request.Id, ct);
return entity is null ? null : _mapper.Map<Dealership>(entity);
},
TimeSpan.FromMinutes(30),
ct);
}
}
Command Handler — Invalidate on the Write Path
public class UpdateDealershipCommandHandler : IRequestHandler<UpdateDealershipCommand, bool>
{
private readonly IWriteRepository<Data.Models.Dealership> _repository;
private readonly IUnitOfWork _unitOfWork;
private readonly ICacheService _cache;
private readonly ICacheInvalidationPublisher _invalidation;
private readonly CacheKeyBuilder _keyBuilder;
public async Task<bool> Handle(UpdateDealershipCommand request, CancellationToken ct)
{
var entity = await _repository.GetByIdAsync(request.Id, ct);
if (entity is null) return false;
entity.Name = request.Name;
entity.BusinessAddress = request.BusinessAddress;
_repository.Update(entity);
await _unitOfWork.SaveChangesAsync(ct);
// Invalidate specific entity cache
var entityKey = _keyBuilder.Key("Dealership", request.Id.ToString());
await _cache.RemoveAsync(entityKey, ct);
// Invalidate search results (prefix pattern)
var searchPattern = _keyBuilder.Key("Dealership", "Search");
await _cache.RemoveByPrefixAsync(searchPattern, ct);
// Notify OTHER microservice instances via Pub/Sub
await _invalidation.PublishAsync(entityKey, ct);
await _invalidation.PublishPrefixAsync(searchPattern, ct);
return true;
}
}
🎯 Advanced Features
Multi-Level Caching (L1 + L2)
Combine in-memory (L1) and Redis (L2) for optimal performance:
// Program.cs
builder.Services.AddTheTechLoopMultiLevelCache(builder.Configuration);
// Configuration
{
"TheTechLoopCache": {
"MemoryCache": {
"Enabled": true,
"DefaultExpirationSeconds": 30,
"SizeLimit": 1024
}
}
}
Performance:
- L1 hit: < 1ms (in-process memory)
- L2 hit: 1-5ms (Redis network call)
- Database: 50-200ms
Cache Tagging for Bulk Invalidation
Group related cache entries and invalidate them together:
// Enable in configuration
{
"TheTechLoopCache": {
"EnableTagging": true
}
}
// Usage
var options = CacheEntryOptions.Absolute(
TimeSpan.FromHours(2),
"User", // Generic user tag
$"User:{user.ID}", // Specific user tag
"Session" // Session tag
);
await _cache.SetAsync(profileKey, user, options);
// Invalidate all user data with one call
await _tagService.RemoveByTagAsync($"User:{userId}");
Use Cases:
- User logout (invalidate all user sessions + preferences + permissions)
- Role change (invalidate user permissions + menu access)
- Company update (invalidate company + dealerships + employees)
Compression for Large Payloads
Automatically compress cache values larger than threshold:
// Configuration
{
"TheTechLoopCache": {
"EnableCompression": true,
"CompressionThresholdBytes": 1024 // Compress values > 1KB
}
}
// Automatic compression - no code changes needed!
var company = await _cache.GetOrCreateAsync(
cacheKey,
async () => await GetCompanyWithAllDetails(id),
TimeSpan.FromHours(2));
// 500KB → 150KB (70% savings)
Benefits:
- 60-80% memory savings for JSON payloads
- Reduced network bandwidth
- Transparent compression/decompression
- Small CPU overhead (+2ms for 10KB data)
Redis Streams for Guaranteed Invalidation
Use Redis Streams instead of Pub/Sub for mission-critical invalidation:
// Configuration
{
"TheTechLoopCache": {
"UseStreamsForInvalidation": true
}
}
// Same API - guaranteed delivery
await _invalidation.PublishAsync(key);
Streams vs Pub/Sub:

| Feature | Pub/Sub | Streams |
|---|---|---|
| Delivery | Fire-and-forget | Guaranteed |
| Persistence | No | Yes (until ACK) |
| Consumer offline | Message lost | Message queued |
| Acknowledgment | No | Required |
| Recommended for | Dev/Staging | Production |
Cache Warming for Zero Cold-Start
Pre-load reference data on application startup:
// Program.cs
builder.Services.AddTheTechLoopCacheWarmup();
builder.Services.AddTransient<ICacheWarmupStrategy, GeoDataWarmupStrategy>();
// Configuration
{
"TheTechLoopCache": {
"EnableWarmup": true
}
}
// Strategy implementation
public class GeoDataWarmupStrategy : ICacheWarmupStrategy
{
// _repository and _keyBuilder are injected via the constructor (omitted for brevity)
public async Task WarmupAsync(ICacheService cache, CancellationToken ct)
{
var countries = await _repository.GetAllCountriesAsync(ct);
foreach (var country in countries)
{
var key = _keyBuilder.Key("Country", country.ID.ToString());
await cache.SetAsync(key, country, TimeSpan.FromHours(24), ct);
}
}
}
Benefits:
- Zero cold-start latency — reference data is already cached before the first request arrives
- 99.9%+ cache hit rate for reference data
Performance Metrics and Effectiveness Tracking
Track cache performance per entity type:
// Configuration
{
"TheTechLoopCache": {
"EnableEffectivenessMetrics": true
}
}
// Automatic metrics collection
// Query cache statistics
GET /api/cache/stats
{
"Company": {
"hits": 1420,
"misses": 180,
"hitRate": 0.8875, // 88.75%
"avgLatencyMs": 2.3
},
"Country": {
"hits": 4520,
"misses": 8,
"hitRate": 0.9982, // 99.82% - Excellent!
"avgLatencyMs": 0.8
}
}
Use Cases:
- Identify which entities benefit most from caching
- Optimize TTL values based on hit rates
- Identify poor caching candidates (consistently low hit rates)
- Capacity planning with size tracking
Sliding Expiration for Sessions
Auto-extend cache lifetime on each access:
var options = CacheEntryOptions.Sliding(TimeSpan.FromMinutes(30));
await _cache.SetAsync(sessionKey, sessionData, options);
// Each access extends the TTL by 30 minutes
await _cache.GetAsync<SessionData>(sessionKey);
Perfect for:
- User login sessions
- Shopping cart persistence
- Temporary form data
- User activity tracking
🔌 MediatR Pipeline Behavior — Auto-Cache for Queries
Eliminate cache boilerplate with convention-based caching:
ICacheable Marker Interface
public interface ICacheable
{
string CacheKey { get; }
TimeSpan CacheDuration { get; }
}
CachingBehavior
public class CachingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
where TRequest : IRequest<TResponse>
{
private readonly ICacheService _cache;
private readonly CacheKeyBuilder _keyBuilder;
public async Task<TResponse> Handle(
TRequest request,
RequestHandlerDelegate<TResponse> next,
CancellationToken ct)
{
if (request is not ICacheable cacheable)
return await next(ct);
var scopedKey = _keyBuilder.Key(cacheable.CacheKey);
return await _cache.GetOrCreateAsync(
scopedKey,
async () => await next(ct),
cacheable.CacheDuration,
ct);
}
}
Usage — Handler Stays Pure
// Query declares cache behavior
public record GetDealershipByIdQuery(int Id) : IRequest<Dealership?>, ICacheable
{
public string CacheKey => $"Dealership:{Id}";
public TimeSpan CacheDuration => TimeSpan.FromMinutes(30);
}
// Handler has ZERO cache logic - pure data access
public class GetDealershipByIdQueryHandler : IRequestHandler<GetDealershipByIdQuery, Dealership?>
{
private readonly IReadRepository<Data.Models.Dealership> _repository;
private readonly IMapper _mapper;
public async Task<Dealership?> Handle(GetDealershipByIdQuery request, CancellationToken ct)
{
var entity = await _repository.Query
.Include(d => d.BusinessZipCode)
.FirstOrDefaultAsync(d => d.ID == request.Id, ct);
return entity is null ? null : _mapper.Map<Dealership>(entity);
}
}
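The write path has a mirror-image convention: the Behaviors folder ships a `CacheInvalidationBehavior` that reacts to commands marked with `ICacheInvalidatable` (see Project Structure). The member names in this sketch (`KeysToInvalidate`, `PrefixesToInvalidate`) are assumptions for illustration only — check `Abstractions/ICacheInvalidatable.cs` for the shipped contract:

```csharp
// HYPOTHETICAL member names — verify against Abstractions/ICacheInvalidatable.cs.
// The idea: the command declares which keys and prefixes to drop after it
// succeeds, and the pipeline behavior performs the invalidation centrally,
// keeping the command handler free of cache code.
public record UpdateDealershipNameCommand(int Id, string Name)
    : IRequest<bool>, ICacheInvalidatable
{
    public IEnumerable<string> KeysToInvalidate => new[] { $"Dealership:{Id}" };
    public IEnumerable<string> PrefixesToInvalidate => new[] { "Dealership:Search" };
}
```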
🔧 API Reference
ICacheService
public interface ICacheService
{
// Get or create with factory
Task<T?> GetOrCreateAsync<T>(string key, Func<Task<T>> factory, TimeSpan expiration, CancellationToken ct = default);
// Direct get
Task<T?> GetAsync<T>(string key, CancellationToken ct = default);
// Direct set
Task SetAsync<T>(string key, T value, TimeSpan expiration, CancellationToken ct = default);
Task SetAsync<T>(string key, T value, CacheEntryOptions options, CancellationToken ct = default);
// Remove operations
Task RemoveAsync(string key, CancellationToken ct = default);
Task RemoveByPrefixAsync(string keyPrefix, CancellationToken ct = default);
// Distributed locking
Task<T?> GetOrCreateWithLockAsync<T>(string key, Func<Task<T>> factory, TimeSpan expiration, TimeSpan lockTimeout, CancellationToken ct = default);
}
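`GetOrCreateWithLockAsync` is the stampede-prevention entry point: when a hot key expires, only one caller runs the factory while concurrent callers wait (up to `lockTimeout`) for the lock and then read the freshly cached value. A minimal sketch — `DealershipStats`, `StatsService`, and `ComputeStatsAsync` are hypothetical placeholders for an expensive aggregate:

```csharp
public record DealershipStats(int TotalDealerships, int ActiveToday); // hypothetical DTO

public class StatsService // hypothetical service
{
    private readonly ICacheService _cache;
    private readonly CacheKeyBuilder _keyBuilder;

    public StatsService(ICacheService cache, CacheKeyBuilder keyBuilder)
    {
        _cache = cache;
        _keyBuilder = keyBuilder;
    }

    public Task<DealershipStats?> GetStatsAsync(CancellationToken ct) =>
        _cache.GetOrCreateWithLockAsync(
            _keyBuilder.Key("Dealership", "Stats"),
            () => ComputeStatsAsync(ct),          // expensive aggregation runs at most once per expiry
            expiration: TimeSpan.FromMinutes(5),
            lockTimeout: TimeSpan.FromSeconds(10),
            ct);

    // Placeholder for the real aggregation query.
    private Task<DealershipStats> ComputeStatsAsync(CancellationToken ct) =>
        throw new NotImplementedException();
}
```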
CacheKeyBuilder
// Injected instance (service-scoped, versioned)
var key = _keyBuilder.Key("Dealership", "42");
// → "company-svc:v1:Dealership:42"
var pattern = _keyBuilder.Pattern("Dealership", "Search");
// → "company-svc:v1:Dealership:Search*"
// Static helpers (no service scope)
var sharedKey = CacheKeyBuilder.For("shared", "config");
var entityKey = CacheKeyBuilder.ForEntity("User", 42);
var sanitized = CacheKeyBuilder.Sanitize("hello world/test");
CacheEntryOptions
// Absolute expiration
var options = CacheEntryOptions.Absolute(TimeSpan.FromHours(1));
// Sliding expiration
var options = CacheEntryOptions.Sliding(TimeSpan.FromMinutes(30));
// With tags
var options = CacheEntryOptions.Absolute(
TimeSpan.FromHours(2),
"User", $"User:{userId}", "Session"
);
📊 OpenTelemetry Metrics
All metrics are recorded automatically. No manual instrumentation needed.
Built-in Metrics
| Metric | Type | Description |
|---|---|---|
| `cache.hits` | Counter | Total cache hits |
| `cache.misses` | Counter | Total cache misses |
| `cache.errors` | Counter | Redis exceptions |
| `cache.evictions` | Counter | Explicit removals |
| `cache.duration` | Histogram | Operation latency (ms) |
| `cache.size` | Histogram | Cached value size (bytes) |
| `cache.effectiveness.hit_rate` | Gauge | Hit rate per entity type |
Setup — Prometheus
builder.Services.AddOpenTelemetry()
.WithMetrics(metrics =>
{
metrics.AddMeter("TheTechLoop.Cache");
metrics.AddPrometheusExporter();
});
app.MapPrometheusScrapingEndpoint("/metrics");
CLI — dotnet-counters
dotnet counters monitor --process-id <PID> --counters TheTechLoop.Cache
[TheTechLoop.Cache]
cache.hits (Count / 1 sec) 12
cache.misses (Count / 1 sec) 3
cache.duration (ms) P50 0.45
cache.duration (ms) P95 2.1
💡 Best Practices
Cache TTL Guidelines
| Data Type | TTL | Example |
|---|---|---|
| Static reference data | 6–10 hours | Countries, states, positions |
| Entity by ID | 15–30 minutes | Dealership, User, Company |
| Search / list results | 3–5 minutes | Search results, paginated lists |
| User session data | 1–5 minutes | Active user profile |
| Frequently mutated data | 30–60 seconds | Real-time counters, presence |
Rules of Thumb
| Rule | Why |
|---|---|
| Cache only in Query Handlers | Reads benefit from cache; writes must always hit the DB |
| Invalidate only in Command Handlers | After `UnitOfWork.SaveChangesAsync` succeeds |
| `ReadRepository` uses `AsNoTracking` | No EF change-tracking overhead on cached reads |
| `WriteRepository` is tracked | EF change tracking is needed for updates |
| Use the `ICacheable` marker | Eliminates cache boilerplate in every handler |
| Short TTL for search, long for by-ID | Search results change frequently |
| Bump `CacheVersion` on breaking DTO changes | Old cache entries are automatically ignored |
| Always fall back to the DB on cache errors | Cache is an optimization, not a dependency |
Data Flow
READ PATH (Query)
Controller
→ MediatR.Send(Query)
→ CachingBehavior
→ ICacheService.GetOrCreateAsync()
→ [Cache Hit] Return cached value
→ [Cache Miss] → QueryHandler → Database → Cache → Return
WRITE PATH (Command)
Controller
→ MediatR.Send(Command)
→ CommandHandler
→ WriteRepository.Update()
→ UnitOfWork.SaveChangesAsync()
→ ICacheService.RemoveAsync()
→ ICacheInvalidationPublisher.PublishAsync() ← notify other instances
🗂️ Project Structure
TheTechLoop.Cache/
├── Abstractions/
│ ├── ICacheable.cs # Marker for auto-cached queries
│ ├── ICacheInvalidatable.cs # Marker for auto-invalidating commands
│ ├── ICacheService.cs # Core cache contract
│ ├── ICacheInvalidationPublisher.cs # Cross-service Pub/Sub
│ ├── ICacheTagService.cs # Cache tagging
│ └── IDistributedLock.cs # Stampede prevention
├── Behaviors/
│ ├── CachingBehavior.cs # MediatR read-path auto-cache
│ └── CacheInvalidationBehavior.cs # MediatR write-path auto-invalidate
├── Configuration/
│ └── CacheConfig.cs # Full config
├── Extensions/
│ └── CacheServiceCollectionExtensions.cs # DI registration
├── Keys/
│ └── CacheKeyBuilder.cs # Service-scoped, versioned keys
├── Metrics/
│ ├── CacheMetrics.cs # OpenTelemetry counters
│ └── CacheEffectivenessMetrics.cs # Per-entity tracking
├── Services/
│ ├── RedisCacheService.cs # Core Redis implementation
│ ├── MultiLevelCacheService.cs # L1 Memory + L2 Redis
│ ├── RedisDistributedLock.cs # Redis distributed locking
│ ├── RedisCacheInvalidationPublisher.cs # Pub/Sub publisher
│ ├── CacheInvalidationSubscriber.cs # Background Pub/Sub consumer
│ ├── StreamInvalidationPublisher.cs # Redis Streams publisher
│ ├── StreamInvalidationSubscriber.cs # Redis Streams consumer
│ ├── CacheTagService.cs # Tagging implementation
│ ├── CircuitBreakerState.cs # Thread-safe circuit breaker
│ └── NoOpCacheService.cs # No-op when disabled
├── Warming/
│ ├── ICacheWarmupStrategy.cs # Warmup strategy contract
│ └── CacheWarmupService.cs # Background warmup service
├── README.md
└── TheTechLoop.Cache.csproj
📚 Additional Resources
- /UsageScenarios — 10 comprehensive usage scenarios with complete examples
- Summary.md — Quick reference guide for all scenarios
- 01_CQRS_MultiLevel_Cache.md ⭐ Most popular scenario
🚀 Performance
Typical Results:
- Database query: 50-200ms
- Redis cache hit: 1-5ms
- Memory cache hit: < 1ms
- 10-50x performance improvement for read-heavy workloads
Compression:
- 60-80% memory savings for JSON payloads
- 70% faster network transfer for large payloads
Cache Hit Rates:
- Reference data: 99.9%+
- Entity by ID: 85-95%
- Search results: 70-85%
🤝 Contributing
Contributions are welcome! Please open an issue or submit a pull request.
📄 License
MIT License - see LICENSE file for details.
🆘 Support
For questions or issues:
- Check the troubleshooting section in relevant usage scenario
- Review this README and configuration options
- Open an issue on GitHub
Version: 1.1.2
Status: Production-Ready ✅
Compatible frameworks

| Product | Compatible and additional computed target frameworks |
|---|---|
| .NET | net10.0 is compatible. net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, and net10.0-windows were computed. |

Dependencies (net10.0)
- AspNetCore.HealthChecks.Redis (>= 9.0.0)
- MediatR (>= 12.5.0)
- Microsoft.Extensions.Caching.Memory (>= 10.0.3)
- Microsoft.Extensions.Caching.StackExchangeRedis (>= 10.0.3)
- Microsoft.Extensions.Diagnostics.HealthChecks (>= 10.0.3)
- Microsoft.Extensions.Hosting.Abstractions (>= 10.0.3)
- Microsoft.Extensions.Options.ConfigurationExtensions (>= 10.0.3)
- Microsoft.Extensions.Options.DataAnnotations (>= 10.0.3)
- StackExchange.Redis (>= 2.11.0)