qon 0.2.3-alpha

This is a prerelease version of qon.
.NET CLI
    dotnet add package qon --version 0.2.3-alpha

Package Manager Console (Visual Studio; uses the NuGet module's version of Install-Package)
    NuGet\Install-Package qon -Version 0.2.3-alpha

PackageReference (for projects that support it, copy this XML node into the project file)
    <PackageReference Include="qon" Version="0.2.3-alpha" />

Central Package Management (CPM; the version goes into the solution's Directory.Packages.props, the reference into the project file)
    <PackageVersion Include="qon" Version="0.2.3-alpha" />
    <PackageReference Include="qon" />

Paket CLI
    paket add qon --version 0.2.3-alpha

F# Interactive / Polyglot Notebooks (copy into the interactive tool or script source)
    #r "nuget: qon, 0.2.3-alpha"

C# file-based apps (starting in .NET 10 preview 4; place before any lines of code)
    #:package qon@0.2.3-alpha

Install as a Cake Addin
    #addin nuget:?package=qon&version=0.2.3-alpha&prerelease

Install as a Cake Tool
    #tool nuget:?package=qon&version=0.2.3-alpha&prerelease

<h1 align="center">「qon」</h1>


「qon」 is an open-source C# library for iterative, backtracking-based problem solving.

Installation

NuGet
dotnet add package qon
dotnet add package qon.Spatial
Unity3D

Install NuGetForUnity and use the packages listed above.

What it is, what it isn't, and what it can become

Application: 「qon」 provides an API for solving tasks defined by a finite set of variables together with rules describing how those variables should be evaluated, changed, and validated. It was originally created as a base for an implementation of the Wave Function Collapse algorithm; later I decided to build a more generic system. Even so, the main focus of the library is procedural generation, which means some kinds of tasks are not optimized at all. For example, math and logic puzzles such as SEND+MORE=MONEY are solvable with 「qon」, but can take a long time.
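To make the SEND+MORE=MONEY remark concrete, this is roughly what a naive backtracking search over letter-to-digit assignments looks like without any arithmetic propagation. This sketch is plain C# for illustration only, not part of qon's API; it visits up to 10·9·…·3 ≈ 1.8 million complete assignments because the sum is only checked at the leaves.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SendMoreMoney
{
    // The eight distinct letters of the puzzle, assigned in this order.
    static readonly char[] Letters = { 'S', 'E', 'N', 'D', 'M', 'O', 'R', 'Y' };

    // Interpret a word as a decimal number under the current assignment.
    static long Word(string w, IDictionary<char, int> a) =>
        w.Aggregate(0L, (acc, c) => acc * 10 + a[c]);

    // Classic backtracking: try every unused digit for the next letter.
    static bool Solve(int i, Dictionary<char, int> a, bool[] used)
    {
        if (i == Letters.Length)
            return a['S'] != 0 && a['M'] != 0
                && Word("SEND", a) + Word("MORE", a) == Word("MONEY", a);
        for (int d = 0; d <= 9; d++)
        {
            if (used[d]) continue;
            used[d] = true;
            a[Letters[i]] = d;
            if (Solve(i + 1, a, used)) return true;
            used[d] = false;
            a.Remove(Letters[i]);
        }
        return false;
    }

    static void Main()
    {
        var assignment = new Dictionary<char, int>();
        if (Solve(0, assignment, new bool[10]))
            Console.WriteLine(string.Join(" ",
                Letters.Select(c => $"{c}={assignment[c]}")));
        // Prints the unique solution: S=9 E=5 N=6 D=7 M=1 O=0 R=8 Y=2
    }
}
```

A constraint solver with propagation prunes most of this search space; brute force like the above is exactly the worst case the paragraph warns about.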

Functionality: Out of the box, 「qon」 provides functionality for solving tasks using a constraint-based and/or genetic approach.

Openness: The whole library heavily utilizes interfaces and is designed so that you can swap out and reimplement almost all of the provided classes.

Aim: I develop 「qon」 with Unity3D in mind, so I'm bound by the available runtime. Until Unity embraces CoreCLR in an LTS version, I will not take the library beyond the alpha state. What I want to implement for beta, at a minimum:

  • Proper Numerical Domains utilizing INumber<T>
  • User-side heuristics for optimization fine-tuning
  • Support for hexagonal spaces
  • Feature rich QSL
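As an illustration of the first bullet (nothing like this exists in the current API, the type and its members are hypothetical): a numeric domain built on .NET 7+ generic math might look roughly like this, with a single generic implementation replacing per-type domains.

```csharp
using System.Numerics;

// Hypothetical sketch: a closed interval over any numeric type T.
// INumber<T> supplies comparison and subtraction operators generically.
public readonly record struct IntervalDomain<T>(T Min, T Max)
    where T : INumber<T>
{
    public bool Contains(T value) => value >= Min && value <= Max;

    public T Width => Max - Min;
}

// Usage: the same type works for int, double, decimal, etc.
// var d = new IntervalDomain<int>(0, 9);   // d.Contains(5) is true
// var r = new IntervalDomain<double>(0.0, 1.0);
```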

While developing I was additionally inspired by MarkovJunior, another project by Maxim Gumin (the author of WFC), so I utilized some of its ideas. Besides my appreciation for both of these projects and their author, I want to highlight that my goal is not to compete with or replace them, but to create something of my own. My library is very modular and tries to encompass several different approaches, which unfortunately leads to not-so-good performance: tools created for specific situations will always beat a jack-of-all-trades multitool.

Usage examples

Simplest example: generates a field of 10 variables with distinct char values

```csharp
var domain = DomainHelper.SymbolicalDomain(
    new DomainHelper.CharDomainOptions()
        .WithAlphabet('a', 'j'));

var machine = QMachine<char>.Create()
    .WithConstraintLayer(new()
    {
        GeneralConstraints = new()
        {
            Constraints.CreateConstraint<char>()
                .Select(Filters.All<char>())
                .Propagate(Propagators.AllDistinct<char>())
                .Build()
        }
    })
    .GenerateField(domain, 10);

foreach (var state in machine.States)
{
    Console.WriteLine($"{state}: {machine.Status}");
}
```

Weasel genetic algorithm

Weasel program - Wikipedia

Evolutionary algorithm - Rosetta Code
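For comparison, here is the classic weasel loop written directly, without qon. This is a minimal sketch; the sample size (100), the mutation rate, and the all-'A' starting string mirror the qon example below, and none of these names come from qon's API.

```csharp
using System;
using System.Linq;

class Weasel
{
    const string Target = "ME THINKS IT IS LIKE A WEASEL";
    const string Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";
    static readonly Random Rng = new Random();

    // Fitness: number of positions that already match the target.
    static int Score(string candidate) =>
        candidate.Zip(Target, (a, b) => a == b ? 1 : 0).Sum();

    // Copy the parent, replacing each character with probability `rate`.
    static string Mutate(string parent, double rate) =>
        new string(parent.Select(c =>
            Rng.NextDouble() < rate ? Alphabet[Rng.Next(Alphabet.Length)] : c).ToArray());

    static void Main()
    {
        var current = new string('A', Target.Length);
        int generation = 0;
        while (current != Target)
        {
            // Keep the best of 100 mutated offspring, plus the parent itself
            // so the score never regresses.
            current = Enumerable.Range(0, 100)
                .Select(_ => Mutate(current, 0.05))
                .Append(current)
                .OrderByDescending(Score)
                .First();
            generation++;
        }
        Console.WriteLine($"Matched after {generation} generations.");
    }
}
```

The qon version below expresses the same loop declaratively: `Sampling(100)` plays the role of the offspring count, `Frequency(0.1)` the mutation rate, and `Fitness` the scoring function (note that qon minimizes mismatches while this sketch maximizes matches).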

```csharp
var target = "ME THINKS IT IS LIKE A WEASEL";

var machine = QMachine<char>.Create()
    .WithMutation(new MutationLayerParameter<char>
    {
        MutationFunction = Mutations.CreateMutation<char>()
            .Sampling(100)
            .AddMutation(Mutations.Mutation<char>()
                .Frequency(0.1)
                .When(Filters.All<char>())
                .Into(Mutations.RandomFromDomain<char>())
                .Build())
            .Build(),
        Fitness = (field) => Score(field, target)
    })
    .GenerateField((target.Length, 1, 1), 'A');

foreach (var state in machine.States)
{
    Console.WriteLine(FormatState(state));
}

int Score(Field<char> first, string second)
{
    if (first.Count != second.Length)
    {
        return -1;
    }

    int mismatch = 0;

    for (int i = 0; i < first.Count; i++)
    {
        if (!first[i].Value.CheckValue(second[i]))
        {
            mismatch++;
        }
    }

    return mismatch;
}

string FormatState(MachineState<char> state)
{
    return new string(state.Field.Select(variable => variable.Value.Value).ToArray());
}
```

AI Usage

I consider myself a mildly pro-AI person. I think AI tools (even LLMs) can be used on a par with other tools, but they should be used only with full transparency, and their results should be treated with caution.

How different AI instruments were used while developing this library:

  • Core project

    1. All the code pushed to the GitHub repository was written by me manually.
    2. In most cases I used Codex to review my code and find bugs, and it did a pretty good job. It was not asked to produce fixes, only to find specific issues and perhaps suggest what could be done. In the end, it was me who wrote the fixes.
    3. In some cases I asked it to write dummy code, which allowed me to focus on other parts. All AI-generated code was later manually rewritten.
      1. For example, when I was rewriting the V1 version I started from scratch three times, because the intended changes were too big to implement at once; I used Codex to apply fixes across the project so that it would at least compile, then focused on particular parts and rewrote everything.
      2. Another example: when I was experimenting with rotation support I was not sure how to implement it, so I described the algorithm and asked Codex to produce the code. That code worked as a proof of concept for me, and I then rewrote it on my own.
  • Examples

    • Some examples were initially AI-generated, because I wondered how an LLM could play with my own code to produce something new. Basically, I just asked the AI to write whatever it could using my library, and I picked a couple of good examples, which were later rewritten due to API changes. The best example is Eight Queens (Wiki): it wasn't working properly because Validation had not been implemented yet, which forced me to implement it.
  • Tests

    • Writing tests is not my strong suit, so most of them are generated. But I tried to make this process as meaningful as possible: I manually pick pieces of code to be tested, ask the AI to search the project for use cases of that functionality, then ask it to generate tests covering the existing functionality and to deduce the missing conditions that should be tested too. I also use tools like dotCover to help me with tests.
  • Documentation and wiki

    1. I was frustrated by the lack of any meaningful tooling for GitHub wikis. Adriantanasa/github-wiki-sidebar wasn't able to generate a sidebar due to some internal error, so I vibe-coded a Python script that generates _Sidebar.md for this wiki. I may rewrite it later.
    2. Everything else is written strictly by me. I hope you can excuse some mistakes, as English is not my native language.
Compatible and additional computed target framework versions

  • .NET: net5.0 and net5.0-windows were computed; net6.0 and net7.0, plus their -android, -ios, -maccatalyst, -macos, -tvos, and -windows variants, were computed; net8.0, net9.0, and net10.0, plus their -android, -browser, -ios, -maccatalyst, -macos, -tvos, and -windows variants, were computed.
  • .NET Core: netcoreapp3.0 and netcoreapp3.1 were computed.
  • .NET Standard: netstandard2.1 is compatible (included in package).
  • MonoAndroid, MonoMac, MonoTouch, Tizen (tizen60), Xamarin.iOS, Xamarin.Mac, Xamarin.TVOS, and Xamarin.WatchOS were computed.

Learn more about Target Frameworks and .NET Standard.

Dependencies
  • .NETStandard 2.1

    • No dependencies.

NuGet packages (1)

Showing the top NuGet package that depends on qon:

Package Downloads
qon.Spatial

Additional functionality for working with spatial tasks, with support for the Cartesian coordinate system.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version Downloads Last Updated
0.2.3-alpha 29 4/8/2026
0.2.2-alpha 41 4/3/2026
0.2.1-alpha 38 4/1/2026
0.1.1-alpha 41 3/31/2026