# MandoCode 0.10.0
<p align="center"> <img src="docs/images/mcbanner.png" alt="MandoCode Logo" width="800"> </p>
<p align="center"> <strong>Your AI coding assistant — run locally or in the cloud with Ollama.</strong><br> No API keys required. Just you and your code. </p>
<p align="center"> <a href="https://www.nuget.org/packages/MandoCode"><img src="https://img.shields.io/nuget/v/MandoCode?logo=nuget&color=blue" alt="NuGet"></a> <img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT"> <img src="https://img.shields.io/badge/.NET-8.0-blueviolet?logo=dotnet" alt=".NET 8.0"> <img src="https://img.shields.io/badge/Ollama-Local%20LLM-black?logo=ollama" alt="Ollama"> <img src="https://img.shields.io/badge/Platform-Windows%20%7C%20Linux-lightgrey" alt="Platform"> <img src="https://img.shields.io/badge/Made%20with%20%3C3%20by-Mando-red" alt="Made with ❤️ by Mando"> </p>
<p align="center"> <img src="docs/images/hero-demo.gif" alt="MandoCode in action" width="800"> </p>
MandoCode is an AI coding assistant built on RazorConsole, powered by Semantic Kernel and Ollama. RazorConsole makes the entire terminal UI possible — Razor components, a virtual DOM, and Spectre.Console rendering all running in the console.
Run locally or connect to Ollama cloud — no API keys required for anything, including web search. It gives you Claude-Code-style project awareness — reading, writing, searching, planning, and web browsing across your entire codebase — without ever leaving your terminal. It understands any file type: C#, JavaScript, TypeScript, Python, CSS, HTML, JSON, config files, and more.
## Prerequisites

- [.NET 8 SDK](https://dotnet.microsoft.com/download/dotnet/8.0) — the SDK includes the runtime, so install only the SDK
- [Ollama](https://ollama.com/download) — MandoCode walks you through setup on first run
## Install

```shell
dotnet tool install -g MandoCode
mandocode
```
First run launches a guided wizard: it detects Ollama, offers to start it, walks you through cloud sign-in if you'd like more powerful models, and auto-pulls a sensible default. You can re-run it any time with `/setup`.
### Troubleshooting

```shell
mandocode --doctor
```

Prints your runtime version, Ollama status, models pulled, and cloud sign-in state.
### Or build from source

```shell
git clone https://github.com/DevMando/MandoCode.git
cd MandoCode
dotnet build src/MandoCode/MandoCode.csproj
dotnet run --project src/MandoCode/MandoCode.csproj -- /path/to/your/project
```
## What Makes MandoCode Different
<table> <tr> <td width="50%">
### Safe File Editing with Diff Approvals
Every file write and delete is intercepted with a color-coded diff. You approve, deny, or redirect — nothing touches disk without your say-so.
<img src="docs/images/diff-approval.png" alt="Diff approval" width="400">
</td> <td width="50%">
### @ File References
Type @ to autocomplete any project file and attach it as context. The AI sees the full file content alongside your prompt. Reference multiple files in a single message.
<img src="docs/images/file-autocomplete.gif" alt="File autocomplete" width="400">
</td> </tr> <tr> <td width="50%">
### Task Planner
Complex requests are automatically broken into step-by-step plans. Review the plan, then watch each step execute with progress tracking.
<img src="docs/images/task-planner.png" alt="Task planner" width="400">
</td> <td width="50%">
### Web Search & Fetch
The AI can search DuckDuckGo and read webpages to find documentation, tutorials, or answers — no API keys needed.
<img src="docs/images/web-search.png" alt="Web search" width="400">
</td> </tr> <tr> <td width="50%">
### Built-in Music Player
Lofi and synthwave tracks bundled right in. A waveform visualizer runs in the corner while you code. Because vibes matter.
<img src="docs/images/music-player.png" alt="Music player" width="400">
</td> <td width="50%">
### Offline-Friendly Startup
If Ollama isn't running, MandoCode shows setup guidance inline instead of a bare error. Use /retry to reconnect without restarting.
<img src="docs/images/offline-guidance.png" alt="Offline guidance" width="400">
</td> </tr> </table>
## Features at a Glance
| Area | Feature | Description |
|---|---|---|
| AI | Project-aware assistant | Reads, writes, deletes, and searches your entire codebase |
| AI | Web search & fetch | DuckDuckGo search and webpage reading — no API keys needed |
| AI | MCP server support | Connect to any Model Context Protocol server (stdio or remote HTTP) — Claude-Desktop-compatible config |
| AI | Streaming responses | Real-time output with animated spinners |
| AI | Task planner | Auto-detects complex requests and breaks them into steps |
| AI | Fallback function parsing | Handles models that output tool calls as raw JSON |
| UI | Diff approvals | Color-coded diffs with approve / deny / redirect |
| UI | Markdown rendering | Rich terminal output — headers, tables, code blocks, quotes |
| UI | Syntax highlighting | C#, Python, JavaScript/TypeScript, Bash |
| UI | Clickable file links | OSC 8 hyperlinks for file paths |
| UI | Terminal theme detection | Auto-adapts colors for light and dark terminals |
| UI | Taskbar progress | Windows Terminal integration during task execution |
| Input | `/` command autocomplete | Slash commands with dropdown navigation |
| Input | `@` file references | Attach file content to any prompt |
| Input | `!` shell escape | Run shell commands inline (`!git status`, `!ls`) |
| Input | `/copy` and `/copy-code` | Copy responses or code blocks to clipboard |
| Music | Lofi + synthwave | Bundled tracks with volume, genre switching, waveform visualizer |
| Config | Configuration wizard | Guided setup with model selection and connection testing |
| Config | Config validation | Auto-clamps invalid settings to safe ranges |
| Reliability | Retry + deduplication | Exponential backoff and duplicate call prevention |
| Education | `/learn` command | LLM education guide with optional AI educator chat |
## Commands

Type `/` to see the autocomplete dropdown, or `!` to run a shell command.
| Command | What it does |
|---|---|
| `/help` | Show commands and usage examples |
| `/setup` | Guided wizard — reconnect to Ollama, install/sign in, or pick a different model |
| `/model` | Quick switch — pick a different model + context size |
| `/config` | Adjust settings — model, temperature, max tokens, timeout, ignore dirs |
| `/retry` | Retry Ollama connection |
| `/learn` | Interactive guide to LLMs and local AI |
| `/copy` | Copy last AI response to clipboard |
| `/copy-code` | Copy code blocks from last response |
| `/command <cmd>` | Run a shell command |
| `/music` | Start playing music |
| `/music-stop` | Stop playback |
| `/music-pause` | Pause / resume |
| `/music-next` | Next track |
| `/music-vol <0-100>` | Set volume |
| `/music-lofi` | Switch to lofi |
| `/music-synthwave` | Switch to synthwave |
| `/music-list` | List available tracks |
| `/mcp` | List configured MCP servers with status and tool counts |
| `/mcp add` | Interactively add a new MCP server to config |
| `/mcp remove <name>` | Remove an MCP server from config |
| `/mcp tools <server>` | List tools exposed by connected MCP servers (server optional) |
| `/mcp-reload` | Restart all MCP servers and re-register their tools |
| `/clear` | Clear conversation history |
| `/exit` | Exit MandoCode |
| `!<cmd>` | Shell escape (e.g., `!git status`) |
| `!cd <path>` | Change project root directory |
### Setup vs config vs model

- `/setup` — first-run wizard, guided. Detects Ollama, offers to install it, walks you through cloud sign-in, picks a model with hardware-aware tiers, auto-pulls a sensible default. Use when something's broken or you're a newcomer.
- `/model` — quick switch. Pick a model from your pulled list plus a context size. Use when you just want to swap models.
- `/config` — adjust settings. Full configuration form covering temperature, timeouts, ignore dirs, etc. Use when you know exactly what knob you want to turn.
### CLI flags (outside the chat loop)

```shell
mandocode --doctor                     # preflight check: .NET runtime, Ollama status, models, sign-in
mandocode --config show                # print current config
mandocode --config init                # create a default config file
mandocode --config set <key> <value>   # set a single value (e.g. set model qwen3.5:9b)
mandocode --config path                # show config file location
```

Run `mandocode --doctor` any time chat is misbehaving — it exits 0 if everything's green, 1 if anything's missing, with a clear summary of what's wrong.
## How It Works

```
You type a prompt
        |
MandoCode adds project context (@files, system prompt)
        |
Semantic Kernel sends to Ollama (local or cloud model)
        |
AI responds with text + function calls
        |
File operations go through diff approval;
web searches and fetches run directly
        |
Rich markdown rendered in your terminal
```
The AI has sandboxed access to your project through a `FileSystemPlugin` (9 functions: list files, glob search, read, write, delete files/folders, text search, path resolution) and a `WebSearchPlugin` (web search via DuckDuckGo, webpage fetching — no API keys required). All file operations are locked to your project root — path traversal is blocked.
## Recommended Models

Models with tool/function-calling support work best with MandoCode. The first-run wizard offers exactly the models below — it auto-pulls the cloud default, or lets you pick a local tier matched to your hardware.

**Cloud** (no GPU required — runs on Ollama's servers, free with `ollama signin`):

| Model | Notes |
|---|---|
| `minimax-m2.7:cloud` | Default — auto-pulled by `/setup` when you pick Cloud |
**Local** (fully offline, runs on your hardware):

| Model | Size | Hardware |
|---|---|---|
| `qwen3.5:0.8b` | ~1.0 GB | CPU-only / integrated GPU — fast on any laptop, light reasoning |
| `qwen3.5:2b` | ~2.7 GB | Modern CPU or 4 GB+ GPU — quick Q&A, simple code edits |
| `qwen3.5:4b` | ~3.4 GB | Mid-range GPU (4–6 GB VRAM) or 16 GB RAM — balanced day-to-day use |
| `qwen3.5:9b` | ~6.6 GB | Dedicated GPU (8+ GB VRAM) — best local quality, multi-file refactors |
MandoCode validates model compatibility on startup. Run `/learn` for a detailed guide on model sizes and hardware requirements, or `/setup` to switch between tiers any time.
<details> <summary><h2>Configuration Reference</h2></summary>
### Config File

Located at `~/.mandocode/config.json`:

```json
{
  "ollamaEndpoint": "http://localhost:11434",
  "modelName": "minimax-m2.7:cloud",
  "modelPath": null,
  "temperature": 0.7,
  "maxTokens": 4096,
  "ignoreDirectories": [],
  "enableDiffApprovals": true,
  "enableTaskPlanning": true,
  "enableTokenTracking": true,
  "enableThemeCustomization": true,
  "enableFallbackFunctionParsing": true,
  "functionDeduplicationWindowSeconds": 5,
  "maxRetryAttempts": 2,
  "music": {
    "volume": 0.5,
    "genre": "lofi",
    "autoPlay": false
  }
}
```
### All Options

| Key | Default | Description |
|---|---|---|
| `ollamaEndpoint` | `http://localhost:11434` | Ollama server URL |
| `modelName` | `minimax-m2.7:cloud` | Model to use |
| `modelPath` | `null` | Optional path to a local GGUF model file |
| `temperature` | `0.7` | Response creativity (0.0 = focused, 1.0 = creative) |
| `maxTokens` | `4096` | Maximum response token length |
| `ignoreDirectories` | `[]` | Additional directories to exclude from file scanning |
| `enableDiffApprovals` | `true` | Show diffs and prompt for approval before file writes/deletes |
| `enableTaskPlanning` | `true` | Enable automatic task planning for complex requests |
| `enableTokenTracking` | `true` | Show session token totals and per-response token costs |
| `enableThemeCustomization` | `true` | Detect terminal theme and apply a curated ANSI palette |
| `enableFallbackFunctionParsing` | `true` | Parse function calls from text output |
| `functionDeduplicationWindowSeconds` | `5` | Time window to prevent duplicate function calls |
| `maxRetryAttempts` | `2` | Max retry attempts for transient errors |
| `music.volume` | `0.5` | Music volume (0.0–1.0) |
| `music.genre` | `lofi` | Default genre (`lofi` or `synthwave`) |
| `music.autoPlay` | `false` | Auto-start music on launch |
### CLI Config Commands

```shell
mandocode config show                # Display current configuration
mandocode config init                # Create default configuration file
mandocode config set <key> <value>   # Set a configuration value
mandocode config path                # Show configuration file location
mandocode config --help              # Show help
```
### Environment Variables

| Variable | Overrides |
|---|---|
| `OLLAMA_ENDPOINT` | `ollamaEndpoint` in config |
| `OLLAMA_MODEL` | `modelName` in config |
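For a one-off session, both variables can be set inline. This is a sketch only — the host address and model name below are placeholders, not defaults:

```shell
# Point a single session at a remote Ollama host and a specific model
# without editing ~/.mandocode/config.json (placeholder host/model):
OLLAMA_ENDPOINT="http://192.168.1.50:11434" OLLAMA_MODEL="qwen3.5:4b" mandocode
```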
</details>
<details> <summary><h2>Diff Approvals — Deep Dive</h2></summary>
When the AI writes or deletes a file, MandoCode intercepts the operation and shows a color-coded diff before applying changes.
### What You See

- **Red lines** — content being removed
- **Light blue lines** — content being added
- **Dim lines** — unchanged context (3 lines around each change)
- Long unchanged sections are collapsed with a summary
### Approval Options

| Option | Behavior |
|---|---|
| Approve | Apply this change |
| Approve - Don't ask again | Auto-approve future changes to this file (per-file), or all files (global) |
| Deny | Reject the change; the AI is told it was denied |
| Provide new instructions | Redirect the AI with custom feedback |
For new files, "don't ask again" sets a global bypass — all future writes and deletes are auto-approved for the session. For existing files, the bypass is per-file.
Even when auto-approved, diffs are still rendered so you can follow along.
### Delete Approvals

File deletions show all existing content as red removals with a deletion warning. The same approval options apply.

### Toggle

```shell
mandocode config set diffApprovals false
```
</details>
<details> <summary><h2>@ File References — Deep Dive</h2></summary>
Type `@` anywhere in your input (after a space or at position 0) to trigger file autocomplete. A dropdown appears showing your project files, filtered as you type.
### How It Works

1. Type your prompt and hit `@` — a file dropdown appears
2. Type a partial name to filter (e.g., `Conf`) — matches narrow down
3. Use arrow keys to navigate, Tab or Enter to select
4. The selected path is inserted (e.g., `@src/MandoCode/Models/MandoCodeConfig.cs`)
5. Continue typing and press Enter to submit
6. MandoCode reads the referenced file(s) and injects the content as context for the AI
### Examples

```
explain @src/MandoCode/Services/AIService.cs to me
what does the ProcessFileReferences method do in @src/MandoCode/Components/App.razor
refactor @src/MandoCode/Models/LoadingMessages.cs to use fewer spinners
```

Multiple `@` references in one prompt are supported. Files over 10,000 characters are automatically truncated.
### Controls

| Key | Action |
|---|---|
| `@` | Open file dropdown |
| Type | Filter files by name |
| Up/Down | Navigate dropdown |
| Tab/Enter | Insert selected file path (does not submit) |
| Escape | Close dropdown, keep text |
| Backspace | Re-filter, or close if you delete past the `@` |
</details>
<details> <summary><h2>Task Planner — Deep Dive</h2></summary>
MandoCode automatically detects complex requests and offers to break them into a step-by-step plan before execution.
### Triggers

The planner activates for requests like:

- `Create a REST API service with authentication and rate limiting for the user module` (12+ words with an imperative verb and a scope indicator)
- `Build an application that handles user registration and sends email confirmations`
- Numbered lists with 3+ items
- Requests over 400 characters

Simple questions, short prompts, and single-action operations (delete, remove, read, show, list, find, search, rename) bypass planning automatically.
### Workflow

1. **Detection** — heuristics identify complex requests
2. **Plan generation** — the AI creates numbered steps
3. **User approval** — review the plan table, then choose: execute, skip planning, or cancel
4. **Step-by-step execution** — each step runs with progress tracking
5. **Error handling** — skip failed steps or cancel the entire plan
See Task Planner Documentation for full technical details.
</details>
<details> <summary><h2>/learn — LLM Education</h2></summary>
The `/learn` command helps new users understand local LLMs and get set up.
| Scenario | What happens |
|---|---|
| Startup, no Ollama detected | Automatically displays the educational guide instead of a bare error |
| `/learn` typed, no model running | Displays the static educational guide |
| `/learn` typed, model is running | Shows the guide, then offers to enter AI educator chat mode |
### Educational Content

- **What are Open-Weight LLMs?** — Free, private, offline models vs. cloud AI
- **Model Sizes & Hardware** — Parameters, quantization, VRAM requirements
- **Cloud vs Local Models** — Ollama cloud models (no GPU) vs local models
- **Recommended Models** — Table of cloud and local options
- **Getting Started** — Step-by-step setup instructions
### AI Educator Chat Mode

When Ollama is running, `/learn` offers an interactive chat mode where the AI explains LLM concepts using beginner-friendly language. Type `/clear` to return to normal mode.
</details>
<details> <summary><h2>MCP Servers — Deep Dive</h2></summary>
MandoCode speaks the Model Context Protocol as a client, which means you can plug in any published MCP server — filesystem, database, GitHub, Linear, Slack, whatever — and its tools show up to the model alongside MandoCode's built-in plugins.
### Adding a server

Two ways:

1. `/mcp add` inside MandoCode — an interactive wizard that prompts through name, transport, URL/command, and optional headers/env vars, previews the JSON, and saves + reloads automatically.
2. Hand-edit `~/.mandocode/config.json` — useful when copy-pasting an `mcpServers` block from a server's README. Run `/mcp-reload` after saving.
### Config shape

The `mcpServers` block mirrors Claude Desktop's schema, so you can copy-paste any server's README installation snippet directly into `~/.mandocode/config.json`:

```json
{
  "enableMcp": true,
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allow"]
    },
    "solana": {
      "url": "https://mcp.solana.com/mcp",
      "transport": "http"
    },
    "github": {
      "url": "https://api.githubcopilot.com/mcp/",
      "headers": { "Authorization": "Bearer ghp_your_token_here" },
      "autoApprove": ["list_issues", "get_pr"]
    }
  }
}
```
### Transports

- **stdio** — for local servers. Populate `command` + `args` + optional `env`. Works with any server published as an npm/pip/go binary.
- **HTTP / SSE** — for remote servers. Populate `url`; the client auto-detects Streamable HTTP or SSE. Custom headers go in `headers` — most commonly `Authorization: Bearer …` for servers that accept static tokens.
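As an illustration, a stdio entry that passes an environment variable might look like the sketch below. The server name, package, and variable are hypothetical — check your server's README for the real values:

```json
{
  "mcpServers": {
    "my-db-server": {
      "command": "npx",
      "args": ["-y", "some-mcp-db-server"],
      "env": { "DATABASE_URL": "postgres://localhost:5432/mydb" }
    }
  }
}
```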
### Does MandoCode need Node?

No. MandoCode itself is pure .NET. But individual servers may need whatever runtime their `command` points at — Node for `npx`, Python for `uvx`, or nothing extra for standalone binaries. Same situation as Claude Desktop, Cursor, and VS Code.
### OAuth-only servers

Native OAuth is not in this release. For servers that require an OAuth flow (some hosted connectors like Google Drive), wrap them in stdio via the community `mcp-remote` proxy, which handles the browser dance itself:

```json
"gdrive": {
  "command": "npx",
  "args": ["mcp-remote", "https://example.com/mcp"]
}
```
### Approvals

MandoCode cannot tell a read-only MCP tool from a destructive one by inspecting arguments, so the first call of each (server, tool) pair prompts you with Approve / Approve for session / Deny. Pre-trusted tools can be listed under `autoApprove` in a server's config entry to skip the prompt entirely.
### Slash commands

- `/mcp` — shows each configured server with its transport, connection status, and live tool count
- `/mcp add` — interactive wizard for adding a new server without hand-editing JSON
- `/mcp remove <name>` — remove a server from config (with confirmation)
- `/mcp tools <server>` — list every tool exposed by connected servers with descriptions (server arg optional — omit to list all)
- `/mcp-reload` — tears down every MCP client, restarts them, and re-registers their tools on the kernel (useful when you edit the config mid-session)
### Toggle

```shell
mandocode --config set mcp false   # disable all MCP integration
```

Individual servers can be muted without deleting them — set `"disabled": true` on any entry in `mcpServers`.
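For example, muting the filesystem server from the config shape shown earlier, without deleting its entry (sketch):

```json
"filesystem": {
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allow"],
  "disabled": true
}
```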
</details>
<details> <summary><h2>AI Plugins</h2></summary>
### FileSystemPlugin

The AI has sandboxed access to your project directory through these functions:

| Function | Description |
|---|---|
| `list_all_project_files()` | Recursively lists all project files, excluding ignored directories |
| `list_files_match_glob_pattern(pattern)` | Lists files matching a glob pattern (`*.cs`, `src/**/*.ts`) |
| `read_file_contents(relativePath)` | Reads complete file content with line count |
| `write_file(relativePath, content)` | Writes/creates a file (creates directories as needed) |
| `delete_file(relativePath)` | Deletes a file |
| `create_folder(relativePath)` | Creates a new directory |
| `delete_folder(relativePath)` | Deletes a directory and all its contents |
| `search_text_in_files(pattern, searchText)` | Searches file contents for text; returns paths and line numbers |
| `get_absolute_path(relativePath)` | Converts a relative path to absolute |
**Security:** All operations are sandboxed to the project root. Path traversal is blocked with a separator-boundary check.

**Ignored directories:** `.git`, `node_modules`, `bin`, `obj`, `.vs`, `.vscode`, `packages`, `dist`, `build`, `__pycache__`, `.idea` — plus any custom directories from your config.
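Custom exclusions come from the `ignoreDirectories` key in `~/.mandocode/config.json`. A sketch — the directory names here are purely illustrative:

```json
{
  "ignoreDirectories": ["coverage", ".cache", "vendor"]
}
```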
### WebSearchPlugin

The AI can search the web and fetch page content — no API keys required.

| Function | Description |
|---|---|
| `search_web(query, maxResults)` | Searches DuckDuckGo and returns titles, URLs, and snippets (1–10 results) |
| `fetch_webpage(url, maxCharacters)` | Fetches a URL and extracts readable text content (500–15,000 chars) |

Web search uses DuckDuckGo's HTML endpoint. Fetched pages are cleaned of scripts, nav, and non-content elements via HtmlAgilityPack.
</details>
<details> <summary><h2>Reliability & Internals</h2></summary>
### Retry Policy

Transient errors (HTTP failures, timeouts, socket errors) are retried with exponential backoff:

```
Attempt 1 -> fail -> wait 500ms
Attempt 2 -> fail -> wait 1000ms
Attempt 3 -> fail -> throw
```
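The schedule above follows a base-times-two progression. A quick sketch of the delay math — the 500 ms base is read off the diagram, so treat the constants as illustrative rather than a guarantee of the implementation:

```shell
# Backoff delay per attempt: base * 2^(attempt - 1)
base_ms=500
for attempt in 1 2; do
  echo "attempt $attempt failed -> wait $(( base_ms * (1 << (attempt - 1)) ))ms"
done
echo "attempt 3 failed -> throw"
```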
### Function Deduplication
| Operation | Window | Matching |
|---|---|---|
| Read operations | 2 seconds | Function name + arguments |
| Write operations | 5 seconds (configurable) | Function name + path + content hash (SHA256) |
### Fallback Function Parsing

Some local models output function calls as JSON text instead of proper tool calls. MandoCode detects and parses:

- Standard: `{"name": "func", "parameters": {...}}`
- OpenAI-style: `{"function_call": {"name": "func", "arguments": {...}}}`
- Tool calls: `{"tool_calls": [{"function": {"name": "func", "arguments": {...}}}]}`
### Markdown Rendering

AI responses are rendered as rich terminal output:

| Markdown | Rendered as |
|---|---|
| `**bold**` | Bold text |
| `*italic*` | Italic text |
| `` `code` `` | Cyan highlighted |
| Fenced code blocks | Bordered panels with syntax highlighting |
| Tables | Spectre.Console table widgets |
| `# Headers` | Bold yellow with horizontal rules |
| `- lists` | Indented bullet points |
| `> quotes` | Grey-bordered block quotes |
| URLs | Clickable OSC 8 hyperlinks |
Syntax highlighting supports C#, Python, JavaScript/TypeScript, and Bash with language-specific keyword coloring.
### Token Tracking

- **Per-response:** `[~1.2k in, 847 out]` after each AI response
- **Session total:** `Total [4.2k tokens]` above the prompt
- **File estimates:** `@file` attachments show estimated token cost (chars/4)
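The chars/4 estimate is easy to reproduce yourself. A sketch — the file name and contents below are made up for the example:

```shell
# Reproduce the chars/4 token estimate for an @file attachment:
# count the file's characters and divide by four.
printf 'hello world, this is a sample file' > /tmp/sample.txt
chars=$(wc -c < /tmp/sample.txt)
echo "~$(( chars / 4 )) tokens"   # 34 chars -> ~8 tokens
```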
### Event-Based Completion Tracking
Function executions use semaphore-based signaling, ensuring each task plan step fully completes before the next begins.
</details>
## Architecture

```
src/MandoCode/
  Components/   Razor UI (App, Banner, HelpDisplay, ConfigMenu, Prompt)
  Services/     Core logic (AI, markdown, syntax, tokens, music, diffs, input state machine)
  Models/       Data models, config, system prompts, educational content
  Plugins/      Semantic Kernel plugins (FileSystem, WebSearch)
  Audio/        Bundled lofi and synthwave MP3 tracks
  Program.cs    Entry point and DI registration
docs/           Feature and architecture documentation
```
## Dependencies
| Package | Purpose |
|---|---|
| Microsoft.SemanticKernel 1.72.0 | LLM orchestration and plugin system |
| Ollama Connector 1.72.0-alpha | Ollama model integration |
| RazorConsole.Core 0.5.0-alpha | Terminal UI with Razor components |
| Markdig 1.0.0 | Markdown parsing |
| NAudio 2.2.1 | Audio playback |
| HtmlAgilityPack 1.11.72 | HTML parsing for web search |
| FileSystemGlobbing 10.0.3 | Glob pattern matching |
## Why .NET?
Most AI coding agents in the wild are built with Python, Rust, or TypeScript. .NET rarely gets mentioned — but it should.
Semantic Kernel is Microsoft's open-source SDK for building AI agents, and it's one of the most capable orchestration frameworks available: native plugin systems, function calling, structured planning, and first-class support for local models through connectors like Ollama. It runs cross-platform on Windows, Linux, and macOS.
MandoCode exists partly to prove the point: you can build a full-featured, agentic CLI tool on .NET and Semantic Kernel that stands alongside anything built in other ecosystems. The tooling is there. It's open source. It just doesn't get the attention it deserves.
<p align="center"> <a href="LICENSE">MIT License</a> </p>
## v0.10.0 — File-read and cloud-signin reliability
### Fixed
- Nested same-name folder reads — `read_file_contents`, `write_file`, and `edit_file` no longer mangle paths in projects whose folder layout repeats the project root's last segment (e.g. the `MyApp/MyApp/MyApp.csproj` pattern from `dotnet new`). The literal path is resolved first, with the redundant-prefix heuristic only firing when the un-stripped target doesn't exist.
- `ollama signin` not launching after fresh install — `RunOllamaSigninAsync`, `TryStartOllamaProcess`, and `AutoPullAsync` now resolve `ollama` via canonical install paths when the bare command isn't on the running process's PATH. Fixes the silent "browser never opens" failure when MandoCode is launched in the same shell that just installed Ollama.
- First-run wizard imperative output getting repainted — the wizard now suppresses `HomeView` for the entire run, not just the 401 auto-recovery path. Closes the latent VDOM redraw race that could clobber `ollama signin` URLs and other progress lines.
### Added
- Auto-open browser on `ollama signin` — MandoCode now captures Ollama's stdout, scans for the sign-in URL, and opens the browser itself instead of relying on Ollama's own launch behavior. URL also re-emitted via `AnsiConsole.MarkupLine` as a copy/paste fallback.
### UI
- Green selection highlight in the setup wizard — every wizard `SelectionPrompt` (model picker, "What would you like to do?", "Sign me in now", starter-model picker) now uses green instead of Spectre's default blue.