BSE_Code 1.6.0
BSE-Code

The AI coding assistant that lives in your terminal - works with ANY LLM, zero lock-in, zero compromise.

Chat with your codebase, read and write files, run shell commands, connect MCP servers, build reusable skills, persist project memory, and pick up right where you left off - all from a gorgeous interactive REPL.
Why BSE-Code?

| Feature | Details |
|---|---|
| Any LLM, anywhere | OpenRouter, OpenAI, Anthropic, Google AI, Ollama, LM Studio, Local AI Foundry, or any OpenAI-compatible endpoint |
| Start for free | OpenRouter's free tier gives you Gemini 2.5 Pro, Llama 4, DeepSeek R1 - no credit card |
| Fully local | Ollama or LM Studio - no API key, no data leaving your machine |
| Context-aware | Project memory, skills, and MCP tools are all injected automatically into every session |
| Session persistence | Save and resume conversations per project - never lose context again |
| Beautiful terminal UI | 6 built-in themes, interactive slash picker, history navigation, full cursor editing |
| Instant shell access | `!git status`, `!npm run build` - run any command without leaving the chat |
| File injection | `@src/auth.ts` - drop any file or directory straight into your prompt |
Install

via .NET tool (NuGet)

Requires the .NET 10 SDK.

```shell
dotnet tool install --global BSE_Code
```

Update to the latest version:

```shell
dotnet tool update --global BSE_Code
```

via npm

Requires Node.js 18+. No .NET SDK needed - the binary is bundled.

```shell
npm install -g bse-code
```
Supported Providers

| # | Provider | Models | API Key |
|---|---|---|---|
| 1 | OpenRouter | 100+ models, free tier available | Yes (free at openrouter.ai) |
| 2 | OpenAI | GPT-4o, o3, o1, GPT-3.5 | Yes |
| 3 | Anthropic | Claude 3.7/3.5 Sonnet, Haiku, Opus | Yes |
| 4 | Google AI | Gemini 2.5 Pro/Flash, 2.0, 1.5 | Yes (free tier) |
| 5 | Ollama | llama3, mistral, qwen, deepseek… | No (local) |
| 6 | LM Studio | Any model loaded in LM Studio | No (local) |
| 7 | Local AI Foundry | Phi-4, Phi-3.5 Mini, and more | No (local) |
| 8 | Custom | Any OpenAI-compatible endpoint | Optional |
First-run Setup

On first run, an interactive wizard walks you through everything:
- Pick a provider
- Set the base URL (pre-filled for known providers)
- Enter your API key (skipped for local providers)
- Browse available models and pick one
- Everything is saved to `~/.bse-code/config.json`

Re-run the wizard any time:

```shell
bse-code --config
```
Quick-start by provider

OpenRouter - free models, no credit card

```shell
bse-code --config
# Select [1] OpenRouter
# Get a free key at: https://openrouter.ai/keys
# Pick Gemini 2.5 Pro, Llama 4, DeepSeek R1 - all free!
```

Ollama - fully local, zero cost

```shell
ollama pull llama3.2
bse-code --config
# Select [5] Ollama → accept default URL → pick your model
```

OpenAI

```shell
bse-code --config
# Select [2] OpenAI → https://platform.openai.com/api-keys
```

Anthropic

```shell
bse-code --config
# Select [3] Anthropic → https://console.anthropic.com/settings/keys
```

Google AI (Gemini)

```shell
bse-code --config
# Select [4] Google AI → https://aistudio.google.com/app/apikey
```

LM Studio

```shell
# 1. Open LM Studio, load a model, start the local server
bse-code --config
# Select [6] LM Studio → accept default URL (http://localhost:1234/v1)
```

Local AI Foundry

```shell
bse-code --config
# Select [7] Local AI Foundry → accept default URL (http://localhost:5272/v1)
```

Custom endpoint

```shell
bse-code --config
# Select [8] Custom → enter your URL, key, and model name
```
Usage

Interactive REPL (recommended)

```shell
bse-code
```

On startup the REPL prints a "BSE code" ASCII banner followed by a status summary:

```text
provider: OpenRouter
model   : google/gemini-2.5-pro-exp-03-25:free
theme   : default
cwd     : my-project
skills  : 2 loaded
mcp     : 5 tools from 1 server(s)
memory  : 1 BSE.md file(s) loaded

type /help for commands · /exit to quit

my-project (main) ❯
```
One-shot mode

```shell
bse-code -p "explain the auth flow in src/auth/"
bse-code -p "list all TODO comments" --output-format json
```
All CLI flags

```text
bse-code                            # Interactive REPL
bse-code -p "<prompt>"              # One-shot prompt
bse-code --model <model-id>         # Override model for this session
bse-code --theme <name>             # Set color theme for this session
bse-code --output-format json|text  # Output format (one-shot only)
bse-code --config                   # Re-run the setup wizard
bse-code --version, -v              # Show version
bse-code --help, -h                 # Show help
```
Special Input Prefixes

Two power-user shortcuts that make BSE-Code feel like a real dev tool:

`@` - File & directory injection

Drop any file or folder straight into your prompt. Paths tab-complete as you type.

```text
@src/auth.ts explain this file
@src/auth/ summarize all files in this folder
@README.md what's missing from this doc?
```

Directories inject up to 20 files automatically - perfect for asking about a whole module at once.

`!` - Shell passthrough

Run any shell command instantly, with no AI involved; output appears right in your terminal.

```text
!git status
!dotnet build
!npm run test
!ls -la src/
```
REPL Slash Commands

Core

| Command | Description |
|---|---|
| `/clear` | Wipe conversation history - fresh start |
| `/model [id]` | Show current model or switch to a new one mid-session |
| `/compact [hint]` | Ask the AI to summarize history and trim tokens |
| `/stats` | Show session stats (duration, turns, tool calls, messages, model, provider, theme, skills, MCP tools) |
| `/tools` | List all available built-in and MCP tools |
| `/help` | Show all commands |
| `/exit` or `/quit` | Quit |
Appearance

| Command | Description |
|---|---|
| `/theme` | List all available themes, marking the active one |
| `/theme <name>` | Switch theme - persisted to config |
Skills

| Command | Description |
|---|---|
| `/skills` | List all loaded skills (user + project level) |
| `/<skill-name>` | Invoke a skill |
| `/<skill-name> @file.ts` | Invoke a skill with a file argument |
MCP

| Command | Description |
|---|---|
| `/mcp` | List all connected MCP servers and their tools |
| `/mcp reload` | Hot-reload MCP servers without restarting |
Memory

| Command | Description |
|---|---|
| `/memory` | Show all loaded BSE.md files |
| `/memory add <text>` | Append a note to ./BSE.md instantly |
| `/memory refresh` | Reload all BSE.md files and refresh the system prompt |
| `/init` | Scaffold a BSE.md in the current directory |
Sessions

| Command | Description |
|---|---|
| `/save <tag>` | Save the current conversation with a tag |
| `/resume` | List all saved sessions for this project |
| `/resume <tag>` | Restore a saved session and pick up where you left off |
Interactive Input - Feels Like a Real Shell

The REPL has a fully interactive input reader. No more typing blind.

`/` - Slash command picker

Type `/` and an inline menu pops up instantly:

```text
/  ↑↓ navigate · Enter select · Esc cancel
▶ /clear    clear conversation history
  /model    show or switch model
  /compact  summarize history to save tokens
  /theme    list or set color theme
  /skills   list loaded skills
  /mcp      list MCP servers and tools
  /memory   show loaded BSE.md files
  /save     save conversation
  /resume   list or resume a saved session
  …
```

- Arrow keys (↑/↓) navigate the list
- Type more characters to filter live - `/th` narrows to `/theme`
- Enter selects; Esc cancels and lets you type manually
- Tab completes the top match
- Your skills appear right alongside built-in commands
History navigation

- ↑/↓ arrows cycle through previous inputs - just like your shell
- Your current draft is preserved when you browse back
Full cursor editing

- ←/→ move the cursor anywhere in the line
- Home / End jump to start or end instantly
- Backspace / Delete work at any cursor position
Tab completion

- On `/<cmd>` - completes or opens the slash picker
- On `@<path>` - completes file and directory paths from the filesystem
Skills - Reusable AI Workflows

Skills are markdown files that give the AI reusable instructions or workflows. Write once, invoke from any project.

Locations (both are loaded and merged):
- `~/.bse-code/skills/` - user-level, available in every project
- `.bse-code/skills/` - project-level, scoped to this repo

Example skill (`.bse-code/skills/review.md`):

```markdown
# Code Review
Review the provided code for:
- Correctness and logic errors
- Performance issues
- Security vulnerabilities
- Code style and readability
Provide specific, actionable feedback with line references.
```

Invoke it:

```text
/review
/review @src/PaymentService.cs
```

Skills are also injected into the system prompt automatically - the AI always knows what skills are available.
Project Memory (BSE.md)

BSE.md files are loaded automatically at startup and injected into every session's system prompt. Teach the AI about your project once and it remembers forever. Similar to Claude's CLAUDE.md and Gemini's GEMINI.md.

Hierarchy - all three are merged:

| File | Scope |
|---|---|
| `~/.bse-code/BSE.md` | Global - your personal preferences across all projects |
| `./BSE.md` | Project - tech stack, commands, coding standards |
| `./BSE.local.md` | Local overrides - add to .gitignore |
Scaffold one instantly:

```text
bse-code
/init
```

This creates a BSE.md template with sections for project overview, tech stack, dev commands, and coding standards - ready to fill in.
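As an illustration, a filled-in project-level BSE.md might look like this (the section names mirror the template described above; the project details are placeholders, not output from the tool):

```markdown
# Project Overview
An ASP.NET Core web API for order processing.

# Tech Stack
- .NET 10, C#, xUnit

# Dev Commands
- Build: `dotnet build`
- Test:  `dotnet test`

# Coding Standards
- Always use async/await; never `.Result` or `.Wait()`
```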
Add notes on the fly:

```text
/memory add always use async/await, never .Result or .Wait()
/memory add run `dotnet test` before committing
```
MCP (Model Context Protocol)

Connect any external tool or service to BSE-Code via MCP servers. GitHub, databases, Slack, custom APIs - if it speaks MCP, it works here. Tools are discovered automatically and made available to the AI.

Config file: `~/.bse-code/mcp.json`

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
      "disabled": false
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}
```

- MCP tools are available to the AI as `mcp__serverName__toolName`
- Hot-reload without restarting: `/mcp reload`
- Inspect what's connected: `/mcp`
- Disable a server without removing it: `"disabled": true`
Built-in AI Tools

The AI can use these tools autonomously to get things done:

| Tool | What it does |
|---|---|
| `read_file` | Read any file's contents |
| `Write` | Write or create a file (auto-creates parent directories) |
| `Bash` | Execute shell commands - cross-platform (cmd.exe on Windows, bash on Unix) |
| `list_dir` | List files and subdirectories at a path |
| `glob` | Find files matching a glob pattern (e.g. `src/**/*.cs`) |
| `grep` | Search files with a regex pattern (up to 200 matches, recursive by default) |
| `mcp__*__*` | Any tool from your connected MCP servers |

Tool calls are shown inline as the AI works - you see exactly what it's doing in real time, with a success or failure marker per call.
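For intuition, the `glob` and `grep` tools behave roughly like the following shell commands (an illustrative sketch only; the real tools run inside BSE-Code, and the scratch files here are hypothetical):

```shell
# Set up a scratch tree so the commands below have something to match
mkdir -p /tmp/bse-glob-demo/src/sub
echo '// TODO: handle null' > /tmp/bse-glob-demo/src/sub/A.cs

# glob: find files matching a pattern like src/**/*.cs
find /tmp/bse-glob-demo/src -name '*.cs'

# grep: recursive regex search (the built-in tool caps results at 200 matches)
grep -rn 'TODO' /tmp/bse-glob-demo/src
```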
Session Management

Never lose a good conversation. Save any session with a tag and resume it later - even across restarts.

```text
/save auth-refactor

/resume
# shows all saved sessions for this project:
#   auth-refactor   2025-04-24 14:32   18 messages  [gpt-4o]
#   bug-hunt        2025-04-23 09:15   31 messages  [claude-3-5-sonnet]

/resume auth-refactor
# Resumed session 'auth-refactor' (18 messages) - welcome back!
```

Sessions are stored per-project in `~/.bse-code/sessions/` using a SHA-256 hash of the project path - no collisions, no mess. Each session records the tag, model, timestamp, working directory, and full message history.
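The per-project directory name can be sketched like this (hypothetical: the README only states that a SHA-256 hash of the project path is used; the exact string encoding BSE-Code hashes is an assumption):

```shell
# Derive a stable, collision-resistant directory name from the project path
project="/home/alice/my-project"   # hypothetical project path
hash=$(printf '%s' "$project" | sha256sum | cut -d' ' -f1)
echo "~/.bse-code/sessions/$hash/"
```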
Session Statistics

See exactly what's happening in your session:

```text
/stats

Session stats
  duration   : 00:23:41
  turns      : 12
  tool calls : 34
  messages   : 47
  model      : google/gemini-2.5-pro-exp-03-25:free
  provider   : OpenRouter
  theme      : dracula
  skills     : 3
  mcp tools  : 8
```
Conversation Compaction

Running low on context? Compact the conversation into a tight summary without losing the important bits.

```text
/compact
/compact focus on the auth changes we made
```

The AI summarizes the conversation, the history is trimmed, and you keep going - same context, far fewer tokens.
Themes

Six beautiful built-in themes. Switch any time; your choice is persisted automatically.

| Theme | Accent | Vibe |
|---|---|---|
| `default` | Cyan | Classic terminal |
| `dracula` | Magenta/Purple | Dark and moody |
| `monokai` | Yellow | Warm and punchy |
| `ocean` | Blue | Cool and calm |
| `forest` | Green | Fresh and focused |
| `light` | Dark on light | For light terminals |

Each theme customizes accent, prompt, response, tool call, success/error state, skill, MCP, and git-branch colors.

```text
/theme dracula          # switch and persist
bse-code --theme ocean  # one session only
```
Configuration

Config file: `~/.bse-code/config.json`

```json
{
  "provider": "OpenRouter",
  "api_key": "sk-or-...",
  "model": "google/gemini-2.5-pro-exp-03-25:free",
  "base_url": "https://openrouter.ai/api/v1",
  "theme": "default"
}
```

For local providers, no API key is needed:

```json
{
  "provider": "Ollama",
  "api_key": "local",
  "model": "llama3.2",
  "base_url": "http://localhost:11434/v1",
  "theme": "forest"
}
```
Environment variables

Environment variables always override the config file - great for CI/CD or switching contexts fast.

| Variable | Description |
|---|---|
| `BSE_PROVIDER` | Provider name (OpenRouter, OpenAI, Anthropic, Google, Ollama, LmStudio, LocalAiFoundry, Custom) |
| `BSE_API_KEY` | API key for the selected provider |
| `BSE_MODEL` | Model ID to use |
| `BSE_BASE_URL` | Override the API base URL |

Legacy variables: `OPENROUTER_API_KEY`, `OPENROUTER_MODEL`, and `OPENROUTER_BASE_URL` are still accepted for backwards compatibility.
PowerShell (persist):

```powershell
[System.Environment]::SetEnvironmentVariable('BSE_PROVIDER', 'OpenAI', 'User')
[System.Environment]::SetEnvironmentVariable('BSE_API_KEY', 'your-key', 'User')
[System.Environment]::SetEnvironmentVariable('BSE_MODEL', 'gpt-4o', 'User')
```

Bash (persist):

```shell
export BSE_PROVIDER="Ollama"
export BSE_MODEL="llama3.2"
# No BSE_API_KEY needed for local providers
```
File Structure

```text
~/.bse-code/
├── config.json          # Provider, API key, model, base URL, theme
├── mcp.json             # MCP server definitions
├── BSE.md               # Global memory (injected into every session)
├── skills/
│   └── *.md             # User-level skills (available in all projects)
└── sessions/
    └── <project-hash>/  # Saved conversations, isolated per project
        └── *.json

.bse-code/               # Project-level (commit this to your repo)
├── BSE.md               # Project memory
└── skills/
    └── *.md             # Project-level skills

./BSE.md                 # Project memory (root level, same as above)
./BSE.local.md           # Local overrides - add to .gitignore
```
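If you prefer not to use `/init`, the project-level layout above can also be created by hand. A sketch (paths under `/tmp` and file contents are placeholders for illustration):

```shell
# Scaffold the project-level files BSE-Code looks for
proj=/tmp/bse-demo-project
mkdir -p "$proj/.bse-code/skills"
printf '# Project Memory\n\nTech stack: .NET 10\n' > "$proj/BSE.md"
printf '# Code Review\n\nReview the provided code for correctness.\n' > "$proj/.bse-code/skills/review.md"
ls -R "$proj"
```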
๐ฆ Dependencies
| Package | Version |
|---|---|
| OpenAI | 2.10.0 |
๐ License
MIT
| Product | Compatible frameworks |
|---|---|
| .NET | net10.0 (platform-specific variants such as net10.0-windows, net10.0-android, net10.0-ios, etc. are computed) |