## Index a repository

```
cd path/to/repo
unch index --root .
```

Real output from indexing gorilla/mux:

```
$ unch index --root .
Loaded model dim=768
Indexed 278 symbols in 16 files
```
## Run your first search

```
$ unch search "create a new router"
1. mux.go:32 0.7747
2. mux.go:314 0.8135
```
## Add details when you want richer output

```
$ unch search --details "get path variables from a request"
1. mux.go:466 0.7991
   kind: function
   name: Vars
   signature: func Vars(r *http.Request) map[string]string
   docs: Vars returns the route variables for the current request, if any.
```
## OpenRouter flow

Save the token once:

```
unch auth openrouter --token sk-or-...
```

Index with OpenRouter embeddings:

```
unch index --root . --provider openrouter --model openai/text-embedding-3-small
```

Search with the same provider and model:

```
unch search --provider openrouter --model openai/text-embedding-3-small "create a new router"
```
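Since searches must use the same provider and model as the index they query, one way to keep the pair in sync is to pin it in shell variables. A minimal sketch using only the flags shown above (the `echo` prefix makes this a dry run that prints the composed commands; drop it to invoke `unch` for real):

```shell
# Pin the provider/model pair once so index and search never drift apart.
# "echo" makes this a dry run; remove it to actually run unch.
PROVIDER=openrouter
MODEL=openai/text-embedding-3-small

echo unch index --root . --provider "$PROVIDER" --model "$MODEL"
echo unch search --provider "$PROVIDER" --model "$MODEL" "create a new router"
```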
## Search modes in practice

- Best default for natural-language queries:

  ```
  unch search "create a new router"
  ```

- Use when you want embedding-driven matches only:

  ```
  unch search --mode semantic "parse query parameters"
  ```

- Use when you already know the exact identifier or string:

  ```
  unch search --mode lexical "ParseQuery"
  ```
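One way to put the mode split to work is a small wrapper that routes bare identifiers to lexical mode and everything else to the default search. This `usearch` helper is hypothetical (not part of `unch`), and the `echo` prefix makes it a dry run that prints the command it would execute:

```shell
# Hypothetical helper: single-token queries go to lexical mode,
# multi-word queries go to the default search. echo = dry run.
usearch() {
  case "$1" in
    *" "*) echo unch search "$1" ;;
    *)     echo unch search --mode lexical "$1" ;;
  esac
}

usearch "create a new router"   # multi-word -> default search
usearch "ParseQuery"            # identifier -> lexical mode
```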
## What happens on first run

The first local llama.cpp index run may:

- download the default embedding model
- fetch local yzma runtime libraries
- create `./.semsearch/`
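Because the first run creates `./.semsearch/`, a quick existence check tells you whether indexing will start cold. A sketch:

```shell
# Report whether this repo already has local index state
# (./.semsearch/ is the state directory named above).
if [ -d ./.semsearch ]; then
  echo "index state present"
else
  echo "no index state yet; first run will create ./.semsearch/"
fi
```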
Each provider/model pair keeps its own active index snapshot. Rebuilding `openrouter/openai/text-embedding-3-small` does not replace the active `llama.cpp/embeddinggemma` snapshot until the new run finishes successfully.
If you want to keep state somewhere other than `./.semsearch`, use `--state-dir`.
## Next steps