Usage
The search query can be given as a positional argument or via --query. When both are present, --query wins.
Auto mode
Best default for natural-language queries about behavior.
Semantic mode
Force embedding-only retrieval when you want conceptual matches.
Lexical mode
Force exact-term retrieval when you already know the identifier or string.
Common search patterns
- Behavior query
- Exact symbol
- Detailed inspection
- External state dir
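The patterns above can be sketched as invocations of the flags documented below. These commands are illustrative only: the query strings and paths are placeholders, and the `unch search` command name is taken from the Notes section.

```shell
# Behavior query: let auto mode pick semantic-first retrieval.
unch search --query "where do we retry failed uploads"

# Exact symbol: force lexical matching on a known identifier.
unch search --query "parseRetryPolicy" --mode lexical

# Detailed inspection: include kind, name, signature, docs, and body context.
unch search --query "retry policy" --details

# External state dir: point at a .semsearch directory outside the repo.
unch search --query "retry policy" --state-dir /path/to/.semsearch
```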
Flags
--query
Explicit search query text.
--mode
Accepted values:
auto, semantic, lexical
--limit
Maximum number of results to return.
--max-distance
Maximum semantic distance kept in auto and semantic modes. Set 0 or a negative value to disable filtering.
--details
Show symbol kind, name, signature, docs, and body context for each result.
--model
Query embedding model. Must match the model used when the index was built.
--provider
Embedding provider. Must match the provider used when the index was built.
--root
Root directory used to render result paths relative to the repository.
--state-dir
Path to a custom .semsearch directory. This is the preferred way to search external state.
--db
Deprecated alias for the index location. Prefer --state-dir.
--lib
Path to a yzma library directory, or one of its shared library files.
--ctx-size
llama context size. 0 uses the selected model default.
--verbose
Enable verbose yzma logging.
Examples
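A few illustrative invocations built from the flags above. Query strings and the distance threshold are placeholder values, and the provider/model placeholders follow the `<provider>`/`<model>` convention used in the Notes.

```shell
# Semantic-only retrieval: keep the five closest results
# and drop anything beyond a distance of 0.4.
unch search --query "how are tokens refreshed" \
  --mode semantic --limit 5 --max-distance 0.4

# Disable the distance filter entirely (0 or a negative value).
unch search --query "how are tokens refreshed" --max-distance 0

# Search with an explicit provider and model; these must match
# the ones used when the index was built.
unch search --query "token refresh" --provider <provider> --model <model>
```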
Notes
- auto stays semantic-first for natural-language queries.
- If the manifest is bound to remote CI, unch search can refresh from remote automatically before running the query.
- OpenRouter token lookup checks OPENROUTER_API_KEY, then ~/.config/unch/tokens.json, then .semsearch/tokens.json.
- If no active snapshot exists for the requested provider and model, unch tells you to run unch index --provider <provider> --model <model> first.
When should I force lexical mode?
Force lexical mode when you already know the identifier, string literal, or exact term you want to match.
When should I force semantic mode?
Force semantic mode when the query is about behavior or intent and exact string overlap is weak.
Why should index and search use the same model?
Ranking quality depends on query and document embeddings living in the same provider and model space. If you indexed with one provider or model and search with another, results may degrade or fail entirely.