Usage

unch search [flags] <query>
The query can be passed positionally or with --query. When both are present, --query wins.
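The precedence rule above can be sketched as a small argument-parsing loop. This is a hypothetical illustration of the documented behavior, not unch's actual parser.

```shell
# Hypothetical sketch of the documented precedence:
# --query wins over the positional query when both are present.
pick_query() {
  positional=""
  flag=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --query) flag="$2"; shift 2 ;;
      *) positional="$1"; shift ;;
    esac
  done
  # Prefer the explicit flag when it was supplied.
  if [ -n "$flag" ]; then echo "$flag"; else echo "$positional"; fi
}

pick_query "create a new router" --query "router middleware"   # prints "router middleware"
```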

Auto mode

Best default for natural-language queries about behavior.

Semantic mode

Force embedding-only retrieval when you want conceptual matches.

Lexical mode

Force exact-term retrieval when you already know the identifier or string.

Common search patterns

unch search "create a new router"

Flags

--query

Explicit search query text.

--mode

Accepted values:
  • auto
  • semantic
  • lexical

--limit

Maximum number of results to return.

--max-distance

Maximum semantic distance kept in auto and semantic modes. Set 0 or a negative value to disable filtering.
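The cutoff described above can be sketched as a line filter over distance-tagged results. This is a hypothetical illustration of the flag's documented semantics, not the tool's internals.

```shell
# Hypothetical sketch of the documented --max-distance behavior:
# keep rows whose distance is within the threshold; a threshold of
# 0 or below disables filtering entirely.
filter_results() {
  awk -v t="$1" 't <= 0 || $1 + 0 <= t { print }'
}

# With a threshold of 0.6, only the first two rows survive.
printf '0.30 RouterNew\n0.55 Middleware\n0.90 Unrelated\n' | filter_results 0.6
```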

--details

Show symbol kind, name, signature, docs, and body context for each result.

--model

Query embedding model. Must match the model used when the index was built.

--provider

Embedding provider. Must match the provider used when the index was built.

--root

Root directory used to render result paths relative to the repository.
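As a rough analogy for the path rendering described above, GNU `realpath` can compute a path relative to a chosen root. This is an illustration of the idea only, not unch's implementation; `--relative-to` requires GNU coreutils.

```shell
# Illustration only: render an absolute result path relative to a root
# directory, the way --root makes result paths repository-relative.
# (realpath --relative-to is GNU coreutils, not part of unch.)
root=$(mktemp -d)
mkdir -p "$root/src"
touch "$root/src/main.go"
realpath --relative-to="$root" "$root/src/main.go"   # prints "src/main.go"
```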

--state-dir

Path to a custom .semsearch directory. This is the preferred way to search external state.

--db

Deprecated alias for the index location. Prefer --state-dir.

--lib

Path to a yzma library directory, or one of its shared library files.

--ctx-size

llama context size. A value of 0 uses the selected model's default.

--verbose

Enable verbose yzma logging.

Examples

unch search "sqlite schema"
unch search --mode lexical "Run"
unch search --details "get path variables from a request"
unch search --model qwen3 "search query"
unch search --provider openrouter --model openai/text-embedding-3-small "search query"
unch search --state-dir /tmp/project.semsearch "router middleware"

Notes

  • auto stays semantic-first for natural-language queries.
  • If the manifest is bound to remote CI, unch search can refresh from remote automatically before running the query.
  • OpenRouter token lookup checks OPENROUTER_API_KEY, then ~/.config/unch/tokens.json, then .semsearch/tokens.json.
  • If no active snapshot exists for the requested provider and model, unch tells you to run unch index --provider <provider> --model <model> first.
  • Force lexical mode when you already know the identifier, string literal, or exact term you want to match.
  • Force semantic mode when the query is about behavior or intent and exact string overlap is weak.
  • Ranking quality depends on query and document embeddings living in the same provider and model space. If you indexed with one provider or model and search with another, results may degrade or fail entirely.
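The token lookup order in the notes can be sketched as a small shell function. This is a hypothetical illustration of the documented precedence, not unch's actual code; it reports which source would win rather than printing a token.

```shell
# Hypothetical sketch of the documented OpenRouter token lookup order:
# 1. OPENROUTER_API_KEY environment variable
# 2. ~/.config/unch/tokens.json
# 3. .semsearch/tokens.json
lookup_token_source() {
  if [ -n "$OPENROUTER_API_KEY" ]; then
    echo "env:OPENROUTER_API_KEY"
  elif [ -f "$HOME/.config/unch/tokens.json" ]; then
    echo "file:~/.config/unch/tokens.json"
  elif [ -f ".semsearch/tokens.json" ]; then
    echo "file:.semsearch/tokens.json"
  else
    echo "no token found" >&2
    return 1
  fi
}

OPENROUTER_API_KEY="sk-demo" lookup_token_source   # prints "env:OPENROUTER_API_KEY"
```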