The ContextOS Glossary
Proprietary terminology defined by the Prompt Optimizer engineering team. These are the concepts behind the ContextOS infrastructure layer.
Precision Locks™
A deterministic regex-based routing layer in Prompt Optimizer that intercepts high-confidence intents — JSON formatting, syntax correction, template expansion — and executes them at 0ms latency and $0 LLM compute cost. The first layer of the Zero-Cost Routing Engine™.
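A regex lock tier can be sketched as an ordered table of compiled patterns checked before any model is invoked. The names below (`LOCKS`, `match_lock`) and the patterns themselves are illustrative assumptions, not the actual Prompt Optimizer lock table:

```python
import re

# Illustrative high-confidence intent patterns; the real lock table is internal.
LOCKS = [
    ("json_format", re.compile(r"\b(format|prettify|pretty.?print)\b.*\bjson\b", re.I)),
    ("syntax_fix", re.compile(r"\bfix\b.*\bsyntax\b", re.I)),
    ("template_expand", re.compile(r"\bexpand\b.*\btemplate\b", re.I)),
]

def match_lock(query: str):
    """Return the first matching intent, or None to fall through to the next tier."""
    for intent, pattern in LOCKS:
        if pattern.search(query):
            return intent
    return None
```

Because a table like this resolves in microseconds with no model call, a matched query incurs effectively zero latency and zero LLM cost.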
See the Router Architecture
Git Context Control (GCC)
An agentic memory architecture that applies Git concepts — Branch, Commit, and Merge — to AI agent state management, preventing context drift across long-horizon tasks. Powered by GitContextService and pgvector.
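The Branch/Commit/Merge mapping might look like the following minimal in-memory sketch. The class and method names are assumptions for illustration, not the GitContextService API, and the real system persists state to Postgres rather than Python lists:

```python
from dataclasses import dataclass, field

@dataclass
class Commit:
    message: str  # what the agent decided at this step
    state: dict   # snapshot of agent context at this step

@dataclass
class ContextBranch:
    name: str
    commits: list = field(default_factory=list)

    def commit(self, message: str, state: dict) -> None:
        """Snapshot the agent's working context so later steps can recall it."""
        self.commits.append(Commit(message, dict(state)))

class GitContext:
    def __init__(self):
        self.branches = {"main": ContextBranch("main")}

    def branch(self, name: str, source: str = "main") -> ContextBranch:
        """Fork the agent's context to explore a sub-task in isolation."""
        b = ContextBranch(name, list(self.branches[source].commits))
        self.branches[name] = b
        return b

    def merge(self, name: str, into: str = "main") -> None:
        """Fold an exploratory branch's new commits back into the parent context."""
        target = self.branches[into]
        for c in self.branches[name].commits:
            if c not in target.commits:
                target.commits.append(c)
```

Isolating exploratory work on a branch and merging only what survives is what prevents a long-horizon detour from polluting the main context.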
Read: Cost Reduction Deep Dive
Aline Search™
The vector retrieval mechanism within GCC. Allows agents to semantically search their own commit history to recall past decisions, reasoning states, and intermediate outputs — without reprocessing the full context window.
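In production this ranking runs inside Postgres via pgvector's distance operators; the core idea, sketched here in pure Python with toy embeddings (function names and the vectors are illustrative, not the product API), is cosine similarity between the query and each commit's embedding:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search_commits(query_vec, commits, top_k=3):
    """Rank past commits by semantic similarity to the query embedding."""
    scored = [(cosine(query_vec, vec), msg) for msg, vec in commits]
    scored.sort(reverse=True)
    return [msg for _, msg in scored[:top_k]]
```

The agent retrieves only the top-k relevant commits instead of replaying its whole history, which is what keeps recall cheap relative to reprocessing the full context window.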
Related: Git Context Control
The Sandwich Strategy
A two-layer context detection method in EnhancedAIContextDetector. Fast pattern matching runs first; expensive semantic embedding analysis only triggers when confidence is below threshold. Optimizes token yield while maintaining routing accuracy.
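The control flow can be sketched as follows. The two layer functions here are toy stand-ins (a keyword heuristic and a stub), and the threshold value is an assumption, not the one used by EnhancedAIContextDetector:

```python
def fast_pattern_match(query: str):
    # Toy layer 1: keyword heuristic with a hand-set confidence (illustrative).
    if "json" in query.lower():
        return "json_task", 0.95
    return "unknown", 0.2

def semantic_embedding_match(query: str):
    # Stand-in for the expensive embedding-model call; only reached on low confidence.
    return "general_task"

def detect_context(query: str, confidence_threshold: float = 0.8):
    """Two-layer detection: cheap patterns first, embeddings only on low confidence."""
    label, confidence = fast_pattern_match(query)
    if confidence >= confidence_threshold:
        return label  # high confidence: skip the expensive layer entirely
    return semantic_embedding_match(query)
```

The token savings come from how often layer 1 clears the threshold, so embedding analysis runs only on the ambiguous residue.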
Read: Cost Reduction Deep Dive
Zero-Cost Routing Engine™
The hybrid two-tier routing system in Prompt Optimizer. Combines Precision Locks™ (regex) and Semantic Gating (embeddings) to route 45% of queries at zero LLM cost. 91.94% of routing decisions require no frontier model call.
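The tier cascade might look like this minimal sketch, using the tier names from the glossary (RULES_BASED, HYBRID, LLM_BASED). The single lock pattern, the `semantic_gate` stub, and its 0.5 cutoff are illustrative assumptions:

```python
import re

JSON_LOCK = re.compile(r"\bjson\b", re.I)  # toy stand-in for the full lock table

def semantic_gate(query: str) -> float:
    # Stand-in for an embedding-similarity score against known intents.
    return 0.9 if "fix" in query.lower() else 0.1

def route(query: str) -> str:
    """Cascade: regex locks first, then the semantic gate, then the frontier model."""
    if JSON_LOCK.search(query):
        return "RULES_BASED"   # tier 1: regex lock, $0 LLM cost
    if semantic_gate(query) >= 0.5:
        return "HYBRID"        # tier 2: embeddings decide, no frontier call
    return "LLM_BASED"         # fall through to the frontier model
```

Only queries that slip past both gates reach a frontier model, which is the mechanism behind the zero-cost routing figures quoted above.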
See ContextOS in Action
ContextOS
The positioning name for Prompt Optimizer's infrastructure layer. ContextOS sits between your application and the LLM, managing routing, memory, and security context injection so your agents don't have to implement these primitives themselves.
AgenticError
Prompt Optimizer's fail-closed error contract for agent workflows. When an agent step fails, AgenticError prevents bad state from propagating downstream — the pipeline halts cleanly rather than silently continuing with corrupt context.
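A fail-closed contract like this can be sketched as an exception type plus a pipeline runner that halts on the first failure. The shape below is an assumption for illustration, not the actual AgenticError class:

```python
class AgenticError(Exception):
    """Fail-closed: raised when a step fails so corrupt context never propagates."""
    def __init__(self, step: str, reason: str):
        self.step = step
        super().__init__(f"pipeline halted at '{step}': {reason}")

def run_pipeline(steps, context: dict) -> dict:
    """Run (name, fn) steps in order; any failure halts the whole pipeline."""
    for name, step in steps:
        try:
            context = step(context)
        except Exception as exc:
            # Halt cleanly instead of continuing with bad state downstream.
            raise AgenticError(name, str(exc)) from exc
    return context
```

The design choice is fail-closed over fail-open: a halted pipeline is recoverable, while a silently corrupted context poisons every downstream step.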
Read: Intent Engineering Deep Dive
Value Hierarchy
A machine-readable, ranked list of optimization goals defined via the define_value_hierarchy MCP tool. Each entry carries a PriorityLabel (NON-NEGOTIABLE, HIGH, MEDIUM, or LOW). The hierarchy operates at two levels: L1 injects a DIRECTIVES block into the LLM system prompt so goal priorities are enforced during optimization; L2 applies a routing floor so high-stakes goals are never served by the cheap RULES_BASED tier.
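The L1 injection step can be sketched as rendering the ranked goals into the DIRECTIVES block prepended to the system prompt. The function name and output layout are assumptions; only the block name and the label set come from the glossary:

```python
def build_directives_block(hierarchy):
    """Render a ranked (goal, PriorityLabel) list into a DIRECTIVES block (L1)."""
    lines = ["DIRECTIVES:"]
    for rank, (goal, label) in enumerate(hierarchy, start=1):
        lines.append(f"{rank}. [{label}] {goal}")
    return "\n".join(lines)
```

The L2 routing floor is a separate mechanism, covered under PriorityLabel below.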
Read: Intent Engineering — Value Hierarchies
PriorityLabel
The ranking enum used in a Value Hierarchy entry. NON-NEGOTIABLE raises the routing score floor to 0.72 (guaranteed LLM_BASED tier). HIGH raises the floor to 0.45 (guaranteed HYBRID tier). MEDIUM and LOW affect only the L1 prompt injection — no routing tier change.
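The label-to-floor mapping can be sketched as an enum plus a `max` against the routing score; the floor values (0.72, 0.45) come from the glossary, while the enum and function names are illustrative:

```python
from enum import Enum

class PriorityLabel(Enum):
    NON_NEGOTIABLE = "NON-NEGOTIABLE"
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"

# L2 routing floors per the glossary; MEDIUM and LOW leave routing untouched.
ROUTING_FLOOR = {
    PriorityLabel.NON_NEGOTIABLE: 0.72,  # guarantees the LLM_BASED tier
    PriorityLabel.HIGH: 0.45,            # guarantees the HYBRID tier
    PriorityLabel.MEDIUM: 0.0,
    PriorityLabel.LOW: 0.0,
}

def apply_floor(score: float, label: PriorityLabel) -> float:
    """Raise a routing score to the label's floor so high-stakes goals skip the cheap tier."""
    return max(score, ROUTING_FLOOR[label])
```

A floor (rather than an override) means a query that already scored above the threshold routes as usual; the label only rules out the cheap tier, never forces a downgrade.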
Related: Value Hierarchy
See ContextOS in Action
These concepts power the routing, memory, and security infrastructure in Prompt Optimizer. Try it free — no credit card required.