What is Decision runtime?

Decision runtime is the third pillar of Rippletide. After you’ve evaluated your agent and set up a Context Graph for persistent memory, decision runtime lets you build agents that make deterministic decisions instead of probabilistic guesses. You structure your agent’s knowledge as a hypergraph of Q&A pairs, tags, actions, and state transitions. The LLM handles language (understanding input, generating output), but all decisions are made by a deterministic reasoning engine. The result: agents that hallucinate less than 1% of the time, with full explainability on every answer.

When to use it

  • You need guaranteed accuracy (less than 1% hallucination rate)
  • Every decision must be traceable and explainable
  • Guardrails must be enforced at the engine level, not just in prompts
  • You’re building customer-facing agents (support, sales, onboarding)

How it differs from RAG

|                    | Traditional RAG                          | Rippletide Decision runtime                                  |
|--------------------|------------------------------------------|--------------------------------------------------------------|
| Knowledge storage  | Unstructured text chunks in a vector DB  | Structured Q&A pairs, tags, actions, and state transitions   |
| Decision-making    | LLM generates answers probabilistically  | Deterministic reasoning engine selects the best answer       |
| Hallucination rate | Variable, hard to control                | Less than 1% by design                                       |
| Explainability     | Black box                                | Every decision is traceable to a knowledge node              |
| Guardrails         | Prompt-based, easy to bypass             | Enforced at the engine level, 100% compliance                |

Core Building Blocks

Your agent’s knowledge is composed of four types of building blocks:

Q&A Pairs

The questions your agent can answer and their expected responses. This is the foundation of your agent’s knowledge.

Tags

Labels that organize Q&A pairs by topic (e.g. “pricing”, “shipping”, “returns”). Tags improve retrieval accuracy and let you structure a glossary.

Actions

Things your agent can do beyond answering questions: create a ticket, process a return, escalate to a human. Each action has requirements that must be met.

State Predicates

Rules that define conversation flow. Based on what the user says, the agent transitions between states (e.g. “user described needs” -> “recommend product” -> “checkout”).
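The four building blocks above can be sketched as plain data structures. This is a minimal illustration; the class and field names are assumptions for the sketch, not Rippletide's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class QAPair:
    question: str
    answer: str
    tags: list[str] = field(default_factory=list)  # topic labels, e.g. "pricing", "returns"

@dataclass
class Action:
    name: str                # e.g. "process_return"
    requirements: list[str]  # conditions that must hold before the action can run

@dataclass
class StatePredicate:
    current_state: str       # e.g. "user described needs"
    condition: str           # what the user said that triggers the transition
    next_state: str          # e.g. "recommend product"

# A tiny knowledge base combining the block types
qa = QAPair("What is your return policy?",
            "Returns are accepted within 30 days.",
            tags=["returns"])
action = Action("process_return",
                requirements=["order_id provided", "within return window"])
transition = StatePredicate("user described needs",
                            "user asked for a recommendation",
                            "recommend product")
```

Tags live on the Q&A pairs themselves, which is what lets the engine narrow retrieval by topic before matching.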

Architecture Overview

User message
    |
    v
[ Language Understanding (LLM) ]       Parses the user's intent
    |
    v
[ Hypergraph Reasoning Engine ]       Matches against Q&A, evaluates state, selects action
    |                                 (deterministic, no hallucination)
    v
[ Response Generation (LLM) ]        Formulates a natural language response
    |                                 from the selected knowledge node
    v
Agent response

The LLM is only used for language (understanding input, generating output). All decisions (which answer to give, which action to take, which state to transition to) are handled by the deterministic reasoning engine.

Getting Started

1. Create an agent: Define your agent's name and system prompt via the SDK API.
2. Add Q&A pairs: Populate your agent's knowledge base with questions and answers.
3. Organize with tags: Tag your Q&A pairs by topic for better retrieval.
4. Define actions: Specify what your agent can do and the requirements for each action.
5. Set up state predicates: Design the conversation flow with states and transitions.
6. Chat: Send messages to your agent and get hallucination-free responses.
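The six steps map onto a small program like the one below. This is an in-memory stand-in, not the real Rippletide SDK client; the method names are assumptions chosen to mirror the steps:

```python
class Agent:
    """In-memory stand-in mirroring the setup steps; not the real SDK client."""

    def __init__(self, name: str, system_prompt: str):       # step 1: create an agent
        self.name, self.system_prompt = name, system_prompt
        self.qa_pairs, self.actions, self.transitions = [], [], []

    def add_qa(self, question, answer, tags=()):             # steps 2-3: Q&A pairs + tags
        self.qa_pairs.append({"q": question, "a": answer, "tags": list(tags)})

    def add_action(self, name, requirements):                # step 4: actions with requirements
        self.actions.append({"name": name, "requirements": requirements})

    def add_transition(self, src, trigger, dst):             # step 5: state predicates
        self.transitions.append({"src": src, "trigger": trigger, "dst": dst})

    def chat(self, message):                                 # step 6: chat
        for qa in self.qa_pairs:                             # deterministic lookup, no generation
            if any(tag in message.lower() for tag in qa["tags"]):
                return qa["a"]
        return "I don't have an answer for that."            # refuse rather than guess

agent = Agent("support-bot", "You are a helpful support agent.")
agent.add_qa("What is your return policy?",
             "Returns are accepted within 30 days.", tags=["returns"])
agent.add_action("escalate", requirements=["user requested a human"])
agent.add_transition("start", "user described needs", "recommend product")
print(agent.chat("Tell me about returns"))  # Returns are accepted within 30 days.
```

In the real runtime, the LLM would parse the incoming message and phrase the outgoing reply, while the lookup in `chat` is played by the hypergraph reasoning engine.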

Next Steps