The fastest way to get started. No server to run, no dependencies to install:
{
  "mcpServers": {
    "rippletide-kg": {
      "type": "url",
      "url": "https://mcp.rippletide.com/mcp?agentId=your-agent-id"
    }
  }
}
This connects directly to Rippletide’s hosted MCP server. All you need is an agent ID from the Rippletide platform.

Where to put this config

| Client | Config file location |
| --- | --- |
| Cursor | `~/.cursor/mcp.json` |
| Claude Desktop | Claude Desktop settings > MCP |
| Claude Code | `.mcp.json` at your project root |

Agent ID

Each agent ID maps to an isolated context graph. Memories stored under one agent ID are invisible to another. You can:
  • Use one agent per project to keep knowledge separated
  • Use one shared agent across projects if you want cross-project memory
  • Switch agents at runtime using the switch_agent tool during a conversation

Self-Hosted

If you need to run the MCP server yourself (e.g. for data privacy or custom backends), use the sections below to configure and deploy it.
Most users don’t need to self-host. The hosted MCP server at mcp.rippletide.com is ready to use. See the Quickstart.

Configuration

Configure the server via environment variables or CLI flags. CLI flags take priority.
| Env Variable | CLI Flag | Default | Description |
| --- | --- | --- | --- |
| GRAPH_API_URL | --api-url | http://localhost:3000 | Context graph backend URL |
| AGENT_ID | --agent-id | "default" | Target agent's graph |
| TRANSPORT | --transport | stdio | stdio for local, http for remote |
| PORT | --port | 8080 | Port for HTTP transport |
| HOST | --host | 127.0.0.1 | Bind address for HTTP transport |
| LOG_LEVEL | --log-level | info | debug, info, warn, or error |
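For example, when both an environment variable and its CLI flag are set, the flag wins. A quick sketch (the agent IDs here are placeholders):

```shell
# AGENT_ID comes from the environment, but --agent-id overrides it,
# so the server targets the "prod-agent" graph.
AGENT_ID=dev-agent npx @rippletide/mcp --agent-id prod-agent
```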

Transport Modes

There are two ways your AI client can talk to the MCP server.

stdio: The AI client starts the MCP server as a subprocess and communicates via stdin/stdout. This is the default for local clients like Cursor and Claude Desktop.
{
  "mcpServers": {
    "rippletide": {
      "command": "npx",
      "args": ["-y", "@rippletide/mcp"],
      "env": {
        "GRAPH_API_URL": "http://localhost:3000",
        "AGENT_ID": "your-agent-id"
      }
    }
  }
}
HTTP: The MCP server runs as a standalone service and the AI client connects over HTTP. Use this for remote or shared deployments.
npx @rippletide/mcp --transport http --host 0.0.0.0 --port 8080 --api-url https://your-backend.com
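Clients then connect with a URL-type entry, mirroring the hosted quickstart config. A sketch, assuming the default port and that your deployment exposes the server at `/mcp` (the hostname is a placeholder):

```json
{
  "mcpServers": {
    "rippletide": {
      "type": "url",
      "url": "https://your-backend-host:8080/mcp"
    }
  }
}
```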

Health Check

On startup, the server pings GET /api/graph/stats on the backend. If unreachable, a warning is logged but the server still starts (tools will return errors until the backend is available).
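You can hit the same endpoint yourself to confirm the backend is reachable before starting the server (assuming the default GRAPH_API_URL):

```shell
# Should return graph statistics if the backend is up.
curl -s http://localhost:3000/api/graph/stats
```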

Logging

All logs are JSON-formatted and written to stderr (stdout is reserved for the MCP stdio protocol):
LOG_LEVEL=debug npx @rippletide/mcp
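Because stdout carries the protocol, redirect only stderr if you want to capture logs to a file:

```shell
# The protocol stays on stdout for the client; JSON logs land in mcp.log.
LOG_LEVEL=debug npx @rippletide/mcp 2>mcp.log
```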

Deploy

Deploy on Railway

The MCP server includes a railway.json config for one-click deployment.
1. Create a new service

   Point it to your repository.

2. Set Root Directory

   Set it to `apps/knowledge-graph/mcp-server`.

3. Set Start Command

   node dist/index.js --transport http --host 0.0.0.0

4. Add environment variables

   | Variable | Value |
   | --- | --- |
   | GRAPH_API_URL | Your context graph backend URL |

5. Set Target Port

   Set it to 8080.
Once deployed, your MCP endpoint will be available at:
https://your-service.railway.app/mcp

Deploy on any cloud provider

The MCP server is a standard Node.js app. To deploy anywhere:
# Build
npm install && npm run build

# Run in HTTP mode
node dist/index.js --transport http --host 0.0.0.0 --port 8080
Set the GRAPH_API_URL environment variable to point to your context graph backend. Requirements:
  • Node.js >= 18
  • Network access to the context graph backend
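For container platforms, here is a minimal Dockerfile sketch built from the commands above (the copy paths and npm scripts are assumed to match your repo layout):

```dockerfile
# Node.js >= 18 is required.
FROM node:18-alpine
WORKDIR /app

# Install dependencies, then build.
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

EXPOSE 8080
# Set GRAPH_API_URL at runtime to point at your context graph backend.
CMD ["node", "dist/index.js", "--transport", "http", "--host", "0.0.0.0", "--port", "8080"]
```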

Troubleshooting

  • Tools return connection errors: the context graph backend isn’t running or isn’t reachable at GRAPH_API_URL. Check the URL and network connectivity.
  • Queries return no results: verify that AGENT_ID matches an agent that has data in the graph. Use list_entities to check.
  • No log output appears on stdout: all logs go to stderr (stdout is reserved for the MCP stdio protocol). Use LOG_LEVEL=debug for verbose output.
  • Remote clients can’t connect: make sure --transport http is in the start command or TRANSPORT=http is set as an environment variable.

Next steps