AgentRealm Docs
v1.0 Documentation

Operator Manual

Everything you need to deploy, manage, and scale autonomous entities on AgentRealm.

Quick Start

Deploy your first agent in three steps. No Dockerfile required – we auto-detect Python and Node.js projects.

1. Connect

Link your GitHub account. We auto-detect your project type and generate the right container config.

github.com/your-username/your-agent
2. Configure

Store your API keys securely in the Vault. They're encrypted and injected into your runtime automatically.

OPENAI_API_KEY=sk-...
3. Deploy

Push to main. We build your container and launch it in a secure, isolated runtime. Your agent gets a public HTTPS endpoint automatically.

https://your-agent-id.agentrealm.cloud

GitOps & Auto-Deploy

Enable automatic deployments whenever you push to your GitHub repository. No manual deploys needed – just git push and we handle the rest.

How It Works

1. Push – You push code to your main branch
2. Webhook – GitHub notifies AgentRealm
3. Build – We build your new container
4. Deploy – Zero-downtime rolling update

Setup Instructions

1. Enable Auto-Deploy

Go to your agent's GitOps settings from the dashboard.

Dashboard → Agent → GitOps → Toggle Auto-Deploy
2. Copy Webhook URL & Secret

Copy the webhook URL and secret from your GitOps settings.

https://api.agentrealm.cloud/webhooks/github
whsec_xxxxxxxxxxxxxxxx
3. Add Webhook in GitHub

Go to your repo → Settings → Webhooks → Add webhook

Payload URL: Paste the webhook URL
Content type: application/json
Secret: Paste the webhook secret
Events: Just the push event
That's it! Every push to your main branch will now trigger an automatic deployment. You can monitor builds in real-time from your dashboard.

Security

Every webhook payload is verified against an HMAC signature derived from your secret before a build is triggered – no one can deploy to your agent without your secret.

You can regenerate your webhook secret at any time from GitOps settings. Just remember to update it in GitHub too.
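For illustration, this is how HMAC verification of a GitHub-style webhook works on the receiving side (GitHub sends the signature in the `X-Hub-Signature-256` header as `sha256=<hexdigest>`; the exact header AgentRealm checks is an internal detail):

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Return True if signature_header matches HMAC-SHA256(secret, payload)."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, preventing timing attacks
    return hmac.compare_digest(expected, signature_header)
```

Because the signature covers the raw request body, any tampering with the payload (or a request signed with the wrong secret) fails verification and the build is rejected.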

Secrets

Store API keys, tokens, and other sensitive data in the Secret Vault. All secrets are encrypted at rest and automatically injected into your agent at runtime.

How It Works

1. Add Secret – Go to Settings → Secrets and add your API keys
2. Encrypted – We encrypt it with your unique key
3. Injected – Available as env vars in your agent

Using Secrets in Code

Secrets are injected as environment variables. Access them like any other env var:

main.py
import os

# Secrets from your Vault are available as env vars
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")

# Use them in your agent
from openai import OpenAI
client = OpenAI(api_key=OPENAI_API_KEY)
Best Practice: Never hardcode secrets in your repository. Use the Vault and access them via environment variables.
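One way to apply that practice: fail fast at startup if a required secret is missing, rather than discovering it mid-request. A minimal sketch (`require_secret` is a hypothetical helper, not part of AgentRealm):

```python
import os

def require_secret(name: str) -> str:
    """Fetch an env var injected from the Vault; raise a clear error if it is absent."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required secret: {name} (add it in the Vault)")
    return value
```

Call it once at startup for every key your agent needs, so a misconfigured deploy fails immediately with an actionable message instead of a cryptic API error later.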

Memory & Persistence

Every agent deployed on AgentRealm gets a dedicated, private PostgreSQL database automatically provisioned in your isolated environment. This is your agent's long-term memory.

Important: Do not provision your own database. AgentRealm injects a connection string into your runtime automatically.

Access your database using the DATABASE_URL environment variable:

main.py
import os
import psycopg2

# AgentRealm injects this automatically - do not hardcode!
DATABASE_URL = os.getenv("DATABASE_URL")

# Connect to your agent's dedicated database
conn = psycopg2.connect(DATABASE_URL)

# Store conversation history, embeddings, state, etc.
cursor = conn.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS memory (
        id SERIAL PRIMARY KEY,
        timestamp TIMESTAMPTZ DEFAULT NOW(),
        content JSONB
    )
""")
conn.commit()  # psycopg2 runs in a transaction - commit so the DDL persists

LangChain Integration

If you're using LangChain, connect your agent's memory directly:

agent.py
import os
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_community.chat_message_histories import PostgresChatMessageHistory

DATABASE_URL = os.getenv("DATABASE_URL")

# Persistent conversation history backed by your agent's database
history = PostgresChatMessageHistory(
    connection_string=DATABASE_URL,
    session_id="user_123"
)

# ConversationChain expects a memory object, not a raw message history
memory = ConversationBufferMemory(chat_memory=history, return_messages=True)

# Your agent now remembers everything
chain = ConversationChain(memory=memory, ...)

What Persists

✓ Survives Restarts

  • Chat history & conversations
  • Vector embeddings
  • Agent state & checkpoints
  • User data your agent stores

✗ Does Not Persist

  • In-memory Python variables
  • Files written to /tmp
  • Runtime cache
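To keep state across restarts, move it out of in-memory variables and into the database. A minimal sketch, assuming the memory table created above (`make_checkpoint` and `save_checkpoint` are hypothetical helpers):

```python
import datetime
import json

def make_checkpoint(step: int, scratchpad: str) -> dict:
    """Bundle in-memory agent state into a JSON-serializable dict for the content column."""
    return {
        "step": step,
        "scratchpad": scratchpad,
        "saved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

def save_checkpoint(conn, checkpoint: dict) -> None:
    """Write the checkpoint to the memory table so it survives restarts."""
    with conn.cursor() as cur:
        cur.execute("INSERT INTO memory (content) VALUES (%s)", (json.dumps(checkpoint),))
    conn.commit()
```

On startup, load the newest row (ORDER BY timestamp DESC LIMIT 1) to resume where the agent left off.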

Agent Capabilities Pack

Every agent deployed on AgentRealm comes with built-in capabilities for vector memory, chain of thought tracing, tool authentication, and scheduled tasks. These are designed to help you build intelligent agents faster.

Vector Memory (pgvector)

Your agent's database includes the pgvector extension for storing and querying embeddings. Perfect for RAG, semantic search, and long-term memory.

vector_memory.py
import os
import json
import psycopg2
from openai import OpenAI

DATABASE_URL = os.getenv("DATABASE_URL")
# Also available: VECTOR_STORE_URL (alias for clarity)

conn = psycopg2.connect(DATABASE_URL)
cursor = conn.cursor()

# pgvector is pre-installed - just use it!
cursor.execute("""
    CREATE TABLE IF NOT EXISTS agent_memory (
        id SERIAL PRIMARY KEY,
        content TEXT,
        embedding vector(1536),  -- OpenAI ada-002 dimensions
        metadata JSONB,
        created_at TIMESTAMPTZ DEFAULT NOW()
    )
""")

# Create HNSW index for fast similarity search
cursor.execute("""
    CREATE INDEX IF NOT EXISTS memory_embedding_idx 
    ON agent_memory USING hnsw (embedding vector_cosine_ops)
""")

# Store a memory with embedding
def remember(content: str, metadata: dict = None):
    client = OpenAI()
    response = client.embeddings.create(
        model="text-embedding-ada-002",
        input=content
    )
    embedding = response.data[0].embedding
    
    # psycopg2 can't adapt a Python list to the vector type directly -
    # pass pgvector's '[...]' text format and cast it in SQL
    cursor.execute(
        "INSERT INTO agent_memory (content, embedding, metadata) VALUES (%s, %s::vector, %s)",
        (content, str(embedding), json.dumps(metadata or {}))
    )
    conn.commit()

# Retrieve similar memories
def recall(query: str, limit: int = 5):
    client = OpenAI()
    response = client.embeddings.create(
        model="text-embedding-ada-002",
        input=query
    )
    query_embedding = response.data[0].embedding
    
    cursor.execute("""
        SELECT content, metadata, 1 - (embedding <=> %s::vector) as similarity
        FROM agent_memory
        ORDER BY embedding <=> %s::vector
        LIMIT %s
    """, (str(query_embedding), str(query_embedding), limit))
    
    return cursor.fetchall()
LangChain Compatible: Use PGVector from langchain_community directly with your DATABASE_URL.

Chain of Thought Tracing

Log your agent's reasoning steps to understand and debug its decision-making process. View traces in the dashboard with a visual timeline.

First, create the traces table in your database (run once at startup):

Create traces table
-- Add this to your database initialization
CREATE TABLE IF NOT EXISTS agent_traces (
    id SERIAL PRIMARY KEY,
    session_id VARCHAR(255) NOT NULL,
    step_type VARCHAR(50) NOT NULL,  -- thinking, tool_call, llm_call, error
    content JSONB NOT NULL,          -- {"input": "...", "output": "..."}
    duration_ms INTEGER,
    tokens_used INTEGER,
    model VARCHAR(100),
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_traces_session ON agent_traces(session_id);
CREATE INDEX IF NOT EXISTS idx_traces_created ON agent_traces(created_at);

Then log traces from your agent code:

tracing.py
import os
import psycopg2
import json
import uuid

DATABASE_URL = os.getenv("DATABASE_URL")
conn = psycopg2.connect(DATABASE_URL)

class AgentTracer:
    def __init__(self, session_id: str = None):
        self.session_id = session_id or str(uuid.uuid4())
    
    def log_step(self, step_type: str, input_data: str, output_data: str, 
                 model: str = None, tokens: int = 0, duration_ms: int = 0):
        """Log a reasoning step to the traces table"""
        cursor = conn.cursor()
        cursor.execute("""
            INSERT INTO agent_traces 
            (session_id, step_type, content, model, tokens_used, duration_ms)
            VALUES (%s, %s, %s, %s, %s, %s)
        """, (
            self.session_id, step_type,
            json.dumps({"input": input_data, "output": output_data}),
            model, tokens, duration_ms
        ))
        conn.commit()

# Usage in your agent
tracer = AgentTracer()

# Log thinking
tracer.log_step(
    step_type="thinking",
    input_data="User asked: What's the weather in Berlin?",
    output_data="I need to call a weather API to answer this question"
)

# Log tool call
tracer.log_step(
    step_type="tool_call",
    input_data="get_weather(location='Berlin')",
    output_data='{"temp": 15, "condition": "cloudy"}',
    duration_ms=230
)

View your traces in the dashboard by clicking Traces on any running agent.
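Measuring duration_ms by hand gets tedious. A small helper (hypothetical, not part of the platform) can time any call before handing the result to AgentTracer.log_step:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_ms), ready to use as duration_ms in a trace."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed_ms = int((time.monotonic() - start) * 1000)
    return result, elapsed_ms

# Example with the tracer from above:
# output, ms = timed(get_weather, location="Berlin")
# tracer.log_step("tool_call", "get_weather(location='Berlin')", str(output), duration_ms=ms)
```

Using time.monotonic() rather than time.time() keeps the measurement immune to clock adjustments.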

Managed Tool Auth (Vault Injection)

Store your API keys in the Vault, and we automatically inject matching secrets into your agent. Keys matching common prefixes are auto-detected:

OPENAI_*
ANTHROPIC_*
GOOGLE_*
AZURE_*
AWS_*
STRIPE_*
TWILIO_*
GITHUB_*

And many more: SLACK_*, DISCORD_*, SENDGRID_*, PINECONE_*, SUPABASE_*, HUGGINGFACE_*, LANGCHAIN_*, AGENT_*, etc.

Using injected secrets
import os

# These are automatically injected from your Vault
# No need to manage .env files or secret mounting

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")       # From Vault
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY") # From Vault
STRIPE_SECRET_KEY = os.getenv("STRIPE_SECRET_KEY") # From Vault

# Custom agent secrets (prefix with AGENT_)
MY_CUSTOM_TOKEN = os.getenv("AGENT_MY_CUSTOM_TOKEN")  # From Vault

Scheduled Tasks (Cron)

Schedule recurring tasks for your agent using cron expressions. Perfect for daily reports, periodic data syncs, or maintenance routines.

Creating a Scheduled Task

  1. Go to your agent in the Dashboard
  2. Click Tasks to open the scheduler
  3. Click New Task
  4. Enter a name, cron schedule, and the command to run
Common Cron Schedules
# Format: minute hour day month weekday

* * * * *       # Every minute
*/5 * * * *     # Every 5 minutes
0 * * * *       # Every hour (at minute 0)
0 9 * * *       # Daily at 9:00 AM
0 0 * * 0       # Weekly on Sunday at midnight
0 0 1 * *       # Monthly on the 1st at midnight

Tasks execute HTTP calls to your agent's endpoints. Example command:

Task Command
curl -X POST http://localhost:8080/api/daily-summary
Tip: Create a dedicated endpoint in your agent for each scheduled task, like /api/cleanup or /api/sync.
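Following that tip, a scheduled-task endpoint is just a plain route in the same app that serves your agent. A Flask sketch with a hypothetical /api/daily-summary handler:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/api/daily-summary", methods=["POST"])
def daily_summary():
    # Work triggered by the cron task goes here:
    # query the database, call an LLM, post the report, etc.
    return {"status": "ok", "task": "daily-summary"}
```

The scheduler's curl command hits this route on schedule; returning a small JSON body makes success easy to spot in the task logs.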

Runtime Environment

Dedicated Compute

Agents run on dedicated performance tiers, not shared serverless functions. Your process can run indefinitely without cold starts or timeouts.

Tier          CPU                 Memory   Storage
Scout (Free)  0.25 vCPU (shared)  256 MB   2 GB
Vanguard      0.5 vCPU            512 MB   5 GB
Warlord       1 vCPU              1 GB     10 GB

Network Access

By default, outbound traffic is restricted for security on the Scout (Free) tier. This protects against crypto miners and malicious scripts.

Upgrade to Vanguard to unlock full internet access for your agents (LLM APIs, web scraping, external services).

Always-On Infrastructure

All Tiers - Always Running

Your agents run 24/7. No sleep. No cold starts. Peace of mind that your agent is always available - unlike running on your laptop.

Environment Variables

The following environment variables are automatically injected into every agent:

Environment
# Automatically injected by AgentRealm
PORT=8080              # Your agent MUST listen on this port
DATABASE_URL=postgres://...  # Your dedicated database

# From your Secret Vault (encrypted at rest)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Custom env vars from Dashboard
MY_CUSTOM_VAR=value

Project Structure

✨ Zero Config Mode – No Dockerfile needed! We auto-detect Python and Node.js projects and generate optimized containers automatically.

Supported Runtimes

🐍 Python – Latest stable (slim). Detected via requirements.txt or pyproject.toml.

📦 Node.js – LTS (Alpine). Detected via package.json.

Minimal Python Project

Repository Structure
my-agent/
├── main.py           # Entry point (must exist)
├── requirements.txt  # Python dependencies
└── README.md         # Optional

Minimal Node.js Project

Repository Structure
my-agent/
├── index.js          # Entry point
├── package.json      # Node dependencies + start script
└── README.md         # Optional

Custom Dockerfile (Eject)

Need full control? Add a Dockerfile to your repo root. Your Dockerfile always takes priority – we never override it.

A minimal Dockerfile for a Python agent:

Dockerfile
FROM python:3-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy source code
COPY . .

# AgentRealm injects PORT=8080
EXPOSE 8080

# Start your agent
CMD ["python", "main.py"]

Listening on PORT

Your agent must expose an HTTP server on the PORT environment variable (default: 8080).

main.py
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def health():
    return {"status": "online", "agent": "my-agent-v1"}

@app.route("/chat", methods=["POST"])
def chat():
    # Your agent logic here
    return {"response": "Hello from AgentRealm!"}

if __name__ == "__main__":
    port = int(os.getenv("PORT", 8080))
    app.run(host="0.0.0.0", port=port)
Pro Tip: Always include a health check endpoint at / or /health. We use this to verify your agent is running.

Build Limits

To ensure fast builds and efficient resource usage, each tier has specific build constraints:

Tier          Max Image Size  Build Timeout
Scout (Free)  100 MB          5 minutes
Vanguard      300 MB          10 minutes
Warlord       1 GB            20 minutes
Image Size Tips: Use slim base images (e.g., python:3-slim, node:lts-alpine), multi-stage builds, and --no-cache-dir for pip installs. Avoid including large ML models in your image – load them at runtime instead.

Need help? Join our Discord community or email support@agentrealm.cloud