
Agents

How OpenAgent agents reason, plan, and act.


An agent in OpenAgent is an AI assistant that combines a language model with memory, knowledge retrieval, and tool-use capabilities to handle complex, multi-step tasks.

What makes an agent different from a chatbot?

A simple chatbot passes user messages to an LLM and returns the response. An agent does much more:

Capability                 Chatbot   Agent
─────────────────────────  ───────   ─────
Multi-turn conversation       ✓        ✓
Knowledge base retrieval      –        ✓
Tool use (MCP)                –        ✓
Multi-step reasoning          –        ✓
External API calls            –        ✓
Code execution                –        ✓

Agent Lifecycle

Every time a user sends a message, the agent follows this cycle:

User Message
     │
     ▼
┌──────────────────────┐
│  1. Context Assembly │  ← conversation history + system prompt
└──────────┬───────────┘
           │
           ▼
┌──────────────────────┐
│  2. Knowledge Lookup │  ← semantic search over knowledge base
└──────────┬───────────┘
           │
           ▼
┌──────────────────────┐
│  3. LLM Reasoning    │  ← decide: respond or call a tool?
└──────────┬───────────┘
           │
     ┌─────┴─────┐
     │           │
     ▼           ▼
  Tool Call   Response
     │
     ▼
Tool Result → back to step 3

1. Context Assembly

Before calling the LLM, the agent assembles the full context window:

  • System prompt — instructions defined by you when creating the agent
  • Conversation history — recent turns (up to the model's context window)
  • Retrieved knowledge — relevant chunks from the knowledge base (if configured)
  • Available tools — list of MCP tools the agent can call
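The assembly step can be sketched as a plain function. This is an illustrative sketch, not OpenAgent's actual API: it assumes the role/content message format used by most chat LLM APIs, and all names (`assemble_context`, `max_turns`) are hypothetical.

```python
def assemble_context(system_prompt, history, knowledge_chunks, max_turns=10):
    """Build the message list sent to the LLM for one agent step."""
    messages = [{"role": "system", "content": system_prompt}]
    # Inject retrieved knowledge (if any) so the model can ground its answer.
    if knowledge_chunks:
        knowledge = "\n\n".join(knowledge_chunks)
        messages.append({"role": "system", "content": f"Relevant documents:\n{knowledge}"})
    # Include only the most recent turns that fit the configured window.
    messages.extend(history[-max_turns:])
    return messages

ctx = assemble_context(
    "You are a support assistant.",
    [{"role": "user", "content": "How do I reset my password?"}],
    ["Password resets are under Settings > Account."],
)
```

Note the ordering: system prompt first, retrieved knowledge next, conversation history last, so the most recent user turn sits closest to the model's generation point.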

2. Knowledge Lookup

If the agent has a knowledge base attached, it performs a semantic search using the user's message as the query. The top-ranked chunks are injected into the context before the LLM call. This allows the agent to answer questions grounded in your own documents without hallucinating.
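At its core, semantic search ranks pre-embedded chunks by cosine similarity to the query embedding. The sketch below shows that ranking step only, with toy vectors in place of a real embedding model; the function names are illustrative, not part of OpenAgent.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_chunks(query_vec, chunks, k=2):
    """chunks: list of (embedding, text) pairs. Returns the k most similar texts."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# Toy example: the query vector is closest to the "password reset" chunk.
top = top_k_chunks(
    [1.0, 0.0],
    [([0.0, 1.0], "billing FAQ"),
     ([0.9, 0.1], "password reset guide"),
     ([0.1, 0.9], "API changelog")],
    k=1,
)
```

In production the embeddings come from an embedding model and the ranking is done by a vector index rather than a full sort, but the similarity criterion is the same.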

3. LLM Reasoning

The LLM processes the full context and decides what to do:

  • Respond directly — if it has enough information, it generates a response.
  • Call a tool — if the task requires external data or an action, it emits a tool call.

If a tool is called, the result is fed back into the context and the LLM reasons again. This loop continues until the agent is ready to respond.
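The respond-or-call-a-tool loop can be sketched as follows. This is a minimal illustration, assuming the LLM returns either a response or a tool call as a dict; the shape of `action` and the names here are hypothetical, not OpenAgent's wire format.

```python
def run_agent_step(llm, messages, tools, max_iters=5):
    """Loop: call the LLM, execute any tool call, feed the result back."""
    for _ in range(max_iters):
        action = llm(messages)
        if action["type"] == "response":
            return action["content"]
        # Tool call: execute it and append the result to the context,
        # then let the LLM reason again over the enlarged context.
        result = tools[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent exceeded max tool-call iterations")

# Stub LLM: requests the 'add' tool once, then answers from the tool result.
def fake_llm(messages):
    if messages[-1]["role"] == "tool":
        return {"type": "response", "content": f"The answer is {messages[-1]['content']}"}
    return {"type": "tool_call", "tool": "add", "args": {"a": 2, "b": 3}}

answer = run_agent_step(
    fake_llm,
    [{"role": "user", "content": "What is 2 + 3?"}],
    {"add": lambda a, b: a + b},
)
```

The `max_iters` guard matters in practice: without it, a model that keeps emitting tool calls would loop forever.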

Agent Configuration

When creating an agent in the dashboard, you configure:

Field            Description
───────────────  ──────────────────────────────────────────────────
name             Display name
model            Which LLM to use (e.g., gpt-4o, claude-3-5-sonnet)
system_prompt    Instructions for the agent's behavior and persona
knowledge_base   Optional: knowledge base to search for each query
mcp_servers      Optional: list of MCP tool servers to connect
temperature      0–1, controls response creativity
max_tokens       Maximum output tokens per response
context_window   How many past turns to include
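Put together, a configuration using the fields above might look like this. The field names come from the table; every value is illustrative, and the dict format itself is a sketch rather than OpenAgent's actual config schema.

```python
# Illustrative agent configuration (values are examples, not defaults).
agent_config = {
    "name": "Acme Support Bot",
    "model": "gpt-4o",
    "system_prompt": "You are a technical support assistant for Acme Software.",
    "knowledge_base": "acme-product-docs",   # optional
    "mcp_servers": ["ticketing", "status-page"],  # optional
    "temperature": 0.3,      # low: favor consistent answers over creativity
    "max_tokens": 1024,
    "context_window": 20,    # past turns to include
}
```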

System Prompt Best Practices

The system prompt is the most important lever for agent behavior. Here are proven patterns:

Define the persona and scope

You are a technical support assistant for Acme Software.
You help users troubleshoot issues with our products.
Only answer questions related to our software — for unrelated topics,
politely redirect the user.

Instruct on knowledge use

When answering, always search the knowledge base first.
Cite the source document name in your response.
If the knowledge base doesn't contain the answer, say so clearly.

Set output format expectations

Format code examples in markdown code blocks.
Keep responses concise — prefer bullet points over long paragraphs.
Always end troubleshooting responses with "Does this resolve your issue?"

Multi-Agent Workflows

OpenAgent supports linking multiple agents together. Use cases include:

  • Triage agent — routes incoming messages to specialized agents
  • Research agent → Summary agent pipeline
  • Validation agent that reviews another agent's output

Multi-agent orchestration is configured via MCP tool calls. One agent can invoke another agent as a tool.
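The agent-as-tool idea can be sketched in a few lines: wrap one agent's entry point so it looks like any other tool to the caller. The agents below are stand-in lambdas and the helper name is hypothetical; the point is only the wiring pattern.

```python
def make_agent_tool(agent):
    """Wrap an agent's run function so another agent can call it like a tool."""
    def tool(message: str) -> str:
        return agent(message)
    return tool

# Stand-in "agents": a research step feeding a summary step.
research_agent = lambda query: f"findings about {query}"
summary_agent = lambda text: text.upper()

tools = {"research": make_agent_tool(research_agent)}
summary = summary_agent(tools["research"]("MCP"))
```

In a triage setup the routing agent would pick among several such wrapped agents based on the incoming message, exactly as it picks among ordinary MCP tools.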
