Agent Integration Guide
This guide walks through adding Masar to an LLM-based agent. By the end, your agent will plan before acting, verify after generating, and build memory from every task.
1. Install the SDK
```shell
pip install masar-client langchain langgraph
```
2. Create Masar Tools
Wrap Masar endpoints as tools your agent can call:
```python
from masar import MasarClient
from langchain.tools import tool

client = MasarClient()  # reads MASAR_API_KEY from env

@tool
def plan_instructions(goal: str, current: dict) -> dict:
    """Get dependency-ordered instructions to reach a goal."""
    return client.plan_instructions(current=current, goal=goal).to_dict()

@tool
def verify_schema(schema: dict) -> dict:
    """Check if a schema is valid and predict errors."""
    validity = client.verify(schema=schema)
    errors = client.error_check(schema=schema)
    return {
        "valid": validity.valid,
        "probability": validity.probability,
        "errors": errors.top_errors,
    }

@tool
def recall_memory(context: str, domain: str) -> dict:
    """Recall similar past experiences."""
    return client.memory.recall(context=context, domain=domain).to_dict()

@tool
def store_memory(schema: dict, domain: str, outcome: str) -> dict:
    """Store a completed episode."""
    return client.memory.store(schema=schema, domain=domain, outcome=outcome).to_dict()
```
3. Add Tools to Your Agent
```python
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
tools = [plan_instructions, verify_schema, recall_memory, store_memory]
agent = create_react_agent(llm, tools)
```
4. The Agent Workflow
A well-structured agent follows four phases:

Recall -> Plan -> Execute + Verify -> Store

- **Recall:** Check memory for similar past tasks. If a matching pattern exists, reuse it and skip planning.
- **Plan:** Ask Masar for dependency-ordered instructions, so the agent follows concrete steps instead of guessing.
- **Execute + Verify:** The LLM generates output for each instruction. After each step (or at the end), verify the result; if verification fails, use the repair endpoint.
- **Store:** Save the completed episode so future runs benefit from this experience.
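The four phases above can be sketched as a plain loop. Everything below is illustrative, not the real API: `StubClient` is a stand-in for `MasarClient` so the example runs offline (its method names echo the tools above but its return shapes are simplified assumptions), and the "execute" step is a placeholder for actual LLM generation.

```python
# Illustrative sketch of the Recall -> Plan -> Execute + Verify -> Store loop.
class StubClient:
    """Stand-in for MasarClient; return shapes are simplified assumptions."""

    def recall(self, context, domain):
        return []  # no prior episodes recorded yet

    def plan_instructions(self, current, goal):
        return ["create entity Ticket", "add field Ticket.priority"]

    def verify(self, schema):
        return {"valid": bool(schema)}

    def store(self, schema, domain, outcome):
        return {"stored": True}


def run_task(client, goal, domain="helpdesk"):
    schema = {}
    # 1. Recall: reuse a past pattern if one exists
    steps = client.recall(context=goal, domain=domain)
    # 2. Plan: otherwise ask for dependency-ordered instructions
    if not steps:
        steps = client.plan_instructions(current=schema, goal=goal)
    # 3. Execute + Verify: apply each step, then check the result
    for step in steps:
        schema[step] = "done"  # placeholder for real LLM generation
    if not client.verify(schema=schema)["valid"]:
        raise ValueError("schema failed verification; try the repair endpoint")
    # 4. Store: save the episode so future runs can recall it
    client.store(schema=schema, domain=domain, outcome="success")
    return schema


print(run_task(StubClient(), goal="std-helpdesk"))
```

Swapping `StubClient` for a real `MasarClient` keeps the control flow unchanged; only the return types get richer.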
5. Example Agent Run
```python
result = agent.invoke({
    "messages": [{"role": "user", "content": "Build a helpdesk ticketing system"}]
})
```
The agent will:

- Call `recall_memory` to check for past helpdesk builds
- Call `plan_instructions` with goal `"std-helpdesk"`
- Generate a schema using the LLM, following the plan
- Call `verify_schema` to check the result
- If invalid, use repair suggestions and regenerate
- Call `store_memory` to save the episode
Next Steps
- Memory Lifecycle - Manage your agent's growing memory
- Process API - Use the unified step-by-step endpoint
- Helpdesk Example - Full working example