LlamaIndex Integration Guide
Add a governed email mailbox to your LlamaIndex agents. Send, receive, and track email with built-in policy enforcement.
LlamaIndex agents can call any Python function as a tool. The problem is that email calls made from those tools have no governance layer: no deduplication, no cooldown windows, no suppression checks, no audit trail.
This guide shows how to give a LlamaIndex agent a managed mailbox. Every email the agent proposes runs through the Molted policy engine before it leaves - 20+ rules evaluated in under a second. The agent keeps sending through a familiar FunctionTool interface; policy runs at the infrastructure layer and cannot be bypassed by any instruction passed to the LLM.
Prerequisites
- A Molted account with an API key (sign up at molted.email/signup)
- A verified sending domain (see Domains)
- LlamaIndex installed:

  ```shell
  pip install llama-index llama-index-llms-openai
  ```

- Requests installed:

  ```shell
  pip install requests
  ```
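The snippets in this guide read credentials from environment variables. Set them before running anything (the values below are placeholders):

```shell
# Placeholders - substitute your real keys.
export MOLTED_API_KEY="your-molted-api-key"
export MOLTED_TENANT_ID="your-tenant-id"
export OPENAI_API_KEY="your-openai-api-key"   # used by the LlamaIndex OpenAI LLM
```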
1. Create a mailbox for your agent
Each LlamaIndex agent should have its own mailbox. Log in to the portal, go to Mailboxes, and create one - or use the API:
```shell
curl -X POST https://api.molted.email/v1/me/mailboxes \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Outbound Agent",
    "emailAddress": "agent@yourdomain.com"
  }'
```

Note the `mailboxId` in the response - you will pass it on each send request.
2. Define the send tool
LlamaIndex tools are plain Python functions wrapped with FunctionTool.from_defaults. The function signature and docstring become the tool definition the LLM sees.
```python
import os
from typing import Optional

import requests
from llama_index.core.tools import FunctionTool


def send_email(
    to: str,
    subject: str,
    body: str,
    dedupe_key: Optional[str] = None,
) -> str:
    """Send an email to a contact.

    The email is checked against policy rules before it is delivered.
    If blocked, the function returns the reason - do not retry a blocked send.

    Args:
        to: Recipient email address.
        subject: Email subject line.
        body: Email body in plain text or HTML.
        dedupe_key: Unique key to prevent duplicate sends. Defaults to recipient+subject.
    """
    api_key = os.environ["MOLTED_API_KEY"]
    tenant_id = os.environ["MOLTED_TENANT_ID"]

    payload = {
        "tenantId": tenant_id,
        "recipientEmail": to,
        "templateId": "_default",
        "dedupeKey": dedupe_key or f"{to}-{subject}",
        "agentId": "llamaindex-agent",
        "payload": {
            "subject": subject,
            "html": body,
            "text": body,
        },
    }

    response = requests.post(
        "https://api.molted.email/v1/agent/send/request",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json=payload,
        timeout=10,
    )
    data = response.json()

    if data.get("status") == "blocked":
        return (
            f"Email blocked by policy. Reason: {data.get('blockReason')}. "
            f"Decision trace: {data.get('requestId')}. Do not retry."
        )
    return (
        f"Email queued successfully. "
        f"requestId={data.get('requestId')}, status={data.get('status')}"
    )


send_email_tool = FunctionTool.from_defaults(fn=send_email)
```

The tool returns the policy decision back to the agent. When a send is blocked, the agent sees the reason and can decide how to proceed rather than silently failing or retrying.
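The fallback dedupe key (`recipient-subject`) is fine for one-off sends, but campaign-style sends benefit from an explicit convention. Below is a minimal helper - a sketch, not part of any SDK - that builds keys like `welcome-alice@example.com` and normalizes the recipient so retries of the same logical send always collide:

```python
def make_dedupe_key(campaign: str, recipient: str) -> str:
    """Build a stable dedupe key, e.g. 'welcome-alice@example.com'.

    Normalizing the recipient means 'Alice@Example.com' and
    'alice@example.com' map to the same key, so the policy
    engine's duplicate check catches both spellings.
    """
    return f"{campaign}-{recipient.strip().lower()}"


# Pass the result as dedupe_key when calling the send tool:
key = make_dedupe_key("welcome", " Alice@Example.com ")
print(key)  # welcome-alice@example.com
```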
3. Add a status-check tool (optional)
Agents can query delivery status of a prior send using the requestId from the send response:
```python
import os

import requests
from llama_index.core.tools import FunctionTool


def check_email_status(request_id: str) -> str:
    """Check the delivery status of a previously queued email.

    Args:
        request_id: The requestId returned when the email was sent.
    """
    api_key = os.environ["MOLTED_API_KEY"]
    response = requests.get(
        f"https://api.molted.email/v1/agent/send/{request_id}/status",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    data = response.json()
    return (
        f"Status: {data.get('status')}. "
        f"Provider: {data.get('provider', 'unknown')}. "
        f"Last event: {data.get('lastEvent', 'none')}."
    )


check_status_tool = FunctionTool.from_defaults(fn=check_email_status)
```

4. Build the agent
Wire the tools into a ReActAgent or FunctionCallingAgent. Both work the same way - use FunctionCallingAgent for models that natively support tool use (GPT-4o, Claude, Gemini):
```python
import os

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

from tools.send_email import send_email_tool
from tools.check_email_status import check_status_tool

llm = OpenAI(model="gpt-4o", api_key=os.environ["OPENAI_API_KEY"])

agent = ReActAgent.from_tools(
    tools=[send_email_tool, check_status_tool],
    llm=llm,
    verbose=True,
    system_prompt=(
        "You are an email outreach agent. You send policy-compliant emails through "
        "a managed mailbox. Every send is checked against policy rules before delivery. "
        "If a send is blocked, report the reason - do not retry. "
        "Use 'welcome-{email}' as the dedupeKey for welcome emails to prevent duplicates."
    ),
)

response = agent.chat(
    "Send a trial welcome email to alice@example.com. "
    "Let her know her 14-day trial starts today and she can reply with questions."
)
print(response)
```

Using FunctionCallingAgent
For function-calling models, FunctionCallingAgent avoids the ReAct chain-of-thought overhead:
```python
from llama_index.core.agent import FunctionCallingAgent
from llama_index.llms.openai import OpenAI

from tools.send_email import send_email_tool
from tools.check_email_status import check_status_tool

llm = OpenAI(model="gpt-4o")

agent = FunctionCallingAgent.from_tools(
    tools=[send_email_tool, check_status_tool],
    llm=llm,
    verbose=True,
)

response = agent.chat("Send a churn-prevention email to bob@example.com.")
print(response)
```

5. AgentWorkflow (newer API)
LlamaIndex's AgentWorkflow API (available in recent releases) lets you compose multi-step agentic pipelines with explicit state management. The same tools work unchanged:
```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

from tools.send_email import send_email_tool
from tools.check_email_status import check_status_tool

llm = OpenAI(model="gpt-4o")

agent = FunctionAgent(
    tools=[send_email_tool, check_status_tool],
    llm=llm,
    system_prompt=(
        "You are an outreach agent. Send emails through the governed mailbox. "
        "If a send is blocked, report the block reason to the caller. Do not retry."
    ),
)


async def main():
    response = await agent.run(
        "Send a reactivation email to carol@example.com who hasn't logged in for 30 days."
    )
    print(response)


asyncio.run(main())
```

6. Multi-agent setup
For pipelines with multiple agents sending email, register each agent separately. This gives you per-agent rate limits and a per-agent attribution trail in the decision trace:
```python
import os

import requests

api_key = os.environ["MOLTED_API_KEY"]
tenant_id = os.environ["MOLTED_TENANT_ID"]

agents = [
    {"name": "onboarding-agent", "config": {"humanizer_enabled": True, "humanizer_style": "friendly"}},
    {"name": "churn-agent", "config": {"humanizer_enabled": True, "humanizer_style": "professional"}},
    {"name": "billing-agent", "config": {"humanizer_enabled": False}},
]

agent_ids = {}
for agent in agents:
    response = requests.post(
        "https://api.molted.email/v1/agent/register",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json={"tenantId": tenant_id, **agent},
        timeout=10,
    )
    data = response.json()
    agent_ids[agent["name"]] = data["id"]
    print(f"Registered {agent['name']}: agentId={data['id']}")
```

Pass the registered `agentId` in each send call instead of a hardcoded string:
```python
payload = {
    "tenantId": tenant_id,
    "agentId": agent_ids["onboarding-agent"],  # from registration
    # ...
}
```

7. TypeScript / LlamaIndex.TS
LlamaIndex also has a TypeScript SDK. The integration pattern is the same:
```typescript
import { FunctionTool } from "llamaindex";

export const sendEmailTool = FunctionTool.from(
  async ({
    to,
    subject,
    body,
    dedupeKey,
  }: {
    to: string;
    subject: string;
    body: string;
    dedupeKey?: string;
  }) => {
    const response = await fetch(
      "https://api.molted.email/v1/agent/send/request",
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.MOLTED_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          tenantId: process.env.MOLTED_TENANT_ID,
          recipientEmail: to,
          templateId: "_default",
          dedupeKey: dedupeKey ?? `${to}-${subject}`,
          agentId: "llamaindex-agent",
          payload: { subject, html: body, text: body },
        }),
      }
    );
    const data = await response.json();
    if (data.status === "blocked") {
      return `Email blocked: ${data.blockReason}. Decision trace: ${data.requestId}. Do not retry.`;
    }
    return `Email queued: requestId=${data.requestId}, status=${data.status}`;
  },
  {
    name: "send_email",
    description:
      "Send an email to a contact. Checked against policy rules before delivery. If blocked, returns the reason - do not retry.",
    parameters: {
      type: "object",
      properties: {
        to: { type: "string", description: "Recipient email address" },
        subject: { type: "string", description: "Email subject line" },
        body: { type: "string", description: "Email body in plain text or HTML" },
        dedupeKey: {
          type: "string",
          description: "Unique key to prevent duplicate sends",
        },
      },
      required: ["to", "subject", "body"],
    },
  }
);
```

```typescript
import { OpenAI, ReActAgent } from "llamaindex";

import { sendEmailTool } from "./tools/sendEmail";

const llm = new OpenAI({ model: "gpt-4o" });

const agent = new ReActAgent({
  tools: [sendEmailTool],
  llm,
});

const response = await agent.chat({
  message: "Send a welcome email to alice@example.com.",
});
console.log(response.response);
```

8. Handle policy blocks
Common block reasons and how agents should respond:
| Reason | What it means | Recommended agent behavior |
|---|---|---|
| `duplicate_send` | Same `dedupeKey` used within cooldown window | Inform the user, do not retry |
| `rate_limit_exceeded` | Mailbox hit its per-minute, per-hour, or per-day limit | Stop sends, report limit hit |
| `suppressed_recipient` | Contact has unsubscribed or hard-bounced | Skip this contact, do not retry |
| `cooldown_active` | Per-recipient cooldown in effect | Report ETA if available, do not retry now |
| `risk_budget_exceeded` | Agent risk budget exhausted for this period | Stop sends, escalate to human |
| `consent_required` | No valid consent record for this contact | Do not send, request consent first |
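The same mapping can also be enforced deterministically in application code rather than trusting the LLM to follow its prompt. A minimal sketch, assuming the block-reason strings from the table above (the action names are illustrative, not part of any API):

```python
def block_action(reason: str) -> str:
    """Map a blockReason string to the recommended agent behavior."""
    if reason in {"suppressed_recipient", "consent_required"}:
        return "skip_contact"        # never send to this recipient
    if reason in {"rate_limit_exceeded", "risk_budget_exceeded"}:
        return "halt_all_sends"      # stop the whole run, escalate
    if reason in {"duplicate_send", "cooldown_active"}:
        return "report_and_continue" # no retry, move to the next task
    return "escalate_to_human"       # unrecognized rule fired


print(block_action("duplicate_send"))  # report_and_continue
```

Running this guard in the tool wrapper means a misbehaving prompt can never turn a terminal block into a retry loop.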
Add explicit block-handling instructions to your agent's system prompt so the LLM knows what to do:
```python
system_prompt = """
You are an email outreach agent. When a send is blocked:
- 'duplicate_send': email was already sent recently. Report to the user. Do not retry.
- 'suppressed_recipient': contact has opted out. Skip and move to next contact.
- 'rate_limit_exceeded': mailbox rate limit hit. Stop all sends and report.
- 'cooldown_active': cooldown in effect. Report when it will lift. Do not retry now.
- Any other block: report the reason and requestId. Do not retry without explicit instruction.
"""
```

What gets enforced
When your agent calls send_email, the mailbox evaluates 20+ policy rules before anything leaves:
- Suppression - has this recipient opted out or hard-bounced?
- Deduplication - has the same `dedupeKey` been used within the cooldown window?
- Cooldown - is there an active per-recipient cooldown?
- Rate limits - has the mailbox hit its per-minute, per-hour, or per-day budget?
- Risk budget - has this agent exhausted its risk allocation?
- Consent - does the contact have a valid consent record?
If all rules pass, the email is delivered through the managed sending infrastructure with automatic failover. If any rule fires, the send is blocked and the decision trace records exactly which rule triggered and why. No instruction passed to the LLM - and no tool call the agent makes - can override these checks.
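For reference, the only decision fields the tools in this guide rely on are `status`, `blockReason`, and `requestId`. A blocked decision might look like this (values are illustrative; the real response body may carry additional fields):

```python
# Illustrative blocked decision - only these three fields are read by the tools above.
blocked = {
    "requestId": "req_123",           # decision trace identifier (example value)
    "status": "blocked",
    "blockReason": "duplicate_send",  # which policy rule fired
}

is_blocked = blocked.get("status") == "blocked"
print(is_blocked)  # True
```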
Related
- Quickstart - send your first policy-checked email
- LangChain Integration Guide - LangChain-specific setup
- CrewAI Integration Guide - multi-agent crew setup
- Policy Simulation - test policy rules against draft sends
- Reactive Agent Guide - build agents that respond to inbound email
- Autonomy Levels - require human approval for high-risk sends