How to Build a Claims Processing Agent Using LangChain in Python for Wealth Management
A claims processing agent for wealth management takes incoming client claims, extracts the relevant facts, checks them against policy and account data, routes exceptions, and produces an auditable decision trail. It matters because wealth firms handle high-value accounts, sensitive personal data, and regulated workflows where slow manual review creates operational risk, compliance exposure, and poor client experience.
Architecture
- Input ingestion layer
  - Accepts claim emails, PDFs, scanned forms, or structured API payloads.
  - Normalizes them into a single internal schema before any LLM step runs.
- Document extraction layer
  - Uses OCR or parser output to turn unstructured claim packets into text.
  - Splits long documents with `RecursiveCharacterTextSplitter` when needed.
- LLM reasoning layer
  - Uses LangChain chat models to classify claim type, extract fields, and draft a recommendation.
  - Keeps prompts narrow so the model only handles interpretation, not policy ownership.
- Policy and eligibility tools
  - Calls internal systems for account status, KYC flags, coverage rules, beneficiary data, and transaction history.
  - Exposes these through LangChain tools so the agent can reason with live data.
- Decision and audit layer
  - Produces structured outputs with `PydanticOutputParser` or tool outputs.
  - Stores prompt inputs, tool calls, model responses, and final disposition for audit.
- Control plane
  - Enforces human review thresholds, data residency rules, redaction policies, and approval routing.
  - Prevents the agent from auto-closing claims above a defined risk threshold.
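The control-plane gate itself should be plain deterministic code, not a model call. A minimal sketch of the auto-close threshold check — the dollar limit, rule set, and function name here are illustrative placeholders, not any firm's actual policy:

```python
# Deterministic control-plane gate: decides whether a claim may be
# auto-closed or must be routed to a human reviewer.
# The threshold and rules below are illustrative placeholders.

AUTO_CLOSE_LIMIT_USD = 250_000  # hypothetical firm-specific limit


def requires_human_review(claim_value_usd: float, risk_level: str,
                          claim_type: str) -> bool:
    """Return True if the claim must be escalated to a human."""
    if claim_type == "fraud_review":            # fraud is always escalated
        return True
    if risk_level == "high":                    # model-flagged high risk
        return True
    if claim_value_usd > AUTO_CLOSE_LIMIT_USD:  # above the auto-close limit
        return True
    return False


print(requires_human_review(10_000, "low", "fee_dispute"))   # False
print(requires_human_review(300_000, "low", "fee_dispute"))  # True
```

Because this lives outside the LLM, the threshold can be audited, versioned, and changed without touching a prompt.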
Implementation
1) Define the claim schema and output contract
For wealth management workflows, do not let the model return free-form prose. Use a typed schema so downstream systems can validate decisions before they hit case management or CRM.
```python
from typing import Literal, Optional

from pydantic import BaseModel, Field


class ClaimDecision(BaseModel):
    claim_id: str = Field(..., description="Internal claim identifier")
    claimant_name: str
    claim_type: Literal["death_benefit", "transfer_error", "fraud_review", "fee_dispute"]
    summary: str
    recommended_action: Literal["approve", "deny", "needs_review"]
    risk_level: Literal["low", "medium", "high"]
    rationale: str
    missing_information: list[str] = []
    compliance_notes: Optional[str] = None
```
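The value of the typed contract shows up at the validation boundary: well-formed output parses cleanly, and anything outside the `Literal` sets is rejected before it reaches case management. A quick sketch, assuming Pydantic v2 and using made-up claim values (the class is repeated here so the snippet runs standalone):

```python
from typing import Literal, Optional

from pydantic import BaseModel, Field, ValidationError


class ClaimDecision(BaseModel):  # same schema as above
    claim_id: str = Field(..., description="Internal claim identifier")
    claimant_name: str
    claim_type: Literal["death_benefit", "transfer_error", "fraud_review", "fee_dispute"]
    summary: str
    recommended_action: Literal["approve", "deny", "needs_review"]
    risk_level: Literal["low", "medium", "high"]
    rationale: str
    missing_information: list[str] = []
    compliance_notes: Optional[str] = None


# A well-formed payload parses cleanly (values are illustrative).
good = ClaimDecision.model_validate({
    "claim_id": "CLM-88321",
    "claimant_name": "Maria Chen",
    "claim_type": "transfer_error",
    "summary": "Transfer error after beneficiary update.",
    "recommended_action": "needs_review",
    "risk_level": "medium",
    "rationale": "Trade confirmation not attached.",
    "missing_information": ["trade confirmation"],
})

# An out-of-contract action is rejected before it can hit
# case management or CRM.
try:
    ClaimDecision.model_validate(
        {**good.model_dump(), "recommended_action": "auto_settle"}
    )
except ValidationError:
    print("rejected out-of-contract action")
```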
2) Build the LangChain prompt and model chain
Use `ChatPromptTemplate`, `ChatOpenAI`, and `PydanticOutputParser`. The prompt should instruct the model to stay inside policy boundaries and flag anything that needs human review.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import PydanticOutputParser

parser = PydanticOutputParser(pydantic_object=ClaimDecision)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a claims processing assistant for a wealth management firm. "
     "Extract facts only from the provided context. "
     "Do not invent policy outcomes. "
     "If information is incomplete or contradictory, set recommended_action to needs_review."),
    ("user",
     """Process this claim packet:

Claim packet:
{claim_text}

Policy context:
{policy_context}

Account context:
{account_context}

Return output in this format:
{format_instructions}"""),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | parser
```
3) Add tools for policy lookup and account checks
LangChain agents work better when you separate reasoning from retrieval. Use tools for deterministic checks like account status or residency constraints instead of asking the model to guess.
```python
from langchain_core.tools import tool


@tool
def get_account_status(account_id: str) -> str:
    """Look up live account status, KYC state, and jurisdiction."""
    # Replace with your internal service call
    return f"Account {account_id}: active; KYC=passed; jurisdiction=US"


@tool
def get_claim_policy(claim_type: str) -> str:
    """Return the evidence requirements for a claim type."""
    policies = {
        "death_benefit": "Requires death certificate and beneficiary verification.",
        "transfer_error": "Requires trade confirmation and operations review.",
        "fraud_review": "Always escalate to fraud team.",
        "fee_dispute": "Requires fee schedule comparison.",
    }
    return policies.get(claim_type, "Unknown policy")
```

Note the docstrings: the `@tool` decorator requires one (it becomes the tool description the model sees when deciding which tool to call).
If you want an agent that can decide when to call these tools, use `create_tool_calling_agent` plus `AgentExecutor`. For many claims workflows, though, a deterministic orchestration flow is safer than a fully autonomous agent.
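Deterministic orchestration can start with something as plain as a dispatch table that branches on claim category before any LLM call. A sketch — the handler names and queue labels are illustrative placeholders:

```python
# Deterministic triage: branch on claim category first, so each
# category gets its own evidence checks and routing.
# Handler names and queue labels are illustrative placeholders.

def handle_death_benefit(claim: dict) -> str:
    return "queue:estate_review"

def handle_transfer_error(claim: dict) -> str:
    return "queue:operations"

def handle_fraud_review(claim: dict) -> str:
    return "queue:fraud_team"  # never auto-processed

def handle_fee_dispute(claim: dict) -> str:
    return "queue:billing"

ROUTES = {
    "death_benefit": handle_death_benefit,
    "transfer_error": handle_transfer_error,
    "fraud_review": handle_fraud_review,
    "fee_dispute": handle_fee_dispute,
}

def route_claim(claim: dict) -> str:
    handler = ROUTES.get(claim.get("claim_type"))
    if handler is None:
        return "queue:manual_triage"  # unknown types go to a human
    return handler(claim)

print(route_claim({"claim_type": "fraud_review"}))  # queue:fraud_team
```

The LLM can still classify the claim type, but the routing itself stays in code you can test.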
4) Orchestrate extraction → validation → decision
This pattern is production-friendly because each step is observable. The LLM extracts structure first; then your code validates it against internal rules before any disposition is finalized.
```python
from langchain.agents import create_tool_calling_agent, AgentExecutor

tools = [get_account_status, get_claim_policy]

agent_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a claims assistant. Use tools when needed. "
     "Always produce a compliant recommendation."),
    ("user", "{input}"),
    # Required by create_tool_calling_agent: holds intermediate tool calls.
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=agent_prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

claim_text = """
Claim ID: CLM-88321
Claimant: Maria Chen
Account ID: A-10092
Request: Transfer error after beneficiary update.
Documents attached: statement.pdf
"""

policy_context = get_claim_policy.invoke({"claim_type": "transfer_error"})
account_context = get_account_status.invoke({"account_id": "A-10092"})

result = chain.invoke({
    "claim_text": claim_text,
    "policy_context": policy_context,
    "account_context": account_context,
    "format_instructions": parser.get_format_instructions(),
})
print(result.model_dump())
```
That gives you structured extraction. If you need multi-step tool use during triage—like checking multiple systems or escalating based on missing documents—wrap those steps in an AgentExecutor workflow and keep the final disposition gated by your own business logic.
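Gating the final disposition can be a small post-processing function that applies hard rules on top of the model's recommendation. A minimal sketch — the specific rules here are placeholders, not a real adjudication policy:

```python
def finalize_disposition(decision: dict) -> str:
    """Apply deterministic business rules on top of the model's
    recommendation. The model only recommends; this function owns
    the final disposition. Rules shown are illustrative."""
    recommendation = decision["recommended_action"]

    # Hard rules override the model regardless of its confidence.
    if decision.get("missing_information"):
        return "needs_review"  # incomplete evidence always escalates
    if decision["risk_level"] == "high":
        return "needs_review"  # high risk always escalates
    if recommendation == "approve" and decision["claim_type"] == "fraud_review":
        return "needs_review"  # fraud claims never auto-approve
    return recommendation


print(finalize_disposition({
    "recommended_action": "approve",
    "risk_level": "low",
    "claim_type": "transfer_error",
    "missing_information": [],
}))  # approve
```

Feed it the parsed `ClaimDecision` (as a dict) and record both the model's recommendation and the final disposition for the audit trail.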
Production Considerations
- Compliance controls
  - Log every prompt, tool call, model response, and final decision with immutable timestamps.
  - Keep an approval boundary for any claim involving AML flags, vulnerable clients, deceased clients' estates, or high-value transfers.
- Data residency
  - Route EU client claims to EU-hosted infrastructure if your firm has residency requirements.
  - Redact account numbers, tax IDs, addresses, and beneficiary details before sending text to the model when possible.
- Monitoring
  - Track extraction accuracy by field: claimant name matches are not enough; measure policy type detection and missing-document detection separately.
  - Alert on spikes in `needs_review`, tool failures, or repeated hallucinated references to nonexistent policy clauses.
- Guardrails
  - Enforce hard-coded thresholds outside the model. For example: any fraud-related claim above a certain value must go to human review.
  - Use allowlisted tools only. Do not expose arbitrary database queries or unrestricted document access through the agent.
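Redaction before the model call can be handled with plain pattern matching over the identifier formats your firm uses. A minimal stdlib sketch — the two patterns below are examples only, not a complete PII taxonomy:

```python
import re

# Illustrative patterns: extend with the identifier formats your
# firm actually uses (account masks, tax IDs, IBANs, etc.).
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),  # US SSN-style IDs
    (re.compile(r"\bA-\d{4,}\b"), "[ACCOUNT_ID]"),    # internal account IDs
]


def redact(text: str) -> str:
    """Mask sensitive identifiers before text is sent to the model."""
    for pattern, token in REDACTION_PATTERNS:
        text = pattern.sub(token, text)
    return text


print(redact("Claimant SSN 123-45-6789, account A-10092."))
# Claimant SSN [SSN], account [ACCOUNT_ID].
```

Run this on the claim packet before it is interpolated into `{claim_text}`, and keep the mapping from tokens back to real values in your own systems, never in the prompt.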
Common Pitfalls
- Letting the model make final decisions without validation
  - Fix it by separating extraction from adjudication.
  - The LLM should recommend; your rule engine should approve or escalate.
- Sending raw sensitive data into prompts
  - Fix it by redacting identifiers where possible and minimizing context.
  - In wealth management, least privilege applies to prompts too.
- Using a generic agent for every claim type
  - Fix it by branching on claim category first.
  - Death benefit claims need different evidence than fee disputes or transfer errors.
- Skipping auditability
  - Fix it by storing input text hashes, tool outputs like `get_account_status`, parsed JSON results from `PydanticOutputParser`, and reviewer overrides.
  - If compliance asks why a claim was escalated six months later, you need more than the final answer.
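The audit record itself needs no LLM machinery: a hash of the raw input plus the structured artifacts from each step is enough to reconstruct a decision later. A stdlib sketch — the field names are illustrative, not a compliance standard:

```python
import hashlib
import json
from datetime import datetime, timezone


def build_audit_record(claim_text: str, tool_outputs: dict,
                       parsed_decision: dict) -> dict:
    """Assemble an audit record: the raw input is stored as a hash,
    structured artifacts are stored verbatim. Field names are
    illustrative placeholders."""
    return {
        "input_sha256": hashlib.sha256(claim_text.encode("utf-8")).hexdigest(),
        "tool_outputs": tool_outputs,
        "decision": parsed_decision,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


record = build_audit_record(
    "Claim ID: CLM-88321 ...",
    {"get_account_status": "Account A-10092: active"},
    {"recommended_action": "needs_review"},
)
# Serialize for an append-only store.
print(json.dumps(record, indent=2))
```

Storing the hash rather than the raw packet keeps sensitive text out of the audit store while still letting you prove which input produced which decision.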
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.