How AI agents work — the planning, execution, and tool-calling lifecycle that agent() connects your Angular app to. This page shows you the Python patterns that power modern agents and exactly how each pattern surfaces in Angular through @ngaf/langgraph.
Python + Angular, both sides
Every section below shows the Python backend code first, then the Angular frontend code that consumes it. You need both halves to build a production agent application — LangGraph handles the intelligence, agent() handles the reactivity.
The Agent Loop
Every agent follows a five-phase cycle. Understanding this cycle is critical because each phase maps to a specific agent() signal in your Angular app.
1. Receive
The user sends a message. On the Angular side, submit() posts input to LangGraph Platform. On the Python side, the message lands in the graph's messages state key.
```python
from operator import add
from typing_extensions import TypedDict, Annotated

class AgentState(TypedDict):
    messages: Annotated[list, add]
    plan: list[str]
    tool_results: dict
```
2. Plan
The LLM examines the full message history plus any accumulated state. It decides what to do next — respond directly, call one or more tools, or delegate to a subagent.
```python
from langchain_core.runnables import RunnableConfig

def plan(state: AgentState, config: RunnableConfig) -> dict:
    system = """You are a research assistant. Given the conversation,
    decide whether to respond directly, search for information, or
    analyze data. Use tools when the user needs factual answers."""
    response = llm.bind_tools(tools).invoke([
        {"role": "system", "content": system},
        *state["messages"],
    ])
    return {"messages": [response]}
```
3. Execute
If the LLM decided to call tools, LangGraph routes to the tool node. Tools run — database queries, API calls, code execution — and their results feed back into state as ToolMessage entries.
```python
from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools)
# LangGraph automatically calls each tool the LLM requested
# and appends ToolMessage results to state["messages"]
```
4. Respond
After tools finish (or if no tools were needed), the agent streams its final response token by token. agent() updates the messages() signal in real time so your Angular template re-renders incrementally.
```html
<!-- Angular side — messages update as tokens arrive -->
@if (agent.isLoading()) {
  <app-typing-indicator />
}
@for (msg of agent.messages(); track msg.id) {
  <app-message [message]="msg" />
}
```
5. Checkpoint
LangGraph checkpoints the full state — messages, tool results, plan, everything. The agent may loop back to Plan (if tools returned data that needs further reasoning) or finish. The checkpoint is what enables time-travel debugging via history().
```python
from langgraph.checkpoint.postgres import PostgresSaver

# from_conn_string returns a context manager in current langgraph versions
with PostgresSaver.from_conn_string(DATABASE_URL) as checkpointer:
    checkpointer.setup()  # create checkpoint tables on first run
    graph = builder.compile(checkpointer=checkpointer)
```
ReAct Pattern
ReAct (Reason + Act) is the most common agent pattern. The agent reasons about the user's question, decides to call a tool, observes the result, and loops until it has enough information to answer.
```python
import json

import httpx
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.prebuilt import ToolNode
from operator import add
from sqlalchemy import text
from typing_extensions import TypedDict, Annotated

# vector_store and db are assumed to be configured elsewhere in your app

# --- State ---
class AgentState(TypedDict):
    messages: Annotated[list, add]

# --- Tools ---
@tool
def search_docs(query: str) -> str:
    """Search the knowledge base for relevant documents."""
    results = vector_store.similarity_search(query, k=3)
    return "\n\n".join(doc.page_content for doc in results)

@tool
def query_database(sql: str) -> str:
    """Run a read-only SQL query against the analytics database."""
    rows = db.execute(text(sql)).fetchall()
    return json.dumps([dict(r) for r in rows])

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    resp = httpx.get(f"https://api.weather.com/v1/{city}")
    return resp.json()["summary"]

tools = [search_docs, query_database, get_weather]

# --- LLM with tools bound ---
llm = ChatOpenAI(model="gpt-5-mini")

def call_model(state: AgentState) -> dict:
    response = llm.bind_tools(tools).invoke(state["messages"])
    return {"messages": [response]}

# --- Routing ---
def should_continue(state: AgentState) -> str:
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

# --- Graph ---
builder = StateGraph(AgentState)
builder.add_node("model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "model")
builder.add_conditional_edges("model", should_continue)
builder.add_edge("tools", "model")  # After tools, reason again
graph = builder.compile()
```
The key insight: should_continue is the decision point. If the LLM's response contains tool_calls, the graph routes to the tools node. If not, it ends. After tools execute, the graph loops back to model so the LLM can reason about the tool results. This loop continues until the LLM responds without requesting any tools.
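This control flow can be traced without any dependencies. In the sketch below, `fake_model` and `fake_tool` are hypothetical stubs standing in for the real LLM and `ToolNode`; the point is to show how the `should_continue` decision drives the loop until the model stops requesting tools:

```python
def fake_model(messages):
    # First turn: request a tool; second turn: answer using the result.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "content": "",
                "tool_calls": [{"name": "get_weather", "args": {"city": "Paris"}}]}
    return {"role": "ai", "content": "It is sunny in Paris.", "tool_calls": []}

def fake_tool(name, args):
    return {"role": "tool", "content": f"{name}({args['city']}) -> sunny"}

def should_continue(messages):
    # Same decision point as the graph: route on the presence of tool_calls
    return "tools" if messages[-1]["tool_calls"] else "end"

def run(user_input):
    messages = [{"role": "user", "content": user_input, "tool_calls": []}]
    while True:
        messages.append(fake_model(messages))
        if should_continue(messages) == "end":
            return messages
        for call in messages[-1]["tool_calls"]:
            messages.append({**fake_tool(call["name"], call["args"]), "tool_calls": []})

transcript = run("What's the weather in Paris?")
# Message roles follow the ReAct cycle: user, ai (tool request), tool, ai (answer)
```

The transcript ends the moment the model emits a response with an empty `tool_calls` list, which is exactly the `END` branch of `should_continue` in the graph above.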
Tool Calling Deep Dive
Tools are how agents interact with the outside world. Understanding both the Python definition and the Angular consumption is essential.
Defining Tools in Python
Every tool is a Python function decorated with @tool. LangGraph converts the function signature and docstring into the JSON schema that the LLM uses to decide when and how to call it:
```python
from langchain_core.tools import tool
from pydantic import BaseModel, Field

# mail_service is your application's email client, configured elsewhere

# Simple tool — args inferred from function signature
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression and return the result."""
    return str(eval(expression))  # Use a sandbox in production

# Structured tool — explicit schema with validation
class EmailInput(BaseModel):
    to: str = Field(description="Recipient email address")
    subject: str = Field(description="Email subject line")
    body: str = Field(description="Email body content")

@tool(args_schema=EmailInput)
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified recipient."""
    mail_service.send(to=to, subject=subject, body=body)
    return f"Email sent to {to}"
```
Docstrings matter
The LLM reads the docstring to decide when to call a tool. A vague docstring like "does stuff" means the LLM will not know when to use it. Be specific: what the tool does, what it returns, when to use it.
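Under the hood, the decorator derives a schema from the function itself. The snippet below is a simplified, dependency-free sketch of that derivation (`tool_schema` is a hypothetical helper, not a LangChain API); it makes visible why the docstring ends up in front of the LLM:

```python
import inspect

def tool_schema(fn):
    """Sketch of how a function signature + docstring become a tool schema."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        # The docstring IS the tool description the LLM reasons over
        "description": inspect.getdoc(fn),
        "parameters": {
            name: param.annotation.__name__
            for name, param in sig.parameters.items()
        },
    }

def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return "sunny"

schema = tool_schema(get_weather)
```

A vague docstring produces a vague `description` field, and the model has nothing else to go on when deciding whether this tool fits the user's request.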
How Tools Surface in Angular
When the agent calls a tool, agent() exposes the execution lifecycle through toolCalls():
```ts
// toolCalls() — tool calls with status, args, and results
// Updates in real time as tools start and complete
const agent = agent<AgentState>({
  assistantId: 'react_agent',
});

// Each entry has: id, name, args, status, and optional result
const activeTools = computed(() =>
  agent.toolCalls().filter((tool) => tool.status === 'running')
);

// Template usage
@Component({
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    @for (tool of activeTools(); track tool.id) {
      <div class="tool-executing">
        <app-spinner />
        <span>Running {{ tool.name }}...</span>
        <pre>{{ tool.args | json }}</pre>
      </div>
    }
  `,
})
export class ToolProgressComponent {
  activeTools = computed(() =>
    this.agent.toolCalls().filter((tool) => tool.status === 'running')
  );
}
```
Tool Execution Flow
The full lifecycle from Python tool definition to Angular UI update:
1. LLM requests tool
The model returns an AIMessage with a tool_calls array. Each entry specifies the tool name and arguments.
2. LangGraph routes to ToolNode
The should_continue conditional edge detects tool_calls and routes to the tools node.
3. Tool executes
ToolNode calls the Python function. The result is wrapped in a ToolMessage and appended to state.
4. SSE event streams
LangGraph Platform streams the tool call and result as SSE events to the Angular client.
5. agent() updates signals
toolCalls() updates as the tool moves through pending, running, complete, and error states. Each update triggers OnPush change detection.
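The server-side half of this flow condenses to a few lines. The following is a simplified sketch of what ToolNode does, not the library implementation; the registry and plain message dicts are illustrative stand-ins:

```python
def tool_node(state, registry):
    """Run each tool the LLM requested; wrap each result as a tool message."""
    results = []
    for call in state["messages"][-1]["tool_calls"]:
        # Look up the Python function by name and call it with the LLM's args
        output = registry[call["name"]](**call["args"])
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],  # links the result back to the request
            "content": output,
        })
    return {"messages": results}

# Hypothetical registry and tool-call request for the demo
registry = {"calculate": lambda expression: str(eval(expression))}
state = {"messages": [{
    "role": "ai",
    "tool_calls": [{"id": "call_1", "name": "calculate",
                    "args": {"expression": "2 + 2"}}],
}]}
update = tool_node(state, registry)
```

Each result carries the `tool_call_id` of the request that produced it; that pairing is what lets the streaming layer, and ultimately `toolCalls()`, match results to pending calls.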
Multi-Agent Architecture
When a single agent with tools is not enough, you can compose multiple agents into a supervisor-worker architecture. A supervisor agent receives the user's request, decides which specialist to delegate to, and synthesizes the final answer.
```python
from operator import add
from typing_extensions import TypedDict, Annotated

from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph

# route_tool is a structured tool (defined elsewhere) whose single
# "agent" argument names the destination

class OrchestratorState(TypedDict):
    messages: Annotated[list, add]
    next_agent: str
    research_output: str
    analysis_output: str

llm = ChatOpenAI(model="gpt-5-mini")

# --- Supervisor ---
def supervisor(state: OrchestratorState) -> dict:
    response = llm.bind_tools([route_tool]).invoke([
        {"role": "system", "content": """You are a supervisor.
        Route to 'researcher' for fact-finding, 'analyst' for data
        analysis, 'writer' for drafting content, or 'finish' if the
        task is complete."""},
        *state["messages"],
    ])
    destination = response.tool_calls[0]["args"]["agent"]
    return {"next_agent": destination, "messages": [response]}

# --- Specialist subagents (each is its own compiled graph) ---
researcher_graph = build_researcher_agent()
analyst_graph = build_analyst_agent()
writer_graph = build_writer_agent()

# --- Routing ---
def route_to_agent(state: OrchestratorState) -> str:
    return state["next_agent"]

# --- Orchestrator graph ---
builder = StateGraph(OrchestratorState)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher_graph)
builder.add_node("analyst", analyst_graph)
builder.add_node("writer", writer_graph)
builder.add_edge(START, "supervisor")
builder.add_conditional_edges("supervisor", route_to_agent, {
    "researcher": "researcher",
    "analyst": "analyst",
    "writer": "writer",
    "finish": END,
})
# After each specialist, return to supervisor
builder.add_edge("researcher", "supervisor")
builder.add_edge("analyst", "supervisor")
builder.add_edge("writer", "supervisor")
graph = builder.compile()
```
subagentToolNames is the key
The subagentToolNames option tells agent() which tool calls spawn subagents. The default Deep Agents tool name is task; set this option when your graph uses custom delegation tool names.
Error Handling and Recovery
Agents fail. Tools throw exceptions, APIs time out, LLMs hallucinate invalid tool arguments. A robust architecture handles all of these gracefully.
Python-Side Error Handling
```python
import json

from langchain_core.tools import tool, ToolException
from sqlalchemy import text

# db is the application's database handle, configured elsewhere

@tool(handle_tool_error=True)
def query_database(sql: str) -> str:
    """Run a read-only SQL query against the analytics database."""
    if "DROP" in sql.upper() or "DELETE" in sql.upper():
        raise ToolException("Destructive queries are not allowed.")
    try:
        rows = db.execute(text(sql)).fetchall()
        return json.dumps([dict(r) for r in rows])
    except Exception as e:
        raise ToolException(f"Query failed: {e}")
```
When handle_tool_error=True is set, LangGraph catches ToolException and feeds the error message back to the LLM as a ToolMessage. The LLM sees the error and can retry with corrected arguments or explain the failure to the user.
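The mechanism can be sketched without any LangChain imports. Here `run_tool_safely` is a hypothetical stand-in for the tool runtime, showing how a raised ToolException becomes an ordinary tool message instead of a crashed run:

```python
class ToolException(Exception):
    """Raised by a tool to signal a recoverable, LLM-visible failure."""

def run_tool_safely(fn, args):
    # Sketch of handle_tool_error=True: the exception text is returned
    # to the LLM as a normal tool result rather than propagating upward.
    try:
        return {"role": "tool", "content": fn(**args), "status": "success"}
    except ToolException as exc:
        return {"role": "tool", "content": f"Error: {exc}", "status": "error"}

def query_database(sql):
    if "DROP" in sql.upper():
        raise ToolException("Destructive queries are not allowed.")
    return "[]"

ok = run_tool_safely(query_database, {"sql": "SELECT 1"})
bad = run_tool_safely(query_database, {"sql": "DROP TABLE users"})
```

Because the failure arrives as message content, the LLM can read it on the next planning step and either retry with corrected arguments or explain the problem to the user.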
How Errors Surface in Angular
```ts
const agent = agent<AgentState>({
  assistantId: 'react_agent',
});

// The error() signal captures both transport and agent errors
const error = computed(() => agent.error());

// In your template
@Component({
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    @if (error()) {
      <app-error-banner [error]="error()" (retry)="retry()" />
    }
  `,
})
export class AgentComponent {
  error = computed(() => this.agent.error());

  retry() {
    // Re-submit the last message to retry
    this.agent.submit(this.lastInput);
  }
}
```
Error Recovery Strategies
| Error type | Python behavior | Angular signal |
| --- | --- | --- |
| Tool throws ToolException | Error fed back to LLM, agent retries | toolCalls() shows error in result |
| Tool throws unexpected error | LangGraph catches it, marks tool as failed | error() fires with details |
| LLM returns invalid tool args | ToolNode validation fails, error fed to LLM | toolCalls() shows failed status |
| Transport error (network) | N/A | error() fires, status() becomes 'error' |
| Agent exceeds recursion limit | Graph raises GraphRecursionError | error() fires with recursion message |
Recursion limits
LangGraph defaults to a recursion limit of 25 steps. If your agent loops between model and tools more than 25 times, the run stops with a GraphRecursionError. Raise the limit per run by passing it in the invocation config, for example graph.invoke(inputs, {"recursion_limit": 50}), or redesign the agent to converge faster.
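The semantics are easy to model in isolation. Below is a dependency-free sketch (`run_with_limit` and the stub GraphRecursionError are illustrative, not LangGraph APIs) of how a step budget turns a non-converging loop into an error:

```python
class GraphRecursionError(RuntimeError):
    """Stand-in for LangGraph's recursion-limit error."""

def run_with_limit(step, state, recursion_limit=25):
    # Each model/tool hop consumes one step from the budget
    for _ in range(recursion_limit):
        state = step(state)
        if state.get("done"):
            return state
    raise GraphRecursionError(f"Recursion limit of {recursion_limit} reached")

def converges(state):
    # A well-behaved agent: finishes after three hops
    state = dict(state, hops=state.get("hops", 0) + 1)
    if state["hops"] >= 3:
        state["done"] = True
    return state

result = run_with_limit(converges, {})
```

An agent that never sets a terminal condition exhausts the budget and raises, which is exactly the failure `error()` surfaces on the Angular side.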
Checkpointing and Debugging
Every time a node completes, LangGraph saves a checkpoint — a full snapshot of the agent's state at that moment. agent() exposes this checkpoint timeline to Angular, giving you time-travel debugging for free.
How Checkpoints Work
```python
from langgraph.checkpoint.postgres import PostgresSaver

# from_conn_string returns a context manager in current langgraph versions
with PostgresSaver.from_conn_string(DATABASE_URL) as checkpointer:
    checkpointer.setup()  # create checkpoint tables on first run
    graph = builder.compile(checkpointer=checkpointer)

    # Every node execution creates a checkpoint:
    # checkpoint_1: after "model" (LLM decided to call search_docs)
    # checkpoint_2: after "tools" (search_docs returned results)
    # checkpoint_3: after "model" (LLM responded with final answer)
```
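Conceptually, a checkpointer is an append-only list of deep-copied state snapshots, one per completed node. The `InMemorySaver` below is an illustrative sketch of that idea, not LangGraph's implementation:

```python
import copy

class InMemorySaver:
    """Sketch of a checkpointer: one deep-copied snapshot per node run."""
    def __init__(self):
        self.checkpoints = []

    def save(self, node, state):
        # Deep copy so later mutations cannot rewrite history
        self.checkpoints.append({"node": node, "state": copy.deepcopy(state)})

    def history(self):
        return list(self.checkpoints)

saver = InMemorySaver()
state = {"messages": ["user: hi"]}
saver.save("model", state)          # snapshot after the model node
state["messages"].append("tool: result")
saver.save("tools", state)          # snapshot after the tool node
```

The deep copy is the important detail: because each snapshot is frozen at save time, the timeline that `history()` exposes stays accurate even as the live state keeps changing.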
Exposing Checkpoints in Angular
```ts
const agent = agent<AgentState>({
  assistantId: 'react_agent',
  threadId: signal('thread_abc123'),
});

// Full checkpoint timeline — every state snapshot
const timeline = computed(() => agent.history());

// Current branch (for time-travel)
const branch = computed(() => agent.branch());
```
When you submit from a previous checkpoint, LangGraph creates a new branch from that point. The original timeline is preserved. The branch() signal tells you which branch is currently active. See the Time Travel guide for the full walkthrough.
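Branching can be modeled as copying the timeline up to the chosen checkpoint and diverging from there. `fork_from` below is a hypothetical helper illustrating the idea; the real branching logic lives inside LangGraph:

```python
import copy

def fork_from(checkpoints, index, new_message):
    """Start a new branch at checkpoint `index`; never mutate the original."""
    branch = [copy.deepcopy(c) for c in checkpoints[: index + 1]]
    next_state = copy.deepcopy(branch[-1])
    next_state["messages"].append(new_message)
    branch.append(next_state)
    return branch

# Original timeline: a question and the agent's first answer
timeline = [{"messages": ["hi"]}, {"messages": ["hi", "answer A"]}]

# Resubmit from checkpoint 0: a new branch that never saw "answer A"
branch = fork_from(timeline, 0, "try again")
```

Both invariants the prose describes hold here: the original timeline is untouched, and the branch shares history only up to the checkpoint it forked from.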
Choosing an Architecture
Not every application needs a multi-agent swarm. Here is a decision guide for picking the right level of complexity.
Single Agent with Tools
Use when: Most applications. The user has a conversation, the agent calls tools as needed, and responds.
```python
# Simple, powerful, covers 80% of use cases
builder = StateGraph(AgentState)
builder.add_node("model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "model")
builder.add_conditional_edges("model", should_continue)
builder.add_edge("tools", "model")
graph = builder.compile()
```
Agent with Human-in-the-Loop
Use when: The agent takes high-stakes actions (sending emails, modifying data, making purchases) that need human approval.
```python
from langgraph.types import interrupt

def propose_action(state: AgentState) -> dict:
    plan = llm.invoke(state["messages"])
    # interrupt() pauses the run and surfaces this payload to the client;
    # execution resumes when the human approves via submit({ resume })
    interrupt({"action": plan.content, "requires_approval": True})
    return {"pending_action": plan.content}

def execute_action(state: AgentState) -> dict:
    # Only runs after human approves
    return perform_action(state["pending_action"])
```
Angular signals used: messages(), interrupt(), status(), plus submit({ resume }) to approve.
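The approve-and-resume cycle can be sketched end to end in plain Python. Everything here (`PendingInterrupt`, `run`) is an illustrative stand-in for the interrupt/resume machinery, modeling the two calls that pausing and submit({ resume }) correspond to:

```python
class PendingInterrupt(Exception):
    """Stand-in for a LangGraph interrupt reaching the client."""
    def __init__(self, payload):
        super().__init__("interrupted")
        self.payload = payload

def propose_action(state):
    raise PendingInterrupt({"action": "send_email", "requires_approval": True})

def run(state, resume=None):
    # First call: the graph pauses and surfaces the payload.
    # Second call with `resume`: execution continues past the pause point.
    if resume is None:
        try:
            propose_action(state)
        except PendingInterrupt as p:
            return {"status": "interrupted", "interrupt": p.payload}
    if resume and resume.get("approved"):
        return {"status": "done", "result": "action executed"}
    return {"status": "cancelled"}

paused = run({"messages": []})                          # surfaces the approval request
finished = run({"messages": []}, resume={"approved": True})   # human approved
```

The first call is what populates the interrupt() signal in Angular; the second models what happens server-side when the user clicks approve and the client calls submit({ resume }).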
Multi-Agent Supervisor
Use when: The task naturally decomposes into specialist roles (researcher, analyst, writer), and each specialist needs its own tools, prompts, and reasoning chain.
Begin with a single agent and tools. Add human-in-the-loop when you need approval flows. Graduate to multi-agent only when a single agent's context window cannot hold all the tools and instructions it needs.