LangGraph is a framework for building stateful AI agents as directed graphs. If you're an Angular developer building AI-powered applications, this page teaches you how LangGraph agents work and why agent() is the natural bridge between your frontend and your agent backend.
Why graphs?
Graphs give you explicit control over agent behavior. Instead of a black-box prompt-and-pray approach, you define exactly how your agent reasons, when it calls tools, and where it pauses for human input. Every step is visible, testable, and debuggable.
The Core Concepts
A LangGraph agent has three building blocks:
Nodes — Functions That Do Work
A node is a Python function that receives the current state, does something, and returns updated state. Every node has the same signature:
```python
def my_node(state: State, config: RunnableConfig) -> dict:
    # Read from state
    messages = state["messages"]

    # Do work (call LLM, query DB, invoke tool)
    response = llm.invoke(messages)

    # Return state updates (merged into existing state)
    return {"messages": [response]}
```
Node return values
Nodes don't replace state — they return updates that get merged into the existing state. For lists like messages, LangGraph uses reducers (like operator.add) to accumulate entries instead of overwriting.
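To make the merge semantics concrete, here is a simplified, dependency-free sketch of what "return updates, not replacements" means. This is an illustration of the idea, not LangGraph's actual internals; `apply_update` is a hypothetical helper:

```python
from operator import add

def apply_update(state: dict, update: dict, reducers: dict) -> dict:
    """Merge a node's returned update into state, channel by channel."""
    new_state = dict(state)
    for key, value in update.items():
        if key in reducers and key in state:
            # Channel has a reducer: accumulate instead of overwriting
            new_state[key] = reducers[key](state[key], value)
        else:
            # No reducer: plain overwrite
            new_state[key] = value
    return new_state

state = {"messages": ["hi"], "plan": []}
state = apply_update(state, {"messages": ["there"]}, {"messages": add})
print(state["messages"])  # ['hi', 'there'] — appended, not replaced
```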
Edges — Connections Between Nodes
Edges define the execution flow. There are two types: normal edges, which always route to the same next node, and conditional edges, where a routing function inspects the state and decides where to go next:
```python
def should_continue(state: State) -> str:
    last_msg = state["messages"][-1]
    if last_msg.tool_calls:
        return "tools"  # Agent wants to use a tool
    return END  # Agent is done, return response

builder.add_conditional_edges("call_model", should_continue)
```
State — The Shared Memory
All nodes read from and write to a shared state object. You define its shape as a Python TypedDict:
```python
from typing_extensions import TypedDict, Annotated
from operator import add

class State(TypedDict):
    messages: Annotated[list, add]  # Accumulates messages
    plan: list[str]                 # Agent's current plan
    results: dict                   # Tool results
```
This state is exactly what agent() exposes to your Angular app through Signals.
Building Your First Agent
Here's the simplest possible agent — a chat model that takes messages and responds:
Thread-based persistence means conversations survive page refreshes, browser restarts, and even server deployments.
```python
from langgraph.checkpoint.postgres import PostgresSaver

with PostgresSaver.from_conn_string(DATABASE_URL) as checkpointer:
    checkpointer.setup()  # create checkpoint tables on first run
    graph = builder.compile(checkpointer=checkpointer)

    # Each thread_id is a persistent conversation
    result = graph.invoke(
        {"messages": [user_message]},
        config={"configurable": {"thread_id": "user_123_session"}},
    )
```
Angular connection: Thread persistence is built into agent():
```typescript
const chat = agent<ChatState>({
  assistantId: 'chat_agent',
  threadId: signal(localStorage.getItem('threadId')),
  onThreadId: (id) => localStorage.setItem('threadId', id),
});

// User returns tomorrow — same thread, full history restored
// No code needed — agent() handles it
```
How agent() Bridges the Gap
Here's why agent() is the natural Angular companion for LangGraph:
1. Your Angular App: calls submit({ message: text }) to send user input
2. agent(): passes input to the transport layer
3. FetchStreamTransport: sends HTTP POST to LangGraph Platform, opens SSE connection
4. LangGraph Platform: executes graph nodes, calls tools, streams SSE events back
5. FetchStreamTransport: parses SSE chunks into BehaviorSubjects
6. agent(): converts BehaviorSubjects to Angular Signals via toSignal()
7. Your Angular App: templates re-render automatically via OnPush change detection
Zero configuration streaming
You don't configure SSE, parse events, manage WebSocket connections, or handle reconnection. agent() does all of that. You call submit() and read Signals — that's the entire API surface for your Angular code.
LangGraph offers two ways to define an agent: the Graph API used throughout this page, and a lighter Functional API. Both produce the same output and work identically with agent(). Choose the Graph API when you need conditional routing, subgraphs, or interrupts. Choose the Functional API for simple, linear workflows.