Deployment
Deploy your LangGraph agent to the cloud and ship your Angular frontend to production with environment-based configuration, authentication, error handling, and observability.
Python: LangGraph Cloud deployment
Your agent code needs a langgraph.json manifest at the project root. This file tells LangGraph Cloud how to build and serve your agent.
The graphs key maps an assistant ID (used by agent() on the Angular side) to the Python module path and graph variable. The env key points to a file with secrets like OPENAI_API_KEY that will be injected at runtime.
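For example, a minimal manifest might look like the following. The assistant ID agent, the module path, and the exported variable name graph are placeholders for your own project layout:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./my_agent/graph.py:graph"
  },
  "env": ".env"
}
```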
Agent entry point
The module referenced in graphs must export the compiled graph under the variable name given in the manifest; LangGraph Cloud imports that module and serves the graph it finds there.
Push and deploy
The CLI watches your repository and builds a container image on LangGraph Cloud. First deployments take roughly 10-15 minutes. Subsequent pushes to the default branch trigger automatic redeployments.
LangSmith deployment walkthrough
The LangSmith UI provides a visual deployment flow if you prefer not to use the CLI.
1. Navigate to smith.langchain.com and click Deployments in the left sidebar, then + New Deployment.
2. Authorize LangSmith to access your GitHub account and select the repository containing your langgraph.json. LangSmith auto-detects the manifest and shows the graphs it found.
3. Add secrets like OPENAI_API_KEY in the deployment settings. These are encrypted at rest and injected into your container at runtime. You can also set LANGCHAIN_TRACING_V2=true here to enable automatic tracing.
4. Click Deploy. Once the build succeeds, you will see a deployment URL like https://my-agent-abc123.langgraph.app. Copy this URL for your Angular environment configuration.
Angular: environment configuration
Angular uses file-based environment replacement at build time rather than process.env. Create separate environment files for development and production.
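A sketch of the two files follows. The property names (langgraphApiUrl, langgraphApiKey) are illustrative, and the local URL assumes the default langgraph dev port:

```typescript
// src/environments/environment.ts (development)
export const environment = {
  production: false,
  langgraphApiUrl: 'http://localhost:2024', // local `langgraph dev` server
  langgraphApiKey: '',                      // no key needed locally
};

// src/environments/environment.prod.ts (production)
export const environment = {
  production: true,
  langgraphApiUrl: 'https://my-agent-abc123.langgraph.app',
  langgraphApiKey: 'REPLACE_AT_BUILD_TIME', // injected from CI secrets; keep this file out of git
};
```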
Wire the environment into provideAgent():
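A minimal wiring sketch for app.config.ts. The import path for provideAgent() is a placeholder for whichever package the earlier sections of this guide install; apiUrl matches the option named in the deployment checklist below:

```typescript
// src/app/app.config.ts
import { ApplicationConfig } from '@angular/core';
// Placeholder import: use the package that provides provideAgent() in your setup.
import { provideAgent } from 'your-agent-library';
import { environment } from '../environments/environment';

export const appConfig: ApplicationConfig = {
  providers: [
    provideAgent({
      apiUrl: environment.langgraphApiUrl,
    }),
  ],
};
```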
During ng build --configuration production, the Angular CLI automatically replaces environment.ts with environment.prod.ts via the fileReplacements array in angular.json.
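For reference, the relevant fragment of angular.json (under the project's build target) looks roughly like this:

```json
{
  "configurations": {
    "production": {
      "fileReplacements": [
        {
          "replace": "src/environments/environment.ts",
          "with": "src/environments/environment.prod.ts"
        }
      ]
    }
  }
}
```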
Authentication
API key for LangGraph Platform
LangGraph Cloud deployments require an API key on every request. The recommended approach is an Angular HTTP interceptor that attaches the key as a header.
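A functional interceptor sketch. It assumes the environment properties from the previous section, applies only to requests made through Angular's HttpClient, and uses the x-api-key header named in the deployment checklist:

```typescript
// src/app/langgraph-auth.interceptor.ts
import { HttpInterceptorFn } from '@angular/common/http';
import { environment } from '../environments/environment';

export const langgraphAuthInterceptor: HttpInterceptorFn = (req, next) => {
  // Only decorate requests headed for the LangGraph deployment.
  if (!req.url.startsWith(environment.langgraphApiUrl)) {
    return next(req);
  }
  return next(
    req.clone({ setHeaders: { 'x-api-key': environment.langgraphApiKey } })
  );
};
```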
Register the interceptor in your application config:
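Registration with the standalone HTTP client providers:

```typescript
// src/app/app.config.ts
import { ApplicationConfig } from '@angular/core';
import { provideHttpClient, withInterceptors } from '@angular/common/http';
import { langgraphAuthInterceptor } from './langgraph-auth.interceptor';

export const appConfig: ApplicationConfig = {
  providers: [
    provideHttpClient(withInterceptors([langgraphAuthInterceptor])),
    // ...plus provideAgent() and the rest of your providers from earlier.
  ],
};
```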
Add environment.prod.ts to .gitignore. In CI, generate it from environment variables or inject secrets at build time.
User-level authentication
If your app has its own user authentication (JWT, session cookies), you can add a second interceptor or extend the one above to forward identity headers that your agent can use for per-user scoping.
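A sketch of such an identity interceptor. The AuthService and the Authorization header are illustrative; how your agent reads the forwarded identity is up to your backend:

```typescript
import { HttpInterceptorFn } from '@angular/common/http';
import { inject } from '@angular/core';
import { environment } from '../environments/environment';
// Hypothetical service exposing the signed-in user's JWT.
import { AuthService } from './auth.service';

export const identityInterceptor: HttpInterceptorFn = (req, next) => {
  const token = inject(AuthService).accessToken();
  if (!token || !req.url.startsWith(environment.langgraphApiUrl)) {
    return next(req);
  }
  // The agent can read this header for per-user scoping.
  return next(req.clone({ setHeaders: { Authorization: `Bearer ${token}` } }));
};
```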
CORS configuration
When your Angular frontend and LangGraph backend are on different origins, you must configure CORS on the LangGraph side.
In langgraph.json, add an http section:
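A sketch of the manifest with CORS configured; replace the origin with your Angular app's production URL and keep the rest of your existing manifest unchanged:

```json
{
  "graphs": {
    "agent": "./my_agent/graph.py:graph"
  },
  "env": ".env",
  "http": {
    "cors": {
      "allow_origins": ["https://app.example.com"],
      "allow_methods": ["GET", "POST"],
      "allow_headers": ["content-type", "x-api-key", "authorization"]
    }
  }
}
```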
During local development with langgraph dev, CORS is permissive by default. You only need explicit CORS configuration for production deployments.
Error boundaries
Production apps need graceful error handling. Build a reactive error boundary using agent() signals.
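A minimal sketch of a presentational error boundary. The shape of the error value (an object with an optional HTTP status) is an assumption about the agent() client; bind the component to the client's error signal from a parent and re-submit the last input when retry fires:

```typescript
import { Component, input, output } from '@angular/core';

@Component({
  selector: 'app-error-boundary',
  standalone: true,
  template: `
    @if (error(); as err) {
      <div role="alert">
        <p>{{ messageFor(err) }}</p>
        <button (click)="retry.emit()">Retry</button>
      </div>
    } @else {
      <ng-content />
    }
  `,
})
export class ErrorBoundaryComponent {
  /** Bind this to the agent() client's error signal value. */
  error = input<{ status?: number; message?: string } | null>(null);
  retry = output<void>();

  messageFor(err: { status?: number; message?: string }): string {
    switch (err.status) {
      case 401: return 'Not authorized. Check your API key configuration.';
      case 429: return 'Rate limited. Please try again in a moment.';
      case 503: return 'The agent service is temporarily unavailable.';
      default:  return err.message ?? 'Something went wrong while contacting the agent.';
    }
  }
}
```

Keeping the boundary purely presentational decouples it from the agent client's exact API: the parent passes in the error value and decides what a retry means (typically re-submitting the last user message).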
Retry with exponential backoff
For automated retries (network blips, transient 5xx errors), wrap .submit() with a backoff utility:
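A generic helper along these lines; the status-based retry test and the usage comment assume your client surfaces HTTP status codes on thrown errors and that submit() returns a promise:

```typescript
/** Retry an async operation with exponential backoff and jitter. */
export async function withBackoff<T>(
  operation: () => Promise<T>,
  { retries = 3, baseDelayMs = 500 }: { retries?: number; baseDelayMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  // `retries` counts additional attempts after the first one.
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      const status = (err as { status?: number }).status;
      // Only retry transient failures: network errors (no status) and 5xx responses.
      const transient = status === undefined || status >= 500;
      if (!transient || attempt === retries) throw err;
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage (assuming submit() returns a promise in your agent client):
// await withBackoff(() => chat.submit({ messages: [userMessage] }));
```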
Stream recovery
Use joinStream() to reconnect to a running agent execution after a network interruption, page refresh, or navigation event.
joinStream() replays any events the client missed, then switches to live streaming. This works because all state lives on the LangGraph Platform, and the SSE endpoint supports event ID-based resumption.
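A sketch of the reconnect flow. Persisting to localStorage and the joinStream(runId) signature are assumptions here, so adapt both to the client API you are using:

```typescript
// Persist identifiers whenever a run starts so a reload or dropped connection can recover it.
export function rememberRun(threadId: string, runId: string): void {
  localStorage.setItem('agent.threadId', threadId);
  localStorage.setItem('agent.runId', runId);
}

// Call on startup (or after a network error) to rejoin an in-flight run.
export async function resumeIfNeeded(
  chat: { joinStream: (runId: string) => Promise<void> } // assumed signature
): Promise<void> {
  const runId = localStorage.getItem('agent.runId');
  if (!runId) return;
  // Replays missed events, then continues with live streaming.
  await chat.joinStream(runId);
  localStorage.removeItem('agent.runId');
}
```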
agent() is a stateless client: because all state lives on the LangGraph Platform, your Angular app can be deployed anywhere (CDN, edge, SSR) without state management concerns, and your frontend scales independently of your agent infrastructure.
CI/CD pipeline
A typical pipeline deploys the Python agent and Angular frontend in parallel since they are independent artifacts.
Monitoring
LangSmith observability
When LANGCHAIN_TRACING_V2=true is set in your agent environment, every run is automatically traced in LangSmith. No code changes are needed.
Key metrics to track in production:
| Metric | Where to find it | Why it matters |
|---|---|---|
| End-to-end latency | LangSmith Runs tab | Directly affects user-perceived responsiveness |
| Error rate | LangSmith Runs tab, filter by error | Spike detection for broken tools or provider outages |
| Token usage | LangSmith per-run detail | Cost control and budget alerting |
| Time to first token | Angular performance monitoring | Stream startup latency visible to users |
| Thread count | LangGraph Platform dashboard | Capacity planning |
Client-side monitoring
Track stream health from your Angular app:
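One lightweight approach is an effect() that measures time to first token and reports stream failures. The isLoading/messages/error signal names are assumptions about the agent() client, and the /metrics endpoint stands in for whatever monitoring backend you use:

```typescript
import { effect, inject, Injectable, Injector, Signal } from '@angular/core';

/** Signal shape assumed for the agent() client in this guide. */
interface AgentSignals {
  isLoading: Signal<boolean>;
  messages: Signal<readonly unknown[]>;
  error: Signal<unknown>;
}

@Injectable({ providedIn: 'root' })
export class StreamHealthService {
  private readonly injector = inject(Injector);

  track(chat: AgentSignals): void {
    let startedAt: number | null = null;
    let baselineCount = 0;
    let errorReported = false;

    effect(() => {
      const loading = chat.isLoading();
      const messageCount = chat.messages().length;

      if (loading && startedAt === null) {
        // A run just started: remember when, and how many messages existed.
        startedAt = performance.now();
        baselineCount = messageCount;
      } else if (startedAt !== null && messageCount > baselineCount) {
        // First streamed update after the run began: time to first token.
        this.report('ttft_ms', performance.now() - startedAt);
        startedAt = null;
      } else if (!loading) {
        startedAt = null;
      }

      // Report each stream failure once.
      const err = chat.error();
      if (err && !errorReported) {
        this.report('stream_error', 1);
        errorReported = true;
      } else if (!err) {
        errorReported = false;
      }
    }, { injector: this.injector });
  }

  private report(name: string, value: number): void {
    // Placeholder: forward to your analytics or monitoring backend.
    navigator.sendBeacon?.('/metrics', JSON.stringify({ name, value, at: Date.now() }));
  }
}
```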
Deployment checklist
- Point provideAgent({ apiUrl }) to your LangGraph Cloud deployment URL via environment.prod.ts.
- Add an HTTP interceptor to attach x-api-key headers to all LangGraph requests.
- Add your Angular app's origin to the allow_origins list in langgraph.json.
- Show user-friendly error messages for 401, 429, 503, and network failures. Provide retry buttons.
- Store runId and use joinStream() to reconnect after network interruptions.
- Store threadId in localStorage or a backend so users can resume conversations across sessions.
- Set the throttle option if token-by-token updates are too frequent for your UI rendering.
- Set LANGCHAIN_TRACING_V2=true in your agent environment for production observability.
- Add environment.prod.ts to .gitignore. Generate it from CI secrets at build time.
- Automate agent and Angular deployments on push to your main branch.
- Confirm LangSmith traces are arriving and set up alerts for error rate spikes and latency regressions.
What's Next
- Test agent interactions deterministically before deploying to production.
- Store thread IDs so users can resume conversations across sessions.
- Tune streaming options like throttle and stream modes for production performance.
- Understand the agent patterns your deployment will serve.
- Full reference for provideAgent configuration options.
- Deep dive into error recovery patterns beyond basic error boundaries.