
Streaming

The ChatComponent automatically classifies AI message content and routes it to the appropriate renderer. This page explains how the streaming pipeline works and how to use the classification APIs directly for custom integrations.

Content Classification

Each AI message is processed by a ContentClassifier that examines the content as it streams token-by-token. The classifier determines the content type from the first non-whitespace character:

| Trigger | Content Type | What Happens |
| --- | --- | --- |
| First non-whitespace character is { | json-render | Parsed as a JSON spec via @ngaf/partial-json |
| Any other text | markdown | Rendered as markdown prose |
Per-message classification

Each message gets its own classifier instance. Classification happens once per message: the type is determined by the first meaningful character and never changes.
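The detection rule can be sketched as a pure function. This is a hypothetical helper for illustration; the real classifier is created with createContentClassifier(), described below:

```typescript
type DetectedType = 'undetermined' | 'markdown' | 'json-render';

// Classify a streamed snapshot by its first non-whitespace character.
// Whitespace-only input stays 'undetermined' until a real character arrives.
function classifyFirstChar(content: string): DetectedType {
  const first = content.trimStart()[0];
  if (first === undefined) return 'undetermined';
  return first === '{' ? 'json-render' : 'markdown';
}

classifyFirstChar('   ');            // 'undetermined'
classifyFirstChar('{"root":"r1"');   // 'json-render'
classifyFirstChar('Hello **world');  // 'markdown'
```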

The Streaming Pipeline

For JSON spec messages, the pipeline is:

Tokens arrive character-by-character
  → ContentClassifier detects { → switches to json-render mode
  → PartialJsonParser builds a parse tree incrementally
  → ParseTreeStore materializes tree → Spec signal (structural sharing)
  → RenderSpecComponent renders with element-level memoization

Structural sharing means that when a new token arrives, only the affected element's object reference changes. Sibling elements keep the same reference, so Angular's change detection skips them entirely. This makes streaming efficient even for large specs with many elements.
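A minimal sketch of structural sharing, using a hypothetical applyUpdate helper rather than the library's actual implementation: only the updated element gets a fresh object; untouched siblings keep their references, so a reference-equality check is enough to skip them.

```typescript
interface SpecSketch {
  root: string;
  elements: Record<string, { type: string; props?: Record<string, unknown> }>;
}

// Return a new spec in which only `key` gets a new object reference.
function applyUpdate(
  spec: SpecSketch,
  key: string,
  patch: Record<string, unknown>,
): SpecSketch {
  return {
    ...spec,
    elements: {
      ...spec.elements,
      [key]: {
        ...spec.elements[key],
        props: { ...spec.elements[key].props, ...patch },
      },
    },
  };
}

const before: SpecSketch = {
  root: 'r1',
  elements: {
    r1: { type: 'Text', props: { label: 'Hel' } },
    r2: { type: 'Button', props: { label: 'OK' } },
  },
};
const after = applyUpdate(before, 'r1', { label: 'Hello' });

after.elements.r1 === before.elements.r1; // false: updated element changed
after.elements.r2 === before.elements.r2; // true: sibling reference is shared
```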

Using ContentClassifier Directly

For custom message rendering outside of ChatComponent, use createContentClassifier():

import { createContentClassifier } from '@ngaf/chat';
 
// Create a classifier instance (must be in an Angular injection context)
const classifier = createContentClassifier();
 
// Feed content snapshots; the classifier computes deltas internally
classifier.update('{"root":"r1","elements":{"r1":{"type":"Te');
classifier.update('{"root":"r1","elements":{"r1":{"type":"Text","props":{"label":"Hello"}}}}');
 
// Read reactive signals
console.log(classifier.type());         // 'json-render'
console.log(classifier.spec());         // { root: 'r1', elements: { ... } }
console.log(classifier.markdown());     // '' (empty for pure JSON)
console.log(classifier.streaming());    // false (complete JSON)
 
// Clean up when done
classifier.dispose();

Signals

| Signal | Type | Description |
| --- | --- | --- |
| type | Signal&lt;ContentType&gt; | 'undetermined', 'markdown', 'json-render', 'a2ui', or 'mixed' |
| markdown | Signal&lt;string&gt; | Accumulated markdown prose (empty for pure JSON) |
| spec | Signal&lt;Spec \| null&gt; | Materialized JSON-render spec with structural sharing |
| elementStates | Signal&lt;Map&lt;string, ElementAccumulationState&gt;&gt; | Per-element tracking of which properties have been received |
| streaming | Signal&lt;boolean&gt; | true while content is still arriving |

ContentType

type ContentType = 'undetermined' | 'markdown' | 'json-render' | 'a2ui' | 'mixed';

Using ParseTreeStore Directly

For lower-level control over JSON-to-Spec materialization:

import { createPartialJsonParser } from '@ngaf/partial-json';
import { createParseTreeStore } from '@ngaf/chat';
 
const parser = createPartialJsonParser();
const store = createParseTreeStore(parser);
 
// Feed tokens
store.push('{"root":"r1","elements":{"r1":{"type":"Text"');
console.log(store.spec());  // partial spec with r1.type = "Text"
 
store.push(',"props":{"label":"Hello"}}}}');
console.log(store.spec());  // complete spec
 
// Track element accumulation
const states = store.elementStates();
console.log(states.get('r1'));
// { hasType: true, hasProps: true, hasChildren: false, streaming: false }

ElementAccumulationState

interface ElementAccumulationState {
  hasType: boolean;      // /elements/{key}/type received
  hasProps: boolean;     // /elements/{key}/props received
  hasChildren: boolean;  // /elements/{key}/children received
  streaming: boolean;    // still receiving data for this element
}
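The flags can be derived from a partially materialized element, as in this hypothetical sketch (the real store tracks them incrementally as JSON paths arrive; the `complete` parameter here stands in for the parser knowing the element's object has been closed):

```typescript
interface AccumulationSketch {
  hasType: boolean;
  hasProps: boolean;
  hasChildren: boolean;
  streaming: boolean;
}

type PartialElement = {
  type?: string;
  props?: Record<string, unknown>;
  children?: string[];
};

// Derive accumulation flags from whichever properties have parsed so far.
function accumulationState(el: PartialElement, complete: boolean): AccumulationSketch {
  return {
    hasType: el.type !== undefined,
    hasProps: el.props !== undefined,
    hasChildren: el.children !== undefined,
    streaming: !complete,
  };
}

accumulationState({ type: 'Text' }, false);
// { hasType: true, hasProps: false, hasChildren: false, streaming: true }
```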

A2UI Content Detection

A2UI content uses a different detection trigger than JSON-render specs. Instead of triggering on a first non-whitespace { character, the classifier looks for the ---a2ui_JSON--- prefix at the start of the message.

Once detected, the classifier switches to A2UI mode and parses the remaining content as JSONL (one JSON object per line) rather than a single JSON object. Each line represents an A2UI message that builds up surfaces with components and data models.

The resulting surfaces are available via classifier.a2uiSurfaces(), which returns a Map<string, A2uiSurface> keyed by surface ID. See the A2UI guide for full details on the A2UI protocol and surface structure.
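Streaming JSONL can be handled by splitting on newlines and holding back the final, possibly incomplete line until more content arrives. A sketch with a hypothetical helper and generic payloads, not the classifier's internal code:

```typescript
// Split a streamed JSONL buffer into fully parsed messages plus the
// trailing partial line, which is retried on the next snapshot.
function parseJsonl(buffer: string): { messages: unknown[]; remainder: string } {
  const lines = buffer.split('\n');
  const remainder = lines.pop() ?? '';
  const messages = lines
    .filter((line) => line.trim() !== '')
    .map((line) => JSON.parse(line));
  return { messages, remainder };
}

const { messages, remainder } = parseJsonl('{"a":1}\n{"b":2}\n{"c');
// messages: [{ a: 1 }, { b: 2 }], remainder: '{"c'
```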

Error Handling

Parse errors are captured in the errors signal and do not crash the rendering pipeline. When a malformed token arrives, the classifier records the error and continues processing subsequent tokens; partial results keep rendering.

const classifier = createContentClassifier();
 
// Feed content (errors are captured internally)
classifier.update(content);
 
// Check for non-fatal parse errors
const parseErrors = classifier.errors();
if (parseErrors.length > 0) {
  console.warn('Parse errors encountered:', parseErrors);
}

This makes the errors signal useful for diagnostics and debugging without disrupting the user-facing chat experience.
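The capture-and-continue pattern itself looks roughly like this sketch (a hypothetical tolerantParse helper; the real classifier surfaces the same idea through its errors signal):

```typescript
// Parse each chunk, recording failures instead of throwing, so later
// chunks are still processed and partial results keep flowing.
function tolerantParse(lines: string[]): { parsed: unknown[]; errors: Error[] } {
  const parsed: unknown[] = [];
  const errors: Error[] = [];
  for (const line of lines) {
    try {
      parsed.push(JSON.parse(line));
    } catch (e) {
      errors.push(e as Error);
    }
  }
  return { parsed, errors };
}

const result = tolerantParse(['{"ok":1}', '{oops', '{"ok":2}']);
// result.parsed has 2 entries, result.errors has 1; processing never stopped
```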

What's Next