# Concepts
Read this once before going deeper into the SDK or self-hosting. The terms here are the vocabulary used everywhere else in the docs.
## Room
A room is a single live-streaming session. It has:
- A `room_id` — UUID assigned by the server at creation.
- A `room_type` — `"live"` (one-to-many broadcast) or `"1v1"` (private call).
- A `max_broadcasters` cap (default 4 for live rooms).
- A list of currently-connected peers.
Rooms live in memory plus a Valkey cache. They’re created when the first host connects (lazily, if no `/create_room` call preceded it) and destroyed when the last broadcaster leaves.
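The shape above can be sketched as a TypeScript type. This is illustrative only: the four listed fields come from this page, while the `peers` representation and the 1v1 cap of 2 are assumptions.

```typescript
// Illustrative sketch of the room record described above. Only the four
// listed fields come from the docs; everything else is an assumption.
type RoomType = "live" | "1v1";

interface Room {
  room_id: string;           // UUID assigned by the server at creation
  room_type: RoomType;
  max_broadcasters: number;  // default 4 for live rooms
  peers: string[];           // peer_ids of currently-connected peers
}

// Hypothetical factory mirroring lazy creation: the server can build a
// room like this when the first host connects without /create_room.
function createRoom(room_id: string, room_type: RoomType): Room {
  return {
    room_id,
    room_type,
    max_broadcasters: room_type === "live" ? 4 : 2, // cap of 2 for 1v1 is assumed
    peers: [],
  };
}
```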
## Peer
A peer is one connected client — either a broadcaster sending media or a viewer consuming it. Each peer has:
- A `peer_id` — UUID. For broadcasters this is server-assigned at `/create_room` or `/broadcaster-token` time and embedded in the JWT; for viewers the SDK auto-generates a `web-XXXXXXXX` fallback.
- A `role` — `"host"` (= broadcaster), `"viewer"`, or `"guest"`.
- An optional `display_name` + `avatar_url` from the JWT.
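The viewer fallback id can be pictured with a small sketch. Only the `web-XXXXXXXX` shape comes from this page; the character alphabet and the use of `Math.random` are assumptions about what the SDK does internally.

```typescript
// Sketch of the viewer fallback peer_id mentioned above: "web-" plus
// eight characters. The alphabet and randomness source are assumptions;
// only the "web-XXXXXXXX" shape is documented.
function fallbackPeerId(): string {
  const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
  let suffix = "";
  for (let i = 0; i < 8; i++) {
    suffix += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return `web-${suffix}`;
}
```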
## Broadcaster vs viewer
- Broadcaster — sends video + audio to the room. Typically the host, but co-broadcasters (cohosts) are also broadcasters.
- Viewer — receives video + audio from broadcasters. Cannot produce media without a role upgrade.
A room can have multiple broadcasters at once (up to `max_broadcasters`). The first peer to call `/create_room` is the original host; they have privileged actions, like ending the recording for everyone.
## Cohost (co-broadcaster)
A cohost is a broadcaster who joined an existing room rather than creating it. Three paths:

- F1 — invite a friend off-stream. The host’s app generates a cohost URL via `/broadcaster-token` and sends it via SMS, email, or its own notification system. The friend opens the URL and joins as a broadcaster.
- F2 — invite an active viewer. The host taps “Invite to cohost” on a viewer’s row. The server pushes `CohostInvited` to the viewer. The viewer accepts → role upgrade in place (no reconnect).
- F3 — viewer requests cohost. The viewer taps “Request to cohost” (gated by the host’s “Allow guest requests” toggle). The server broadcasts `CohostRequested` to all current broadcasters. Any of them approves → role upgrade.
All three flows must complete within 30 seconds, or the invite times out. On success, the upgraded peer’s existing WebSocket connection becomes a producer: no new JWT, no reconnect, just a SendTransport on top of their existing recv connection.
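The 30-second acceptance window follows a standard promise-timeout pattern. This is a client-side illustration only; `withTimeout` and `waitForAccept` are hypothetical names, not SDK exports, and the real timeout is enforced server-side.

```typescript
// Illustration of the 30-second acceptance window described above.
// withTimeout is a hypothetical helper, not part of the SDK.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error("cohost invite timed out")),
      ms,
    );
    p.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// e.g. await withTimeout(waitForAccept(peerId), 30_000), where
// waitForAccept is whatever resolves when the invitee accepts.
```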
## Audience (Phase L.9 — multi-host chat partitioning)
In a multi-broadcaster room, each broadcaster has their own audience: the viewers who joined for that specific broadcaster, plus the broadcaster themselves. Chat messages and viewer counts partition by `(room_id, host_peer_id)`:
- Host A’s viewers chat with Host A: their messages are visible to Host A and to the rest of Host A’s viewers.
- Host B’s viewers chat with Host B. Host B’s audience is invisible to Host A’s audience.
- Each broadcaster sees their own viewer count, not the room total.
The `host_peer_id` is set at token-mint time:

- For broadcasters, it’s their own `peer_id` (server-assigned).
- For viewers, it’s the broadcaster they came in for. It defaults to the original host if the viewer’s link doesn’t specify one.
Share links generated by the SDK include `?host=<peer_id>`, so viewers route to the correct audience automatically.
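Building such a link boils down to setting one query parameter. A minimal sketch using the standard `URL` API; `buildShareLink` is a hypothetical helper, not the SDK's actual export, and only the `?host=<peer_id>` parameter name is documented:

```typescript
// Sketch of routing a viewer to the right audience via the documented
// ?host=<peer_id> parameter. buildShareLink is a hypothetical name.
function buildShareLink(watchPageUrl: string, hostPeerId: string): string {
  const url = new URL(watchPageUrl);
  url.searchParams.set("host", hostPeerId); // documented ?host=<peer_id>
  return url.toString();
}
```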
## JWT claims
Every WebSocket connection authenticates with a JWT. Claims:
```ts
{
  room_id: string,                    // required
  role: 'host' | 'viewer' | 'guest',  // required
  room_type: 'live' | '1v1',          // required
  max_peers?: number,
  display_name?: string,
  avatar_url?: string,
  is_original_host?: boolean,         // true for /create_room mints
  host_peer_id?: string,              // L.9 audience anchor
  exp: number                         // unix-seconds
}
```

In tenant-aware mode, the customer’s backend mints these against the shared `JWT_SECRET` after authenticating their user via their own auth system. In standalone mode, you mint directly.
The signaling-server **never** trusts a client-supplied `display_name`, `role`, or `peer_id` — those flow exclusively through the JWT, so the chat-engine and recording layer can trust the values without re-asking the client.
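Minting in standalone mode can be sketched with nothing but Node's crypto module. This shows which claims go where, not how you should sign in production (a JWT library such as jsonwebtoken is the usual choice); `mintViewerToken`, the secret, and the 1-hour lifetime are all illustrative assumptions.

```typescript
import { createHmac } from "node:crypto";

// Standalone-mode sketch: mint an HS256 JWT carrying the claims listed
// above. Function name, secret handling, and token lifetime are
// assumptions; the claim names come from the docs.
const b64url = (s: string): string => Buffer.from(s).toString("base64url");

function mintViewerToken(secret: string, roomId: string, hostPeerId: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(JSON.stringify({
    room_id: roomId,
    role: "viewer",
    room_type: "live",
    host_peer_id: hostPeerId,                   // L.9 audience anchor
    exp: Math.floor(Date.now() / 1000) + 3600,  // 1-hour lifetime (arbitrary)
  }));
  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}
```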
## Slot system
The prebuilt pages have named slot mount points where you plug in your own widgets. Slot names are public API and frozen: they won’t be renamed without a major-version bump.
Three integration depths:
```js
// Layer 1 — config
manager.fillSlot('header.actionPill', '<button>+ Follow</button>');

// Layer 2 — events + reactive components
manager.on('SLOT_RENDER', ({ name, mount, ctx }) => { /* ... */ });

// Layer 3 — imperative DOM
const node = manager.getSlotMount('header.actionPill');
```

See the Slot system guide for the full taxonomy (17 slots across 5 regions).
## State machine
`MufLiveManager` has a strict state machine:
```
IDLE → INITIALIZING → ACTIVE → ENDING → IDLE
```

- IDLE — not connected. Form-fillable.
- INITIALIZING — `startBroadcast()` / `joinAsViewer()` / `joinAsCoBroadcaster()` is running. Spinner UI.
- ACTIVE — stream is live, media flowing. All controls (mic, cam, pause, record, chat) work.
- ENDING — `endBroadcast()` is tearing down transports + chat presence.
Listen for `STATE_CHANGE` events to drive your UI. The pre-live and live screens are CSS-toggled based on state.
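The strict cycle above can be modeled as a transition table. This is a mental model only; the real enforcement lives inside `MufLiveManager`, and `nextState` is a hypothetical helper:

```typescript
// The documented cycle IDLE → INITIALIZING → ACTIVE → ENDING → IDLE
// as a transition table. Sketch only, not SDK code.
type State = "IDLE" | "INITIALIZING" | "ACTIVE" | "ENDING";

const NEXT: Record<State, State> = {
  IDLE: "INITIALIZING",
  INITIALIZING: "ACTIVE",
  ACTIVE: "ENDING",
  ENDING: "IDLE",
};

function nextState(current: State): State {
  return NEXT[current];
}
```

In app code you would pair this mental model with `manager.on('STATE_CHANGE', ...)` to toggle your pre-live and live screens.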
## What’s NOT a room
- A user account. MUF Engine has no user model. Your backend’s user model is the source of truth; you put `display_name` in the JWT.
- A persistent chat history. Messages are written to a Valkey Stream with a 24-hour TTL by default. Persistent chat (Module B — 1v1) is a separate code path.
- A recording library. Recordings are written to your R2 bucket with the customer’s credentials. The signaling-server doesn’t track them after upload — your app does.