Building a Video Chat App
Learn how to build peer-to-peer video and audio calling applications using NoLag as the WebRTC signaling server.
Overview
NoLag provides built-in WebRTC support through the WebRTCManager class available in JavaScript, Python, and Go. It handles all the complexity of WebRTC signaling, including:
- Automatic peer discovery via presence
- SDP offer/answer exchange
- ICE candidate trickling
- Perfect negotiation pattern (handles offer collisions)
- Connection state management
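The "perfect negotiation" handling mentioned above boils down to a small collision rule. The sketch below is the standard pattern from the WebRTC specification (not NoLag-specific code), shown only to illustrate what WebRTCManager handles for you: when both peers send offers at the same time, the "impolite" peer ignores the incoming offer, while the "polite" peer rolls back its own and accepts it.

```javascript
// Standard perfect-negotiation collision rule (illustrative only; the SDK does this internally).
// `polite`         — whether this peer yields on collision (e.g. decided by comparing actor IDs)
// `makingOffer`    — true while this peer is creating/sending its own offer
// `signalingState` — the RTCPeerConnection.signalingState value
function shouldIgnoreOffer(polite, makingOffer, signalingState) {
  const offerCollision = makingOffer || signalingState !== 'stable'
  // The impolite peer ignores the colliding offer; the polite peer rolls back and accepts it.
  return !polite && offerCollision
}
```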
How It Works
NoLag acts as the signaling server for WebRTC connections:
- Peer Discovery: Users join a room and set presence with webrtcReady: true
- Signaling: Offers, answers, and ICE candidates are exchanged via NoLag topics
- Connection: Once signaling completes, peers connect directly (peer-to-peer)
- Media: Video/audio streams flow directly between peers, not through NoLag
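To make the signaling step concrete, here is a toy in-memory pub/sub that mimics the offer/answer round-trip over the webrtc:offer and webrtc:answer topics. It is purely illustrative and uses none of the real NoLag or WebRTC APIs; WebRTCManager performs this exchange for you.

```javascript
// Toy topic bus standing in for NoLag (illustration only).
const subs = new Map()
const subscribe = (topic, fn) => subs.set(topic, [...(subs.get(topic) ?? []), fn])
const publish = (topic, msg) => (subs.get(topic) ?? []).forEach(fn => fn(msg))

const log = []

// Callee: answers any offer it sees on the offer topic.
subscribe('webrtc:offer', ({ from }) => {
  log.push(`offer from ${from}`)
  publish('webrtc:answer', { from: 'bob', to: from, sdp: 'answer-sdp' })
})

// Caller: receives the answer (would apply it with setRemoteDescription).
subscribe('webrtc:answer', ({ from }) => log.push(`answer from ${from}`))

// Caller kicks off the exchange by publishing an offer.
publish('webrtc:offer', { from: 'alice', sdp: 'offer-sdp' })
// log is now ['offer from alice', 'answer from bob']
```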
Installation
# Browser - no extra dependencies needed
npm install @nolag/js-sdk
# Node.js - requires wrtc package
npm install @nolag/js-sdk wrtc

Prerequisites
Before starting, ensure you have:
- A NoLag account and access token
- A room with WebRTC enabled (see Room Creation below)
- HTTPS connection (required for getUserMedia in browsers)
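Because getUserMedia is only exposed in secure contexts, it is worth failing fast with a clear message before attempting capture. A small guard might look like this; it is written as a pure function (pass in `window`, or a mock) and is not part of the SDK:

```javascript
// getUserMedia is only available in secure contexts (HTTPS or localhost).
// Returns a human-readable reason the call would fail, or null if capture should work.
function mediaUnavailableReason(win) {
  if (!win.isSecureContext) return 'Page must be served over HTTPS (or localhost)'
  if (!win.navigator?.mediaDevices?.getUserMedia) return 'getUserMedia is not supported in this browser'
  return null
}
```

In the browser you would call it as `const reason = mediaUnavailableReason(window)` and surface `reason` to the user before starting the call.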
Room Creation with WebRTC
When creating a room, enable WebRTC support to automatically add the required signaling topics:
// When creating a room via the REST API, enable WebRTC topics
const response = await fetch('https://api.nolag.io/rooms', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your_api_key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    appId: 'your_app_id',
    slug: 'meeting-123',
    enableWebRTC: true // Adds the webrtc:* signaling topics listed below
  })
})

This adds the following topics to your room: webrtc:offer, webrtc:answer, webrtc:candidate, and webrtc:state.
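If you want to verify the room was provisioned correctly, you can check for the four signaling topics named above. The response shape is an assumption here (this sketch assumes the API echoes back the room with a `topics` array of topic names):

```javascript
// The four signaling topics added when enableWebRTC is true.
const WEBRTC_TOPICS = ['webrtc:offer', 'webrtc:answer', 'webrtc:candidate', 'webrtc:state']

// Hypothetical sanity check on a room object returned by the REST API;
// assumes the response includes a `topics` array of topic names.
function hasWebrtcTopics(room) {
  return WEBRTC_TOPICS.every(t => (room.topics ?? []).includes(t))
}
```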
Basic Setup
import { NoLag, WebRTCManager } from '@nolag/js-sdk'

// Connect to NoLag
const client = NoLag('your_access_token')
await client.connect()

// Initialize WebRTC manager
const webrtc = new WebRTCManager(client, {
  app: 'video-chat',
  room: 'meeting-123',
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
})

// Get local media
const localStream = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true
})

// Display local video
const localVideo = document.getElementById('local-video')
localVideo.srcObject = localStream

// Set the local stream for WebRTC
webrtc.setLocalStream(localStream)

// Start WebRTC (subscribes to signaling, discovers peers)
await webrtc.start()

Handling Peer Events
Listen for peer connections and disconnections:
// Handle remote peer connections
webrtc.on('peerConnected', (actorId, remoteStream) => {
  console.log('Peer connected:', actorId)
  // Create video element for remote peer
  const video = document.createElement('video')
  video.id = `video-${actorId}`
  video.srcObject = remoteStream
  video.autoplay = true
  video.playsInline = true
  document.getElementById('remote-videos').appendChild(video)
})

webrtc.on('peerDisconnected', (actorId) => {
  console.log('Peer disconnected:', actorId)
  // Remove video element
  const video = document.getElementById(`video-${actorId}`)
  video?.remove()
})

// Handle individual tracks
webrtc.on('peerTrack', (actorId, track, stream) => {
  console.log(`Received ${track.kind} track from ${actorId}`)
})

// Handle errors
webrtc.on('error', (error) => {
  console.error('WebRTC error:', error)
})

Cleanup
Properly clean up resources when leaving the call:
// Clean up when leaving the call
function leaveCall() {
  // Stop WebRTC manager
  webrtc.stop()
  // Stop local media tracks
  localStream.getTracks().forEach(track => track.stop())
  // Disconnect from NoLag (optional)
  client.disconnect()
}

// Handle page unload
window.addEventListener('beforeunload', leaveCall)

API Reference
JavaScript/TypeScript API
Constructor Options
| Option | Type | Description |
|---|---|---|
| app | string | App name for topic prefix |
| room | string | Room slug for topic prefix |
| iceServers | RTCIceServer[] | STUN/TURN servers (optional, defaults to Google STUN) |
Methods
| Method | Description |
|---|---|
| start() | Start WebRTC manager, subscribe to signaling topics, discover peers |
| stop() | Stop manager, close all peer connections, unsubscribe from topics |
| setLocalStream(stream) | Set the local media stream to share with peers |
| getLocalStream() | Get the local media stream |
| getRemoteStream(actorId) | Get a peer's remote stream by actor ID |
| getPeers() | Get list of connected peer actor IDs |
| isConnected(actorId) | Check if connected to a specific peer |
Events
| Event | Arguments | Description |
|---|---|---|
| peerConnected | (actorId, stream) | Peer connection established with remote stream |
| peerDisconnected | (actorId) | Peer disconnected |
| peerTrack | (actorId, track, stream) | New track received from peer |
| localStream | (stream) | Local stream was set |
| error | (error) | An error occurred |
Python API
Constructor
WebRTCManager(client, app, room, ice_servers=None)

| Parameter | Type | Description |
|---|---|---|
| client | NoLag | Connected NoLag client instance |
| app | str | App name for topic prefix |
| room | str | Room slug for topic prefix |
| ice_servers | list[dict] | STUN/TURN servers (optional) |
Methods
| Method | Description |
|---|---|
| await start() | Start WebRTC manager, subscribe to signaling topics |
| await stop() | Stop manager and close all connections |
| add_track(track) | Add a local track to share with peers |
| get_peers() | Get list of connected peer actor IDs |
| is_connected(actor_id) | Check if connected to a specific peer |
| on(event, handler) | Register an event handler |
| off(event) | Remove event handlers |
Events
| Event | Arguments | Description |
|---|---|---|
| peer_connected | (actor_id, track) | Peer connected with track |
| peer_disconnected | (actor_id) | Peer disconnected |
| peer_track | (actor_id, track) | New track received |
| local_stream | (track) | Local track added |
| error | (error) | An error occurred |
Go API
Constructor
NewWebRTCManager(client *Client, options WebRTCOptions) *WebRTCManager

| Option | Type | Description |
|---|---|---|
| App | string | App name for topic prefix |
| Room | string | Room slug for topic prefix |
| ICEServers | []webrtc.ICEServer | STUN/TURN servers (optional) |
Methods
| Method | Description |
|---|---|
| Start() error | Start WebRTC manager |
| Stop() error | Stop manager and close all connections |
| AddTrack(track) error | Add a local track to share with peers |
| GetPeers() []string | Get list of connected peer actor IDs |
| IsConnected(actorID) bool | Check if connected to a specific peer |
| OnTrack(handler) | Set handler for incoming tracks |
| On(event, handler) | Register an event handler |
| Off(event) | Remove event handlers |
Events
| Event Constant | Arguments | Description |
|---|---|---|
| EventPeerConnected | (actorID, track) | Peer connected with track |
| EventPeerDisconnected | (actorID) | Peer disconnected |
| EventPeerTrack | (actorID, track) | New track received |
| EventLocalTrack | (track) | Local track added |
| EventError | (error) | An error occurred |
Python WebRTC Guide
The Python SDK uses aiortc for WebRTC support. It's ideal for building AI voice bots, server-side recording, and audio/video processing.
Basic Setup
# Basic WebRTC setup in Python
# First: pip install nolag[webrtc]
from nolag import NoLag, WebRTCManager
import asyncio

async def main():
    # Connect to NoLag
    client = NoLag('your_access_token')
    await client.connect()

    # Initialize WebRTC manager
    webrtc = WebRTCManager(
        client,
        app='video-chat',
        room='meeting-123',
        ice_servers=[
            {'urls': 'stun:stun.l.google.com:19302'},
            {'urls': 'stun:stun1.l.google.com:19302'}
        ]
    )

    # Handle peer connections
    @webrtc.on('peer_connected')
    async def on_peer(actor_id, track):
        print(f'Peer {actor_id} connected with {track.kind} track')

    @webrtc.on('peer_disconnected')
    def on_disconnect(actor_id):
        print(f'Peer {actor_id} disconnected')

    # Start WebRTC
    await webrtc.start()
    print('WebRTC is ready!')

    # Keep running
    try:
        await asyncio.Event().wait()
    except KeyboardInterrupt:
        await webrtc.stop()
        await client.disconnect()

asyncio.run(main())

Building an AI Voice Bot
Here's a complete example of an AI voice bot using Claude:
# AI Voice Bot with WebRTC in Python
# pip install nolag[webrtc] anthropic
from nolag import NoLag, WebRTCManager
import anthropic
import asyncio

client_anthropic = anthropic.Anthropic()

async def process_audio_frame(frame):
    """Process audio frame from WebRTC"""
    # Convert frame to audio samples
    samples = frame.to_ndarray()
    # Send to speech-to-text service
    return samples

async def get_claude_response(text: str) -> str:
    """Send user speech to Claude and get response"""
    response = client_anthropic.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": text}]
    )
    return response.content[0].text

async def main():
    # Connect to NoLag
    client = NoLag('bot_access_token')
    await client.connect()

    # Initialize WebRTC manager
    webrtc = WebRTCManager(
        client,
        app='video-chat',
        room='meeting-123'
    )

    # Handle incoming audio from peers
    @webrtc.on('peer_connected')
    async def on_peer(actor_id, track):
        print(f'User connected: {actor_id}')
        if track.kind == 'audio':
            audio_buffer = []
            while True:
                try:
                    frame = await track.recv()
                    samples = await process_audio_frame(frame)
                    audio_buffer.append(samples)
                    # Process when we have enough audio
                    if len(audio_buffer) > 50:  # ~1 second of audio
                        # 1. Speech-to-text (speech_to_text is your STT integration, not shown)
                        user_text = await speech_to_text(audio_buffer)
                        audio_buffer.clear()
                        if user_text:
                            print(f'User said: {user_text}')
                            # 2. Get Claude response
                            response = await get_claude_response(user_text)
                            print(f'Bot: {response}')
                            # 3. Text-to-speech and send back
                            # await send_audio_response(response)
                except Exception as e:
                    print(f'Error: {e}')
                    break

    @webrtc.on('peer_disconnected')
    def on_disconnect(actor_id):
        print(f'User disconnected: {actor_id}')

    # Start the bot
    await webrtc.start()
    print('AI Voice Bot is ready!')
    await asyncio.Event().wait()

asyncio.run(main())

Go WebRTC Guide
The Go SDK uses pion/webrtc for WebRTC support. It's ideal for high-performance voice bots, media servers, and backend processing.
Basic Setup
// Basic WebRTC setup in Go
// go get github.com/NoLagApp/nolag-go
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"

    "github.com/NoLagApp/nolag-go"
    "github.com/pion/webrtc/v3"
)

func main() {
    // Connect to NoLag
    client := nolag.New("your_access_token")
    if err := client.Connect(); err != nil {
        panic(err)
    }
    defer client.Close()

    // Initialize WebRTC manager
    webrtcMgr := nolag.NewWebRTCManager(client, nolag.WebRTCOptions{
        App:  "video-chat",
        Room: "meeting-123",
        ICEServers: []webrtc.ICEServer{
            {URLs: []string{"stun:stun.l.google.com:19302"}},
            {URLs: []string{"stun:stun1.l.google.com:19302"}},
        },
    })

    // Handle incoming tracks from peers
    webrtcMgr.OnTrack(func(actorID string, track *webrtc.TrackRemote, receiver *webrtc.RTPReceiver) {
        fmt.Printf("Peer %s connected with %s track\n", actorID, track.Kind())
        // Process the track in a goroutine
        go func() {
            for {
                packet, _, err := track.ReadRTP()
                if err != nil {
                    return
                }
                // Process packet.Payload
                _ = packet
            }
        }()
    })

    // Handle peer disconnection
    webrtcMgr.On(nolag.EventPeerDisconnected, func(args ...any) {
        actorID := args[0].(string)
        fmt.Printf("Peer %s disconnected\n", actorID)
    })

    // Start WebRTC
    if err := webrtcMgr.Start(); err != nil {
        panic(err)
    }
    fmt.Println("WebRTC is ready!")

    // Wait for shutdown signal
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
    <-sigChan

    webrtcMgr.Stop()
    fmt.Println("Shutdown complete")
}

Building an AI Voice Bot
Here's a complete example of an AI voice bot using Claude:
// AI Voice Bot with WebRTC in Go
// go get github.com/NoLagApp/nolag-go
// go get github.com/anthropics/anthropic-sdk-go
package main

import (
    "context"
    "fmt"

    "github.com/NoLagApp/nolag-go"
    "github.com/anthropics/anthropic-sdk-go"
    "github.com/pion/webrtc/v3"
)

var anthropicClient *anthropic.Client

func init() {
    anthropicClient = anthropic.NewClient()
}

func getClaudeResponse(text string) (string, error) {
    resp, err := anthropicClient.Messages.Create(context.Background(), anthropic.MessageCreateParams{
        Model:     anthropic.ModelClaudeSonnet4_20250514,
        MaxTokens: 1024,
        Messages: []anthropic.MessageParam{
            anthropic.NewUserMessage(anthropic.NewTextBlock(text)),
        },
    })
    if err != nil {
        return "", err
    }
    return resp.Content[0].Text, nil
}

func main() {
    // Connect to NoLag
    client := nolag.New("bot_access_token")
    if err := client.Connect(); err != nil {
        panic(err)
    }
    defer client.Close()

    // Initialize WebRTC manager
    webrtcMgr := nolag.NewWebRTCManager(client, nolag.WebRTCOptions{
        App:  "video-chat",
        Room: "meeting-123",
    })

    // Handle incoming audio from peers
    webrtcMgr.OnTrack(func(actorID string, track *webrtc.TrackRemote, receiver *webrtc.RTPReceiver) {
        fmt.Printf("User connected: %s\n", actorID)
        if track.Kind() == webrtc.RTPCodecTypeAudio {
            go func() {
                audioBuffer := make([]byte, 0)
                for {
                    packet, _, err := track.ReadRTP()
                    if err != nil {
                        return
                    }
                    audioBuffer = append(audioBuffer, packet.Payload...)
                    // Process when we have enough audio (~1 second)
                    if len(audioBuffer) > 48000 {
                        // 1. Speech-to-text (speechToText is your STT integration, not shown)
                        userText := speechToText(audioBuffer)
                        audioBuffer = make([]byte, 0)
                        if userText != "" {
                            fmt.Printf("User said: %s\n", userText)
                            // 2. Get Claude response
                            response, err := getClaudeResponse(userText)
                            if err != nil {
                                fmt.Printf("Error: %v\n", err)
                                continue
                            }
                            fmt.Printf("Bot: %s\n", response)
                            // 3. Text-to-speech and send back
                            // sendAudioResponse(response)
                        }
                    }
                }
            }()
        }
    })

    webrtcMgr.On(nolag.EventPeerDisconnected, func(args ...any) {
        actorID := args[0].(string)
        fmt.Printf("User disconnected: %s\n", actorID)
    })

    // Start the bot
    if err := webrtcMgr.Start(); err != nil {
        panic(err)
    }
    fmt.Println("AI Voice Bot is ready!")
    select {}
}

Node.js WebRTC Guide
For Node.js, the JavaScript SDK uses the wrtc package for WebRTC support. Install it alongside the SDK:
npm install @nolag/js-sdk wrtc

// AI Voice Bot with WebRTC in Node.js
// First: npm install @nolag/js-sdk wrtc
import { NoLag, WebRTCManager } from '@nolag/js-sdk'
import { nonstandard } from 'wrtc'
const { RTCAudioSink } = nonstandard

// Connect to NoLag
const client = NoLag('bot_access_token')
await client.connect()

// Initialize WebRTC manager (wrtc is detected automatically)
const webrtc = new WebRTCManager(client, {
  app: 'video-chat',
  room: 'meeting-123'
})

// Handle incoming audio from peers
webrtc.on('peerConnected', (actorId, remoteStream) => {
  console.log('User connected:', actorId)
  // Get audio track from the remote stream
  const audioTrack = remoteStream.getAudioTracks()[0]
  if (audioTrack) {
    // Create audio sink to receive raw audio data
    const sink = new RTCAudioSink(audioTrack)
    sink.ondata = (data) => {
      // data.samples contains Int16Array audio samples
      // Send to speech-to-text service (Whisper, Deepgram, etc.)
      processAudioWithSTT(data.samples)
    }
  }
})

webrtc.on('peerDisconnected', (actorId) => {
  console.log('User disconnected:', actorId)
})

// Start the bot
await webrtc.start()
console.log('AI Voice Bot is ready!')

Complete AI Voice Bot Flow
Here's how to build a complete voice bot with speech-to-text, LLM, and text-to-speech:
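The flow below calls a combineAudioChunks helper to merge the buffered Int16Array chunks into one contiguous array before speech-to-text. A minimal implementation might look like this (illustrative, not part of the SDK):

```javascript
// Merge buffered Int16Array audio chunks into a single contiguous Int16Array
// before handing the utterance to the speech-to-text service.
function combineAudioChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.length, 0)
  const out = new Int16Array(total)
  let offset = 0
  for (const c of chunks) {
    out.set(c, offset)
    offset += c.length
  }
  return out
}
```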
// Complete AI Voice Bot flow
import Anthropic from '@anthropic-ai/sdk'
const anthropic = new Anthropic()

// Accumulate audio chunks for processing
let audioBuffer: Int16Array[] = []
let silenceTimeout: NodeJS.Timeout | null = null

async function processAudioWithSTT(samples: Int16Array) {
  audioBuffer.push(samples)

  // Reset silence detection timer
  if (silenceTimeout) clearTimeout(silenceTimeout)

  // After 500ms of silence, process the audio
  silenceTimeout = setTimeout(async () => {
    if (audioBuffer.length === 0) return

    // Combine audio chunks
    const combinedAudio = combineAudioChunks(audioBuffer)
    audioBuffer = []

    // 1. Speech-to-Text (using your preferred service)
    const userText = await speechToText(combinedAudio)
    console.log('User said:', userText)

    // 2. Send to Claude for response
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      messages: [{ role: 'user', content: userText }]
    })
    const botResponse = response.content[0].text
    console.log('Bot response:', botResponse)

    // 3. Text-to-Speech (ElevenLabs, OpenAI TTS, etc.)
    const audioResponse = await textToSpeech(botResponse)

    // 4. Send audio back through WebRTC
    await sendAudioToPeers(audioResponse)
  }, 500)
}

TURN Servers
For production use, you may need TURN servers to relay media when direct peer-to-peer connections fail (e.g., due to strict NATs or firewalls). You can use services like:
- Twilio TURN
- Xirsys
- Self-hosted coturn
const webrtc = new WebRTCManager(client, {
  app: 'video-chat',
  room: 'meeting-123',
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:your-turn-server.com:3478',
      username: 'your-username',
      credential: 'your-credential'
    }
  ]
})