Real-time conversation intelligence — transcription, behavioral profiling, agenda tracking, and AI feedback. One Docker image with ASR, models, and agent. You only need a Gemini or Claude API key.
Install (auto-detects platform, installs CLI):
```bash
GEMINI_API_KEY=your_key bash -c 'curl -fsSL https://raw.githubusercontent.com/WhissleAI/live_assist_js_sdk/main/install.sh | bash'
```

Or run Docker directly (CPU, auto-selects amd64/arm64):
```bash
docker run -d --name live-assist -p 8001:8001 -p 8765:8765 -e GEMINI_API_KEY=your_key whissleasr/live-assist:latest
```

GPU (NVIDIA, faster ASR):
```bash
docker run -d --name live-assist --gpus all -p 8001:8001 -p 8765:8765 -e GEMINI_API_KEY=your_key whissleasr/live-assist:latest-gpu
```

Option A: Install script (recommended)
```bash
export GEMINI_API_KEY=your_key_here
curl -fsSL https://raw.githubusercontent.com/WhissleAI/live_assist_js_sdk/main/install.sh | bash
```

With Claude: `ANTHROPIC_API_KEY=your_key LLM_PROVIDER=anthropic bash -c 'curl -fsSL ... | bash'`
Bash CLI (after install):
```bash
live-assist start      # Start Docker
live-assist status     # Check health
live-assist agents     # List smart agents
live-assist feedback "We agreed to send the deck by Friday"
echo "Meeting notes..." | live-assist feedback --agent commitment_tracker
live-assist stop       # Stop Docker
```

Option B: Docker Compose
```bash
git clone https://github.com/WhissleAI/live_assist_js_sdk.git
cd live_assist_js_sdk
export GEMINI_API_KEY=your_key_here
docker compose -f docker/docker-compose.unified.yml up -d
```

Option C: Docker run
```bash
docker run -d --name live-assist \
  -p 8001:8001 -p 8765:8765 \
  -e GEMINI_API_KEY=your_key_here \
  whissleasr/live-assist:latest
```

GPU variant (faster ASR, requires NVIDIA GPU + nvidia-container-toolkit):
```bash
docker run -d --name live-assist --gpus all \
  -p 8001:8001 -p 8765:8765 \
  -e GEMINI_API_KEY=your_key_here \
  whissleasr/live-assist:latest-gpu
```

| Port | Service |
|---|---|
| 8001 | ASR (WebSocket at ws://localhost:8001/asr/stream) |
| 8765 | Agent API (feedback, sessions, health) |
First start takes ~2 minutes while the ASR server loads its models. Check progress with `docker logs -f live-assist`.
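You can also confirm readiness over HTTP instead of tailing logs; the agent exposes `GET /health` on port 8765. A minimal sketch, assuming Node 18+ (global `fetch`):

```typescript
// Return true when the agent server answers its health endpoint.
// `base` defaults to the agent port published by the Docker image.
async function checkHealth(base = "http://localhost:8765"): Promise<boolean> {
  try {
    const res = await fetch(`${base}/health`);
    return res.ok; // 2xx means the agent is up
  } catch {
    return false; // connection refused: container still starting
  }
}
```

Poll this every few seconds during the first start until it returns `true`.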
Once Docker is running:
```bash
git clone https://github.com/WhissleAI/live_assist_js_sdk.git
cd live_assist_js_sdk/examples/transcript-timeline-demo
npm install
npm run dev
```

Open http://localhost:5173, configure your session, and click Start Live Assist.
Troubleshooting: if you see `ERR_MODULE_NOT_FOUND` for Vite, run `rm -rf node_modules package-lock.json && npm install` and try again.
```text
Browser ──WebSocket PCM──► ASR Server (8001)
   │                           │
   │     transcript + metadata │
   │                           │
   ◄───────────────────────────┘
   │
   ├──SSE──► Agent Server (8765)
   │             ├── Memory extraction
   │             ├── Status tracking
   │             ├── LLM feedback (Gemini/Claude)
   │             └── Action item extraction
   │
   ◄───────────────────────────┘
```
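Agent feedback reaches the browser as Server-Sent Events. The SDK consumes the stream for you; purely as an illustration of the transport, here is a simplified parser for the `data:` lines of an SSE chunk (field names per the SSE spec, not the SDK's actual implementation, which also handles `event:`/`id:` fields and buffering across chunks):

```typescript
// Extract the payloads of "data:" lines from a raw SSE text chunk.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim());
}
```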
| Package | Description |
|---|---|
| `@whissle/live-assist-core` | Framework-agnostic JS — ASR client, capture, profiling, session orchestrator |
| `@whissle/live-assist-react` | React components — provider, widget, donut, transcript, agenda tracker |
| `packages/server` | Python FastAPI agent with LangGraph workflow |
```bash
npm install @whissle/live-assist-core @whissle/live-assist-react
```

```jsx
import { LiveAssistProvider, LiveAssistWidget } from '@whissle/live-assist-react';
import '@whissle/live-assist-react/styles/live-assist.css';

function App() {
  return (
    <LiveAssistProvider config={{
      asrUrl: "ws://localhost:8001/asr/stream",
      agentUrl: "http://localhost:8765",
    }}>
      <LiveAssistWidget
        agenda={[
          { id: "1", title: "Discuss roadmap" },
          { id: "2", title: "Review metrics" },
        ]}
      />
    </LiveAssistProvider>
  );
}
```

```js
import { createLiveAssistSession } from '@whissle/live-assist-core';

const session = createLiveAssistSession({
  asrUrl: "ws://localhost:8001/asr/stream",
  agentUrl: "http://localhost:8765",
});

session.on("transcript", (entry) => {
  console.log(`[${entry.channel}] ${entry.text}`);
});

session.on("profile", ({ user, other }) => {
  console.log("User emotion:", user.emotionProfile);
});

session.on("feedback", ({ summary, suggestions }) => {
  console.log("AI says:", summary);
});

await session.start({ includeTab: true });
// ... later
const report = await session.stop();
```

```html
<iframe
  src="http://localhost:3001/widget?agenda=Discuss+roadmap,Review+metrics"
  width="400" height="600"
  allow="microphone; display-capture"
/>
```

```ts
const session = createLiveAssistSession(config: LiveAssistConfig);

// Events
session.on("transcript", (entry: TranscriptEntry) => void);
session.on("profile", (profiles: { user: BehavioralProfile; other: BehavioralProfile }) => void);
session.on("feedback", (data: { summary: string; suggestions: string[] }) => void);
session.on("action", (data: { items: ActionItem[] }) => void);
session.on("memory", (data: { items: MemoryItem[] }) => void);
session.on("agenda", (items: AgendaItem[]) => void);
session.on("status", (data: { engagementScore; sentimentTrend; keywords }) => void);
session.on("error", (err: Error) => void);

// Lifecycle
await session.start({ includeTab?: boolean; agenda?: AgendaItem[] });
const report: SessionReport = await session.stop();
```

```ts
const asr = new AsrStreamClient("ws://localhost:8001/asr/stream", { metadataProb: true });
asr.onTranscript = (seg: StreamTranscriptSegment) => { ... };
await asr.connect();
asr.sendPcm(int16Array);
asr.setChannel("system");
const finals = await asr.end();
```

| Component | Props |
|---|---|
| `<LiveAssistProvider>` | `config: LiveAssistConfig` |
| `<LiveAssistWidget>` | `agenda?: AgendaItem[]`, `style?: CSSProperties` |
| `<TranscriptView>` | `entries: TranscriptEntry[]`, `maxHeight?: number` |
| `<EmotionDonut>` | `segments: { key; value }[]`, `size?: number` |
| `<InlineProfileChart>` | `profile: BehavioralProfile`, `size?: number` |
| `<AgendaTracker>` | `items: AgendaItem[]`, `compact?: boolean` |
| `<ProfileBadge>` | `profile: BehavioralProfile`, `label: string` |
| `<SessionControls>` | `isCapturing`, `onStart`, `onStop` |
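`AsrStreamClient.sendPcm()` takes 16-bit samples, while Web Audio capture yields `Float32Array` frames in [-1, 1]. The bundled AudioWorklet handles this conversion in the real pipeline; a minimal standalone sketch of the idea (not SDK code):

```typescript
// Convert Float32 audio samples in [-1, 1] to 16-bit signed PCM.
function floatTo16BitPcm(input: Float32Array): Int16Array {
  const out = new Int16Array(input.length);
  for (let i = 0; i < input.length; i++) {
    const s = Math.max(-1, Math.min(1, input[i])); // clamp out-of-range samples
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;      // asymmetric int16 scaling
  }
  return out;
}
```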
| Method | Path | Description |
|---|---|---|
| POST | `/live-assist/process/stream` | SSE streaming feedback |
| POST | `/live-assist/session/start` | Create session |
| POST | `/live-assist/session/end` | End session |
| GET | `/live-assist/sessions` | List sessions |
| GET | `/health` | Health check |
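As an example of driving the agent API directly without the SDK, a hedged sketch of calling the session-start endpoint (the empty request body is an assumption; check the FastAPI server in `packages/server` for the actual schema):

```typescript
// Start a session against the agent server and return the parsed JSON.
// The request body shape here is a guess, not the documented schema.
async function startSession(base = "http://localhost:8765"): Promise<unknown> {
  const res = await fetch(`${base}/live-assist/session/start`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({}),
  });
  if (!res.ok) throw new Error(`session/start failed: ${res.status}`);
  return res.json();
}
```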
```ts
{
  asrUrl: string;           // WebSocket URL for ASR server
  agentUrl: string;         // HTTP URL for agent server
  backendUrl?: string;      // Optional external backend
  deviceId?: string;        // Auto-generated if omitted
  llmApiKey?: string;       // LLM API key
  llmProvider?: "gemini" | "anthropic" | "local";
  audioWorkletUrl?: string; // Path to audio-capture-processor.js
}
```

| Variable | Default | Description |
|---|---|---|
| `LLM_PROVIDER` | `gemini` | LLM backend: `gemini`, `anthropic`, `local` |
| `GEMINI_API_KEY` | — | Google Gemini API key |
| `ANTHROPIC_API_KEY` | — | Anthropic API key |
| `LOCAL_LLM_URL` | — | Local LLM endpoint URL |
| `DB_PATH` | `./data/live_assist.db` | SQLite database path |
| `EMBEDDING_MODEL` | `all-MiniLM-L6-v2` | Sentence transformer model |
| `CORS_ORIGINS` | `*` | Allowed CORS origins |
| `PORT` | `8765` | Server port |
Override CSS variables to match your brand:
```css
:root {
  --la-primary: #124e3f;
  --la-bg: #ffffff;
  --la-text: #1a1a1a;
  --la-border: #e5e7eb;
  --la-radius: 12px;
}
```

The workspace ships two libraries: `@whissle/live-assist-core` and `@whissle/live-assist-react`. Publish core first, then react (react depends on the published core version).
Prerequisites: `npm login` with permission to publish the `@whissle` scope (npmjs.com or your org registry). For a private registry, set `publishConfig.registry` in each package's package.json and use `.npmrc` for auth.
Steps:
```bash
cd live_assist_js_sdk
npm ci
npm run build

# Dry run (optional)
npm publish -w @whissle/live-assist-core --dry-run
npm publish -w @whissle/live-assist-react --dry-run

npm publish -w @whissle/live-assist-core
npm publish -w @whissle/live-assist-react
```

After a version bump, commit the updated package.json files and tag if your release process requires it.
```bash
# Build all packages
npm run build

# Build individual packages
cd packages/core && npm run build
cd packages/react && npm run build

# Run the server locally
cd packages/server
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8765
```

The build script builds four images sequentially, then creates a `latest` manifest for auto-select:
```bash
# From live_assist repo root — builds all four
./live_assist_js_sdk/scripts/build-and-push.sh
```

Images:

- `whissleasr/live-assist:latest-amd64` — CPU (Intel Mac, Linux x86)
- `whissleasr/live-assist:latest-arm64` — CPU (Apple Silicon, ARM Linux)
- `whissleasr/live-assist:latest-gpu` — GPU (NVIDIA CUDA)
- `whissleasr/live-assist:latest` — manifest (auto-selects amd64 or arm64)
Options:
```bash
./live_assist_js_sdk/scripts/build-and-push.sh --no-push     # build only, no push
./live_assist_js_sdk/scripts/build-and-push.sh --gpu-only    # build GPU only
./live_assist_js_sdk/scripts/build-and-push.sh --amd64-only  # build amd64 + manifest (when arm64 already pushed)
```

Alternatively, build with Docker Compose (CPU only):
```bash
docker compose -f live_assist_js_sdk/docker/docker-compose.build.yml build
```

```text
live_assist_js_sdk/
├── packages/
│   ├── core/            # Headless JS library
│   ├── react/           # React UI components
│   └── server/          # Python FastAPI agent
├── docker/              # Docker images (Dockerfile.unified, Dockerfile.unified.gpu)
├── examples/            # transcript-timeline-demo, vanilla-js
├── public/              # AudioWorklet
├── package.json         # Workspace root
└── tsconfig.base.json
```
Proprietary — Whissle Inc.