
Live Assist JS-SDK

Real-time conversation intelligence: transcription, behavioral profiling, agenda tracking, and AI feedback. A single Docker image bundles the ASR server, models, and agent; you only need a Gemini or Claude API key.

Quick Start (Docker)

One-liner (share with customers)

Install (auto-detects platform, installs CLI):

GEMINI_API_KEY=your_key bash -c 'curl -fsSL https://raw.githubusercontent.com/WhissleAI/live_assist_js_sdk/main/install.sh | bash'

Or run Docker directly (CPU, auto-selects amd64/arm64):

docker run -d --name live-assist -p 8001:8001 -p 8765:8765 -e GEMINI_API_KEY=your_key whissleasr/live-assist:latest

GPU (NVIDIA, faster ASR):

docker run -d --name live-assist --gpus all -p 8001:8001 -p 8765:8765 -e GEMINI_API_KEY=your_key whissleasr/live-assist:latest-gpu

Option A: Install script (recommended)

export GEMINI_API_KEY=your_key_here
curl -fsSL https://raw.githubusercontent.com/WhissleAI/live_assist_js_sdk/main/install.sh | bash

With Claude: ANTHROPIC_API_KEY=your_key LLM_PROVIDER=anthropic bash -c 'curl -fsSL ... | bash'

Bash CLI (after install):

live-assist start              # Start Docker
live-assist status             # Check health
live-assist agents             # List smart agents
live-assist feedback "We agreed to send the deck by Friday"
echo "Meeting notes..." | live-assist feedback --agent commitment_tracker
live-assist stop               # Stop Docker

Option B: Docker Compose

git clone https://github.com/WhissleAI/live_assist_js_sdk.git
cd live_assist_js_sdk

export GEMINI_API_KEY=your_key_here
docker compose -f docker/docker-compose.unified.yml up -d

Option C: Docker run

docker run -d --name live-assist \
  -p 8001:8001 -p 8765:8765 \
  -e GEMINI_API_KEY=your_key_here \
  whissleasr/live-assist:latest

GPU variant (faster ASR, requires NVIDIA GPU + nvidia-container-toolkit):

docker run -d --name live-assist --gpus all \
  -p 8001:8001 -p 8765:8765 \
  -e GEMINI_API_KEY=your_key_here \
  whissleasr/live-assist:latest-gpu

What you get

Port   Service
8001   ASR (WebSocket at ws://localhost:8001/asr/stream)
8765   Agent API (feedback, sessions, health)

First start takes ~2 minutes while ASR loads models. Check logs: docker logs -f live-assist.

Run the example app

Once Docker is running:

git clone https://github.com/WhissleAI/live_assist_js_sdk.git
cd live_assist_js_sdk/examples/transcript-timeline-demo
npm install
npm run dev

Open http://localhost:5173 — configure your session and click Start Live Assist.

Troubleshooting: If you see ERR_MODULE_NOT_FOUND for Vite, run rm -rf node_modules package-lock.json && npm install and try again.


Architecture

Browser ──WebSocket PCM──► ASR Server (8001)
  │                          │
  │                    transcript + metadata
  │                          │
  ◄──────────────────────────┘
  │
  ├──SSE──► Agent Server (8765)
  │           ├── Memory extraction
  │           ├── Status tracking
  │           ├── LLM feedback (Gemini/Claude)
  │           └── Action item extraction
  │
  ◄──────────────────────────┘

Packages

Package                      Description
@whissle/live-assist-core    Framework-agnostic JS: ASR client, capture, profiling, session orchestrator
@whissle/live-assist-react   React components: provider, widget, donut, transcript, agenda tracker
packages/server              Python FastAPI agent with LangGraph workflow

Integration

React

npm install @whissle/live-assist-core @whissle/live-assist-react

import { LiveAssistProvider, LiveAssistWidget } from '@whissle/live-assist-react';
import '@whissle/live-assist-react/styles/live-assist.css';

function App() {
  return (
    <LiveAssistProvider config={{
      asrUrl: "ws://localhost:8001/asr/stream",
      agentUrl: "http://localhost:8765",
    }}>
      <LiveAssistWidget
        agenda={[
          { id: "1", title: "Discuss roadmap" },
          { id: "2", title: "Review metrics" },
        ]}
      />
    </LiveAssistProvider>
  );
}

Vanilla JS

import { createLiveAssistSession } from '@whissle/live-assist-core';

const session = createLiveAssistSession({
  asrUrl: "ws://localhost:8001/asr/stream",
  agentUrl: "http://localhost:8765",
});

session.on("transcript", (entry) => {
  console.log(`[${entry.channel}] ${entry.text}`);
});

session.on("profile", ({ user, other }) => {
  console.log("User emotion:", user.emotionProfile);
});

session.on("feedback", ({ summary, suggestions }) => {
  console.log("AI says:", summary);
});

await session.start({ includeTab: true });

// ... later
const report = await session.stop();

iframe embed

<iframe
  src="http://localhost:3001/widget?agenda=Discuss+roadmap,Review+metrics"
  width="400" height="600"
  allow="microphone; display-capture"
/>
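The agenda query parameter is a comma-separated list of titles with spaces encoded as `+`. As a sketch, a URL like the one above can be built from agenda items (the `buildWidgetUrl` helper below is hypothetical, not part of the SDK):

```javascript
// Hypothetical helper (not part of the SDK): build the widget URL from agenda items.
// Assumes the widget accepts agenda as comma-separated titles with spaces as '+'.
function buildWidgetUrl(base, agenda) {
  const titles = agenda
    .map((item) => encodeURIComponent(item.title).replace(/%20/g, "+"))
    .join(",");
  return `${base}/widget?agenda=${titles}`;
}

const url = buildWidgetUrl("http://localhost:3001", [
  { id: "1", title: "Discuss roadmap" },
  { id: "2", title: "Review metrics" },
]);
// url === "http://localhost:3001/widget?agenda=Discuss+roadmap,Review+metrics"
```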

API Reference

Core: LiveAssistSession

const session = createLiveAssistSession(config: LiveAssistConfig);

// Events
session.on("transcript", (entry: TranscriptEntry) => void);
session.on("profile", (profiles: { user: BehavioralProfile; other: BehavioralProfile }) => void);
session.on("feedback", (data: { summary: string; suggestions: string[] }) => void);
session.on("action", (data: { items: ActionItem[] }) => void);
session.on("memory", (data: { items: MemoryItem[] }) => void);
session.on("agenda", (items: AgendaItem[]) => void);
session.on("status", (data: { engagementScore; sentimentTrend; keywords }) => void);
session.on("error", (err: Error) => void);

// Lifecycle
await session.start({ includeTab?: boolean; agenda?: AgendaItem[] });
const report: SessionReport = await session.stop();

Core: AsrStreamClient

const asr = new AsrStreamClient("ws://localhost:8001/asr/stream", { metadataProb: true });
asr.onTranscript = (seg: StreamTranscriptSegment) => { ... };
await asr.connect();
asr.sendPcm(int16Array);
asr.setChannel("system");
const finals = await asr.end();

React: Components

Component               Props
<LiveAssistProvider>    config: LiveAssistConfig
<LiveAssistWidget>      agenda?: AgendaItem[], style?: CSSProperties
<TranscriptView>        entries: TranscriptEntry[], maxHeight?: number
<EmotionDonut>          segments: { key; value }[], size?: number
<InlineProfileChart>    profile: BehavioralProfile, size?: number
<AgendaTracker>         items: AgendaItem[], compact?: boolean
<ProfileBadge>          profile: BehavioralProfile, label: string
<SessionControls>       isCapturing, onStart, onStop

Server Endpoints

Method  Path                          Description
POST    /live-assist/process/stream   SSE streaming feedback
POST    /live-assist/session/start    Create session
POST    /live-assist/session/end      End session
GET     /live-assist/sessions         List sessions
GET     /health                       Health check
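As a sketch of calling these endpoints directly (the `agentRequest` helper is hypothetical; in practice the SDK wraps these calls for you):

```javascript
// Hypothetical helper: build fetch arguments for the agent endpoints listed above.
// Assumes POST bodies are JSON; the exact request schemas are not shown here.
function agentRequest(agentUrl, path, body) {
  return {
    url: new URL(path, agentUrl).toString(),
    options:
      body === undefined
        ? { method: "GET" }
        : {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(body),
          },
  };
}

const health = agentRequest("http://localhost:8765", "/health");
// const res = await fetch(health.url, health.options);
```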

Configuration

LiveAssistConfig

{
  asrUrl: string;          // WebSocket URL for ASR server
  agentUrl: string;        // HTTP URL for agent server
  backendUrl?: string;     // Optional external backend
  deviceId?: string;       // Auto-generated if omitted
  llmApiKey?: string;      // LLM API key
  llmProvider?: "gemini" | "anthropic" | "local";
  audioWorkletUrl?: string; // Path to audio-capture-processor.js
}
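For example, a minimal config pointing at the Docker services from Quick Start (only the two URLs are required; the commented overrides are a sketch using the provider values from the type above):

```javascript
// Minimal LiveAssistConfig for the Docker setup described in Quick Start.
const config = {
  asrUrl: "ws://localhost:8001/asr/stream",
  agentUrl: "http://localhost:8765",
  // Optional overrides (illustrative):
  // llmProvider: "anthropic",
  // llmApiKey: process.env.ANTHROPIC_API_KEY,
};
```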

Environment Variables (Server)

Variable           Default                  Description
LLM_PROVIDER       gemini                   LLM backend: gemini, anthropic, local
GEMINI_API_KEY     (unset)                  Google Gemini API key
ANTHROPIC_API_KEY  (unset)                  Anthropic API key
LOCAL_LLM_URL      (unset)                  Local LLM endpoint URL
DB_PATH            ./data/live_assist.db    SQLite database path
EMBEDDING_MODEL    all-MiniLM-L6-v2         Sentence transformer model
CORS_ORIGINS       *                        Allowed CORS origins
PORT               8765                     Server port

CSS Theming

Override CSS variables to match your brand:

:root {
  --la-primary: #124e3f;
  --la-bg: #ffffff;
  --la-text: #1a1a1a;
  --la-border: #e5e7eb;
  --la-radius: 12px;
}

Publishing npm packages (maintainers)

The workspace ships two libraries: @whissle/live-assist-core and @whissle/live-assist-react. Publish core first, then react (react depends on the published core version).

Prerequisites: npm login with permission to publish the @whissle scope (npmjs.com or your org registry). For a private registry, set publishConfig.registry in each package’s package.json and use .npmrc for auth.

Steps:

cd live_assist_js_sdk
npm ci
npm run build

# Dry run (optional)
npm publish -w @whissle/live-assist-core --dry-run
npm publish -w @whissle/live-assist-react --dry-run

npm publish -w @whissle/live-assist-core
npm publish -w @whissle/live-assist-react

After a version bump, commit updated package.json files and tag if your release process requires it.

Development

# Build all packages
npm run build

# Build individual packages
cd packages/core && npm run build
cd packages/react && npm run build

# Run the server locally
cd packages/server
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8765

Building & Pushing (Maintainers)

Builds four images sequentially, then creates a latest manifest for auto-select:

# From live_assist repo root — builds all four
./live_assist_js_sdk/scripts/build-and-push.sh

Images:

  • whissleasr/live-assist:latest-amd64 — CPU (Intel Mac, Linux x86)
  • whissleasr/live-assist:latest-arm64 — CPU (Apple Silicon, ARM Linux)
  • whissleasr/live-assist:latest-gpu — GPU (NVIDIA CUDA)
  • whissleasr/live-assist:latest — manifest (auto-selects amd64 or arm64)

Options:

./live_assist_js_sdk/scripts/build-and-push.sh --no-push      # build only, no push
./live_assist_js_sdk/scripts/build-and-push.sh --gpu-only      # build GPU only
./live_assist_js_sdk/scripts/build-and-push.sh --amd64-only    # build amd64 + manifest (when arm64 already pushed)

Alternatively, build with Docker Compose (CPU only):

docker compose -f live_assist_js_sdk/docker/docker-compose.build.yml build

Directory Structure

live_assist_js_sdk/
├── packages/
│   ├── core/            # Headless JS library
│   ├── react/           # React UI components
│   └── server/          # Python FastAPI agent
├── docker/              # Docker images (Dockerfile.unified, Dockerfile.unified.gpu)
├── examples/            # transcript-timeline-demo, vanilla-js
├── public/              # AudioWorklet
├── package.json         # Workspace root
└── tsconfig.base.json

License

Proprietary — Whissle Inc.
