# @siteed/audio-studio


If you find this repo useful, give it a GitHub star.

Cross-platform audio recording, analysis, and processing for React Native and Expo. Mel spectrogram and spectral feature extraction run through a shared C++ layer on iOS, Android, and web (via WASM).

Formerly @siteed/expo-audio-studio. See MIGRATION.md for upgrade steps.

Demo apps for iOS and Android are available on the App Store and Google Play; there is also a web demo.

## Features

- **Recording** — real-time streaming, dual-stream (raw PCM + compressed), background recording, zero-latency start via `prepareRecording`
- **Device management** — list/select input devices (Bluetooth, USB, wired), automatic fallback
- **Interruption handling** — auto pause/resume during phone calls
- **Audio analysis** — MFCC, spectral features, mel spectrogram, tempo, pitch, waveform preview
- **Native performance** — mel spectrogram and FFT-based features (MFCC, spectral centroid, etc.) computed in shared C++ on all platforms (JNI on Android, Obj-C++ bridge on iOS, WASM on web)
- **Trimming** — precision cuts with multi-segment support (keep or remove ranges)
- **Notifications** — live waveform in the Android notification, iOS media player integration
- **Consistent format** — WAV PCM output across all platforms

## Install

```bash
yarn add @siteed/audio-studio
```

## Quick Start

```typescript
import { useAudioRecorder } from '@siteed/audio-studio';

const { startRecording, stopRecording, isRecording } = useAudioRecorder();

// Start a mono 16-bit recording at 44.1 kHz
await startRecording({
  sampleRate: 44100,
  channels: 1,
  encoding: 'pcm_16bit',
});

// ... later
const result = await stopRecording();
console.log('Saved to:', result.fileUri);
```

## Zero-Latency Recording

Pre-initialize the recorder to eliminate startup delay:

```typescript
const { prepareRecording, startRecording, stopRecording } = useSharedAudioRecorder();

await prepareRecording({ sampleRate: 44100, channels: 1, encoding: 'pcm_16bit' });

// Later — starts instantly
await startRecording();
```

## Shared State Across Components

Wrap your tree in `AudioRecorderProvider` so multiple components share a single recorder instance via `useSharedAudioRecorder`:

```tsx
const AudioApp = () => (
  <AudioRecorderProvider>
    <RecordButton />
    <AudioVisualizer />
  </AudioRecorderProvider>
);
```

## Float32 Streaming (for ML / DSP)

Set `streamFormat: 'float32'` to receive `Float32Array` chunks on all platforms instead of base64 strings:

```typescript
await startRecording({
  sampleRate: 16000,
  channels: 1,
  encoding: 'pcm_32bit',
  streamFormat: 'float32',
  onAudioStream: async (event) => {
    const samples = event.data as Float32Array;
    await myModel.feed(samples);
  },
});
```
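With the default base64 stream format, each `onAudioStream` event carries a base64 string of little-endian PCM bytes. Below is a minimal decoder sketch for 16-bit PCM, assuming a Node-style `Buffer` is available; `pcm16Base64ToFloat32` is an illustrative helper, not part of the library:

```typescript
// Decode a base64 chunk of 16-bit little-endian PCM into Float32 samples
// normalized to [-1, 1). Illustrative helper, not part of @siteed/audio-studio.
function pcm16Base64ToFloat32(b64: string): Float32Array {
  const bytes = Buffer.from(b64, 'base64');
  const out = new Float32Array(bytes.length / 2);
  for (let i = 0; i < out.length; i++) {
    out[i] = bytes.readInt16LE(i * 2) / 32768;
  }
  return out;
}
```

In React Native, where `Buffer` is not a global, a polyfill such as the `buffer` package (or `atob` plus a `DataView`) does the same job.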

## Audio Analysis

```typescript
import { extractAudioAnalysis, extractPreview, extractMelSpectrogram, trimAudio } from '@siteed/audio-studio';

// Feature extraction
const analysis = await extractAudioAnalysis({
  fileUri: 'path/to/recording.wav',
  features: { rms: true, zcr: true, mfcc: true, spectralCentroid: true }
});
```

```typescript
// Lightweight waveform for visualization
const preview = await extractPreview({
  fileUri: 'path/to/recording.wav',
  pointsPerSecond: 50
});
```

```typescript
// Mel spectrogram for ML
const mel = await extractMelSpectrogram({
  fileUri: 'path/to/recording.wav',
  nMels: 40, hopLengthMs: 10
});
```
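On-device models often expect a flat `Float32Array` rather than a nested matrix. Assuming the mel result exposes its frames as a `number[][]` (an assumption; check the library's typings), a small illustrative flattening helper:

```typescript
// Flatten a [frames][nMels] matrix into one Float32Array in frame-major
// order, as many ML runtimes expect. Illustrative; the number[][] input
// shape is an assumption about extractMelSpectrogram's output.
function flattenMel(frames: number[][]): Float32Array {
  const nMels = frames[0]?.length ?? 0;
  const out = new Float32Array(frames.length * nMels);
  frames.forEach((frame, i) => out.set(frame, i * nMels));
  return out;
}
```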

```typescript
// Trim audio: keep only the 1s–5s range
const trimmed = await trimAudio({
  fileUri: 'path/to/recording.wav',
  ranges: [{ startTimeMs: 1000, endTimeMs: 5000 }],
  mode: 'keep'
});
```
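The `keep` and `remove` modes are complementary: removing a set of ranges is equivalent to keeping everything outside them. An illustrative helper (not part of the library) that converts sorted, non-overlapping remove ranges into the equivalent keep ranges:

```typescript
interface TimeRange { startTimeMs: number; endTimeMs: number; }

// Given sorted, non-overlapping ranges to remove and the total duration,
// return the complementary ranges to keep. Illustrative helper only.
function removeToKeep(remove: TimeRange[], durationMs: number): TimeRange[] {
  const keep: TimeRange[] = [];
  let cursor = 0;
  for (const r of remove) {
    if (r.startTimeMs > cursor) {
      keep.push({ startTimeMs: cursor, endTimeMs: r.startTimeMs });
    }
    cursor = Math.max(cursor, r.endTimeMs);
  }
  if (cursor < durationMs) keep.push({ startTimeMs: cursor, endTimeMs: durationMs });
  return keep;
}
```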

## Which Method to Use

| Method | Cost | Use case |
| --- | --- | --- |
| `extractPreview` | Light | Waveform visualization |
| `extractRawWavAnalysis` | Light | WAV metadata without decoding |
| `extractAudioData` | Medium | Raw PCM for custom processing |
| `extractAudioAnalysis` | Medium-Heavy | MFCC, spectral features, pitch, tempo |
| `extractMelSpectrogram` | Heavy | Frequency-domain features for ML |


## License

MIT — see LICENSE.


Created by Arthur Breton