Talk to Claude Code when typing isn't an option.
VoiceMode enables natural voice conversations with Claude Code. Voice isn't about replacing typing - it's about being available when typing isn't.
Perfect for:
- Walking to your next meeting
- Cooking while debugging
- Giving your eyes a break after hours of screen time
- Holding a coffee (or a dog)
- Any moment when your hands or eyes are busy
Requirements: a computer with a microphone and speakers
The fastest way to get started:
```shell
# Add the plugin marketplace
claude plugin marketplace add https://github.com/mbailey/claude-plugins

# Install the VoiceMode plugin
claude plugin install voicemode@mbailey

# Install dependencies (CLI, local voice services)
/voicemode:install

# Start talking!
/voicemode:converse
```

The plugin handles everything - just install and go.
Add VoiceMode as an MCP server for more control:
```shell
# Install the uv package manager (if needed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Run the installer (sets up dependencies and local voice services)
uvx voice-mode-install

# Add to Claude Code
claude mcp add --scope user voicemode -- uvx --refresh voice-mode

# Optional: add an OpenAI API key as a fallback for local services
export OPENAI_API_KEY=your-openai-key

# Start a conversation
claude converse
```

For manual setup, see the Getting Started Guide.
- Natural conversations - speak naturally, hear responses immediately
- Works offline - optional local voice services (Whisper STT, Kokoro TTS)
- Low latency - fast enough to feel like a real conversation
- Smart silence detection - stops recording when you stop speaking
- Privacy options - run entirely locally or use cloud services
Platforms: Linux, macOS, Windows (WSL), NixOS
Python: 3.10-3.14
VoiceMode works out of the box. For customization:
```shell
# Set your OpenAI API key (if using cloud services)
export OPENAI_API_KEY="your-key"

# Or configure via file
voicemode config edit
```

See the Configuration Guide for all options.
For privacy or offline use, install local speech services:
- Whisper.cpp - Local speech-to-text
- Kokoro - Local text-to-speech with multiple voices
These provide the same API as OpenAI, so VoiceMode switches seamlessly between them.
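As a concrete illustration of that OpenAI-compatible API: the same request body works whether it is sent to OpenAI's `/v1/audio/speech` endpoint or to a local Kokoro server. The local address and port below are assumptions - adjust them to your setup:

```shell
# Build an OpenAI-style TTS request body; the identical JSON can be POSTed
# to either endpoint. Port 8880 for local Kokoro is an assumed default.
cat > /tmp/tts-request.json <<'EOF'
{"model": "tts-1", "voice": "alloy", "input": "Hello from VoiceMode"}
EOF

# Cloud:
#   curl https://api.openai.com/v1/audio/speech \
#     -H "Authorization: Bearer $OPENAI_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d @/tmp/tts-request.json -o hello.mp3
# Local (assumed Kokoro address):
#   curl http://127.0.0.1:8880/v1/audio/speech \
#     -H "Content-Type: application/json" \
#     -d @/tmp/tts-request.json -o hello.mp3
cat /tmp/tts-request.json
```

Because only the base URL changes, VoiceMode can fail over between local and cloud backends without changing the request format.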
System Dependencies by Platform
Ubuntu/Debian:

```shell
sudo apt update
sudo apt install -y ffmpeg gcc libasound2-dev libasound2-plugins libportaudio2 portaudio19-dev pulseaudio pulseaudio-utils python3-dev
```

WSL2 users: the pulseaudio packages above are required for microphone access.

Fedora:

```shell
sudo dnf install alsa-lib-devel ffmpeg gcc portaudio portaudio-devel python3-devel
```

macOS:

```shell
brew install ffmpeg node portaudio
```

NixOS:

```shell
# Use the development shell
nix develop github:mbailey/voicemode

# Or install system-wide
nix profile install github:mbailey/voicemode
```

Alternative Installation Methods
Install from source:

```shell
git clone https://github.com/mbailey/voicemode.git
cd voicemode
uv tool install -e .
```

NixOS (declarative):

```nix
# In /etc/nixos/configuration.nix
environment.systemPackages = [
  (builtins.getFlake "github:mbailey/voicemode").packages.${pkgs.system}.default
];
```

Troubleshooting

| Problem | Solution |
|---|---|
| No microphone access | Check terminal/app permissions. WSL2 needs pulseaudio packages. |
| UV not found | Run `curl -LsSf https://astral.sh/uv/install.sh \| sh` |
| OpenAI API error | Verify `OPENAI_API_KEY` is set correctly |
| No audio output | Check system audio settings and available devices |
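Before working through the table above, a quick sanity check can save time. This is a minimal sketch that only verifies the core command-line tools the setup relies on are present on `PATH`:

```shell
# Check that the tools VoiceMode's setup relies on are installed.
for tool in uv ffmpeg curl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```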
To save conversation audio for later review:

```shell
export VOICEMODE_SAVE_AUDIO=true
# Files are saved to ~/.voicemode/audio/YYYY/MM/
```

Documentation

- Getting Started - Full setup guide
- Configuration - All environment variables
- Whisper Setup - Local speech-to-text
- Kokoro Setup - Local text-to-speech
- Development Setup - Contributing guide
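When `VOICEMODE_SAVE_AUDIO` is enabled, a quick way to find this month's recordings under the documented `~/.voicemode/audio/YYYY/MM/` path pattern:

```shell
# List recordings saved this month; path pattern is ~/.voicemode/audio/YYYY/MM/
AUDIO_DIR="$HOME/.voicemode/audio/$(date +%Y)/$(date +%m)"
ls -lt "$AUDIO_DIR" 2>/dev/null || echo "No recordings yet in $AUDIO_DIR"
```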
Full documentation: voice-mode.readthedocs.io
- Website: getvoicemode.com
- GitHub: github.com/mbailey/voicemode
- PyPI: pypi.org/project/voice-mode
- YouTube: @getvoicemode
- Twitter/X: @getvoicemode
MIT - A Failmode Project
mcp-name: com.failmode/voicemode
