🤖 MCP Shell - Your AI-Powered Command Line Companion

MCP Shell is a powerful, modern shell-based client for the Model Context Protocol (MCP). It provides both direct tool execution and an interactive chat mode with LLM integration, making it easy to interact with MCP servers from the command line.
Features:
β’ Direct MCP tool execution from command line
β’ Interactive chat mode with LLM integration
β’ Support for stdio and HTTP MCP servers
β’ Rich terminal UI with beautiful formatting
β’ Multiple LLM provider support (OpenAI, Anthropic, Google, etc.)
β’ Persistent server configuration
β’ Tool discovery and help system
Execute MCP tools directly from the command line with automatic parameter collection:

```bash
mcp-terminal tool list_files
mcp-terminal tool search_web --query "Python tutorials"
```

Chat naturally with an AI assistant that automatically uses MCP tools:

```bash
mcp-terminal chat
> "List the files in my project directory"
> "Search for information about async Python"
```

Chat with historical figures and generate videos using Google's Veo 3:

```bash
mcp-terminal character
> "What do you think about the nature of reality?" (Einstein)
> "How do you approach writing a new play?" (Shakespeare)
```

Supported transports (see the sketch after this list):

- stdio: Connect to local MCP servers via stdin/stdout
- HTTP: Connect to remote MCP servers via HTTP/REST APIs
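For reference, a stdio connection with the official `mcp` Python SDK looks roughly like this; this is a sketch only, and core.py may be structured differently:

```python
# Sketch: connect to a stdio MCP server with the official Python SDK
# (pip install mcp). Illustrative only; not necessarily how core.py does it.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(
        command="npx",
        args=["@modelcontextprotocol/server-filesystem", "/tmp"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```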
Rich terminal UI:

- Beautiful table displays for tools and servers
- Syntax highlighted code blocks
- Markdown rendering in chat mode
- Progress indicators and status updates
Support for multiple LLM providers through LiteLLM:
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- Google (Gemini)
- Groq, Ollama, and more
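All of these go through LiteLLM's single `completion()` interface, so switching providers is just a change of model string; roughly:

```python
# Sketch of the unified LiteLLM call behind chat mode. The same code works
# for any provider; only the model string (and matching API key) changes.
from litellm import completion

response = completion(
    model="gpt-4.1",  # e.g. "claude-3-5-sonnet-20241022" or "gemini/gemini-1.5-pro"
    messages=[{"role": "user", "content": "List the files in my project directory"}],
)
print(response.choices[0].message.content)
```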
Generate videos of historical characters speaking using Google's Veo 3 API:
- Automatic video generation from character responses
- Historical accuracy in visual representation
- Browser integration for video playback
Installation:

```bash
# Install from source
git clone https://github.com/mcpterminal/mcp-terminal.git
cd mcp-terminal
pip install -e .

# Or install from PyPI (when published)
pip install mcp-terminal
```

Quick start:

- Add an MCP server:

  ```bash
  mcp-terminal server add
  ```

- List available tools:

  ```bash
  mcp-terminal tools
  ```

- Execute a tool:

  ```bash
  mcp-terminal tool <tool_name>
  ```

- Start chat mode:

  ```bash
  export OPENAI_API_KEY="your-api-key"
  mcp-terminal chat
  ```

- Chat with historical characters:

  ```bash
  export GOOGLE_API_KEY="your-google-api-key"
  mcp-terminal character
  ```

Server management:

Add a stdio server:
```bash
mcp-terminal server add
# Follow the interactive prompts:
# Server name: filesystem
# Transport type: stdio
# Command: npx
# Arguments: @modelcontextprotocol/server-filesystem /tmp
```

Add an HTTP server:

```bash
mcp-terminal server add
# Follow the interactive prompts:
# Server name: remote-mcp
# Transport type: http
# Server URL: http://localhost:8000/mcp
```
List configured servers:

```bash
mcp-terminal server list
```

Show server status:

```bash
mcp-terminal server status
```

List all available tools:

```bash
mcp-terminal tools
```

Get help for a specific tool:

```bash
mcp-terminal tool-help read_file
```

Execute a tool interactively:

```bash
mcp-terminal tool read_file
# You'll be prompted for required parameters
```
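That parameter collection can be as simple as walking the tool's JSON input schema and prompting for each required field; a hypothetical sketch (the actual cli.py may differ):

```python
# Hypothetical parameter collection: prompt for each required field in a
# tool's JSON input schema. Not the actual mcp_terminal implementation.
import click

def collect_params(input_schema: dict) -> dict:
    """Prompt the user for every required parameter of a tool."""
    params = {}
    properties = input_schema.get("properties", {})
    for name in input_schema.get("required", []):
        field_type = properties.get(name, {}).get("type", "string")
        params[name] = click.prompt(f"{name} ({field_type})")
    return params
```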
Start interactive chat:

```bash
export OPENAI_API_KEY="your-openai-key"
mcp-terminal chat
```

Use a specific model:

```bash
mcp-terminal chat --model claude-3-5-sonnet-20241022
```

Chat with a specific historical character:

```bash
mcp-terminal character --character einstein
mcp-terminal character --character shakespeare
```

OpenAI Models:
- `gpt-4.1` - Latest GPT-4.1 (default)
- `gpt-4.1-mini` - Efficient GPT-4.1 variant
- `gpt-4.1-nano` - Lightweight GPT-4.1 variant
- `o3` - Latest reasoning model (limited access)
- `gpt-4o` - GPT-4 Omni multimodal
- `gpt-4o-mini` - Efficient GPT-4 Omni
Anthropic Claude Models:
- `claude-3-5-sonnet-20241022` - Latest Claude 3.5 Sonnet
- `claude-3-5-haiku-20241022` - Fast Claude 3.5 model
- `claude-3-7-sonnet-20250215` - Advanced Claude 3.7 Sonnet
- `claude-4-opus-20250515` - Claude 4 Opus (most capable)
- `claude-4-sonnet-20250515` - Claude 4 Sonnet
Google Gemini Models:
- `gemini-2.5-pro` - Latest Gemini 2.5 Pro (experimental)
- `gemini-2.5-flash` - Fast Gemini 2.5 model
- `gemini-1.5-pro` - Production Gemini 1.5 Pro
- `gemini-1.5-flash` - Efficient Gemini 1.5 model
Ask a single question:

```bash
mcp-terminal ask "What files are in my current directory?"
```

Chat commands:

- `/help` - Show help
- `/tools` - List available tools
- `/status` - Show server status
- `/clear` - Clear conversation
- `/exit` - Exit chat
Character chat commands:

- `/help` - Show help
- `/character` - Change historical character
- `/video` - Generate video of current response
- `/clear` - Clear conversation history
- `/exit` - Exit character chat
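Internally, both command sets can be handled by a small dispatch step before input ever reaches the LLM; a hypothetical sketch, not the actual chat.py:

```python
# Hypothetical slash-command dispatch for the chat modes; the real
# chat.py / character_chat.py may structure this differently.
def handle_command(line: str, history: list) -> str | None:
    """Handle a slash command; return a reply, or None for plain chat input."""
    stripped = line.strip()
    if not stripped.startswith("/"):
        return None  # plain text: send to the LLM instead
    cmd = stripped.split()[0]
    if cmd == "/help":
        return "Commands: /help /tools /status /clear /exit"
    if cmd == "/clear":
        history.clear()
        return "Conversation cleared."
    if cmd == "/exit":
        raise SystemExit(0)
    return f"Unknown command: {cmd}"
```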
MCP Shell stores configuration in `~/.config/mcp-terminal/config.json`:

```json
{
  "servers": [
    {
      "name": "filesystem",
      "transport": "stdio",
      "command": "npx",
      "args": ["@modelcontextprotocol/server-filesystem", "/tmp"],
      "description": "Local filesystem server"
    },
    {
      "name": "remote-mcp",
      "transport": "http",
      "url": "http://localhost:8000/mcp",
      "description": "Remote MCP server"
    }
  ]
}
```
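Reading that file back is straightforward; a sketch of what config.py might do (the helper name is hypothetical):

```python
# Sketch of config loading; load_servers() is a hypothetical helper, not
# necessarily what mcp_terminal/config.py actually exposes.
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "mcp-terminal" / "config.json"

def load_servers() -> list[dict]:
    """Return the configured servers, or an empty list if no config exists."""
    if not CONFIG_PATH.exists():
        return []
    with CONFIG_PATH.open() as f:
        return json.load(f).get("servers", [])
```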
Popular MCP servers:

Filesystem Server (Node.js):

```bash
npm install -g @modelcontextprotocol/server-filesystem
# Use with: npx @modelcontextprotocol/server-filesystem /path/to/directory
```

Git Server (Python):

```bash
pip install mcp-server-git
# Use with: python -m mcp_server_git
```

Terminal Server (for command execution):

```bash
npm install -g @rinardnick/mcp-terminal
# Use with: npx @rinardnick/mcp-terminal
```

Web Search Server:

```bash
npm install -g @modelcontextprotocol/server-web-search
# Use with: npx @modelcontextprotocol/server-web-search
```

You can create custom MCP servers in any language that supports JSON-RPC 2.0. See the MCP specification for details.
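For example, a minimal Python server using the official MCP SDK (`pip install mcp`) is only a few lines; the `add` tool below is purely illustrative:

```python
# Minimal custom MCP server using the official Python SDK's FastMCP helper.
# The "add" tool is an illustrative example, not part of any real server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```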
Set your LLM API key via environment variables:

```bash
# OpenAI
export OPENAI_API_KEY="sk-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google
export GOOGLE_API_KEY="AI..."

# Groq
export GROQ_API_KEY="gsk_..."
```
**Note:** For historical character chat with video generation, you need a Google API key for Veo 3 access.

File management example:

```bash
# Add filesystem server
mcp-terminal server add
# Configure as stdio server with command: npx @modelcontextprotocol/server-filesystem /

# Chat mode
mcp-terminal chat
> "List all Python files in my project"
> "Read the contents of main.py"
> "Create a new file called test.py with a hello world function"
```

Web search example:

```bash
# Add web search server
mcp-terminal server add
# Configure as stdio server with command: npx @modelcontextprotocol/server-web-search

# Chat mode
mcp-terminal chat
> "Search for the latest Python asyncio best practices"
> "Find tutorials about Model Context Protocol"
```

Git workflow example:

```bash
# Add git server
mcp-terminal server add
# Configure as stdio server with command: python -m mcp_server_git

# Direct tool usage
mcp-terminal tool git_status
mcp-terminal tool git_log --max_entries 5

# Chat mode
mcp-terminal chat
> "What's the current status of my git repository?"
> "Show me the last 5 commits"
> "Create a new branch called feature/mcp-integration"
```

Historical character example:

```bash
# Start character chat mode
mcp-terminal character

# Chat with specific character
mcp-terminal character --character einstein
mcp-terminal character --character shakespeare

# Generate video with response
> "What do you think about the nature of reality?" /video
```

Development:

```bash
git clone https://github.com/mcpterminal/mcp-terminal.git
cd mcp-terminal
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest
# Format code
black mcp_terminal/
# Type checking
mypy mcp_terminal/
```

Project structure:

```
mcp_terminal/
├── __init__.py          # Package initialization
├── core.py              # Core MCP client implementation
├── cli.py               # Command-line interface
├── chat.py              # Interactive chat session
├── character_chat.py    # Historical character chat with video generation
└── config.py            # Configuration management
```
We welcome contributions! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Model Context Protocol for the excellent protocol specification
- LiteLLM for multi-provider LLM support
- Rich for beautiful terminal formatting
- Click for the CLI framework
- π Report Issues
- π¬ Discussions
- π Documentation
Made with β€οΈ for the MCP community