A vanilla, up-to-date fork of ComfyUI intended for long-term support (LTS) from AppMana and Hidden Switch.
Used in production by Scopely (a game studio), Livepeer, and Nunchaku Tech. Used by innovators at Ferrero Group, Hyundai, and Nike.
If you need to develop an application or plugin around ComfyUI, this fork stays compatible and up to date with upstream while fixing numerous bugs and adding features. It also packages tacit knowledge about running diffusion models and art workflows, distributed inference, deployment on Kubernetes, and other production tasks that Claude and Gemini cannot do.
This LTS fork adds development, embedding, automated testing, LLM, and distributed inference features to ComfyUI while maintaining compatibility with custom nodes from the ecosystem.
- Pip and UV Installable: Install via `pip` or `uv` directly from GitHub. No manual cloning required for users.
- Automatic Model Downloading: Missing models (e.g., Stable Diffusion, FLUX, LLMs) are downloaded on demand from Hugging Face or CivitAI (see the sketch after this list).
- Docker and Containers: First-class support for Docker and Kubernetes with optimized containers for NVIDIA and AMD.
- Distributed Inference: Run scalable inference clusters with multiple workers and frontends using RabbitMQ.
- Embedded / Library: Use ComfyUI as a Python library (`import comfy`) inside your own applications without the web server. Runs like `diffusers`.
- Vanilla Custom Nodes: Fully compatible with existing ComfyUI custom nodes (ComfyUI-Manager, WanVideoWrapper, KJNodes, etc.). Clone into `custom_nodes/` and install dependencies into your venv.
- LTS Custom Nodes: A curated set of "Installable" custom nodes (ControlNet, AnimateDiff, IPAdapter) optimized for this fork.
- LLM Support: Native support for Large Language Models (LLaMA, Phi-3, etc.) and multi-modal workflows.
- API and Configuration: Enhanced API endpoints and extensive configuration options via CLI args, env vars, and config files.
- Tests: Automated test suite ensuring stability for new features.
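For example, queueing a workflow that references a checkpoint not yet on disk fetches it before execution. Below is a minimal sketch using the embedded client described later in this README; the checkpoint filename and prompt are illustrative, and how download sources are resolved depends on your configuration:

```python
import asyncio

from comfy.client.embedded_comfy_client import Comfy

# A minimal txt2img workflow in API format. If the referenced checkpoint is
# missing locally, it is downloaded on demand before the graph executes.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "example"}},
}

async def main():
    async with Comfy() as client:
        await client.queue_prompt(workflow)

asyncio.run(main())
```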
For users who want to run ComfyUI for generating images and videos.
- Install `uv`:

  ```shell
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Create a Workspace:

  ```shell
  mkdir comfyui-workspace
  cd comfyui-workspace
  ```

- Install and Run:

  ```shell
  # Create a virtual environment
  uv venv --python 3.12
  # Install ComfyUI LTS
  # --torch-backend=auto installs the correct torch, torchvision and torchaudio for your platform.
  # Omit --torch-backend if you want to keep your currently installed PyTorch.
  uv pip install --torch-backend=auto "comfyui@git+https://github.com/hiddenswitch/ComfyUI.git"
  # Run
  uv run --no-sync comfyui
  ```
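Once the server is running (by default on http://127.0.0.1:8188), you can queue workflows over ComfyUI's standard REST API. A minimal sketch using only the standard library, assuming you exported `workflow_api.json` from the web UI:

```python
import json
import urllib.request

# Load an API-format workflow exported from the web UI ("Save (API Format)").
with open("workflow_api.json") as f:
    workflow = json.load(f)

# POST /prompt queues the workflow and returns its prompt id.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["prompt_id"])
```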
For developers contributing to the codebase or building on top of it.
- Clone the Repository:

  ```shell
  git clone https://github.com/hiddenswitch/ComfyUI.git
  cd ComfyUI
  ```

- Setup Environment:

  ```shell
  # Create virtual environment
  uv venv --python 3.12
  source .venv/bin/activate
  # Install in editable mode with dev dependencies
  # (quoted so the brackets survive shells like zsh)
  uv pip install -e ".[dev]"
  ```

- Run:

  ```shell
  uv run --no-sync comfyui
  ```
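With the `[dev]` extras installed, workflows can be exercised under pytest. A minimal sketch, assuming `pytest-asyncio` and an API-format workflow file of your own (see Testing Workflows in the docs for the real fixtures):

```python
import json

import pytest

from comfy.client.embedded_comfy_client import Comfy

@pytest.mark.asyncio
async def test_workflow_executes():
    # Any API-format workflow exported from the web UI will do here.
    with open("workflow_api.json") as f:
        workflow = json.load(f)
    async with Comfy() as client:
        outputs = await client.queue_prompt(workflow)
    assert outputs  # at least one node produced an output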
ComfyUI can run embedded inside your own Python application. No server is started, no subprocesses are used. Use the `Comfy` async context manager to execute workflows directly:
```python
from comfy.client.embedded_comfy_client import Comfy

async def run(workflow_dict: dict):
    async with Comfy() as client:
        outputs = await client.queue_prompt(workflow_dict)
    # All models unloaded and VRAM released on exit
    return outputs
```

Build workflows programmatically with `GraphBuilder`, or paste API-format JSON from the web UI. Stream previews during inference with `queue_with_progress`.
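For instance, the txt2img graph from the earlier sketch can be assembled without hand-writing JSON. A sketch assuming upstream ComfyUI's `comfy_execution.graph_utils.GraphBuilder`; the node and input names match the stock nodes:

```python
from comfy_execution.graph_utils import GraphBuilder

graph = GraphBuilder()
ckpt = graph.node("CheckpointLoaderSimple", ckpt_name="v1-5-pruned-emaonly.safetensors")
positive = graph.node("CLIPTextEncode", clip=ckpt.out(1), text="a photo of a cat")
negative = graph.node("CLIPTextEncode", clip=ckpt.out(1), text="")
latent = graph.node("EmptyLatentImage", width=512, height=512, batch_size=1)
sampled = graph.node(
    "KSampler",
    model=ckpt.out(0), positive=positive.out(0), negative=negative.out(0),
    latent_image=latent.out(0), seed=42, steps=20, cfg=7.0,
    sampler_name="euler", scheduler="normal", denoise=1.0,
)
decoded = graph.node("VAEDecode", samples=sampled.out(0), vae=ckpt.out(2))
graph.node("SaveImage", images=decoded.out(0), filename_prefix="graphbuilder")

workflow_dict = graph.finalize()  # a plain API-format dict, ready for queue_prompt
```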
See Embedded / Library Usage for complete examples.
Full documentation is available in `docs/index.md`.
- Large Language Models
- Video Workflows (AnimateDiff, SageAttention, etc.)
- Other Features (SVG, Ideogram)
- Custom Nodes (Installing & Authoring)
- Embedded / Library Usage (Python, GraphBuilder, Streaming)
- Testing Workflows (pytest, Image Snapshots)
- API Usage (REST, WebSocket)