Set up & Run ComfyUI-Copilot on Linux / Ubuntu
ComfyUI Copilot is a modular graphical user interface (GUI) and backend framework for orchestrating Stable Diffusion models.
Its graph- and node-based design lets users build and run complex image-synthesis workflows with little or no coding.
System Prerequisites
Before proceeding with installation, ensure that the Linux environment meets the following technical specifications:
- Python 3.10 or later – Ensures compatibility with the ComfyUI framework.
- Git – Required for repository cloning and version control.
- PyTorch – Essential for model computation; installation parameters depend on the hardware architecture.
- NVIDIA GPU (recommended) – While the framework can operate on CPUs, an NVIDIA GPU with a minimum of 8GB VRAM significantly enhances computational efficiency.
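A quick way to verify these prerequisites before continuing:
python3 --version   # should report 3.10 or later
git --version
nvidia-smi          # confirms the NVIDIA driver is installed and shows available VRAM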
Installation Protocol
Step 1: System Update
Update the package index and upgrade installed packages to their latest versions:
sudo apt update && sudo apt upgrade -y
Step 2: Installation of Required Dependencies
Install Python, pip, and Git using the package manager:
sudo apt install python3 python3-pip git -y
Step 3: Repository Cloning
Retrieve the ComfyUI source code via Git:
git clone https://github.com/comfyanonymous/ComfyUI.git
Step 4: Directory Navigation
cd ComfyUI
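Optionally, isolate ComfyUI's Python packages in a virtual environment before installing dependencies (a common practice rather than a strict requirement; the python3-venv package may need to be installed first):
sudo apt install python3-venv -y
python3 -m venv venv
source venv/bin/activate   # re-run in each new shell before launching ComfyUI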
Step 5: Dependency Installation
Execute the following command to install all necessary Python dependencies:
pip3 install -r requirements.txt
Step 6: PyTorch Installation
For NVIDIA GPUs:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
For AMD GPUs:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2
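The CUDA/ROCm tags above are examples; PyTorch also publishes wheels for newer toolkit versions (for instance under .../whl/cu121), so pick the tag that matches the installed driver. Once installed, a quick sanity check confirms that PyTorch can see the GPU:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expect True on a working NVIDIA/ROCm setup; False points to a driver or wheel mismatch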
Step 7: Model Deployment
Download Stable Diffusion model checkpoints (.ckpt or .safetensors) and place them in the models/checkpoints directory.
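For example, assuming a checkpoint has already been downloaded to ~/Downloads (the filename below is a placeholder):
mv ~/Downloads/sd_xl_base_1.0.safetensors models/checkpoints/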
Step 8: Execution of ComfyUI
Launch ComfyUI by executing:
python3 main.py
Once the server starts, the user interface is accessible at http://localhost:8188.
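By default the server binds to 127.0.0.1. To reach the interface from another machine, or to run without a GPU, ComfyUI accepts additional flags:
python3 main.py --listen 0.0.0.0 --port 8188   # expose the UI on the local network
python3 main.py --cpu                          # CPU-only mode (slow, but works without a GPU)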
Configuration and Advanced Usage
WebUI Setup
Upon launching ComfyUI, navigate to http://localhost:8188 and enable Developer Mode in the settings menu; this exposes advanced options such as the Save (API Format) export used below.
API Utilization
ComfyUI exposes an HTTP API for automation and external integrations. To prepare a workflow for API use:
- With Developer Mode enabled, open the workflow you want to automate in the WebUI.
- Click Save (API Format) to export it as a JSON file.
Submitting the exported workflow for execution:
import requests
import json

# Load the workflow previously exported via "Save (API Format)"
with open('xyz_template.json') as f:
    workflow = json.load(f)

# ComfyUI queues work via its /prompt endpoint; the workflow goes under the "prompt" key
response = requests.post('http://localhost:8188/prompt', json={"prompt": workflow})
print(response.json())
Ensure ComfyUI is active prior to execution.
Model and Extension Management
Directory Structure
Organize model files within the designated subdirectories of the models/ folder:
- models/checkpoints – Primary Stable Diffusion models (e.g., SDXL, Flux)
- models/vae – Variational autoencoders
- models/controlnet – ControlNet models
- Additional folders for LoRA, CLIP, and upscaling (super-resolution) models
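All of these live under the models/ directory of the ComfyUI checkout; a default clone already contains the relevant subfolders:
ls models/
# checkpoints/  clip/  controlnet/  loras/  upscale_models/  vae/  (among others)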
Model Installation
- Acquire .safetensors models from reputable sources.
- Place them in their corresponding directories.
- Restart ComfyUI to load new models.
Specialized Configurations
For complex models such as Flux, additional CLIP encoders and VAE settings may be required.
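As a sketch of one common arrangement (assuming the Flux transformer, its two text encoders, and its VAE have been downloaded separately; the filenames reflect commonly distributed ones and may differ):
mv flux1-dev.safetensors models/unet/
mv clip_l.safetensors t5xxl_fp16.safetensors models/clip/
mv ae.safetensors models/vae/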
Applied Use Cases
Example 1: Automated Image Synthesis
ComfyUI does not expose a single-prompt generation endpoint; instead, a complete API-format workflow (such as the JSON exported above) is submitted to the /prompt endpoint. Parameters such as steps and resolution live on the workflow's sampler and latent-image nodes rather than in the request payload. The sketch below loads the exported workflow, overrides the positive-prompt text, and queues it; the node id "6" is workflow-specific and must be adjusted to match the CLIPTextEncode node in your own export:
import requests
import json

# Load the API-format workflow exported from the WebUI
with open('xyz_template.json') as f:
    workflow = json.load(f)

# Override the positive-prompt text (node id "6" is an assumption; check your export)
workflow["6"]["inputs"]["text"] = "A futuristic cityscape with neon lights"

# Queue the workflow for execution
response = requests.post('http://localhost:8188/prompt', json={"prompt": workflow})
print(response.json())
Example 2: Batch Image Processing
To queue several prompts in one session, load the exported workflow once and override the prompt text before each submission (again, the node id is workflow-specific):
import requests
import json

prompts = [
    "A fantasy castle in the clouds",
    "A cyberpunk street with rain reflections",
    "A medieval warrior with a sword"
]

# Load the exported API-format workflow once
with open('xyz_template.json') as f:
    workflow = json.load(f)

for prompt in prompts:
    workflow["6"]["inputs"]["text"] = prompt  # node id "6" is an assumption; check your export
    response = requests.post('http://localhost:8188/prompt', json={"prompt": workflow})
    print(response.json())
Diagnostic and Troubleshooting Procedures
Common Issues
- Dependency Resolution Errors – Reinstall missing packages from requirements.txt.
- GPU Compatibility Conflicts – Verify that the GPU drivers align with the installed PyTorch version.
- Python Version Mismatch – Ensure Python 3.10 or later is in use.
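A few commands that usually narrow these problems down:
pip3 install -r requirements.txt                        # reinstall anything that failed to resolve
python3 --version                                       # confirm the interpreter is 3.10+
nvidia-smi                                              # confirm the driver is loaded and note its CUDA version
python3 -c "import torch; print(torch.version.cuda)"    # compare against the driver's CUDA version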
Conclusion
By following this procedure, users can deploy and run ComfyUI Copilot on Linux. Correct model placement and GPU configuration keep generation fast and reliable for AI-driven image synthesis.