Install and Run Cherry Studio Using Ollama on Windows

Cherry Studio is a powerful, open-source desktop application designed as a unified front-end for large language models (LLMs). It integrates smoothly with both local LLM engines like Ollama and popular cloud-based services, providing Windows users with a flexible, privacy-focused AI experience.
This guide walks you through installing and running Cherry Studio with Ollama on Windows, including setup, configuration, troubleshooting, and advanced usage tips.
Overview of Cherry Studio and Ollama
Cherry Studio
- Cross-platform desktop client for LLMs (Windows, macOS, Linux)
- Supports OpenAI, Gemini, Anthropic, and local backends like Ollama
- Features include chat, RAG (Retrieval-Augmented Generation), agents, and productivity tools
- Designed for privacy: run local models without any cloud connection
Ollama
- Open-source tool for running LLMs locally
- Supports models like Llama 2, Mistral, DeepSeek, Gemma, and more
- Offers a CLI and an OpenAI-compatible API (see the example after this list)
- Keeps all interactions private by processing data locally
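Because the API is OpenAI-compatible, any OpenAI-style client can talk to a local Ollama server. As a quick illustration (assuming Ollama is running on its default port 11434 and a `gemma3:1b` model is installed, as set up later in this guide), a chat request from PowerShell looks like this:

```
# Send an OpenAI-style chat request to the local Ollama server
$body = '{"model": "gemma3:1b", "messages": [{"role": "user", "content": "Hello!"}]}'
Invoke-RestMethod -Uri http://localhost:11434/v1/chat/completions -Method Post -ContentType "application/json" -Body $body
```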
System Requirements
Cherry Studio
- OS: Windows 10 or 11
- CPU: Modern x64 processor
- RAM: Minimum 8 GB (16 GB+ recommended)
- Disk Space: 2 GB+ for the app, more for models
- GPU: Strongly recommended (NVIDIA GPU, 8 GB+ VRAM)
Ollama
- OS: Windows 10 or later (Windows 11 recommended)
- CPU: x64 architecture
- RAM: 8 GB minimum
- GPU: Strongly recommended for large models; CPU-only works for smaller ones
- Disk Space: Depends on model size (hundreds of MBs to several GBs)
Step 1: Download and Install Cherry Studio
- Visit the Official Website: Go to the Cherry Studio download page.
- Download the Windows Installer: Choose the `.exe` installer for a typical setup, or the portable version if preferred.
- Handle Browser Warnings: If your browser flags the file, choose “Keep” or allow the download.
- Run the Installer: Double-click the installer and follow the setup wizard. Once done, launch Cherry Studio.
Step 2: Download and Install Ollama
- Go to Ollama’s Website: Visit https://ollama.com.
- Download the Windows Installer: Select the Windows version and start the download.
- Install Ollama: Run the installer and complete the setup.
- Verify the Installation: Open PowerShell or Command Prompt and run:

```
ollama --version
```

If the command is not recognized, make sure `ollama` is in your system’s PATH.
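To confirm the Ollama server itself is reachable (it listens on http://localhost:11434 by default), you can also query its API for the list of installed models; an empty list right after installation is normal:

```
# Ask the local Ollama API which models are currently installed
Invoke-RestMethod http://localhost:11434/api/tags
```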
Step 3: Download and Run a Model with Ollama
- Open PowerShell or Command Prompt
- Download a Model: In the terminal, run:

```
ollama pull gemma3:1b
```

Replace `gemma3:1b` with your desired model name.
- Run the Model: Load the model and start an interactive prompt with:

```
ollama run gemma3:1b
```

(Running `ollama run` on a model you haven’t pulled yet downloads it automatically.)
- Leave Ollama Running: Keep Ollama running so Cherry Studio can connect to its local API.
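Before wiring up Cherry Studio, it can help to sanity-check the model over HTTP as well. A minimal sketch using Ollama’s native /api/generate endpoint, assuming `gemma3:1b` is installed:

```
# Send a single, non-streamed prompt to the model
$body = '{"model": "gemma3:1b", "prompt": "Say hello in one sentence.", "stream": false}'
Invoke-RestMethod -Uri http://localhost:11434/api/generate -Method Post -ContentType "application/json" -Body $body
```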
Step 4: Configure Cherry Studio to Use Ollama
- Open Cherry Studio
- Go to Settings: Click the gear icon in the left navigation panel.
- Open Model Providers: Navigate to the “Model Providers” or “Model Services” tab.
- Add Ollama as a Provider: Click “Ollama” and enable it.
- Configure API Details:
  - API Address: http://localhost:11434
  - API Key: Leave blank (not required)
  - Session Timeout: Set your preferred duration in minutes
- Add Downloaded Models: Click “+ Add” and enter model names such as `gemma3:1b`. Use “Manage” to edit or remove models.
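The model names you add in Cherry Studio must match what Ollama reports. If you are unsure of the exact name and tag, list the installed models first:

```
# List locally installed models; the NAME column is what Cherry Studio expects
ollama list
```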
Step 5: Use Cherry Studio with Ollama
- Start a New Chat
- Select Ollama as the Provider
- Choose Your Model: Pick from the list of added models (e.g., `gemma3:1b`).
- Begin Chatting: Enter your prompt and receive responses from the locally running model. All processing is handled on your machine.
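Under the hood, Cherry Studio talks to the same local HTTP API you can call yourself, which makes a hand-made request a useful way to isolate problems. A minimal sketch of one chat turn against Ollama’s native /api/chat endpoint, again assuming `gemma3:1b`:

```
# Replicate a single chat turn against the local Ollama server (non-streaming)
$body = '{"model": "gemma3:1b", "messages": [{"role": "user", "content": "Hello!"}], "stream": false}'
Invoke-RestMethod -Uri http://localhost:11434/api/chat -Method Post -ContentType "application/json" -Body $body
```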
Troubleshooting Common Issues
Ollama Not Running
- Confirm Ollama is open and running
- Verify http://localhost:11434 is accessible
- Run Ollama manually and check the terminal for errors (see the sketch below)
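To run Ollama manually and watch for errors, start the server in the foreground (stop any already-running instance first, since port 11434 can only be bound once):

```
# Start the Ollama server in the foreground so log output and errors print to the terminal
ollama serve
```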
Model Not Found in Cherry Studio
- Ensure the model has been downloaded via `ollama pull`
- Double-check that the model name entered in Cherry Studio matches Ollama’s exactly
Performance Problems
- Use a smaller model if system resources are limited
- Ensure GPU drivers are updated for optimal acceleration
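To check whether a loaded model is actually running on your GPU, inspect Ollama’s process list; the PROCESSOR column shows GPU vs. CPU placement:

```
# Show currently loaded models, their memory footprint, and GPU/CPU placement
ollama ps
```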
Language or UI Confusion
- Some UI text may appear in Chinese by default; change the display language in Cherry Studio’s settings if needed
Advanced Configuration and Features
Multiple LLM Providers
Integrate local and cloud models for maximum flexibility.
RAG Support
Enhance context via document search or web connections.
Agent Workflows
Use Cherry Studio's agents to automate tasks and connect to external APIs.
Model Management
Switch between multiple local models via the Ollama integration.
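Disk space is usually the main constraint when keeping several local models around. Models can be added and removed entirely from the Ollama CLI; for example (the model names here are illustrative):

```
# Pull an additional model, then remove one you no longer need to reclaim disk space
ollama pull mistral
ollama rm gemma3:1b
```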
UI Customization
Enable dark mode and adjust keyboard shortcuts for a personalized setup.
Security and Privacy Benefits
- Local Processing: No data leaves your device
- Offline Capability: Works without an internet connection
- Transparency: Both tools are fully open-source
Cherry Studio + Ollama vs. Cloud-Based LLMs
| Feature | Cherry Studio + Ollama | Cloud-Based LLMs (e.g., ChatGPT) |
|---|---|---|
| Data Privacy | 100% local | Cloud-processed |
| Offline Use | Yes | No |
| Customization | Full | Limited |
| Cost | Free (open source) | May require a subscription |
| Hardware Needs | Higher (local models) | Lower |
| Integration Options | Flexible | Restricted |
Tips for the Best Experience
- Use a GPU: NVIDIA GPUs (8 GB+ VRAM) improve performance significantly
- Monitor Resources: Close unused apps to free up RAM/VRAM
- Stay Updated: Regularly check for new releases of Cherry Studio and Ollama
- Explore Model Options: Try various models to see what fits your needs
Conclusion
By installing Cherry Studio and Ollama on your Windows machine, you unlock a private, customizable, and powerful AI setup. Whether you’re experimenting with LLMs, building workflows, or protecting sensitive data, this local-first solution offers full control with zero reliance on the cloud.