Installing and running Cherry Studio with Ollama on a Mac
This comprehensive guide walks you through every step—from prerequisites to advanced features—ensuring a smooth and efficient setup of Cherry Studio and Ollama on macOS.
- Cherry Studio is a cross-platform desktop application for interacting with various large language models (LLMs). It supports providers like OpenAI, Gemini, Anthropic, and local models through Ollama and LM Studio.
- Ollama is an open-source platform that allows you to run LLMs locally on your Mac, offering enhanced privacy, speed, and control—without relying on cloud APIs.
System Requirements
- macOS 11 (Big Sur) or later
- Adequate disk space (LLMs may require several GBs to tens of GBs)
- Internet access for initial downloads and updates
Step 1: Install Ollama on macOS
A. Download and Install Ollama
- Visit the Ollama website or use Homebrew to install.
Recommended: Install via Homebrew
brew install ollama
Then start Ollama as a background service:
brew services start ollama
Ollama will now be running at: http://localhost:11434/
- Alternatively: Install via .zip
- Download the .zip from the official site.
- Drag Ollama.app into your Applications folder.
- Open the app and follow the prompts.
B. Verify Installation
Visit http://localhost:11434/ in your browser. If Ollama is running, you’ll see a status confirmation.
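You can also verify from the terminal. A minimal check, assuming the default port:
curl http://localhost:11434/
A running instance replies with the plain-text message “Ollama is running”.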
Step 2: Download and Run a Model with Ollama
A. Explore Available Models
Check the Ollama models page or use the terminal.
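To see which models are already on your machine, the Ollama CLI includes a listing command:
ollama list
This prints each local model’s name, size, and last-modified time. For models you haven’t downloaded yet, browse the library on the Ollama website.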
B. Download a Model
Example:
ollama pull deepseek-r1
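Many models are published in several sizes, selected with a tag after the name. For example (exact tags vary per model; check its page in the Ollama library):
ollama pull deepseek-r1:7b
Smaller tags download faster and need less RAM; larger ones generally give better answers.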
C. Run the Model
ollama run deepseek-r1
You can now interact with the model directly from your terminal.
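Ollama also exposes a local REST API on the same port, which is what desktop clients like Cherry Studio talk to. A quick sketch using the documented /api/generate endpoint, assuming the model from above has been pulled:
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1", "prompt": "Why is the sky blue?", "stream": false}'
With "stream": false, the response comes back as a single JSON object rather than a stream of tokens.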
Step 3: Install Cherry Studio on macOS
A. Download Cherry Studio
- Visit the Cherry Studio download page.
- Select the correct version:
- Apple Silicon (M1/M2/M3): ARM64
- Intel Macs: x64
(Check your chip under Apple logo → "About This Mac")
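You can also check your architecture from the terminal:
uname -m
An output of arm64 means Apple Silicon (ARM64 build); x86_64 means an Intel Mac (x64 build).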
B. Install Cherry Studio
- Open the .dmg or .zip file.
- Drag Cherry Studio to your Applications folder.
- If you see a security warning, right-click and choose “Open.”
C. Install via Homebrew (Alternative)
brew install --cask cherry-studio
Homebrew installs the latest released version and simplifies future updates.
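Updating later is then a single command:
brew upgrade --cask cherry-studio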
Step 4: Connect Cherry Studio to Ollama
A. Launch Cherry Studio
Open the application from your Applications folder.
B. Add Ollama as a Model Provider
- Navigate to Settings → Model Providers.
- Select Ollama from the list.
- Use the default endpoint: http://localhost:11434/ (update it if you’ve changed the port).
- Click Manage to view downloaded models and select the one you want to use.
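If Cherry Studio doesn’t show your models, you can confirm the endpoint is reachable and serving them. A quick check against Ollama’s documented /api/tags endpoint, which returns the locally available models as JSON:
curl http://localhost:11434/api/tags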
C. Test Your Setup
- Start a new conversation or AI assistant session.
- Choose your Ollama-backed model.
- Type a prompt and confirm a local response is returned.
Step 5: Unlock Advanced Features
A. Multi-Model Conversations
Compare responses by interacting with multiple models (OpenAI, Gemini, Ollama, etc.) in a single conversation window.
B. Custom AI Assistants
Create assistants for tasks like summarization, code generation, document analysis, and more. Cherry Studio includes 300+ prebuilt assistants, or you can customize your own.
C. Document Processing
Upload and analyze:
- PDFs
- Word docs
- Images
- Spreadsheets
- Plain text
Your selected model will handle the extraction and interpretation.
D. Built-In Tools
Utilize translation, global search, and other productivity tools—all powered by your chosen LLM.
Troubleshooting & Tips
- Model not found? Ensure you’ve pulled it using ollama pull model-name.
- Slow performance? Use Apple Silicon Macs with ample RAM and storage. Avoid large models on underpowered systems.
- Blocked by macOS security? Right-click the app → Click Open to bypass the warning.
- Ollama not running? Restart the service with:
brew services restart ollama
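To see whether the service is registered and running in the first place, Homebrew can report its status:
brew services list
Look for ollama in the output; its status should read started.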
Frequently Asked Questions
Q: Can I use Cherry Studio without Ollama?
Yes, Cherry Studio supports cloud-based LLMs like OpenAI, Gemini, and Anthropic. Ollama enhances your setup by allowing offline, local inference.
Q: Do I need to be online after setup?
Only for downloading updates or models. Once installed, local models run fully offline; cloud providers still require an internet connection.
Q: Can I switch between multiple models?
Absolutely. Cherry Studio enables dynamic switching and comparison between various local and cloud-based models.
Installation Summary Table
| Step | Cherry Studio | Ollama |
|---|---|---|
| Download | Official website / Homebrew | Official website / Homebrew |
| Install | Drag to Applications / brew install --cask cherry-studio | Drag to Applications / brew install ollama |
| Configure | Add Ollama provider, select model | Run service, pull and run models |
| Use | Chat, assistants, tools | Terminal or through Cherry Studio |
Conclusion
With Cherry Studio and Ollama on macOS, you can unlock the full power of modern AI—locally, privately, and efficiently. Whether you're building custom AI assistants, analyzing documents, or experimenting with powerful open-source models, this setup provides a seamless and flexible environment for both casual and advanced users.