Install and Run Cherry Studio Using Ollama on Linux Ubuntu

This comprehensive guide walks you through installing and running Cherry Studio with Ollama on Ubuntu Linux. Learn how to set up a robust local environment for running large language models (LLMs) privately, securely, and efficiently.

Whether you're a developer, researcher, or privacy-conscious user, this setup will give you powerful AI capabilities without relying on cloud services.

Overview

Cherry Studio

A cross-platform desktop application for managing and interacting with LLMs. Supports local models via Ollama and cloud providers like OpenAI, Gemini, and Anthropic.

Ollama

An open-source tool that enables local execution of LLMs with support for popular models like LLaMA 2, DeepSeek, Mistral, and Gemma. It exposes an OpenAI-compatible API, making integration with other tools seamless.
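For a sense of what that compatibility looks like, here is a minimal sketch of a chat request against Ollama's OpenAI-style endpoint (this assumes Ollama is already installed and a model named llama3 has been pulled, as covered in the steps below):

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'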


Why Use Cherry Studio with Ollama?

  • Run LLMs Locally: Avoid cloud dependencies and ensure complete control over your data.
  • Offline Access: Use language models without an internet connection.
  • No API Costs: Save money by avoiding pay-per-token API usage.
  • Enhanced Privacy: All data and model execution remain on your local machine.
  • Model Flexibility: Choose from a wide variety of supported models.

Prerequisites

  • Ubuntu Linux (22.04 LTS or later recommended)
  • 64-bit CPU (x86_64 or ARM64)
  • Minimum 16 GB RAM (more for large models)
  • At least 30 GB free disk space
  • Basic terminal proficiency

Step 1: Download Cherry Studio for Linux

Cherry Studio provides Linux builds in .AppImage and .deb formats.

Using AppImage

Replace vX.Y.Z in the URL with the latest release version listed on the Cherry Studio GitHub releases page:

wget https://github.com/Cherry-AI/CherryStudio/releases/download/vX.Y.Z/CherryStudio-x86_64.AppImage
chmod +x CherryStudio-x86_64.AppImage

Step 2: Install and Run Cherry Studio

Option 1: Run via AppImage

chmod +x CherryStudio-x86_64.AppImage
./CherryStudio-x86_64.AppImage
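
Optionally, to launch Cherry Studio from your application menu instead of the terminal, you can create a desktop entry for the AppImage. This is a minimal sketch; the paths are examples, so adjust them to wherever you stored the file:

mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/cherrystudio.desktop <<'EOF'
[Desktop Entry]
Name=Cherry Studio
Exec=/home/youruser/CherryStudio-x86_64.AppImage
Type=Application
Terminal=false
Categories=Utility;
EOF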

Option 2: Install with .deb (if available)

sudo dpkg -i CherryStudio-x86_64.deb
sudo apt-get install -f

Then run:

cherrystudio

Step 3: Install Ollama on Ubuntu

Install Ollama using the official script:

curl -fsSL https://ollama.com/install.sh | sh

Verify the installation:

ollama --version
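
On Ubuntu, the official script typically also registers Ollama as a systemd service. Assuming the default service name ollama, you can confirm it is running and keep it enabled at boot:

systemctl status ollama              # check whether the Ollama service is active
sudo systemctl enable --now ollama   # start it now and enable it at boot (if not already)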

Step 4: Download and Run a Model Using Ollama

Choose and download a model:

ollama pull llama3

Start the model:

ollama run llama3

You can keep this terminal open to chat with the model directly. Cherry Studio, however, connects to Ollama's local API at http://localhost:11434, which (on installs that use the official script) is served by the Ollama background service, so the API remains available even outside an interactive ollama run session.
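
Before configuring Cherry Studio, you can confirm the API responds by sending a quick test request to Ollama's native generate endpoint (this assumes the llama3 model pulled above):

curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'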


Step 5: Configure Cherry Studio to Use Ollama

  1. Launch Cherry Studio via AppImage or application menu.
  2. Open Settings (gear icon on the sidebar).
  3. Navigate to Model Providers or Model Services.
  4. Click Add Provider, choose Ollama, and enable it.
  5. Enter the following:
    • API Address: http://localhost:11434
    • API Key: Leave blank
    • Session Timeout: Optional (e.g., 30 minutes)
  6. Under Models, click + Add, and enter the model name (e.g., llama3).
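
The model name you enter must match one Ollama actually serves. To see exactly which models the local API exposes (the same names Cherry Studio will use), you can query Ollama's tags endpoint or run ollama list:

curl http://localhost:11434/api/tags   # lists locally available models and their tags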

Step 6: Start Using Cherry Studio with Ollama

  • Open a new chat inside Cherry Studio.
  • Select Ollama as the provider.
  • Choose the model you added (e.g., llama3).
  • Start interacting with your local LLM in real-time, with zero cloud dependency.

Advanced Configuration

Managing Multiple Models

Use Ollama to manage multiple models:

ollama list        # View available models
ollama pull gemma  # Download another model
ollama rm llama3   # Remove a model

Add any additional models to Cherry Studio under “Providers > Ollama > Add Model.”

Change Default Port

If Ollama runs on a non-default port, update the API Address in Cherry Studio settings accordingly:

http://localhost:12345
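
For reference, Ollama's listen address and port are controlled by the OLLAMA_HOST environment variable. The sketch below shows a one-off run and a persistent change for a systemd-managed install; the port 12345 is just the placeholder used above:

# One-off, in a terminal (stop the existing service first if it holds the default port):
OLLAMA_HOST=127.0.0.1:12345 ollama serve

# Persistent, for a systemd-managed install:
sudo systemctl edit ollama            # add the two lines below to the override file
#   [Service]
#   Environment="OLLAMA_HOST=127.0.0.1:12345"
sudo systemctl restart ollama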

Troubleshooting

Cherry Studio Won’t Launch

Install FUSE for AppImage support (on Ubuntu 24.04, the package may be named libfuse2t64 instead):

sudo apt install libfuse2

Ollama Not Detected

Check if the API is live:

curl http://localhost:11434

If it fails, ensure the model is running and that nothing is blocking the port.
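
If Ollama was installed with the official script, restarting its service and checking the recent logs usually reveals the problem (this assumes the default systemd service name ollama):

sudo systemctl restart ollama                    # restart the Ollama service
journalctl -u ollama --since "10 minutes ago"    # review recent service logs for errors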

Model Not Found in Cherry Studio

Ensure the model name exactly matches the name shown in:

ollama list

Performance Bottlenecks

  • Use smaller models optimized for lower-spec hardware (e.g., Gemma 2B or Llama 3.2 1B); see the example after this list.
  • Close other apps to free up memory and CPU/GPU usage.
  • Consider upgrading hardware for optimal performance.
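
As an illustration, switching to a lighter model is just another pull/run pair; the tags below are examples, so check the Ollama model library for current names:

ollama pull llama3.2:1b    # a small Llama variant that fits comfortably in limited RAM
ollama run llama3.2:1b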

Tips for Optimal Use

  • Use a GPU: If available, ensure drivers are properly installed for accelerated performance (a quick check is shown after this list).
  • Stay Updated: Regularly check for updates to Cherry Studio and Ollama.
  • Explore Different Models: Try various models to find the one that balances performance and accuracy on your setup.
  • Secure Your Machine: While everything runs locally, ensure your system is regularly patched and protected.
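
For NVIDIA GPUs, a quick sanity check is to confirm the driver is loaded and that Ollama is actually offloading to the GPU. nvidia-smi ships with the NVIDIA driver, and in recent Ollama versions ollama ps reports whether a loaded model is running on CPU or GPU:

nvidia-smi    # should list your GPU and the driver version
ollama ps     # shows loaded models and whether they run on CPU or GPU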

Quick Reference

  • Download Cherry Studio: wget the latest release from GitHub (see Step 1).
  • Run Cherry Studio: ./CherryStudio-x86_64.AppImage, or install the .deb package.
  • Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
  • Download a model: ollama pull llama3 (choose any supported model).
  • Run the model: ollama run llama3 (keep the terminal open for direct chat).
  • Configure Cherry Studio: Settings → Model Providers → Add Ollama, using the default API address.
  • Add the model in Cherry Studio: click + Add and enter the model name; it must match the name from ollama list.

Conclusion

By integrating Cherry Studio with Ollama on Ubuntu, you unlock the full power of local LLMs with zero reliance on cloud infrastructure. This setup is ideal for developers, researchers, and privacy advocates seeking flexibility, performance, and security.

FAQ

Q: Is Cherry Studio free?
A: Yes, Cherry Studio is free to use. Some LLMs may require a paid API key.

Q: Can I use local models?
A: Yes, with support for Ollama and compatible local models.

Q: Which file types are supported?
A: Cherry Studio handles text, images, Office files, and PDFs.

Q: Are plugins supported?
A: Yes, via mini-programs that extend functionality.
