Run Void AI with Ollama on Ubuntu: Best Cursor Alternative

The rise of AI-powered coding tools has transformed software development, but many popular solutions—like Cursor and GitHub Copilot—are closed-source and cloud-based, raising concerns about privacy and data control.

Enter Void, an open-source, locally-hosted AI code editor, and Ollama, a robust tool for running large language models (LLMs) on your own machine.

This guide provides a comprehensive walkthrough for running Void AI with Ollama on Ubuntu, creating a powerful, private, and customizable alternative to Cursor.

What is Void?

Void is an open-source AI code editor designed as a direct alternative to Cursor. Built as a fork of VS Code, it retains full extension and theme compatibility, while adding powerful AI features for code completion, editing, and chat—all with your data kept local.

Key Features:

  • Local AI Agents: Use AI models on your codebase without sending data to the cloud.
  • VS Code Compatibility: Seamless migration of extensions and themes.
  • AI-Powered Editing: Inline code edits, autocompletion, and file-aware chat.
  • Privacy: No data leaves your machine unless you choose.
  • Checkpoints & Visualization: Track and visualize code changes.
  • Extensible: Integrate with other tools and models, including Ollama, llama.cpp, LM Studio, and more.

What is Ollama?

Ollama is an open-source tool for running large language models directly on your local machine. It supports a wide range of LLMs (like Llama 2, Llama 3, Mistral, Gemma, and more), offering full data privacy, offline operation, and multi-platform support (Linux, macOS, Windows).

Key Features:

  • Local Model Hosting: Run, update, and manage models on your device.
  • Command-Line Interface: Simple CLI for model management and inference.
  • Cross-Platform: Works on Linux, macOS, and Windows (preview).
  • GPU Acceleration: Supports discrete GPUs for fast inference.
  • Third-Party Integration: Can be used as a backend for editors like Void.

Why Use Void + Ollama?

Combining Void and Ollama gives you a fully local, private, and extensible AI coding environment:

  • Privacy: No code or prompts are sent to external servers.
  • Control: Choose and manage your own AI models.
  • Performance: Local inference reduces latency and increases reliability.
  • Customization: Integrate with any model supported by Ollama.
  • Cost: Completely free and open-source—no subscriptions or usage limits.

Feature Comparison Table

| Feature | Cursor | Void + Ollama |
| --- | --- | --- |
| Open Source | No | Yes |
| Local Model Hosting | No (cloud-based) | Yes (Ollama) |
| Data Privacy | Limited | Full (local-only) |
| Extension Support | No | Yes (VS Code compatible) |
| Model Choice | Fixed | Any model supported by Ollama |
| Cost | Paid / subscription | Free & open source |
| Platform Support | macOS, Windows | Linux, macOS, Windows |

System Requirements

Minimum Requirements:

  • Ubuntu 18.04 or newer (Desktop or Server)
  • 200 MB free disk space for Void IDE
  • Sudo privileges
  • Internet access (for initial downloads)
  • Ollama installed and running locally
  • Sufficient RAM and disk space for LLMs (models can be several GB)
  • Discrete GPU (NVIDIA/AMD) recommended for faster AI inference
  • SSD storage recommended for faster model loading
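
Before installing anything, you can quickly confirm your machine meets these requirements. The commands below are a minimal check; lspci works for any GPU vendor, while nvidia-smi is only available if NVIDIA drivers are installed:

```bash
# Ubuntu version and CPU architecture
lsb_release -a
uname -m

# Available RAM and free disk space in your home directory
free -h
df -h ~

# Look for a discrete GPU (works for NVIDIA and AMD)
lspci | grep -iE 'vga|3d'

# NVIDIA only: driver and GPU status
nvidia-smi || echo "nvidia-smi not available (no NVIDIA driver or GPU)"
```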

Step-by-Step Installation

1. Prepare Your Ubuntu System

Update your package lists and upgrade existing packages:

```bash
sudo apt update
sudo apt upgrade -y
```

2. Install Ollama

Ollama is required to host your LLMs locally.

a. Open Terminal and Install Ollama:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

b. Allow Ollama’s Default Port (if using a firewall):

```bash
sudo ufw allow 11434/tcp
```
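
To confirm the rule was added, check the firewall status:

```bash
sudo ufw status | grep 11434
```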

c. Start the Ollama Service:

The install script normally registers Ollama as a systemd service and starts it automatically. If it is not running, start it with:

```bash
sudo systemctl start ollama
```

(Alternatively, run `ollama serve` in a terminal to start the server in the foreground.)

d. Verify Ollama is Running:

Visit http://localhost:11434 in your browser. You should see a message:
“Ollama is running”.
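
You can also verify from the terminal. This simply queries the default local endpoint, assuming Ollama is listening on port 11434:

```bash
# The root endpoint returns a plain-text status message
curl http://localhost:11434
# Expected output: Ollama is running
```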

3. Download and Run a Model in Ollama

Ollama supports many models. For this guide, let’s use Llama 3.1 as an example.

a. Pull the Model:

```bash
ollama pull llama3.1:8b
```

b. Run the Model (Optional Test):

```bash
ollama run llama3.1:8b "Hello, world!"
```

You can list all available models:

```bash
ollama list
```
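
To confirm the model responds outside of Void, you can also call Ollama's HTTP API directly. This is a minimal sketch against the standard /api/generate endpoint with streaming disabled, assuming the llama3.1:8b model pulled above:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Write a one-line hello world in Python.",
  "stream": false
}'
```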

4. Install Void IDE

a. Download the Latest .deb Package:

Get the latest release from the Void GitHub Releases page.

b. Install Void Using APT:

```bash
cd ~/Downloads
sudo apt update
sudo apt install ./void_1.99.30034_amd64.deb
```

(Replace the filename with the latest version as needed).
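
To confirm the installation succeeded, you can check the package database and the editor's CLI. The package and binary names below are assumptions based on the .deb filename; adjust them if your release differs:

```bash
# Check that the package is registered with dpkg
dpkg -l | grep -i void

# Void is a VS Code fork, so it typically exposes the same CLI flags
void --version
```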

5. Launch Void

Start Void from your applications menu or by running:

```bash
void
```

On first launch, Void will detect your local Ollama instance running at localhost:11434.

Configuring and Running Models

1. Connecting Void to Ollama

Void automatically detects running Ollama instances on the default port (11434). If you have multiple models, you can choose which to use from within Void’s settings or model selection menu.

To switch or add models:

  • Open Void’s settings or model selection panel.
  • Enter the model name exactly as shown in your Ollama list (e.g., llama3.1:8b); see the commands below to check the exact names.
  • Toggle the model on or off as needed.
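
You can list the locally installed models from the CLI, or query the same information from Ollama's /api/tags endpoint:

```bash
# Names shown here (e.g., llama3.1:8b) are what Void expects
ollama list

# The same list over the HTTP API
curl http://localhost:11434/api/tags
```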

2. Using Void’s AI Features

Void provides several AI-powered features, all running locally with your chosen model:

  • Autocomplete: Press Tab to accept AI-generated code completions.
  • Inline Edits: Select code and press Ctrl + K to invoke AI-powered editing.
  • AI Chat: Press Ctrl + L to open a chat window, ask questions, or attach files for context-aware answers.
  • File Indexing: AI can reference your entire codebase for smarter suggestions.
  • Intelligent Search: Use AI to find and edit code across your project.
  • Prompt Customization: View and edit the prompts used by Void for even finer control.

All these features operate locally, ensuring your code and queries remain private.

Advanced Customization

1. Model Management in Ollama

  • Update a model: `ollama pull <model-name>`
  • Remove a model: `ollama rm <model-name>`
  • Show model info: `ollama show <model-name>`
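
Newer Ollama releases also provide `ollama ps`, which shows the models currently loaded in memory and whether they are running on CPU or GPU:

```bash
# Show loaded models, their size, and CPU/GPU placement
ollama ps
```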

2. Using Different Models

Ollama supports a wide range of models, including Llama 2, Llama 3, Mistral, Gemma, and more. You can pull and use any supported model:

```bash
ollama pull mistral
ollama run mistral "Summarize this code:"
```

Switch models in Void’s settings as described above.

3. Integrating Extensions

Since Void is a fork of VS Code, you can install VS Code extensions and themes for additional functionality and customization.
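
If Void exposes the standard VS Code command-line interface (as VS Code forks usually do), you can also manage extensions from the terminal. The extension ID below is only an example; substitute any ID you use:

```bash
# Install an extension by its publisher.name ID (example ID)
void --install-extension esbenp.prettier-vscode

# List installed extensions
void --list-extensions
```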

Troubleshooting & FAQs

Q: Void can’t connect to Ollama. What should I check?

  • Ensure the Ollama service is running (`sudo systemctl status ollama`, or start it with `ollama serve`).
  • Verify Ollama is listening on port 11434 (localhost:11434).
  • If using a firewall, ensure port 11434 is open (sudo ufw allow 11434/tcp).
  • Make sure the correct model name is entered in Void’s settings.
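
A quick diagnostic pass from the terminal can confirm each of these points. The commands assume the install script registered Ollama as a systemd service named ollama; adjust if you run `ollama serve` manually:

```bash
# Is the Ollama service active?
sudo systemctl status ollama

# Is anything listening on the default port?
sudo ss -ltnp | grep 11434

# Does the API respond?
curl http://localhost:11434

# Is the model name entered in Void actually installed?
ollama list
```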

Q: How do I improve AI performance?

  • Use a discrete GPU for faster inference.
  • Use smaller models if you have limited RAM or CPU resources.
  • Close unused applications to free up system resources.
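
To check whether Ollama is actually using your GPU, load a model and inspect where it runs. `ollama ps` (in newer releases) reports CPU/GPU placement; nvidia-smi applies to NVIDIA cards only:

```bash
# Generate once so the model is loaded, then inspect placement
ollama run llama3.1:8b "warm up" >/dev/null
ollama ps

# NVIDIA only: GPU memory and utilization
nvidia-smi
```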

Q: Can I use Ollama and Void on Windows or macOS?

  • Ollama supports macOS and Windows (preview), but Void’s best support is currently on Linux/Ubuntu.

Q: Is my code or data ever sent to the cloud?

  • No. Both Void and Ollama are designed for local operation. Your code, prompts, and AI interactions remain on your machine unless you explicitly choose to share them.

Conclusion

Running Void AI with Ollama on Ubuntu gives you a powerful, private, and fully customizable AI coding environment—without the privacy trade-offs or costs of commercial solutions like Cursor.
