Running Local Deep Researcher with Ollama on Ubuntu

Running Local Deep Researcher with Ollama on Ubuntu combines powerful AI-driven research capabilities with the privacy and control of local processing. This setup is ideal for users who need comprehensive, citation-backed reports without relying on cloud services.

Overview of Local Deep Researcher

Local Deep Researcher is an AI-powered assistant that transforms complex queries into detailed reports using iterative analysis. Key features include:

  • Full Local Operation: Runs entirely on your machine via Ollama or LM Studio for enhanced data privacy.
  • Automated Research Cycles: Iteratively refines search queries with gap analysis to boost result quality.
  • Multi-Source Integration: Aggregates data from academic databases, web content, and private documents.
  • Markdown Output: Generates detailed, citation-rich reports in markdown format.

Why Choose Local Over Cloud?

  • Data Security: No third-party access or data leaks.
  • Customizable Depth: Configure the number of refinement cycles (typically 3–10) to suit your research needs.
  • Offline Capability: Model inference runs fully offline after the initial setup (live web searches still require a connection).

Prerequisites for Ubuntu Installation

Before beginning, ensure your system meets these requirements:

  • Operating System: Ubuntu 24.04 (or a compatible Linux distribution)
  • Memory: 16GB RAM (minimum for smooth LLM operation)
  • Software: Python 3.8+ and Git
  • Hardware: NVIDIA GPU (recommended for faster inference)
  • Internet: Stable connection for initial installation and model downloads
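
You can verify most of these requirements from a terminal before you begin (nvidia-smi applies only if you have an NVIDIA GPU):

python3 --version   # should report 3.8 or newer
git --version
free -h             # total RAM; 16GB or more recommended
nvidia-smi          # confirms the NVIDIA driver is installed and working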

Step-by-Step Installation Guide

1. System Preparation

Update your system packages and install core dependencies:

sudo apt update && sudo apt upgrade -y
sudo apt install python3 python3-pip git python3-venv -y

2. Install Ollama

Download and configure the LLM management platform:

curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl start ollama
sudo systemctl enable ollama
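
Before moving on, confirm the service is running. Ollama listens on port 11434 by default:

systemctl status ollama --no-pager
curl http://localhost:11434   # should respond with "Ollama is running"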

3. Download the Language Model

Pull a suitable language model for research tasks (Gemma3 is recommended):

ollama pull gemma3:12b
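
This is a multi-gigabyte download. Once it completes, confirm the model is available and responds:

ollama list                     # gemma3:12b should appear in the list
ollama run gemma3:12b "Hello"   # one-off prompt as a quick smoke test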

4. Install Local Deep Researcher

Clone the repository and set up the Python virtual environment:

git clone https://github.com/langchain-ai/local-deep-researcher
cd local-deep-researcher
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
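
Assuming the project installs under the local_deep_research package name used by the commands later in this guide, a quick import check confirms the environment is set up correctly:

python -c "import local_deep_research; print('import OK')"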

5. Configure Search Providers

Edit the .env file to enable web searches:

SEARCH_PROVIDER=searxng
SEARXNG_INSTANCE=https://searx.example.com
# Alternative: BRAVE_API_KEY=your-key-here
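
For maximum privacy, you can point SEARXNG_INSTANCE at a SearXNG instance running on your own machine rather than a public one. A minimal sketch using the official searxng/searxng Docker image (assuming Docker is installed; production setups usually add a settings volume):

docker run -d --name searxng -p 8080:8080 searxng/searxng
# then in .env:
# SEARXNG_INSTANCE=http://localhost:8080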

Using the Research Assistant

Web Interface Setup

Launch the browser-based control panel:

python -m local_deep_research.web.app

Access the control panel at http://localhost:5000 to manage and monitor your research projects.
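
If the page does not load, verify that the server actually bound to port 5000:

curl -I http://localhost:5000   # expect an HTTP response header, e.g. 200 OK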

Command-Line Usage

For advanced usage, run research tasks directly from the command line:

python -m local_deep_research.main --topic "Fusion Energy Developments" --cycles 5

Optional Flags:

  • --cycles: Set the number of research iterations (default is 3)
  • --sources: Specify or limit particular data repositories
  • --format: Choose the output format (e.g., markdown or pdf)
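
Putting these together, a fuller invocation looks like the following (values are illustrative):

python -m local_deep_research.main \
  --topic "Fusion Energy Developments" \
  --cycles 5 \
  --format markdown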

Advanced Configuration

Custom Search Providers

To add new search engines, define them in config/search_engines.yaml; the weight field controls how heavily that engine's results are ranked relative to other sources:

custom_engine:
  name: ArXiv
  url: https://arxiv.org/search/?query={query}
  parser: academic
  weight: 0.8

Model Optimization

Improve performance by rebuilding llama-cpp-python with CUDA support so inference runs on the GPU:

pip uninstall llama-cpp-python -y
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir
# Note: older llama-cpp-python releases used CMAKE_ARGS="-DLLAMA_CUBLAS=on"
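
Whether inference goes through Ollama or the rebuilt llama-cpp-python, you can confirm the GPU is actually being used while a query runs:

ollama ps               # the PROCESSOR column shows how much of the model sits on the GPU
watch -n 1 nvidia-smi   # GPU utilization and VRAM usage should rise during inference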

Troubleshooting Common Issues

  • Out-of-memory (OOM) errors: switch to a smaller model, e.g. ollama pull gemma3:4b
  • Slow search performance: cache results with Redis: sudo apt install redis-server
  • Citation errors: make sure cite_sources: true is set in config.yaml

Research Workflow Example

  1. Initial Query:
    "Current status of fusion energy commercialization"
  2. Automated Process:
    • Generates 5 sub-queries on topics like funding, technical hurdles, and regulatory landscapes.
    • Cross-references data from 12+ sources (e.g., DOE reports, IEEE papers).
    • Highlights gaps, such as strategies for waste management.
  3. Final Output:
    Produces a comprehensive 20-page markdown report including:
    • Executive summary
    • Detailed technical analysis
    • Commercial viability timeline
    • Complete source citations

Conclusion

Running Local Deep Researcher with Ollama on Ubuntu empowers you to conduct deep, reliable research while maintaining full control over your data. Its iterative query refinement and multi-source integration deliver comprehensive, citation-backed reports while all language-model processing stays on your machine.

This tool is particularly valuable for scientific research, policy analysis, and any application where data privacy is paramount. For best results, add API keys for the academic databases you rely on and keep your models updated through Ollama.
