Run DeepSeek-VL2 on Windows: Step-by-Step Installation Guide

DeepSeek AI has developed DeepSeek-VL2, a mixture-of-experts vision-language model designed to understand and process both images and text, enabling tasks such as image understanding, object localization, and grounded captioning. You can run DeepSeek-VL2 on Windows using tools like LM Studio or Ollama.

What is DeepSeek-VL2? 🤖

DeepSeek-VL2 is a state-of-the-art multimodal AI model that combines:

  • Vision-Language Understanding 🌐 (processes images + text)
  • Mixture-of-Experts Architecture 🧠 (enhanced task handling)
  • Advanced Capabilities:
    • Object localization 🔍
    • Visual grounding 🖼️
    • Multimodal conversations 💬
    • Image captioning 📝

System Requirements 💻

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| OS | Windows 10 | Windows 11 |
| RAM | 8GB | 16GB+ |
| Storage | 15GB | 30GB SSD |
| GPU | NVIDIA GTX 1060 | RTX 3080+ |
| Python | 3.8+ | 3.10+ |

Installation Methods 🛠️

Option 1: Ollama Installation (CLI Approach)

# 1. Download the installer from ollama.com and run it
# 2. In PowerShell (run as Administrator), start the app with debug logging:
$env:OLLAMA_DEBUG="1"; & "ollama app.exe"
# 3. Pull and launch a DeepSeek model (note: this tag is the text-only
#    DeepSeek-R1; check the Ollama model library for DeepSeek-VL2 availability)
ollama run deepseek-r1:8b

Pros: Lightweight, terminal-based | Cons: Requires CLI familiarity
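
If you prefer to drive the local server from code, Ollama exposes a REST API on port 11434 by default. A minimal Python sketch (the model tag mirrors the CLI step above and assumes you have already pulled it):

import json
import urllib.request

# Send a single prompt to the local Ollama server and print the reply.
payload = {
    "model": "deepseek-r1:8b",   # must match a model you have pulled
    "prompt": "Describe what a vision-language model does.",
    "stream": False,             # return one JSON object, not a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])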

Option 2: LM Studio Setup (GUI Method)

  1. Download from [LM Studio](https://lmstudio.ai)
  2. Install β†’ Select Processor Type (CPU/GPU)
  3. Model Search: "DeepSeek-VL2"
  4. Load β†’ Chat Interface

Best For: Beginners preferring visual interface

Option 3: Web Access (Instant Access)

1. Visit [DeepSeek Chat](https://chat.deepseek.com)
2. Use "Try Now" demo (registration currently paused)

Note: Limited functionality vs local installation

Code Implementation Guide 🐍

# 1. Environment Setup (the deepseek_vl2 package is installed from the
#    GitHub repo, not PyPI)
pip install torch==2.1.0 transformers==4.35.0
git clone https://github.com/deepseek-ai/DeepSeek-VL2
cd DeepSeek-VL2 && pip install -e .

# 2. Basic Image Analysis
from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM

model_path = "deepseek-ai/deepseek-vl2-tiny"
model = DeepseekVLV2ForCausalLM.from_pretrained(model_path)
processor = DeepseekVLV2Processor.from_pretrained(model_path)

# 3. Object Localization Example
conversation = [
    {
        "role": "<|User|>",
        "content": "<image>\n<|ref|>Identify all vehicles<|/ref|>",
        "images": ["traffic_scene.jpg"]
    },
    {"role": "<|Assistant|>", "content": ""}
]
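
For localization prompts, the model replies with coordinates wrapped in detection tokens. A minimal parsing sketch, assuming the output follows the <|ref|>…<|/ref|><|det|>[[x1, y1, x2, y2]]<|/det|> convention shown in the DeepSeek-VL2 repository (coordinates are model-normalized, not pixels):

import re

def parse_detections(answer: str):
    """Extract (label, [x1, y1, x2, y2]) pairs from a grounded response."""
    pattern = r"<\|ref\|>(.*?)<\|/ref\|><\|det\|>\[\[(.*?)\]\]<\|/det\|>"
    boxes = []
    for label, coords in re.findall(pattern, answer):
        boxes.append((label.strip(), [int(v) for v in coords.split(",")]))
    return boxes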

Running DeepSeek-VL2 with Python

To use DeepSeek-VL2 in a Python environment, follow these steps in order:

Install Dependencies (from the cloned repository root):

pip install -e .

Import Required Libraries:

import torch
from transformers import AutoModelForCausalLM
from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
from deepseek_vl2.utils.io import load_pil_images

Specify Model Path:

model_path = "deepseek-ai/deepseek-vl2-tiny"

Load Processor and Model:

vl_chat_processor = DeepseekVLV2Processor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

Prepare Inputs:

conversation = [
    {"role": "<|User|>", "content": "<image>\n<|ref|>The giraffe at the back.<|/ref|>", "images": ["./images/visual_grounding_1.jpeg"]},
    {"role": "<|Assistant|>", "content": ""},
]
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True,
    system_prompt=""
).to(vl_gpt.device)

Generate Response:

with torch.no_grad():
    inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
    outputs = vl_gpt.language.generate(
        inputs_embeds=inputs_embeds,
        attention_mask=prepare_inputs.attention_mask,
        pad_token_id=tokenizer.eos_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=512,
        do_sample=False,
        use_cache=True
    )
answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=False)
print(f"{prepare_inputs['sft_format'][0]}", answer)
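
For machines without an NVIDIA GPU (see the FAQ below), a hedged variant of the loading step keeps the model on CPU. This is a sketch, not an officially documented configuration, and generation will be slow:

import torch
from transformers import AutoModelForCausalLM

# CPU-only variant of the loading step above (assumption: the model's
# remote code runs on CPU). float32 is used because bfloat16 CPU support
# varies across PyTorch builds.
vl_gpt = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-vl2-tiny", trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.float32).eval()  # no .cuda() call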

Key Features

  • Vision-Language Model: DeepSeek-VL2 processes and understands both visual and textual data.
  • Mixture-of-Experts: The model architecture enhances its ability to handle complex tasks.
  • Object Localization: It identifies and locates specific objects within images using special tokens like <|ref|> and <|/ref|>.
  • Grounded Captioning: Using the <|grounding|> token at the beginning of the prompt enables object localization and reasoning.
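
As a concrete illustration of the <|grounding|> token, here is a prompt sketch in the same conversation format used below (the image path is a placeholder):

# Grounded captioning: <|grounding|> asks the model to tie the objects it
# describes back to image coordinates.
conversation = [
    {
        "role": "<|User|>",
        "content": "<image>\n<|grounding|>Describe the content of the image.",
        "images": ["./images/visual_grounding_1.jpeg"],
    },
    {"role": "<|Assistant|>", "content": ""},
]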

Example Usage

Single Image Conversation

conversation = [
    {"role": "<|User|>", "content": "<image>\n<|ref|>The giraffe at the back.<|/ref|>", "images": ["./images/visual_grounding_1.jpeg"]},
    {"role": "<|Assistant|>", "content": ""},
]

Multiple Images/Interleaved Image-Text Conversation

conversation = [
    {
        "role": "<|User|>",
        "content": "This is image_1: <image>\n"
                   "This is image_2: <image>\n"
                   "This is image_3: <image>\n"
                   "Can you tell me what are in the images?",
        "images": [
            "images/multi_image_1.jpeg",
            "images/multi_image_2.jpeg",
            "images/multi_image_3.jpeg",
        ],
    },
    {"role": "<|Assistant|>", "content": "..."}
]

Additional Tips

  • Model Selection: Choose the DeepSeek-VL2 variant (tiny, small, or full) that matches your GPU memory and performance needs.

Gradio Demo: To run a Gradio demo, install the required dependencies and use:

pip install -e .[gradio]

Inference: Run inference using:

CUDA_VISIBLE_DEVICES=0 python inference.py --model_path "deepseek-ai/deepseek-vl2"

Pro Tips & Optimization 🔧

Performance Boost

# Enable Flash Attention 2 (flag support depends on your inference.py version;
# check `python inference.py --help`)
CUDA_VISIBLE_DEVICES=0 python inference.py --use_flash_attn 2
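
Alternatively, recent versions of transformers (4.36+) accept an attn_implementation keyword at load time. A hedged sketch; it requires the flash-attn package and an Ampere-or-newer GPU, and whether the model's remote code honors it should be verified:

import torch
from transformers import AutoModelForCausalLM

# Ask transformers for FlashAttention-2 when loading the model.
# Requires `pip install flash-attn`; support depends on the model code.
vl_gpt = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-vl2-tiny",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)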

Hardware Optimization

| Resource | Optimization Strategy |
|----------|-----------------------|
| GPU VRAM | Use --precision bfloat16 |
| CPU | Enable OpenMP threading |
| Storage | Load models from an NVMe SSD for faster startup |
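
For the CPU row above, PyTorch's intra-op thread pool (OpenMP on most builds) is the main knob. A small, general-purpose sketch, not a DeepSeek-specific switch:

import os
import torch

# Use all available cores for intra-op parallelism.
torch.set_num_threads(os.cpu_count() or 1)
print("intra-op threads:", torch.get_num_threads())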

FAQs ❓

Q: Can I run this without GPU?
A: Yes, in CPU-only mode (see the CPU loading sketch above), but expect generation to be roughly 10x slower

Q: Model size options?
A: Three variants: deepseek-vl2-tiny (~3B total / 1.0B activated parameters), deepseek-vl2-small (~16B total / 2.8B activated), and deepseek-vl2 (~27B total / 4.5B activated)

Q: Commercial use?
A: Check DeepSeek's licensing terms for your use case
