Setting Up AutoCodeRover on Linux: A Comprehensive Guide
AutoCodeRover is an AI-powered developer assistant that streamlines software development by automating bug fixes, feature implementations, and code optimizations. Designed for developers working with complex systems, it supports three key workflows:
- GitHub Issue Resolution
- Local Repository Debugging
- SWE-Bench Task Automation
This comprehensive guide walks through Linux installation methods, configuration best practices, and real-world use cases.
Prerequisites
Before installation, ensure your system meets these requirements:
- Operating System: Ubuntu 20.04+/Debian 10+ or equivalent Linux distro
- Hardware:
- 8GB+ RAM (16GB recommended for large codebases)
- 20GB+ free disk space
- Software:
- conda (Miniconda3 or Anaconda) or Docker Engine 24.0+
- Python 3.10+
- API Keys:
- OpenAI API Key (required)
- Anthropic/Groq API keys (optional for alternative AI models)
Installation Methods
There are two primary methods to set up AutoCodeRover on Linux: using Docker or a local installation with conda.
1. Docker Installation
Using Docker is the recommended method for running AutoCodeRover, as it simplifies dependency management and ensures a consistent environment.
Steps:
- Build the Docker image:
docker build -f Dockerfile.minimal -t acr .
- Set up API keys: Obtain an OpenAI API key and set it as an environment variable:
export OPENAI_KEY=sk-YOUR-OPENAI-API-KEY-HERE
For Anthropic models, set the Anthropic API key:
export ANTHROPIC_API_KEY=sk-ant-api...
Similarly, set the Groq API key if you plan to use Groq models.
- Run the Docker container:
docker run -it -e OPENAI_KEY="${OPENAI_KEY:-OPENAI_API_KEY}" acr
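The docker run command above passes the key using standard shell parameter expansion: ${VAR:-default} resolves to the variable's value when it is set, and to the literal fallback string otherwise. A minimal sketch with a hypothetical DEMO_KEY variable:

```shell
# ${VAR:-default} expands to $VAR when set and non-empty,
# otherwise to the literal fallback string.
unset DEMO_KEY
FALLBACK="${DEMO_KEY:-no-key-set}"   # DEMO_KEY unset -> fallback used
export DEMO_KEY=sk-demo
RESOLVED="${DEMO_KEY:-no-key-set}"   # DEMO_KEY set -> its value used
echo "$FALLBACK $RESOLVED"
```

So the container receives your real OPENAI_KEY when it is exported, and the literal placeholder string otherwise.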
2. Local Installation
Alternatively, you can set up AutoCodeRover locally by managing Python dependencies with the provided environment.yml. This method is recommended for SWE-bench experiments.
Steps:
- Create the conda environment:
conda env create -f environment.yml
- Activate the conda environment:
conda activate auto-code-rover
- Set up API keys: Set the OpenAI or Anthropic API key in your shell before running AutoCodeRover.
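Before launching a run, it can help to verify the interpreter version and key exports up front. A small sketch (this preflight helper is hypothetical, not part of AutoCodeRover):

```python
import os
import sys

def preflight(env=None, version=None):
    """Return a list of setup problems; an empty list means ready to run."""
    env = os.environ if env is None else env
    version = sys.version_info[:2] if version is None else version
    problems = []
    if version < (3, 10):  # this guide requires Python 3.10+
        problems.append("Python 3.10+ required")
    if not any(env.get(k) for k in ("OPENAI_KEY", "ANTHROPIC_API_KEY")):
        problems.append("export OPENAI_KEY or ANTHROPIC_API_KEY first")
    return problems
```

Running it inside the activated environment surfaces missing keys before AutoCodeRover fails mid-run.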
Running AutoCodeRover
AutoCodeRover can be run in three modes: GitHub issue mode, local issue mode, and SWE-bench mode.
1. GitHub Issue Mode
This mode allows you to run AutoCodeRover on live GitHub issues by providing a link to the issue page.
Steps:
- Prepare the necessary information:
- Link to clone the project (git clone ...).
- Commit hash of the project version for AutoCodeRover to work on (git checkout ...).
- Link to the GitHub issue page.
- Replace <task id> with a string to identify the issue.
- If patch generation is successful, the path to the generated patch will be written to a file named selected_patch.json in the output directory.
- Run the following commands in the Docker container (or your local copy of AutoCodeRover):
cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py github-issue --output-dir output --setup-dir setup --model gpt-4o-2024-05-13 --model-temperature 0.2 --task-id <task id> --clone-link <link for cloning the project> --commit-hash <any version that has the issue> --issue-link <link to issue page>
Example:
PYTHONPATH=. python app/main.py github-issue --output-dir output --setup-dir setup --model gpt-4o-2024-05-13 --model-temperature 0.2 --task-id langchain-20453 --clone-link https://github.com/langchain-ai/langchain.git --commit-hash cb6e5e5 --issue-link https://github.com/langchain-ai/langchain/issues/20453
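When scripting runs over many issues, the long invocation above can be assembled programmatically. A sketch (the helper name is hypothetical; the flags mirror the command in this guide):

```python
def github_issue_cmd(task_id, clone_link, commit_hash, issue_link,
                     model="gpt-4o-2024-05-13", temperature=0.2):
    """Build the argv list for AutoCodeRover's github-issue mode."""
    return [
        "python", "app/main.py", "github-issue",
        "--output-dir", "output",
        "--setup-dir", "setup",
        "--model", model,
        "--model-temperature", str(temperature),
        "--task-id", task_id,
        "--clone-link", clone_link,
        "--commit-hash", commit_hash,
        "--issue-link", issue_link,
    ]

# Reproduces the langchain example above.
cmd = github_issue_cmd(
    "langchain-20453",
    "https://github.com/langchain-ai/langchain.git",
    "cb6e5e5",
    "https://github.com/langchain-ai/langchain/issues/20453",
)
```

The resulting list can be passed to subprocess.run with PYTHONPATH set to the repository root, matching the shell command shown above.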
2. Local Issue Mode
This mode allows you to run AutoCodeRover on a local repository and a file containing the issue description.
Steps:
- Prepare the local repository and issue:
- Prepare a local codebase.
- Write an issue description into a file.
- If patch generation is successful, the path to the generated patch will be written to a file named selected_patch.json in the output directory.
- Run the following commands:
cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py local-issue --output-dir output --model gpt-4o-2024-05-13 --model-temperature 0.2 --task-id <task id> --local-repo <path to the local project repository> --issue-file <path to the file containing issue description>
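After a run completes, the output directory can be checked for selected_patch.json. A minimal sketch (the JSON field layout is not specified here, so this helper just returns whatever the file contains):

```python
import json
from pathlib import Path

def find_selected_patch(output_dir):
    """Return the parsed selected_patch.json from output_dir, or None if absent."""
    meta = Path(output_dir) / "selected_patch.json"
    if not meta.is_file():
        return None  # patch generation did not succeed (or wrong directory)
    return json.loads(meta.read_text())
```

A None result is a quick signal that patch generation failed for that task.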
3. SWE-bench Mode
To run AutoCodeRover on SWE-bench task instances, a local setup of AutoCodeRover is recommended. Further details on this mode can be found in the AutoCodeRover documentation.
Enhancing AutoCodeRover with Test Cases
AutoCodeRover can resolve more issues if test cases are available. For example, refer to the provided video acr_enhancement-final.mp4 for a demonstration.
Troubleshooting Common Issues
Symptom | Solution
---|---
Docker build fails | Ensure the Dockerfile uses FROM python:3.10-slim
ModuleNotFoundError | Run conda env update -f environment.yml
API key rejected | Verify key validity in the OpenAI dashboard
Patch generation fails | Increase --model-temperature to 0.3-0.5
Performance Optimization Tips
- GPU Acceleration: Add the --gpus all flag to docker run for faster LLM inference
- Model Selection:
- Speed: gpt-3.5-turbo
- Accuracy: claude-3-opus-20240229
- Cache Management: Clean ~/.cache/acr periodically
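A quick way to reclaim that cache space (the ACR_CACHE_DIR override here is purely illustrative; the default path is the one named above):

```shell
# Inspect and remove the AutoCodeRover cache directory.
CACHE_DIR="${ACR_CACHE_DIR:-$HOME/.cache/acr}"
mkdir -p "$CACHE_DIR"      # ensure it exists so du below succeeds (demo only)
du -sh "$CACHE_DIR"        # check how much space it currently occupies
rm -rf "$CACHE_DIR"        # delete it to reclaim the space
```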
Conclusion
AutoCodeRover significantly enhances developer productivity by automating routine coding tasks. Setting up AutoCodeRover on Linux involves a few straightforward steps, whether you choose to use Docker or a local installation.