Axolotl Setup — miaai (RTX 5080, CUDA 13.2)

System Info

  • GPU: NVIDIA RTX 5080 (16GB VRAM, sm_120 / Blackwell)
  • Driver: 580.126.09 — max CUDA 13.0 shown by nvidia-smi, but nvcc from conda is 13.2
  • OS: Ubuntu 25.10 (Python 3.13 system — do NOT use system Python for ML)
  • Axolotl repo: /home/tocmo0nlord/axolotl (branch: activeblue/main)
  • Conda env: axolotl at /opt/miniconda3/envs/axolotl

Pre-Training Checklist (every time)

Before starting a training run, verify these:

# 1. Stop Ollama — if a request hits it mid-training it will compete for VRAM
sudo systemctl stop ollama

# 2. Activate conda env
export PATH="/opt/miniconda3/bin:$PATH"
conda activate axolotl

# 3. Set env vars
export CUDA_HOME=$CONDA_PREFIX
export PATH=$CUDA_HOME/bin:$PATH
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# 4. Confirm GPU is clear (should show no processes)
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

# 5. Go to axolotl directory
cd /home/tocmo0nlord/axolotl

Run Training

axolotl train ~/human_chat_qlora.yml
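
Training takes a few hours on this card (see the metrics below), so it can help to detach the run from the terminal and watch VRAM from a second shell. A minimal sketch; the log path is just an example:

# run detached so a dropped session doesn't kill training
nohup axolotl train ~/human_chat_qlora.yml > ~/train.log 2>&1 &
tail -f ~/train.log

# in another terminal: VRAM should hover around 10-11 GB
watch -n 5 nvidia-smi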

After Training

# Restart Ollama
sudo systemctl start ollama

# Test the adapter
axolotl inference ~/human_chat_qlora.yml \
  --lora-model-dir ~/outputs/llama31-8b-humanchat \
  --prompter chat

# (Optional) Merge adapter into base model
axolotl merge-lora ~/human_chat_qlora.yml

One-time Setup (fresh machine only)

1. Install Miniconda

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
bash miniconda.sh -b -p /opt/miniconda3
/opt/miniconda3/bin/conda init bash
source ~/.bashrc

2. Create Python 3.11 environment

conda create -n axolotl python=3.11 -y
conda activate axolotl

3. Clone axolotl repo

git clone https://git.activeblue.net/tocmo0nlord/axolotl.git /home/tocmo0nlord/axolotl
cd /home/tocmo0nlord/axolotl
git remote add upstream https://github.com/axolotl-ai-cloud/axolotl.git
git fetch upstream
git rebase upstream/main        # keeps activeblue patches on top
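
To confirm the local patches ended up on top after the rebase, a quick look at recent history helps:

git log --oneline -5    # activeblue commits should sit above the upstream ones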

4. Install CUDA toolkit (needed to compile flash-attn and bitsandbytes)

conda install -y -c "nvidia/label/cuda-12.8.0" cuda-toolkit
export CUDA_HOME=$CONDA_PREFIX
export PATH=$CUDA_HOME/bin:$PATH

NOTE: Despite installing from the cuda-12.8.0 channel, conda resolves nvcc to 13.2.78. This is fine — use cu132 everywhere to match.
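
Quick sanity check that the shell resolves nvcc from the conda env and reports the version this guide assumes:

which nvcc                       # expect /opt/miniconda3/envs/axolotl/bin/nvcc
nvcc --version | grep release    # expect: release 13.2, V13.2.78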

5. Install PyTorch — use cu132 (matches nvcc from conda)

# torchaudio has no cu132 wheel — skip it, not needed for LLM training
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu132
python -c "import torch; print('CUDA:', torch.version.cuda); print('GPU:', torch.cuda.get_device_name(0))"

6. Install Axolotl

cd /home/tocmo0nlord/axolotl
pip install -e "."

7. Install flash-attn

Compiles CUDA kernels from source — takes 15–25 min on 10 cores of an i7-14700K.

MAX_JOBS=10 pip install flash-attn --no-build-isolation
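
Once the build finishes, a quick import check confirms the kernels load against the installed torch:

python -c "import flash_attn; print('flash_attn OK:', flash_attn.__version__)"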

8. Compile bitsandbytes from source for sm_120 (RTX 5080 / Blackwell)

Prebuilt wheels do not include sm_120, and CUDA 13.2 dropped sm_50–53. You must compile from source with a patched CMakeLists.txt.

# Clone bitsandbytes v0.49.1
git clone --branch v0.49.1 --depth 1 https://github.com/bitsandbytes-foundation/bitsandbytes.git /tmp/bnb_0491

# Patch CMakeLists.txt: insert sm_120 override before the foreach loop
# (cmake >= 3.23.0 uses its own built-in arch list which does not include sm_120)
sed -i '/    foreach(capability \${CMAKE_CUDA_ARCHITECTURES_ALL})/i\    # RTX 5080 sm_120 patch\n    set(CMAKE_CUDA_ARCHITECTURES_ALL 120)' /tmp/bnb_0491/CMakeLists.txt

# Verify patch landed correctly (should show the set() line immediately before foreach)
grep -n "ARCHITECTURES_ALL\|foreach" /tmp/bnb_0491/CMakeLists.txt | tail -5
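
For reference, the patched region should read like this (this is exactly what the sed command above inserts ahead of the existing foreach):

    # RTX 5080 sm_120 patch
    set(CMAKE_CUDA_ARCHITECTURES_ALL 120)
    foreach(capability ${CMAKE_CUDA_ARCHITECTURES_ALL})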

# Configure
cmake \
  -DCMAKE_CUDA_COMPILER=/opt/miniconda3/envs/axolotl/bin/nvcc \
  -DCOMPUTE_BACKEND=cuda \
  -S /tmp/bnb_0491 \
  -B /tmp/bnb_0491/build 2>&1 | grep -E "(Capabilit|CUDA Ver|Error)"
# Must show: CUDA Capabilities Selected: 120

# Build
cmake --build /tmp/bnb_0491/build -j10

# Install into conda site-packages
cp -r /tmp/bnb_0491/bitsandbytes /opt/miniconda3/envs/axolotl/lib/python3.11/site-packages/

# Verify
python3 -c "
import torch, bitsandbytes as bnb
x = torch.randn(64, 64, device='cuda')
l = bnb.nn.Linear8bitLt(64, 64).cuda()
print('bitsandbytes CUDA OK:', l(x).shape)
"

9. HuggingFace login (meta-llama is gated)

huggingface-cli login
# Paste your HF token when prompted
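
For non-interactive setups (scripts, provisioning), the CLI also takes the token as a flag; HF_TOKEN below is whatever variable you keep your token in:

huggingface-cli login --token "$HF_TOKEN"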

10. Verify everything is working

python3 -c "
import torch, bitsandbytes as bnb, flash_attn, transformers, axolotl
print('torch:', torch.__version__, '| CUDA:', torch.version.cuda)
print('bitsandbytes:', bnb.__version__)
print('flash_attn:', flash_attn.__version__)
print('transformers:', transformers.__version__)
print('GPU:', torch.cuda.get_device_name(0))
print('VRAM:', round(torch.cuda.get_device_properties(0).total_memory/1e9, 1), 'GB')
"

Training Config — human_chat_qlora.yml

Key settings tuned for RTX 5080 (16GB):

| Setting | Value | Notes |
| --- | --- | --- |
| `sequence_len` | 2048 | 4096 OOMs during loss computation (logits x 128k vocab) |
| `micro_batch_size` | 1 | Effective batch = micro x grad_accum = 8 |
| `gradient_accumulation_steps` | 8 | Keeps effective batch at 8 |
| `adapter` | qlora | 4-bit via bitsandbytes compiled from source |
| `attn_implementation` | flash_attention_2 | Not the deprecated `flash_attention: true` |
| `type` (datasets) | chat_template | Not the deprecated `sharegpt` |
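
Put together, the corresponding slice of human_chat_qlora.yml looks roughly like this. Only the keys from the table above are grounded in this doc; load_in_4bit is the usual companion to adapter: qlora and is assumed here, and the dataset path is elided:

sequence_len: 2048                       # 4096 OOMs during loss computation
micro_batch_size: 1                      # bump to 2 per the note below
gradient_accumulation_steps: 8           # effective batch = 1 x 8 = 8
adapter: qlora
load_in_4bit: true                       # assumed companion setting for qlora
attn_implementation: flash_attention_2   # not the deprecated flash_attention: true
datasets:
  - path: ...                            # your dataset here
    type: chat_template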

Expected training metrics (RTX 5080, ~65k samples, 2 epochs):

  • VRAM: ~10–11 GB active, ~11 GB allocated
  • Training duration: ~3.5 hours
  • Initial eval loss: ~0.81, perplexity ~2.25
  • Final loss target: ~0.55–0.60

To use more VRAM (~14 GB) and improve the gradient signal, raise micro_batch_size to 2 and lower gradient_accumulation_steps to 4, keeping the effective batch at 8.


Common Pitfalls

| Problem | Cause | Fix |
| --- | --- | --- |
| `externally-managed-environment` | System Python 3.13 blocks pip | Use the conda env, never system pip |
| `No module named torch` (flash-attn) | pip builds in an isolated env | Use `--no-build-isolation` |
| `CUDA_HOME` not set | CUDA toolkit not installed | `conda install cuda-toolkit` from the nvidia channel |
| CUDA version mismatch (13.2 vs 12.8) | Conda nvcc is 13.2, torch was cu128 | Reinstall torch with `--index-url .../cu132` |
| torchaudio not found for cu132 | No cu132 wheel exists | Skip torchaudio — not needed |
| flash-attn compile is slow | Single-threaded by default | Set `MAX_JOBS=<cpu_count>` before `pip install` |
| `nvcc fatal: Unsupported gpu architecture 'compute_50'` | bitsandbytes CMakeLists.txt hardcodes sm_50; CUDA 13.2 dropped it | Patch CMakeLists.txt (see step 8 above) |
| `CUDA Capabilities Selected: 50;52;...` (ignores `-D` flag) | cmake >= 3.23 built-in arch list lacks sm_120; CMakeLists.txt overrides `-D` | Insert `set(CMAKE_CUDA_ARCHITECTURES_ALL 120)` before the foreach loop |
| `BackendUnavailable: scikit_build_core` | `pip install` of bnb triggers a cmake rebuild | Copy the built package directly into site-packages instead |
| `torch.OutOfMemoryError` during eval | Logits tensor (batch x 4096 x 128k vocab) too large | Set `sequence_len: 2048`, `micro_batch_size: 1` |
| `type: sharegpt` deprecation warning | axolotl removed the sharegpt type | Use `type: chat_template` with field mappings |
| `flash_attention: true` deprecation | Old config key removed | Use `attn_implementation: flash_attention_2` |
| Capybara dataset `field_messages` null | Capybara uses input/output format, not conversations | Switch to SlimOrca or OpenHermes-2.5 |
| Ollama loads a model mid-training | Ollama is enabled and receives a request | `sudo systemctl stop ollama` before training |
| Training slower than expected (~3.5 h, not 19 min) | The fast it/s on screen is the eval loop, not training | Normal — training includes the backward pass and optimizer step |