# Finetune Swiss-AI's Apertus with Axolotl
Apertus is a family of open-source models trained by Swiss AI.

This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.
## Getting started
- Install Axolotl following the installation guide. You need to install from `main`, as Apertus support is currently only in nightly, or use our latest Docker images.

  Here is an example of how to install from `main` with pip:

  ```bash
  # Ensure you have PyTorch installed (PyTorch 2.6.0 minimum)
  git clone https://github.com/axolotl-ai-cloud/axolotl.git
  cd axolotl
  uv pip install --no-build-isolation -e '.[flash-attn]'

  # Install CCE: https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
  python scripts/cutcrossentropy_install.py | sh
  ```
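The comment above assumes PyTorch 2.6.0 or newer is already installed. If you script your setup, a small version guard can enforce that minimum; `version_ge` is a hypothetical helper name, not part of Axolotl:

```shell
# Hypothetical helper (not part of Axolotl): succeeds when dotted
# version $1 is >= $2, by letting GNU `sort -V` order the two strings.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Guard the PyTorch minimum before installing Axolotl. Local build
# suffixes such as "+cu124" are stripped before comparing.
torch_version=$(python -c 'import torch; print(torch.__version__)' 2>/dev/null)
if version_ge "${torch_version%%+*}" "2.6.0"; then
  echo "PyTorch ${torch_version} meets the 2.6.0 minimum"
else
  echo "PyTorch >= 2.6.0 required (found: ${torch_version:-none})" >&2
fi
```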
- (Optional, highly recommended) Install the XIELU CUDA kernel:

  ```bash
  # Recommended for reduced VRAM usage and faster speeds
  # Point CUDA_HOME to your CUDA toolkit directory.
  # If you use our Docker image, the path below is correct.
  export CUDA_HOME=/usr/local/cuda

  uv pip install git+https://github.com/nickjbrowning/XIELU@59d6031 --no-build-isolation --no-deps
  ```

  For any installation errors, see the XIELU Installation Issues section below.
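Before building the kernel, it can save a failed compile to confirm that `CUDA_HOME` actually points at a toolkit. A small pre-flight check (the `check_cuda_home` helper is illustrative, not part of XIELU or Axolotl):

```shell
# Illustrative pre-flight check: verify that the directory in CUDA_HOME
# exists and contains the nvcc compiler before attempting the build.
check_cuda_home() {
  if [ -z "$1" ]; then
    echo "CUDA_HOME is not set" >&2
    return 1
  fi
  if [ ! -x "$1/bin/nvcc" ]; then
    echo "no nvcc found under $1/bin" >&2
    return 1
  fi
  echo "CUDA toolkit found at $1"
}

check_cuda_home "$CUDA_HOME" || echo "fix CUDA_HOME before installing XIELU" >&2
```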
- Run the finetuning example:

  ```bash
  axolotl train examples/apertus/apertus-8b-qlora.yaml
  ```

  This config uses about 8.7 GiB of VRAM.
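For orientation, the QLoRA-related keys that the Tips section refers to look roughly like this inside an Axolotl config. This is an illustrative fragment only; the shipped `examples/apertus/apertus-8b-qlora.yaml` is authoritative:

```yaml
# Illustrative fragment, not the real example file.
adapter: qlora        # remove this line for full-parameter fine-tuning
load_in_4bit: true    # remove this as well when dropping QLoRA
```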
Let us know how it goes. Happy finetuning! 🚀
## Tips
- For inference, the official Apertus team recommends `top_p=0.9` and `temperature=0.8`.
- You can instead use full-parameter fine-tuning by removing `adapter: qlora` and `load_in_4bit: true` from the config.
- Read more on how to load your own dataset in the docs.
- The dataset format follows the OpenAI Messages format as seen here.
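To make the last two tips concrete: in the OpenAI Messages format, each dataset row is a JSON object with a `messages` list, one object per line in a `.jsonl` file. The conversation below is invented example data:

```json
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is Apertus?"}, {"role": "assistant", "content": "Apertus is an open-source model family trained by Swiss AI."}]}
```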
## XIELU Installation Issues
### `ModuleNotFoundError: No module named 'torch'`
Please check these one by one:

- You are running in the correct environment
- The environment has PyTorch installed
- The CUDA toolkit is at `CUDA_HOME`
If those didn't help, please try the solutions below:
- Pass the Python executable through to CMake and try the install again:

  ```bash
  Python_EXECUTABLE=$(which python) uv pip install git+https://github.com/nickjbrowning/XIELU@59d6031 --no-build-isolation --no-deps
  ```
- Git clone the repo and hardcode the Python path manually:

  ```bash
  git clone https://github.com/nickjbrowning/XIELU
  cd XIELU
  git checkout 59d6031
  nano CMakeLists.txt  # or vi, depending on your preference
  ```

  Replace `${Python_EXECUTABLE}` with the absolute path to your Python interpreter:

  ```diff
   execute_process(
  -    COMMAND ${Python_EXECUTABLE} -c "import torch.utils; print(torch.utils.cmake_prefix_path)"
  +    COMMAND /root/miniconda3/envs/py3.11/bin/python -c "import torch.utils; print(torch.utils.cmake_prefix_path)"
       RESULT_VARIABLE TORCH_CMAKE_PATH_RESULT
       OUTPUT_VARIABLE TORCH_CMAKE_PATH_OUTPUT
       ERROR_VARIABLE TORCH_CMAKE_PATH_ERROR
   )
  ```

  Then install:

  ```bash
  uv pip install . --no-build-isolation --no-deps
  ```
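As an alternative to editing `CMakeLists.txt` by hand, the same substitution can be scripted with `sed`. The demo below runs the substitution on a sample line; `/usr/bin/python3` is a placeholder, so point `PYBIN` at your own interpreter and then apply the same `sed` to the real `CMakeLists.txt` in the XIELU checkout:

```shell
# Demo of the CMakeLists.txt substitution on a sample line.
# PYBIN is a placeholder path; set it to your real interpreter.
PYBIN=${PYBIN:-/usr/bin/python3}
line='COMMAND ${Python_EXECUTABLE} -c "import torch.utils; print(torch.utils.cmake_prefix_path)"'
echo "$line" | sed "s|\${Python_EXECUTABLE}|${PYBIN}|"

# Against the real file (run inside the XIELU checkout):
#   sed -i.bak "s|\${Python_EXECUTABLE}|${PYBIN}|g" CMakeLists.txt
```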