monkeypatch.unsloth_
Module for patching with Unsloth optimizations.
Make sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:
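A minimal sketch of such an install (the `flash-attn` extra and the use of `uv` mirror the clone-and-install commands elsewhere in these docs):

```shell
# from the root of the axolotl repo
uv pip install --no-build-isolation -e '.[flash-attn]'
```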
If you are developing on a remote host, you can easily use VS Code to debug remotely. To do so, follow this Remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.
If you already have Axolotl cloned on your host, make sure you have the latest changes and change into the root of the project. Next, run the desired Docker image and mount the current directory. Below is a Docker command you can run to do this:
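For example (a sketch; the image tag and the mount path inside the container are assumptions, so pick a tag from the Docker section below):

```shell
docker run --gpus '"all"' --rm -it \
  -v "${PWD}":/workspace/axolotl \
  axolotlai/axolotl:main-latest
```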
[!Tip] To understand which containers are available, see the Docker section of the README and the Docker Hub repo. For details of how the Docker containers are built, see Axolotl’s Docker CI builds.
You will now be in the container. Next, install Axolotl with dev dependencies:
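One way this is commonly done (the `dev` extra name is an assumption; check the project's optional extras for the exact name):

```shell
uv pip install --no-build-isolation -e '.[dev]'
```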
+This section describes the different Docker images that are released by AxolotlAI at Docker Hub.
For Blackwell GPUs, please use the tags with PyTorch 2.9.1 and CUDA 12.8.
Each image below is available in a uv variant that uses uv with a relocatable venv (/workspace/axolotl-venv) instead of Miniconda + pip. Append -uv to the image name (e.g. axolotlai/axolotl-base-uv). Tags follow the same format. We recommend the uv images for new deployments.
The base image is the most minimal image that can install Axolotl. It is based on the nvidia/cuda image. It includes python, torch, git, git-lfs, awscli, pydantic, and more.
axolotlai/axolotl-base
| Variant | Image | Docker Hub |
|---|---|---|
| pip | axolotlai/axolotl-base | Link |
| uv | axolotlai/axolotl-base-uv | Link |
Tags examples:
- main-base-py3.11-cu128-2.8.0
- main-base-py3.11-cu128-2.9.1
- main-base-py3.12-cu128-2.10.0
- main-base-py3.12-cu130-2.9.1
- main-base-py3.12-cu130-2.10.0

The main image is the image used to run Axolotl. It is based on the axolotlai/axolotl-base image and includes the Axolotl codebase, dependencies, and more.
axolotlai/axolotl
| Variant | Image | Docker Hub |
|---|---|---|
| pip | axolotlai/axolotl | Link |
| uv | axolotlai/axolotl-uv | Link |
Tags examples:
- main-py3.11-cu128-2.8.0
- main-py3.11-cu128-2.9.1
- main-py3.12-cu128-2.10.0
- main-py3.12-cu130-2.9.1
- main-py3.12-cu130-2.10.0
- main-latest
- main-20250303-py3.11-cu124-2.6.0
- main-20250303-py3.11-cu126-2.6.0
- main-20260315-py3.11-cu128-2.9.1
- 0.12.0

axolotlai/axolotl-cloud
| Variant | Image | Docker Hub |
|---|---|---|
| pip | axolotlai/axolotl-cloud | Link |
| uv | axolotlai/axolotl-cloud-uv | Link |
Please make sure to have PyTorch installed before installing Axolotl in your local environment.
Follow the instructions at: https://pytorch.org/get-started/locally/

For Blackwell GPUs, please use PyTorch 2.9.1 and CUDA 12.8.
We use --no-build-isolation in order to detect the installed PyTorch version (if installed) so as not to clobber it, and so that we set the correct versions of dependencies that are specific to the PyTorch version or other installed co-dependencies.
Axolotl uses uv as its package manager. uv is a fast, reliable Python package installer and resolver built in Rust.
Install uv if not already installed:

Choose your CUDA version (e.g. cu128, cu130), create a venv, and install:
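Putting those steps together (versions and flags here mirror the quickstart commands elsewhere in these docs):

```shell
# install uv, then restart your shell
curl -LsSf https://astral.sh/uv/install.sh | sh

# pick the CUDA build for your system
export UV_TORCH_BACKEND=cu128  # or cu130

# create and activate a virtual environment
uv venv --python 3.12
source .venv/bin/activate

# install PyTorch, then Axolotl
uv pip install torch==2.10.0 torchvision
uv pip install --no-build-isolation 'axolotl[deepspeed]'
```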
For the latest features between releases:
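A sketch of installing straight from the main branch (the repo URL appears elsewhere in these docs; the `deepspeed` extra is an assumption):

```shell
uv pip install --no-build-isolation \
  'axolotl[deepspeed] @ git+https://github.com/axolotl-ai-cloud/axolotl.git@main'
```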
For development with Docker:
For Blackwell GPUs, please use axolotlai/axolotl-uv:main-py3.11-cu128-2.9.1 or the cloud variant axolotlai/axolotl-cloud-uv:main-py3.11-cu128-2.9.1.
Please refer to the Docker documentation for more information on the different Docker images that are available.
Important

For providers supporting Docker:
axolotlai/axolotl-cloud-uv:main-latest

See Section 7 for Mac-specific issues.
Install Python ≥3.11
Install PyTorch: https://pytorch.org/get-started/locally/
Install Axolotl:
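For example (a sketch; the `flash-attn` and `deepspeed` extras are optional and taken from the other install examples in these docs):

```shell
pip install --no-build-isolation 'axolotl[flash-attn,deepspeed]'
```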
(Optional) Login to Hugging Face:
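This can be done with the Hugging Face CLI (installed as part of `huggingface_hub`):

```shell
huggingface-cli login
```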
If you have an existing pip-based Axolotl installation, you can migrate to uv:

```shell
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env

# Create a fresh venv (recommended for a clean start)
export UV_TORCH_BACKEND=cu128  # or cu130
uv venv --no-project --relocatable
source .venv/bin/activate

# Reinstall axolotl
uv pip install --no-build-isolation 'axolotl[flash-attn,deepspeed]'
```

If you are unable to install uv, you can still use pip directly.
Please make sure to have PyTorch installed before installing Axolotl with pip. Follow the instructions at: https://pytorch.org/get-started/locally/

For editable/development installs:
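A sketch of such an install (the `flash-attn` extra mirrors the clone-and-install examples elsewhere in these docs):

```shell
git clone https://github.com/axolotl-ai-cloud/axolotl.git
cd axolotl
pip install --no-build-isolation -e '.[flash-attn]'
```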
If you encounter installation issues, see our FAQ and Debugging Guide.
Install Axolotl following the installation guide.
Here is an example of how to install from pip:
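A minimal sketch (extras omitted for brevity; add them as needed):

```shell
pip install --no-build-isolation axolotl
```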
Run one of the finetuning examples below.
LFM2
# FFT SFT (1x48GB @ 25GiB)
LFM2-MoE
TIPS
Installation Error: If you encounter ImportError: ... undefined symbol ... or ModuleNotFoundError: No module named 'causal_conv1d_cuda', the causal-conv1d package may have been installed incorrectly. Try uninstalling it:
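For example (assuming it was installed with pip):

```shell
pip uninstall -y causal-conv1d
```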
Dataset Loading: Read more on how to load your own dataset in our documentation.
Dataset Formats:
```shell
git clone https://github.com/axolotl-ai-cloud/axolotl.git
cd axolotl
uv pip install --no-build-isolation -e '.[flash-attn]'

# Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
python scripts/cutcrossentropy_install.py | sh
```

For any installation errors, see XIELU Installation Issues.
If those didn’t help, please try the below solutions:
Pass env for CMAKE and try the install again:

Git clone the repo and manually hardcode the python path:
Here is an example of how to install from pip:
{output_dir}/merged.
GPT-OSS support in vLLM does not exist in a stable release yet. See https://x.com/MaziyarPanahi/status/1955741905515323425 for more information about using a special vllm-openai Docker image for inference with vLLM.
Optionally, vLLM can be installed from nightly:
and the vLLM server can be started with the following command (modify --tensor-parallel-size 8 to match your environment):
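A sketch of such a command (the model ID is a placeholder assumption; substitute your own):

```shell
vllm serve openai/gpt-oss-120b --tensor-parallel-size 8
```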
Install Axolotl following the installation guide.
Install timm for vision model support:
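For example:

```shell
uv pip install timm
```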
Install Cut Cross Entropy to reduce training VRAM usage.
Run the finetuning example:
Here is an example of how to install from pip:
Install the required vision lib:
```shell
uv pip install 'mistral-common[opencv]==1.8.5'
```
Download the example dataset image:
Run the fine-tuning:
Install the required vision lib:
```shell
uv pip install 'mistral-common[opencv]==1.8.6'
```
Download the example dataset image:
Run the fine-tuning:
Install the required vision lib:
```shell
uv pip install 'mistral-common[opencv]==1.8.5'
```
Download the example dataset image:
Run the fine-tuning:
Install Cut Cross Entropy to reduce training VRAM usage.
Install FLA for improved performance:
Install Axolotl following the installation guide.
Here is an example of how to install from pip:
Run the finetuning example:
Install Axolotl following the installation guide.
Here is an example of how to install from pip:
Install an extra dependency:
Run the finetuning example:
Here is an example of how to install from pip:
Unsloth provides hand-written optimized kernels for LLM finetuning that slightly improve speed and VRAM over standard industry baselines.

Due to breaking changes in transformers v4.48.0, users will need to downgrade to <=v4.47.1 to use this patch.

This will later be deprecated in favor of LoRA Optimizations.

The following will install the correct unsloth and extras from source.

Axolotl exposes a few configuration options to try out unsloth and get most of the performance gains.

Our unsloth integration is currently limited to the following model architectures:

- llama

These options are specific to LoRA finetuning and cannot be used for multi-GPU finetuning.

These options are composable and can be used with multi-GPU finetuning.
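As a sketch, the two groups of options look like this in a config (option names are taken from Axolotl's unsloth integration; verify the exact keys against your installed version):

```yaml
# LoRA-only options, single-GPU finetuning only (assumed names)
unsloth_lora_mlp: true
unsloth_lora_qkv: true
unsloth_lora_o: true

# composable options, usable with multi-GPU finetuning (assumed names)
unsloth_cross_entropy_loss: true
unsloth_rms_norm: true
unsloth_rope: true
```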
```shell
# install uv if you don't already have it installed (restart shell after)
curl -LsSf https://astral.sh/uv/install.sh | sh

# change depending on system
export UV_TORCH_BACKEND=cu128

# create a new virtual environment
uv venv --python 3.12
source .venv/bin/activate

uv pip install torch==2.10.0 torchvision
uv pip install --no-build-isolation 'axolotl[deepspeed]'

# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs  # OPTIONAL
```

Installing with Docker can be less error prone than installing in your own environment.
Other installation approaches are described here.
That’s it! Check out our Getting Started Guide for a more detailed walkthrough.
Axolotl ships with built-in documentation optimized for AI coding agents (Claude Code, Cursor, Copilot, etc.). These docs are bundled with the pip package — no repo clone needed.
```shell
# Show overview and available training methods
axolotl agent-docs

# Topic-specific references
axolotl agent-docs sft                # supervised fine-tuning
axolotl agent-docs grpo               # GRPO online RL
axolotl agent-docs preference_tuning  # DPO, KTO, ORPO, SimPO
axolotl agent-docs reward_modelling   # outcome and process reward models
axolotl agent-docs pretraining        # continual pretraining
axolotl agent-docs --list             # list all topics

# Dump config schema for programmatic use
axolotl config-schema
axolotl config-schema --field adapter
```
+axolotl config-schema --field adapterIf you’re working with the source repo, agent docs are also available at docs/agents/ and the project overview is in AGENTS.md.
If you use Axolotl in your research or projects, please cite it as follows: