monkeypatch.unsloth_
module for patching with unsloth optimizations

Setup

Make sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project.

Remote Hosts

If you are developing on a remote host, you can easily use VSCode to debug remotely. To do so, follow the remote SSH guide. You can also see the video below on Docker and Remote SSH debugging. Next, run the desired Docker image and mount the current directory. Below is a docker command you can run to do this:

[!Tip]
To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl's Docker CI builds.

Attach To Container

You will now be in the container. Next, install Axolotl with dev dependencies.

This section describes the different Docker images that are released by AxolotlAI at
Docker Hub. For Blackwell GPUs, please use the tags with PyTorch 2.9.1 and CUDA 12.8. Each image below is available in a uv variant that uses uv with a relocatable venv instead of Miniconda + pip.

The base image is the most minimal image that can install Axolotl. The main image is the image that is used to run Axolotl. Details for each image, including Docker Hub links and tag examples, are given below.

Local Installation

Please make sure to have PyTorch installed before installing Axolotl in your local environment. Follow the instructions at: https://pytorch.org/get-started/locally/. For Blackwell GPUs, please use PyTorch 2.9.1 and CUDA 12.8.

Axolotl uses uv as its package manager. uv is a fast, reliable Python package installer and resolver built in Rust. It offers significant performance improvements over pip and provides better dependency resolution, making it an excellent choice for complex environments.

- Install uv if not already installed.
- Choose your CUDA version to use with PyTorch (e.g. cu128).
- Install PyTorch.
- Install Axolotl from PyPI, or from source for the latest features between releases.

For development with Docker, please refer to the Docker documentation for more information on the different Docker images that are available. Cloud images are provided for providers supporting Docker. See the macOS section for Mac-specific issues.

- Install Python >=3.11
- Install PyTorch: https://pytorch.org/get-started/locally/
- Install Axolotl
- (Optional) Login to Hugging Face

If you have an existing pip-based Axolotl installation, you can migrate to uv. If you are unable to install uv, you can still use pip directly; please make sure to have PyTorch installed before installing Axolotl with pip. Editable/development installs are also supported. If you encounter installation issues, see our FAQ and Debugging Guide.

Install Axolotl following the installation guide, then run one of the finetuning examples below: LFM2, LFM2-MoE.

Troubleshooting:

- Installation errors: see the FAQ and the XIELU Installation Issues section. If those didn't help, pass env vars for CMAKE and try installing again, or git clone the repo and manually hardcode the python path.
- Dataset loading: read more on how to load your own dataset in our documentation, which also describes the supported dataset formats.
- See https://github.com/huggingface/transformers/pull/35834
monkeypatch.data.batch_dataset_fetcher
Monkey patches for the dataset fetcher to handle batches of packed indexes.

monkeypatch.mixtral
Patches to support multipack for mixtral

monkeypatch.gradient_checkpointing.offload_cpu
CPU offloaded checkpointing
diff --git a/docs/api/integrations.base.html b/docs/api/integrations.base.html
FSDP + QLoRA
monkeypatch.gradient_checkpointing.offload_disk
DISCO - DIsk-based Storage and Checkpointing with Optimized prefetching
diff --git a/docs/docker.html b/docs/docker.html
Append -uv to the image name (e.g. axolotlai/axolotl-base-uv); the uv variant uses a relocatable venv at /workspace/axolotl-venv instead of Miniconda + pip. Tags follow the same format as the pip variant. We recommend the uv images for new deployments.

Base

The base image is the most minimal image that can install Axolotl. It is based on the nvidia/cuda image. It includes python, torch, git, git-lfs, awscli, pydantic, and more.
| Variant | Image                     | Docker Hub |
|---------|---------------------------|------------|
| pip     | axolotlai/axolotl-base    | Link       |
| uv      | axolotlai/axolotl-base-uv | Link       |

Tags examples:

- main-base-py3.11-cu128-2.8.0
- main-base-py3.11-cu128-2.9.1
- main-base-py3.12-cu128-2.10.0
- main-base-py3.12-cu130-2.9.1
- main-base-py3.12-cu130-2.10.0

Main

The main image is the image that is used to run Axolotl. It is based on the axolotlai/axolotl-base image and includes the Axolotl codebase, dependencies, and more.
| Variant | Image                | Docker Hub |
|---------|----------------------|------------|
| pip     | axolotlai/axolotl    | Link       |
| uv      | axolotlai/axolotl-uv | Link       |

Tags examples:

- main-py3.11-cu128-2.8.0
- main-py3.11-cu128-2.9.1
- main-py3.12-cu128-2.10.0
- main-py3.12-cu130-2.9.1
- main-py3.12-cu130-2.10.0
- main-latest
- main-20250303-py3.11-cu124-2.6.0
- main-20250303-py3.11-cu126-2.6.0
- main-20260315-py3.11-cu128-2.9.1
- 0.12.0

Cloud
| Variant | Image                      | Docker Hub |
|---------|----------------------------|------------|
| pip     | axolotlai/axolotl-cloud    | Link       |
| uv      | axolotlai/axolotl-cloud-uv | Link       |

Tags follow the same format as the main image.
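The tag strings above follow a regular pattern: branch or release version, an optional date stamp, Python version, CUDA version, and PyTorch version, with an extra "base" token on base-image tags. A small sketch of parsing one (field names are my own, not an official scheme):

```python
def parse_tag(tag: str) -> dict:
    """Split an Axolotl Docker tag into its components (illustrative only)."""
    parts = tag.split("-")
    out = {"branch": parts[0]}
    for p in parts[1:]:
        if p == "base":
            out["variant"] = "base"       # base-image tags carry a "base" token
        elif p == "latest":
            out["latest"] = True          # e.g. main-latest
        elif p.startswith("py"):
            out["python"] = p[2:]         # e.g. py3.11 -> "3.11"
        elif p.startswith("cu"):
            out["cuda"] = p[2:]           # e.g. cu128 -> "128"
        elif p.isdigit() and len(p) == 8:
            out["date"] = p               # e.g. 20250303 build stamp
        else:
            out["torch"] = p              # remaining field is the PyTorch version
    return out

print(parse_tag("main-base-py3.11-cu128-2.9.1"))
```

This makes it easy to script things like "pick the newest cu128 tag" when pinning images in CI.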
diff --git a/docs/ebft.html b/docs/ebft.html
(bf16 and Flash Attention) or AMD GPU

2 Installation Methods

2.1 Quick Install

Choose your CUDA version (e.g. cu128, cu130), create a venv, and install.

2.2 Edge/Development Build

uv sync creates a .venv, installs exact pinned versions from uv.lock, and sets up an editable install automatically.

2.3 Docker

Use axolotlai/axolotl:main-py3.11-cu128-2.9.1 or the cloud variant axolotlai/axolotl-cloud:main-py3.11-cu128-2.9.1. For the uv images, use axolotlai/axolotl-uv:main-py3.11-cu128-2.9.1 or the cloud variant axolotlai/axolotl-cloud-uv:main-py3.11-cu128-2.9.1.

3.1 Cloud GPU Providers

Use axolotlai/axolotl-cloud:main-latest or axolotlai/axolotl-cloud-uv:main-latest.

4 Platform-Specific Instructions

4.1 macOS
5 Migrating from pip to uv

If you have an existing pip-based Axolotl installation, you can migrate to uv:

```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env

# Create a fresh venv (recommended for a clean start)
export UV_TORCH_BACKEND=cu128  # or cu130
uv venv --no-project --relocatable
source .venv/bin/activate

# Reinstall axolotl
uv pip install --no-build-isolation axolotl[flash-attn,deepspeed]
```
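The UV_TORCH_BACKEND values used above are simply "cu" followed by the CUDA major and minor digits. A trivial helper to derive one, in case you script the migration (the function name is my own):

```python
def uv_torch_backend(cuda_version: str) -> str:
    """Map a CUDA version string like '12.8' to a uv torch backend like 'cu128'."""
    major, minor = cuda_version.split(".")
    return f"cu{major}{minor}"

assert uv_torch_backend("12.8") == "cu128"  # CUDA 12.8
assert uv_torch_backend("13.0") == "cu130"  # CUDA 13.0
```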
6 Using pip (Alternative)

7 Troubleshooting
# FFT SFT (1x48GB @ 25GiB)
pip install git+https://github.com/huggingface/transformers.git@0c9a72e4576fe4c84077f066e585129c97bfd4e6

TIPS

If you encounter ImportError: ... undefined symbol ... or ModuleNotFoundError: No module named 'causal_conv1d_cuda', the causal-conv1d package may have been installed incorrectly. Try uninstalling it:
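A sketch of the uninstall step (the package name is taken from the error message above; use the uv form if your environment is uv-managed):

```bash
pip uninstall -y causal-conv1d
# or, in a uv-managed venv:
uv pip uninstall causal-conv1d
```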
diff --git a/docs/models/apertus.html b/docs/models/apertus.html
# For those using our Docker image, use the below path.
export CUDA_HOME=/usr/local/cuda
uv pip install git+https://github.com/nickjbrowning/XIELU@59d6031 --no-build-isolation --no-deps
diff --git a/docs/models/devstral.html b/docs/models/devstral.html
diff --git a/docs/models/gemma3n.html b/docs/models/gemma3n.html
diff --git a/docs/models/gpt-oss.html b/docs/models/gpt-oss.html
weights to {output_dir}/merged.
GPT-OSS support in vLLM does not exist in a stable release yet. See https://x.com/MaziyarPanahi/status/1955741905515323425 for more information about using a special vllm-openai docker image for inferencing with vLLM.
Optionally, vLLM can be installed from nightly:
The vLLM server can then be started with the following command (modify --tensor-parallel-size 8 to match your environment):
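A hedged sketch of that launch command; the doc only specifies the --tensor-parallel-size flag, so the model id here is illustrative:

```bash
# assumes the nightly vLLM install or the special vllm-openai image mentioned above
vllm serve openai/gpt-oss-120b --tensor-parallel-size 8
```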
Install Axolotl following the installation guide.
Install timm for vision model support:
Install Cut Cross Entropy to reduce training VRAM usage.
Run the finetuning example:
Here is an example of how to install from pip:
Install the required vision lib:
```bash
uv pip install 'mistral-common[opencv]==1.8.5'
```
Download the example dataset image:
Run the fine-tuning:
diff --git a/docs/models/mimo.html b/docs/models/mimo.html
Install the required vision lib:
```bash
uv pip install 'mistral-common[opencv]==1.8.6'
```
Download the example dataset image:
Run the fine-tuning:
diff --git a/docs/models/mistral-small.html b/docs/models/mistral-small.html

Install the required vision lib:
```bash
uv pip install 'mistral-common[opencv]==1.8.5'
```
Download the example dataset image:
Run the fine-tuning:
diff --git a/docs/models/mistral.html b/docs/models/mistral.html

Install Cut Cross Entropy to reduce training VRAM usage.
Install FLA for improved performance.
Install Axolotl following the installation guide.
Here is an example of how to install from pip:
Run the finetuning example:
Install Axolotl following the installation guide.
Here is an example of how to install from pip:
Install an extra dependency:
Run the finetuning example:
Here is an example of how to install from pip:
Unsloth provides hand-written optimized kernels for LLM finetuning that slightly improve speed and VRAM over standard industry baselines.

Due to breaking changes in transformers v4.48.0, users will need to downgrade to <=v4.47.1 to use this patch. This will later be deprecated in favor of LoRA Optimizations.

The following will install the correct unsloth and extras from source.

Axolotl exposes a few configuration options to try out unsloth and get most of the performance gains. Our unsloth integration is currently limited to the following model architectures:

- llama

Some of these options are specific to LoRA finetuning and cannot be used for multi-GPU finetuning; others are composable and can be used with multi-GPU finetuning.
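As a sketch, the toggles live in the YAML training config. The option names below reflect the unsloth integration as I understand it and should be checked against the config reference before use:

```yaml
# LoRA-specific optimizations (single-GPU LoRA finetuning only)
unsloth_lora_mlp: true
unsloth_lora_qkv: true
unsloth_lora_o: true

# composable optimizations (usable with multi-GPU finetuning)
unsloth_cross_entropy_loss: true
unsloth_rms_norm: true
unsloth_rope: true
```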
```bash
# install uv if you don't already have it installed (restart shell after)
curl -LsSf https://astral.sh/uv/install.sh | sh

# change depending on system
export UV_TORCH_BACKEND=cu128

# create a new virtual environment
uv venv --python 3.12
source .venv/bin/activate

uv pip install torch==2.10.0 torchvision
uv pip install --no-build-isolation axolotl[deepspeed]

# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs  # OPTIONAL
```

Installing with Docker can be less error prone than installing in your own environment.
Other installation approaches are described here.
That’s it! Check out our Getting Started Guide for a more detailed walkthrough.
Axolotl ships with built-in documentation optimized for AI coding agents (Claude Code, Cursor, Copilot, etc.). These docs are bundled with the pip package — no repo clone needed.
```bash
# Show overview and available training methods
axolotl agent-docs

# Topic-specific references
axolotl agent-docs sft                # supervised fine-tuning
axolotl agent-docs grpo               # GRPO online RL
axolotl agent-docs preference_tuning  # DPO, KTO, ORPO, SimPO
axolotl agent-docs reward_modelling   # outcome and process reward models
axolotl agent-docs pretraining        # continual pretraining
axolotl agent-docs --list             # list all topics

# Dump config schema for programmatic use
axolotl config-schema
axolotl config-schema --field adapter
```

If you're working with the source repo, agent docs are also available at docs/agents/ and the project overview is in AGENTS.md.
If you use Axolotl in your research or projects, please cite it as follows: