Compare commits

..

3 Commits

Author SHA1 Message Date
NanoCode012
798c8fba89 chore: update docker docs (#3623)
2026-04-24 16:03:21 +07:00
NanoCode012
17fc747f99 fix: docker build failing (#3622)
* fix: uv leftover docs

* fix: docker build failing

* chore: doc

* fix: remove old pytorch build

* fix: stop recommend flash-attn optional, let transformers pull

* fix: remove ring flash attention from image

* fix: quotes [skip ci]

* chore: naming [skip ci]
2026-04-24 14:23:09 +07:00
Wing Lian
901f2356bc dpo collation/padding (#3601) [skip ci]
* fix dpo collation/padding

* fix DPO collator encoder-decoder pixel_values dtype and is_encoder_decoder detection

- Use float32 instead of LongTensor for _pixel_values in encoder-decoder branch
- Add missing padding_value case for _pixel_values in encoder-decoder branch
- Derive is_encoder_decoder from model config instead of hardcoding False
2026-04-23 14:49:52 -04:00
58 changed files with 528 additions and 840 deletions

View File

@@ -31,10 +31,11 @@ PRs are **greatly welcome**!
 Please run below to setup env
 ```bash
-# Install axolotl + dev and test dependencies from lockfile
+# Install axolotl + dev and test dependencies
 export UV_TORCH_BACKEND=cu128 # or cu130
-uv sync --extra flash-attn --extra deepspeed --group dev --group test
+uv venv --no-project --relocatable
+source .venv/bin/activate
+uv pip install --no-build-isolation -e '.[deepspeed]' --group dev --group test
 pre-commit install

 # test

View File

@@ -30,14 +30,6 @@ jobs:
       fail-fast: false
       matrix:
         include:
-          - cuda: "128"
-            cuda_version: 12.8.1
-            cudnn_version: ""
-            python_version: "3.11"
-            pytorch: 2.9.0
-            torch_cuda_arch_list: "7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX"
-            dockerfile: "Dockerfile-base"
-            platforms: "linux/amd64,linux/arm64"
           - cuda: "128"
             cuda_version: 12.8.1
             cudnn_version: ""
@@ -168,14 +160,6 @@ jobs:
             torch_cuda_arch_list: "7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX"
             dockerfile: "Dockerfile-uv-base"
             platforms: "linux/amd64,linux/arm64"
-          - cuda: "128"
-            cuda_version: 12.8.1
-            cudnn_version: ""
-            python_version: "3.11"
-            pytorch: 2.9.0
-            torch_cuda_arch_list: "7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX"
-            dockerfile: "Dockerfile-uv-base"
-            platforms: "linux/amd64,linux/arm64"
           - cuda: "128"
             cuda_version: 12.8.1
             cudnn_version: ""

View File

@@ -18,12 +18,6 @@ jobs:
       fail-fast: false
       matrix:
         include:
-          - cuda: 128
-            cuda_version: 12.8.1
-            python_version: "3.11"
-            pytorch: 2.9.0
-            axolotl_extras:
-            platforms: "linux/amd64,linux/arm64"
           - cuda: 128
             cuda_version: 12.8.1
             python_version: "3.11"
@@ -180,12 +174,6 @@ jobs:
       fail-fast: false
       matrix:
         include:
-          - cuda: 128
-            cuda_version: 12.8.1
-            python_version: "3.11"
-            pytorch: 2.9.0
-            axolotl_extras:
-            platforms: "linux/amd64,linux/arm64"
           - cuda: 128
             cuda_version: 12.8.1
             python_version: "3.11"

View File

@@ -1 +1 @@
-0.16.0.dev0
+0.16.2.dev0

View File

@@ -24,9 +24,9 @@ WORKDIR /workspace/axolotl
 # If AXOLOTL_EXTRAS is set, append it in brackets; don't install deepspeed with arm64
 RUN pip uninstall -y causal_conv1d
 RUN if [ "$TARGETARCH" = "arm64" ]; then \
-        BASE_EXTRAS="flash-attn,ring-flash-attn,optimizers,ray"; \
+        BASE_EXTRAS="optimizers,ray"; \
     else \
-        BASE_EXTRAS="deepspeed,flash-attn,ring-flash-attn,optimizers,ray"; \
+        BASE_EXTRAS="deepspeed,optimizers,ray"; \
     fi && \
     if [ "$AXOLOTL_EXTRAS" != "" ]; then \
         pip install --no-build-isolation -e .[$BASE_EXTRAS,$AXOLOTL_EXTRAS] $AXOLOTL_ARGS; \

View File

@@ -58,19 +58,3 @@ RUN git lfs install --skip-repo && \
     # The base image ships with `pydantic==1.8.2` which is not working
     pip3 install -U --no-cache-dir pydantic==1.10.10 && \
     pip3 cache purge
-
-# Map Python version (e.g., 3.12 -> cp312)
-RUN PYTHON_CP="cp$(echo $PYTHON_VERSION | tr -d '.')" && \
-    # Map PyTorch version (e.g., 2.9.1 -> torch2.9, 2.10.0 -> torch2.10)
-    TORCH_TAG="torch$(echo $PYTORCH_VERSION | grep -oP '^\d+\.\d+')" && \
-    # Map architecture
-    case "$TARGETARCH" in \
-        amd64) ARCH_TAG="x86_64" ;; \
-        arm64) ARCH_TAG="aarch64" ;; \
-        *) echo "Unsupported architecture: $TARGETARCH"; exit 1 ;; \
-    esac && \
-    WHL_VERSION="v0.7.16" && \
-    WHL_FILE="flash_attn-2.8.3+cu${CUDA}${TORCH_TAG}-${PYTHON_CP}-${PYTHON_CP}-linux_${ARCH_TAG}.whl" && \
-    wget -nv "https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/${WHL_VERSION}/${WHL_FILE}" && \
-    pip3 install --no-cache-dir "${WHL_FILE}" && \
-    rm "${WHL_FILE}"

View File

@@ -24,9 +24,9 @@ RUN git fetch origin +$GITHUB_REF && \
 # If AXOLOTL_EXTRAS is set, append it in brackets
 RUN if [ "$AXOLOTL_EXTRAS" != "" ] ; then \
-        pip install --no-build-isolation -e .[deepspeed,flash-attn,mamba-ssm,$AXOLOTL_EXTRAS] $AXOLOTL_ARGS; \
+        pip install --no-build-isolation -e .[deepspeed,mamba-ssm,$AXOLOTL_EXTRAS] $AXOLOTL_ARGS; \
     else \
-        pip install --no-build-isolation -e .[deepspeed,flash-attn,mamba-ssm] $AXOLOTL_ARGS; \
+        pip install --no-build-isolation -e .[deepspeed,mamba-ssm] $AXOLOTL_ARGS; \
     fi

 # So we can test the Docker image

View File

@@ -24,9 +24,9 @@ WORKDIR /workspace/axolotl
 # If AXOLOTL_EXTRAS is set, append it in brackets; don't install deepspeed with arm64
 RUN uv pip uninstall causal_conv1d
 RUN if [ "$TARGETARCH" = "arm64" ]; then \
-        BASE_EXTRAS="flash-attn,ring-flash-attn,optimizers,ray"; \
+        BASE_EXTRAS="optimizers,ray"; \
     else \
-        BASE_EXTRAS="deepspeed,flash-attn,ring-flash-attn,optimizers,ray"; \
+        BASE_EXTRAS="deepspeed,optimizers,ray"; \
     fi && \
     if [ "$AXOLOTL_EXTRAS" != "" ]; then \
         uv pip install --no-build-isolation -e .[$BASE_EXTRAS,$AXOLOTL_EXTRAS] $AXOLOTL_ARGS; \

View File

@@ -38,20 +38,3 @@ RUN uv pip install packaging setuptools wheel psutil \
 RUN if [ "$TARGETARCH" = "amd64" ]; then \
         MAMBA_SKIP_CUDA_BUILD=TRUE CAUSAL_CONV1D_SKIP_CUDA_BUILD=TRUE uv pip install --no-build-isolation mamba_ssm causal_conv1d; \
     fi
-
-# Map Python version (e.g., 3.12 -> cp312)
-RUN PYTHON_CP="cp$(echo $PYTHON_VERSION | tr -d '.')" && \
-    # Map PyTorch version (e.g., 2.9.1 -> torch2.9, 2.10.0 -> torch2.10)
-    TORCH_TAG="torch$(echo $PYTORCH_VERSION | grep -oP '^\d+\.\d+')" && \
-    LINUX_TAG="manylinux_" && \
-    # Map architecture
-    case "$TARGETARCH" in \
-        amd64) ARCH_TAG="2_24_x86_64.manylinux_2_28_x86_64" ;; \
-        arm64) ARCH_TAG="2_34_aarch64" ;; \
-        *) echo "Unsupported architecture: $TARGETARCH"; exit 1 ;; \
-    esac && \
-    WHL_VERSION="v0.7.16" && \
-    WHL_FILE="flash_attn-2.8.3+cu${CUDA}${TORCH_TAG}-${PYTHON_CP}-${PYTHON_CP}-${LINUX_TAG}${ARCH_TAG}.whl" && \
-    wget -nv "https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/${WHL_VERSION}/${WHL_FILE}" && \
-    uv pip install --no-cache-dir "${WHL_FILE}" && \
-    rm "${WHL_FILE}"

View File

@@ -77,8 +77,9 @@ Make sure you have an [editable install](https://setuptools.pypa.io/en/latest/us
 ```bash
 export UV_TORCH_BACKEND=cu128 # or cu130
-uv sync --extra flash-attn --extra deepspeed --group dev --group test
+uv venv --no-project --relocatable
+source .venv/bin/activate
+uv pip install --no-build-isolation -e '.[deepspeed]' --group dev --group test
 ```

 #### Remote Hosts
@@ -218,8 +219,9 @@ docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --
 You will now be in the container. Next, install Axolotl with dev dependencies:
 ```bash
-uv sync --extra flash-attn --extra deepspeed --group dev --group test
+uv venv --no-project --relocatable
+source .venv/bin/activate
+uv pip install --no-build-isolation -e '.[deepspeed]' --group dev --group test
 ```

 ### Attach To Container

View File

@@ -10,13 +10,16 @@ This section describes the different Docker images that are released by AxolotlA
 [Docker Hub](https://hub.docker.com/u/axolotlai).

 ::: {.callout-important}
 For Blackwell GPUs, please use the tags with PyTorch 2.9.1 and CUDA 12.8.
 :::

+### Switch to the `-uv` images
 ::: {.callout-tip}
-Each image below is available in a **uv variant** that uses [uv](https://docs.astral.sh/uv/) with
-a relocatable venv (`/workspace/axolotl-venv`) instead of Miniconda + pip. Append `-uv` to the image name
-(e.g. `axolotlai/axolotl-base-uv`). Tags follow the same format. We recommend the uv images for new deployments.
+Each image below ships a **uv variant** that uses [uv](https://docs.astral.sh/uv/) with a relocatable venv
+(`/workspace/axolotl-venv`) instead of Miniconda + pip. Append `-uv` to the image name
+(e.g. `axolotlai/axolotl-uv`, `axolotlai/axolotl-base-uv`, `axolotlai/axolotl-cloud-uv`). Tags follow the
+same format as their non-uv counterparts.
+
+**We recommend switching to the `-uv` images early.** In the near future we will publish the uv-based
+build to the non-uv tags as well. The non-uv names will continue to work, but they will start serving
+the uv image.
 :::

 ## Base
@@ -85,7 +88,7 @@ Tags examples:
 - `main-py3.12-cu130-2.10.0`
 - `main-latest`
 - `main-20260315-py3.11-cu128-2.9.1`
-- `0.12.0`
+- `0.16.1`

 ## Cloud

View File

@@ -57,7 +57,7 @@ description: Frequently asked questions
 **Q: vLLM is not working with Axolotl**

-> A: We currently recommend torch 2.6.0 for use with `vllm`. Please ensure you use the right version. For Docker, please use the `main-py3.11-cu124-2.6.0` tag.
+> A: We currently recommend torch 2.10 for use with `vllm`. Please ensure you use the right version. For Docker, please use the `main-py3.12-cu128-2.10.0` tag (note: torch 2.10 images are built with Python 3.12).

 **Q: FA2 2.8.0 `undefined symbol` runtime error on CUDA 12.4**

View File

@@ -15,7 +15,7 @@ This guide covers all the ways you can install and set up Axolotl for your envir
 - NVIDIA GPU (Ampere architecture or newer for `bf16` and Flash Attention) or AMD GPU
 - Python ≥3.11
-- PyTorch ≥2.9.0
+- PyTorch ≥2.9.1

 ## Installation {#sec-installation}
@@ -36,9 +36,9 @@ source $HOME/.local/bin/env
 Choose your CUDA version (e.g. `cu128`, `cu130`), create a venv, and install:
 ```{.bash}
 export UV_TORCH_BACKEND=cu128 # or cu130
-uv venv --no-project --relocatable
+uv venv
 source .venv/bin/activate
-uv pip install --no-build-isolation axolotl[flash-attn,deepspeed]
+uv pip install --no-build-isolation axolotl[deepspeed]
 ```

 ### Edge/Development Build {#sec-edge-build}
@@ -49,12 +49,11 @@ For the latest features between releases:
 git clone https://github.com/axolotl-ai-cloud/axolotl.git
 cd axolotl
 export UV_TORCH_BACKEND=cu128 # or cu130
-uv sync --extra flash-attn --extra deepspeed
+uv venv
+source .venv/bin/activate
+uv pip install --no-build-isolation -e '.[deepspeed]'
 ```
-`uv sync` creates a `.venv`, installs exact pinned versions from `uv.lock`, and sets up an editable install automatically.

 ### Docker {#sec-docker}
 ```{.bash}
@@ -132,11 +131,11 @@ source $HOME/.local/bin/env
 # Create a fresh venv (recommended for a clean start)
 export UV_TORCH_BACKEND=cu128 # or cu130
-uv venv --no-project --relocatable
+uv venv
 source .venv/bin/activate
 # Reinstall axolotl
-uv pip install --no-build-isolation axolotl[flash-attn,deepspeed]
+uv pip install --no-build-isolation axolotl[deepspeed]
 ```

 ## Using pip (Alternative) {#sec-pip}
@@ -151,13 +150,13 @@ Follow the instructions at: [https://pytorch.org/get-started/locally/](https://p
 ```{.bash}
 pip3 install -U packaging setuptools wheel ninja
-pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
+pip3 install --no-build-isolation axolotl[deepspeed]
 ```

 For editable/development installs:
 ```{.bash}
 pip3 install -U packaging setuptools wheel ninja
-pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
+pip3 install --no-build-isolation -e '.[deepspeed]'
 ```

 ## Troubleshooting {#sec-troubleshooting}

View File

@@ -15,7 +15,7 @@ Thanks to the team at LiquidAI for giving us early access to prepare for these r
 Here is an example of how to install from pip:
 ```bash
 # Ensure you have a compatible version of Pytorch installed
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+uv pip install --no-build-isolation 'axolotl>=0.16.1'
 ```

 2. Run one of the finetuning examples below.

View File

@@ -11,11 +11,11 @@ This guide shows how to fine-tune it with Axolotl with multi-turn conversations
 Here is an example of how to install from main for pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.6.0 min)
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
 git clone https://github.com/axolotl-ai-cloud/axolotl.git
 cd axolotl
-uv pip install --no-build-isolation -e '.[flash-attn]'
+uv pip install --no-build-isolation -e '.'

 # Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
 python scripts/cutcrossentropy_install.py | sh

View File

@@ -13,11 +13,11 @@ Thanks to the team at Arcee.ai for using Axolotl in supervised fine-tuning the A
 Here is an example of how to install from main for pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.6.0 min)
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
 git clone https://github.com/axolotl-ai-cloud/axolotl.git
 cd axolotl
-uv pip install --no-build-isolation -e '.[flash-attn]'
+uv pip install --no-build-isolation -e '.'

 # Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
 python scripts/cutcrossentropy_install.py | sh

View File

@@ -36,12 +36,7 @@
    "id": "msOCO4NRmRLa"
   },
   "outputs": [],
-  "source": [
-   "%%capture\n",
-   "# This step can take ~5-10 minutes to install dependencies\n",
-   "!pip install --no-build-isolation axolotl[flash-attn]>=0.9.1\n",
-   "!pip install \"cut-cross-entropy[transformers] @ git+https://github.com/axolotl-ai-cloud/ml-cross-entropy.git@fec1a88\""
-  ]
+  "source": "%%capture\n# This step can take ~5-10 minutes to install dependencies\n!pip install --no-build-isolation \"axolotl>=0.16.1\"\n!pip install \"cut-cross-entropy[transformers] @ git+https://github.com/axolotl-ai-cloud/ml-cross-entropy.git@fec1a88\""
  },
  {
   "cell_type": "markdown",

View File

@@ -15,8 +15,8 @@ Thanks to the team at MistralAI for giving us early access to prepare for this r
 Here is an example of how to install from pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.6.0 min)
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
+uv pip install --no-build-isolation 'axolotl>=0.16.1'
 ```

 2. Install [Cut Cross Entropy](https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy) to reduce training VRAM usage

View File

@@ -9,8 +9,8 @@ Gemma-3n is a family of multimodal models from Google found on [HuggingFace](htt
 Here is an example of how to install from pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.6.0 min)
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
+uv pip install --no-build-isolation 'axolotl>=0.16.1'
 ```

 2. In addition to Axolotl's requirements, Gemma-3n requires:

View File

@@ -13,8 +13,8 @@ This guide shows how to fine-tune it with Axolotl with multi-turn conversations
 Here is an example of how to install from pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.6.0 min)
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
+uv pip install --no-build-isolation 'axolotl>=0.16.1'
 ```

 2. Choose one of the following configs below for training the 20B model. (for 120B, see [below](#training-120b))

View File

@@ -11,11 +11,11 @@ This guide shows how to fine-tune it with Axolotl with multi-turn conversations
 Here is an example of how to install from main for pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.7.1 min)
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
 git clone https://github.com/axolotl-ai-cloud/axolotl.git
 cd axolotl
-uv pip install --no-build-isolation -e '.[flash-attn]'
+uv pip install --no-build-isolation -e '.'

 # Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
 python scripts/cutcrossentropy_install.py | sh

View File

@@ -9,11 +9,11 @@ Tencent released a family of opensource models called HunYuan with varying param
 Here is an example of how to install from main for pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.6.0 min)
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
 git clone https://github.com/axolotl-ai-cloud/axolotl.git
 cd axolotl
-uv pip install --no-build-isolation -e '.[flash-attn]'
+uv pip install --no-build-isolation -e '.'

 # Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
 python scripts/cutcrossentropy_install.py | sh

View File

@@ -13,8 +13,8 @@ Thanks to the team at MistralAI for giving us early access to prepare for these
 Here is an example of how to install from pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.7.0 min)
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
+uv pip install --no-build-isolation 'axolotl>=0.16.1'
 ```

 2. Install [Cut Cross Entropy](https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy) to reduce training VRAM usage

View File

@@ -11,7 +11,7 @@ This guide shows how to fine-tune it with Axolotl with multi-turn conversations
 Here is an example of how to install from pip:
 ```bash
 # Ensure you have a compatible version of Pytorch installed
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+uv pip install --no-build-isolation 'axolotl>=0.16.1'

 # Install Cut Cross Entropy
 python scripts/cutcrossentropy_install.py | sh

View File

@@ -13,7 +13,7 @@ This guide shows how to fine-tune SmolVLM2 models with Axolotl.
 Here is an example of how to install from pip:
 ```bash
 # Ensure you have a compatible version of Pytorch installed
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+uv pip install --no-build-isolation 'axolotl>=0.16.1'
 ```

 2. Install an extra dependency:

View File

@@ -11,8 +11,8 @@ Thanks to the team at MistralAI for giving us early access to prepare for this r
 Here is an example of how to install from pip:
 ```bash
-# Ensure you have Pytorch installed (Pytorch 2.6.0 min)
-uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
+# Ensure you have Pytorch installed (Pytorch 2.9.1 min)
+uv pip install --no-build-isolation 'axolotl>=0.16.1'
 ```

 2. Please install the below.

View File

@@ -12,7 +12,7 @@ requires-python = ">=3.10"
 dependencies = [
     # Core ML stack
-    "torch>=2.6.0",
+    "torch>=2.9.1",
     "packaging==26.0",
     "huggingface_hub>=1.1.7",
     "peft>=0.19.1,<0.20.0",
@@ -79,7 +79,7 @@ dependencies = [
     # Platform-specific (Linux only)
     "bitsandbytes==0.49.1 ; sys_platform != 'darwin'",
     "triton>=3.4.0 ; sys_platform != 'darwin'",
-    "xformers>=0.0.23.post1 ; sys_platform != 'darwin'",
+    "xformers>=0.0.33.post2 ; sys_platform != 'darwin' and platform_machine != 'aarch64'",
     "liger-kernel==0.7.0 ; sys_platform != 'darwin'",
     "torchao==0.17.0 ; sys_platform != 'darwin' and platform_machine != 'aarch64'",

View File

@@ -370,7 +370,7 @@ class HFCausalTrainerBuilder(TrainerBuilderBase):
         data_collator_kwargs = {
             "padding": True,  # True/"longest" is the default
         }
-        multiple = 64
+        multiple = getattr(self.cfg, "pad_to_multiple_of", None) or 64
         if self.cfg.pad_to_sequence_len:
             data_collator_kwargs["pad_to_multiple_of"] = multiple * math.ceil(
                 self.cfg.sequence_len / multiple
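
For reference, a minimal sketch of the rounding above with illustrative values (the 96/2112 numbers are ours, not from the diff): when `pad_to_sequence_len` is set, every batch is padded to the smallest multiple of `multiple` that covers `sequence_len`.

```python
import math

multiple = 64                # default when pad_to_multiple_of is unset
sequence_len = 2048
assert multiple * math.ceil(sequence_len / multiple) == 2048  # already aligned

multiple = 96                # e.g. a config setting pad_to_multiple_of: 96
assert multiple * math.ceil(sequence_len / multiple) == 2112  # 22 * 96
```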

View File

@@ -228,9 +228,47 @@ class HFRLTrainerBuilder(TrainerBuilderBase):
         return training_args, trainer_kwargs

+    def build_collator(self, **kwargs):
+        """Build a data collator for preference-tuning trainers.
+
+        Returns None for RL types that provide their own collator (e.g. GRPO,
+        KTO), letting the trainer construct its default. For DPO/IPO/ORPO/SIMPO
+        returns an ``AxolotlDPODataCollatorWithPadding`` when
+        ``pad_to_multiple_of`` is set, otherwise None (so the trainer
+        falls back to the TRL default).
+        """
+        if self.cfg.rl not in (
+            RLType.DPO,
+            RLType.IPO,
+            RLType.ORPO,
+            RLType.SIMPO,
+        ):
+            return None
+
+        pad_to_multiple_of = getattr(self.cfg, "pad_to_multiple_of", None)
+        if not pad_to_multiple_of:
+            return None
+
+        from axolotl.utils.collators.dpo import AxolotlDPODataCollatorWithPadding
+
+        LOG.info(
+            f"Using AxolotlDPODataCollatorWithPadding with pad_to_multiple_of="
+            f"{pad_to_multiple_of}"
+        )
+        is_enc_dec = getattr(self.model.config, "is_encoder_decoder", False)
+        return AxolotlDPODataCollatorWithPadding(
+            pad_token_id=self.tokenizer.pad_token_id,
+            is_encoder_decoder=is_enc_dec,
+            pad_to_multiple_of=pad_to_multiple_of,
+            **kwargs,
+        )
+
     def build(self, total_num_steps):
         training_args, trainer_kwargs = self._build_training_arguments(total_num_steps)
+        if (data_collator := self.build_collator()) is not None:
+            trainer_kwargs["data_collator"] = data_collator
         if self.eval_dataset:
             trainer_kwargs["eval_dataset"] = self.eval_dataset
         if (

View File

@@ -11,7 +11,7 @@ kd_ce_alpha: 0.1
 kd_alpha: 0.9
 kd_temperature: 1.0

-torch_compile: True # torch>=2.6.0, recommended to reduce vram
+torch_compile: True # recommended to reduce vram

 datasets:
   - path: ...

View File

@@ -407,7 +407,10 @@ def selective_log_softmax(logits, index) -> torch.Tensor:
     K = index.shape[-1]
     original_index_shape = index.shape

-    flat_logits = logits.reshape(-1, V).contiguous()
+    try:
+        flat_logits = logits.view(-1, V)
+    except RuntimeError:
+        flat_logits = logits.reshape(-1, V).contiguous()
     flat_index = index.reshape(-1, K).contiguous()

     BLOCK_V = 4096
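
Context for the new try/except: `.view()` is zero-copy but only works when the tensor's strides are compatible, while `.reshape(...).contiguous()` always succeeds at the cost of a copy. A standalone sketch (shapes chosen purely for illustration):

```python
import torch

logits = torch.randn(2, 3, 5).transpose(0, 1)  # non-contiguous layout
try:
    flat = logits.view(-1, 5)                  # zero-copy; raises here
except RuntimeError:
    flat = logits.reshape(-1, 5).contiguous()  # falls back to a copy
assert flat.shape == (6, 5)
```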

View File

@@ -6,6 +6,7 @@ from .batching import (
     PretrainingBatchSamplerDataCollatorForSeq2Seq,
     V2BatchSamplerDataCollatorForSeq2Seq,
 )
+from .dpo import AxolotlDPODataCollatorWithPadding
 from .mamba import MambaDataCollator

 __all__ = [
@@ -13,5 +14,6 @@ __all__ = [
     "BatchSamplerDataCollatorForSeq2Seq",
     "V2BatchSamplerDataCollatorForSeq2Seq",
     "PretrainingBatchSamplerDataCollatorForSeq2Seq",
+    "AxolotlDPODataCollatorWithPadding",
     "MambaDataCollator",
 ]

View File

@@ -0,0 +1,128 @@
+"""DPO/ORPO/IPO/KTO data collator with pad_to_multiple_of support.
+
+Extends TRL's DPODataCollatorWithPadding to round padded sequence lengths
+up to a fixed multiple. This stabilizes Triton autotune caches for kernels
+that key on sequence length (e.g. fla's linear attention kernels used by
+Qwen3.5), which otherwise re-autotune on every distinct batch length.
+"""
+
+from __future__ import annotations
+
+from dataclasses import dataclass
+from typing import Any
+
+import torch
+from torch.nn.utils.rnn import pad_sequence
+from trl.experimental.utils import DPODataCollatorWithPadding
+from trl.trainer.utils import pad
+
+
+def _round_up(length: int, multiple: int) -> int:
+    return ((length + multiple - 1) // multiple) * multiple
+
+
+@dataclass
+class AxolotlDPODataCollatorWithPadding(DPODataCollatorWithPadding):
+    """DPO data collator that pads to a multiple of ``pad_to_multiple_of``.
+
+    Args:
+        pad_token_id: Tokenizer pad token id (inherited).
+        is_encoder_decoder: Whether the model is encoder-decoder (inherited).
+        pad_to_multiple_of: If set, padded lengths are rounded up to this
+            multiple. Helps stabilize Triton autotune caches.
+    """
+
+    pad_to_multiple_of: int | None = None
+
+    def __call__(self, features: list[dict[str, Any]]) -> dict[str, Any]:
+        pad_to_mult = self.pad_to_multiple_of
+        padded_batch: dict[str, Any] = {}
+        for k in features[0].keys():
+            if k.endswith(
+                ("_input_ids", "_attention_mask", "_labels", "_pixel_values")
+            ):
+                if self.is_encoder_decoder:
+                    if k.endswith("_pixel_values"):
+                        to_pad = [
+                            torch.tensor(ex[k], dtype=torch.float32) for ex in features
+                        ]
+                    else:
+                        to_pad = [torch.LongTensor(ex[k]) for ex in features]
+                    if k.startswith("prompt") and k.endswith("input_ids"):
+                        if self.pad_token_id is None:
+                            raise ValueError(
+                                "Padding is enabled, but the tokenizer is not configured with a padding token."
+                            )
+                        padding_value = self.pad_token_id
+                    elif k.endswith("_attention_mask"):
+                        padding_value = 0
+                    elif k.endswith("_pixel_values"):
+                        padding_value = 0
+                    elif (
+                        k.startswith(("chosen", "rejected", "completion"))
+                        or "decoder" in k
+                    ):
+                        padding_value = -100
+                    else:
+                        raise ValueError(f"Unexpected key in batch '{k}'")
+                    padded = pad_sequence(
+                        to_pad, batch_first=True, padding_value=padding_value
+                    )
+                    if pad_to_mult:
+                        cur = padded.shape[1]
+                        target = _round_up(cur, pad_to_mult)
+                        if target > cur:
+                            extra = target - cur
+                            pad_shape = list(padded.shape)
+                            pad_shape[1] = extra
+                            filler = torch.full(
+                                pad_shape,
+                                padding_value,
+                                dtype=padded.dtype,
+                                device=padded.device,
+                            )
+                            padded = torch.cat([padded, filler], dim=1)
+                    padded_batch[k] = padded
+                else:
+                    if k.endswith("_input_ids"):
+                        if self.pad_token_id is None:
+                            raise ValueError(
+                                "Padding is enabled, but the tokenizer is not configured with a padding token."
+                            )
+                        padding_value = self.pad_token_id
+                    elif k.endswith("_labels"):
+                        padding_value = -100
+                    elif k.endswith("_attention_mask"):
+                        padding_value = 0
+                    elif k.endswith("_pixel_values"):
+                        padding_value = 0
+                    else:
+                        raise ValueError(f"Unexpected key in batch '{k}'")
+                    padding_side = (
+                        "left"
+                        if k in ("prompt_input_ids", "prompt_attention_mask")
+                        else "right"
+                    )
+                    dtype = (
+                        torch.float32 if k.endswith("_pixel_values") else torch.int64
+                    )
+                    to_pad = [torch.tensor(ex[k], dtype=dtype) for ex in features]
+                    # trl.pad() natively supports pad_to_multiple_of
+                    padded_batch[k] = pad(
+                        to_pad,
+                        padding_value=padding_value,
+                        padding_side=padding_side,
+                        pad_to_multiple_of=pad_to_mult,
+                    )
+            elif k.endswith("_logps"):
+                padded_batch[k] = torch.tensor([ex[k] for ex in features])
+            else:
+                padded_batch[k] = [ex[k] for ex in features]
+
+        return padded_batch
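
A quick check of the `_round_up` helper above: padded lengths only ever move up to the next multiple, so kernels that autotune per sequence length see a small, fixed set of shapes.

```python
def _round_up(length: int, multiple: int) -> int:
    return ((length + multiple - 1) // multiple) * multiple

assert _round_up(37, 16) == 48   # rounded up to the next multiple
assert _round_up(48, 16) == 48   # already-aligned lengths are unchanged
assert _round_up(49, 16) == 64
```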

View File

@@ -673,6 +673,12 @@ class AxolotlInputConfig(
             "description": "Pad inputs so each step uses constant sized buffers. This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently. Defaults to True if `sample_packing` enabled"
         },
     )
+    pad_to_multiple_of: int | None = Field(
+        default=None,
+        json_schema_extra={
+            "description": ("Pad each batch to a multiple of this value.")
+        },
+    )
     curriculum_sampling: bool | None = Field(
         default=None,
         json_schema_extra={
@@ -1010,7 +1016,7 @@ class AxolotlInputConfig(
     torch_compile: Literal["auto"] | bool | None = Field(
         default=None,
         json_schema_extra={
-            "description": "Whether to use torch.compile and which backend to use. setting to `auto` will enable torch compile when torch>=2.6.0"
+            "description": "Whether to use torch.compile and which backend to use."
         },
     )
     torch_compile_backend: str | None = Field(
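
For illustration, the new option slots into a config like any other field; a hypothetical DictDefault-style snippet in the style of this repo's e2e tests, with all other keys elided:

```python
from axolotl.utils.dict import DictDefault

cfg = DictDefault(
    {
        "rl": "dpo",
        "pad_to_multiple_of": 64,  # DPO batches padded up to multiples of 64
    }
)
```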

View File

@@ -118,52 +118,18 @@ def download_smollm2_135m_gptq_model():
     snapshot_download_w_retry("lilmeaty/SmolLM2-135M-Instruct-GPTQ", repo_type="model")


+@pytest.fixture(scope="session", autouse=True)
+def download_qwen_2_5_half_billion_model():
+    # download the model
+    snapshot_download_w_retry("Qwen/Qwen2.5-0.5B", repo_type="model")
+
+
 @pytest.fixture(scope="session", autouse=True)
 def download_qwen3_half_billion_model():
-    # download the model (still used as the KD teacher in tests/e2e/integrations/test_kd.py)
+    # download the model
     snapshot_download_w_retry("Qwen/Qwen3-0.6B", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_llama_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-llama-50m", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_mistral_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-mistral-25m", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_mixtral_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-mixtral-30m", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_phi_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-phi-64m", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_falcon_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-falcon-42m", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_qwen2_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-qwen2-129m", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_qwen3_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-qwen3-129m", repo_type="model")
-
-
-@pytest.fixture(scope="session", autouse=True)
-def download_tiny_gemma2_model():
-    snapshot_download_w_retry("axolotl-ai-co/tiny-gemma2-137m", repo_type="model")
-
-
 @pytest.fixture(scope="session", autouse=True)
 def download_tatsu_lab_alpaca_dataset():
     # download the dataset
@@ -654,15 +620,7 @@
 )
 def test_load_fixtures(
     download_smollm2_135m_model,
-    download_qwen3_half_billion_model,
-    download_tiny_llama_model,
-    download_tiny_mistral_model,
-    download_tiny_mixtral_model,
-    download_tiny_phi_model,
-    download_tiny_falcon_model,
-    download_tiny_qwen2_model,
-    download_tiny_qwen3_model,
-    download_tiny_gemma2_model,
+    download_qwen_2_5_half_billion_model,
     download_tatsu_lab_alpaca_dataset,
     download_mhenrichsen_alpaca_2k_dataset,
     download_mhenrichsen_alpaca_2k_w_revision_dataset,

View File

@@ -10,10 +10,7 @@ from axolotl.utils import get_pytorch_version
 from axolotl.utils.config import normalize_config, prepare_plugins, validate_config
 from axolotl.utils.dict import DictDefault
-from tests.e2e.utils import (
-    check_model_output_exists,
-    check_tensorboard_loss_decreased,
-)
+from tests.e2e.utils import check_model_output_exists

 @pytest.fixture()
@@ -38,16 +35,13 @@ def min_cfg(temp_dir):
         "num_epochs": 1,
         "micro_batch_size": 8,
         "gradient_accumulation_steps": 1,
-        "learning_rate": 5e-4,
+        "learning_rate": 0.00001,
         "optimizer": "adamw_torch_fused",
         "output_dir": temp_dir,
         "lr_scheduler": "cosine",
-        "max_steps": 40,
-        "warmup_steps": 5,
+        "max_steps": 10,
         "bf16": "auto",
         "save_first_step": False,
-        "use_tensorboard": True,
-        "seed": 42,
     }
@@ -70,18 +64,11 @@ class TestCutCrossEntropyIntegration:
         else:
             train(cfg=cfg, dataset_meta=dataset_meta)
         check_model_output_exists(temp_dir, cfg)
-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs",
-            initial_window=5,
-            final_window=5,
-            max_initial=2.2,
-            max_final=2.0,
-        )

     def test_qwen2_w_cce(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "plugins": [
                     "axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin",
                 ],
@@ -100,15 +87,13 @@ class TestCutCrossEntropyIntegration:
                 "num_epochs": 1,
                 "micro_batch_size": 4,
                 "gradient_accumulation_steps": 1,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "output_dir": temp_dir,
                 "lr_scheduler": "cosine",
-                "max_steps": 50,
+                "max_steps": 10,
                 "bf16": "auto",
                 "save_first_step": False,
-                "use_tensorboard": True,
-                "seed": 42,
             }
         )
         cfg = validate_config(cfg)
@@ -123,13 +108,6 @@ class TestCutCrossEntropyIntegration:
         else:
             train(cfg=cfg, dataset_meta=dataset_meta)
         check_model_output_exists(temp_dir, cfg)
-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs",
-            initial_window=5,
-            final_window=5,
-            max_initial=5.0,
-            max_final=4.7,
-        )

     @pytest.mark.parametrize(
         "attention_type",
@@ -158,10 +136,3 @@ class TestCutCrossEntropyIntegration:
         else:
             train(cfg=cfg, dataset_meta=dataset_meta)
         check_model_output_exists(temp_dir, cfg)
-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs",
-            initial_window=5,
-            final_window=5,
-            max_initial=2.2,
-            max_final=2.0,
-        )

View File

@@ -24,7 +24,7 @@ from axolotl.monkeypatch.lora_kernels import (
 )
 from axolotl.utils.dict import DictDefault

-MODEL_NAME = "axolotl-ai-co/tiny-qwen3-129m"
+MODEL_NAME = "Qwen/Qwen3-0.6B"
 DEVICE = "cuda"
 DTYPE = torch.bfloat16

View File

@@ -1,22 +1,23 @@
 """Test module for DistMuon optimizer with FSDP2 multi-GPU functionality."""

+import os
 from pathlib import Path

+import torch
 import yaml
 from accelerate.test_utils import execute_subprocess_async
+from tbparse import SummaryReader
 from transformers.testing_utils import get_torch_dist_unique_port

 from axolotl.utils.dict import DictDefault
-from tests.e2e.utils import check_tensorboard_loss_decreased, require_torch_2_7_0
+from tests.e2e.utils import most_recent_subdir, require_torch_2_7_0

 AXOLOTL_ROOT = Path(__file__).parent.parent.parent.parent


 def verify_training_success(temp_dir):
-    """Verify that training completed successfully artifacts, no-NaN, loss
-    stayed in qwen2-pretraining scale (tiny-qwen2-129m final pretrain CE ~3.92).
-    """
+    """Verify that training completed successfully by checking artifacts and loss."""
     output_path = Path(temp_dir)

     model_files = list(output_path.glob("*.bin")) + list(
@@ -29,13 +30,19 @@ def verify_training_success(temp_dir):
         "No checkpoint files found - training may have failed"
     )

-    check_tensorboard_loss_decreased(
-        temp_dir + "/runs",
-        initial_window=10,
-        final_window=10,
-        max_initial=5.0,
-        max_final=4.7,
-    )
+    tb_log_path = most_recent_subdir(temp_dir + "/runs")
+    if tb_log_path:
+        event_files = sorted(os.listdir(tb_log_path))
+        if event_files:
+            event_file = os.path.join(tb_log_path, event_files[0])
+            reader = SummaryReader(event_file)
+            df = reader.scalars
+            train_loss_df = df[df.tag == "train/train_loss"]
+            if len(train_loss_df) > 0:
+                final_loss = train_loss_df.value.values[-1]
+                assert not torch.isnan(torch.tensor(final_loss)), (
+                    f"Training loss is NaN: {final_loss}"
+                )


 class TestDistMuon:
@@ -45,7 +52,7 @@
     def test_fft_sft(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -56,12 +63,11 @@
                     },
                 ],
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-3,
+                "learning_rate": 0.02,
                 "optimizer": "muon",
                 "weight_decay": 0.01,
                 "lr_scheduler": "cosine",
@@ -76,9 +82,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
             }
         )
@@ -106,7 +109,7 @@
     def test_lora_sft(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -119,15 +122,14 @@
                 "adapter": "lora",
                 "lora_r": 8,
                 "lora_alpha": 16,
-                "lora_dropout": 0.0,
+                "lora_dropout": 0.05,
                 "lora_target_linear": True,
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-3,
+                "learning_rate": 0.02,
                 "optimizer": "muon",
                 "weight_decay": 0.01,
                 "lr_scheduler": "cosine",
@@ -142,9 +144,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
             }
         )

View File

@@ -1,23 +1,24 @@
 """Test module for FSDP1 multi-GPU functionality."""

+import os
 from pathlib import Path

 import pytest
+import torch
 import yaml
 from accelerate.test_utils import execute_subprocess_async
+from tbparse import SummaryReader
 from transformers.testing_utils import get_torch_dist_unique_port

 from axolotl.utils.dict import DictDefault
-from tests.e2e.utils import check_tensorboard_loss_decreased
+from tests.e2e.utils import most_recent_subdir

 AXOLOTL_ROOT = Path(__file__).parent.parent.parent.parent


 def verify_training_success(temp_dir):
-    """Verify that training completed successfully artifacts, no-NaN, loss
-    stayed in qwen2-pretraining scale (tiny-qwen2-129m final pretrain CE ~3.92).
-    """
+    """Verify that training completed successfully by checking artifacts and loss."""
     output_path = Path(temp_dir)

     model_files = list(output_path.glob("*.bin")) + list(
@@ -30,13 +31,19 @@ def verify_training_success(temp_dir):
         "No checkpoint files found - training may have failed"
     )

-    check_tensorboard_loss_decreased(
-        temp_dir + "/runs",
-        initial_window=10,
-        final_window=10,
-        max_initial=5.0,
-        max_final=4.7,
-    )
+    tb_log_path = most_recent_subdir(temp_dir + "/runs")
+    if tb_log_path:
+        event_files = sorted(os.listdir(tb_log_path))
+        if event_files:
+            event_file = os.path.join(tb_log_path, event_files[0])
+            reader = SummaryReader(event_file)
+            df = reader.scalars
+            train_loss_df = df[df.tag == "train/train_loss"]
+            if len(train_loss_df) > 0:
+                final_loss = train_loss_df.value.values[-1]
+                assert not torch.isnan(torch.tensor(final_loss)), (
+                    f"Training loss is NaN: {final_loss}"
+                )


 class TestFSDP1:
@@ -49,7 +56,7 @@
     def test_fft_sft(self, temp_dir, fsdp_cpu_ram_efficient_loading):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -60,12 +67,11 @@
                     },
                 ],
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -81,9 +87,6 @@
                     "fsdp_use_orig_params": False,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
             }
         )
@@ -123,7 +126,7 @@
     def test_lora_sft(self, temp_dir, adapter_config):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -137,15 +140,14 @@
                 "load_in_4bit": adapter_config["load_in_4bit"],
                 "lora_r": 8,
                 "lora_alpha": 16,
-                "lora_dropout": 0.0,
+                "lora_dropout": 0.05,
                 "lora_target_linear": True,
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 1e-3,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -161,9 +163,6 @@
                     "fsdp_use_orig_params": False,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
             }
         )
@@ -191,7 +190,7 @@
     def test_dpo_fft(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "rl": "dpo",
@@ -204,11 +203,11 @@
                     },
                 ],
                 "num_epochs": 1,
-                "max_steps": 20,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -224,9 +223,6 @@
                     "fsdp_use_orig_params": False,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
             }
         )
@@ -266,7 +262,7 @@
     def test_dpo_lora(self, temp_dir, adapter_config):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "load_in_4bit": adapter_config["load_in_4bit"],
                 "rl": "dpo",
                 "chat_template": "chatml",
@@ -285,11 +281,11 @@
                     },
                 ],
                 "num_epochs": 1,
-                "max_steps": 20,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 1e-3,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -305,9 +301,6 @@
                     "fsdp_use_orig_params": False,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": "auto",
                 "tf32": True,
             }

View File

@@ -1,23 +1,24 @@
 """Test module for FSDP2 multi-GPU functionality."""

+import os
 from pathlib import Path

 import pytest
+import torch
 import yaml
 from accelerate.test_utils import execute_subprocess_async
+from tbparse import SummaryReader
 from transformers.testing_utils import get_torch_dist_unique_port

 from axolotl.utils.dict import DictDefault
-from tests.e2e.utils import check_tensorboard_loss_decreased, require_torch_2_7_0
+from tests.e2e.utils import most_recent_subdir, require_torch_2_7_0

 AXOLOTL_ROOT = Path(__file__).parent.parent.parent.parent


 def verify_training_success(temp_dir):
-    """Verify that training completed successfully artifacts, no-NaN, loss
-    stayed in qwen2-pretraining scale (tiny-qwen2-129m final pretrain CE ~3.92).
-    """
+    """Verify that training completed successfully by checking artifacts and loss."""
     output_path = Path(temp_dir)

     model_files = list(output_path.glob("*.bin")) + list(
@@ -30,13 +31,19 @@ def verify_training_success(temp_dir):
         "No checkpoint files found - training may have failed"
     )

-    check_tensorboard_loss_decreased(
-        temp_dir + "/runs",
-        initial_window=10,
-        final_window=10,
-        max_initial=5.0,
-        max_final=4.7,
-    )
+    tb_log_path = most_recent_subdir(temp_dir + "/runs")
+    if tb_log_path:
+        event_files = sorted(os.listdir(tb_log_path))
+        if event_files:
+            event_file = os.path.join(tb_log_path, event_files[0])
+            reader = SummaryReader(event_file)
+            df = reader.scalars
+            train_loss_df = df[df.tag == "train/train_loss"]
+            if len(train_loss_df) > 0:
+                final_loss = train_loss_df.value.values[-1]
+                assert not torch.isnan(torch.tensor(final_loss)), (
+                    f"Training loss is NaN: {final_loss}"
+                )


 class TestFSDP2:
@@ -50,7 +57,7 @@
     def test_fft_sft(self, temp_dir, fsdp_cpu_ram_efficient_loading):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -61,12 +68,11 @@
                     },
                 ],
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -80,9 +86,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
             }
         )
@@ -111,7 +114,7 @@
     def test_lora_sft(self, temp_dir, peft_use_dora):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -125,15 +128,14 @@
                 "adapter": "lora",
                 "lora_r": 8,
                 "lora_alpha": 16,
-                "lora_dropout": 0.0,
+                "lora_dropout": 0.05,
                 "lora_target_linear": True,
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 1e-3,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -147,9 +149,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
                 # explicitly disable LORA kernels, as they may be auto-enabled
                 "lora_mlp_kernel": False,
@@ -181,7 +180,7 @@
     def test_lora_sft_kernels(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -196,12 +195,11 @@
                 "lora_alpha": 16,
                 "lora_target_linear": True,
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 1e-3,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -215,9 +213,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
                 "lora_mlp_kernel": True,
                 "lora_qkv_kernel": True,
@@ -248,7 +243,7 @@
     def test_qlora_sft(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -262,15 +257,14 @@
                 "adapter": "qlora",
                 "lora_r": 8,
                 "lora_alpha": 16,
-                "lora_dropout": 0.0,
+                "lora_dropout": 0.05,
                 "lora_target_linear": True,
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 1e-3,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -284,9 +278,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
             }
         )
@@ -314,7 +305,7 @@
     def test_qlora_sft_kernels(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -330,12 +321,11 @@
                 "lora_alpha": 16,
                 "lora_target_linear": True,
                 "num_epochs": 1,
-                "max_steps": 80,
-                "warmup_steps": 5,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 1e-3,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -349,9 +339,6 @@
                    "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
                 "bf16": True,
                 "lora_mlp_kernel": True,
                 "lora_qkv_kernel": True,
@@ -383,7 +370,7 @@
     def test_dpo_fft(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "rl": "dpo",
@@ -396,11 +383,11 @@
                     },
                 ],
                 "num_epochs": 1,
-                "max_steps": 20,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -414,9 +401,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
             }
         )
@@ -444,7 +428,7 @@
     def test_dpo_lora(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "rl": "dpo",
                 "chat_template": "chatml",
@@ -461,11 +445,11 @@
                 "lora_dropout": 0.05,
                 "lora_target_linear": True,
                 "num_epochs": 1,
-                "max_steps": 20,
+                "max_steps": 2,
                 "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 1e-3,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
                 "flash_attention": True,
@@ -479,9 +463,6 @@
                     "reshard_after_forward": True,
                 },
                 "use_tensorboard": True,
-                "seed": 42,
-                "sample_packing": True,
-                "pad_to_sequence_len": True,
             }
         )

View File

@@ -40,7 +40,7 @@ def _run_training(temp_dir, cfg):
 def _base_lora_fsdp2_config(temp_dir, **overrides):
     """Base config for LoRA + FSDP2 + kernel tests."""
     cfg = {
-        "base_model": "axolotl-ai-co/tiny-qwen3-129m",
+        "base_model": "Qwen/Qwen3-0.6B",
         "sequence_len": 512,
         "val_set_size": 0.0,
         "datasets": [

View File

@@ -8,7 +8,7 @@ from accelerate.test_utils import execute_subprocess_async, get_torch_dist_uniqu
 from axolotl.utils.dict import DictDefault
-from tests.e2e.utils import check_tensorboard_loss_decreased, require_torch_2_7_0
+from tests.e2e.utils import check_tensorboard, require_torch_2_7_0


 class TestTensorParallel:
@@ -21,7 +21,7 @@
     def test_fft_sft(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+                "base_model": "Qwen/Qwen2.5-0.5B",
                 "sequence_len": 2048,
                 "val_set_size": 0.01,
                 "datasets": [
@@ -63,6 +63,6 @@
             ]
         )

-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs", max_initial=5.0, max_final=4.7
+        check_tensorboard(
+            temp_dir + "/runs", "train/train_loss", 1.0, "Train Loss (%s) is too high"
         )

View File

@@ -32,12 +32,12 @@ from axolotl.utils.dict import DictDefault
 MODEL_CONFIGS = [
     {
-        "name": "axolotl-ai-co/tiny-mistral-25m",
+        "name": "trl-internal-testing/tiny-MistralForCausalLM-0.2",
         "expected_activation": apply_lora_mlp_swiglu,
         "dtype": torch.float16,
     },
     {
-        "name": "axolotl-ai-co/tiny-qwen2-129m",
+        "name": "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
         "expected_activation": apply_lora_mlp_swiglu,
         "dtype": torch.float16,
     },
@@ -47,7 +47,7 @@
         "dtype": torch.float32,
     },
     {
-        "name": "axolotl-ai-co/tiny-gemma2-137m",
+        "name": "trl-internal-testing/tiny-Gemma2ForCausalLM",
         "expected_activation": apply_lora_mlp_geglu,
         "dtype": torch.float16,
     },
@@ -159,7 +159,7 @@ def test_swiglu_mlp_integration(small_llama_model):
 def test_geglu_model_integration():
     """Test GeGLU activation with Gemma model."""
     model = AutoModelForCausalLM.from_pretrained(
-        "axolotl-ai-co/tiny-gemma2-137m",
+        "trl-internal-testing/tiny-Gemma2ForCausalLM",
         dtype=torch.float16,
         device_map="cuda:0",
     )

View File

@@ -4,16 +4,14 @@ E2E tests for falcon
 import unittest

+import pytest
+
 from axolotl.common.datasets import load_datasets
 from axolotl.train import train
 from axolotl.utils.config import normalize_config, validate_config
 from axolotl.utils.dict import DictDefault

-from ..utils import (
-    check_model_output_exists,
-    check_tensorboard_loss_decreased,
-    with_temp_dir,
-)
+from ..utils import check_model_output_exists, with_temp_dir


 class TestFalconPatched(unittest.TestCase):
@@ -21,12 +19,13 @@
     Test case for Falcon models
     """

+    @pytest.mark.skip(reason="no tiny models for testing with safetensors")
     @with_temp_dir
     def test_qlora(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-falcon-42m",
-                "flash_attention": False,
+                "base_model": "illuin/tiny-random-FalconForCausalLM",
+                "flash_attention": True,
                 "sample_packing": True,
                 "sequence_len": 2048,
                 "load_in_4bit": True,
@@ -48,20 +47,17 @@
                     },
                 ],
                 "num_epochs": 2,
-                "micro_batch_size": 4,
+                "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_bnb_8bit",
                 "lr_scheduler": "cosine",
-                "max_steps": 50,
                 "logging_steps": 1,
-                "save_steps": 50,
-                "eval_steps": 50,
+                "max_steps": 20,
+                "save_steps": 10,
+                "eval_steps": 10,
                 "bf16": "auto",
                 "save_first_step": False,
-                "use_tensorboard": True,
-                "seed": 42,
             }
         )
         cfg = validate_config(cfg)
@@ -70,20 +66,14 @@
         train(cfg=cfg, dataset_meta=dataset_meta)
         check_model_output_exists(temp_dir, cfg)
-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs",
-            initial_window=5,
-            final_window=5,
-            max_initial=6.0,
-            max_final=4.7,
-        )

+    @pytest.mark.skip(reason="no tiny models for testing with safetensors")
     @with_temp_dir
     def test_ft(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-falcon-42m",
-                "flash_attention": False,
+                "base_model": "illuin/tiny-random-FalconForCausalLM",
+                "flash_attention": True,
                 "sample_packing": True,
                 "sequence_len": 2048,
                 "val_set_size": 0.05,
@@ -98,20 +88,17 @@
                     },
                 ],
                 "num_epochs": 2,
-                "micro_batch_size": 4,
+                "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_bnb_8bit",
                 "lr_scheduler": "cosine",
-                "max_steps": 50,
                 "logging_steps": 1,
-                "save_steps": 50,
-                "eval_steps": 50,
+                "max_steps": 20,
+                "save_steps": 10,
+                "eval_steps": 10,
                 "bf16": "auto",
                 "save_first_step": False,
-                "use_tensorboard": True,
-                "seed": 42,
             }
         )
         cfg = validate_config(cfg)
@@ -120,10 +107,3 @@
         train(cfg=cfg, dataset_meta=dataset_meta)
         check_model_output_exists(temp_dir, cfg)
-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs",
-            initial_window=5,
-            final_window=5,
-            max_initial=6.0,
-            max_final=4.7,
-        )

View File

@@ -9,12 +9,7 @@ from axolotl.train import train
 from axolotl.utils.config import normalize_config, validate_config
 from axolotl.utils.dict import DictDefault

-from ..utils import (
-    check_model_output_exists,
-    check_tensorboard_loss_decreased,
-    require_torch_2_6_0,
-    with_temp_dir,
-)
+from ..utils import check_model_output_exists, require_torch_2_6_0, with_temp_dir


 class TestMistral(unittest.TestCase):
@@ -27,7 +22,7 @@
     def test_lora_packing(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-mistral-25m",
+                "base_model": "trl-internal-testing/tiny-MistralForCausalLM-0.2",
                 "flash_attention": True,
                 "sample_packing": True,
                 "sequence_len": 1024,
@@ -50,20 +45,17 @@
                     },
                 ],
                 "num_epochs": 2,
-                "micro_batch_size": 4,
+                "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
-                "max_steps": 50,
                 "logging_steps": 1,
-                "save_steps": 50,
-                "eval_steps": 50,
+                "max_steps": 5,
+                "save_steps": 3,
+                "eval_steps": 4,
                 "bf16": "auto",
                 "save_first_step": False,
-                "use_tensorboard": True,
-                "seed": 42,
             }
         )
         cfg = validate_config(cfg)
@@ -72,19 +64,12 @@
         train(cfg=cfg, dataset_meta=dataset_meta)
         check_model_output_exists(temp_dir, cfg)
-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs",
-            initial_window=5,
-            final_window=5,
-            max_initial=5.5,
-            max_final=4.3,
-        )

     @with_temp_dir
     def test_ft_packing(self, temp_dir):
         cfg = DictDefault(
             {
-                "base_model": "axolotl-ai-co/tiny-mistral-25m",
+                "base_model": "trl-internal-testing/tiny-MistralForCausalLM-0.2",
                 "flash_attention": True,
                 "sample_packing": True,
                 "sequence_len": 1024,
@@ -101,20 +86,17 @@
                     },
                 ],
                 "num_epochs": 2,
-                "micro_batch_size": 4,
+                "micro_batch_size": 2,
                 "gradient_accumulation_steps": 1,
                 "output_dir": temp_dir,
-                "learning_rate": 2e-4,
+                "learning_rate": 0.00001,
                 "optimizer": "adamw_torch_fused",
                 "lr_scheduler": "cosine",
-                "max_steps": 50,
                 "logging_steps": 1,
-                "save_steps": 50,
-                "eval_steps": 50,
+                "max_steps": 5,
+                "save_steps": 3,
+                "eval_steps": 4,
                 "bf16": "auto",
                 "save_first_step": False,
-                "use_tensorboard": True,
-                "seed": 42,
             }
         )
         cfg = validate_config(cfg)
@@ -123,10 +105,3 @@
         train(cfg=cfg, dataset_meta=dataset_meta)
         check_model_output_exists(temp_dir, cfg)
-        check_tensorboard_loss_decreased(
-            temp_dir + "/runs",
-            initial_window=5,
-            final_window=5,
-            max_initial=5.5,
-            max_final=4.3,
-        )

View File

@@ -9,11 +9,7 @@ from axolotl.train import train
from axolotl.utils.config import normalize_config, validate_config
from axolotl.utils.dict import DictDefault
- from ..utils import (
- check_model_output_exists,
- check_tensorboard_loss_decreased,
- with_temp_dir,
- )
+ from ..utils import check_model_output_exists, with_temp_dir
class TestMixtral(unittest.TestCase):
@@ -25,7 +21,8 @@ class TestMixtral(unittest.TestCase):
def test_qlora(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": True,
"sample_packing": True,
"sequence_len": 2048,
@@ -33,7 +30,7 @@ class TestMixtral(unittest.TestCase):
"adapter": "qlora",
"lora_r": 16,
"lora_alpha": 32,
- "lora_dropout": 0.0,
+ "lora_dropout": 0.1,
"lora_target_linear": True,
"val_set_size": 0.05,
"special_tokens": {},
@@ -44,21 +41,17 @@ class TestMixtral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 3e-3,
+ "learning_rate": 0.00001,
"optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 80,
- "warmup_steps": 5,
- "logging_steps": 1,
- "save_steps": 80,
- "eval_steps": 80,
+ "max_steps": 5,
+ "save_steps": 3,
+ "eval_steps": 4,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
cfg = validate_config(cfg)
@@ -67,19 +60,13 @@ class TestMixtral(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=10,
- final_window=10,
- max_initial=6.0,
- max_final=4.7,
- )
@with_temp_dir
def test_ft(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": True,
"sample_packing": True,
"sequence_len": 2048,
@@ -92,21 +79,17 @@ class TestMixtral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 5e-4,
- "optimizer": "adamw_torch_fused",
+ "learning_rate": 0.00001,
+ "optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 80,
- "warmup_steps": 5,
- "logging_steps": 1,
- "save_steps": 80,
- "eval_steps": 80,
+ "max_steps": 5,
+ "save_steps": 3,
+ "eval_steps": 4,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
cfg = validate_config(cfg)
@@ -115,10 +98,3 @@ class TestMixtral(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=6.0,
- max_final=4.7,
- )
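The Mixtral hunks also add a tokenizer_config pointing at a separate repo from base_model. A minimal sketch of the resolution that setting implies, assuming axolotl prefers tokenizer_config over base_model when both are present:

    from transformers import AutoTokenizer

    def resolve_tokenizer(cfg: dict):
        # Fall back to the model repo when no explicit tokenizer is configured.
        tokenizer_path = cfg.get("tokenizer_config") or cfg["base_model"]
        return AutoTokenizer.from_pretrained(tokenizer_path)

    # e.g. resolve_tokenizer({"base_model": "hf-internal-testing/Mixtral-tiny",
    #                         "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF"})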

View File

@@ -22,7 +22,8 @@ class TestModelPatches(unittest.TestCase):
def test_mixtral_multipack(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": True,
"sample_packing": True,
"sequence_len": 2048,
@@ -56,7 +57,7 @@ class TestModelPatches(unittest.TestCase):
def test_mistral_multipack(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mistral-25m",
+ "base_model": "trl-internal-testing/tiny-MistralForCausalLM-0.2",
"flash_attention": True,
"sample_packing": True,
"sequence_len": 2048,

View File

@@ -9,11 +9,7 @@ from axolotl.train import train
from axolotl.utils.config import normalize_config, validate_config
from axolotl.utils.dict import DictDefault
- from ..utils import (
- check_model_output_exists,
- check_tensorboard_loss_decreased,
- with_temp_dir,
- )
+ from ..utils import check_model_output_exists, with_temp_dir
class TestPhiMultipack(unittest.TestCase):
@@ -25,7 +21,7 @@ class TestPhiMultipack(unittest.TestCase):
def test_ft_packed(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-phi-64m",
+ "base_model": "microsoft/phi-1_5",
"model_type": "PhiForCausalLM",
"tokenizer_type": "AutoTokenizer",
"sequence_len": 1024,
@@ -47,20 +43,17 @@ class TestPhiMultipack(unittest.TestCase):
"dataset_shard_num": 10,
"dataset_shard_idx": 0,
"num_epochs": 1,
- "micro_batch_size": 2,
+ "micro_batch_size": 1,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
- "optimizer": "adamw_torch_fused",
+ "learning_rate": 0.00001,
+ "optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "eval_steps": 50,
- "save_steps": 50,
+ "max_steps": 5,
+ "eval_steps": 3,
+ "save_steps": 4,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
@@ -70,19 +63,12 @@ class TestPhiMultipack(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=6.0,
- max_final=4.7,
- )
@with_temp_dir
def test_qlora_packed(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-phi-64m",
+ "base_model": "microsoft/phi-1_5",
"model_type": "PhiForCausalLM",
"tokenizer_type": "AutoTokenizer",
"sequence_len": 1024,
@@ -108,20 +94,17 @@ class TestPhiMultipack(unittest.TestCase):
"dataset_shard_num": 10,
"dataset_shard_idx": 0,
"num_epochs": 1,
- "micro_batch_size": 2,
+ "micro_batch_size": 1,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "eval_steps": 50,
- "save_steps": 50,
+ "max_steps": 5,
+ "eval_steps": 3,
+ "save_steps": 4,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
@@ -131,10 +114,3 @@ class TestPhiMultipack(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=6.0,
- max_final=4.7,
- )

View File

@@ -18,7 +18,7 @@ from transformers import AutoModelForCausalLM
# Import the actual trainer methods we want to test
from axolotl.core.trainers.grpo.async_trainer import AsyncGRPOTrainer
- MODEL_NAME = "axolotl-ai-co/tiny-qwen3-129m"
+ MODEL_NAME = "Qwen/Qwen3-0.6B"
def _fix_patched_attention(model):

View File

@@ -4,16 +4,14 @@ E2E tests for falcon
import unittest
+ import pytest
from axolotl.common.datasets import load_datasets
from axolotl.train import train
from axolotl.utils.config import normalize_config, validate_config
from axolotl.utils.dict import DictDefault
- from .utils import (
- check_model_output_exists,
- check_tensorboard_loss_decreased,
- with_temp_dir,
- )
+ from .utils import check_model_output_exists, with_temp_dir
class TestFalcon(unittest.TestCase):
@@ -21,12 +19,13 @@ class TestFalcon(unittest.TestCase):
Test case for falcon
"""
+ @pytest.mark.skip(reason="no tiny models for testing with safetensors")
@with_temp_dir
def test_lora(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-falcon-42m",
- "flash_attention": False,
+ "base_model": "illuin/tiny-random-FalconForCausalLM",
+ "flash_attention": True,
"sequence_len": 1024,
"load_in_8bit": True,
"adapter": "lora",
@@ -50,21 +49,17 @@ class TestFalcon(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_torch_fused",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "warmup_steps": 5,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
@@ -74,20 +69,14 @@ class TestFalcon(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )
+ @pytest.mark.skip(reason="no tiny models for testing with safetensors")
@with_temp_dir
def test_lora_added_vocab(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-falcon-42m",
- "flash_attention": False,
+ "base_model": "illuin/tiny-random-FalconForCausalLM",
+ "flash_attention": True,
"sequence_len": 1024,
"load_in_8bit": True,
"adapter": "lora",
@@ -115,21 +104,17 @@ class TestFalcon(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_torch_fused",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "warmup_steps": 5,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
@@ -139,20 +124,14 @@ class TestFalcon(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )
+ @pytest.mark.skip(reason="no tiny models for testing with safetensors")
@with_temp_dir
def test_ft(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-falcon-42m",
- "flash_attention": False,
+ "base_model": "illuin/tiny-random-FalconForCausalLM",
+ "flash_attention": True,
"sequence_len": 1024,
"val_set_size": 0.02,
"special_tokens": {
@@ -166,23 +145,17 @@ class TestFalcon(unittest.TestCase):
},
],
"num_epochs": 2,
- "sample_packing": True,
- "pad_to_sequence_len": True,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 5e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_torch_fused",
"lr_scheduler": "cosine",
- "max_steps": 80,
- "warmup_steps": 5,
- "logging_steps": 1,
- "save_steps": 80,
- "eval_steps": 80,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
@@ -192,10 +165,3 @@ class TestFalcon(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=10,
- final_window=10,
- max_initial=5.0,
- max_final=4.7,
- )

View File

@@ -11,11 +11,7 @@ from axolotl.train import train
from axolotl.utils.config import normalize_config, validate_config
from axolotl.utils.dict import DictDefault
- from .utils import (
- check_model_output_exists,
- check_tensorboard_loss_decreased,
- with_temp_dir,
- )
+ from .utils import check_model_output_exists, with_temp_dir
class TestMistral(unittest.TestCase):
@@ -27,7 +23,7 @@ class TestMistral(unittest.TestCase):
def test_lora(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mistral-25m",
+ "base_model": "trl-internal-testing/tiny-MistralForCausalLM-0.2",
"flash_attention": True,
"sequence_len": 1024,
"load_in_8bit": True,
@@ -49,18 +45,16 @@ class TestMistral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_torch_fused",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"save_first_step": False,
- "use_tensorboard": True,
}
)
@@ -70,19 +64,12 @@ class TestMistral(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=4.5,
- max_final=4.3,
- )
@with_temp_dir
def test_ft(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mistral-25m",
+ "base_model": "trl-internal-testing/tiny-MistralForCausalLM-0.2",
"flash_attention": True,
"sequence_len": 1024,
"val_set_size": 0.02,
@@ -98,18 +85,16 @@ class TestMistral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_torch_fused",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"save_first_step": False,
- "use_tensorboard": True,
}
)
if is_torch_bf16_gpu_available():
@@ -123,10 +108,3 @@ class TestMistral(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=4.5,
- max_final=4.3,
- )

View File

@@ -12,11 +12,7 @@ from axolotl.train import train
from axolotl.utils.config import normalize_config, validate_config
from axolotl.utils.dict import DictDefault
- from .utils import (
- check_model_output_exists,
- check_tensorboard_loss_decreased,
- with_temp_dir,
- )
+ from .utils import check_model_output_exists, with_temp_dir
class TestMixtral(unittest.TestCase):
@@ -28,7 +24,8 @@ class TestMixtral(unittest.TestCase):
def test_qlora_w_fa2(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": True,
"sequence_len": 1024,
"load_in_4bit": True,
@@ -54,18 +51,16 @@ class TestMixtral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"save_first_step": False,
- "use_tensorboard": True,
}
)
@@ -79,19 +74,13 @@ class TestMixtral(unittest.TestCase):
== torch.float32
)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )
@with_temp_dir
def test_qlora_wo_fa2(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": False,
"sequence_len": 1024,
"load_in_4bit": True,
@@ -117,18 +106,16 @@ class TestMixtral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"save_first_step": False,
- "use_tensorboard": True,
}
)
@@ -142,19 +129,13 @@ class TestMixtral(unittest.TestCase):
== torch.float32
)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )
@with_temp_dir
def test_16bit_lora_w_fa2(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": True,
"sequence_len": 1024,
"adapter": "lora",
@@ -179,18 +160,16 @@ class TestMixtral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"save_first_step": False,
- "use_tensorboard": True,
}
)
if is_torch_bf16_gpu_available():
@@ -208,19 +187,13 @@ class TestMixtral(unittest.TestCase):
== torch.float32
)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )
@with_temp_dir
def test_16bit_lora_wo_fa2(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": False,
"sequence_len": 1024,
"adapter": "lora",
@@ -245,18 +218,16 @@ class TestMixtral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"save_first_step": False,
- "use_tensorboard": True,
}
)
@@ -274,19 +245,13 @@ class TestMixtral(unittest.TestCase):
== torch.float32
)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )
@with_temp_dir
def test_ft(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-mixtral-30m",
+ "base_model": "hf-internal-testing/Mixtral-tiny",
+ "tokenizer_config": "LoneStriker/Mixtral-8x7B-v0.1-HF",
"flash_attention": True,
"sequence_len": 1024,
"val_set_size": 0.02,
@@ -298,18 +263,16 @@ class TestMixtral(unittest.TestCase):
},
],
"num_epochs": 2,
- "micro_batch_size": 4,
+ "micro_batch_size": 2,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "adamw_bnb_8bit",
"lr_scheduler": "cosine",
- "max_steps": 50,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 20,
+ "save_steps": 10,
+ "eval_steps": 10,
"save_first_step": False,
- "use_tensorboard": True,
}
)
if is_torch_bf16_gpu_available():
@@ -323,10 +286,3 @@ class TestMixtral(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )

View File

@@ -13,7 +13,6 @@ from axolotl.utils.dict import DictDefault
from .utils import (
check_model_output_exists,
- check_tensorboard_loss_decreased,
require_torch_2_5_1,
require_torch_2_6_0,
require_torch_2_7_0,
@@ -244,18 +243,20 @@ class TestCustomOptimizers(unittest.TestCase):
def test_came_pytorch(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-llama-50m",
- "tokenizer_type": "AutoTokenizer",
+ "base_model": "JackFram/llama-68m",
+ "tokenizer_type": "LlamaTokenizer",
"sequence_len": 1024,
"load_in_8bit": True,
"adapter": "lora",
"lora_r": 8,
"lora_alpha": 16,
- "lora_dropout": 0.0,
+ "lora_dropout": 0.05,
"lora_target_linear": True,
"val_set_size": 0.1,
"special_tokens": {
- "pad_token": "<|endoftext|>",
+ "unk_token": "<unk>",
+ "bos_token": "<s>",
+ "eos_token": "</s>",
},
"datasets": [
{
@@ -264,22 +265,16 @@ class TestCustomOptimizers(unittest.TestCase):
},
],
"num_epochs": 1,
- "sample_packing": True,
- "pad_to_sequence_len": True,
"micro_batch_size": 8,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 1e-4,
+ "learning_rate": 0.00001,
"optimizer": "came_pytorch",
"adam_beta3": 0.9999,
"adam_epsilon2": 1e-16,
- "max_steps": 80,
- "warmup_steps": 5,
- "logging_steps": 1,
+ "max_steps": 5,
"lr_scheduler": "cosine",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
@@ -289,13 +284,6 @@ class TestCustomOptimizers(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=10,
- final_window=10,
- max_initial=4.0,
- max_final=3.0,
- )
@require_torch_2_7_0
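The CAME test keeps two optimizer-specific keys, adam_beta3 and adam_epsilon2, that plain AdamW configs do not use. Isolated below for clarity, assuming axolotl forwards them to the came_pytorch implementation; the comments paraphrase the roles these terms play in the CAME optimizer:

    from axolotl.utils.dict import DictDefault

    came_fragment = DictDefault(
        {
            "optimizer": "came_pytorch",
            "learning_rate": 0.00001,
            "adam_beta3": 0.9999,    # decay for CAME's extra confidence statistic
            "adam_epsilon2": 1e-16,  # second regularization epsilon used by CAME
        }
    )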

View File

@@ -9,11 +9,7 @@ from axolotl.train import train
from axolotl.utils.config import normalize_config, validate_config
from axolotl.utils.dict import DictDefault
- from .utils import (
- check_model_output_exists,
- check_tensorboard_loss_decreased,
- with_temp_dir,
- )
+ from .utils import check_model_output_exists, with_temp_dir
class TestPhi(unittest.TestCase):
@@ -25,7 +21,7 @@ class TestPhi(unittest.TestCase):
def test_phi_ft(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-phi-64m",
+ "base_model": "microsoft/phi-1_5",
"model_type": "AutoModelForCausalLM",
"tokenizer_type": "AutoTokenizer",
"sequence_len": 2048,
@@ -45,22 +41,18 @@ class TestPhi(unittest.TestCase):
"dataset_shard_num": 10,
"dataset_shard_idx": 0,
"num_epochs": 1,
- "micro_batch_size": 4,
+ "micro_batch_size": 1,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
- "optimizer": "adamw_torch_fused",
+ "learning_rate": 0.00001,
+ "optimizer": "paged_adamw_8bit",
"lr_scheduler": "cosine",
"flash_attention": True,
- "max_steps": 50,
- "warmup_steps": 5,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 10,
+ "save_steps": 10,
+ "eval_steps": 10,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
cfg = validate_config(cfg)
@@ -69,19 +61,12 @@ class TestPhi(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )
@with_temp_dir
def test_phi_qlora(self, temp_dir):
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-phi-64m",
+ "base_model": "microsoft/phi-1_5",
"model_type": "AutoModelForCausalLM",
"tokenizer_type": "AutoTokenizer",
"sequence_len": 2048,
@@ -105,22 +90,18 @@ class TestPhi(unittest.TestCase):
"dataset_shard_num": 10,
"dataset_shard_idx": 0,
"num_epochs": 1,
- "micro_batch_size": 4,
+ "micro_batch_size": 1,
"gradient_accumulation_steps": 1,
"output_dir": temp_dir,
- "learning_rate": 2e-4,
+ "learning_rate": 0.00001,
"optimizer": "paged_adamw_8bit",
"lr_scheduler": "cosine",
"flash_attention": True,
- "max_steps": 50,
- "warmup_steps": 5,
- "logging_steps": 1,
- "save_steps": 50,
- "eval_steps": 50,
+ "max_steps": 10,
+ "save_steps": 10,
+ "eval_steps": 10,
"bf16": "auto",
"save_first_step": False,
- "use_tensorboard": True,
- "seed": 42,
}
)
cfg = validate_config(cfg)
@@ -129,10 +110,3 @@ class TestPhi(unittest.TestCase):
train(cfg=cfg, dataset_meta=dataset_meta)
check_model_output_exists(temp_dir, cfg)
- check_tensorboard_loss_decreased(
- temp_dir + "/runs",
- initial_window=5,
- final_window=5,
- max_initial=5.0,
- max_final=4.7,
- )

View File

@@ -18,7 +18,7 @@ class TestPreprocess:
cfg = DictDefault(
{
- "base_model": "axolotl-ai-co/tiny-qwen2-129m",
+ "base_model": "Qwen/Qwen2.5-0.5B",
"sequence_len": 2048,
"val_set_size": 0.01,
"datasets": [

View File

@@ -45,7 +45,7 @@ def _get_fake_quant_config_dtype(config):
@pytest.fixture()
def model():
dummy_model = AutoModelForCausalLM.from_pretrained(
- "axolotl-ai-co/tiny-qwen2-129m",
+ "Qwen/Qwen2-0.5B",
device_map="auto",
dtype=torch.bfloat16,
)

View File

@@ -17,7 +17,7 @@ class TestE2eQwen:
Test cases for qwen models
"""
- @pytest.mark.parametrize("base_model", ["axolotl-ai-co/tiny-qwen2-129m"])
+ @pytest.mark.parametrize("base_model", ["Qwen/Qwen2-0.5B", "Qwen/Qwen2.5-0.5B"])
def test_dpo(self, base_model, temp_dir):
cfg = DictDefault(
{

View File

@@ -199,106 +199,6 @@ def check_tensorboard(
assert df.value.values[-1] > 1e-5, "Expected loss to be greater than zero"
- def check_tensorboard_loss_decreased(
- temp_run_dir: str,
- tag: str | None = None,
- initial_window: int = 1,
- final_window: int = 1,
- min_delta: float | None = None,
- max_initial: float | None = None,
- max_final: float | None = None,
- max_loss_ratio: float = 0.95,
- ) -> None:
- """Check that training actually learned — loss went down and stayed in
- a sensible range.
- Used with the tiny ``axolotl-ai-co/tiny-*`` CI models, where pretraining
- was brief enough that final loss won't clear the absolute thresholds used
- for 135M+ models — but the training pipeline should still behave.
- ``train/train_loss`` is only logged once (end-of-training aggregate). The
- per-step tag is ``train/loss`` for SFT/LM trainers and may vary across
- trainers (e.g. DPO). When ``tag`` is None we try common per-step tags in
- order and use the first with enough samples.
- Two kinds of regression we guard against:
- 1. **Loss blew up.** A silent bug (e.g. broken label masking) can start
- training at an absurdly high loss. ``max_initial`` / ``max_final``
- assert the measured means stay at-or-below bounds measured from a
- known-good run. Both are optional but strongly encouraged — loss
- going *down* from a bad starting scale still looks like "learning."
- 2. **Loss didn't go down enough.** ``max_loss_ratio`` (default 0.95)
- requires ``final <= initial * ratio``. A default below 1.0 means the
- final window mean must sit at least 5% below the initial window mean
- — real learning, not noise that happened to land below start. Only
- raise this for configs where a smaller drop is expected *and*
- documented (e.g. DPO with near-trivial pairs); in that case you are
- intentionally weakening the test.
- ``min_delta`` is optional; when set, additionally requires
- ``final + min_delta <= initial`` — use for configs with enough signal
- to demand a specific minimum absolute drop.
- """
- tb_log_path = most_recent_subdir(temp_run_dir)
- event_file = os.path.join(tb_log_path, sorted(os.listdir(tb_log_path))[0])
- reader = SummaryReader(event_file)
- df = reader.scalars
- if tag is None:
- candidates = ["train/loss", "train/train_loss"]
- else:
- candidates = [tag]
- required = initial_window + final_window
- chosen_tag, values = None, None
- for candidate in candidates:
- sub = df[df.tag == candidate]
- if len(sub) >= required:
- chosen_tag = candidate
- values = sub.value.values
- break
- available = sorted({t for t in df.tag.unique() if "loss" in t.lower()})
- assert values is not None, (
- f"None of the tags {candidates} had ≥{required} logged steps. "
- f"Loss tags present: {available}"
- )
- initial = float(values[:initial_window].mean())
- final = float(values[-final_window:].mean())
- print(
- f"[check_tensorboard_loss_decreased] tag={chosen_tag} n={len(values)} "
- f"initial_mean{initial_window}={initial:.4f} final_mean{final_window}={final:.4f}"
- )
- assert final > 1e-5, "Expected loss to be greater than zero"
- assert final <= initial * max_loss_ratio, (
- f"Loss did not decrease for {chosen_tag}: "
- f"initial(mean of first {initial_window})={initial:.4f}, "
- f"final(mean of last {final_window})={final:.4f}, "
- f"ratio={final / initial:.4f} (max allowed {max_loss_ratio}). "
- f"Expected final <= initial — training did not learn."
- )
- if min_delta is not None:
- assert final + min_delta <= initial, (
- f"Expected loss to decrease by at least {min_delta} for {chosen_tag}: "
- f"initial={initial:.4f}, final={final:.4f}, delta={initial - final:.4f}"
- )
- if max_initial is not None:
- assert initial <= max_initial, (
- f"Initial loss {initial:.4f} is above the expected max {max_initial}. "
- f"Absolute scale is wrong — probably a silent regression "
- f"(e.g. bad label masking) that bumped the starting point."
- )
- if max_final is not None:
- assert final <= max_final, (
- f"Final loss {final:.4f} is above the expected max {max_final}. "
- f"Absolute scale is wrong — probably a silent regression "
- f"(e.g. bad label masking) that bumped the endpoint."
- )
def check_model_output_exists(temp_dir: str, cfg: DictDefault) -> None:
"""
helper function to check if a model output file exists after training
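For reference, a standalone illustration of the windowed ratio check the deleted helper enforced, with made-up loss values (initial window mean 4.90, final window mean 3.92, ratio 0.80, within the default max_loss_ratio of 0.95):

    values = [5.2, 5.0, 4.9, 4.8, 4.6, 4.2, 4.0, 3.9, 3.8, 3.7]
    initial = sum(values[:5]) / 5   # 4.90, mean over the initial window
    final = sum(values[-5:]) / 5    # 3.92, mean over the final window
    assert final > 1e-5                 # loss must stay above zero
    assert final <= initial * 0.95      # requires at least a 5% drop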