Axolotl

🎉 Latest Updates

  • 2025/07: Voxtral with mistral-common tokenizer support has been integrated into Axolotl. Read the docs!
  • 2025/07: TiledMLP support for single-GPU and multi-GPU training with DDP, DeepSpeed, and FSDP has been added to enable Arctic Long Sequence Training (ALST). See examples for using ALST with Axolotl!
  • 2025/06: Magistral with mistral-common tokenizer support has been added to Axolotl. See examples to start training your own Magistral models with Axolotl!
  • 2025/05: Quantization Aware Training (QAT) support has been added to Axolotl. Explore the docs to learn more!
  • 2025/04: Llama 4 support has been added to Axolotl. See examples to start training your own Llama 4 models with Axolotl's linearized version!
  • 2025/03: Axolotl has implemented Sequence Parallelism (SP) support. Read the blog and docs to learn how to scale your context length when fine-tuning.
  • 2025/03: (Beta) Fine-tuning Multimodal models is now supported in Axolotl. Check out the docs to fine-tune your own!
  • 2025/02: Axolotl has added LoRA optimizations to reduce memory usage and improve training speed for LoRA and QLoRA in single-GPU and multi-GPU training (DDP and DeepSpeed). Jump into the docs to give it a try.
  • 2025/02: Axolotl has added GRPO support. Dive into our blog and GRPO example and have some fun!
  • 2025/01: Axolotl has added Reward Modelling / Process Reward Modelling fine-tuning support. See docs.

Overview

Axolotl is a tool designed to streamline post-training for various AI models.

Features:

  • Train a wide range of Hugging Face models with full fine-tuning, LoRA, QLoRA, and more
  • Control the entire training setup from a single YAML configuration file
  • Single-GPU and multi-GPU training via DDP, DeepSpeed, and FSDP
  • Performance optimizations such as Flash Attention and Sequence Parallelism
  • Flexible dataset handling for many prompt formats and custom datasets
  • Runs locally, in Docker, or on cloud platforms

🚀 Quick Start

Requirements:

  • NVIDIA GPU (Ampere or newer for bf16 and Flash Attention) or AMD GPU
  • Python 3.11
  • PyTorch ≥2.6.0
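
You can sanity-check your environment against these requirements with a few standard commands (torch.cuda.is_available() will print False on unsupported setups):

python3 --version
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
nvidia-smi  # NVIDIA only: lists visible GPUs and the driver version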

Installation

Using pip

pip3 install -U packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]

# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs  # OPTIONAL
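
To confirm the install, the CLI's built-in help is a quick check (this assumes pip placed the axolotl entry point on your PATH):

axolotl --help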

Using Docker

Installing with Docker can be less error-prone than installing in your own environment.

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
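
In practice you will usually want configs and checkpoints to outlive the container. Here is a sketch of the same command with a host directory mounted in (the /workspace/data mount point is an arbitrary choice, not something the image requires):

docker run --gpus '"all"' --rm -it \
  -v "$(pwd)":/workspace/data \
  axolotlai/axolotl:main-latest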

Other installation approaches are described here.

Cloud Providers

Your First Fine-tune

# Fetch axolotl examples
axolotl fetch examples

# Or, specify a custom path
axolotl fetch examples --dest path/to/folder

# Train a model using LoRA
axolotl train examples/llama-3/lora-1b.yml

That's it! Check out our Getting Started Guide for a more detailed walkthrough.
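
For a sense of what such a config contains, here is a minimal LoRA sketch written inline with a heredoc. The keys are standard Axolotl options, but the specific values (model, dataset, hyperparameters) are illustrative assumptions, not a copy of lora-1b.yml:

# Write a minimal LoRA config (illustrative values throughout)
cat > my-lora.yml <<'EOF'
base_model: meta-llama/Llama-3.2-1B   # swap in any HF causal LM you can access
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true              # apply LoRA to all linear layers

datasets:
  - path: mhenrichsen/alpaca_2k_test  # small public dataset, for illustration
    type: alpaca

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
optimizer: adamw_torch
bf16: auto

output_dir: ./outputs/my-lora
EOF

# Train with the custom config
axolotl train my-lora.yml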

📚 Documentation

🤝 Getting Help

🌟 Contributing

Contributions are welcome! Please see our Contributing Guide for details.

❤️ Sponsors

Interested in sponsoring? Contact us at wing@axolotl.ai

📜 License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.