
Finetune Devstral with Axolotl

Devstral Small is a 24B-parameter open-source model from MistralAI, available on HuggingFace as Devstral-Small-2505 and Devstral-Small-2507. Devstral-Small-2507 is the latest version and adds function calling support.

This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.

The model was fine-tuned on top of Mistral-Small-3.1 without the vision layer and supports a context window of up to 128k tokens.

Thanks to the team at MistralAI for giving us early access to prepare for this release.

Getting started

  1. Install Axolotl following the installation guide.

     Here is an example of how to install via pip:

     # Ensure you have PyTorch installed (PyTorch 2.6.0 minimum)
     pip3 install packaging==23.2 setuptools==75.8.0 wheel ninja
     pip3 install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'

  2. Install Cut Cross Entropy to reduce training VRAM usage:

     python scripts/cutcrossentropy_install.py | sh

  3. Run the finetuning example:

     axolotl train examples/devstral/devstral-small-qlora.yml

This config uses about 21GB VRAM.
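For orientation, a QLoRA config like the one above combines a handful of core Axolotl settings. The sketch below is illustrative only; the field values (model name, dataset path, batch sizes) are assumptions, not the shipped file, so refer to examples/devstral/devstral-small-qlora.yml for the real settings:

```yaml
# Illustrative sketch, not the shipped config.
base_model: mistralai/Devstral-Small-2507   # assumed model id
load_in_4bit: true        # 4-bit base weights (the "Q" in QLoRA)
adapter: qlora            # train low-rank adapters on top

datasets:
  - path: your/dataset    # placeholder; see the dataset docs
    type: chat_template   # OpenAI Messages-style conversations

# Example training knobs; tune to your hardware
sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
```

Removing the `load_in_4bit` and `adapter` lines turns this into a full finetune, at a much higher VRAM cost.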

Let us know how it goes. Happy finetuning! 🚀

TIPS

  • You can run a full finetune by removing adapter: qlora and load_in_4bit: true from the config.
  • Read more on how to load your own dataset in the docs.
  • The dataset format follows the OpenAI Messages format as seen here.
  • Learn how to use function calling with Axolotl in the docs.
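To make the OpenAI Messages format concrete, here is one hypothetical conversation shown as YAML for readability; on disk each conversation is typically stored as one JSON object per line (JSONL), with the same role/content structure:

```yaml
# One sample conversation (illustrative content)
messages:
  - role: system
    content: You are a helpful coding assistant.
  - role: user
    content: Write a Python function that reverses a string.
  - role: assistant
    content: "def reverse(s): return s[::-1]"
```

With type: chat_template, Axolotl applies the tokenizer's chat template to rows like this and masks non-assistant turns so the loss is computed only on the model's replies.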

Optimization Guides

Limitations

At the moment, we only support the mistral-common tokenizer for supervised fine-tuning, and only with type: chat_template.

In addition, overriding tokens is not yet supported.

Future Work

  • Add parity for Preference Tuning, RL, multi-modal training, etc.
  • Add parity for other tokenizer configs, such as overriding tokens.