Finetune Devstral with Axolotl
Devstral Small is a 24B-parameter open-source model from MistralAI, available on Hugging Face as Devstral-Small-2505 and Devstral-Small-2507. Devstral-Small-2507 is the latest version and adds function calling support.
This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.
The model was fine-tuned on top of Mistral-Small-3.1 with the vision layer removed, and supports a context window of up to 128k tokens.
Thanks to the team at MistralAI for giving us early access to prepare for this release.
Getting started
- Install Axolotl following the installation guide. Here is an example of how to install it from PyPI with uv:
```bash
# Ensure you have PyTorch installed (PyTorch 2.6.0 minimum)

# Option A: manage dependencies in your project
uv add 'axolotl>=0.12.0'
uv pip install flash-attn --no-build-isolation

# Option B: quick install
uv pip install 'axolotl>=0.12.0'
uv pip install flash-attn --no-build-isolation
```
- Install Cut Cross Entropy to reduce training VRAM usage:

```bash
python scripts/cutcrossentropy_install.py | sh
```
- Run the finetuning example:

```bash
axolotl train examples/devstral/devstral-small-qlora.yml
```

This config uses about 21 GB of VRAM.
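For reference, here is a minimal sketch of the kind of settings such a QLoRA config contains. The file shipped at `examples/devstral/devstral-small-qlora.yml` is authoritative; the hyperparameter values below are illustrative assumptions, not the shipped defaults, and the Cut Cross Entropy plugin path follows Axolotl's integration docs:

```yaml
# Illustrative sketch only -- see examples/devstral/devstral-small-qlora.yml
# in the Axolotl repo for the actual config; the values here are assumptions.
base_model: mistralai/Devstral-Small-2507

# QLoRA: remove these two lines to run a full finetune instead
adapter: qlora
load_in_4bit: true

lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true

sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
learning_rate: 0.0002
num_epochs: 1
bf16: true
flash_attention: true

# Cut Cross Entropy integration (plugin path assumed from Axolotl's docs)
plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true

output_dir: ./outputs/devstral-small-qlora
```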
Let us know how it goes. Happy finetuning! 🚀
Tips
- You can run a full finetuning by removing `adapter: qlora` and `load_in_4bit: true` from the config (see the sketch above).
- Read more on how to load your own dataset in the docs.
- The dataset format follows the OpenAI Messages format (a minimal example follows this list).
- Learn how to use function calling with Axolotl in the docs.
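As a sketch of that format, each training record carries a `messages` list of role/content turns. The record below is illustrative; in an actual JSONL file, each record sits on a single line and is only wrapped here for readability:

```json
{"messages": [
  {"role": "system", "content": "You are a helpful coding assistant."},
  {"role": "user", "content": "Write a Python function that reverses a string."},
  {"role": "assistant", "content": "def reverse(s: str) -> str:\n    return s[::-1]"}
]}
```

With the multi-turn masking described above, loss is computed only on the assistant turns, so system and user content is masked out of training.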
Optimization Guides
Limitations
We only support the `mistral-common` tokenizer for supervised fine-tuning at the moment, and only with `type: chat_template` (see the snippet below).
In addition, we do not support overriding tokens yet.
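Concretely, that means a dataset entry in the config is declared with that type; the path below is a placeholder for your own file:

```yaml
datasets:
  - path: ./data/train.jsonl  # placeholder: your OpenAI-Messages-format JSONL
    type: chat_template
```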
Related Resources
- MistralAI Devstral Blog
- MistralAI Devstral 1.1 Blog
- Axolotl Docs
- Axolotl GitHub
- Axolotl Website
- Axolotl Discord
Future Work
- Bring Preference Tuning, RL, multi-modal training, etc. to parity.
- Support other tokenizer config options, such as overriding tokens.