Finetune Magistral Small with Axolotl
Magistral Small is a 24B-parameter open-source model from MistralAI, available on Hugging Face as the 2506 and 2507 (see Thinking) releases. This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.
MistralAI has also released a proprietary medium-sized version called Magistral Medium.
Thanks to the team at MistralAI for giving us early access to prepare for this release.
Getting started
- Install Axolotl following the installation guide. You need to install from `main`, as Magistral support is only on nightly, or use our latest Docker images.
Here is an example of how to install from `main` with pip:
# Ensure you have PyTorch installed (PyTorch 2.6.0 minimum)
git clone https://github.com/axolotl-ai-cloud/axolotl.git
cd axolotl
pip3 install packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation -e '.[flash-attn]'
- Run the finetuning example:
axolotl train examples/magistral/magistral-small-qlora.yaml
This config uses about 24GB VRAM.
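For reference, the memory-relevant knobs in a QLoRA config for a model of this size generally look like the sketch below. This is illustrative only: the values and the `base_model` repo id are assumptions, and the shipped examples/magistral/magistral-small-qlora.yaml remains the authoritative config.
# Illustrative QLoRA settings only; values are assumptions, not copied from the example config.
base_model: mistralai/Magistral-Small-2506  # assumed Hugging Face repo id
load_in_4bit: true            # quantize the frozen base weights to 4-bit (the "Q" in QLoRA)
adapter: qlora                # train low-rank adapters instead of the full 24B parameters
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true      # attach adapters to all linear projections
sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
flash_attention: true         # reduces activation memory for longer sequences
Removing `adapter: qlora` and `load_in_4bit: true` (see TIPS below) turns this into a full finetune, at a much higher VRAM cost.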
Let us know how it goes. Happy finetuning! 🚀
Thinking
MistralAI has released their 2507 model with thinking capabilities. The model requires the multi-content dataset format with support for an extra `thinking` content type within system and assistant messages.
Example format:
{
  "messages": [
    {"role": "system", "content": [{"type": "text", "text": "{SYSTEM_PROMPT}"}]},
    {"role": "user", "content": [{"type": "text", "text": "..."}]},
    {"role": "assistant", "content": [{"type": "thinking", "thinking": "..."}, {"type": "text", "text": "..."}]}
  ]
}
Example config: ./magistral-small-think-qlora.yaml.
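To train on a dataset in this format, the `datasets:` stanza of your config points at the file with `type: chat_template`. The snippet below is a hedged sketch: the local path and `ds_type` are assumptions for a JSONL file on disk, and additional chat_template options may be needed depending on your data (the shipped think example is the reference).
datasets:
  - path: ./data/thinking_conversations.jsonl  # placeholder path to your own JSONL file
    ds_type: json                              # assumed: load the local file with the JSON loader
    type: chat_template                        # mistral-common tokenizer support is chat_template only (see Limitations)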
The thinking content part also supports an optional `closed: bool` argument (`True` by default), which controls whether the closing `[/THINK]` tag is added.
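As a rough illustration of the `closed` argument (shown in YAML for readability; on disk the dataset is JSON as above, and placing the flag on the thinking part is assumed from the description), an assistant turn whose reasoning block should be left open could look like:
# Hypothetical example: closed: false skips the trailing [/THINK] tag.
- role: assistant
  content:
    - type: thinking
      thinking: "Work through the problem step by step..."
      closed: false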
Limitations:
- You cannot mix `content: str` with `content: list[dict]`, as `datasets.load_dataset` may complain about different types for the `content` key.
- This mode does not work with custom `train_detail` and `training` at the moment.
TIPS
- We recommend adding the same (or a similar) system prompt that the model is tuned for. You can find this within the repo's files, titled `SYSTEM_PROMPT.txt`.
- For inference, the official MistralAI team recommends `top_p: 0.95` and `temperature: 0.7` with `max_tokens: 40960`.
- You can run a full finetuning by removing `adapter: qlora` and `load_in_4bit: true` from the config.
- Read more on how to load your own dataset in the docs.
- The text dataset format follows the OpenAI Messages format as seen here.
Optimization Guides
Limitations
At the moment, we only support the mistral-common tokenizer for Supervised Fine-tuning, and only with `type: chat_template`.
In addition, we do not support overriding tokens yet.
Related Resources
Future Work
- Add parity to Preference Tuning, RL, Multi-modal, etc.
- Add parity to other tokenizer configs like overriding tokens.