Finetune Magistral Small with Axolotl
Magistral Small is a 24B-parameter open-source model from MistralAI, available on HuggingFace in the 2506, 2507 (see Thinking), and 2509 (see Vision) releases. This guide shows how to fine-tune it with Axolotl using multi-turn conversations and proper masking.
MistralAI has also released a proprietary medium-sized version called Magistral Medium.
Thanks to the team at MistralAI for giving us early access to prepare for these releases.
Getting started
- Install Axolotl following the installation guide. Here is an example of how to install from pip:
# Ensure you have PyTorch installed (PyTorch 2.7.0 minimum)
uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
- Install Cut Cross Entropy to reduce training VRAM usage:
python scripts/cutcrossentropy_install.py | sh
- Run the finetuning example:
axolotl train examples/magistral/magistral-small-qlora.yaml
This config uses about 24GB VRAM.
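If you would like a feel for what that config contains before opening it, below is a minimal QLoRA sketch built from standard Axolotl keys. The model id and hyperparameter values are illustrative assumptions rather than the contents of the shipped example, so treat examples/magistral/magistral-small-qlora.yaml as the source of truth.

```yaml
# Illustrative QLoRA sketch (assumed values), not the shipped example config.
base_model: mistralai/Magistral-Small-2509   # assumed model id; pick the release you want

load_in_4bit: true         # quantize the frozen base weights to 4-bit (the "Q" in QLoRA)
adapter: qlora             # train low-rank adapters on top of the quantized model
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true   # attach adapters to all linear projections

sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: adamw_torch
flash_attention: true
```

Removing load_in_4bit and adapter from such a config (see the Tips below) turns it into a full finetune, at the cost of far more VRAM.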
Let us know how it goes. Happy finetuning! 🚀
Thinking
MistralAI has released their 2507 model with thinking capabilities, enabling Chain-of-Thought reasoning with explicit thinking steps.
📚 See the Thinking fine-tuning guide →
Vision
MistralAI has released their 2509 model with vision capabilities.
📚 See the Vision fine-tuning guide →
Tips
- We recommend using the same or a similar system prompt to the one the model was tuned with. You can find it in the repo's files under the name SYSTEM_PROMPT.txt.
- For inference, the official MistralAI team recommends top_p: 0.95 and temperature: 0.7 with max_tokens: 40960.
- You can run a full finetuning by removing adapter: qlora and load_in_4bit: true from the config.
- Read more on how to load your own dataset in the docs.
- The text dataset format follows the OpenAI Messages format as seen here and in the sketch after this list.
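To make the last two tips concrete, here is a sketch of a single training example in the OpenAI Messages format. The conversation content is invented purely for illustration; in a JSONL dataset file, each conversation is one JSON object with exactly this structure (shown as YAML here, which maps one-to-one).

```yaml
# One invented training example in OpenAI Messages format.
messages:
  - role: system
    content: "You are a helpful assistant."   # ideally the contents of SYSTEM_PROMPT.txt
  - role: user
    content: "Summarise the difference between LoRA and QLoRA in one sentence."
  - role: assistant
    content: >-
      LoRA trains small adapter matrices on top of frozen weights, while QLoRA
      does the same with the frozen base model quantized to 4-bit.
```

With type: chat_template, Axolotl renders these messages through the model's chat template and, by default, computes the loss only on the assistant turns, which is the masking behaviour referred to at the top of this guide.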
Optimization Guides
Limitations
At the moment, we only support the mistral-common tokenizer for Supervised Fine-tuning, and only with type: chat_template.
In addition, we do not support overriding tokens yet.
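As a reference point, a dataset stanza that stays within this limitation might look like the sketch below. The dataset path is a placeholder, and the tokenizer_use_mistral_common key is shown here as an assumption about how recent Axolotl versions select the mistral-common backend, so verify it against the docs for your version or simply start from the shipped Magistral example config.

```yaml
# Hedged sketch: SFT with the mistral-common tokenizer via chat_template.
tokenizer_use_mistral_common: true   # assumed key name; verify against your Axolotl version

datasets:
  - path: data/my_conversations.jsonl   # placeholder path to a messages-format JSONL file
    type: chat_template                 # currently the only supported type with mistral-common
```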
Related Resources
Future Work
- Add parity for Preference Tuning, RL, etc.
- Add parity for other tokenizer features, such as overriding tokens.