
# Finetune Magistral Small with Axolotl

Magistral Small is a 24B-parameter open-source model from MistralAI, available on HuggingFace. This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.

MistralAI has also released a proprietary medium-sized version called Magistral Medium.

Thanks to the team at MistralAI for giving us early access to prepare for this release.

## Getting started

1. Install Axolotl following the installation guide. You need to install from `main`, as Magistral support is only on nightly, or use our latest Docker images.

   Here is an example of how to install from `main` with pip:

   ```bash
   # Ensure you have PyTorch installed (PyTorch 2.6.0 recommended)
   git clone https://github.com/axolotl-ai-cloud/axolotl.git
   cd axolotl

   pip3 install packaging==23.2 setuptools==75.8.0 wheel ninja
   pip3 install --no-build-isolation -e '.[flash-attn]'
   ```

2. Run the fine-tuning example:

   ```bash
   axolotl train examples/magistral/magistral-small-qlora.yaml
   ```

This config uses about 24GB VRAM.
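For orientation, the quantization and adapter settings in an axolotl QLoRA config look roughly like this (a sketch, not the contents of the example file; the `lora_r`/`lora_alpha` values are illustrative):

```yaml
# Excerpt of typical axolotl QLoRA settings (illustrative values)
load_in_4bit: true
adapter: qlora
lora_r: 32        # hypothetical LoRA rank
lora_alpha: 16    # hypothetical LoRA scaling factor
```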

Let us know how it goes. Happy finetuning! 🚀

## Tips

- For inference, the official MistralAI team recommends `top_p: 0.95` and `temperature: 0.7` with `max_tokens: 40960`.
- You can run a full fine-tune by removing `adapter: qlora` and `load_in_4bit: true` from the config.
- Read more on how to load your own dataset in the docs.
- The dataset format follows the OpenAI Messages format as seen here.
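The OpenAI Messages format mentioned above is plain JSONL: one conversation per line, each with a `messages` array of role/content pairs. A minimal sketch of writing such a record (file name and conversation content are illustrative):

```python
import json

# One multi-turn conversation in the OpenAI "messages" format.
record = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 4."},
    ]
}

# Write one JSON object per line (JSONL).
with open("conversations.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```

During training, masking is applied so that loss is computed on assistant turns rather than on user or system text.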

## Optimization Guides

## Limitations

At the moment, we only support the mistral-common tokenizer for supervised fine-tuning, and only with `type: chat_template`.

In addition, we do not support overriding tokens yet.
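A dataset entry in the config using the supported path would then look like this (a sketch; the dataset path is a placeholder):

```yaml
datasets:
  - path: ./data/conversations.jsonl   # hypothetical local JSONL file
    type: chat_template
```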

## Future Work

  • Add parity to Preference Tuning, RL, Multi-modal, etc.
  • Add parity to other tokenizer configs like overriding tokens.