Finetune Voxtral with Axolotl
Voxtral is an open-source model from MistralAI, available in 3B and 24B parameter variants on Hugging Face. This guide shows how to fine-tune it with Axolotl.
Thanks to the team at MistralAI for giving us early access to prepare for this release.
Getting started
- Install Axolotl following the installation guide.
Here is an example of how to install it from PyPI:
# Ensure you have PyTorch installed (PyTorch 2.9.1 minimum)
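# For example (assumption: the default PyPI wheels suit your environment;
# pick a CUDA-specific index URL from pytorch.org if they do not):
uv pip install 'torch>=2.9.1'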
uv pip install --no-build-isolation 'axolotl>=0.16.1'
- Install the additional audio and Cut Cross-Entropy dependencies below:
# audio
uv pip install librosa==0.11.0
uv pip install 'mistral_common[audio]==1.8.3'
# Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
python scripts/cutcrossentropy_install.py | sh
- Download the sample dataset files:
# for text + audio only
wget https://huggingface.co/datasets/Nanobit/text-audio-2k-test/resolve/main/En-us-African_elephant.oga
- Run the finetuning example:
# text only
axolotl train examples/voxtral/voxtral-mini-qlora.yml
# text + audio
axolotl train examples/voxtral/voxtral-mini-audio-qlora.yml
These configs use about 4.8 GB VRAM.
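For reference, the core of these example configs pairs a QLoRA adapter with the sample dataset. The sketch below is a trimmed, assumed excerpt (the model id and comments are illustrative); the shipped files under examples/voxtral/ are authoritative:

```yaml
# Illustrative sketch only -- see examples/voxtral/voxtral-mini-audio-qlora.yml for the real config.
base_model: mistralai/Voxtral-Mini-3B-2507   # assumed Hugging Face repo id for the 3B model

datasets:
  - path: Nanobit/text-audio-2k-test   # the sample dataset referenced above
    type: chat_template                # currently the only supported type (see Limitations)

# QLoRA settings; removing these two keys turns the run into a full finetune (see Tips)
adapter: qlora
load_in_4bit: true
```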
Let us know how it goes. Happy finetuning! 🚀
Tips
- For inference, the official MistralAI team recommends `temperature: 0.2` and `top_p: 0.95` for audio understanding, and `temperature: 0.0` for transcription.
- You can run a full finetune by removing `adapter: qlora` and `load_in_4bit: true` from the config (see the config sketch above).
- Read more on how to load your own dataset in the docs.
- The text dataset format follows the OpenAI Messages format as seen here.
- The multimodal dataset format follows the OpenAI multi-content Messages format as seen here; a sketch of both formats follows this list.
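To make the two formats concrete, here is an invented pair of JSONL rows (the audio field names are an assumption based on the multi-content Messages convention; check the sample dataset above for the exact schema):

```jsonl
{"messages": [{"role": "user", "content": "Name an animal that trumpets."}, {"role": "assistant", "content": "The African elephant."}]}
{"messages": [{"role": "user", "content": [{"type": "audio", "path": "En-us-African_elephant.oga"}, {"type": "text", "text": "What animal can you hear?"}]}, {"role": "assistant", "content": "An African elephant."}]}
```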
Optimization Guides
Limitations
At the moment, we only support the mistral-common tokenizer for Supervised Fine-tuning, and only with `type: chat_template`.
In addition, we do not yet support overriding tokens.
Related Resources
Future Work
- Add support for Preference Tuning, RL, etc.
- Add support for other tokenizer configurations, such as overriding tokens.