# Finetune Mistral Small 4 with Axolotl

Mistral Small 4 is a 119B parameter (6.5B active) multimodal MoE model from MistralAI that unifies instruct, reasoning, and coding capabilities into a single model. It is available on HuggingFace at [Mistral-Small-4-119B-2603](https://huggingface.co/mistralai/Mistral-Small-4-119B-2603).

Thanks to the team at MistralAI for giving us early access to prepare for this release.

## Getting started

1. Install Axolotl following the [installation guide](https://docs.axolotl.ai/docs/installation.html).

2. Install [Cut Cross Entropy](https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy) to reduce training VRAM usage.

3. Install `transformers` from `main`:

```bash
uv pip install git+https://github.com/huggingface/transformers.git
```

4. Run one of the example configs:

```bash
# text-only
axolotl train examples/mistral4/qlora-text.yml # no experts ~69 GiB, experts ~93 GiB
axolotl train examples/mistral4/fft-text.yml

# text + vision
# run: wget https://huggingface.co/datasets/Nanobit/text-vision-2k-test/resolve/main/African_elephant.jpg
axolotl train examples/mistral4/qlora-vision.yml # no experts ~68 GiB
axolotl train examples/mistral4/fft-vision.yml
```

Note: The FFT configs are provided as a reference. Please adjust hyperparameters as needed.

## Reasoning Effort

The chat template supports a `reasoning_effort` variable to control the model's reasoning depth:

- `"none"` — instruct mode (default)
- `"high"` — reasoning mode with explicit thinking steps

Pass it via `chat_template_kwargs` under your dataset config:

```yaml
datasets:
  - path: your/dataset
    type: chat_template
    chat_template_kwargs:
      reasoning_effort: high
```
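
For a quick sanity check of how `reasoning_effort` changes the rendered prompt outside of training, you can pass it straight to the tokenizer's chat template, since extra keyword arguments to `apply_chat_template` are forwarded to the template. This is a minimal sketch, assuming the release checkpoint's tokenizer ships the chat template described above:

```python
from transformers import AutoTokenizer

# Model ID from this README; requires transformers installed from main (step 3 above).
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-4-119B-2603")

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]

# Extra kwargs are exposed to the Jinja chat template, so reasoning_effort
# switches the rendering between instruct ("none") and reasoning ("high") mode.
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
    reasoning_effort="high",
)
print(prompt)
```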

## Thinking Support

The chat template supports a `thinking` content type in assistant messages for training on reasoning traces (rendered as `[THINK]...[/THINK]` blocks).

To use thinking datasets, add the `thinking` mapping via `message_property_mappings`:

```yaml
datasets:
  - path: your/thinking-dataset
    type: chat_template
    message_property_mappings:
      role: role
      content: content
      thinking: thinking
    chat_template_kwargs:
      reasoning_effort: high
```

See the [Magistral thinking guide](../magistral/think/README.md) for dataset format details.
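
For a quick sense of the row shape, here is a hypothetical sample that matches the `role`/`content`/`thinking` mapping above (field names are an assumption for illustration; the Magistral guide linked above is the authoritative reference). The snippet writes one such row to a JSONL file:

```python
import json

# Hypothetical thinking-format row: the assistant message carries both the
# final answer ("content") and the reasoning trace ("thinking"), matching the
# message_property_mappings in the config above.
sample = {
    "messages": [
        {"role": "user", "content": "What is 17 * 24?"},
        {
            "role": "assistant",
            "content": "17 * 24 = 408.",
            "thinking": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
        },
    ]
}

with open("thinking_sample.jsonl", "w") as f:
    f.write(json.dumps(sample) + "\n")
```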

## Tips

- Read more on how to load your own dataset at [docs](https://docs.axolotl.ai/docs/dataset_loading.html).
- The text dataset format follows the OpenAI Messages format as seen [here](https://docs.axolotl.ai/docs/dataset-formats/conversation.html#chat_template).
- The vision model requires the multi-modal dataset format documented [here](https://docs.axolotl.ai/docs/multimodal.html#dataset-format) (see the sketch below).
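
As a rough illustration of that multi-modal format, the sketch below builds one hypothetical row with an image part and a text part per user turn. The exact part keys (`path`, `text`, etc.) are assumptions here; the multimodal doc linked above is the source of truth:

```python
import json

# Hypothetical multi-modal row: "content" is a list of typed parts.
# The image is referenced by a local path (e.g. the African_elephant.jpg
# downloaded for the vision example above).
sample = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": "African_elephant.jpg"},
                {"type": "text", "text": "What animal is in this picture?"},
            ],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "An African elephant."}],
        },
    ]
}

with open("vision_sample.jsonl", "w") as f:
    f.write(json.dumps(sample) + "\n")
```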

## Related Resources

- [MistralAI Mistral Small 4 Blog](https://mistral.ai/news/mistral-small-4)
- [Axolotl Docs](https://docs.axolotl.ai)
- [Axolotl GitHub](https://github.com/axolotl-ai-cloud/axolotl)
- [Axolotl Discord](https://discord.gg/7m9sfhzaf3)