# Mistral Small 3.1/3.2 Fine-tuning
This guide covers fine-tuning [Mistral Small 3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) and [Mistral Small 3.2](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) with vision capabilities using Axolotl.
## Prerequisites
Before starting, ensure you have:
- Installed Axolotl (see [Installation docs](https://docs.axolotl.ai/docs/installation.html))
## Getting Started
1. Install the required vision lib:
```bash
uv pip install 'mistral-common[opencv]==1.8.5'
```
2. Download the example dataset image:
```bash
wget https://huggingface.co/datasets/Nanobit/text-vision-2k-test/resolve/main/African_elephant.jpg
```
3. Run the fine-tuning:
```bash
axolotl train examples/mistral/mistral-small/mistral-small-3.1-24B-lora.yml
```
This config requires about 29.4 GiB of VRAM.
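If you run out of memory on a smaller GPU, you can lower the footprint by editing the YAML or overriding values at launch. A minimal sketch, assuming your Axolotl version supports CLI config overrides (flag names mirror the config keys; verify what is available with `axolotl train --help`):

```bash
# Hedged example: override memory-related config keys at launch time
# (micro_batch_size and gradient_checkpointing are standard Axolotl config options)
axolotl train examples/mistral/mistral-small/mistral-small-3.1-24B-lora.yml \
    --micro-batch-size 1 \
    --gradient-checkpointing true
```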
## Dataset Format
The vision model requires the multi-modal dataset format documented [here](https://docs.axolotl.ai/docs/multimodal.html#dataset-format).
One exception: passing `"image": PIL.Image` is not supported, as MistralTokenizer only accepts `path`, `url`, and `base64` inputs for now.
Example:
```json
{
  "messages": [
    {"role": "system", "content": [{"type": "text", "text": "{SYSTEM_PROMPT}"}]},
    {"role": "user", "content": [
      {"type": "text", "text": "What's in this image?"},
      {"type": "image", "path": "path/to/image.jpg"}
    ]},
    {"role": "assistant", "content": [{"type": "text", "text": "..."}]}
  ]
}
```
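Since in-memory `PIL.Image` objects aren't accepted, one workaround is to embed the image as a base64 string. A minimal sketch using GNU coreutils (on macOS, use `base64 -i African_elephant.jpg` instead):

```bash
# Encode the example image as a single-line base64 string
base64 -w0 African_elephant.jpg > African_elephant.b64
```

The encoded string would then replace the `"path"` entry in the image item, e.g. `{"type": "image", "base64": "<encoded string>"}`, per the supported keys above.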
## Limitations
- Sample packing is not currently supported for multi-modal training.