# Finetune OpenGVLab's InternVL with Axolotl

InternVL 3.5 is a family of powerful vision-language models from OpenGVLab that supports dynamic resolution and multi-image understanding. It pairs a ViT-style vision encoder with a strong language-model backbone for tasks such as visual question answering, OCR, and scene-text understanding.
This guide shows how to fine-tune it with Axolotl.
## Getting started

- Install Axolotl following the installation guide.
- Install `timm` for vision model support:

  ```bash
  pip install timm==1.0.19
  ```

- Install Cut Cross Entropy to reduce training VRAM usage.
- Run the finetuning example:

  ```bash
  axolotl train examples/internvl3_5/internvl3_5-8b-qlora.yml
  ```
This config uses about 8.21 GiB of VRAM; a rough sketch of its key settings follows below. Let us know how it goes. Happy finetuning! 🚀
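The sketch below shows the kind of settings such a QLoRA config combines. It is not the shipped example file: the model id, dataset, and hyperparameter values are illustrative assumptions, so defer to `examples/internvl3_5/internvl3_5-8b-qlora.yml` for the actual values.

```yaml
# Illustrative sketch only -- defer to examples/internvl3_5/internvl3_5-8b-qlora.yml
base_model: OpenGVLab/InternVL3_5-8B-HF   # assumed HF-format checkpoint id

# QLoRA: 4-bit quantized base weights with LoRA adapters trained on top
load_in_4bit: true
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

# Cut Cross Entropy plugin: avoids materializing the full logit tensor
plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true

datasets:
  - path: your-org/your-multimodal-dataset   # placeholder; see Tips below
    type: chat_template

sequence_len: 2048
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
optimizer: adamw_bnb_8bit
bf16: true
output_dir: ./outputs/internvl3_5-qlora
```

Dropping `adapter: qlora` and `load_in_4bit: true` from a config like this turns it into a full finetune (see Tips), at a correspondingly higher VRAM cost.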
## Tips

- You can run a full finetune by removing `adapter: qlora` and `load_in_4bit: true` from the config.
- Read more on how to load your own dataset in the docs.
- The dataset format follows the multi-modal format as seen here; a sketch of one sample is shown below.
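As a sketch of that format, here is what one sample might look like as a single (pretty-printed) JSONL record, assuming the chat-style multimodal schema where an image part carries a `path` and text parts carry `text`; the file path and strings are placeholders.

```json
{
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "image", "path": "/data/images/receipt_001.jpg"},
        {"type": "text", "text": "What is the total on this receipt?"}
      ]
    },
    {
      "role": "assistant",
      "content": [
        {"type": "text", "text": "The total on the receipt is $42.17."}
      ]
    }
  ]
}
```

Each line of the dataset file would hold one such conversation; consult the linked format docs for the exact keys your Axolotl version accepts (e.g. whether images may also be referenced by URL).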
## Optimization Guides

Please check the Optimizations doc.