
Finetune OpenGVLab's InternVL with Axolotl

InternVL 3.5 is a family of powerful vision-language models from OpenGVLab that supports dynamic resolution and multi-image understanding. It pairs a ViT-style vision encoder with a strong language model backbone for tasks like visual question answering, OCR, and scene text understanding.

This guide shows how to fine-tune it with Axolotl.

Getting started

  1. Install Axolotl following the installation guide.

  2. Install timm for vision model support:

    pip install timm==1.0.19
    
  3. Install Cut Cross Entropy to reduce training VRAM usage; the config sketch after these steps shows how it is enabled.

  4. Run the finetuning example:

    axolotl train examples/internvl3_5/internvl3_5-8b-qlora.yml
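
For orientation, the key fields of the example config look roughly like the sketch below. The model id, LoRA hyperparameters, and dataset path here are illustrative assumptions, not the shipped values; the authoritative config is examples/internvl3_5/internvl3_5-8b-qlora.yml.

    # Sketch of the important knobs (values are assumptions; see the real config)
    base_model: OpenGVLab/InternVL3_5-8B-HF   # assumed HF-format checkpoint id

    # QLoRA: 4-bit quantized base weights plus trainable LoRA adapters
    load_in_4bit: true
    adapter: qlora
    lora_r: 16                # assumed rank
    lora_alpha: 32            # assumed scaling factor
    lora_target_linear: true

    # Cut Cross Entropy (step 3) trims logit memory during training
    plugins:
      - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
    cut_cross_entropy: true

    datasets:
      - path: your-org/your-multimodal-dataset   # placeholder
        type: chat_template

QLoRA keeps the frozen base model in 4-bit and trains only the low-rank adapters, which is what keeps the footprint near 8 GiB.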
    

This config uses about 8.21 GiB VRAM. Let us know how it goes. Happy finetuning! 🚀

Tips

  • You can run a full finetune by removing adapter: qlora and load_in_4bit: true from the config.
  • Read more on how to load your own dataset in the docs.
  • The dataset format follows Axolotl's multi-modal format; see the sketch after this list.
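
As a concrete illustration of that multi-modal format, a single training conversation looks roughly like this. It is shown as YAML for readability, though datasets are typically stored as JSONL; the field names follow HF-style multi-modal chat messages, so treat the exact keys as an assumption and defer to the linked docs.

    # One conversation record (illustrative sample; content is made up)
    messages:
      - role: user
        content:
          - type: image
            path: images/receipt.png       # local path; URLs are also common
          - type: text
            text: What is the total on this receipt?
      - role: assistant
        content:
          - type: text
            text: The total is $23.50.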

Optimization Guides

Please check the Optimizations doc.