# Finetune GLM-4.6V with Axolotl
GLM-4.6V is a family of vision-language models from ZhipuAI, available on Hugging Face. This guide shows how to fine-tune them with Axolotl for vision-language tasks.
## Getting started

- Install Axolotl from source following the installation guide.
- Install Cut Cross Entropy to reduce training VRAM usage.
- Run the fine-tuning:

**glm-4-6v-flash (9B)**

```bash
axolotl train examples/glm46v/glm-4-6v-flash-qlora.yaml
```
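For orientation, a QLoRA config for the Flash variant looks roughly like the sketch below. This is an illustrative outline, not a copy of the shipped file: the hyperparameter values are assumptions, and `examples/glm46v/glm-4-6v-flash-qlora.yaml` remains the tested source of truth.

```yaml
# Illustrative sketch only -- see examples/glm46v/glm-4-6v-flash-qlora.yaml
# for the tested values. Hyperparameters below are assumptions.
base_model: zai-org/GLM-4.6V-Flash

load_in_4bit: true        # QLoRA: quantize the frozen base weights to 4-bit
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true  # attach adapters to all linear layers

# Optional: Cut Cross Entropy plugin to reduce VRAM (see step 2 above)
plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true

micro_batch_size: 1
gradient_accumulation_steps: 4
learning_rate: 2e-4
num_epochs: 1
bf16: true
ddp_find_unused_parameters: true  # multi-GPU (DDP) runs may require this
output_dir: ./outputs/glm46v-flash-qlora
```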
Let us know how it goes. Happy finetuning! 🚀
## Tips

- Vision datasets should follow the format described in the multimodal docs; a hypothetical sample is sketched after this list.
- You can run a full finetuning by removing `adapter: qlora` and `load_in_4bit: true` from the config.
- Read more on how to load your own dataset in the dataset loading docs.
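As a rough illustration of the multimodal chat format, a single training example might look like the following (written as YAML for readability; real datasets are typically JSONL). The field names and the image-reference key are assumptions; treat the multimodal docs as the authoritative schema.

```yaml
# Hypothetical single sample in a multimodal chat format.
# Field names are assumed; verify against the multimodal docs.
messages:
  - role: user
    content:
      - type: image
        path: ./images/receipt_001.png   # local path or URL (assumed key)
      - type: text
        text: What is the total amount on this receipt?
  - role: assistant
    content:
      - type: text
        text: The total is $42.17.
```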
## Supported Models

- GLM-4.6V: Full vision-language model (`zai-org/GLM-4.6V`)
- GLM-4.6V-Flash: Faster variant (`zai-org/GLM-4.6V-Flash`)
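To train the full model instead of the Flash variant, the main config change is the base model id, as sketched below; expect higher VRAM usage, so memory-related settings (batch size, sequence length) may also need adjusting. This is an assumption based on the model list above, not a tested configuration.

```yaml
# Swap the base model id to train the full GLM-4.6V instead of Flash.
# The rest of the example config can stay, but memory settings may
# need to be reduced for the larger model.
base_model: zai-org/GLM-4.6V
```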
## Optimization Guides
Please check the Optimizations doc.