
Finetune Z.ai's GLM-4.7-Flash with Axolotl

GLM-4.7-Flash is a Mixture-of-Experts (MoE) model with roughly 30B total parameters and about 3B active per token (30B-A3B).

This guide shows how to fine-tune it with Axolotl.

Getting started

  1. Install Axolotl following the installation guide.

  2. Install Cut Cross Entropy to reduce training VRAM usage (see the install note below).

  3. Run the finetuning example:

axolotl train examples/glm4.7-flash/glm4.7-flash-qlora.yaml
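
For reference, the example is a standard Axolotl QLoRA config. The sketch below is illustrative only; the field values and the base_model path are assumptions, not the contents of the shipped file:

base_model: zai-org/GLM-4.7-Flash   # assumed Hugging Face repo id -- use the real model path
load_in_4bit: true                  # quantize the base weights to 4-bit (QLoRA)
adapter: qlora

lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true            # attach LoRA adapters to all linear layers

datasets:
  - path: tatsu-lab/alpaca          # placeholder dataset
    type: alpaca

sequence_len: 2048
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
optimizer: adamw_torch
output_dir: ./outputs/glm4.7-flash-qlora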

The shipped QLoRA config uses about X GiB of VRAM.
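
The Cut Cross Entropy dependency from step 2 helps keep that number down by computing the loss without materializing the full logit matrix. It comes from Apple's ml-cross-entropy project; a minimal install sketch (the exact command and version Axolotl expects may differ, so follow its installation guide):

pip install "cut-cross-entropy[transformers]"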

Let us know how it goes. Happy finetuning! 🚀

Tips

  • For inference, the Z.ai team recommends top_p: 0.95, temperature: 1.0, and max_new_tokens: 131072.
  • You can run a full finetune by removing adapter: qlora and load_in_4bit: true from the config.
  • Read more about loading your own dataset in the dataset docs (a minimal sketch follows this list).
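
As a starting point, a dataset entry in the config might look like the sketch below; the file path is hypothetical, and the alpaca type is only one of the formats Axolotl supports:

datasets:
  - path: ./data/my_dataset.jsonl   # hypothetical local JSONL file
    type: alpaca                    # rows shaped like {"instruction": ..., "input": ..., "output": ...}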

Optimization Guides

For further performance and memory tuning, see the Optimizations doc.