
Finetune ByteDance's Seed-OSS with Axolotl

Seed-OSS is a series of 36B-parameter open-source models trained by ByteDance's Seed Team.

This guide shows how to fine-tune Seed-OSS with Axolotl on multi-turn conversations with proper masking.

Getting started

  1. Install Axolotl following the installation guide.

    Here is an example of how to install from pip:

    # Ensure you have a compatible version of PyTorch installed
    uv pip install --no-build-isolation 'axolotl>=0.16.1'
    
    # Install Cut Cross Entropy
    python scripts/cutcrossentropy_install.py | sh
    
  2. Run the finetuning example:

    axolotl train examples/seed-oss/seed-oss-36b-qlora.yaml

This config uses about 27.7 GiB VRAM.
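The referenced config is not reproduced here, but an Axolotl QLoRA config for a model of this size typically combines 4-bit quantization of the base weights with low-rank adapters. The following sketch uses real Axolotl config keys, but every value is an illustrative assumption, not the actual contents of seed-oss-36b-qlora.yaml:

```yaml
# Sketch of an Axolotl QLoRA config; values are assumptions, not the
# actual contents of examples/seed-oss/seed-oss-36b-qlora.yaml.
base_model: ByteDance-Seed/Seed-OSS-36B-Instruct  # assumed model ID
load_in_4bit: true   # quantize base weights to 4-bit (the "Q" in QLoRA)
adapter: qlora       # train low-rank adapters instead of full weights
lora_r: 16
lora_alpha: 32
lora_target_linear: true
datasets:
  - path: your/dataset    # hypothetical dataset path
    type: chat_template   # multi-turn chats with assistant-only loss masking
sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
```

The `load_in_4bit` and `adapter` keys are the two the Tips section below says to remove for a full finetune; the rest control adapter rank and batching.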

Let us know how it goes. Happy finetuning! 🚀

Tips

  • For inference, the official Seed Team recommends top_p=0.95 and temperature=1.1.
  • You can run full finetuning by removing `adapter: qlora` and `load_in_4bit: true` from the config.
  • Read more on how to load your own dataset in the docs.
  • The dataset format follows the OpenAI Messages format as seen here.
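The OpenAI Messages format mentioned above stores each conversation as a list of role/content dicts, usually one JSON object per line (JSON Lines). As a minimal sketch with illustrative placeholder content, one row can be built and shape-checked like this:

```python
import json

# One training example in OpenAI "messages" format: a multi-turn
# conversation stored as a list of {"role", "content"} dicts.
# The content strings here are illustrative placeholders.
example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Seed-OSS?"},
        {"role": "assistant", "content": "A 36B open-source model series."},
        {"role": "user", "content": "Who trained it?"},
        {"role": "assistant", "content": "ByteDance's Seed Team."},
    ]
}

# Datasets in this format are commonly stored as JSON Lines:
# one conversation object serialized per line of the file.
line = json.dumps(example)
row = json.loads(line)

# Basic shape checks: a leading system message, then alternating
# user/assistant turns, and only the expected role names.
roles = [m["role"] for m in row["messages"]]
assert roles[0] == "system"
assert all(r in {"system", "user", "assistant"} for r in roles)
print(len(row["messages"]))  # number of turns in this conversation
```

During training, masking means the loss is computed only on the assistant turns, so the model learns to produce responses rather than to reproduce user prompts.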

Optimization Guides

Please check the Optimizations doc.