
Finetune Allenai's Olmo 3 with Axolotl

Olmo 3 is a family of open-source 7B and 32B models trained by the Allen Institute for Artificial Intelligence.

This guide shows how to fine-tune these models with Axolotl on multi-turn conversations with proper loss masking.

Getting started

  1. Install Axolotl following the installation guide.

    Here is an example of how to install via pip:

    # Ensure you have a compatible version of PyTorch installed
    pip3 install packaging setuptools wheel ninja
    pip3 install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
    
    # Install Cut Cross Entropy
    python scripts/cutcrossentropy_install.py | sh
    
  2. Run the finetuning example:

     axolotl train examples/olmo3/olmo3-7b-qlora.yaml
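
A QLoRA config for Olmo 3 typically looks like the following minimal sketch. The keys are standard Axolotl config options, but the model id, dataset path, and hyperparameter values below are illustrative assumptions, not a copy of the shipped examples/olmo3/olmo3-7b-qlora.yaml:

```yaml
# Minimal QLoRA sketch -- values are illustrative assumptions,
# not the contents of the shipped example config.
base_model: allenai/Olmo-3-7B   # assumed model id
load_in_4bit: true
adapter: qlora

datasets:
  - path: your/dataset          # placeholder: replace with your dataset
    type: chat_template

sequence_len: 4096
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
output_dir: ./outputs/olmo3-qlora
```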

Let us know how it goes. Happy finetuning! 🚀

TIPS

  • The example config can be re-used for Olmo and Olmo 2.
  • You can run a full finetune by removing adapter: qlora and load_in_4bit: true from the config.
  • Read more on how to load your own dataset in the docs.
  • The dataset format follows the OpenAI Messages format as seen here.
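
As a sketch of the messages format mentioned above, each dataset row is a JSON object containing a messages list of role/content turns, usually stored as JSONL (one object per line). The file name and conversation contents here are illustrative:

```python
import json

# Illustrative rows in the OpenAI messages format: each example is a
# multi-turn conversation with system/user/assistant turns.
rows = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is Olmo 3?"},
            {"role": "assistant", "content": "An open model family from Ai2."},
        ]
    },
]

# Write as JSONL: one JSON object per line.
with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Read it back to confirm the structure round-trips.
with open("train.jsonl") as f:
    loaded = [json.loads(line) for line in f]

print(loaded[0]["messages"][1]["content"])
```

With loss masking enabled, only the assistant turns contribute to the training loss; the system and user turns are masked out.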

Optimization Guides

Please check the Optimizations doc.