Finetune ArceeAI's AFM with Axolotl

Arcee Foundation Models (AFM) are a family of 4.5B-parameter open-weight models trained by Arcee.ai.

This guide shows how to fine-tune AFM with Axolotl on multi-turn conversations with proper masking.

Thanks to the team at Arcee.ai for using Axolotl for supervised fine-tuning of the AFM model.

Getting started

  1. Install Axolotl following the installation guide. You need to install from main, as AFM support is currently only available on nightly, or use our latest Docker images.

    Here is an example of how to install from main with pip:

# Ensure you have PyTorch installed (PyTorch 2.9.1 minimum)
git clone https://github.com/axolotl-ai-cloud/axolotl.git
cd axolotl

uv pip install --no-build-isolation -e '.'

# Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
python scripts/cutcrossentropy_install.py | sh
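
    If you would rather take the Docker route mentioned above, a typical invocation looks like the following. The image tag is an assumption based on Axolotl's published images, so check the installation guide for the current one:

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest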
  2. Run the finetuning example:
axolotl train examples/arcee/afm-4.5b-qlora.yaml

This config uses about 7.8 GiB of VRAM.
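
For orientation, the key fields of a QLoRA config for AFM look roughly like the sketch below. This is an illustrative outline, not the contents of examples/arcee/afm-4.5b-qlora.yaml; the model id, adapter rank, and dataset path are assumptions, so treat the example file in the repo as authoritative.

base_model: arcee-ai/AFM-4.5B  # assumed Hugging Face model id
load_in_4bit: true             # quantize base weights to 4-bit (the "Q" in QLoRA)
adapter: qlora                 # train low-rank adapters instead of full weights

lora_r: 16                     # illustrative adapter rank
lora_alpha: 32
lora_target_linear: true       # attach adapters to all linear layers

datasets:
  - path: your/dataset         # hypothetical dataset id or local path
    type: chat_template        # multi-turn conversations with assistant turns trained on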

Let us know how it goes. Happy finetuning! 🚀

Tips

  • For inference, the official Arcee.ai team recommends top_p: 0.95, temperature: 0.5, top_k: 50, and repeat_penalty: 1.1.
  • You can run a full finetune by removing adapter: qlora and load_in_4bit: true from the config; see the note after this list.
  • Read more on how to load your own dataset in the docs.
  • The dataset format follows the OpenAI Messages format, as seen here; a sample appears after this list.
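
On the full-finetune tip: with adapter and load_in_4bit removed, Axolotl updates all base-model weights rather than LoRA adapters, so expect VRAM usage well above the ~7.8 GiB QLoRA figure.

On the dataset tips, here is a hypothetical datasets entry pointing at a local file in the OpenAI Messages format; the file name is made up for illustration:

datasets:
  - path: ./my_conversations.jsonl  # hypothetical local file
    type: chat_template             # parses OpenAI-style messages

Each line of such a JSONL file holds one conversation, for example:

{"messages": [{"role": "user", "content": "What is AFM?"}, {"role": "assistant", "content": "A 4.5B-parameter open-weight model family from Arcee.ai."}]}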
