Finetune Liquid Foundation Models 2 (LFM2) with Axolotl

Liquid Foundation Models 2 (LFM2) are a family of small, open-weight models from Liquid AI focused on quality, speed, and memory efficiency. Liquid AI released text-only LFM2 and text+vision LFM2-VL models.

LFM2 features a new hybrid Liquid architecture with multiplicative gates, short-range convolutions, and grouped query attention, enabling fast training and inference.

This guide shows how to fine-tune the LFM2, LFM2-VL, and LFM2-MoE models with Axolotl.

Thanks to the team at LiquidAI for giving us early access to prepare for these releases.

Getting Started

  1. Install Axolotl following the installation guide.

    Here is an example of how to install from PyPI using uv:

    # Ensure you have a compatible version of PyTorch installed
    uv pip install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
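
    To confirm the environment is ready, you can run a quick sanity check (optional; assumes a CUDA GPU):

    # Should print your torch version and `True` on a working CUDA setup
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
    # Download the bundled example configs (including examples/LiquidAI/) into the current directory
    axolotl fetch examples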
    
  2. Run one of the finetuning examples below.

    LFM2

    # FFT SFT (1x48GB @ 25GiB)
    axolotl train examples/LiquidAI/lfm2-350m-fft.yaml
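
    The file above ships with Axolotl's examples; as a rough illustration of what a full-finetune config contains, here is a minimal sketch (the dataset and hyperparameter values are assumptions, not the contents of the shipped file):

    base_model: LiquidAI/LFM2-350M        # assumed HF repo id
    datasets:
      - path: your/sft-dataset            # hypothetical; replace with your data
        type: chat_template
    sequence_len: 4096
    micro_batch_size: 1
    gradient_accumulation_steps: 4
    num_epochs: 1
    learning_rate: 2e-5
    optimizer: adamw_torch
    lr_scheduler: cosine
    bf16: true
    flash_attention: true
    output_dir: ./outputs/lfm2-350m-fft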
    

    LFM2-VL

    # LoRA SFT (1x48GB @ 2.7GiB)
    axolotl train examples/LiquidAI/lfm2-vl-lora.yaml
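
    A LoRA config such as the one above layers adapter settings on top of the base fields; a typical block looks like this (values are common defaults, offered as an illustration rather than the shipped example):

    adapter: lora
    lora_r: 16
    lora_alpha: 32
    lora_dropout: 0.05
    lora_target_linear: true       # attach adapters to all linear projections
    processor_type: AutoProcessor  # multimodal configs also need a processor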
    

    LFM2-MoE

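    # transformers is pinned to a specific commit here, presumably because
    # LFM2-MoE support had not yet landed in a tagged release at the time of writing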
    uv pip install git+https://github.com/huggingface/transformers.git@0c9a72e4576fe4c84077f066e585129c97bfd4e6
    
    # LoRA SFT (1x48GB @ 16.2GiB)
    axolotl train examples/LiquidAI/lfm2-8b-a1b-lora.yaml
    

Tips

  • Installation Error: If you encounter ImportError: ... undefined symbol ... or ModuleNotFoundError: No module named 'causal_conv1d_cuda', the causal-conv1d package may have been installed incorrectly. Try uninstalling it:

    uv pip uninstall causal-conv1d
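
    If a model actually requires the package, reinstalling it against the PyTorch already in your environment sometimes fixes the broken build (a general workaround, not from the Axolotl docs):

    uv pip install --no-build-isolation causal-conv1d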
    
  • Dataset Loading: Read more on how to load your own dataset in our documentation.

  • Dataset Formats (see the samples after this list):

    • For LFM2 models, the dataset format follows the OpenAI Messages format as seen here.
    • For LFM2-VL models, Axolotl follows the multi-content Messages format. See our Multimodal docs for details.
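
    For concreteness, one training row in each format can look like the following (illustrative samples; the exact keys of the image part depend on your loader, so treat them as an assumption):

    # LFM2: OpenAI Messages format, one JSON object per line
    {"messages": [{"role": "user", "content": "What is LFM2?"}, {"role": "assistant", "content": "A family of small, open-weight models from Liquid AI."}]}

    # LFM2-VL: multi-content Messages format with an image part
    {"messages": [{"role": "user", "content": [{"type": "image", "path": "images/cat.jpg"}, {"type": "text", "text": "Describe this image."}]}, {"role": "assistant", "content": "A cat sitting on a windowsill."}]}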

Optimization Guides