
Finetune Liquid Foundation Models 2 (LFM2) with Axolotl

Liquid Foundation Models 2 (LFM2) are a family of small, open-weight models from Liquid AI focused on quality, speed, and memory efficiency. Liquid AI released text-only LFM2 and text+vision LFM2-VL models.

LFM2 features a new hybrid Liquid architecture with multiplicative gates, short-range convolutions, and grouped query attention, enabling fast training and inference.

This guide shows how to fine-tune both the LFM2 and LFM2-VL models with Axolotl.

Getting Started

  1. Install Axolotl following the installation guide.

Here is an example of how to install from PyPI with uv:

    # Ensure you have a compatible version of PyTorch installed
    # Option A: manage dependencies in your project
    uv add 'axolotl>=0.12.0'
    uv pip install flash-attn --no-build-isolation
    
    # Option B: quick install
    uv pip install 'axolotl>=0.12.0'
    uv pip install flash-attn --no-build-isolation
    
  2. Run one of the finetuning examples below.

    LFM2

    # FFT SFT (1x48GB @ 25GiB)
    axolotl train examples/LiquidAI/lfm2-350m-fft.yaml
    

    LFM2-VL

    # LoRA SFT (1x48GB @ 2.7GiB)
    axolotl train examples/LiquidAI/lfm2-vl-lora.yaml
    
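The LFM2-VL example above uses LoRA for parameter-efficient fine-tuning. As a rough sketch, the LoRA-related portion of such a config uses Axolotl's standard adapter options; the field names below are real Axolotl keys, but the values are illustrative and not copied from `lfm2-vl-lora.yaml`:

```yaml
# Illustrative LoRA settings (values are examples, not the shipped config)
adapter: lora
lora_r: 16                 # rank of the low-rank update matrices
lora_alpha: 32             # scaling factor; effective scale is lora_alpha / lora_r
lora_dropout: 0.05
lora_target_linear: true   # apply LoRA to all linear projection layers
```

Lower `lora_r` reduces trainable parameters and memory at some cost in capacity, which is how the LoRA run above fits in a few GiB of VRAM.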

Tips

  • Installation Error: If you encounter ImportError: ... undefined symbol ... or ModuleNotFoundError: No module named 'causal_conv1d_cuda', the causal-conv1d package may have been installed incorrectly. Try uninstalling it:

    uv pip uninstall -y causal-conv1d
    
  • Dataset Loading: Read more on how to load your own dataset in our documentation.

  • Dataset Formats:

    • For LFM2 models, the dataset format follows the OpenAI Messages format as seen here.
    • For LFM2-VL models, Axolotl follows the multi-content Messages format. See our Multimodal docs for details.
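As a minimal sketch of wiring up a local OpenAI-style messages dataset for an LFM2 text run, the `datasets` section of a config might look like the following (the file path is hypothetical; `type: chat_template` and `field_messages` are Axolotl's standard options for messages-format data):

```yaml
# Illustrative dataset section for an LFM2 SFT config
datasets:
  - path: data/my_chats.jsonl   # hypothetical local JSONL file
    type: chat_template
    field_messages: messages    # key holding the list of chat turns
```

Each line of the JSONL file would hold one conversation, e.g. `{"messages": [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}]}`.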

Optimization Guides