# Llama 4 by Meta AI

## Flash Attention vs Flex Attention

While Flash Attention support is "enabled" for Llama 4, the upstream implementation is not correct, so we recommend using Flex Attention instead.

## Available Examples

### Llama 4 Scout 17Bx16Experts (109B)

#### Flex Attention

Our single-H100 implementation for Llama 4 Scout uses only 64.5 GB VRAM for post-training with 4k context length @ 519 tokens/second. WandB logs here.

Our multi-GPU (4xH100) implementation for Llama 4 Scout uses 62.8 GB VRAM/GPU with 4k context length @ 280 tokens/second/GPU. WandB logs here.
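These runs can be launched with the standard axolotl CLI against the example configs in this directory. The sketch below assumes a config file named `scout-flexattn.yaml`, which is a placeholder; substitute the actual Scout Flex Attention YAML shipped in this folder.

```bash
# Single GPU (1xH100) run. The config filename is a placeholder; point it at
# the Scout Flex Attention config in examples/llama-4/.
axolotl train examples/llama-4/scout-flexattn.yaml

# Multi-GPU (4xH100) run launched through accelerate.
accelerate launch --num_processes 4 -m axolotl.cli.train examples/llama-4/scout-flexattn.yaml
```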

### Llama 4 Maverick 17Bx128Experts (400B)

Coming Soon

## Delinearized Llama 4 Models

We provide a script to delinearize linearized Llama 4 models back into regular HuggingFace Llama 4 models.

```bash
axolotl delinearize-llama4 --model path/to/model_dir --output path/to/output_dir
```
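As an illustrative invocation (the paths below are placeholders, not actual outputs from this repo), converting a finetuned checkpoint saved by a training run might look like:

```bash
# Illustrative paths only: convert a saved linearized checkpoint back into the
# standard HuggingFace Llama 4 layout, writing to a new directory.
axolotl delinearize-llama4 \
    --model ./outputs/llama4-scout-sft \
    --output ./outputs/llama4-scout-sft-delinearized
```

The output directory can then be used wherever a regular HuggingFace Llama 4 checkpoint is expected.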