# Llama 4 by Meta AI

## Available Examples

### Llama 4 Scout 17Bx16Experts (109B)

Our single-GPU implementation for Llama 4 Scout uses only 68.5 GB of VRAM for post-training with a 4k context length, at 546 tokens/second.
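As a rough sketch, a single-GPU QLoRA-style post-training run in axolotl is driven by a YAML config along these lines. The model ID, dataset, and hyperparameter values below are illustrative assumptions, not the shipped example file; see the YAMLs in this directory for the actual settings.

```yaml
# Hypothetical config sketch, not the example file shipped in this directory.
base_model: meta-llama/Llama-4-Scout-17B-16E   # assumed Hugging Face model id
load_in_4bit: true            # 4-bit quantization so the 109B MoE fits on one GPU
adapter: qlora
lora_r: 32
lora_alpha: 64
lora_target_linear: true

sequence_len: 4096            # matches the 4k context length quoted above
micro_batch_size: 1
gradient_accumulation_steps: 4
flash_attention: true
gradient_checkpointing: true

datasets:
  - path: tatsu-lab/alpaca    # placeholder dataset for illustration
    type: alpaca
```

A config like this would then be launched with axolotl's CLI (e.g. `axolotl train <config>.yaml`).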