diff --git a/docs/cli.qmd b/docs/cli.qmd
index 79892fc5a..1003a210c 100644
--- a/docs/cli.qmd
+++ b/docs/cli.qmd
@@ -199,6 +199,17 @@ output_dir: # Directory to save evaluation results
 
 See [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) for more details.
 
+### delinearize-llama4
+
+Delinearizes a linearized Llama 4 model back into a regular HuggingFace Llama 4 model. This only works with the non-quantized linearized model.
+
+```bash
+axolotl delinearize-llama4 --model path/to/model_dir --output path/to/output_dir
+```
+
+This is necessary when using the model with other frameworks. If you have an adapter, merge it with the non-quantized linearized model before delinearizing.
+
+
 ## Legacy CLI Usage
 
 While the new Click-based CLI is preferred, Axolotl still supports the legacy module-based CLI:
diff --git a/docs/installation.qmd b/docs/installation.qmd
index 6edec4610..0cf5ffceb 100644
--- a/docs/installation.qmd
+++ b/docs/installation.qmd
@@ -19,6 +19,12 @@ This guide covers all the ways you can install and set up Axolotl for your envir
 
 ## Installation Methods {#sec-installation-methods}
 
+::: {.callout-important}
+Please make sure PyTorch is installed before installing Axolotl in your local environment.
+
+Follow the instructions at: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
+:::
+
 ### PyPI Installation (Recommended) {#sec-pypi}
 
 ```{.bash}
diff --git a/examples/llama-4/README.md b/examples/llama-4/README.md
index b33f8ae3c..f5fc908a6 100644
--- a/examples/llama-4/README.md
+++ b/examples/llama-4/README.md
@@ -26,3 +26,11 @@ Multi-GPU (4xH100) for Llama 4 Scout uses 62.8GB VRAM/GPU @ 4k context length
 ### Llama 4 Maverick 17Bx128Experts (400B)
 
 Coming Soon
+
+## Delinearized Llama 4 Models
+
+We provide a script to delinearize linearized Llama 4 models into regular HuggingFace Llama 4 models.
+
+```bash
+axolotl delinearize-llama4 --model path/to/model_dir --output path/to/output_dir
+```