-Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.
+Axolotl is a tool designed to streamline post-training for various AI models.
+Post-training refers to any modifications or additional training performed on
+pre-trained models - including full model fine-tuning, parameter-efficient tuning (like
+LoRA and QLoRA), supervised fine-tuning (SFT), instruction tuning, and alignment
+techniques. With support for multiple model architectures and training configurations,
+Axolotl makes it easy to get started with these techniques.
+
+Axolotl is designed to work with YAML config files that contain everything you need to
+preprocess a dataset, train or fine-tune a model, run model inference or evaluation,
+and much more.
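+
+For a sense of what these configs look like, here is a heavily abridged sketch in the
+spirit of the bundled `examples/llama-3/lora-1b.yml` (the keys below are real Axolotl
+options, but the shipped example contains more settings than shown here):
+
+```yaml
+base_model: NousResearch/Llama-3.2-1B  # local path or Hugging Face repo to start from
+datasets:
+  - path: teknium/GPT4-LLM-Cleaned     # dataset to train on
+    type: alpaca                       # prompt format of the dataset
+adapter: lora                          # parameter-efficient fine-tuning; leave blank for a full fine-tune
+output_dir: ./outputs/lora-out         # where checkpoints and adapter weights are written
+```
+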
Features:
+
- Train various Huggingface models such as llama, pythia, falcon, mpt
- Supports fullfinetune, lora, qlora, relora, and gptq
- Customize configurations using a simple yaml file or CLI overwrite
- Load different dataset formats, use custom formats, or bring your own tokenized datasets
-- Integrated with xformer, flash attention, [liger kernel](https://github.com/linkedin/Liger-Kernel), rope scaling, and multipacking
+- Integrated with [xformers](https://github.com/facebookresearch/xformers), flash attention, [liger kernel](https://github.com/linkedin/Liger-Kernel), rope scaling, and multipacking
- Works with single GPU or multiple GPUs via FSDP or Deepspeed
- Easily run with Docker locally or on the cloud
- Log results and optionally checkpoints to wandb, mlflow or Comet
- And more!
-
-
-
+## 🚀 Quick Start
-
- Axolotl provides a unified repository for fine-tuning a variety of AI models with ease
-
-
- Go ahead and Axolotl questions!!
-
-
-
-
-
-
-
-
-
-
-## Quickstart ⚡
-
-Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.
-
-**Requirements**: *Nvidia* GPU (Ampere architecture or newer for `bf16` and Flash Attention) or *AMD* GPU, Python >=3.10 and PyTorch >=2.4.1.
-
-```bash
+```shell
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
-# download examples and optionally deepspeed configs to the local path
+# Download example Axolotl configs and DeepSpeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs # OPTIONAL
-
-# finetune using lora
-axolotl train examples/llama-3/lora-1b.yml
```
-### Edge Builds
+Other installation approaches are described [here](https://axolotl-ai-cloud.github.io/axolotl/docs/installation.html).
-If you're looking for the latest features and updates between releases, you'll need to install
-from source.
+### Your First Fine-tune
-```bash
-git clone https://github.com/axolotl-ai-cloud/axolotl.git
-cd axolotl
-pip3 install packaging ninja
-pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
-```
-
-### Axolotl CLI Usage
-We now support a new, more streamlined CLI using [click](https://click.palletsprojects.com/en/stable/).
-
-```bash
-# preprocess datasets - optional but recommended
-CUDA_VISIBLE_DEVICES="0" axolotl preprocess examples/llama-3/lora-1b.yml
-
-# finetune lora
-axolotl train examples/llama-3/lora-1b.yml
-
-# inference
-axolotl inference examples/llama-3/lora-1b.yml \
- --lora-model-dir="./outputs/lora-out"
-
-# gradio
-axolotl inference examples/llama-3/lora-1b.yml \
- --lora-model-dir="./outputs/lora-out" --gradio
-
-# remote yaml files - the yaml config can be hosted on a public URL
-# Note: the yaml config must directly link to the **raw** yaml
-axolotl train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml
-```
-
-We've also added a new command for fetching `examples` and `deepspeed_configs` to your
-local machine. This will come in handy when installing `axolotl` from PyPI.
-
-```bash
-# Fetch example YAML files (stores in "examples/" folder)
+```shell
+# Fetch axolotl examples
axolotl fetch examples
-# Fetch deepspeed config files (stores in "deepspeed_configs/" folder)
-axolotl fetch deepspeed_configs
-
-# Optionally, specify a destination folder
+# Or, specify a custom path
axolotl fetch examples --dest path/to/folder
+
+# Train a model using LoRA
+axolotl train examples/llama-3/lora-1b.yml
```
-### Legacy Usage
-
+That's it! Check out our [Getting Started Guide](https://axolotl-ai-cloud.github.io/axolotl/docs/getting-started.html) for a more detailed walkthrough.
-Click to Expand
+## ✨ Key Features
-While the Axolotl CLI is the preferred method for interacting with axolotl, we
-still support the legacy `-m axolotl.cli.*` usage.
+- **Multiple Model Support**: Train various models like LLaMA, Mistral, Mixtral, Pythia, and more
+- **Training Methods**: Full fine-tuning, LoRA, QLoRA, and more
+- **Easy Configuration**: Simple YAML files to control your training setup
+- **Performance Optimizations**: Flash Attention, xformers, multi-GPU training
+- **Flexible Dataset Handling**: Use various formats and custom datasets
+- **Cloud Ready**: Run on cloud platforms or local hardware
-```bash
-# preprocess datasets - optional but recommended
-CUDA_VISIBLE_DEVICES="0" python -m axolotl.cli.preprocess examples/llama-3/lora-1b.yml
+## 📚 Documentation
-# finetune lora
-accelerate launch -m axolotl.cli.train examples/llama-3/lora-1b.yml
+- [Installation Options](https://axolotl-ai-cloud.github.io/axolotl/docs/installation.html) - Detailed setup instructions for different environments
+- [Configuration Guide](https://axolotl-ai-cloud.github.io/axolotl/docs/config.html) - Full configuration options and examples
+- [Dataset Guide](https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/) - Supported formats and how to use them
+- [Multi-GPU Training](https://axolotl-ai-cloud.github.io/axolotl/docs/multi-gpu.html)
+- [Multi-Node Training](https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html)
+- [Multipacking](https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html)
+- [FAQ](https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html) - Frequently asked questions
-# inference
-accelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \
- --lora_model_dir="./outputs/lora-out"
+## 🤝 Getting Help
-# gradio
-accelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \
- --lora_model_dir="./outputs/lora-out" --gradio
+- Join our [Discord community](https://discord.gg/HhrNrHJPRb) for support
+- Check out our [Examples](https://github.com/axolotl-ai-cloud/axolotl/tree/main/examples/) directory
+- Read our [Debugging Guide](https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html)
+- Need dedicated support? Please contact [✉️wing@axolotl.ai](mailto:wing@axolotl.ai) for options
-# remote yaml files - the yaml config can be hosted on a public URL
-# Note: the yaml config must directly link to the **raw** yaml
-accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml
-```
+## 🌟 Contributing
-
+Contributions are welcome! Please see our [Contributing Guide](https://github.com/axolotl-ai-cloud/axolotl/blob/main/.github/CONTRIBUTING.md) for details.
-## Badge ❤🏷️
-
-Building something cool with Axolotl? Consider adding a badge to your model card.
-
-```markdown
-[](https://github.com/axolotl-ai-cloud/axolotl)
-```
-
-[](https://github.com/axolotl-ai-cloud/axolotl)
-
-## Sponsors 🤝❤
-
-If you love axolotl, consider sponsoring the project by reaching out directly to [wing@axolotl.ai](mailto:wing@axolotl.ai).
-
----
-
-- [Modal](https://www.modal.com?utm_source=github&utm_medium=github&utm_campaign=axolotl) Modal lets you run data/AI jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.
-
----
-
-## Contributing 🤝
-
-Please read the [contributing guide](./.github/CONTRIBUTING.md)
-
-Bugs? Please check the [open issues](https://github.com/axolotl-ai-cloud/axolotl/issues/bug) else create a new Issue.
-
-PRs are **greatly welcome**!
-
-Please run the quickstart instructions followed by the below to setup env:
-```bash
-pip3 install -r requirements-dev.txt -r requirements-tests.txt
-pre-commit install
-
-# test
-pytest tests/
-
-# optional: run against all files
-pre-commit run --all-files
-```
-
-Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.
-
-
-
-
-
-## Axolotl supports
+## Supported Models
| | fp16/fp32 | lora | qlora | gptq | gptq w/flash attn | flash attn | xformers attn |
|-------------|:----------|:-----|-------|------|-------------------|------------|--------------|
@@ -272,523 +136,16 @@ Thanks to all of our contributors to date. Help drive open source AI progress fo
❌: not supported
❓: untested
-## Advanced Setup
+## ❤️ Sponsors
-### Environment
+Thank you to our sponsors who help make Axolotl possible:
-#### Docker
+- [Modal](https://www.modal.com?utm_source=github&utm_medium=github&utm_campaign=axolotl) - Modal lets you run
+jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale,
+fine-tune large language models, run protein folding simulations, and much more.
- ```bash
- docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
- ```
+Interested in sponsoring? Contact us at [wing@axolotl.ai](mailto:wing@axolotl.ai)
- Or run on the current files for development:
+## 📝 License
- ```sh
- docker compose up -d
- ```
-
->[!Tip]
-> If you want to debug axolotl or prefer to use Docker as your development environment, see the [debugging guide's section on Docker](docs/debugging.qmd#debugging-with-docker).
-
-
-
- Docker advanced
-
- A more powerful Docker command to run would be this:
-
- ```bash
-docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface axolotlai/axolotl:main-latest
- ```
-
- It additionally:
- * Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through `--ipc` and `--ulimit` args.
- * Persists the downloaded HF data (models etc.) and your modifications to axolotl code through `--mount`/`-v` args.
- * The `--name` argument simply makes it easier to refer to the container in vscode (`Dev Containers: Attach to Running Container...`) or in your terminal.
- * The `--privileged` flag gives all capabilities to the container.
- * The `--shm-size 10g` argument increases the shared memory size. Use this if you see `exitcode: -7` errors using deepspeed.
-
- [More information on nvidia website](https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#setincshmem)
-
-
-
-#### Conda/Pip venv
- 1. Install python >=**3.10**
-
- 2. Install pytorch stable https://pytorch.org/get-started/locally/
-
- 3. Install Axolotl along with python dependencies
- ```bash
- pip3 install packaging
- pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
- ```
- 4. (Optional) Login to Huggingface to use gated models/datasets.
- ```bash
- huggingface-cli login
- ```
- Get the token at huggingface.co/settings/tokens
-
-#### Cloud GPU
-
-For cloud GPU providers that support docker images, use [`axolotlai/axolotl-cloud:main-latest`](https://hub.docker.com/r/axolotlai/axolotl-cloud/tags)
-
-- on Latitude.sh use this [direct link](https://latitude.sh/blueprint/989e0e79-3bf6-41ea-a46b-1f246e309d5c)
-- on JarvisLabs.ai use this [direct link](https://jarvislabs.ai/templates/axolotl)
-- on RunPod use this [direct link](https://runpod.io/gsc?template=v2ickqhz9s&ref=6i7fkpdz)
-
-#### Bare Metal Cloud GPU
-
-##### LambdaLabs
-
-
-
- Click to Expand
-
- 1. Install python
- ```bash
- sudo apt update
- sudo apt install -y python3.10
-
- sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
- sudo update-alternatives --config python # pick 3.10 if given option
- python -V # should be 3.10
-
- ```
-
- 2. Install pip
- ```bash
- wget https://bootstrap.pypa.io/get-pip.py
- python get-pip.py
- ```
-
- 3. Install Pytorch https://pytorch.org/get-started/locally/
-
- 4. Follow instructions on quickstart.
-
- 5. Run
- ```bash
- pip3 install protobuf==3.20.3
- pip3 install -U --ignore-installed requests Pillow psutil scipy
- ```
-
- 6. Set path
- ```bash
- export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
- ```
-
-
-##### GCP
-
-
-
-Click to Expand
-
-Use a Deeplearning linux OS with cuda and pytorch installed. Then follow instructions on quickstart.
-
-Make sure to run the below to uninstall xla.
-```bash
-pip uninstall -y torch_xla[tpu]
-```
-
-
-
-#### Windows
-Please use WSL or Docker!
-
-#### Mac
-
-Use the below instead of the install method in QuickStart.
-```
-pip3 install --no-build-isolation -e '.'
-```
-More info: [mac.md](/docs/mac.qmd)
-
-#### Google Colab
-
-Please use this example [notebook](examples/colab-notebooks/colab-axolotl-example.ipynb).
-
-#### Launching on public clouds via SkyPilot
-To launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use [SkyPilot](https://skypilot.readthedocs.io/en/latest/index.html):
-
-```bash
-pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]" # choose your clouds
-sky check
-```
-
-Get the [example YAMLs](https://github.com/skypilot-org/skypilot/tree/master/llm/axolotl) of using Axolotl to finetune `mistralai/Mistral-7B-v0.1`:
-```
-git clone https://github.com/skypilot-org/skypilot.git
-cd skypilot/llm/axolotl
-```
-
-Use one command to launch:
-```bash
-# On-demand
-HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN
-
-# Managed spot (auto-recovery on preemption)
-HF_TOKEN=xx BUCKET= sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET
-```
-
-#### Launching on public clouds via dstack
-To launch on GPU instance (both on-demand and spot instances) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use [dstack](https://dstack.ai/).
-
-Write a job description in YAML as below:
-
-```yaml
-# dstack.yaml
-type: task
-
-image: axolotlai/axolotl-cloud:main-latest
-
-env:
- - HUGGING_FACE_HUB_TOKEN
- - WANDB_API_KEY
-
-commands:
- - accelerate launch -m axolotl.cli.train config.yaml
-
-ports:
- - 6006
-
-resources:
- gpu:
- memory: 24GB..
- count: 2
-```
-
-then, simply run the job with `dstack run` command. Append `--spot` option if you want spot instance. `dstack run` command will show you the instance with cheapest price across multi cloud services:
-
-```bash
-pip install dstack
-HUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot
-```
-
-For further and fine-grained use cases, please refer to the official [dstack documents](https://dstack.ai/docs/) and the detailed description of [axolotl example](https://github.com/dstackai/dstack/tree/master/examples/fine-tuning/axolotl) on the official repository.
-
-### Dataset
-
-Axolotl supports a variety of dataset formats. It is recommended to use a JSONL. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.
-
-See [the documentation](https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/) for more information on how to use different dataset formats.
-
-### Config
-
-See [examples](examples) for quick start. It is recommended to duplicate and modify to your needs. The most important options are:
-
-- model
- ```yaml
- base_model: ./llama-7b-hf # local or huggingface repo
- ```
- Note: The code will load the right architecture.
-
-- dataset
- ```yaml
- datasets:
- # huggingface repo
- - path: vicgalle/alpaca-gpt4
- type: alpaca
-
- # huggingface repo with specific configuration/subset
- - path: EleutherAI/pile
- name: enron_emails
- type: completion # format from earlier
- field: text # Optional[str] default: text, field to use for completion data
-
- # huggingface repo with multiple named configurations/subsets
- - path: bigcode/commitpackft
- name:
- - ruby
- - python
- - typescript
- type: ... # unimplemented custom format
-
- # chat_template https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html#chat_template
- - path: ...
- type: chat_template
- chat_template: chatml # defaults to tokenizer's chat_template
-
- # local
- - path: data.jsonl # or json
- ds_type: json # see other options below
- type: alpaca
-
- # dataset with splits, but no train split
- - path: knowrohit07/know_sql
- type: context_qa.load_v2
- train_on_split: validation
-
- # loading from s3 or gcs
- # s3 creds will be loaded from the system default / gcs will attempt to load from gcloud creds, google metadata service, or anon
- - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above
- ...
-
- # Loading Data From a Public URL
- # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
- - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
- ds_type: json # this is the default, see other options below.
- ```
-
-- loading
- ```yaml
- load_in_4bit: true
- load_in_8bit: true
-
- bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
- fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
- tf32: true # require >=ampere
-
- bfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
- float16: true # use instead of fp16 when you don't want AMP
- ```
- Note: Repo does not do 4-bit quantization.
-
-- lora
- ```yaml
- adapter: lora # 'qlora' or leave blank for full finetune
- lora_r: 8
- lora_alpha: 16
- lora_dropout: 0.05
- lora_target_modules:
- - q_proj
- - v_proj
- ```
-
-#### All Config Options
-
-See [these docs](docs/config.qmd) for all config options.
-
-### Train
-
-Run
-```bash
-accelerate launch -m axolotl.cli.train your_config.yml
-```
-
-> [!TIP]
-> You can also reference a config file that is hosted on a public URL, for example `accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml`
-
-#### Preprocess dataset
-
-You can optionally pre-tokenize dataset with the following before finetuning.
-This is recommended for large datasets.
-
-- Set `dataset_prepared_path:` to a local folder for saving and loading pre-tokenized dataset.
-- (Optional): Set `push_dataset_to_hub: hf_user/repo` to push it to Huggingface.
-- (Optional): Use `--debug` to see preprocessed examples.
-
-```bash
-python -m axolotl.cli.preprocess your_config.yml
-```
-
-#### Multi-GPU
-
-Below are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed
-is the recommended multi-GPU option currently because FSDP may experience
-[loss instability](https://github.com/huggingface/transformers/issues/26498).
-
-##### DeepSpeed
-
-Deepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you
-might typically be able to fit into your GPU's VRAM. More information about the various optimization types
-for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated
-
-We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.
-
-```yaml
-deepspeed: deepspeed_configs/zero1.json
-```
-
-```shell
-accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
-```
-
-##### FSDP
-
-- llama FSDP
-```yaml
-fsdp:
- - full_shard
- - auto_wrap
-fsdp_config:
- fsdp_offload_params: true
- fsdp_state_dict_type: FULL_STATE_DICT
- fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
-```
-
-##### FSDP + QLoRA
-
-Axolotl supports training with FSDP and QLoRA, see [these docs](docs/fsdp_qlora.qmd) for more information.
-
-##### Weights & Biases Logging
-
-Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.
-
-- wandb options
-```yaml
-wandb_mode:
-wandb_project:
-wandb_entity:
-wandb_watch:
-wandb_name:
-wandb_log_model:
-```
-
-##### Comet Logging
-
-Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to wandb with `comet login`.
-
-- wandb options
-```yaml
-use_comet:
-comet_api_key:
-comet_workspace:
-comet_project_name:
-comet_experiment_key:
-comet_mode:
-comet_online:
-comet_experiment_config:
-```
-
-##### Special Tokens
-
-It is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer's vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:
-
-```yml
-special_tokens:
-  bos_token: "<s>"
-  eos_token: "</s>"
-  unk_token: "<unk>"
-tokens: # these are delimiters
- - "<|im_start|>"
- - "<|im_end|>"
-```
-
-When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer's vocabulary.
-
-##### Liger Kernel
-
-Liger Kernel: Efficient Triton Kernels for LLM Training
-
-https://github.com/linkedin/Liger-Kernel
-
-Liger (LinkedIn GPU Efficient Runtime) Kernel is a collection of Triton kernels designed specifically for LLM training.
-It can effectively increase multi-GPU training throughput by 20% and reduces memory usage by 60%. The Liger Kernel
-composes well and is compatible with both FSDP and Deepspeed.
-
-```yaml
-plugins:
- - axolotl.integrations.liger.LigerPlugin
-liger_rope: true
-liger_rms_norm: true
-liger_glu_activation: true
-liger_layer_norm: true
-liger_fused_linear_cross_entropy: true
-```
-
-### Inference Playground
-
-Axolotl allows you to load your model in an interactive terminal playground for quick experimentation.
-The config file is the same config file used for training.
-
-Pass the appropriate flag to the inference command, depending upon what kind of model was trained:
-
-- Pretrained LORA:
- ```bash
- python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
- ```
-- Full weights finetune:
- ```bash
- python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
- ```
-- Full weights finetune w/ a prompt from a text file:
- ```bash
- cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
- --base_model="./completed-model" --prompter=None --load_in_8bit=True
- ```
--- With gradio hosting
- ```bash
- python -m axolotl.cli.inference examples/your_config.yml --gradio
- ```
-
-Please use `--sample_packing False` if you have it on and receive the error similar to below:
-
-> RuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1
-
-### Merge LORA to base
-
-The following command will merge your LORA adapater with your base model. You can optionally pass the argument `--lora_model_dir` to specify the directory where your LORA adapter was saved, otherwhise, this will be inferred from `output_dir` in your axolotl config file. The merged model is saved in the sub-directory `{lora_model_dir}/merged`.
-
-```bash
-python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"
-```
-
-You may need to use the `gpu_memory_limit` and/or `lora_on_cpu` config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with
-
-```bash
-CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...
-```
-
-although this will be very slow, and using the config options above are recommended instead.
-
-## Common Errors 🧰
-
-See also the [FAQ's](./docs/faq.qmd) and [debugging guide](docs/debugging.qmd).
-
-> If you encounter a 'Cuda out of memory' error, it means your GPU ran out of memory during the training process. Here's how to resolve it:
-
-Please reduce any below
- - `micro_batch_size`
- - `eval_batch_size`
- - `gradient_accumulation_steps`
- - `sequence_len`
-
-If it does not help, try running without deepspeed and without accelerate (replace "accelerate launch" with "python") in the command.
-
-Using adamw_bnb_8bit might also save you some memory.
-
-> `failed (exitcode: -9)`
-
-Usually means your system has run out of system memory.
-Similarly, you should consider reducing the same settings as when you run out of VRAM.
-Additionally, look into upgrading your system RAM which should be simpler than GPU upgrades.
-
-> RuntimeError: expected scalar type Float but found Half
-
-Try set `fp16: true`
-
-> NotImplementedError: No operator found for `memory_efficient_attention_forward` ...
-
-Try to turn off xformers.
-
-> accelerate config missing
-
-It's safe to ignore it.
-
-> NCCL Timeouts during training
-
-See the [NCCL](docs/nccl.qmd) guide.
-
-
-### Tokenization Mismatch b/w Inference & Training
-
-For many formats, Axolotl constructs prompts by concatenating token ids _after_ tokenizing strings. The reason for concatenating token ids rather than operating on strings is to maintain precise accounting for attention masks.
-
-If you decode a prompt constructed by axolotl, you might see spaces between tokens (or lack thereof) that you do not expect, especially around delimiters and special tokens. When you are starting out with a new format, you should always do the following:
-
-1. Materialize some data using `python -m axolotl.cli.preprocess your_config.yml --debug`, and then decode the first few rows with your model's tokenizer.
-2. During inference, right before you pass a tensor of token ids to your model, decode these tokens back into a string.
-3. Make sure the inference string from #2 looks **exactly** like the data you fine tuned on from #1, including spaces and new lines. If they aren't the same, adjust your inference server accordingly.
-4. As an additional troubleshooting step, you can look at the token ids between 1 and 2 to make sure they are identical.
-
-Having misalignment between your prompts during training and inference can cause models to perform very poorly, so it is worth checking this. See [this blog post](https://hamel.dev/notes/llm/finetuning/05_tokenizer_gotchas.html) for a concrete example.
-
-## Debugging Axolotl
-
-See [this debugging guide](docs/debugging.qmd) for tips on debugging Axolotl, along with an example configuration for debugging with VSCode.
-
-## Need help? 🙋
-
-Join our [Discord server](https://discord.gg/HhrNrHJPRb) where our community members can help you.
-
-Need dedicated support? Please contact us at [✉️wing@axolotl.ai](ailto:wing@axolotl.ai) for dedicated support options.
+This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
diff --git a/_quarto.yml b/_quarto.yml
index 8eb79f651..3ec1ce75b 100644
--- a/_quarto.yml
+++ b/_quarto.yml
@@ -28,13 +28,17 @@ website:
- section: "How-To Guides"
contents:
# TODO Edit folder structure after we have more docs.
+ - docs/getting-started.qmd
+ - docs/installation.qmd
- docs/debugging.qmd
+ - docs/inference.qmd
- docs/multipack.qmd
- docs/fsdp_qlora.qmd
- docs/input_output.qmd
- docs/rlhf.qmd
- docs/nccl.qmd
- docs/mac.qmd
+ - docs/multi-gpu.qmd
- docs/multi-node.qmd
- docs/unsloth.qmd
- docs/amd_hpc.qmd
@@ -46,7 +50,6 @@ website:
- docs/config.qmd
- docs/faq.qmd
-
format:
html:
theme: materia
diff --git a/docs/dataset-formats/conversation.qmd b/docs/dataset-formats/conversation.qmd
index 9f6e8c360..4d8dbe769 100644
--- a/docs/dataset-formats/conversation.qmd
+++ b/docs/dataset-formats/conversation.qmd
@@ -8,14 +8,12 @@ order: 3
IMPORTANT: ShareGPT is deprecated!. Please see `chat_template` section below.
-
## pygmalion
```{.json filename="data.jsonl"}
{"conversations": [{"role": "...", "value": "..."}]}
```
-
## chat_template
Chat Template strategy uses a jinja2 template that converts a list of messages into a prompt. Support using tokenizer's template, a supported template, or custom jinja2.
diff --git a/docs/dataset-formats/stepwise_supervised.qmd b/docs/dataset-formats/stepwise_supervised.qmd
index 17f0c9141..072bf8353 100644
--- a/docs/dataset-formats/stepwise_supervised.qmd
+++ b/docs/dataset-formats/stepwise_supervised.qmd
@@ -6,8 +6,15 @@ order: 3
## Stepwise Supervised
-The stepwise supervised format is designed for chain-of-thought (COT) reasoning datasets where each example contains multiple completion steps and a preference label for each step.
-### ExampleHere's a simple example of a stepwise supervised dataset entry:```json
+The stepwise supervised format is designed for chain-of-thought (COT) reasoning
+datasets where each example contains multiple completion steps and a preference label
+for each step.
+
+### Example
+
+Here's a simple example of a stepwise supervised dataset entry:
+
+```json
{
"prompt": "Which number is larger, 9.8 or 9.11?",
"completions": [
@@ -16,3 +23,4 @@ The stepwise supervised format is designed for chain-of-thought (COT) reasoning
],
"labels": [true, false]
}
+```
\ No newline at end of file
diff --git a/docs/getting-started.qmd b/docs/getting-started.qmd
new file mode 100644
index 000000000..2292cde15
--- /dev/null
+++ b/docs/getting-started.qmd
@@ -0,0 +1,155 @@
+---
+title: "Getting Started with Axolotl"
+format:
+ html:
+ toc: true
+ toc-depth: 3
+ number-sections: true
+execute:
+ enabled: false
+---
+
+This guide will walk you through your first model fine-tuning project with Axolotl.
+
+## Quick Example {#sec-quick-example}
+
+Let's start by fine-tuning a small language model using LoRA. This example uses a 1B parameter model to ensure it runs on most GPUs.
+This assumes `axolotl` is already installed (if not, see our [Installation Guide](installation.qmd)).
+
+1. Download example configs:
+```shell
+axolotl fetch examples
+```
+
+2. Run the training:
+```shell
+axolotl train examples/llama-3/lora-1b.yml
+```
+
+That's it! Let's understand what just happened.
+
+## Understanding the Process {#sec-understanding}
+
+### The Configuration File {#sec-config}
+
+The YAML configuration file controls everything about your training. Here's what (part of) our example config looks like:
+
+```yaml
+base_model: NousResearch/Llama-3.2-1B
+# hub_model_id: username/custom_model_name
+
+datasets:
+ - path: teknium/GPT4-LLM-Cleaned
+ type: alpaca
+dataset_prepared_path: last_run_prepared
+val_set_size: 0.1
+output_dir: ./outputs/lora-out
+
+adapter: lora
+lora_model_dir:
+```
+
+See our [Config options](config.qmd) for more details.
+
+### Training {#sec-training}
+
+When you run `axolotl train`, Axolotl:
+
+1. Downloads the base model
+2. (If specified) applies LoRA adapter layers
+3. Loads and processes the dataset
+4. Runs the training loop
+5. Saves the trained model and/or LoRA weights
+
+## Your First Custom Training {#sec-custom}
+
+Let's modify the example for your own data:
+
+1. Create a new config file `my_training.yml`:
+
+```yaml
+base_model: NousResearch/Nous-Hermes-llama-1b-v1
+adapter: lora
+
+# Training settings
+micro_batch_size: 2
+num_epochs: 3
+learning_rate: 0.0003
+
+# Your dataset
+datasets:
+ - path: my_data.jsonl # Your local data file
+ type: alpaca # Or other format
+```
+
+This particular config LoRA fine-tunes a model on instruction-tuning data in the
+`alpaca` dataset format, which looks like this:
+
+```json
+{
+ "instruction": "Write a description of alpacas.",
+ "input": "",
+ "output": "Alpacas are domesticated South American camelids..."
+}
+```
+
+See our [Dataset Formats](dataset-formats) docs for the other supported formats and how
+to structure them.
+
+2. Prepare your JSONL data in the specified format (in this case, the expected `alpaca` format):
+
+```json
+{"instruction": "Classify this text", "input": "I love this!", "output": "positive"}
+{"instruction": "Classify this text", "input": "Not good at all", "output": "negative"}
+```
+
+Please consult the supported [Dataset Formats](dataset-formats/) for more details.
+
+3. Run the training:
+
+```shell
+axolotl train my_training.yml
+```
+
+## Common Tasks {#sec-common-tasks}
+
+### Testing Your Model {#sec-testing}
+
+After training, test your model:
+
+```shell
+axolotl inference my_training.yml --lora-model-dir="./outputs/lora-out"
+```
+
+### Preprocessing Data {#sec-preprocessing}
+
+For large datasets, preprocess first:
+
+```shell
+axolotl preprocess my_training.yml
+```
+
+### Using a UI {#sec-ui}
+
+Launch a Gradio interface:
+
+```shell
+axolotl inference my_training.yml --lora-model-dir="./outputs/lora-out" --gradio
+```
+
+## Next Steps {#sec-next-steps}
+
+Now that you have the basics, you might want to:
+
+- Try different model architectures
+- Experiment with hyperparameters
+- Use more advanced training methods
+- Scale up to larger models
+
+Check our other guides for details on these topics:
+
+- [Configuration Guide](config.qmd) - Full configuration options
+- [Dataset Formats](dataset-formats) - Working with different data formats
+- [Multi-GPU Training](multi-gpu.qmd)
+- [Multi-Node Training](multi-node.qmd)
diff --git a/docs/inference.qmd b/docs/inference.qmd
new file mode 100644
index 000000000..59e352c18
--- /dev/null
+++ b/docs/inference.qmd
@@ -0,0 +1,148 @@
+---
+title: "Inference Guide"
+format:
+ html:
+ toc: true
+ toc-depth: 3
+ number-sections: true
+ code-tools: true
+execute:
+ enabled: false
+---
+
+This guide covers how to use your trained models for inference, including model loading, interactive testing, and common troubleshooting steps.
+
+## Quick Start {#sec-quickstart}
+
+### Basic Inference {#sec-basic}
+
+::: {.panel-tabset}
+
+## LoRA Models
+
+```{.bash}
+axolotl inference your_config.yml --lora-model-dir="./lora-output-dir"
+```
+
+## Full Fine-tuned Models
+
+```{.bash}
+axolotl inference your_config.yml --base-model="./completed-model"
+```
+
+:::
+
+## Advanced Usage {#sec-advanced}
+
+### Gradio Interface {#sec-gradio}
+
+Launch an interactive web interface:
+
+```{.bash}
+axolotl inference your_config.yml --gradio
+```
+
+### File-based Prompts {#sec-file-prompts}
+
+Process prompts from a text file:
+
+```{.bash}
+cat /tmp/prompt.txt | axolotl inference your_config.yml \
+ --base-model="./completed-model" --prompter=None
+```
+
+### Memory Optimization {#sec-memory}
+
+For large models or limited memory:
+
+```{.bash}
+axolotl inference your_config.yml --load-in-8bit=True
+```
+
+## Merging LoRA Weights {#sec-merging}
+
+Merge LoRA adapters with the base model:
+
+```{.bash}
+axolotl merge-lora your_config.yml --lora-model-dir="./completed-model"
+```
+
+### Memory Management for Merging {#sec-memory-management}
+
+::: {.panel-tabset}
+
+## Configuration Options
+
+```{.yaml}
+gpu_memory_limit: 20GiB # Adjust based on your GPU
+lora_on_cpu: true # Process on CPU if needed
+```
+
+## Force CPU Merging
+
+```{.bash}
+CUDA_VISIBLE_DEVICES="" axolotl merge-lora ...
+```
+
+:::
+
+## Tokenization {#sec-tokenization}
+
+### Common Issues {#sec-tokenization-issues}
+
+::: {.callout-warning}
+Tokenization mismatches between training and inference are a common source of problems.
+:::
+
+To debug:
+
+1. Check training tokenization:
+```{.bash}
+axolotl preprocess your_config.yml --debug
+```
+
+2. Verify inference tokenization by decoding tokens before model input
+
+3. Compare token IDs between training and inference
+
+### Special Tokens {#sec-special-tokens}
+
+Configure special tokens in your YAML:
+
+```{.yaml}
+special_tokens:
+  bos_token: "<s>"
+  eos_token: "</s>"
+  unk_token: "<unk>"
+tokens:
+ - "<|im_start|>"
+ - "<|im_end|>"
+```
+
+## Troubleshooting {#sec-troubleshooting}
+
+### Common Problems {#sec-common-problems}
+
+::: {.panel-tabset}
+
+## Memory Issues
+
+- Use 8-bit loading
+- Reduce batch sizes
+- Try CPU offloading
+
+## Token Issues
+
+- Verify special tokens
+- Check tokenizer settings
+- Compare training and inference preprocessing
+
+## Performance Issues
+
+- Verify model loading
+- Check prompt formatting
+- Review temperature and sampling settings
+
+:::
+
+For more details, see our [debugging guide](debugging.qmd).
diff --git a/docs/installation.qmd b/docs/installation.qmd
new file mode 100644
index 000000000..f16e814cc
--- /dev/null
+++ b/docs/installation.qmd
@@ -0,0 +1,119 @@
+---
+title: "Installation Guide"
+format:
+ html:
+ toc: true
+ toc-depth: 3
+ number-sections: true
+ code-tools: true
+execute:
+ enabled: false
+---
+
+This guide covers all the ways you can install and set up Axolotl for your environment.
+
+## Requirements {#sec-requirements}
+
+- NVIDIA GPU (Ampere architecture or newer for `bf16` and Flash Attention) or AMD GPU
+- Python ≥3.10
+- PyTorch ≥2.4.1
+
+## Installation Methods {#sec-installation-methods}
+
+### PyPI Installation (Recommended) {#sec-pypi}
+
+```{.bash}
+pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
+```
+
+We use `--no-build-isolation` so that the build can detect an already-installed PyTorch
+version and avoid clobbering it, and so that dependency versions specific to that
+PyTorch version (and to other installed co-dependencies) are resolved correctly.
+
+### Edge/Development Build {#sec-edge-build}
+
+For the latest features between releases:
+
+```{.bash}
+git clone https://github.com/axolotl-ai-cloud/axolotl.git
+cd axolotl
+pip3 install packaging ninja
+pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
+```
+
+### Docker {#sec-docker}
+
+```{.bash}
+docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
+```
+
+For development with Docker:
+
+```{.bash}
+docker compose up -d
+```
+
+::: {.callout-tip}
+### Advanced Docker Configuration
+```{.bash}
+docker run --privileged --gpus '"all"' --shm-size 10g --rm -it \
+ --name axolotl --ipc=host \
+ --ulimit memlock=-1 --ulimit stack=67108864 \
+ --mount type=bind,src="${PWD}",target=/workspace/axolotl \
+ -v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
+ axolotlai/axolotl:main-latest
+```
+:::
+
+## Cloud Environments {#sec-cloud}
+
+### Cloud GPU Providers {#sec-cloud-gpu}
+
+For providers supporting Docker:
+
+- Use `axolotlai/axolotl-cloud:main-latest`
+- Available on:
+ - [Latitude.sh](https://latitude.sh/blueprint/989e0e79-3bf6-41ea-a46b-1f246e309d5c)
+ - [JarvisLabs.ai](https://jarvislabs.ai/templates/axolotl)
+ - [RunPod](https://runpod.io/gsc?template=v2ickqhz9s&ref=6i7fkpdz)
+
+### Google Colab {#sec-colab}
+
+Use our [example notebook](../examples/colab-notebooks/colab-axolotl-example.ipynb).
+
+## Platform-Specific Instructions {#sec-platform-specific}
+
+### macOS {#sec-macos}
+
+```{.bash}
+pip3 install --no-build-isolation -e '.'
+```
+
+See @sec-troubleshooting for Mac-specific issues.
+
+### Windows {#sec-windows}
+
+::: {.callout-important}
+We recommend using WSL2 (Windows Subsystem for Linux) or Docker.
+:::
+
+## Environment Managers {#sec-env-managers}
+
+### Conda/Pip venv {#sec-conda}
+
+1. Install Python ≥3.10
+2. Install PyTorch: https://pytorch.org/get-started/locally/
+3. Install Axolotl:
+ ```{.bash}
+ pip3 install packaging
+ pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
+ ```
+4. (Optional) Login to Hugging Face:
+ ```{.bash}
+ huggingface-cli login
+ ```
+
+## Troubleshooting {#sec-troubleshooting}
+
+If you encounter installation issues, see our [FAQ](faq.qmd) and [Debugging Guide](debugging.qmd).
diff --git a/docs/multi-gpu.qmd b/docs/multi-gpu.qmd
new file mode 100644
index 000000000..fe293b750
--- /dev/null
+++ b/docs/multi-gpu.qmd
@@ -0,0 +1,118 @@
+---
+title: "Multi-GPU Training Guide"
+format:
+ html:
+ toc: true
+ toc-depth: 3
+ number-sections: true
+ code-tools: true
+execute:
+ enabled: false
+---
+
+This guide covers advanced training configurations for multi-GPU setups using Axolotl.
+
+## Overview {#sec-overview}
+
+Axolotl supports several methods for multi-GPU training:
+
+- DeepSpeed (recommended)
+- FSDP (Fully Sharded Data Parallel)
+- FSDP + QLoRA
+
+## DeepSpeed {#sec-deepspeed}
+
+DeepSpeed is the recommended approach for multi-GPU training due to its stability and performance. It provides various optimization levels through ZeRO stages.
+
+### Configuration {#sec-deepspeed-config}
+
+Add to your YAML config:
+
+```{.yaml}
+deepspeed: deepspeed_configs/zero1.json
+```
+
+### Usage {#sec-deepspeed-usage}
+
+```{.bash}
+accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
+```
+
+### ZeRO Stages {#sec-zero-stages}
+
+We provide default configurations for:
+
+- ZeRO Stage 1 (`zero1.json`)
+- ZeRO Stage 2 (`zero2.json`)
+- ZeRO Stage 3 (`zero3.json`)
+
+Choose based on your memory requirements and performance needs.
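+
+For example, switching the same option to the stage 3 config shards optimizer state,
+gradients, and model parameters across GPUs (a minimal sketch; pick whichever bundled
+file fits your memory budget):
+
+```{.yaml}
+# Most aggressive sharding: lowest per-GPU memory use, at some throughput cost
+deepspeed: deepspeed_configs/zero3.json
+```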
+
+## FSDP {#sec-fsdp}
+
+### Basic FSDP Configuration {#sec-fsdp-config}
+
+```{.yaml}
+fsdp:
+ - full_shard
+ - auto_wrap
+fsdp_config:
+ fsdp_offload_params: true
+ fsdp_state_dict_type: FULL_STATE_DICT
+ fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
+```
+
+### FSDP + QLoRA {#sec-fsdp-qlora}
+
+For combining FSDP with QLoRA, see our [dedicated guide](fsdp_qlora.qmd).
+
+## Performance Optimization {#sec-performance}
+
+### Liger Kernel Integration {#sec-liger}
+
+::: {.callout-note}
+Liger Kernel provides efficient Triton kernels for LLM training, offering:
+
+- 20% increase in multi-GPU training throughput
+- 60% reduction in memory usage
+- Compatibility with both FSDP and DeepSpeed
+:::
+
+Configuration:
+
+```{.yaml}
+plugins:
+ - axolotl.integrations.liger.LigerPlugin
+liger_rope: true
+liger_rms_norm: true
+liger_glu_activation: true
+liger_layer_norm: true
+liger_fused_linear_cross_entropy: true
+```
+
+## Troubleshooting {#sec-troubleshooting}
+
+### NCCL Issues {#sec-nccl}
+
+For NCCL-related problems, see our [NCCL troubleshooting guide](nccl.qmd).
+
+### Common Problems {#sec-common-problems}
+
+::: {.panel-tabset}
+
+## Memory Issues
+
+- Reduce `micro_batch_size`
+- Reduce `eval_batch_size`
+- Adjust `gradient_accumulation_steps`
+- Consider using a higher ZeRO stage
+
+## Training Instability
+
+- Start with DeepSpeed ZeRO-2
+- Monitor loss values
+- Check learning rates
+
+:::
+
+For more detailed troubleshooting, see our [debugging guide](debugging.qmd).
diff --git a/styles.css b/styles.css
index 2ddf50c7b..2e5aa6de8 100644
--- a/styles.css
+++ b/styles.css
@@ -1 +1,5 @@
/* css styles */
+
+img[alt="Axolotl"] {
+ content: url("https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/887513285d98132142bf5db2a74eb5e0928787f1/image/axolotl_logo_digital_black.svg") !important;
+}