diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
deleted file mode 100644
index 29769efb5..000000000
--- a/.github/CONTRIBUTING.md
+++ /dev/null
@@ -1,76 +0,0 @@
-# Contributing to axolotl
-
-First of all, thank you for your interest in contributing to axolotl! We appreciate the time and effort you're willing to invest in making our project better. This document provides guidelines and information to make the contribution process as smooth as possible.
-
-## Table of Contents
-
-- [Code of Conduct](#code-of-conduct)
-- [Getting Started](#getting-started)
-- [How to Contribute](#how-to-contribute)
- - [Reporting Bugs](#reporting-bugs)
- - [Suggesting Enhancements](#suggesting-enhancements)
- - [Submitting Pull Requests](#submitting-pull-requests)
-- [Style Guidelines](#style-guidelines)
- - [Code Style](#code-style)
- - [Commit Messages](#commit-messages)
-- [Additional Resources](#additional-resources)
-
-## Code of Conduct
-
-All contributors are expected to adhere to our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it before participating in the axolotl community.
-
-## Getting Started
-
-Bugs? Please check for an existing open issue before creating a new [Issue](https://github.com/axolotl-ai-cloud/axolotl/issues/new).
-
-PRs are **greatly welcome**!
-
-1. Fork the repository and clone it to your local machine.
-2. Set up the development environment by following the instructions in the [README.md](https://github.com/axolotl-ai-cloud/axolotl/tree/main/README.md) file.
-3. Explore the codebase, run tests, and verify that everything works as expected.
-
-Please run the following to set up your environment:
-```bash
-pip3 install -r requirements-dev.txt -r requirements-tests.txt
-pre-commit install
-
-# test
-pytest tests/
-```
-
-## How to Contribute
-
-### Reporting Bugs
-
-If you encounter a bug or issue while using axolotl, please open a new issue on the [GitHub Issues](https://github.com/axolotl-ai-cloud/axolotl/issues) page. Provide a clear and concise description of the problem, steps to reproduce it, and any relevant error messages or logs.
-
-### Suggesting Enhancements
-
-We welcome ideas for improvements and new features. To suggest an enhancement, open a new issue on the [GitHub Issues](https://github.com/axolotl-ai-cloud/axolotl/issues) page. Describe the enhancement in detail, explain the use case, and outline the benefits it would bring to the project.
-
-### Submitting Pull Requests
-
-1. Create a new branch for your feature or bugfix. Use a descriptive name like `feature/your-feature-name` or `fix/your-bugfix-name`.
-2. Make your changes, following the [Style Guidelines](#style-guidelines) below.
-3. Test your changes and ensure that they don't introduce new issues or break existing functionality.
-4. Commit your changes, following the [commit message guidelines](#commit-messages).
-5. Push your branch to your fork on GitHub.
-6. Open a new pull request against the `main` branch of the axolotl repository. Include a clear and concise description of your changes, referencing any related issues.
-
-## Style Guidelines
-
-### Code Style
-
-axolotl uses [{codestyle}]({URLofCodestyle}) as its code style guide. Please ensure that your code follows these guidelines.
-
-### Commit Messages
-
-Write clear and concise commit messages that briefly describe the changes made in each commit. Use the imperative mood and start with a capitalized verb, e.g., "Add new feature" or "Fix bug in function".
-
-## Additional Resources
-
-- [GitHub Help](https://help.github.com/)
-- [GitHub Pull Request Documentation](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests)
-- [{codestyle}]({URLofCodestyle})
-
-Thank you once again for your interest in contributing to axolotl. We look forward to collaborating with you and creating an even better project together!
diff --git a/.nojekyll b/.nojekyll
index 497f4d5b7..da056d77f 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-17acecc2
\ No newline at end of file
+b8c9c3e0
\ No newline at end of file
diff --git a/FAQS.html b/FAQS.html
index c852cda19..74ddf6d4b 100644
--- a/FAQS.html
+++ b/FAQS.html
@@ -124,9 +124,27 @@ ul.task-list li input[type="checkbox"] {
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 000000000..d64569567
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/TODO.html b/TODO.html
index f800a8985..455016de2 100644
--- a/TODO.html
+++ b/TODO.html
@@ -124,9 +124,27 @@ ul.task-list li input[type="checkbox"] {
@@ -304,9 +365,21 @@ ul.task-list li input[type="checkbox"] {
Stepwise Supervised
-
The stepwise supervised format is designed for chain-of-thought (COT) reasoning datasets where each example contains multiple completion steps and a preference label for each step. ### ExampleHere’s a simple example of a stepwise supervised dataset entry:```json { “prompt”: “Which number is larger, 9.8 or 9.11?”, “completions”: [ “The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11.”, “Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8.” ], “labels”: [true, false] }
+
The stepwise supervised format is designed for chain-of-thought (COT) reasoning datasets where each example contains multiple completion steps and a preference label for each step.
+
+
Example
+
Here’s a simple example of a stepwise supervised dataset entry:
+
{
+  "prompt": "Which number is larger, 9.8 or 9.11?",
+  "completions": [
+    "The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11.",
+    "Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8."
+  ],
+  "labels": [true, false]
+}
+
diff --git a/docs/dataset-formats/template_free.html b/docs/dataset-formats/template_free.html
index 398c3fd5e..5056931d5 100644
--- a/docs/dataset-formats/template_free.html
+++ b/docs/dataset-formats/template_free.html
@@ -125,9 +125,27 @@ ul.task-list li input[type="checkbox"] {
This guide will walk you through your first model fine-tuning project with Axolotl.
+
+
1 Quick Example
+
Let’s start by fine-tuning a small language model using LoRA. This example uses a 1B parameter model to ensure it runs on most GPUs. Assuming axolotl is installed (if not, see our Installation Guide):
+
+
Download example configs:
+
+
axolotl fetch examples
+
+
Run the training:
+
+
axolotl train examples/llama-3/lora-1b.yml
+
That’s it! Let’s understand what just happened.
+
+
+
2 Understanding the Process
+
+
2.1 The Configuration File
+
The YAML configuration file controls everything about your training. Here’s what (part of) our example config looks like:
base_model: NousResearch/Nous-Hermes-llama-1b-v1
+adapter: lora
+
+# Training settings
+micro_batch_size: 2
+num_epochs: 3
+learning_rate: 0.0003
+
+# Your dataset
+datasets:
+  - path: my_data.jsonl # Your local data file
+    type: alpaca # Or other format
+
This specific config is for LoRA fine-tuning a model with instruction-tuning data in the alpaca dataset format, which looks like this:
+
{
+  "instruction": "Write a description of alpacas.",
+  "input": "",
+  "output": "Alpacas are domesticated South American camelids..."
+}
+
Please see our Dataset Formats for more dataset formats and how to format them.
+
+
Prepare your JSONL data in the specified format (in this case, the expected alpaca format):
+
+
{"instruction":"Classify this text","input":"I love this!","output":"positive"}
+{"instruction":"Classify this text","input":"Not good at all","output":"negative"}
+
Please consult the supported Dataset Formats for more details.
We use --no-build-isolation so that the installed PyTorch version (if any) is detected rather than clobbered, and so that the correct versions of dependencies specific to that PyTorch version and other installed co-dependencies are selected.
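For reference, the command that flag applies to is the PyPI install shown in the Quick Start elsewhere on this page:

```bash
# install axolotl with the optional flash-attn and deepspeed extras,
# without build isolation so the existing torch install is detected
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
```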
This guide covers advanced training configurations for multi-GPU setups using Axolotl.
+
+
1 Overview
+
Axolotl supports several methods for multi-GPU training:
+
+
DeepSpeed (recommended)
+
FSDP (Fully Sharded Data Parallel)
+
FSDP + QLoRA
+
+
+
+
2 DeepSpeed
+
DeepSpeed is the recommended approach for multi-GPU training due to its stability and performance. It provides various optimization levels through ZeRO stages.
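As a minimal sketch of what this looks like in practice (the ZeRO-1 config file name is an assumption based on the defaults that `axolotl fetch deepspeed_configs` downloads, so verify the path on your machine), you point your training YAML at a DeepSpeed JSON config and launch training as usual:

```yaml
# in your training config (file name assumed from the fetched defaults)
deepspeed: deepspeed_configs/zero1.json
```

```bash
axolotl fetch deepspeed_configs   # optional: download the bundled ZeRO configs
axolotl train examples/llama-3/lora-1b.yml
```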
Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.
-
Features: - Train various Huggingface models such as llama, pythia, falcon, mpt - Supports fullfinetune, lora, qlora, relora, and gptq - Customize configurations using a simple yaml file or CLI overwrite - Load different dataset formats, use custom formats, or bring your own tokenized datasets - Integrated with xformer, flash attention, liger kernel, rope scaling, and multipacking - Works with single GPU or multiple GPUs via FSDP or Deepspeed - Easily run with Docker locally or on the cloud - Log results and optionally checkpoints to wandb, mlflow or Comet - And more!
-
-
-
Quickstart ⚡
-
Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.
-
Requirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention) or AMD GPU, Python >=3.10 and PyTorch >=2.4.1.
-
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
-
-# download examples and optionally deepspeed configs to the local path
-axolotl fetch examples
-axolotl fetch deepspeed_configs # OPTIONAL
-
-# finetune using lora
-axolotl train examples/llama-3/lora-1b.yml
-
-
Edge Builds 🏎️
-
If you’re looking for the latest features and updates between releases, you’ll need to install from source.
We now support a new, more streamlined CLI using click.
-
# preprocess datasets - optional but recommended
-CUDA_VISIBLE_DEVICES="0"axolotl preprocess examples/llama-3/lora-1b.yml
-
-# finetune lora
-axolotl train examples/llama-3/lora-1b.yml
-
-# inference
-axolotl inference examples/llama-3/lora-1b.yml \
---lora-model-dir="./outputs/lora-out"
-
-# gradio
-axolotl inference examples/llama-3/lora-1b.yml \
---lora-model-dir="./outputs/lora-out"--gradio
-
-# remote yaml files - the yaml config can be hosted on a public URL
-# Note: the yaml config must directly link to the **raw** yaml
-axolotl train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml
-
We’ve also added a new command for fetching examples and deepspeed_configs to your local machine. This will come in handy when installing axolotl from PyPI.
-
# Fetch example YAML files (stores in "examples/" folder)
-axolotl fetch examples
-
-# Fetch deepspeed config files (stores in "deepspeed_configs/" folder)
-axolotl fetch deepspeed_configs
-
-# Optionally, specify a destination folder
-axolotl fetch examples --dest path/to/folder
-
-
-
Legacy Usage
-
-
-Click to Expand
-
-
While the Axolotl CLI is the preferred method for interacting with axolotl, we still support the legacy -m axolotl.cli.* usage.
-
# preprocess datasets - optional but recommended
-CUDA_VISIBLE_DEVICES="0"python-m axolotl.cli.preprocess examples/llama-3/lora-1b.yml
-
-# finetune lora
-accelerate launch -m axolotl.cli.train examples/llama-3/lora-1b.yml
-
-# inference
-accelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \
---lora_model_dir="./outputs/lora-out"
-
-# gradio
-accelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \
---lora_model_dir="./outputs/lora-out"--gradio
-
-# remote yaml files - the yaml config can be hosted on a public URL
-# Note: the yaml config must directly link to the **raw** yaml
-accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml
-
-
-
-
-
Badge ❤🏷️
-
Building something cool with Axolotl? Consider adding a badge to your model card.
-
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
-
-
-
-
Sponsors 🤝❤
-
If you love axolotl, consider sponsoring the project by reaching out directly to wing@axolotl.ai.
-
+
Axolotl is a tool designed to streamline post-training for various AI models. Post-training refers to any modifications or additional training performed on pre-trained models - including full model fine-tuning, parameter-efficient tuning (like LoRA and QLoRA), supervised fine-tuning (SFT), instruction tuning, and alignment techniques. With support for multiple model architectures and training configurations, Axolotl makes it easy to get started with these techniques.
+
Axolotl is designed to work with YAML config files that contain everything you need to preprocess a dataset, train or fine-tune a model, run model inference or evaluation, and much more.
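For example, the same config file is reused across the stages of the workflow via the CLI commands shown throughout this page (a sketch using the example config and output directory referenced elsewhere in this guide):

```bash
axolotl preprocess examples/llama-3/lora-1b.yml                                       # prepare/tokenize the dataset
axolotl train examples/llama-3/lora-1b.yml                                            # train or fine-tune
axolotl inference examples/llama-3/lora-1b.yml --lora-model-dir="./outputs/lora-out"  # run inference
```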
+
Features:
-
Modal
Modal lets you run data/AI jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.
+
Train various Huggingface models such as llama, pythia, falcon, mpt
+
Supports fullfinetune, lora, qlora, relora, and gptq
+
Customize configurations using a simple yaml file or CLI overwrite
+
Load different dataset formats, use custom formats, or bring your own tokenized datasets
+
Integrated with xformers, flash attention, liger kernel, rope scaling, and multipacking
+
Works with single GPU or multiple GPUs via FSDP or Deepspeed
+
Easily run with Docker locally or on the cloud
+
Log results and optionally checkpoints to wandb, mlflow or Comet
+
And more!
+
+
+
🚀 Quick Start
+
Requirements: NVIDIA GPU (Ampere or newer for bf16 and Flash Attention) or AMD GPU, Python ≥3.10, PyTorch ≥2.4.1.
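A minimal quick start, using the install and example commands shown elsewhere on this page:

```bash
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]

# download example configs and (optionally) deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs  # OPTIONAL

# finetune using lora
axolotl train examples/llama-3/lora-1b.yml
```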
Bugs? Please check the open issues before creating a new Issue.
-
PRs are greatly welcome!
-
Please run the quickstart instructions followed by the commands below to set up your environment:
-
pip3 install -r requirements-dev.txt -r requirements-tests.txt
-pre-commit install
-
-# test
-pytest tests/
-
-# optional: run against all files
-pre-commit run --all-files
-
Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.
-
+
🌟 Contributing
+
Contributions are welcome! Please see our Contributing Guide for details.
-
-
Axolotl supports
+
+
Supported Models
@@ -676,410 +603,17 @@ Click to Expand
✅: supported ❌: not supported ❓: untested
-
-
Advanced Setup
-
-
Environment
-
-
Docker
-
docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
It additionally:
Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args.
Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args.
The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal.
The --privileged flag gives all capabilities to the container.
The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.
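Putting those flags together, the advanced invocation from the Installation Guide looks like this:

```bash
docker run --privileged --gpus '"all"' --shm-size 10g --rm -it \
  --name axolotl --ipc=host \
  --ulimit memlock=-1 --ulimit stack=67108864 \
  --mount type=bind,src="${PWD}",target=/workspace/axolotl \
  -v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
  axolotlai/axolotl:main-latest
```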
Modal - Modal lets you run jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.
To launch on GPU instances (both on-demand and spot) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.
Then simply run the job with the dstack run command. Append the --spot option if you want a spot instance. The dstack run command will show you the cheapest instance across multiple cloud services:
For further and more fine-grained use cases, please refer to the official dstack documentation and the detailed description of the axolotl example in the official repository.
-
-
-
-
Dataset
-
Axolotl supports a variety of dataset formats. It is recommended to use a JSONL. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.
-
See the documentation for more information on how to use different dataset formats.
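As a small illustration (reusing the alpaca-format example and the local-file dataset stanza shown elsewhere on this page), a JSONL file and its matching datasets entry look like:

```json
{"instruction": "Classify this text", "input": "I love this!", "output": "positive"}
```

```yaml
datasets:
  - path: data.jsonl # or json
    ds_type: json
    type: alpaca
```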
-
-
-
Config
-
See examples for quick start. It is recommended to duplicate and modify to your needs. The most important options are:
-
-
model
-
base_model: ./llama-7b-hf # local or huggingface repo
-
Note: The code will load the right architecture.
-
dataset
-
datasets:
- # huggingface repo
--path: vicgalle/alpaca-gpt4
-type: alpaca
-
- # huggingface repo with specific configuration/subset
--path: EleutherAI/pile
-name: enron_emails
-type: completion # format from earlier
-field: text # Optional[str] default: text, field to use for completion data
-
- # huggingface repo with multiple named configurations/subsets
--path: bigcode/commitpackft
-name:
-- ruby
-- python
-- typescript
-type: ... # unimplemented custom format
-
- # chat_template https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html#chat_template
--path: ...
-type: chat_template
-chat_template: chatml # defaults to tokenizer's chat_template
-
- # local
--path: data.jsonl # or json
-ds_type: json # see other options below
-type: alpaca
-
- # dataset with splits, but no train split
--path: knowrohit07/know_sql
-type: context_qa.load_v2
-train_on_split: validation
-
- # loading from s3 or gcs
- # s3 creds will be loaded from the system default / gcs will attempt to load from gcloud creds, google metadata service, or anon
--path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above
- ...
-
- # Loading Data From a Public URL
- # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
--path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
-ds_type: json # this is the default, see other options below.
-
loading
-
load_in_4bit:true
-load_in_8bit:true
-
-bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
-fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
-tf32:true # require >=ampere
-
-bfloat16:true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
-float16:true # use instead of fp16 when you don't want AMP
-
Note: Repo does not do 4-bit quantization.
-
lora
-
adapter: lora # 'qlora' or leave blank for full finetune
-lora_r:8
-lora_alpha:16
-lora_dropout:0.05
-lora_target_modules:
-- q_proj
-- v_proj
[!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml
-
-
-
Preprocess dataset
-
You can optionally pre-tokenize the dataset with the following before finetuning. This is recommended for large datasets.
-
-
Set dataset_prepared_path: to a local folder for saving and loading pre-tokenized dataset.
-
(Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
-
(Optional): Use --debug to see preprocessed examples.
-
-
python -m axolotl.cli.preprocess your_config.yml
-
-
-
Multi-GPU
-
Below are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed is the recommended multi-GPU option currently because FSDP may experience loss instability.
-
-
DeepSpeed
-
Deepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU’s VRAM. More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated
-
We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.
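If you installed from PyPI, you can pull those default configs locally first with the fetch command shown earlier on this page:

```bash
# downloads the default ZeRO stage 1/2/3 JSON configs into ./deepspeed_configs
axolotl fetch deepspeed_configs
```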
It is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:
-
special_tokens:
-bos_token:"<s>"
-eos_token:"</s>"
-unk_token:"<unk>"
-tokens: # these are delimiters
--"<|im_start|>"
--"<|im_end|>"
-
When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.
-
-
-
Liger Kernel
-
Liger Kernel: Efficient Triton Kernels for LLM Training
-
https://github.com/linkedin/Liger-Kernel
-
Liger (LinkedIn GPU Efficient Runtime) Kernel is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. The Liger Kernel composes well and is compatible with both FSDP and Deepspeed.
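A sketch of enabling it from an axolotl config; the plugin path and flag names below are assumptions based on the Liger integration and may differ in your installed version, so double-check them before use:

```yaml
# assumed plugin path and option names - verify against your axolotl version
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
```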
Axolotl allows you to load your model in an interactive terminal playground for quick experimentation. The config file is the same config file used for training.
-
Pass the appropriate flag to the inference command, depending upon what kind of model was trained:
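For example, for a LoRA adapter the flag is the same one used by the inference commands earlier on this page (a full-weights fine-tune would point at the trained weights instead):

```bash
axolotl inference examples/llama-3/lora-1b.yml \
  --lora-model-dir="./outputs/lora-out"
```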
Please use --sample_packing False if you have it on and receive an error similar to the one below:
-
-
RuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1
-
-
-
-
Merge LORA to base
-
The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.
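A minimal sketch of that command using the legacy module entrypoint used elsewhere on this page (the exact entrypoint may differ by version, and the adapter directory shown is the example output path used in this guide):

```bash
python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./outputs/lora-out"
```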
You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with
If it does not help, try running without deepspeed and without accelerate (replace “accelerate launch” with “python”) in the command.
-
Using adamw_bnb_8bit might also save you some memory.
-
-
failed (exitcode: -9)
-
-
Usually means your system has run out of system memory. Similarly, you should consider reducing the same settings as when you run out of VRAM. Additionally, look into upgrading your system RAM which should be simpler than GPU upgrades.
-
-
RuntimeError: expected scalar type Float but found Half
-
-
Try setting fp16: true
-
-
NotImplementedError: No operator found for memory_efficient_attention_forward …
For many formats, Axolotl constructs prompts by concatenating token ids after tokenizing strings. The reason for concatenating token ids rather than operating on strings is to maintain precise accounting for attention masks.
-
If you decode a prompt constructed by axolotl, you might see spaces between tokens (or lack thereof) that you do not expect, especially around delimiters and special tokens. When you are starting out with a new format, you should always do the following:
-
-
Materialize some data using python -m axolotl.cli.preprocess your_config.yml --debug, and then decode the first few rows with your model’s tokenizer.
-
During inference, right before you pass a tensor of token ids to your model, decode these tokens back into a string.
-
Make sure the inference string from #2 looks exactly like the data you fine tuned on from #1, including spaces and new lines. If they aren’t the same, adjust your inference server accordingly.
-
As an additional troubleshooting step, you can look at the token ids between 1 and 2 to make sure they are identical.
-
-
Having misalignment between your prompts during training and inference can cause models to perform very poorly, so it is worth checking this. See this blog post for a concrete example.
-
-
-
-
Debugging Axolotl
-
See this debugging guide for tips on debugging Axolotl, along with an example configuration for debugging with VSCode.
-
-
-
Need help? 🙋
-
Join our Discord server where our community members can help you.
-
Need dedicated support? Please contact us at ✉️wing@axolotl.ai for dedicated support options.
+
+
📜 License
+
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
diff --git a/search.json b/search.json
index 15581367d..35020b75f 100644
--- a/search.json
+++ b/search.json
@@ -347,178 +347,251 @@
]
},
{
- "objectID": "docs/multipack.html",
- "href": "docs/multipack.html",
- "title": "Multipack (Sample Packing)",
+ "objectID": "docs/installation.html",
+ "href": "docs/installation.html",
+ "title": "Installation Guide",
"section": "",
- "text": "Because Flash Attention simply drops the attention mask, we do not need to construct a 4d attention mask. We only need to concatenate the sequences into a single batch and let flash attention know where each new sequence begins.\n4k context, bsz =4, each character represents 256 tokens X represents a padding token\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B ]\n C C C C C C C ]\n D D D D ]]\n\n[[ E E E E E E E E ]\n [ F F F F ]\n [ G G G ]\n [ H H H H ]]\n\n[[ I I I ]\n [ J J J ]\n [ K K K K K]\n [ L L L ]]\nafter padding to longest input in each step\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B X X X X X X ]\n C C C C C C C X X X X ]\n D D D D X X X X X X X ]]\n\n[[ E E E E E E E E ]\n [ F F F F X X X X ]\n [ G G G X X X X X ]\n [ H H H H X X X X ]]\n\n[[ I I I X X ]\n [ J J J X X ]\n [ K K K K K ]\n [ L L L X X ]]\nw packing ( note it’s the same effective number of tokens per step, but a true bsz of 1)\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A B B B B B\n B C C C C C C C D D D D E E E E\n E E E E F F F F F G G G H H H H\n I I I J J J J K K K K K L L L X ]]\ncu_seqlens: [[ 0, 11, 17, 24, 28, 36, 41 44, 48, 51, 55, 60, 64]]",
+ "text": "This guide covers all the ways you can install and set up Axolotl for your environment.",
"crumbs": [
"How-To Guides",
- "Multipack (Sample Packing)"
+ "Installation Guide"
]
},
{
- "objectID": "docs/multipack.html#visualization-of-multipack-with-flash-attention",
- "href": "docs/multipack.html#visualization-of-multipack-with-flash-attention",
- "title": "Multipack (Sample Packing)",
+ "objectID": "docs/installation.html#sec-requirements",
+ "href": "docs/installation.html#sec-requirements",
+ "title": "Installation Guide",
+ "section": "1 Requirements",
+ "text": "1 Requirements\n\nNVIDIA GPU (Ampere architecture or newer for bf16 and Flash Attention) or AMD GPU\nPython ≥3.10\nPyTorch ≥2.4.1",
+ "crumbs": [
+ "How-To Guides",
+ "Installation Guide"
+ ]
+ },
+ {
+ "objectID": "docs/installation.html#sec-installation-methods",
+ "href": "docs/installation.html#sec-installation-methods",
+ "title": "Installation Guide",
+ "section": "2 Installation Methods",
+ "text": "2 Installation Methods\n\n2.1 PyPI Installation (Recommended)\npip3 install --no-build-isolation axolotl[flash-attn,deepspeed]\nWe use --no-build-isolation in order to detect the installed PyTorch version (if installed) in order not to clobber it, and so that we set the correct version of dependencies that are specific to the PyTorch version or other installed co-dependencies.\n\n\n2.2 Edge/Development Build\nFor the latest features between releases:\ngit clone https://github.com/axolotl-ai-cloud/axolotl.git\ncd axolotl\npip3 install packaging ninja\npip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n\n\n2.3 Docker\ndocker run --gpus '\"all\"' --rm -it axolotlai/axolotl:main-latest\nFor development with Docker:\ndocker compose up -d\n\n\n\n\n\n\nAdvanced Docker Configuration\n\n\n\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it \\\n --name axolotl --ipc=host \\\n --ulimit memlock=-1 --ulimit stack=67108864 \\\n --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl \\\n -v ${HOME}/.cache/huggingface:/root/.cache/huggingface \\\n axolotlai/axolotl:main-latest",
+ "crumbs": [
+ "How-To Guides",
+ "Installation Guide"
+ ]
+ },
+ {
+ "objectID": "docs/installation.html#sec-cloud",
+ "href": "docs/installation.html#sec-cloud",
+ "title": "Installation Guide",
+ "section": "3 Cloud Environments",
+ "text": "3 Cloud Environments\n\n3.1 Cloud GPU Providers\nFor providers supporting Docker:\n\nUse axolotlai/axolotl-cloud:main-latest\nAvailable on:\n\nLatitude.sh\nJarvisLabs.ai\nRunPod\n\n\n\n\n3.2 Google Colab\nUse our example notebook.",
+ "crumbs": [
+ "How-To Guides",
+ "Installation Guide"
+ ]
+ },
+ {
+ "objectID": "docs/installation.html#sec-platform-specific",
+ "href": "docs/installation.html#sec-platform-specific",
+ "title": "Installation Guide",
+ "section": "4 Platform-Specific Instructions",
+ "text": "4 Platform-Specific Instructions\n\n4.1 macOS\npip3 install --no-build-isolation -e '.'\nSee Section 6 for Mac-specific issues.\n\n\n4.2 Windows\n\n\n\n\n\n\nImportant\n\n\n\nWe recommend using WSL2 (Windows Subsystem for Linux) or Docker.",
+ "crumbs": [
+ "How-To Guides",
+ "Installation Guide"
+ ]
+ },
+ {
+ "objectID": "docs/installation.html#sec-env-managers",
+ "href": "docs/installation.html#sec-env-managers",
+ "title": "Installation Guide",
+ "section": "5 Environment Managers",
+ "text": "5 Environment Managers\n\n5.1 Conda/Pip venv\n\nInstall Python ≥3.10\nInstall PyTorch: https://pytorch.org/get-started/locally/\nInstall Axolotl:\npip3 install packaging\npip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n(Optional) Login to Hugging Face:\nhuggingface-cli login",
+ "crumbs": [
+ "How-To Guides",
+ "Installation Guide"
+ ]
+ },
+ {
+ "objectID": "docs/installation.html#sec-troubleshooting",
+ "href": "docs/installation.html#sec-troubleshooting",
+ "title": "Installation Guide",
+ "section": "6 Troubleshooting",
+ "text": "6 Troubleshooting\nIf you encounter installation issues, see our FAQ and Debugging Guide.",
+ "crumbs": [
+ "How-To Guides",
+ "Installation Guide"
+ ]
+ },
+ {
+ "objectID": "docs/config.html",
+ "href": "docs/config.html",
+ "title": "Config options",
"section": "",
- "text": "Because Flash Attention simply drops the attention mask, we do not need to construct a 4d attention mask. We only need to concatenate the sequences into a single batch and let flash attention know where each new sequence begins.\n4k context, bsz =4, each character represents 256 tokens X represents a padding token\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B ]\n C C C C C C C ]\n D D D D ]]\n\n[[ E E E E E E E E ]\n [ F F F F ]\n [ G G G ]\n [ H H H H ]]\n\n[[ I I I ]\n [ J J J ]\n [ K K K K K]\n [ L L L ]]\nafter padding to longest input in each step\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B X X X X X X ]\n C C C C C C C X X X X ]\n D D D D X X X X X X X ]]\n\n[[ E E E E E E E E ]\n [ F F F F X X X X ]\n [ G G G X X X X X ]\n [ H H H H X X X X ]]\n\n[[ I I I X X ]\n [ J J J X X ]\n [ K K K K K ]\n [ L L L X X ]]\nw packing ( note it’s the same effective number of tokens per step, but a true bsz of 1)\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A B B B B B\n B C C C C C C C D D D D E E E E\n E E E E F F F F F G G G H H H H\n I I I J J J J K K K K K L L L X ]]\ncu_seqlens: [[ 0, 11, 17, 24, 28, 36, 41 44, 48, 51, 55, 60, 64]]",
+ "text": "# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files\n# This can also be a relative path to a model on disk\nbase_model: ./llama-7b-hf\n# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)\nbase_model_ignore_patterns:\n# If the base_model repo on hf hub doesn't include configuration .json files,\n# You can set that here, or leave this empty to default to base_model\nbase_model_config: ./llama-7b-hf\n# You can specify to choose a specific model revision from huggingface hub\nrevision_of_model:\n# Optional tokenizer configuration path in case you want to use a different tokenizer\n# than the one defined in the base model\ntokenizer_config:\n# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too\nmodel_type: AutoModelForCausalLM\n# Corresponding tokenizer for the model AutoTokenizer is a good choice\ntokenizer_type: AutoTokenizer\n# Trust remote code for untrusted source\ntrust_remote_code:\n# use_fast option for tokenizer loading from_pretrained, default to True\ntokenizer_use_fast:\n# Whether to use the legacy tokenizer setting, defaults to True\ntokenizer_legacy:\n# Resize the model embeddings when new tokens are added to multiples of 32\n# This is reported to improve training speed on some models\nresize_token_embeddings_to_32x:\n\n# (Internal use only)\n# Used to identify which the model is based on\nis_falcon_derived_model:\nis_llama_derived_model:\nis_qwen_derived_model:\n# Please note that if you set this to true, `padding_side` will be set to \"left\" by default\nis_mistral_derived_model:\n\n# optional overrides to the base model configuration\noverrides_of_model_config:\n # RoPE Scaling https://github.com/huggingface/transformers/pull/24653\n rope_scaling:\n type: # linear | dynamic\n factor: # float\n\n# optional overrides to the bnb 4bit quantization configuration\n# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig\nbnb_config_kwargs:\n # These are default values\n llm_int8_has_fp16_weight: false\n bnb_4bit_quant_type: nf4\n bnb_4bit_use_double_quant: true\n\n\n# Whether you are training a 4-bit GPTQ quantized model\ngptq: true\n\n# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer\nload_in_8bit: true\n# Use bitsandbytes 4 bit\nload_in_4bit:\n\n# Use CUDA bf16\nbf16: true # bool or 'full' for `bf16_full_eval`. require >=ampere\n# Use CUDA fp16\nfp16: true\n# Use CUDA tf32\ntf32: true # require >=ampere\n\n# No AMP (automatic mixed precision)\nbfloat16: true # require >=ampere\nfloat16: true\n\n# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset\ngpu_memory_limit: 20GiB\n# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge\nlora_on_cpu: true\n\n# A list of one or more datasets to finetune the model with\ndatasets:\n # HuggingFace dataset repo | s3://,gs:// path | \"json\" for local dataset, make sure to fill data_files\n - path: vicgalle/alpaca-gpt4\n # The type of prompt to use for training. 
[alpaca, gpteacher, oasst, reflection]\n type: alpaca # format | format:<prompt_style> (chat/instruct) | <prompt_strategies>.load_<load_fn>\n ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file\n data_files: # Optional[str] path to source data files\n shards: # Optional[int] number of shards to split data into\n name: # Optional[str] name of dataset configuration to load\n train_on_split: train # Optional[str] name of dataset split to load from\n revision: # Optional[str] The specific revision of the dataset to use when loading from the Hugging Face Hub. This can be a commit hash, tag, or branch name. If not specified, the latest version will be used. This parameter is ignored for local datasets.\n trust_remote_code: # Optional[bool] Trust remote code for untrusted source\n\n # Custom user instruction prompt\n - path: repo\n type:\n # The below are defaults. only set what's needed if you use a different column name.\n system_prompt: \"\"\n system_format: \"{system}\"\n field_system: system\n field_instruction: instruction\n field_input: input\n field_output: output\n\n # Customizable to be single line or multi-line\n # Use {instruction}/{input} as key to be replaced\n # 'format' can include {input}\n format: |-\n User: {instruction} {input}\n Assistant:\n # 'no_input_format' cannot include {input}\n no_input_format: \"{instruction} \"\n\n # For `completion` datsets only, uses the provided field instead of `text` column\n field:\n\n # Using chat template\n - path: ...\n # Set type to `chat_template` to use this strategy\n type: chat_template\n # Specify the name of the chat template to use\n # The name of the chat template to use for training, following values are supported:\n # - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default.\n # - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n # - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to if the tokenizer does not have a chat template else default to tokenizer. E.g. tokenizer_default_fallback_chatml.\n # - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n chat_template: tokenizer_default\n\n # Custom jinja chat template. Used only if `chat_template: jinja` or empty.\n chat_template_jinja:\n\n # Key containing the messages (default: \"messages\")\n field_messages: messages\n # Key for role in each message (default: \"role\")\n message_field_role: role\n # Key for content in each message (default: \"content\")\n message_field_content: content\n\n # Optional[Dict[str, List]]. Roles mapping in the messages. The default is:\n roles:\n user: [\"human\", \"user\"]\n assistant: [\"gpt\", \"assistant\"]\n system: [\"system\"]\n tool: [\"tool\"]\n\n # IMPORTANT: The following fields determine which parts of the conversation to train on.\n # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train\n # See examples at `docs/dataset-formats/conversation.qmd`\n # Note: If the below 4 fields are empty, defaults to training only on the last message.\n\n # Optional[List[str]]. Roles to train on. 
The tokens from these roles will be considered for the loss.\n roles_to_train: [\"assistant\"] # default\n # Optional[str]. Which EOS tokens to train on in the conversation. Possible values are:\n # - all: train on all EOS tokens\n # - turn (default): train on the EOS token at the end of each trainable turn\n # - last: train on the last EOS token in the conversation\n train_on_eos: last\n # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`.\n message_field_training: training\n # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn.\n # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train).\n message_field_training_detail: train_detail\n\n\n# If false, the datasets will not be shuffled and will keep their original order in `datasets`.\n# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.\nshuffle_merged_datasets: true\n\nDeduplicates datasets and test_datasets with identical entries.\ndataset_exact_deduplication: true\n\n# A list of one or more datasets to eval the model with.\n# You can use either test_datasets, or val_set_size, but not both.\ntest_datasets:\n - path: /workspace/data/eval.jsonl\n ds_type: json\n # You need to specify a split. For \"json\" datasets the default split is called \"train\".\n split: train\n type: completion\n data_files:\n - /workspace/data/eval.jsonl\n\n# use RL training: 'dpo', 'ipo', 'kto'\nrl:\n# whether to perform weighting if doing DPO training. Boolean.\ndpo_use_weighting:\n\n# reward modelling: `True` or `False`\nreward_model:\n\n# process reward modelling: `True` or `False`\nprocess_reward_model:\n\n# The name of the chat template to use for training, following values are supported:\n# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default value.\n# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n# - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer.\n# - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n# The selected chat template will be saved to the tokenizer_config.json for easier inferencing\n# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template.\nchat_template: tokenizer_default\n# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null.\nchat_template_jinja: null\n# Changes the default system message\ndefault_system_message: You are a helpful assistant. Please give a long and detailed answer. 
# Currently only supports chatml.\n# Axolotl attempts to save the dataset as an arrow after packing the data together so\n# subsequent training attempts load faster, relative path\ndataset_prepared_path: data/last_run_prepared\n# Push prepared dataset to hub\npush_dataset_to_hub: # repo path\n# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()`\n# if not set.\ndataset_processes: # defaults to os.cpu_count() if not set\n# Keep dataset in memory while preprocessing\n# Only needed if cached dataset is taking too much storage\ndataset_keep_in_memory:\n# push checkpoints to hub\nhub_model_id: # private repo path to push finetuned model\n# how to push checkpoints to hub\n# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy\nhub_strategy:\n# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets\n# Required to be true when used in combination with `push_dataset_to_hub`\nhf_use_auth_token: # boolean\n# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.\nval_set_size: 0.04\n# Num shards for whole dataset\ndataset_shard_num:\n# Index of shard to use for whole dataset\ndataset_shard_idx:\n\n# The maximum length of an input to train with, this should typically be less than 2048\n# as most models have a token/context limit of 2048\nsequence_len: 2048\n# Pad inputs so each step uses constant sized buffers\n# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently\npad_to_sequence_len:\n# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommend set to 'true'\nsample_packing:\n# Set to 'false' if getting errors during eval with sample_packing on.\neval_sample_packing:\n# You can set these packing optimizations AFTER starting a training at least once.\n# The trainer will provide recommended values for these values.\nsample_packing_eff_est:\ntotal_num_tokens:\n# Increasing the following values helps with packing, but usually only slightly (<%1.)\n# The number of samples packed at a time.\nsample_packing_group_size: 100000\n# The number of samples which can be packed into one sequence. Increase if using a large sequence_len with many short samples.\nsample_packing_bin_size: 200\n# whether to concatenate samples during pretraining\npretraining_sample_concatenation:\n\n# Use batch flattening for speedups when not using sample_packing\nbatch_flattening:\n\n# Passed through to transformers when loading the model when launched without accelerate\n# Use `sequential` when training w/ model parallelism to limit memory\ndevice_map:\n# Defines the max memory usage per gpu on the system. 
Passed through to transformers when loading the model.\nmax_memory:\n\n# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model\nadapter: lora\n# If you already have a lora model trained that you want to load, put that here.\n# This means after training, if you want to test the model, you should set this to the value of `output_dir`.\n# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.\nlora_model_dir:\n\n# LoRA hyperparameters\n# For more details about the following options, see:\n# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n# - k_proj\n# - o_proj\n# - gate_proj\n# - down_proj\n# - up_proj\nlora_target_linear: # If true, will target all linear modules\npeft_layers_to_transform: # The layer indices to transform, otherwise, apply to all layers\n\n# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.\n# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. It may vary for other models.\n# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.\n# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994\nlora_modules_to_save:\n# - embed_tokens\n# - lm_head\n\nlora_fan_in_fan_out: false\n\n# LoRA+ hyperparameters\n# For more details about the following options, see:\n# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py`\nloraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4.\nloraplus_lr_embedding: # loraplus learning rate for lora embedding layers. 
Default value is 1e-6.\n\npeft:\n # Configuration options for loftq initialization for LoRA\n # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization\n loftq_config:\n loftq_bits: # typically 4 bits\n\n# ReLoRA configuration\n# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed\nrelora_steps: # Number of steps per ReLoRA restart\nrelora_warmup_steps: # Number of per-restart warmup steps\nrelora_anneal_steps: # Number of anneal steps for each relora cycle\nrelora_prune_ratio: # threshold for optimizer magnitude when pruning\nrelora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings\n\n# wandb configuration if you're using it\n# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.\nwandb_mode: # \"offline\" to save run metadata locally and not sync to the server, \"disabled\" to turn off wandb\nwandb_project: # Your wandb project name\nwandb_entity: # A wandb Team name if using a Team\nwandb_watch:\nwandb_name: # Set the name of your wandb run\nwandb_run_id: # Set the ID of your wandb run\nwandb_log_model: # \"checkpoint\" to log model to wandb Artifacts every `save_steps` or \"end\" to log only at the end of training\n\n# mlflow configuration if you're using it\nmlflow_tracking_uri: # URI to mlflow\nmlflow_experiment_name: # Your experiment name\nmlflow_run_name: # Your run name\nhf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry\n\n# Comet configuration if you're using it\n# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`.\n# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start\nuse_comet: # Enable or disable Comet integration.\ncomet_api_key: # API key for Comet. Recommended to set via `comet login`.\ncomet_workspace: # Workspace name in Comet. Defaults to the user's default workspace.\ncomet_project_name: # Project name in Comet. Defaults to Uncategorized.\ncomet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key.\ncomet_mode: # Create a new experiment (\"create\") or log to an existing one (\"get\"). Default (\"get_or_create\") auto-selects based on configuration.\ncomet_online: # Set to True to log data to Comet server, or False for offline storage. Default is True.\ncomet_experiment_config: # Dictionary for additional configuration settings, see the doc for more details.\n\n# Where to save the full-finetuned model to\noutput_dir: ./completed-model\n\n# Whether to use torch.compile and which backend to use\n# setting to `auto` will enable torch compile when torch>=2.5.1\ntorch_compile: # Optional[Union[Literal[\"auto\"], bool]]\ntorch_compile_backend: # Optional[str]\n\n# Training hyperparameters\n\n# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps.\ngradient_accumulation_steps: 1\n# The number of samples to include in each batch. 
This is the number of samples sent to each GPU.\n# Batch size per gpu = micro_batch_size * gradient_accumulation_steps\nmicro_batch_size: 2\neval_batch_size:\nnum_epochs: 4\nwarmup_steps: 100 # cannot use with warmup_ratio\nwarmup_ratio: 0.05 # cannot use with warmup_steps\nlearning_rate: 0.00003\nlr_quadratic_warmup:\nlogging_steps:\neval_steps: # Leave empty to eval at each epoch, integer for every N steps. float for fraction of total steps\nevals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps\neval_strategy: # Set to `\"no\"` to skip evaluation, `\"epoch\"` at end of each epoch, leave empty to infer from `eval_steps`.\nsave_strategy: # Set to `\"no\"` to skip checkpoint saves, `\"epoch\"` at end of each epoch, `\"best\"` when better result is achieved, leave empty to infer from `save_steps`.\nsave_steps: # Leave empty to save at each epoch, integer for every N steps. float for fraction of total steps\nsaves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps\nsave_total_limit: # Checkpoints saved at a time\n# Maximum number of iterations to train for. It precedes num_epochs which means that\n# if both are set, num_epochs will not be guaranteed.\n# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps\nmax_steps:\n\neval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0\neval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128\neval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is [\"sacrebleu\", \"comet\", \"ter\", \"chrf\", \"perplexity\"]\n\nprofiler_steps: # enable the pytorch profiler to capture the first N steps of training to the output_dir.\n # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information\n # snapshots can be visualized @ https://pytorch.org/memory_viz\n\nloss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)\nloss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)\n\n# Save model as safetensors (require safetensors package)\nsave_safetensors:\n\n# Whether to mask out or include the human's prompt from the training labels\ntrain_on_inputs: false\n# Group similarly sized data to minimize padding.\n# May be slower to start, as it must download and sort the entire dataset.\n# Note that training loss may have an oscillating pattern with this enabled.\ngroup_by_length: false\n\n# Whether to use gradient checkpointing https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing\ngradient_checkpointing: false\n# additional kwargs to pass to the trainer for gradient checkpointing\n# gradient_checkpointing_kwargs:\n# use_reentrant: true\n\n# Stop training after this many evaluation losses have increased in a row\n# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback\nearly_stopping_patience: 3\n\n# Specify a scheduler and kwargs to use with the optimizer\nlr_scheduler: # 'one_cycle' | 'log_sweep' | empty for cosine\nlr_scheduler_kwargs:\ncosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr\ncosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. 
cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)\n\n# For one_cycle optim\nlr_div_factor: # Learning rate div factor\n\n# Specify optimizer\n# Valid values are driven by the Transformers OptimizerNames class, see:\n# https://github.com/huggingface/transformers/blob/95b374952dc27d8511541d6f5a4e22c9ec11fb24/src/transformers/training_args.py#L134\n#\n# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of\n# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used\n# in the examples/ for your model and fine-tuning use case.\n#\n# Valid values for 'optimizer' include:\n# - adamw_hf\n# - adamw_torch\n# - adamw_torch_fused\n# - adamw_torch_xla\n# - adamw_apex_fused\n# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)\n# - adafactor\n# - adamw_anyprecision\n# - sgd\n# - adagrad\n# - adamw_bnb_8bit\n# - lion_8bit\n# - lion_32bit\n# - paged_adamw_32bit\n# - paged_adamw_8bit\n# - paged_lion_32bit\n# - paged_lion_8bit\n# - galore_adamw\n# - galore_adamw_8bit\n# - galore_adafactor\n# - galore_adamw_layerwise\n# - galore_adamw_8bit_layerwise\n# - galore_adafactor_layerwise\noptimizer:\n# Dictionary of arguments to pass to the optimizer\noptim_args:\n# For Galore Optimizers the following optim_args are available\n# rank: # type: int\n# update_proj_gap # type: int\n# scale # type: float\n# proj_type: # type: str, default = std\n\n# The target modules to optimize, i.e. the module names that you would like to train, right now this is used only for GaLore algorithm\noptim_target_modules:\n# - self_attn # for llama\n# - mlp\n\n# Specify weight decay\nweight_decay:\n# adamw hyperparams\nadam_beta1:\nadam_beta2:\nadam_epsilon:\n# Gradient clipping max norm\nmax_grad_norm:\n\n# Augmentation techniques\n# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings\n# currently only supported on Llama and Mistral\nneftune_noise_alpha:\n\n# Whether to bettertransformers\nflash_optimum:\n# Whether to use xformers attention patch https://github.com/facebookresearch/xformers:\nxformers_attention:\n# Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:\nflash_attention:\nflash_attn_cross_entropy: # Whether to use flash-attention cross entropy implementation - advanced use only\nflash_attn_rms_norm: # Whether to use flash-attention rms norm implementation - advanced use only\nflash_attn_fuse_qkv: # Whether to fuse QKV into a single operation\nflash_attn_fuse_mlp: # Whether to fuse part of the MLP into a single operation\n# Whether to use scaled-dot-product attention\n# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html\nsdp_attention:\n# Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf\ns2_attention:\n# Resume from a specific checkpoint dir\nresume_from_checkpoint:\n# If resume_from_checkpoint isn't set and you simply want it to start where it left off.\n# Be careful with this being turned on between different models.\nauto_resume_from_checkpoints: false\n\n# Don't mess with this, it's here for accelerate and torchrun\nlocal_rank:\n\n# Add or change special tokens.\n# If you add tokens here, you don't need to add them to the `tokens` list.\nspecial_tokens:\n # bos_token: \"<s>\"\n # eos_token: \"</s>\"\n # unk_token: \"<unk>\"\n # pad_token: \"[PAD]\"\n\n# 
Add extra tokens.\ntokens:\n\n# FSDP\nfsdp:\nfsdp_config:\n\n# Deepspeed config path. e.g., deepspeed_configs/zero3.json\ndeepspeed:\n\n# Advanced DDP Arguments\nddp_timeout:\nddp_bucket_cap_mb:\nddp_broadcast_buffers:\n\n# Path to torch distx for optim 'adamw_anyprecision'\ntorchdistx_path:\n\n# Set to an HF dataset for type: 'completion' to stream data instead of pre-tokenizing\npretraining_dataset:\n\n# Debug mode\ndebug:\n\n# Seed\nseed:\n\n# Allow overwriting the yml config from the cli\nstrict:",
"crumbs": [
- "How-To Guides",
- "Multipack (Sample Packing)"
+ "Reference",
+ "Config options"
]
},
{
- "objectID": "docs/multipack.html#multipack-without-flash-attention",
- "href": "docs/multipack.html#multipack-without-flash-attention",
- "title": "Multipack (Sample Packing)",
- "section": "Multipack without Flash Attention",
- "text": "Multipack without Flash Attention\nMultipack can still be achieved without Flash attention, but with lower packing efficiency as we are not able to join multiple batches into a single batch due to context length limits without flash attention. We can use either Pytorch’s Scaled Dot Product Attention implementation or native Pytorch attention implementation along with 4d attention masks to pack sequences together and avoid cross attention.",
- "crumbs": [
- "How-To Guides",
- "Multipack (Sample Packing)"
- ]
- },
- {
- "objectID": "docs/rlhf.html",
- "href": "docs/rlhf.html",
- "title": "RLHF (Beta)",
+ "objectID": "docs/lr_groups.html",
+ "href": "docs/lr_groups.html",
+ "title": "Learning Rate Groups",
"section": "",
- "text": "Overview\nReinforcement Learning from Human Feedback is a method whereby a language model is optimized from data using human feedback. Various methods include, but not limited to:\n\nProximal Policy Optimization (PPO) (not yet supported in axolotl)\nDirect Preference Optimization (DPO)\nIdentity Preference Optimization (IPO)\n\n\n\nRLHF using Axolotl\n\n[!IMPORTANT] This is a BETA feature and many features are not fully implemented. You are encouraged to open new PRs to improve the integration and functionality.\n\nThe various RL training methods are implemented in trl and wrapped via axolotl. Below are various examples with how you can use various preference datasets to train models that use ChatML\n\nDPO\nrl: dpo\ndatasets:\n - path: Intel/orca_dpo_pairs\n split: train\n type: chatml.intel\n - path: argilla/ultrafeedback-binarized-preferences\n split: train\n type: chatml.argilla\n\n\nIPO\nrl: ipo\n\n\nORPO\nPaper: https://arxiv.org/abs/2403.07691\nrl: orpo\norpo_alpha: 0.1\nremove_unused_columns: false\n\nchat_template: chatml\ndatasets:\n - path: argilla/ultrafeedback-binarized-preferences-cleaned\n type: chat_template.argilla\n\n\nKTO\nrl: kto\nrl_beta: 0.5\nkto_desirable_weight: 0.2\n\nremove_unused_columns: false\n\ndatasets:\n - path: argilla/ultrafeedback-binarized-preferences-cleaned-kto\n type: llama3.ultra\n split: train\n\ngradient_checkpointing: true\ngradient_checkpointing_kwargs:\n use_reentrant: true\n\n\nUsing local dataset files\ndatasets:\n - ds_type: json\n data_files:\n - orca_rlhf.jsonl\n split: train\n type: chatml.intel\n\n\nTrl autounwrap for peft\nTrl supports autounwrapping peft models, so that a ref model does not need to be additionally loaded, leading to less VRAM needed. This is on by default. To turn it off, pass the following config.\n# load ref model when adapter training.\nrl_adapter_ref_model: true",
- "crumbs": [
- "How-To Guides",
- "RLHF (Beta)"
- ]
+ "text": "Inspired by LoRA+, Axolotl allows practitioners to specify separate learning rates for each module or groups of modules in a model."
},
{
- "objectID": "docs/fsdp_qlora.html",
- "href": "docs/fsdp_qlora.html",
- "title": "FDSP + QLoRA",
+ "objectID": "docs/lr_groups.html#background",
+ "href": "docs/lr_groups.html#background",
+ "title": "Learning Rate Groups",
"section": "",
- "text": "Using FSDP with QLoRA is essential for fine-tuning larger (70b+ parameter) LLMs on consumer GPUs. For example, you can use FSDP + QLoRA to train a 70b model on two 24GB GPUs1.\nBelow, we describe how to use this feature in Axolotl.",
- "crumbs": [
- "How-To Guides",
- "FDSP + QLoRA"
- ]
+ "text": "Inspired by LoRA+, Axolotl allows practitioners to specify separate learning rates for each module or groups of modules in a model."
},
{
- "objectID": "docs/fsdp_qlora.html#background",
- "href": "docs/fsdp_qlora.html#background",
- "title": "FDSP + QLoRA",
+ "objectID": "docs/lr_groups.html#example",
+ "href": "docs/lr_groups.html#example",
+ "title": "Learning Rate Groups",
+ "section": "Example",
+ "text": "Example\nlr_groups:\n - name: o_proj\n modules:\n - self_attn.o_proj.weight\n lr: 1e-6\n - name: q_proj\n modules:\n - model.layers.2.self_attn.q_proj.weight\n lr: 1e-5\n\nlearning_rate: 2e-5\nIn this example, we have a default learning rate of 2e-5 across the entire model, but we have a separate learning rate of 1e-6 for all the self attention o_proj modules across all layers, and a learning are of 1e-5 to the 3rd layer’s self attention q_proj module."
+ },
+ {
+ "objectID": "docs/debugging.html",
+ "href": "docs/debugging.html",
+ "title": "Debugging",
"section": "",
- "text": "Using FSDP with QLoRA is essential for fine-tuning larger (70b+ parameter) LLMs on consumer GPUs. For example, you can use FSDP + QLoRA to train a 70b model on two 24GB GPUs1.\nBelow, we describe how to use this feature in Axolotl.",
+ "text": "This document provides some tips and tricks for debugging Axolotl. It also provides an example configuration for debugging with VSCode. A good debugging setup is essential to understanding how Axolotl code works behind the scenes.",
"crumbs": [
"How-To Guides",
- "FDSP + QLoRA"
+ "Debugging"
]
},
{
- "objectID": "docs/fsdp_qlora.html#usage",
- "href": "docs/fsdp_qlora.html#usage",
- "title": "FDSP + QLoRA",
- "section": "Usage",
- "text": "Usage\nTo enable QLoRA with FSDP, you need to perform the following steps:\n\n![Tip] See the example config file in addition to reading these instructions.\n\n\nSet adapter: qlora in your axolotl config file.\nEnable FSDP in your axolotl config, as described here.\nUse one of the supported model types: llama, mistral or mixtral.",
+ "objectID": "docs/debugging.html#table-of-contents",
+ "href": "docs/debugging.html#table-of-contents",
+ "title": "Debugging",
+ "section": "Table of Contents",
+ "text": "Table of Contents\n\nGeneral Tips\nDebugging with VSCode\n\nBackground\nConfiguration\nCustomizing your debugger\nVideo Tutorial\n\nDebugging With Docker\n\nSetup\nAttach To Container\nVideo - Attaching To Docker On Remote Host",
"crumbs": [
"How-To Guides",
- "FDSP + QLoRA"
+ "Debugging"
]
},
{
- "objectID": "docs/fsdp_qlora.html#example-config",
- "href": "docs/fsdp_qlora.html#example-config",
- "title": "FDSP + QLoRA",
- "section": "Example Config",
- "text": "Example Config\nexamples/llama-2/qlora-fsdp.yml contains an example of how to enable QLoRA + FSDP in axolotl.",
+ "objectID": "docs/debugging.html#general-tips",
+ "href": "docs/debugging.html#general-tips",
+ "title": "Debugging",
+ "section": "General Tips",
+ "text": "General Tips\nWhile debugging it’s helpful to simplify your test scenario as much as possible. Here are some tips for doing so:\n\n[!Important] All of these tips are incorporated into the example configuration for debugging with VSCode below.\n\n\nMake sure you are using the latest version of axolotl: This project changes often and bugs get fixed fast. Check your git branch and make sure you have pulled the latest changes from main.\nEliminate concurrency: Restrict the number of processes to 1 for both training and data preprocessing:\n\nSet CUDA_VISIBLE_DEVICES to a single GPU, ex: export CUDA_VISIBLE_DEVICES=0.\nSet dataset_processes: 1 in your axolotl config or run the training command with --dataset_processes=1.\n\nUse a small dataset: Construct or use a small dataset from HF Hub. When using a small dataset, you will often have to make sure sample_packing: False and eval_sample_packing: False to avoid errors. If you are in a pinch and don’t have time to construct a small dataset but want to use from the HF Hub, you can shard the data (this will still tokenize the entire dataset, but will only use a fraction of the data for training. For example, to shard the dataset into 20 pieces, add the following to your axolotl config): yaml dataset: ... shards: 20\nUse a small model: A good example of a small model is TinyLlama/TinyLlama-1.1B-Chat-v1.0.\nMinimize iteration time: Make sure the training loop finishes as fast as possible, with these settings.\n\nmicro_batch_size: 1\nmax_steps: 1\nval_set_size: 0\n\nClear Caches: Axolotl caches certain steps and so does the underlying HuggingFace trainer. You may want to clear some of these caches when debugging.\n\nData preprocessing: When debugging data preprocessing, which includes prompt template formation, you may want to delete the directory set in dataset_prepared_path: in your axolotl config. If you didn’t set this value, the default is last_run_prepared.\nHF Hub: If you are debugging data preprocessing, you should clear the relevant HF cache HuggingFace cache, by deleting the appropriate ~/.cache/huggingface/datasets/... folder(s).\nThe recommended approach is to redirect all outputs and caches to a temporary folder and delete selected subfolders before each run. This is demonstrated in the example configuration below.",
"crumbs": [
"How-To Guides",
- "FDSP + QLoRA"
+ "Debugging"
]
},
{
- "objectID": "docs/fsdp_qlora.html#references",
- "href": "docs/fsdp_qlora.html#references",
- "title": "FDSP + QLoRA",
- "section": "References",
- "text": "References\n\nPR #1378 enabling QLoRA in FSDP in Axolotl.\nBlog Post from the Answer.AI team describing the work that enabled QLoRA in FSDP.\nRelated HuggingFace PRs Enabling FDSP + QLoRA:\n\nAccelerate PR#2544\nTransformers PR#29587\nTRL PR#1416\nPEFT PR#1550",
+ "objectID": "docs/debugging.html#debugging-with-vscode",
+ "href": "docs/debugging.html#debugging-with-vscode",
+ "title": "Debugging",
+ "section": "Debugging with VSCode",
+ "text": "Debugging with VSCode\n\nBackground\nThe below example shows how to configure VSCode to debug data preprocessing of the chat_template format. This is the format used when you have the following in your axolotl config:\ndatasets:\n - path: <path to your chat_template formatted dataset> # example on HF Hub: fozziethebeat/alpaca_messages_2k_test\n type: chat_template\n\n[!Important] If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files .vscode/launch.json and .vscode/tasks.json for an example configuration.\n\n\n[!Tip] If you prefer to watch a video, rather than read, you can skip to the video tutorial below (but doing both is recommended).\n\n\n\nSetup\nMake sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:\npip3 install packaging\npip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n\nRemote Hosts\nIf you developing on a remote host, you can easily use VSCode to debug remotely. To do so, you will need to follow this remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.\n\n\n\nConfiguration\nThe easiest way to get started is to modify the .vscode/launch.json file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.\nFor example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_chat_template.yml, you would use the below configuration1. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.\n// .vscode/launch.json\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"name\": \"Debug axolotl prompt - chat_template\",\n \"type\": \"python\",\n \"module\": \"accelerate.commands.launch\",\n \"request\": \"launch\",\n \"args\": [\n \"-m\", \"axolotl.cli.train\", \"dev_chat_template.yml\",\n // The flags below simplify debugging by overriding the axolotl config\n // with the debugging tips above. 
Modify as needed.\n \"--dataset_processes=1\", // limits data preprocessing to one process\n \"--max_steps=1\", // limits training to just one step\n \"--batch_size=1\", // minimizes batch size\n \"--micro_batch_size=1\", // minimizes batch size\n \"--val_set_size=0\", // disables validation\n \"--sample_packing=False\", // disables sample packing which is necessary for small datasets\n \"--eval_sample_packing=False\",// disables sample packing on eval set\n \"--dataset_prepared_path=temp_debug/axolotl_outputs/data\", // send data outputs to a temp folder\n \"--output_dir=temp_debug/axolotl_outputs/model\" // send model outputs to a temp folder\n ],\n \"console\": \"integratedTerminal\", // show output in the integrated terminal\n \"cwd\": \"${workspaceFolder}/devtools\", // set working directory to devtools from the root of the project\n \"justMyCode\": true, // step through only axolotl code\n \"env\": {\"CUDA_VISIBLE_DEVICES\": \"0\", // Since we aren't doing distributed training, we need to limit to one GPU\n \"HF_HOME\": \"${workspaceFolder}/devtools/temp_debug/.hf-cache\"}, // send HF cache to a temp folder\n \"preLaunchTask\": \"cleanup-for-dataprep\", // delete temp folders (see below)\n }\n ]\n}\nAdditional notes about this configuration:\n\nThe argument justMyCode is set to true such that you step through only the axolotl code. If you want to step into dependencies, set this to false.\nThe preLaunchTask: cleanup-for-dataprep is defined in .vscode/tasks.json and is used to delete the following folders before debugging, which is essential to ensure that the data pre-processing code is run from scratch:\n\n./devtools/temp_debug/axolotl_outputs\n./devtools/temp_debug/.hf-cache/datasets\n\n\n\n[!Tip] You may not want to delete these folders. For example, if you are debugging model training instead of data pre-processing, you may NOT want to delete the cache or output folders. You may also need to add additional tasks to the tasks.json file depending on your use case.\n\nBelow is the ./vscode/tasks.json file that defines the cleanup-for-dataprep task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task cleanup-for-dataprep is a composite task that combines the two tasks. A composite task is necessary because VSCode does not allow you to specify multiple tasks in the preLaunchTask argument of the launch.json file.\n// .vscode/tasks.json\n// this file is used by launch.json\n{\n \"version\": \"2.0.0\",\n \"tasks\": [\n // this task changes into the devtools directory and deletes the temp_debug/axolotl_outputs folder\n {\n \"label\": \"delete-outputs\",\n \"type\": \"shell\",\n \"command\": \"rm -rf temp_debug/axolotl_outputs\",\n \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n \"problemMatcher\": []\n },\n // this task changes into the devtools directory and deletes the `temp_debug/.hf-cache/datasets` folder\n {\n \"label\": \"delete-temp-hf-dataset-cache\",\n \"type\": \"shell\",\n \"command\": \"rm -rf temp_debug/.hf-cache/datasets\",\n \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n \"problemMatcher\": []\n },\n // this task combines the two tasks above\n {\n \"label\": \"cleanup-for-dataprep\",\n \"dependsOn\": [\"delete-outputs\", \"delete-temp-hf-dataset-cache\"],\n }\n ]\n}\n\n\nCustomizing your debugger\nYour debugging use case may differ from the example above. 
The easiest thing to do is to put your own axolotl config in the devtools folder and modify the launch.json file to use your config. You may also want to modify the preLaunchTask to delete different folders or not delete anything at all.\n\n\nVideo Tutorial\nThe following video tutorial walks through the above configuration and demonstrates how to debug with VSCode (click the image below to watch):\n\n\n\nHamel Husain's tutorial: Debugging Axolotl w/VSCode",
"crumbs": [
"How-To Guides",
- "FDSP + QLoRA"
+ "Debugging"
]
},
{
- "objectID": "docs/fsdp_qlora.html#footnotes",
- "href": "docs/fsdp_qlora.html#footnotes",
- "title": "FDSP + QLoRA",
+ "objectID": "docs/debugging.html#debugging-with-docker",
+ "href": "docs/debugging.html#debugging-with-docker",
+ "title": "Debugging",
+ "section": "Debugging With Docker",
+ "text": "Debugging With Docker\nUsing official Axolotl Docker images is a great way to debug your code, and is a very popular way to use Axolotl. Attaching VSCode to Docker takes a few more steps.\n\nSetup\nOn the host that is running axolotl (ex: if you are using a remote host), clone the axolotl repo and change your current directory to the root:\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\n[!Tip] If you already have axolotl cloned on your host, make sure you have the latest changes and change into the root of the project.\n\nNext, run the desired docker image and mount the current directory. Below is a docker command you can run to do this:2\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface axolotlai/axolotl:main-py3.10-cu118-2.0.1\n\n[!Tip] To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl’s Docker CI builds.\n\nYou will now be in the container. Next, perform an editable install of Axolotl:\npip3 install packaging\npip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n\n\nAttach To Container\nNext, if you are using a remote host, Remote into this host with VSCode. If you are using a local host, you can skip this step.\nNext, select Dev Containers: Attach to Running Container... using the command palette (CMD + SHIFT + P) in VSCode. You will be prompted to select a container to attach to. Select the container you just created. You will now be in the container with a working directory that is at the root of the project. Any changes you make to the code will be reflected both in the container and on the host.\nNow you are ready to debug as described above (see Debugging with VSCode).\n\n\nVideo - Attaching To Docker On Remote Host\nHere is a short video that demonstrates how to attach to a Docker container on a remote host:\n\n\n\nHamel Husain’s tutorial: Debugging Axolotl Part 2: Attaching to Docker on a Remote Host",
+ "crumbs": [
+ "How-To Guides",
+ "Debugging"
+ ]
+ },
+ {
+ "objectID": "docs/debugging.html#footnotes",
+ "href": "docs/debugging.html#footnotes",
+ "title": "Debugging",
"section": "Footnotes",
- "text": "Footnotes\n\n\nThis was enabled by this work from the Answer.AI team.↩︎",
+ "text": "Footnotes\n\n\nThe config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/chat_template.yml, but this is the same thing.↩︎\nMany of the below flags are recommended best practices by Nvidia when using nvidia-container-toolkit. You can read more about these flags here.↩︎",
"crumbs": [
"How-To Guides",
- "FDSP + QLoRA"
+ "Debugging"
]
},
{
- "objectID": "docs/ray-integration.html",
- "href": "docs/ray-integration.html",
- "title": "Ray Train integration",
+ "objectID": "docs/dataset_preprocessing.html",
+ "href": "docs/dataset_preprocessing.html",
+ "title": "Dataset Preprocessing",
"section": "",
- "text": "Axolotl supports using Ray as an alternative to accelerate for orchestrating training. This is especially useful for multi-node training since you only have to setup code and dependencies in a single node and launch training as if you were using a single node.\nWith the --use-ray CLI flag, Axolotl will use Ray Train’s TorchTrainer to run training.",
- "crumbs": [
- "How-To Guides",
- "Ray Train integration"
- ]
+ "text": "Dataset pre-processing is the step where Axolotl takes each dataset you’ve configured alongside the (dataset format)[../dataset-formats/] and prompt strategies to: - parse the dataset based on the dataset format - transform the dataset to how you would interact with the model based on the prompt strategy - tokenize the dataset based on the configured model & tokenizer - shuffle and merge multiple datasets together if using more than one\nThe processing of the datasets can happen one of two ways:\n\nBefore kicking off training by calling python -m axolotl.cli.preprocess /path/to/your.yaml --debug\nWhen training is started\n\nWhat are the benefits of pre-processing? When training interactively or for sweeps (e.g. you are restarting the trainer often), processing the datasets can oftentimes be frustratingly slow. Pre-processing will cache the tokenized/formatted datasets according to a hash of dependent training parameters so that it will intelligently pull from its cache when possible.\nThe path of the cache is controlled by dataset_prepared_path: and is often left blank in example YAMLs as this leads to a more robust solution that prevents unexpectedly reusing cached data.\nIf dataset_prepared_path: is left empty, when training, the processed dataset will be cached in a default path of ./last_run_prepared/, but will ignore anything already cached there. By explicitly setting dataset_prepared_path: ./last_run_prepared, the trainer will use whatever pre-processed data is in the cache.\nWhat are the edge cases? Let’s say you are writing a custom prompt strategy or using a user-defined prompt template. Because the trainer cannot readily detect these changes, we cannot change the calculated hash value for the pre-processed dataset. If you have dataset_prepared_path: ... set and change your prompt templating logic, it may not pick up the changes you made and you will be training over the old prompt."
},
{
- "objectID": "docs/ray-integration.html#ray-cluster-setup",
- "href": "docs/ray-integration.html#ray-cluster-setup",
- "title": "Ray Train integration",
- "section": "Ray cluster setup",
- "text": "Ray cluster setup\nA prerequisite using the Ray Train integration is to setup a Ray cluster on your desired node(s). For a detailed guide on how you can get started with ray clusters, check the official Ray docs here: https://docs.ray.io/en/latest/cluster/getting-started.html\nEvery Ray cluster has one head node and a set of worker nodes. The head node is just like any other worker node, but it also runs certain special processes related to scheduling and orchestration. Ray-enabled scripts are run on the head node and depending on the resources (number of CPUs, GPUs, etc) they request, will be scheduled to run certain tasks on the worker nodes. For more on key concepts behind a Ray cluster, you can refer this doc.",
- "crumbs": [
- "How-To Guides",
- "Ray Train integration"
- ]
- },
- {
- "objectID": "docs/ray-integration.html#sanity-check",
- "href": "docs/ray-integration.html#sanity-check",
- "title": "Ray Train integration",
- "section": "Sanity check",
- "text": "Sanity check\nTo run a sanity check on whether your ray cluster is setup properly, execute the following on the head node:\nray status\nThe output should have a summary of your Ray cluster - list of all the nodes in your cluster, the number of CPUs and GPUs in your cluster, etc. For example, if you have a cluster with 1 CPU-only head node and 2 4xL40S worker nodes, the output can look like this:\nNode status\n---------------------------------------------------------------\nActive:\n 1 head\nIdle:\n 2 4xL40S:48CPU-384GB\nPending:\n (no pending nodes)\nRecent failures:\n (no failures)\n\nResources\n---------------------------------------------------------------\nUsage:\n 0.0/96.0 CPU\n 0.0/8.0 GPU\n 0B/800.00GiB memory\n 0B/229.57GiB object_store_memory\n\nDemands:\n (no resource demands)\nYou should also be able to see the same on the Ray dashboard.",
- "crumbs": [
- "How-To Guides",
- "Ray Train integration"
- ]
- },
- {
- "objectID": "docs/ray-integration.html#configuring-training-with-ray-train",
- "href": "docs/ray-integration.html#configuring-training-with-ray-train",
- "title": "Ray Train integration",
- "section": "Configuring training with Ray Train",
- "text": "Configuring training with Ray Train\nYou can find an example configuration at configs/llama-3/lora-1b-ray.yaml.\nThe key parameters to note here are:\n...\nuse_ray: true\nray_num_workers: 4\n# optional\nresources_per_worker:\n GPU: 1\n...\n\nuse_ray: This is the flag that enables the Ray Train integration. You can either use the corresponding --use-ray flag in the CLI or set use_ray in the config file.\nray_num_workers: This is the number of workers/GPUs to use for training.\nresources_per_worker: This is the Ray resource request for each worker. This can be used to request a specific GPU type or a custom resource for each worker. For example, if your ray cluster has GPUs of different types, and you only want to use NVIDIA L40S GPUs, you can do\n\nresources_per_worker:\n accelerator_type:L40S: 0.001",
- "crumbs": [
- "How-To Guides",
- "Ray Train integration"
- ]
- },
- {
- "objectID": "docs/ray-integration.html#launching-training",
- "href": "docs/ray-integration.html#launching-training",
- "title": "Ray Train integration",
- "section": "Launching training",
- "text": "Launching training\nYou can simply run the following command on the head node:\naxolotl train examples/llama-3/lora-1b-ray.yml --use-ray\nThis will launch training on the head node and workers will be scheduled automatically by Ray Train to run on the appropriate head or worker nodes.\nYou can also monitor training progress on the Ray dashboard.\nComing back to the example on a Ray cluster with 1 head node and 2 4xL40S worker nodes, let’s say you want to make use of all 8 GPUs. You would be able to just set ray_num_workers: 8 and run the previous command. The Cluster tab will show the following:\n\n\n\nRay dashboard",
- "crumbs": [
- "How-To Guides",
- "Ray Train integration"
- ]
- },
- {
- "objectID": "docs/faq.html",
- "href": "docs/faq.html",
- "title": "FAQ",
+ "objectID": "docs/getting-started.html",
+ "href": "docs/getting-started.html",
+ "title": "Getting Started with Axolotl",
"section": "",
- "text": "Q: The trainer stopped and hasn’t progressed in several minutes.\n\nA: Usually an issue with the GPUs communicating with each other. See the NCCL doc\n\nQ: Exitcode -9\n\nA: This usually happens when you run out of system RAM.\n\nQ: Exitcode -7 while using deepspeed\n\nA: Try upgrading deepspeed w: pip install -U deepspeed\n\nQ: AttributeError: ‘DummyOptim’ object has no attribute ‘step’\n\nA: You may be using deepspeed with single gpu. Please don’t set deepspeed: in yaml or cli.",
+ "text": "This guide will walk you through your first model fine-tuning project with Axolotl.",
"crumbs": [
- "FAQ"
+ "How-To Guides",
+ "Getting Started with Axolotl"
+ ]
+ },
+ {
+ "objectID": "docs/getting-started.html#sec-quick-example",
+ "href": "docs/getting-started.html#sec-quick-example",
+ "title": "Getting Started with Axolotl",
+ "section": "1 Quick Example",
+ "text": "1 Quick Example\nLet’s start by fine-tuning a small language model using LoRA. This example uses a 1B parameter model to ensure it runs on most GPUs. Assuming axolotl is installed (if not, see our Installation Guide)\n\nDownload example configs:\n\naxolotl fetch examples\n\nRun the training:\n\naxolotl train examples/llama-3/lora-1b.yml\nThat’s it! Let’s understand what just happened.",
+ "crumbs": [
+ "How-To Guides",
+ "Getting Started with Axolotl"
+ ]
+ },
+ {
+ "objectID": "docs/getting-started.html#sec-understanding",
+ "href": "docs/getting-started.html#sec-understanding",
+ "title": "Getting Started with Axolotl",
+ "section": "2 Understanding the Process",
+ "text": "2 Understanding the Process\n\n2.1 The Configuration File\nThe YAML configuration file controls everything about your training. Here’s what (part of) our example config looks like:\nbase_model: NousResearch/Llama-3.2-1B\n# hub_model_id: username/custom_model_name\n\ndatasets:\n - path: teknium/GPT4-LLM-Cleaned\n type: alpaca\ndataset_prepared_path: last_run_prepared\nval_set_size: 0.1\noutput_dir: ./outputs/lora-out\n\nadapter: lora\nlora_model_dir:\nSee our Config options for more details.\n\n\n2.2 Training\nWhen you run axolotl train, Axolotl:\n\nDownloads the base model\n(If specified) applies LoRA adapter layers\nLoads and processes the dataset\nRuns the training loop\nSaves the trained model and / or LoRA weights",
+ "crumbs": [
+ "How-To Guides",
+ "Getting Started with Axolotl"
+ ]
+ },
+ {
+ "objectID": "docs/getting-started.html#sec-custom",
+ "href": "docs/getting-started.html#sec-custom",
+ "title": "Getting Started with Axolotl",
+ "section": "3 Your First Custom Training",
+ "text": "3 Your First Custom Training\nLet’s modify the example for your own data:\n\nCreate a new config file my_training.yml:\n\nbase_model: NousResearch/Nous-Hermes-llama-1b-v1\nadapter: lora\n\n# Training settings\nmicro_batch_size: 2\nnum_epochs: 3\nlearning_rate: 0.0003\n\n# Your dataset\ndatasets:\n - path: my_data.jsonl # Your local data file\n type: alpaca # Or other format\nThis specific config is for LoRA fine-tuning a model with instruction tuning data using the alpaca dataset format, which has the following format:\n{\n \"instruction\": \"Write a description of alpacas.\",\n \"input\": \"\",\n \"output\": \"Alpacas are domesticated South American camelids...\"\n}\nPlease see our Dataset Formats for more dataset formats and how to format them.\n\nPrepare your JSONL data in the specified format (in this case, the expected `alpaca format):\n\n{\"instruction\": \"Classify this text\", \"input\": \"I love this!\", \"output\": \"positive\"}\n{\"instruction\": \"Classify this text\", \"input\": \"Not good at all\", \"output\": \"negative\"}\nPlease consult the supported Dataset Formats for more details.\n\nRun the training:\n\naxolotl train my_training.yml",
+ "crumbs": [
+ "How-To Guides",
+ "Getting Started with Axolotl"
+ ]
+ },
+ {
+ "objectID": "docs/getting-started.html#sec-common-tasks",
+ "href": "docs/getting-started.html#sec-common-tasks",
+ "title": "Getting Started with Axolotl",
+ "section": "4 Common Tasks",
+ "text": "4 Common Tasks\n\n4.1 Testing Your Model\nAfter training, test your model:\naxolotl inference my_training.yml --lora-model-dir=\"./outputs/lora-out\"\n\n\n4.2 Preprocessing Data\nFor large datasets, preprocess first:\naxolotl preprocess my_training.yml\n\n\n4.3 Using a UI\nLaunch a Gradio interface:\naxolotl inference my_training.yml --lora-model-dir=\"./outputs/lora-out\" --gradio",
+ "crumbs": [
+ "How-To Guides",
+ "Getting Started with Axolotl"
+ ]
+ },
+ {
+ "objectID": "docs/getting-started.html#sec-next-steps",
+ "href": "docs/getting-started.html#sec-next-steps",
+ "title": "Getting Started with Axolotl",
+ "section": "5 Next Steps",
+ "text": "5 Next Steps\nNow that you have the basics, you might want to:\n\nTry different model architectures\nExperiment with hyperparameters\nUse more advanced training methods\nScale up to larger models\n\nCheck our other guides for details on these topics:\n\nConfiguration Guide - Full configuration options\nDataset Formats - Working with different data formats\nMulti-GPU Training\nMulti-Node Training",
+ "crumbs": [
+ "How-To Guides",
+ "Getting Started with Axolotl"
]
},
{
@@ -544,14 +617,21 @@
]
},
{
- "objectID": "docs/unsloth.html",
- "href": "docs/unsloth.html",
- "title": "Unsloth",
+ "objectID": "docs/multimodal.html",
+ "href": "docs/multimodal.html",
+ "title": "MultiModal / Vision Language Models (BETA)",
"section": "",
- "text": "Overview\nUnsloth provides hand-written optimized kernels for LLM finetuning that slightly improve speed and VRAM over standard industry baselines.\n\n\nInstallation\nThe following will install the correct unsloth and extras from source.\npython scripts/unsloth_install.py | sh\n\n\nUsing unsloth w Axolotl\nAxolotl exposes a few configuration options to try out unsloth and get most of the performance gains.\nOur unsloth integration is currently limited to the following model architectures: - llama\nThese options are specific to LoRA finetuning and cannot be used for multi-GPU finetuning\nunsloth_lora_mlp: true\nunsloth_lora_qkv: true\nunsloth_lora_o: true\nThese options are composable and can be used with multi-gpu finetuning\nunsloth_cross_entropy_loss: true\nunsloth_rms_norm: true\nunsloth_rope: true\n\n\nLimitations\n\nSingle GPU only; e.g. no multi-gpu support\nNo deepspeed or FSDP support (requires multi-gpu)\nLoRA + QLoRA support only. No full fine tunes or fp8 support.\nLimited model architecture support. Llama, Phi, Gemma, Mistral only\nNo MoE support.",
+ "text": "MultiModal / Vision Language Models (BETA)\n\nSupported Models\n\nMllama, i.e. llama with vision models\n\n\n\nUsage\nCurrently multimodal support is limited and doesn’t have full feature parity. To finetune a multimodal Llama w/ LoRA, you’ll need to use the following in YAML in combination with the rest of the required hyperparams.\nbase_model: alpindale/Llama-3.2-11B-Vision-Instruct\nprocessor_type: AutoProcessor\nskip_prepare_dataset: true\n\nchat_template: llama3_2_vision\ndatasets:\n - path: HuggingFaceH4/llava-instruct-mix-vsft\n type: chat_template\n split: train[:1%]\n field_messages: messages\nremove_unused_columns: false\nsample_packing: false\n\n# only finetune the Language model, leave the vision model and vision tower frozen\nlora_target_modules: 'language_model.model.layers.[\\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj'"
+ },
+ {
+ "objectID": "docs/mac.html",
+ "href": "docs/mac.html",
+ "title": "Mac M-series",
+ "section": "",
+ "text": "Currently Axolotl on Mac is partially usable, many of the dependencies of Axolotl including Pytorch do not support MPS or have incomplete support.\nCurrent support:\n\nSupport for all models\nFull training of models\nLoRA training\nSample packing\nFP16 and BF16 (awaiting AMP support for MPS in Pytorch)\nTri-dao’s flash-attn (until it is supported use spd_attention as an alternative)\nxformers\nbitsandbytes (meaning no 4/8 bits loading and bnb optimizers)\nqlora\nDeepSpeed\n\nUntested: - FSDP",
"crumbs": [
"How-To Guides",
- "Unsloth"
+ "Mac M-series"
]
},
{
@@ -636,37 +716,47 @@
"href": "index.html",
"title": "Axolotl",
"section": "",
- "text": "Quickstart ⚡\n \n Edge Builds 🏎️\n Axolotl CLI Usage\n Legacy Usage\n \n Badge ❤🏷️\n Sponsors 🤝❤\n Contributing 🤝\n Axolotl supports\n Advanced Setup\n \n Environment\n Dataset\n Config\n Train\n Inference Playground\n Merge LORA to base\n \n Common Errors 🧰\n \n Tokenization Mismatch b/w Inference & Training\n \n Debugging Axolotl\n Need help? 🙋\nAxolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.\nFeatures: - Train various Huggingface models such as llama, pythia, falcon, mpt - Supports fullfinetune, lora, qlora, relora, and gptq - Customize configurations using a simple yaml file or CLI overwrite - Load different dataset formats, use custom formats, or bring your own tokenized datasets - Integrated with xformer, flash attention, liger kernel, rope scaling, and multipacking - Works with single GPU or multiple GPUs via FSDP or Deepspeed - Easily run with Docker locally or on the cloud - Log results and optionally checkpoints to wandb, mlflow or Comet - And more!",
+ "text": "🚀 Quick Start\n \n Installation\n Your First Fine-tune\n \n ✨ Key Features\n 📚 Documentation\n 🤝 Getting Help\n 🌟 Contributing\n Supported Models\n ❤️ Sponsors\n 📜 License\nAxolotl is a tool designed to streamline post-training for various AI models. Post-training refers to any modifications or additional training performed on pre-trained models - including full model fine-tuning, parameter-efficient tuning (like LoRA and QLoRA), supervised fine-tuning (SFT), instruction tuning, and alignment techniques. With support for multiple model architectures and training configurations, Axolotl makes it easy to get started with these techniques.\nAxolotl is designed to work with YAML config files that contain everything you need to preprocess a dataset, train or fine-tune a model, run model inference or evaluation, and much more.\nFeatures:",
"crumbs": [
"Home"
]
},
{
- "objectID": "index.html#quickstart",
- "href": "index.html#quickstart",
+ "objectID": "index.html#quick-start",
+ "href": "index.html#quick-start",
"title": "Axolotl",
- "section": "Quickstart ⚡",
- "text": "Quickstart ⚡\nGet started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.\nRequirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention) or AMD GPU, Python >=3.10 and PyTorch >=2.4.1.\npip3 install --no-build-isolation axolotl[flash-attn,deepspeed]\n\n# download examples and optionally deepspeed configs to the local path\naxolotl fetch examples\naxolotl fetch deepspeed_configs # OPTIONAL\n\n# finetune using lora\naxolotl train examples/llama-3/lora-1b.yml\n\nEdge Builds 🏎️\nIf you’re looking for the latest features and updates between releases, you’ll need to install from source.\ngit clone https://github.com/axolotl-ai-cloud/axolotl.git\ncd axolotl\npip3 install packaging ninja\npip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n\n\nAxolotl CLI Usage\nWe now support a new, more streamlined CLI using click.\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"0\" axolotl preprocess examples/llama-3/lora-1b.yml\n\n# finetune lora\naxolotl train examples/llama-3/lora-1b.yml\n\n# inference\naxolotl inference examples/llama-3/lora-1b.yml \\\n --lora-model-dir=\"./outputs/lora-out\"\n\n# gradio\naxolotl inference examples/llama-3/lora-1b.yml \\\n --lora-model-dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naxolotl train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml\nWe’ve also added a new command for fetching examples and deepspeed_configs to your local machine. This will come in handy when installing axolotl from PyPI.\n# Fetch example YAML files (stores in \"examples/\" folder)\naxolotl fetch examples\n\n# Fetch deepspeed config files (stores in \"deepspeed_configs/\" folder)\naxolotl fetch deepspeed_configs\n\n# Optionally, specify a destination folder\naxolotl fetch examples --dest path/to/folder\n\n\nLegacy Usage\n\n\nClick to Expand\n\nWhile the Axolotl CLI is the preferred method for interacting with axolotl, we still support the legacy -m axolotl.cli.* usage.\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"0\" python -m axolotl.cli.preprocess examples/llama-3/lora-1b.yml\n\n# finetune lora\naccelerate launch -m axolotl.cli.train examples/llama-3/lora-1b.yml\n\n# inference\naccelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \\\n --lora_model_dir=\"./outputs/lora-out\"\n\n# gradio\naccelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \\\n --lora_model_dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naccelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml",
+ "section": "🚀 Quick Start",
+ "text": "🚀 Quick Start\nRequirements: - NVIDIA GPU (Ampere or newer for bf16 and Flash Attention) or AMD GPU - Python ≥3.10 - PyTorch ≥2.4.1\n\nInstallation\npip3 install --no-build-isolation axolotl[flash-attn,deepspeed]\n\n# Download example axolotl configs, deepspeed configs\naxolotl fetch examples\naxolotl fetch deepspeed_configs # OPTIONAL\nOther installation approaches are described here.\n\n\nYour First Fine-tune\n# Fetch axolotl examples\naxolotl fetch examples\n\n# Or, specify a custom path\naxolotl fetch examples --dest path/to/folder\n\n# Train a model using LoRA\naxolotl train examples/llama-3/lora-1b.yml\nThat’s it! Check out our Getting Started Guide for a more detailed walkthrough.",
"crumbs": [
"Home"
]
},
{
- "objectID": "index.html#badge",
- "href": "index.html#badge",
+ "objectID": "index.html#key-features",
+ "href": "index.html#key-features",
"title": "Axolotl",
- "section": "Badge ❤🏷️",
- "text": "Badge ❤🏷️\nBuilding something cool with Axolotl? Consider adding a badge to your model card.\n[<img src=\"https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png\" alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>](https://github.com/axolotl-ai-cloud/axolotl)",
+ "section": "✨ Key Features",
+ "text": "✨ Key Features\n\nMultiple Model Support: Train various models like LLaMA, Mistral, Mixtral, Pythia, and more\nTraining Methods: Full fine-tuning, LoRA, QLoRA, and more\nEasy Configuration: Simple YAML files to control your training setup\nPerformance Optimizations: Flash Attention, xformers, multi-GPU training\nFlexible Dataset Handling: Use various formats and custom datasets\nCloud Ready: Run on cloud platforms or local hardware",
"crumbs": [
"Home"
]
},
{
- "objectID": "index.html#sponsors",
- "href": "index.html#sponsors",
+ "objectID": "index.html#documentation",
+ "href": "index.html#documentation",
"title": "Axolotl",
- "section": "Sponsors 🤝❤",
- "text": "Sponsors 🤝❤\nIf you love axolotl, consider sponsoring the project by reaching out directly to wing@axolotl.ai.\n\n\nModal Modal lets you run data/AI jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.",
+ "section": "📚 Documentation",
+ "text": "📚 Documentation\n\nInstallation Options - Detailed setup instructions for different environments\nConfiguration Guide - Full configuration options and examples\nDataset Guide - Supported formats and how to use them\nMulti-GPU Training\nMulti-Node Training\nMultipacking\nFAQ - Frequently asked questions",
+ "crumbs": [
+ "Home"
+ ]
+ },
+ {
+ "objectID": "index.html#getting-help",
+ "href": "index.html#getting-help",
+ "title": "Axolotl",
+ "section": "🤝 Getting Help",
+ "text": "🤝 Getting Help\n\nJoin our Discord community for support\nCheck out our Examples directory\nRead our Debugging Guide\nNeed dedicated support? Please contact ✉️wing@axolotl.ai for options",
"crumbs": [
"Home"
]
@@ -675,79 +765,184 @@
"objectID": "index.html#contributing",
"href": "index.html#contributing",
"title": "Axolotl",
- "section": "Contributing 🤝",
- "text": "Contributing 🤝\nPlease read the contributing guide\nBugs? Please check the open issues else create a new Issue.\nPRs are greatly welcome!\nPlease run the quickstart instructions followed by the below to setup env:\npip3 install -r requirements-dev.txt -r requirements-tests.txt\npre-commit install\n\n# test\npytest tests/\n\n# optional: run against all files\npre-commit run --all-files\nThanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.",
+ "section": "🌟 Contributing",
+ "text": "🌟 Contributing\nContributions are welcome! Please see our Contributing Guide for details.",
"crumbs": [
"Home"
]
},
{
- "objectID": "index.html#axolotl-supports",
- "href": "index.html#axolotl-supports",
+ "objectID": "index.html#supported-models",
+ "href": "index.html#supported-models",
"title": "Axolotl",
- "section": "Axolotl supports",
- "text": "Axolotl supports\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nfp16/fp32\nlora\nqlora\ngptq\ngptq w/flash attn\nflash attn\nxformers attn\n\n\n\n\nllama\n✅\n✅\n✅\n✅\n✅\n✅\n✅\n\n\nMistral\n✅\n✅\n✅\n✅\n✅\n✅\n✅\n\n\nMixtral-MoE\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nMixtral8X22\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nPythia\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\ncerebras\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\nbtlm\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\nmpt\n✅\n❌\n❓\n❌\n❌\n❌\n❓\n\n\nfalcon\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\ngpt-j\n✅\n✅\n✅\n❌\n❌\n❓\n❓\n\n\nXGen\n✅\n❓\n✅\n❓\n❓\n❓\n✅\n\n\nphi\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nRWKV\n✅\n❓\n❓\n❓\n❓\n❓\n❓\n\n\nQwen\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nGemma\n✅\n✅\n✅\n❓\n❓\n✅\n❓\n\n\nJamba\n✅\n✅\n✅\n❓\n❓\n✅\n❓\n\n\n\n✅: supported ❌: not supported ❓: untested",
+ "section": "Supported Models",
+ "text": "Supported Models\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nfp16/fp32\nlora\nqlora\ngptq\ngptq w/flash attn\nflash attn\nxformers attn\n\n\n\n\nllama\n✅\n✅\n✅\n✅\n✅\n✅\n✅\n\n\nMistral\n✅\n✅\n✅\n✅\n✅\n✅\n✅\n\n\nMixtral-MoE\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nMixtral8X22\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nPythia\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\ncerebras\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\nbtlm\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\nmpt\n✅\n❌\n❓\n❌\n❌\n❌\n❓\n\n\nfalcon\n✅\n✅\n✅\n❌\n❌\n❌\n❓\n\n\ngpt-j\n✅\n✅\n✅\n❌\n❌\n❓\n❓\n\n\nXGen\n✅\n❓\n✅\n❓\n❓\n❓\n✅\n\n\nphi\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nRWKV\n✅\n❓\n❓\n❓\n❓\n❓\n❓\n\n\nQwen\n✅\n✅\n✅\n❓\n❓\n❓\n❓\n\n\nGemma\n✅\n✅\n✅\n❓\n❓\n✅\n❓\n\n\nJamba\n✅\n✅\n✅\n❓\n❓\n✅\n❓\n\n\n\n✅: supported ❌: not supported ❓: untested",
"crumbs": [
"Home"
]
},
{
- "objectID": "index.html#advanced-setup",
- "href": "index.html#advanced-setup",
+ "objectID": "index.html#sponsors",
+ "href": "index.html#sponsors",
"title": "Axolotl",
- "section": "Advanced Setup",
- "text": "Advanced Setup\n\nEnvironment\n\nDocker\ndocker run --gpus '\"all\"' --rm -it axolotlai/axolotl:main-latest\nOr run on the current files for development:\ndocker compose up -d\n\n[!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide’s section on Docker.\n\n\n\nDocker advanced\n\nA more powerful Docker command to run would be this:\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface axolotlai/axolotl:main-latest\nIt additionally: * Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args. * Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args. * The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal. * The --privileged flag gives all capabilities to the container. * The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.\nMore information on nvidia website\n\n\n\nConda/Pip venv\n\nInstall python >=3.10\nInstall pytorch stable https://pytorch.org/get-started/locally/\nInstall Axolotl along with python dependencies bash pip3 install packaging pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n(Optional) Login to Huggingface to use gated models/datasets. bash huggingface-cli login Get the token at huggingface.co/settings/tokens\n\n\n\nCloud GPU\nFor cloud GPU providers that support docker images, use axolotlai/axolotl-cloud:main-latest\n\non Latitude.sh use this direct link\non JarvisLabs.ai use this direct link\non RunPod use this direct link\n\n\n\nBare Metal Cloud GPU\n\nLambdaLabs\n\n\nClick to Expand\n\n\nInstall python\n\nsudo apt update\nsudo apt install -y python3.10\n\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1\nsudo update-alternatives --config python # pick 3.10 if given option\npython -V # should be 3.10\n\nInstall pip\n\nwget https://bootstrap.pypa.io/get-pip.py\npython get-pip.py\n\nInstall Pytorch https://pytorch.org/get-started/locally/\nFollow instructions on quickstart.\nRun\n\npip3 install protobuf==3.20.3\npip3 install -U --ignore-installed requests Pillow psutil scipy\n\nSet path\n\nexport LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH\n\n\n\nGCP\n\n\nClick to Expand\n\nUse a Deeplearning linux OS with cuda and pytorch installed. 
Then follow instructions on quickstart.\nMake sure to run the below to uninstall xla.\npip uninstall -y torch_xla[tpu]\n\n\n\n\nWindows\nPlease use WSL or Docker!\n\n\nMac\nUse the below instead of the install method in QuickStart.\npip3 install --no-build-isolation -e '.'\nMore info: mac.md\n\n\nGoogle Colab\nPlease use this example notebook.\n\n\nLaunching on public clouds via SkyPilot\nTo launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:\npip install \"skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]\" # choose your clouds\nsky check\nGet the example YAMLs of using Axolotl to finetune mistralai/Mistral-7B-v0.1:\ngit clone https://github.com/skypilot-org/skypilot.git\ncd skypilot/llm/axolotl\nUse one command to launch:\n# On-demand\nHF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN\n\n# Managed spot (auto-recovery on preemption)\nHF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET\n\n\nLaunching on public clouds via dstack\nTo launch on GPU instance (both on-demand and spot instances) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.\nWrite a job description in YAML as below:\n# dstack.yaml\ntype: task\n\nimage: axolotlai/axolotl-cloud:main-latest\n\nenv:\n - HUGGING_FACE_HUB_TOKEN\n - WANDB_API_KEY\n\ncommands:\n - accelerate launch -m axolotl.cli.train config.yaml\n\nports:\n - 6006\n\nresources:\n gpu:\n memory: 24GB..\n count: 2\nthen, simply run the job with dstack run command. Append --spot option if you want spot instance. dstack run command will show you the instance with cheapest price across multi cloud services:\npip install dstack\nHUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot\nFor further and fine-grained use cases, please refer to the official dstack documents and the detailed description of axolotl example on the official repository.\n\n\n\nDataset\nAxolotl supports a variety of dataset formats. It is recommended to use a JSONL. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.\nSee the documentation for more information on how to use different dataset formats.\n\n\nConfig\nSee examples for quick start. It is recommended to duplicate and modify to your needs. The most important options are:\n\nmodel\nbase_model: ./llama-7b-hf # local or huggingface repo\nNote: The code will load the right architecture.\ndataset\ndatasets:\n # huggingface repo\n - path: vicgalle/alpaca-gpt4\n type: alpaca\n\n # huggingface repo with specific configuration/subset\n - path: EleutherAI/pile\n name: enron_emails\n type: completion # format from earlier\n field: text # Optional[str] default: text, field to use for completion data\n\n # huggingface repo with multiple named configurations/subsets\n - path: bigcode/commitpackft\n name:\n - ruby\n - python\n - typescript\n type: ... 
# unimplemented custom format\n\n # chat_template https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html#chat_template\n - path: ...\n type: chat_template\n chat_template: chatml # defaults to tokenizer's chat_template\n\n # local\n - path: data.jsonl # or json\n ds_type: json # see other options below\n type: alpaca\n\n # dataset with splits, but no train split\n - path: knowrohit07/know_sql\n type: context_qa.load_v2\n train_on_split: validation\n\n # loading from s3 or gcs\n # s3 creds will be loaded from the system default / gcs will attempt to load from gcloud creds, google metadata service, or anon\n - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above\n ...\n\n # Loading Data From a Public URL\n # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.\n - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.\n ds_type: json # this is the default, see other options below.\nloading\nload_in_4bit: true\nload_in_8bit: true\n\nbf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.\nfp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32\ntf32: true # require >=ampere\n\nbfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)\nfloat16: true # use instead of fp16 when you don't want AMP\nNote: Repo does not do 4-bit quantization.\nlora\nadapter: lora # 'qlora' or leave blank for full finetune\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n\n\nAll Config Options\nSee these docs for all config options.\n\n\n\nTrain\nRun\naccelerate launch -m axolotl.cli.train your_config.yml\n\n[!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml\n\n\nPreprocess dataset\nYou can optionally pre-tokenize dataset with the following before finetuning. This is recommended for large datasets.\n\nSet dataset_prepared_path: to a local folder for saving and loading pre-tokenized dataset.\n(Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.\n(Optional): Use --debug to see preprocessed examples.\n\npython -m axolotl.cli.preprocess your_config.yml\n\n\nMulti-GPU\nBelow are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed is the recommended multi-GPU option currently because FSDP may experience loss instability.\n\nDeepSpeed\nDeepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU’s VRAM. 
More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated\nWe provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.\ndeepspeed: deepspeed_configs/zero1.json\naccelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json\n\n\nFSDP\n\nllama FSDP\n\nfsdp:\n - full_shard\n - auto_wrap\nfsdp_config:\n fsdp_offload_params: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\n\n\nFSDP + QLoRA\nAxolotl supports training with FSDP and QLoRA, see these docs for more information.\n\n\nWeights & Biases Logging\nMake sure your WANDB_API_KEY environment variable is set (recommended) or you login to wandb with wandb login.\n\nwandb options\n\nwandb_mode:\nwandb_project:\nwandb_entity:\nwandb_watch:\nwandb_name:\nwandb_log_model:\n\n\nComet Logging\nMake sure your COMET_API_KEY environment variable is set (recommended) or you login to wandb with comet login.\n\nwandb options\n\nuse_comet:\ncomet_api_key:\ncomet_workspace:\ncomet_project_name:\ncomet_experiment_key:\ncomet_mode:\ncomet_online:\ncomet_experiment_config:\n\n\nSpecial Tokens\nIt is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:\nspecial_tokens:\n bos_token: \"<s>\"\n eos_token: \"</s>\"\n unk_token: \"<unk>\"\ntokens: # these are delimiters\n - \"<|im_start|>\"\n - \"<|im_end|>\"\nWhen you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.\n\n\nLiger Kernel\nLiger Kernel: Efficient Triton Kernels for LLM Training\nhttps://github.com/linkedin/Liger-Kernel\nLiger (LinkedIn GPU Efficient Runtime) Kernel is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduces memory usage by 60%. The Liger Kernel composes well and is compatible with both FSDP and Deepspeed.\nplugins:\n - axolotl.integrations.liger.LigerPlugin\nliger_rope: true\nliger_rms_norm: true\nliger_glu_activation: true\nliger_layer_norm: true\nliger_fused_linear_cross_entropy: true\n\n\n\n\nInference Playground\nAxolotl allows you to load your model in an interactive terminal playground for quick experimentation. The config file is the same config file used for training.\nPass the appropriate flag to the inference command, depending upon what kind of model was trained:\n\nPretrained LORA:\npython -m axolotl.cli.inference examples/your_config.yml --lora_model_dir=\"./lora-output-dir\"\nFull weights finetune:\npython -m axolotl.cli.inference examples/your_config.yml --base_model=\"./completed-model\"\nFull weights finetune w/ a prompt from a text file:\ncat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \\\n --base_model=\"./completed-model\" --prompter=None --load_in_8bit=True\n– With gradio hosting\npython -m axolotl.cli.inference examples/your_config.yml --gradio\n\nPlease use --sample_packing False if you have it on and receive the error similar to below:\n\nRuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1\n\n\n\nMerge LORA to base\nThe following command will merge your LORA adapater with your base model. 
You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved, otherwhise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.\npython3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir=\"./completed-model\"\nYou may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with\nCUDA_VISIBLE_DEVICES=\"\" python3 -m axolotl.cli.merge_lora ...\nalthough this will be very slow, and using the config options above are recommended instead.",
+ "section": "❤️ Sponsors",
+ "text": "❤️ Sponsors\nThank you to our sponsors who help make Axolotl possible:\n\nModal - Modal lets you run jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.\n\nInterested in sponsoring? Contact us at wing@axolotl.ai",
"crumbs": [
"Home"
]
},
{
- "objectID": "index.html#common-errors",
- "href": "index.html#common-errors",
+ "objectID": "index.html#license",
+ "href": "index.html#license",
"title": "Axolotl",
- "section": "Common Errors 🧰",
- "text": "Common Errors 🧰\nSee also the FAQ’s and debugging guide.\n\nIf you encounter a ‘Cuda out of memory’ error, it means your GPU ran out of memory during the training process. Here’s how to resolve it:\n\nPlease reduce any below - micro_batch_size - eval_batch_size - gradient_accumulation_steps - sequence_len\nIf it does not help, try running without deepspeed and without accelerate (replace “accelerate launch” with “python”) in the command.\nUsing adamw_bnb_8bit might also save you some memory.\n\nfailed (exitcode: -9)\n\nUsually means your system has run out of system memory. Similarly, you should consider reducing the same settings as when you run out of VRAM. Additionally, look into upgrading your system RAM which should be simpler than GPU upgrades.\n\nRuntimeError: expected scalar type Float but found Half\n\nTry set fp16: true\n\nNotImplementedError: No operator found for memory_efficient_attention_forward …\n\nTry to turn off xformers.\n\naccelerate config missing\n\nIt’s safe to ignore it.\n\nNCCL Timeouts during training\n\nSee the NCCL guide.\n\nTokenization Mismatch b/w Inference & Training\nFor many formats, Axolotl constructs prompts by concatenating token ids after tokenizing strings. The reason for concatenating token ids rather than operating on strings is to maintain precise accounting for attention masks.\nIf you decode a prompt constructed by axolotl, you might see spaces between tokens (or lack thereof) that you do not expect, especially around delimiters and special tokens. When you are starting out with a new format, you should always do the following:\n\nMaterialize some data using python -m axolotl.cli.preprocess your_config.yml --debug, and then decode the first few rows with your model’s tokenizer.\nDuring inference, right before you pass a tensor of token ids to your model, decode these tokens back into a string.\nMake sure the inference string from #2 looks exactly like the data you fine tuned on from #1, including spaces and new lines. If they aren’t the same, adjust your inference server accordingly.\nAs an additional troubleshooting step, you can look at the token ids between 1 and 2 to make sure they are identical.\n\nHaving misalignment between your prompts during training and inference can cause models to perform very poorly, so it is worth checking this. See this blog post for a concrete example.",
+ "section": "📜 License",
+ "text": "📜 License\nThis project is licensed under the Apache 2.0 License - see the LICENSE file for details.",
"crumbs": [
"Home"
]
},
{
- "objectID": "index.html#debugging-axolotl",
- "href": "index.html#debugging-axolotl",
- "title": "Axolotl",
- "section": "Debugging Axolotl",
- "text": "Debugging Axolotl\nSee this debugging guide for tips on debugging Axolotl, along with an example configuration for debugging with VSCode.",
- "crumbs": [
- "Home"
- ]
- },
- {
- "objectID": "index.html#need-help",
- "href": "index.html#need-help",
- "title": "Axolotl",
- "section": "Need help? 🙋",
- "text": "Need help? 🙋\nJoin our Discord server where our community members can help you.\nNeed dedicated support? Please contact us at ✉️wing@axolotl.ai for dedicated support options.",
- "crumbs": [
- "Home"
- ]
- },
- {
- "objectID": "docs/mac.html",
- "href": "docs/mac.html",
- "title": "Mac M-series",
+ "objectID": "docs/multi-gpu.html",
+ "href": "docs/multi-gpu.html",
+ "title": "Multi-GPU Training Guide",
"section": "",
- "text": "Currently Axolotl on Mac is partially usable, many of the dependencies of Axolotl including Pytorch do not support MPS or have incomplete support.\nCurrent support:\n\nSupport for all models\nFull training of models\nLoRA training\nSample packing\nFP16 and BF16 (awaiting AMP support for MPS in Pytorch)\nTri-dao’s flash-attn (until it is supported use spd_attention as an alternative)\nxformers\nbitsandbytes (meaning no 4/8 bits loading and bnb optimizers)\nqlora\nDeepSpeed\n\nUntested: - FSDP",
+ "text": "This guide covers advanced training configurations for multi-GPU setups using Axolotl.",
"crumbs": [
"How-To Guides",
- "Mac M-series"
+ "Multi-GPU Training Guide"
]
},
{
- "objectID": "docs/multimodal.html",
- "href": "docs/multimodal.html",
- "title": "MultiModal / Vision Language Models (BETA)",
+ "objectID": "docs/multi-gpu.html#sec-overview",
+ "href": "docs/multi-gpu.html#sec-overview",
+ "title": "Multi-GPU Training Guide",
+ "section": "1 Overview",
+ "text": "1 Overview\nAxolotl supports several methods for multi-GPU training:\n\nDeepSpeed (recommended)\nFSDP (Fully Sharded Data Parallel)\nFSDP + QLoRA",
+ "crumbs": [
+ "How-To Guides",
+ "Multi-GPU Training Guide"
+ ]
+ },
+ {
+ "objectID": "docs/multi-gpu.html#sec-deepspeed",
+ "href": "docs/multi-gpu.html#sec-deepspeed",
+ "title": "Multi-GPU Training Guide",
+ "section": "2 DeepSpeed",
+ "text": "2 DeepSpeed\nDeepSpeed is the recommended approach for multi-GPU training due to its stability and performance. It provides various optimization levels through ZeRO stages.\n\n2.1 Configuration\nAdd to your YAML config:\ndeepspeed: deepspeed_configs/zero1.json\n\n\n2.2 Usage\naccelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json\n\n\n2.3 ZeRO Stages\nWe provide default configurations for:\n\nZeRO Stage 1 (zero1.json)\nZeRO Stage 2 (zero2.json)\nZeRO Stage 3 (zero3.json)\n\nChoose based on your memory requirements and performance needs.",
+ "crumbs": [
+ "How-To Guides",
+ "Multi-GPU Training Guide"
+ ]
+ },
+ {
+ "objectID": "docs/multi-gpu.html#sec-fsdp",
+ "href": "docs/multi-gpu.html#sec-fsdp",
+ "title": "Multi-GPU Training Guide",
+ "section": "3 FSDP",
+ "text": "3 FSDP\n\n3.1 Basic FSDP Configuration\nfsdp:\n - full_shard\n - auto_wrap\nfsdp_config:\n fsdp_offload_params: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\n\n\n3.2 FSDP + QLoRA\nFor combining FSDP with QLoRA, see our dedicated guide.",
+ "crumbs": [
+ "How-To Guides",
+ "Multi-GPU Training Guide"
+ ]
+ },
+ {
+ "objectID": "docs/multi-gpu.html#sec-performance",
+ "href": "docs/multi-gpu.html#sec-performance",
+ "title": "Multi-GPU Training Guide",
+ "section": "4 Performance Optimization",
+ "text": "4 Performance Optimization\n\n4.1 Liger Kernel Integration\n\n\n\n\n\n\nNote\n\n\n\nLiger Kernel provides efficient Triton kernels for LLM training, offering:\n\n20% increase in multi-GPU training throughput\n60% reduction in memory usage\nCompatibility with both FSDP and DeepSpeed\n\n\n\nConfiguration:\nplugins:\n - axolotl.integrations.liger.LigerPlugin\nliger_rope: true\nliger_rms_norm: true\nliger_glu_activation: true\nliger_layer_norm: true\nliger_fused_linear_cross_entropy: true",
+ "crumbs": [
+ "How-To Guides",
+ "Multi-GPU Training Guide"
+ ]
+ },
+ {
+ "objectID": "docs/multi-gpu.html#sec-troubleshooting",
+ "href": "docs/multi-gpu.html#sec-troubleshooting",
+ "title": "Multi-GPU Training Guide",
+ "section": "5 Troubleshooting",
+ "text": "5 Troubleshooting\n\n5.1 NCCL Issues\nFor NCCL-related problems, see our NCCL troubleshooting guide.\n\n\n5.2 Common Problems\n\nMemory IssuesTraining Instability\n\n\n\nReduce micro_batch_size\nReduce eval_batch_size\nAdjust gradient_accumulation_steps\nConsider using a higher ZeRO stage\n\n\n\n\nStart with DeepSpeed ZeRO-2\nMonitor loss values\nCheck learning rates\n\n\n\n\nFor more detailed troubleshooting, see our debugging guide.",
+ "crumbs": [
+ "How-To Guides",
+ "Multi-GPU Training Guide"
+ ]
+ },
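To make the memory checklist above concrete, here is a minimal YAML sketch; the keys are the standard Axolotl options named in this guide, and the specific values are assumptions you would tune for your hardware.

```yaml
# Illustrative starting point when hitting CUDA OOM in multi-GPU runs.
# Standard Axolotl options referenced in this guide; the values are assumptions to tune.
micro_batch_size: 1                       # smallest per-GPU batch
eval_batch_size: 1                        # keep eval batches small as well
gradient_accumulation_steps: 8            # recover effective batch size without extra activation memory
deepspeed: deepspeed_configs/zero3.json   # a higher ZeRO stage shards more state across GPUs
```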
+ {
+ "objectID": "docs/unsloth.html",
+ "href": "docs/unsloth.html",
+ "title": "Unsloth",
"section": "",
- "text": "MultiModal / Vision Language Models (BETA)\n\nSupported Models\n\nMllama, i.e. llama with vision models\n\n\n\nUsage\nCurrently multimodal support is limited and doesn’t have full feature parity. To finetune a multimodal Llama w/ LoRA, you’ll need to use the following in YAML in combination with the rest of the required hyperparams.\nbase_model: alpindale/Llama-3.2-11B-Vision-Instruct\nprocessor_type: AutoProcessor\nskip_prepare_dataset: true\n\nchat_template: llama3_2_vision\ndatasets:\n - path: HuggingFaceH4/llava-instruct-mix-vsft\n type: chat_template\n split: train[:1%]\n field_messages: messages\nremove_unused_columns: false\nsample_packing: false\n\n# only finetune the Language model, leave the vision model and vision tower frozen\nlora_target_modules: 'language_model.model.layers.[\\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj'"
+ "text": "Overview\nUnsloth provides hand-written optimized kernels for LLM finetuning that slightly improve speed and VRAM over standard industry baselines.\n\n\nInstallation\nThe following will install the correct unsloth and extras from source.\npython scripts/unsloth_install.py | sh\n\n\nUsing unsloth w Axolotl\nAxolotl exposes a few configuration options to try out unsloth and get most of the performance gains.\nOur unsloth integration is currently limited to the following model architectures: - llama\nThese options are specific to LoRA finetuning and cannot be used for multi-GPU finetuning\nunsloth_lora_mlp: true\nunsloth_lora_qkv: true\nunsloth_lora_o: true\nThese options are composable and can be used with multi-gpu finetuning\nunsloth_cross_entropy_loss: true\nunsloth_rms_norm: true\nunsloth_rope: true\n\n\nLimitations\n\nSingle GPU only; e.g. no multi-gpu support\nNo deepspeed or FSDP support (requires multi-gpu)\nLoRA + QLoRA support only. No full fine tunes or fp8 support.\nLimited model architecture support. Llama, Phi, Gemma, Mistral only\nNo MoE support.",
+ "crumbs": [
+ "How-To Guides",
+ "Unsloth"
+ ]
+ },
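As a rough illustration of where these flags live, here is a minimal single-GPU LoRA sketch; the base model and LoRA hyperparameters are assumptions, and only the unsloth_* keys come from the list above.

```yaml
# Sketch: single-GPU LoRA finetune using the unsloth options listed above.
# The base model and LoRA hyperparameters are illustrative assumptions.
base_model: NousResearch/Llama-2-7b-hf
adapter: lora
lora_r: 8
lora_alpha: 16

# LoRA-specific unsloth kernels (single-GPU only)
unsloth_lora_mlp: true
unsloth_lora_qkv: true
unsloth_lora_o: true

# Composable unsloth kernels (also usable with multi-GPU runs)
unsloth_cross_entropy_loss: true
unsloth_rms_norm: true
unsloth_rope: true
```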
+ {
+ "objectID": "docs/inference.html",
+ "href": "docs/inference.html",
+ "title": "Inference Guide",
+ "section": "",
+ "text": "This guide covers how to use your trained models for inference, including model loading, interactive testing, and common troubleshooting steps.",
+ "crumbs": [
+ "How-To Guides",
+ "Inference Guide"
+ ]
+ },
+ {
+ "objectID": "docs/inference.html#sec-quickstart",
+ "href": "docs/inference.html#sec-quickstart",
+ "title": "Inference Guide",
+ "section": "1 Quick Start",
+ "text": "1 Quick Start\n\n1.1 Basic Inference\n\nLoRA ModelsFull Fine-tuned Models\n\n\naxolotl inference your_config.yml --lora-model-dir=\"./lora-output-dir\"\n\n\naxolotl inference your_config.yml --base-model=\"./completed-model\"",
+ "crumbs": [
+ "How-To Guides",
+ "Inference Guide"
+ ]
+ },
+ {
+ "objectID": "docs/inference.html#sec-advanced",
+ "href": "docs/inference.html#sec-advanced",
+ "title": "Inference Guide",
+ "section": "2 Advanced Usage",
+ "text": "2 Advanced Usage\n\n2.1 Gradio Interface\nLaunch an interactive web interface:\naxolotl inference your_config.yml --gradio\n\n\n2.2 File-based Prompts\nProcess prompts from a text file:\ncat /tmp/prompt.txt | axolotl inference your_config.yml \\\n --base-model=\"./completed-model\" --prompter=None\n\n\n2.3 Memory Optimization\nFor large models or limited memory:\naxolotl inference your_config.yml --load-in-8bit=True",
+ "crumbs": [
+ "How-To Guides",
+ "Inference Guide"
+ ]
+ },
+ {
+ "objectID": "docs/inference.html#sec-merging",
+ "href": "docs/inference.html#sec-merging",
+ "title": "Inference Guide",
+ "section": "3 Merging LoRA Weights",
+ "text": "3 Merging LoRA Weights\nMerge LoRA adapters with the base model:\naxolotl merge-lora your_config.yml --lora-model-dir=\"./completed-model\"\n\n3.1 Memory Management for Merging\n\nConfiguration OptionsForce CPU Merging\n\n\ngpu_memory_limit: 20GiB # Adjust based on your GPU\nlora_on_cpu: true # Process on CPU if needed\n\n\nCUDA_VISIBLE_DEVICES=\"\" axolotl merge-lora ...",
+ "crumbs": [
+ "How-To Guides",
+ "Inference Guide"
+ ]
+ },
+ {
+ "objectID": "docs/inference.html#sec-tokenization",
+ "href": "docs/inference.html#sec-tokenization",
+ "title": "Inference Guide",
+ "section": "4 Tokenization",
+ "text": "4 Tokenization\n\n4.1 Common Issues\n\n\n\n\n\n\nWarning\n\n\n\nTokenization mismatches between training and inference are a common source of problems.\n\n\nTo debug:\n\nCheck training tokenization:\n\naxolotl preprocess your_config.yml --debug\n\nVerify inference tokenization by decoding tokens before model input\nCompare token IDs between training and inference\n\n\n\n4.2 Special Tokens\nConfigure special tokens in your YAML:\nspecial_tokens:\n bos_token: \"<s>\"\n eos_token: \"</s>\"\n unk_token: \"<unk>\"\ntokens:\n - \"<|im_start|>\"\n - \"<|im_end|>\"",
+ "crumbs": [
+ "How-To Guides",
+ "Inference Guide"
+ ]
+ },
+ {
+ "objectID": "docs/inference.html#sec-troubleshooting",
+ "href": "docs/inference.html#sec-troubleshooting",
+ "title": "Inference Guide",
+ "section": "5 Troubleshooting",
+ "text": "5 Troubleshooting\n\n5.1 Common Problems\n\nMemory IssuesToken IssuesPerformance Issues\n\n\n\nUse 8-bit loading\nReduce batch sizes\nTry CPU offloading\n\n\n\n\nVerify special tokens\nCheck tokenizer settings\nCompare training and inference preprocessing\n\n\n\n\nVerify model loading\nCheck prompt formatting\nEnsure temperature/sampling settings\n\n\n\n\nFor more details, see our debugging guide.",
+ "crumbs": [
+ "How-To Guides",
+ "Inference Guide"
+ ]
},
{
"objectID": "docs/batch_vs_grad.html",
@@ -757,108 +952,178 @@
"text": "Gradient accumulation means accumulating gradients over several mini-batches and updating the model weights afterward. When the samples in each batch are diverse, this technique doesn’t significantly impact learning.\nThis method allows for effective training with larger effective batch sizes without needing proportionally larger memory. Here’s why:\n\nMemory Consumption with Batch Size: The primary reason increasing the batch size impacts memory is due to the storage requirements for intermediate activations. When you forward propagate a batch through a network, you have to store the activations at each layer for each sample in the batch, because these activations are used during backpropagation to compute gradients. Therefore, larger batches mean more activations, leading to greater GPU memory consumption.\nGradient Accumulation: With gradient accumulation, you’re effectively simulating a larger batch size by accumulating gradients over several smaller batches (or micro-batches). However, at any given time, you’re only forward and backward propagating a micro-batch. This means you only store activations for the micro-batch, not the full accumulated batch. As a result, you can simulate the effect of a larger batch size without the memory cost of storing activations for a large batch.\n\nExample 1: Micro batch size: 3 Gradient accumulation steps: 2 Number of GPUs: 3 Total batch size = 3 * 2 * 3 = 18\n| GPU 1 | GPU 2 | GPU 3 |\n|----------------|----------------|----------------|\n| S1, S2, S3 | S4, S5, S6 | S7, S8, S9 |\n| e1, e2, e3 | e4, e5, e6 | e7, e8, e9 |\n|----------------|----------------|----------------|\n| → (accumulate) | → (accumulate) | → (accumulate) |\n|----------------|----------------|----------------|\n| S10, S11, S12 | S13, S14, S15 | S16, S17, S18 |\n| e10, e11, e12 | e13, e14, e15 | e16, e17, e18 |\n|----------------|----------------|----------------|\n| → (apply) | → (apply) | → (apply) |\n\nAccumulated gradient for the weight w1 after the second iteration (considering all GPUs):\nTotal gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8 + e9 + e10 + e11 + e12 + e13 + e14 + e15 + e16 + e17 + e18\n\nWeight update for w1:\nw1_new = w1_old - learning rate x (Total gradient for w1 / 18)\nExample 2: Micro batch size: 2 Gradient accumulation steps: 1 Number of GPUs: 3 Total batch size = 2 * 1 * 3 = 6\n| GPU 1 | GPU 2 | GPU 3 |\n|-----------|-----------|-----------|\n| S1, S2 | S3, S4 | S5, S6 |\n| e1, e2 | e3, e4 | e5, e6 |\n|-----------|-----------|-----------|\n| → (apply) | → (apply) | → (apply) |\n\nAccumulated gradient for the weight w1 (considering all GPUs):\nTotal gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6\n\nWeight update for w1:\nw1_new = w1_old - learning rate × (Total gradient for w1 / 6)"
},
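As a rough illustration, Example 1 above maps onto the Axolotl batch-size options like this; the keys are the standard config options, while the GPU count is set by the launch environment rather than the YAML.

```yaml
# Sketch matching Example 1 above: 3 samples per GPU per step, accumulated over 2 steps,
# on 3 GPUs -> effective batch size of 3 * 2 * 3 = 18.
micro_batch_size: 3
gradient_accumulation_steps: 2
# The GPU count (3 here) comes from the launch environment (e.g. accelerate), not from the YAML.
```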
{
- "objectID": "docs/dataset_preprocessing.html",
- "href": "docs/dataset_preprocessing.html",
- "title": "Dataset Preprocessing",
+ "objectID": "docs/faq.html",
+ "href": "docs/faq.html",
+ "title": "FAQ",
"section": "",
- "text": "Dataset pre-processing is the step where Axolotl takes each dataset you’ve configured alongside the (dataset format)[../dataset-formats/] and prompt strategies to: - parse the dataset based on the dataset format - transform the dataset to how you would interact with the model based on the prompt strategy - tokenize the dataset based on the configured model & tokenizer - shuffle and merge multiple datasets together if using more than one\nThe processing of the datasets can happen one of two ways:\n\nBefore kicking off training by calling python -m axolotl.cli.preprocess /path/to/your.yaml --debug\nWhen training is started\n\nWhat are the benefits of pre-processing? When training interactively or for sweeps (e.g. you are restarting the trainer often), processing the datasets can oftentimes be frustratingly slow. Pre-processing will cache the tokenized/formatted datasets according to a hash of dependent training parameters so that it will intelligently pull from its cache when possible.\nThe path of the cache is controlled by dataset_prepared_path: and is often left blank in example YAMLs as this leads to a more robust solution that prevents unexpectedly reusing cached data.\nIf dataset_prepared_path: is left empty, when training, the processed dataset will be cached in a default path of ./last_run_prepared/, but will ignore anything already cached there. By explicitly setting dataset_prepared_path: ./last_run_prepared, the trainer will use whatever pre-processed data is in the cache.\nWhat are the edge cases? Let’s say you are writing a custom prompt strategy or using a user-defined prompt template. Because the trainer cannot readily detect these changes, we cannot change the calculated hash value for the pre-processed dataset. If you have dataset_prepared_path: ... set and change your prompt templating logic, it may not pick up the changes you made and you will be training over the old prompt."
+ "text": "Q: The trainer stopped and hasn’t progressed in several minutes.\n\nA: Usually an issue with the GPUs communicating with each other. See the NCCL doc\n\nQ: Exitcode -9\n\nA: This usually happens when you run out of system RAM.\n\nQ: Exitcode -7 while using deepspeed\n\nA: Try upgrading deepspeed w: pip install -U deepspeed\n\nQ: AttributeError: ‘DummyOptim’ object has no attribute ‘step’\n\nA: You may be using deepspeed with single gpu. Please don’t set deepspeed: in yaml or cli.",
+ "crumbs": [
+ "FAQ"
+ ]
},
{
- "objectID": "docs/debugging.html",
- "href": "docs/debugging.html",
- "title": "Debugging",
+ "objectID": "docs/ray-integration.html",
+ "href": "docs/ray-integration.html",
+ "title": "Ray Train integration",
"section": "",
- "text": "This document provides some tips and tricks for debugging Axolotl. It also provides an example configuration for debugging with VSCode. A good debugging setup is essential to understanding how Axolotl code works behind the scenes.",
+ "text": "Axolotl supports using Ray as an alternative to accelerate for orchestrating training. This is especially useful for multi-node training since you only have to setup code and dependencies in a single node and launch training as if you were using a single node.\nWith the --use-ray CLI flag, Axolotl will use Ray Train’s TorchTrainer to run training.",
"crumbs": [
"How-To Guides",
- "Debugging"
+ "Ray Train integration"
]
},
{
- "objectID": "docs/debugging.html#table-of-contents",
- "href": "docs/debugging.html#table-of-contents",
- "title": "Debugging",
- "section": "Table of Contents",
- "text": "Table of Contents\n\nGeneral Tips\nDebugging with VSCode\n\nBackground\nConfiguration\nCustomizing your debugger\nVideo Tutorial\n\nDebugging With Docker\n\nSetup\nAttach To Container\nVideo - Attaching To Docker On Remote Host",
+ "objectID": "docs/ray-integration.html#ray-cluster-setup",
+ "href": "docs/ray-integration.html#ray-cluster-setup",
+ "title": "Ray Train integration",
+ "section": "Ray cluster setup",
+ "text": "Ray cluster setup\nA prerequisite using the Ray Train integration is to setup a Ray cluster on your desired node(s). For a detailed guide on how you can get started with ray clusters, check the official Ray docs here: https://docs.ray.io/en/latest/cluster/getting-started.html\nEvery Ray cluster has one head node and a set of worker nodes. The head node is just like any other worker node, but it also runs certain special processes related to scheduling and orchestration. Ray-enabled scripts are run on the head node and depending on the resources (number of CPUs, GPUs, etc) they request, will be scheduled to run certain tasks on the worker nodes. For more on key concepts behind a Ray cluster, you can refer this doc.",
"crumbs": [
"How-To Guides",
- "Debugging"
+ "Ray Train integration"
]
},
{
- "objectID": "docs/debugging.html#general-tips",
- "href": "docs/debugging.html#general-tips",
- "title": "Debugging",
- "section": "General Tips",
- "text": "General Tips\nWhile debugging it’s helpful to simplify your test scenario as much as possible. Here are some tips for doing so:\n\n[!Important] All of these tips are incorporated into the example configuration for debugging with VSCode below.\n\n\nMake sure you are using the latest version of axolotl: This project changes often and bugs get fixed fast. Check your git branch and make sure you have pulled the latest changes from main.\nEliminate concurrency: Restrict the number of processes to 1 for both training and data preprocessing:\n\nSet CUDA_VISIBLE_DEVICES to a single GPU, ex: export CUDA_VISIBLE_DEVICES=0.\nSet dataset_processes: 1 in your axolotl config or run the training command with --dataset_processes=1.\n\nUse a small dataset: Construct or use a small dataset from HF Hub. When using a small dataset, you will often have to make sure sample_packing: False and eval_sample_packing: False to avoid errors. If you are in a pinch and don’t have time to construct a small dataset but want to use from the HF Hub, you can shard the data (this will still tokenize the entire dataset, but will only use a fraction of the data for training. For example, to shard the dataset into 20 pieces, add the following to your axolotl config): yaml dataset: ... shards: 20\nUse a small model: A good example of a small model is TinyLlama/TinyLlama-1.1B-Chat-v1.0.\nMinimize iteration time: Make sure the training loop finishes as fast as possible, with these settings.\n\nmicro_batch_size: 1\nmax_steps: 1\nval_set_size: 0\n\nClear Caches: Axolotl caches certain steps and so does the underlying HuggingFace trainer. You may want to clear some of these caches when debugging.\n\nData preprocessing: When debugging data preprocessing, which includes prompt template formation, you may want to delete the directory set in dataset_prepared_path: in your axolotl config. If you didn’t set this value, the default is last_run_prepared.\nHF Hub: If you are debugging data preprocessing, you should clear the relevant HF cache HuggingFace cache, by deleting the appropriate ~/.cache/huggingface/datasets/... folder(s).\nThe recommended approach is to redirect all outputs and caches to a temporary folder and delete selected subfolders before each run. This is demonstrated in the example configuration below.",
+ "objectID": "docs/ray-integration.html#sanity-check",
+ "href": "docs/ray-integration.html#sanity-check",
+ "title": "Ray Train integration",
+ "section": "Sanity check",
+ "text": "Sanity check\nTo run a sanity check on whether your ray cluster is setup properly, execute the following on the head node:\nray status\nThe output should have a summary of your Ray cluster - list of all the nodes in your cluster, the number of CPUs and GPUs in your cluster, etc. For example, if you have a cluster with 1 CPU-only head node and 2 4xL40S worker nodes, the output can look like this:\nNode status\n---------------------------------------------------------------\nActive:\n 1 head\nIdle:\n 2 4xL40S:48CPU-384GB\nPending:\n (no pending nodes)\nRecent failures:\n (no failures)\n\nResources\n---------------------------------------------------------------\nUsage:\n 0.0/96.0 CPU\n 0.0/8.0 GPU\n 0B/800.00GiB memory\n 0B/229.57GiB object_store_memory\n\nDemands:\n (no resource demands)\nYou should also be able to see the same on the Ray dashboard.",
"crumbs": [
"How-To Guides",
- "Debugging"
+ "Ray Train integration"
]
},
{
- "objectID": "docs/debugging.html#debugging-with-vscode",
- "href": "docs/debugging.html#debugging-with-vscode",
- "title": "Debugging",
- "section": "Debugging with VSCode",
- "text": "Debugging with VSCode\n\nBackground\nThe below example shows how to configure VSCode to debug data preprocessing of the chat_template format. This is the format used when you have the following in your axolotl config:\ndatasets:\n - path: <path to your chat_template formatted dataset> # example on HF Hub: fozziethebeat/alpaca_messages_2k_test\n type: chat_template\n\n[!Important] If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files .vscode/launch.json and .vscode/tasks.json for an example configuration.\n\n\n[!Tip] If you prefer to watch a video, rather than read, you can skip to the video tutorial below (but doing both is recommended).\n\n\n\nSetup\nMake sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:\npip3 install packaging\npip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n\nRemote Hosts\nIf you developing on a remote host, you can easily use VSCode to debug remotely. To do so, you will need to follow this remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.\n\n\n\nConfiguration\nThe easiest way to get started is to modify the .vscode/launch.json file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.\nFor example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_chat_template.yml, you would use the below configuration1. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.\n// .vscode/launch.json\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"name\": \"Debug axolotl prompt - chat_template\",\n \"type\": \"python\",\n \"module\": \"accelerate.commands.launch\",\n \"request\": \"launch\",\n \"args\": [\n \"-m\", \"axolotl.cli.train\", \"dev_chat_template.yml\",\n // The flags below simplify debugging by overriding the axolotl config\n // with the debugging tips above. 
Modify as needed.\n \"--dataset_processes=1\", // limits data preprocessing to one process\n \"--max_steps=1\", // limits training to just one step\n \"--batch_size=1\", // minimizes batch size\n \"--micro_batch_size=1\", // minimizes batch size\n \"--val_set_size=0\", // disables validation\n \"--sample_packing=False\", // disables sample packing which is necessary for small datasets\n \"--eval_sample_packing=False\",// disables sample packing on eval set\n \"--dataset_prepared_path=temp_debug/axolotl_outputs/data\", // send data outputs to a temp folder\n \"--output_dir=temp_debug/axolotl_outputs/model\" // send model outputs to a temp folder\n ],\n \"console\": \"integratedTerminal\", // show output in the integrated terminal\n \"cwd\": \"${workspaceFolder}/devtools\", // set working directory to devtools from the root of the project\n \"justMyCode\": true, // step through only axolotl code\n \"env\": {\"CUDA_VISIBLE_DEVICES\": \"0\", // Since we aren't doing distributed training, we need to limit to one GPU\n \"HF_HOME\": \"${workspaceFolder}/devtools/temp_debug/.hf-cache\"}, // send HF cache to a temp folder\n \"preLaunchTask\": \"cleanup-for-dataprep\", // delete temp folders (see below)\n }\n ]\n}\nAdditional notes about this configuration:\n\nThe argument justMyCode is set to true such that you step through only the axolotl code. If you want to step into dependencies, set this to false.\nThe preLaunchTask: cleanup-for-dataprep is defined in .vscode/tasks.json and is used to delete the following folders before debugging, which is essential to ensure that the data pre-processing code is run from scratch:\n\n./devtools/temp_debug/axolotl_outputs\n./devtools/temp_debug/.hf-cache/datasets\n\n\n\n[!Tip] You may not want to delete these folders. For example, if you are debugging model training instead of data pre-processing, you may NOT want to delete the cache or output folders. You may also need to add additional tasks to the tasks.json file depending on your use case.\n\nBelow is the ./vscode/tasks.json file that defines the cleanup-for-dataprep task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task cleanup-for-dataprep is a composite task that combines the two tasks. A composite task is necessary because VSCode does not allow you to specify multiple tasks in the preLaunchTask argument of the launch.json file.\n// .vscode/tasks.json\n// this file is used by launch.json\n{\n \"version\": \"2.0.0\",\n \"tasks\": [\n // this task changes into the devtools directory and deletes the temp_debug/axolotl_outputs folder\n {\n \"label\": \"delete-outputs\",\n \"type\": \"shell\",\n \"command\": \"rm -rf temp_debug/axolotl_outputs\",\n \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n \"problemMatcher\": []\n },\n // this task changes into the devtools directory and deletes the `temp_debug/.hf-cache/datasets` folder\n {\n \"label\": \"delete-temp-hf-dataset-cache\",\n \"type\": \"shell\",\n \"command\": \"rm -rf temp_debug/.hf-cache/datasets\",\n \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n \"problemMatcher\": []\n },\n // this task combines the two tasks above\n {\n \"label\": \"cleanup-for-dataprep\",\n \"dependsOn\": [\"delete-outputs\", \"delete-temp-hf-dataset-cache\"],\n }\n ]\n}\n\n\nCustomizing your debugger\nYour debugging use case may differ from the example above. 
The easiest thing to do is to put your own axolotl config in the devtools folder and modify the launch.json file to use your config. You may also want to modify the preLaunchTask to delete different folders or not delete anything at all.\n\n\nVideo Tutorial\nThe following video tutorial walks through the above configuration and demonstrates how to debug with VSCode, (click the image below to watch):\n\n\n\nHamel Husain’s tutorial: Debugging Axolotl w/VSCode",
+ "objectID": "docs/ray-integration.html#configuring-training-with-ray-train",
+ "href": "docs/ray-integration.html#configuring-training-with-ray-train",
+ "title": "Ray Train integration",
+ "section": "Configuring training with Ray Train",
+ "text": "Configuring training with Ray Train\nYou can find an example configuration at configs/llama-3/lora-1b-ray.yaml.\nThe key parameters to note here are:\n...\nuse_ray: true\nray_num_workers: 4\n# optional\nresources_per_worker:\n GPU: 1\n...\n\nuse_ray: This is the flag that enables the Ray Train integration. You can either use the corresponding --use-ray flag in the CLI or set use_ray in the config file.\nray_num_workers: This is the number of workers/GPUs to use for training.\nresources_per_worker: This is the Ray resource request for each worker. This can be used to request a specific GPU type or a custom resource for each worker. For example, if your ray cluster has GPUs of different types, and you only want to use NVIDIA L40S GPUs, you can do\n\nresources_per_worker:\n accelerator_type:L40S: 0.001",
"crumbs": [
"How-To Guides",
- "Debugging"
+ "Ray Train integration"
]
},
{
- "objectID": "docs/debugging.html#debugging-with-docker",
- "href": "docs/debugging.html#debugging-with-docker",
- "title": "Debugging",
- "section": "Debugging With Docker",
- "text": "Debugging With Docker\nUsing official Axolotl Docker images is a great way to debug your code, and is a very popular way to use Axolotl. Attaching VSCode to Docker takes a few more steps.\n\nSetup\nOn the host that is running axolotl (ex: if you are using a remote host), clone the axolotl repo and change your current directory to the root:\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\n[!Tip] If you already have axolotl cloned on your host, make sure you have the latest changes and change into the root of the project.\n\nNext, run the desired docker image and mount the current directory. Below is a docker command you can run to do this:2\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface axolotlai/axolotl:main-py3.10-cu118-2.0.1\n\n[!Tip] To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl’s Docker CI builds.\n\nYou will now be in the container. Next, perform an editable install of Axolotl:\npip3 install packaging\npip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'\n\n\nAttach To Container\nNext, if you are using a remote host, Remote into this host with VSCode. If you are using a local host, you can skip this step.\nNext, select Dev Containers: Attach to Running Container... using the command palette (CMD + SHIFT + P) in VSCode. You will be prompted to select a container to attach to. Select the container you just created. You will now be in the container with a working directory that is at the root of the project. Any changes you make to the code will be reflected both in the container and on the host.\nNow you are ready to debug as described above (see Debugging with VSCode).\n\n\nVideo - Attaching To Docker On Remote Host\nHere is a short video that demonstrates how to attach to a Docker container on a remote host:\n\n\n\nHamel Husain’s tutorial: Debugging Axolotl Part 2: Attaching to Docker on a Remote Host",
+ "objectID": "docs/ray-integration.html#launching-training",
+ "href": "docs/ray-integration.html#launching-training",
+ "title": "Ray Train integration",
+ "section": "Launching training",
+ "text": "Launching training\nYou can simply run the following command on the head node:\naxolotl train examples/llama-3/lora-1b-ray.yml --use-ray\nThis will launch training on the head node and workers will be scheduled automatically by Ray Train to run on the appropriate head or worker nodes.\nYou can also monitor training progress on the Ray dashboard.\nComing back to the example on a Ray cluster with 1 head node and 2 4xL40S worker nodes, let’s say you want to make use of all 8 GPUs. You would be able to just set ray_num_workers: 8 and run the previous command. The Cluster tab will show the following:\n\n\n\nRay dashboard",
"crumbs": [
"How-To Guides",
- "Debugging"
+ "Ray Train integration"
]
},
{
- "objectID": "docs/debugging.html#footnotes",
- "href": "docs/debugging.html#footnotes",
- "title": "Debugging",
+ "objectID": "docs/fsdp_qlora.html",
+ "href": "docs/fsdp_qlora.html",
+ "title": "FDSP + QLoRA",
+ "section": "",
+ "text": "Using FSDP with QLoRA is essential for fine-tuning larger (70b+ parameter) LLMs on consumer GPUs. For example, you can use FSDP + QLoRA to train a 70b model on two 24GB GPUs1.\nBelow, we describe how to use this feature in Axolotl.",
+ "crumbs": [
+ "How-To Guides",
+ "FDSP + QLoRA"
+ ]
+ },
+ {
+ "objectID": "docs/fsdp_qlora.html#background",
+ "href": "docs/fsdp_qlora.html#background",
+ "title": "FDSP + QLoRA",
+ "section": "",
+ "text": "Using FSDP with QLoRA is essential for fine-tuning larger (70b+ parameter) LLMs on consumer GPUs. For example, you can use FSDP + QLoRA to train a 70b model on two 24GB GPUs1.\nBelow, we describe how to use this feature in Axolotl.",
+ "crumbs": [
+ "How-To Guides",
+ "FDSP + QLoRA"
+ ]
+ },
+ {
+ "objectID": "docs/fsdp_qlora.html#usage",
+ "href": "docs/fsdp_qlora.html#usage",
+ "title": "FDSP + QLoRA",
+ "section": "Usage",
+ "text": "Usage\nTo enable QLoRA with FSDP, you need to perform the following steps:\n\n![Tip] See the example config file in addition to reading these instructions.\n\n\nSet adapter: qlora in your axolotl config file.\nEnable FSDP in your axolotl config, as described here.\nUse one of the supported model types: llama, mistral or mixtral.",
+ "crumbs": [
+ "How-To Guides",
+ "FDSP + QLoRA"
+ ]
+ },
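Putting the steps above together, here is a rough YAML sketch; the FSDP block mirrors the llama FSDP example earlier in this index, the model name is illustrative, and the maintained reference is the example config the next entry points to.

```yaml
# Sketch only: QLoRA + FSDP on a llama-family model.
# See examples/llama-2/qlora-fsdp.yml for the maintained reference config.
base_model: NousResearch/Llama-2-7b-hf    # illustrative; use any supported llama/mistral/mixtral model
adapter: qlora
load_in_4bit: true

fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_offload_params: true
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
```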
+ {
+ "objectID": "docs/fsdp_qlora.html#example-config",
+ "href": "docs/fsdp_qlora.html#example-config",
+ "title": "FDSP + QLoRA",
+ "section": "Example Config",
+ "text": "Example Config\nexamples/llama-2/qlora-fsdp.yml contains an example of how to enable QLoRA + FSDP in axolotl.",
+ "crumbs": [
+ "How-To Guides",
+ "FDSP + QLoRA"
+ ]
+ },
+ {
+ "objectID": "docs/fsdp_qlora.html#references",
+ "href": "docs/fsdp_qlora.html#references",
+ "title": "FDSP + QLoRA",
+ "section": "References",
+ "text": "References\n\nPR #1378 enabling QLoRA in FSDP in Axolotl.\nBlog Post from the Answer.AI team describing the work that enabled QLoRA in FSDP.\nRelated HuggingFace PRs Enabling FDSP + QLoRA:\n\nAccelerate PR#2544\nTransformers PR#29587\nTRL PR#1416\nPEFT PR#1550",
+ "crumbs": [
+ "How-To Guides",
+ "FDSP + QLoRA"
+ ]
+ },
+ {
+ "objectID": "docs/fsdp_qlora.html#footnotes",
+ "href": "docs/fsdp_qlora.html#footnotes",
+ "title": "FDSP + QLoRA",
"section": "Footnotes",
- "text": "Footnotes\n\n\nThe config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/chat_template.yml, but this is the same thing.↩︎\nMany of the below flags are recommended best practices by Nvidia when using nvidia-container-toolkit. You can read more about these flags here.↩︎",
+ "text": "Footnotes\n\n\nThis was enabled by this work from the Answer.AI team.↩︎",
"crumbs": [
"How-To Guides",
- "Debugging"
+ "FDSP + QLoRA"
]
},
{
- "objectID": "docs/lr_groups.html",
- "href": "docs/lr_groups.html",
- "title": "Learning Rate Groups",
+ "objectID": "docs/rlhf.html",
+ "href": "docs/rlhf.html",
+ "title": "RLHF (Beta)",
"section": "",
- "text": "Inspired by LoRA+, Axolotl allows practitioners to specify separate learning rates for each module or groups of modules in a model."
- },
- {
- "objectID": "docs/lr_groups.html#background",
- "href": "docs/lr_groups.html#background",
- "title": "Learning Rate Groups",
- "section": "",
- "text": "Inspired by LoRA+, Axolotl allows practitioners to specify separate learning rates for each module or groups of modules in a model."
- },
- {
- "objectID": "docs/lr_groups.html#example",
- "href": "docs/lr_groups.html#example",
- "title": "Learning Rate Groups",
- "section": "Example",
- "text": "Example\nlr_groups:\n - name: o_proj\n modules:\n - self_attn.o_proj.weight\n lr: 1e-6\n - name: q_proj\n modules:\n - model.layers.2.self_attn.q_proj.weight\n lr: 1e-5\n\nlearning_rate: 2e-5\nIn this example, we have a default learning rate of 2e-5 across the entire model, but we have a separate learning rate of 1e-6 for all the self attention o_proj modules across all layers, and a learning are of 1e-5 to the 3rd layer’s self attention q_proj module."
- },
- {
- "objectID": "docs/config.html",
- "href": "docs/config.html",
- "title": "Config options",
- "section": "",
- "text": "# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files\n# This can also be a relative path to a model on disk\nbase_model: ./llama-7b-hf\n# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)\nbase_model_ignore_patterns:\n# If the base_model repo on hf hub doesn't include configuration .json files,\n# You can set that here, or leave this empty to default to base_model\nbase_model_config: ./llama-7b-hf\n# You can specify to choose a specific model revision from huggingface hub\nrevision_of_model:\n# Optional tokenizer configuration path in case you want to use a different tokenizer\n# than the one defined in the base model\ntokenizer_config:\n# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too\nmodel_type: AutoModelForCausalLM\n# Corresponding tokenizer for the model AutoTokenizer is a good choice\ntokenizer_type: AutoTokenizer\n# Trust remote code for untrusted source\ntrust_remote_code:\n# use_fast option for tokenizer loading from_pretrained, default to True\ntokenizer_use_fast:\n# Whether to use the legacy tokenizer setting, defaults to True\ntokenizer_legacy:\n# Resize the model embeddings when new tokens are added to multiples of 32\n# This is reported to improve training speed on some models\nresize_token_embeddings_to_32x:\n\n# (Internal use only)\n# Used to identify which the model is based on\nis_falcon_derived_model:\nis_llama_derived_model:\nis_qwen_derived_model:\n# Please note that if you set this to true, `padding_side` will be set to \"left\" by default\nis_mistral_derived_model:\n\n# optional overrides to the base model configuration\noverrides_of_model_config:\n # RoPE Scaling https://github.com/huggingface/transformers/pull/24653\n rope_scaling:\n type: # linear | dynamic\n factor: # float\n\n# optional overrides to the bnb 4bit quantization configuration\n# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig\nbnb_config_kwargs:\n # These are default values\n llm_int8_has_fp16_weight: false\n bnb_4bit_quant_type: nf4\n bnb_4bit_use_double_quant: true\n\n\n# Whether you are training a 4-bit GPTQ quantized model\ngptq: true\n\n# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer\nload_in_8bit: true\n# Use bitsandbytes 4 bit\nload_in_4bit:\n\n# Use CUDA bf16\nbf16: true # bool or 'full' for `bf16_full_eval`. require >=ampere\n# Use CUDA fp16\nfp16: true\n# Use CUDA tf32\ntf32: true # require >=ampere\n\n# No AMP (automatic mixed precision)\nbfloat16: true # require >=ampere\nfloat16: true\n\n# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset\ngpu_memory_limit: 20GiB\n# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge\nlora_on_cpu: true\n\n# A list of one or more datasets to finetune the model with\ndatasets:\n # HuggingFace dataset repo | s3://,gs:// path | \"json\" for local dataset, make sure to fill data_files\n - path: vicgalle/alpaca-gpt4\n # The type of prompt to use for training. 
[alpaca, gpteacher, oasst, reflection]\n type: alpaca # format | format:<prompt_style> (chat/instruct) | <prompt_strategies>.load_<load_fn>\n ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file\n data_files: # Optional[str] path to source data files\n shards: # Optional[int] number of shards to split data into\n name: # Optional[str] name of dataset configuration to load\n train_on_split: train # Optional[str] name of dataset split to load from\n revision: # Optional[str] The specific revision of the dataset to use when loading from the Hugging Face Hub. This can be a commit hash, tag, or branch name. If not specified, the latest version will be used. This parameter is ignored for local datasets.\n trust_remote_code: # Optional[bool] Trust remote code for untrusted source\n\n # Custom user instruction prompt\n - path: repo\n type:\n # The below are defaults. only set what's needed if you use a different column name.\n system_prompt: \"\"\n system_format: \"{system}\"\n field_system: system\n field_instruction: instruction\n field_input: input\n field_output: output\n\n # Customizable to be single line or multi-line\n # Use {instruction}/{input} as key to be replaced\n # 'format' can include {input}\n format: |-\n User: {instruction} {input}\n Assistant:\n # 'no_input_format' cannot include {input}\n no_input_format: \"{instruction} \"\n\n # For `completion` datsets only, uses the provided field instead of `text` column\n field:\n\n # Using chat template\n - path: ...\n # Set type to `chat_template` to use this strategy\n type: chat_template\n # Specify the name of the chat template to use\n # The name of the chat template to use for training, following values are supported:\n # - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default.\n # - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n # - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to if the tokenizer does not have a chat template else default to tokenizer. E.g. tokenizer_default_fallback_chatml.\n # - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n chat_template: tokenizer_default\n\n # Custom jinja chat template. Used only if `chat_template: jinja` or empty.\n chat_template_jinja:\n\n # Key containing the messages (default: \"messages\")\n field_messages: messages\n # Key for role in each message (default: \"role\")\n message_field_role: role\n # Key for content in each message (default: \"content\")\n message_field_content: content\n\n # Optional[Dict[str, List]]. Roles mapping in the messages. The default is:\n roles:\n user: [\"human\", \"user\"]\n assistant: [\"gpt\", \"assistant\"]\n system: [\"system\"]\n tool: [\"tool\"]\n\n # IMPORTANT: The following fields determine which parts of the conversation to train on.\n # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train\n # See examples at `docs/dataset-formats/conversation.qmd`\n # Note: If the below 4 fields are empty, defaults to training only on the last message.\n\n # Optional[List[str]]. Roles to train on. 
The tokens from these roles will be considered for the loss.\n roles_to_train: [\"assistant\"] # default\n # Optional[str]. Which EOS tokens to train on in the conversation. Possible values are:\n # - all: train on all EOS tokens\n # - turn (default): train on the EOS token at the end of each trainable turn\n # - last: train on the last EOS token in the conversation\n train_on_eos: last\n # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`.\n message_field_training: training\n # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn.\n # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train).\n message_field_training_detail: train_detail\n\n\n# If false, the datasets will not be shuffled and will keep their original order in `datasets`.\n# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.\nshuffle_merged_datasets: true\n\nDeduplicates datasets and test_datasets with identical entries.\ndataset_exact_deduplication: true\n\n# A list of one or more datasets to eval the model with.\n# You can use either test_datasets, or val_set_size, but not both.\ntest_datasets:\n - path: /workspace/data/eval.jsonl\n ds_type: json\n # You need to specify a split. For \"json\" datasets the default split is called \"train\".\n split: train\n type: completion\n data_files:\n - /workspace/data/eval.jsonl\n\n# use RL training: 'dpo', 'ipo', 'kto'\nrl:\n# whether to perform weighting if doing DPO training. Boolean.\ndpo_use_weighting:\n\n# reward modelling: `True` or `False`\nreward_model:\n\n# process reward modelling: `True` or `False`\nprocess_reward_model:\n\n# The name of the chat template to use for training, following values are supported:\n# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default value.\n# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n# - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer.\n# - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n# The selected chat template will be saved to the tokenizer_config.json for easier inferencing\n# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template.\nchat_template: tokenizer_default\n# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null.\nchat_template_jinja: null\n# Changes the default system message\ndefault_system_message: You are a helpful assistant. Please give a long and detailed answer. 
# Currently only supports chatml.\n# Axolotl attempts to save the dataset as an arrow file after packing the data together so\n# subsequent training attempts load faster. Relative path.\ndataset_prepared_path: data/last_run_prepared\n# Push prepared dataset to hub\npush_dataset_to_hub: # repo path\n# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()`\n# if not set.\ndataset_processes: # defaults to os.cpu_count() if not set\n# Keep dataset in memory while preprocessing\n# Only needed if cached dataset is taking too much storage\ndataset_keep_in_memory:\n# push checkpoints to hub\nhub_model_id: # private repo path to push finetuned model\n# how to push checkpoints to hub\n# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy\nhub_strategy:\n# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets\n# Required to be true when used in combination with `push_dataset_to_hub`\nhf_use_auth_token: # boolean\n# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.\nval_set_size: 0.04\n# Num shards for whole dataset\ndataset_shard_num:\n# Index of shard to use for whole dataset\ndataset_shard_idx:\n\n# The maximum length of an input to train with; this should typically be less than 2048\n# as most models have a token/context limit of 2048\nsequence_len: 2048\n# Pad inputs so each step uses constant sized buffers\n# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently\npad_to_sequence_len:\n# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommended to set to 'true'\nsample_packing:\n# Set to 'false' if getting errors during eval with sample_packing on.\neval_sample_packing:\n# You can set these packing optimizations AFTER starting a training at least once.\n# The trainer will provide recommended values for these settings.\nsample_packing_eff_est:\ntotal_num_tokens:\n# Increasing the following values helps with packing, but usually only slightly (<1%).\n# The number of samples packed at a time.\nsample_packing_group_size: 100000\n# The number of samples which can be packed into one sequence. Increase if using a large sequence_len with many short samples.\nsample_packing_bin_size: 200\n# Whether to concatenate samples during pretraining\npretraining_sample_concatenation:\n\n# Use batch flattening for speedups when not using sample_packing\nbatch_flattening:\n\n# Passed through to transformers when loading the model when launched without accelerate\n# Use `sequential` when training w/ model parallelism to limit memory\ndevice_map:\n# Defines the max memory usage per gpu on the system. 
Passed through to transformers when loading the model.\nmax_memory:\n\n# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model\nadapter: lora\n# If you already have a lora model trained that you want to load, put that here.\n# This means after training, if you want to test the model, you should set this to the value of `output_dir`.\n# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.\nlora_model_dir:\n\n# LoRA hyperparameters\n# For more details about the following options, see:\n# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n# - k_proj\n# - o_proj\n# - gate_proj\n# - down_proj\n# - up_proj\nlora_target_linear: # If true, will target all linear modules\npeft_layers_to_transform: # The layer indices to transform, otherwise, apply to all layers\n\n# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.\n# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. It may vary for other models.\n# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.\n# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994\nlora_modules_to_save:\n# - embed_tokens\n# - lm_head\n\nlora_fan_in_fan_out: false\n\n# LoRA+ hyperparameters\n# For more details about the following options, see:\n# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py`\nloraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4.\nloraplus_lr_embedding: # loraplus learning rate for lora embedding layers. 
Default value is 1e-6.\n\npeft:\n # Configuration options for loftq initialization for LoRA\n # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization\n loftq_config:\n loftq_bits: # typically 4 bits\n\n# ReLoRA configuration\n# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed\nrelora_steps: # Number of steps per ReLoRA restart\nrelora_warmup_steps: # Number of per-restart warmup steps\nrelora_anneal_steps: # Number of anneal steps for each relora cycle\nrelora_prune_ratio: # threshold for optimizer magnitude when pruning\nrelora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings\n\n# wandb configuration if you're using it\n# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.\nwandb_mode: # \"offline\" to save run metadata locally and not sync to the server, \"disabled\" to turn off wandb\nwandb_project: # Your wandb project name\nwandb_entity: # A wandb Team name if using a Team\nwandb_watch:\nwandb_name: # Set the name of your wandb run\nwandb_run_id: # Set the ID of your wandb run\nwandb_log_model: # \"checkpoint\" to log model to wandb Artifacts every `save_steps` or \"end\" to log only at the end of training\n\n# mlflow configuration if you're using it\nmlflow_tracking_uri: # URI to mlflow\nmlflow_experiment_name: # Your experiment name\nmlflow_run_name: # Your run name\nhf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry\n\n# Comet configuration if you're using it\n# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`.\n# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start\nuse_comet: # Enable or disable Comet integration.\ncomet_api_key: # API key for Comet. Recommended to set via `comet login`.\ncomet_workspace: # Workspace name in Comet. Defaults to the user's default workspace.\ncomet_project_name: # Project name in Comet. Defaults to Uncategorized.\ncomet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key.\ncomet_mode: # Create a new experiment (\"create\") or log to an existing one (\"get\"). Default (\"get_or_create\") auto-selects based on configuration.\ncomet_online: # Set to True to log data to Comet server, or False for offline storage. Default is True.\ncomet_experiment_config: # Dictionary for additional configuration settings, see the doc for more details.\n\n# Where to save the full-finetuned model to\noutput_dir: ./completed-model\n\n# Whether to use torch.compile and which backend to use\n# setting to `auto` will enable torch compile when torch>=2.5.1\ntorch_compile: # Optional[Union[Literal[\"auto\"], bool]]\ntorch_compile_backend: # Optional[str]\n\n# Training hyperparameters\n\n# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps.\ngradient_accumulation_steps: 1\n# The number of samples to include in each batch. 
This is the number of samples sent to each GPU.\n# Batch size per gpu = micro_batch_size * gradient_accumulation_steps\nmicro_batch_size: 2\neval_batch_size:\nnum_epochs: 4\nwarmup_steps: 100 # cannot use with warmup_ratio\nwarmup_ratio: 0.05 # cannot use with warmup_steps\nlearning_rate: 0.00003\nlr_quadratic_warmup:\nlogging_steps:\neval_steps: # Leave empty to eval at each epoch, integer for every N steps. float for fraction of total steps\nevals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps\neval_strategy: # Set to `\"no\"` to skip evaluation, `\"epoch\"` at end of each epoch, leave empty to infer from `eval_steps`.\nsave_strategy: # Set to `\"no\"` to skip checkpoint saves, `\"epoch\"` at end of each epoch, `\"best\"` when better result is achieved, leave empty to infer from `save_steps`.\nsave_steps: # Leave empty to save at each epoch, integer for every N steps. float for fraction of total steps\nsaves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps\nsave_total_limit: # Checkpoints saved at a time\n# Maximum number of iterations to train for. It precedes num_epochs which means that\n# if both are set, num_epochs will not be guaranteed.\n# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps\nmax_steps:\n\neval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0\neval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128\neval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is [\"sacrebleu\", \"comet\", \"ter\", \"chrf\", \"perplexity\"]\n\nprofiler_steps: # enable the pytorch profiler to capture the first N steps of training to the output_dir.\n # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information\n # snapshots can be visualized @ https://pytorch.org/memory_viz\n\nloss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)\nloss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)\n\n# Save model as safetensors (require safetensors package)\nsave_safetensors:\n\n# Whether to mask out or include the human's prompt from the training labels\ntrain_on_inputs: false\n# Group similarly sized data to minimize padding.\n# May be slower to start, as it must download and sort the entire dataset.\n# Note that training loss may have an oscillating pattern with this enabled.\ngroup_by_length: false\n\n# Whether to use gradient checkpointing https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing\ngradient_checkpointing: false\n# additional kwargs to pass to the trainer for gradient checkpointing\n# gradient_checkpointing_kwargs:\n# use_reentrant: true\n\n# Stop training after this many evaluation losses have increased in a row\n# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback\nearly_stopping_patience: 3\n\n# Specify a scheduler and kwargs to use with the optimizer\nlr_scheduler: # 'one_cycle' | 'log_sweep' | empty for cosine\nlr_scheduler_kwargs:\ncosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr\ncosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. 
cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)\n\n# For one_cycle optim\nlr_div_factor: # Learning rate div factor\n\n# Specify optimizer\n# Valid values are driven by the Transformers OptimizerNames class, see:\n# https://github.com/huggingface/transformers/blob/95b374952dc27d8511541d6f5a4e22c9ec11fb24/src/transformers/training_args.py#L134\n#\n# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of\n# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used\n# in the examples/ for your model and fine-tuning use case.\n#\n# Valid values for 'optimizer' include:\n# - adamw_hf\n# - adamw_torch\n# - adamw_torch_fused\n# - adamw_torch_xla\n# - adamw_apex_fused\n# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)\n# - adafactor\n# - adamw_anyprecision\n# - sgd\n# - adagrad\n# - adamw_bnb_8bit\n# - lion_8bit\n# - lion_32bit\n# - paged_adamw_32bit\n# - paged_adamw_8bit\n# - paged_lion_32bit\n# - paged_lion_8bit\n# - galore_adamw\n# - galore_adamw_8bit\n# - galore_adafactor\n# - galore_adamw_layerwise\n# - galore_adamw_8bit_layerwise\n# - galore_adafactor_layerwise\noptimizer:\n# Dictionary of arguments to pass to the optimizer\noptim_args:\n# For GaLore optimizers the following optim_args are available\n# rank: # type: int\n# update_proj_gap: # type: int\n# scale: # type: float\n# proj_type: # type: str, default = std\n\n# The target modules to optimize, i.e. the module names that you would like to train, right now this is used only for the GaLore algorithm\noptim_target_modules:\n# - self_attn # for llama\n# - mlp\n\n# Specify weight decay\nweight_decay:\n# adamw hyperparams\nadam_beta1:\nadam_beta2:\nadam_epsilon:\n# Gradient clipping max norm\nmax_grad_norm:\n\n# Augmentation techniques\n# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings\n# currently only supported on Llama and Mistral\nneftune_noise_alpha:\n\n# Whether to use BetterTransformers\nflash_optimum:\n# Whether to use xformers attention patch https://github.com/facebookresearch/xformers:\nxformers_attention:\n# Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:\nflash_attention:\nflash_attn_cross_entropy: # Whether to use flash-attention cross entropy implementation - advanced use only\nflash_attn_rms_norm: # Whether to use flash-attention rms norm implementation - advanced use only\nflash_attn_fuse_qkv: # Whether to fuse QKV into a single operation\nflash_attn_fuse_mlp: # Whether to fuse part of the MLP into a single operation\n# Whether to use scaled-dot-product attention\n# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html\nsdp_attention:\n# Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf\ns2_attention:\n# Resume from a specific checkpoint dir\nresume_from_checkpoint:\n# Use this if resume_from_checkpoint isn't set and you simply want training to resume where it left off.\n# Be careful with this being turned on between different models.\nauto_resume_from_checkpoints: false\n\n# Don't mess with this, it's here for accelerate and torchrun\nlocal_rank:\n\n# Add or change special tokens.\n# If you add tokens here, you don't need to add them to the `tokens` list.\nspecial_tokens:\n # bos_token: \"<s>\"\n # eos_token: \"</s>\"\n # unk_token: \"<unk>\"\n # pad_token: \"[PAD]\"\n\n# 
Add extra tokens.\ntokens:\n\n# FSDP\nfsdp:\nfsdp_config:\n\n# Deepspeed config path. e.g., deepspeed_configs/zero3.json\ndeepspeed:\n\n# Advanced DDP Arguments\nddp_timeout:\nddp_bucket_cap_mb:\nddp_broadcast_buffers:\n\n# Path to torch distx for optim 'adamw_anyprecision'\ntorchdistx_path:\n\n# Set to a HF dataset for type: 'completion' to stream the data instead of pre-tokenizing\npretraining_dataset:\n\n# Debug mode\ndebug:\n\n# Seed\nseed:\n\n# Allow overwriting the yml config using the cli\nstrict:",
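To make the reference above easier to act on, here is a minimal sketch of a LoRA fine-tune that assembles a handful of the documented options; the base model and dataset path are placeholder assumptions, and the values simply echo the defaults shown in the reference rather than tuned recommendations.

```yaml
# Minimal illustrative LoRA config assembled from the options documented above.
# base_model and the dataset path are placeholders, not recommendations.
base_model: NousResearch/Llama-2-7b-hf
datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca
val_set_size: 0.04
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
micro_batch_size: 2
gradient_accumulation_steps: 1
num_epochs: 4
learning_rate: 0.00003
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
gradient_checkpointing: true
flash_attention: true
output_dir: ./completed-model
```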
+ "text": "Overview\nReinforcement Learning from Human Feedback is a method whereby a language model is optimized from data using human feedback. Various methods include, but not limited to:\n\nProximal Policy Optimization (PPO) (not yet supported in axolotl)\nDirect Preference Optimization (DPO)\nIdentity Preference Optimization (IPO)\n\n\n\nRLHF using Axolotl\n\n[!IMPORTANT] This is a BETA feature and many features are not fully implemented. You are encouraged to open new PRs to improve the integration and functionality.\n\nThe various RL training methods are implemented in trl and wrapped via axolotl. Below are various examples with how you can use various preference datasets to train models that use ChatML\n\nDPO\nrl: dpo\ndatasets:\n - path: Intel/orca_dpo_pairs\n split: train\n type: chatml.intel\n - path: argilla/ultrafeedback-binarized-preferences\n split: train\n type: chatml.argilla\n\n\nIPO\nrl: ipo\n\n\nORPO\nPaper: https://arxiv.org/abs/2403.07691\nrl: orpo\norpo_alpha: 0.1\nremove_unused_columns: false\n\nchat_template: chatml\ndatasets:\n - path: argilla/ultrafeedback-binarized-preferences-cleaned\n type: chat_template.argilla\n\n\nKTO\nrl: kto\nrl_beta: 0.5\nkto_desirable_weight: 0.2\n\nremove_unused_columns: false\n\ndatasets:\n - path: argilla/ultrafeedback-binarized-preferences-cleaned-kto\n type: llama3.ultra\n split: train\n\ngradient_checkpointing: true\ngradient_checkpointing_kwargs:\n use_reentrant: true\n\n\nUsing local dataset files\ndatasets:\n - ds_type: json\n data_files:\n - orca_rlhf.jsonl\n split: train\n type: chatml.intel\n\n\nTrl autounwrap for peft\nTrl supports autounwrapping peft models, so that a ref model does not need to be additionally loaded, leading to less VRAM needed. This is on by default. To turn it off, pass the following config.\n# load ref model when adapter training.\nrl_adapter_ref_model: true",
"crumbs": [
- "Reference",
- "Config options"
+ "How-To Guides",
+ "RLHF (Beta)"
+ ]
+ },
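To show how the DPO snippet above slots into a full run, here is a hedged sketch of a complete config; `rl: dpo` and the dataset stanza come from the entry above, while the base model, adapter settings, and hyperparameters are illustrative assumptions only.

```yaml
# Illustrative DPO run; base_model, adapter settings, and hyperparameters are assumptions.
base_model: NousResearch/Llama-2-7b-hf
rl: dpo
datasets:
  - path: Intel/orca_dpo_pairs
    split: train
    type: chatml.intel
adapter: lora
lora_r: 8
lora_alpha: 16
lora_target_linear: true
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.00005
optimizer: paged_adamw_8bit
gradient_checkpointing: true
output_dir: ./dpo-out
```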
+ {
+ "objectID": "docs/multipack.html",
+ "href": "docs/multipack.html",
+ "title": "Multipack (Sample Packing)",
+ "section": "",
+ "text": "Because Flash Attention simply drops the attention mask, we do not need to construct a 4d attention mask. We only need to concatenate the sequences into a single batch and let flash attention know where each new sequence begins.\n4k context, bsz =4, each character represents 256 tokens X represents a padding token\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B ]\n C C C C C C C ]\n D D D D ]]\n\n[[ E E E E E E E E ]\n [ F F F F ]\n [ G G G ]\n [ H H H H ]]\n\n[[ I I I ]\n [ J J J ]\n [ K K K K K]\n [ L L L ]]\nafter padding to longest input in each step\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B X X X X X X ]\n C C C C C C C X X X X ]\n D D D D X X X X X X X ]]\n\n[[ E E E E E E E E ]\n [ F F F F X X X X ]\n [ G G G X X X X X ]\n [ H H H H X X X X ]]\n\n[[ I I I X X ]\n [ J J J X X ]\n [ K K K K K ]\n [ L L L X X ]]\nw packing ( note it’s the same effective number of tokens per step, but a true bsz of 1)\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A B B B B B\n B C C C C C C C D D D D E E E E\n E E E E F F F F F G G G H H H H\n I I I J J J J K K K K K L L L X ]]\ncu_seqlens: [[ 0, 11, 17, 24, 28, 36, 41 44, 48, 51, 55, 60, 64]]",
+ "crumbs": [
+ "How-To Guides",
+ "Multipack (Sample Packing)"
+ ]
+ },
+ {
+ "objectID": "docs/multipack.html#visualization-of-multipack-with-flash-attention",
+ "href": "docs/multipack.html#visualization-of-multipack-with-flash-attention",
+ "title": "Multipack (Sample Packing)",
+ "section": "",
+ "text": "Because Flash Attention simply drops the attention mask, we do not need to construct a 4d attention mask. We only need to concatenate the sequences into a single batch and let flash attention know where each new sequence begins.\n4k context, bsz =4, each character represents 256 tokens X represents a padding token\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B ]\n C C C C C C C ]\n D D D D ]]\n\n[[ E E E E E E E E ]\n [ F F F F ]\n [ G G G ]\n [ H H H H ]]\n\n[[ I I I ]\n [ J J J ]\n [ K K K K K]\n [ L L L ]]\nafter padding to longest input in each step\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A ]\n B B B B B B X X X X X X ]\n C C C C C C C X X X X ]\n D D D D X X X X X X X ]]\n\n[[ E E E E E E E E ]\n [ F F F F X X X X ]\n [ G G G X X X X X ]\n [ H H H H X X X X ]]\n\n[[ I I I X X ]\n [ J J J X X ]\n [ K K K K K ]\n [ L L L X X ]]\nw packing ( note it’s the same effective number of tokens per step, but a true bsz of 1)\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5\n[[ A A A A A A A A A A A B B B B B\n B C C C C C C C D D D D E E E E\n E E E E F F F F F G G G H H H H\n I I I J J J J K K K K K L L L X ]]\ncu_seqlens: [[ 0, 11, 17, 24, 28, 36, 41 44, 48, 51, 55, 60, 64]]",
+ "crumbs": [
+ "How-To Guides",
+ "Multipack (Sample Packing)"
+ ]
+ },
+ {
+ "objectID": "docs/multipack.html#multipack-without-flash-attention",
+ "href": "docs/multipack.html#multipack-without-flash-attention",
+ "title": "Multipack (Sample Packing)",
+ "section": "Multipack without Flash Attention",
+ "text": "Multipack without Flash Attention\nMultipack can still be achieved without Flash attention, but with lower packing efficiency as we are not able to join multiple batches into a single batch due to context length limits without flash attention. We can use either Pytorch’s Scaled Dot Product Attention implementation or native Pytorch attention implementation along with 4d attention masks to pack sequences together and avoid cross attention.",
+ "crumbs": [
+ "How-To Guides",
+ "Multipack (Sample Packing)"
]
},
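The packing behaviour visualised in the multipack entries above maps onto a small set of flags from the config reference; a hedged sketch of how they might be combined is below (values are illustrative, not recommendations).

```yaml
# Multipack with Flash Attention (the high-efficiency path described above)
sequence_len: 2048
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
flash_attention: true

# Without Flash Attention, packing falls back to SDPA or native attention
# with 4d attention masks, e.g.:
# flash_attention: false
# sdp_attention: true
```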
{
@@ -935,7 +1200,7 @@
"href": "docs/dataset-formats/stepwise_supervised.html",
"title": "Stepwise Supervised Format",
"section": "",
- "text": "The stepwise supervised format is designed for chain-of-thought (COT) reasoning datasets where each example contains multiple completion steps and a preference label for each step. ### ExampleHere’s a simple example of a stepwise supervised dataset entry:```json { “prompt”: “Which number is larger, 9.8 or 9.11?”, “completions”: [ “The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11.”, “Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8.” ], “labels”: [true, false] }",
+ "text": "The stepwise supervised format is designed for chain-of-thought (COT) reasoning datasets where each example contains multiple completion steps and a preference label for each step.\n\n\nHere’s a simple example of a stepwise supervised dataset entry:\n{\n \"prompt\": \"Which number is larger, 9.8 or 9.11?\",\n \"completions\": [\n \"The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11.\",\n \"Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8.\"\n ],\n \"labels\": [true, false]\n}",
"crumbs": [
"Dataset Formats",
"Stepwise Supervised Format"
@@ -946,7 +1211,7 @@
"href": "docs/dataset-formats/stepwise_supervised.html#stepwise-supervised",
"title": "Stepwise Supervised Format",
"section": "",
- "text": "The stepwise supervised format is designed for chain-of-thought (COT) reasoning datasets where each example contains multiple completion steps and a preference label for each step. ### ExampleHere’s a simple example of a stepwise supervised dataset entry:```json { “prompt”: “Which number is larger, 9.8 or 9.11?”, “completions”: [ “The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11.”, “Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8.” ], “labels”: [true, false] }",
+ "text": "The stepwise supervised format is designed for chain-of-thought (COT) reasoning datasets where each example contains multiple completion steps and a preference label for each step.\n\n\nHere’s a simple example of a stepwise supervised dataset entry:\n{\n \"prompt\": \"Which number is larger, 9.8 or 9.11?\",\n \"completions\": [\n \"The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11.\",\n \"Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8.\"\n ],\n \"labels\": [true, false]\n}",
"crumbs": [
"Dataset Formats",
"Stepwise Supervised Format"
diff --git a/sitemap.xml b/sitemap.xml
index eacc088a4..2d0008ef8 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,134 +2,150 @@
https://axolotl-ai-cloud.github.io/axolotl/FAQS.html
- 2025-01-30T16:48:59.932Z
+ 2025-01-30T17:49:31.437Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/index.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
https://axolotl-ai-cloud.github.io/axolotl/docs/amd_hpc.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.438Z
https://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html
- 2025-01-30T16:48:59.938Z
+ 2025-01-30T17:49:31.442Z
- https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html
- 2025-01-30T16:48:59.938Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html
- 2025-01-30T16:48:59.938Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html
- 2025-01-30T16:48:59.935Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/ray-integration.html
- 2025-01-30T16:48:59.938Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html
- 2025-01-30T16:48:59.935Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html
- 2025-01-30T16:48:59.938Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/unsloth.html
- 2025-01-30T16:48:59.938Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html
- 2025-01-30T16:48:59.939Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/TODO.html
- 2025-01-30T16:48:59.932Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/cut_cross_entropy/ACKNOWLEDGEMENTS.html
- 2025-01-30T16:48:59.955Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html
- 2025-01-30T16:48:59.955Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/index.html
- 2025-01-30T16:48:59.952Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html
- 2025-01-30T16:48:59.938Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html
- 2025-01-30T16:48:59.938Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html
- 2025-01-30T16:48:59.934Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html
- 2025-01-30T16:48:59.934Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html
- 2025-01-30T16:48:59.935Z
-
-
- https://axolotl-ai-cloud.github.io/axolotl/docs/lr_groups.html
- 2025-01-30T16:48:59.938Z
+ https://axolotl-ai-cloud.github.io/axolotl/docs/installation.html
+ 2025-01-30T17:49:31.442Z
https://axolotl-ai-cloud.github.io/axolotl/docs/config.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/lr_groups.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html
+ 2025-01-30T17:49:31.439Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html
+ 2025-01-30T17:49:31.439Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/getting-started.html
+ 2025-01-30T17:49:31.439Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html
+ 2025-01-30T17:49:31.443Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/TODO.html
+ 2025-01-30T17:49:31.438Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/cut_cross_entropy/ACKNOWLEDGEMENTS.html
+ 2025-01-30T17:49:31.457Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html
+ 2025-01-30T17:49:31.456Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/index.html
+ 2025-01-30T17:49:31.454Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/multi-gpu.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/unsloth.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/inference.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html
+ 2025-01-30T17:49:31.438Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html
+ 2025-01-30T17:49:31.439Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/ray-integration.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html
+ 2025-01-30T17:49:31.439Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html
+ 2025-01-30T17:49:31.442Z
+
+
+ https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html
+ 2025-01-30T17:49:31.442Z
https://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html
- 2025-01-30T16:48:59.938Z
+ 2025-01-30T17:49:31.442Z
https://axolotl-ai-cloud.github.io/axolotl/docs/reward_modelling.html
- 2025-01-30T16:48:59.938Z
+ 2025-01-30T17:49:31.442Z
https://axolotl-ai-cloud.github.io/axolotl/docs/torchao.html
- 2025-01-30T16:48:59.938Z
+ 2025-01-30T17:49:31.442Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/pretraining.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/stepwise_supervised.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
https://axolotl-ai-cloud.github.io/axolotl/docs/cli.html
- 2025-01-30T16:48:59.934Z
+ 2025-01-30T17:49:31.439Z
diff --git a/src/axolotl/integrations/LICENSE.html b/src/axolotl/integrations/LICENSE.html
index 453ecde07..bac476106 100644
--- a/src/axolotl/integrations/LICENSE.html
+++ b/src/axolotl/integrations/LICENSE.html
@@ -124,9 +124,27 @@ ul.task-list li input[type="checkbox"] {