diff --git a/.nojekyll b/.nojekyll
index 3c0ef2ac9..02b3dc3b4 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-9efd6963
\ No newline at end of file
+7c6a53ca
\ No newline at end of file
diff --git a/docs/dataset-formats/index.html b/docs/dataset-formats/index.html
index 805189389..1592d1745 100644
--- a/docs/dataset-formats/index.html
+++ b/docs/dataset-formats/index.html
@@ -363,7 +363,7 @@ Description
-rl: kto
+rl_beta: 0.5
+kto_desirable_weight: 0.2
+
+remove_unused_columns: false
+
+datasets:
+ - path: argilla/ultrafeedback-binarized-preferences-cleaned-kto
+ type: llama3.ultra
+ split: train
+
+gradient_checkpointing: true
+gradient_checkpointing_kwargs:
+ use_reentrant: true
datasets:
- - ds_type: json
-   data_files:
-     - orca_rlhf.jsonl
-   split: train
-   type: chatml.intel
datasets:
+ - ds_type: json
+   data_files:
+     - orca_rlhf.jsonl
+   split: train
+   type: chatml.intel
TRL supports autounwrapping peft models, so that a ref model does not need to be loaded separately, reducing the VRAM needed. This is on by default. To turn it off, pass the following config.
-# load ref model when adapter training.
-rl_adapter_ref_model: true
+# load ref model when adapter training.
+rl_adapter_ref_model: true
Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.
Features:
- Train various Huggingface models such as llama, pythia, falcon, mpt
- Supports fullfinetune, lora, qlora, relora, and gptq
- Customize configurations using a simple yaml file or CLI overwrite
- Load different dataset formats, use custom formats, or bring your own tokenized datasets
- Integrated with xformer, flash attention, liger kernel, rope scaling, and multipacking
- Works with single GPU or multiple GPUs via FSDP or Deepspeed
- Easily run with Docker locally or on the cloud
- Log results and optionally checkpoints to wandb, mlflow or Comet
- And more!
@@ -370,51 +369,42 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.
Requirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention) or AMD GPU, Python >=3.10 and PyTorch >=2.3.1.
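If you want to sanity-check these requirements before installing, a quick check along these lines can help (a sketch; it assumes PyTorch is already installed):
# check the Python and PyTorch versions, and whether the GPU supports bf16
python3 --version
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.is_bf16_supported())"
nvidia-smi  # confirm the driver / CUDA setup on Nvidia GPUs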
git clone https://github.com/axolotl-ai-cloud/axolotl
-cd axolotl
-
-pip3 install packaging ninja
-pip3 install -e '.[flash-attn,deepspeed]'
-# preprocess datasets - optional but recommended
-CUDA_VISIBLE_DEVICES="0" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml
-
-# finetune lora
-accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
-
-# inference
-accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
- --lora_model_dir="./outputs/lora-out"
-
-# gradio
-accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
- --lora_model_dir="./outputs/lora-out" --gradio
-
-# remote yaml files - the yaml config can be hosted on a public URL
-# Note: the yaml config must directly link to the **raw** yaml
-accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml
+pip3 install axolotl[flash-attn,deepspeed]
+
+# download examples and optionally deepspeed configs to the local path
+axolotl fetch examples
+axolotl fetch deepspeed_configs # OPTIONAL
+
+# finetune using lora
+axolotl train examples/llama-3/lora-1b.yml
If you’re looking for the latest features and updates between releases, you’ll need to install from source.
+git clone https://github.com/axolotl-ai-cloud/axolotl.git
+cd axolotl
+pip3 install packaging ninja
+pip3 install -e '.[flash-attn,deepspeed]'
-If you’ve installed this package using pip from source, we now support a new, more streamlined CLI using click. Rewriting the above commands:
+We now support a new, more streamlined CLI using click.
# preprocess datasets - optional but recommended
-CUDA_VISIBLE_DEVICES="0" axolotl preprocess examples/openllama-3b/lora.yml
+CUDA_VISIBLE_DEVICES="0" axolotl preprocess examples/llama-3/lora-1b.yml
# finetune lora
-axolotl train examples/openllama-3b/lora.yml
+axolotl train examples/llama-3/lora-1b.yml
# inference
-axolotl inference examples/openllama-3b/lora.yml \
+axolotl inference examples/llama-3/lora-1b.yml \
--lora-model-dir="./outputs/lora-out"
# gradio
-axolotl inference examples/openllama-3b/lora.yml \
+axolotl inference examples/llama-3/lora-1b.yml \
--lora-model-dir="./outputs/lora-out" --gradio
# remote yaml files - the yaml config can be hosted on a public URL
# Note: the yaml config must directly link to the **raw** yaml
-axolotl train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml
+axolotl train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml
We’ve also added a new command for fetching examples and deepspeed_configs to your local machine. This will come in handy when installing axolotl from PyPI.
# Fetch example YAML files (stores in "examples/" folder)
axolotl fetch examples
@@ -425,11 +415,37 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
# Optionally, specify a destination folder
axolotl fetch examples --dest path/to/folder
While the Axolotl CLI is the preferred method for interacting with axolotl, we still support the legacy -m axolotl.cli.* usage.
# preprocess datasets - optional but recommended
+CUDA_VISIBLE_DEVICES="0" python -m axolotl.cli.preprocess examples/llama-3/lora-1b.yml
+
+# finetune lora
+accelerate launch -m axolotl.cli.train examples/llama-3/lora-1b.yml
+
+# inference
+accelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \
+ --lora_model_dir="./outputs/lora-out"
+
+# gradio
+accelerate launch -m axolotl.cli.inference examples/llama-3/lora-1b.yml \
+ --lora_model_dir="./outputs/lora-out" --gradio
+
+# remote yaml files - the yaml config can be hosted on a public URL
+# Note: the yaml config must directly link to the **raw** yaml
+accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/llama-3/lora-1b.yml
Building something cool with Axolotl? Consider adding a badge to your model card.
-[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
Bugs? Please check the open issues, or else create a new Issue.
PRs are greatly welcome!
Please run the quickstart instructions, followed by the commands below, to set up your environment:
-pip3 install -r requirements-dev.txt -r requirements-tests.txt
-pre-commit install
-
-# test
-pytest tests/
-
-# optional: run against all files
-pre-commit run --all-files
+pip3 install -r requirements-dev.txt -r requirements-tests.txt
+pre-commit install
+
+# test
+pytest tests/
+
+# optional: run against all files
+pre-commit run --all-files
Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.
docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
Or run on the current files for development:
-docker compose up -d
+docker compose up -d
@@ -665,7 +681,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
Docker advanced
[!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide’s section on Docker.
A more powerful Docker command to run would be this:
-docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface axolotlai/axolotl:main-latest
+docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface axolotlai/axolotl:main-latest
It additionally:
* Prevents memory issues when running e.g. deepspeed (e.g. you could hit a SIGBUS/signal 7 error) through the --ipc and --ulimit args.
* Persists the downloaded HF data (models etc.) and your modifications to the axolotl code through the --mount/-v args.
* The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal.
* The --privileged flag gives all capabilities to the container.
* The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.
More information is available on the NVIDIA website.
@@ -699,28 +715,28 @@ Click to Expand
-sudo apt update
-sudo apt install -y python3.10
-
-sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
-sudo update-alternatives --config python # pick 3.10 if given option
-python -V # should be 3.10
+sudo apt update
+sudo apt install -y python3.10
+
+sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
+sudo update-alternatives --config python # pick 3.10 if given option
+python -V # should be 3.10
-wget https://bootstrap.pypa.io/get-pip.py
-python get-pip.py
+wget https://bootstrap.pypa.io/get-pip.py
+python get-pip.py
Install Pytorch: https://pytorch.org/get-started/locally/
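For example, a typical pip-based install looks like the line below. This is only a sketch; the exact index URL depends on your CUDA version, so take the command generated on the PyTorch site rather than copying this one verbatim.
# example: CUDA 12.1 wheels -- adjust the cuXXX suffix to match your CUDA toolkit
pip3 install torch --index-url https://download.pytorch.org/whl/cu121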
Follow instructions on quickstart.
Run
pip3 install protobuf==3.20.3
-pip3 install -U --ignore-installed requests Pillow psutil scipy
+pip3 install protobuf==3.20.3
+pip3 install -U --ignore-installed requests Pillow psutil scipy
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
Use a Deep Learning Linux OS with CUDA and PyTorch installed. Then follow the instructions in the quickstart.
Make sure to run the below to uninstall xla.
-pip uninstall -y torch_xla[tpu]
+pip uninstall -y torch_xla[tpu]
To launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:
-pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]" # choose your clouds
-sky check
+pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]" # choose your clouds
+sky check
Get the example YAMLs for using Axolotl to finetune mistralai/Mistral-7B-v0.1:
git clone https://github.com/skypilot-org/skypilot.git
cd skypilot/llm/axolotl
Use one command to launch:
-# On-demand
-HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN
-
-# Managed spot (auto-recovery on preemption)
-HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET
+# On-demand
+HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN
+
+# Managed spot (auto-recovery on preemption)
+HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET
To launch on GPU instances (both on-demand and spot instances) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.
Write a job description in YAML as below:
-# dstack.yaml
-type: task
-
-image: axolotlai/axolotl-cloud:main-latest
-
-env:
- - HUGGING_FACE_HUB_TOKEN
- - WANDB_API_KEY
-
-commands:
- - accelerate launch -m axolotl.cli.train config.yaml
-
-ports:
- - 6006
-
-resources:
- gpu:
- memory: 24GB..
- count: 2
+# dstack.yaml
+type: task
+
+image: axolotlai/axolotl-cloud:main-latest
+
+env:
+ - HUGGING_FACE_HUB_TOKEN
+ - WANDB_API_KEY
+
+commands:
+ - accelerate launch -m axolotl.cli.train config.yaml
+
+ports:
+ - 6006
+
+resources:
+ gpu:
+ memory: 24GB..
+ count: 2
Then simply run the job with the dstack run command. Append the --spot option if you want a spot instance. The dstack run command will show you the instance with the cheapest price across multiple cloud services:
pip install dstack
-HUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot
+pip install dstack
+HUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot
For further and more fine-grained use cases, please refer to the official dstack documents and the detailed description of the axolotl example in the official repository.
See examples for a quick start. It is recommended to duplicate an example and modify it to your needs. The most important options are:
model
-base_model: ./llama-7b-hf # local or huggingface repo
+base_model: ./llama-7b-hf # local or huggingface repo
Note: The code will load the right architecture.
dataset
-datasets:
- # huggingface repo
- - path: vicgalle/alpaca-gpt4
- type: alpaca
-
- # huggingface repo with specific configuration/subset
- - path: EleutherAI/pile
- name: enron_emails
- type: completion # format from earlier
- field: text # Optional[str] default: text, field to use for completion data
-
- # huggingface repo with multiple named configurations/subsets
- - path: bigcode/commitpackft
- name:
- - ruby
- - python
- - typescript
- type: ... # unimplemented custom format
-
- # chat_template https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html#chat_template
- - path: ...
- type: chat_template
- chat_template: chatml # defaults to tokenizer's chat_template
-
- # local
- - path: data.jsonl # or json
- ds_type: json # see other options below
- type: alpaca
-
- # dataset with splits, but no train split
- - path: knowrohit07/know_sql
- type: context_qa.load_v2
- train_on_split: validation
-
- # loading from s3 or gcs
- # s3 creds will be loaded from the system default and gcs only supports public access
- - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.
- ...
-
- # Loading Data From a Public URL
- # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
- - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
- ds_type: json # this is the default, see other options below.
+datasets:
+ # huggingface repo
+ - path: vicgalle/alpaca-gpt4
+ type: alpaca
+
+ # huggingface repo with specific configuration/subset
+ - path: EleutherAI/pile
+ name: enron_emails
+ type: completion # format from earlier
+ field: text # Optional[str] default: text, field to use for completion data
+
+ # huggingface repo with multiple named configurations/subsets
+ - path: bigcode/commitpackft
+ name:
+ - ruby
+ - python
+ - typescript
+ type: ... # unimplemented custom format
+
+ # chat_template https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html#chat_template
+ - path: ...
+ type: chat_template
+ chat_template: chatml # defaults to tokenizer's chat_template
+
+ # local
+ - path: data.jsonl # or json
+ ds_type: json # see other options below
+ type: alpaca
+
+ # dataset with splits, but no train split
+ - path: knowrohit07/know_sql
+ type: context_qa.load_v2
+ train_on_split: validation
+
+ # loading from s3 or gcs
+ # s3 creds will be loaded from the system default and gcs only supports public access
+ - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.
+ ...
+
+ # Loading Data From a Public URL
+ # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
+ - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
+ ds_type: json # this is the default, see other options below.
loading
-load_in_4bit: true
-load_in_8bit: true
-
-bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
-fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
-tf32: true # require >=ampere
-
-bfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
-float16: true # use instead of fp16 when you don't want AMP
+load_in_4bit: true
+load_in_8bit: true
+
+bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
+fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
+tf32: true # require >=ampere
+
+bfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
+float16: true # use instead of fp16 when you don't want AMP
Note: Repo does not do 4-bit quantization.
lora
-adapter: lora # 'qlora' or leave blank for full finetune
-lora_r: 8
-lora_alpha: 16
-lora_dropout: 0.05
-lora_target_modules:
- - q_proj
- - v_proj
+adapter: lora # 'qlora' or leave blank for full finetune
+lora_r: 8
+lora_alpha: 16
+lora_dropout: 0.05
+lora_target_modules:
+ - q_proj
+ - v_proj
Run
-accelerate launch -m axolotl.cli.train your_config.yml
+accelerate launch -m axolotl.cli.train your_config.yml
@@ -889,7 +905,7 @@ cd skypilot/llm/axolotl
[!TIP] You can also reference a config file that is hosted on a public URL, for example
accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml
Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
Use --debug to see preprocessed examples.
python -m axolotl.cli.preprocess your_config.yml
Deepspeed is an optimization suite for multi-GPU systems that allows you to train much larger models than you might typically be able to fit into your GPU’s VRAM. More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated
We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.
-deepspeed: deepspeed_configs/zero1.json
+deepspeed: deepspeed_configs/zero1.json
accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
fsdp:
- - full_shard
- - auto_wrap
-fsdp_config:
- fsdp_offload_params: true
- fsdp_state_dict_type: FULL_STATE_DICT
- fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
+fsdp:
+ - full_shard
+ - auto_wrap
+fsdp_config:
+ fsdp_offload_params: true
+ fsdp_state_dict_type: FULL_STATE_DICT
+ fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
-wandb_mode:
-wandb_project:
-wandb_entity:
-wandb_watch:
-wandb_name:
-wandb_log_model:
+wandb_mode:
+wandb_project:
+wandb_entity:
+wandb_watch:
+wandb_name:
+wandb_log_model:
-use_comet:
-comet_api_key:
-comet_workspace:
-comet_project_name:
-comet_experiment_key:
-comet_mode:
-comet_online:
-comet_experiment_config:
+use_comet:
+comet_api_key:
+comet_workspace:
+comet_project_name:
+comet_experiment_key:
+comet_mode:
+comet_online:
+comet_experiment_config:
It is important to have special tokens like delimiters, end-of-sequence, and beginning-of-sequence in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:
-special_tokens:
- bos_token: "<s>"
- eos_token: "</s>"
- unk_token: "<unk>"
-tokens: # these are delimiters
- - "<|im_start|>"
- - "<|im_end|>"special_tokens:
+ bos_token: "<s>"
+ eos_token: "</s>"
+ unk_token: "<unk>"
+tokens: # these are delimiters
+ - "<|im_start|>"
+ - "<|im_end|>"When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.
Liger Kernel: Efficient Triton Kernels for LLM Training
https://github.com/linkedin/Liger-Kernel
Liger (LinkedIn GPU Efficient Runtime) Kernel is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. The Liger Kernel composes well and is compatible with both FSDP and Deepspeed.
-plugins:
- - axolotl.integrations.liger.LigerPlugin
-liger_rope: true
-liger_rms_norm: true
-liger_glu_activation: true
-liger_layer_norm: true
-liger_fused_linear_cross_entropy: true
+plugins:
+ - axolotl.integrations.liger.LigerPlugin
+liger_rope: true
+liger_rms_norm: true
+liger_glu_activation: true
+liger_layer_norm: true
+liger_fused_linear_cross_entropy: true
Pass the appropriate flag to the inference command, depending upon what kind of model was trained:
Pretrained LORA:
-python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
+python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
Full weights finetune:
-python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
+python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
Full weights finetune w/ a prompt from a text file:
-cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
- --base_model="./completed-model" --prompter=None --load_in_8bit=True
+cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
+ --base_model="./completed-model" --prompter=None --load_in_8bit=True
– With gradio hosting
-python -m axolotl.cli.inference examples/your_config.yml --gradio
+python -m axolotl.cli.inference examples/your_config.yml --gradio
Please use --sample_packing False if you have it on and receive an error similar to the one below:
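For example, reusing the inference command from above (a sketch; the flag follows the same --key=value override style as the other CLI overrides shown here):
# disable sample packing for inference if your training config had it enabled
python -m axolotl.cli.inference examples/your_config.yml --sample_packing=False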
Merge LORA to base
The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.
python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"
You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with
CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...
although this will be very slow; using the config options above is recommended instead.