diff --git a/.nojekyll b/.nojekyll
index 0336d4891..27c62d98d 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-e9fe91b2
\ No newline at end of file
+27cea293
\ No newline at end of file
diff --git a/docs/dataset-formats/index.html b/docs/dataset-formats/index.html
index f5e20a81a..23e062379 100644
--- a/docs/dataset-formats/index.html
+++ b/docs/dataset-formats/index.html
@@ -363,7 +363,7 @@ Description
- +
Pre-training
@@ -371,7 +371,7 @@ Description
Data format for a pre-training completion task.
- +
Instruction Tuning
@@ -379,7 +379,7 @@ Description
Instruction tuning formats for supervised fine-tuning.
- +
Conversation
@@ -387,7 +387,7 @@ Description
Conversation format for supervised fine-tuning.
- +
Template-Free
@@ -395,7 +395,7 @@ Description
Construct prompts without a template.
- +
Custom Pre-Tokenized Dataset
diff --git a/index.html b/index.html
index 0179fefbf..a4f2cd788 100644
@@ -292,11 +292,15 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Table Of Contents

@@ -328,11 +328,15 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Table Of Contents

@@ -366,6 +366,98 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.

Features:
- Train various Huggingface models such as llama, pythia, falcon, mpt
- Supports fullfinetune, lora, qlora, relora, and gptq
- Customize configurations using a simple yaml file or CLI overwrite
- Load different dataset formats, use custom formats, or bring your own tokenized datasets
- Integrated with xformer, flash attention, liger kernel, rope scaling, and multipacking
- Works with single GPU or multiple GPUs via FSDP or Deepspeed
- Easily run with Docker locally or on the cloud
- Log results and optionally checkpoints to wandb, mlflow or Comet
- And more!

phorm.ai

+
+

Quickstart ⚡

+

Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.

+

Requirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention) or AMD GPU, Python >=3.10 and PyTorch >=2.3.1.
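You can optionally sanity-check your environment first (a quick sketch using standard tools; adjust for your setup):

python3 --version                                    # expect Python >= 3.10
python3 -c "import torch; print(torch.__version__)"  # expect >= 2.3.1
nvidia-smi                                           # confirm the GPU is visible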

+
git clone https://github.com/axolotl-ai-cloud/axolotl
+cd axolotl
+
+pip3 install packaging ninja
+pip3 install -e '.[flash-attn,deepspeed]'
+
+

Usage

+
# preprocess datasets - optional but recommended
+CUDA_VISIBLE_DEVICES="0" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml
+
+# finetune lora
+accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
+
+# inference
+accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
+    --lora_model_dir="./outputs/lora-out"
+
+# gradio
+accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
+    --lora_model_dir="./outputs/lora-out" --gradio
+
+# remote yaml files - the yaml config can be hosted on a public URL
+# Note: the yaml config must directly link to the **raw** yaml
+accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml
+
+
+

Axolotl CLI

+

If you’ve installed this package using pip from source, we now support a new, more streamlined CLI using click. Rewriting the above commands:

+
# preprocess datasets - optional but recommended
+CUDA_VISIBLE_DEVICES="0" axolotl preprocess examples/openllama-3b/lora.yml
+
+# finetune lora
+axolotl train examples/openllama-3b/lora.yml
+
+# inference
+axolotl inference examples/openllama-3b/lora.yml \
+    --lora-model-dir="./outputs/lora-out"
+
+# gradio
+axolotl inference examples/openllama-3b/lora.yml \
+    --lora-model-dir="./outputs/lora-out" --gradio
+
+# remote yaml files - the yaml config can be hosted on a public URL
+# Note: the yaml config must directly link to the **raw** yaml
+axolotl train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml
+

We’ve also added a new command for fetching examples and deepspeed_configs to your local machine. This will come in handy when installing axolotl from PyPI.

+
# Fetch example YAML files (stores in "examples/" folder)
+axolotl fetch examples
+
+# Fetch deepspeed config files (stores in "deepspeed_configs/" folder)
+axolotl fetch deepspeed_configs
+
+# Optionally, specify a destination folder
+axolotl fetch examples --dest path/to/folder
+
+
+
+

Badge ❤🏷️

+

Building something cool with Axolotl? Consider adding a badge to your model card.

+
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+

Built with Axolotl

+
+
+

Sponsors 🤝❤

+

If you love axolotl, consider sponsoring the project by reaching out directly to wing@axolotl.ai.

+
+ +
+
+
+

Contributing 🤝

+

Please read the contributing guide

+

Bugs? Please check the open issues; otherwise, create a new Issue.

+

PRs are greatly welcome!

+

Please run the quickstart instructions, followed by the commands below, to set up your environment:

+
pip3 install -r requirements-dev.txt -r requirements-tests.txt
+pre-commit install
+
+# test
+pytest tests/
+
+# optional: run against all files
+pre-commit run --all-files
+

Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.

+

contributor chart by https://contrib.rocks

+

Axolotl supports

@@ -556,45 +648,15 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

✅: supported ❌: not supported ❓: untested

-
-

Quickstart ⚡

-

Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.

-

Requirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention), Python >=3.10 and PyTorch >=2.3.1.

-
git clone https://github.com/axolotl-ai-cloud/axolotl
-cd axolotl
-
-pip3 install packaging ninja
-pip3 install -e '.[flash-attn,deepspeed]'
-
-

Usage

-
# preprocess datasets - optional but recommended
-CUDA_VISIBLE_DEVICES="0" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml
-
-# finetune lora
-accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
-
-# inference
-accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
-    --lora_model_dir="./outputs/lora-out"
-
-# gradio
-accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
-    --lora_model_dir="./outputs/lora-out" --gradio
-
-# remote yaml files - the yaml config can be hosted on a public URL
-# Note: the yaml config must directly link to the **raw** yaml
-accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml
-
-

Advanced Setup

Environment

Docker

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest

Or run on the current files for development:

docker compose up -d

[!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide’s section on Docker.

@@ -603,7 +665,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
Docker advanced

A more powerful Docker command to run would be this:

docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface axolotlai/axolotl:main-latest

It additionally:
- Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through the --ipc and --ulimit args.
- Persists the downloaded HF data (models etc.) and your modifications to axolotl code through the --mount/-v args.
- The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal.
- The --privileged flag gives all capabilities to the container.
- The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.

More information on nvidia website

@@ -637,28 +699,28 @@ Click to Expand
  1. Install Python
sudo apt update
sudo apt install -y python3.10

sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
sudo update-alternatives --config python # pick 3.10 if given option
python -V # should be 3.10
  2. Install pip
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
  3. Install PyTorch: https://pytorch.org/get-started/locally/

  4. Follow the quickstart instructions.

  5. Run

pip3 install protobuf==3.20.3
pip3 install -U --ignore-installed requests Pillow psutil scipy
  6. Set path
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
@@ -669,7 +731,7 @@ Click to Expand

Use a deep learning Linux OS image with CUDA and PyTorch installed, then follow the quickstart instructions.

Make sure to run the command below to uninstall xla.

pip uninstall -y torch_xla[tpu]
@@ -690,44 +752,44 @@ Click to Expand

Launching on public clouds via SkyPilot

To launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:

pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]"  # choose your clouds
sky check

Get the example YAMLs for using Axolotl to finetune mistralai/Mistral-7B-v0.1:

git clone https://github.com/skypilot-org/skypilot.git
 cd skypilot/llm/axolotl

Use one command to launch:

# On-demand
HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN

# Managed spot (auto-recovery on preemption)
HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET

Launching on public clouds via dstack

To launch on GPU instances (both on-demand and spot) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.

Write a job description in YAML as below:

# dstack.yaml
type: task

image: axolotlai/axolotl-cloud:main-latest

env:
  - HUGGING_FACE_HUB_TOKEN
  - WANDB_API_KEY

commands:
  - accelerate launch -m axolotl.cli.train config.yaml

ports:
  - 6006

resources:
  gpu:
    memory: 24GB..
    count: 2

Then, simply run the job with the dstack run command. Append the --spot option if you want a spot instance. The dstack run command will show you the instance with the cheapest price across multiple cloud services:

pip install dstack
HUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot

For more fine-grained use cases, please refer to the official dstack documents and the detailed description of the axolotl example in the official repository.

@@ -741,71 +803,71 @@ cd skypilot/llm/axolotl

See the examples for a quick start. It is recommended to duplicate an example and modify it to your needs. The most important options are illustrated in the sketch below.
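For illustration, a minimal LoRA config might look like the following sketch. The keys are standard axolotl options; the model and dataset values are placeholders to swap for your own:

base_model: openlm-research/open_llama_3b_v2
datasets:
  - path: teknium/GPT4-LLM-Cleaned
    type: alpaca
adapter: lora
lora_r: 8
lora_alpha: 16
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
output_dir: ./outputs/lora-out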

All Config Options

@@ -815,7 +877,7 @@ cd skypilot/llm/axolotl

Train

Run

accelerate launch -m axolotl.cli.train your_config.yml

[!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml

@@ -827,7 +889,7 @@ cd skypilot/llm/axolotl
  • (Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
  • (Optional): Use --debug to see preprocessed examples.
    python -m axolotl.cli.preprocess your_config.yml

    Multi-GPU

    @@ -836,7 +898,7 @@ cd skypilot/llm/axolotl
    DeepSpeed

    Deepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU’s VRAM. More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated

    We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.

    deepspeed: deepspeed_configs/zero1.json
    accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
    @@ -844,13 +906,13 @@ cd skypilot/llm/axolotl
    fsdp:
      - full_shard
      - auto_wrap
    fsdp_config:
      fsdp_offload_params: true
      fsdp_state_dict_type: FULL_STATE_DICT
      fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
    FSDP + QLoRA
    @@ -862,12 +924,12 @@ cd skypilot/llm/axolotl
    wandb_mode:
    wandb_project:
    wandb_entity:
    wandb_watch:
    wandb_name:
    wandb_log_model:
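    For example, a typical setup might look like this (illustrative values; leave wandb_mode empty to log online, or set it to "offline" or "disabled"):

    wandb_project: my-axolotl-project
    wandb_entity: my-team
    wandb_name: openllama-3b-lora-run1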
    Comet Logging
    @@ -875,25 +937,25 @@ cd skypilot/llm/axolotl
    use_comet:
    comet_api_key:
    comet_workspace:
    comet_project_name:
    comet_experiment_key:
    comet_mode:
    comet_online:
    comet_experiment_config:
    Special Tokens

    It is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:

    special_tokens:
      bos_token: "<s>"
      eos_token: "</s>"
      unk_token: "<unk>"
    tokens: # these are delimiters
      - "<|im_start|>"
      - "<|im_end|>"

    When you include these tokens in your axolotl config, axolotl adds them to the tokenizer's vocabulary.

    @@ -901,13 +963,13 @@ cd skypilot/llm/axolotl

    Liger Kernel: Efficient Triton Kernels for LLM Training

    https://github.com/linkedin/Liger-Kernel

    Liger (LinkedIn GPU Efficient Runtime) Kernel is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. The Liger Kernel composes well and is compatible with both FSDP and Deepspeed.

    plugins:
      - axolotl.integrations.liger.LigerPlugin
    liger_rope: true
    liger_rms_norm: true
    liger_glu_activation: true
    liger_layer_norm: true
    liger_fused_linear_cross_entropy: true
    @@ -917,14 +979,14 @@ cd skypilot/llm/axolotl

    Pass the appropriate flag to the inference command, depending upon what kind of model was trained:
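    For example (a sketch; the flags follow the usage shown in the quickstart, and the paths are placeholders):

    # pretrained LoRA adapter
    python -m axolotl.cli.inference your_config.yml --lora_model_dir="./outputs/lora-out"

    # full-weights finetune
    python -m axolotl.cli.inference your_config.yml --base_model="./completed-model"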

    Please use --sample_packing False if you have it enabled and receive an error similar to the one below:

    @@ -934,9 +996,9 @@ cd skypilot/llm/axolotl

    Merge LORA to base

    The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.

    python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"

    You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with

    CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...

    although this will be very slow; using the config options above is recommended instead.
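    In the axolotl config, those options might look like this sketch (values are illustrative):

    gpu_memory_limit: 20GiB  # cap per-GPU memory used when loading the model for the merge
    lora_on_cpu: true        # keep LoRA weights in system RAM during the merge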

    @@ -988,63 +1050,10 @@ cd skypilot/llm/axolotl

    Need help? 🙋

    -

    Join our Discord server where we our community members can help you.

    -

    Need dedicated support? Please contact us at ✉️wing@openaccessaicollective.org for dedicated support options.

    -
    -
    -

    Badge ❤🏷️

    -

    Building something cool with Axolotl? Consider adding a badge to your model card.

    -
    [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
    -

    Built with Axolotl

    -
    -
    -

    Community Showcase

    -

    Check out some of the projects and models that have been built using Axolotl! Have a model you’d like to add to our Community Showcase? Open a PR with your model.

    -

    Open Access AI Collective - Minotaur 13b - Manticore 13b - Hippogriff 30b

    -

    PocketDoc Labs - Dan’s PersonalityEngine 13b LoRA

    -
    -
    -

    Contributing 🤝

    -

    Please read the contributing guide

    -

    Bugs? Please check the open issues else create a new Issue.

    -

    PRs are greatly welcome!

    -

    Please run the quickstart instructions followed by the below to setup env:

    -
    pip3 install -r requirements-dev.txt -r requirements-tests.txt
    -pre-commit install
    -
    -# test
    -pytest tests/
    -
    -# optional: run against all files
    -pre-commit run --all-files
    -

    Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.

    -

    contributor chart by https://contrib.rocks

    -
    -
    -

    Sponsors 🤝❤

    -

    OpenAccess AI Collective is run by volunteer contributors such as winglian, NanoCode012, tmm1, mhenrichsen, casper-hansen, hamelsmu and many more who help us accelerate forward by fixing bugs, answering community questions and implementing new features. Axolotl needs donations from sponsors for the compute needed to run our unit & integration tests, troubleshooting community issues, and providing bounties. If you love axolotl, consider sponsoring the project via GitHub Sponsors, Ko-fi or reach out directly to wing@openaccessaicollective.org.

    -
    -
    -

    💎 Diamond Sponsors - Contact directly

    -
    -
    -
    -

    🥇 Gold Sponsors - $5000/mo

    -
    -
    -
    -

    🥈 Silver Sponsors - $1000/mo

    -
    -
    -
    -

    🥉 Bronze Sponsors - $500/mo

    - -
    +

    Join our Discord server where our community members can help you.

    +

    Need dedicated support? Please contact us at ✉️wing@axolotl.ai for dedicated support options.

    -
    diff --git a/search.json b/search.json index 2d6e7b9c0..a9248967a 100644 --- a/search.json +++ b/search.json @@ -4,7 +4,47 @@ "href": "index.html", "title": "Axolotl", "section": "", - "text": "Axolotl supports\n Quickstart ⚡\n \n Usage\n \n Advanced Setup\n \n Environment\n Dataset\n Config\n Train\n Inference Playground\n Merge LORA to base\n \n Common Errors 🧰\n \n Tokenization Mismatch b/w Inference & Training\n \n Debugging Axolotl\n Need help? 🙋\n Badge ❤🏷️\n Community Showcase\n Contributing 🤝\n Sponsors 🤝❤\nAxolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.\nFeatures: - Train various Huggingface models such as llama, pythia, falcon, mpt - Supports fullfinetune, lora, qlora, relora, and gptq - Customize configurations using a simple yaml file or CLI overwrite - Load different dataset formats, use custom formats, or bring your own tokenized datasets - Integrated with xformer, flash attention, liger kernel, rope scaling, and multipacking - Works with single GPU or multiple GPUs via FSDP or Deepspeed - Easily run with Docker locally or on the cloud - Log results and optionally checkpoints to wandb, mlflow or Comet - And more!", + "text": "Quickstart ⚡\n \n Usage\n Axolotl CLI\n \n Badge ❤🏷️\n Sponsors 🤝❤\n Contributing 🤝\n Axolotl supports\n Advanced Setup\n \n Environment\n Dataset\n Config\n Train\n Inference Playground\n Merge LORA to base\n \n Common Errors 🧰\n \n Tokenization Mismatch b/w Inference & Training\n \n Debugging Axolotl\n Need help? 🙋\nAxolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.\nFeatures: - Train various Huggingface models such as llama, pythia, falcon, mpt - Supports fullfinetune, lora, qlora, relora, and gptq - Customize configurations using a simple yaml file or CLI overwrite - Load different dataset formats, use custom formats, or bring your own tokenized datasets - Integrated with xformer, flash attention, liger kernel, rope scaling, and multipacking - Works with single GPU or multiple GPUs via FSDP or Deepspeed - Easily run with Docker locally or on the cloud - Log results and optionally checkpoints to wandb, mlflow or Comet - And more!", + "crumbs": [ + "Home" + ] + }, + { + "objectID": "index.html#quickstart", + "href": "index.html#quickstart", + "title": "Axolotl", + "section": "Quickstart ⚡", + "text": "Quickstart ⚡\nGet started with Axolotl in just a few steps! 
This quickstart guide will walk you through setting up and running a basic fine-tuning task.\nRequirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention) or AMD GPU, Python >=3.10 and PyTorch >=2.3.1.\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\npip3 install packaging ninja\npip3 install -e '.[flash-attn,deepspeed]'\n\nUsage\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"0\" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml\n\n# finetune lora\naccelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml\n\n# inference\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\"\n\n# gradio\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naccelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml\n\n\nAxolotl CLI\nIf you’ve installed this package using pip from source, we now support a new, more streamlined CLI using click. Rewriting the above commands:\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"0\" axolotl preprocess examples/openllama-3b/lora.yml\n\n# finetune lora\naxolotl train examples/openllama-3b/lora.yml\n\n# inference\naxolotl inference examples/openllama-3b/lora.yml \\\n --lora-model-dir=\"./outputs/lora-out\"\n\n# gradio\naxolotl inference examples/openllama-3b/lora.yml \\\n --lora-model-dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naxolotl train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml\nWe’ve also added a new command for fetching examples and deepspeed_configs to your local machine. This will come in handy when installing axolotl from PyPI.\n# Fetch example YAML files (stores in \"examples/\" folder)\naxolotl fetch examples\n\n# Fetch deepspeed config files (stores in \"deepspeed_configs/\" folder)\naxolotl fetch deepspeed_configs\n\n# Optionally, specify a destination folder\naxolotl fetch examples --dest path/to/folder", + "crumbs": [ + "Home" + ] + }, + { + "objectID": "index.html#badge", + "href": "index.html#badge", + "title": "Axolotl", + "section": "Badge ❤🏷️", + "text": "Badge ❤🏷️\nBuilding something cool with Axolotl? Consider adding a badge to your model card.\n[<img src=\"https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png\" alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>](https://github.com/axolotl-ai-cloud/axolotl)", + "crumbs": [ + "Home" + ] + }, + { + "objectID": "index.html#sponsors", + "href": "index.html#sponsors", + "title": "Axolotl", + "section": "Sponsors 🤝❤", + "text": "Sponsors 🤝❤\nIf you love axolotl, consider sponsoring the project by reaching out directly to wing@axolotl.ai.\n\n\nModal Modal lets you run data/AI jobs in the cloud, by just writing a few lines of Python. 
Customers use Modal to deploy Gen AI models at large scale, fine-tune LLM models, run protein folding simulations, and much more.", + "crumbs": [ + "Home" + ] + }, + { + "objectID": "index.html#contributing", + "href": "index.html#contributing", + "title": "Axolotl", + "section": "Contributing 🤝", + "text": "Contributing 🤝\nPlease read the contributing guide\nBugs? Please check the open issues else create a new Issue.\nPRs are greatly welcome!\nPlease run the quickstart instructions followed by the below to setup env:\npip3 install -r requirements-dev.txt -r requirements-tests.txt\npre-commit install\n\n# test\npytest tests/\n\n# optional: run against all files\npre-commit run --all-files\nThanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.", "crumbs": [ "Home" ] @@ -19,16 +59,6 @@ "Home" ] }, - { - "objectID": "index.html#quickstart", - "href": "index.html#quickstart", - "title": "Axolotl", - "section": "Quickstart ⚡", - "text": "Quickstart ⚡\nGet started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.\nRequirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention), Python >=3.10 and PyTorch >=2.3.1.\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\npip3 install packaging ninja\npip3 install -e '.[flash-attn,deepspeed]'\n\nUsage\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"0\" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml\n\n# finetune lora\naccelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml\n\n# inference\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\"\n\n# gradio\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naccelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml", - "crumbs": [ - "Home" - ] - }, { "objectID": "index.html#advanced-setup", "href": "index.html#advanced-setup", @@ -64,47 +94,7 @@ "href": "index.html#need-help", "title": "Axolotl", "section": "Need help? 🙋", - "text": "Need help? 🙋\nJoin our Discord server where we our community members can help you.\nNeed dedicated support? Please contact us at ✉️wing@openaccessaicollective.org for dedicated support options.", - "crumbs": [ - "Home" - ] - }, - { - "objectID": "index.html#badge", - "href": "index.html#badge", - "title": "Axolotl", - "section": "Badge ❤🏷️", - "text": "Badge ❤🏷️\nBuilding something cool with Axolotl? Consider adding a badge to your model card.\n[<img src=\"https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png\" alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>](https://github.com/axolotl-ai-cloud/axolotl)", - "crumbs": [ - "Home" - ] - }, - { - "objectID": "index.html#community-showcase", - "href": "index.html#community-showcase", - "title": "Axolotl", - "section": "Community Showcase", - "text": "Community Showcase\nCheck out some of the projects and models that have been built using Axolotl! Have a model you’d like to add to our Community Showcase? 
Open a PR with your model.\nOpen Access AI Collective - Minotaur 13b - Manticore 13b - Hippogriff 30b\nPocketDoc Labs - Dan’s PersonalityEngine 13b LoRA", - "crumbs": [ - "Home" - ] - }, - { - "objectID": "index.html#contributing", - "href": "index.html#contributing", - "title": "Axolotl", - "section": "Contributing 🤝", - "text": "Contributing 🤝\nPlease read the contributing guide\nBugs? Please check the open issues else create a new Issue.\nPRs are greatly welcome!\nPlease run the quickstart instructions followed by the below to setup env:\npip3 install -r requirements-dev.txt -r requirements-tests.txt\npre-commit install\n\n# test\npytest tests/\n\n# optional: run against all files\npre-commit run --all-files\nThanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.", - "crumbs": [ - "Home" - ] - }, - { - "objectID": "index.html#sponsors", - "href": "index.html#sponsors", - "title": "Axolotl", - "section": "Sponsors 🤝❤", - "text": "Sponsors 🤝❤\nOpenAccess AI Collective is run by volunteer contributors such as winglian, NanoCode012, tmm1, mhenrichsen, casper-hansen, hamelsmu and many more who help us accelerate forward by fixing bugs, answering community questions and implementing new features. Axolotl needs donations from sponsors for the compute needed to run our unit & integration tests, troubleshooting community issues, and providing bounties. If you love axolotl, consider sponsoring the project via GitHub Sponsors, Ko-fi or reach out directly to wing@openaccessaicollective.org.\n\n\n💎 Diamond Sponsors - Contact directly\n\n\n\n🥇 Gold Sponsors - $5000/mo\n\n\n\n🥈 Silver Sponsors - $1000/mo\n\n\n\n🥉 Bronze Sponsors - $500/mo\n\nJarvisLabs.ai", + "text": "Need help? 🙋\nJoin our Discord server where our community members can help you.\nNeed dedicated support? 
Please contact us at ✉️wing@axolotl.ai for dedicated support options.", "crumbs": [ "Home" ] diff --git a/sitemap.xml b/sitemap.xml index fdbf250d1..db518418c 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,114 +2,114 @@ https://axolotl-ai-cloud.github.io/axolotl/index.html - 2024-12-04T17:26:24.645Z + 2024-12-06T03:12:00.727Z https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html - 2024-12-04T17:26:24.645Z + 2024-12-06T03:12:00.731Z https://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/torchao.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/config.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/FAQS.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.711Z https://axolotl-ai-cloud.github.io/axolotl/TODO.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.711Z https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/pretraining.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/index.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/unsloth.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/docs/amd_hpc.html - 2024-12-04T17:26:24.629Z + 2024-12-06T03:12:00.715Z https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html - 2024-12-04T17:26:24.633Z + 2024-12-06T03:12:00.715Z 
https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/cut_cross_entropy/ACKNOWLEDGEMENTS.html - 2024-12-04T17:26:24.645Z + 2024-12-06T03:12:00.731Z