diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 9eec23e1a..29769efb5 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -21,12 +21,12 @@ All contributors are expected to adhere to our [Code of Conduct](CODE_OF_CONDUCT
## Getting Started
-Bugs? Please check for open issue else create a new [Issue](https://github.com/OpenAccess-AI-Collective/axolotl/issues/new).
+Bugs? Please check the open issues first; if none matches, create a new [Issue](https://github.com/axolotl-ai-cloud/axolotl/issues/new).
PRs are **greatly welcome**!
1. Fork the repository and clone it to your local machine.
-2. Set up the development environment by following the instructions in the [README.md](https://github.com/OpenAccess-AI-Collective/axolotl/tree/main/README.md) file.
+2. Set up the development environment by following the instructions in the [README.md](https://github.com/axolotl-ai-cloud/axolotl/tree/main/README.md) file.
3. Explore the codebase, run tests, and verify that everything works as expected.
Please run the commands below to set up the environment:
@@ -42,11 +42,11 @@ pytest tests/
### Reporting Bugs
-If you encounter a bug or issue while using axolotl, please open a new issue on the [GitHub Issues](https://github.com/OpenAccess-AI-Collective/axolotl/issues) page. Provide a clear and concise description of the problem, steps to reproduce it, and any relevant error messages or logs.
+If you encounter a bug or issue while using axolotl, please open a new issue on the [GitHub Issues](https://github.com/axolotl-ai-cloud/axolotl/issues) page. Provide a clear and concise description of the problem, steps to reproduce it, and any relevant error messages or logs.
### Suggesting Enhancements
-We welcome ideas for improvements and new features. To suggest an enhancement, open a new issue on the [GitHub Issues](https://github.com/OpenAccess-AI-Collective/axolotl/issues) page. Describe the enhancement in detail, explain the use case, and outline the benefits it would bring to the project.
+We welcome ideas for improvements and new features. To suggest an enhancement, open a new issue on the [GitHub Issues](https://github.com/axolotl-ai-cloud/axolotl/issues) page. Describe the enhancement in detail, explain the use case, and outline the benefits it would bring to the project.
### Submitting Pull Requests
diff --git a/.nojekyll b/.nojekyll
index eb6bf9964..c3ea685ae 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-c1f28ecd
\ No newline at end of file
+8bdf9ce8
\ No newline at end of file
diff --git a/FAQS.html b/FAQS.html
index 71acc5b26..b855d2997 100644
--- a/FAQS.html
+++ b/FAQS.html
@@ -84,7 +84,7 @@ ul.task-list li input[type="checkbox"] {
-
+
diff --git a/TODO.html b/TODO.html
index 029e754f7..bd9dcd9bb 100644
--- a/TODO.html
+++ b/TODO.html
@@ -84,7 +84,7 @@ ul.task-list li input[type="checkbox"] {
-
+
diff --git a/docs/batch_vs_grad.html b/docs/batch_vs_grad.html
index cffe375b6..d3a51f0e4 100644
--- a/docs/batch_vs_grad.html
+++ b/docs/batch_vs_grad.html
@@ -85,7 +85,7 @@ ul.task-list li input[type="checkbox"] {
One of the most popular features of axolotl is setting the following configuration value:
+
One of the most popular features of axolotl is setting the following configuration value:
train_on_inputs: false
-
If you declare a dataset formats such as alpaca or chatml, axolotl knows what is an input (i.e. human) vs. an output (i.e. the assistant) and masks the input labels so that your model can focus on predicting the outputs only.
+
If you declare a dataset format such as alpaca or chatml, axolotl knows what is an input (i.e. human) vs. an output (i.e. the assistant) and masks the input labels so that your model can focus on predicting the outputs only.
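The masking described above can be sketched in a few lines. This is a hypothetical helper for illustration, not axolotl's actual implementation: prompt tokens get the label `-100`, the `ignore_index` convention used by PyTorch's cross-entropy loss, so only response tokens contribute to the loss.

```python
def mask_input_labels(prompt_ids, response_ids, ignore_index=-100):
    """Build (input_ids, labels) where prompt tokens are masked out.

    Hypothetical sketch of the train_on_inputs: false behavior; token id
    values below are made up for illustration.
    """
    input_ids = list(prompt_ids) + list(response_ids)
    # Prompt positions get ignore_index so the loss skips them entirely.
    labels = [ignore_index] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

input_ids, labels = mask_input_labels([101, 7592], [2023, 102])
print(input_ids)  # [101, 7592, 2023, 102]
print(labels)     # [-100, -100, 2023, 102]
```

With `train_on_inputs: true`, the labels would instead equal `input_ids` and the model would also be trained to reproduce the human turns.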
diff --git a/docs/mac.html b/docs/mac.html
index d4d37f225..2169d8674 100644
--- a/docs/mac.html
+++ b/docs/mac.html
@@ -85,7 +85,7 @@ ul.task-list li input[type="checkbox"] {
git clone https://github.com/axolotl-ai-cloud/axolotl
cd axolotl
pip3 install packaging ninja
@@ -560,7 +560,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
# remote yaml files - the yaml config can be hosted on a public URL
# Note: the yaml config must directly link to the **raw** yaml
-accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/examples/openllama-3b/lora.yml
+accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml
# optional: run against all files
pre-commit run --all-files
Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.
-
+
Sponsors 🤝❤
diff --git a/search.json b/search.json
index fc3b98d87..a98b342f1 100644
--- a/search.json
+++ b/search.json
@@ -48,7 +48,7 @@
"href": "docs/debugging.html#debugging-with-docker",
"title": "Debugging",
"section": "Debugging With Docker",
- "text": "Debugging With Docker\nUsing official Axolotl Docker images is a great way to debug your code, and is a very popular way to use Axolotl. Attaching VSCode to Docker takes a few more steps.\n\nSetup\nOn the host that is running axolotl (ex: if you are using a remote host), clone the axolotl repo and change your current directory to the root:\ngit clone https://github.com/OpenAccess-AI-Collective/axolotl\ncd axolotl\n\n[!Tip] If you already have axolotl cloned on your host, make sure you have the latest changes and change into the root of the project.\n\nNext, run the desired docker image and mount the current directory. Below is a docker command you can run to do this:2\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-py3.10-cu118-2.0.1\n\n[!Tip] To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl’s Docker CI builds.\n\nYou will now be in the container. Next, perform an editable install of Axolotl:\npip3 install packaging\npip3 install -e '.[flash-attn,deepspeed]'\n\n\nAttach To Container\nNext, if you are using a remote host, Remote into this host with VSCode. If you are using a local host, you can skip this step.\nNext, select Dev Containers: Attach to Running Container... using the command palette (CMD + SHIFT + P) in VSCode. You will be prompted to select a container to attach to. Select the container you just created. You will now be in the container with a working directory that is at the root of the project. 
Any changes you make to the code will be reflected both in the container and on the host.\nNow you are ready to debug as described above (see Debugging with VSCode).\n\n\nVideo - Attaching To Docker On Remote Host\nHere is a short video that demonstrates how to attach to a Docker container on a remote host:\n\n\n\nHamel Husain’s tutorial: Debugging Axolotl Part 2: Attaching to Docker on a Remote Host",
+ "text": "Debugging With Docker\nUsing official Axolotl Docker images is a great way to debug your code, and is a very popular way to use Axolotl. Attaching VSCode to Docker takes a few more steps.\n\nSetup\nOn the host that is running axolotl (ex: if you are using a remote host), clone the axolotl repo and change your current directory to the root:\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\n[!Tip] If you already have axolotl cloned on your host, make sure you have the latest changes and change into the root of the project.\n\nNext, run the desired docker image and mount the current directory. Below is a docker command you can run to do this:2\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-py3.10-cu118-2.0.1\n\n[!Tip] To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl’s Docker CI builds.\n\nYou will now be in the container. Next, perform an editable install of Axolotl:\npip3 install packaging\npip3 install -e '.[flash-attn,deepspeed]'\n\n\nAttach To Container\nNext, if you are using a remote host, Remote into this host with VSCode. If you are using a local host, you can skip this step.\nNext, select Dev Containers: Attach to Running Container... using the command palette (CMD + SHIFT + P) in VSCode. You will be prompted to select a container to attach to. Select the container you just created. You will now be in the container with a working directory that is at the root of the project. 
Any changes you make to the code will be reflected both in the container and on the host.\nNow you are ready to debug as described above (see Debugging with VSCode).\n\n\nVideo - Attaching To Docker On Remote Host\nHere is a short video that demonstrates how to attach to a Docker container on a remote host:\n\n\n\nHamel Husain’s tutorial: Debugging Axolotl Part 2: Attaching to Docker on a Remote Host",
"crumbs": [
"How-To Guides",
"Debugging"
@@ -404,7 +404,7 @@
"href": "index.html#quickstart",
"title": "Axolotl",
"section": "Quickstart ⚡",
- "text": "Quickstart ⚡\nGet started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.\nRequirements: Python >=3.10 and Pytorch >=2.1.1.\ngit clone https://github.com/OpenAccess-AI-Collective/axolotl\ncd axolotl\n\npip3 install packaging ninja\npip3 install -e '.[flash-attn,deepspeed]'\n\nUsage\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"\" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml\n\n# finetune lora\naccelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml\n\n# inference\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\"\n\n# gradio\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naccelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/examples/openllama-3b/lora.yml",
+ "text": "Quickstart ⚡\nGet started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.\nRequirements: Python >=3.10 and Pytorch >=2.1.1.\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\npip3 install packaging ninja\npip3 install -e '.[flash-attn,deepspeed]'\n\nUsage\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"\" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml\n\n# finetune lora\naccelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml\n\n# inference\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\"\n\n# gradio\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n --lora_model_dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naccelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml",
"crumbs": [
"Home"
]
@@ -454,7 +454,7 @@
"href": "index.html#badge",
"title": "Axolotl",
"section": "Badge ❤🏷️",
- "text": "Badge ❤🏷️\nBuilding something cool with Axolotl? Consider adding a badge to your model card.\n[<img src=\"https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png\" alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>](https://github.com/OpenAccess-AI-Collective/axolotl)",
+ "text": "Badge ❤🏷️\nBuilding something cool with Axolotl? Consider adding a badge to your model card.\n[<img src=\"https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png\" alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>](https://github.com/axolotl-ai-cloud/axolotl)",
"crumbs": [
"Home"
]
@@ -501,7 +501,7 @@
"href": "examples/colab-notebooks/colab-axolotl-example.html#install-axolotl-and-dependencies",
"title": "Example notebook for running Axolotl on google colab",
"section": "Install Axolotl and dependencies",
- "text": "Install Axolotl and dependencies\n\n!pip install torch==\"2.1.2\"\n!pip install -e git+https://github.com/OpenAccess-AI-Collective/axolotl#egg=axolotl\n!pip install flash-attn==\"2.5.0\"\n!pip install deepspeed==\"0.13.1\"!pip install mlflow==\"2.13.0\""
+ "text": "Install Axolotl and dependencies\n\n!pip install torch==\"2.1.2\"\n!pip install -e git+https://github.com/axolotl-ai-cloud/axolotl#egg=axolotl\n!pip install flash-attn==\"2.5.0\"\n!pip install deepspeed==\"0.13.1\"!pip install mlflow==\"2.13.0\""
},
{
"objectID": "examples/colab-notebooks/colab-axolotl-example.html#create-an-yaml-config-file",
diff --git a/sitemap.xml b/sitemap.xml
index 24190da06..03446afc4 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,90 +2,90 @@
https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/config.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html
- 2024-07-10T15:16:04.180Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/pretraining.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/index.html
- 2024-07-10T15:16:04.188Z
+ 2024-07-11T13:19:42.287Z
https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html
- 2024-07-10T15:16:04.180Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/TODO.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.271Z
https://axolotl-ai-cloud.github.io/axolotl/FAQS.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.271Z
https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html
- 2024-07-10T15:16:04.180Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/index.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html
- 2024-07-10T15:16:04.180Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html
- 2024-07-10T15:16:04.180Z
+ 2024-07-11T13:19:42.275Z
https://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html
- 2024-07-10T15:16:04.176Z
+ 2024-07-11T13:19:42.275Z