diff --git a/.nojekyll b/.nojekyll
index 177d171cc..78cb61ce6 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-fca0ba2e
\ No newline at end of file
+4c0d776b
\ No newline at end of file
diff --git a/docs/dataset-formats/index.html b/docs/dataset-formats/index.html
index 4d247d609..57249804c 100644
--- a/docs/dataset-formats/index.html
+++ b/docs/dataset-formats/index.html
@@ -363,7 +363,7 @@
 Description
-
+
 Pre-training
@@ -371,7 +371,7 @@
 Description
 Data format for a pre-training completion task.
-
+
 Instruction Tuning
@@ -379,7 +379,7 @@
 Description
 Instruction tuning formats for supervised fine-tuning.
-
+
 Conversation
@@ -387,7 +387,7 @@
 Description
 Conversation format for supervised fine-tuning.
-
+
 Template-Free
@@ -395,7 +395,7 @@
 Description
 Construct prompts without a template.
-
+
 Custom Pre-Tokenized Dataset
diff --git a/docs/debugging.html b/docs/debugging.html
index 365dea6fd..45229c749 100644
--- a/docs/debugging.html
+++ b/docs/debugging.html
@@ -392,10 +392,10 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Debugging with VSCode

Background

-The below example shows how to configure VSCode to debug data preprocessing of the sharegpt format. This is the format used when you have the following in your axolotl config:
+The below example shows how to configure VSCode to debug data preprocessing of the chat_template format. This is the format used when you have the following in your axolotl config:

datasets:
-  - path: <path to your sharegpt formatted dataset> # example on HF Hub: philschmid/guanaco-sharegpt-style
-    type: sharegpt
+  - path: <path to your chat_template formatted dataset> # example on HF Hub: fozziethebeat/alpaca_messages_2k_test
+    type: chat_template
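For reference, each record in a chat_template-formatted dataset is a list of role/content messages (this is the layout used by the alpaca_messages_2k_test example above). The record below is a hypothetical sketch, shown in the same YAML style as the config snippet; the field values are invented for illustration:

# One hypothetical dataset record; axolotl renders the `messages` list
# through the tokenizer's chat template during preprocessing.
messages:
  - role: user
    content: How do I debug data preprocessing in axolotl?
  - role: assistant
    content: Limit preprocessing to one process and step through it in VSCode.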

[!Important] If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files .vscode/launch.json and .vscode/tasks.json for an example configuration.

@@ -416,18 +416,18 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Configuration

The easiest way to get started is to modify the .vscode/launch.json file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.

-For example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_sharegpt.yml, you would use the below configuration¹. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.
+For example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_chat_template.yml, you would use the below configuration¹. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.

// .vscode/launch.json
 {
     "version": "0.2.0",
     "configurations": [
         {
-            "name": "Debug axolotl prompt - sharegpt",
+            "name": "Debug axolotl prompt - chat_template",
             "type": "python",
             "module": "accelerate.commands.launch",
             "request": "launch",
             "args": [
-                "-m", "axolotl.cli.train", "dev_sharegpt.yml",
+                "-m", "axolotl.cli.train", "dev_chat_template.yml",
                 // The flags below simplify debugging by overriding the axolotl config
                 // with the debugging tips above.  Modify as needed.
                 "--dataset_processes=1",      // limits data preprocessing to one process
@@ -552,7 +552,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
 

Footnotes

-1. The config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/sharegpt.yml, but this is the same thing.↩︎
+1. The config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/dev_chat_template.yml, but this is the same thing.↩︎
 2. Many of the below flags are recommended best practices by Nvidia when using nvidia-container-toolkit. You can read more about these flags here.↩︎
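To make footnote 1 concrete: the accelerate launch CLI is an entry point for the accelerate.commands.launch module, so the two invocations below do the same thing. This is a sketch assuming the dev_chat_template.yml config from the launch.json example above:

# from inside devtools/, as the example launch.json runs it
CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_chat_template.yml
# equivalent module invocation from the project root
CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/dev_chat_template.yml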

diff --git a/index.html b/index.html
index 7f2db242f..e67051540 100644
--- a/index.html
+++ b/index.html
@@ -559,7 +559,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Quickstart ⚡

Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.

-Requirements: Python >=3.10 and Pytorch >=2.1.1.
+Requirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention), Python >=3.10 and PyTorch >=2.3.1.
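To sanity-check these requirements before installing, you can run the snippet below (a minimal sketch, assuming PyTorch is already present in your environment; Ampere and newer GPUs report a CUDA compute capability of (8, 0) or higher):

# prints the PyTorch version and the GPU's compute capability, e.g. "2.3.1 (8, 0)"
python -c "import torch; print(torch.__version__, torch.cuda.get_device_capability())"
# True on GPUs that support bf16 (Ampere or newer)
python -c "import torch; print(torch.cuda.is_bf16_supported())"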

git clone https://github.com/axolotl-ai-cloud/axolotl
 cd axolotl
 
diff --git a/search.json b/search.json
index 49be4318c..3b18d4e8a 100644
--- a/search.json
+++ b/search.json
@@ -24,7 +24,7 @@
     "href": "index.html#quickstart",
     "title": "Axolotl",
     "section": "Quickstart ⚡",
-    "text": "Quickstart ⚡\nGet started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.\nRequirements: Python >=3.10 and Pytorch >=2.1.1.\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\npip3 install packaging ninja\npip3 install -e '.[flash-attn,deepspeed]'\n\nUsage\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"\" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml\n\n# finetune lora\naccelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml\n\n# inference\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n    --lora_model_dir=\"./outputs/lora-out\"\n\n# gradio\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n    --lora_model_dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naccelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml",
+    "text": "Quickstart ⚡\nGet started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.\nRequirements: Nvidia GPU (Ampere architecture or newer for bf16 and Flash Attention), Python >=3.10 and PyTorch >=2.3.1.\ngit clone https://github.com/axolotl-ai-cloud/axolotl\ncd axolotl\n\npip3 install packaging ninja\npip3 install -e '.[flash-attn,deepspeed]'\n\nUsage\n# preprocess datasets - optional but recommended\nCUDA_VISIBLE_DEVICES=\"\" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml\n\n# finetune lora\naccelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml\n\n# inference\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n    --lora_model_dir=\"./outputs/lora-out\"\n\n# gradio\naccelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \\\n    --lora_model_dir=\"./outputs/lora-out\" --gradio\n\n# remote yaml files - the yaml config can be hosted on a public URL\n# Note: the yaml config must directly link to the **raw** yaml\naccelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml",
     "crumbs": [
       "Home"
     ]
@@ -591,7 +591,7 @@
     "href": "docs/debugging.html#debugging-with-vscode",
     "title": "Debugging",
     "section": "Debugging with VSCode",
-    "text": "Debugging with VSCode\n\nBackground\nThe below example shows how to configure VSCode to debug data preprocessing of the sharegpt format. This is the format used when you have the following in your axolotl config:\ndatasets:\n  - path: <path to your sharegpt formatted dataset> # example on HF Hub: philschmid/guanaco-sharegpt-style\n    type: sharegpt\n\n[!Important] If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files .vscode/launch.json and .vscode/tasks.json for an example configuration.\n\n\n[!Tip] If you prefer to watch a video, rather than read, you can skip to the video tutorial below (but doing both is recommended).\n\n\n\nSetup\nMake sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:\npip3 install packaging\npip3 install -e '.[flash-attn,deepspeed]'\n\nRemote Hosts\nIf you developing on a remote host, you can easily use VSCode to debug remotely. To do so, you will need to follow this remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.\n\n\n\nConfiguration\nThe easiest way to get started is to modify the .vscode/launch.json file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.\nFor example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_sharegpt.yml, you would use the below configuration1. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.\n// .vscode/launch.json\n{\n    \"version\": \"0.2.0\",\n    \"configurations\": [\n        {\n            \"name\": \"Debug axolotl prompt - sharegpt\",\n            \"type\": \"python\",\n            \"module\": \"accelerate.commands.launch\",\n            \"request\": \"launch\",\n            \"args\": [\n                \"-m\", \"axolotl.cli.train\", \"dev_sharegpt.yml\",\n                // The flags below simplify debugging by overriding the axolotl config\n                // with the debugging tips above.  
Modify as needed.\n                \"--dataset_processes=1\",      // limits data preprocessing to one process\n                \"--max_steps=1\",              // limits training to just one step\n                \"--batch_size=1\",             // minimizes batch size\n                \"--micro_batch_size=1\",       // minimizes batch size\n                \"--val_set_size=0\",           // disables validation\n                \"--sample_packing=False\",     // disables sample packing which is necessary for small datasets\n                \"--eval_sample_packing=False\",// disables sample packing on eval set\n                \"--dataset_prepared_path=temp_debug/axolotl_outputs/data\", // send data outputs to a temp folder\n                \"--output_dir=temp_debug/axolotl_outputs/model\" // send model outputs to a temp folder\n                ],\n            \"console\": \"integratedTerminal\",      // show output in the integrated terminal\n            \"cwd\": \"${workspaceFolder}/devtools\", // set working directory to devtools from the root of the project\n            \"justMyCode\": true,                   // step through only axolotl code\n            \"env\": {\"CUDA_VISIBLE_DEVICES\": \"0\",  // Since we aren't doing distributed training, we need to limit to one GPU\n                    \"HF_HOME\": \"${workspaceFolder}/devtools/temp_debug/.hf-cache\"}, // send HF cache to a temp folder\n            \"preLaunchTask\": \"cleanup-for-dataprep\", // delete temp folders (see below)\n        }\n    ]\n}\nAdditional notes about this configuration:\n\nThe argument justMyCode is set to true such that you step through only the axolotl code. If you want to step into dependencies, set this to false.\nThe preLaunchTask: cleanup-for-dataprep is defined in .vscode/tasks.json and is used to delete the following folders before debugging, which is essential to ensure that the data pre-processing code is run from scratch:\n\n./devtools/temp_debug/axolotl_outputs\n./devtools/temp_debug/.hf-cache/datasets\n\n\n\n[!Tip] You may not want to delete these folders. For example, if you are debugging model training instead of data pre-processing, you may NOT want to delete the cache or output folders. You may also need to add additional tasks to the tasks.json file depending on your use case.\n\nBelow is the ./vscode/tasks.json file that defines the cleanup-for-dataprep task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task cleanup-for-dataprep is a composite task that combines the two tasks. 
A composite task is necessary because VSCode does not allow you to specify multiple tasks in the preLaunchTask argument of the launch.json file.\n// .vscode/tasks.json\n// this file is used by launch.json\n{\n    \"version\": \"2.0.0\",\n    \"tasks\": [\n      // this task changes into the devtools directory and deletes the temp_debug/axolotl_outputs folder\n      {\n        \"label\": \"delete-outputs\",\n        \"type\": \"shell\",\n        \"command\": \"rm -rf temp_debug/axolotl_outputs\",\n        \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n        \"problemMatcher\": []\n      },\n      // this task changes into the devtools directory and deletes the `temp_debug/.hf-cache/datasets` folder\n      {\n        \"label\": \"delete-temp-hf-dataset-cache\",\n        \"type\": \"shell\",\n        \"command\": \"rm -rf temp_debug/.hf-cache/datasets\",\n        \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n        \"problemMatcher\": []\n      },\n        // this task combines the two tasks above\n      {\n       \"label\": \"cleanup-for-dataprep\",\n       \"dependsOn\": [\"delete-outputs\", \"delete-temp-hf-dataset-cache\"],\n      }\n    ]\n}\n\n\nCustomizing your debugger\nYour debugging use case may differ from the example above. The easiest thing to do is to put your own axolotl config in the devtools folder and modify the launch.json file to use your config. You may also want to modify the preLaunchTask to delete different folders or not delete anything at all.\n\n\nVideo Tutorial\nThe following video tutorial walks through the above configuration and demonstrates how to debug with VSCode, (click the image below to watch):\n\n\n\nHamel Husain’s tutorial: Debugging Axolotl w/VSCode",
+    "text": "Debugging with VSCode\n\nBackground\nThe below example shows how to configure VSCode to debug data preprocessing of the chat_template format. This is the format used when you have the following in your axolotl config:\ndatasets:\n  - path: <path to your chat_template formatted dataset> # example on HF Hub: fozziethebeat/alpaca_messages_2k_test\n    type: chat_template\n\n[!Important] If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files .vscode/launch.json and .vscode/tasks.json for an example configuration.\n\n\n[!Tip] If you prefer to watch a video, rather than read, you can skip to the video tutorial below (but doing both is recommended).\n\n\n\nSetup\nMake sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:\npip3 install packaging\npip3 install -e '.[flash-attn,deepspeed]'\n\nRemote Hosts\nIf you developing on a remote host, you can easily use VSCode to debug remotely. To do so, you will need to follow this remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.\n\n\n\nConfiguration\nThe easiest way to get started is to modify the .vscode/launch.json file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.\nFor example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_chat_template.yml, you would use the below configuration1. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.\n// .vscode/launch.json\n{\n    \"version\": \"0.2.0\",\n    \"configurations\": [\n        {\n            \"name\": \"Debug axolotl prompt - chat_template\",\n            \"type\": \"python\",\n            \"module\": \"accelerate.commands.launch\",\n            \"request\": \"launch\",\n            \"args\": [\n                \"-m\", \"axolotl.cli.train\", \"dev_chat_template.yml\",\n                // The flags below simplify debugging by overriding the axolotl config\n                // with the debugging tips above.  
Modify as needed.\n                \"--dataset_processes=1\",      // limits data preprocessing to one process\n                \"--max_steps=1\",              // limits training to just one step\n                \"--batch_size=1\",             // minimizes batch size\n                \"--micro_batch_size=1\",       // minimizes batch size\n                \"--val_set_size=0\",           // disables validation\n                \"--sample_packing=False\",     // disables sample packing which is necessary for small datasets\n                \"--eval_sample_packing=False\",// disables sample packing on eval set\n                \"--dataset_prepared_path=temp_debug/axolotl_outputs/data\", // send data outputs to a temp folder\n                \"--output_dir=temp_debug/axolotl_outputs/model\" // send model outputs to a temp folder\n                ],\n            \"console\": \"integratedTerminal\",      // show output in the integrated terminal\n            \"cwd\": \"${workspaceFolder}/devtools\", // set working directory to devtools from the root of the project\n            \"justMyCode\": true,                   // step through only axolotl code\n            \"env\": {\"CUDA_VISIBLE_DEVICES\": \"0\",  // Since we aren't doing distributed training, we need to limit to one GPU\n                    \"HF_HOME\": \"${workspaceFolder}/devtools/temp_debug/.hf-cache\"}, // send HF cache to a temp folder\n            \"preLaunchTask\": \"cleanup-for-dataprep\", // delete temp folders (see below)\n        }\n    ]\n}\nAdditional notes about this configuration:\n\nThe argument justMyCode is set to true such that you step through only the axolotl code. If you want to step into dependencies, set this to false.\nThe preLaunchTask: cleanup-for-dataprep is defined in .vscode/tasks.json and is used to delete the following folders before debugging, which is essential to ensure that the data pre-processing code is run from scratch:\n\n./devtools/temp_debug/axolotl_outputs\n./devtools/temp_debug/.hf-cache/datasets\n\n\n\n[!Tip] You may not want to delete these folders. For example, if you are debugging model training instead of data pre-processing, you may NOT want to delete the cache or output folders. You may also need to add additional tasks to the tasks.json file depending on your use case.\n\nBelow is the ./vscode/tasks.json file that defines the cleanup-for-dataprep task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task cleanup-for-dataprep is a composite task that combines the two tasks. 
A composite task is necessary because VSCode does not allow you to specify multiple tasks in the preLaunchTask argument of the launch.json file.\n// .vscode/tasks.json\n// this file is used by launch.json\n{\n    \"version\": \"2.0.0\",\n    \"tasks\": [\n      // this task changes into the devtools directory and deletes the temp_debug/axolotl_outputs folder\n      {\n        \"label\": \"delete-outputs\",\n        \"type\": \"shell\",\n        \"command\": \"rm -rf temp_debug/axolotl_outputs\",\n        \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n        \"problemMatcher\": []\n      },\n      // this task changes into the devtools directory and deletes the `temp_debug/.hf-cache/datasets` folder\n      {\n        \"label\": \"delete-temp-hf-dataset-cache\",\n        \"type\": \"shell\",\n        \"command\": \"rm -rf temp_debug/.hf-cache/datasets\",\n        \"options\":{ \"cwd\": \"${workspaceFolder}/devtools\"},\n        \"problemMatcher\": []\n      },\n        // this task combines the two tasks above\n      {\n       \"label\": \"cleanup-for-dataprep\",\n       \"dependsOn\": [\"delete-outputs\", \"delete-temp-hf-dataset-cache\"],\n      }\n    ]\n}\n\n\nCustomizing your debugger\nYour debugging use case may differ from the example above. The easiest thing to do is to put your own axolotl config in the devtools folder and modify the launch.json file to use your config. You may also want to modify the preLaunchTask to delete different folders or not delete anything at all.\n\n\nVideo Tutorial\nThe following video tutorial walks through the above configuration and demonstrates how to debug with VSCode, (click the image below to watch):\n\n\n\nHamel Husain’s tutorial: Debugging Axolotl w/VSCode",
     "crumbs": [
       "How-To Guides",
       "Debugging"
@@ -613,7 +613,7 @@
     "href": "docs/debugging.html#footnotes",
     "title": "Debugging",
     "section": "Footnotes",
-    "text": "Footnotes\n\n\nThe config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/sharegpt.yml, but this is the same thing.↩︎\nMany of the below flags are recommended best practices by Nvidia when using nvidia-container-toolkit. You can read more about these flags here.↩︎",
+    "text": "Footnotes\n\n\nThe config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/chat_template.yml, but this is the same thing.↩︎\nMany of the below flags are recommended best practices by Nvidia when using nvidia-container-toolkit. You can read more about these flags here.↩︎",
     "crumbs": [
       "How-To Guides",
       "Debugging"
diff --git a/sitemap.xml b/sitemap.xml
index 0c539e4c4..0ac1dd421 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,110 +2,110 @@
 <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/index.html</loc>
-    <lastmod>2024-10-29T03:15:05.381Z</lastmod>
+    <lastmod>2024-10-30T16:27:17.009Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/amd_hpc.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.993Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.993Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/unsloth.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/index.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/pretraining.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/TODO.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.993Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/FAQS.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.993Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.993Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/config.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.993Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/torchao.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html</loc>
-    <lastmod>2024-10-29T03:15:05.365Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html</loc>
-    <lastmod>2024-10-29T03:15:05.369Z</lastmod>
+    <lastmod>2024-10-30T16:27:16.997Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html</loc>
-    <lastmod>2024-10-29T03:15:05.381Z</lastmod>
+    <lastmod>2024-10-30T16:27:17.013Z</lastmod>