diff --git a/.nojekyll b/.nojekyll
index 45eb3398d..1c9cde5fc 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-3c080d8f
\ No newline at end of file
+5d33848c
\ No newline at end of file
diff --git a/docs/dataset-formats/index.html b/docs/dataset-formats/index.html
index 4e7ed8c02..9193a8e48 100644
--- a/docs/dataset-formats/index.html
+++ b/docs/dataset-formats/index.html
@@ -363,7 +363,7 @@ Description
The following will install the correct unsloth and extras from source.
+python scripts/unsloth_install.py | sh
Using unsloth with Axolotl
diff --git a/search.json b/search.json
index c30479083..d5ac9be02 100644
--- a/search.json
+++ b/search.json
@@ -284,7 +284,7 @@
"href": "docs/unsloth.html",
"title": "Unsloth",
"section": "",
- "text": "Overview\nUnsloth provides hand-written optimized kernels for LLM finetuning that slightly improve speed and VRAM over standard industry baselines.\n\n\nInstallation\nThe following will install unsloth from source and downgrade xformers as unsloth is incompatible with the most up to date libraries.\npip install --no-deps \"unsloth @ git+https://github.com/unslothai/unsloth.git\"\npip install --no-deps --force-reinstall xformers==0.0.26.post1\n\n\nUsing unsloth w Axolotl\nAxolotl exposes a few configuration options to try out unsloth and get most of the performance gains.\nOur unsloth integration is currently limited to the following model architectures: - llama\nThese options are specific to LoRA finetuning and cannot be used for multi-GPU finetuning\nunsloth_lora_mlp: true\nunsloth_lora_qkv: true\nunsloth_lora_o: true\nThese options are composable and can be used with multi-gpu finetuning\nunsloth_cross_entropy_loss: true\nunsloth_rms_norm: true\nunsloth_rope: true\n\n\nLimitations\n\nSingle GPU only; e.g. no multi-gpu support\nNo deepspeed or FSDP support (requires multi-gpu)\nLoRA + QLoRA support only. No full fine tunes or fp8 support.\nLimited model architecture support. Llama, Phi, Gemma, Mistral only\nNo MoE support.",
+ "text": "Overview\nUnsloth provides hand-written optimized kernels for LLM finetuning that slightly improve speed and VRAM usage over standard industry baselines.\n\n\nInstallation\nThe following will install the correct unsloth and extras from source.\npython scripts/unsloth_install.py | sh\n\n\nUsing unsloth with Axolotl\nAxolotl exposes a few configuration options to try out unsloth and get most of the performance gains.\nOur unsloth integration is currently limited to the following model architectures: - llama\nThese options are specific to LoRA finetuning and cannot be used for multi-GPU finetuning:\nunsloth_lora_mlp: true\nunsloth_lora_qkv: true\nunsloth_lora_o: true\nThese options are composable and can be used with multi-GPU finetuning:\nunsloth_cross_entropy_loss: true\nunsloth_rms_norm: true\nunsloth_rope: true\n\n\nLimitations\n\nSingle GPU only; i.e., no multi-GPU support\nNo deepspeed or FSDP support (requires multi-GPU)\nLoRA + QLoRA support only. No full fine-tunes or fp8 support.\nLimited model architecture support: Llama, Phi, Gemma, and Mistral only\nNo MoE support.",
"crumbs": [
"How-To Guides",
"Unsloth"
diff --git a/sitemap.xml b/sitemap.xml
index 2cb0a1376..dc0a2698e 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,110 +2,110 @@
 <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/index.html</loc>
-    <lastmod>2024-11-20T19:06:21.084Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.925Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/amd_hpc.html</loc>
-    <lastmod>2024-11-20T19:06:21.068Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/unsloth.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/index.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/pretraining.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/TODO.html</loc>
-    <lastmod>2024-11-20T19:06:21.068Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.909Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/FAQS.html</loc>
-    <lastmod>2024-11-20T19:06:21.068Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.909Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/config.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/torchao.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html</loc>
-    <lastmod>2024-11-20T19:06:21.072Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.913Z</lastmod>
   </url>
   <url>
     <loc>https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html</loc>
-    <lastmod>2024-11-20T19:06:21.088Z</lastmod>
+    <lastmod>2024-11-20T19:08:05.929Z</lastmod>
   </url>
 </urlset>