diff --git a/.nojekyll b/.nojekyll
index dc950e583..68a08cc50 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-4c6ae543
\ No newline at end of file
+0b256313
\ No newline at end of file
diff --git a/docs/dataset-formats/index.html b/docs/dataset-formats/index.html
index 04601f49f..b8222bf09 100644
--- a/docs/dataset-formats/index.html
+++ b/docs/dataset-formats/index.html
@@ -357,7 +357,7 @@ Description
-When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.
+Liger Kernel: Efficient Triton Kernels for LLM Training
+https://github.com/linkedin/Liger-Kernel
+Liger (LinkedIn GPU Efficient Runtime) Kernel is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. The Liger Kernel composes well and is compatible with both FSDP and DeepSpeed.
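For intuition about what one of these kernels computes, here is a plain-Python sketch of RMSNorm, the operation behind the `liger_rms_norm` flag in the config below. This is an illustrative reference only, not the fused Triton implementation, and the input values are made up:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """Reference RMSNorm: scale x by the reciprocal of its root-mean-square,
    then apply a learned per-element weight. Liger fuses this into a single
    Triton kernel; this loop version only shows the math."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

# With unit weights, the output has root-mean-square ~1.
out = rms_norm([1.0, -2.0, 3.0], [1.0, 1.0, 1.0])
```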
+plugins:
+ - axolotl.integrations.liger.LigerPlugin
+liger_rope: true
+liger_rms_norm: true
+liger_swiglu: true
+liger_fused_linear_cross_entropy: true

Pass the appropriate flag to the inference command, depending upon what kind of model was trained:
Pretrained LORA:
python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"

Full weights finetune:
python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"

Full weights finetune w/ a prompt from a text file:
cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
    --base_model="./completed-model" --prompter=None --load_in_8bit=True

With gradio hosting:
python -m axolotl.cli.inference examples/your_config.yml --gradio

Please use --sample_packing False if you have it enabled and receive an error similar to the one below:
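The prompt-from-file invocation above is ordinary shell piping. The same pattern can be driven from Python with `subprocess`; the child command below is a trivial stand-in (not axolotl) used only to show a prompt being fed over stdin:

```python
import subprocess
import sys

# Stand-in for: cat /tmp/prompt.txt | python -m axolotl.cli.inference ...
# This child process simply upper-cases whatever arrives on stdin.
prompt = "hello axolotl"
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
    input=prompt,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # HELLO AXOLOTL
```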
@@ -901,9 +913,9 @@ cd skypilot/llm/axolotl

Merge LORA to base
The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.

python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"

You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with

CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...

although this will be very slow; using the config options above is recommended instead.
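Numerically, merging folds the low-rank update back into the base weights: W_merged = W + scale * (B @ A), where A and B are the adapter matrices and scale is typically lora_alpha / r. A minimal pure-Python sketch with made-up 2x2 shapes, illustrative only and not axolotl's implementation:

```python
def merge_lora(W, A, B, scale):
    """Fold the scaled low-rank update B @ A into the frozen base
    weight matrix W. All matrices are plain nested lists:
    W is rows x cols, B is rows x r, A is r x cols."""
    rows, cols, r = len(W), len(W[0]), len(A)
    merged = [row[:] for row in W]
    for i in range(rows):
        for j in range(cols):
            merged[i][j] += scale * sum(B[i][k] * A[k][j] for k in range(r))
    return merged

W = [[1.0, 0.0], [0.0, 1.0]]   # base weight (2x2)
A = [[0.5, 0.5]]               # rank-1 adapter, A: r x cols
B = [[1.0], [2.0]]             # B: rows x r
merged = merge_lora(W, A, B, scale=2.0)  # e.g. lora_alpha=16, r=8
```

After merging, the adapter is no longer needed at inference time, which is why the merged model can be loaded like any full-weights checkpoint.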
Building something cool with Axolotl? Consider adding a badge to your model card.
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

Bugs? Please check the open issues, otherwise create a new issue.
PRs are greatly welcome!
Please run the quickstart instructions, followed by the below, to set up your environment:
pip3 install -r requirements-dev.txt -r requirements-tests.txt
pre-commit install

# test
pytest tests/

# optional: run against all files
pre-commit run --all-files

Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.