From ba9ac723f120d492eb136ccac8d835d64988c9d7 Mon Sep 17 00:00:00 2001
From: NanoCode012
Date: Thu, 25 May 2023 09:31:34 +0900
Subject: [PATCH] Update quickstart. Add common error and contribution section.

---
 README.md | 29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index d4d7128ff..961eb413c 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@
 | mpt      | ✅  | ❌  | ❌  | ❌  | ❌  | ❓  |
 
-## Quick start
+## Quickstart ⚡
 
 **Requirements**: Python 3.9.
 
@@ -32,12 +32,15 @@
 git clone https://github.com/OpenAccess-AI-Collective/axolotl
 pip3 install -e .[int4]
 
 accelerate config
+
+# finetune
 accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml
+
+# inference
+accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml --inference
 ```
-
-
-## Requirements and Installation
+## Installation
 
 ### Environment
 
@@ -108,6 +111,8 @@ Have dataset(s) in one of the following format (JSONL recommended):
 {"text": "..."}
 ```
 
+> Have some new format to propose? Check if it's already defined in [data.py](src/axolotl/utils/data.py) in the `dev` branch!
+
 Optionally, download some datasets, see [data/README.md](data/README.md)
 
@@ -309,6 +314,7 @@ Configure accelerate
 
 ```bash
 accelerate config
+# Edit manually
 # nano ~/.cache/huggingface/accelerate/default_config.yaml
 ```
 
@@ -330,5 +336,18 @@ If you are inferencing a pretrained LORA, pass
 
 ### Merge LORA to base (Dev branch 🔧 )
 
-Add `--merge_lora --lora_model_dir="path/to/lora"` flag to train command above
+Add the flags below to the train command above:
+```bash
+--merge_lora --lora_model_dir="./completed-model"
+```
+
+## Common Errors 🧰
+
+- CUDA out of memory: Please reduce `micro_batch_size` and/or `eval_batch_size`
+
+## Contributing 🤝
+
+Bugs? Please check whether an open issue already exists before creating a new [Issue](https://github.com/OpenAccess-AI-Collective/axolotl/issues/new).
+
+PRs are **greatly welcomed**!
\ No newline at end of file
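For the CUDA out-of-memory tip this patch adds, a minimal sketch of the relevant config keys — assuming a typical axolotl YAML config in the style of `examples/4bit-lora-7b/config.yml`; the values shown are illustrative, not project defaults:

```yaml
# Hypothetical excerpt from a training config.
# On CUDA OOM, halve these values and retry.
micro_batch_size: 2   # per-device batch size used for training steps
eval_batch_size: 2    # batch size used during evaluation
```

Since gradient accumulation keeps the effective batch size independent of `micro_batch_size`, lowering it trades step speed for memory rather than changing the optimization itself.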