diff --git a/README.md b/README.md
index 55689696e..1e46944db 100644
--- a/README.md
+++ b/README.md
@@ -1,71 +1,211 @@
# Axolotl
-#### Go ahead and axolotl questions

+<div align="center">
+  <img src="image/axolotl.png" alt="axolotl">
+  <p><b>One repo to finetune them all!</b></p>
+  <p>Go ahead and axolotl questions!!</p>
+</div>
+
-## Support Matrix
+## Axolotl supports
| | fp16/fp32 | fp16/fp32 w/ lora | 4bit-quant | 4bit-quant w/flash attention | flash attention | xformers attention |
|----------|:----------|:------------------|------------|------------------------------|-----------------|--------------------|
| llama | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Pythia | ✅ | ✅ | ❌ | ❌ | ❌ | ❓ |
| cerebras | ✅ | ✅ | ❌ | ❌ | ❌ | ❓ |
+| mpt | ✅ | ❌ | ❌ | ❌ | ❌ | ❓ |
-## Getting Started
-- install python 3.9. 3.10 and above are not supported.
+## Quickstart ⚡
-- Point the config you are using to a huggingface hub dataset (see [configs/llama_7B_4bit.yml](https://github.com/winglian/axolotl/blob/main/configs/llama_7B_4bit.yml#L6-L8))
+**Requirements**: Python 3.9 (3.10 and above are not supported).
-```yaml
-datasets:
- - path: vicgalle/alpaca-gpt4
- type: alpaca
+```bash
+git clone https://github.com/OpenAccess-AI-Collective/axolotl
+cd axolotl
+
+pip3 install -e .[int4]
+
+accelerate config
+
+# finetune
+accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml
+
+# inference
+accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml \
+ --inference --lora_model_dir="./llama-7b-lora-int4"
```
-- Optionally Download some datasets, see [data/README.md](data/README.md)
+## Installation
+### Environment
-- Create a new or update the existing YAML config [config/sample.yml](config/sample.yml)
+- Docker
+ ```bash
+ docker run --gpus '"all"' --rm -it winglian/axolotl:main
+ ```
+ - `winglian/axolotl:dev`: dev branch
+ - `winglian/axolotl-runpod:main`: for runpod
+
+- Conda/Pip venv
+ 1. Install python **3.9**
+
+ 2. Install python dependencies with ONE of the following:
+ - `pip3 install -e .[int4]` (recommended)
+ - `pip3 install -e .[int4_triton]`
+ - `pip3 install -e .`
+
+### Dataset
+
+Have your dataset(s) in one of the following formats (JSONL recommended):
+
+- `alpaca`: instruction; input (optional)
+ ```json
+ {"instruction": "...", "input": "...", "output": "..."}
+ ```
+- `sharegpt`: conversations
+ ```json
+ {"conversations": [{"from": "...", "value": "..."}]}
+ ```
+- `completion`: raw corpus
+ ```json
+ {"text": "..."}
+ ```
+
+
+<details><summary>See other formats</summary>
+
+- `jeopardy`: question and answer
+ ```json
+ {"question": "...", "category": "...", "answer": "..."}
+ ```
+- `oasst`: instruction
+ ```json
+ {"INSTRUCTION": "...", "RESPONSE": "..."}
+ ```
+- `gpteacher`: instruction; input (optional)
+ ```json
+ {"instruction": "...", "input": "...", "response": "..."}
+ ```
+- `reflection`: instruction with reflection; input (optional)
+ ```json
+ {"instruction": "...", "input": "...", "output": "...", "reflection": "...", "corrected": "..."}
+ ```
+
+</details>
+
+> Have some new format to propose? Check if it's already defined in [data.py](src/axolotl/utils/data.py) in `dev` branch!
+
+Optionally, download some datasets, see [data/README.md](data/README.md)
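+
+For example, a downloaded alpaca-format JSONL can be referenced straight from a config, since a dataset `path` accepts either a huggingface repo or a relative path (see the full option list below); the file name here is illustrative:
+
+```yaml
+datasets:
+  - path: data/alpaca_data_gpt4.jsonl # illustrative local file; a hf repo also works
+    type: alpaca
+```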
+
+### Config
+
+See sample configs in the [configs](configs) folder or [examples](examples) for a quick start. It is recommended to duplicate one and modify it to your needs. The most important options are:
+
+- model
+ ```yaml
+ base_model: ./llama-7b-hf # local or huggingface repo
+ ```
+ Note: The code will load the right architecture.
+
+- dataset
+ ```yaml
+ datasets:
+ - path: vicgalle/alpaca-gpt4 # local or huggingface repo
+ type: alpaca # format from earlier
+ sequence_len: 2048 # max token length / prompt
+ ```
+
+- loading
+ ```yaml
+ load_4bit: true
+ load_in_8bit: true
+ bf16: true
+ fp16: true
+ tf32: true
+ ```
+ Note: this repo does not perform 4-bit quantization itself; `load_4bit` expects a model that has already been quantized, e.g. with GPTQ.
+
+- lora
+ ```yaml
+ adapter: lora # blank for full finetune
+ lora_r: 8
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_modules:
+ - q_proj
+ - v_proj
+ ```
+
+
+<details><summary>All yaml options</summary>
+
```yaml
# this is the huggingface model that contains *.pt, *.safetensors, or *.bin files
# this can also be a relative path to a model on disk
-base_model: decapoda-research/llama-7b-hf-int4
+base_model: ./llama-7b-hf
# you can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)
base_model_ignore_patterns:
# if the base_model repo on hf hub doesn't include configuration .json files,
# you can set that here, or leave this empty to default to base_model
-base_model_config: decapoda-research/llama-7b-hf
+base_model_config: ./llama-7b-hf
# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too
model_type: AutoModelForCausalLM
# Corresponding tokenizer for the model AutoTokenizer is a good choice
tokenizer_type: AutoTokenizer
+# whether to trust remote code when loading a model from an untrusted source
+trust_remote_code:
+
# whether you are training a 4-bit quantized model
load_4bit: true
+gptq_groupsize: 128 # group size
+gptq_model_v1: false # v1 or v2
+
# this will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer
load_in_8bit: true
+
+# Use CUDA bf16
+bf16: true
+# Use CUDA fp16
+fp16: true
+# Use CUDA tf32
+tf32: true
+
# a list of one or more datasets to finetune the model with
datasets:
# this can be either a hf dataset, or relative path
- path: vicgalle/alpaca-gpt4
# The type of prompt to use for training. [alpaca, sharegpt, gpteacher, oasst, reflection]
type: alpaca
+ data_files: # path to source data files
+
# axolotl attempts to save the dataset as an arrow after packing the data together so
# subsequent training attempts load faster, relative path
dataset_prepared_path: data/last_run_prepared
+# push prepared dataset to hub
+push_dataset_to_hub: # repo path
# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc
val_set_size: 0.04
-# if you want to use lora, leave blank to train all parameters in original model
-adapter: lora
-# if you already have a lora model trained that you want to load, put that here
-lora_model_dir:
+
# the maximum length of an input to train with, this should typically be less than 2048
# as most models have a token/context limit of 2048
sequence_len: 2048
# max sequence length to concatenate training samples together up to
# inspired by StackLLaMA. see https://huggingface.co/blog/stackllama#supervised-fine-tuning
max_packed_sequence_len: 1024
+
+# if you want to use lora, leave blank to train all parameters in original model
+adapter: lora
+# if you already have a lora model trained that you want to load, put that here
+lora_model_dir:
# lora hyperparameters
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
@@ -74,14 +214,24 @@ lora_target_modules:
- v_proj
# - k_proj
# - o_proj
+# - gate_proj
+# - down_proj
+# - up_proj
+lora_modules_to_save:
+# - embed_tokens
+# - lm_head
+lora_out_dir:
lora_fan_in_fan_out: false
-# wandb configuration if your're using it
+
+# wandb configuration if you're using it
wandb_project:
wandb_watch:
wandb_run_id:
-wandb_log_model: checkpoint
-# where to save the finsihed model to
+wandb_log_model: # 'checkpoint'
+
+# where to save the finished model to
output_dir: ./completed-model
+
# training hyperparameters
batch_size: 8
micro_batch_size: 2
@@ -89,87 +239,110 @@ eval_batch_size: 2
num_epochs: 3
warmup_steps: 100
learning_rate: 0.00003
+logging_steps:
+
# whether to mask out or include the human's prompt from the training labels
train_on_inputs: false
# don't use this, leads to wonky training (according to someone on the internet)
group_by_length: false
-# Use CUDA bf16
-bf16: true
-# Use CUDA tf32
-tf32: true
+
# does not work with current implementation of 4-bit LoRA
gradient_checkpointing: false
+
# stop training after this many evaluation losses have increased in a row
# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback
early_stopping_patience: 3
# specify a scheduler to use with the optimizer. only one_cycle is supported currently
lr_scheduler:
+# specify optimizer
+optimizer:
+# specify weight decay
+weight_decay:
+
# whether to use xformers attention patch https://github.com/facebookresearch/xformers:
xformers_attention:
# whether to use flash attention patch https://github.com/HazyResearch/flash-attention:
flash_attention:
+
# resume from a specific checkpoint dir
resume_from_checkpoint:
# if resume_from_checkpoint isn't set and you simply want it to start where it left off
# be careful with this being turned on between different models
auto_resume_from_checkpoints: false
+
# don't mess with this, it's here for accelerate and torchrun
local_rank:
+
+# add or change special tokens
+special_tokens:
+ # bos_token: "<s>"
+ # eos_token: "</s>"
+ # unk_token: "<unk>"
+# add extra tokens
+tokens:
+
+# FSDP
+fsdp:
+fsdp_config:
+
+# Deepspeed
+deepspeed:
+
+# TODO
+torchdistx_path:
+
+# Debug mode
+debug:
```
-- Install python dependencies with ONE of the following:
+
+</details>
+
- - `pip3 install -e .[int4]` (recommended)
- - `pip3 install -e .[int4_triton]`
- - `pip3 install -e .`
--
-- If not using `int4` or `int4_triton`, run `pip install "peft @ git+https://github.com/huggingface/peft.git"`
-- Configure accelerate `accelerate config` or update `~/.cache/huggingface/accelerate/default_config.yaml`
+### Accelerate
-```yaml
-compute_environment: LOCAL_MACHINE
-distributed_type: MULTI_GPU
-downcast_bf16: 'no'
-gpu_ids: all
-machine_rank: 0
-main_training_function: main
-mixed_precision: bf16
-num_machines: 1
-num_processes: 4
-rdzv_backend: static
-same_network: true
-tpu_env: []
-tpu_use_cluster: false
-tpu_use_sudo: false
-use_cpu: false
+Configure accelerate
+
+```bash
+accelerate config
+
+# Edit manually
+# nano ~/.cache/huggingface/accelerate/default_config.yaml
```
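+
+For reference, a minimal single-machine, multi-GPU `default_config.yaml` might look like the following sketch (assumes 4 GPUs and bf16; adjust `num_processes` and `mixed_precision` to your hardware):
+
+```yaml
+compute_environment: LOCAL_MACHINE
+distributed_type: MULTI_GPU
+downcast_bf16: 'no'
+gpu_ids: all
+machine_rank: 0
+main_training_function: main
+mixed_precision: bf16
+num_machines: 1
+num_processes: 4
+rdzv_backend: static
+same_network: true
+use_cpu: false
+```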
-- Train! `accelerate launch scripts/finetune.py`, make sure to choose the correct YAML config file
-- Alternatively you can pass in the config file like: `accelerate launch scripts/finetune.py configs/llama_7B_alpaca.yml`~~
+### Train
-
-## How to start training on Runpod in under 10 minutes
-
-- Choose your Docker container wisely.
-- I recommend `huggingface:transformers-pytorch-deepspeed-latest-gpu` see https://hub.docker.com/r/huggingface/transformers-pytorch-deepspeed-latest-gpu/
-- Once you start your runpod, and SSH into it:
-```shell
-export TORCH_CUDA_ARCH_LIST="7.0 7.5 8.0 8.6+PTX"
-source <(curl -s https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/dev/scripts/setup-runpod.sh)
+Run
+```bash
+accelerate launch scripts/finetune.py configs/your_config.yml
```
-- Once the setup script completes
-```shell
-accelerate launch scripts/finetune.py configs/quickstart.yml
+### Inference
+
+Add the `--inference` flag to the train command above.
+
+If you are running inference with a pretrained LoRA, pass
+```bash
+--lora_model_dir ./completed-model
```
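+
+Putting it together (config and output paths are illustrative):
+
+```bash
+accelerate launch scripts/finetune.py configs/your_config.yml \
+    --inference --lora_model_dir="./completed-model"
+```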
-- Here are some helpful environment variables you'll want to manually set if you open a new shell
-```shell
-export WANDB_MODE=offline
-export WANDB_CACHE_DIR=/workspace/data/wandb-cache
-export HF_DATASETS_CACHE="/workspace/data/huggingface-cache/datasets"
-export HUGGINGFACE_HUB_CACHE="/workspace/data/huggingface-cache/hub"
-export TRANSFORMERS_CACHE="/workspace/data/huggingface-cache/hub"
-export NCCL_P2P_DISABLE=1
+### Merge LoRA to base (dev branch 🔧)
+
+Add the flags below to the train command above.
+
+```bash
+--merge_lora --lora_model_dir="./completed-model"
```
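+
+For example, combined with the train command (config path illustrative):
+
+```bash
+accelerate launch scripts/finetune.py configs/your_config.yml \
+    --merge_lora --lora_model_dir="./completed-model"
+```
+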
+## Common Errors 🧰
+
+> CUDA out of memory
+
+Please reduce any of the below (see the example after this list):
+ - `micro_batch_size`
+ - `eval_batch_size`
+ - `sequence_len`
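+
+For example, a lighter configuration might look like this sketch (illustrative values, halved from the sample config shown earlier):
+
+```yaml
+micro_batch_size: 1
+eval_batch_size: 1
+sequence_len: 1024
+```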
+
+## Contributing 🤝
+
+Bugs? Please check the open issues first, otherwise create a new [Issue](https://github.com/OpenAccess-AI-Collective/axolotl/issues/new).
+
+PRs are **very welcome**!
\ No newline at end of file
diff --git a/data/README.md b/data/README.md
index 57b8b76c8..34d7a5659 100644
--- a/data/README.md
+++ b/data/README.md
@@ -1,6 +1,5 @@
-- Download some datasets
--
+## Download some datasets
```shell
curl https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_gpt4.json -o data/raw/alpaca_data_gpt4.json
curl https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json -L -o data/raw/vicuna_cleaned.json
@@ -8,7 +7,7 @@ curl https://github.com/teknium1/GPTeacher/blob/main/Instruct/gpt4-instruct-simi
curl https://github.com/teknium1/GPTeacher/blob/main/Roleplay/roleplay-similarity_0.6-instruct-dataset.json?raw=true -L -o data/raw/roleplay-similarity_0.6-instruct-dataset.json
```
-- Convert the JSON data files to JSONL.
+## Convert the JSON data files to JSONL
```shell
python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/alpaca_data_gpt4.json > data/alpaca_data_gpt4.jsonl
@@ -16,8 +15,9 @@ python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/vicuna_cleaned.json >
python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/roleplay-similarity_0.6-instruct-dataset.json > data/roleplay-similarity_0.6-instruct-dataset.jsonl
python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/gpt4-instruct-similarity-0.6-dataset.json > data/gpt4-instruct-similarity-0.6-dataset.jsonl
```
+---
-- Using JSONL makes it easier to subset the data if you want a smaller training set, i.e get 2000 random examples.
+Using JSONL makes it easier to subset the data if you want a smaller training set, e.g. to get 2000 random examples.
```shell
shuf -n2000 data/vicuna_cleaned.jsonl > data/vicuna_cleaned.subset0.jsonl
diff --git a/image/axolotl.png b/image/axolotl.png
new file mode 100644
index 000000000..21c27db85
Binary files /dev/null and b/image/axolotl.png differ
diff --git a/src/axolotl/utils/models.py b/src/axolotl/utils/models.py
index b2955ad1a..8792e5ba2 100644
--- a/src/axolotl/utils/models.py
+++ b/src/axolotl/utils/models.py
@@ -124,6 +124,7 @@ def load_model(
base_model_config if base_model_config else base_model,
model_path,
device_map=cfg.device_map,
+ half=cfg.fp16,
groupsize=cfg.gptq_groupsize if cfg.gptq_groupsize else -1,
is_v1_model=cfg.gptq_model_v1
if cfg.gptq_model_v1 is not None
@@ -343,6 +344,7 @@ def load_lora(model, cfg):
target_modules=cfg.lora_target_modules,
lora_dropout=cfg.lora_dropout,
fan_in_fan_out=cfg.lora_fan_in_fan_out,
+ modules_to_save=cfg.lora_modules_to_save if cfg.lora_modules_to_save else None,
bias="none",
task_type="CAUSAL_LM",
)