Commit Graph

62 Commits

Author SHA1 Message Date
NanoCode012
392dfd9b07 Lint and format 2023-05-31 02:53:22 +09:00
Wing Lian
21f17cca69 bnb fixes 2023-05-29 00:06:35 -04:00
NanoCode012
56f9ca5709 refactor: fix previous refactors 2023-05-28 23:06:10 +09:00
NanoCode012
8bd7a49cd7 Refactor to use DictDefault instead 2023-05-28 23:06:10 +09:00
NanoCode012
93acb648bd Fix load error 2023-05-28 23:06:10 +09:00
NanoCode012
bdfe7c9201 Convert attrdict to addict 2023-05-28 23:06:10 +09:00
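The attrdict-to-addict conversion and the DictDefault refactor in the commits above suggest a config wrapper where missing keys resolve to None instead of raising. A minimal sketch of that idea, assuming addict's Dict as the base class (the exact class in this repo may differ):

```python
# Sketch only: an addict Dict whose missing keys return None, so config
# lookups like cfg.flash_attn can be tested for truthiness without guards.
from addict import Dict


class DictDefault(Dict):
    """An addict Dict whose missing keys resolve to None."""

    def __missing__(self, key):
        return None


cfg = DictDefault({"base_model": "huggyllama/llama-7b", "lora_r": 8})
print(cfg.base_model)  # attribute-style access: "huggyllama/llama-7b"
print(cfg.flash_attn)  # missing key -> None instead of a KeyError
```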
Wing Lian
cc67862dd3 move list-not-in-list logic to a helper function 2023-05-27 16:42:05 -04:00
Wing Lian
32e6fe9286 load the tokenizer separately from the model 2023-05-26 07:29:35 -04:00
Wing Lian
a5bf838685 add logging and make sure model unloads to float16 2023-05-26 00:09:55 -04:00
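The two commits above (separate tokenizer loading, unloading the merged model to float16) point at a common transformers/peft pattern. A hedged sketch with placeholder model names, not the repo's actual code:

```python
# Sketch only: load the tokenizer on its own so tokenizer-only work
# (dataset prep, token debugging) never pays the model-load cost, and
# keep the model in float16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Only needed when actually training or running inference:
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", torch_dtype=torch.float16
)
# If a LoRA adapter were attached via peft, merge_and_unload() would fold
# it back into the base weights before any float16 cast:
#   model = peft_model.merge_and_unload().to(torch.float16)
```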
Wing Lian
1f5d83ea72 remove unneeded code, add validation 2023-05-24 22:47:43 -04:00
Wing Lian
3457810988 Update scripts/finetune.py
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2023-05-21 23:00:28 -04:00
Wing Lian
ae1719d30c Update scripts/finetune.py for logging
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2023-05-21 23:00:23 -04:00
Wing Lian
1d5ab84486 optionally be able to specify alpaca or chat style prompts 2023-05-20 18:16:22 -04:00
Wing Lian
b46bc02f0a add alpaca multiple choice instruct dataset support 2023-05-16 21:45:34 -04:00
Wing Lian
f98e173b59 reorder options so debug can happen in the same prepare step 2023-05-15 22:26:30 -04:00
Wing Lian
bdbca8fa6c more fixes 2023-05-15 14:07:17 -04:00
Wing Lian
0d28df0fd2 move filter to before saving so it doesn't happen every time, update runpod manual script 2023-05-13 21:51:41 -04:00
NanoCode012
52aada7174 Fix typo 2023-05-11 20:22:30 +09:00
Wing Lian
2bc1a5bde1 black formatting 2023-05-10 16:01:08 -04:00
Wing Lian
915c56cd97 Update finetune.py 2023-05-09 15:05:39 -04:00
NanoCode012
cd2395987e Don't save full model for lora 2023-05-10 03:18:38 +09:00
NanoCode012
71a1f7f38c Save adapter for lora 2023-05-10 01:08:22 +09:00
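The pair of commits above matches how PEFT handles checkpoints: calling save_pretrained on the PEFT-wrapped model writes only the adapter weights, not the full base model. A minimal sketch (model name and LoRA hyperparameters are illustrative):

```python
# Sketch: save_pretrained on a PeftModel writes only adapter_config.json
# and the adapter weights (a few MB), not the multi-GB base checkpoint.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
lora_cfg = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)

model.save_pretrained("./lora-out")  # adapter only; base weights untouched
```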
Wing Lian
79deb35c68 setup runpod images
use github.ref_name
2023-05-08 10:48:32 -04:00
Wing Lian
47ad3890bc fix whitespace and instruction on inference 2023-05-07 08:28:15 -04:00
Wing Lian
247825bd57 refactor inference, warn if model is frozen 2023-05-07 01:54:15 -04:00
Wing Lian
9105935b00 support for multi-line inference input, log sweep over learning rates 2023-05-03 13:48:54 -04:00
Wing Lian
fe9c29d73e install peft from main branch 2023-05-01 12:24:04 -04:00
Wing Lian
2255bb7f4f support llama-adapter zero init attention 2023-05-01 10:42:21 -04:00
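LLaMA-Adapter's zero-init attention is exposed in peft as the "adaption prompt" method; a hedged sketch of wiring it up, with illustrative layer and prompt-length values:

```python
# Sketch only: LLaMA-Adapter-style tuning via peft's AdaptionPromptConfig
# (zero-init attention). Values below are illustrative, not the repo's.
from peft import AdaptionPromptConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
config = AdaptionPromptConfig(
    adapter_len=10,      # learnable prompt tokens per adapted layer
    adapter_layers=30,   # number of top transformer layers to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
```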
Wing Lian
55baef0e03 use prebuilt wheels for flash-attn and deepspeed 2023-05-01 09:52:03 -04:00
Wing Lian
5159d00a86 fix sharegpt tokenization, refactor tokenization debugging 2023-04-30 00:23:53 -04:00
Wing Lian
94f5e415a3 various bugfixes 2023-04-24 09:41:34 -04:00
Wing Lian
d65385912e improve inference 2023-04-19 12:57:27 -04:00
Wing Lian
5749eb0a1c fix runpod script 2023-04-19 08:39:54 -04:00
Wing Lian
0a472e1e08 quickstart instructions for starting from runpod (#5) 2023-04-18 19:22:25 -04:00
Wing Lian
6045345d6b WIP large refactor to make finetune script a little more manageable (#3) 2023-04-18 14:01:38 -04:00
Wing Lian
81de0efc18 add support for alpaca reflect training (#2) 2023-04-18 08:34:05 -04:00
Wing Lian
87d7825435 Tokenization open assistant (#1)
* refactor prompt tokenization to more easily support open assistant

* add open assistant handling, more logging, black formatting
2023-04-18 01:45:49 -04:00
Wing Lian
eb808903e5 fix llama check 2023-04-18 01:19:53 -04:00
Wing Lian
8f36f3cd5a fix conditional check to prevent always using 4bit 2023-04-18 00:36:03 -04:00
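The "always using 4bit" bug above reads like a classic truthiness mistake; a hypothetical illustration (not the repo's actual code):

```python
# Hypothetical illustration of the failure mode: testing for a key's
# presence instead of its value makes the 4bit branch unconditional.
cfg = {"load_4bit": False}

if "load_4bit" in cfg:      # buggy: True even when the flag is False
    print("would quantize to 4bit (wrong)")

if cfg.get("load_4bit"):    # fixed: honors the configured value
    print("would quantize to 4bit")
```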
Wing Lian
69164da079 improve llama check and fix safetensors file check 2023-04-17 23:49:21 -04:00
Wing Lian
e1076430ff support for alpaca-like instruction datasets without inputs 2023-04-17 23:32:57 -04:00
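For the no-input variant above, the standard Alpaca template drops the Input block entirely rather than leaving it empty; a sketch of the two forms (the exact template strings in this repo may differ):

```python
# Sketch of the standard Alpaca prompt, with and without an input field.
from typing import Optional


def alpaca_prompt(instruction: str, inp: Optional[str] = None) -> str:
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )
```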
Wing Lian
2db9436410 casts the prepared data to int16 (doesn't help with training memory) 2023-04-17 21:36:02 -04:00
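The int16 cast above shrinks the prepared dataset on disk (LLaMA's 32,000-token vocabulary fits under int16's 32,767 maximum) but, as the message notes, not training memory, since tensors are widened again at train time. A tiny illustration:

```python
# Illustration: token IDs stored as int16 take half the space of int32
# and a quarter of int64, as long as the vocab stays under 32768.
import numpy as np

ids = np.array([1, 15043, 3186, 2], dtype=np.int64)
ids16 = ids.astype(np.int16)
print(ids16.nbytes, "bytes vs", ids.nbytes)  # 8 vs 32
```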
Wing Lian
120e7df7df bugfixes 2023-04-17 18:23:55 -04:00
Wing Lian
87e073d0de fix lora target module, require explicit flash attention, fix min logging steps, don't use adam8bit for int4, hash prepared datasets, support hf hub datasets 2023-04-17 18:01:12 -04:00
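One way to read "hash prepared datasets" above is keying the tokenized-dataset cache on a digest of the dataset configuration, so any config edit invalidates the cache; a purely illustrative sketch:

```python
# Illustrative sketch: a stable digest of the dataset config list,
# usable as a cache-directory name for the prepared (tokenized) data.
import hashlib
import json


def dataset_cache_key(datasets_cfg) -> str:
    blob = json.dumps(datasets_cfg, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:16]


print(dataset_cache_key([{"path": "tatsu-lab/alpaca", "type": "alpaca"}]))
```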
Wing Lian
77fca25f1b 4bit quantized support (wip) 2023-04-17 11:37:39 -04:00
Wing Lian
12de7b7cf7 cleanup, prep for 4bit quant support 2023-04-16 11:06:41 -04:00
Wing Lian
d1aed4c8e5 deepspeed doesn't work with flash-attn, and the GPU savings with flash-attn outweigh the deepspeed headaches 2023-04-16 06:59:47 -04:00
Wing Lian
a4593832a9 fix logging 2023-04-15 23:12:48 -04:00
Wing Lian
23938015c8 prepare datasets only flag 2023-04-15 16:30:55 -04:00
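A "prepare datasets only" flag like the one above usually short-circuits the training entrypoint after tokenization; a hedged sketch assuming a fire-style CLI (the real flag name and wiring in scripts/finetune.py may differ):

```python
# Hypothetical sketch of a prepare-only code path. fire turns keyword
# args into CLI flags, e.g. `python finetune.py cfg.yml --prepare_ds_only`.
import fire


def train(config: str, prepare_ds_only: bool = False):
    print(f"loading + tokenizing datasets from {config} ...")
    # ... tokenized datasets would be written to the cache here ...
    if prepare_ds_only:
        print("datasets prepared; exiting before building the trainer")
        return
    print("starting training")


if __name__ == "__main__":
    fire.Fire(train)
```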
Wing Lian
d33a975747 configure log level, add llama 7b config 2023-04-15 14:24:37 -04:00