| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Wing Lian | 9190ada23a | 8bit and deepspeed changes | 2023-04-30 06:50:35 -04:00 |
| Wing Lian | 4dbef0941f | update ds_config | 2023-04-30 04:24:58 -04:00 |
| Wing Lian | 6dfdd2dec0 | don't load models in 8bit unless they are using an adapter, also fix tokenizer load in exceptional case | 2023-04-30 03:19:56 -04:00 |
| Wing Lian | 29936bba7f | fix fsdp training args | 2023-04-30 00:56:28 -04:00 |
| Wing Lian | 78821815de | fix for zero value warmup steps | 2023-04-30 00:34:12 -04:00 |
| Wing Lian | 5159d00a86 | fix sharegpt tokenization, refactor tokenization debugging | 2023-04-30 00:23:53 -04:00 |
| Wing Lian | c0f50d9c61 | wire up gradient checkpointing for 4bit | 2023-04-28 22:28:41 -04:00 |
| Wing Lian | 4e705eda6d | Merge pull request #9 from winglian/dev: feature dump into main | 2023-04-24 21:56:17 -04:00 |
| Wing Lian | 4a17a4c9a1 | fix dataset handling, support galactica | 2023-04-24 10:54:45 -04:00 |
| Wing Lian | 097d367af6 | tweaks to data loading, 8 bit adam, accelerate and deepspeed | 2023-04-24 09:41:35 -04:00 |
| Wing Lian | 4f2584f2dc | shuffle and split dataset after save/load | 2023-04-24 09:41:35 -04:00 |
| Wing Lian | 8d437853c8 | fix sharegpt handling from hf, don't worry about loading llama if using earlier transformers release | 2023-04-24 09:41:35 -04:00 |
| Wing Lian | 8e2a5609b3 | stablelm support | 2023-04-24 09:41:34 -04:00 |
| Wing Lian | 94f5e415a3 | various bugfixes | 2023-04-24 09:41:34 -04:00 |
| Eric Hartford | 2624bc2f11 | ignore config, add python 3.9 (#8) | 2023-04-24 07:23:19 -04:00 |
| Wing Lian | bb991fd870 | fix bug when model_type not explicitly passed | 2023-04-19 13:15:33 -04:00 |
| Wing Lian | d65385912e | improve inference | 2023-04-19 12:57:27 -04:00 |
| Wing Lian | 5749eb0a1c | fix runpod script | 2023-04-19 08:39:54 -04:00 |
| Wing Lian | 7753cdee57 | cleanup empty lines, tweak env for runpod setup | 2023-04-19 08:24:58 -04:00 |
| Wing Lian | f50de1b1cb | handle empty lines | 2023-04-19 08:03:34 -04:00 |
| Wing Lian | 0a472e1e08 | quickstart instructions for starting from runpod (#5) | 2023-04-18 19:22:25 -04:00 |
| Wing Lian | 5cb7ea49a6 | update readme w compat matrix | 2023-04-18 14:42:37 -04:00 |
| Wing Lian | 8746b701fe | attempt xformers hijack attention | 2023-04-18 14:03:50 -04:00 |
| Wing Lian | 6045345d6b | WIP large refactor to make finetune script a little more manageable (#3) | 2023-04-18 14:01:38 -04:00 |
| Wing Lian | 81de0efc18 | add support for alpaca reflect training (#2) | 2023-04-18 08:34:05 -04:00 |
| Wing Lian | 34af1b465f | update readme | 2023-04-18 01:58:32 -04:00 |
| Wing Lian | 87d7825435 | Tokenization open assistant (#1): refactor prompt tokenization to more easily support open assistant; add open assistant handling, more logging, black formatting | 2023-04-18 01:45:49 -04:00 |
| Wing Lian | eb808903e5 | fix llama check | 2023-04-18 01:19:53 -04:00 |
| Wing Lian | 3f3f561c06 | update readme | 2023-04-18 00:45:25 -04:00 |
| Wing Lian | 8f36f3cd5a | fix conditional check to prevent always using 4bit | 2023-04-18 00:36:03 -04:00 |
| Wing Lian | 69164da079 | improve llama check and fix safetensors file check | 2023-04-17 23:49:21 -04:00 |
| Wing Lian | e1076430ff | support for alpaca-like instruction datasets without inputs | 2023-04-17 23:32:57 -04:00 |
| Wing Lian | 2db9436410 | casts the prepared data to int16 (doesn't help with training memory) | 2023-04-17 21:36:02 -04:00 |
| Wing Lian | 120e7df7df | bugfixes | 2023-04-17 18:23:55 -04:00 |
| Wing Lian | 87e073d0de | fix lora target module, require explicit flash attention, fix min logging steps, don't use adam8bit for int4, hash prepared datasets, support hf hub datasets | 2023-04-17 18:01:12 -04:00 |
| Wing Lian | 4131183115 | fix install to work with latest alpaca lora 4bit | 2023-04-17 12:45:12 -04:00 |
| Wing Lian | 77fca25f1b | 4bit quantized support (wip) | 2023-04-17 11:37:39 -04:00 |
| Wing Lian | 12de7b7cf7 | cleanup, prep for 4bit quant support | 2023-04-16 11:06:41 -04:00 |
| Wing Lian | d1aed4c8e5 | deepspeed doesn't work with flash-attn, and the gpu savings w flash attn are better than the deepspeed headaches | 2023-04-16 06:59:47 -04:00 |
| Wing Lian | a4593832a9 | fix logging | 2023-04-15 23:12:48 -04:00 |
| Wing Lian | 23938015c8 | prepare datasets only flag | 2023-04-15 16:30:55 -04:00 |
| Wing Lian | d060c803ce | add llama 7b config and fix lora_fan_in_fan_out for llama (copy pasta bug) | 2023-04-15 14:26:52 -04:00 |
| Wing Lian | d33a975747 | configure log level, add llama 7b config | 2023-04-15 14:24:37 -04:00 |
| Wing Lian | 05fffb53b4 | more logging, wandb fixes | 2023-04-15 13:37:17 -04:00 |
| Wing Lian | 2df63ef815 | refactor trainer setup to account for deepspeed integration | 2023-04-15 12:16:42 -04:00 |
| Wing Lian | b164725417 | improve prepared dataset loading, fix inference | 2023-04-15 12:14:52 -04:00 |
| Wing Lian | 937f44f021 | helpful info output | 2023-04-15 00:03:43 -04:00 |
| Wing Lian | 902dd0ab47 | fix issue with completed model being empty (see https://github.com/huggingface/peft/issues/286#issuecomment-1501617281) | 2023-04-14 23:57:55 -04:00 |
| Wing Lian | 80b2ed29d8 | various bugfixes | 2023-04-14 21:37:07 -04:00 |
| Wing Lian | 45f77dd51e | better handling of llama model import | 2023-04-14 19:30:41 -04:00 |