Wing Lian | aa3c3f97ae | optimize dataloading to use cache, fix model token embedding sizes | 2023-05-12 13:53:27 -04:00
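
The token-embedding part of the commit above is plausibly the standard resize-after-adding-tokens pattern in transformers. A minimal sketch, assuming a LLaMA-style checkpoint; the model id and pad token are placeholders, not necessarily what this repo uses:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # placeholder id
    model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
    tokenizer.add_special_tokens({"pad_token": "[PAD]"})
    # Without the resize, ids of newly added tokens index past the embedding table.
    model.resize_token_embeddings(len(tokenizer))
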
NanoCode012 | 89b7f26b9d | Merge branch 'main' into patch-2 | 2023-05-11 21:18:38 +09:00
Wing Lian | 2bc1a5bde1 | black formatting | 2023-05-10 16:01:08 -04:00
Wing Lian | 7a490a4646 | various fixes | 2023-05-10 16:00:09 -04:00
NanoCode012 | 813aab378f | Fix Trainer() got multiple values for keyword argument 'callbacks' | 2023-05-10 18:28:28 +09:00
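
The error named in this commit is ordinary Python: a keyword argument arriving twice, once through an unpacked kwargs dict and once explicitly. A self-contained sketch of the failure mode; the function and values are illustrative, not the repo's code:

    def build_trainer(model=None, callbacks=None):
        return model, callbacks

    kwargs = {"model": "llama", "callbacks": ["save_peft"]}
    # build_trainer(callbacks=["save_peft"], **kwargs) raises:
    #   TypeError: build_trainer() got multiple values for keyword argument 'callbacks'
    build_trainer(**kwargs)  # fix: pass callbacks through a single path
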
Wing Lian | e2e68c3965 | testing mpt triton | 2023-05-09 20:57:40 -04:00
Wing Lian | a27d594788 | fix conditional so alpaca doesn't choke | 2023-05-09 20:57:07 -04:00
NanoCode012 | 174b74ddc9 | Rename variable to use same convention | 2023-05-09 02:49:44 +09:00
NanoCode012 | cf681537ec | Add CompletionPrompt type | 2023-05-09 02:49:44 +09:00
Wing Lian | bd3c5a5cb3 | Merge pull request #21 from NanoCode012/patch-1 | 2023-05-08 13:34:44 -04:00
    Fix: Scheduler and optimizer condition
Wing Lian | bcbc99e655 | Merge pull request #19 from NanoCode012/feat/callback-save-lora | 2023-05-08 13:34:07 -04:00
    Feat: Add callback save peft_model on_save
NanoCode012 | 36aaea02b9 | Update trainer.py | 2023-05-09 02:01:08 +09:00
NanoCode012 | 5b6690ac25 | Fix condition scheduler | 2023-05-09 01:44:12 +09:00
Wing Lian | a125693122 | add support for trust_remote_code for mpt models | 2023-05-08 12:07:27 -04:00
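
MPT checkpoints ship their modeling code inside the model repository rather than in transformers, so loading them requires the trust_remote_code opt-in. A minimal sketch, using the public MPT-7B id for illustration:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code=True lets transformers run the custom modeling code
    # bundled with the checkpoint, so it is worth reviewing that code first.
    model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True)
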
NanoCode012 | cc77bab526 | Add callbacks to Trainer | 2023-05-09 00:41:19 +09:00
NanoCode012 | 0d6708bfe4 | Add callback save peft_model on_save | 2023-05-09 00:38:27 +09:00
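
A callback doing what this commit describes would typically hook the Trainer's on_save event and write the PEFT adapter next to each checkpoint. A minimal sketch; the class name and directory layout are assumptions, not the repo's exact code:

    import os
    from transformers import TrainerCallback

    class SavePeftModelCallback(TrainerCallback):
        def on_save(self, args, state, control, **kwargs):
            ckpt = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
            # On a PEFT-wrapped model, save_pretrained() writes only the adapter.
            kwargs["model"].save_pretrained(os.path.join(ckpt, "adapter_model"))
            return control
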
Wing Lian | a12fb0a8da | Jeopardy bot! (#17) | 2023-05-08 03:21:40 -04:00
    * support for jeopardy dataset
    * commit the final config for jeopardy bot
Wing Lian | a4329b1068 | fix #16 load best model setting when using 8bit | 2023-05-07 18:30:48 -04:00
Wing Lian | 550502b321 | use micro batch size for eval size if not specified | 2023-05-07 18:26:05 -04:00
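
The fallback this message describes is plausibly a one-line default; a sketch, assuming a dict-like cfg with field names borrowed from the surrounding commit messages:

    # Assumed shape of the fallback, not the repo's literal code.
    eval_batch_size = cfg.get("eval_batch_size") or cfg.get("micro_batch_size")
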
Wing Lian | 247825bd57 | refactor inference, warn if model is frozen | 2023-05-07 01:54:15 -04:00
Wing Lian | cb9a887047 | Merge pull request #13 from winglian/dev | 2023-05-07 01:48:02 -04:00
    merge dev branch for various fixes
NanoCode012 | 0e74b6402e | Add eval_batch_size for evaluation | 2023-05-06 22:21:24 +09:00
Wing Lian | a10a8265ef | fix log sweep lr | 2023-05-03 15:06:03 -04:00
Wing Lian | 9105935b00 | support for multi line inference input, log sweep over learning rates | 2023-05-03 13:48:54 -04:00
Wing Lian | 7748f3d6da | fix adam bnb optimizer grouped parameters, fix peft model 8bit conversion logic, black formatting | 2023-05-01 16:31:46 -04:00
Wing Lian | 2255bb7f4f | support llama-adapter zero init attention | 2023-05-01 10:42:21 -04:00
Wing Lian | ad2b48c0fa | fsdp config dict fix, todo list, add torchdistx support | 2023-04-30 13:32:07 -04:00
Wing Lian | 9190ada23a | 8bit and deepspeed changes | 2023-04-30 06:50:35 -04:00
Wing Lian | 6dfdd2dec0 | don't load models in 8bit unless they are using an adapter, also fix tokenizer load in exceptional case | 2023-04-30 03:19:56 -04:00
Wing Lian | 29936bba7f | fix fsdp training args | 2023-04-30 00:56:28 -04:00
Wing Lian | 78821815de | fix for zero value warmup steps | 2023-04-30 00:34:12 -04:00
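
A plausible shape for this fix: treat a zero or missing warmup_steps as "derive a default" instead of handing 0 to the scheduler setup. The one-tenth heuristic below is an assumption, not the repo's actual rule:

    def resolve_warmup_steps(cfg: dict, total_num_steps: int) -> int:
        # Zero or a missing key falls through to a derived nonzero default.
        return cfg.get("warmup_steps") or max(total_num_steps // 10, 1)
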
Wing Lian | 5159d00a86 | fix sharegpt tokenization, refactor tokenization debugging | 2023-04-30 00:23:53 -04:00
Wing Lian | c0f50d9c61 | wire up gradient checkpointing for 4bit | 2023-04-28 22:28:41 -04:00
Wing Lian | 4a17a4c9a1 | fix dataset handling, support galactica | 2023-04-24 10:54:45 -04:00
Wing Lian | 097d367af6 | tweaks to data loading, 8 bit adam, accelerate and deepspeed | 2023-04-24 09:41:35 -04:00
Wing Lian | 4f2584f2dc | shuffle and split dataset after save/load | 2023-04-24 09:41:35 -04:00
Wing Lian | 8d437853c8 | fix sharegpt handling from hf, don't worry about loading llama if using earlier transformers release | 2023-04-24 09:41:35 -04:00
Wing Lian | 94f5e415a3 | various bugfixes | 2023-04-24 09:41:34 -04:00
Wing Lian | bb991fd870 | fix bug when model_type not explicitly passed | 2023-04-19 13:15:33 -04:00
Wing Lian | d65385912e | improve inference | 2023-04-19 12:57:27 -04:00
Wing Lian | 0a472e1e08 | quickstart instructions for starting from runpod (#5) | 2023-04-18 19:22:25 -04:00
Wing Lian | 8746b701fe | attempt xformers hijack attention | 2023-04-18 14:03:50 -04:00
Wing Lian | 6045345d6b | WIP large refactor to make finetune script a little more manageable (#3) | 2023-04-18 14:01:38 -04:00
Wing Lian | 81de0efc18 | add support for alpaca reflect training (#2) | 2023-04-18 08:34:05 -04:00
Wing Lian | 87d7825435 | Tokenization open assistant (#1) | 2023-04-18 01:45:49 -04:00
    * refactor prompt tokenization to more easily support open assistant
    * add open assistant handling, more logging, black formatting
|
Wing Lian
|
e1076430ff
|
suppport for alpaca-like instruction datasets without inputs
|
2023-04-17 23:32:57 -04:00 |
|
Wing Lian
|
2db9436410
|
casts the prepared data to int16 (doesn't help with training memory)
|
2023-04-17 21:36:02 -04:00 |
|
Wing Lian
|
77fca25f1b
|
4bit quantized support (wip)
|
2023-04-17 11:37:39 -04:00 |
|
Wing Lian
|
80b2ed29d8
|
various bugfixes
|
2023-04-14 21:37:07 -04:00 |
|
Wing Lian
|
f2a2029d0d
|
config chooser, update readme instructions, device config, llama flash attention, debug out the labels, fix config key checks, other bugfixes
|
2023-04-14 12:18:56 -04:00 |
|