| Author | Commit | Message | Date |
|---|---|---|---|
| Wing Lian | 607a4d33f2 | make sure to use train split if loading from hf | 2023-05-21 22:04:39 -04:00 |
| Wing Lian | 99383f14a3 | make one cycle lr div factor configurable | 2023-05-21 20:25:06 -04:00 |
| Wing Lian | 0f74464652 | fix new dataset prompt tokenizers | 2023-05-21 18:57:09 -04:00 |
| Wing Lian | e0602a9e54 | add missing `__init__` | 2023-05-21 16:36:41 -04:00 |
| Wing Lian | 2809f3f21b | pygmalion dataset prompts format, cached tokenized datasets should be hashed on the tokenizer too | 2023-05-21 16:16:09 -04:00 |
| Wing Lian | 4ea9a66dbd | tokenization fixes | 2023-05-21 08:33:06 -04:00 |
| Wing Lian | 1d5ab84486 | optionally be able to specify alpaca or chat style prompts | 2023-05-20 18:16:22 -04:00 |
| NanoCode012 | 641f8012f9 | Set half using cfg.fp16 for 4bit | 2023-05-20 02:29:31 +09:00 |
| Wing Lian | 13650732f8 | concise multiple choice and tldr summarize | 2023-05-17 11:29:17 -04:00 |
| Wing Lian | 8c2f3cb0f8 | support for replit lm | 2023-05-17 08:49:03 -04:00 |
| Wing Lian | b46bc02f0a | add alpaca multiple choice instruct dataset support | 2023-05-16 21:45:34 -04:00 |
| NanoCode012 | 2c73c81348 | Add lora_modules_to_save | 2023-05-16 19:22:00 +09:00 |
| Wing Lian | 5e37144754 | fix prompters, especially the sharegpt prompter | 2023-05-15 22:15:36 -04:00 |
| Wing Lian | bdbca8fa6c | more fixes | 2023-05-15 14:07:17 -04:00 |
| Wing Lian | 42410c783c | more fixes | 2023-05-14 09:16:41 -04:00 |
| Wing Lian | aef00b6c13 | fix torch_dtype for model load | 2023-05-14 08:44:22 -04:00 |
| Wing Lian | 0d28df0fd2 | move filter to before saving so it doesn't happen everytime, update runpod manual script | 2023-05-13 21:51:41 -04:00 |
| Wing Lian | 84c7bc4b68 | whoops, gt vs lt | 2023-05-12 14:03:25 -04:00 |
| Wing Lian | aa3c3f97ae | optimize dataloading to use cache, fix model token embedding sizes | 2023-05-12 13:53:27 -04:00 |
| NanoCode012 | 89b7f26b9d | Merge branch 'main' into patch-2 | 2023-05-11 21:18:38 +09:00 |
| Wing Lian | 2bc1a5bde1 | black formatting | 2023-05-10 16:01:08 -04:00 |
| Wing Lian | 7a490a4646 | various fixes | 2023-05-10 16:00:09 -04:00 |
| NanoCode012 | 813aab378f | Fix Trainer() got multiple values for keyword argument 'callbacks' | 2023-05-10 18:28:28 +09:00 |
| Wing Lian | e2e68c3965 | testing mpt triton | 2023-05-09 20:57:40 -04:00 |
| Wing Lian | a27d594788 | fix conditional so alpaca doesn't choke | 2023-05-09 20:57:07 -04:00 |
| NanoCode012 | 174b74ddc9 | Rename variable to use same convention | 2023-05-09 02:49:44 +09:00 |
| NanoCode012 | cf681537ec | Add CompletionPrompt type | 2023-05-09 02:49:44 +09:00 |
| Wing Lian | bd3c5a5cb3 | Merge pull request #21 from NanoCode012/patch-1 (Fix: Scheduler and optimizer condition) | 2023-05-08 13:34:44 -04:00 |
| Wing Lian | bcbc99e655 | Merge pull request #19 from NanoCode012/feat/callback-save-lora (Feat: Add callback save peft_model on_save) | 2023-05-08 13:34:07 -04:00 |
| NanoCode012 | 36aaea02b9 | Update trainer.py | 2023-05-09 02:01:08 +09:00 |
| NanoCode012 | 5b6690ac25 | Fix condition scheduler | 2023-05-09 01:44:12 +09:00 |
| Wing Lian | a125693122 | add support for trust_remote_code for mpt models | 2023-05-08 12:07:27 -04:00 |
| NanoCode012 | cc77bab526 | Add callbacks to Trainer | 2023-05-09 00:41:19 +09:00 |
| NanoCode012 | 0d6708bfe4 | Add callback save peft_model on_save | 2023-05-09 00:38:27 +09:00 |
| Wing Lian | a12fb0a8da | Jeopardy bot! (#17): support for jeopardy dataset; commit the final config for jeopardy bot | 2023-05-08 03:21:40 -04:00 |
| Wing Lian | a4329b1068 | fix #16 load best model setting when using 8bit | 2023-05-07 18:30:48 -04:00 |
| Wing Lian | 550502b321 | use micro batch size for eval size if not specified | 2023-05-07 18:26:05 -04:00 |
| Wing Lian | 247825bd57 | refactor inference, warn if model is frozen | 2023-05-07 01:54:15 -04:00 |
| Wing Lian | cb9a887047 | Merge pull request #13 from winglian/dev (merge dev branch for various fixes) | 2023-05-07 01:48:02 -04:00 |
| NanoCode012 | 0e74b6402e | Add eval_batch_size for evaluation | 2023-05-06 22:21:24 +09:00 |
| Wing Lian | a10a8265ef | fix log sweep lr | 2023-05-03 15:06:03 -04:00 |
| Wing Lian | 9105935b00 | support for multi line inference input, log sweep over learning rates | 2023-05-03 13:48:54 -04:00 |
| Wing Lian | 7748f3d6da | fix adam bnb optimizer grouped parameters, fix peft model 8bit conversion logic, black formatting | 2023-05-01 16:31:46 -04:00 |
| Wing Lian | 2255bb7f4f | support llama-adapter zero init attention | 2023-05-01 10:42:21 -04:00 |
| Wing Lian | ad2b48c0fa | fdsp config dict fix, todo list, add torchdistx support | 2023-04-30 13:32:07 -04:00 |
| Wing Lian | 9190ada23a | 8bit and deepspeed changes | 2023-04-30 06:50:35 -04:00 |
| Wing Lian | 6dfdd2dec0 | don't load models in 8bit unless they are using an adapter, also fix tokenizer load in exceptional case | 2023-04-30 03:19:56 -04:00 |
| Wing Lian | 29936bba7f | fix fsdp training args | 2023-04-30 00:56:28 -04:00 |
| Wing Lian | 78821815de | fix for zero value warmup steps | 2023-04-30 00:34:12 -04:00 |
| Wing Lian | 5159d00a86 | fix sharegpt tokenization, refactor tokenization debugging | 2023-04-30 00:23:53 -04:00 |