Wing Lian | 8746b701fe | attempt xformers hijack attention | 2023-04-18 14:03:50 -04:00
Wing Lian | 6045345d6b | WIP large refactor to make finetune script a little more manageable (#3) | 2023-04-18 14:01:38 -04:00
Wing Lian | 81de0efc18 | add support for alpaca reflect training (#2) | 2023-04-18 08:34:05 -04:00
Wing Lian | 87d7825435 | Tokenization open assistant (#1): refactor prompt tokenization to more easily support open assistant; add open assistant handling, more logging, black formatting | 2023-04-18 01:45:49 -04:00
Wing Lian | e1076430ff | support for alpaca-like instruction datasets without inputs | 2023-04-17 23:32:57 -04:00
Wing Lian | 2db9436410 | casts the prepared data to int16 (doesn't help with training memory) | 2023-04-17 21:36:02 -04:00
Wing Lian | 77fca25f1b | 4bit quantized support (wip) | 2023-04-17 11:37:39 -04:00
Wing Lian | 80b2ed29d8 | various bugfixes | 2023-04-14 21:37:07 -04:00
Wing Lian | f2a2029d0d | config chooser, update readme instructions, device config, llama flash attention, debug out the labels, fix config key checks, other bugfixes | 2023-04-14 12:18:56 -04:00
Wing Lian | a6028d302e | black formatting | 2023-04-14 07:25:52 -04:00
Wing Lian | 8d959a7e26 | make it work with pythia in the cloud | 2023-04-14 07:24:55 -04:00
Wing Lian | ce24f5e246 | WIP for axolotl trainer | 2023-04-14 00:20:05 -04:00