Commit Graph

83 Commits

Author SHA1 Message Date
Wing Lian
c4e4f8115c pass a prompt in from stdin for inference 2023-06-10 15:07:40 -04:00
Wing Lian
f36e227eaf formatting for linter 2023-06-10 12:00:52 -04:00
Glavin Wiechert
fec6bcc3e6 Add streaming inference & fix stopping at EOS 2023-06-10 08:14:47 +00:00
NanoCode012
52765ac588 Set matmul tf32 2023-06-08 23:41:12 +09:00
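Enabling TF32 matmul (52765ac588) is typically a pair of PyTorch backend flags; a minimal sketch:

```python
import torch

# Allow TF32 tensor-core math for float32 matmuls on Ampere+ GPUs;
# trades a small amount of precision for a large matmul speedup.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```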
Wing Lian
4ac9e251b7 new prompters; misc fixes for missing output dir when using FSDP and for changing max seq len 2023-06-05 22:41:00 -04:00
Wing Lian
74ebbf4371 fix device map 2023-06-02 14:29:08 -04:00
Wing Lian
5a631b305b fix batch size calculation 2023-05-31 14:11:32 -04:00
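The usual relationship behind such a batch-size calculation (a sketch with illustrative names, not the repo's exact code): the effective batch size is the per-device micro-batch times the gradient-accumulation steps times the number of devices, so the accumulation steps can be derived from a target batch size.

```python
def gradient_accumulation_steps(batch_size: int, micro_batch_size: int,
                                world_size: int = 1) -> int:
    # effective batch = micro_batch_size * steps * world_size,
    # so solve for steps given the desired total batch size.
    steps, remainder = divmod(batch_size, micro_batch_size * world_size)
    if remainder:
        raise ValueError("batch_size must be divisible by "
                         "micro_batch_size * world_size")
    return steps
```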
NanoCode012
fac46002d4 Merge pull request #119 from NanoCode012/feat/update-inference
Feat(inference): Swap to GenerationConfig
2023-05-31 14:09:18 +09:00
NanoCode012
33d40179ba Increase max_new_tokens
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2023-05-31 14:04:49 +09:00
Wing Lian
c7021e191f Merge pull request #120 from OpenAccess-AI-Collective/model-from-path
split up llama model loading so config can be loaded from base config and models can be loaded from a path
2023-05-31 00:08:38 -04:00
Wing Lian
6fa40bf8ad black formatting 2023-05-30 23:33:37 -04:00
Wing Lian
3aad5f3b3e add support for gradient accumulation steps 2023-05-30 23:24:37 -04:00
Wing Lian
39a208c2bc fix up tokenizer config, isort fix 2023-05-30 23:00:02 -04:00
NanoCode012
988aeb9c34 Feat: Swap to GenerationConfig 2023-05-31 10:48:19 +09:00
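The swap in 988aeb9c34 moves generation arguments into a `transformers.GenerationConfig` object instead of loose keyword arguments to `model.generate`; a minimal sketch (parameter values are illustrative):

```python
from transformers import GenerationConfig

# Collect sampling parameters in one config object; it can then be passed
# as model.generate(input_ids, generation_config=cfg).
cfg = GenerationConfig(
    max_new_tokens=1024,  # cf. 33d40179ba, "Increase max_new_tokens"
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```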
Wing Lian
bbc5bc5791 Merge pull request #108 from OpenAccess-AI-Collective/docker-gptq
default to qlora support, make gptq specific image
2023-05-30 15:07:04 -04:00
NanoCode012
a1f9850b91 Fix security issue or ignore false positives 2023-05-31 02:53:53 +09:00
NanoCode012
37293dce07 Apply isort then black 2023-05-31 02:53:53 +09:00
NanoCode012
96e8378692 Delete extract_lora.py 2023-05-31 02:53:53 +09:00
NanoCode012
e9650d3ae4 Fix mypy typing 2023-05-31 02:53:53 +09:00
NanoCode012
82971e1565 Lint finetune.py 2023-05-31 02:53:23 +09:00
NanoCode012
392dfd9b07 Lint and format 2023-05-31 02:53:22 +09:00
Wing Lian
6ef96f569b default to qlora support, make gptq specific image 2023-05-29 20:34:41 -04:00
Wing Lian
21f17cca69 bnb fixes 2023-05-29 00:06:35 -04:00
NanoCode012
56f9ca5709 refactor: fix previous refactors 2023-05-28 23:06:10 +09:00
NanoCode012
8bd7a49cd7 Refactor to use DictDefault instead 2023-05-28 23:06:10 +09:00
NanoCode012
93acb648bd Fix load error 2023-05-28 23:06:10 +09:00
NanoCode012
bdfe7c9201 Convert attrdict to addict 2023-05-28 23:06:10 +09:00
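The refactor in 8bd7a49cd7/bdfe7c9201 replaces attrdict with an addict-style dict whose missing keys resolve to a default rather than raising; the behavior can be approximated in plain Python (a sketch, not the project's actual class):

```python
class DictDefault(dict):
    # Attribute-style access that returns None for missing keys, so config
    # lookups like cfg.lora_r don't raise on absent options.
    def __getattr__(self, name):
        value = self.get(name)
        # Wrap nested dicts so chained access keeps the default behavior.
        return DictDefault(value) if isinstance(value, dict) else value
```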
Wing Lian
cc67862dd3 move list not in list logic to fn 2023-05-27 16:42:05 -04:00
Wing Lian
32e6fe9286 load the tokenizer separately from the model 2023-05-26 07:29:35 -04:00
Wing Lian
a5bf838685 add logging and make sure model unloads to float16 2023-05-26 00:09:55 -04:00
Wing Lian
1f5d83ea72 remove unneeded code, add validation 2023-05-24 22:47:43 -04:00
Wing Lian
3457810988 Update scripts/finetune.py
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2023-05-21 23:00:28 -04:00
Wing Lian
ae1719d30c Update scripts/finetune.py for logging
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2023-05-21 23:00:23 -04:00
Wing Lian
1d5ab84486 optionally be able to specify alpaca or chat style prompts 2023-05-20 18:16:22 -04:00
Wing Lian
b46bc02f0a add alpaca multiple choice instruct dataset support 2023-05-16 21:45:34 -04:00
Wing Lian
f98e173b59 reorder options so debug can happen in the same prepare step 2023-05-15 22:26:30 -04:00
Wing Lian
bdbca8fa6c more fixes 2023-05-15 14:07:17 -04:00
Wing Lian
0d28df0fd2 move filter to before saving so it doesn't happen every time, update runpod manual script 2023-05-13 21:51:41 -04:00
NanoCode012
52aada7174 Fix typo 2023-05-11 20:22:30 +09:00
Wing Lian
2bc1a5bde1 black formatting 2023-05-10 16:01:08 -04:00
Wing Lian
915c56cd97 Update finetune.py 2023-05-09 15:05:39 -04:00
NanoCode012
cd2395987e Don't save full model for lora 2023-05-10 03:18:38 +09:00
NanoCode012
71a1f7f38c Save adapter for lora 2023-05-10 01:08:22 +09:00
Wing Lian
79deb35c68 setup runpod images
use github.ref_name
2023-05-08 10:48:32 -04:00
Wing Lian
47ad3890bc fix whitespace and instruction on inference 2023-05-07 08:28:15 -04:00
Wing Lian
247825bd57 refactor inference, warn if model is frozen 2023-05-07 01:54:15 -04:00
Wing Lian
9105935b00 support for multi line inference input, log sweep over learning rates 2023-05-03 13:48:54 -04:00
Wing Lian
fe9c29d73e install peft from main branch 2023-05-01 12:24:04 -04:00
Wing Lian
2255bb7f4f support llama-adapter zero init attention 2023-05-01 10:42:21 -04:00
Wing Lian
55baef0e03 use prebuilt wheels for flash-attn and deepspeed 2023-05-01 09:52:03 -04:00