553a86b52c  theobjectivedad       2023-07-14 07:26:19 -05:00  Adding logging enhancement
16bb6276a5  Wing Lian             2023-06-14 07:50:15 -04:00  Merge pull request #92 from OpenAccess-AI-Collective/flash-optimum (add support for opimum bettertransformers)
dc77c8ebce  NanoCode012           2023-06-13 12:01:46 +09:00  chore: Refactor inf_kwargs out
fd2c9814c9  Wing Lian             2023-06-12 13:12:15 -04:00  Merge branch 'main' into flash-optimum
8002ffb41f  Wing Lian             2023-06-12 08:27:12 -04:00  Merge pull request #177 from NanoCode012/fix/landmark-patch (Fix landmark attention patch)
8e568bbdae  NanoCode012           2023-06-12 20:27:11 +09:00  Merge pull request #159 from AngainorDev/patch-1 (Fix training over existing lora)
b565ecf0a1  AngainorDev           2023-06-11 15:23:38 +02:00  Fix strict and Lint
974dc00a7d  NanoCode012           2023-06-11 14:00:54 +09:00  Fix set mem_id for inference and refactor
572d1141e6  NanoCode012           2023-06-11 12:05:37 +09:00  Set mem cache args on inference
958da70376  Wing Lian             2023-06-10 15:28:08 -04:00  fix formatting
c4e4f8115c  Wing Lian             2023-06-10 15:07:40 -04:00  pass a prompt in from stdin for inference
759e8673ce  Wing Lian             2023-06-10 14:25:21 -04:00  Update scripts/finetune.py (Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>)
0c6f928601  Wing Lian             2023-06-10 14:23:56 -04:00  address PR feedback
eea2731a5e  Wing Lian             2023-06-10 14:23:56 -04:00  add streaming dataset support for pretraining datasets
1210dc8fd5  Wing Lian             2023-06-10 14:23:55 -04:00  more tweaks to do pre-training with bettertransformers
488a67d75a  Wing Lian             2023-06-10 14:23:53 -04:00  experimental expansion of ctx len
8792199799  Wing Lian             2023-06-10 14:22:30 -04:00  add flash attn context for efficient training and attempt setting model to train mode
1edc30c786  Wing Lian             2023-06-10 14:22:30 -04:00  add support for opimum bettertransformers
79e2a6f140  Angainor Development  2023-06-10 19:07:54 +02:00  Merge branch 'main' into patch-1
c2508987a6  Angainor Development  2023-06-10 19:06:10 +02:00  Remove explicit definition of cfg.inference
f36e227eaf  Wing Lian             2023-06-10 12:00:52 -04:00  formatting for linter
fec6bcc3e6  Glavin Wiechert       2023-06-10 08:14:47 +00:00  Add streaming inference & fix stopping at EOS
bd3b537344  Angainor Development  2023-06-09 08:59:05 +02:00  Feed cfg.inference
52765ac588  NanoCode012           2023-06-08 23:41:12 +09:00  Set matmul tf32
4ac9e251b7  Wing Lian             2023-06-05 22:41:00 -04:00  new prompters, misc fixes for output dir missing using fsdp, and changing max seq len
74ebbf4371  Wing Lian             2023-06-02 14:29:08 -04:00  fix device map
5a631b305b  Wing Lian             2023-05-31 14:11:32 -04:00  fix batch size calculation
fac46002d4  NanoCode012           2023-05-31 14:09:18 +09:00  Merge pull request #119 from NanoCode012/feat/update-inference (Feat(inference): Swap to GenerationConfig)
33d40179ba  NanoCode012           2023-05-31 14:04:49 +09:00  Increase max_new_tokens (Co-authored-by: Wing Lian <wing.lian@gmail.com>)
c7021e191f  Wing Lian             2023-05-31 00:08:38 -04:00  Merge pull request #120 from OpenAccess-AI-Collective/model-from-path (split up llama model loading so config can be loaded from base config and models can be loaded from a path)
6fa40bf8ad  Wing Lian             2023-05-30 23:33:37 -04:00  black formatting
3aad5f3b3e  Wing Lian             2023-05-30 23:24:37 -04:00  add support for gradient accumulation steps
39a208c2bc  Wing Lian             2023-05-30 23:00:02 -04:00  fix up tokenizer config, isort fix
988aeb9c34  NanoCode012           2023-05-31 10:48:19 +09:00  Feat: Swap to GenerationConfig
bbc5bc5791  Wing Lian             2023-05-30 15:07:04 -04:00  Merge pull request #108 from OpenAccess-AI-Collective/docker-gptq (default to qlora support, make gptq specific image)
a1f9850b91  NanoCode012           2023-05-31 02:53:53 +09:00  Fix security issue or ignore false positives
37293dce07  NanoCode012           2023-05-31 02:53:53 +09:00  Apply isort then black
96e8378692  NanoCode012           2023-05-31 02:53:53 +09:00  Delete extract_lora.py
e9650d3ae4  NanoCode012           2023-05-31 02:53:53 +09:00  Fix mypy typing
82971e1565  NanoCode012           2023-05-31 02:53:23 +09:00  Lint finetune.py
392dfd9b07  NanoCode012           2023-05-31 02:53:22 +09:00  Lint and format
6ef96f569b  Wing Lian             2023-05-29 20:34:41 -04:00  default to qlora support, make gptq specific image
21f17cca69  Wing Lian             2023-05-29 00:06:35 -04:00  bnb fixes
56f9ca5709  NanoCode012           2023-05-28 23:06:10 +09:00  refactor: fix previous refactors
8bd7a49cd7  NanoCode012           2023-05-28 23:06:10 +09:00  Refactor to use DictDefault instead
93acb648bd  NanoCode012           2023-05-28 23:06:10 +09:00  Fix load error
bdfe7c9201  NanoCode012           2023-05-28 23:06:10 +09:00  Convert attrdict to addict
cc67862dd3  Wing Lian             2023-05-27 16:42:05 -04:00  move list not in list logic to fn
32e6fe9286  Wing Lian             2023-05-26 07:29:35 -04:00  load the tokenizer seperately from the model
a5bf838685  Wing Lian             2023-05-26 00:09:55 -04:00  add logging and make sure model unloads to float16