Wing Lian
c969f0a9dc
add docs
2023-06-15 08:43:20 -04:00
Wing Lian
d7635b7148
hint to what AMP means
2023-06-15 02:06:27 -04:00
Wing Lian
88e17ffc50
add float16 docs and tweak typehints
2023-06-15 02:05:31 -04:00
Wing Lian
16bb6276a5
Merge pull request #92 from OpenAccess-AI-Collective/flash-optimum
...
add support for optimum bettertransformers
2023-06-14 07:50:15 -04:00
NanoCode012
3513885f43
Fix sharegpt type
2023-06-14 01:10:58 +09:00
PocketDoc Labs
5ff547dc70
Update README.md to include a community showcase
2023-06-12 22:38:10 -07:00
mhenrichsen
34ae69989f
fix inference
2023-06-12 21:39:19 +02:00
Wing Lian
fd2c9814c9
Merge branch 'main' into flash-optimum
2023-06-12 13:12:15 -04:00
Wing Lian
74ef5cc083
Merge pull request #192 from OpenAccess-AI-Collective/sharegpt-custom-prompt
...
misc fixes
2023-06-12 08:26:38 -04:00
NanoCode012
52cde69288
Fix config path after config moved
2023-06-12 17:06:15 +09:00
Wing Lian
aac4b7691e
add new sharegpt, refactor prompt so it can be customized later, add exception if no data is processed
2023-06-11 19:42:25 -04:00
NanoCode012
4cd1deeef2
Add save_steps and eval_steps to Readme
2023-06-12 02:44:46 +09:00
Wing Lian
336aa3fd48
gptq lora llama is obviously good
2023-06-11 11:05:29 -04:00
Wing Lian
d0d7eaa4f3
update openllama and clean up paths
2023-06-11 11:03:31 -04:00
Wing Lian
a6ebf57e82
fix table formatting
2023-06-11 10:55:32 -04:00
Wing Lian
280832cec2
more matrix updates
2023-06-11 10:52:36 -04:00
Wing Lian
a43bae9ff0
update the support matrix
2023-06-11 10:44:03 -04:00
Wing Lian
c4e4f8115c
pass a prompt in from stdin for inference
2023-06-10 15:07:40 -04:00
Wing Lian
eea2731a5e
add streaming dataset support for pretraining datasets
2023-06-10 14:23:56 -04:00
Wing Lian
5878bb1f3a
add option to readme
2023-06-10 11:57:41 -04:00
PocketDocLabs
16f9e28048
Update README.md to reflect current gradient checkpointing support
...
Previously the readme stated that gradient checkpointing was incompatible with 4-bit LoRA in the current implementation; however, this is no longer the case. I have replaced the warning with a link to the Hugging Face documentation on gradient checkpointing.
2023-06-09 16:10:58 -07:00
NanoCode012
b5aa8d854c
Merge pull request #169 from NanoCode012/feat/landmark
...
Feat: Add landmark attention
2023-06-10 07:26:06 +09:00
NanoCode012
b242b69e10
Fix falcon support lora
2023-06-09 17:50:16 +09:00
NanoCode012
2e13ceff37
Improve lambda labs instruction
2023-06-09 15:03:08 +09:00
NanoCode012
55b8542de8
Feat: Add landmark attention
2023-06-09 12:54:08 +09:00
NanoCode012
c8242de725
Merge pull request #132 from utensil/falcon-7b-qlora
...
Axolotl supports falcon + qlora
2023-06-09 01:14:03 +09:00
NanoCode012
f8d379883d
Merge pull request #162 from NanoCode012/fix/custom-prompt-readme
...
Fix: Move custom prompts out of hidden
2023-06-08 23:05:17 +09:00
NanoCode012
2097a09d2d
Move custom prompts out of hidden
2023-06-08 22:53:56 +09:00
NanoCode012
cfff94b123
Add peft install for quickstart
2023-06-08 22:50:20 +09:00
NanoCode012
2b222de5b6
Update peft and gptq instruction
2023-06-08 22:48:26 +09:00
Wing Lian
5b33e295bd
update docs
2023-06-05 22:48:16 -04:00
Wing Lian
618816d4df
Update README.md for correct image tags
2023-06-02 14:10:23 -04:00
FarisHijazi
84169d15b3
added docker-compose file
2023-06-02 18:17:43 +03:00
Wing Lian
ecfe8d0a1a
Merge pull request #142 from NanoCode012/feat/custom-prompt-readme
...
Feat: Add custom prompt readme and add missing prompt strategies to Readme
2023-06-02 07:21:04 -04:00
NanoCode012
078a43eef8
Remove redundant instruction
2023-06-02 12:30:11 +09:00
NanoCode012
33e1890086
Add pygmalion
2023-06-02 12:27:51 +09:00
NanoCode012
1c38253692
Add other prompt_strategies
2023-06-02 12:24:44 +09:00
NanoCode012
496b83f778
Add short instruction for custom prompts
2023-06-02 12:16:20 +09:00
NanoCode012
ff68a95781
Add lambdalabs instruction
2023-06-02 12:09:40 +09:00
NanoCode012
3c71c8debe
Update doc for grad_accu and add validation tests for batch size
2023-06-01 06:13:47 +09:00
Wing Lian
f94dd626f0
Merge pull request #130 from OpenAccess-AI-Collective/gas
...
swap batch size for gradient accumulation steps to decouple from num gpu
2023-05-31 13:03:51 -04:00
Utensil
8afb0fbaba
Axolotl supports falcon + qlora
2023-05-31 23:58:40 +08:00
Wing Lian
c2a0792680
swap batch size for gradient accumulation steps to decouple from num gpu
2023-05-31 09:38:12 -04:00
Wing Lian
b267d24a2b
add badge info to readme
2023-05-31 09:28:44 -04:00
NanoCode012
0e4be625ae
Merge pull request #118 from NanoCode012/feat/torch-readme
...
Fix(readme): Fix torch missing from readme
2023-05-31 13:29:41 +09:00
NanoCode012
bdc4bd7d4e
Update README.md
2023-05-31 13:24:28 +09:00
Wing Lian
2d0ba3b818
Merge pull request #124 from OpenAccess-AI-Collective/xformers-fix
...
copy xformers attn from ooba since we removed dep on alpaca_lora_4bit
2023-05-31 00:11:40 -04:00
Wing Lian
2675fb756e
update readme for SDP
2023-05-31 00:04:54 -04:00
Wing Lian
e3c494ca7b
remove unused import and update readme
2023-05-30 23:55:45 -04:00
NanoCode012
cf61f14bff
Fix(readme): Fix torch missing from readme
2023-05-31 10:28:49 +09:00