axolotl / examples / llama-3 (at commit beaee36191a2c67408639d7f161beca0f2e059d2)

Latest commit: fix optimizer + fsdp combination in example (#1893) by Wing Lian (dca1fe47d4), 2024-09-04

| File | Last commit | Date |
| ---- | ----------- | ---- |
| fft-8b-liger-fsdp.yaml | fix optimizer + fsdp combination in example (#1893) | 2024-09-04 |
| fft-8b.yaml | add liger example (#1864) | 2024-08-23 |
| instruct-dpo-lora-8b.yml | Add a `chat_template` prompt strategy for DPO (#1725) | 2024-07-21 |
| instruct-lora-8b.yml | Update instruct-lora-8b.yml (#1789) [skip ci] | 2024-08-05 |
| lora-8b.yml | bump transformers and set roundup_power2_divisions for more VRAM improvements, low bit ao optimizers (#1769) | 2024-07-19 |
| qlora-fsdp-70b.yaml | update outputs path so that we can mount workspace to /workspace/data (#1623) | 2024-05-15 |
| qlora-fsdp-405b.yaml | qlora-fsdp ram efficient loading with hf trainer (#1791) | 2024-07-30 |
| qlora.yml | bump transformers and set roundup_power2_divisions for more VRAM improvements, low bit ao optimizers (#1769) | 2024-07-19 |
| README.md | llama-3 examples (#1537) | 2024-04-18 |
# Llama-3

https://llama.meta.com/llama3/

## 8B Base Model

- Full Fine Tune: Single GPU @ 48GB VRAM
- LoRA: Single GPU @ 11GB VRAM

## 70B Base Model

- QLoRA + FSDP: Dual GPU @ 21GB VRAM
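The hardware notes above correspond to the example configs in this directory. As an illustrative sketch only (the values below are assumptions in the general style of axolotl LoRA configs, not copied from `lora-8b.yml` or any other file listed here), a minimal LoRA config for the 8B base model might look like:

```yaml
# Hypothetical sketch of an axolotl LoRA config for Llama-3 8B.
# All values are illustrative assumptions, not the repository's actual settings.
base_model: meta-llama/Meta-Llama-3-8B

# Load the base weights in 8-bit and train a LoRA adapter on top.
load_in_8bit: true
adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

# Example dataset in alpaca format (placeholder choice).
datasets:
  - path: teknium/GPT4-LLM-Cleaned
    type: alpaca

sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
optimizer: adamw_bnb_8bit
learning_rate: 0.0002
output_dir: ./outputs/lora-out
```

A config like this is typically launched with axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train examples/llama-3/lora-8b.yml`.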