axolotl/examples/llama-3
Commit: 6d9a3c4d817cd57e702b270c04d2b2d2400c3ad4
Latest commit: 6d9a3c4d81 by JohanWork, 2024-10-14 16:00:48 -04:00
examples: Fix config llama3 (#1833) [skip ci]
| File | Last commit | Date |
| --- | --- | --- |
| fft-8b-liger-fsdp.yaml | fix optimizer + fsdp combination in example (#1893) | 2024-09-04 11:28:47 -04:00 |
| fft-8b.yaml | add liger example (#1864) | 2024-08-23 12:37:50 -04:00 |
| instruct-dpo-lora-8b.yml | examples: Fix config llama3 (#1833) [skip ci] | 2024-10-14 16:00:48 -04:00 |
| instruct-lora-8b.yml | examples: Fix config llama3 (#1833) [skip ci] | 2024-10-14 16:00:48 -04:00 |
| lora-8b.yml | bump transformers and set roundup_power2_divisions for more VRAM improvements, low bit ao optimizers (#1769) | 2024-07-19 00:47:07 -04:00 |
| qlora-fsdp-70b.yaml | update outputs path so that we can mount workspace to /workspace/data (#1623) | 2024-05-15 12:44:13 -04:00 |
| qlora-fsdp-405b.yaml | qlora-fsdp ram efficient loading with hf trainer (#1791) | 2024-07-30 19:21:38 -04:00 |
| qlora.yml | bump transformers and set roundup_power2_divisions for more VRAM improvements, low bit ao optimizers (#1769) | 2024-07-19 00:47:07 -04:00 |
| README.md | llama-3 examples (#1537) | 2024-04-18 14:28:03 -04:00 |


# Llama-3

https://llama.meta.com/llama3/

## 8B Base Model

- Full Fine Tune
  - Single GPU @ 48GB VRAM
- LoRA
  - Single GPU @ 11GB VRAM

## 70B Base Model

- QLoRA + FSDP
  - Dual GPU @ 21GB VRAM
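For orientation, here is a minimal sketch of the kind of YAML config these example files contain. The field names follow axolotl's config schema; the model name, dataset, and hyperparameter values below are illustrative assumptions, not values copied from lora-8b.yml or any other file in this directory.

```yaml
# Illustrative LoRA config for a Llama-3 8B base model, in the style of
# lora-8b.yml. Values are assumptions for illustration only.
base_model: meta-llama/Meta-Llama-3-8B

# Load the base weights in 8-bit and train a LoRA adapter on top,
# which is what keeps single-GPU VRAM usage low.
load_in_8bit: true
adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

# Dataset spec: a path plus a prompt-format type.
datasets:
  - path: mhenrichsen/alpaca_2k_test   # hypothetical example dataset
    type: alpaca

sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
lr_scheduler: cosine

output_dir: ./outputs/lora-out
```

A config like this is typically launched through axolotl's trainer entrypoint, e.g. `accelerate launch -m axolotl.cli.train examples/llama-3/lora-8b.yml`, the invocation style documented in axolotl's top-level README around the time of this commit.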