# Llama-3
https://llama.meta.com/llama3/
- Full Fine Tune
  - Single GPU @ 48GB VRAM
- LoRA
  - Single GPU @ 11GB VRAM
- QLoRA + FSDP
  - Dual GPU @ 21GB VRAM
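Each setup above corresponds to a `.yml` config in this directory. As a rough sketch of what the single-GPU LoRA config looks like, here is a minimal fragment using axolotl-style keys; the specific values shown are illustrative assumptions, not the shipped example, so refer to the actual `lora-8b.yml` for the exact settings:

```yaml
# Illustrative sketch of a Llama-3 8B LoRA config (values are assumptions)
base_model: meta-llama/Meta-Llama-3-8B
load_in_8bit: true
adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
sequence_len: 4096
sample_packing: true
# keep eval packing consistent with how evaluation batches are built
eval_sample_packing: false
```

A config like this is typically launched with axolotl's training CLI, e.g. `accelerate launch -m axolotl.cli.train examples/llama-3/lora-8b.yml`.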