The current yml config throws an error: `ValueError: Please set lora_modules_to_save to [embed_tokens, lm_head] when using an adapter and changing the special tokens.` This PR adds the required `lora_modules_to_save` setting to resolve it.
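A minimal sketch of the fix, taken directly from the error message (the surrounding keys depend on the rest of the config file): when special tokens are changed while training a LoRA adapter, the embedding and output-head modules must be saved alongside the adapter.

```yaml
# Save these full modules with the LoRA adapter, since changing
# special tokens resizes/modifies the embeddings and output head.
lora_modules_to_save:
  - embed_tokens
  - lm_head
```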
Llama-3
https://llama.meta.com/llama3/
- Full Fine Tune
- Single GPU @ 48GB VRAM
- LoRA
- Single GPU @ 11GB VRAM
- QLoRA + FSDP
- Dual GPU @ 21GB VRAM