Llama-3
https://llama.meta.com/llama3/
- Full Fine Tune
  - Single GPU @ 48GB VRAM
- LoRA
  - Single GPU @ 11GB VRAM
- QLoRA + FSDP
  - Dual GPU @ 21GB VRAM
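Each setup corresponds to an example config. As a rough usage sketch (the exact config filename is an assumption), a LoRA run could be launched via axolotl's CLI with `accelerate launch -m axolotl.cli.train examples/llama-3/lora-8b.yml`.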
Note: the config will raise an error if the end-of-text token is not used as the pad token while `pad_to_sequence_len` is true.
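A minimal sketch of the relevant settings, assuming the stock axolotl option names (Llama-3 ships without a dedicated pad token, so the end-of-text token is reused):

```yaml
# Pad each sample out to the full sequence length.
pad_to_sequence_len: true

special_tokens:
  # Llama-3 has no dedicated pad token; reusing <|end_of_text|>
  # avoids the error described above.
  pad_token: <|end_of_text|>
```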