Fix eval_sample_packing in llama-3 lora example (#1716) [skip ci]

* Fix eval_sample_packing in llama-3 lora example

* Update examples/llama-3/lora-8b.yml

Co-authored-by: Wing Lian <wing.lian@gmail.com>

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
RodriMora
2024-07-13 20:34:44 +02:00
committed by GitHub
parent 634f384e06
commit 219cd0d3c5


@@ -15,6 +15,7 @@ output_dir: ./outputs/lora-out
 sequence_len: 4096
 sample_packing: true
+eval_sample_packing: false
 pad_to_sequence_len: true
 adapter: lora
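For context, after this change the packing-related section of examples/llama-3/lora-8b.yml reads roughly as follows. Only the lines visible in the hunk are known from this commit; the inline comment is an assumption about the intent of the fix, not text from the example file:

```yaml
sequence_len: 4096
sample_packing: true
# Assumption: evaluation batches are left unpacked so eval runs do not
# hit sample-packing edge cases; the commit itself only shows the flag
# being set to false.
eval_sample_packing: false
pad_to_sequence_len: true
adapter: lora
```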