* fix: run deduplication before saving dataset during preprocessing
Move the deduplicate_and_log_datasets call before save_preprocessed_dataset
in both the SFT and RL data-loading pipelines. This ensures the saved
preprocessed dataset is already deduplicated, so subsequent loads from
the cache don't contain duplicates.
Fixes #2719
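The reordering can be sketched as follows (hypothetical helper names, not the actual axolotl API — only the before/after ordering mirrors the fix):

```python
import json
from pathlib import Path


def deduplicate(rows):
    """Exact deduplication: keep the first occurrence of each row."""
    seen, out = set(), []
    for row in rows:
        key = json.dumps(row, sort_keys=True)
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out


def preprocess_and_save(rows, cache_path, dedup_enabled):
    # The fix: deduplicate BEFORE saving, so the cached file
    # (and every later load from it) is already duplicate-free.
    if dedup_enabled:
        rows = deduplicate(rows)
    Path(cache_path).write_text(json.dumps(rows))
    return rows
```

Previously the save happened first, so the on-disk cache kept the duplicates even though the in-memory dataset was cleaned.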
* fix: include deduplication flag in dataset hash and warn on skip_prepare_dataset+dedup
- Add dataset_exact_deduplication to the hash string in
generate_dataset_hash_from_config so cached datasets are invalidated
when the dedup setting changes.
- Log a warning when skip_prepare_dataset=True and
dataset_exact_deduplication=True, since dedup will be silently
skipped in that configuration (both SFT and RL paths).
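A minimal sketch of the hashing change (the field names match the config options above, but the function body is illustrative, not the real axolotl implementation):

```python
import hashlib


def generate_dataset_hash_from_config(cfg: dict) -> str:
    """Build a cache key from the dataset-relevant config fields.

    Sketch only: the point is that dataset_exact_deduplication is now
    part of the hashed string, so toggling it invalidates any
    previously prepared (cached) dataset.
    """
    parts = [
        str(cfg.get("datasets")),
        str(cfg.get("sequence_len")),
        # The fix: fold the dedup flag into the cache key.
        str(cfg.get("dataset_exact_deduplication", False)),
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
```

Without this, a run with dedup enabled could silently reuse a cache written with dedup disabled (and vice versa).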
* fix: add ValueError for skip_prepare+dedup, fix test mock target and formatting
- Add config validator (check_deduplication_with_skip_prepare) that raises
ValueError when skip_prepare_dataset=True and dataset_exact_deduplication=True
- Replace runtime warnings in sft.py/rl.py with the validator check
- Fix RL test: patch axolotl.utils.data.rl.load_tokenizer instead of
axolotl.loaders.load_tokenizer to properly mock the imported reference
- Fix ruff lint (remove unused imports) and formatting issues
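The validator's logic can be sketched like this (axolotl's config validators are pydantic-based; this standalone function mirrors only the check described above):

```python
def check_deduplication_with_skip_prepare(cfg: dict) -> None:
    """Reject the skip_prepare_dataset + dataset_exact_deduplication combo.

    With skip_prepare_dataset=True the prepare step that performs
    deduplication never runs, so the flag would be silently ignored;
    failing fast at config-validation time makes that explicit.
    """
    if cfg.get("skip_prepare_dataset") and cfg.get("dataset_exact_deduplication"):
        raise ValueError(
            "dataset_exact_deduplication is not supported with "
            "skip_prepare_dataset; deduplication would be silently skipped."
        )
```

Raising at validation time replaces the earlier runtime warnings in sft.py/rl.py, which were easy to miss in training logs.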
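The test fix follows the standard unittest.mock rule: patch the name where it is *looked up*, not where it is defined. A self-contained demonstration with throwaway in-memory modules (the `demo_*` names are stand-ins for the axolotl modules):

```python
import sys
import types
from unittest import mock

# "demo_loaders" defines load_tokenizer; "demo_rl" does the equivalent of
# "from demo_loaders import load_tokenizer", binding its own reference.
loaders = types.ModuleType("demo_loaders")
loaders.load_tokenizer = lambda: "real"
sys.modules["demo_loaders"] = loaders

rl = types.ModuleType("demo_rl")
rl.load_tokenizer = loaders.load_tokenizer
sys.modules["demo_rl"] = rl

# Patching the defining module does NOT affect rl's already-bound reference:
with mock.patch("demo_loaders.load_tokenizer", return_value="fake"):
    unaffected = rl.load_tokenizer()  # still "real"

# Patching the name where rl actually looks it up DOES take effect:
with mock.patch("demo_rl.load_tokenizer", return_value="fake"):
    patched = rl.load_tokenizer()  # "fake"
```

This is why the RL test must patch axolotl.utils.data.rl.load_tokenizer rather than axolotl.loaders.load_tokenizer.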
* refactor: inline deduplicate function per review feedback
* fix: test fixture and lint issues
---------
Co-authored-by: ManasVardhan <manasvardhan@users.noreply.github.com>
Co-authored-by: Wing Lian <wing@axolotl.ai>