* checkpoint model on first step callback
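A first-step checkpoint callback like this is usually wired into the HF `Trainer` callback protocol. Below is a minimal dependency-free sketch of the idea; the class and attribute names are hypothetical stand-ins (the real implementation would subclass `transformers.TrainerCallback` and receive `TrainerState`/`TrainerControl` objects):

```python
# Stand-ins for the HF Trainer callback protocol (hypothetical, no transformers import)
class TrainerState:
    def __init__(self):
        self.global_step = 0

class TrainerControl:
    def __init__(self):
        self.should_save = False

class SaveModelOnFirstStepCallback:
    """Request a checkpoint save right after the very first optimizer step."""

    def on_step_end(self, state, control):
        if state.global_step == 1:
            # flipping should_save tells the trainer to checkpoint now
            control.should_save = True
        return control

state, control = TrainerState(), TrainerControl()
state.global_step = 1
SaveModelOnFirstStepCallback().on_step_end(state, control)
print(control.should_save)  # True
```

Saving at step one catches serialization bugs immediately instead of after a full (and slow) training run.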
* remove debug
* add test cases; update existing tests not to save on first step
* move test out of solo
* delete
* default to False
* typo
* make sure to validate the config before normalizing so defaults get set
* validation not needed for particular test
* remove duplicate validations
* set qlora correctly
* add helper to verify the correct model output file exists
* more checks using helper
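A test helper for "the correct model output file exists" typically switches on whether an adapter was trained. This is a hypothetical sketch (the helper name, argument, and file names are illustrative, assuming safetensors output):

```python
import tempfile
from pathlib import Path

def check_model_output_exists(output_dir, adapter=None):
    # hypothetical helper: lora/qlora runs save an adapter file,
    # full fine-tunes save the full model weights
    expected = "adapter_model.safetensors" if adapter else "model.safetensors"
    return (Path(output_dir) / expected).exists()

with tempfile.TemporaryDirectory() as tmp_dir:
    # simulate a qlora run's output directory
    Path(tmp_dir, "adapter_model.safetensors").touch()
    qlora_ok = check_model_output_exists(tmp_dir, adapter="qlora")
    full_ok = check_model_output_exists(tmp_dir)

print(qlora_ok, full_ok)  # True False
```

Centralizing the check in one helper keeps every e2e test asserting on the same expected artifact.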
* chore: lint
* fix import and relora model check
* workaround for trl trainer saves
* remove stray print
* bump transformers and trl
* fix: update trainer.log signature
* fix trl trainer.log interfaces
* unsloth (🦥) broken with latest transformers
* skip parent, call grandparent - yeah, super janky
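The "skip parent, call grandparent" trick uses the two-argument form of `super()` to start the MRO lookup *after* the parent class, bypassing its (here, incompatible) override. A minimal sketch with hypothetical class names:

```python
class GrandParent:
    def log(self, logs):
        return f"grandparent: {logs}"

class Parent(GrandParent):
    def log(self, logs):
        # parent override with behavior we need to skip
        raise TypeError("incompatible override")

class Child(Parent):
    def log(self, logs):
        # super(Parent, self) resolves from the class *after* Parent
        # in Child's MRO, so this dispatches to GrandParent.log
        return super(Parent, self).log(logs)

print(Child().log({"loss": 0.1}))  # grandparent: {'loss': 0.1}
```

It is janky because it hard-codes the class hierarchy: if the library inserts a class between `Parent` and `GrandParent`, the call silently changes target.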
* update HF HUB env var and fix reward trainer log since it doesn't directly override log
* also bump accelerate
* patches for llama ga
* detab the code to check
* fix whitespace for patch check
* play nicely with CI tests since we patch every time
* fix pop default in case it doesn't exist
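The pop-default fix is the standard `dict.pop(key, default)` pattern: without the second argument, popping a key that was never passed raises `KeyError`. Sketch (the key name is illustrative):

```python
kwargs = {"num_items_in_batch": 8}  # key name is a hypothetical example

# pop with a default so a missing key doesn't raise KeyError
first = kwargs.pop("num_items_in_batch", None)   # 8
second = kwargs.pop("num_items_in_batch", None)  # None, key already gone

print(first, second)  # 8 None
```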
* more tweaks to make patches nicer in CI
* fix detab for when there are possibly multiple patches
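"Detabbing" here refers to stripping the common leading indentation from retrieved source before comparing it against an expected snippet: a method's source (e.g. from `inspect.getsource`) carries its class-level indentation, which breaks naive string comparison. A minimal sketch using the stdlib, with a simulated source string:

```python
import textwrap

# simulated inspect.getsource output: a method defined inside a class
# carries one level of indentation
method_src = """\
    def forward(self, x):
        return x
"""

# "detab" (dedent) before comparing against the expected source,
# so the check passes regardless of nesting depth
detabbed = textwrap.dedent(method_src)
print(detabbed)
```

When a function is patched more than once, the check has to dedent each candidate region independently, since their indentation depths can differ.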
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* add mhenrichsen/alpaca_2k_test with revision dataset download fixture for flaky tests
* log slowest tests
* pin pynvml==11.5.3
* fix load local hub path
* optimize for speed with smaller models and val_set_size

* replace pynvml
* make the resume from checkpoint e2e faster
* make tests smaller
* see if unsloth installs cleanly in ci
* check unsloth install on regular tests, not sdist
* fix ampere check exception for ci
* use cached_property instead
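Swapping in `functools.cached_property` means the expensive probe (e.g. a GPU capability check) runs once per instance and is memoized as an instance attribute. Sketch with a hypothetical class and a call counter to show the caching:

```python
from functools import cached_property

class GpuInfo:
    calls = 0  # counts how many times the probe actually runs

    @cached_property
    def is_ampere(self):
        # stand-in for an expensive hardware capability check;
        # the result is cached on the instance after the first access
        GpuInfo.calls += 1
        return True

info = GpuInfo()
info.is_ampere
info.is_ampere  # second access hits the cache, probe not re-run
print(GpuInfo.calls)  # 1
```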
* add an e2e test for unsloth qlora
* reduce seq len and micro batch size to prevent OOM in CI
* add checks for fp16 and sdp_attention
* pin unsloth to a specific release
* add unsloth to docker image too
* fix flash attn xentropy patch
* fix loss, add check for loss when using fa_xentropy
* fix special tokens for test
* typo
* test fa xentropy with and without gradient accum
* pr feedback changes