* fix: use apply_chat_template to find turn boundaries and allow tool_calling field
* fix: keys to include in turn
* feat(doc): explicitly recommend setting train_on_eos and roles_to_train
* fix: eos not being masked for tool due to template padding
* chore: clear up docs
* fix: default messages format, train_on_eos: turn, and train on all assistant msg
* fix: properly warn if empty content
* feat: parametrize chat_template tests to test different tokenizers
* fix: set proper default for message key
* fix: update defaults to match load function
* fix: change defaults to use new
* feat: add tool_calling dataset
* feat: add tool_calling test
* fix: add handling of edge case of mistral tokenizer with only system prompt
* feat: refactor all tests to follow source code
* fix: remove unnecessary eos_token from phi35
* fix test for phi3.5 since eos was dropped from chat_template
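The chat-template fixes above all revolve around per-turn label masking: only tokens from trainable roles keep their ids, and `train_on_eos: turn` decides whether the EOS closing a trained turn is learned. A minimal pure-Python sketch of that idea, where `mask_labels`, `turn_spans`, and the toy token ids are illustrative assumptions, not axolotl's actual API:

```python
IGNORE_INDEX = -100  # label value the loss function skips

def mask_labels(token_ids, turn_spans, roles_to_train=("assistant",),
                train_on_eos="turn", eos_id=2):
    """Keep labels only inside trainable turns; mask everything else.

    turn_spans is a list of (start, end, role) half-open token ranges.
    Hypothetical helper; names and signature are illustrative only.
    """
    labels = [IGNORE_INDEX] * len(token_ids)
    for start, end, role in turn_spans:
        if role in roles_to_train:
            labels[start:end] = token_ids[start:end]
            # train_on_eos="turn": also train on the EOS that closes a
            # trained turn (the "eos not being masked" fixes above are
            # about locating this boundary correctly per template)
            if train_on_eos == "turn" and end < len(token_ids) and token_ids[end] == eos_id:
                labels[end] = eos_id
    return labels

tokens = [1, 5, 6, 2, 7, 8, 2]                 # toy ids, 2 = EOS
spans = [(1, 3, "user"), (4, 6, "assistant")]
labels = mask_labels(tokens, spans)
# -> [-100, -100, -100, -100, 7, 8, 2]
```

The real implementation finds `turn_spans` by tokenizing with `apply_chat_template` turn by turn, which is why the boundary-finding commits above exist.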
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
* move the setting of PYTORCH_CUDA_ALLOC_CONF to the cli rather than train module
* move set_pytorch_cuda_alloc_conf to a different module to have fewer loaded dependencies for the CLI
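Moving the allocator setup into the CLI matters because `PYTORCH_CUDA_ALLOC_CONF` is only consulted when torch initializes its CUDA allocator. A sketch of the ordering constraint (the env var name is real; the value shown is just one valid example, not necessarily what axolotl sets):

```python
import os

# Must run before `import torch` (and before importing any module that
# pulls torch in transitively), which is why the log moves this into the
# CLI entry point: once the CUDA caching allocator initializes, changing
# the variable has no effect.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

conf = os.environ["PYTORCH_CUDA_ALLOC_CONF"]
```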
* transformers 4.47.1
* drop monkeypatches
* can't remove patches yet
* make flash attention forward ignore the loss kwargs
* patch the flash attention in the modeling arch too
* remove fsdp and deepspeed patches
* cleanup PR
* bump accelerate and torchao, also logically reorder/group requirements
* meant to include torchao
* use official patch release
* add pytorch profiling
* kick off the profiler asap since things may get allocated before train start
* document feature
* add url for visualizer [skip ci]
* fix build with pyproject to respect installed torch version
* include in manifest
* disable duplicate code check for now
* move parser so it can be found
* add checks for correct pytorch version so this doesn't slip by again
* update quickstart for new CLI
* add blurb about bleeding edge builds
* missed a yaml reference
* prefer lora over qlora for examples
* fix commands for parity with previous instructions
* consistency on pip/pip3 install
* one more parity pip=>pip3
* remove extraneous options in example yaml
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* update copy
* update badges and for discord and socials in readme
* Fix a few broken links
* bump version to 0.6.0 for release
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* need to update deepspeed version in extras too
* fix patch import
* fix monkeypatch reloading in tests and deepspeed patch
* remove duplicated functionality fixture
* reset LlamaForCausalLM too in fixtures for cce patch
* reset llama attn too
* disable xformers patch for cce
* skip problematic test on low-usage functionality
* fix: chat_template masking due to truncation, consolidate turn build and keys within field
* fix: revert roles change
* fix: handling of training and training_detail
* fix: do not skip setting eos mask even if failed finding turn boundary
* fix: truncate reward modelling outputs
* allow flexibility in transformers version for FSDP
* more flexibility with dev versions of 4.47.0.dev0
* add patch for fsdp
* fix typo
* correct fn name
* stray character
* fix patch
* reset Trainer too
* also reset Trainer.training_step
* allow tests/patched to run more than one process on e2e runner
* skip tests/patched in e2e for now since it's run in regular pytest
* reset known modules that are patched on each test function end
* fix the llama model module name
* prevent unsloth patching multiple times
* pop classes out of the globals after reset
* fix tuple indexing
* manually workaround for llama fa2
* bump transformers and trl
* fix: update trainer.log signature
* fix trl trainer.log interfaces
* broken 🦥 with latest transformers
* skip parent, call grandparent - yeah, super janky
* update HF HUB env var and fix reward trainer log since it doesn't directly override log
* also bump accelerate
* patches for llama ga
* detab the code to check
* fix whitespace for patch check
* play nicely with CI tests since we patch every time
* fix pop default in case it doesn't exist
* more tweaks to make patches nicer in CI
* fix detab for when there are possibly multiple patches
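Several of the patching fixes above ("detab the code to check", the whitespace fix) concern verifying that the upstream source still matches what a monkeypatch expects. Source extracted from inside a method carries indentation, so the snippet must be dedented before the substring check. A stdlib-only sketch; the snippet contents are made up, and the real check inspects transformers source rather than a literal:

```python
import textwrap

# Snippet we expect inside the upstream function before patching it
# (contents invented for illustration).
EXPECTED = "if labels is not None:\n    loss = fixed_cross_entropy(logits, labels)\n"

# The same code as it appears inside a class method: every line carries
# extra indentation, so a naive substring check fails.
extracted = (
    "        if labels is not None:\n"
    "            loss = fixed_cross_entropy(logits, labels)\n"
)

naive_match = EXPECTED in extracted                      # indentation differs
detabbed_match = EXPECTED in textwrap.dedent(extracted)  # matches after detab
```

With multiple patches against the same module, each expected snippet has to be dedented independently, which is the "possibly multiple patches" fix above.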
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* reduce test concurrency to avoid HF rate limiting, test suite parity
* make val_set_size smaller to speed up e2e tests
* more retries for pytest fixture downloads
* val_set_size was too small
* move retry_on_request_exceptions to data utils and add retry strategy
* pre-download ultrafeedback as a test fixture
* refactor download retry into its own fn
* don't import from data utils
* use retry mechanism now for fixtures
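The retry work above can be sketched as a generic decorator. The name `retry_on_request_exceptions` matches the log, but the signature, backoff strategy, and use of the builtin `ConnectionError` (in place of HF Hub/requests errors) are assumptions to keep the example self-contained:

```python
import functools
import time

def retry_on_request_exceptions(exceptions=(ConnectionError,), max_retries=3, delay=0.0):
    """Retry a flaky download-style call on transient exceptions.

    Sketch only: the real helper wraps network errors from dataset and
    fixture downloads; this version retries any of `exceptions`.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == max_retries - 1:
                        raise  # out of retries: surface the error
                    time.sleep(delay * (2 ** attempt))  # exponential backoff
        return wrapper
    return decorator

calls = {"n": 0}

@retry_on_request_exceptions(max_retries=3)
def flaky_download():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "fixture downloaded"

result = flaky_download()  # succeeds on the third attempt
```

Applying the same decorator to test-fixture downloads (rather than importing from data utils) is what the last two commits describe.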