* fix: use text_column even when not packing for pretraining (see sketch below)
* feat: update test to check when not packing
* chore: lint
* Update src/axolotl/utils/data/pretraining.py
Co-authored-by: Wing Lian <wing.lian@gmail.com>
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
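The text_column fix above concerns the non-packed pretraining path. A minimal sketch of the idea, where `tokenize_fn` and its arguments are illustrative names, not axolotl's actual helper:

```python
# Hypothetical sketch: honor a configurable text_column when tokenizing
# a pretraining dataset, whether or not sample packing is enabled.
def tokenize_fn(examples, tokenizer, text_column="text", max_length=2048):
    # read from the configured column instead of assuming "text"
    return tokenizer(
        examples[text_column],
        truncation=True,
        max_length=max_length,
    )

# usage (illustrative):
# dataset.map(lambda ex: tokenize_fn(ex, tokenizer, text_column="content"),
#             batched=True)
```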
* add helper to verify the correct model output file exists (see sketch below)
* more checks using helper
* chore: lint
* fix import and relora model check
* workaround for trl trainer saves
* remove stray print
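The helper mentioned above verifies training artifacts. A sketch under the assumption that full fine-tunes write `model.safetensors` and LoRA/ReLoRA runs write `adapter_model.safetensors` (standard transformers/PEFT filenames); the helper name here is hypothetical:

```python
from pathlib import Path

# Hypothetical test helper: assert the expected model artifact exists
# after a training run. Full fine-tunes save full weights; adapter-based
# runs (LoRA/ReLoRA) save a PEFT adapter file instead.
def check_model_output_exists(output_dir: str, adapter: bool = False) -> None:
    expected = "adapter_model.safetensors" if adapter else "model.safetensors"
    path = Path(output_dir) / expected
    assert path.exists(), f"expected {path} to exist after training"
```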
* fix: use apply_chat_template to find turn boundaries and allow tool_calling field (see sketch below)
* fix: keys to include in turn
* feat(doc): explicitly recommend setting train_on_eos and roles_to_train
* fix: eos not being masked for tool due to template padding
* chore: clear up docs
* fix: default messages format, train_on_eos: turn, and train on all assistant msg
* fix: properly warn if empty content
* feat: parametrize chat_template tests to test different tokenizers
* fix: set proper default for message key
* fix: update defaults to match load function
* fix: change defaults to use new
* feat: add tool_calling dataset
* feat: add tool_calling test
* fix: add handling of edge case of mistral tokenizer with only system prompt
* feat: refactor all tests to follow source code
* fix: remove unnecessary eos_token from phi35
* fix test for phi3.5 since eos was dropped from chat_template
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
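The turn-boundary work above relies on tokenizing successively longer message prefixes with the chat template. A minimal sketch of that idea, assuming a prefix-stable template (which, as the padding/truncation fixes above suggest, does not hold for every tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "user", "content": "What's 2 + 2?"},
    {"role": "assistant", "content": "4."},
]

# Tokenize each growing prefix; the delta between consecutive lengths
# approximates the token span contributed by each turn.
spans, prev_len = [], 0
for i in range(len(messages)):
    ids = tokenizer.apply_chat_template(messages[: i + 1], tokenize=True)
    spans.append((prev_len, len(ids)))
    prev_len = len(ids)
# spans[i] can then be used to mask or train on turn i's tokens.
```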
* move the setting of PYTORCH_CUDA_ALLOC_CONF to the cli rather than train module
* move set_pytorch_cuda_alloc_conf to a different module to have fewer loaded dependencies for the CLI
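Moving the allocator config into the CLI matters because PYTORCH_CUDA_ALLOC_CONF must be set before CUDA is initialized. A minimal sketch; the `expandable_segments` value is an illustrative assumption, not necessarily what axolotl sets:

```python
import os

# Must run before torch initializes CUDA, hence at the CLI entry point
# rather than deep inside the train module.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch  # noqa: E402  imported only after the env var is in place
```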
* transformers 4.47.1
* drop monkeypatches
* can't remove patches yet
* make flash attention forward ignore the loss kwargs
* patch the flash attention in the modeling arch too
* remove fsdp and deepspeed patches
* cleanup PR
* bump accelerate and torchao, also logically reorder/group requirements
* meant to include torchao
* use official patch release
* add pytorch profiling
* kick off the profiler asap since memory may be allocated before training starts
* document feature
* add url for visualizer [skip ci]
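The profiling commits above start the profiler before training so early allocations are captured in the trace. A minimal torch.profiler sketch of that pattern (the output filename is illustrative):

```python
from torch.profiler import ProfilerActivity, profile

# Start profiling as early as possible so allocations made before the
# first training step still show up in the trace.
prof = profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    profile_memory=True,
)
prof.start()

# ... run training ...

prof.stop()
prof.export_chrome_trace("trace.json")  # open in a chrome-trace visualizer
```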
* fix build with pyproject to respect installed torch version
* include in manifest
* disable duplicate code check for now
* move parser so it can be found
* add checks for correct pytorch version so this doesn't slip by again
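A sketch of the kind of guard described above, assuming `packaging` is available; the minimum version and function name are hypothetical:

```python
import torch
from packaging.version import Version

def ensure_torch_version(minimum: str = "2.3.0") -> None:
    # strip any local build suffix like "+cu121" before comparing
    installed = Version(torch.__version__.split("+")[0])
    if installed < Version(minimum):
        raise RuntimeError(
            f"torch>={minimum} is required, but found {torch.__version__}"
        )
```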
* update quickstart for new CLI
* add blurb about bleeding edge builds
* missed a yaml reference
* prefer lora over qlora for examples
* fix commands for parity with previous instructions
* consistency on pip/pip3 install
* one more pip=>pip3 for parity
* remove extraneous options in example yaml
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* update copy
* update badges for discord and socials in readme
* Fix a few broken links
* bump version to 0.6.0 for release
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* need to update deepspeed version in extras too
* fix patch import
* fix monkeypatch reloading in tests and deepspeed patch (see sketch below)
* remove duplicated functionality fixture
* reset LlamaForCausalLM too in fixtures for cce patch
* reset llama attn too
* disable xformers patch for cce
* skip problematic test on low usage functionality
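The fixture commits above reset patched modules between tests so monkeypatches don't leak across test cases. A hypothetical pytest sketch of that pattern:

```python
import importlib

import pytest

@pytest.fixture(autouse=True)
def reset_llama_modeling():
    # run the test, then reload the module so any monkeypatched
    # attention/forward implementations are restored for the next test
    yield
    import transformers.models.llama.modeling_llama as modeling_llama
    importlib.reload(modeling_llama)
```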
* fix: chat_template masking broken by truncation; consolidate turn building and keys-within-field handling
* fix: revert roles change
* fix: handling of training and training_detail
* fix: do not skip setting eos mask even if failed finding turn boundary
* fix: truncate reward modelling outputs
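The reward-modelling truncation fix above amounts to clipping tokenized pairs to the configured sequence length. A hypothetical sketch; the field names follow TRL's reward-trainer convention but are assumptions here:

```python
# Hypothetical sketch: clip tokenized chosen/rejected pairs to the
# configured sequence length before collation.
def truncate_rm_example(example: dict, max_len: int = 2048) -> dict:
    for key in (
        "input_ids_chosen",
        "attention_mask_chosen",
        "input_ids_rejected",
        "attention_mask_rejected",
    ):
        example[key] = example[key][:max_len]
    return example
```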