* add uv tooling for e2e gpu tests
* fixes from PR feedback
* simplify check
* fix env var
* make sure to use uv for other installs
* use raw_dockerfile_image
* Fix import
* fix args to experimental dockerfile image call
* use updated modal versions
* remove unused field for chat_template.default
"messages" field present in final dataset causes issues with DPO
training otherwise
* lint and fix tests for new return value
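A minimal sketch of the idea behind the fix above, assuming an HF `datasets.Dataset`; the helper name is illustrative, not axolotl's actual code:

```python
from datasets import Dataset

def drop_messages_column(ds: Dataset) -> Dataset:
    # A leftover "messages" column in the final dataset confuses DPO
    # collation, which expects only prompt/chosen/rejected-style fields.
    if "messages" in ds.column_names:
        ds = ds.remove_columns(["messages"])
    return ds

# Usage: `pairs` now carries only the fields the DPO trainer expects.
pairs = Dataset.from_dict({
    "prompt": ["Hi"], "chosen": ["Hello!"], "rejected": ["Go away."],
    "messages": [[{"role": "user", "content": "Hi"}]],
})
pairs = drop_messages_column(pairs)
```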
* fix for updated expected fields for dpo
* fix test still expecting "messages" field
* chore: lint
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
* bump hf deps
* upgrade liger-kernel too
* install cce from fork for transformers fix
* fix reference to vocab size in gemma3 patch
* use padding_idx instead of pad_token_id
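A hedged illustration of the switch above (toy values; that the two ids can diverge in the patched code path is an assumption, not confirmed by the commit):

```python
import torch
import torch.nn as nn

# The embedding layer records its own pad index; masking against it avoids
# relying on config.pad_token_id, which the commit above moves away from.
embed = nn.Embedding(num_embeddings=8, embedding_dim=4, padding_idx=0)
input_ids = torch.tensor([5, 0, 3])
not_padding = input_ids != embed.padding_idx  # tensor([True, False, True])
```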
* remove fixed gemma3 patch
* use updated cce fork
* fix local mllama cce patches with docstring
* add test for multipack with trainer setup and fix trainer for the upstream trainer refactor
* bump modal version
* guard for iterable datasets
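A hedged sketch of the kind of guard meant above, using HF `datasets` types (the helper is illustrative):

```python
from datasets import Dataset, IterableDataset

def dataset_length(ds):
    # Streaming IterableDataset has no len() and no random access, so any
    # code path that assumes a map-style Dataset needs a guard like this.
    if isinstance(ds, IterableDataset):
        return None
    return len(ds)
```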
* mllama model arch layout changed in latest transformers
* fix batch sampler with drop_last
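For context, `drop_last` semantics in a stock PyTorch batch sampler (toy numbers; the actual fix touches axolotl's own sampler):

```python
from torch.utils.data import BatchSampler, SequentialSampler

# With drop_last=True the trailing short batch is discarded, so every
# batch the trainer sees has a uniform size.
batches = list(BatchSampler(SequentialSampler(range(10)), batch_size=4, drop_last=True))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7]] -- the partial [8, 9] is dropped
```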
* fix: address upstream vlm changes for lora
* fix: update references to old lora target path
* fix: remove mllama fa2 patch due to upstream fix
* fix: lora kernel patch path for multimodal models
* fix: removed mllama from quarto
* run test for CAME optim on torch 2.6.0+
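A hedged sketch of such a version gate for the test suite (the marker name is illustrative, and reading 2.6.0 as a torch version is inferred from the commit wording):

```python
import pytest
import torch
from packaging import version

# Skip the CAME optimizer test on older torch builds; 2.6.0 is the floor
# named in the commit above.
requires_torch_2_6 = pytest.mark.skipif(
    version.parse(torch.__version__) < version.parse("2.6.0"),
    reason="CAME optimizer test needs torch >= 2.6.0",
)

@requires_torch_2_6
def test_came_optimizer():
    ...
```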
* fix fsdp2 patch and remove deprecated patch
* make sure to set sequence_parallel_degree for grpo
* Add SP test for GRPO
* add sp to grpo config for trainer
* use reward_funcs as kwarg to grpo trainer
* fix the comprehension for reward funcs
* reward funcs already passed in as args
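A hedged, minimal sketch of passing reward functions as a keyword to TRL's GRPOTrainer, per the commits above (the toy reward, model name, and dataset are illustrative):

```python
from datasets import Dataset
from trl import GRPOTrainer

def brevity_reward(completions, **kwargs):
    # Toy reward: shorter completions score higher.
    return [-float(len(c)) for c in completions]

train_ds = Dataset.from_dict({"prompt": ["Say hi.", "Count to three."]})

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=[brevity_reward],  # passed as a kwarg, per the commit above
    train_dataset=train_ds,
)
```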
* init sp_group right before training
* fix check for adding models to SP context
* make sure to pass args to super
* upgrade deepspeed
* use updated trl and add reasoning flags for vllm
* patch the worker
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* chore: update pre-commit hooks
* trigger linter when pre-commit hooks are updated
* fix type checks from upgraded pre-commit
---------
Co-authored-by: djsaunde <1245942+djsaunde@users.noreply.github.com>
Co-authored-by: Wing Lian <wing@axolotl.ai>
* fix: increase log level for root loggers and axolotl's
* fix: BasePlugin using wrong logger
* fix: update logger to take name from module
* feat: change logger class to AxolotlLogger to filter out non-axolotl messages at INFO level or below
* fix: change behavior to not disable existing loggers
* fix: update logging to respect correct env
* chore: fix comment
* fix: suppress accelerate log to LOG_LEVEL if not set
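A hedged sketch of the filtering behavior described above, using a plain `logging.Filter` rather than axolotl's actual `AxolotlLogger` class (names here are illustrative):

```python
import logging
import os

class NonAxolotlFilter(logging.Filter):
    """Drop INFO-and-below records from loggers outside the axolotl namespace."""

    def filter(self, record: logging.LogRecord) -> bool:
        if record.name.startswith("axolotl"):
            return True
        return record.levelno > logging.INFO

# Respect LOG_LEVEL from the environment, defaulting to INFO if unset.
level = os.environ.get("LOG_LEVEL", "INFO").upper()
handler = logging.StreamHandler()
handler.addFilter(NonAxolotlFilter())
logging.basicConfig(level=level, handlers=[handler])
```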
---------
Co-authored-by: salman <salman.mohammadi@outlook.com>