* feat: add llama4 multimodal
* feat: add torchvision to base docker
* just use latest torchvision
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
* llama4 support
* add xet support [skip ci]
* be flexible on transformers version and skip test on version
* don't use deepspeed for the fix_untrained_tokens test
* reordering to trigger torch 2.6.0 tests first
* slightly smaller train set
* use 4.51.0 for now
* remove stray print, add llama4 chat template to schema, bump peft to 0.15.1
* patches to make llama4 performant
* add preliminary fp8 support
* fsdp2 support
* use accelerate release 1.6.0
* allow 8bit optims with fsdp2
* liger + torch compile fix
* add fsdp2 e2e tests
* use transformers commit with fsdp2 support
* skip zero3 tests for this PR for now
* fix fsdp2 config for ci
* make sure both flex and flash attn work with fsdp2, skip fix untrained tokens
* okay, actually use fsdp2...
* more fixes to flex for fsdp2
* make sure to patch all the loaded models
* additional validation for fsdp2, bump dep versions
* make torch 2.6.0 the default image
* fix tests against upstream main
* fix attribute access
* use fixture dataset
* fix dataset load
* correct the fixtures + tests
* more fixtures
* add accidentally removed shakespeare fixture
* fix conversion from unittest to pytest class
* nightly main ci caches
* build 12.6.3 cuda base image
* override for fix from huggingface/transformers#37162
* address PR feedback
* make gemma3 work with packing
* multi-gpu e2e for ci
* update gemma3 model namespace to use mirror
* add gradient checkpointing to multigpu e2e ci
* update gemma3 examples for use_reentrant and fix ddp find unused params
* fix tests for gemma3
* fix import for test utils
* set correct train loss for gemma3 e2e
* fix: clarify input type
* fix: handling of error message if data_files not available
* fix: clarify attention handling
* fix: add doc on missing pad token
* add grpo scale_rewards config for trl#3135
* options to connect to vllm server directly with grpo trl#3094
* temperature support trl#3029
* sampling/generation kwargs for grpo trl#2989
* make vllm_enable_prefix_caching a config param trl#2900
* grpo multi-step optimizations trl#2899
* remove overrides for grpo trainer
* bump trl to 0.16.0
* add cli to start vllm-serve via trl
* call the python module directly
* update to use vllm with torch 2.6.0 as well and call trl vllm serve from the module
* vllm 0.8.1
* use python3
* use sys.executable
* remove context and wait for start
* fixes to make it actually work
* fixes so the grpo tests pass with new vllm paradigm
* explicit host/port and check in start vllm
* make sure that vllm doesn't hang by setting quiet so outputs go to /dev/null
* also bump bnb to latest release
* add option for wait from cli and nccl debugging for ci
* grpo + vllm test on separate devices for now
* make sure grpo + vllm tests run single worker since pynccl comms would conflict
* fix cli
* remove wait and add caching for argilla dataset
* refactoring configs
* chore: lint
* add vllm config
* fixup vllm grpo args
* fix one more incorrect schema/config path
* fix another vllm reference and increase timeout
* make the tests run a bit faster
* change mbsz back so it is correct for grpo
* change mbsz back again so it is correct for grpo
* fixing cli args
* nits
* adding docs
* docs
* include tensor parallel size for vllm in pydantic schema
* moving start_vllm, more docs
* limit output len for grpo vllm
* vllm enable_prefix_caching isn't a bool cli arg
* fix env ordering in tests and also use pid check when looking for vllm
---------
Co-authored-by: Salman Mohammadi <salman.mohammadi@outlook.com>