* make the initial call to tokenizer.pad not spam the console
* add guard from feedback
* make another common console output less verbose
* more logging fixes
* Added a feature to save the prepared dataset in a specified number of shards, removed the limiter on multiprocessing during tokenization, and fixed a bug in the Qwen tokenizer
* removed limiters and fixed config variable name
* black lint
* chore: lint
* feat: update handling of dataset_processes
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
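For the sharded dataset-save feature above, a minimal sketch of how the `datasets` library exposes sharded, multi-process saving; the path and the shard/process counts are illustrative, and the axolotl config keys that drive them are not shown here.

```python
from datasets import Dataset

# Stand-in for the tokenized dataset produced during preprocessing.
prepared = Dataset.from_dict({"input_ids": [[1, 2, 3]] * 1000})

# `save_to_disk` can split the Arrow files into a fixed number of shards
# and write them with multiple processes (no artificial cap on num_proc).
prepared.save_to_disk(
    "last_run_prepared/example",  # illustrative output path
    num_shards=8,
    num_proc=8,
)
```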
* Apply generic fused liger ce for unknown models
* fix deepseek liger modeling
* generic cce and config tiled mlp to use original mlp and auto detect compute params
* fix weight and lint
* update warnings
* address PR feedback
* use lookup for model class prefixes
* revert inadvertent change to flash attn version
* remove unneeded pylint annotations
* fix import
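For the generic fused cross-entropy work above, a plain-PyTorch sketch of the underlying idea: chunked linear + cross-entropy that never materializes the full `[tokens, vocab]` logits tensor. This is an illustration of the technique only, not the Liger kernel implementation.

```python
import torch
import torch.nn.functional as F

def chunked_linear_cross_entropy(hidden, lm_head_weight, labels, chunk_size=4096):
    """Cross-entropy over the lm_head output without building the full logits.

    hidden: [num_tokens, hidden_dim]; labels: [num_tokens], -100 = ignore.
    """
    total = torch.zeros((), device=hidden.device, dtype=torch.float32)
    valid = (labels != -100).sum().clamp(min=1)
    for start in range(0, hidden.size(0), chunk_size):
        h = hidden[start:start + chunk_size]
        y = labels[start:start + chunk_size]
        logits = h @ lm_head_weight.t()  # only [chunk, vocab] lives at once
        total = total + F.cross_entropy(
            logits.float(), y, ignore_index=-100, reduction="sum"
        )
    return total / valid
```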
* checkpoint model on first step callback
* remove debug
* add test cases; update existing tests not to save on first step
* move test out of solo
* delete
* default to False
* typo
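A minimal sketch of the first-step checkpoint callback using the Hugging Face `TrainerCallback` API; the class name is illustrative, and per the commits above the behavior is gated behind a config flag that defaults to False.

```python
from transformers import TrainerCallback

class SaveFirstStepCallback(TrainerCallback):
    """Request a checkpoint right after the first optimizer step.

    Useful for catching checkpoint/save problems before a long run.
    """

    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step == 1:  # first optimizer step just completed
            control.should_save = True
        return control

# trainer.add_callback(SaveFirstStepCallback())  # only when the flag is enabled
```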
* support for deepspeed autotp
* bump to latest deepspeed that supports deepcompile too
* add deepcompile support too
* fix total steps calculation for TP
* setup fixture for tp
* update ds config to ensure weights are gathered for checkpoint
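For the DeepSpeed AutoTP / DeepCompile commits above, a sketch of the relevant DeepSpeed config sections, shown as a Python dict; the key names follow my reading of recent DeepSpeed releases and should be treated as assumptions rather than a verified schema.

```python
# Sketch only; key names are assumptions based on recent DeepSpeed releases.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {"stage": 1},
    # AutoTP: automatic tensor parallelism for training.
    "tensor_parallel": {"autotp_size": 2},
    # DeepCompile: compile the engine with torch.compile.
    "compile": {"deepcompile": True},
}
```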
* fix duplicate validation names
* chore: lint
* use cuda streams for activation offloading
* use torch native ops
* update cfg schema for streams
* fix literal constructor for set
* use context for training step so it doesn't affect evals
* disable streams
* auto gc on eval steps
* use activation_offloading config arg
* add docs for gradient checkpointing
* handle validation for gc/ao
* use cuda streams for act offloading
* add more validation for AC w/o GC
* fix docs
* move activation_offloading lower in definition so it doesn't break args/kwargs
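For the activation-offloading commits above, a minimal plain-PyTorch sketch of the technique (not axolotl's implementation): saved activations are copied to pinned CPU memory on a side CUDA stream so the device-to-host transfer overlaps compute, and they are copied back during backward.

```python
import torch
from torch.autograd.graph import saved_tensors_hooks

class OffloadActivations(saved_tensors_hooks):
    """Move saved activations to pinned CPU memory on a side CUDA stream."""

    def __init__(self):
        self.stream = torch.cuda.Stream()  # copy stream for D2H transfers

        def pack(t):
            if not t.is_cuda:  # leave CPU tensors where they are
                return t
            cpu = torch.empty(t.size(), dtype=t.dtype, layout=t.layout, pin_memory=True)
            self.stream.wait_stream(torch.cuda.current_stream())
            with torch.cuda.stream(self.stream):
                cpu.copy_(t, non_blocking=True)
            # Keep the GPU block alive until the copy stream is done with it.
            t.record_stream(self.stream)
            return (t.device, cpu)

        def unpack(packed):
            if torch.is_tensor(packed):
                return packed
            device, cpu = packed
            # Wait for the offload copy before moving the activation back.
            torch.cuda.current_stream().wait_stream(self.stream)
            return cpu.to(device, non_blocking=True)

        super().__init__(pack, unpack)
```

Per the commits, the context is applied around the training step only (`with OffloadActivations(): ...`), so evaluation passes are unaffected.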
* fix kd due to import order
* upgrade peft to 0.16.0
* upgrade datasets to 4.0.0
* refactor dupes from merge/rebase
* fix check for fsdp1 + sharded_state_dict
* use full state dict for ci
* upgrade trl==0.19.1
* add vllm for grpo tests
* fixes to work with latest trl
* need data_parallel_size config too
* support for vllm_mode for server / colocate
* vllm settings for colocate
* relax vllm version
* bump min hf hub for latest vllm support
* add hints on string literal for vllm mode
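For the vllm_mode commits above, a sketch of the two modes as exposed by recent TRL GRPO training; the parameter names reflect my reading of trl>=0.18 and should be treated as assumptions.

```python
from trl import GRPOConfig

# "server" mode: generation runs in a separate `trl vllm-serve` process.
server_cfg = GRPOConfig(
    output_dir="outputs/grpo-server",
    use_vllm=True,
    vllm_mode="server",
)

# "colocate" mode: vLLM shares the training GPUs, so cap its memory share.
colocate_cfg = GRPOConfig(
    output_dir="outputs/grpo-colocate",
    use_vllm=True,
    vllm_mode="colocate",
    vllm_gpu_memory_utilization=0.3,
)
```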
* use latest transformers 4.53.2
* tweak acceptable loss on flaky test_ds_zero3_packed test
* don't run flaky vllm/grpo tests for now
* FSDP2 args migration implementation
This commit implements the migration to FSDP2 arguments, including:
- FSDP2 support with LoRA training
- DPO integration with FSDP2
- Model loading fixes and refactoring
- CPU offloading and PEFT handling
- Test updates and CI improvements
- Bug fixes for dtype errors and various edge cases
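As a rough illustration of what the FSDP2 migration targets, a minimal per-layer `fully_shard` sketch using the torch-native FSDP2 API (torch>=2.6); the LLaMA-style module layout and the omission of LoRA/PEFT and DPO wiring are simplifications, not the axolotl implementation.

```python
from torch.distributed.fsdp import CPUOffloadPolicy, fully_shard  # FSDP2, torch>=2.6

def apply_fsdp2(model, cpu_offload: bool = False):
    """Shard each transformer block, then the root module."""
    kwargs = {"offload_policy": CPUOffloadPolicy()} if cpu_offload else {}
    # Assumes a LLaMA-style layout (model.model.layers); adjust per architecture.
    for layer in model.model.layers:
        fully_shard(layer, **kwargs)
    fully_shard(model, **kwargs)  # remaining parameters (embeddings, lm_head, ...)
    return model
```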
* tiled_mlp supports single gpu
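A conceptual sketch of tiled MLP execution (not the ArcticTraining/axolotl implementation): the sequence is split into tiles and each tile runs through the MLP under checkpointing, so peak memory from the intermediate projection scales with the tile size; nothing here requires more than one GPU.

```python
import torch
from torch.utils.checkpoint import checkpoint

def tiled_mlp_forward(mlp, hidden_states, num_tiles=4):
    """Apply `mlp` over sequence tiles of `hidden_states` ([batch, seq, hidden])."""
    tiles = hidden_states.chunk(num_tiles, dim=1)
    # Checkpoint each tile so only the tile inputs are kept for backward.
    outs = [checkpoint(mlp, tile, use_reentrant=False) for tile in tiles]
    return torch.cat(outs, dim=1)
```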
* use checkpoint offloading for arctic training
* patch torch checkpoint too
* support for single gpu zero3
* add linkback to where it was copied from
* fix: do not add training and training_detail block by default
* fix: magistral docs
* fix: address pad adding new fields and use built-in from_openai
* feat: try enable multiprocessing
* fix: check for keys before deleting attn_mask
* feat: add mistral pad test
* feat: add tool calling test
* feat: add devstral tokenizer tests
* fix: comma format
* chore: remove unused support_preprocessing as tokenizer is picklable now
* chore: update magistral doc
* feat: add devstral readme and example
* chore: refactor error handling