* upgrade to torchao 0.17.0
* chore: lint
* refactor attention handling
* replace legacy attention boolean flags with capability properties
Replace flash_attention boolean checks with capability-based properties derived from attn_implementation
This separates three concerns that were conflated under flash_attention:
1. Backend selection -> attn_implementation enum
2. Packing capability -> attn_supports_packing property
3. Flash-attn library dependency -> attn_uses_flash_lib property
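Roughly, the split looks like this (backend values and the class shape are illustrative, not the exact axolotl config; a later commit computes these flags in the normalizer instead of as properties):
```python
from enum import Enum

class AttnImplementation(str, Enum):
    # backend selection lives in a single enum (values here are assumptions)
    EAGER = "eager"
    SDPA = "sdpa"
    FLASH_ATTENTION_2 = "flash_attention_2"
    FLEX_ATTENTION = "flex_attention"

class AttentionConfig:
    def __init__(self, attn_implementation: AttnImplementation):
        self.attn_implementation = attn_implementation

    @property
    def attn_supports_packing(self) -> bool:
        # packing capability is derived from the backend, not stored as a flag
        return self.attn_implementation in {
            AttnImplementation.FLASH_ATTENTION_2,
            AttnImplementation.FLEX_ATTENTION,
        }

    @property
    def attn_uses_flash_lib(self) -> bool:
        # only FA2 actually pulls in the flash-attn library dependency
        return self.attn_implementation is AttnImplementation.FLASH_ATTENTION_2
```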
* compute attn capability flags in normalizer instead of properties
* make attn_implementation the single source of truth
* move attention-dependent validators to mode=after
* migrate remaining consumers to canonical attn_implementation
* expand attention tests + rewrite docs
* migrate example configs to canonical attn_implementation
* update doc snippets + reject gemma4-hybrid with non-FA2 backend
* remove dead gemma4 branch in _set_attention_config
* fix duplicate attn_implementation in gpt-oss yamls and flaky caplog tests
* drop "Phase 2" naming from attn-implementation tests
* regroup attn_implementation tests by feature concern
* clean up verbose comments and remove MD
Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
* fix(collator): pass return_dict=True at apply_chat_template top level for transformers 5.x
In transformers 5.x, ProcessorMixin.apply_chat_template gained its own
`return_dict` parameter (defaulting to False). When return_dict=False
and tokenize=True the method returns out["input_ids"] directly — a 2-D
tensor — rather than the full BatchFeature dict.
The old code placed `return_dict=True` inside processor_kwargs. In
transformers 5.x those kwargs are forwarded to the underlying processor
call self(...), where _merge_kwargs drops any key not present in
MllamaProcessorKwargs (emitting only a warning). The outer return_dict
therefore stayed False, apply_chat_template returned the raw input_ids
tensor, and the subsequent `batch["input_ids"]` attempted to index a
2-D tensor with the 9-character string "input_ids", producing:
IndexError: too many indices for tensor of dimension 2
The fix is to pass return_dict=True as a top-level keyword argument to
apply_chat_template (where it is actually consumed) and remove it from
processor_kwargs (where it was being dropped). No version guard is
needed: transformers is pinned to ==5.5.4 in pyproject.toml.
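A minimal sketch of the fixed call, assuming the collator holds a processor and a processor_kwargs dict as described above (names are illustrative, not the exact axolotl code):
```python
def build_batch(processor, messages, processor_kwargs):
    # return_dict is passed at the top level, where apply_chat_template
    # consumes it; processor_kwargs no longer carries it, so nothing gets
    # dropped by _merge_kwargs further down
    batch = processor.apply_chat_template(
        messages,
        tokenize=True,
        return_dict=True,
        **processor_kwargs,
    )
    # batch is a full BatchFeature mapping again, so string indexing works
    return batch["input_ids"]
```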
Adds a unit-level regression test (tests/test_mm_chat_collator.py) that
mocks the processor to return a raw tensor when apply_chat_template is
called without top-level return_dict=True, verifying the four invariants:
process_rows returns a dict, input_ids is 2-D, labels is 2-D, and
apply_chat_template receives return_dict=True as a top-level kwarg.
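The mock at the heart of that test looks roughly like this (illustrative, not the exact fixture):
```python
import torch

class FakeProcessor:
    """Reproduces the transformers 5.x behavior the test guards against."""

    def apply_chat_template(self, messages, tokenize=False, return_dict=False, **kwargs):
        full = {
            "input_ids": torch.ones(1, 8, dtype=torch.long),
            "attention_mask": torch.ones(1, 8, dtype=torch.long),
        }
        # only hand back the full mapping when return_dict arrives as a
        # top-level kwarg; otherwise return the raw 2-D input_ids tensor
        return full if return_dict else full["input_ids"]
```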
Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_multimodal_dataset
Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_text_only_dataset
Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
* fix(collator): return a plain dict (BatchFeature-shaped) from process_rows
Two related changes for the multimodal chat collator under transformers 5.x:
1. Wrap apply_chat_template result in dict(...) so process_rows returns
a plain dict rather than a BatchFeature instance. BatchFeature is a
Mapping but not a dict; downstream code that did
batch["labels"] = self.processing_strategy.process_labels(batch["input_ids"])
would index a raw tensor with a string key when the result wasn't dict-shaped, raising
IndexError: too many indices for tensor of dimension 2
2. Soften the regression test's contract from `dict` to `Mapping` so it
exercises the actual semantic guarantee (key/value access) rather
than the implementation detail (dict vs BatchFeature). The test guards
against the original transformers 5.x breakage where apply_chat_template's
return_dict default went from True to False.
Includes regression test under tests/test_mm_chat_collator.py.
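Taken together, the two changes amount to roughly the following (the helper name and strategy hook are assumptions):
```python
from collections.abc import Mapping

def finalize_batch(batch_feature, process_labels):
    batch = dict(batch_feature)  # 1) plain dict rather than BatchFeature
    batch["labels"] = process_labels(batch["input_ids"])
    # 2) the test asserts the Mapping contract instead of the concrete type
    assert isinstance(batch, Mapping)
    assert batch["input_ids"].dim() == 2 and batch["labels"].dim() == 2
    return batch
```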
Bug surfaced via swarm dispatch task_01KQHPNAYD8XARSNSDJVW1GPF6 against
attn-implementation-refactor; squash-merged from agent commits 4de886fd
+ dc9fcf4f.
Signed-off-by: Wing Lian <wing@axolotl.ai>
---------
Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
* make pad_to_sequence_len default to the same value as sample_packing
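Sketched as a pydantic rule, assuming axolotl-style field names (the actual validator may differ):
```python
from typing import Optional
from pydantic import BaseModel, model_validator

class PackingConfig(BaseModel):
    sample_packing: bool = False
    pad_to_sequence_len: Optional[bool] = None

    @model_validator(mode="after")
    def default_pad_to_sequence_len(self):
        if self.pad_to_sequence_len is None:
            # unset -> mirror sample_packing so packed batches stay rectangular
            self.pad_to_sequence_len = self.sample_packing
        return self
```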
* remove duplicate validation
* fix test
* update description meta
Co-authored-by: NanoCode012 <nano@axolotl.ai>
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
* checkpoint model on first step callback
* remove debug
* add test cases; update existing tests not to save on first step
* move test out of solo
* delete
* default to False
* typo
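The first-step checkpoint callback above boils down to something like this (class name and opt-in flag are assumptions):
```python
from transformers import TrainerCallback

class SaveFirstStepCallback(TrainerCallback):
    # opt-in via a config flag that defaults to False
    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step == 1:
            control.should_save = True  # force a checkpoint after step 1
        return control
```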
* phi2 multipack
* update validation and examples for phi
* more updates to phi examples
* make sure to use the correct collator for phi multipack
* phi needs attention mask now for multipack
* if the special token already exists in the tokenizer, don't require it in lora_modules_to_save
* fix qlora yml for phi, fix phi test validation
* test qlora too
* make sure flash attention is enabled for the test
* don't use remote code for phi anymore
* reduce sequence_len for phi sample packing
* set fp16 to false if bf16 is enabled; update example YAMLs to `bf16: auto`
* unset fp16 so that it falls back properly when bf16 isn't available
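In config-normalization terms the rule is roughly this (key names mirror the YAML fields; the helper itself is an assumption, not axolotl's implementation):
```python
import torch

def normalize_precision(cfg: dict) -> dict:
    bf16_ok = torch.cuda.is_available() and torch.cuda.is_bf16_supported()
    if cfg.get("bf16") == "auto":
        cfg["bf16"] = bf16_ok
    if cfg.get("bf16"):
        cfg["fp16"] = False  # bf16 wins; never enable both
    # otherwise fp16 stays as configured (unset in the example YAMLs), so
    # training can still fall back to fp16 when bf16 isn't available
    return cfg
```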
* Update README.md [skip-ci]
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
* test that bf16 disables fp16
---------
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
* restore to current phi modeling code from phi-2
* enable gradient checkpointing
* don't cast everything to float32 all the time
* gradient checkpointing for phi2 ParallelBlock module too
* fix enabling flash attn for phi2
* add comment about import
* fix phi2 example
* fix model type check for tokenizer
* revert float32 -> bf16 casting changes
* support fused dense flash attn
* fix the repo for flash-attn
* add package name for subdir pkg
* fix the data collator when not using sample packing
* install packaging for pytest in CI
* also fix setup so the flash-attn fused-dense subdir isn't installed unless the extra is requested
* split out the fused-dense-lib in extra requires
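As a sketch, the extras split might look like this in setup.py (the package name, URL, and subdirectory path are assumptions about the flash-attention repo layout):
```python
# passed to setup(extras_require=...); only pulled in via
# pip install "axolotl[fused-dense-lib]"
extras_require = {
    "fused-dense-lib": [
        "fused_dense_lib @ git+https://github.com/Dao-AILab/flash-attention.git"
        "#subdirectory=csrc/fused_dense_lib",
    ],
}
```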
* don't train with group_by_length for phi
* update integration test to use phi2
* set max steps and save steps for phi e2e tests
* try to work around save issue in CI
* skip phi2 e2e test for now