Replace separate attention flags with attn_implementation and capability/concern feature flags (#3602)
* upgrade to torchao 0.17.0

* chore: lint

* refactor attention handling

* replace legacy attention boolean flags with capability properties

  Replace checks with capability-based properties derived from
  attn_implementation. This separates three concerns that were conflated
  under flash_attention:

  1. Backend selection -> attn_implementation enum
  2. Packing capability -> attn_supports_packing property
  3. Flash-attn library dependency -> attn_uses_flash_lib property

* compute attn capability flags in normalizer instead of properties

* make attn_implementation the single source of truth

* move attention-dependent validators to mode=after

* migrate remaining consumers to canonical attn_implementation

* expand attention tests + rewrite docs

* migrate example configs to canonical attn_implementation

* update doc snippets + reject gemma4-hybrid with non-FA2 backend

* remove dead gemma4 branch in _set_attention_config

* fix duplicate attn_implementation in gpt-oss yamls and flaky caplog tests

* drop "Phase 2" naming from attn-implementation tests

* regroup attn_implementation tests by feature concern

* clean up verbose comments and remove MD

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>

* fix(collator): pass return_dict=True at apply_chat_template top level for transformers 5.x

  In transformers 5.x, ProcessorMixin.apply_chat_template gained its own
  `return_dict` parameter (defaulting to False). When return_dict=False and
  tokenize=True, the method returns out["input_ids"] directly (a 2-D tensor)
  rather than the full BatchFeature dict.

  The old code placed `return_dict=True` inside processor_kwargs. In
  transformers 5.x those kwargs are forwarded to the underlying processor
  call self(...), where _merge_kwargs silently ignores any key not present
  in MllamaProcessorKwargs (emitting a warning). The outer return_dict
  therefore stayed False, apply_chat_template returned the raw input_ids
  tensor, and the subsequent `batch["input_ids"]` attempted to index a 2-D
  tensor with the string "input_ids", producing:

      IndexError: too many indices for tensor of dimension 2

  The fix is to pass return_dict=True as a top-level keyword argument to
  apply_chat_template (where it is actually consumed) and remove it from
  processor_kwargs (where it was silently dropped). No version guard is
  needed: transformers is pinned to ==5.5.4 in pyproject.toml.

  Adds a unit-level regression test (tests/test_mm_chat_collator.py) that
  mocks the processor to return a raw tensor when apply_chat_template is
  called without top-level return_dict=True, verifying four invariants:
  process_rows returns a dict, input_ids is 2-D, labels is 2-D, and
  apply_chat_template receives return_dict=True as a top-level kwarg.

  Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_multimodal_dataset
  Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_text_only_dataset

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>

* fix(collator): process_rows returns dict (BatchFeature) shape

  Two related changes for the multimodal chat collator under transformers 5.x:

  1. Wrap the apply_chat_template result in dict(...) so process_rows
     returns a plain dict rather than a BatchFeature instance. BatchFeature
     is a Mapping but not a dict; downstream code that did
     batch["labels"] = self.processing_strategy.process_labels(batch["input_ids"])
     would index on a tensor when the result wasn't dict-shaped, raising
     IndexError: too many indices for tensor of dimension 2.

  2. Soften the regression test's contract from `dict` to `Mapping` so it
     exercises the actual semantic guarantee (key/value access) rather than
     the implementation detail (dict vs BatchFeature).

  The test guards against the original transformers 5.x breakage where
  apply_chat_template's return_dict default went from True to False.
  Includes a regression test under tests/test_mm_chat_collator.py. Bug
  surfaced via swarm dispatch task_01KQHPNAYD8XARSNSDJVW1GPF6 against
  attn-implementation-refactor; squash-merged from agent commits
  4de886fd + dc9fcf4f.

Signed-off-by: Wing Lian <wing@axolotl.ai>

---------

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
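A minimal, self-contained sketch of the call-convention change described above, using a mock processor that mirrors the fixture in tests/test_mm_chat_collator.py (variable names here are illustrative; the real change lives in the multimodal chat collator):

import torch
from unittest.mock import MagicMock
from transformers import BatchFeature

# Stand-in for ProcessorMixin.apply_chat_template under transformers 5.x:
# a BatchFeature only when return_dict=True arrives as a top-level kwarg,
# a raw 2-D input_ids tensor otherwise.
input_ids = torch.ones(2, 16, dtype=torch.long)
feature = BatchFeature(data={"input_ids": input_ids, "attention_mask": input_ids.clone()})
processor = MagicMock()
processor.apply_chat_template = MagicMock(
    side_effect=lambda *a, **kw: feature if kw.get("return_dict") else input_ids
)

messages = [{"role": "user", "content": "Hello"}]

# Broken convention: without a top-level return_dict=True (the effect of
# burying it in processor_kwargs), a raw tensor comes back, and indexing it
# with the string "input_ids" raises
# IndexError: too many indices for tensor of dimension 2.
raw = processor.apply_chat_template(messages, tokenize=True)
assert isinstance(raw, torch.Tensor) and raw.ndim == 2

# Fixed convention: return_dict=True at the top level, result wrapped in
# dict(...) so downstream key assignment (batch["labels"] = ...) works.
batch = dict(processor.apply_chat_template(messages, tokenize=True, return_dict=True))
assert batch["input_ids"].ndim == 2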
@@ -521,9 +521,9 @@ class TestMultiGPULlama:
             }
         )
         if attention_backend == "flash":
-            cfg.flash_attention = True
+            cfg.attn_implementation = "flash_attention_2"
         elif attention_backend == "flex":
-            cfg.flex_attention = True
+            cfg.attn_implementation = "flex_attention"

         # write cfg to yaml file
         Path(temp_dir).mkdir(parents=True, exist_ok=True)
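The hunk above is the mechanical half of the migration; its safety rests on the legacy-to-canonical mapping exercised by the new test file below. A minimal usage sketch of that normalizer (assuming the axolotl package from this branch is importable):

from axolotl.utils.schemas.config import AxolotlInputConfig

# A legacy boolean flag is mapped to its canonical backend name and then
# stripped, so cfg.flash_attention = True and
# cfg.attn_implementation = "flash_attention_2" validate identically.
data = AxolotlInputConfig.normalize_attn_implementation({"flash_attention": True})
assert data["attn_implementation"] == "flash_attention_2"
assert "flash_attention" not in data

# Supplying both forms at once is rejected rather than silently resolved:
# normalize_attn_implementation(
#     {"attn_implementation": "flash_attention_2", "flash_attention": True}
# ) raises ValueError (... cannot be combined with legacy ...).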
tests/test_attn_implementation.py (new file, 418 lines)
@@ -0,0 +1,418 @@
"""Tests for attn_implementation: normalization, canonical-value acceptance,
capability flags, backend registration, and downstream validators.
"""

import logging
from contextlib import contextmanager

import pytest

from axolotl.utils.config import validate_config
from axolotl.utils.dict import DictDefault
from axolotl.utils.schemas.config import AxolotlInputConfig
from axolotl.utils.schemas.enums import (
    ATTN_IMPLS_SUPPORTING_PACKING,
    ATTN_IMPLS_USING_FLASH_LIB,
    ATTN_IMPLS_WITHOUT_DTYPE_CAST,
    CANONICAL_ATTN_IMPLS,
)


@contextmanager
def _capture_axolotl_warnings(caplog):
    """Capture WARNINGs from `axolotl.*` loggers via caplog.

    `axolotl.cli` calls `configure_logging()` at import time, which sets
    `propagate=False` on the `axolotl` logger so records do not reach the root
    logger that pytest's `caplog` hooks. This helper temporarily re-enables
    propagation for the duration of the block.
    """
    ax_logger = logging.getLogger("axolotl")
    old_propagate = ax_logger.propagate
    ax_logger.propagate = True
    try:
        with caplog.at_level(logging.WARNING, logger="axolotl"):
            yield
    finally:
        ax_logger.propagate = old_propagate


def _xformers_available():
    try:
        import xformers.ops  # noqa: F401

        return True
    except (ImportError, OSError):
        return False


class TestCapabilityTables:
    """Backend capability classification via frozensets and computed_field properties."""

    @pytest.mark.parametrize(
        "impl",
        [
            "flash_attention_2",
            "flash_attention_3",
            "flex_attention",
            "xformers",
            "sage",
        ],
    )
    def test_supports_packing(self, impl):
        assert impl in ATTN_IMPLS_SUPPORTING_PACKING

    @pytest.mark.parametrize("impl", ["eager", "sdpa", "s2", "fp8"])
    def test_does_not_support_packing(self, impl):
        assert impl not in ATTN_IMPLS_SUPPORTING_PACKING

    @pytest.mark.parametrize("impl", ["flash_attention_2", "flash_attention_3", "s2"])
    def test_uses_flash_lib(self, impl):
        assert impl in ATTN_IMPLS_USING_FLASH_LIB

    @pytest.mark.parametrize(
        "impl", ["eager", "sdpa", "xformers", "flex_attention", "sage", "fp8"]
    )
    def test_does_not_use_flash_lib(self, impl):
        assert impl not in ATTN_IMPLS_USING_FLASH_LIB

    @pytest.mark.parametrize("impl", ["eager", "sdpa"])
    def test_no_dtype_cast(self, impl):
        assert impl in ATTN_IMPLS_WITHOUT_DTYPE_CAST

    @pytest.mark.parametrize(
        "impl",
        [
            "flash_attention_2",
            "flash_attention_3",
            "flex_attention",
            "xformers",
            "sage",
            "s2",
            "fp8",
        ],
    )
    def test_needs_dtype_cast(self, impl):
        assert impl not in ATTN_IMPLS_WITHOUT_DTYPE_CAST

    def test_known_hub_kernels_classified(self):
        assert "kernels-community/flash-attn3" in ATTN_IMPLS_SUPPORTING_PACKING
        assert "kernels-community/flash-attn3" in ATTN_IMPLS_USING_FLASH_LIB
        assert "kernels-community/sage-attention" in ATTN_IMPLS_SUPPORTING_PACKING

    def test_computed_flags_readable_on_validated_cfg(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(attn_implementation="sdpa")
        validated = validate_config(cfg)
        assert validated.attn_implementation == "sdpa"
        assert validated.attn_supports_packing is False
        assert validated.attn_uses_flash_lib is False
        assert validated.attn_needs_dtype_cast is False

    def test_computed_flags_not_overridable_from_yaml(self, min_base_cfg):
        """Attempts to override a computed field from YAML must not win."""
        cfg = min_base_cfg | DictDefault(
            attn_implementation="eager", attn_uses_flash_lib=True
        )
        validated = validate_config(cfg)
        # The computed field reflects the backend, not the YAML input.
        assert validated.attn_uses_flash_lib is False


class TestBackendRegistration:
    """Axolotl-owned backends register under their canonical names in HF's registries."""

    @pytest.mark.skipif(not _xformers_available(), reason="xformers not available")
    def test_register_xformers(self):
        from transformers.masking_utils import ALL_MASK_ATTENTION_FUNCTIONS
        from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS

        from axolotl.monkeypatch.attention import register_xformers_attn

        register_xformers_attn()

        assert "xformers" in ALL_ATTENTION_FUNCTIONS
        assert "xformers" in ALL_MASK_ATTENTION_FUNCTIONS
        assert (
            ALL_MASK_ATTENTION_FUNCTIONS["xformers"]
            == ALL_MASK_ATTENTION_FUNCTIONS["flash_attention_2"]
        )

    def test_register_sage(self):
        from transformers.masking_utils import ALL_MASK_ATTENTION_FUNCTIONS
        from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS

        from axolotl.monkeypatch.attention import register_sage_attn

        register_sage_attn()

        assert "sage" in ALL_ATTENTION_FUNCTIONS
        assert "sage" in ALL_MASK_ATTENTION_FUNCTIONS
        assert (
            ALL_MASK_ATTENTION_FUNCTIONS["sage"]
            == ALL_MASK_ATTENTION_FUNCTIONS["flash_attention_2"]
        )

    @pytest.mark.skipif(not _xformers_available(), reason="xformers not available")
    def test_xformers_does_not_overwrite_fa2(self):
        from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS

        original_fa2 = ALL_ATTENTION_FUNCTIONS["flash_attention_2"]

        from axolotl.monkeypatch.attention import register_xformers_attn

        register_xformers_attn()

        assert ALL_ATTENTION_FUNCTIONS["flash_attention_2"] is original_fa2

    def test_sage_does_not_overwrite_fa2(self):
        from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS

        original_fa2 = ALL_ATTENTION_FUNCTIONS["flash_attention_2"]

        from axolotl.monkeypatch.attention import register_sage_attn

        register_sage_attn()

        assert ALL_ATTENTION_FUNCTIONS["flash_attention_2"] is original_fa2


class TestLegacyFlagDeprecation:
    """Legacy boolean flags (flash_attention, sdp_attention, ...) map to a
    canonical attn_implementation value, are stripped from the validated
    config, and cannot be combined with an explicit canonical value.
    """

    @staticmethod
    def _normalize(data):
        return AxolotlInputConfig.normalize_attn_implementation(data)

    @pytest.mark.parametrize(
        "flag,expected",
        [
            ("flash_attention", "flash_attention_2"),
            ("sdp_attention", "sdpa"),
            ("xformers_attention", "xformers"),
            ("flex_attention", "flex_attention"),
            ("sage_attention", "sage"),
            ("eager_attention", "eager"),
            ("s2_attention", "s2"),
        ],
    )
    def test_legacy_flag_maps_to_canonical(self, flag, expected):
        result = self._normalize({flag: True})
        assert result["attn_implementation"] == expected

    def test_legacy_flags_are_stripped_after_mapping(self):
        result = self._normalize({"flash_attention": True})
        for flag in [
            "flash_attention",
            "sdp_attention",
            "xformers_attention",
            "flex_attention",
            "sage_attention",
            "eager_attention",
            "s2_attention",
        ]:
            assert flag not in result

    def test_s2_plus_flash_priority_is_s2(self):
        result = self._normalize({"s2_attention": True, "flash_attention": True})
        assert result["attn_implementation"] == "s2"

    def test_sage_plus_flash_priority_is_sage(self):
        result = self._normalize({"sage_attention": True, "flash_attention": True})
        assert result["attn_implementation"] == "sage"

    def test_canonical_plus_legacy_flag_raises(self):
        with pytest.raises(ValueError, match="cannot be combined with legacy"):
            self._normalize(
                {"attn_implementation": "flash_attention_2", "flash_attention": True}
            )

    def test_canonical_plus_unrelated_legacy_flag_raises(self):
        with pytest.raises(ValueError, match="cannot be combined with legacy"):
            self._normalize(
                {"attn_implementation": "xformers", "flash_attention": True}
            )

    def test_legacy_flag_stripped_on_validated_cfg(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(flash_attention=True)
        validated = validate_config(cfg)
        assert validated.attn_implementation == "flash_attention_2"
        # Legacy flag must not survive to the validated DictDefault
        # (normalizer pops it, model_dump excludes Nones).
        assert "flash_attention" not in dict(validated)

    def test_canonical_plus_legacy_rejected_on_full_validation(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(
            attn_implementation="flash_attention_2", flash_attention=True
        )
        with pytest.raises(ValueError, match="cannot be combined with legacy"):
            validate_config(cfg)

    def test_s2_plus_flash_maps_to_s2_on_full_validation(self, min_base_cfg):
        """Priority resolution applies through the full validator chain too."""
        cfg = min_base_cfg | DictDefault(s2_attention=True, flash_attention=True)
        validated = validate_config(cfg)
        assert validated.attn_implementation == "s2"


class TestCanonicalValueAcceptance:
    """`attn_implementation` accepts canonical names and `org/name` hub-kernel
    paths. Short-form aliases (`flash`, `flex`, `sdp`) and unknown bare names
    are rejected. Absent input is a no-op.
    """

    @staticmethod
    def _normalize(data):
        return AxolotlInputConfig.normalize_attn_implementation(data)

    def test_canonical_value_is_passthrough(self):
        data = {"attn_implementation": "flash_attention_2"}
        result = self._normalize(data)
        assert result["attn_implementation"] == "flash_attention_2"

    def test_hub_kernel_is_passthrough(self):
        data = {"attn_implementation": "kernels-community/flash-attn3"}
        result = self._normalize(data)
        assert result["attn_implementation"] == "kernels-community/flash-attn3"

    def test_no_attention_set_is_noop(self):
        result = self._normalize({"some_other_config": True})
        assert result.get("attn_implementation") is None

    def test_field_validator_accepts_all_canonical(self):
        for impl in CANONICAL_ATTN_IMPLS:
            assert AxolotlInputConfig.validate_attn_implementation(impl) == impl

    def test_field_validator_accepts_hub_kernels(self):
        for impl in (
            "kernels-community/flash-attn3",
            "kernels-community/sage-attention",
            "someorg/custom-kernel",
        ):
            assert AxolotlInputConfig.validate_attn_implementation(impl) == impl

    def test_field_validator_accepts_none(self):
        assert AxolotlInputConfig.validate_attn_implementation(None) is None

    @pytest.mark.parametrize("alias", ["flash", "flex", "sdp"])
    def test_short_form_alias_rejected(self, alias):
        with pytest.raises(ValueError, match="is not accepted"):
            AxolotlInputConfig.validate_attn_implementation(alias)

    def test_unknown_bare_name_rejected(self):
        with pytest.raises(ValueError, match="not a recognized backend"):
            AxolotlInputConfig.validate_attn_implementation("not_a_real_backend")

    def test_canonical_value_passes_through_full_validation(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(attn_implementation="flash_attention_3")
        validated = validate_config(cfg)
        assert validated.attn_implementation == "flash_attention_3"
        assert validated.attn_uses_flash_lib is True
        assert validated.attn_supports_packing is True

    def test_hub_kernel_passes_through_full_validation(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(
            attn_implementation="kernels-community/flash-attn3"
        )
        validated = validate_config(cfg)
        assert validated.attn_implementation == "kernels-community/flash-attn3"
        assert validated.attn_uses_flash_lib is True
        assert validated.attn_supports_packing is True

    def test_short_form_alias_rejected_on_full_validation(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(attn_implementation="flash")
        with pytest.raises(ValueError, match="is not accepted"):
            validate_config(cfg)


class TestGemma4HybridMode:
    """`gemma4_hybrid_attn_impl` pins `attn_implementation` to `flash_attention_2`."""

    @staticmethod
    def _normalize(data):
        return AxolotlInputConfig.normalize_attn_implementation(data)

    def test_defaults_to_flash_attention_2(self):
        result = self._normalize({"gemma4_hybrid_attn_impl": True})
        assert result["attn_implementation"] == "flash_attention_2"

    def test_explicit_fa2_passes(self):
        result = self._normalize(
            {
                "gemma4_hybrid_attn_impl": True,
                "attn_implementation": "flash_attention_2",
            }
        )
        assert result["attn_implementation"] == "flash_attention_2"

    def test_non_fa2_raises(self):
        with pytest.raises(
            ValueError, match="requires attn_implementation=flash_attention_2"
        ):
            self._normalize(
                {"gemma4_hybrid_attn_impl": True, "attn_implementation": "sdpa"}
            )


class TestSamplePackingValidation:
    """`sample_packing` warns for non-varlen backends; s2 raises outright."""

    def test_eager_warns(self, min_base_cfg, caplog):
        cfg = min_base_cfg | DictDefault(
            attn_implementation="eager", sample_packing=True
        )
        with _capture_axolotl_warnings(caplog):
            validate_config(cfg)
        assert any(
            "does not handle cross-sample decontamination" in r.getMessage()
            for r in caplog.records
        )

    def test_sdpa_warns(self, min_base_cfg, caplog):
        cfg = min_base_cfg | DictDefault(
            attn_implementation="sdpa", sample_packing=True
        )
        with _capture_axolotl_warnings(caplog):
            validate_config(cfg)
        assert any(
            "does not handle cross-sample decontamination" in r.getMessage()
            for r in caplog.records
        )

    def test_flash_attention_2_does_not_warn(self, min_base_cfg, caplog):
        cfg = min_base_cfg | DictDefault(
            attn_implementation="flash_attention_2", sample_packing=True
        )
        with _capture_axolotl_warnings(caplog):
            validate_config(cfg)
        assert not any(
            "does not handle cross-sample decontamination" in r.getMessage()
            for r in caplog.records
        )

    def test_s2_raises(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(attn_implementation="s2", sample_packing=True)
        with pytest.raises(
            ValueError, match="shifted-sparse attention does not currently support"
        ):
            validate_config(cfg)


class TestScalingSoftmaxValidation:
    """`scaling_softmax` is only implemented under flex_attention."""

    def test_non_flex_raises(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(
            attn_implementation="flash_attention_2", scaling_softmax=True
        )
        with pytest.raises(ValueError, match="scaling_softmax requires flex"):
            validate_config(cfg)

    def test_flex_passes(self, min_base_cfg):
        cfg = min_base_cfg | DictDefault(
            attn_implementation="flex_attention", scaling_softmax=True
        )
        validated = validate_config(cfg)
        assert validated.attn_implementation == "flex_attention"
tests/test_mm_chat_collator.py (new file, 163 lines)
@@ -0,0 +1,163 @@
"""
Regression tests for MultiModalChatDataCollator shape contracts.

Guard against the transformers 5.x breakage where apply_chat_template's
own `return_dict` parameter (default False) caused it to return the raw
input_ids tensor instead of the full BatchFeature dict, leading to
IndexError: too many indices for tensor of dimension 2
when downstream code did batch["input_ids"] on the resulting tensor.
"""

from unittest.mock import MagicMock, patch

import pytest
import torch
from transformers import BatchFeature


@pytest.fixture(name="mock_processor")
def fixture_mock_processor():
    """
    A mock processor whose apply_chat_template returns a BatchFeature
    when called with return_dict=True (the correct call convention),
    or a raw input_ids tensor when called without return_dict=True
    (the broken call convention that the bug introduced).
    """
    processor = MagicMock()
    processor.tokenizer = MagicMock()
    processor.tokenizer.pad_token_id = 0
    processor.image_token = "<|image|>"
    processor.tokenizer.convert_tokens_to_ids = MagicMock(return_value=128256)

    batch_size, seq_len = 2, 16
    input_ids = torch.ones(batch_size, seq_len, dtype=torch.long)
    attention_mask = torch.ones(batch_size, seq_len, dtype=torch.long)

    batch_feature = BatchFeature(
        data={
            "input_ids": input_ids,
            "attention_mask": attention_mask,
        }
    )

    def _apply_chat_template(*args, **kwargs):
        if kwargs.get("return_dict", False):
            return batch_feature
        # Simulate transformers 5.x default behaviour: returns out["input_ids"]
        return input_ids

    processor.apply_chat_template = MagicMock(side_effect=_apply_chat_template)
    processor.chat_template = None
    return processor


@pytest.fixture(name="mock_processing_strategy")
def fixture_mock_processing_strategy(mock_processor):
    from axolotl.processing_strategies import ProcessingStrategy

    strategy = ProcessingStrategy(processor=mock_processor)
    return strategy


class TestMultiModalChatDataCollatorShapeContract:
    """
    Verify that MultiModalChatDataCollator.process_rows returns a dict with
    2-D input_ids and labels, not a raw tensor. This is the shape contract
    that process_labels depends on.
    """

    def _make_collator(self, mock_processing_strategy):
        from axolotl.utils.collators.mm_chat import MultiModalChatDataCollator

        tokenizer = mock_processing_strategy.processor.tokenizer
        return MultiModalChatDataCollator(
            tokenizer=tokenizer,
            processing_strategy=mock_processing_strategy,
        )

    def _make_examples(self):
        return [
            {
                "messages": [
                    {"role": "user", "content": "Hello"},
                    {"role": "assistant", "content": "Hi there"},
                ]
            }
        ]

    def test_process_rows_returns_dict(self, mock_processing_strategy):
        """batch must be a dict, not a raw tensor."""
        collator = self._make_collator(mock_processing_strategy)
        examples = self._make_examples()

        with patch.object(
            mock_processing_strategy,
            "__call__",
            return_value=examples,
        ):
            batch = collator.process_rows(examples)

        assert isinstance(batch, dict), (
            "process_rows must return a dict (BatchFeature), not a raw tensor. "
            "If it returns a tensor, apply_chat_template was called without "
            "return_dict=True at the top level."
        )

    def test_process_rows_input_ids_shape(self, mock_processing_strategy):
        """batch['input_ids'] must be a 2-D tensor (batch, seq_len)."""
        collator = self._make_collator(mock_processing_strategy)
        examples = self._make_examples()

        with patch.object(
            mock_processing_strategy,
            "__call__",
            return_value=examples,
        ):
            batch = collator.process_rows(examples)

        assert "input_ids" in batch
        assert isinstance(batch["input_ids"], torch.Tensor)
        assert batch["input_ids"].ndim == 2, (
            f"input_ids must be 2-D (batch, seq_len), got shape {batch['input_ids'].shape}"
        )

    def test_process_rows_labels_shape(self, mock_processing_strategy):
        """batch['labels'] must be a 2-D tensor matching input_ids shape."""
        collator = self._make_collator(mock_processing_strategy)
        examples = self._make_examples()

        with patch.object(
            mock_processing_strategy,
            "__call__",
            return_value=examples,
        ):
            batch = collator.process_rows(examples)

        assert "labels" in batch
        assert isinstance(batch["labels"], torch.Tensor)
        assert batch["labels"].ndim == 2
        assert batch["labels"].shape == batch["input_ids"].shape

    def test_apply_chat_template_called_with_return_dict_true(
        self, mock_processing_strategy
    ):
        """apply_chat_template must be called with return_dict=True as a keyword arg."""
        collator = self._make_collator(mock_processing_strategy)
        examples = self._make_examples()

        with patch.object(
            mock_processing_strategy,
            "__call__",
            return_value=examples,
        ):
            collator.process_rows(examples)

        call_kwargs = (
            mock_processing_strategy.processor.apply_chat_template.call_args.kwargs
        )
        assert call_kwargs.get("return_dict") is True, (
            "apply_chat_template must be called with return_dict=True as a top-level "
            "keyword argument (not inside processor_kwargs). In transformers 5.x, "
            "apply_chat_template has its own return_dict param (default False) that "
            "controls whether it returns the full BatchFeature or just input_ids."
        )
tests/test_no_legacy_attn_reads.py (new file, 62 lines)
@@ -0,0 +1,62 @@
"""Enforce attn_implementation as the single source of truth.

Fails if src/ contains a cfg.<legacy>_attention read. Migrate offending sites
to cfg.attn_implementation or the attn_supports_packing/attn_uses_flash_lib/
attn_needs_dtype_cast computed flags.
"""

from __future__ import annotations

import re
from pathlib import Path

LEGACY_FLAGS = (
    "flash_attention",
    "sdp_attention",
    "xformers_attention",
    "flex_attention",
    "sage_attention",
    "s2_attention",
    "eager_attention",
)

# The normalizer is allowed to read the legacy keys (that's its job).
# lm_eval/cli.py is a raw-YAML entry point (bypasses AxolotlInputConfig) that
# honors both forms during the deprecation period; when we remove the legacy
# flags entirely, drop this allowlist entry and the BC branch in that file.
ALLOWED_FILES = {
    Path("src/axolotl/utils/schemas/config.py"),
    Path("src/axolotl/integrations/lm_eval/cli.py"),
}

# Matches `cfg.<flag>` / `self.cfg.<flag>` attribute reads and
# `data.get("<flag>")` lookups.
_PATTERNS = [re.compile(rf"\bcfg\.{flag}\b") for flag in LEGACY_FLAGS] + [
    re.compile(rf'\bdata\.get\("{flag}"\)') for flag in LEGACY_FLAGS
]


def _repo_root() -> Path:
    return Path(__file__).resolve().parent.parent


def test_no_legacy_attn_reads_in_src():
    root = _repo_root()
    src = root / "src"
    offenders: list[str] = []

    for py_file in src.rglob("*.py"):
        rel = py_file.relative_to(root)
        if rel in ALLOWED_FILES:
            continue
        text = py_file.read_text(encoding="utf-8")
        for pattern in _PATTERNS:
            for match in pattern.finditer(text):
                # Line number for the user's convenience.
                line_no = text.count("\n", 0, match.start()) + 1
                offenders.append(f"{rel}:{line_no} {match.group(0)}")

    assert not offenders, (
        "Found legacy attention-flag reads in src/. Migrate to "
        "`cfg.attn_implementation` / capability flags:\n  "
        + "\n  ".join(sorted(offenders))
    )