diff --git a/docs/attention.qmd b/docs/attention.qmd
index 771299a29..2180a42a5 100644
--- a/docs/attention.qmd
+++ b/docs/attention.qmd
@@ -3,28 +3,71 @@ title: Attention
 description: Supported attention modules in Axolotl
 ---
 
-## SDP Attention
-
-This is the default built-in attention in PyTorch.
+Axolotl routes attention via a single config field:
 
 ```yaml
-sdp_attention: true
+attn_implementation:
 ```
 
-For more details: [PyTorch docs](https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
+`attn_implementation` is passed through to `transformers` verbatim (via
+`model.config._attn_implementation`). Accepted values are the HF-native
+backends, axolotl-registered backends, or a hub-kernel path.
 
-## Flash Attention
+## Backends
 
-Axolotl supports Flash Attention 2, 3, and 4. The best available version is used automatically
-based on your installed packages and GPU.
+| `attn_implementation` | Description |
+|---|---|
+| `eager` | Plain PyTorch attention. No packing support. |
+| `sdpa` | PyTorch `scaled_dot_product_attention`. No packing support. |
+| `flash_attention_2` | Dao-AILab Flash Attention 2. |
+| `flash_attention_3` | Dao-AILab Flash Attention 3 (Hopper+). |
+| `flex_attention` | Torch Flex Attention (requires torch ≥ 2.6). |
+| `xformers` | xFormers memory-efficient attention. |
+| `sage` | SageAttention (QK int8 / PV fp16). |
+| `s2` | Shifted-Sparse Attention (LLaMA only, FA2 under the hood). |
+| `fp8` | torchao FP8 low-precision attention (requires SM90+, torch ≥ 2.11). Loaded as SDPA and patched post-load. |
+| `kernels-community/flash-attn3` | HF hub FA3 kernel. |
+| `kernels-community/sage-attention` | HF hub SageAttention kernel. |
+| Other `/` path | Any hub-kernel path supported by `transformers`. |
+
+Short-form aliases (`flash`, `fa2`, `flex`, `sdp`, etc.) are **not accepted** —
+set the canonical name above.
+
+### Capability flags
+
+Axolotl derives three boolean capability flags from `attn_implementation` and
+exposes them on the validated config:
+
+- `cfg.attn_supports_packing` — backend supports varlen sample packing via
+  `position_ids`. Gates multipack patches and `sample_packing_drop_attention_mask`.
+- `cfg.attn_uses_flash_lib` — backend needs the `flash_attn` (Dao-AILab)
+  monkeypatches (FA4 auto, LLaMA flash hijack, ring-FA).
+- `cfg.attn_needs_dtype_cast` — backend requires fp16/bf16 embeddings
+  (everything except `eager` and `sdpa`).
+
+These are **computed** — they cannot be overridden from YAML.
+
+## Per-backend notes
+
+### SDPA
+
+Default PyTorch attention. See
+[PyTorch docs](https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html).
 
 ```yaml
-flash_attention: true
+attn_implementation: sdpa
 ```
 
-For more details: [Flash Attention](https://github.com/Dao-AILab/flash-attention/)
+### Flash Attention
 
-### Flash Attention 2
+Axolotl supports FA2, FA3, and FA4. The best available version is used
+automatically based on your installed packages and GPU.
+
+```yaml
+attn_implementation: flash_attention_2  # or flash_attention_3
+```
+
+#### Flash Attention 2
 
 Requirements: Ampere, Ada, or Hopper GPUs (Turing or lower not supported)
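The capability flags read as a straightforward lookup on the backend name. The sketch below is for orientation only and is not the axolotl source: the helper name is invented, and the set memberships beyond the cases spelled out in this diff (eager/sdpa, FA2/FA3, s2, and the FA3 hub kernel) are assumptions.

```python
# Sketch of the "Capability flags" mapping above; memberships are assumptions
# except where this diff states them explicitly.

PACKING_CAPABLE = {
    "flash_attention_2",
    "flash_attention_3",
    "kernels-community/flash-attn3",
}
FLASH_LIB_BACKENDS = {
    "flash_attention_2",
    "flash_attention_3",
    "s2",  # loaded as FA2 under the hood
    "kernels-community/flash-attn3",
}


def derive_capability_flags(attn_implementation: str) -> dict:
    """Return the three computed flags for a given backend name."""
    return {
        "attn_supports_packing": attn_implementation in PACKING_CAPABLE,
        "attn_uses_flash_lib": attn_implementation in FLASH_LIB_BACKENDS,
        # Everything except eager/sdpa needs fp16/bf16 embeddings.
        "attn_needs_dtype_cast": attn_implementation not in {"eager", "sdpa"},
    }


print(derive_capability_flags("sdpa"))
# {'attn_supports_packing': False, 'attn_uses_flash_lib': False,
#  'attn_needs_dtype_cast': False}
```

For `sdpa` this reproduces the all-false case asserted in `test_computed_capability_flags_readable` further down.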
@@ -39,20 +82,20 @@ Alternatively, try reinstall or downgrade a version.
 
 :::
 
-### Flash Attention 3
+#### Flash Attention 3
 
 Requirements: Hopper only and CUDA 12.8 (recommended)
 
 ```bash
 git clone https://github.com/Dao-AILab/flash-attention.git
 cd flash-attention/hopper
-
 python setup.py install
 ```
 
-### Flash Attention 4
+#### Flash Attention 4
 
-Requirements: Hopper or Blackwell GPUs
+Requirements: Hopper or Blackwell GPUs. Auto-applied when `attn_uses_flash_lib`
+is true and FA4 is importable.
 
 ```bash
 pip install flash-attn-4
 ```
@@ -63,7 +106,6 @@ Or from source:
 
 ```bash
 git clone https://github.com/Dao-AILab/flash-attention.git
 cd flash-attention/flash_attn/cute
-
 pip install -e .
 
 # FA2's flash_attn package includes a cute/ stub that shadows FA4.
@@ -86,93 +128,113 @@ and falls back to FA2/3.
 
 :::
 
-For more details: [flash-attention/flash_attn/cute](https://github.com/Dao-AILab/flash-attention/tree/main/flash_attn/cute)
-
 ### AMD
 
-Requirements: ROCm 6.0 and above.
+Requirements: ROCm 6.0 and above. See
+[Flash Attention AMD docs](https://github.com/Dao-AILab/flash-attention/tree/main?tab=readme-ov-file#amd-rocm-support).
 
-See [Flash Attention AMD docs](https://github.com/Dao-AILab/flash-attention/tree/main?tab=readme-ov-file#amd-rocm-support).
-
-## Flex Attention
-
-A flexible PyTorch API for attention used in combination with `torch.compile`.
+### Flex Attention
 
 ```yaml
-flex_attention: true
-
-# recommended
-torch_compile: true
+attn_implementation: flex_attention
+torch_compile: true # recommended
 ```
 
-::: {.callout-note}
+Requires torch ≥ 2.6. See [PyTorch docs](https://pytorch.org/blog/flexattention/).
 
-We recommend using latest stable version of PyTorch for best performance.
+### SageAttention
 
-:::
-
-For more details: [PyTorch docs](https://pytorch.org/blog/flexattention/)
-
-## SageAttention
-
-Attention kernels with QK Int8 and PV FP16 accumulator.
+Requirements: Ampere, Ada, or Hopper GPUs.
 
 ```yaml
-sage_attention: true
+attn_implementation: sage
 ```
 
-Requirements: Ampere, Ada, or Hopper GPUs
-
 ```bash
 pip install sageattention==2.2.0 --no-build-isolation
 ```
 
 ::: {.callout-warning}
 
-Only LoRA/QLoRA recommended at the moment. We found loss drop to 0 for full finetuning. See [GitHub Issue](https://github.com/thu-ml/SageAttention/issues/198).
+Only LoRA/QLoRA recommended. Full finetuning has been observed to drop loss to 0. See
+[GitHub Issue](https://github.com/thu-ml/SageAttention/issues/198).
 
 :::
 
-For more details: [Sage Attention](https://github.com/thu-ml/SageAttention)
+For more details: [Sage Attention](https://github.com/thu-ml/SageAttention).
 
-::: {.callout-note}
-
-We do not support SageAttention 3 at the moment. If you are interested on adding this or improving SageAttention implementation, please make an Issue.
-
-:::
-
-
-## xFormers
+### xFormers
 
 ```yaml
-xformers_attention: true
+attn_implementation: xformers
 ```
 
 ::: {.callout-tip}
 
-We recommend using with Turing GPUs or below (such as on Colab).
+Recommended for Turing GPUs or below (e.g. Colab T4).
 
 :::
 
-For more details: [xFormers](https://github.com/facebookresearch/xformers)
-
-## Shifted Sparse Attention
+### Shifted Sparse Attention
 
 ::: {.callout-warning}
 
-We plan to deprecate this! If you use this feature, we recommend switching to methods above.
+Planned for deprecation. Prefer one of the backends above.
 
 :::
 
-Requirements: LLaMA model architecture
+Requirements: LLaMA model architecture. Loaded as FA2 under the hood and
+patched to implement shifted-sparse attention. Does not support sample packing.
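The packing caveat is the practical reason to move packed training jobs off `s2`: a packed run has to sit on a packing-capable backend. A rough sketch, with a placeholder `base_model` (per the table above, `flash_attention_2` supports packing, `eager`/`sdpa` only warn, and `s2` combined with packing is rejected outright):

```yaml
base_model: NousResearch/Llama-2-7b-hf  # placeholder
sample_packing: true
attn_implementation: flash_attention_2  # packing-capable backend
```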
 
 ```yaml
-flash_attention: true
-s2_attention: true
+attn_implementation: s2
 ```
 
-::: {.callout-tip}
+### FP8
 
-No sample packing support!
+torchao low-precision attention. Loaded as SDPA and patched post-load.
+
+Requirements: SM90+ (Hopper/Blackwell), PyTorch ≥ 2.11, torchao ≥ 0.17,
+flash-attn with FA3. KV caching must be disabled.
+
+```yaml
+attn_implementation: fp8
+```
+
+### Hub kernels
+
+```yaml
+attn_implementation: kernels-community/flash-attn3
+```
+
+Passed through to `transformers`; axolotl does not install the kernel itself.
+For recognized hub paths the capability flags are set automatically; for
+arbitrary paths axolotl uses conservative defaults (`attn_supports_packing=False`,
+`attn_uses_flash_lib=False`).
+
+## Migrating from legacy boolean flags
+
+The following legacy config fields are **deprecated** and will be removed in a
+future release. Each emits a `DeprecationWarning` when set and is stripped from
+the validated config.
+
+| Legacy | Canonical |
+|---|---|
+| `flash_attention: true` | `attn_implementation: flash_attention_2` |
+| `sdp_attention: true` | `attn_implementation: sdpa` |
+| `xformers_attention: true` | `attn_implementation: xformers` |
+| `flex_attention: true` | `attn_implementation: flex_attention` |
+| `sage_attention: true` | `attn_implementation: sage` |
+| `s2_attention: true` | `attn_implementation: s2` |
+| `eager_attention: true` | `attn_implementation: eager` |
+
+Combining `attn_implementation` with a legacy flag (e.g. `attn_implementation:
+flash_attention_2` **and** `flash_attention: true`) raises a validation
+error — pick one.
+
+::: {.callout-note}
+
+Existing example configs under `examples/` still use the legacy flags. They
+continue to work with a deprecation warning; they will be migrated in a
+follow-up pass.
 
 :::
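For a concrete before/after, the first row of the migration table translates to the following pair of config fragments (the `base_model` value is only a placeholder):

```yaml
# Before: legacy boolean flag (still accepted, but emits a DeprecationWarning)
base_model: NousResearch/Llama-2-7b-hf
flash_attention: true

# After: canonical field
base_model: NousResearch/Llama-2-7b-hf
attn_implementation: flash_attention_2
```

Keeping both fields at once is the one combination that hard-fails instead of warning.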
diff --git a/tests/test_attn_implementation.py b/tests/test_attn_implementation.py
index 68282a59b..44628fd33 100644
--- a/tests/test_attn_implementation.py
+++ b/tests/test_attn_implementation.py
@@ -5,10 +5,14 @@ Covers the Phase 1 contract:
 - Legacy boolean flags are mapped to the canonical value, warned on, and stripped.
 - Canonical `attn_implementation` + legacy flag raises.
 - Capability flags are computed from `attn_implementation`.
+
+Plus Phase 2 gap fixes and full-model validation behaviour.
 """
 
 import pytest
 
+from axolotl.utils.config import validate_config
+from axolotl.utils.dict import DictDefault
 from axolotl.utils.schemas.config import AxolotlInputConfig
 from axolotl.utils.schemas.enums import (
     ATTN_IMPLS_SUPPORTING_PACKING,
@@ -267,3 +271,138 @@ class TestAttentionRegistration:
 
         register_sage_attn()
         assert ALL_ATTENTION_FUNCTIONS["flash_attention_2"] is original_fa2
+
+
+class TestValidatedConfig:
+    """Exercise the full validator chain on `AxolotlInputConfig(**data)`.
+
+    Classmethod tests above cover the normalizer in isolation. These tests
+    verify that `model_validator(mode="before")` ordering works under the real
+    MRO chain — specifically that legacy flags are stripped, the computed
+    capability fields are readable on the validated instance, and
+    `attn_supports_packing`/`attn_uses_flash_lib` aren't overridable from YAML.
+    """
+
+    def test_legacy_flag_stripped_on_validated_cfg(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(flash_attention=True)
+        validated = validate_config(cfg)
+        assert validated.attn_implementation == "flash_attention_2"
+        # Legacy flag must not survive to the validated DictDefault
+        # (normalizer pops it, model_dump excludes Nones).
+        assert "flash_attention" not in dict(validated)
+
+    def test_canonical_name_passes_through(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(attn_implementation="flash_attention_3")
+        validated = validate_config(cfg)
+        assert validated.attn_implementation == "flash_attention_3"
+        assert validated.attn_uses_flash_lib is True
+        assert validated.attn_supports_packing is True
+
+    def test_computed_capability_flags_readable(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(attn_implementation="sdpa")
+        validated = validate_config(cfg)
+        assert validated.attn_implementation == "sdpa"
+        assert validated.attn_supports_packing is False
+        assert validated.attn_uses_flash_lib is False
+        assert validated.attn_needs_dtype_cast is False
+
+    def test_capability_flags_not_overridable_from_yaml(self, min_base_cfg):
+        """YAML attempts to override a computed field must not win."""
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="eager", attn_uses_flash_lib=True
+        )
+        validated = validate_config(cfg)
+        # The computed field reflects the backend, not the YAML input.
+        assert validated.attn_uses_flash_lib is False
+
+    def test_short_form_alias_rejected_on_full_validation(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(attn_implementation="flash")
+        with pytest.raises(ValueError, match="is not accepted"):
+            validate_config(cfg)
+
+    def test_canonical_plus_legacy_rejected_on_full_validation(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="flash_attention_2", flash_attention=True
+        )
+        with pytest.raises(ValueError, match="cannot be combined with legacy"):
+            validate_config(cfg)
+
+    def test_s2_plus_flash_maps_to_s2_on_full_validation(self, min_base_cfg):
+        """The inherited `check_attention_fields` mixin used to raise here;
+        after Phase 1 it's removed and the normalizer owns the priority."""
+        cfg = min_base_cfg | DictDefault(s2_attention=True, flash_attention=True)
+        validated = validate_config(cfg)
+        assert validated.attn_implementation == "s2"
+
+    def test_hub_kernel_on_full_validation(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="kernels-community/flash-attn3"
+        )
+        validated = validate_config(cfg)
+        assert validated.attn_implementation == "kernels-community/flash-attn3"
+        assert validated.attn_uses_flash_lib is True
+        assert validated.attn_supports_packing is True
+
+
+class TestPhase2GapFixes:
+    """Regression tests for the validator gaps closed in Phase 2."""
+
+    def test_sample_packing_with_eager_warns(self, min_base_cfg, caplog):
+        import logging
+
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="eager", sample_packing=True
+        )
+        with caplog.at_level(logging.WARNING):
+            validate_config(cfg)
+        assert any(
+            "does not handle cross-sample decontamination" in r.message
+            for r in caplog.records
+        )
+
+    def test_sample_packing_with_sdpa_warns(self, min_base_cfg, caplog):
+        import logging
+
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="sdpa", sample_packing=True
+        )
+        with caplog.at_level(logging.WARNING):
+            validate_config(cfg)
+        assert any(
+            "does not handle cross-sample decontamination" in r.message
+            for r in caplog.records
+        )
+
+    def test_sample_packing_with_flash_does_not_warn(self, min_base_cfg, caplog):
+        import logging
+
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="flash_attention_2", sample_packing=True
+        )
+        with caplog.at_level(logging.WARNING):
+            validate_config(cfg)
+        assert not any(
+            "does not handle cross-sample decontamination" in r.message
+            for r in caplog.records
+        )
+
+    def test_sample_packing_with_s2_raises(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(attn_implementation="s2", sample_packing=True)
+        with pytest.raises(
+            ValueError, match="shifted-sparse attention does not currently support"
+        ):
+            validate_config(cfg)
+
+    def test_scaling_softmax_without_flex_raises(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="flash_attention_2", scaling_softmax=True
+        )
+        with pytest.raises(ValueError, match="scaling_softmax requires flex"):
+            validate_config(cfg)
+
+    def test_scaling_softmax_with_flex_passes(self, min_base_cfg):
+        cfg = min_base_cfg | DictDefault(
+            attn_implementation="flex_attention", scaling_softmax=True
+        )
+        validated = validate_config(cfg)
+        assert validated.attn_implementation == "flex_attention"
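Two further cases implied by the documentation half of this diff could be exercised with the same fixtures. The sketch below reuses the module's existing imports and the `min_base_cfg` fixture; the hub path `some-org/custom-attn` is made up, and the expected conservative defaults come from the note in `docs/attention.qmd` rather than from existing tests, so treat the assertions as assumptions.

```python
class TestCapabilityFlagSketches:
    """Sketches of cases described in docs/attention.qmd; assumptions noted inline."""

    def test_dtype_cast_flag_set_for_flash(self, min_base_cfg):
        # Docs: every backend except eager/sdpa requires fp16/bf16 embeddings.
        cfg = min_base_cfg | DictDefault(attn_implementation="flash_attention_2")
        validated = validate_config(cfg)
        assert validated.attn_needs_dtype_cast is True

    def test_unknown_hub_path_gets_conservative_defaults(self, min_base_cfg):
        # Docs: arbitrary hub-kernel paths fall back to conservative capability
        # flags (assumed behaviour; the path below is hypothetical).
        cfg = min_base_cfg | DictDefault(attn_implementation="some-org/custom-attn")
        validated = validate_config(cfg)
        assert validated.attn_supports_packing is False
        assert validated.attn_uses_flash_lib is False
```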