* upgrade to torchao 0.17.0

* chore: lint

* refactor attention handling

* replace legacy attention boolean flags with capability properties

  Replace checks with capability-based properties derived from attn_implementation.
  This separates three concerns that were conflated under flash_attention:

  1. Backend selection -> attn_implementation enum
  2. Packing capability -> attn_supports_packing property
  3. Flash-attn library dependency -> attn_uses_flash_lib property

* compute attn capability flags in normalizer instead of properties

* make attn_implementation the single source of truth

* move attention-dependent validators to mode=after

* migrate remaining consumers to canonical attn_implementation

* expand attention tests + rewrite docs

* migrate example configs to canonical attn_implementation

* update doc snippets + reject gemma4-hybrid with non-FA2 backend

* remove dead gemma4 branch in _set_attention_config

* fix duplicate attn_implementation in gpt-oss yamls and flaky caplog tests

* drop "Phase 2" naming from attn-implementation tests

* regroup attn_implementation tests by feature concern

* clean up verbose comments and remove MD

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>

* fix(collator): pass return_dict=True at apply_chat_template top level for transformers 5.x

  In transformers 5.x, ProcessorMixin.apply_chat_template gained its own
  `return_dict` parameter (defaulting to False). When return_dict=False and
  tokenize=True, the method returns out["input_ids"] directly (a 2-D tensor)
  rather than the full BatchFeature dict.

  The old code placed `return_dict=True` inside processor_kwargs. In
  transformers 5.x those kwargs are forwarded to the underlying processor call
  self(...), where _merge_kwargs silently ignores any key not present in
  MllamaProcessorKwargs (emitting a warning). The outer return_dict therefore
  stayed False, apply_chat_template returned the raw input_ids tensor, and the
  subsequent `batch["input_ids"]` attempted to index a 2-D tensor with the
  9-character string "input_ids", producing:

      IndexError: too many indices for tensor of dimension 2

  The fix is to pass return_dict=True as a top-level keyword argument to
  apply_chat_template (where it is actually consumed) and remove it from
  processor_kwargs (where it was silently dropped). No version guard is needed:
  transformers is pinned to ==5.5.4 in pyproject.toml.

  Adds a unit-level regression test (tests/test_mm_chat_collator.py) that mocks
  the processor to return a raw tensor when apply_chat_template is called
  without top-level return_dict=True, verifying the four invariants:
  process_rows returns a dict, input_ids is 2-D, labels is 2-D, and
  apply_chat_template receives return_dict=True as a top-level kwarg.

  Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_multimodal_dataset
  Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_text_only_dataset

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
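To make the call-shape change concrete, here is a small self-contained sketch. The stub processor, message content, and variable names are illustrative assumptions that only mimic the transformers 5.x behaviour described above; the placement of `return_dict=True` is the part that mirrors the actual fix.

```python
# Sketch only -- a stub standing in for the real Mllama processor.
import torch


class StubProcessor:
    """Mimics ProcessorMixin.apply_chat_template under transformers 5.x:
    unless return_dict=True arrives as a top-level kwarg, only the tokenized
    input_ids tensor is returned."""

    def apply_chat_template(self, messages, tokenize=True, return_dict=False, **processor_kwargs):
        input_ids = torch.ones((1, 8), dtype=torch.long)
        if not return_dict:
            return input_ids  # bare 2-D tensor, the shape that broke the collator
        return {"input_ids": input_ids, "attention_mask": torch.ones_like(input_ids)}


processor = StubProcessor()
messages = [{"role": "user", "content": "describe the image"}]

# Effective old behaviour: return_dict was buried in processor_kwargs and dropped by
# _merge_kwargs, so the top-level default of False applied and a raw tensor came back.
broken = processor.apply_chat_template(messages, tokenize=True)
print(type(broken))  # <class 'torch.Tensor'>; indexing it with "input_ids" raised the IndexError above

# Fixed call shape: return_dict=True is passed where apply_chat_template consumes it.
batch = processor.apply_chat_template(messages, tokenize=True, return_dict=True)
print(batch["input_ids"].shape)  # torch.Size([1, 8])
```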
* fix(collator): process_rows returns dict (BatchFeature) shape

  Two related changes for the multimodal chat collator under transformers 5.x:

  1. Wrap the apply_chat_template result in dict(...) so process_rows returns a
     plain dict rather than a BatchFeature instance. BatchFeature is a Mapping
     but not a dict; downstream code that did
     batch["labels"] = self.processing_strategy.process_labels(batch["input_ids"])
     would index on a tensor when the result wasn't dict-shaped, raising
     IndexError: too many indices for tensor of dimension 2.

  2. Soften the regression test's contract from `dict` to `Mapping` so it
     exercises the actual semantic guarantee (key/value access) rather than the
     implementation detail (dict vs BatchFeature).

  The test guards against the original transformers 5.x breakage, where
  apply_chat_template's effective return_dict default went from True to False.
  Includes a regression test under tests/test_mm_chat_collator.py.

  Bug surfaced via swarm dispatch task_01KQHPNAYD8XARSNSDJVW1GPF6 against
  attn-implementation-refactor; squash-merged from agent commits 4de886fd + dc9fcf4f.

Signed-off-by: Wing Lian <wing@axolotl.ai>

---------

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
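The Mapping-versus-dict distinction behind change 1, and the invariants the softened test checks, fit in a few self-contained lines. The tensors and the process_labels stand-in below are made up for illustration; the real regression test lives in tests/test_mm_chat_collator.py.

```python
from collections.abc import Mapping

import torch
from transformers import BatchFeature

# BatchFeature is a Mapping (it subclasses UserDict) but not a plain dict.
features = BatchFeature({"input_ids": torch.ones((1, 8), dtype=torch.long)})
assert isinstance(features, Mapping)
assert not isinstance(features, dict)

# process_rows-style wrapping: dict(...) keeps key/value access but hands
# downstream code a plain dict, so batch["labels"] = ... assigns a new key
# instead of ever indexing into a tensor.
batch = dict(features)
batch["labels"] = batch["input_ids"].clone()  # stand-in for process_labels(...)

# The regression test's contract, paraphrased: assert the semantic guarantees
# (a Mapping with 2-D input_ids and labels), not the dict-vs-BatchFeature detail.
assert isinstance(batch, Mapping)
assert batch["input_ids"].ndim == 2
assert batch["labels"].ndim == 2
```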
# EBFT Structured Mode: Qwen3.5-4B (hybrid linear attention)
#
# Qwen3.5 uses hybrid attention: linear attention (conv1d) on 3/4 of layers,
# full attention every 4th layer. This tests EBFT compatibility.
#
# Prerequisites:
# 1. Start vLLM on GPU 0:
#      CUDA_VISIBLE_DEVICES=0 trl vllm-serve --model Qwen/Qwen3.5-4B \
#        --gpu-memory-utilization 0.5 --max-model-len 2048 --enforce-eager
#
# 2. Run training on GPU 1:
#      CUDA_VISIBLE_DEVICES=1 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
#        axolotl train examples/ebft/qwen35-4b-ebft-structured.yaml

base_model: Qwen/Qwen3.5-4B

rl: ebft

ebft:
  feature_layers: [0.25, 0.5, 0.75]
  embed_method: last_token
  use_whitening: false
  alignment_coef: 1.0
  diversity_coef: 1.0
  ce_coef: 0.0

trl:
  num_generations: 4
  max_completion_length: 256
  temperature: 0.7
  use_vllm: true
  vllm_server_host: 0.0.0.0
  vllm_server_port: 8000
  scale_rewards: true
  loss_type: grpo
  epsilon: 0.2
  generation_kwargs:
    stop_token_ids: [248044, 248046] # <|endoftext|>, <|im_end|>
  chat_template_kwargs:
    enable_thinking: false # disable Qwen3.5 thinking mode for shorter completions

datasets:
  - path: nvidia/OpenCodeInstruct
    type: ebft_opencode.transform
    split: train[:500]

sequence_len: 1024
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
max_steps: 10

learning_rate: 5.0e-6
optimizer: adamw_torch_fused
lr_scheduler: cosine
warmup_steps: 3
weight_decay: 0.01

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.0
lora_target_modules: ".*\\.layers\\.(3|7|11|15|19|23|27|31)\\.self_attn\\.(q|k|v|o)_proj|.*\\.mlp\\.(gate|up|down)_proj"

bf16: auto
attn_implementation: flash_attention_2
gradient_checkpointing: true

special_tokens:
  pad_token: "<|endoftext|>"

val_set_size: 0.0
output_dir: ./outputs/ebft-qwen35-4b-structured

wandb_project: ebft
logging_steps: 1
save_steps: 50