* upgrade to torchao 0.17.0

* chore: lint

* refactor attention handling

* replace legacy attention boolean flags with capability properties

  Replace checks with capability-based properties derived from
  attn_implementation. This separates three concerns that were conflated
  under flash_attention (sketched below):

  1. Backend selection -> attn_implementation enum
  2. Packing capability -> attn_supports_packing property
  3. Flash-attn library dependency -> attn_uses_flash_lib property

* compute attn capability flags in normalizer instead of properties

* make attn_implementation the single source of truth

* move attention-dependent validators to mode=after

* migrate remaining consumers to canonical attn_implementation

* expand attention tests + rewrite docs

* migrate example configs to canonical attn_implementation

* update doc snippets + reject gemma4-hybrid with non-FA2 backend

* remove dead gemma4 branch in _set_attention_config

* fix duplicate attn_implementation in gpt-oss yamls and flaky caplog tests

* drop "Phase 2" naming from attn-implementation tests

* regroup attn_implementation tests by feature concern

* clean up verbose comments and remove MD

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>

* fix(collator): pass return_dict=True at apply_chat_template top level for transformers 5.x

  In transformers 5.x, ProcessorMixin.apply_chat_template gained its own
  `return_dict` parameter (defaulting to False). When return_dict=False and
  tokenize=True, the method returns out["input_ids"] directly (a 2-D tensor)
  rather than the full BatchFeature dict.

  The old code placed `return_dict=True` inside processor_kwargs. In
  transformers 5.x those kwargs are forwarded to the underlying processor
  call self(...), where _merge_kwargs silently ignores any key not present
  in MllamaProcessorKwargs (emitting a warning). The outer return_dict
  therefore stayed False, apply_chat_template returned the raw input_ids
  tensor, and the subsequent `batch["input_ids"]` attempted to index a 2-D
  tensor with the 9-character string "input_ids", producing:

      IndexError: too many indices for tensor of dimension 2

  The fix is to pass return_dict=True as a top-level keyword argument to
  apply_chat_template (where it is actually consumed) and remove it from
  processor_kwargs (where it was silently dropped). No version guard is
  needed: transformers is pinned to ==5.5.4 in pyproject.toml.

  Adds a unit-level regression test (tests/test_mm_chat_collator.py) that
  mocks the processor to return a raw tensor when apply_chat_template is
  called without top-level return_dict=True, verifying four invariants:
  process_rows returns a dict, input_ids is 2-D, labels is 2-D, and
  apply_chat_template receives return_dict=True as a top-level kwarg.

  Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_multimodal_dataset
  Fixes: tests/e2e/test_llama_vision.py::TestLlamaVision::test_lora_llama_vision_text_only_dataset

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
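As a self-contained illustration of that failure mode and the fix, here is a minimal sketch; FakeProcessor is a stand-in written for this example, not the real transformers or axolotl code:

```python
import torch


class FakeProcessor:
    """Mimics transformers 5.x ProcessorMixin.apply_chat_template:
    return_dict defaults to False, and with tokenize=True the method then
    returns the raw 2-D input_ids tensor instead of a mapping."""

    def apply_chat_template(self, conversation, tokenize=True, return_dict=False, **kwargs):
        input_ids = torch.zeros((2, 8), dtype=torch.long)  # dummy tokenization
        if not return_dict:
            return input_ids  # bare tensor: the 5.x surprise
        return {"input_ids": input_ids, "attention_mask": torch.ones_like(input_ids)}


processor = FakeProcessor()

# Buggy call shape: return_dict buried inside processor_kwargs never reaches
# the top-level parameter, so a bare tensor comes back, and string-indexing it
# is what raised "IndexError: too many indices for tensor of dimension 2".
bad = processor.apply_chat_template([], tokenize=True)
assert isinstance(bad, torch.Tensor)

# Fixed call shape: return_dict=True at the top level, where
# apply_chat_template actually consumes it, and kept out of processor_kwargs.
good = processor.apply_chat_template([], tokenize=True, return_dict=True)
assert good["input_ids"].dim() == 2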
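And, going back to the capability-property refactor referenced in the bullet list above, a rough sketch of the split it describes. Only the names attn_implementation, attn_supports_packing, and attn_uses_flash_lib come from the commits; the enum values match transformers' attn_implementation identifiers, and the exact per-backend capability mapping is an assumption for illustration:

```python
from enum import Enum


class AttnImplementation(str, Enum):
    """Backend selection as an enum rather than boolean flags."""

    EAGER = "eager"
    SDPA = "sdpa"
    FLASH_ATTENTION_2 = "flash_attention_2"


def normalize_attn_flags(attn_implementation: AttnImplementation) -> dict:
    """Derive capability flags from the single source of truth.

    Per the commit list, the real code computes these in the config
    normalizer rather than as properties; the mapping below is assumed.
    """
    is_fa2 = attn_implementation is AttnImplementation.FLASH_ATTENTION_2
    return {
        "attn_supports_packing": is_fa2,  # packing needs a varlen-capable backend
        "attn_uses_flash_lib": is_fa2,    # only FA2 imports the flash-attn package
    }


assert normalize_attn_flags(AttnImplementation.SDPA) == {
    "attn_supports_packing": False,
    "attn_uses_flash_lib": False,
}
```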
* fix(collator): process_rows returns dict (BatchFeature) shape

  Two related changes for the multimodal chat collator under transformers 5.x:

  1. Wrap the apply_chat_template result in dict(...) so process_rows returns
     a plain dict rather than a BatchFeature instance. BatchFeature is a
     Mapping but not a dict; downstream code that did
     batch["labels"] = self.processing_strategy.process_labels(batch["input_ids"])
     would index on a tensor when the result wasn't dict-shaped, raising:
     IndexError: too many indices for tensor of dimension 2

  2. Soften the regression test's contract from `dict` to `Mapping` so it
     exercises the actual semantic guarantee (key/value access) rather than
     the implementation detail (dict vs BatchFeature).

  The test guards against the original transformers 5.x breakage where
  apply_chat_template's return_dict default went from True to False.
  Includes a regression test under tests/test_mm_chat_collator.py.

  Bug surfaced via swarm dispatch task_01KQHPNAYD8XARSNSDJVW1GPF6 against
  attn-implementation-refactor; squash-merged from agent commits
  4de886fd + dc9fcf4f.

Signed-off-by: Wing Lian <wing@axolotl.ai>

---------

Signed-off-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Axolotl Swarm <no-reply@axolotl.ai>
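A rough sketch of the regression test's shape: the real test lives in tests/test_mm_chat_collator.py, and the mock wiring and names here are illustrative rather than the actual test code:

```python
from collections.abc import Mapping
from unittest.mock import MagicMock

import torch


def test_process_rows_top_level_return_dict():
    processor = MagicMock()

    def fake_apply_chat_template(conversation, tokenize=True, return_dict=False, **kwargs):
        input_ids = torch.zeros((2, 8), dtype=torch.long)
        if not return_dict:
            return input_ids  # raw tensor, as under transformers 5.x
        return {"input_ids": input_ids}

    processor.apply_chat_template.side_effect = fake_apply_chat_template

    # Stand-in for the collator's process_rows path: dict(...) wrap plus labels.
    batch = dict(processor.apply_chat_template([], tokenize=True, return_dict=True))
    batch["labels"] = batch["input_ids"].clone()

    # The four invariants from the commit message:
    assert isinstance(batch, Mapping)         # process_rows returns a mapping
    assert batch["input_ids"].dim() == 2      # input_ids is 2-D
    assert batch["labels"].dim() == 2         # labels is 2-D
    _, call_kwargs = processor.apply_chat_template.call_args
    assert call_kwargs.get("return_dict") is True  # top-level return_dict


test_process_rows_top_level_return_dict()
```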
# SwanLab LoRA Training Example with Performance Profiling
#
# This example demonstrates standard LoRA fine-tuning with SwanLab integration
# for performance profiling and optimization.
#
# Features enabled:
# - SwanLab experiment tracking
# - Performance profiling (training step, forward/backward pass timing)
# - Real-time metrics visualization
#
# To run:
#   export SWANLAB_API_KEY=your-api-key
#   accelerate launch -m axolotl.cli.train examples/swanlab/lora-swanlab-profiling.yml

# Model Configuration
base_model: NousResearch/Llama-3.2-1B

# Dataset Configuration
datasets:
  - path: teknium/GPT4-LLM-Cleaned
    type: alpaca

val_set_size: 0.1
output_dir: ./outputs/lora-swanlab-profiling-out

# LoRA Configuration
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

# Training Configuration
sequence_len: 2048
sample_packing: true
eval_sample_packing: true

micro_batch_size: 2
gradient_accumulation_steps: 2
num_epochs: 1

# Optimization
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
warmup_ratio: 0.1
weight_decay: 0.0

# Precision
bf16: auto
tf32: false

# Performance
gradient_checkpointing: true
attn_implementation: flash_attention_2

# Checkpointing and Logging
logging_steps: 1
evals_per_epoch: 4
saves_per_epoch: 1

# Loss Monitoring
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

special_tokens:
  pad_token: "<|end_of_text|>"

# ============================================================================
# SwanLab Integration
# ============================================================================

plugins:
  - axolotl.integrations.swanlab.SwanLabPlugin

# Basic SwanLab Configuration
use_swanlab: true
swanlab_project: lora-profiling
swanlab_experiment_name: llama-3.2-1b-profiling-demo
swanlab_description: "LoRA fine-tuning with performance profiling"
swanlab_mode: cloud  # Options: cloud, local, offline, disabled

# SwanLab Authentication
# Recommended: set via environment variable
#   export SWANLAB_API_KEY=your-api-key
# Or set in config (less secure):
#   swanlab_api_key: your-api-key

# Optional: team workspace
# swanlab_workspace: my-ml-team

# ============================================================================
# Performance Profiling
# ============================================================================
#
# SwanLab automatically profiles trainer methods when enabled.
# Profiling metrics appear in the SwanLab dashboard under the "profiling/" namespace.
#
# Built-in profiling:
# - Minimal overhead (< 0.1% per step)
# - High-precision timing (microsecond accuracy)
# - Exception-safe (logs duration even if a method fails)
#
# View profiling metrics in the SwanLab dashboard:
#   profiling/Time taken: AxolotlTrainer.training_step
#   profiling/Time taken: AxolotlTrainer.compute_loss
#   profiling/Time taken: AxolotlTrainer.prediction_step
#
# For custom profiling in your own trainer, see:
#   examples/swanlab/custom_trainer_profiling.py

# Completion logging is disabled for non-RLHF trainers
swanlab_log_completions: false  # Only works with DPO/KTO/ORPO/GRPO

# ============================================================================
# Optional: Compare with Multiple Runs
# ============================================================================
#
# To compare profiling metrics across different configurations:
#
# 1. Run a baseline without flash attention:
#      swanlab_experiment_name: llama-3.2-1b-no-flash-attn
#      attn_implementation: eager
#
# 2. Run with gradient checkpointing:
#      swanlab_experiment_name: llama-3.2-1b-grad-checkpoint
#      gradient_checkpointing: true
#
# 3. Run with both:
#      swanlab_experiment_name: llama-3.2-1b-optimized
#      attn_implementation: flash_attention_2
#      gradient_checkpointing: true
#
# Then compare the profiling metrics in the SwanLab dashboard to see the
# performance impact.
# ============================================================================
# Optional: Lark (Feishu) Team Notifications
# ============================================================================
#
# Get notified when profiling experiments complete:
#
# swanlab_lark_webhook_url: https://open.feishu.cn/open-apis/bot/v2/hook/xxxxxxxxxx
# swanlab_lark_secret: your-webhook-secret

# ============================================================================
# Profiling Best Practices
# ============================================================================
#
# 1. Run multiple epochs to see profiling trends over time
# 2. Ignore the first ~10 steps (warmup period; they run slower)
# 3. Look for outliers (steps that take significantly longer)
# 4. Compare profiling metrics before/after optimization changes
# 5. Monitor per-rank profiling in distributed training
#
# Common bottlenecks to profile:
# - training_step: overall step time (should be consistent)
# - compute_loss: loss computation (scales with sequence length)
# - prediction_step: evaluation time (can be slow for large validation sets)
#
# If you see inconsistent timing:
# - Check for data-loading bottlenecks
# - Monitor GPU utilization (the run may be CPU-bound)
# - Check for gradient-accumulation effects
# - Verify CUDA kernel synchronization

# ============================================================================
# Disable WandB if you're migrating from it
# ============================================================================

# wandb_project:
# use_wandb: false