@@ -1112,21 +1118,24 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
# subsequences, or set to 4 to split into four equal-sized subsequences.
# See https://axolotl-ai-cloud.github.io/axolotl/docs/sequence_parallelism.html for more details.
sequence_parallel_degree:
-
-# Path to torch distx for optim 'adamw_anyprecision'
-torchdistx_path:
+# Optional; strides across the key dimension. Larger values use more memory but should make training faster.
+# Must evenly divide the number of KV heads in your model.
+heads_k_stride: 1
-# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize
-pretraining_dataset:
+# Path to torch distx for optim 'adamw_anyprecision'
+torchdistx_path:
-# Debug mode
-debug:
+# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize
+pretraining_dataset:
-# Seed
-seed:
+# Debug mode
+debug:
-# Allow overwrite yml config using from cli
-strict:
+# Seed
+seed:
+
+# Allow overwrite yml config using from cli
+strict:
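The hunk above notes that heads_k_stride must evenly divide the model's number of KV heads. A minimal sketch (not Axolotl's own validation code; the helper name is hypothetical) of what that constraint implies for the legal settings:

```python
# Illustration only: because heads_k_stride must evenly divide the number of
# KV heads, the valid settings are exactly the divisors of that head count.
def valid_heads_k_strides(num_kv_heads: int) -> list[int]:
    return [s for s in range(1, num_kv_heads + 1) if num_kv_heads % s == 0]

# A grouped-query-attention model with 8 KV heads (e.g. Llama-3-8B-style):
print(valid_heads_k_strides(8))  # → [1, 2, 4, 8]
```

The default of 1 is always safe; larger divisors trade memory for speed as the comment in the config describes.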
diff --git a/docs/custom_integrations.html b/docs/custom_integrations.html
index 83b368bf8..1a6afed43 100644
--- a/docs/custom_integrations.html
+++ b/docs/custom_integrations.html
@@ -390,6 +390,12 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
Custom Integrations
+
+
+We support sequence parallelism (SP) via the
+ring-flash-attention project. This
+allows one to split up sequences across GPUs, which is useful in the event that a
+single sequence causes OOM errors during model training.
+
+First, install ring-flash-attn, recommended via pip install axolotl[ring-flash-attn],
+or from source with pip install .[ring-flash-attn].
+
+Your Axolotl YAML config should contain the following lines:
+
+sequence_parallel_degree: 4 # Split each sequence into 4 parts, one per GPU
+flash_attention: true # Required with sequence parallelism
+
+# Optional; strides across the key dimension. Larger values use more memory but should make training faster.
+heads_k_stride: 1
@@ -493,7 +499,9 @@ through a ring communication pattern.
Configuration
To enable sequence parallelism, add the following to your configuration file:
# Set to a divisor (> 1) of the number of GPUs available
-sequence_parallel_degree: 4 # Split sequences across 4 GPUs
+sequence_parallel_degree: 4 # Split sequences across 4 GPUs
+# Optional; strides across the key dimension. Larger values use more memory but should make training faster.
+heads_k_stride: 1
The sequence_parallel_degree should be a divisor of the total number of GPUs. For example:
With 8 GPUs, valid values would be 2, 4, or 8
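The divisor rule above can be sketched as a one-liner (illustrative helper, not part of Axolotl's API):

```python
# Illustration only: sequence_parallel_degree must be a divisor (> 1) of the
# number of GPUs, so with 8 GPUs the valid degrees are 2, 4, and 8.
def valid_sp_degrees(num_gpus: int) -> list[int]:
    return [d for d in range(2, num_gpus + 1) if num_gpus % d == 0]

print(valid_sp_degrees(8))  # → [2, 4, 8]
```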
@@ -531,12 +539,17 @@ through a ring communication pattern.
Example
-
# Example config with sequence parallelism
-base_model: meta-llama/Llama-3-8B-Instruct
-sequence_len:8192
-sequence_parallel_degree:2 # Split each sequence into 4 parts
-flash_attention:true # Required with sequence parallelism
-...
+
+base_model: meta-llama/Llama-3-8B-Instruct
+sequence_len: 8192
+
+...
+
+sequence_parallel_degree: 4 # Split each sequence into 4 parts, one per GPU
+flash_attention: true # Required with sequence parallelism
+# Optional; strides across the key dimension. Larger values use more memory but should make training faster.
+heads_k_stride: 1
+
+...
This will train the Llama 3 8B model with 8K context length, with each sequence split
into 4 subsequences of length 2048 across 4 GPUs.
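The arithmetic in the example config above (8192-token sequences, degree 4) can be sketched as follows; this is an illustration of the sizing, not Axolotl code, and the helper name is hypothetical:

```python
# Illustration only: under sequence parallelism each GPU holds
# sequence_len / sequence_parallel_degree tokens of every sequence.
def subsequence_len(sequence_len: int, sp_degree: int, num_gpus: int) -> int:
    assert num_gpus % sp_degree == 0, "degree must divide the GPU count"
    assert sequence_len % sp_degree == 0, "sequence must split evenly"
    return sequence_len // sp_degree

# The example config: an 8192-token context split 4 ways, one part per GPU
print(subsequence_len(8192, sp_degree=4, num_gpus=4))  # → 2048
```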
diff --git a/search.json b/search.json
index e6e3e9a31..af982f7a4 100644
--- a/search.json
+++ b/search.json
@@ -152,7 +152,7 @@
"href": "docs/config.html",
"title": "Config Reference",
"section": "",
- "text": "# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files\n# This can also be a relative path to a model on disk\nbase_model: ./llama-7b-hf\n# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)\nbase_model_ignore_patterns:\n# If the base_model repo on hf hub doesn't include configuration .json files,\n# You can set that here, or leave this empty to default to base_model\nbase_model_config: ./llama-7b-hf\n# You can specify to choose a specific model revision from huggingface hub\nrevision_of_model:\n# Optional tokenizer configuration path in case you want to use a different tokenizer\n# than the one defined in the base model\ntokenizer_config:\n# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too\nmodel_type: AutoModelForCausalLM\n# Corresponding tokenizer for the model AutoTokenizer is a good choice\ntokenizer_type: AutoTokenizer\n# Trust remote code for untrusted source\ntrust_remote_code:\n# use_fast option for tokenizer loading from_pretrained, default to True\ntokenizer_use_fast:\n# Whether to use the legacy tokenizer setting, defaults to True\ntokenizer_legacy:\n# Resize the model embeddings when new tokens are added to multiples of 32\n# This is reported to improve training speed on some models\nresize_token_embeddings_to_32x:\n# Optional[bool] Whether to shrink the embeddings to len(tokenizer). By default, we won't shrink.\nshrink_embeddings:\n# Whether to load the model with randomly initialized weights. 
Useful for\n# pre-training a model from scratch or debugging purposes.\nrandom_init_weights:\n\n# (Internal use only)\n# Used to identify which the model is based on\nis_falcon_derived_model:\nis_llama_derived_model:\nis_qwen_derived_model:\n# Please note that if you set this to true, `padding_side` will be set to \"left\" by default\nis_mistral_derived_model:\n\n# optional overrides to the base model configuration\noverrides_of_model_config:\n # RoPE Scaling https://github.com/huggingface/transformers/pull/24653\n rope_scaling:\n type: # linear | dynamic\n factor: # float\n\n# optional overrides the base model loading from_pretrained\noverrides_of_model_kwargs:\n # use_cache: False\n\n# optional overrides to the bnb 4bit quantization configuration\n# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig\nbnb_config_kwargs:\n # These are default values\n llm_int8_has_fp16_weight: false\n bnb_4bit_quant_type: nf4\n bnb_4bit_use_double_quant: true\n\n\n# Whether you are training a 4-bit GPTQ quantized model\ngptq: true\n\n# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer\nload_in_8bit: true\n# Use bitsandbytes 4 bit\nload_in_4bit:\n\n# Use CUDA bf16\nbf16: true # bool or 'full' for `bf16_full_eval`. require >=ampere\n# Use CUDA fp16\nfp16: true\n# Use CUDA tf32\ntf32: true # require >=ampere\n\n# No AMP (automatic mixed precision)\nbfloat16: true # require >=ampere\nfloat16: true\n\n# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset\ngpu_memory_limit: 20GiB\n# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge\nlora_on_cpu: true\n\n# List[str]. 
Add plugins to extend the pipeline.\n# See `src/axolotl/integrations` for the available plugins or doc below for more details.\n# https://axolotl-ai-cloud.github.io/axolotl/docs/custom_integrations.html\nplugins:\n # - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin\n\n# A list of one or more datasets to finetune the model with\ndatasets:\n # HuggingFace dataset repo | s3://,gs:// path | \"json\" for local dataset, make sure to fill data_files\n - path: vicgalle/alpaca-gpt4\n # The type of prompt to use for training. [alpaca, gpteacher, oasst, reflection]\n type: alpaca # format | format:<prompt_style> (chat/instruct) | <prompt_strategies>.load_<load_fn>\n ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file\n data_files: # Optional[str] path to source data files\n\n shards: # Optional[int] split dataset into N pieces (use with shards_idx)\n shards_idx: # Optional[int] = 0 the index of sharded dataset to use\n\n preprocess_shards: # Optional[int] process dataset in N sequential chunks for memory efficiency (exclusive with `shards`)\n\n name: # Optional[str] name of dataset configuration to load\n train_on_split: train # Optional[str] name of dataset split to load from\n revision: # Optional[str] The specific revision of the dataset to use when loading from the Hugging Face Hub. This can be a commit hash, tag, or branch name. If not specified, the latest version will be used. This parameter is ignored for local datasets.\n trust_remote_code: # Optional[bool] Trust remote code for untrusted source\n\n # Custom user instruction prompt\n - path: repo\n type:\n # The below are defaults. 
only set what's needed if you use a different column name.\n system_prompt: \"\"\n system_format: \"{system}\"\n field_system: system\n field_instruction: instruction\n field_input: input\n field_output: output\n\n # Customizable to be single line or multi-line\n # Use {instruction}/{input} as key to be replaced\n # 'format' can include {input}\n format: |-\n User: {instruction} {input}\n Assistant:\n # 'no_input_format' cannot include {input}\n no_input_format: \"{instruction} \"\n\n # For `completion` datsets only, uses the provided field instead of `text` column\n field:\n\n # Using chat template\n - path: ...\n # Set type to `chat_template` to use this strategy\n type: chat_template\n # Specify the name of the chat template to use\n # The name of the chat template to use for training, following values are supported:\n # - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default.\n # - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n # - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to if the tokenizer does not have a chat template else default to tokenizer. E.g. tokenizer_default_fallback_chatml.\n # - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n chat_template: tokenizer_default\n\n # Custom jinja chat template. 
Used only if `chat_template: jinja` or empty.\n chat_template_jinja:\n\n # Key containing the messages (default: \"messages\")\n field_messages: messages\n\n # Mapping of properties from the input dataset to the chat template.\n # (default: message_property_mappings={'role':'role', 'content':'content'})\n # If a property exists in the template but not in this mapping, the system will attempt\n # to load it directly from the message using the property name as the key.\n # Example: In the mapping below, 'from' is loaded from input dataset and used as 'role',\n # while 'value' is loaded and used as 'content' in the chat template.\n message_property_mappings:\n role: from\n content: value\n # ...\n\n # Optional[Dict[str, List]]. Roles mapping in the messages. The default is:\n roles:\n user: [\"human\", \"user\"]\n assistant: [\"gpt\", \"assistant\"]\n system: [\"system\"]\n tool: [\"tool\"]\n\n # Optional[bool]. Whether to drop the system turn from the dataset. Only works with chat_template.\n # This does not drop the default system message from chat_template if it exists. If you wish to,\n # we recommend using a custom jinja template with the default system message removed or\n # adding a system turn with empty content.\n drop_system_message:\n\n # IMPORTANT: The following fields determine which parts of the conversation to train on.\n # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train\n # See examples at `docs/dataset-formats/conversation.qmd`\n # Note: If the below 4 fields are set to empty, defaults to training only on the last message.\n\n # Optional[List[str]]. Roles to train on. The tokens from these roles will be considered for the loss.\n roles_to_train: [\"assistant\"] # default\n # Optional[str]. Which EOS tokens to train on in the conversation. 
Possible values are:\n # - all: train on all EOS tokens\n # - turn (default): train on the EOS token at the end of each trainable turn\n # - last: train on the last EOS token in the conversation\n # TIP: Please make sure that your `tokenizer.eos_token` is same as EOS/EOT token in template. Otherwise, set `eos_token` under `special_tokens`.\n train_on_eos: last\n # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`.\n message_field_training: training\n # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn.\n # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train).\n message_field_training_detail: train_detail\n\n\n# If false, the datasets will not be shuffled and will keep their original order in `datasets`.\n# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.\nshuffle_merged_datasets: true\n\nDeduplicates datasets and test_datasets with identical entries.\ndataset_exact_deduplication: true\n\n# A list of one or more datasets to eval the model with.\n# You can use either test_datasets, or val_set_size, but not both.\ntest_datasets:\n - path: /workspace/data/eval.jsonl\n ds_type: json\n # You need to specify a split. For \"json\" datasets the default split is called \"train\".\n split: train\n type: completion\n data_files:\n - /workspace/data/eval.jsonl\n\n# use RL training: 'dpo', 'ipo', 'kto', 'simpo', 'orpo', 'grpo'\nrl:\nrl_beta: # Optional[float]. The beta parameter for the RL training.\n\n# dpo\ndpo_use_weighting: # Optional[bool]. Whether to perform weighting.\nrpo_alpha: # Optional[float]. 
Weighting of NLL term in loss from RPO paper.\n\n# orpo\norpo_alpha: 0.1 # Parameter controlling the relative ratio loss weight in the ORPO loss. Passed to `beta` in `ORPOConfig` due to trl mapping.\n\n# kto\nkto_desirable_weight: # Optional[float]. Factor for desirable loss term in KTO loss.\nkto_undesirable_weight: # Optional[float]. Factor for undesirable loss term in KTO loss.\n\n# simpo\ncpo_alpha: 1.0 # Weight of the BC regularizer\nsimpo_gamma: 0.5 # Target reward margin for the SimPO loss\n\n# grpo\ntrl:\n use_vllm: # Optional[bool]. Whether to use VLLM for RL training.\n vllm_device: # Optional[str]. Device to use for VLLM.\n vllm_gpu_memory_utilization: # Optional[float]. GPU memory utilization for VLLM.\n vllm_max_model_len: # Optional[int]. Maximum length of the model for VLLM.\n vllm_dtype: # Optional[str]. Data type for VLLM.\n\n beta: # Optional[float]. Beta parameter for the RL training. Same as `rl_beta`. Use\n max_completion_length: # Optional[int]. Maximum length of the completion for RL training.\n\n reward_funcs: # Optional[list[str]]. List of reward functions to load. Paths must be importable from current dir.\n reward_weights: # Optional[list[float]]. List of reward weights for the reward functions.\n\n num_generations: # Optional[int]. Number of generations to sample.\n log_completions: # Optional[bool]. Whether to log completions.\n\n sync_ref_model: # Optional[bool]. Whether to sync the reference model.\n ref_model_mixup_alpha: # Optional[float]. Mixup alpha for the reference model.\n ref_model_sync_steps: # Optional[int]. Sync steps for the reference model.\n\n\n# reward modelling: `True` or `False`\nreward_model:\n\n# process reward modelling: `True` or `False`\nprocess_reward_model:\n\n# The name of the chat template to use for training, following values are supported:\n# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. 
If the chat template is not available in the tokenizer, it will raise an error. This is the default value.\n# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n# - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer.\n# - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n# The selected chat template will be saved to the tokenizer_config.json for easier inferencing\n# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template.\nchat_template: tokenizer_default\n# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null.\nchat_template_jinja: null\n# Changes the default system message. Currently only supports chatml.\ndefault_system_message: You are a helpful assistant. Please give a long and detailed answer.\n# Axolotl attempts to save the dataset as an arrow after packing the data together so\n# subsequent training attempts load faster, relative path\ndataset_prepared_path: data/last_run_prepared\n# Push prepared dataset to hub\npush_dataset_to_hub: # Optional[str] repo_org/repo_name\n# The maximum number of processes to use while preprocessing your input dataset. 
This defaults to `os.cpu_count()`\n# if not set.\ndataset_processes: # defaults to os.cpu_count() if not set\n# Keep dataset in memory while preprocessing\n# Only needed if cached dataset is taking too much storage\ndataset_keep_in_memory:\n# push checkpoints to hub\nhub_model_id: # private repo path to push finetuned model\n# how to push checkpoints to hub\n# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy\nhub_strategy:\n# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets\n# Required to be true when used in combination with `push_dataset_to_hub`\nhf_use_auth_token: # boolean\n# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.\nval_set_size: 0.04\n# Num shards for whole dataset\ndataset_shard_num:\n# Index of shard to use for whole dataset\ndataset_shard_idx:\n\n# The maximum length of an input to train with, this should typically be less than 2048\n# as most models have a token/context limit of 2048\nsequence_len: 2048\n# Pad inputs so each step uses constant sized buffers\n# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently\npad_to_sequence_len:\n# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommend set to 'true'\nsample_packing:\n# Set to 'false' if getting errors during eval with sample_packing on.\neval_sample_packing:\n# You can set these packing optimizations AFTER starting a training at least once.\n# The trainer will provide recommended values for these values.\nsample_packing_eff_est:\ntotal_num_tokens:\n# Increasing the following values helps with packing, but usually only slightly (<%1.)\n# The number of samples packed at a time.\nsample_packing_group_size: 100000\n# The number of samples which can be packed into one sequence. 
Increase if using a large sequence_len with many short samples.\nsample_packing_bin_size: 200\n# whether to concatenate samples during pretraining\npretraining_sample_concatenation:\n\n# Use batch flattening for speedups when not using sample_packing\nbatch_flattening:\n\n# Passed through to transformers when loading the model when launched without accelerate\n# Use `sequential` when training w/ model parallelism to limit memory\ndevice_map:\n# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model.\nmax_memory:\n\n# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model\nadapter: lora\n# If you already have a lora model trained that you want to load, put that here.\n# This means after training, if you want to test the model, you should set this to the value of `output_dir`.\n# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.\nlora_model_dir:\n\n# LoRA hyperparameters\n# For more details about the following options, see:\n# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n# - k_proj\n# - o_proj\n# - gate_proj\n# - down_proj\n# - up_proj\nlora_target_linear: # If true, will target all linear modules\npeft_layers_to_transform: # The layer indices to transform, otherwise, apply to all layers\n\n# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.\n# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. 
It may vary for other models.\n# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.\n# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994\nlora_modules_to_save:\n# - embed_tokens\n# - lm_head\n\nlora_fan_in_fan_out: false\n\n# Apply custom LoRA autograd functions and activation function Triton kernels for\n# speed and memory savings\n# See: https://axolotl-ai-cloud.github.io/axolotl/docs/lora_optims.html\nlora_mlp_kernel: true\nlora_qkv_kernel: true\nlora_o_kernel: true\n\n# LoRA+ hyperparameters\n# For more details about the following options, see:\n# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py`\nloraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4.\nloraplus_lr_embedding: # loraplus learning rate for lora embedding layers. Default value is 1e-6.\n\npeft:\n # Configuration options for loftq initialization for LoRA\n # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization\n loftq_config:\n loftq_bits: # typically 4 bits\n\n# ReLoRA configuration\n# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed\nrelora_steps: # Number of steps per ReLoRA restart\nrelora_warmup_steps: # Number of per-restart warmup steps\nrelora_anneal_steps: # Number of anneal steps for each relora cycle\nrelora_prune_ratio: # threshold for optimizer magnitude when pruning\nrelora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings\n\n# wandb configuration if you're using it\n# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.\nwandb_mode: # \"offline\" to save run metadata locally and not sync to the server, \"disabled\" to turn off wandb\nwandb_project: # Your wandb project name\nwandb_entity: # A wandb Team name if using a Team\nwandb_watch:\nwandb_name: # Set the name of your wandb 
run\nwandb_run_id: # Set the ID of your wandb run\nwandb_log_model: # \"checkpoint\" to log model to wandb Artifacts every `save_steps` or \"end\" to log only at the end of training\n\n# mlflow configuration if you're using it\nmlflow_tracking_uri: # URI to mlflow\nmlflow_experiment_name: # Your experiment name\nmlflow_run_name: # Your run name\nhf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry\n\n# Comet configuration if you're using it\n# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`.\n# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start\nuse_comet: # Enable or disable Comet integration.\ncomet_api_key: # API key for Comet. Recommended to set via `comet login`.\ncomet_workspace: # Workspace name in Comet. Defaults to the user's default workspace.\ncomet_project_name: # Project name in Comet. Defaults to Uncategorized.\ncomet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key.\ncomet_mode: # Create a new experiment (\"create\") or log to an existing one (\"get\"). Default (\"get_or_create\") auto-selects based on configuration.\ncomet_online: # Set to True to log data to Comet server, or False for offline storage. 
Default is True.\ncomet_experiment_config: # Dictionary for additional configuration settings, see the doc for more details.\n\n# Tensorboard\nuse_tensorboard: # Optional[bool]\n\n# Where to save the full-finetuned model to\noutput_dir: ./completed-model\n\n# Whether to use torch.compile and which backend to use\n# setting to `auto` will enable torch compile when torch>=2.5.1\ntorch_compile: # Optional[Union[Literal[\"auto\"], bool]]\ntorch_compile_backend: # Optional[str]\n\n# Training hyperparameters\n\n# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps.\ngradient_accumulation_steps: 1\n# The number of samples to include in each batch. This is the number of samples sent to each GPU.\n# Batch size per gpu = micro_batch_size * gradient_accumulation_steps\nmicro_batch_size: 2\neval_batch_size:\nnum_epochs: 4\nwarmup_steps: 100 # cannot use with warmup_ratio\nwarmup_ratio: 0.05 # cannot use with warmup_steps\nlearning_rate: 0.00003\nlr_quadratic_warmup:\nlogging_steps:\neval_steps: # Leave empty to eval at each epoch, integer for every N steps. float for fraction of total steps\nevals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps\neval_strategy: # Set to `\"no\"` to skip evaluation, `\"epoch\"` at end of each epoch, leave empty to infer from `eval_steps`.\nsave_strategy: # Set to `\"no\"` to skip checkpoint saves, `\"epoch\"` at end of each epoch, `\"best\"` when better result is achieved, leave empty to infer from `save_steps`.\nsave_steps: # Leave empty to save at each epoch, integer for every N steps. float for fraction of total steps\nsaves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps\nsave_total_limit: # Checkpoints saved at a time\n# Maximum number of iterations to train for. 
It precedes num_epochs which means that\n# if both are set, num_epochs will not be guaranteed.\n# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps\nmax_steps:\n\n# bool of whether to include tokens trainer per second in the training metrics. This iterates over the entire dataset once, so it takes some time.\ninclude_tokens_per_second: # Optional[bool]\n\n# whether to find batch size that fits in memory. Passed to underlying transformers Trainer\nauto_find_batch_size: # Optional[bool]\n\neval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0\neval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128\ndo_causal_lm_eval: # Whether to run causal language model evaluation for metrics in `eval_causal_lm_metrics`.\neval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is [\"sacrebleu\", \"comet\", \"ter\", \"chrf\", \"perplexity\"]\n\nprofiler_steps: # enable the pytorch profiler to capture the first N steps of training to the output_dir.\n # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information\n # snapshots can be visualized @ https://pytorch.org/memory_viz\n\nloss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)\nloss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)\n\n# Save model as safetensors (require safetensors package)\nsave_safetensors:\n\n# Whether to mask out or include the human's prompt from the training labels\ntrain_on_inputs: false\n# Group similarly sized data to minimize padding.\n# May be slower to start, as it must download and sort the entire dataset.\n# Note that training loss may have an oscillating pattern with this enabled.\ngroup_by_length: false\n\n# Whether to use gradient checkpointing 
https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing\ngradient_checkpointing: false\n# additional kwargs to pass to the trainer for gradient checkpointing\n# gradient_checkpointing_kwargs:\n# use_reentrant: true\n\n# Stop training after this many evaluation losses have increased in a row\n# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback\nearly_stopping_patience: 3\n\n# Specify a scheduler and kwargs to use with the optimizer\nlr_scheduler: # 'one_cycle' | 'rex' | 'log_sweep' | empty for cosine\nlr_scheduler_kwargs:\ncosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr\ncosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)\n\n# For one_cycle optim\nlr_div_factor: # Learning rate div factor\n\n# Specify optimizer\n# Valid values are driven by the Transformers OptimizerNames class, see:\n# https://github.com/huggingface/transformers/blob/cbf924b76c03828101a34069a96d209314114fd5/src/transformers/training_args.py#L144-L189\n#\n# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of\n# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. 
When in doubt, it is recommended to start with the optimizer used\n# in the examples/ for your model and fine-tuning use case.\n#\n# Valid values for 'optimizer' include:\n# - adamw_torch\n# - adamw_torch_fused\n# - adamw_torch_xla\n# - adamw_torch_npu_fused\n# - adamw_apex_fused\n# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)\n# - adafactor\n# - adamw_anyprecision\n# - adamw_torch_4bit\n# - ademamix\n# - sgd\n# - adagrad\n# - adamw_bnb_8bit\n# - adamw_8bit # alias for adamw_bnb_8bit\n# - ademamix_8bit\n# - lion_8bit\n# - lion_32bit\n# - paged_adamw_32bit\n# - paged_adamw_8bit\n# - paged_ademamix_32bit\n# - paged_ademamix_8bit\n# - paged_lion_32bit\n# - paged_lion_8bit\n# - rmsprop\n# - rmsprop_bnb\n# - rmsprop_bnb_8bit\n# - rmsprop_bnb_32bit\n# - galore_adamw\n# - galore_adamw_8bit\n# - galore_adafactor\n# - galore_adamw_layerwise\n# - galore_adamw_8bit_layerwise\n# - galore_adafactor_layerwise\n# - lomo\n# - adalomo\n# - grokadamw\n# - schedule_free_adamw\n# - schedule_free_sgd\n# - apollo_adamw\n# - apollo_adamw_layerwise\n#\n# Additional custom optimizers include:\n# - optimi_adamw\n# - ao_adamw_8bit\n# - ao_adamw_fp8\noptimizer:\n# Dictionary of arguments to pass to the optimizer\noptim_args:\n# For Galore Optimizers the following optim_args are available\n# rank: # type: int\n# update_proj_gap # type: int\n# scale # type: float\n# proj_type: # type: str, default = std\n\n# The target modules to optimize, i.e. 
the module names that you would like to train, right now this is used only for GaLore algorithm\noptim_target_modules:\n# - self_attn # for llama\n# - mlp\n\n# Specify weight decay\nweight_decay:\n# adamw hyperparams\nadam_beta1:\nadam_beta2:\nadam_epsilon:\n# Gradient clipping max norm\nmax_grad_norm:\n\n# Augmentation techniques\n# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings\n# currently only supported on Llama and Mistral\nneftune_noise_alpha:\n\n# Whether to bettertransformers\nflash_optimum:\n# Whether to use xformers attention patch https://github.com/facebookresearch/xformers:\nxformers_attention:\n# Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:\nflash_attention:\nflash_attn_cross_entropy: # Whether to use flash-attention cross entropy implementation - advanced use only\nflash_attn_rms_norm: # Whether to use flash-attention rms norm implementation - advanced use only\nflash_attn_fuse_qkv: # Whether to fuse QKV into a single operation\nflash_attn_fuse_mlp: # Whether to fuse part of the MLP into a single operation\n# Whether to use scaled-dot-product attention\n# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html\nsdp_attention:\n# Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf\ns2_attention:\n# Optional[bool]. Whether to use low_cpu_mem_usage\nlow_cpu_mem_usage:\n# Resume from a specific checkpoint dir\nresume_from_checkpoint:\n# If resume_from_checkpoint isn't set and you simply want it to start where it left off.\n# Be careful with this being turned on between different models.\nauto_resume_from_checkpoints: false\n\n## Multimodal section\n# int | tuple[int, int] | None . Size to resize images to, width x height.\n# Will read from model/processor config if not set.\nimage_size:\n# str. Algorithm to use for image resizing. \"bilinear\", \"bicubic\", \"lanczos\". 
Default is \"bilinear\".\nimage_resize_algorithm: 'bilinear'\n## End of multimodal section\n\n# Don't mess with this, it's here for accelerate and torchrun\nlocal_rank:\n\n# Add or change special tokens.\n# If you add tokens here, you don't need to add them to the `tokens` list.\nspecial_tokens:\n # bos_token: \"<s>\"\n # eos_token: \"</s>\"\n # unk_token: \"<unk>\"\n # pad_token: \"[PAD]\"\n\n# Add extra tokens.\ntokens:\n\n# Mapping token_id to new_token_string to override reserved added_tokens in the tokenizer.\n# Only works for tokens that are not part of the base vocab (aka are added_tokens).\n# Can be checked if they exist in tokenizer.json added_tokens.\nadded_tokens_overrides: # Dict[int, str]\n# 128041: \"<|im_start|>\"\n# 128042: \"<|im_end|>\"\n\n# FSDP\nfsdp:\nfsdp_config:\n\n# Deepspeed config path. e.g., deepspeed_configs/zero3.json\ndeepspeed:\n\n# Advanced DDP Arguments\nddp_timeout:\nddp_bucket_cap_mb:\nddp_broadcast_buffers:\n\n# Sequence parallelism\n# Set to a divisor of the number of GPUs available to split sequences into chunks of equal size.\n# Use in long context training to prevent OOM when sequences cannot fit into a single GPU's VRAM.\n# E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized\n# subsequences, or set to 4 to split into four equal-sized subsequences.\n# See https://axolotl-ai-cloud.github.io/axolotl/docs/sequence_parallelism.html for more details.\nsequence_parallel_degree:\n\n# Path to torch distx for optim 'adamw_anyprecision'\ntorchdistx_path:\n\n# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize\npretraining_dataset:\n\n# Debug mode\ndebug:\n\n# Seed\nseed:\n\n# Allow overwrite yml config using from cli\nstrict:",
+ "text": "# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files\n# This can also be a relative path to a model on disk\nbase_model: ./llama-7b-hf\n# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)\nbase_model_ignore_patterns:\n# If the base_model repo on hf hub doesn't include configuration .json files,\n# You can set that here, or leave this empty to default to base_model\nbase_model_config: ./llama-7b-hf\n# You can specify to choose a specific model revision from huggingface hub\nrevision_of_model:\n# Optional tokenizer configuration path in case you want to use a different tokenizer\n# than the one defined in the base model\ntokenizer_config:\n# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too\nmodel_type: AutoModelForCausalLM\n# Corresponding tokenizer for the model AutoTokenizer is a good choice\ntokenizer_type: AutoTokenizer\n# Trust remote code for untrusted source\ntrust_remote_code:\n# use_fast option for tokenizer loading from_pretrained, default to True\ntokenizer_use_fast:\n# Whether to use the legacy tokenizer setting, defaults to True\ntokenizer_legacy:\n# Resize the model embeddings when new tokens are added to multiples of 32\n# This is reported to improve training speed on some models\nresize_token_embeddings_to_32x:\n# Optional[bool] Whether to shrink the embeddings to len(tokenizer). By default, we won't shrink.\nshrink_embeddings:\n# Whether to load the model with randomly initialized weights. 
Useful for\n# pre-training a model from scratch or debugging purposes.\nrandom_init_weights:\n\n# (Internal use only)\n# Used to identify which the model is based on\nis_falcon_derived_model:\nis_llama_derived_model:\nis_qwen_derived_model:\n# Please note that if you set this to true, `padding_side` will be set to \"left\" by default\nis_mistral_derived_model:\n\n# optional overrides to the base model configuration\noverrides_of_model_config:\n # RoPE Scaling https://github.com/huggingface/transformers/pull/24653\n rope_scaling:\n type: # linear | dynamic\n factor: # float\n\n# optional overrides the base model loading from_pretrained\noverrides_of_model_kwargs:\n # use_cache: False\n\n# optional overrides to the bnb 4bit quantization configuration\n# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig\nbnb_config_kwargs:\n # These are default values\n llm_int8_has_fp16_weight: false\n bnb_4bit_quant_type: nf4\n bnb_4bit_use_double_quant: true\n\n\n# Whether you are training a 4-bit GPTQ quantized model\ngptq: true\n\n# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer\nload_in_8bit: true\n# Use bitsandbytes 4 bit\nload_in_4bit:\n\n# Use CUDA bf16\nbf16: true # bool or 'full' for `bf16_full_eval`. require >=ampere\n# Use CUDA fp16\nfp16: true\n# Use CUDA tf32\ntf32: true # require >=ampere\n\n# No AMP (automatic mixed precision)\nbfloat16: true # require >=ampere\nfloat16: true\n\n# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset\ngpu_memory_limit: 20GiB\n# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge\nlora_on_cpu: true\n\n# List[str]. 
Add plugins to extend the pipeline.\n# See `src/axolotl/integrations` for the available plugins or doc below for more details.\n# https://axolotl-ai-cloud.github.io/axolotl/docs/custom_integrations.html\nplugins:\n # - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin\n\n# A list of one or more datasets to finetune the model with\ndatasets:\n # HuggingFace dataset repo | s3://,gs:// path | \"json\" for local dataset, make sure to fill data_files\n - path: vicgalle/alpaca-gpt4\n # The type of prompt to use for training. [alpaca, gpteacher, oasst, reflection]\n type: alpaca # format | format:<prompt_style> (chat/instruct) | <prompt_strategies>.load_<load_fn>\n ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file\n data_files: # Optional[str] path to source data files\n\n shards: # Optional[int] split dataset into N pieces (use with shards_idx)\n shards_idx: # Optional[int] = 0 the index of sharded dataset to use\n\n preprocess_shards: # Optional[int] process dataset in N sequential chunks for memory efficiency (exclusive with `shards`)\n\n name: # Optional[str] name of dataset configuration to load\n train_on_split: train # Optional[str] name of dataset split to load from\n revision: # Optional[str] The specific revision of the dataset to use when loading from the Hugging Face Hub. This can be a commit hash, tag, or branch name. If not specified, the latest version will be used. This parameter is ignored for local datasets.\n trust_remote_code: # Optional[bool] Trust remote code for untrusted source\n\n # Custom user instruction prompt\n - path: repo\n type:\n # The below are defaults. 
only set what's needed if you use a different column name.\n    system_prompt: \"\"\n    system_format: \"{system}\"\n    field_system: system\n    field_instruction: instruction\n    field_input: input\n    field_output: output\n\n    # Customizable to be single line or multi-line\n    # Use {instruction}/{input} as key to be replaced\n    # 'format' can include {input}\n    format: |-\n      User: {instruction} {input}\n      Assistant:\n    # 'no_input_format' cannot include {input}\n    no_input_format: \"{instruction} \"\n\n    # For `completion` datasets only, uses the provided field instead of `text` column\n    field:\n\n  # Using chat template\n  - path: ...\n    # Set type to `chat_template` to use this strategy\n    type: chat_template\n    # Specify the name of the chat template to use\n    # The name of the chat template to use for training, following values are supported:\n    # - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default.\n    # - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n    # - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to if the tokenizer does not have a chat template else default to tokenizer. E.g. tokenizer_default_fallback_chatml.\n    # - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n    chat_template: tokenizer_default\n\n    # Custom jinja chat template. 
Used only if `chat_template: jinja` or empty.\n chat_template_jinja:\n\n # Key containing the messages (default: \"messages\")\n field_messages: messages\n\n # Mapping of properties from the input dataset to the chat template.\n # (default: message_property_mappings={'role':'role', 'content':'content'})\n # If a property exists in the template but not in this mapping, the system will attempt\n # to load it directly from the message using the property name as the key.\n # Example: In the mapping below, 'from' is loaded from input dataset and used as 'role',\n # while 'value' is loaded and used as 'content' in the chat template.\n message_property_mappings:\n role: from\n content: value\n # ...\n\n # Optional[Dict[str, List]]. Roles mapping in the messages. The default is:\n roles:\n user: [\"human\", \"user\"]\n assistant: [\"gpt\", \"assistant\"]\n system: [\"system\"]\n tool: [\"tool\"]\n\n # Optional[bool]. Whether to drop the system turn from the dataset. Only works with chat_template.\n # This does not drop the default system message from chat_template if it exists. If you wish to,\n # we recommend using a custom jinja template with the default system message removed or\n # adding a system turn with empty content.\n drop_system_message:\n\n # IMPORTANT: The following fields determine which parts of the conversation to train on.\n # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train\n # See examples at `docs/dataset-formats/conversation.qmd`\n # Note: If the below 4 fields are set to empty, defaults to training only on the last message.\n\n # Optional[List[str]]. Roles to train on. The tokens from these roles will be considered for the loss.\n roles_to_train: [\"assistant\"] # default\n # Optional[str]. Which EOS tokens to train on in the conversation. 
Possible values are:\n    # - all: train on all EOS tokens\n    # - turn (default): train on the EOS token at the end of each trainable turn\n    # - last: train on the last EOS token in the conversation\n    # TIP: Please make sure that your `tokenizer.eos_token` is the same as the EOS/EOT token in the template. Otherwise, set `eos_token` under `special_tokens`.\n    train_on_eos: last\n    # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`.\n    message_field_training: training\n    # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn.\n    # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train).\n    message_field_training_detail: train_detail\n\n\n# If false, the datasets will not be shuffled and will keep their original order in `datasets`.\n# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.\nshuffle_merged_datasets: true\n\n# Deduplicates datasets and test_datasets with identical entries.\ndataset_exact_deduplication: true\n\n# A list of one or more datasets to eval the model with.\n# You can use either test_datasets, or val_set_size, but not both.\ntest_datasets:\n  - path: /workspace/data/eval.jsonl\n    ds_type: json\n    # You need to specify a split. For \"json\" datasets the default split is called \"train\".\n    split: train\n    type: completion\n    data_files:\n      - /workspace/data/eval.jsonl\n\n# use RL training: 'dpo', 'ipo', 'kto', 'simpo', 'orpo', 'grpo'\nrl:\nrl_beta: # Optional[float]. The beta parameter for the RL training.\n\n# dpo\ndpo_use_weighting: # Optional[bool]. Whether to perform weighting.\nrpo_alpha: # Optional[float]. 
Weighting of NLL term in loss from RPO paper.\n\n# orpo\norpo_alpha: 0.1 # Parameter controlling the relative ratio loss weight in the ORPO loss. Passed to `beta` in `ORPOConfig` due to trl mapping.\n\n# kto\nkto_desirable_weight: # Optional[float]. Factor for desirable loss term in KTO loss.\nkto_undesirable_weight: # Optional[float]. Factor for undesirable loss term in KTO loss.\n\n# simpo\ncpo_alpha: 1.0 # Weight of the BC regularizer\nsimpo_gamma: 0.5 # Target reward margin for the SimPO loss\n\n# grpo\ntrl:\n  use_vllm: # Optional[bool]. Whether to use VLLM for RL training.\n  vllm_device: # Optional[str]. Device to use for VLLM.\n  vllm_gpu_memory_utilization: # Optional[float]. GPU memory utilization for VLLM.\n  vllm_max_model_len: # Optional[int]. Maximum length of the model for VLLM.\n  vllm_dtype: # Optional[str]. Data type for VLLM.\n\n  beta: # Optional[float]. Beta parameter for the RL training. Same as `rl_beta`.\n  max_completion_length: # Optional[int]. Maximum length of the completion for RL training.\n\n  reward_funcs: # Optional[list[str]]. List of reward functions to load. Paths must be importable from current dir.\n  reward_weights: # Optional[list[float]]. List of reward weights for the reward functions.\n\n  num_generations: # Optional[int]. Number of generations to sample.\n  log_completions: # Optional[bool]. Whether to log completions.\n\n  sync_ref_model: # Optional[bool]. Whether to sync the reference model.\n  ref_model_mixup_alpha: # Optional[float]. Mixup alpha for the reference model.\n  ref_model_sync_steps: # Optional[int]. Sync steps for the reference model.\n\n\n# reward modelling: `True` or `False`\nreward_model:\n\n# process reward modelling: `True` or `False`\nprocess_reward_model:\n\n# The name of the chat template to use for training, following values are supported:\n# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. 
If the chat template is not available in the tokenizer, it will raise an error. This is the default value.\n# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n# - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer.\n# - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n# The selected chat template will be saved to the tokenizer_config.json for easier inferencing\n# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template.\nchat_template: tokenizer_default\n# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null.\nchat_template_jinja: null\n# Changes the default system message. Currently only supports chatml.\ndefault_system_message: You are a helpful assistant. Please give a long and detailed answer.\n# Axolotl attempts to save the dataset as an arrow after packing the data together so\n# subsequent training attempts load faster, relative path\ndataset_prepared_path: data/last_run_prepared\n# Push prepared dataset to hub\npush_dataset_to_hub: # Optional[str] repo_org/repo_name\n# The maximum number of processes to use while preprocessing your input dataset. 
This defaults to `os.cpu_count()`\n# if not set.\ndataset_processes: # defaults to os.cpu_count() if not set\n# Keep dataset in memory while preprocessing\n# Only needed if cached dataset is taking too much storage\ndataset_keep_in_memory:\n# push checkpoints to hub\nhub_model_id: # private repo path to push finetuned model\n# how to push checkpoints to hub\n# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy\nhub_strategy:\n# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets\n# Required to be true when used in combination with `push_dataset_to_hub`\nhf_use_auth_token: # boolean\n# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.\nval_set_size: 0.04\n# Num shards for whole dataset\ndataset_shard_num:\n# Index of shard to use for whole dataset\ndataset_shard_idx:\n\n# The maximum length of an input to train with; this should typically be less than 2048,\n# as most models have a token/context limit of 2048\nsequence_len: 2048\n# Pad inputs so each step uses constant sized buffers\n# This will reduce memory fragmentation and may prevent OOMs by re-using memory more efficiently\npad_to_sequence_len:\n# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommended to set to 'true'\nsample_packing:\n# Set to 'false' if getting errors during eval with sample_packing on.\neval_sample_packing:\n# You can set these packing optimizations AFTER starting a training at least once.\n# The trainer will provide recommended values for these values.\nsample_packing_eff_est:\ntotal_num_tokens:\n# Increasing the following values helps with packing, but usually only slightly (<1%).\n# The number of samples packed at a time.\nsample_packing_group_size: 100000\n# The number of samples which can be packed into one sequence. 
Increase if using a large sequence_len with many short samples.\nsample_packing_bin_size: 200\n# whether to concatenate samples during pretraining\npretraining_sample_concatenation:\n\n# Use batch flattening for speedups when not using sample_packing\nbatch_flattening:\n\n# Passed through to transformers when loading the model when launched without accelerate\n# Use `sequential` when training w/ model parallelism to limit memory\ndevice_map:\n# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model.\nmax_memory:\n\n# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model\nadapter: lora\n# If you already have a lora model trained that you want to load, put that here.\n# This means after training, if you want to test the model, you should set this to the value of `output_dir`.\n# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.\nlora_model_dir:\n\n# LoRA hyperparameters\n# For more details about the following options, see:\n# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n# - k_proj\n# - o_proj\n# - gate_proj\n# - down_proj\n# - up_proj\nlora_target_linear: # If true, will target all linear modules\npeft_layers_to_transform: # The layer indices to transform, otherwise, apply to all layers\n\n# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.\n# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. 
It may vary for other models.\n# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.\n# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994\nlora_modules_to_save:\n# - embed_tokens\n# - lm_head\n\nlora_fan_in_fan_out: false\n\n# Apply custom LoRA autograd functions and activation function Triton kernels for\n# speed and memory savings\n# See: https://axolotl-ai-cloud.github.io/axolotl/docs/lora_optims.html\nlora_mlp_kernel: true\nlora_qkv_kernel: true\nlora_o_kernel: true\n\n# LoRA+ hyperparameters\n# For more details about the following options, see:\n# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py`\nloraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4.\nloraplus_lr_embedding: # loraplus learning rate for lora embedding layers. Default value is 1e-6.\n\npeft:\n # Configuration options for loftq initialization for LoRA\n # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization\n loftq_config:\n loftq_bits: # typically 4 bits\n\n# ReLoRA configuration\n# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed\nrelora_steps: # Number of steps per ReLoRA restart\nrelora_warmup_steps: # Number of per-restart warmup steps\nrelora_anneal_steps: # Number of anneal steps for each relora cycle\nrelora_prune_ratio: # threshold for optimizer magnitude when pruning\nrelora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings\n\n# wandb configuration if you're using it\n# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.\nwandb_mode: # \"offline\" to save run metadata locally and not sync to the server, \"disabled\" to turn off wandb\nwandb_project: # Your wandb project name\nwandb_entity: # A wandb Team name if using a Team\nwandb_watch:\nwandb_name: # Set the name of your wandb 
run\nwandb_run_id: # Set the ID of your wandb run\nwandb_log_model: # \"checkpoint\" to log model to wandb Artifacts every `save_steps` or \"end\" to log only at the end of training\n\n# mlflow configuration if you're using it\nmlflow_tracking_uri: # URI to mlflow\nmlflow_experiment_name: # Your experiment name\nmlflow_run_name: # Your run name\nhf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry\n\n# Comet configuration if you're using it\n# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`.\n# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start\nuse_comet: # Enable or disable Comet integration.\ncomet_api_key: # API key for Comet. Recommended to set via `comet login`.\ncomet_workspace: # Workspace name in Comet. Defaults to the user's default workspace.\ncomet_project_name: # Project name in Comet. Defaults to Uncategorized.\ncomet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key.\ncomet_mode: # Create a new experiment (\"create\") or log to an existing one (\"get\"). Default (\"get_or_create\") auto-selects based on configuration.\ncomet_online: # Set to True to log data to Comet server, or False for offline storage. 
Default is True.\ncomet_experiment_config: # Dictionary for additional configuration settings, see the doc for more details.\n\n# Tensorboard\nuse_tensorboard: # Optional[bool]\n\n# Where to save the full-finetuned model to\noutput_dir: ./completed-model\n\n# Whether to use torch.compile and which backend to use\n# setting to `auto` will enable torch compile when torch>=2.5.1\ntorch_compile: # Optional[Union[Literal[\"auto\"], bool]]\ntorch_compile_backend: # Optional[str]\n\n# Training hyperparameters\n\n# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps.\ngradient_accumulation_steps: 1\n# The number of samples to include in each batch. This is the number of samples sent to each GPU.\n# Batch size per gpu = micro_batch_size * gradient_accumulation_steps\nmicro_batch_size: 2\neval_batch_size:\nnum_epochs: 4\nwarmup_steps: 100 # cannot use with warmup_ratio\nwarmup_ratio: 0.05 # cannot use with warmup_steps\nlearning_rate: 0.00003\nlr_quadratic_warmup:\nlogging_steps:\neval_steps: # Leave empty to eval at each epoch, integer for every N steps. float for fraction of total steps\nevals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps\neval_strategy: # Set to `\"no\"` to skip evaluation, `\"epoch\"` at end of each epoch, leave empty to infer from `eval_steps`.\nsave_strategy: # Set to `\"no\"` to skip checkpoint saves, `\"epoch\"` at end of each epoch, `\"best\"` when better result is achieved, leave empty to infer from `save_steps`.\nsave_steps: # Leave empty to save at each epoch, integer for every N steps. float for fraction of total steps\nsaves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps\nsave_total_limit: # Checkpoints saved at a time\n# Maximum number of iterations to train for. 
It precedes num_epochs which means that\n# if both are set, num_epochs will not be guaranteed.\n# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps\nmax_steps:\n\n# bool of whether to include tokens per second in the training metrics. This iterates over the entire dataset once, so it takes some time.\ninclude_tokens_per_second: # Optional[bool]\n\n# whether to find a batch size that fits in memory. Passed to the underlying transformers Trainer\nauto_find_batch_size: # Optional[bool]\n\neval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0\neval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128\ndo_causal_lm_eval: # Whether to run causal language model evaluation for metrics in `eval_causal_lm_metrics`.\neval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is [\"sacrebleu\", \"comet\", \"ter\", \"chrf\", \"perplexity\"]\n\nprofiler_steps: # enable the pytorch profiler to capture the first N steps of training to the output_dir.\n                # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information\n                # snapshots can be visualized @ https://pytorch.org/memory_viz\n\nloss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)\nloss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)\n\n# Save model as safetensors (require safetensors package)\nsave_safetensors:\n\n# Whether to mask out or include the human's prompt from the training labels\ntrain_on_inputs: false\n# Group similarly sized data to minimize padding.\n# May be slower to start, as it must download and sort the entire dataset.\n# Note that training loss may have an oscillating pattern with this enabled.\ngroup_by_length: false\n\n# Whether to use gradient checkpointing 
https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing\ngradient_checkpointing: false\n# additional kwargs to pass to the trainer for gradient checkpointing\n# gradient_checkpointing_kwargs:\n# use_reentrant: true\n\n# Stop training after this many evaluation losses have increased in a row\n# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback\nearly_stopping_patience: 3\n\n# Specify a scheduler and kwargs to use with the optimizer\nlr_scheduler: # 'one_cycle' | 'rex' | 'log_sweep' | empty for cosine\nlr_scheduler_kwargs:\ncosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr\ncosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)\n\n# For one_cycle optim\nlr_div_factor: # Learning rate div factor\n\n# Specify optimizer\n# Valid values are driven by the Transformers OptimizerNames class, see:\n# https://github.com/huggingface/transformers/blob/cbf924b76c03828101a34069a96d209314114fd5/src/transformers/training_args.py#L144-L189\n#\n# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of\n# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. 
When in doubt, it is recommended to start with the optimizer used\n# in the examples/ for your model and fine-tuning use case.\n#\n# Valid values for 'optimizer' include:\n# - adamw_torch\n# - adamw_torch_fused\n# - adamw_torch_xla\n# - adamw_torch_npu_fused\n# - adamw_apex_fused\n# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)\n# - adafactor\n# - adamw_anyprecision\n# - adamw_torch_4bit\n# - ademamix\n# - sgd\n# - adagrad\n# - adamw_bnb_8bit\n# - adamw_8bit # alias for adamw_bnb_8bit\n# - ademamix_8bit\n# - lion_8bit\n# - lion_32bit\n# - paged_adamw_32bit\n# - paged_adamw_8bit\n# - paged_ademamix_32bit\n# - paged_ademamix_8bit\n# - paged_lion_32bit\n# - paged_lion_8bit\n# - rmsprop\n# - rmsprop_bnb\n# - rmsprop_bnb_8bit\n# - rmsprop_bnb_32bit\n# - galore_adamw\n# - galore_adamw_8bit\n# - galore_adafactor\n# - galore_adamw_layerwise\n# - galore_adamw_8bit_layerwise\n# - galore_adafactor_layerwise\n# - lomo\n# - adalomo\n# - grokadamw\n# - schedule_free_adamw\n# - schedule_free_sgd\n# - apollo_adamw\n# - apollo_adamw_layerwise\n#\n# Additional custom optimizers include:\n# - optimi_adamw\n# - ao_adamw_8bit\n# - ao_adamw_fp8\noptimizer:\n# Dictionary of arguments to pass to the optimizer\noptim_args:\n# For Galore Optimizers the following optim_args are available\n# rank: # type: int\n# update_proj_gap # type: int\n# scale # type: float\n# proj_type: # type: str, default = std\n\n# The target modules to optimize, i.e. 
the module names that you would like to train; currently this is only used for the GaLore algorithm\noptim_target_modules:\n# - self_attn  # for llama\n# - mlp\n\n# Specify weight decay\nweight_decay:\n# adamw hyperparams\nadam_beta1:\nadam_beta2:\nadam_epsilon:\n# Gradient clipping max norm\nmax_grad_norm:\n\n# Augmentation techniques\n# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings\n# currently only supported on Llama and Mistral\nneftune_noise_alpha:\n\n# Whether to use BetterTransformers\nflash_optimum:\n# Whether to use xformers attention patch https://github.com/facebookresearch/xformers:\nxformers_attention:\n# Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:\nflash_attention:\nflash_attn_cross_entropy:  # Whether to use flash-attention cross entropy implementation - advanced use only\nflash_attn_rms_norm:  # Whether to use flash-attention rms norm implementation - advanced use only\nflash_attn_fuse_qkv: # Whether to fuse QKV into a single operation\nflash_attn_fuse_mlp: # Whether to fuse part of the MLP into a single operation\n# Whether to use scaled-dot-product attention\n# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html\nsdp_attention:\n# Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf\ns2_attention:\n# Optional[bool]. Whether to use low_cpu_mem_usage\nlow_cpu_mem_usage:\n# Resume from a specific checkpoint dir\nresume_from_checkpoint:\n# If resume_from_checkpoint isn't set and you simply want it to start where it left off.\n# Be careful with this being turned on between different models.\nauto_resume_from_checkpoints: false\n\n## Multimodal section\n# int | tuple[int, int] | None . Size to resize images to, width x height.\n# Will read from model/processor config if not set.\nimage_size:\n# str. Algorithm to use for image resizing. \"bilinear\", \"bicubic\", \"lanczos\". 
Default is \"bilinear\".\nimage_resize_algorithm: 'bilinear'\n## End of multimodal section\n\n# Don't mess with this, it's here for accelerate and torchrun\nlocal_rank:\n\n# Add or change special tokens.\n# If you add tokens here, you don't need to add them to the `tokens` list.\nspecial_tokens:\n  # bos_token: \"<s>\"\n  # eos_token: \"</s>\"\n  # unk_token: \"<unk>\"\n  # pad_token: \"[PAD]\"\n\n# Add extra tokens.\ntokens:\n\n# Mapping token_id to new_token_string to override reserved added_tokens in the tokenizer.\n# Only works for tokens that are not part of the base vocab (aka are added_tokens).\n# You can check whether they exist in tokenizer.json under added_tokens.\nadded_tokens_overrides: # Dict[int, str]\n# 128041: \"<|im_start|>\"\n# 128042: \"<|im_end|>\"\n\n# FSDP\nfsdp:\nfsdp_config:\n\n# Deepspeed config path. e.g., deepspeed_configs/zero3.json\ndeepspeed:\n\n# Advanced DDP Arguments\nddp_timeout:\nddp_bucket_cap_mb:\nddp_broadcast_buffers:\n\n# Sequence parallelism\n# Set to a divisor of the number of GPUs available to split sequences into chunks of equal size.\n# Use in long context training to prevent OOM when sequences cannot fit into a single GPU's VRAM.\n# E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized\n# subsequences, or set to 4 to split into four equal-sized subsequences.\n# See https://axolotl-ai-cloud.github.io/axolotl/docs/sequence_parallelism.html for more details.\nsequence_parallel_degree:\n# Optional; strides across the key dimension. Larger values use more memory but should make training faster.\n# Must evenly divide the number of KV heads in your model.\nheads_k_stride: 1\n\n# Path to torch distx for optim 'adamw_anyprecision'\ntorchdistx_path:\n\n# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize\npretraining_dataset:\n\n# Debug mode\ndebug:\n\n# Seed\nseed:\n\n# Allow overwriting the yml config from the cli\nstrict:",
"crumbs": [
"Getting Started",
"Config Reference"
@@ -174,7 +174,7 @@
"href": "docs/multi-gpu.html#sec-overview",
"title": "Multi-GPU",
"section": "1 Overview",
- "text": "1 Overview\nAxolotl supports several methods for multi-GPU training:\n\nDeepSpeed (recommended)\nFSDP (Fully Sharded Data Parallel)\nFSDP + QLoRA",
+ "text": "1 Overview\nAxolotl supports several methods for multi-GPU training:\n\nDeepSpeed (recommended)\nFSDP (Fully Sharded Data Parallel)\nSequence parallelism\nFSDP + QLoRA",
"crumbs": [
"Deployments",
"Multi-GPU"
@@ -196,7 +196,18 @@
"href": "docs/multi-gpu.html#sec-fsdp",
"title": "Multi-GPU",
"section": "3 FSDP",
- "text": "3 FSDP\n\n3.1 Basic FSDP Configuration\nfsdp:\n - full_shard\n - auto_wrap\nfsdp_config:\n fsdp_offload_params: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\n\n\n3.2 FSDP + QLoRA\nFor combining FSDP with QLoRA, see our dedicated guide.",
+ "text": "3 FSDP\n\n3.1 Basic FSDP Configuration\nfsdp:\n - full_shard\n - auto_wrap\nfsdp_config:\n fsdp_offload_params: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer",
+ "crumbs": [
+ "Deployments",
+ "Multi-GPU"
+ ]
+ },
+ {
+ "objectID": "docs/multi-gpu.html#sec-sequence-parallelism",
+ "href": "docs/multi-gpu.html#sec-sequence-parallelism",
+ "title": "Multi-GPU",
+ "section": "4 Sequence parallelism",
+ "text": "4 Sequence parallelism\nWe support sequence parallelism (SP) via the\nring-flash-attention project. This\nallows one to split up sequences across GPUs, which is useful in the event that a\nsingle sequence causes OOM errors during model training.\nFirst, install ring-flash-attn, recommended via pip install axolotl[ring-flash-attn],\nor from source with pip install .[ring-flash-attn].\nYour Axolotl YAML config should contain the following lines:\nsequence_parallel_degree: 4 # Split each sequence into 4 parts, one per GPU\nflash_attention: true # Required with sequence parallelism\n\n# Optional; strides across the key dimension. Larger values use more memory but should make training faster.\nheads_k_stride: 1\nSee our dedicated guide for more details.\n\n4.1 FSDP + QLoRA\nFor combining FSDP with QLoRA, see our dedicated guide.",
"crumbs": [
"Deployments",
"Multi-GPU"
@@ -206,8 +217,8 @@
"objectID": "docs/multi-gpu.html#sec-performance",
"href": "docs/multi-gpu.html#sec-performance",
"title": "Multi-GPU",
- "section": "4 Performance Optimization",
- "text": "4 Performance Optimization\n\n4.1 Liger Kernel Integration\nPlease see docs for more info.",
+ "section": "5 Performance Optimization",
+ "text": "5 Performance Optimization\n\n5.1 Liger Kernel Integration\nPlease see docs for more info.",
"crumbs": [
"Deployments",
"Multi-GPU"
@@ -217,8 +228,8 @@
"objectID": "docs/multi-gpu.html#sec-troubleshooting",
"href": "docs/multi-gpu.html#sec-troubleshooting",
"title": "Multi-GPU",
- "section": "5 Troubleshooting",
- "text": "5 Troubleshooting\n\n5.1 NCCL Issues\nFor NCCL-related problems, see our NCCL troubleshooting guide.\n\n\n5.2 Common Problems\n\nMemory IssuesTraining Instability\n\n\n\nReduce micro_batch_size\nReduce eval_batch_size\nAdjust gradient_accumulation_steps\nConsider using a higher ZeRO stage\n\n\n\n\nStart with DeepSpeed ZeRO-2\nMonitor loss values\nCheck learning rates\n\n\n\n\nFor more detailed troubleshooting, see our debugging guide.",
+ "section": "6 Troubleshooting",
+ "text": "6 Troubleshooting\n\n6.1 NCCL Issues\nFor NCCL-related problems, see our NCCL troubleshooting guide.\n\n\n6.2 Common Problems\n\nMemory IssuesTraining Instability\n\n\n\nReduce micro_batch_size\nReduce eval_batch_size\nAdjust gradient_accumulation_steps\nConsider using a higher ZeRO stage\n\n\n\n\nStart with DeepSpeed ZeRO-2\nMonitor loss values\nCheck learning rates\n\n\n\n\nFor more detailed troubleshooting, see our debugging guide.",
"crumbs": [
"Deployments",
"Multi-GPU"
@@ -1571,63 +1582,99 @@
"href": "docs/sequence_parallelism.html",
"title": "Sequence Parallelism",
"section": "",
- "text": "Sequence parallelism is a technique that splits sequences across multiple GPUs,\nallowing you to train with very long sequences that wouldn’t fit on a single GPU. Each\nGPU processes a different portion of the sequence, and the results are aggregated\nthrough a ring communication pattern.\n\n\nUse sequence parallelism when:\n\nYou need to train with sequence lengths that don’t fit into a single GPU’s memory\nYou have multiple GPUs available\nYou’re experiencing OOM (Out Of Memory) errors with long sequences\n\n\n\n\nTo enable sequence parallelism, add the following to your configuration file:\n# Set to a divisor (> 1) of the number of GPUs available\nsequence_parallel_degree: 4 # Split sequences across 4 GPUs\nThe sequence_parallel_degree should be a divisor of the total number of GPUs. For example:\n\nWith 8 GPUs, valid values would be 2, 4, or 8\nWith 4 GPUs, valid values would be 2 or 4\n\n\n\n\nWhen sequence parallelism is enabled:\n\nEach sequence is divided into equal chunks across the GPUs in a sequence parallel group\nThe data collator handles the chunking of input_ids, attention_mask, labels, and position_ids\nPosition IDs are adjusted to maintain proper relative positions, especially for packed sequences\nThe trainer uses special ring communication patterns for attention operations\n\n\n\n\nTo use sequence parallelism, you need:\n\nMultiple GPUs (at least 2)\nThe ring-flash-attn package. 
Install with:\n\npip install axolotl[ring-flash-attn] (preferred)\npip install ring-flash-attn>=0.1.4\n\n\n\n\n\n\nFlash attention must be enabled for this to work (flash_attention: true in config YAML)\nMay have a small performance overhead due to communication between GPUs\n\n\n\n\n# Example config with sequence parallelism\nbase_model: meta-llama/Llama-3-8B-Instruct\nsequence_len: 8192\nsequence_parallel_degree: 2 # Split each sequence into 4 parts\nflash_attention: true # Required with sequence parallelism\n...\nThis will train the Llama 3 8B model with 8K context length, with each sequence split\ninto 2 subsequences of length 4096 across 2 GPUs.\n\n\n\nSequence parallelism is compatible with Axolotl’s sample packing functionality. When using both features together:\n\nSamples are first packed together\nThe packed sequences are then divided across GPUs in the sequence parallel group\nPosition IDs are automatically adjusted to maintain proper relative positions\n\n\n\n\nWhen using sequence parallelism, your effective global batch size is divided by the sequence_parallel_degree. This happens because:\n\nEach group of sequence_parallel_degree GPUs works on the same batch (just different parts of each sequence)\nThe number of batches processed per step decreases\n\nFor example:\n- With 8 GPUs and no sequence parallelism: 8 different batches processed per step\n- With 8 GPUs and sequence_parallel_degree=4: Only 2 different batches processed per step (each split across 4 GPUs)\n- If your per-GPU micro_batch_size is 2, the global batch size decreases from 16 to 4"
+ "text": "Sequence parallelism is a technique that splits sequences across multiple GPUs,\nallowing you to train with very long sequences that wouldn’t fit on a single GPU. Each\nGPU processes a different portion of the sequence, and the results are aggregated\nthrough a ring communication pattern.\n\n\nUse sequence parallelism when:\n\nYou need to train with sequence lengths that don’t fit into a single GPU’s memory\nYou have multiple GPUs available\nYou’re experiencing OOM (Out Of Memory) errors with long sequences\n\n\n\n\nTo enable sequence parallelism, add the following to your configuration file:\n# Set to a divisor (> 1) of the number of GPUs available\nsequence_parallel_degree: 4 # Split sequences across 4 GPUs\n# Optional; strides across the key dimension. Larger values use more memory but should make training faster.\nheads_k_stride: 1\nThe sequence_parallel_degree should be a divisor of the total number of GPUs. For example:\n\nWith 8 GPUs, valid values would be 2, 4, or 8\nWith 4 GPUs, valid values would be 2 or 4\n\n\n\n\nWhen sequence parallelism is enabled:\n\nEach sequence is divided into equal chunks across the GPUs in a sequence parallel group\nThe data collator handles the chunking of input_ids, attention_mask, labels, and position_ids\nPosition IDs are adjusted to maintain proper relative positions, especially for packed sequences\nThe trainer uses special ring communication patterns for attention operations\n\n\n\n\nTo use sequence parallelism, you need:\n\nMultiple GPUs (at least 2)\nThe ring-flash-attn package. 
Install with:\n\npip install axolotl[ring-flash-attn] (preferred)\npip install ring-flash-attn>=0.1.4\n\n\n\n\n\n\nFlash attention must be enabled for this to work (flash_attention: true in config YAML)\nMay have a small performance overhead due to communication between GPUs\n\n\n\n\nbase_model: meta-llama/Llama-3-8B-Instruct\nsequence_len: 8192\n\n...\n\nsequence_parallel_degree: 4 # Split each sequence into 4 parts, one per GPU\nflash_attention: true # Required with sequence parallelism\n# Optional; strides across the key dimension. Larger values use more memory but should make training faster.\nheads_k_stride: 1\n\n...\nThis will train the Llama 3 8B model with 8K context length, with each sequence split\ninto 4 subsequences of length 2048 across 4 GPUs.\n\n\n\nSequence parallelism is compatible with Axolotl’s sample packing functionality. When using both features together:\n\nSamples are first packed together\nThe packed sequences are then divided across GPUs in the sequence parallel group\nPosition IDs are automatically adjusted to maintain proper relative positions\n\n\n\n\nWhen using sequence parallelism, your effective global batch size is divided by the sequence_parallel_degree. This happens because:\n\nEach group of sequence_parallel_degree GPUs works on the same batch (just different parts of each sequence)\nThe number of batches processed per step decreases\n\nFor example:\n- With 8 GPUs and no sequence parallelism: 8 different batches processed per step\n- With 8 GPUs and sequence_parallel_degree=4: Only 2 different batches processed per step (each split across 4 GPUs)\n- If your per-GPU micro_batch_size is 2, the global batch size decreases from 16 to 4",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#when-to-use-sequence-parallelism",
"href": "docs/sequence_parallelism.html#when-to-use-sequence-parallelism",
"title": "Sequence Parallelism",
"section": "",
- "text": "Use sequence parallelism when:\n\nYou need to train with sequence lengths that don’t fit into a single GPU’s memory\nYou have multiple GPUs available\nYou’re experiencing OOM (Out Of Memory) errors with long sequences"
+ "text": "Use sequence parallelism when:\n\nYou need to train with sequence lengths that don’t fit into a single GPU’s memory\nYou have multiple GPUs available\nYou’re experiencing OOM (Out Of Memory) errors with long sequences",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#configuration",
"href": "docs/sequence_parallelism.html#configuration",
"title": "Sequence Parallelism",
"section": "",
- "text": "To enable sequence parallelism, add the following to your configuration file:\n# Set to a divisor (> 1) of the number of GPUs available\nsequence_parallel_degree: 4 # Split sequences across 4 GPUs\nThe sequence_parallel_degree should be a divisor of the total number of GPUs. For example:\n\nWith 8 GPUs, valid values would be 2, 4, or 8\nWith 4 GPUs, valid values would be 2 or 4"
+ "text": "To enable sequence parallelism, add the following to your configuration file:\n# Set to a divisor (> 1) of the number of GPUs available\nsequence_parallel_degree: 4 # Split sequences across 4 GPUs\n# Optional; strides across the key dimension. Larger values use more memory but should make training faster.\nheads_k_stride: 1\nThe sequence_parallel_degree should be a divisor of the total number of GPUs. For example:\n\nWith 8 GPUs, valid values would be 2, 4, or 8\nWith 4 GPUs, valid values would be 2 or 4",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#implementation-details",
"href": "docs/sequence_parallelism.html#implementation-details",
"title": "Sequence Parallelism",
"section": "",
- "text": "When sequence parallelism is enabled:\n\nEach sequence is divided into equal chunks across the GPUs in a sequence parallel group\nThe data collator handles the chunking of input_ids, attention_mask, labels, and position_ids\nPosition IDs are adjusted to maintain proper relative positions, especially for packed sequences\nThe trainer uses special ring communication patterns for attention operations"
+ "text": "When sequence parallelism is enabled:\n\nEach sequence is divided into equal chunks across the GPUs in a sequence parallel group\nThe data collator handles the chunking of input_ids, attention_mask, labels, and position_ids\nPosition IDs are adjusted to maintain proper relative positions, especially for packed sequences\nThe trainer uses special ring communication patterns for attention operations",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#requirements",
"href": "docs/sequence_parallelism.html#requirements",
"title": "Sequence Parallelism",
"section": "",
- "text": "To use sequence parallelism, you need:\n\nMultiple GPUs (at least 2)\nThe ring-flash-attn package. Install with:\n\npip install axolotl[ring-flash-attn] (preferred)\npip install ring-flash-attn>=0.1.4"
+ "text": "To use sequence parallelism, you need:\n\nMultiple GPUs (at least 2)\nThe ring-flash-attn package. Install with:\n\npip install axolotl[ring-flash-attn] (preferred)\npip install ring-flash-attn>=0.1.4",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#limitations",
"href": "docs/sequence_parallelism.html#limitations",
"title": "Sequence Parallelism",
"section": "",
- "text": "Flash attention must be enabled for this to work (flash_attention: true in config YAML)\nMay have a small performance overhead due to communication between GPUs"
+ "text": "Flash attention must be enabled for this to work (flash_attention: true in config YAML)\nMay have a small performance overhead due to communication between GPUs",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#example",
"href": "docs/sequence_parallelism.html#example",
"title": "Sequence Parallelism",
"section": "",
- "text": "# Example config with sequence parallelism\nbase_model: meta-llama/Llama-3-8B-Instruct\nsequence_len: 8192\nsequence_parallel_degree: 2 # Split each sequence into 4 parts\nflash_attention: true # Required with sequence parallelism\n...\nThis will train the Llama 3 8B model with 8K context length, with each sequence split\ninto 2 subsequences of length 4096 across 2 GPUs."
+ "text": "base_model: meta-llama/Llama-3-8B-Instruct\nsequence_len: 8192\n\n...\n\nsequence_parallel_degree: 4 # Split each sequence into 4 parts, one per GPU\nflash_attention: true # Required with sequence parallelism\n# Optional; strides across the key dimension. Larger values use more memory but should make training faster.\nheads_k_stride: 1\n\n...\nThis will train the Llama 3 8B model with 8K context length, with each sequence split\ninto 4 subsequences of length 2048 across 4 GPUs.",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#sample-packing-with-sequence-parallelism",
"href": "docs/sequence_parallelism.html#sample-packing-with-sequence-parallelism",
"title": "Sequence Parallelism",
"section": "",
- "text": "Sequence parallelism is compatible with Axolotl’s sample packing functionality. When using both features together:\n\nSamples are first packed together\nThe packed sequences are then divided across GPUs in the sequence parallel group\nPosition IDs are automatically adjusted to maintain proper relative positions"
+ "text": "Sequence parallelism is compatible with Axolotl’s sample packing functionality. When using both features together:\n\nSamples are first packed together\nThe packed sequences are then divided across GPUs in the sequence parallel group\nPosition IDs are automatically adjusted to maintain proper relative positions",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/sequence_parallelism.html#effect-on-batch-size",
"href": "docs/sequence_parallelism.html#effect-on-batch-size",
"title": "Sequence Parallelism",
"section": "",
- "text": "When using sequence parallelism, your effective global batch size is divided by the sequence_parallel_degree. This happens because:\n\nEach group of sequence_parallel_degree GPUs works on the same batch (just different parts of each sequence)\nThe number of batches processed per step decreases\n\nFor example:\n- With 8 GPUs and no sequence parallelism: 8 different batches processed per step\n- With 8 GPUs and sequence_parallel_degree=4: Only 2 different batches processed per step (each split across 4 GPUs)\n- If your per-GPU micro_batch_size is 2, the global batch size decreases from 16 to 4"
+ "text": "When using sequence parallelism, your effective global batch size is divided by the sequence_parallel_degree. This happens because:\n\nEach group of sequence_parallel_degree GPUs works on the same batch (just different parts of each sequence)\nThe number of batches processed per step decreases\n\nFor example:\n- With 8 GPUs and no sequence parallelism: 8 different batches processed per step\n- With 8 GPUs and sequence_parallel_degree=4: Only 2 different batches processed per step (each split across 4 GPUs)\n- If your per-GPU micro_batch_size is 2, the global batch size decreases from 16 to 4",
+ "crumbs": [
+ "Advanced Features",
+ "Sequence Parallelism"
+ ]
},
{
"objectID": "docs/multipack.html",
diff --git a/sitemap.xml b/sitemap.xml
index 5f8c05b17..f34779672 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,674 +2,674 @@
https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html
- 2025-03-31T06:40:26.194Z
+ 2025-03-31T13:13:55.601Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/stepwise_supervised.html
- 2025-03-31T06:40:26.190Z
+ 2025-03-31T13:13:55.597Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html
- 2025-03-31T06:40:26.190Z
+ 2025-03-31T13:13:55.597Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html
- 2025-03-31T06:40:26.190Z
+ 2025-03-31T13:13:55.597Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/amd_hpc.html
- 2025-03-31T06:40:26.189Z
+ 2025-03-31T13:13:55.596Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/config.html
- 2025-03-31T06:40:26.189Z
+ 2025-03-31T13:13:55.596Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/multi-gpu.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/installation.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/torchao.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/reward_modelling.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.mlflow_.html
- 2025-03-31T06:40:57.018Z
+ 2025-03-31T13:14:44.106Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.trainer_fsdp_optim.html
- 2025-03-31T06:40:56.618Z
+ 2025-03-31T13:14:43.710Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.data.batch_dataset_fetcher.html
- 2025-03-31T06:40:56.634Z
+ 2025-03-31T13:14:43.726Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.stepwise_supervised.html
- 2025-03-31T06:40:56.328Z
+ 2025-03-31T13:14:43.422Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.mistral_attn_hijack_flash.html
- 2025-03-31T06:40:56.567Z
+ 2025-03-31T13:14:43.660Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.user_defined.html
- 2025-03-31T06:40:56.374Z
+ 2025-03-31T13:14:43.468Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.liger.args.html
- 2025-03-31T06:40:56.936Z
+ 2025-03-31T13:14:44.025Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.training.html
- 2025-03-31T06:40:56.800Z
+ 2025-03-31T13:14:43.891Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/datasets.html
- 2025-03-31T06:40:55.838Z
+ 2025-03-31T13:14:42.935Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.geglu.html
- 2025-03-31T06:40:56.508Z
+ 2025-03-31T13:14:43.601Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.llama_attn_hijack_flash.html
- 2025-03-31T06:40:56.551Z
+ 2025-03-31T13:14:43.644Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.sweeps.html
- 2025-03-31T06:40:56.164Z
+ 2025-03-31T13:14:43.259Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.freeze.html
- 2025-03-31T06:40:56.705Z
+ 2025-03-31T13:14:43.796Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.multipack.html
- 2025-03-31T06:40:56.569Z
+ 2025-03-31T13:14:43.661Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.main.html
- 2025-03-31T06:40:56.064Z
+ 2025-03-31T13:14:43.160Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/core.trainers.trl.html
- 2025-03-31T06:40:56.239Z
+ 2025-03-31T13:14:43.333Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.passthrough.html
- 2025-03-31T06:40:56.376Z
+ 2025-03-31T13:14:43.470Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/core.chat.format.llama3x.html
- 2025-03-31T06:40:56.019Z
+ 2025-03-31T13:14:43.116Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/core.datasets.transforms.chat_builder.html
- 2025-03-31T06:40:56.034Z
+ 2025-03-31T13:14:43.130Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.kto.user_defined.html
- 2025-03-31T06:40:56.393Z
+ 2025-03-31T13:14:43.487Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.collators.mamba.html
- 2025-03-31T06:40:56.993Z
+ 2025-03-31T13:14:44.081Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.base.html
- 2025-03-31T06:40:56.921Z
+ 2025-03-31T13:14:44.010Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.bench.html
- 2025-03-31T06:40:56.697Z
+ 2025-03-31T13:14:43.788Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.swiglu.html
- 2025-03-31T06:40:56.517Z
+ 2025-03-31T13:14:43.611Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/core.chat.format.shared.html
- 2025-03-31T06:40:56.021Z
+ 2025-03-31T13:14:43.118Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.cut_cross_entropy.args.html
- 2025-03-31T06:40:56.924Z
+ 2025-03-31T13:14:44.013Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/core.datasets.chat.html
- 2025-03-31T06:40:56.026Z
+ 2025-03-31T13:14:43.123Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.lisa.html
- 2025-03-31T06:40:57.014Z
+ 2025-03-31T13:14:44.102Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.grokfast.optimizer.html
- 2025-03-31T06:40:56.925Z
+ 2025-03-31T13:14:44.014Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.alpaca_chat.html
- 2025-03-31T06:40:56.278Z
+ 2025-03-31T13:14:43.372Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.alpaca_instruct.html
- 2025-03-31T06:40:56.280Z
+ 2025-03-31T13:14:43.374Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.kto.chatml.html
- 2025-03-31T06:40:56.392Z
+ 2025-03-31T13:14:43.486Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.integrations.html
- 2025-03-31T06:40:56.846Z
+ 2025-03-31T13:14:43.936Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.trl.html
- 2025-03-31T06:40:56.829Z
+ 2025-03-31T13:14:43.919Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_tokenizers.html
- 2025-03-31T06:40:55.892Z
+ 2025-03-31T13:14:42.989Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.data.sft.html
- 2025-03-31T06:40:56.778Z
+ 2025-03-31T13:14:43.868Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schedulers.html
- 2025-03-31T06:40:56.746Z
+ 2025-03-31T13:14:43.836Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.chat_templates.html
- 2025-03-31T06:40:56.680Z
+ 2025-03-31T13:14:43.772Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.models.html
- 2025-03-31T06:40:56.664Z
+ 2025-03-31T13:14:43.756Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.chatml.html
- 2025-03-31T06:40:56.371Z
+ 2025-03-31T13:14:43.465Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.distributed.html
- 2025-03-31T06:40:56.764Z
+ 2025-03-31T13:14:43.855Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.utils.html
- 2025-03-31T06:40:56.607Z
+ 2025-03-31T13:14:43.699Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.utils.html
- 2025-03-31T06:40:56.858Z
+ 2025-03-31T13:14:43.948Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.llama_expand_mask.html
- 2025-03-31T06:40:56.577Z
+ 2025-03-31T13:14:43.669Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/common.datasets.html
- 2025-03-31T06:40:56.961Z
+ 2025-03-31T13:14:44.050Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/logging_config.html
- 2025-03-31T06:40:55.897Z
+ 2025-03-31T13:14:42.994Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.quantize.html
- 2025-03-31T06:40:56.525Z
+ 2025-03-31T13:14:43.618Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.llama_patch_multipack.html
- 2025-03-31T06:40:56.609Z
+ 2025-03-31T13:14:43.702Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.comet_.html
- 2025-03-31T06:40:57.021Z
+ 2025-03-31T13:14:44.109Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.trainer.html
- 2025-03-31T06:40:56.721Z
+ 2025-03-31T13:14:43.813Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/common.architectures.html
- 2025-03-31T06:40:56.944Z
+ 2025-03-31T13:14:44.033Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/models.mamba.modeling_mamba.html
- 2025-03-31T06:40:56.962Z
+ 2025-03-31T13:14:44.051Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.spectrum.args.html
- 2025-03-31T06:40:56.942Z
+ 2025-03-31T13:14:44.031Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.merge_sharded_fsdp_weights.html
- 2025-03-31T06:40:56.151Z
+ 2025-03-31T13:14:43.245Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.bradley_terry.llama3.html
- 2025-03-31T06:40:56.417Z
+ 2025-03-31T13:14:43.511Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.merge_lora.html
- 2025-03-31T06:40:56.139Z
+ 2025-03-31T13:14:43.234Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.lora.html
- 2025-03-31T06:40:56.685Z
+ 2025-03-31T13:14:43.777Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.relora.html
- 2025-03-31T06:40:56.575Z
+ 2025-03-31T13:14:43.668Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.cloud.base.html
- 2025-03-31T06:40:56.198Z
+ 2025-03-31T13:14:43.293Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/common.const.html
- 2025-03-31T06:40:56.945Z
+ 2025-03-31T13:14:44.034Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/convert.html
- 2025-03-31T06:40:55.851Z
+ 2025-03-31T13:14:42.948Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.chat_template.html
- 2025-03-31T06:40:56.265Z
+ 2025-03-31T13:14:43.359Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.utils.html
- 2025-03-31T06:40:56.526Z
+ 2025-03-31T13:14:43.619Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.lora_embeddings.html
- 2025-03-31T06:40:56.688Z
+ 2025-03-31T13:14:43.780Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/lora_optims.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html
- 2025-03-31T06:40:26.189Z
+ 2025-03-31T13:13:55.596Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/faq.html
- 2025-03-31T06:40:26.190Z
+ 2025-03-31T13:13:55.597Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html
- 2025-03-31T06:40:26.190Z
+ 2025-03-31T13:13:55.597Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/lr_groups.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/TODO.html
- 2025-03-31T06:40:26.188Z
+ 2025-03-31T13:13:55.595Zhttps://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html
- 2025-03-31T06:40:26.208Z
+ 2025-03-31T13:13:55.616Zhttps://axolotl-ai-cloud.github.io/axolotl/index.html
- 2025-03-31T06:40:26.205Z
+ 2025-03-31T13:13:55.612Zhttps://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/cut_cross_entropy/ACKNOWLEDGEMENTS.html
- 2025-03-31T06:40:26.209Z
+ 2025-03-31T13:13:55.616Zhttps://axolotl-ai-cloud.github.io/axolotl/FAQS.html
- 2025-03-31T06:40:26.188Z
+ 2025-03-31T13:13:55.595Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/sequence_parallelism.html
- 2025-03-31T06:40:26.193Z
+ 2025-03-31T13:13:55.600Zhttps://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html
- 2025-03-31T06:40:26.193Z
(sitemap.xml diff, abridged: the <lastmod> timestamps for every docs page — inference, getting-started, the api/* reference pages, rlhf, cli, unsloth, fsdp_qlora, dataset_preprocessing, custom_integrations, mac, docker, ray-integration, and the dataset-formats pages — were updated from 2025-03-31T06:40Z to 2025-03-31T13:13–13:14Z)
diff --git a/src/axolotl/integrations/LICENSE.html b/src/axolotl/integrations/LICENSE.html
index aaf5a2033..3daa1aca2 100644
--- a/src/axolotl/integrations/LICENSE.html
+++ b/src/axolotl/integrations/LICENSE.html
@@ -356,6 +356,12 @@ ul.task-list li input[type="checkbox"] {
Custom Integrations
+
+