From e8e07e15d8868468c81f3ce82e77997f9b79c582 Mon Sep 17 00:00:00 2001
From: Quarto GHA Workflow Runner
Date: Tue, 10 Jun 2025 04:44:24 +0000
Subject: [PATCH] Built site for gh-pages

---
 .nojekyll                                     |    2 +-
 docs/api/prompt_strategies.chat_template.html |  125 +-
 docs/config.html                              | 1208 +++++++++--------
 docs/dataset-formats/conversation.html        |  237 +++-
 search.json                                   |   10 +-
 sitemap.xml                                   |  378 +++---
 6 files changed, 1069 insertions(+), 891 deletions(-)

diff --git a/.nojekyll b/.nojekyll
index ded4f025e..def45b25e 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-5efba35f
\ No newline at end of file
+cb332e67
\ No newline at end of file

diff --git a/docs/api/prompt_strategies.chat_template.html b/docs/api/prompt_strategies.chat_template.html
index 8b3ed41c5..2e7f65444 100644
--- a/docs/api/prompt_strategies.chat_template.html
+++ b/docs/api/prompt_strategies.chat_template.html
@@ -525,28 +525,101 @@ gtag('config', 'G-9KYCVJBNMQ', { 'anonymize_ip': true});
     message_field_training_detail=None,
     field_messages='messages',
     field_system='system',
-    roles=None,
-    chat_template_kwargs=None,
-    drop_system_message=False,
-)
+    field_tools='tools',
+    roles=None,
+    chat_template_kwargs=None,
+    drop_system_message=False,
+)

Prompter for Hugging Face chat templates.


Methods

Name           Description
-------------  -----------------------------------
build_prompt   Build a prompt from a conversation.
build_prompt
+prompt_strategies.chat_template.ChatTemplatePrompter.build_prompt(
+    conversation,
+    add_generation_prompt=False,
+    images=None,
+    tools=None,
+)

Build a prompt from a conversation.

Parameters
Name                   Type  Description                          Default
---------------------  ----  -----------------------------------  --------
conversation                 A list of messages.                  required
add_generation_prompt        Whether to add a generation prompt.  False
images                       A list of images. (optional)         None
tools                        A list of tools. (optional)          None
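The prompter is driven by dataset-level configuration. Below is a minimal sketch of the corresponding YAML, using only keys documented in docs/config.html later in this patch; the dataset path and the chat_template dataset type are illustrative assumptions, not values taken from this diff:

datasets:
  - path: ./data/chats.jsonl   # hypothetical path
    type: chat_template        # assumed dataset type for this prompter
    field_messages: messages
    field_system: system
    field_tools: tools
    message_property_mappings:
      role: role
      content: content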

ChatTemplateStrategy

prompt_strategies.chat_template.ChatTemplateStrategy(
    prompter,
    tokenizer,
    train_on_inputs,
    sequence_len,
    roles_to_train=None,
    train_on_eos=None,
    train_on_eot=None,
    eot_tokens=None,
    split_thinking=False,
)

Tokenizing strategy for instruction-based prompts.
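The constructor arguments mirror top-level config keys documented in docs/config.html later in this patch. A hedged sketch with illustrative values:

train_on_inputs: false
sequence_len: 2048
roles_to_train: ["assistant"]   # loss is computed only on these roles
train_on_eos: turn              # all | turn | last
train_on_eot: turn              # falls back to train_on_eos if unset
split_thinking: false           # Qwen3-style reasoning-trace splitting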

Methods

@@ -571,27 +644,31 @@ gtag('config', 'G-9KYCVJBNMQ', { 'anonymize_ip': true});
find_first_eot_token
prompt_strategies.chat_template.ChatTemplateStrategy.find_first_eot_token(
    input_ids,
    start_idx,
)

Find the first EOT token in the input_ids starting from start_idx.
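Which token ids count as EOT is configurable. Per the eot_tokens documentation in docs/config.html below, a sketch (the tokens listed are the docs' own examples, not a recommendation):

eot_tokens:
  - "</s>"
  - "[/INST]"
train_on_eot: turn   # mask/unmask the EOT token of each trainable turn

If eot_tokens is not set, it defaults to just the model's eos_token.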

find_turn
-prompt_strategies.chat_template.ChatTemplateStrategy.find_turn(turns, turn_idx)
+prompt_strategies.chat_template.ChatTemplateStrategy.find_turn(
+    turns,
+    turn_idx,
+    tools=None,
+)

Locate the starting and ending indices of the specified turn in a conversation.
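The new tools parameter threads tool definitions into turn matching. In the dataset, tools live under the key named by field_tools and must be a list[dict] following JSON schema (per docs/config.html below). A hedged sketch of one dataset row; the exact tool shape shown is an assumption:

messages:
  - role: user
    content: What is the weather in Paris?
  - role: assistant
    content: Let me check.
tools:
  - name: get_weather            # hypothetical tool
    description: Look up current weather for a city
    parameters:
      type: object
      properties:
        city: { type: string }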

tokenize_prompt
prompt_strategies.chat_template.ChatTemplateStrategy.tokenize_prompt(prompt)

Public method that can handle either a single prompt or a batch of prompts.

StrategyLoader

prompt_strategies.chat_template.StrategyLoader()

Load chat template strategy based on configuration.
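The loader keys off the chat_template settings documented in docs/config.html below. A minimal sketch, assuming the documented values:

chat_template: tokenizer_default
# or, to supply a custom template:
# chat_template: jinja
# chat_template_jinja: "{{ ... }}"   # placeholder, not a working template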

diff --git a/docs/config.html b/docs/config.html index f8bea6ea3..8e993aea1 100644 --- a/docs/config.html +++ b/docs/config.html @@ -662,625 +662,629 @@ gtag('config', 'G-9KYCVJBNMQ', { 'anonymize_ip': true}); # Key containing the messages (default: "messages") field_messages: messages - # Key containing the system message (default: "system") - # If the system message is not present in the dataset sample, it will be loaded from the field_system property. - field_system: system + # Key containing the tools (default: "tools") + # Must be a list[dict] and follow [JSON schema](https://json-schema.org/learn/getting-started-step-by-step). + field_tools: tools - # Mapping of properties from the input dataset to the chat template. - # (default: message_property_mappings={'role':'role', 'content':'content'}) - # If a property exists in the template but not in this mapping, the system will attempt - # to load it directly from the message using the property name as the key. - # Example: In the mapping below, 'from' is loaded from input dataset and used as 'role', - # while 'value' is loaded and used as 'content' in the chat template. - message_property_mappings: - role: from - content: value - # ... - - # Optional[Dict[str, List]]. Roles mapping in the messages. - # The format is {target_role: [source_roles]}. All source roles will be mapped to the target role. - # The default is: - roles: - user: ["human", "user"] - assistant: ["gpt", "assistant"] - system: ["system"] - tool: ["tool"] - - # Optional[bool]. Whether to drop the system turn from the dataset. Only works with chat_template. - # This does not drop the default system message from chat_template if it exists. If you wish to, - # we recommend using a custom jinja template with the default system message removed or - # adding a system turn with empty content. - drop_system_message: - - # Optional[bool]. (for Qwen3 template only) Whether to split the assistant content based on a reasoning trace inside delimited tags - # See example at `docs/dataset-formats/conversation.qmd` - split_thinking: + # Key containing the system message (default: "system") + # If the system message is not present in the dataset sample, it will be loaded from the field_system property. + field_system: system + + # Mapping of properties from the input dataset to the chat template. + # (default: message_property_mappings={'role':'role', 'content':'content'}) + # If a property exists in the template but not in this mapping, the system will attempt + # to load it directly from the message using the property name as the key. + # Example: In the mapping below, 'from' is loaded from input dataset and used as 'role', + # while 'value' is loaded and used as 'content' in the chat template. + message_property_mappings: + role: from + content: value + # ... + + # Optional[Dict[str, List]]. Roles mapping in the messages. + # The format is {target_role: [source_roles]}. All source roles will be mapped to the target role. + # The default is: + roles: + user: ["human", "user"] + assistant: ["gpt", "assistant"] + system: ["system"] + tool: ["tool"] + + # Optional[bool]. Whether to drop the system turn from the dataset. Only works with chat_template. + # This does not drop the default system message from chat_template if it exists. If you wish to, + # we recommend using a custom jinja template with the default system message removed or + # adding a system turn with empty content. 
+ drop_system_message: - # IMPORTANT: The following fields determine which parts of the conversation to train on. - # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train - # See examples at `docs/dataset-formats/conversation.qmd` - # Note: If the below 5 fields are empty, defaults to training only on the last message. - - # Optional[List[str]]. Roles to train on. The tokens from these roles will be considered for the loss. - roles_to_train: ["assistant"] # default - # Optional[str]. Which EOS tokens to train on in the conversation. Possible values are: - # - all: train on all EOS tokens - # - turn (default): train on the EOS token at the end of each trainable turn - # - last: train on the last EOS token in the conversation - # TIP: Please make sure that your `tokenizer.eos_token` is same as EOS/EOT token in template. Otherwise, set `eos_token` under `special_tokens`. - train_on_eos: turn - # Optional[str]. Which EOT (End-of-Turn) tokens to train on in the conversation. Possible values are: - # - all: train on all EOT tokens - # - turn: train on the EOT token at the end of each trainable turn - # - last: train on the last EOT token in the conversation - # If not specified, defaults to the value of train_on_eos for backward compatibility. - train_on_eot: - # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`. - message_field_training: training - # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn. - # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train). - message_field_training_detail: train_detail - - -# If false, the datasets will not be shuffled and will keep their original order in `datasets`. -# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true. -shuffle_merged_datasets: true + # Optional[bool]. (for Qwen3 template only) Whether to split the assistant content based on a reasoning trace inside delimited tags + # See example at `docs/dataset-formats/conversation.qmd` + split_thinking: + + # IMPORTANT: The following fields determine which parts of the conversation to train on. + # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train + # See examples at `docs/dataset-formats/conversation.qmd` + # Note: If the below 5 fields are empty, defaults to training only on the last message. + + # Optional[List[str]]. Roles to train on. The tokens from these roles will be considered for the loss. + roles_to_train: ["assistant"] # default + # Optional[str]. Which EOS tokens to train on in the conversation. Possible values are: + # - all: train on all EOS tokens + # - turn (default): train on the EOS token at the end of each trainable turn + # - last: train on the last EOS token in the conversation + # TIP: Please make sure that your `tokenizer.eos_token` is same as EOS/EOT token in template. Otherwise, set `eos_token` under `special_tokens`. + train_on_eos: turn + # Optional[str]. Which EOT (End-of-Turn) tokens to train on in the conversation. 
Possible values are: + # - all: train on all EOT tokens + # - turn: train on the EOT token at the end of each trainable turn + # - last: train on the last EOT token in the conversation + # If not specified, defaults to the value of train_on_eos for backward compatibility. + train_on_eot: + # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`. + message_field_training: training + # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn. + # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train). + message_field_training_detail: train_detail + -# Deduplicates datasets and test_datasets with identical entries. -dataset_exact_deduplication: true - -# A list of one or more datasets to eval the model with. -# You can use either test_datasets, or val_set_size, but not both. -test_datasets: - - path: /workspace/data/eval.jsonl - ds_type: json - # You need to specify a split. For "json" datasets the default split is called "train". - split: train - type: completion - data_files: - - /workspace/data/eval.jsonl - -# use RL training: 'dpo', 'ipo', 'kto', 'simpo', 'orpo', 'grpo' -rl: -rl_beta: # Optional[float]. The beta parameter for the RL training. +# If false, the datasets will not be shuffled and will keep their original order in `datasets`. +# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true. +shuffle_merged_datasets: true + +# Deduplicates datasets and test_datasets with identical entries. +dataset_exact_deduplication: true + +# A list of one or more datasets to eval the model with. +# You can use either test_datasets, or val_set_size, but not both. +test_datasets: + - path: /workspace/data/eval.jsonl + ds_type: json + # You need to specify a split. For "json" datasets the default split is called "train". + split: train + type: completion + data_files: + - /workspace/data/eval.jsonl -# dpo -dpo_use_weighting: # Optional[bool]. Whether to perform weighting. -rpo_alpha: # Optional[float]. Weighting of NLL term in loss from RPO paper. +# use RL training: 'dpo', 'ipo', 'kto', 'simpo', 'orpo', 'grpo' +rl: +rl_beta: # Optional[float]. The beta parameter for the RL training. -# orpo -orpo_alpha: 0.1 # Parameter controlling the relative ratio loss weight in the ORPO loss. Passed to `beta` in `ORPOConfig` due to trl mapping. - -# kto -kto_desirable_weight: # Optional[float]. Factor for desirable loss term in KTO loss. -kto_undesirable_weight: # Optional[float]. Factor for undesirable loss term in KTO loss. +# dpo +dpo_use_weighting: # Optional[bool]. Whether to perform weighting. +rpo_alpha: # Optional[float]. Weighting of NLL term in loss from RPO paper. + +# orpo +orpo_alpha: 0.1 # Parameter controlling the relative ratio loss weight in the ORPO loss. Passed to `beta` in `ORPOConfig` due to trl mapping. -# simpo -cpo_alpha: 1.0 # Weight of the BC regularizer -simpo_gamma: 0.5 # Target reward margin for the SimPO loss +# kto +kto_desirable_weight: # Optional[float]. Factor for desirable loss term in KTO loss. +kto_undesirable_weight: # Optional[float]. Factor for undesirable loss term in KTO loss. -# grpo -trl: - use_vllm: # Optional[bool]. Whether to use VLLM for RL training. - vllm_server_host: # Optional[str]. 
Host of the vLLM server to connect to. - vllm_server_port: # Optional[int]. Port of the vLLM server to connect to. - vllm_server_timeout: # Optional[int]. Total timeout (in seconds) to wait for the vLLM server to respond. - vllm_guided_decoding_regex: # Optional[str]. Regex for vLLM guided decoding. - - beta: # Optional[float]. Beta parameter for the RL training. Same as `rl_beta`. Use - max_completion_length: # Optional[int]. Maximum length of the completion for RL training. - - reward_funcs: # Optional[list[str]]. List of reward functions to load. Paths must be importable from current dir. - reward_weights: # Optional[list[float]]. List of reward weights for the reward functions. - - num_generations: # Optional[int]. Number of generations to sample. - log_completions: # Optional[bool]. Whether to log completions. - num_completions_to_print: # Optional[int]. Number of completions to print when log_completions is True. +# simpo +cpo_alpha: 1.0 # Weight of the BC regularizer +simpo_gamma: 0.5 # Target reward margin for the SimPO loss + +# grpo +trl: + use_vllm: # Optional[bool]. Whether to use VLLM for RL training. + vllm_server_host: # Optional[str]. Host of the vLLM server to connect to. + vllm_server_port: # Optional[int]. Port of the vLLM server to connect to. + vllm_server_timeout: # Optional[int]. Total timeout (in seconds) to wait for the vLLM server to respond. + vllm_guided_decoding_regex: # Optional[str]. Regex for vLLM guided decoding. + + beta: # Optional[float]. Beta parameter for the RL training. Same as `rl_beta`. Use + max_completion_length: # Optional[int]. Maximum length of the completion for RL training. + + reward_funcs: # Optional[list[str]]. List of reward functions to load. Paths must be importable from current dir. + reward_weights: # Optional[list[float]]. List of reward weights for the reward functions. - sync_ref_model: # Optional[bool]. Whether to sync the reference model. - ref_model_mixup_alpha: # Optional[float]. Mixup alpha for the reference model. - ref_model_sync_steps: # Optional[int]. Sync steps for the reference model. - scale_rewards: # Optional[bool]. Whether to scale rewards by their standard deviation. - - temperature: # Optional[float]. Sampling temperature for the GRPO policy. - top_p: # Optional[float]. Top-p sampling probability for the generation policy. - top_k: # Optional[int]. Top-k sampling for the generation policy. - min_p: # Optional[float]. Minimum probability for the generation policy. - repetition_penalty: # Optional[float]. Penalty for tokens that appear in prompt and generated text. - - num_iterations: # Optional[int]. Number of iterations per batch (μ) for GRPO. - epsilon: # Optional[float]. Epsilon value for clipping in the GRPO algorithm. - epsilon_high: # Optional[float]. Upper-bound epsilon value for clipping in the GRPO algorithm. - use_liger_loss: # Optional[bool]. Whether to use Liger loss for GRPO. - loss_type: # Optional[str]. Loss formulation to use. Supported values: grpo, bnpo, dr_grpo. - mask_truncated_completions: # Optional[bool]. Whether to exclude truncated completions from loss calculation. - - -# reward modelling: `True` or `False` -reward_model: + num_generations: # Optional[int]. Number of generations to sample. + log_completions: # Optional[bool]. Whether to log completions. + num_completions_to_print: # Optional[int]. Number of completions to print when log_completions is True. + + sync_ref_model: # Optional[bool]. Whether to sync the reference model. + ref_model_mixup_alpha: # Optional[float]. 
Mixup alpha for the reference model. + ref_model_sync_steps: # Optional[int]. Sync steps for the reference model. + scale_rewards: # Optional[bool]. Whether to scale rewards by their standard deviation. + + temperature: # Optional[float]. Sampling temperature for the GRPO policy. + top_p: # Optional[float]. Top-p sampling probability for the generation policy. + top_k: # Optional[int]. Top-k sampling for the generation policy. + min_p: # Optional[float]. Minimum probability for the generation policy. + repetition_penalty: # Optional[float]. Penalty for tokens that appear in prompt and generated text. + + num_iterations: # Optional[int]. Number of iterations per batch (μ) for GRPO. + epsilon: # Optional[float]. Epsilon value for clipping in the GRPO algorithm. + epsilon_high: # Optional[float]. Upper-bound epsilon value for clipping in the GRPO algorithm. + use_liger_loss: # Optional[bool]. Whether to use Liger loss for GRPO. + loss_type: # Optional[str]. Loss formulation to use. Supported values: grpo, bnpo, dr_grpo. + mask_truncated_completions: # Optional[bool]. Whether to exclude truncated completions from loss calculation. -# process reward modelling: `True` or `False` -process_reward_model: - -# The name of the chat template to use for training, following values are supported: -# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default value. -# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py -# - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer. -# - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field. -# The selected chat template will be saved to the tokenizer_config.json for easier inferencing -# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template. -chat_template: tokenizer_default -# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null. -chat_template_jinja: null -# Optional[List[str]]. Custom EOT (End-of-Turn) tokens to mask/unmask during training. -# These tokens mark the boundaries between conversation turns. -# For example: ["/INST", "</s>", "[/SYSTEM_PROMPT]"] -# If not specified, defaults to just the model's eos_token. -# This is useful for templates that use multiple delimiter tokens. -eot_tokens: - # - "</s>" - # - "[/INST]" - # - "[/SYSTEM_PROMPT]" -# Changes the default system message -default_system_message: You are a helpful assistant. Please give a long and detailed answer. # Currently only supports chatml. -# Axolotl attempts to save the dataset as an arrow after packing the data together so -# subsequent training attempts load faster, relative path -dataset_prepared_path: data/last_run_prepared -# Push prepared dataset to hub -push_dataset_to_hub: # Optional[str] repo_org/repo_name -# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()` -# if not set. 
-dataset_processes: # defaults to os.cpu_count() if not set -# Keep dataset in memory while preprocessing -# Only needed if cached dataset is taking too much storage -dataset_keep_in_memory: -# push checkpoints to hub -hub_model_id: # private repo path to push finetuned model -# how to push checkpoints to hub -# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy -hub_strategy: -# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets -# Required to be true when used in combination with `push_dataset_to_hub` -hf_use_auth_token: # boolean -# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval. -val_set_size: 0.04 -# Num shards for whole dataset -dataset_shard_num: -# Index of shard to use for whole dataset -dataset_shard_idx: - -# The maximum length of an input to train with, this should typically be less than 2048 -# as most models have a token/context limit of 2048 -sequence_len: 2048 -# Pad inputs so each step uses constant sized buffers -# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently -pad_to_sequence_len: -# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommend set to 'true' -sample_packing: -# Set to 'false' if getting errors during eval with sample_packing on. -eval_sample_packing: -# You can set these packing optimizations AFTER starting a training at least once. -# The trainer will provide recommended values for these values. -sample_packing_eff_est: -total_num_tokens: -# Increasing the following values helps with packing, but usually only slightly (<%1.) -# The number of samples packed at a time. -sample_packing_group_size: 100000 -# The number of samples which can be packed into one sequence. Increase if using a large sequence_len with many short samples. -sample_packing_bin_size: 200 -sample_pack_sequentially: # Optional[bool]. Whether to pack samples sequentially. - -# whether to concatenate samples during pretraining -pretraining_sample_concatenation: - -curriculum_sampling: # Optional[bool]. Whether to use sequential sampling for curriculum learning - -# Use batch flattening for speedups when not using sample_packing -batch_flattening: - -# Passed through to transformers when loading the model when launched without accelerate -# Use `sequential` when training w/ model parallelism to limit memory -device_map: -# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model. -max_memory: - -# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model -adapter: lora -# If you already have a lora model trained that you want to load, put that here. -# This means after training, if you want to test the model, you should set this to the value of `output_dir`. -# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`. -lora_model_dir: - -# LoRA hyperparameters -# For more details about the following options, see: -# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2 -lora_r: 8 -lora_alpha: 16 -lora_dropout: 0.05 -lora_target_modules: - - q_proj - - v_proj -# - k_proj -# - o_proj -# - gate_proj -# - down_proj -# - up_proj -lora_target_linear: # If true, will target all linear modules - -# List[int] | int. 
# The layer indices to transform, otherwise, apply to all layers -# https://huggingface.co/docs/peft/v0.15.0/en/package_reference/lora#peft.LoraConfig.layers_to_transform -peft_layers_to_transform: + +# reward modelling: `True` or `False` +reward_model: + +# process reward modelling: `True` or `False` +process_reward_model: + +# The name of the chat template to use for training, following values are supported: +# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default value. +# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py +# - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer. +# - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field. +# The selected chat template will be saved to the tokenizer_config.json for easier inferencing +# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template. +chat_template: tokenizer_default +# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null. +chat_template_jinja: null +# Optional[List[str]]. Custom EOT (End-of-Turn) tokens to mask/unmask during training. +# These tokens mark the boundaries between conversation turns. +# For example: ["/INST", "</s>", "[/SYSTEM_PROMPT]"] +# If not specified, defaults to just the model's eos_token. +# This is useful for templates that use multiple delimiter tokens. +eot_tokens: + # - "</s>" + # - "[/INST]" + # - "[/SYSTEM_PROMPT]" +# Changes the default system message +default_system_message: You are a helpful assistant. Please give a long and detailed answer. # Currently only supports chatml. +# Axolotl attempts to save the dataset as an arrow after packing the data together so +# subsequent training attempts load faster, relative path +dataset_prepared_path: data/last_run_prepared +# Push prepared dataset to hub +push_dataset_to_hub: # Optional[str] repo_org/repo_name +# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()` +# if not set. +dataset_processes: # defaults to os.cpu_count() if not set +# Keep dataset in memory while preprocessing +# Only needed if cached dataset is taking too much storage +dataset_keep_in_memory: +# push checkpoints to hub +hub_model_id: # private repo path to push finetuned model +# how to push checkpoints to hub +# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy +hub_strategy: +# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets +# Required to be true when used in combination with `push_dataset_to_hub` +hf_use_auth_token: # boolean +# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval. 
+val_set_size: 0.04 +# Num shards for whole dataset +dataset_shard_num: +# Index of shard to use for whole dataset +dataset_shard_idx: + +# The maximum length of an input to train with, this should typically be less than 2048 +# as most models have a token/context limit of 2048 +sequence_len: 2048 +# Pad inputs so each step uses constant sized buffers +# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently +pad_to_sequence_len: +# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommend set to 'true' +sample_packing: +# Set to 'false' if getting errors during eval with sample_packing on. +eval_sample_packing: +# You can set these packing optimizations AFTER starting a training at least once. +# The trainer will provide recommended values for these values. +sample_packing_eff_est: +total_num_tokens: +# Increasing the following values helps with packing, but usually only slightly (<%1.) +# The number of samples packed at a time. +sample_packing_group_size: 100000 +# The number of samples which can be packed into one sequence. Increase if using a large sequence_len with many short samples. +sample_packing_bin_size: 200 +sample_pack_sequentially: # Optional[bool]. Whether to pack samples sequentially. + +# whether to concatenate samples during pretraining +pretraining_sample_concatenation: + +curriculum_sampling: # Optional[bool]. Whether to use sequential sampling for curriculum learning + +# Use batch flattening for speedups when not using sample_packing +batch_flattening: + +# Passed through to transformers when loading the model when launched without accelerate +# Use `sequential` when training w/ model parallelism to limit memory +device_map: +# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model. +max_memory: + +# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model +adapter: lora +# If you already have a lora model trained that you want to load, put that here. +# This means after training, if you want to test the model, you should set this to the value of `output_dir`. +# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`. +lora_model_dir: + +# LoRA hyperparameters +# For more details about the following options, see: +# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2 +lora_r: 8 +lora_alpha: 16 +lora_dropout: 0.05 +lora_target_modules: + - q_proj + - v_proj +# - k_proj +# - o_proj +# - gate_proj +# - down_proj +# - up_proj +lora_target_linear: # If true, will target all linear modules -# Optional[bool]. Whether to use DoRA. -# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#weight-decomposed-low-rank-adaptation-dora -peft_use_dora: +# List[int] | int. # The layer indices to transform, otherwise, apply to all layers +# https://huggingface.co/docs/peft/v0.15.0/en/package_reference/lora#peft.LoraConfig.layers_to_transform +peft_layers_to_transform: -# Optional[bool]. Whether to use RSLoRA. -# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#rank-stabilized-lora -peft_use_rslora: +# Optional[bool]. Whether to use DoRA. +# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#weight-decomposed-low-rank-adaptation-dora +peft_use_dora: -# Optional[list[tuple[int, int]]]. List of layer indices to replicate. 
-# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#memory-efficient-layer-replication-with-lora -peft_layer_replication: +# Optional[bool]. Whether to use RSLoRA. +# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#rank-stabilized-lora +peft_use_rslora: -# bool | Literal["gaussian", "eva", "olora", "pissa", "pissa_niter_[number of iters]", "corda", "loftq"] -# How to initialize LoRA weights. Default to True which is MS original implementation. -# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#initialization -peft_init_lora_weights: - -# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens. -# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. It may vary for other models. -# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities. -# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994 -lora_modules_to_save: -# - embed_tokens -# - lm_head - -lora_fan_in_fan_out: false - -# Apply custom LoRA autograd functions and activation function Triton kernels for -# speed and memory savings -# See: https://docs.axolotl.ai/docs/lora_optims.html -lora_mlp_kernel: true -lora_qkv_kernel: true -lora_o_kernel: true - -# LoRA+ hyperparameters -# For more details about the following options, see: -# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py` -loraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4. -loraplus_lr_embedding: # loraplus learning rate for lora embedding layers. Default value is 1e-6. - -peft: - # Configuration options for loftq initialization for LoRA - # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization - loftq_config: - loftq_bits: # typically 4 bits - -# ReLoRA configuration -# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed -relora_steps: # Number of steps per ReLoRA restart -relora_warmup_steps: # Number of per-restart warmup steps -relora_anneal_steps: # Number of anneal steps for each relora cycle -relora_prune_ratio: # threshold for optimizer magnitude when pruning -relora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings - -# wandb configuration if you're using it -# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`. -wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb -wandb_project: # Your wandb project name -wandb_entity: # A wandb Team name if using a Team -wandb_watch: -wandb_name: # Set the name of your wandb run -wandb_run_id: # Set the ID of your wandb run -wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training - -# mlflow configuration if you're using it -mlflow_tracking_uri: # URI to mlflow -mlflow_experiment_name: # Your experiment name -mlflow_run_name: # Your run name -hf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry - -# Comet configuration if you're using it -# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`. 
-# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start -use_comet: # Enable or disable Comet integration. -comet_api_key: # API key for Comet. Recommended to set via `comet login`. -comet_workspace: # Workspace name in Comet. Defaults to the user's default workspace. -comet_project_name: # Project name in Comet. Defaults to Uncategorized. -comet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key. -comet_mode: # Create a new experiment ("create") or log to an existing one ("get"). Default ("get_or_create") auto-selects based on configuration. -comet_online: # Set to True to log data to Comet server, or False for offline storage. Default is True. -comet_experiment_config: # Dictionary for additional configuration settings, see the doc for more details. - -# Tensorboard -use_tensorboard: # Optional[bool] - -# Where to save the full-finetuned model to -output_dir: ./completed-model - -# Whether to use torch.compile and which backend to use -# setting to `auto` will enable torch compile when torch>=2.5.1 -torch_compile: # Optional[Union[Literal["auto"], bool]] -torch_compile_backend: # Optional[str] -torch_compile_mode: # 'default' | 'reduce-overhead' | 'max-autotune' - -# Training hyperparameters - -# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps. -gradient_accumulation_steps: 1 -# The number of samples to include in each batch. This is the number of samples sent to each GPU. -# Batch size per gpu = micro_batch_size * gradient_accumulation_steps -micro_batch_size: 2 -eval_batch_size: -num_epochs: 4 -warmup_steps: 100 # cannot use with warmup_ratio -warmup_ratio: 0.05 # cannot use with warmup_steps -learning_rate: 0.00003 -lr_quadratic_warmup: -logging_steps: -eval_steps: # Leave empty to eval at each epoch, integer for every N steps. float for fraction of total steps -evals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps -eval_strategy: # Set to `"no"` to skip evaluation, `"epoch"` at end of each epoch, leave empty to infer from `eval_steps`. -save_strategy: # Set to `"no"` to skip checkpoint saves, `"epoch"` at end of each epoch, `"best"` when better result is achieved, leave empty to infer from `save_steps`. -save_steps: # Leave empty to save at each epoch, integer for every N steps. float for fraction of total steps -saves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps -save_total_limit: # Checkpoints saved at a time -save_only_model: # Save only the model weights, skipping the optimizer. Using this means you can't resume from checkpoints. -# Maximum number of iterations to train for. It precedes num_epochs which means that -# if both are set, num_epochs will not be guaranteed. -# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps -max_steps: - -# bool of whether to include tokens trainer per second in the training metrics. This iterates over the entire dataset once, so it takes some time. -include_tokens_per_second: # Optional[bool] - -# whether to find batch size that fits in memory. Passed to underlying transformers Trainer -auto_find_batch_size: # Optional[bool] - -eval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. 
Default is 0 -eval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128 -do_causal_lm_eval: # Whether to run causal language model evaluation for metrics in `eval_causal_lm_metrics`. -eval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is ["sacrebleu", "comet", "ter", "chrf", "perplexity"] - -profiler_steps: # enable the pytorch profiler to capture the first N steps of training to the output_dir. - # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information - # snapshots can be visualized @ https://pytorch.org/memory_viz +# Optional[list[tuple[int, int]]]. List of layer indices to replicate. +# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#memory-efficient-layer-replication-with-lora +peft_layer_replication: + +# bool | Literal["gaussian", "eva", "olora", "pissa", "pissa_niter_[number of iters]", "corda", "loftq"] +# How to initialize LoRA weights. Default to True which is MS original implementation. +# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#initialization +peft_init_lora_weights: + +# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens. +# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. It may vary for other models. +# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities. +# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994 +lora_modules_to_save: +# - embed_tokens +# - lm_head + +lora_fan_in_fan_out: false + +# Apply custom LoRA autograd functions and activation function Triton kernels for +# speed and memory savings +# See: https://docs.axolotl.ai/docs/lora_optims.html +lora_mlp_kernel: true +lora_qkv_kernel: true +lora_o_kernel: true + +# LoRA+ hyperparameters +# For more details about the following options, see: +# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py` +loraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4. +loraplus_lr_embedding: # loraplus learning rate for lora embedding layers. Default value is 1e-6. + +peft: + # Configuration options for loftq initialization for LoRA + # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization + loftq_config: + loftq_bits: # typically 4 bits + +# ReLoRA configuration +# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed +relora_steps: # Number of steps per ReLoRA restart +relora_warmup_steps: # Number of per-restart warmup steps +relora_anneal_steps: # Number of anneal steps for each relora cycle +relora_prune_ratio: # threshold for optimizer magnitude when pruning +relora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings + +# wandb configuration if you're using it +# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`. 
+wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb +wandb_project: # Your wandb project name +wandb_entity: # A wandb Team name if using a Team +wandb_watch: +wandb_name: # Set the name of your wandb run +wandb_run_id: # Set the ID of your wandb run +wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training + +# mlflow configuration if you're using it +mlflow_tracking_uri: # URI to mlflow +mlflow_experiment_name: # Your experiment name +mlflow_run_name: # Your run name +hf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry + +# Comet configuration if you're using it +# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`. +# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start +use_comet: # Enable or disable Comet integration. +comet_api_key: # API key for Comet. Recommended to set via `comet login`. +comet_workspace: # Workspace name in Comet. Defaults to the user's default workspace. +comet_project_name: # Project name in Comet. Defaults to Uncategorized. +comet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key. +comet_mode: # Create a new experiment ("create") or log to an existing one ("get"). Default ("get_or_create") auto-selects based on configuration. +comet_online: # Set to True to log data to Comet server, or False for offline storage. Default is True. +comet_experiment_config: # Dictionary for additional configuration settings, see the doc for more details. + +# Tensorboard +use_tensorboard: # Optional[bool] + +# Where to save the full-finetuned model to +output_dir: ./completed-model + +# Whether to use torch.compile and which backend to use +# setting to `auto` will enable torch compile when torch>=2.5.1 +torch_compile: # Optional[Union[Literal["auto"], bool]] +torch_compile_backend: # Optional[str] +torch_compile_mode: # 'default' | 'reduce-overhead' | 'max-autotune' + +# Training hyperparameters + +# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps. +gradient_accumulation_steps: 1 +# The number of samples to include in each batch. This is the number of samples sent to each GPU. +# Batch size per gpu = micro_batch_size * gradient_accumulation_steps +micro_batch_size: 2 +eval_batch_size: +num_epochs: 4 +warmup_steps: 100 # cannot use with warmup_ratio +warmup_ratio: 0.05 # cannot use with warmup_steps +learning_rate: 0.00003 +lr_quadratic_warmup: +logging_steps: +eval_steps: # Leave empty to eval at each epoch, integer for every N steps. float for fraction of total steps +evals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps +eval_strategy: # Set to `"no"` to skip evaluation, `"epoch"` at end of each epoch, leave empty to infer from `eval_steps`. +save_strategy: # Set to `"no"` to skip checkpoint saves, `"epoch"` at end of each epoch, `"best"` when better result is achieved, leave empty to infer from `save_steps`. +save_steps: # Leave empty to save at each epoch, integer for every N steps. 
float for fraction of total steps +saves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps +save_total_limit: # Checkpoints saved at a time +save_only_model: # Save only the model weights, skipping the optimizer. Using this means you can't resume from checkpoints. +# Maximum number of iterations to train for. It precedes num_epochs which means that +# if both are set, num_epochs will not be guaranteed. +# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps +max_steps: + +# bool of whether to include tokens trainer per second in the training metrics. This iterates over the entire dataset once, so it takes some time. +include_tokens_per_second: # Optional[bool] + +# whether to find batch size that fits in memory. Passed to underlying transformers Trainer +auto_find_batch_size: # Optional[bool] + +eval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0 +eval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128 +do_causal_lm_eval: # Whether to run causal language model evaluation for metrics in `eval_causal_lm_metrics`. +eval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is ["sacrebleu", "comet", "ter", "chrf", "perplexity"] -loss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training) -loss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3) - -# Save model as safetensors (require safetensors package). Default True -save_safetensors: - -# Whether to mask out or include the human's prompt from the training labels -train_on_inputs: false -# Group similarly sized data to minimize padding. -# May be slower to start, as it must download and sort the entire dataset. -# Note that training loss may have an oscillating pattern with this enabled. -group_by_length: false - -# Whether to use gradient checkpointing. Available options are: true, false, "offload", "offload_disk". -# https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing -gradient_checkpointing: false -# additional kwargs to pass to the trainer for gradient checkpointing -# gradient_checkpointing_kwargs: -# use_reentrant: true - -# Stop training after this many evaluation losses have increased in a row -# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback -early_stopping_patience: 3 +profiler_steps: # enable the pytorch profiler to capture the first N steps of training to the output_dir. + # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information + # snapshots can be visualized @ https://pytorch.org/memory_viz + +loss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training) +loss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3) + +# Save model as safetensors (require safetensors package). Default True +save_safetensors: + +# Whether to mask out or include the human's prompt from the training labels +train_on_inputs: false +# Group similarly sized data to minimize padding. +# May be slower to start, as it must download and sort the entire dataset. +# Note that training loss may have an oscillating pattern with this enabled. 
+group_by_length: false + +# Whether to use gradient checkpointing. Available options are: true, false, "offload", "offload_disk". +# https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing +gradient_checkpointing: false +# additional kwargs to pass to the trainer for gradient checkpointing +# gradient_checkpointing_kwargs: +# use_reentrant: true -# Specify a scheduler and kwargs to use with the optimizer -# Valid values are driven by the Transformers SchedulerType class, see: -# https://github.com/huggingface/transformers/blob/5f4ecf2d9f867a1255131d2461d75793c0cf1db2/src/transformers/trainer_utils.py#L420 -# Valid values include -# - 'linear' -# - 'cosine' (default) -# - 'cosine_with_restarts' -# - 'polynomial' -# - 'constant' -# - 'constant_with_warmup' -# - 'inverse_sqrt' -# - 'reduce_lr_on_plateau' -# - 'cosine_with_min_lr' -# - 'warmup_stable_decay' - -# Additional schedulers include: -# - 'one_cycle' -# - 'rex' -lr_scheduler: -lr_scheduler_kwargs: -cosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr -cosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf) - -# For one_cycle optim -lr_div_factor: # Learning rate div factor - -# Specify optimizer -# Valid values are driven by the Transformers OptimizerNames class, see: -# https://github.com/huggingface/transformers/blob/cbf924b76c03828101a34069a96d209314114fd5/src/transformers/training_args.py#L144-L189 -# -# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of -# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used -# in the examples/ for your model and fine-tuning use case. +# Stop training after this many evaluation losses have increased in a row +# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback +early_stopping_patience: 3 + +# Specify a scheduler and kwargs to use with the optimizer +# Valid values are driven by the Transformers SchedulerType class, see: +# https://github.com/huggingface/transformers/blob/5f4ecf2d9f867a1255131d2461d75793c0cf1db2/src/transformers/trainer_utils.py#L420 +# Valid values include +# - 'linear' +# - 'cosine' (default) +# - 'cosine_with_restarts' +# - 'polynomial' +# - 'constant' +# - 'constant_with_warmup' +# - 'inverse_sqrt' +# - 'reduce_lr_on_plateau' +# - 'cosine_with_min_lr' +# - 'warmup_stable_decay' + +# Additional schedulers include: +# - 'one_cycle' +# - 'rex' +lr_scheduler: +lr_scheduler_kwargs: +cosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr +cosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. 
cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf) + +# For one_cycle optim +lr_div_factor: # Learning rate div factor + +# Specify optimizer +# Valid values are driven by the Transformers OptimizerNames class, see: +# https://github.com/huggingface/transformers/blob/cbf924b76c03828101a34069a96d209314114fd5/src/transformers/training_args.py#L144-L189 # -# Valid values for 'optimizer' include: -# - adamw_torch -# - adamw_torch_fused (default) -# - adamw_torch_xla -# - adamw_torch_npu_fused -# - adamw_apex_fused -# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1) -# - adafactor -# - adamw_anyprecision -# - adamw_torch_4bit -# - ademamix -# - sgd -# - adagrad -# - adamw_bnb_8bit -# - adamw_8bit # alias for adamw_bnb_8bit -# - ademamix_8bit -# - lion_8bit -# - lion_32bit -# - paged_adamw_32bit -# - paged_adamw_8bit -# - paged_ademamix_32bit -# - paged_ademamix_8bit -# - paged_lion_32bit -# - paged_lion_8bit -# - rmsprop -# - rmsprop_bnb -# - rmsprop_bnb_8bit -# - rmsprop_bnb_32bit -# - galore_adamw -# - galore_adamw_8bit -# - galore_adafactor -# - galore_adamw_layerwise -# - galore_adamw_8bit_layerwise -# - galore_adafactor_layerwise -# - lomo -# - adalomo -# - grokadamw -# - schedule_free_adamw -# - schedule_free_sgd -# - apollo_adamw -# - apollo_adamw_layerwise -# -# Additional custom optimizers include: -# - optimi_adamw -# - ao_adamw_8bit -# - ao_adamw_fp8 -# - came_pytorch -optimizer: -# Dictionary of arguments to pass to the optimizer -optim_args: -# For Galore Optimizers the following optim_args are available -# rank: # type: int -# update_proj_gap # type: int -# scale # type: float -# proj_type: # type: str, default = std - -# The target modules to optimize, i.e. the module names that you would like to train, right now this is used only for GaLore algorithm -optim_target_modules: -# - self_attn # for llama -# - mlp - -# Specify weight decay -weight_decay: -# adamw hyperparams -adam_beta1: -adam_beta2: -adam_beta3: # only used for CAME Optimizer -adam_epsilon: -adam_epsilon2: # only used for CAME Optimizer -# Gradient clipping max norm -max_grad_norm: - -# Augmentation techniques -# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings -# currently only supported on Llama and Mistral -neftune_noise_alpha: - -# Optional[bool]. Whether to bettertransformers -flash_optimum: - -# Note: Only one of the following attention patches can be used at a time. -# For example, if you set `xformers_attention` to `true`, do not set `flash_attention` to `true`. - -# Optional[bool]. Whether to use xformers attention patch https://github.com/facebookresearch/xformers: -xformers_attention: -# Optional[bool]. Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention: -flash_attention: -flash_attn_cross_entropy: # Optional[bool]. Whether to use flash-attention cross entropy implementation - advanced use only -flash_attn_rms_norm: # Optional[bool]. Whether to use flash-attention rms norm implementation - advanced use only -flash_attn_fuse_qkv: # Optional[bool]. Whether to fuse QKV into a single operation -flash_attn_fuse_mlp: # Optional[bool]. Whether to fuse part of the MLP into a single operation -# Optional[bool]. Whether to use scaled-dot-product attention -# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html -sdp_attention: -# Optional[bool]. 
Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf -s2_attention: - -# Optional[bool]. Whether to use low_cpu_mem_usage -low_cpu_mem_usage: -# Optional[str]. Resume from a specific checkpoint dir -resume_from_checkpoint: -# Optional[bool]. If resume_from_checkpoint isn't set and you simply want it to start where it left off. -# Be careful with this being turned on between different models. -auto_resume_from_checkpoints: false - -## Multimodal section -# int | tuple[int, int] | None . Size to resize images to, width x height. -# Will read from model/processor config if not set. -image_size: -# str. Algorithm to use for image resizing. "bilinear", "bicubic", "lanczos". Default is "bilinear". -image_resize_algorithm: 'bilinear' -## End of multimodal section - -# Don't mess with this, it's here for accelerate and torchrun -local_rank: - -# Add or change special tokens. -# If you add tokens here, you don't need to add them to the `tokens` list. -special_tokens: - # bos_token: "<s>" - # eos_token: "</s>" - # unk_token: "<unk>" - # pad_token: "[PAD]" - -# Optional[list[str]]. Add extra tokens to the tokenizer. -tokens: - # - "<|startoftext|>" - # - "<|endoftext|>" - -# Mapping token_id to new_token_string to override reserved added_tokens in the tokenizer. -# Only works for tokens that are not part of the base vocab (aka are added_tokens). -# Can be checked if they exist in tokenizer.json added_tokens. -added_tokens_overrides: # Dict[int, str] -# 128041: "<|im_start|>" -# 128042: "<|im_end|>" - -# FSDP -fsdp: -fsdp_config: +# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of +# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used +# in the examples/ for your model and fine-tuning use case. +# +# Valid values for 'optimizer' include: +# - adamw_torch +# - adamw_torch_fused (default) +# - adamw_torch_xla +# - adamw_torch_npu_fused +# - adamw_apex_fused +# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1) +# - adafactor +# - adamw_anyprecision +# - adamw_torch_4bit +# - ademamix +# - sgd +# - adagrad +# - adamw_bnb_8bit +# - adamw_8bit # alias for adamw_bnb_8bit +# - ademamix_8bit +# - lion_8bit +# - lion_32bit +# - paged_adamw_32bit +# - paged_adamw_8bit +# - paged_ademamix_32bit +# - paged_ademamix_8bit +# - paged_lion_32bit +# - paged_lion_8bit +# - rmsprop +# - rmsprop_bnb +# - rmsprop_bnb_8bit +# - rmsprop_bnb_32bit +# - galore_adamw +# - galore_adamw_8bit +# - galore_adafactor +# - galore_adamw_layerwise +# - galore_adamw_8bit_layerwise +# - galore_adafactor_layerwise +# - lomo +# - adalomo +# - grokadamw +# - schedule_free_adamw +# - schedule_free_sgd +# - apollo_adamw +# - apollo_adamw_layerwise +# +# Additional custom optimizers include: +# - optimi_adamw +# - ao_adamw_8bit +# - ao_adamw_fp8 +# - came_pytorch +optimizer: +# Dictionary of arguments to pass to the optimizer +optim_args: +# For Galore Optimizers the following optim_args are available +# rank: # type: int +# update_proj_gap # type: int +# scale # type: float +# proj_type: # type: str, default = std + +# The target modules to optimize, i.e. 
the module names that you would like to train, right now this is used only for GaLore algorithm +optim_target_modules: +# - self_attn # for llama +# - mlp + +# Specify weight decay +weight_decay: +# adamw hyperparams +adam_beta1: +adam_beta2: +adam_beta3: # only used for CAME Optimizer +adam_epsilon: +adam_epsilon2: # only used for CAME Optimizer +# Gradient clipping max norm +max_grad_norm: + +# Augmentation techniques +# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings +# currently only supported on Llama and Mistral +neftune_noise_alpha: + +# Optional[bool]. Whether to bettertransformers +flash_optimum: + +# Note: Only one of the following attention patches can be used at a time. +# For example, if you set `xformers_attention` to `true`, do not set `flash_attention` to `true`. + +# Optional[bool]. Whether to use xformers attention patch https://github.com/facebookresearch/xformers: +xformers_attention: +# Optional[bool]. Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention: +flash_attention: +flash_attn_cross_entropy: # Optional[bool]. Whether to use flash-attention cross entropy implementation - advanced use only +flash_attn_rms_norm: # Optional[bool]. Whether to use flash-attention rms norm implementation - advanced use only +flash_attn_fuse_qkv: # Optional[bool]. Whether to fuse QKV into a single operation +flash_attn_fuse_mlp: # Optional[bool]. Whether to fuse part of the MLP into a single operation +# Optional[bool]. Whether to use scaled-dot-product attention +# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html +sdp_attention: +# Optional[bool]. Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf +s2_attention: + +# Optional[bool]. Whether to use low_cpu_mem_usage +low_cpu_mem_usage: +# Optional[str]. Resume from a specific checkpoint dir +resume_from_checkpoint: +# Optional[bool]. If resume_from_checkpoint isn't set and you simply want it to start where it left off. +# Be careful with this being turned on between different models. +auto_resume_from_checkpoints: false + +## Multimodal section +# int | tuple[int, int] | None . Size to resize images to, width x height. +# Will read from model/processor config if not set. +image_size: +# str. Algorithm to use for image resizing. "bilinear", "bicubic", "lanczos". Default is "bilinear". +image_resize_algorithm: 'bilinear' +## End of multimodal section + +# Don't mess with this, it's here for accelerate and torchrun +local_rank: + +# Add or change special tokens. +# If you add tokens here, you don't need to add them to the `tokens` list. +special_tokens: + # bos_token: "<s>" + # eos_token: "</s>" + # unk_token: "<unk>" + # pad_token: "[PAD]" + +# Optional[list[str]]. Add extra tokens to the tokenizer. +tokens: + # - "<|startoftext|>" + # - "<|endoftext|>" + +# Mapping token_id to new_token_string to override reserved added_tokens in the tokenizer. +# Only works for tokens that are not part of the base vocab (aka are added_tokens). +# Can be checked if they exist in tokenizer.json added_tokens. +added_tokens_overrides: # Dict[int, str] +# 128041: "<|im_start|>" +# 128042: "<|im_end|>" -# Deepspeed config path. e.g., deepspeed_configs/zero3.json -deepspeed: - -# Advanced DDP Arguments -ddp_timeout: -ddp_bucket_cap_mb: -ddp_broadcast_buffers: - -# Sequence parallelism -# Set to a divisor of the number of GPUs available to split sequences into chunks of equal size. 
-# Use in long context training to prevent OOM when sequences cannot fit into a single GPU's VRAM. -# E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized -# subsequences, or set to 4 to split into four equal-sized subsequences. -# See https://docs.axolotl.ai/docs/sequence_parallelism.html for more details. -sequence_parallel_degree: -# Optional; strides across the key dimension. Larger values use more memory but should make training faster. -# Must evenly divide the number of KV heads in your model. -heads_k_stride: 1 -# One of "varlen_llama3", "batch_ring", "batch_zigzag", "batch_stripe". Defaults to "varlen_llama3" -# in the sample packing case, and "batch_ring" in the non-sample packing case. -ring_attn_func: - -# Path to torch distx for optim 'adamw_anyprecision' -torchdistx_path: - -# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize -pretraining_dataset: - -# Debug mode -debug: - -# Seed -seed: - -# Allow overwrite yml config using from cli -strict: +# FSDP +fsdp: +fsdp_config: + +# Deepspeed config path. e.g., deepspeed_configs/zero3.json +deepspeed: + +# Advanced DDP Arguments +ddp_timeout: +ddp_bucket_cap_mb: +ddp_broadcast_buffers: + +# Sequence parallelism +# Set to a divisor of the number of GPUs available to split sequences into chunks of equal size. +# Use in long context training to prevent OOM when sequences cannot fit into a single GPU's VRAM. +# E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized +# subsequences, or set to 4 to split into four equal-sized subsequences. +# See https://docs.axolotl.ai/docs/sequence_parallelism.html for more details. +sequence_parallel_degree: +# Optional; strides across the key dimension. Larger values use more memory but should make training faster. +# Must evenly divide the number of KV heads in your model. +heads_k_stride: 1 +# One of "varlen_llama3", "batch_ring", "batch_zigzag", "batch_stripe". Defaults to "varlen_llama3" +# in the sample packing case, and "batch_ring" in the non-sample packing case. +ring_attn_func: + +# Path to torch distx for optim 'adamw_anyprecision' +torchdistx_path: + +# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize +pretraining_dataset: + +# Debug mode +debug: + +# Seed +seed: + +# Allow overwrite yml config using from cli +strict: diff --git a/docs/dataset-formats/conversation.html b/docs/dataset-formats/conversation.html index 89690588c..5afefc83c 100644 --- a/docs/dataset-formats/conversation.html +++ b/docs/dataset-formats/conversation.html @@ -549,9 +549,9 @@ gtag('config', 'G-9KYCVJBNMQ', { 'anonymize_ip': true});

Examples

-
    -
  1. (Legacy) Using the default chat template in the tokenizer_config.json on OpenAI messages format, training on only last message.
  2. -
+
+

Training on last message

+

(Legacy) Using the default chat template in the tokenizer_config.json on OpenAI messages format, training on only the last message.

datasets:
   - path: ...
     type: chat_template
@@ -570,24 +570,40 @@ Tip
 

If you receive an error like “chat_template choice is tokenizer_default but tokenizer’s chat_template is null.”, it means the tokenizer does not have a default chat_template. Follow the examples below instead to set a custom chat_template.

-
    -
  1. Using the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.
  2. -
+
+
+

Overriding default chat template

+

Using the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.

chat_template: gemma # this overwrites the tokenizer's chat_template
 datasets:
   - path: ...
     type: chat_template
     roles_to_train: ["assistant"]  # default value
-
    -
  1. Using the tokenizer_config.json’s chat template or chatml as fallback if the former’s chat template does not exist, on OpenAI messages format, training on all assistant messages.
  2. -
+
+
+
+ +
+
+Note +
+
+
+

If you want to use the tokenizer's built-in chat_template, use chat_template: tokenizer_default (this is the default).

+
+
+
+
+
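A minimal sketch that states this default explicitly (equivalent to omitting chat_template):

chat_template: tokenizer_default
datasets:
  - path: ...
    type: chat_template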

Using default chat template with fallback

+

Using the tokenizer_config.json’s chat template, or chatml as a fallback if the former does not exist, on OpenAI messages format, training on all assistant messages.

chat_template: tokenizer_default_fallback_chatml # this overwrites the tokenizer's chat_template
 datasets:
   - path: ...
     type: chat_template
-
    -
  1. Using a custom jinja template on OpenAI messages format, training on all assistant messages.
  2. -
+
+
+

Custom Jinja template

+

Using a custom jinja template on OpenAI messages format, training on all assistant messages.

# chat_template: jinja # `jinja` will be implied if the `chat_template_jinja` is set and this field is empty
 chat_template_jinja: "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|system|>' + '\n' + message['content'] + '<|end|>' + '\n'}}{% elif (message['role'] == 'user') %}{{'<|user|>' + '\n' + message['content'] + '<|end|>' + '\n' + '<|assistant|>' + '\n'}}{% elif message['role'] == 'assistant' %}{{message['content'] + '<|end|>' + '\n'}}{% endif %}{% endfor %}"
 
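As an illustration, given the three messages system: "You are helpful.", user: "Hello", and assistant: "Hi!", this template renders the following prompt (assuming the tokenizer's bos_token is <s>):

<s><|system|>
You are helpful.<|end|>
<|user|>
Hello<|end|>
<|assistant|>
Hi!<|end|>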
@@ -607,9 +623,12 @@ Important
 

Please make sure that your tokenizer.eos_token is the same as the EOS (End-of-Sequence) token in the template. Otherwise, set eos_token under special_tokens:.

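For example, if your template ends turns with <|end|> (as in the custom jinja template above), a minimal sketch:

special_tokens:
  eos_token: "<|end|>"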
-
    +
+
+

Using a template with different EOT and EOS tokens

+
  • If you are using a template that has a different EOT (End-of-Turn) token from the EOS token, or multiple EOT tokens (like Mistral V7 Tekken), set the eot_tokens: config. The handling of EOT tokens follows train_on_eos:, which defaults to turn.
  • - +
eot_tokens:
   - "[/INST]"
   # - "[/SYSTEM_PROMPT]"
@@ -647,9 +666,9 @@ Note
 

You can add those tokens as new tokens under tokens: or (recommended) override unused added_tokens via added_tokens_overrides:. See config for more details.

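For example, a sketch of the recommended override approach; the token ids below are hypothetical and must correspond to unused added_tokens in your tokenizer.json:

added_tokens_overrides:
  128041: "[/INST]"
  128042: "[/SYSTEM_PROMPT]"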
-
    +
    • Continuing from the previous example, if you want to train on the EOT tokens of all trainable turns but only the last EOS token, set train_on_eos: last.
    • -
+
eot_tokens:
   - "[/INST]"
   # ...
@@ -673,51 +692,127 @@ Tip
 

If the EOS token only appears at the end of a prompt, train_on_eos: last is equivalent to train_on_eos: turn. In that case you can generally leave both options at their defaults and omit them.

-
    -
  1. (Advanced) Using fine-grained control over tokens and turns to train in a conversation
  2. -
+
+
+

Using tools

+

Instead of passing tools via the system prompt, an alternative is to keep the tools in a separate column and load them via chat_template, letting the template build the prompt dynamically.

+
{
+    "tools": [
+        {
+            "type": "...",
+            "function": {
+                "name": "...",
+                "description": "...",
+                "parameters": {
+                    "type": "...",
+                    "properties": {
+                        // ...
+                    },
+                    "required": ["..."],
+                },
+            },
+        },
+    ],
+    "messages": [
+        // ...
+        {
+            "role": "assistant", // call the function via assistant
+            "tool_calls": [
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "...",
+                        "arguments": {
+                            "...": "...",
+                        }
+                    }
+                }
+            ]
+        },
+        {
+            "role": "tool",
+            "name": "...",
+            "content": "..."
+        },
+    ],
+}
+
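For instance, a filled-in row might look like the following; the get_current_weather function here is purely illustrative:

{
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"]
                }
            }
        }
    ],
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "type": "function",
                    "function": {
                        "name": "get_current_weather",
                        "arguments": {"city": "Paris"}
                    }
                }
            ]
        },
        {"role": "tool", "name": "get_current_weather", "content": "18°C, partly cloudy"},
        {"role": "assistant", "content": "It is currently 18°C and partly cloudy in Paris."}
    ]
}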
+
+
+ +
+
+Note +
+
+
+

Tool definitions need to follow the JSON schema format.

+
+
+
chat_template: llama4
+datasets:
+  - path: ...
+    type: chat_template
+    # field_tools: tools # default is `tools`
+
+
+
+ +
+
+Tip +
+
+
+

Check the chat_template you are using to see whether it supports tools and which role it expects for the tool answer. In the example above, the tool answer is expected to be in the tool or ipython role for the llama4 template.

+
+
+
+
+

Using fine-grained control over token masking

+

(Advanced) Using fine-grained control over tokens and turns to train in a conversation

For a data sample that looks like:

data.jsonl
-
{
-  "conversations": [
-    {"from": "system", "value": "You are an AI assistant.", "train": false},
-    {"from": "human", "value": "Hello", "train": false},
-    {"from": "assistant", "value": "Hello", "train": true},
-    {"from": "human", "value": "How are you?", "train": true},
-    {
-      "from": "assistant",
-      "value": "I'm doing very well, thank you!",
-      "train_detail": [
-        {"begin_offset": 0, "end_offset": 8, "train": false},
-        {"begin_offset": 9, "end_offset": 18, "train": true},
-        {"begin_offset": 19, "end_offset": 30, "train": false},
-      ],
-    },
-    {
-        "from": "human",
-        "value": "I'm doing very well, thank you!",
-        "train": true,
-    },
-    {"from": "assistant", "value": "Hi there!", "train": true}
-  ]
-}
+
{
+  "conversations": [
+    {"from": "system", "value": "You are an AI assistant.", "train": false},
+    {"from": "human", "value": "Hello", "train": false},
+    {"from": "assistant", "value": "Hello", "train": true},
+    {"from": "human", "value": "How are you?", "train": true},
+    {
+      "from": "assistant",
+      "value": "I'm doing very well, thank you!",
+      "train_detail": [
+        {"begin_offset": 0, "end_offset": 8, "train": false},
+        {"begin_offset": 9, "end_offset": 18, "train": true},
+        {"begin_offset": 19, "end_offset": 30, "train": false},
+      ],
+    },
+    {
+        "from": "human",
+        "value": "I'm doing very well, thank you!",
+        "train": true,
+    },
+    {"from": "assistant", "value": "Hi there!", "train": true}
+  ]
+}

The configuration would look like:

-
datasets:
-  - path: ...
-    type: chat_template
-    chat_template: tokenizer_default
-    field_messages: conversations
-    message_property_mappings:
-      role: from
-      content: value
-    roles_to_train: []
-    train_on_eos: turn
-    message_field_training: train
-    message_field_training_detail: train_detail
+
datasets:
+  - path: ...
+    type: chat_template
+    chat_template: tokenizer_default
+    field_messages: conversations
+    message_property_mappings:
+      role: from
+      content: value
+    roles_to_train: []
+    train_on_eos: turn
+    message_field_training: train
+    message_field_training_detail: train_detail
@@ -731,23 +826,25 @@ Tip

It is not necessary to set both message_field_training and message_field_training_detail at once.

-
    -
  1. (For Qwen3 template only) Enable reasoning split, where the reasoning is split from the content and passed as a separate field into the template.
  2. -
-
datasets:
-  - path: ...
-    type: chat_template
-    chat_template: qwen3
-    split_thinking: true
+
+
+

Reasoning split

+

(For Qwen3 template only) Enable reasoning split, where the reasoning is split from the content and passed to the template as a separate reasoning_content field.

+
datasets:
+  - path: ...
+    type: chat_template
+    chat_template: qwen3
+    split_thinking: true

For example, the content can look like:

-
{
-  "content": "<think>Some thinking outputs</think>Output after thinking."
-}
+
{
+  "content": "<think>Some thinking outputs</think>Output after thinking."
+}

After the split, it will look like:

-
{
-  "reasoning_content": "Some thinking outputs",
-  "content": "Output after thinking..."
-}
+
{
+  "reasoning_content": "Some thinking outputs",
+  "content": "Output after thinking."
+}
+
@@ -772,7 +869,7 @@ Important
data.jsonl
-
{"conversations": [{"role": "...", "value": "..."}]}
+
{"conversations": [{"role": "...", "value": "..."}]}
diff --git a/search.json b/search.json index 190ce16fe..26942d800 100644 --- a/search.json +++ b/search.json @@ -430,7 +430,7 @@ "href": "docs/dataset-formats/conversation.html", "title": "Conversation", "section": "", - "text": "Chat Template strategy uses a jinja2 template that converts a list of messages into a prompt. Support using tokenizer’s template, a supported template, or custom jinja2.\n\n\ndata.jsonl\n\n{\"conversations\": [{\"role\": \"...\", \"content\": \"...\"}]}\n\nSee configs for full configs and supported templates.\n\n\nMost configs can be adapted as follows:\n# old\nchat_template: chatml\ndatasets:\n - path: ...\n type: sharegpt\n conversation: chatml\n\n# new (if using tokenizer's chat_template)\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n\n# new (if setting a new chat_template like chatml, gemma, etc)\nchat_template: chatml\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\nWe recommend checking the below examples for other usecases.\n\n\n\n\n(Legacy) Using the default chat template in the tokenizer_config.json on OpenAI messages format, training on only last message.\n\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train:\n train_on_eos:\n\n\n\n\n\n\nTip\n\n\n\nIf you receive an error like “chat_template choice is tokenizer_default but tokenizer’s chat_template is null.”, it means the tokenizer does not have a default chat_template. Follow the examples below instead to set a custom chat_template.\n\n\n\nUsing the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.\n\nchat_template: gemma # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train: [\"assistant\"] # default value\n\nUsing the tokenizer_config.json’s chat template or chatml as fallback if the former’s chat template does not exist, on OpenAI messages format, training on all assistant messages.\n\nchat_template: tokenizer_default_fallback_chatml # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n\nUsing a custom jinja template on OpenAI messages format, training on all assistant messages.\n\n# chat_template: jinja # `jinja` will be implied if the `chat_template_jinja` is set and this field is empty\nchat_template_jinja: \"{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|system|>' + '\\n' + message['content'] + '<|end|>' + '\\n'}}{% elif (message['role'] == 'user') %}{{'<|user|>' + '\\n' + message['content'] + '<|end|>' + '\\n' + '<|assistant|>' + '\\n'}}{% elif message['role'] == 'assistant' %}{{message['content'] + '<|end|>' + '\\n'}}{% endif %}{% endfor %}\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure that your tokenizer.eos_token is same as EOS (End-of-Sequence) token in template. Otherwise, set eos_token under special_tokens:.\n\n\n\nIf you are using a template that has a different EOT (End-of-Turn) token from EOS token or multiple EOT tokens (like Mistral V7 Tekken), set the eot_tokens: config. 
The handling of EOT tokens follows train_on_eos: which defaults to turn.\n\neot_tokens:\n - \"[/INST]\"\n # - \"[/SYSTEM_PROMPT]\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n # optional\n train_on_eot: turn # defaults read from train_on_eos (which defaults to turn)\n\n\n\n\n\n\nTip\n\n\n\nSee config documentation for detailed explanations of “turn”, “last”, and “all” options for training on tokens.\n\n\n\n\n\n\n\n\nNote\n\n\n\nUsing eot_tokens requires each token that exists in chat_template to be a single token in the tokenizer. Otherwise, the tokenizer will split the token and cause unexpected behavior.\nYou can add those tokens as new tokens under tokens: or (recommended) override unused added_tokens via added_tokens_overrides:. See config for more details.\n\n\n\nContinuing from the previous example, if you want to train on all EOT token trainable turns but only last EOS token, set train_on_eos: last.\n\neot_tokens:\n - \"[/INST]\"\n # ...\n\ndatasets:\n - path: ...\n type: chat_template\n\n train_on_eos: last\n train_on_eot: turn\n\n\n\n\n\n\nTip\n\n\n\nIf EOS token only appears at the end of a prompt, train_on_eos: last is equivalent to train_on_eos: turn. Therefore, generally, you can leave them to their defaults and omit them.\n\n\n\n(Advanced) Using fine-grained control over tokens and turns to train in a conversation\n\nFor a data sample that looks like:\n\n\ndata.jsonl\n\n{\n \"conversations\": [\n {\"from\": \"system\", \"value\": \"You are an AI assistant.\", \"train\": false},\n {\"from\": \"human\", \"value\": \"Hello\", \"train\": false},\n {\"from\": \"assistant\", \"value\": \"Hello\", \"train\": true},\n {\"from\": \"human\", \"value\": \"How are you?\", \"train\": true},\n {\n \"from\": \"assistant\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train_detail\": [\n {\"begin_offset\": 0, \"end_offset\": 8, \"train\": false},\n {\"begin_offset\": 9, \"end_offset\": 18, \"train\": true},\n {\"begin_offset\": 19, \"end_offset\": 30, \"train\": false},\n ],\n },\n {\n \"from\": \"human\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train\": true,\n },\n {\"from\": \"assistant\", \"value\": \"Hi there!\", \"train\": true}\n ]\n}\n\nThe configuration would look like:\ndatasets:\n - path: ...\n type: chat_template\n chat_template: tokenizer_default\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n roles_to_train: []\n train_on_eos: turn\n message_field_training: train\n message_field_training_detail: train_detail\n\n\n\n\n\n\nTip\n\n\n\nIt is not necessary to set both message_field_training and message_field_training_detail at once.\n\n\n\n(For Qwen3 template only) Enable reasoning split, where the reasoning is split from the content and passed as a separate field into the template.\n\ndatasets:\n - path: ...\n type: chat_template\n chat_template: qwen3\n split_thinking: true\nFor example, a content can look like:\n{\n \"content\": \"<think>Some thinking outputs</think>Output after thinking.\"\n}\nAfter split, it will look like:\n{\n \"reasoning_content\": \"Some thinking outputs\",\n \"content\": \"Output after thinking...\"\n}", + "text": "Chat Template strategy uses a jinja2 template that converts a list of messages into a prompt. 
Support using tokenizer’s template, a supported template, or custom jinja2.\n\n\ndata.jsonl\n\n{\"conversations\": [{\"role\": \"...\", \"content\": \"...\"}]}\n\nSee configs for full configs and supported templates.\n\n\nMost configs can be adapted as follows:\n# old\nchat_template: chatml\ndatasets:\n - path: ...\n type: sharegpt\n conversation: chatml\n\n# new (if using tokenizer's chat_template)\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n\n# new (if setting a new chat_template like chatml, gemma, etc)\nchat_template: chatml\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\nWe recommend checking the below examples for other usecases.\n\n\n\n\n\n(Legacy) Using the default chat template in the tokenizer_config.json on OpenAI messages format, training on only last message.\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train:\n train_on_eos:\n\n\n\n\n\n\nTip\n\n\n\nIf you receive an error like “chat_template choice is tokenizer_default but tokenizer’s chat_template is null.”, it means the tokenizer does not have a default chat_template. Follow the examples below instead to set a custom chat_template.\n\n\n\n\n\nUsing the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.\nchat_template: gemma # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train: [\"assistant\"] # default value\n\n\n\n\n\n\nNote\n\n\n\nIf you want to use built-in chat_template, use chat_template: tokenizer_default (this is set by default).\n\n\n\n\n\nUsing the tokenizer_config.json’s chat template or chatml as fallback if the former’s chat template does not exist, on OpenAI messages format, training on all assistant messages.\nchat_template: tokenizer_default_fallback_chatml # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n\n\n\nUsing a custom jinja template on OpenAI messages format, training on all assistant messages.\n# chat_template: jinja # `jinja` will be implied if the `chat_template_jinja` is set and this field is empty\nchat_template_jinja: \"{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|system|>' + '\\n' + message['content'] + '<|end|>' + '\\n'}}{% elif (message['role'] == 'user') %}{{'<|user|>' + '\\n' + message['content'] + '<|end|>' + '\\n' + '<|assistant|>' + '\\n'}}{% elif message['role'] == 'assistant' %}{{message['content'] + '<|end|>' + '\\n'}}{% endif %}{% endfor %}\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure that your tokenizer.eos_token is same as EOS (End-of-Sequence) token in template. Otherwise, set eos_token under special_tokens:.\n\n\n\n\n\n\nIf you are using a template that has a different EOT (End-of-Turn) token from EOS token or multiple EOT tokens (like Mistral V7 Tekken), set the eot_tokens: config. 
The handling of EOT tokens follows train_on_eos: which defaults to turn.\n\neot_tokens:\n - \"[/INST]\"\n # - \"[/SYSTEM_PROMPT]\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n # optional\n train_on_eot: turn # defaults read from train_on_eos (which defaults to turn)\n\n\n\n\n\n\nTip\n\n\n\nSee config documentation for detailed explanations of “turn”, “last”, and “all” options for training on tokens.\n\n\n\n\n\n\n\n\nNote\n\n\n\nUsing eot_tokens requires each token that exists in chat_template to be a single token in the tokenizer. Otherwise, the tokenizer will split the token and cause unexpected behavior.\nYou can add those tokens as new tokens under tokens: or (recommended) override unused added_tokens via added_tokens_overrides:. See config for more details.\n\n\n\nContinuing from the previous example, if you want to train on all EOT token trainable turns but only last EOS token, set train_on_eos: last.\n\neot_tokens:\n - \"[/INST]\"\n # ...\n\ndatasets:\n - path: ...\n type: chat_template\n\n train_on_eos: last\n train_on_eot: turn\n\n\n\n\n\n\nTip\n\n\n\nIf EOS token only appears at the end of a prompt, train_on_eos: last is equivalent to train_on_eos: turn. Therefore, generally, you can leave them to their defaults and omit them.\n\n\n\n\n\nInstead of passing tools via the system prompt, an alternative method would be to have the tools in a separate column and loaded via chat_template to let the template dynamically build it.\n{\n \"tools\": [\n {\n \"type\": \"...\",\n \"function\": {\n \"name\": \"...\",\n \"description\": \"...\",\n \"parameters\": {\n \"type\": \"...\",\n \"properties\": {\n // ...\n },\n \"required\": [\"...\"],\n },\n },\n },\n ],\n \"messages\": [\n // ...\n {\n \"role\": \"assistant\", // call the function via assistant\n \"tool_calls\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"...\",\n \"arguments\": {\n \"...\": \"...\",\n }\n }\n }\n ]\n },\n {\n \"role\": \"tool\",\n \"name\": \"...\",\n \"content\": \"...\"\n },\n ],\n}\n\n\n\n\n\n\nNote\n\n\n\nTools need to follow JSON schema.\n\n\nchat_template: llama4\ndatasets:\n - path: ...\n type: chat_template\n # field_tools: tools # default is `tools`\n\n\n\n\n\n\nTip\n\n\n\nLook into the chat_template you are using to see if it supports tools and what the expected role is for the tool answer. 
In the example above, the tool answer is expected to be in the tool or ipython role for llama4 template.\n\n\n\n\n\n(Advanced) Using fine-grained control over tokens and turns to train in a conversation\nFor a data sample that looks like:\n\n\ndata.jsonl\n\n{\n \"conversations\": [\n {\"from\": \"system\", \"value\": \"You are an AI assistant.\", \"train\": false},\n {\"from\": \"human\", \"value\": \"Hello\", \"train\": false},\n {\"from\": \"assistant\", \"value\": \"Hello\", \"train\": true},\n {\"from\": \"human\", \"value\": \"How are you?\", \"train\": true},\n {\n \"from\": \"assistant\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train_detail\": [\n {\"begin_offset\": 0, \"end_offset\": 8, \"train\": false},\n {\"begin_offset\": 9, \"end_offset\": 18, \"train\": true},\n {\"begin_offset\": 19, \"end_offset\": 30, \"train\": false},\n ],\n },\n {\n \"from\": \"human\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train\": true,\n },\n {\"from\": \"assistant\", \"value\": \"Hi there!\", \"train\": true}\n ]\n}\n\nThe configuration would look like:\ndatasets:\n - path: ...\n type: chat_template\n chat_template: tokenizer_default\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n roles_to_train: []\n train_on_eos: turn\n message_field_training: train\n message_field_training_detail: train_detail\n\n\n\n\n\n\nTip\n\n\n\nIt is not necessary to set both message_field_training and message_field_training_detail at once.\n\n\n\n\n\n(For Qwen3 template only) Enable reasoning split, where the reasoning is split from the content and passed as a separate field into the template.\ndatasets:\n - path: ...\n type: chat_template\n chat_template: qwen3\n split_thinking: true\nFor example, a content can look like:\n{\n \"content\": \"<think>Some thinking outputs</think>Output after thinking.\"\n}\nAfter split, it will look like:\n{\n \"reasoning_content\": \"Some thinking outputs\",\n \"content\": \"Output after thinking...\"\n}", "crumbs": [ "Dataset Formats", "Conversation" @@ -441,7 +441,7 @@ "href": "docs/dataset-formats/conversation.html#chat_template", "title": "Conversation", "section": "", - "text": "Chat Template strategy uses a jinja2 template that converts a list of messages into a prompt. Support using tokenizer’s template, a supported template, or custom jinja2.\n\n\ndata.jsonl\n\n{\"conversations\": [{\"role\": \"...\", \"content\": \"...\"}]}\n\nSee configs for full configs and supported templates.\n\n\nMost configs can be adapted as follows:\n# old\nchat_template: chatml\ndatasets:\n - path: ...\n type: sharegpt\n conversation: chatml\n\n# new (if using tokenizer's chat_template)\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n\n# new (if setting a new chat_template like chatml, gemma, etc)\nchat_template: chatml\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\nWe recommend checking the below examples for other usecases.\n\n\n\n\n(Legacy) Using the default chat template in the tokenizer_config.json on OpenAI messages format, training on only last message.\n\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train:\n train_on_eos:\n\n\n\n\n\n\nTip\n\n\n\nIf you receive an error like “chat_template choice is tokenizer_default but tokenizer’s chat_template is null.”, it means the tokenizer does not have a default chat_template. 
Follow the examples below instead to set a custom chat_template.\n\n\n\nUsing the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.\n\nchat_template: gemma # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train: [\"assistant\"] # default value\n\nUsing the tokenizer_config.json’s chat template or chatml as fallback if the former’s chat template does not exist, on OpenAI messages format, training on all assistant messages.\n\nchat_template: tokenizer_default_fallback_chatml # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n\nUsing a custom jinja template on OpenAI messages format, training on all assistant messages.\n\n# chat_template: jinja # `jinja` will be implied if the `chat_template_jinja` is set and this field is empty\nchat_template_jinja: \"{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|system|>' + '\\n' + message['content'] + '<|end|>' + '\\n'}}{% elif (message['role'] == 'user') %}{{'<|user|>' + '\\n' + message['content'] + '<|end|>' + '\\n' + '<|assistant|>' + '\\n'}}{% elif message['role'] == 'assistant' %}{{message['content'] + '<|end|>' + '\\n'}}{% endif %}{% endfor %}\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure that your tokenizer.eos_token is same as EOS (End-of-Sequence) token in template. Otherwise, set eos_token under special_tokens:.\n\n\n\nIf you are using a template that has a different EOT (End-of-Turn) token from EOS token or multiple EOT tokens (like Mistral V7 Tekken), set the eot_tokens: config. The handling of EOT tokens follows train_on_eos: which defaults to turn.\n\neot_tokens:\n - \"[/INST]\"\n # - \"[/SYSTEM_PROMPT]\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n # optional\n train_on_eot: turn # defaults read from train_on_eos (which defaults to turn)\n\n\n\n\n\n\nTip\n\n\n\nSee config documentation for detailed explanations of “turn”, “last”, and “all” options for training on tokens.\n\n\n\n\n\n\n\n\nNote\n\n\n\nUsing eot_tokens requires each token that exists in chat_template to be a single token in the tokenizer. Otherwise, the tokenizer will split the token and cause unexpected behavior.\nYou can add those tokens as new tokens under tokens: or (recommended) override unused added_tokens via added_tokens_overrides:. See config for more details.\n\n\n\nContinuing from the previous example, if you want to train on all EOT token trainable turns but only last EOS token, set train_on_eos: last.\n\neot_tokens:\n - \"[/INST]\"\n # ...\n\ndatasets:\n - path: ...\n type: chat_template\n\n train_on_eos: last\n train_on_eot: turn\n\n\n\n\n\n\nTip\n\n\n\nIf EOS token only appears at the end of a prompt, train_on_eos: last is equivalent to train_on_eos: turn. 
Therefore, generally, you can leave them to their defaults and omit them.\n\n\n\n(Advanced) Using fine-grained control over tokens and turns to train in a conversation\n\nFor a data sample that looks like:\n\n\ndata.jsonl\n\n{\n \"conversations\": [\n {\"from\": \"system\", \"value\": \"You are an AI assistant.\", \"train\": false},\n {\"from\": \"human\", \"value\": \"Hello\", \"train\": false},\n {\"from\": \"assistant\", \"value\": \"Hello\", \"train\": true},\n {\"from\": \"human\", \"value\": \"How are you?\", \"train\": true},\n {\n \"from\": \"assistant\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train_detail\": [\n {\"begin_offset\": 0, \"end_offset\": 8, \"train\": false},\n {\"begin_offset\": 9, \"end_offset\": 18, \"train\": true},\n {\"begin_offset\": 19, \"end_offset\": 30, \"train\": false},\n ],\n },\n {\n \"from\": \"human\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train\": true,\n },\n {\"from\": \"assistant\", \"value\": \"Hi there!\", \"train\": true}\n ]\n}\n\nThe configuration would look like:\ndatasets:\n - path: ...\n type: chat_template\n chat_template: tokenizer_default\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n roles_to_train: []\n train_on_eos: turn\n message_field_training: train\n message_field_training_detail: train_detail\n\n\n\n\n\n\nTip\n\n\n\nIt is not necessary to set both message_field_training and message_field_training_detail at once.\n\n\n\n(For Qwen3 template only) Enable reasoning split, where the reasoning is split from the content and passed as a separate field into the template.\n\ndatasets:\n - path: ...\n type: chat_template\n chat_template: qwen3\n split_thinking: true\nFor example, a content can look like:\n{\n \"content\": \"<think>Some thinking outputs</think>Output after thinking.\"\n}\nAfter split, it will look like:\n{\n \"reasoning_content\": \"Some thinking outputs\",\n \"content\": \"Output after thinking...\"\n}", + "text": "Chat Template strategy uses a jinja2 template that converts a list of messages into a prompt. Support using tokenizer’s template, a supported template, or custom jinja2.\n\n\ndata.jsonl\n\n{\"conversations\": [{\"role\": \"...\", \"content\": \"...\"}]}\n\nSee configs for full configs and supported templates.\n\n\nMost configs can be adapted as follows:\n# old\nchat_template: chatml\ndatasets:\n - path: ...\n type: sharegpt\n conversation: chatml\n\n# new (if using tokenizer's chat_template)\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n\n# new (if setting a new chat_template like chatml, gemma, etc)\nchat_template: chatml\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\nWe recommend checking the below examples for other usecases.\n\n\n\n\n\n(Legacy) Using the default chat template in the tokenizer_config.json on OpenAI messages format, training on only last message.\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train:\n train_on_eos:\n\n\n\n\n\n\nTip\n\n\n\nIf you receive an error like “chat_template choice is tokenizer_default but tokenizer’s chat_template is null.”, it means the tokenizer does not have a default chat_template. 
Follow the examples below instead to set a custom chat_template.\n\n\n\n\n\nUsing the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.\nchat_template: gemma # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train: [\"assistant\"] # default value\n\n\n\n\n\n\nNote\n\n\n\nIf you want to use built-in chat_template, use chat_template: tokenizer_default (this is set by default).\n\n\n\n\n\nUsing the tokenizer_config.json’s chat template or chatml as fallback if the former’s chat template does not exist, on OpenAI messages format, training on all assistant messages.\nchat_template: tokenizer_default_fallback_chatml # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n\n\n\nUsing a custom jinja template on OpenAI messages format, training on all assistant messages.\n# chat_template: jinja # `jinja` will be implied if the `chat_template_jinja` is set and this field is empty\nchat_template_jinja: \"{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|system|>' + '\\n' + message['content'] + '<|end|>' + '\\n'}}{% elif (message['role'] == 'user') %}{{'<|user|>' + '\\n' + message['content'] + '<|end|>' + '\\n' + '<|assistant|>' + '\\n'}}{% elif message['role'] == 'assistant' %}{{message['content'] + '<|end|>' + '\\n'}}{% endif %}{% endfor %}\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure that your tokenizer.eos_token is same as EOS (End-of-Sequence) token in template. Otherwise, set eos_token under special_tokens:.\n\n\n\n\n\n\nIf you are using a template that has a different EOT (End-of-Turn) token from EOS token or multiple EOT tokens (like Mistral V7 Tekken), set the eot_tokens: config. The handling of EOT tokens follows train_on_eos: which defaults to turn.\n\neot_tokens:\n - \"[/INST]\"\n # - \"[/SYSTEM_PROMPT]\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n # optional\n train_on_eot: turn # defaults read from train_on_eos (which defaults to turn)\n\n\n\n\n\n\nTip\n\n\n\nSee config documentation for detailed explanations of “turn”, “last”, and “all” options for training on tokens.\n\n\n\n\n\n\n\n\nNote\n\n\n\nUsing eot_tokens requires each token that exists in chat_template to be a single token in the tokenizer. Otherwise, the tokenizer will split the token and cause unexpected behavior.\nYou can add those tokens as new tokens under tokens: or (recommended) override unused added_tokens via added_tokens_overrides:. See config for more details.\n\n\n\nContinuing from the previous example, if you want to train on all EOT token trainable turns but only last EOS token, set train_on_eos: last.\n\neot_tokens:\n - \"[/INST]\"\n # ...\n\ndatasets:\n - path: ...\n type: chat_template\n\n train_on_eos: last\n train_on_eot: turn\n\n\n\n\n\n\nTip\n\n\n\nIf EOS token only appears at the end of a prompt, train_on_eos: last is equivalent to train_on_eos: turn. 
Therefore, generally, you can leave them to their defaults and omit them.\n\n\n\n\n\nInstead of passing tools via the system prompt, an alternative method would be to have the tools in a separate column and loaded via chat_template to let the template dynamically build it.\n{\n \"tools\": [\n {\n \"type\": \"...\",\n \"function\": {\n \"name\": \"...\",\n \"description\": \"...\",\n \"parameters\": {\n \"type\": \"...\",\n \"properties\": {\n // ...\n },\n \"required\": [\"...\"],\n },\n },\n },\n ],\n \"messages\": [\n // ...\n {\n \"role\": \"assistant\", // call the function via assistant\n \"tool_calls\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"...\",\n \"arguments\": {\n \"...\": \"...\",\n }\n }\n }\n ]\n },\n {\n \"role\": \"tool\",\n \"name\": \"...\",\n \"content\": \"...\"\n },\n ],\n}\n\n\n\n\n\n\nNote\n\n\n\nTools need to follow JSON schema.\n\n\nchat_template: llama4\ndatasets:\n - path: ...\n type: chat_template\n # field_tools: tools # default is `tools`\n\n\n\n\n\n\nTip\n\n\n\nLook into the chat_template you are using to see if it supports tools and what the expected role is for the tool answer. In the example above, the tool answer is expected to be in the tool or ipython role for llama4 template.\n\n\n\n\n\n(Advanced) Using fine-grained control over tokens and turns to train in a conversation\nFor a data sample that looks like:\n\n\ndata.jsonl\n\n{\n \"conversations\": [\n {\"from\": \"system\", \"value\": \"You are an AI assistant.\", \"train\": false},\n {\"from\": \"human\", \"value\": \"Hello\", \"train\": false},\n {\"from\": \"assistant\", \"value\": \"Hello\", \"train\": true},\n {\"from\": \"human\", \"value\": \"How are you?\", \"train\": true},\n {\n \"from\": \"assistant\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train_detail\": [\n {\"begin_offset\": 0, \"end_offset\": 8, \"train\": false},\n {\"begin_offset\": 9, \"end_offset\": 18, \"train\": true},\n {\"begin_offset\": 19, \"end_offset\": 30, \"train\": false},\n ],\n },\n {\n \"from\": \"human\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train\": true,\n },\n {\"from\": \"assistant\", \"value\": \"Hi there!\", \"train\": true}\n ]\n}\n\nThe configuration would look like:\ndatasets:\n - path: ...\n type: chat_template\n chat_template: tokenizer_default\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n roles_to_train: []\n train_on_eos: turn\n message_field_training: train\n message_field_training_detail: train_detail\n\n\n\n\n\n\nTip\n\n\n\nIt is not necessary to set both message_field_training and message_field_training_detail at once.\n\n\n\n\n\n(For Qwen3 template only) Enable reasoning split, where the reasoning is split from the content and passed as a separate field into the template.\ndatasets:\n - path: ...\n type: chat_template\n chat_template: qwen3\n split_thinking: true\nFor example, a content can look like:\n{\n \"content\": \"<think>Some thinking outputs</think>Output after thinking.\"\n}\nAfter split, it will look like:\n{\n \"reasoning_content\": \"Some thinking outputs\",\n \"content\": \"Output after thinking...\"\n}", "crumbs": [ "Dataset Formats", "Conversation" @@ -1347,14 +1347,14 @@ "href": "docs/api/prompt_strategies.chat_template.html", "title": "prompt_strategies.chat_template", "section": "", - "text": "prompt_strategies.chat_template\nHF Chat Templates prompt strategy\n\n\n\n\n\nName\nDescription\n\n\n\n\nChatTemplatePrompter\nPrompter for HF chat 
templates\n\n\nChatTemplateStrategy\nTokenizing strategy for instruction-based prompts.\n\n\nStrategyLoader\nLoad chat template strategy based on configuration.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplatePrompter(\n tokenizer,\n chat_template,\n processor=None,\n max_length=2048,\n message_property_mappings=None,\n message_field_training=None,\n message_field_training_detail=None,\n field_messages='messages',\n field_system='system',\n roles=None,\n chat_template_kwargs=None,\n drop_system_message=False,\n)\nPrompter for HF chat templates\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy(\n prompter,\n tokenizer,\n train_on_inputs,\n sequence_len,\n roles_to_train=None,\n train_on_eos=None,\n train_on_eot=None,\n eot_tokens=None,\n split_thinking=False,\n)\nTokenizing strategy for instruction-based prompts.\n\n\n\n\n\nName\nDescription\n\n\n\n\nfind_first_eot_token\nFind the first EOT token in the input_ids starting from start_idx.\n\n\nfind_turn\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\ntokenize_prompt\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_first_eot_token(\n input_ids,\n start_idx,\n)\nFind the first EOT token in the input_ids starting from start_idx.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_turn(turns, turn_idx)\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.tokenize_prompt(prompt)\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.StrategyLoader()\nLoad chat template strategy based on configuration." + "text": "prompt_strategies.chat_template\nHF Chat Templates prompt strategy\n\n\n\n\n\nName\nDescription\n\n\n\n\nChatTemplatePrompter\nPrompter for HF chat templates\n\n\nChatTemplateStrategy\nTokenizing strategy for instruction-based prompts.\n\n\nStrategyLoader\nLoad chat template strategy based on configuration.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplatePrompter(\n tokenizer,\n chat_template,\n processor=None,\n max_length=2048,\n message_property_mappings=None,\n message_field_training=None,\n message_field_training_detail=None,\n field_messages='messages',\n field_system='system',\n field_tools='tools',\n roles=None,\n chat_template_kwargs=None,\n drop_system_message=False,\n)\nPrompter for HF chat templates\n\n\n\n\n\nName\nDescription\n\n\n\n\nbuild_prompt\nBuild a prompt from a conversation.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplatePrompter.build_prompt(\n conversation,\n add_generation_prompt=False,\n images=None,\n tools=None,\n)\nBuild a prompt from a conversation.\n\n\n\n\n\n\n\n\n\n\n\nName\nType\nDescription\nDefault\n\n\n\n\nconversation\n\nA list of messages.\nrequired\n\n\nadd_generation_prompt\n\nWhether to add a generation prompt.\nFalse\n\n\nimages\n\nA list of images. (optional)\nNone\n\n\ntools\n\nA list of tools. 
(optional)\nNone\n\n\n\n\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy(\n prompter,\n tokenizer,\n train_on_inputs,\n sequence_len,\n roles_to_train=None,\n train_on_eos=None,\n train_on_eot=None,\n eot_tokens=None,\n split_thinking=False,\n)\nTokenizing strategy for instruction-based prompts.\n\n\n\n\n\nName\nDescription\n\n\n\n\nfind_first_eot_token\nFind the first EOT token in the input_ids starting from start_idx.\n\n\nfind_turn\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\ntokenize_prompt\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_first_eot_token(\n input_ids,\n start_idx,\n)\nFind the first EOT token in the input_ids starting from start_idx.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_turn(\n turns,\n turn_idx,\n tools=None,\n)\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.tokenize_prompt(prompt)\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.StrategyLoader()\nLoad chat template strategy based on configuration." }, { "objectID": "docs/api/prompt_strategies.chat_template.html#classes", "href": "docs/api/prompt_strategies.chat_template.html#classes", "title": "prompt_strategies.chat_template", "section": "", - "text": "Name\nDescription\n\n\n\n\nChatTemplatePrompter\nPrompter for HF chat templates\n\n\nChatTemplateStrategy\nTokenizing strategy for instruction-based prompts.\n\n\nStrategyLoader\nLoad chat template strategy based on configuration.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplatePrompter(\n tokenizer,\n chat_template,\n processor=None,\n max_length=2048,\n message_property_mappings=None,\n message_field_training=None,\n message_field_training_detail=None,\n field_messages='messages',\n field_system='system',\n roles=None,\n chat_template_kwargs=None,\n drop_system_message=False,\n)\nPrompter for HF chat templates\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy(\n prompter,\n tokenizer,\n train_on_inputs,\n sequence_len,\n roles_to_train=None,\n train_on_eos=None,\n train_on_eot=None,\n eot_tokens=None,\n split_thinking=False,\n)\nTokenizing strategy for instruction-based prompts.\n\n\n\n\n\nName\nDescription\n\n\n\n\nfind_first_eot_token\nFind the first EOT token in the input_ids starting from start_idx.\n\n\nfind_turn\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\ntokenize_prompt\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_first_eot_token(\n input_ids,\n start_idx,\n)\nFind the first EOT token in the input_ids starting from start_idx.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_turn(turns, turn_idx)\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.tokenize_prompt(prompt)\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.StrategyLoader()\nLoad chat template strategy based on configuration." 
+ "text": "Name\nDescription\n\n\n\n\nChatTemplatePrompter\nPrompter for HF chat templates\n\n\nChatTemplateStrategy\nTokenizing strategy for instruction-based prompts.\n\n\nStrategyLoader\nLoad chat template strategy based on configuration.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplatePrompter(\n tokenizer,\n chat_template,\n processor=None,\n max_length=2048,\n message_property_mappings=None,\n message_field_training=None,\n message_field_training_detail=None,\n field_messages='messages',\n field_system='system',\n field_tools='tools',\n roles=None,\n chat_template_kwargs=None,\n drop_system_message=False,\n)\nPrompter for HF chat templates\n\n\n\n\n\nName\nDescription\n\n\n\n\nbuild_prompt\nBuild a prompt from a conversation.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplatePrompter.build_prompt(\n conversation,\n add_generation_prompt=False,\n images=None,\n tools=None,\n)\nBuild a prompt from a conversation.\n\n\n\n\n\n\n\n\n\n\n\nName\nType\nDescription\nDefault\n\n\n\n\nconversation\n\nA list of messages.\nrequired\n\n\nadd_generation_prompt\n\nWhether to add a generation prompt.\nFalse\n\n\nimages\n\nA list of images. (optional)\nNone\n\n\ntools\n\nA list of tools. (optional)\nNone\n\n\n\n\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy(\n prompter,\n tokenizer,\n train_on_inputs,\n sequence_len,\n roles_to_train=None,\n train_on_eos=None,\n train_on_eot=None,\n eot_tokens=None,\n split_thinking=False,\n)\nTokenizing strategy for instruction-based prompts.\n\n\n\n\n\nName\nDescription\n\n\n\n\nfind_first_eot_token\nFind the first EOT token in the input_ids starting from start_idx.\n\n\nfind_turn\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\ntokenize_prompt\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_first_eot_token(\n input_ids,\n start_idx,\n)\nFind the first EOT token in the input_ids starting from start_idx.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.find_turn(\n turns,\n turn_idx,\n tools=None,\n)\nLocate the starting and ending indices of the specified turn in a conversation.\n\n\n\nprompt_strategies.chat_template.ChatTemplateStrategy.tokenize_prompt(prompt)\nPublic method that can handle either a single prompt or a batch of prompts.\n\n\n\n\n\nprompt_strategies.chat_template.StrategyLoader()\nLoad chat template strategy based on configuration." 
}, { "objectID": "docs/api/prompt_strategies.kto.chatml.html", @@ -3712,7 +3712,7 @@ "href": "docs/config.html", "title": "Config Reference", "section": "", - "text": "# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files\n# This can also be a relative path to a model on disk\nbase_model: ./llama-7b-hf\n# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)\nbase_model_ignore_patterns:\n# If the base_model repo on hf hub doesn't include configuration .json files,\n# You can set that here, or leave this empty to default to base_model\nbase_model_config: ./llama-7b-hf\n# You can specify to choose a specific model revision from huggingface hub\nrevision_of_model:\n# Optional tokenizer configuration path in case you want to use a different tokenizer\n# than the one defined in the base model\ntokenizer_config:\n# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too\nmodel_type: AutoModelForCausalLM\n# Corresponding tokenizer for the model AutoTokenizer is a good choice\ntokenizer_type: AutoTokenizer\n# Trust remote code for untrusted source\ntrust_remote_code:\n# use_fast option for tokenizer loading from_pretrained, default to True\ntokenizer_use_fast:\n# Whether to use the legacy tokenizer setting, defaults to True\ntokenizer_legacy:\n# Resize the model embeddings when new tokens are added to multiples of 32\n# This is reported to improve training speed on some models\nresize_token_embeddings_to_32x:\n# Optional[bool] Whether to shrink the embeddings to len(tokenizer). By default, we won't shrink.\nshrink_embeddings:\n# Optional[bool] Don't upcast the embeddings to float32 when using PEFT. Useful for low-VRAM GPUs\nembeddings_skip_upcast:\n# Whether to load the model with randomly initialized weights. Useful for\n# pre-training a model from scratch or debugging purposes.\nrandom_init_weights:\n\n# (Internal use only)\n# Used to identify which the model is based on\nis_falcon_derived_model:\nis_llama_derived_model:\nis_qwen_derived_model:\n# Please note that if you set this to true, `padding_side` will be set to \"left\" by default\nis_mistral_derived_model:\n\n# optional overrides to the base model configuration\noverrides_of_model_config:\n # RoPE Scaling https://github.com/huggingface/transformers/pull/24653\n rope_scaling:\n type: # linear | dynamic\n factor: # float\n\n# optional overrides the base model loading from_pretrained\noverrides_of_model_kwargs:\n # use_cache: False\n\n# optional overrides to the bnb 4bit quantization configuration\n# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig\nbnb_config_kwargs:\n # These are default values\n llm_int8_has_fp16_weight: false\n bnb_4bit_quant_type: nf4\n bnb_4bit_use_double_quant: true\n\n# quantization aware training\nqat:\n activation_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for activation quantization. Valid options are \"int4\" and \"int8\"\n weight_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for weight quantization. Valid options are \"int4\" and \"int8\"\n group_size: # Optional[int] = 32. The number of elements in each group for per-group fake quantization\n fake_quant_after_n_steps: # Optional[int] = None. The number of steps to apply fake quantization after\n\n# post-training quantization\nquantization:\n weight_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for weight quantization. 
Valid options are uintX for X in [1, 2, 3, 4, 5, 6, 7], or int4, or int8\n activation_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for activation quantization. Valid options are \"int4\" and \"int8\"\n group_size: # Optional[int] = 32. The number of elements in each group for per-group fake quantization\n quantize_embedding: # Optional[bool] = False. Whether to quantize the embedding layer.\n\n\n# Whether you are training a 4-bit GPTQ quantized model\ngptq: true\n\n# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer\nload_in_8bit: true\n# Use bitsandbytes 4 bit\nload_in_4bit:\n\n# Use CUDA bf16\nbf16: true # bool or 'full' for `bf16_full_eval`, or 'auto' for automatic detection. require >=ampere\n# Use CUDA fp16\nfp16: true\n# Use CUDA tf32\ntf32: true # require >=ampere\n# Note: if bf16 is set to 'auto', and fp16 is set to true, we will prefer the explict fp16 setting\n\n# No AMP (automatic mixed precision)\nbfloat16: true # require >=ampere\nfloat16: true\n\n# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset\ngpu_memory_limit: 20GiB\n# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge\nlora_on_cpu: true\n\n# List[str]. Add plugins to extend the pipeline.\n# See `src/axolotl/integrations` for the available plugins or doc below for more details.\n# https://docs.axolotl.ai/docs/custom_integrations.html\nplugins:\n # - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin\n\n# A list of one or more datasets to finetune the model with\n# See https://docs.axolotl.ai/docs/dataset_loading.html for guide on loading datasets\n# See https://docs.axolotl.ai/docs/dataset-formats/ for guide on dataset formats\ndatasets:\n # HuggingFace dataset repo | s3:// | gs:// | path to local file or directory\n - path: vicgalle/alpaca-gpt4\n # The type of prompt to use for training. [alpaca, gpteacher, oasst, reflection]\n type: alpaca # format | format:<prompt_style> (chat/instruct) | <prompt_strategies>.load_<load_fn>\n ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file\n data_files: # Optional[str] path to source data files\n\n shards: # Optional[int] split dataset into N pieces (use with shards_idx)\n shards_idx: # Optional[int] = 0 the index of sharded dataset to use\n\n preprocess_shards: # Optional[int] process dataset in N sequential chunks for memory efficiency (exclusive with `shards`)\n\n name: # Optional[str] name of dataset configuration to load\n split: train # Optional[str] name of dataset split to load from\n revision: # Optional[str] The specific revision of the dataset to use when loading from the Hugging Face Hub. This can be a commit hash, tag, or branch name. If not specified, the latest version will be used. This parameter is ignored for local datasets.\n trust_remote_code: # Optional[bool] Trust remote code for untrusted source\n\n # Custom user instruction prompt\n - path: repo\n type:\n # The below are defaults. 
only set what's needed if you use a different column name.\n system_prompt: \"\"\n system_format: \"{system}\"\n field_system: system\n field_instruction: instruction\n field_input: input\n field_output: output\n\n # Customizable to be single line or multi-line\n # Use {instruction}/{input} as key to be replaced\n # 'format' can include {input}\n format: |-\n User: {instruction} {input}\n Assistant:\n # 'no_input_format' cannot include {input}\n no_input_format: \"{instruction} \"\n\n # For `completion` datsets only, uses the provided field instead of `text` column\n field:\n\n # Using chat template\n - path: ...\n # Set type to `chat_template` to use this strategy\n type: chat_template\n # Specify the name of the chat template to use\n # The name of the chat template to use for training, following values are supported:\n # - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default.\n # - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n # - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to if the tokenizer does not have a chat template else default to tokenizer. E.g. tokenizer_default_fallback_chatml.\n # - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n chat_template: tokenizer_default\n\n # Custom jinja chat template. Used only if `chat_template: jinja` or empty.\n chat_template_jinja:\n\n # Key containing the messages (default: \"messages\")\n field_messages: messages\n\n # Key containing the system message (default: \"system\")\n # If the system message is not present in the dataset sample, it will be loaded from the field_system property.\n field_system: system\n\n # Mapping of properties from the input dataset to the chat template.\n # (default: message_property_mappings={'role':'role', 'content':'content'})\n # If a property exists in the template but not in this mapping, the system will attempt\n # to load it directly from the message using the property name as the key.\n # Example: In the mapping below, 'from' is loaded from input dataset and used as 'role',\n # while 'value' is loaded and used as 'content' in the chat template.\n message_property_mappings:\n role: from\n content: value\n # ...\n\n # Optional[Dict[str, List]]. Roles mapping in the messages.\n # The format is {target_role: [source_roles]}. All source roles will be mapped to the target role.\n # The default is:\n roles:\n user: [\"human\", \"user\"]\n assistant: [\"gpt\", \"assistant\"]\n system: [\"system\"]\n tool: [\"tool\"]\n\n # Optional[bool]. Whether to drop the system turn from the dataset. Only works with chat_template.\n # This does not drop the default system message from chat_template if it exists. If you wish to,\n # we recommend using a custom jinja template with the default system message removed or\n # adding a system turn with empty content.\n drop_system_message:\n\n # Optional[bool]. 
(for Qwen3 template only) Whether to split the assistant content based on a reasoning trace inside delimited tags\n # See example at `docs/dataset-formats/conversation.qmd`\n split_thinking:\n\n # IMPORTANT: The following fields determine which parts of the conversation to train on.\n # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train\n # See examples at `docs/dataset-formats/conversation.qmd`\n # Note: If the below 5 fields are empty, defaults to training only on the last message.\n\n # Optional[List[str]]. Roles to train on. The tokens from these roles will be considered for the loss.\n roles_to_train: [\"assistant\"] # default\n # Optional[str]. Which EOS tokens to train on in the conversation. Possible values are:\n # - all: train on all EOS tokens\n # - turn (default): train on the EOS token at the end of each trainable turn\n # - last: train on the last EOS token in the conversation\n # TIP: Please make sure that your `tokenizer.eos_token` is same as EOS/EOT token in template. Otherwise, set `eos_token` under `special_tokens`.\n train_on_eos: turn\n # Optional[str]. Which EOT (End-of-Turn) tokens to train on in the conversation. Possible values are:\n # - all: train on all EOT tokens\n # - turn: train on the EOT token at the end of each trainable turn\n # - last: train on the last EOT token in the conversation\n # If not specified, defaults to the value of train_on_eos for backward compatibility.\n train_on_eot:\n # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`.\n message_field_training: training\n # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn.\n # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train).\n message_field_training_detail: train_detail\n\n\n# If false, the datasets will not be shuffled and will keep their original order in `datasets`.\n# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.\nshuffle_merged_datasets: true\n\n# Deduplicates datasets and test_datasets with identical entries.\ndataset_exact_deduplication: true\n\n# A list of one or more datasets to eval the model with.\n# You can use either test_datasets, or val_set_size, but not both.\ntest_datasets:\n - path: /workspace/data/eval.jsonl\n ds_type: json\n # You need to specify a split. For \"json\" datasets the default split is called \"train\".\n split: train\n type: completion\n data_files:\n - /workspace/data/eval.jsonl\n\n# use RL training: 'dpo', 'ipo', 'kto', 'simpo', 'orpo', 'grpo'\nrl:\nrl_beta: # Optional[float]. The beta parameter for the RL training.\n\n# dpo\ndpo_use_weighting: # Optional[bool]. Whether to perform weighting.\nrpo_alpha: # Optional[float]. Weighting of NLL term in loss from RPO paper.\n\n# orpo\norpo_alpha: 0.1 # Parameter controlling the relative ratio loss weight in the ORPO loss. Passed to `beta` in `ORPOConfig` due to trl mapping.\n\n# kto\nkto_desirable_weight: # Optional[float]. Factor for desirable loss term in KTO loss.\nkto_undesirable_weight: # Optional[float]. 
Factor for undesirable loss term in KTO loss.\n\n# simpo\ncpo_alpha: 1.0 # Weight of the BC regularizer\nsimpo_gamma: 0.5 # Target reward margin for the SimPO loss\n\n# grpo\ntrl:\n use_vllm: # Optional[bool]. Whether to use VLLM for RL training.\n vllm_server_host: # Optional[str]. Host of the vLLM server to connect to.\n vllm_server_port: # Optional[int]. Port of the vLLM server to connect to.\n vllm_server_timeout: # Optional[int]. Total timeout (in seconds) to wait for the vLLM server to respond.\n vllm_guided_decoding_regex: # Optional[str]. Regex for vLLM guided decoding.\n\n beta: # Optional[float]. Beta parameter for the RL training. Same as `rl_beta`. Use\n max_completion_length: # Optional[int]. Maximum length of the completion for RL training.\n\n reward_funcs: # Optional[list[str]]. List of reward functions to load. Paths must be importable from current dir.\n reward_weights: # Optional[list[float]]. List of reward weights for the reward functions.\n\n num_generations: # Optional[int]. Number of generations to sample.\n log_completions: # Optional[bool]. Whether to log completions.\n num_completions_to_print: # Optional[int]. Number of completions to print when log_completions is True.\n\n sync_ref_model: # Optional[bool]. Whether to sync the reference model.\n ref_model_mixup_alpha: # Optional[float]. Mixup alpha for the reference model.\n ref_model_sync_steps: # Optional[int]. Sync steps for the reference model.\n scale_rewards: # Optional[bool]. Whether to scale rewards by their standard deviation.\n\n temperature: # Optional[float]. Sampling temperature for the GRPO policy.\n top_p: # Optional[float]. Top-p sampling probability for the generation policy.\n top_k: # Optional[int]. Top-k sampling for the generation policy.\n min_p: # Optional[float]. Minimum probability for the generation policy.\n repetition_penalty: # Optional[float]. Penalty for tokens that appear in prompt and generated text.\n\n num_iterations: # Optional[int]. Number of iterations per batch (μ) for GRPO.\n epsilon: # Optional[float]. Epsilon value for clipping in the GRPO algorithm.\n epsilon_high: # Optional[float]. Upper-bound epsilon value for clipping in the GRPO algorithm.\n use_liger_loss: # Optional[bool]. Whether to use Liger loss for GRPO.\n loss_type: # Optional[str]. Loss formulation to use. Supported values: grpo, bnpo, dr_grpo.\n mask_truncated_completions: # Optional[bool]. Whether to exclude truncated completions from loss calculation.\n\n\n# reward modelling: `True` or `False`\nreward_model:\n\n# process reward modelling: `True` or `False`\nprocess_reward_model:\n\n# The name of the chat template to use for training, following values are supported:\n# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default value.\n# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n# - tokenizer_default_fallback_*: where * is the name of the chat template to fallback to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer.\n# - jinja: Uses a custom jinja template for the chat template. 
The custom jinja template should be provided in the chat_template_jinja field.\n# The selected chat template will be saved to the tokenizer_config.json for easier inferencing\n# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template.\nchat_template: tokenizer_default\n# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null.\nchat_template_jinja: null\n# Optional[List[str]]. Custom EOT (End-of-Turn) tokens to mask/unmask during training.\n# These tokens mark the boundaries between conversation turns.\n# For example: [\"/INST\", \"</s>\", \"[/SYSTEM_PROMPT]\"]\n# If not specified, defaults to just the model's eos_token.\n# This is useful for templates that use multiple delimiter tokens.\neot_tokens:\n # - \"</s>\"\n # - \"[/INST]\"\n # - \"[/SYSTEM_PROMPT]\"\n# Changes the default system message\ndefault_system_message: You are a helpful assistant. Please give a long and detailed answer. # Currently only supports chatml.\n# Axolotl attempts to save the dataset as an arrow after packing the data together so\n# subsequent training attempts load faster, relative path\ndataset_prepared_path: data/last_run_prepared\n# Push prepared dataset to hub\npush_dataset_to_hub: # Optional[str] repo_org/repo_name\n# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()`\n# if not set.\ndataset_processes: # defaults to os.cpu_count() if not set\n# Keep dataset in memory while preprocessing\n# Only needed if cached dataset is taking too much storage\ndataset_keep_in_memory:\n# push checkpoints to hub\nhub_model_id: # private repo path to push finetuned model\n# how to push checkpoints to hub\n# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy\nhub_strategy:\n# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets\n# Required to be true when used in combination with `push_dataset_to_hub`\nhf_use_auth_token: # boolean\n# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.\nval_set_size: 0.04\n# Num shards for whole dataset\ndataset_shard_num:\n# Index of shard to use for whole dataset\ndataset_shard_idx:\n\n# The maximum length of an input to train with, this should typically be less than 2048\n# as most models have a token/context limit of 2048\nsequence_len: 2048\n# Pad inputs so each step uses constant sized buffers\n# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently\npad_to_sequence_len:\n# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommend set to 'true'\nsample_packing:\n# Set to 'false' if getting errors during eval with sample_packing on.\neval_sample_packing:\n# You can set these packing optimizations AFTER starting a training at least once.\n# The trainer will provide recommended values for these values.\nsample_packing_eff_est:\ntotal_num_tokens:\n# Increasing the following values helps with packing, but usually only slightly (<%1.)\n# The number of samples packed at a time.\nsample_packing_group_size: 100000\n# The number of samples which can be packed into one sequence. 
Increase if using a large sequence_len with many short samples.\nsample_packing_bin_size: 200\nsample_pack_sequentially: # Optional[bool]. Whether to pack samples sequentially.\n\n# whether to concatenate samples during pretraining\npretraining_sample_concatenation:\n\ncurriculum_sampling: # Optional[bool]. Whether to use sequential sampling for curriculum learning\n\n# Use batch flattening for speedups when not using sample_packing\nbatch_flattening:\n\n# Passed through to transformers when loading the model when launched without accelerate\n# Use `sequential` when training w/ model parallelism to limit memory\ndevice_map:\n# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model.\nmax_memory:\n\n# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model\nadapter: lora\n# If you already have a lora model trained that you want to load, put that here.\n# This means after training, if you want to test the model, you should set this to the value of `output_dir`.\n# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.\nlora_model_dir:\n\n# LoRA hyperparameters\n# For more details about the following options, see:\n# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n# - k_proj\n# - o_proj\n# - gate_proj\n# - down_proj\n# - up_proj\nlora_target_linear: # If true, will target all linear modules\n\n# List[int] | int. # The layer indices to transform, otherwise, apply to all layers\n# https://huggingface.co/docs/peft/v0.15.0/en/package_reference/lora#peft.LoraConfig.layers_to_transform\npeft_layers_to_transform:\n\n# Optional[bool]. Whether to use DoRA.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#weight-decomposed-low-rank-adaptation-dora\npeft_use_dora:\n\n# Optional[bool]. Whether to use RSLoRA.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#rank-stabilized-lora\npeft_use_rslora:\n\n# Optional[list[tuple[int, int]]]. List of layer indices to replicate.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#memory-efficient-layer-replication-with-lora\npeft_layer_replication:\n\n# bool | Literal[\"gaussian\", \"eva\", \"olora\", \"pissa\", \"pissa_niter_[number of iters]\", \"corda\", \"loftq\"]\n# How to initialize LoRA weights. Default to True which is MS original implementation.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#initialization\npeft_init_lora_weights:\n\n# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.\n# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. 
It may vary for other models.\n# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.\n# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994\nlora_modules_to_save:\n# - embed_tokens\n# - lm_head\n\nlora_fan_in_fan_out: false\n\n# Apply custom LoRA autograd functions and activation function Triton kernels for\n# speed and memory savings\n# See: https://docs.axolotl.ai/docs/lora_optims.html\nlora_mlp_kernel: true\nlora_qkv_kernel: true\nlora_o_kernel: true\n\n# LoRA+ hyperparameters\n# For more details about the following options, see:\n# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py`\nloraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4.\nloraplus_lr_embedding: # loraplus learning rate for lora embedding layers. Default value is 1e-6.\n\npeft:\n # Configuration options for loftq initialization for LoRA\n # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization\n loftq_config:\n loftq_bits: # typically 4 bits\n\n# ReLoRA configuration\n# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed\nrelora_steps: # Number of steps per ReLoRA restart\nrelora_warmup_steps: # Number of per-restart warmup steps\nrelora_anneal_steps: # Number of anneal steps for each relora cycle\nrelora_prune_ratio: # threshold for optimizer magnitude when pruning\nrelora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings\n\n# wandb configuration if you're using it\n# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.\nwandb_mode: # \"offline\" to save run metadata locally and not sync to the server, \"disabled\" to turn off wandb\nwandb_project: # Your wandb project name\nwandb_entity: # A wandb Team name if using a Team\nwandb_watch:\nwandb_name: # Set the name of your wandb run\nwandb_run_id: # Set the ID of your wandb run\nwandb_log_model: # \"checkpoint\" to log model to wandb Artifacts every `save_steps` or \"end\" to log only at the end of training\n\n# mlflow configuration if you're using it\nmlflow_tracking_uri: # URI to mlflow\nmlflow_experiment_name: # Your experiment name\nmlflow_run_name: # Your run name\nhf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry\n\n# Comet configuration if you're using it\n# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`.\n# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start\nuse_comet: # Enable or disable Comet integration.\ncomet_api_key: # API key for Comet. Recommended to set via `comet login`.\ncomet_workspace: # Workspace name in Comet. Defaults to the user's default workspace.\ncomet_project_name: # Project name in Comet. Defaults to Uncategorized.\ncomet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key.\ncomet_mode: # Create a new experiment (\"create\") or log to an existing one (\"get\"). Default (\"get_or_create\") auto-selects based on configuration.\ncomet_online: # Set to True to log data to Comet server, or False for offline storage. 
Default is True.\ncomet_experiment_config: # Dictionary for additional configuration settings, see the doc for more details.\n\n# Tensorboard\nuse_tensorboard: # Optional[bool]\n\n# Where to save the full-finetuned model to\noutput_dir: ./completed-model\n\n# Whether to use torch.compile and which backend to use\n# setting to `auto` will enable torch compile when torch>=2.5.1\ntorch_compile: # Optional[Union[Literal[\"auto\"], bool]]\ntorch_compile_backend: # Optional[str]\ntorch_compile_mode: # 'default' | 'reduce-overhead' | 'max-autotune'\n\n# Training hyperparameters\n\n# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps.\ngradient_accumulation_steps: 1\n# The number of samples to include in each batch. This is the number of samples sent to each GPU.\n# Batch size per gpu = micro_batch_size * gradient_accumulation_steps\nmicro_batch_size: 2\neval_batch_size:\nnum_epochs: 4\nwarmup_steps: 100 # cannot use with warmup_ratio\nwarmup_ratio: 0.05 # cannot use with warmup_steps\nlearning_rate: 0.00003\nlr_quadratic_warmup:\nlogging_steps:\neval_steps: # Leave empty to eval at each epoch, integer for every N steps. float for fraction of total steps\nevals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps\neval_strategy: # Set to `\"no\"` to skip evaluation, `\"epoch\"` at end of each epoch, leave empty to infer from `eval_steps`.\nsave_strategy: # Set to `\"no\"` to skip checkpoint saves, `\"epoch\"` at end of each epoch, `\"best\"` when better result is achieved, leave empty to infer from `save_steps`.\nsave_steps: # Leave empty to save at each epoch, integer for every N steps. float for fraction of total steps\nsaves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps\nsave_total_limit: # Checkpoints saved at a time\nsave_only_model: # Save only the model weights, skipping the optimizer. Using this means you can't resume from checkpoints.\n# Maximum number of iterations to train for. It precedes num_epochs which means that\n# if both are set, num_epochs will not be guaranteed.\n# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps\nmax_steps:\n\n# bool of whether to include tokens trainer per second in the training metrics. This iterates over the entire dataset once, so it takes some time.\ninclude_tokens_per_second: # Optional[bool]\n\n# whether to find batch size that fits in memory. Passed to underlying transformers Trainer\nauto_find_batch_size: # Optional[bool]\n\neval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0\neval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128\ndo_causal_lm_eval: # Whether to run causal language model evaluation for metrics in `eval_causal_lm_metrics`.\neval_causal_lm_metrics: # HF evaluate metrics used during evaluation. 
Default is [\"sacrebleu\", \"comet\", \"ter\", \"chrf\", \"perplexity\"]\n\nprofiler_steps: # enable the pytorch profiler to capture the first N steps of training to the output_dir.\n # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information\n # snapshots can be visualized @ https://pytorch.org/memory_viz\n\nloss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)\nloss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)\n\n# Save model as safetensors (require safetensors package). Default True\nsave_safetensors:\n\n# Whether to mask out or include the human's prompt from the training labels\ntrain_on_inputs: false\n# Group similarly sized data to minimize padding.\n# May be slower to start, as it must download and sort the entire dataset.\n# Note that training loss may have an oscillating pattern with this enabled.\ngroup_by_length: false\n\n# Whether to use gradient checkpointing. Available options are: true, false, \"offload\", \"offload_disk\".\n# https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing\ngradient_checkpointing: false\n# additional kwargs to pass to the trainer for gradient checkpointing\n# gradient_checkpointing_kwargs:\n# use_reentrant: true\n\n# Stop training after this many evaluation losses have increased in a row\n# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback\nearly_stopping_patience: 3\n\n# Specify a scheduler and kwargs to use with the optimizer\n# Valid values are driven by the Transformers SchedulerType class, see:\n# https://github.com/huggingface/transformers/blob/5f4ecf2d9f867a1255131d2461d75793c0cf1db2/src/transformers/trainer_utils.py#L420\n# Valid values include\n# - 'linear'\n# - 'cosine' (default)\n# - 'cosine_with_restarts'\n# - 'polynomial'\n# - 'constant'\n# - 'constant_with_warmup'\n# - 'inverse_sqrt'\n# - 'reduce_lr_on_plateau'\n# - 'cosine_with_min_lr'\n# - 'warmup_stable_decay'\n\n# Additional schedulers include:\n# - 'one_cycle'\n# - 'rex'\nlr_scheduler:\nlr_scheduler_kwargs:\ncosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr\ncosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)\n\n# For one_cycle optim\nlr_div_factor: # Learning rate div factor\n\n# Specify optimizer\n# Valid values are driven by the Transformers OptimizerNames class, see:\n# https://github.com/huggingface/transformers/blob/cbf924b76c03828101a34069a96d209314114fd5/src/transformers/training_args.py#L144-L189\n#\n# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of\n# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. 
When in doubt, it is recommended to start with the optimizer used\n# in the examples/ for your model and fine-tuning use case.\n#\n# Valid values for 'optimizer' include:\n# - adamw_torch\n# - adamw_torch_fused (default)\n# - adamw_torch_xla\n# - adamw_torch_npu_fused\n# - adamw_apex_fused\n# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)\n# - adafactor\n# - adamw_anyprecision\n# - adamw_torch_4bit\n# - ademamix\n# - sgd\n# - adagrad\n# - adamw_bnb_8bit\n# - adamw_8bit # alias for adamw_bnb_8bit\n# - ademamix_8bit\n# - lion_8bit\n# - lion_32bit\n# - paged_adamw_32bit\n# - paged_adamw_8bit\n# - paged_ademamix_32bit\n# - paged_ademamix_8bit\n# - paged_lion_32bit\n# - paged_lion_8bit\n# - rmsprop\n# - rmsprop_bnb\n# - rmsprop_bnb_8bit\n# - rmsprop_bnb_32bit\n# - galore_adamw\n# - galore_adamw_8bit\n# - galore_adafactor\n# - galore_adamw_layerwise\n# - galore_adamw_8bit_layerwise\n# - galore_adafactor_layerwise\n# - lomo\n# - adalomo\n# - grokadamw\n# - schedule_free_adamw\n# - schedule_free_sgd\n# - apollo_adamw\n# - apollo_adamw_layerwise\n#\n# Additional custom optimizers include:\n# - optimi_adamw\n# - ao_adamw_8bit\n# - ao_adamw_fp8\n# - came_pytorch\noptimizer:\n# Dictionary of arguments to pass to the optimizer\noptim_args:\n# For Galore Optimizers the following optim_args are available\n# rank: # type: int\n# update_proj_gap # type: int\n# scale # type: float\n# proj_type: # type: str, default = std\n\n# The target modules to optimize, i.e. the module names that you would like to train, right now this is used only for GaLore algorithm\noptim_target_modules:\n# - self_attn # for llama\n# - mlp\n\n# Specify weight decay\nweight_decay:\n# adamw hyperparams\nadam_beta1:\nadam_beta2:\nadam_beta3: # only used for CAME Optimizer\nadam_epsilon:\nadam_epsilon2: # only used for CAME Optimizer\n# Gradient clipping max norm\nmax_grad_norm:\n\n# Augmentation techniques\n# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings\n# currently only supported on Llama and Mistral\nneftune_noise_alpha:\n\n# Optional[bool]. Whether to bettertransformers\nflash_optimum:\n\n# Note: Only one of the following attention patches can be used at a time.\n# For example, if you set `xformers_attention` to `true`, do not set `flash_attention` to `true`.\n\n# Optional[bool]. Whether to use xformers attention patch https://github.com/facebookresearch/xformers:\nxformers_attention:\n# Optional[bool]. Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:\nflash_attention:\nflash_attn_cross_entropy: # Optional[bool]. Whether to use flash-attention cross entropy implementation - advanced use only\nflash_attn_rms_norm: # Optional[bool]. Whether to use flash-attention rms norm implementation - advanced use only\nflash_attn_fuse_qkv: # Optional[bool]. Whether to fuse QKV into a single operation\nflash_attn_fuse_mlp: # Optional[bool]. Whether to fuse part of the MLP into a single operation\n# Optional[bool]. Whether to use scaled-dot-product attention\n# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html\nsdp_attention:\n# Optional[bool]. Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf\ns2_attention:\n\n# Optional[bool]. Whether to use low_cpu_mem_usage\nlow_cpu_mem_usage:\n# Optional[str]. Resume from a specific checkpoint dir\nresume_from_checkpoint:\n# Optional[bool]. 
If resume_from_checkpoint isn't set and you simply want it to start where it left off.\n# Be careful with this being turned on between different models.\nauto_resume_from_checkpoints: false\n\n## Multimodal section\n# int | tuple[int, int] | None . Size to resize images to, width x height.\n# Will read from model/processor config if not set.\nimage_size:\n# str. Algorithm to use for image resizing. \"bilinear\", \"bicubic\", \"lanczos\". Default is \"bilinear\".\nimage_resize_algorithm: 'bilinear'\n## End of multimodal section\n\n# Don't mess with this, it's here for accelerate and torchrun\nlocal_rank:\n\n# Add or change special tokens.\n# If you add tokens here, you don't need to add them to the `tokens` list.\nspecial_tokens:\n # bos_token: \"<s>\"\n # eos_token: \"</s>\"\n # unk_token: \"<unk>\"\n # pad_token: \"[PAD]\"\n\n# Optional[list[str]]. Add extra tokens to the tokenizer.\ntokens:\n # - \"<|startoftext|>\"\n # - \"<|endoftext|>\"\n\n# Mapping token_id to new_token_string to override reserved added_tokens in the tokenizer.\n# Only works for tokens that are not part of the base vocab (aka are added_tokens).\n# Can be checked if they exist in tokenizer.json added_tokens.\nadded_tokens_overrides: # Dict[int, str]\n# 128041: \"<|im_start|>\"\n# 128042: \"<|im_end|>\"\n\n# FSDP\nfsdp:\nfsdp_config:\n\n# Deepspeed config path. e.g., deepspeed_configs/zero3.json\ndeepspeed:\n\n# Advanced DDP Arguments\nddp_timeout:\nddp_bucket_cap_mb:\nddp_broadcast_buffers:\n\n# Sequence parallelism\n# Set to a divisor of the number of GPUs available to split sequences into chunks of equal size.\n# Use in long context training to prevent OOM when sequences cannot fit into a single GPU's VRAM.\n# E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized\n# subsequences, or set to 4 to split into four equal-sized subsequences.\n# See https://docs.axolotl.ai/docs/sequence_parallelism.html for more details.\nsequence_parallel_degree:\n# Optional; strides across the key dimension. Larger values use more memory but should make training faster.\n# Must evenly divide the number of KV heads in your model.\nheads_k_stride: 1\n# One of \"varlen_llama3\", \"batch_ring\", \"batch_zigzag\", \"batch_stripe\". 
Defaults to \"varlen_llama3\"\n# in the sample packing case, and \"batch_ring\" in the non-sample packing case.\nring_attn_func:\n\n# Path to torch distx for optim 'adamw_anyprecision'\ntorchdistx_path:\n\n# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize\npretraining_dataset:\n\n# Debug mode\ndebug:\n\n# Seed\nseed:\n\n# Allow overwrite yml config using from cli\nstrict:", + "text": "# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files\n# This can also be a relative path to a model on disk\nbase_model: ./llama-7b-hf\n# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)\nbase_model_ignore_patterns:\n# If the base_model repo on hf hub doesn't include configuration .json files,\n# You can set that here, or leave this empty to default to base_model\nbase_model_config: ./llama-7b-hf\n# You can specify to choose a specific model revision from huggingface hub\nrevision_of_model:\n# Optional tokenizer configuration path in case you want to use a different tokenizer\n# than the one defined in the base model\ntokenizer_config:\n# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too\nmodel_type: AutoModelForCausalLM\n# Corresponding tokenizer for the model AutoTokenizer is a good choice\ntokenizer_type: AutoTokenizer\n# Trust remote code for untrusted source\ntrust_remote_code:\n# use_fast option for tokenizer loading from_pretrained, default to True\ntokenizer_use_fast:\n# Whether to use the legacy tokenizer setting, defaults to True\ntokenizer_legacy:\n# Resize the model embeddings when new tokens are added to multiples of 32\n# This is reported to improve training speed on some models\nresize_token_embeddings_to_32x:\n# Optional[bool] Whether to shrink the embeddings to len(tokenizer). By default, we won't shrink.\nshrink_embeddings:\n# Optional[bool] Don't upcast the embeddings to float32 when using PEFT. Useful for low-VRAM GPUs\nembeddings_skip_upcast:\n# Whether to load the model with randomly initialized weights. Useful for\n# pre-training a model from scratch or debugging purposes.\nrandom_init_weights:\n\n# (Internal use only)\n# Used to identify which the model is based on\nis_falcon_derived_model:\nis_llama_derived_model:\nis_qwen_derived_model:\n# Please note that if you set this to true, `padding_side` will be set to \"left\" by default\nis_mistral_derived_model:\n\n# optional overrides to the base model configuration\noverrides_of_model_config:\n # RoPE Scaling https://github.com/huggingface/transformers/pull/24653\n rope_scaling:\n type: # linear | dynamic\n factor: # float\n\n# optional overrides the base model loading from_pretrained\noverrides_of_model_kwargs:\n # use_cache: False\n\n# optional overrides to the bnb 4bit quantization configuration\n# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig\nbnb_config_kwargs:\n # These are default values\n llm_int8_has_fp16_weight: false\n bnb_4bit_quant_type: nf4\n bnb_4bit_use_double_quant: true\n\n# quantization aware training\nqat:\n activation_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for activation quantization. Valid options are \"int4\" and \"int8\"\n weight_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for weight quantization. Valid options are \"int4\" and \"int8\"\n group_size: # Optional[int] = 32. 
The number of elements in each group for per-group fake quantization\n fake_quant_after_n_steps: # Optional[int] = None. The number of steps to apply fake quantization after\n\n# post-training quantization\nquantization:\n weight_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for weight quantization. Valid options are uintX for X in [1, 2, 3, 4, 5, 6, 7], or int4, or int8\n activation_dtype: # Optional[str] = \"int8\". Fake quantization layout to use for activation quantization. Valid options are \"int4\" and \"int8\"\n group_size: # Optional[int] = 32. The number of elements in each group for per-group fake quantization\n quantize_embedding: # Optional[bool] = False. Whether to quantize the embedding layer.\n\n\n# Whether you are training a 4-bit GPTQ quantized model\ngptq: true\n\n# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer\nload_in_8bit: true\n# Use bitsandbytes 4 bit\nload_in_4bit:\n\n# Use CUDA bf16\nbf16: true # bool or 'full' for `bf16_full_eval`, or 'auto' for automatic detection. requires >=Ampere\n# Use CUDA fp16\nfp16: true\n# Use CUDA tf32\ntf32: true # requires >=Ampere\n# Note: if bf16 is set to 'auto', and fp16 is set to true, we will prefer the explicit fp16 setting\n\n# No AMP (automatic mixed precision)\nbfloat16: true # requires >=Ampere\nfloat16: true\n\n# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset\ngpu_memory_limit: 20GiB\n# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge\nlora_on_cpu: true\n\n# List[str]. Add plugins to extend the pipeline.\n# See `src/axolotl/integrations` for the available plugins or doc below for more details.\n# https://docs.axolotl.ai/docs/custom_integrations.html\nplugins:\n # - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin\n
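To make the precision and quantization flags concrete, here is a minimal sketch of a 4-bit load with bf16 compute on an Ampere-class GPU; the values are illustrative, not recommendations:

```yaml
# Sketch: 4-bit base weights with bf16 autocast (assumes an Ampere or newer GPU).
base_model: ./llama-7b-hf
load_in_4bit: true
bf16: auto # 'auto' enables bf16 only when the hardware supports it
tf32: true
gpu_memory_limit: 20GiB
```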
\n# A list of one or more datasets to finetune the model with\n# See https://docs.axolotl.ai/docs/dataset_loading.html for guide on loading datasets\n# See https://docs.axolotl.ai/docs/dataset-formats/ for guide on dataset formats\ndatasets:\n # HuggingFace dataset repo | s3:// | gs:// | path to local file or directory\n - path: vicgalle/alpaca-gpt4\n # The type of prompt to use for training. [alpaca, gpteacher, oasst, reflection]\n type: alpaca # format | format:<prompt_style> (chat/instruct) | <prompt_strategies>.load_<load_fn>\n ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file\n data_files: # Optional[str] path to source data files\n\n shards: # Optional[int] split dataset into N pieces (use with shards_idx)\n shards_idx: # Optional[int] = 0 the index of sharded dataset to use\n\n preprocess_shards: # Optional[int] process dataset in N sequential chunks for memory efficiency (exclusive with `shards`)\n\n name: # Optional[str] name of dataset configuration to load\n split: train # Optional[str] name of dataset split to load from\n revision: # Optional[str] The specific revision of the dataset to use when loading from the Hugging Face Hub. This can be a commit hash, tag, or branch name. If not specified, the latest version will be used. This parameter is ignored for local datasets.\n trust_remote_code: # Optional[bool] Trust remote code for untrusted source\n\n # Custom user instruction prompt\n - path: repo\n type:\n # The below are defaults. Only set what's needed if you use a different column name.\n system_prompt: \"\"\n system_format: \"{system}\"\n field_system: system\n field_instruction: instruction\n field_input: input\n field_output: output\n\n # Customizable to be single line or multi-line\n # Use {instruction}/{input} as key to be replaced\n # 'format' can include {input}\n format: |-\n User: {instruction} {input}\n Assistant:\n # 'no_input_format' cannot include {input}\n no_input_format: \"{instruction} \"\n\n # For `completion` datasets only, uses the provided field instead of `text` column\n field:\n\n # Using chat template\n - path: ...\n # Set type to `chat_template` to use this strategy\n type: chat_template\n # Specify the name of the chat template to use\n # The name of the chat template to use for training, following values are supported:\n # - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default.\n # - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n # - tokenizer_default_fallback_*: where * is the name of the chat template to fall back to if the tokenizer does not have a chat template, else defaults to the tokenizer. E.g. tokenizer_default_fallback_chatml.\n # - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n chat_template: tokenizer_default\n\n # Custom jinja chat template. Used only if `chat_template: jinja` or empty.\n chat_template_jinja:\n\n # Key containing the messages (default: \"messages\")\n field_messages: messages\n\n # Key containing the tools (default: \"tools\")\n # Must be a list[dict] and follow [JSON schema](https://json-schema.org/learn/getting-started-step-by-step).\n field_tools: tools\n\n # Key containing the system message (default: \"system\")\n # If the system message is not present in the dataset sample, it will be loaded from the field_system property.\n field_system: system\n\n # Mapping of properties from the input dataset to the chat template.\n # (default: message_property_mappings={'role':'role', 'content':'content'})\n # If a property exists in the template but not in this mapping, the system will attempt\n # to load it directly from the message using the property name as the key.\n # Example: In the mapping below, 'from' is loaded from input dataset and used as 'role',\n # while 'value' is loaded and used as 'content' in the chat template.\n message_property_mappings:\n role: from\n content: value\n # ...\n\n # Optional[Dict[str, List]]. Roles mapping in the messages.\n # The format is {target_role: [source_roles]}. All source roles will be mapped to the target role.\n # The default is:\n roles:\n user: [\"human\", \"user\"]\n assistant: [\"gpt\", \"assistant\"]\n system: [\"system\"]\n tool: [\"tool\"]\n\n # Optional[bool]. Whether to drop the system turn from the dataset. Only works with chat_template.\n # This does not drop the default system message from chat_template if it exists. If you wish to,\n # we recommend using a custom jinja template with the default system message removed or\n # adding a system turn with empty content.\n drop_system_message:\n
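As a concrete sketch of the chat template options above, a ShareGPT-style dataset whose turns use `from`/`value` keys could be wired up as follows; the dataset path is hypothetical:

```yaml
datasets:
  - path: my-org/sharegpt-style-chats # hypothetical dataset repo
    type: chat_template
    chat_template: tokenizer_default
    field_messages: conversations
    message_property_mappings:
      role: from     # each turn's 'from' key becomes the template's 'role'
      content: value # each turn's 'value' key becomes the template's 'content'
    roles:
      user: ["human", "user"]
      assistant: ["gpt", "assistant"]
```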
\n # Optional[bool]. (for Qwen3 template only) Whether to split the assistant content based on a reasoning trace inside delimited tags\n # See example at `docs/dataset-formats/conversation.qmd`\n split_thinking:\n\n # IMPORTANT: The following fields determine which parts of the conversation to train on.\n # Priority order: message_field_training > message_field_training_detail > train_on_inputs or role in roles_to_train\n # See examples at `docs/dataset-formats/conversation.qmd`\n # Note: If the below 5 fields are empty, defaults to training only on the last message.\n\n # Optional[List[str]]. Roles to train on. The tokens from these roles will be considered for the loss.\n roles_to_train: [\"assistant\"] # default\n # Optional[str]. Which EOS tokens to train on in the conversation. Possible values are:\n # - all: train on all EOS tokens\n # - turn (default): train on the EOS token at the end of each trainable turn\n # - last: train on the last EOS token in the conversation\n # TIP: Please make sure that your `tokenizer.eos_token` is the same as the EOS/EOT token in the template. Otherwise, set `eos_token` under `special_tokens`.\n train_on_eos: turn\n # Optional[str]. Which EOT (End-of-Turn) tokens to train on in the conversation. Possible values are:\n # - all: train on all EOT tokens\n # - turn: train on the EOT token at the end of each trainable turn\n # - last: train on the last EOT token in the conversation\n # If not specified, defaults to the value of train_on_eos for backward compatibility.\n train_on_eot:\n # The key in the message turn that indicates via boolean whether tokens of a turn should be considered for training. Useful to selectively train on certain turns besides the `roles_to_train`.\n message_field_training: training\n # The key in the message turn that contains the training details. Useful to selectively train on certain tokens in a turn.\n # The value of the key is a List[Dict] containing `begin_offset` (start character index in content), `end_offset` (end character index in content), and `train` (boolean whether to train).\n message_field_training_detail: train_detail\n
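To illustrate the masking fields, here is a sketch of a single conversation sample that selects a hand-picked character span of the assistant turn for the loss; the content and offsets are illustrative, and `train_detail` matches the `message_field_training_detail` key configured above:

```yaml
# One dataset sample, shown as YAML for readability.
messages:
  - role: user
    content: "What is 2 + 2?"
  - role: assistant
    content: "2 + 2 = 4"
    train_detail: # read via message_field_training_detail: train_detail
      - begin_offset: 0 # start character index in content (illustrative)
        end_offset: 8 # end character index in content (illustrative)
        train: true
```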
\n\n# If false, the datasets will not be shuffled and will keep their original order in `datasets`.\n# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.\nshuffle_merged_datasets: true\n\n# Deduplicates datasets and test_datasets with identical entries.\ndataset_exact_deduplication: true\n\n# A list of one or more datasets to eval the model with.\n# You can use either test_datasets, or val_set_size, but not both.\ntest_datasets:\n - path: /workspace/data/eval.jsonl\n ds_type: json\n # You need to specify a split. For \"json\" datasets the default split is called \"train\".\n split: train\n type: completion\n data_files:\n - /workspace/data/eval.jsonl\n\n# use RL training: 'dpo', 'ipo', 'kto', 'simpo', 'orpo', 'grpo'\nrl:\nrl_beta: # Optional[float]. The beta parameter for the RL training.\n\n# dpo\ndpo_use_weighting: # Optional[bool]. Whether to perform weighting.\nrpo_alpha: # Optional[float]. Weighting of NLL term in loss from RPO paper.\n\n# orpo\norpo_alpha: 0.1 # Parameter controlling the relative ratio loss weight in the ORPO loss. Passed to `beta` in `ORPOConfig` due to trl mapping.\n\n# kto\nkto_desirable_weight: # Optional[float]. Factor for desirable loss term in KTO loss.\nkto_undesirable_weight: # Optional[float]. Factor for undesirable loss term in KTO loss.\n\n# simpo\ncpo_alpha: 1.0 # Weight of the BC regularizer\nsimpo_gamma: 0.5 # Target reward margin for the SimPO loss\n\n# grpo\ntrl:\n use_vllm: # Optional[bool]. Whether to use vLLM for RL training.\n vllm_server_host: # Optional[str]. Host of the vLLM server to connect to.\n vllm_server_port: # Optional[int]. Port of the vLLM server to connect to.\n vllm_server_timeout: # Optional[int]. Total timeout (in seconds) to wait for the vLLM server to respond.\n vllm_guided_decoding_regex: # Optional[str]. Regex for vLLM guided decoding.\n\n beta: # Optional[float]. Beta parameter for the RL training. Same as `rl_beta`.\n max_completion_length: # Optional[int]. Maximum length of the completion for RL training.\n\n reward_funcs: # Optional[list[str]]. List of reward functions to load. Paths must be importable from current dir.\n reward_weights: # Optional[list[float]]. List of reward weights for the reward functions.\n\n num_generations: # Optional[int]. Number of generations to sample.\n log_completions: # Optional[bool]. Whether to log completions.\n num_completions_to_print: # Optional[int]. Number of completions to print when log_completions is True.\n\n sync_ref_model: # Optional[bool]. Whether to sync the reference model.\n ref_model_mixup_alpha: # Optional[float]. Mixup alpha for the reference model.\n ref_model_sync_steps: # Optional[int]. Sync steps for the reference model.\n scale_rewards: # Optional[bool]. Whether to scale rewards by their standard deviation.\n\n temperature: # Optional[float]. Sampling temperature for the GRPO policy.\n top_p: # Optional[float]. Top-p sampling probability for the generation policy.\n top_k: # Optional[int]. Top-k sampling for the generation policy.\n min_p: # Optional[float]. Minimum probability for the generation policy.\n repetition_penalty: # Optional[float]. Penalty for tokens that appear in prompt and generated text.\n\n num_iterations: # Optional[int]. Number of iterations per batch (μ) for GRPO.\n epsilon: # Optional[float]. Epsilon value for clipping in the GRPO algorithm.\n epsilon_high: # Optional[float]. Upper-bound epsilon value for clipping in the GRPO algorithm.\n use_liger_loss: # Optional[bool]. Whether to use Liger loss for GRPO.\n loss_type: # Optional[str]. Loss formulation to use. Supported values: grpo, bnpo, dr_grpo.\n mask_truncated_completions: # Optional[bool]. Whether to exclude truncated completions from loss calculation.\n\n\n# reward modelling: `True` or `False`\nreward_model:\n\n# process reward modelling: `True` or `False`\nprocess_reward_model:\n
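Putting the RL knobs together, a minimal GRPO run against a vLLM server might be sketched like this; the host, port, and reward module path are hypothetical placeholders:

```yaml
rl: grpo
trl:
  use_vllm: true
  vllm_server_host: 127.0.0.1 # hypothetical
  vllm_server_port: 8000 # hypothetical
  num_generations: 4
  max_completion_length: 512
  reward_funcs:
    - my_rewards.format_reward # hypothetical module importable from the current dir
  reward_weights:
    - 1.0
```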
\n# The name of the chat template to use for training, following values are supported:\n# - tokenizer_default: Uses the chat template that is available in the tokenizer_config.json. If the chat template is not available in the tokenizer, it will raise an error. This is the default value.\n# - alpaca/inst/chatml/gemma/cohere/llama3/phi_3/deepseek_v2/jamba: These chat templates are available in the axolotl codebase at src/axolotl/utils/chat_templates.py\n# - tokenizer_default_fallback_*: where * is the name of the chat template to fall back to. E.g. tokenizer_default_fallback_chatml. This is useful when the chat template is not available in the tokenizer.\n# - jinja: Uses a custom jinja template for the chat template. The custom jinja template should be provided in the chat_template_jinja field.\n# The selected chat template will be saved to the tokenizer_config.json for easier inferencing\n# Note: It is recommended to set train_on_inputs to true when using a chat template that is different from the model's default chat template.\nchat_template: tokenizer_default\n# custom jinja template for chat template. This will be only used if chat_template is set to `jinja` or `null` (in which case chat_template is automatically set to `jinja`). Default is null.\nchat_template_jinja: null\n# Optional[List[str]]. Custom EOT (End-of-Turn) tokens to mask/unmask during training.\n# These tokens mark the boundaries between conversation turns.\n# For example: [\"/INST\", \"</s>\", \"[/SYSTEM_PROMPT]\"]\n# If not specified, defaults to just the model's eos_token.\n# This is useful for templates that use multiple delimiter tokens.\neot_tokens:\n # - \"</s>\"\n # - \"[/INST]\"\n # - \"[/SYSTEM_PROMPT]\"\n# Changes the default system message\ndefault_system_message: You are a helpful assistant. Please give a long and detailed answer. # Currently only supports chatml.\n# Axolotl attempts to save the dataset as an arrow file after packing the data together so\n# subsequent training attempts load faster, relative path\ndataset_prepared_path: data/last_run_prepared\n# Push prepared dataset to hub\npush_dataset_to_hub: # Optional[str] repo_org/repo_name\n# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()`\n# if not set.\ndataset_processes: # defaults to os.cpu_count() if not set\n# Keep dataset in memory while preprocessing\n# Only needed if cached dataset is taking too much storage\ndataset_keep_in_memory:\n# push checkpoints to hub\nhub_model_id: # private repo path to push finetuned model\n# how to push checkpoints to hub\n# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy\nhub_strategy:\n# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets\n# Required to be true when used in combination with `push_dataset_to_hub`\nhf_use_auth_token: # boolean\n# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.\nval_set_size: 0.04\n# Num shards for whole dataset\ndataset_shard_num:\n# Index of shard to use for whole dataset\ndataset_shard_idx:\n\n# The maximum length of an input to train with, this should typically be less than 2048\n# as most models have a token/context limit of 2048\nsequence_len: 2048\n# Pad inputs so each step uses constant sized buffers\n# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently\npad_to_sequence_len:\n# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommended to set to 'true'\nsample_packing:\n# Set to 'false' if getting errors during eval with sample_packing on.\neval_sample_packing:\n# You can set these packing optimizations AFTER starting a training at least once.\n# The trainer will provide recommended values for these settings.\nsample_packing_eff_est:\ntotal_num_tokens:\n# Increasing the following values helps with packing, but usually only slightly (<1%).\n# The number of samples packed at a time.\nsample_packing_group_size: 100000\n# The number of samples which can be packed into one sequence. 
Increase if using a large sequence_len with many short samples.\nsample_packing_bin_size: 200\nsample_pack_sequentially: # Optional[bool]. Whether to pack samples sequentially.\n\n# whether to concatenate samples during pretraining\npretraining_sample_concatenation:\n\ncurriculum_sampling: # Optional[bool]. Whether to use sequential sampling for curriculum learning\n\n# Use batch flattening for speedups when not using sample_packing\nbatch_flattening:\n\n# Passed through to transformers when loading the model when launched without accelerate\n# Use `sequential` when training w/ model parallelism to limit memory\ndevice_map:\n# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model.\nmax_memory:\n\n# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model\nadapter: lora\n# If you already have a lora model trained that you want to load, put that here.\n# This means after training, if you want to test the model, you should set this to the value of `output_dir`.\n# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.\nlora_model_dir:\n\n# LoRA hyperparameters\n# For more details about the following options, see:\n# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n# - k_proj\n# - o_proj\n# - gate_proj\n# - down_proj\n# - up_proj\nlora_target_linear: # If true, will target all linear modules\n\n# List[int] | int. # The layer indices to transform, otherwise, apply to all layers\n# https://huggingface.co/docs/peft/v0.15.0/en/package_reference/lora#peft.LoraConfig.layers_to_transform\npeft_layers_to_transform:\n\n# Optional[bool]. Whether to use DoRA.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#weight-decomposed-low-rank-adaptation-dora\npeft_use_dora:\n\n# Optional[bool]. Whether to use RSLoRA.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#rank-stabilized-lora\npeft_use_rslora:\n\n# Optional[list[tuple[int, int]]]. List of layer indices to replicate.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#memory-efficient-layer-replication-with-lora\npeft_layer_replication:\n\n# bool | Literal[\"gaussian\", \"eva\", \"olora\", \"pissa\", \"pissa_niter_[number of iters]\", \"corda\", \"loftq\"]\n# How to initialize LoRA weights. Default to True which is MS original implementation.\n# https://huggingface.co/docs/peft/v0.15.0/en/developer_guides/lora#initialization\npeft_init_lora_weights:\n\n# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.\n# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. 
It may vary for other models.\n# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.\n# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994\nlora_modules_to_save:\n# - embed_tokens\n# - lm_head\n\nlora_fan_in_fan_out: false\n\n# Apply custom LoRA autograd functions and activation function Triton kernels for\n# speed and memory savings\n# See: https://docs.axolotl.ai/docs/lora_optims.html\nlora_mlp_kernel: true\nlora_qkv_kernel: true\nlora_o_kernel: true\n\n# LoRA+ hyperparameters\n# For more details about the following options, see:\n# https://arxiv.org/abs/2402.12354 and `src/axolotl/core/train_builder.py`\nloraplus_lr_ratio: # loraplus learning rate ratio lr_B / lr_A. Recommended value is 2^4.\nloraplus_lr_embedding: # loraplus learning rate for lora embedding layers. Default value is 1e-6.\n\npeft:\n # Configuration options for loftq initialization for LoRA\n # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization\n loftq_config:\n loftq_bits: # typically 4 bits\n\n# ReLoRA configuration\n# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed\nrelora_steps: # Number of steps per ReLoRA restart\nrelora_warmup_steps: # Number of per-restart warmup steps\nrelora_anneal_steps: # Number of anneal steps for each relora cycle\nrelora_prune_ratio: # threshold for optimizer magnitude when pruning\nrelora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings\n\n# wandb configuration if you're using it\n# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.\nwandb_mode: # \"offline\" to save run metadata locally and not sync to the server, \"disabled\" to turn off wandb\nwandb_project: # Your wandb project name\nwandb_entity: # A wandb Team name if using a Team\nwandb_watch:\nwandb_name: # Set the name of your wandb run\nwandb_run_id: # Set the ID of your wandb run\nwandb_log_model: # \"checkpoint\" to log model to wandb Artifacts every `save_steps` or \"end\" to log only at the end of training\n\n# mlflow configuration if you're using it\nmlflow_tracking_uri: # URI to mlflow\nmlflow_experiment_name: # Your experiment name\nmlflow_run_name: # Your run name\nhf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry\n\n# Comet configuration if you're using it\n# Make sure your `COMET_API_KEY` environment variable is set (recommended) or you login to Comet with `comet login`.\n# Check out our documentation for more details https://www.comet.com/docs/v2/api-and-sdk/python-sdk/reference/Experiment-Creation/#comet_ml.start\nuse_comet: # Enable or disable Comet integration.\ncomet_api_key: # API key for Comet. Recommended to set via `comet login`.\ncomet_workspace: # Workspace name in Comet. Defaults to the user's default workspace.\ncomet_project_name: # Project name in Comet. Defaults to Uncategorized.\ncomet_experiment_key: # Identifier for the experiment. Used to append data to an existing experiment or control the key of new experiments. Default to a random key.\ncomet_mode: # Create a new experiment (\"create\") or log to an existing one (\"get\"). Default (\"get_or_create\") auto-selects based on configuration.\ncomet_online: # Set to True to log data to Comet server, or False for offline storage. 
# Tensorboard
use_tensorboard: # Optional[bool]

# Where to save the full-finetuned model to
output_dir: ./completed-model

# Whether to use torch.compile and which backend to use
# Setting to `auto` will enable torch compile when torch>=2.5.1
torch_compile: # Optional[Union[Literal["auto"], bool]]
torch_compile_backend: # Optional[str]
torch_compile_mode: # 'default' | 'reduce-overhead' | 'max-autotune'

# Training hyperparameters

# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps.
gradient_accumulation_steps: 1
# The number of samples to include in each batch. This is the number of samples sent to each GPU.
# Batch size per gpu = micro_batch_size * gradient_accumulation_steps
micro_batch_size: 2
eval_batch_size:
num_epochs: 4
warmup_steps: 100 # cannot use with warmup_ratio
warmup_ratio: 0.05 # cannot use with warmup_steps
learning_rate: 0.00003
lr_quadratic_warmup:
logging_steps:
eval_steps: # Leave empty to eval at each epoch, an integer for every N steps, or a float for a fraction of total steps
evals_per_epoch: # Number of times per epoch to run evals; mutually exclusive with eval_steps
eval_strategy: # Set to `"no"` to skip evaluation, `"epoch"` to evaluate at the end of each epoch, or leave empty to infer from `eval_steps`.
save_strategy: # Set to `"no"` to skip checkpoint saves, `"epoch"` to save at the end of each epoch, `"best"` to save when a better result is achieved, or leave empty to infer from `save_steps`.
save_steps: # Leave empty to save at each epoch, an integer for every N steps, or a float for a fraction of total steps
saves_per_epoch: # Number of times per epoch to save a checkpoint; mutually exclusive with save_steps
save_total_limit: # Maximum number of checkpoints kept at a time
save_only_model: # Save only the model weights, skipping the optimizer. Using this means you can't resume from checkpoints.
# Maximum number of iterations to train for. Takes precedence over num_epochs: if both are set,
# num_epochs is not guaranteed to complete.
# E.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps
max_steps:

# bool. Whether to include tokens-per-second in the training metrics. This iterates over the entire dataset once, so it takes some time.
include_tokens_per_second: # Optional[bool]

# Whether to find a batch size that fits in memory. Passed to the underlying transformers Trainer
auto_find_batch_size: # Optional[bool]

eval_table_size: # Approximate number of predictions sent to wandb, depending on batch size. Enabled above 0. Default is 0
eval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128
do_causal_lm_eval: # Whether to run causal language model evaluation for metrics in `eval_causal_lm_metrics`.
eval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is ["sacrebleu", "comet", "ter", "chrf", "perplexity"]
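# Illustrative hyperparameters showing how the batch-size arithmetic works out:
# with data parallelism across 4 GPUs, the effective global batch size is
# micro_batch_size * gradient_accumulation_steps * num_gpus = 2 * 4 * 4 = 32.
# All values are examples, not recommendations.
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.00002
warmup_ratio: 0.05 # leave warmup_steps unset when using warmup_ratio
eval_steps: 0.25   # evaluate every quarter of the total steps
save_strategy: epoch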
profiler_steps: # Enable the pytorch profiler to capture the first N steps of training to the output_dir.
                # See https://pytorch.org/blog/understanding-gpu-memory-1/ for more information.
                # Snapshots can be visualized @ https://pytorch.org/memory_viz

loss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)
loss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)

# Save the model as safetensors (requires the safetensors package). Default is True
save_safetensors:

# Whether to mask out or include the human's prompt in the training labels
train_on_inputs: false
# Group similarly sized data to minimize padding.
# May be slower to start, as it must download and sort the entire dataset.
# Note that training loss may have an oscillating pattern with this enabled.
group_by_length: false

# Whether to use gradient checkpointing. Available options are: true, false, "offload", "offload_disk".
# https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing
gradient_checkpointing: false
# Additional kwargs to pass to the trainer for gradient checkpointing
# gradient_checkpointing_kwargs:
#   use_reentrant: true

# Stop training after this many evaluation losses have increased in a row
# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback
early_stopping_patience: 3

# Specify a scheduler and kwargs to use with the optimizer
# Valid values are driven by the Transformers SchedulerType class, see:
# https://github.com/huggingface/transformers/blob/5f4ecf2d9f867a1255131d2461d75793c0cf1db2/src/transformers/trainer_utils.py#L420
# Valid values include:
# - 'linear'
# - 'cosine' (default)
# - 'cosine_with_restarts'
# - 'polynomial'
# - 'constant'
# - 'constant_with_warmup'
# - 'inverse_sqrt'
# - 'reduce_lr_on_plateau'
# - 'cosine_with_min_lr'
# - 'warmup_stable_decay'
#
# Additional schedulers include:
# - 'one_cycle'
# - 'rex'
lr_scheduler:
lr_scheduler_kwargs:
cosine_min_lr_ratio: # Decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of the peak lr
cosine_constant_lr_ratio: # Freeze lr at some percentage of the steps, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of the training steps (https://arxiv.org/pdf/2308.04014.pdf)

# For the one_cycle scheduler
lr_div_factor: # Learning rate div factor
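# An illustrative scheduler choice using only the keys documented above: a
# cosine schedule that decays to 10% of the peak learning rate rather than to zero.
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1 # final lr = 0.1 * peak lr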
# Specify the optimizer
# Valid values are driven by the Transformers OptimizerNames class, see:
# https://github.com/huggingface/transformers/blob/cbf924b76c03828101a34069a96d209314114fd5/src/transformers/training_args.py#L144-L189
#
# Note that not all optimizers may be available in your environment, e.g. 'adamw_anyprecision' is part of
# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to
# start with the optimizer used in the examples/ for your model and fine-tuning use case.
#
# Valid values for 'optimizer' include:
# - adamw_torch
# - adamw_torch_fused (default)
# - adamw_torch_xla
# - adamw_torch_npu_fused
# - adamw_apex_fused
# - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)
# - adafactor
# - adamw_anyprecision
# - adamw_torch_4bit
# - ademamix
# - sgd
# - adagrad
# - adamw_bnb_8bit
# - adamw_8bit (alias for adamw_bnb_8bit)
# - ademamix_8bit
# - lion_8bit
# - lion_32bit
# - paged_adamw_32bit
# - paged_adamw_8bit
# - paged_ademamix_32bit
# - paged_ademamix_8bit
# - paged_lion_32bit
# - paged_lion_8bit
# - rmsprop
# - rmsprop_bnb
# - rmsprop_bnb_8bit
# - rmsprop_bnb_32bit
# - galore_adamw
# - galore_adamw_8bit
# - galore_adafactor
# - galore_adamw_layerwise
# - galore_adamw_8bit_layerwise
# - galore_adafactor_layerwise
# - lomo
# - adalomo
# - grokadamw
# - schedule_free_adamw
# - schedule_free_sgd
# - apollo_adamw
# - apollo_adamw_layerwise
#
# Additional custom optimizers include:
# - optimi_adamw
# - ao_adamw_8bit
# - ao_adamw_fp8
# - came_pytorch
optimizer:
# Dictionary of arguments to pass to the optimizer
optim_args:
# For GaLore optimizers, the following optim_args are available:
#   rank:            # type: int
#   update_proj_gap: # type: int
#   scale:           # type: float
#   proj_type:       # type: str, default = std

# The target modules to optimize, i.e. the module names that you would like to train. Currently only used by the GaLore algorithm.
optim_target_modules:
# - self_attn # for llama
# - mlp

# Specify weight decay
weight_decay:
# AdamW hyperparameters
adam_beta1:
adam_beta2:
adam_beta3: # only used by the CAME optimizer
adam_epsilon:
adam_epsilon2: # only used by the CAME optimizer
# Gradient clipping max norm
max_grad_norm:

# Augmentation techniques
# NEFTune (https://arxiv.org/abs/2310.05914): set this to a number (the paper's default is 5) to add noise to embeddings
# Currently only supported on Llama and Mistral
neftune_noise_alpha:

# Optional[bool]. Whether to use BetterTransformers
flash_optimum:

# Note: Only one of the following attention patches can be used at a time.
# For example, if you set `xformers_attention` to `true`, do not set `flash_attention` to `true`.

# Optional[bool]. Whether to use the xformers attention patch (https://github.com/facebookresearch/xformers)
xformers_attention:
# Optional[bool]. Whether to use the flash attention patch (https://github.com/Dao-AILab/flash-attention)
flash_attention:
flash_attn_cross_entropy: # Optional[bool]. Whether to use the flash-attention cross entropy implementation - advanced use only
flash_attn_rms_norm: # Optional[bool]. Whether to use the flash-attention rms norm implementation - advanced use only
flash_attn_fuse_qkv: # Optional[bool]. Whether to fuse QKV into a single operation
flash_attn_fuse_mlp: # Optional[bool]. Whether to fuse part of the MLP into a single operation
# Optional[bool]. Whether to use scaled-dot-product attention
# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html
sdp_attention:
# Optional[bool]. Shifted-sparse attention (llama only) - https://arxiv.org/pdf/2309.12307.pdf
s2_attention:
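# Illustrative: enable the flash attention patch and leave every other attention
# patch unset, per the note above that only one can be active at a time.
flash_attention: true
# (xformers_attention, sdp_attention, and s2_attention stay unset)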
# Optional[bool]. Whether to use low_cpu_mem_usage
low_cpu_mem_usage:
# Optional[str]. Resume from a specific checkpoint dir
resume_from_checkpoint:
# Optional[bool]. Resume from the most recent checkpoint when resume_from_checkpoint isn't set
# and you simply want training to pick up where it left off.
# Be careful with this being turned on between different models.
auto_resume_from_checkpoints: false

## Multimodal section
# int | tuple[int, int] | None. Size to resize images to, width x height.
# Will read from the model/processor config if not set.
image_size:
# str. Algorithm to use for image resizing: "bilinear", "bicubic", "lanczos". Default is "bilinear".
image_resize_algorithm: 'bilinear'
## End of multimodal section

# Don't mess with this, it's here for accelerate and torchrun
local_rank:

# Add or change special tokens.
# If you add tokens here, you don't need to add them to the `tokens` list.
special_tokens:
  # bos_token: "<s>"
  # eos_token: "</s>"
  # unk_token: "<unk>"
  # pad_token: "[PAD]"

# Optional[list[str]]. Add extra tokens to the tokenizer.
tokens:
  # - "<|startoftext|>"
  # - "<|endoftext|>"

# Mapping of token_id to new_token_string to override reserved added_tokens in the tokenizer.
# Only works for tokens that are not part of the base vocab (i.e. are added_tokens).
# You can check whether they exist in the tokenizer.json added_tokens.
added_tokens_overrides: # Dict[int, str]
# 128041: "<|im_start|>"
# 128042: "<|im_end|>"

# FSDP
fsdp:
fsdp_config:

# Deepspeed config path, e.g. deepspeed_configs/zero3.json
deepspeed:

# Advanced DDP Arguments
ddp_timeout:
ddp_bucket_cap_mb:
ddp_broadcast_buffers:

# Sequence parallelism
# Set to a divisor of the number of GPUs available to split sequences into chunks of equal size.
# Use in long-context training to prevent OOM when sequences cannot fit into a single GPU's VRAM.
# E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized
# subsequences, or set to 4 to split into four equal-sized subsequences.
# See https://docs.axolotl.ai/docs/sequence_parallelism.html for more details.
sequence_parallel_degree:
# Optional; strides across the key dimension. Larger values use more memory but should make training faster.
# Must evenly divide the number of KV heads in your model.
heads_k_stride: 1
# One of "varlen_llama3", "batch_ring", "batch_zigzag", "batch_stripe". Defaults to "varlen_llama3"
# in the sample packing case, and "batch_ring" in the non-sample packing case.
ring_attn_func:
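# An illustrative sequence parallelism setup mirroring the 4-GPU example above:
# each sequence is split into two equal-sized chunks.
sequence_parallel_degree: 2
heads_k_stride: 1 # must evenly divide the model's KV head count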
# Path to torchdistx for the 'adamw_anyprecision' optimizer
torchdistx_path:

# Set to an HF dataset for type: 'completion' to stream data instead of pre-tokenizing
pretraining_dataset:

# Debug mode
debug:

# Seed
seed:

# Allow overwriting yml config values from the CLI
strict:
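# An illustrative streaming pretraining setup; the dataset path is a hypothetical
# placeholder. Streamed data has no fixed epoch length, so bound the run by steps.
pretraining_dataset: my-org/my-pretrain-corpus # hypothetical HF dataset
max_steps: 5000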