Built site for gh-pages
@@ -495,7 +495,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
</tr>
<tr class="odd">
<td><a href="#axolotl.cli.utils.load_model_and_tokenizer">load_model_and_tokenizer</a></td>
-<td>Helper function for loading a model and tokenizer specified in the given <code>axolotl</code></td>
+<td>Helper function for loading a model, tokenizer, and processor specified in the given <code>axolotl</code></td>
</tr>
<tr class="even">
<td><a href="#axolotl.cli.utils.strip_optional_type">strip_optional_type</a></td>
@@ -829,7 +829,7 @@ Only downloads files that don’t exist locally or have changed.</p>
<section id="axolotl.cli.utils.load_model_and_tokenizer" class="level3">
<h3 class="anchored" data-anchor-id="axolotl.cli.utils.load_model_and_tokenizer">load_model_and_tokenizer</h3>
<div class="sourceCode" id="cb7"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a>cli.utils.load_model_and_tokenizer(cfg, inference<span class="op">=</span><span class="va">False</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
-<p>Helper function for loading a model and tokenizer specified in the given <code>axolotl</code>
+<p>Helper function for loading a model, tokenizer, and processor specified in the given <code>axolotl</code>
config.</p>
<section id="parameters-6" class="level4 doc-section doc-section-parameters">
<h4 class="doc-section doc-section-parameters anchored" data-anchor-id="parameters-6">Parameters</h4>
@@ -868,9 +868,9 @@ config.</p>
<h4 class="doc-section doc-section-returns anchored" data-anchor-id="returns-5">Returns</h4>
<table class="caption-top table">
<colgroup>
-<col style="width: 6%">
-<col style="width: 64%">
-<col style="width: 29%">
+<col style="width: 4%">
+<col style="width: 58%">
+<col style="width: 36%">
</colgroup>
<thead>
<tr class="header">
@@ -882,8 +882,8 @@ config.</p>
<tbody>
<tr class="odd">
<td></td>
-<td>tuple[PreTrainedModel, PreTrainedTokenizer | PreTrainedTokenizerFast | Any]</td>
-<td><code>transformers</code> model and tokenizer.</td>
+<td>tuple[PreTrainedModel, PreTrainedTokenizer | PreTrainedTokenizerFast | Any, ProcessorMixin | None]</td>
+<td>Tuple of (PreTrainedModel, PreTrainedTokenizer, ProcessorMixin).</td>
</tr>
</tbody>
</table>
@@ -497,30 +497,82 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
<h3 class="anchored" data-anchor-id="axolotl.core.trainers.trl.AxolotlCPOTrainer">AxolotlCPOTrainer</h3>
<div class="sourceCode" id="cb1"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlCPOTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Extend the base CPOTrainer for axolotl helpers</p>
<section id="methods" class="level4">
<h4 class="anchored" data-anchor-id="methods">Methods</h4>
<table class="caption-top table">
<thead>
<tr class="header">
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><a href="#axolotl.core.trainers.trl.AxolotlCPOTrainer.get_batch_loss_metrics">get_batch_loss_metrics</a></td>
<td>Compute the CPO loss and other metrics for the given batch of inputs for train or test.</td>
</tr>
</tbody>
</table>
<section id="axolotl.core.trainers.trl.AxolotlCPOTrainer.get_batch_loss_metrics" class="level5">
<h5 class="anchored" data-anchor-id="axolotl.core.trainers.trl.AxolotlCPOTrainer.get_batch_loss_metrics">get_batch_loss_metrics</h5>
<div class="sourceCode" id="cb2"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlCPOTrainer.get_batch_loss_metrics(</span>
<span id="cb2-2"><a href="#cb2-2" aria-hidden="true" tabindex="-1"></a> model,</span>
<span id="cb2-3"><a href="#cb2-3" aria-hidden="true" tabindex="-1"></a> batch,</span>
<span id="cb2-4"><a href="#cb2-4" aria-hidden="true" tabindex="-1"></a> train_eval<span class="op">=</span><span class="st">'train'</span>,</span>
<span id="cb2-5"><a href="#cb2-5" aria-hidden="true" tabindex="-1"></a>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Compute the CPO loss and other metrics for the given batch of inputs for train or test.</p>
</section>
</section>
</section>
<section id="axolotl.core.trainers.trl.AxolotlKTOTrainer" class="level3">
<h3 class="anchored" data-anchor-id="axolotl.core.trainers.trl.AxolotlKTOTrainer">AxolotlKTOTrainer</h3>
<div class="sourceCode" id="cb2"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlKTOTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
|
||||
<div class="sourceCode" id="cb3"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlKTOTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Extend the base KTOTrainer for axolotl helpers</p>
</section>
<section id="axolotl.core.trainers.trl.AxolotlORPOTrainer" class="level3">
<h3 class="anchored" data-anchor-id="axolotl.core.trainers.trl.AxolotlORPOTrainer">AxolotlORPOTrainer</h3>
<div class="sourceCode" id="cb3"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlORPOTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
|
||||
<div class="sourceCode" id="cb4"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb4-1"><a href="#cb4-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlORPOTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Extend the base ORPOTrainer for axolotl helpers</p>
<section id="methods-1" class="level4">
<h4 class="anchored" data-anchor-id="methods-1">Methods</h4>
<table class="caption-top table">
<thead>
<tr class="header">
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><a href="#axolotl.core.trainers.trl.AxolotlORPOTrainer.get_batch_loss_metrics">get_batch_loss_metrics</a></td>
<td>Compute the ORPO loss and other metrics for the given batch of inputs for train or test.</td>
</tr>
</tbody>
</table>
<section id="axolotl.core.trainers.trl.AxolotlORPOTrainer.get_batch_loss_metrics" class="level5">
<h5 class="anchored" data-anchor-id="axolotl.core.trainers.trl.AxolotlORPOTrainer.get_batch_loss_metrics">get_batch_loss_metrics</h5>
<div class="sourceCode" id="cb5"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlORPOTrainer.get_batch_loss_metrics(</span>
<span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a> model,</span>
<span id="cb5-3"><a href="#cb5-3" aria-hidden="true" tabindex="-1"></a> batch,</span>
<span id="cb5-4"><a href="#cb5-4" aria-hidden="true" tabindex="-1"></a> train_eval<span class="op">=</span><span class="st">'train'</span>,</span>
<span id="cb5-5"><a href="#cb5-5" aria-hidden="true" tabindex="-1"></a>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Compute the ORPO loss and other metrics for the given batch of inputs for train or test.</p>
</section>
</section>
</section>
<section id="axolotl.core.trainers.trl.AxolotlPRMTrainer" class="level3">
<h3 class="anchored" data-anchor-id="axolotl.core.trainers.trl.AxolotlPRMTrainer">AxolotlPRMTrainer</h3>
<div class="sourceCode" id="cb4"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb4-1"><a href="#cb4-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlPRMTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
|
||||
<div class="sourceCode" id="cb6"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlPRMTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Extend the base trl.PRMTrainer for axolotl helpers</p>
</section>
<section id="axolotl.core.trainers.trl.AxolotlRewardTrainer" class="level3">
<h3 class="anchored" data-anchor-id="axolotl.core.trainers.trl.AxolotlRewardTrainer">AxolotlRewardTrainer</h3>
<div class="sourceCode" id="cb5"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlRewardTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
|
||||
<div class="sourceCode" id="cb7"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.AxolotlRewardTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Extend the base RewardTrainer for axolotl helpers</p>
</section>
<section id="axolotl.core.trainers.trl.TRLPPOTrainer" class="level3">
<h3 class="anchored" data-anchor-id="axolotl.core.trainers.trl.TRLPPOTrainer">TRLPPOTrainer</h3>
<div class="sourceCode" id="cb6"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.TRLPPOTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
|
||||
<div class="sourceCode" id="cb8"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a>core.trainers.trl.TRLPPOTrainer()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Wrapper for TRL PPO trainer to handle customizations</p>
@@ -725,7 +725,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
</section>
<section id="axolotl.train.save_initial_configs" class="level3">
<h3 class="anchored" data-anchor-id="axolotl.train.save_initial_configs">save_initial_configs</h3>
<div class="sourceCode" id="cb5"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>train.save_initial_configs(cfg, tokenizer, model, peft_config)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
|
||||
<div class="sourceCode" id="cb5"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>train.save_initial_configs(cfg, tokenizer, model, peft_config, processor)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Save initial configurations before training.</p>
<section id="parameters-4" class="level4 doc-section doc-section-parameters">
<h4 class="doc-section doc-section-parameters anchored" data-anchor-id="parameters-4">Parameters</h4>
@@ -935,8 +935,8 @@ trainer setup.</p>
<tbody>
<tr class="odd">
<td></td>
-<td>tuple[HFRLTrainerBuilder | HFCausalTrainerBuilder, PeftModel | PreTrainedModel, PreTrainedTokenizer, PeftConfig | None]</td>
-<td>Tuple of: - Trainer (Causal or RLHF) - Model - Tokenizer - PEFT config</td>
+<td>tuple[HFRLTrainerBuilder | HFCausalTrainerBuilder, PeftModel | PreTrainedModel, PreTrainedTokenizer, PeftConfig | None, ProcessorMixin | None]</td>
+<td>Tuple of: - Trainer (Causal or RLHF) - Model - Tokenizer - PEFT config - Processor</td>
</tr>
</tbody>
</table>
390
docs/config.html
@@ -920,189 +920,213 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
<span id="cb1-460"><a href="#cb1-460" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-461"><a href="#cb1-461" aria-hidden="true" tabindex="-1"></a><span class="fu">eval_table_size</span><span class="kw">:</span><span class="co"> # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0</span></span>
<span id="cb1-462"><a href="#cb1-462" aria-hidden="true" tabindex="-1"></a><span class="fu">eval_max_new_tokens</span><span class="kw">:</span><span class="co"> # Total number of tokens generated for predictions sent to wandb. Default is 128</span></span>
<span id="cb1-463"><a href="#cb1-463" aria-hidden="true" tabindex="-1"></a><span class="fu">eval_causal_lm_metrics</span><span class="kw">:</span><span class="co"> # HF evaluate metrics used during evaluation. Default is ["sacrebleu", "comet", "ter", "chrf", "perplexity"]</span></span>
<span id="cb1-464"><a href="#cb1-464" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-465"><a href="#cb1-465" aria-hidden="true" tabindex="-1"></a><span class="fu">profiler_steps</span><span class="kw">:</span><span class="co"> # enable the pytorch profiler to capture the first N steps of training to the output_dir.</span></span>
<span id="cb1-466"><a href="#cb1-466" aria-hidden="true" tabindex="-1"></a><span class="co"> # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information</span></span>
<span id="cb1-467"><a href="#cb1-467" aria-hidden="true" tabindex="-1"></a><span class="co"> # snapshots can be visualized @ https://pytorch.org/memory_viz</span></span>
<span id="cb1-468"><a href="#cb1-468" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-469"><a href="#cb1-469" aria-hidden="true" tabindex="-1"></a><span class="fu">loss_watchdog_threshold</span><span class="kw">:</span><span class="co"> # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)</span></span>
<span id="cb1-470"><a href="#cb1-470" aria-hidden="true" tabindex="-1"></a><span class="fu">loss_watchdog_patience</span><span class="kw">:</span><span class="co"> # Number of high-loss steps in a row before the trainer aborts (default: 3)</span></span>
<span id="cb1-471"><a href="#cb1-471" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-472"><a href="#cb1-472" aria-hidden="true" tabindex="-1"></a><span class="co"># Save model as safetensors (require safetensors package)</span></span>
<span id="cb1-473"><a href="#cb1-473" aria-hidden="true" tabindex="-1"></a><span class="fu">save_safetensors</span><span class="kw">:</span></span>
<span id="cb1-474"><a href="#cb1-474" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-475"><a href="#cb1-475" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to mask out or include the human's prompt from the training labels</span></span>
<span id="cb1-476"><a href="#cb1-476" aria-hidden="true" tabindex="-1"></a><span class="fu">train_on_inputs</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
<span id="cb1-477"><a href="#cb1-477" aria-hidden="true" tabindex="-1"></a><span class="co"># Group similarly sized data to minimize padding.</span></span>
<span id="cb1-478"><a href="#cb1-478" aria-hidden="true" tabindex="-1"></a><span class="co"># May be slower to start, as it must download and sort the entire dataset.</span></span>
<span id="cb1-479"><a href="#cb1-479" aria-hidden="true" tabindex="-1"></a><span class="co"># Note that training loss may have an oscillating pattern with this enabled.</span></span>
<span id="cb1-480"><a href="#cb1-480" aria-hidden="true" tabindex="-1"></a><span class="fu">group_by_length</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
<span id="cb1-481"><a href="#cb1-481" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-482"><a href="#cb1-482" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use gradient checkpointing https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing</span></span>
<span id="cb1-483"><a href="#cb1-483" aria-hidden="true" tabindex="-1"></a><span class="fu">gradient_checkpointing</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
<span id="cb1-484"><a href="#cb1-484" aria-hidden="true" tabindex="-1"></a><span class="co"># additional kwargs to pass to the trainer for gradient checkpointing</span></span>
<span id="cb1-485"><a href="#cb1-485" aria-hidden="true" tabindex="-1"></a><span class="co"># gradient_checkpointing_kwargs:</span></span>
<span id="cb1-486"><a href="#cb1-486" aria-hidden="true" tabindex="-1"></a><span class="co"># use_reentrant: true</span></span>
<span id="cb1-487"><a href="#cb1-487" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-488"><a href="#cb1-488" aria-hidden="true" tabindex="-1"></a><span class="co"># Stop training after this many evaluation losses have increased in a row</span></span>
<span id="cb1-489"><a href="#cb1-489" aria-hidden="true" tabindex="-1"></a><span class="co"># https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback</span></span>
<span id="cb1-490"><a href="#cb1-490" aria-hidden="true" tabindex="-1"></a><span class="fu">early_stopping_patience</span><span class="kw">:</span><span class="at"> </span><span class="dv">3</span></span>
<span id="cb1-491"><a href="#cb1-491" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-492"><a href="#cb1-492" aria-hidden="true" tabindex="-1"></a><span class="co"># Specify a scheduler and kwargs to use with the optimizer</span></span>
<span id="cb1-493"><a href="#cb1-493" aria-hidden="true" tabindex="-1"></a><span class="fu">lr_scheduler</span><span class="kw">:</span><span class="co"> # 'one_cycle' | 'rex' | 'log_sweep' | empty for cosine</span></span>
<span id="cb1-494"><a href="#cb1-494" aria-hidden="true" tabindex="-1"></a><span class="fu">lr_scheduler_kwargs</span><span class="kw">:</span></span>
<span id="cb1-495"><a href="#cb1-495" aria-hidden="true" tabindex="-1"></a><span class="fu">cosine_min_lr_ratio</span><span class="kw">:</span><span class="co"> # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr</span></span>
<span id="cb1-496"><a href="#cb1-496" aria-hidden="true" tabindex="-1"></a><span class="fu">cosine_constant_lr_ratio</span><span class="kw">:</span><span class="co"> # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)</span></span>
<span id="cb1-497"><a href="#cb1-497" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-498"><a href="#cb1-498" aria-hidden="true" tabindex="-1"></a><span class="co"># For one_cycle optim</span></span>
<span id="cb1-499"><a href="#cb1-499" aria-hidden="true" tabindex="-1"></a><span class="fu">lr_div_factor</span><span class="kw">:</span><span class="co"> # Learning rate div factor</span></span>
<span id="cb1-500"><a href="#cb1-500" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-501"><a href="#cb1-501" aria-hidden="true" tabindex="-1"></a><span class="co"># Specify optimizer</span></span>
<span id="cb1-502"><a href="#cb1-502" aria-hidden="true" tabindex="-1"></a><span class="co"># Valid values are driven by the Transformers OptimizerNames class, see:</span></span>
<span id="cb1-503"><a href="#cb1-503" aria-hidden="true" tabindex="-1"></a><span class="co"># https://github.com/huggingface/transformers/blob/95b374952dc27d8511541d6f5a4e22c9ec11fb24/src/transformers/training_args.py#L134</span></span>
<span id="cb1-504"><a href="#cb1-504" aria-hidden="true" tabindex="-1"></a><span class="co">#</span></span>
<span id="cb1-505"><a href="#cb1-505" aria-hidden="true" tabindex="-1"></a><span class="co"># Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of</span></span>
<span id="cb1-506"><a href="#cb1-506" aria-hidden="true" tabindex="-1"></a><span class="co"># torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used</span></span>
<span id="cb1-507"><a href="#cb1-507" aria-hidden="true" tabindex="-1"></a><span class="co"># in the examples/ for your model and fine-tuning use case.</span></span>
<span id="cb1-508"><a href="#cb1-508" aria-hidden="true" tabindex="-1"></a><span class="co">#</span></span>
<span id="cb1-509"><a href="#cb1-509" aria-hidden="true" tabindex="-1"></a><span class="co"># Valid values for 'optimizer' include:</span></span>
<span id="cb1-510"><a href="#cb1-510" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch</span></span>
<span id="cb1-511"><a href="#cb1-511" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch_fused</span></span>
<span id="cb1-512"><a href="#cb1-512" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch_xla</span></span>
<span id="cb1-513"><a href="#cb1-513" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_apex_fused</span></span>
<span id="cb1-514"><a href="#cb1-514" aria-hidden="true" tabindex="-1"></a><span class="co"># - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)</span></span>
<span id="cb1-515"><a href="#cb1-515" aria-hidden="true" tabindex="-1"></a><span class="co"># - adafactor</span></span>
<span id="cb1-516"><a href="#cb1-516" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_anyprecision</span></span>
<span id="cb1-517"><a href="#cb1-517" aria-hidden="true" tabindex="-1"></a><span class="co"># - sgd</span></span>
<span id="cb1-518"><a href="#cb1-518" aria-hidden="true" tabindex="-1"></a><span class="co"># - adagrad</span></span>
<span id="cb1-519"><a href="#cb1-519" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_bnb_8bit</span></span>
<span id="cb1-520"><a href="#cb1-520" aria-hidden="true" tabindex="-1"></a><span class="co"># - lion_8bit</span></span>
<span id="cb1-521"><a href="#cb1-521" aria-hidden="true" tabindex="-1"></a><span class="co"># - lion_32bit</span></span>
<span id="cb1-522"><a href="#cb1-522" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_adamw_32bit</span></span>
<span id="cb1-523"><a href="#cb1-523" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_adamw_8bit</span></span>
<span id="cb1-524"><a href="#cb1-524" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_lion_32bit</span></span>
<span id="cb1-525"><a href="#cb1-525" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_lion_8bit</span></span>
<span id="cb1-526"><a href="#cb1-526" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw</span></span>
<span id="cb1-527"><a href="#cb1-527" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw_8bit</span></span>
<span id="cb1-528"><a href="#cb1-528" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adafactor</span></span>
<span id="cb1-529"><a href="#cb1-529" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw_layerwise</span></span>
<span id="cb1-530"><a href="#cb1-530" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw_8bit_layerwise</span></span>
<span id="cb1-531"><a href="#cb1-531" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adafactor_layerwise</span></span>
<span id="cb1-532"><a href="#cb1-532" aria-hidden="true" tabindex="-1"></a><span class="fu">optimizer</span><span class="kw">:</span></span>
<span id="cb1-533"><a href="#cb1-533" aria-hidden="true" tabindex="-1"></a><span class="co"># Dictionary of arguments to pass to the optimizer</span></span>
<span id="cb1-534"><a href="#cb1-534" aria-hidden="true" tabindex="-1"></a><span class="fu">optim_args</span><span class="kw">:</span></span>
<span id="cb1-535"><a href="#cb1-535" aria-hidden="true" tabindex="-1"></a><span class="co"># For Galore Optimizers the following optim_args are available</span></span>
<span id="cb1-536"><a href="#cb1-536" aria-hidden="true" tabindex="-1"></a><span class="co"># rank: # type: int</span></span>
<span id="cb1-537"><a href="#cb1-537" aria-hidden="true" tabindex="-1"></a><span class="co"># update_proj_gap # type: int</span></span>
<span id="cb1-538"><a href="#cb1-538" aria-hidden="true" tabindex="-1"></a><span class="co"># scale # type: float</span></span>
<span id="cb1-539"><a href="#cb1-539" aria-hidden="true" tabindex="-1"></a><span class="co"># proj_type: # type: str, default = std</span></span>
<span id="cb1-540"><a href="#cb1-540" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-541"><a href="#cb1-541" aria-hidden="true" tabindex="-1"></a><span class="co"># The target modules to optimize, i.e. the module names that you would like to train, right now this is used only for GaLore algorithm</span></span>
<span id="cb1-542"><a href="#cb1-542" aria-hidden="true" tabindex="-1"></a><span class="fu">optim_target_modules</span><span class="kw">:</span></span>
<span id="cb1-543"><a href="#cb1-543" aria-hidden="true" tabindex="-1"></a><span class="co"># - self_attn # for llama</span></span>
<span id="cb1-544"><a href="#cb1-544" aria-hidden="true" tabindex="-1"></a><span class="co"># - mlp</span></span>
<span id="cb1-545"><a href="#cb1-545" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-546"><a href="#cb1-546" aria-hidden="true" tabindex="-1"></a><span class="co"># Specify weight decay</span></span>
<span id="cb1-547"><a href="#cb1-547" aria-hidden="true" tabindex="-1"></a><span class="fu">weight_decay</span><span class="kw">:</span></span>
<span id="cb1-548"><a href="#cb1-548" aria-hidden="true" tabindex="-1"></a><span class="co"># adamw hyperparams</span></span>
<span id="cb1-549"><a href="#cb1-549" aria-hidden="true" tabindex="-1"></a><span class="fu">adam_beta1</span><span class="kw">:</span></span>
<span id="cb1-550"><a href="#cb1-550" aria-hidden="true" tabindex="-1"></a><span class="fu">adam_beta2</span><span class="kw">:</span></span>
<span id="cb1-551"><a href="#cb1-551" aria-hidden="true" tabindex="-1"></a><span class="fu">adam_epsilon</span><span class="kw">:</span></span>
<span id="cb1-552"><a href="#cb1-552" aria-hidden="true" tabindex="-1"></a><span class="co"># Gradient clipping max norm</span></span>
<span id="cb1-553"><a href="#cb1-553" aria-hidden="true" tabindex="-1"></a><span class="fu">max_grad_norm</span><span class="kw">:</span></span>
<span id="cb1-554"><a href="#cb1-554" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-555"><a href="#cb1-555" aria-hidden="true" tabindex="-1"></a><span class="co"># Augmentation techniques</span></span>
<span id="cb1-556"><a href="#cb1-556" aria-hidden="true" tabindex="-1"></a><span class="co"># NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings</span></span>
<span id="cb1-557"><a href="#cb1-557" aria-hidden="true" tabindex="-1"></a><span class="co"># currently only supported on Llama and Mistral</span></span>
<span id="cb1-558"><a href="#cb1-558" aria-hidden="true" tabindex="-1"></a><span class="fu">neftune_noise_alpha</span><span class="kw">:</span></span>
<span id="cb1-559"><a href="#cb1-559" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-560"><a href="#cb1-560" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to bettertransformers</span></span>
<span id="cb1-561"><a href="#cb1-561" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_optimum</span><span class="kw">:</span></span>
<span id="cb1-562"><a href="#cb1-562" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use xformers attention patch https://github.com/facebookresearch/xformers:</span></span>
<span id="cb1-563"><a href="#cb1-563" aria-hidden="true" tabindex="-1"></a><span class="fu">xformers_attention</span><span class="kw">:</span></span>
<span id="cb1-564"><a href="#cb1-564" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:</span></span>
<span id="cb1-565"><a href="#cb1-565" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attention</span><span class="kw">:</span></span>
<span id="cb1-566"><a href="#cb1-566" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_cross_entropy</span><span class="kw">:</span><span class="co"> # Whether to use flash-attention cross entropy implementation - advanced use only</span></span>
<span id="cb1-567"><a href="#cb1-567" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_rms_norm</span><span class="kw">:</span><span class="co"> # Whether to use flash-attention rms norm implementation - advanced use only</span></span>
<span id="cb1-568"><a href="#cb1-568" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_fuse_qkv</span><span class="kw">:</span><span class="co"> # Whether to fuse QKV into a single operation</span></span>
<span id="cb1-569"><a href="#cb1-569" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_fuse_mlp</span><span class="kw">:</span><span class="co"> # Whether to fuse part of the MLP into a single operation</span></span>
<span id="cb1-570"><a href="#cb1-570" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use scaled-dot-product attention</span></span>
<span id="cb1-571"><a href="#cb1-571" aria-hidden="true" tabindex="-1"></a><span class="co"># https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html</span></span>
<span id="cb1-572"><a href="#cb1-572" aria-hidden="true" tabindex="-1"></a><span class="fu">sdp_attention</span><span class="kw">:</span></span>
<span id="cb1-573"><a href="#cb1-573" aria-hidden="true" tabindex="-1"></a><span class="co"># Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf</span></span>
<span id="cb1-574"><a href="#cb1-574" aria-hidden="true" tabindex="-1"></a><span class="fu">s2_attention</span><span class="kw">:</span></span>
<span id="cb1-575"><a href="#cb1-575" aria-hidden="true" tabindex="-1"></a><span class="co"># Optional[bool]. Whether to use low_cpu_mem_usage</span></span>
<span id="cb1-576"><a href="#cb1-576" aria-hidden="true" tabindex="-1"></a><span class="fu">low_cpu_mem_usage</span><span class="kw">:</span></span>
<span id="cb1-577"><a href="#cb1-577" aria-hidden="true" tabindex="-1"></a><span class="co"># Resume from a specific checkpoint dir</span></span>
<span id="cb1-578"><a href="#cb1-578" aria-hidden="true" tabindex="-1"></a><span class="fu">resume_from_checkpoint</span><span class="kw">:</span></span>
<span id="cb1-579"><a href="#cb1-579" aria-hidden="true" tabindex="-1"></a><span class="co"># Set to true if resume_from_checkpoint isn't set and you simply want training to resume where it left off.</span></span>
<span id="cb1-580"><a href="#cb1-580" aria-hidden="true" tabindex="-1"></a><span class="co"># Be careful with this being turned on between different models.</span></span>
<span id="cb1-581"><a href="#cb1-581" aria-hidden="true" tabindex="-1"></a><span class="fu">auto_resume_from_checkpoints</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
<span id="cb1-582"><a href="#cb1-582" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-583"><a href="#cb1-583" aria-hidden="true" tabindex="-1"></a><span class="co">## Multimodal section</span></span>
<span id="cb1-584"><a href="#cb1-584" aria-hidden="true" tabindex="-1"></a><span class="co"># int | tuple[int, int] | None. Size to resize images to, width x height.</span></span>
<span id="cb1-585"><a href="#cb1-585" aria-hidden="true" tabindex="-1"></a><span class="co"># Will read from model/processor config if not set.</span></span>
<span id="cb1-586"><a href="#cb1-586" aria-hidden="true" tabindex="-1"></a><span class="fu">image_size</span><span class="kw">:</span></span>
<span id="cb1-587"><a href="#cb1-587" aria-hidden="true" tabindex="-1"></a><span class="co"># str. Algorithm to use for image resizing. "bilinear", "bicubic", "lanczos". Default is "bilinear".</span></span>
<span id="cb1-588"><a href="#cb1-588" aria-hidden="true" tabindex="-1"></a><span class="fu">image_resize_algorithm</span><span class="kw">:</span><span class="at"> </span><span class="st">'bilinear'</span></span>
<span id="cb1-589"><a href="#cb1-589" aria-hidden="true" tabindex="-1"></a><span class="co">## End of multimodal section</span></span>
<span id="cb1-590"><a href="#cb1-590" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-591"><a href="#cb1-591" aria-hidden="true" tabindex="-1"></a><span class="co"># Don't mess with this, it's here for accelerate and torchrun</span></span>
<span id="cb1-592"><a href="#cb1-592" aria-hidden="true" tabindex="-1"></a><span class="fu">local_rank</span><span class="kw">:</span></span>
<span id="cb1-593"><a href="#cb1-593" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-594"><a href="#cb1-594" aria-hidden="true" tabindex="-1"></a><span class="co"># Add or change special tokens.</span></span>
<span id="cb1-595"><a href="#cb1-595" aria-hidden="true" tabindex="-1"></a><span class="co"># If you add tokens here, you don't need to add them to the `tokens` list.</span></span>
<span id="cb1-596"><a href="#cb1-596" aria-hidden="true" tabindex="-1"></a><span class="fu">special_tokens</span><span class="kw">:</span></span>
<span id="cb1-597"><a href="#cb1-597" aria-hidden="true" tabindex="-1"></a><span class="co"> # bos_token: "&lt;s&gt;"</span></span>
<span id="cb1-598"><a href="#cb1-598" aria-hidden="true" tabindex="-1"></a><span class="co"> # eos_token: "&lt;/s&gt;"</span></span>
<span id="cb1-599"><a href="#cb1-599" aria-hidden="true" tabindex="-1"></a><span class="co"> # unk_token: "&lt;unk&gt;"</span></span>
<span id="cb1-600"><a href="#cb1-600" aria-hidden="true" tabindex="-1"></a><span class="co"> # pad_token: "[PAD]"</span></span>
<span id="cb1-601"><a href="#cb1-601" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-602"><a href="#cb1-602" aria-hidden="true" tabindex="-1"></a><span class="co"># Add extra tokens.</span></span>
<span id="cb1-603"><a href="#cb1-603" aria-hidden="true" tabindex="-1"></a><span class="fu">tokens</span><span class="kw">:</span></span>
<span id="cb1-604"><a href="#cb1-604" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-605"><a href="#cb1-605" aria-hidden="true" tabindex="-1"></a><span class="co"># Mapping token_id to new_token_string to override reserved added_tokens in the tokenizer.</span></span>
<span id="cb1-606"><a href="#cb1-606" aria-hidden="true" tabindex="-1"></a><span class="co"># Only works for tokens that are not part of the base vocab (aka are added_tokens).</span></span>
<span id="cb1-607"><a href="#cb1-607" aria-hidden="true" tabindex="-1"></a><span class="co"># Can be checked if they exist in tokenizer.json added_tokens.</span></span>
<span id="cb1-608"><a href="#cb1-608" aria-hidden="true" tabindex="-1"></a><span class="fu">added_tokens_overrides</span><span class="kw">:</span><span class="co"> # Dict[int, str]</span></span>
<span id="cb1-609"><a href="#cb1-609" aria-hidden="true" tabindex="-1"></a><span class="co"># 128041: "<|im_start|>"</span></span>
<span id="cb1-610"><a href="#cb1-610" aria-hidden="true" tabindex="-1"></a><span class="co"># 128042: "<|im_end|>"</span></span>
<span id="cb1-611"><a href="#cb1-611" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-612"><a href="#cb1-612" aria-hidden="true" tabindex="-1"></a><span class="co"># FSDP</span></span>
<span id="cb1-613"><a href="#cb1-613" aria-hidden="true" tabindex="-1"></a><span class="fu">fsdp</span><span class="kw">:</span></span>
<span id="cb1-614"><a href="#cb1-614" aria-hidden="true" tabindex="-1"></a><span class="fu">fsdp_config</span><span class="kw">:</span></span>
<span id="cb1-615"><a href="#cb1-615" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-616"><a href="#cb1-616" aria-hidden="true" tabindex="-1"></a><span class="co"># Deepspeed config path. e.g., deepspeed_configs/zero3.json</span></span>
<span id="cb1-617"><a href="#cb1-617" aria-hidden="true" tabindex="-1"></a><span class="fu">deepspeed</span><span class="kw">:</span></span>
<span id="cb1-618"><a href="#cb1-618" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-619"><a href="#cb1-619" aria-hidden="true" tabindex="-1"></a><span class="co"># Advanced DDP Arguments</span></span>
<span id="cb1-620"><a href="#cb1-620" aria-hidden="true" tabindex="-1"></a><span class="fu">ddp_timeout</span><span class="kw">:</span></span>
<span id="cb1-621"><a href="#cb1-621" aria-hidden="true" tabindex="-1"></a><span class="fu">ddp_bucket_cap_mb</span><span class="kw">:</span></span>
<span id="cb1-622"><a href="#cb1-622" aria-hidden="true" tabindex="-1"></a><span class="fu">ddp_broadcast_buffers</span><span class="kw">:</span></span>
<span id="cb1-623"><a href="#cb1-623" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-624"><a href="#cb1-624" aria-hidden="true" tabindex="-1"></a><span class="co"># Sequence parallelism</span></span>
<span id="cb1-625"><a href="#cb1-625" aria-hidden="true" tabindex="-1"></a><span class="co"># Set to a divisor of the number of GPUs available to split sequences into chunks of equal size.</span></span>
<span id="cb1-626"><a href="#cb1-626" aria-hidden="true" tabindex="-1"></a><span class="co"># Use in long context training to prevent OOM when sequences cannot fit into a single GPU's VRAM.</span></span>
<span id="cb1-627"><a href="#cb1-627" aria-hidden="true" tabindex="-1"></a><span class="co"># E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized</span></span>
<span id="cb1-628"><a href="#cb1-628" aria-hidden="true" tabindex="-1"></a><span class="co"># subsequences, or set to 4 to split into four equal-sized subsequences.</span></span>
<span id="cb1-629"><a href="#cb1-629" aria-hidden="true" tabindex="-1"></a><span class="co"># See https://axolotl-ai-cloud.github.io/axolotl/docs/sequence_parallelism.html for more details.</span></span>
<span id="cb1-630"><a href="#cb1-630" aria-hidden="true" tabindex="-1"></a><span class="fu">sequence_parallel_degree</span><span class="kw">:</span></span>
<span id="cb1-631"><a href="#cb1-631" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-632"><a href="#cb1-632" aria-hidden="true" tabindex="-1"></a><span class="co"># Path to torch distx for optim 'adamw_anyprecision'</span></span>
<span id="cb1-633"><a href="#cb1-633" aria-hidden="true" tabindex="-1"></a><span class="fu">torchdistx_path</span><span class="kw">:</span></span>
<span id="cb1-634"><a href="#cb1-634" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-635"><a href="#cb1-635" aria-hidden="true" tabindex="-1"></a><span class="co"># Set to a HF dataset for type: 'completion' to stream data instead of pre-tokenizing</span></span>
<span id="cb1-636"><a href="#cb1-636" aria-hidden="true" tabindex="-1"></a><span class="fu">pretraining_dataset</span><span class="kw">:</span></span>
<span id="cb1-637"><a href="#cb1-637" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-638"><a href="#cb1-638" aria-hidden="true" tabindex="-1"></a><span class="co"># Debug mode</span></span>
<span id="cb1-639"><a href="#cb1-639" aria-hidden="true" tabindex="-1"></a><span class="fu">debug</span><span class="kw">:</span></span>
<span id="cb1-640"><a href="#cb1-640" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-641"><a href="#cb1-641" aria-hidden="true" tabindex="-1"></a><span class="co"># Seed</span></span>
<span id="cb1-642"><a href="#cb1-642" aria-hidden="true" tabindex="-1"></a><span class="fu">seed</span><span class="kw">:</span></span>
<span id="cb1-643"><a href="#cb1-643" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-644"><a href="#cb1-644" aria-hidden="true" tabindex="-1"></a><span class="co"># Allow overwriting the yml config from the cli</span></span>
<span id="cb1-645"><a href="#cb1-645" aria-hidden="true" tabindex="-1"></a><span class="fu">strict</span><span class="kw">:</span></span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
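<p>The <code>sequence_parallel_degree</code> option above splits each sequence into equal-sized contiguous chunks, one per GPU in the sequence-parallel group. A minimal Python sketch of that chunking; the <code>split_sequence</code> helper here is hypothetical, not axolotl's internal API:</p>

```python
# Illustrative sketch of sequence-parallel chunking: split one token
# sequence into `degree` equal contiguous chunks, one per rank.
# `split_sequence` is a hypothetical helper, not axolotl's internal API.
def split_sequence(tokens, degree):
    if len(tokens) % degree != 0:
        raise ValueError("sequence length must be divisible by sequence_parallel_degree")
    chunk = len(tokens) // degree
    return [tokens[i * chunk:(i + 1) * chunk] for i in range(degree)]

# 8 tokens across a degree-2 group -> two chunks of 4 tokens each
chunks = split_sequence(list(range(8)), 2)
```

<p>With 4 GPUs available, a degree of 2 yields two sequence-parallel groups of two GPUs each, while a degree of 4 places one quarter of every sequence on each GPU.</p>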
<span id="cb1-463"><a href="#cb1-463" aria-hidden="true" tabindex="-1"></a><span class="fu">do_causal_lm_eval</span><span class="kw">:</span><span class="co"> # Whether to run causal language model evaluation for metrics in `eval_causal_lm_metrics`.</span></span>
<span id="cb1-464"><a href="#cb1-464" aria-hidden="true" tabindex="-1"></a><span class="fu">eval_causal_lm_metrics</span><span class="kw">:</span><span class="co"> # HF evaluate metrics used during evaluation. Default is ["sacrebleu", "comet", "ter", "chrf", "perplexity"]</span></span>
<span id="cb1-465"><a href="#cb1-465" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-466"><a href="#cb1-466" aria-hidden="true" tabindex="-1"></a><span class="fu">profiler_steps</span><span class="kw">:</span><span class="co"> # enable the pytorch profiler to capture the first N steps of training to the output_dir.</span></span>
<span id="cb1-467"><a href="#cb1-467" aria-hidden="true" tabindex="-1"></a><span class="co"> # see https://pytorch.org/blog/understanding-gpu-memory-1/ for more information</span></span>
<span id="cb1-468"><a href="#cb1-468" aria-hidden="true" tabindex="-1"></a><span class="co"> # snapshots can be visualized @ https://pytorch.org/memory_viz</span></span>
<span id="cb1-469"><a href="#cb1-469" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-470"><a href="#cb1-470" aria-hidden="true" tabindex="-1"></a><span class="fu">loss_watchdog_threshold</span><span class="kw">:</span><span class="co"> # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)</span></span>
<span id="cb1-471"><a href="#cb1-471" aria-hidden="true" tabindex="-1"></a><span class="fu">loss_watchdog_patience</span><span class="kw">:</span><span class="co"> # Number of high-loss steps in a row before the trainer aborts (default: 3)</span></span>
<span id="cb1-472"><a href="#cb1-472" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-473"><a href="#cb1-473" aria-hidden="true" tabindex="-1"></a><span class="co"># Save model as safetensors (requires the safetensors package)</span></span>
<span id="cb1-474"><a href="#cb1-474" aria-hidden="true" tabindex="-1"></a><span class="fu">save_safetensors</span><span class="kw">:</span></span>
<span id="cb1-475"><a href="#cb1-475" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-476"><a href="#cb1-476" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to include the human's prompt in the training labels or mask it out</span></span>
<span id="cb1-477"><a href="#cb1-477" aria-hidden="true" tabindex="-1"></a><span class="fu">train_on_inputs</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
<span id="cb1-478"><a href="#cb1-478" aria-hidden="true" tabindex="-1"></a><span class="co"># Group similarly sized data to minimize padding.</span></span>
<span id="cb1-479"><a href="#cb1-479" aria-hidden="true" tabindex="-1"></a><span class="co"># May be slower to start, as it must download and sort the entire dataset.</span></span>
<span id="cb1-480"><a href="#cb1-480" aria-hidden="true" tabindex="-1"></a><span class="co"># Note that training loss may have an oscillating pattern with this enabled.</span></span>
<span id="cb1-481"><a href="#cb1-481" aria-hidden="true" tabindex="-1"></a><span class="fu">group_by_length</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
<span id="cb1-482"><a href="#cb1-482" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-483"><a href="#cb1-483" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use gradient checkpointing https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing</span></span>
<span id="cb1-484"><a href="#cb1-484" aria-hidden="true" tabindex="-1"></a><span class="fu">gradient_checkpointing</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
<span id="cb1-485"><a href="#cb1-485" aria-hidden="true" tabindex="-1"></a><span class="co"># additional kwargs to pass to the trainer for gradient checkpointing</span></span>
<span id="cb1-486"><a href="#cb1-486" aria-hidden="true" tabindex="-1"></a><span class="co"># gradient_checkpointing_kwargs:</span></span>
<span id="cb1-487"><a href="#cb1-487" aria-hidden="true" tabindex="-1"></a><span class="co"># use_reentrant: true</span></span>
<span id="cb1-488"><a href="#cb1-488" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-489"><a href="#cb1-489" aria-hidden="true" tabindex="-1"></a><span class="co"># Stop training after this many evaluation losses have increased in a row</span></span>
<span id="cb1-490"><a href="#cb1-490" aria-hidden="true" tabindex="-1"></a><span class="co"># https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback</span></span>
<span id="cb1-491"><a href="#cb1-491" aria-hidden="true" tabindex="-1"></a><span class="fu">early_stopping_patience</span><span class="kw">:</span><span class="at"> </span><span class="dv">3</span></span>
<span id="cb1-492"><a href="#cb1-492" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-493"><a href="#cb1-493" aria-hidden="true" tabindex="-1"></a><span class="co"># Specify a scheduler and kwargs to use with the optimizer</span></span>
<span id="cb1-494"><a href="#cb1-494" aria-hidden="true" tabindex="-1"></a><span class="fu">lr_scheduler</span><span class="kw">:</span><span class="co"> # 'one_cycle' | 'rex' | 'log_sweep' | empty for cosine</span></span>
<span id="cb1-495"><a href="#cb1-495" aria-hidden="true" tabindex="-1"></a><span class="fu">lr_scheduler_kwargs</span><span class="kw">:</span></span>
<span id="cb1-496"><a href="#cb1-496" aria-hidden="true" tabindex="-1"></a><span class="fu">cosine_min_lr_ratio</span><span class="kw">:</span><span class="co"> # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr</span></span>
<span id="cb1-497"><a href="#cb1-497" aria-hidden="true" tabindex="-1"></a><span class="fu">cosine_constant_lr_ratio</span><span class="kw">:</span><span class="co"> # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)</span></span>
<span id="cb1-498"><a href="#cb1-498" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-499"><a href="#cb1-499" aria-hidden="true" tabindex="-1"></a><span class="co"># For one_cycle optim</span></span>
<span id="cb1-500"><a href="#cb1-500" aria-hidden="true" tabindex="-1"></a><span class="fu">lr_div_factor</span><span class="kw">:</span><span class="co"> # Learning rate div factor</span></span>
<span id="cb1-501"><a href="#cb1-501" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-502"><a href="#cb1-502" aria-hidden="true" tabindex="-1"></a><span class="co"># Specify optimizer</span></span>
<span id="cb1-503"><a href="#cb1-503" aria-hidden="true" tabindex="-1"></a><span class="co"># Valid values are driven by the Transformers OptimizerNames class, see:</span></span>
<span id="cb1-504"><a href="#cb1-504" aria-hidden="true" tabindex="-1"></a><span class="co"># https://github.com/huggingface/transformers/blob/cbf924b76c03828101a34069a96d209314114fd5/src/transformers/training_args.py#L144-L189</span></span>
<span id="cb1-505"><a href="#cb1-505" aria-hidden="true" tabindex="-1"></a><span class="co">#</span></span>
<span id="cb1-506"><a href="#cb1-506" aria-hidden="true" tabindex="-1"></a><span class="co"># Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of</span></span>
<span id="cb1-507"><a href="#cb1-507" aria-hidden="true" tabindex="-1"></a><span class="co"># torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used</span></span>
<span id="cb1-508"><a href="#cb1-508" aria-hidden="true" tabindex="-1"></a><span class="co"># in the examples/ for your model and fine-tuning use case.</span></span>
<span id="cb1-509"><a href="#cb1-509" aria-hidden="true" tabindex="-1"></a><span class="co">#</span></span>
<span id="cb1-510"><a href="#cb1-510" aria-hidden="true" tabindex="-1"></a><span class="co"># Valid values for 'optimizer' include:</span></span>
<span id="cb1-511"><a href="#cb1-511" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch</span></span>
<span id="cb1-512"><a href="#cb1-512" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch_fused</span></span>
<span id="cb1-513"><a href="#cb1-513" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch_xla</span></span>
<span id="cb1-514"><a href="#cb1-514" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch_npu_fused</span></span>
<span id="cb1-515"><a href="#cb1-515" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_apex_fused</span></span>
<span id="cb1-516"><a href="#cb1-516" aria-hidden="true" tabindex="-1"></a><span class="co"># - adopt_adamw (an EXPERIMENTAL optimizer, only for torch version >= 2.5.1)</span></span>
<span id="cb1-517"><a href="#cb1-517" aria-hidden="true" tabindex="-1"></a><span class="co"># - adafactor</span></span>
<span id="cb1-518"><a href="#cb1-518" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_anyprecision</span></span>
<span id="cb1-519"><a href="#cb1-519" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_torch_4bit</span></span>
<span id="cb1-520"><a href="#cb1-520" aria-hidden="true" tabindex="-1"></a><span class="co"># - ademamix</span></span>
<span id="cb1-521"><a href="#cb1-521" aria-hidden="true" tabindex="-1"></a><span class="co"># - sgd</span></span>
<span id="cb1-522"><a href="#cb1-522" aria-hidden="true" tabindex="-1"></a><span class="co"># - adagrad</span></span>
<span id="cb1-523"><a href="#cb1-523" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_bnb_8bit</span></span>
<span id="cb1-524"><a href="#cb1-524" aria-hidden="true" tabindex="-1"></a><span class="co"># - adamw_8bit # alias for adamw_bnb_8bit</span></span>
<span id="cb1-525"><a href="#cb1-525" aria-hidden="true" tabindex="-1"></a><span class="co"># - ademamix_8bit</span></span>
<span id="cb1-526"><a href="#cb1-526" aria-hidden="true" tabindex="-1"></a><span class="co"># - lion_8bit</span></span>
<span id="cb1-527"><a href="#cb1-527" aria-hidden="true" tabindex="-1"></a><span class="co"># - lion_32bit</span></span>
<span id="cb1-528"><a href="#cb1-528" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_adamw_32bit</span></span>
<span id="cb1-529"><a href="#cb1-529" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_adamw_8bit</span></span>
<span id="cb1-530"><a href="#cb1-530" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_ademamix_32bit</span></span>
<span id="cb1-531"><a href="#cb1-531" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_ademamix_8bit</span></span>
<span id="cb1-532"><a href="#cb1-532" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_lion_32bit</span></span>
<span id="cb1-533"><a href="#cb1-533" aria-hidden="true" tabindex="-1"></a><span class="co"># - paged_lion_8bit</span></span>
<span id="cb1-534"><a href="#cb1-534" aria-hidden="true" tabindex="-1"></a><span class="co"># - rmsprop</span></span>
<span id="cb1-535"><a href="#cb1-535" aria-hidden="true" tabindex="-1"></a><span class="co"># - rmsprop_bnb</span></span>
<span id="cb1-536"><a href="#cb1-536" aria-hidden="true" tabindex="-1"></a><span class="co"># - rmsprop_bnb_8bit</span></span>
<span id="cb1-537"><a href="#cb1-537" aria-hidden="true" tabindex="-1"></a><span class="co"># - rmsprop_bnb_32bit</span></span>
<span id="cb1-538"><a href="#cb1-538" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw</span></span>
<span id="cb1-539"><a href="#cb1-539" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw_8bit</span></span>
<span id="cb1-540"><a href="#cb1-540" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adafactor</span></span>
<span id="cb1-541"><a href="#cb1-541" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw_layerwise</span></span>
<span id="cb1-542"><a href="#cb1-542" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adamw_8bit_layerwise</span></span>
<span id="cb1-543"><a href="#cb1-543" aria-hidden="true" tabindex="-1"></a><span class="co"># - galore_adafactor_layerwise</span></span>
<span id="cb1-544"><a href="#cb1-544" aria-hidden="true" tabindex="-1"></a><span class="co"># - lomo</span></span>
<span id="cb1-545"><a href="#cb1-545" aria-hidden="true" tabindex="-1"></a><span class="co"># - adalomo</span></span>
<span id="cb1-546"><a href="#cb1-546" aria-hidden="true" tabindex="-1"></a><span class="co"># - grokadamw</span></span>
<span id="cb1-547"><a href="#cb1-547" aria-hidden="true" tabindex="-1"></a><span class="co"># - schedule_free_adamw</span></span>
<span id="cb1-548"><a href="#cb1-548" aria-hidden="true" tabindex="-1"></a><span class="co"># - schedule_free_sgd</span></span>
<span id="cb1-549"><a href="#cb1-549" aria-hidden="true" tabindex="-1"></a><span class="co"># - apollo_adamw</span></span>
<span id="cb1-550"><a href="#cb1-550" aria-hidden="true" tabindex="-1"></a><span class="co"># - apollo_adamw_layerwise</span></span>
<span id="cb1-551"><a href="#cb1-551" aria-hidden="true" tabindex="-1"></a><span class="co">#</span></span>
<span id="cb1-552"><a href="#cb1-552" aria-hidden="true" tabindex="-1"></a><span class="co"># Additional custom optimizers include:</span></span>
<span id="cb1-553"><a href="#cb1-553" aria-hidden="true" tabindex="-1"></a><span class="co"># - optimi_adamw</span></span>
<span id="cb1-554"><a href="#cb1-554" aria-hidden="true" tabindex="-1"></a><span class="co"># - ao_adamw_8bit</span></span>
<span id="cb1-555"><a href="#cb1-555" aria-hidden="true" tabindex="-1"></a><span class="co"># - ao_adamw_fp8</span></span>
<span id="cb1-556"><a href="#cb1-556" aria-hidden="true" tabindex="-1"></a><span class="fu">optimizer</span><span class="kw">:</span></span>
<span id="cb1-557"><a href="#cb1-557" aria-hidden="true" tabindex="-1"></a><span class="co"># Dictionary of arguments to pass to the optimizer</span></span>
<span id="cb1-558"><a href="#cb1-558" aria-hidden="true" tabindex="-1"></a><span class="fu">optim_args</span><span class="kw">:</span></span>
<span id="cb1-559"><a href="#cb1-559" aria-hidden="true" tabindex="-1"></a><span class="co"># For Galore Optimizers the following optim_args are available</span></span>
<span id="cb1-560"><a href="#cb1-560" aria-hidden="true" tabindex="-1"></a><span class="co"># rank: # type: int</span></span>
<span id="cb1-561"><a href="#cb1-561" aria-hidden="true" tabindex="-1"></a><span class="co"># update_proj_gap: # type: int</span></span>
<span id="cb1-562"><a href="#cb1-562" aria-hidden="true" tabindex="-1"></a><span class="co"># scale: # type: float</span></span>
<span id="cb1-563"><a href="#cb1-563" aria-hidden="true" tabindex="-1"></a><span class="co"># proj_type: # type: str, default = std</span></span>
<span id="cb1-564"><a href="#cb1-564" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-565"><a href="#cb1-565" aria-hidden="true" tabindex="-1"></a><span class="co"># The target modules to optimize, i.e. the module names that you would like to train; currently used only for the GaLore algorithm</span></span>
<span id="cb1-566"><a href="#cb1-566" aria-hidden="true" tabindex="-1"></a><span class="fu">optim_target_modules</span><span class="kw">:</span></span>
<span id="cb1-567"><a href="#cb1-567" aria-hidden="true" tabindex="-1"></a><span class="co"># - self_attn # for llama</span></span>
<span id="cb1-568"><a href="#cb1-568" aria-hidden="true" tabindex="-1"></a><span class="co"># - mlp</span></span>
<span id="cb1-569"><a href="#cb1-569" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-570"><a href="#cb1-570" aria-hidden="true" tabindex="-1"></a><span class="co"># Specify weight decay</span></span>
<span id="cb1-571"><a href="#cb1-571" aria-hidden="true" tabindex="-1"></a><span class="fu">weight_decay</span><span class="kw">:</span></span>
<span id="cb1-572"><a href="#cb1-572" aria-hidden="true" tabindex="-1"></a><span class="co"># adamw hyperparams</span></span>
<span id="cb1-573"><a href="#cb1-573" aria-hidden="true" tabindex="-1"></a><span class="fu">adam_beta1</span><span class="kw">:</span></span>
<span id="cb1-574"><a href="#cb1-574" aria-hidden="true" tabindex="-1"></a><span class="fu">adam_beta2</span><span class="kw">:</span></span>
<span id="cb1-575"><a href="#cb1-575" aria-hidden="true" tabindex="-1"></a><span class="fu">adam_epsilon</span><span class="kw">:</span></span>
<span id="cb1-576"><a href="#cb1-576" aria-hidden="true" tabindex="-1"></a><span class="co"># Gradient clipping max norm</span></span>
<span id="cb1-577"><a href="#cb1-577" aria-hidden="true" tabindex="-1"></a><span class="fu">max_grad_norm</span><span class="kw">:</span></span>
<span id="cb1-578"><a href="#cb1-578" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-579"><a href="#cb1-579" aria-hidden="true" tabindex="-1"></a><span class="co"># Augmentation techniques</span></span>
<span id="cb1-580"><a href="#cb1-580" aria-hidden="true" tabindex="-1"></a><span class="co"># NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings</span></span>
<span id="cb1-581"><a href="#cb1-581" aria-hidden="true" tabindex="-1"></a><span class="co"># currently only supported on Llama and Mistral</span></span>
<span id="cb1-582"><a href="#cb1-582" aria-hidden="true" tabindex="-1"></a><span class="fu">neftune_noise_alpha</span><span class="kw">:</span></span>
<span id="cb1-583"><a href="#cb1-583" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-584"><a href="#cb1-584" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use bettertransformers</span></span>
<span id="cb1-585"><a href="#cb1-585" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_optimum</span><span class="kw">:</span></span>
<span id="cb1-586"><a href="#cb1-586" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use xformers attention patch https://github.com/facebookresearch/xformers:</span></span>
<span id="cb1-587"><a href="#cb1-587" aria-hidden="true" tabindex="-1"></a><span class="fu">xformers_attention</span><span class="kw">:</span></span>
<span id="cb1-588"><a href="#cb1-588" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:</span></span>
<span id="cb1-589"><a href="#cb1-589" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attention</span><span class="kw">:</span></span>
<span id="cb1-590"><a href="#cb1-590" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_cross_entropy</span><span class="kw">:</span><span class="co"> # Whether to use flash-attention cross entropy implementation - advanced use only</span></span>
<span id="cb1-591"><a href="#cb1-591" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_rms_norm</span><span class="kw">:</span><span class="co"> # Whether to use flash-attention rms norm implementation - advanced use only</span></span>
<span id="cb1-592"><a href="#cb1-592" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_fuse_qkv</span><span class="kw">:</span><span class="co"> # Whether to fuse QKV into a single operation</span></span>
<span id="cb1-593"><a href="#cb1-593" aria-hidden="true" tabindex="-1"></a><span class="fu">flash_attn_fuse_mlp</span><span class="kw">:</span><span class="co"> # Whether to fuse part of the MLP into a single operation</span></span>
|
||||
<span id="cb1-594"><a href="#cb1-594" aria-hidden="true" tabindex="-1"></a><span class="co"># Whether to use scaled-dot-product attention</span></span>
|
||||
<span id="cb1-595"><a href="#cb1-595" aria-hidden="true" tabindex="-1"></a><span class="co"># https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html</span></span>
|
||||
<span id="cb1-596"><a href="#cb1-596" aria-hidden="true" tabindex="-1"></a><span class="fu">sdp_attention</span><span class="kw">:</span></span>
|
||||
<span id="cb1-597"><a href="#cb1-597" aria-hidden="true" tabindex="-1"></a><span class="co"># Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf</span></span>
|
||||
<span id="cb1-598"><a href="#cb1-598" aria-hidden="true" tabindex="-1"></a><span class="fu">s2_attention</span><span class="kw">:</span></span>
|
||||
<span id="cb1-599"><a href="#cb1-599" aria-hidden="true" tabindex="-1"></a><span class="co"># Optional[bool]. Whether to use low_cpu_mem_usage</span></span>
|
||||
<span id="cb1-600"><a href="#cb1-600" aria-hidden="true" tabindex="-1"></a><span class="fu">low_cpu_mem_usage</span><span class="kw">:</span></span>
|
||||
<span id="cb1-601"><a href="#cb1-601" aria-hidden="true" tabindex="-1"></a><span class="co"># Resume from a specific checkpoint dir</span></span>
|
||||
<span id="cb1-602"><a href="#cb1-602" aria-hidden="true" tabindex="-1"></a><span class="fu">resume_from_checkpoint</span><span class="kw">:</span></span>
|
||||
<span id="cb1-603"><a href="#cb1-603" aria-hidden="true" tabindex="-1"></a><span class="co"># If resume_from_checkpoint isn't set and you simply want it to start where it left off.</span></span>
|
||||
<span id="cb1-604"><a href="#cb1-604" aria-hidden="true" tabindex="-1"></a><span class="co"># Be careful with this being turned on between different models.</span></span>
|
||||
<span id="cb1-605"><a href="#cb1-605" aria-hidden="true" tabindex="-1"></a><span class="fu">auto_resume_from_checkpoints</span><span class="kw">:</span><span class="at"> </span><span class="ch">false</span></span>
|
||||
<span id="cb1-606"><a href="#cb1-606" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-607"><a href="#cb1-607" aria-hidden="true" tabindex="-1"></a><span class="co">## Multimodal section</span></span>
|
||||
<span id="cb1-608"><a href="#cb1-608" aria-hidden="true" tabindex="-1"></a><span class="co"># int | tuple[int, int] | None . Size to resize images to, width x height.</span></span>
|
||||
<span id="cb1-609"><a href="#cb1-609" aria-hidden="true" tabindex="-1"></a><span class="co"># Will read from model/processor config if not set.</span></span>
|
||||
<span id="cb1-610"><a href="#cb1-610" aria-hidden="true" tabindex="-1"></a><span class="fu">image_size</span><span class="kw">:</span></span>
|
||||
<span id="cb1-611"><a href="#cb1-611" aria-hidden="true" tabindex="-1"></a><span class="co"># str. Algorithm to use for image resizing. "bilinear", "bicubic", "lanczos". Default is "bilinear".</span></span>
|
||||
<span id="cb1-612"><a href="#cb1-612" aria-hidden="true" tabindex="-1"></a><span class="fu">image_resize_algorithm</span><span class="kw">:</span><span class="at"> </span><span class="st">'bilinear'</span></span>
|
||||
<span id="cb1-613"><a href="#cb1-613" aria-hidden="true" tabindex="-1"></a><span class="co">## End of multimodal section</span></span>
|
||||
<span id="cb1-614"><a href="#cb1-614" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-615"><a href="#cb1-615" aria-hidden="true" tabindex="-1"></a><span class="co"># Don't mess with this, it's here for accelerate and torchrun</span></span>
|
||||
<span id="cb1-616"><a href="#cb1-616" aria-hidden="true" tabindex="-1"></a><span class="fu">local_rank</span><span class="kw">:</span></span>
|
||||
<span id="cb1-617"><a href="#cb1-617" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-618"><a href="#cb1-618" aria-hidden="true" tabindex="-1"></a><span class="co"># Add or change special tokens.</span></span>
|
||||
<span id="cb1-619"><a href="#cb1-619" aria-hidden="true" tabindex="-1"></a><span class="co"># If you add tokens here, you don't need to add them to the `tokens` list.</span></span>
|
||||
<span id="cb1-620"><a href="#cb1-620" aria-hidden="true" tabindex="-1"></a><span class="fu">special_tokens</span><span class="kw">:</span></span>
|
||||
<span id="cb1-621"><a href="#cb1-621" aria-hidden="true" tabindex="-1"></a><span class="co"> # bos_token: "<s>"</span></span>
|
||||
<span id="cb1-622"><a href="#cb1-622" aria-hidden="true" tabindex="-1"></a><span class="co"> # eos_token: "</s>"</span></span>
|
||||
<span id="cb1-623"><a href="#cb1-623" aria-hidden="true" tabindex="-1"></a><span class="co"> # unk_token: "<unk>"</span></span>
|
||||
<span id="cb1-624"><a href="#cb1-624" aria-hidden="true" tabindex="-1"></a><span class="co"> # pad_token: "[PAD]"</span></span>
|
||||
<span id="cb1-625"><a href="#cb1-625" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-626"><a href="#cb1-626" aria-hidden="true" tabindex="-1"></a><span class="co"># Add extra tokens.</span></span>
|
||||
<span id="cb1-627"><a href="#cb1-627" aria-hidden="true" tabindex="-1"></a><span class="fu">tokens</span><span class="kw">:</span></span>
|
||||
<span id="cb1-628"><a href="#cb1-628" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-629"><a href="#cb1-629" aria-hidden="true" tabindex="-1"></a><span class="co"># Mapping token_id to new_token_string to override reserved added_tokens in the tokenizer.</span></span>
|
||||
<span id="cb1-630"><a href="#cb1-630" aria-hidden="true" tabindex="-1"></a><span class="co"># Only works for tokens that are not part of the base vocab (aka are added_tokens).</span></span>
|
||||
<span id="cb1-631"><a href="#cb1-631" aria-hidden="true" tabindex="-1"></a><span class="co"># Can be checked if they exist in tokenizer.json added_tokens.</span></span>
|
||||
<span id="cb1-632"><a href="#cb1-632" aria-hidden="true" tabindex="-1"></a><span class="fu">added_tokens_overrides</span><span class="kw">:</span><span class="co"> # Dict[int, str]</span></span>
|
||||
<span id="cb1-633"><a href="#cb1-633" aria-hidden="true" tabindex="-1"></a><span class="co"># 128041: "<|im_start|>"</span></span>
|
||||
<span id="cb1-634"><a href="#cb1-634" aria-hidden="true" tabindex="-1"></a><span class="co"># 128042: "<|im_end|>"</span></span>
|
||||
<span id="cb1-635"><a href="#cb1-635" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-636"><a href="#cb1-636" aria-hidden="true" tabindex="-1"></a><span class="co"># FSDP</span></span>
|
||||
<span id="cb1-637"><a href="#cb1-637" aria-hidden="true" tabindex="-1"></a><span class="fu">fsdp</span><span class="kw">:</span></span>
|
||||
<span id="cb1-638"><a href="#cb1-638" aria-hidden="true" tabindex="-1"></a><span class="fu">fsdp_config</span><span class="kw">:</span></span>
|
||||
<span id="cb1-639"><a href="#cb1-639" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-640"><a href="#cb1-640" aria-hidden="true" tabindex="-1"></a><span class="co"># Deepspeed config path. e.g., deepspeed_configs/zero3.json</span></span>
|
||||
<span id="cb1-641"><a href="#cb1-641" aria-hidden="true" tabindex="-1"></a><span class="fu">deepspeed</span><span class="kw">:</span></span>
|
||||
<span id="cb1-642"><a href="#cb1-642" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-643"><a href="#cb1-643" aria-hidden="true" tabindex="-1"></a><span class="co"># Advanced DDP Arguments</span></span>
|
||||
<span id="cb1-644"><a href="#cb1-644" aria-hidden="true" tabindex="-1"></a><span class="fu">ddp_timeout</span><span class="kw">:</span></span>
|
||||
<span id="cb1-645"><a href="#cb1-645" aria-hidden="true" tabindex="-1"></a><span class="fu">ddp_bucket_cap_mb</span><span class="kw">:</span></span>
|
||||
<span id="cb1-646"><a href="#cb1-646" aria-hidden="true" tabindex="-1"></a><span class="fu">ddp_broadcast_buffers</span><span class="kw">:</span></span>
|
||||
<span id="cb1-647"><a href="#cb1-647" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-648"><a href="#cb1-648" aria-hidden="true" tabindex="-1"></a><span class="co"># Sequence parallelism</span></span>
|
||||
<span id="cb1-649"><a href="#cb1-649" aria-hidden="true" tabindex="-1"></a><span class="co"># Set to a divisor of the number of GPUs available to split sequences into chunks of equal size.</span></span>
|
||||
<span id="cb1-650"><a href="#cb1-650" aria-hidden="true" tabindex="-1"></a><span class="co"># Use in long context training to prevent OOM when sequences cannot fit into a single GPU's VRAM.</span></span>
|
||||
<span id="cb1-651"><a href="#cb1-651" aria-hidden="true" tabindex="-1"></a><span class="co"># E.g., if 4 GPUs are available, set this value to 2 to split each sequence into two equal-sized</span></span>
|
||||
<span id="cb1-652"><a href="#cb1-652" aria-hidden="true" tabindex="-1"></a><span class="co"># subsequences, or set to 4 to split into four equal-sized subsequences.</span></span>
|
||||
<span id="cb1-653"><a href="#cb1-653" aria-hidden="true" tabindex="-1"></a><span class="co"># See https://axolotl-ai-cloud.github.io/axolotl/docs/sequence_parallelism.html for more details.</span></span>
|
||||
<span id="cb1-654"><a href="#cb1-654" aria-hidden="true" tabindex="-1"></a><span class="fu">sequence_parallel_degree</span><span class="kw">:</span></span>
|
||||
<span id="cb1-655"><a href="#cb1-655" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-656"><a href="#cb1-656" aria-hidden="true" tabindex="-1"></a><span class="co"># Path to torch distx for optim 'adamw_anyprecision'</span></span>
|
||||
<span id="cb1-657"><a href="#cb1-657" aria-hidden="true" tabindex="-1"></a><span class="fu">torchdistx_path</span><span class="kw">:</span></span>
|
||||
<span id="cb1-658"><a href="#cb1-658" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-659"><a href="#cb1-659" aria-hidden="true" tabindex="-1"></a><span class="co"># Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize</span></span>
|
||||
<span id="cb1-660"><a href="#cb1-660" aria-hidden="true" tabindex="-1"></a><span class="fu">pretraining_dataset</span><span class="kw">:</span></span>
|
||||
<span id="cb1-661"><a href="#cb1-661" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-662"><a href="#cb1-662" aria-hidden="true" tabindex="-1"></a><span class="co"># Debug mode</span></span>
|
||||
<span id="cb1-663"><a href="#cb1-663" aria-hidden="true" tabindex="-1"></a><span class="fu">debug</span><span class="kw">:</span></span>
|
||||
<span id="cb1-664"><a href="#cb1-664" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-665"><a href="#cb1-665" aria-hidden="true" tabindex="-1"></a><span class="co"># Seed</span></span>
|
||||
<span id="cb1-666"><a href="#cb1-666" aria-hidden="true" tabindex="-1"></a><span class="fu">seed</span><span class="kw">:</span></span>
|
||||
<span id="cb1-667"><a href="#cb1-667" aria-hidden="true" tabindex="-1"></a></span>
|
||||
<span id="cb1-668"><a href="#cb1-668" aria-hidden="true" tabindex="-1"></a><span class="co"># Allow overwrite yml config using from cli</span></span>
|
||||
<span id="cb1-669"><a href="#cb1-669" aria-hidden="true" tabindex="-1"></a><span class="fu">strict</span><span class="kw">:</span></span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
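The `sequence_parallel_degree` option above divides each training sequence into equal-sized contiguous chunks, one per rank. A minimal sketch of that chunking (a hypothetical helper for illustration, not axolotl's actual implementation):

```python
# Illustrative sketch only: split a sequence into equal contiguous chunks,
# one chunk per sequence-parallel rank. Not axolotl's real code path.

def split_sequence(tokens, sequence_parallel_degree):
    """Return one equal-sized contiguous chunk of `tokens` per rank."""
    if sequence_parallel_degree <= 0:
        raise ValueError("sequence_parallel_degree must be positive")
    if len(tokens) % sequence_parallel_degree != 0:
        raise ValueError(
            "sequence length must be divisible by sequence_parallel_degree"
        )
    chunk = len(tokens) // sequence_parallel_degree
    return [
        tokens[i * chunk:(i + 1) * chunk]
        for i in range(sequence_parallel_degree)
    ]

# E.g., with 4 GPUs and sequence_parallel_degree: 2, each sequence is split
# into two halves, each processed by one of a pair of ranks.
chunks = split_sequence(list(range(8)), 2)
```

With `sequence_parallel_degree: 2`, an 8-token sequence yields two 4-token chunks; each rank then attends across chunk boundaries via the sequence-parallel attention described in the linked docs.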
@@ -437,6 +437,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
<li><a href="#requirements" id="toc-requirements" class="nav-link" data-scroll-target="#requirements">Requirements</a></li>
<li><a href="#installation" id="toc-installation" class="nav-link" data-scroll-target="#installation">Installation</a></li>
<li><a href="#usage" id="toc-usage" class="nav-link" data-scroll-target="#usage">Usage</a></li>
<li><a href="#supported-models" id="toc-supported-models" class="nav-link" data-scroll-target="#supported-models">Supported Models</a></li>
<li><a href="#citation" id="toc-citation" class="nav-link" data-scroll-target="#citation">Citation</a></li>
</ul></li>
<li><a href="#grokfast" id="toc-grokfast" class="nav-link" data-scroll-target="#grokfast">Grokfast</a>
@@ -494,7 +495,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
<p>To enable them, please check the respective documentations.</p>
<section id="cut-cross-entropy" class="level2">
<h2 class="anchored" data-anchor-id="cut-cross-entropy">Cut Cross Entropy</h2>
<p>Cut Cross Entropy reduces VRAM usage through optimization on the cross-entropy operation during loss calculation.</p>
<p>Cut Cross Entropy (CCE) reduces VRAM usage through optimization on the cross-entropy operation during loss calculation.</p>
<p>See https://github.com/apple/ml-cross-entropy</p>
<section id="requirements" class="level3">
<h3 class="anchored" data-anchor-id="requirements">Requirements</h3>
@@ -516,6 +517,22 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
<span id="cb2-3"><a href="#cb2-3" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb2-4"><a href="#cb2-4" aria-hidden="true" tabindex="-1"></a><span class="fu">cut_cross_entropy</span><span class="kw">:</span><span class="at"> </span><span class="ch">true</span></span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
</section>
<section id="supported-models" class="level3">
<h3 class="anchored" data-anchor-id="supported-models">Supported Models</h3>
<ul>
<li>llama</li>
<li>phi3</li>
<li>gemma</li>
<li>gemma2</li>
<li>gemma3</li>
<li>gemma3_text</li>
<li>mistral</li>
<li>mistral3</li>
<li>qwen2</li>
<li>cohere</li>
<li>cohere2</li>
</ul>
</section>
<section id="citation" class="level3">
<h3 class="anchored" data-anchor-id="citation">Citation</h3>
<div class="sourceCode" id="cb3"><pre class="sourceCode bib code-with-copy"><code class="sourceCode bibtex"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a><span class="va">@article</span>{<span class="ot">wijmans2024cut</span>,</span>
@@ -576,8 +576,7 @@ Tip
<ul>
<li><code>JUPYTER_DISABLE</code>: Disable Jupyter lab.</li>
<li><code>JUPYTER_PASSWORD</code>: Set a password for the Jupyter lab.</li>
<li><code>PUBLIC_KEY</code>: Add a public key for the SSH service.</li>
<li><code>SSH_KEY</code>: Add a private key for the SSH service.</li>
<li><code>PUBLIC_KEY</code> / <code>SSH_KEY</code>: Add a public key for the SSH service.</li>
</ul>
</section>
<section id="volume-mounts" class="level4">
18
search.json
File diff suppressed because one or more lines are too long
336
sitemap.xml
@@ -2,674 +2,674 @@
|
||||
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.819Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.592Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/stepwise_supervised.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/amd_hpc.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.812Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/config.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.812Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multi-gpu.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/installation.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/torchao.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/reward_modelling.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html</loc>
|
||||
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.mlflow_.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.246Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.905Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.trainer_fsdp_optim.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.841Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.500Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.data.batch_dataset_fetcher.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.857Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.516Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.stepwise_supervised.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.545Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.206Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.mistral_attn_hijack_flash.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.789Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.447Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.user_defined.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.592Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.252Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.liger.args.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.162Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.822Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.training.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.025Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.685Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/datasets.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.053Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.706Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.geglu.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.728Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.387Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.llama_attn_hijack_flash.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.774Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.431Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.sweeps.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.384Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.039Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.freeze.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.928Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.588Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.multipack.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.791Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.448Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.main.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.283Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.938Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.trainers.trl.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.454Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.114Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.passthrough.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.594Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.253Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.chat.format.llama3x.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.239Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.891Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.datasets.transforms.chat_builder.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.253Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.905Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.kto.user_defined.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.611Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.271Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.collators.mamba.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.220Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.880Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.base.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.147Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.807Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.bench.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.920Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.580Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.swiglu.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.738Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.397Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.chat.format.shared.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.240Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.892Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.cut_cross_entropy.args.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.150Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.810Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.datasets.chat.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.245Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.897Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.lisa.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.242Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.901Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.grokfast.optimizer.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.151Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.811Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.alpaca_chat.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.495Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.154Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.alpaca_instruct.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.496Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.156Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.kto.chatml.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.610Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.269Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.integrations.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.071Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.731Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.trl.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.054Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.714Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_tokenizers.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.109Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.761Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.data.sft.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.002Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.662Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schedulers.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.969Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.629Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.chat_templates.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.903Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.563Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.models.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.887Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.546Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.chatml.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.589Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.249Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.distributed.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.988Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.648Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.utils.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.829Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.488Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.utils.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.084Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.744Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.llama_expand_mask.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.799Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.457Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/common.datasets.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.188Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.848Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/logging_config.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.114Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:48.766Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.quantize.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.745Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.404Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.llama_patch_multipack.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.832Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.491Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.comet_.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.249Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.908Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.trainer.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.945Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.604Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/common.architectures.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.170Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.830Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/models.mamba.modeling_mamba.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.189Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.849Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.spectrum.args.html</loc>
|
||||
<lastmod>2025-03-23T15:09:31.169Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.828Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.merge_sharded_fsdp_weights.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.370Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.025Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.bradley_terry.llama3.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.635Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.295Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.merge_lora.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.358Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.013Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.lora.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.908Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.567Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.relora.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.798Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.455Z</lastmod>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.cloud.base.html</loc>
|
||||
<lastmod>2025-03-23T15:09:30.418Z</lastmod>
|
||||
<lastmod>2025-03-26T22:15:49.074Z</lastmod>
|
||||
</url>
|
||||
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/common.const.html</loc>
<lastmod>2025-03-23T15:09:31.172Z</lastmod>
<lastmod>2025-03-26T22:15:49.831Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/convert.html</loc>
<lastmod>2025-03-23T15:09:30.067Z</lastmod>
<lastmod>2025-03-26T22:15:48.720Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.chat_template.html</loc>
<lastmod>2025-03-23T15:09:30.481Z</lastmod>
<lastmod>2025-03-26T22:15:49.141Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.utils.html</loc>
<lastmod>2025-03-23T15:09:30.746Z</lastmod>
<lastmod>2025-03-26T22:15:49.405Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.lora_embeddings.html</loc>
<lastmod>2025-03-23T15:09:30.911Z</lastmod>
<lastmod>2025-03-26T22:15:49.571Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/lora_optims.html</loc>
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html</loc>
<lastmod>2025-03-23T15:09:01.812Z</lastmod>
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/lr_groups.html</loc>
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/TODO.html</loc>
<lastmod>2025-03-23T15:09:01.811Z</lastmod>
<lastmod>2025-03-26T22:15:09.586Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html</loc>
<lastmod>2025-03-23T15:09:01.835Z</lastmod>
<lastmod>2025-03-26T22:15:09.606Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/index.html</loc>
<lastmod>2025-03-23T15:09:01.831Z</lastmod>
<lastmod>2025-03-26T22:15:09.603Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/cut_cross_entropy/ACKNOWLEDGEMENTS.html</loc>
<lastmod>2025-03-23T15:09:01.835Z</lastmod>
<lastmod>2025-03-26T22:15:09.607Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/FAQS.html</loc>
<lastmod>2025-03-23T15:09:01.810Z</lastmod>
<lastmod>2025-03-26T22:15:09.586Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html</loc>
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/sequence_parallelism.html</loc>
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html</loc>
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/inference.html</loc>
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
<lastmod>2025-03-26T22:15:09.590Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/getting-started.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.perplexity.html</loc>
<lastmod>2025-03-23T15:09:31.237Z</lastmod>
<lastmod>2025-03-26T22:15:49.896Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.trainer_builder.html</loc>
<lastmod>2025-03-23T15:09:30.129Z</lastmod>
<lastmod>2025-03-26T22:15:48.781Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.train.html</loc>
<lastmod>2025-03-23T15:09:30.291Z</lastmod>
<lastmod>2025-03-26T22:15:48.946Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.llama3.html</loc>
<lastmod>2025-03-23T15:09:30.579Z</lastmod>
<lastmod>2025-03-26T22:15:49.239Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.cloud.modal_.html</loc>
<lastmod>2025-03-23T15:09:30.425Z</lastmod>
<lastmod>2025-03-26T22:15:49.080Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/index.html</loc>
<lastmod>2025-03-23T15:09:29.976Z</lastmod>
<lastmod>2025-03-26T22:15:48.627Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.input_output.html</loc>
<lastmod>2025-03-23T15:09:30.541Z</lastmod>
<lastmod>2025-03-26T22:15:49.201Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.optimizers.adopt.html</loc>
<lastmod>2025-03-23T15:09:30.999Z</lastmod>
<lastmod>2025-03-26T22:15:49.659Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.btlm_attn_hijack_flash.html</loc>
<lastmod>2025-03-23T15:09:30.830Z</lastmod>
<lastmod>2025-03-26T22:15:49.490Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.collators.core.html</loc>
<lastmod>2025-03-23T15:09:31.191Z</lastmod>
<lastmod>2025-03-26T22:15:49.850Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.datasets.html</loc>
<lastmod>2025-03-23T15:09:31.042Z</lastmod>
<lastmod>2025-03-26T22:15:49.702Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.kd.trainer.html</loc>
<lastmod>2025-03-23T15:09:31.159Z</lastmod>
<lastmod>2025-03-26T22:15:49.819Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.tokenization.html</loc>
<lastmod>2025-03-23T15:09:30.893Z</lastmod>
<lastmod>2025-03-26T22:15:49.553Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.mixtral.html</loc>
<lastmod>2025-03-23T15:09:30.858Z</lastmod>
<lastmod>2025-03-26T22:15:49.517Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.stablelm_attn_hijack_flash.html</loc>
<lastmod>2025-03-23T15:09:30.837Z</lastmod>
<lastmod>2025-03-26T22:15:49.497Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.model.html</loc>
<lastmod>2025-03-23T15:09:31.020Z</lastmod>
<lastmod>2025-03-26T22:15:49.680Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.multimodal.html</loc>
<lastmod>2025-03-23T15:09:31.059Z</lastmod>
<lastmod>2025-03-26T22:15:49.719Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.gradient_checkpointing.unsloth.html</loc>
<lastmod>2025-03-23T15:09:31.005Z</lastmod>
<lastmod>2025-03-26T22:15:49.665Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.trainers.base.html</loc>
<lastmod>2025-03-23T15:09:30.442Z</lastmod>
<lastmod>2025-03-26T22:15:49.097Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.unsloth_.html</loc>
<lastmod>2025-03-23T15:09:30.848Z</lastmod>
<lastmod>2025-03-26T22:15:49.508Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.samplers.multipack.html</loc>
<lastmod>2025-03-23T15:09:31.230Z</lastmod>
<lastmod>2025-03-26T22:15:49.890Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.callbacks.profiler.html</loc>
<lastmod>2025-03-23T15:09:31.240Z</lastmod>
<lastmod>2025-03-26T22:15:49.900Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/integrations.lm_eval.args.html</loc>
<lastmod>2025-03-23T15:09:31.166Z</lastmod>
<lastmod>2025-03-26T22:15:49.825Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.data.pretraining.html</loc>
<lastmod>2025-03-23T15:09:31.000Z</lastmod>
<lastmod>2025-03-26T22:15:49.661Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/evaluate.html</loc>
<lastmod>2025-03-23T15:09:30.046Z</lastmod>
<lastmod>2025-03-26T22:15:48.699Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.dict.html</loc>
<lastmod>2025-03-23T15:09:30.991Z</lastmod>
<lastmod>2025-03-26T22:15:49.652Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.utils.html</loc>
<lastmod>2025-03-23T15:09:30.415Z</lastmod>
<lastmod>2025-03-26T22:15:49.071Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.pygmalion.html</loc>
<lastmod>2025-03-23T15:09:30.563Z</lastmod>
<lastmod>2025-03-26T22:15:49.223Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.training_args.html</loc>
<lastmod>2025-03-23T15:09:30.213Z</lastmod>
<lastmod>2025-03-26T22:15:48.865Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.inference.html</loc>
<lastmod>2025-03-23T15:09:30.350Z</lastmod>
<lastmod>2025-03-26T22:15:49.005Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/kernels.lora.html</loc>
<lastmod>2025-03-23T15:09:30.717Z</lastmod>
<lastmod>2025-03-26T22:15:49.376Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.evaluate.html</loc>
<lastmod>2025-03-23T15:09:30.300Z</lastmod>
<lastmod>2025-03-26T22:15:48.954Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.collators.batching.html</loc>
<lastmod>2025-03-23T15:09:31.217Z</lastmod>
<lastmod>2025-03-26T22:15:49.876Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.completion.html</loc>
<lastmod>2025-03-23T15:09:30.535Z</lastmod>
<lastmod>2025-03-26T22:15:49.195Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.zephyr.html</loc>
<lastmod>2025-03-23T15:09:30.591Z</lastmod>
<lastmod>2025-03-26T22:15:49.251Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.metharme.html</loc>
<lastmod>2025-03-23T15:09:30.552Z</lastmod>
<lastmod>2025-03-26T22:15:49.213Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.orpo.chat_template.html</loc>
<lastmod>2025-03-23T15:09:30.632Z</lastmod>
<lastmod>2025-03-26T22:15:49.291Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.alpaca_w_system.html</loc>
<lastmod>2025-03-23T15:09:30.508Z</lastmod>
<lastmod>2025-03-26T22:15:49.168Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.model_shard_quant.html</loc>
<lastmod>2025-03-23T15:09:30.917Z</lastmod>
<lastmod>2025-03-26T22:15:49.576Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.config.html</loc>
<lastmod>2025-03-23T15:09:30.336Z</lastmod>
<lastmod>2025-03-26T22:15:48.991Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.enums.html</loc>
<lastmod>2025-03-23T15:09:31.078Z</lastmod>
<lastmod>2025-03-26T22:15:49.738Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.preprocess.html</loc>
<lastmod>2025-03-23T15:09:30.378Z</lastmod>
<lastmod>2025-03-26T22:15:49.033Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.chat.messages.html</loc>
<lastmod>2025-03-23T15:09:30.235Z</lastmod>
<lastmod>2025-03-26T22:15:48.888Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.dpo.chat_template.html</loc>
<lastmod>2025-03-23T15:09:30.569Z</lastmod>
<lastmod>2025-03-26T22:15:49.229Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.peft.html</loc>
<lastmod>2025-03-23T15:09:31.051Z</lastmod>
<lastmod>2025-03-26T22:15:49.711Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/train.html</loc>
<lastmod>2025-03-23T15:09:30.036Z</lastmod>
<lastmod>2025-03-26T22:15:48.688Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.messages.chat.html</loc>
<lastmod>2025-03-23T15:09:30.567Z</lastmod>
<lastmod>2025-03-26T22:15:49.227Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.orcamini.html</loc>
<lastmod>2025-03-23T15:09:30.556Z</lastmod>
<lastmod>2025-03-26T22:15:49.216Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.collators.mm_chat.html</loc>
<lastmod>2025-03-23T15:09:31.225Z</lastmod>
<lastmod>2025-03-26T22:15:49.884Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.kto.llama3.html</loc>
<lastmod>2025-03-23T15:09:30.602Z</lastmod>
<lastmod>2025-03-26T22:15:49.261Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.attention.mllama.html</loc>
<lastmod>2025-03-23T15:09:30.855Z</lastmod>
<lastmod>2025-03-26T22:15:49.514Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.checks.html</loc>
<lastmod>2025-03-23T15:09:30.319Z</lastmod>
<lastmod>2025-03-26T22:15:48.974Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.transformers_fa_utils.html</loc>
<lastmod>2025-03-23T15:09:30.847Z</lastmod>
<lastmod>2025-03-26T22:15:49.506Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.llama_attn_hijack_xformers.html</loc>
<lastmod>2025-03-23T15:09:30.775Z</lastmod>
<lastmod>2025-03-26T22:15:49.432Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.trainers.dpo.trainer.html</loc>
<lastmod>2025-03-23T15:09:30.461Z</lastmod>
<lastmod>2025-03-26T22:15:49.121Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.user_defined.html</loc>
<lastmod>2025-03-23T15:09:30.516Z</lastmod>
<lastmod>2025-03-26T22:15:49.176Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/cli.args.html</loc>
<lastmod>2025-03-23T15:09:30.313Z</lastmod>
<lastmod>2025-03-26T22:15:48.967Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.llama2_chat.html</loc>
<lastmod>2025-03-23T15:09:30.529Z</lastmod>
<lastmod>2025-03-26T22:15:49.189Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/utils.schemas.config.html</loc>
<lastmod>2025-03-23T15:09:31.013Z</lastmod>
<lastmod>2025-03-26T22:15:49.673Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.trainers.grpo.trainer.html</loc>
<lastmod>2025-03-23T15:09:30.465Z</lastmod>
<lastmod>2025-03-26T22:15:49.124Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/core.chat.format.chatml.html</loc>
<lastmod>2025-03-23T15:09:30.237Z</lastmod>
<lastmod>2025-03-26T22:15:48.889Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/monkeypatch.lora_kernels.html</loc>
<lastmod>2025-03-23T15:09:30.821Z</lastmod>
<lastmod>2025-03-26T22:15:49.479Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/api/prompt_strategies.base.html</loc>
<lastmod>2025-03-23T15:09:30.466Z</lastmod>
<lastmod>2025-03-26T22:15:49.126Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html</loc>
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/cli.html</loc>
<lastmod>2025-03-23T15:09:01.812Z</lastmod>
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/unsloth.html</loc>
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/custom_integrations.html</loc>
<lastmod>2025-03-23T15:09:01.812Z</lastmod>
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html</loc>
<lastmod>2025-03-23T15:09:01.817Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/docker.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.588Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/ray-integration.html</loc>
<lastmod>2025-03-23T15:09:01.818Z</lastmod>
<lastmod>2025-03-26T22:15:09.591Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/index.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/pretraining.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
</url>
<url>
<loc>https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html</loc>
<lastmod>2025-03-23T15:09:01.813Z</lastmod>
<lastmod>2025-03-26T22:15:09.587Z</lastmod>
</url>
</urlset>