Built site for gh-pages
@@ -558,7 +558,7 @@ gtag('config', 'G-9KYCVJBNMQ', { 'anonymize_ip': true});
<ul>
<li>If you are installing from pip</li>
</ul>
<div class="sourceCode" id="cb2"><pre class="sourceCode bash code-with-copy"><code class="sourceCode bash"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="ex">pip3</span> uninstall <span class="at">-y</span> cut-cross-entropy <span class="kw">&&</span> <span class="ex">pip3</span> install <span class="st">"cut-cross-entropy[transformers] @ git+https://github.com/axolotl-ai-cloud/ml-cross-entropy.git@78b2a45713a54c9bedf8b33f5e31cf07a1a57154"</span></span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="sourceCode" id="cb2"><pre class="sourceCode bash code-with-copy"><code class="sourceCode bash"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="ex">pip3</span> uninstall <span class="at">-y</span> cut-cross-entropy <span class="kw">&&</span> <span class="ex">pip3</span> install <span class="st">"cut-cross-entropy[transformers] @ git+https://github.com/axolotl-ai-cloud/ml-cross-entropy.git@622068a"</span></span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
</section>
<section id="usage" class="level3">
<h3 class="anchored" data-anchor-id="usage">Usage</h3>
@@ -551,6 +551,14 @@ gtag('config', 'G-9KYCVJBNMQ', { 'anonymize_ip': true});
<blockquote class="blockquote">
<p>A: This is likely because you are using the <code>preprocess</code> CLI with <code>pretraining_dataset:</code> or <code>skip_prepare_dataset: true</code> respectively. Please use the <code>axolotl train</code> CLI directly instead, as these datasets are prepared on demand (see the example below).</p>
</blockquote>
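<p>For example, a minimal sketch of the suggested flow, assuming your config lives at <code>config.yml</code>:</p>
<div class="sourceCode"><pre class="sourceCode bash code-with-copy"><code class="sourceCode bash"># skip the preprocess step; these datasets are tokenized on demand at train time
axolotl train config.yml</code></pre></div>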
<p><strong>Q: vLLM is not working with Axolotl</strong></p>
<blockquote class="blockquote">
<p>A: We currently recommend torch 2.6.0 for use with <code>vllm</code>; please ensure you have that exact version installed. For Docker, use the <code>main-py3.11-cu124-2.6.0</code> tag (see the example below).</p>
</blockquote>
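<p>A rough sketch of both routes; the <code>axolotlai/axolotl</code> registry path is an assumption here, so substitute wherever you normally pull the image from:</p>
<div class="sourceCode"><pre class="sourceCode bash code-with-copy"><code class="sourceCode bash"># pin torch to the recommended version before installing/using vllm
pip3 install torch==2.6.0
# or pull the prebuilt image (registry path assumed, adjust as needed)
docker pull axolotlai/axolotl:main-py3.11-cu124-2.6.0</code></pre></div>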
<p><strong>Q: FA2 2.8.0 <code>undefined symbol</code> runtime error on CUDA 12.4</strong></p>
<blockquote class="blockquote">
<p>A: There seems to be a wheel issue with FA2 2.8.0 on CUDA 12.4. Try CUDA 12.6 instead, or downgrade to FA2 2.7.4 (see the example below). Please refer to the upstream issue: <a href="https://github.com/Dao-AILab/flash-attention/issues/1717" class="uri">https://github.com/Dao-AILab/flash-attention/issues/1717</a>.</p>
</blockquote>
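<p>A possible workaround sketch; <code>2.7.4.post1</code> is an assumed wheel version, so pick the 2.7.4 build that matches your torch and CUDA:</p>
<div class="sourceCode"><pre class="sourceCode bash code-with-copy"><code class="sourceCode bash"># drop the broken 2.8.0 wheel and pin a 2.7.4 build (version assumed, adjust to your stack)
pip3 uninstall -y flash-attn && pip3 install flash-attn==2.7.4.post1</code></pre></div>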
</section>
<section id="chat-templates" class="level3">
<h3 class="anchored" data-anchor-id="chat-templates">Chat templates</h3>