diff --git a/.nojekyll b/.nojekyll
index e0ce4cb7b..047915d19 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-85acc50e
\ No newline at end of file
+85563c2f
\ No newline at end of file
diff --git a/docs/docker.html b/docs/docker.html
index 41dcd2211..cf3c1e9c2 100644
--- a/docs/docker.html
+++ b/docs/docker.html
@@ -545,7 +545,6 @@ Important
Available Docker image tags:

main-base-py3.11-cu126-2.7.0
main-base-py3.11-cu124-2.6.0
main-base-py3.11-cu124-2.5.1
main-base-py3.11-cu124-2.4.1
main-py3.11-cu126-2.7.0
main-py3.11-cu124-2.6.0
main-py3.11-cu124-2.5.1
main-py3.11-cu124-2.4.1
main-latest
main-20250303-py3.11-cu124-2.6.0
main-20250303-py3.11-cu124-2.5.1
main-20250303-py3.11-cu124-2.4.1
0.7.1
0.9.2

bf16 and Flash Attention) or AMD GPU

For the latest features between releases:
-git clone https://github.com/axolotl-ai-cloud/axolotl.git
-cd axolotl
-pip3 install -U packaging setuptools wheel ninja
-pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'

uv is a fast, reliable Python package installer and resolver built in Rust. It offers significant performance improvements over pip and provides better dependency resolution, making it an excellent choice for complex environments.
+# Install uv if not already installed
+curl -LsSf https://astral.sh/uv/install.sh | sh
+source $HOME/.local/bin/env

Choose your CUDA version to use with PyTorch, e.g. cu124, cu126, cu128, then create the venv and activate it:
+export UV_TORCH_BACKEND=cu126
+uv venv --no-project --relocatable
+source .venv/bin/activate

Install PyTorch (2.6.0 recommended):
+uv pip install packaging setuptools wheel
+uv pip install torch==2.6.0
+uv pip install awscli pydantic

Install axolotl from PyPI:
+uv pip install --no-build-isolation axolotl[deepspeed,flash-attn]
+
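The extras passed to uv pip install can be assembled conditionally rather than typed out per case. A hedged sketch (the torch==2.6.0 pin for the vLLM extra follows from the GRPO note in this section; the variable names are illustrative):

```shell
# Hypothetical sketch: build the axolotl extras string, adding vllm only
# when the installed torch is 2.6.0 (required for GRPO training w/ vLLM).
torch_version="2.6.0"   # assumed; check with: python -c 'import torch; print(torch.__version__)'
extras="deepspeed,flash-attn"
if [ "$torch_version" = "2.6.0" ]; then
  extras="$extras,vllm"
fi
echo "axolotl[$extras]"   # -> axolotl[deepspeed,flash-attn,vllm]
# uv pip install --no-build-isolation "axolotl[$extras]"
```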
+# optionally install with vLLM if you're using torch==2.6.0 and want to train w/ GRPO
+uv pip install --no-build-isolation axolotl[deepspeed,flash-attn,vllm]

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest

For the latest features between releases:
+git clone https://github.com/axolotl-ai-cloud/axolotl.git
+cd axolotl
+pip3 install -U packaging setuptools wheel ninja
+pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest

For development with Docker:
-docker compose up -d
docker compose up -d
-docker run --privileged --gpus '"all"' --shm-size 10g --rm -it \
- --name axolotl --ipc=host \
- --ulimit memlock=-1 --ulimit stack=67108864 \
- --mount type=bind,src="${PWD}",target=/workspace/axolotl \
- -v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
-  axolotlai/axolotl:main-latest
+docker run --privileged --gpus '"all"' --shm-size 10g --rm -it \
+ --name axolotl --ipc=host \
+ --ulimit memlock=-1 --ulimit stack=67108864 \
+ --mount type=bind,src="${PWD}",target=/workspace/axolotl \
+ -v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
+  axolotlai/axolotl:main-latest

pip3 install --no-build-isolation -e '.'

See Section 6 for Mac-specific issues.
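The development docker run invocation above is long enough that keeping it in a small wrapper function can help; a sketch with the flags copied verbatim from the command in this section (the function name is illustrative):

```shell
# Hypothetical wrapper keeping the long docker run flags in one place.
run_axolotl() {
  docker run --privileged --gpus '"all"' --shm-size 10g --rm -it \
    --name axolotl --ipc=host \
    --ulimit memlock=-1 --ulimit stack=67108864 \
    --mount type=bind,src="${PWD}",target=/workspace/axolotl \
    -v "${HOME}/.cache/huggingface:/root/.cache/huggingface" \
    axolotlai/axolotl:main-latest "$@"
}
```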
Install Python ≥3.10
Install PyTorch: https://pytorch.org/get-started/locally/
Install Axolotl:
-pip3 install -U packaging setuptools wheel ninja
-pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
pip3 install -U packaging setuptools wheel ninja
+pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'

(Optional) Login to Hugging Face:
-huggingface-cli login
huggingface-cli login
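For non-interactive environments (CI, remote boxes), huggingface-cli login also accepts a --token flag, so a token exported as an environment variable can skip the prompt. A hedged dry-run sketch (HF_TOKEN is an assumed variable name; the script only prints the command it would run):

```shell
# Hypothetical sketch: choose interactive vs token-based login.
if [ -n "${HF_TOKEN:-}" ]; then
  login="huggingface-cli login --token <redacted>"
else
  login="huggingface-cli login"
fi
echo "would run: $login"
```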