* add grpo scale_rewards config for trl#3135
* options to connect to vllm server directly with grpo trl#3094
* temperature support trl#3029
* sampling/generation kwargs for grpo trl#2989
* make vllm_enable_prefix_caching a config param trl#2900
* grpo multi-step optimizations trl#2899
* remove overrides for grpo trainer
* bump trl to 0.16.0
* add cli to start vllm-serve via trl
* call the python module directly
* update to use vllm with 2.6.0 too now and call trl vllm serve from module
* vllm 0.8.1
* use python3
* use sys.executable
* remove context and wait for start
* fixes to make it actually work
* fixes so the grpo tests pass with new vllm paradigm
* explicit host/port and check in start vllm
* make sure that vllm doesn't hang by setting quiet so outputs go to /dev/null
* also bump bnb to latest release
* add option for wait from cli and nccl debugging for ci
* grpo + vllm test on separate devices for now
* make sure grpo + vllm tests run single worker since pynccl comms would conflict
* fix cli
* remove wait and add caching for argilla dataset
* refactoring configs
* chore: lint
* add vllm config
* fixup vllm grpo args
* fix one more incorrect schema/config path
* fix another vllm reference and increase timeout
* make the tests run a bit faster
* change mbsz back so it is correct for grpo
* another mbsz change back so it is correct for grpo
* fixing cli args
* nits
* adding docs
* docs
* include tensor parallel size for vllm in pydantic schema
* moving start_vllm, more docs
* limit output len for grpo vllm
* vllm enable_prefix_caching isn't a bool cli arg
* fix env ordering in tests and also use pid check when looking for vllm

---------

Co-authored-by: Salman Mohammadi <salman.mohammadi@outlook.com>
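Several of the bullets above ("use sys.executable", "explicit host/port and check in start vllm", "setting quiet so outputs go to /dev/null", "use pid check when looking for vllm") describe one pattern: launching TRL's vLLM server as a child process and waiting for it to come up. The following is a minimal sketch of that pattern, not axolotl's actual helper. It assumes TRL >= 0.16.0 exposes the server as the runnable module trl.scripts.vllm_serve with --model/--host/--port flags; the start_vllm name and the /health/ probe are assumptions made for illustration.

import subprocess
import sys
import time
import urllib.request


def start_vllm(model: str, host: str = "127.0.0.1", port: int = 8000,
               quiet: bool = True, timeout: float = 300.0) -> subprocess.Popen:
    """Launch TRL's vllm-serve as a subprocess and block until it is ready."""
    # sys.executable keeps the server in the same interpreter/venv as the
    # trainer, instead of whatever `python` happens to resolve to on PATH.
    cmd = [
        sys.executable, "-m", "trl.scripts.vllm_serve",  # assumed module path
        "--model", model,
        "--host", host,
        "--port", str(port),
    ]
    # Route output to /dev/null when quiet so a full pipe buffer can never
    # block the server (the "make sure that vllm doesn't hang" bullet above).
    out = subprocess.DEVNULL if quiet else None
    proc = subprocess.Popen(cmd, stdout=out, stderr=out)

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # pid check: fail fast if the server process died during startup.
        if proc.poll() is not None:
            raise RuntimeError(f"vllm server (pid {proc.pid}) exited early")
        try:
            # Probe the explicit host/port; /health/ is an assumed endpoint.
            urllib.request.urlopen(f"http://{host}:{port}/health/", timeout=2)
            return proc
        except OSError:
            time.sleep(2)
    proc.terminate()
    raise TimeoutError(f"vllm server not ready after {timeout}s")

The workflow below provides the matching CI coverage: the PyTorch 2.5.1 and 2.6.0 images are built with the vllm extra, while 2.4.1 is built without it, since vLLM 0.8.1 does not support that torch release.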
name: docker-multigpu-tests-biweekly

on:
  pull_request:
    paths:
      - 'tests/e2e/multigpu/*.py'
      - 'requirements.txt'
      - 'setup.py'
      - 'pyproject.toml'
      - '.github/workflows/multi-gpu-e2e.yml'
  workflow_dispatch:
  schedule:
    - cron: '0 0 * * 1,4'  # Runs at 00:00 UTC every Monday & Thursday

# Cancel jobs on the same ref if a new one is triggered
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}

jobs:
  test-axolotl-multigpu:
    if: ${{ ! contains(github.event.commits[0].message, '[skip e2e]') && github.repository_owner == 'axolotl-ai-cloud' }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - cuda: 124
            cuda_version: 12.4.1
            python_version: "3.11"
            pytorch: 2.4.1
            axolotl_extras:  # no vllm support for 2.4.1
            num_gpus: 2
            nightly_build: "true"
          - cuda: 124
            cuda_version: 12.4.1
            python_version: "3.11"
            pytorch: 2.5.1
            axolotl_extras: vllm
            num_gpus: 2
            nightly_build: "true"
          - cuda: 124
            cuda_version: 12.4.1
            python_version: "3.11"
            pytorch: 2.6.0
            axolotl_extras: vllm
            num_gpus: 2
            nightly_build: "true"
    runs-on: [self-hosted, modal]
    timeout-minutes: 120
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install Modal
        run: |
          python -m pip install --upgrade pip
          pip install modal==0.71.8 jinja2
      - name: Update env vars
        run: |
          echo "BASE_TAG=main-base-py${{ matrix.python_version }}-cu${{ matrix.cuda }}-${{ matrix.pytorch }}" >> $GITHUB_ENV
          echo "PYTORCH_VERSION=${{ matrix.pytorch }}" >> $GITHUB_ENV
          echo "AXOLOTL_ARGS=${{ matrix.axolotl_args }}" >> $GITHUB_ENV
          echo "AXOLOTL_EXTRAS=${{ matrix.axolotl_extras }}" >> $GITHUB_ENV
          echo "CUDA=${{ matrix.cuda }}" >> $GITHUB_ENV
          echo "N_GPUS=${{ matrix.num_gpus }}" >> $GITHUB_ENV
          echo "NIGHTLY_BUILD=${{ matrix.nightly_build }}" >> $GITHUB_ENV
      - name: Run tests job on Modal
        run: |
          modal run cicd.multigpu