axolotl/.github/workflows/e2e.yml
Wing Lian 24146733db E2e device cuda (#575)
* use torch.cuda.current_device() instead of local_rank

* ignore NVML errors for gpu stats

* llama lora packing e2e tests
2023-09-14 22:49:27 -04:00

name: E2E
on:
  workflow_dispatch:

jobs:
  e2e-test:
    runs-on: [self-hosted, gpu]
    timeout-minutes: 10
    strategy:
      fail-fast: false
      matrix:
        python_version: ["3.10"]
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python_version }}
          cache: 'pip' # caching pip dependencies
      - name: Install dependencies
        run: |
          pip3 install -e .
          pip3 install flash-attn
          pip3 install -r requirements-tests.txt
      - name: Run e2e tests
        run: |
          pytest tests/e2e/
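
Because the workflow only triggers on `workflow_dispatch`, it must be started manually from the Actions tab (or via the GitHub API/CLI). If it were ever useful to pass options at dispatch time, `workflow_dispatch` supports an `inputs` block; the sketch below is a hypothetical extension (the `pytest_args` input and its use in the run step are not part of the original workflow), shown only to illustrate the standard GitHub Actions input syntax:

```yaml
on:
  workflow_dispatch:
    inputs:
      pytest_args:
        description: 'Extra arguments passed to pytest (hypothetical input)'
        required: false
        default: ''

# ...inside the job, the input would be referenced as:
#   run: |
#     pytest tests/e2e/ ${{ inputs.pytest_args }}
```

Inputs declared this way appear as form fields in the "Run workflow" dropdown on the Actions page and can also be supplied through the REST API dispatch endpoint.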