E2e device cuda (#575)

* use torch.cuda.current_device() instead of local_rank
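A minimal sketch of what this change amounts to, assuming a small helper around device selection; the function name and CPU fallback are illustrative, not the repo's actual code. The point is that torch.cuda.current_device() reflects whatever device was already made current (e.g. via torch.cuda.set_device() during distributed setup), rather than re-deriving it from the local rank.

import torch

def get_training_device() -> str:
    # Illustrative helper (name is an assumption): prefer the CUDA
    # device torch already considers current instead of rebuilding
    # "cuda:{local_rank}" from the environment.
    if torch.cuda.is_available():
        return f"cuda:{torch.cuda.current_device()}"
    return "cpu"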

* ignore NVML errors for gpu stats
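A hedged sketch of what "ignore NVML errors" likely amounts to, using pynvml; the function name and GiB conversion are assumptions. When NVML is unavailable (common in some containers or driver setups), the stats query returns None instead of raising.

import pynvml

def gpu_memory_usage_gib(device_index: int = 0):
    # Illustrative sketch: swallow NVML failures so GPU-stat logging
    # never crashes a training run.
    try:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        return info.used / 1024**3  # bytes -> GiB
    except pynvml.NVMLError:
        # NVML missing or broken (e.g. a container without the host
        # driver): skip the stats rather than aborting training.
        return None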

* llama lora packing e2e tests
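
For context, a self-contained sketch of the "sample packing" idea the new e2e tests exercise: several short tokenized examples are concatenated into one fixed-length sequence so less compute is wasted on padding. This shows the core idea only; real packing in axolotl also tracks per-example boundaries for attention masks and position ids, and the function below is illustrative, not the repo's implementation.

from typing import Iterable, List

def pack_examples(examples: Iterable[List[int]],
                  seq_len: int,
                  pad_id: int = 0) -> List[List[int]]:
    packed, current = [], []
    for ids in examples:
        ids = ids[:seq_len]  # truncate overlong examples
        if current and len(current) + len(ids) > seq_len:
            # current sequence is full: pad it out and start a new one
            packed.append(current + [pad_id] * (seq_len - len(current)))
            current = []
        current.extend(ids)
    if current:
        packed.append(current + [pad_id] * (seq_len - len(current)))
    return packed

# pack_examples([[1, 2, 3], [4, 5], [6, 7, 8, 9]], seq_len=6)
# -> [[1, 2, 3, 4, 5, 0], [6, 7, 8, 9, 0, 0]]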
commit 24146733db
parent 9218ebecd2
Author: Wing Lian
Date:   2023-09-14 22:49:27 -04:00 (committed via GitHub)
4 changed files with 52 additions and 6 deletions


@@ -24,6 +24,7 @@ jobs:
       - name: Install dependencies
         run: |
           pip3 install -e .
+          pip3 install flash-attn
           pip3 install -r requirements-tests.txt
       - name: Run e2e tests
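
The single added line in this hunk is the flash-attn install, which appears to support the new packing tests: the commit message pairs the llama lora packing e2e tests with flash-attn, whose kernels axolotl's sample-packing path exercises, so CI needs the package installed before the e2e suite runs.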