E2e device cuda (#575)
* use torch.cuda.current_device() instead of local_rank
* ignore NVML errors for gpu stats
* llama lora packing e2e tests
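The "ignore NVML errors for gpu stats" change can be sketched as a best-effort stats helper that swallows NVML failures instead of crashing the run. This is a hypothetical illustration of the idea, not axolotl's actual implementation; the function name `safe_gpu_stats` and the returned dict keys are assumptions.

```python
def safe_gpu_stats(device_index: int = 0) -> dict:
    """Best-effort GPU utilization stats; any NVML error yields an empty dict.

    Hypothetical sketch of the commit's "ignore NVML errors" behavior:
    on machines where NVML is missing or the driver misbehaves, training
    should proceed without GPU stats rather than abort.
    """
    try:
        import pynvml  # may be absent, or nvmlInit may fail on some drivers

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        return {"gpu_util": util.gpu, "mem_util": util.memory}
    except Exception:
        # NVML unavailable or erroring: report nothing instead of raising
        return {}
```

Callers can then log `safe_gpu_stats()` unconditionally, since the worst case is an empty dict.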
.github/workflows/e2e.yml (vendored) — 1 addition
@@ -24,6 +24,7 @@ jobs:
       - name: Install dependencies
         run: |
           pip3 install -e .
           pip3 install flash-attn
+          pip3 install -r requirements-tests.txt
 
       - name: Run e2e tests