Files
axolotl/.github/workflows
Wing Lian 24146733db E2e device cuda (#575)
* use torch.cuda.current_device() instead of local_rank

* ignore NVML errors for gpu stats

* llama lora packing e2e tests
2023-09-14 22:49:27 -04:00
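
The commit bullets above describe two behavioral changes: picking the CUDA device from `torch.cuda.current_device()` rather than a `LOCAL_RANK` value, and tolerating NVML errors when collecting GPU stats. Below is a minimal, hypothetical sketch of those ideas, not axolotl's actual code; the helper names `get_device_index` and `gpu_memory_stats` are illustrative only.

```python
# Hypothetical sketch; function names are illustrative, not axolotl's actual API.
import torch

try:
    import pynvml
except ImportError:  # NVML bindings may be absent (e.g. CPU-only CI runners)
    pynvml = None


def get_device_index() -> int:
    """Prefer the device PyTorch already selected over a LOCAL_RANK env var."""
    return torch.cuda.current_device()


def gpu_memory_stats(device_index: int) -> dict:
    """Report GPU memory usage, ignoring NVML errors so logging never crashes training."""
    stats = {"allocated_gb": torch.cuda.memory_allocated(device_index) / 1024**3}
    if pynvml is None:
        return stats
    try:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        stats["total_gb"] = info.total / 1024**3
        stats["used_gb"] = info.used / 1024**3
    except pynvml.NVMLError:
        # Some environments (containers, virtualized GPUs) raise NVML errors; skip the stats.
        pass
    return stats
```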