axolotl/tests/e2e

Commit 24146733db by Wing Lian, 2023-09-14 22:49:27 -04:00: E2e device cuda (#575)

* use torch.cuda.current_device() instead of local_rank
* ignore NVML errors for gpu stats
* llama lora packing e2e tests
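The first two commit notes can be sketched as follows. This is a hedged illustration of the described behavior, not the actual axolotl code; the function names `resolve_device` and `gpu_memory_used` are invented for this sketch:

```python
def resolve_device():
    """Pick the active CUDA device via torch.cuda.current_device()
    rather than a local_rank variable (hypothetical sketch)."""
    try:
        import torch
        if torch.cuda.is_available():
            return f"cuda:{torch.cuda.current_device()}"
    except ImportError:
        # torch not installed; fall through to CPU
        pass
    return "cpu"


def gpu_memory_used():
    """Report GPU memory in use, swallowing any NVML error so that
    stats logging never crashes a run (hypothetical sketch)."""
    try:
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        return pynvml.nvmlDeviceGetMemoryInfo(handle).used
    except Exception:
        # NVML errors (missing driver, no GPU, library absent) are ignored
        return None
```

On a CPU-only host `resolve_device()` returns `"cpu"` and `gpu_memory_used()` returns `None`; neither raises, which is the point of the commit's NVML change.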