axolotl/tests
Wing Lian 24146733db E2e device cuda (#575) 2023-09-14 22:49:27 -04:00

* use torch.cuda.current_device() instead of local_rank

* ignore NVML errors for gpu stats

* llama lora packing e2e tests
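The "ignore NVML errors for gpu stats" bullet describes a common defensive pattern: GPU telemetry calls can fail (e.g. NVML unavailable inside some containers), and that failure should not crash training. A minimal sketch of the idea, not the repository's actual code; `query` is a hypothetical callable standing in for an NVML stats call:

```python
def safe_gpu_stats(query):
    """Return the result of `query()`, or None if the stats call fails.

    Sketch of the "ignore NVML errors" pattern: GPU stats are best-effort,
    so any error from the underlying library is swallowed rather than raised.
    `query` is a hypothetical callable wrapping e.g. an NVML utilization call.
    """
    try:
        return query()
    except Exception:
        # NVML raises library-specific exceptions; treat stats as optional.
        return None


# Usage: a working query returns its value, a failing one yields None.
print(safe_gpu_stats(lambda: {"gpu_util": 87}))  # {'gpu_util': 87}
print(safe_gpu_stats(lambda: 1 / 0))             # None
```

The trade-off is that genuine configuration problems are silenced too; logging the exception at debug level before returning None is a reasonable middle ground.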