Wing Lian
24146733db
E2e device cuda (#575)
* use torch.cuda.current_device() instead of local_rank
* ignore NVML errors for gpu stats
* llama lora packing e2e tests
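The first bullet describes preferring `torch.cuda.current_device()` over a `local_rank` value when picking the CUDA device. A minimal sketch of that pattern, assuming `torch` is available and falling back to the conventional `LOCAL_RANK` environment variable otherwise (the helper name `resolve_device` is hypothetical, not from the commit):

```python
import os

def resolve_device(local_rank_env="LOCAL_RANK"):
    """Pick a CUDA device index, preferring torch.cuda.current_device()
    over the LOCAL_RANK environment variable when CUDA is usable."""
    try:
        import torch  # optional in this sketch; the commit targets torch code
        if torch.cuda.is_available():
            return torch.cuda.current_device()
    except ImportError:
        pass
    # Fallback: the older behavior of trusting local_rank from the launcher.
    return int(os.environ.get(local_rank_env, 0))
```

On a machine without CUDA (or without torch) this simply returns the launcher-provided rank, defaulting to `0`.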
2023-09-14 22:49:27 -04:00