bump transformers and set roundup_power2_divisions for more VRAM improvements, low bit ao optimizers (#1769)

* bump transformers and set roundup_power2_divisions for more VRAM improvements (see the allocator sketch below)

* support for low-bit optimizers from torch ao

* fix check for alternate optimizers and use Nous models on HF for Llama 3

* add missing check for ao_adamw_fp8

* fix check when using custom optimizers with AdamW
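As a pointer for the first bullet, here is a minimal sketch of how a `roundup_power2_divisions` allocator hint can be applied. The environment variable is PyTorch's documented `PYTORCH_CUDA_ALLOC_CONF` knob; the division count of 16 is an illustrative assumption, not a value read out of this diff:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation.
# roundup_power2_divisions asks PyTorch's caching allocator to round
# allocation requests to finer power-of-two subdivisions, which can reduce
# fragmentation and peak VRAM. 16 is an illustrative value, not necessarily
# the commit's exact setting.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "roundup_power2_divisions:16")

import torch  # imported after the env var is set so the allocator picks it up
```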
author Wing Lian
date 2024-07-19 00:47:07 -04:00 (committed by GitHub)
parent 7830fe04b5
commit e4063d60a7
9 changed files with 64 additions and 10 deletions

docs/torchao.qmd (new file)

@@ -0,0 +1,19 @@
---
title: "PyTorch ao"
description: "Custom data types and layouts for training and inference"
---
### Installation
Stable release from the PyTorch index:
```bash
pip install torchao --extra-index-url https://download.pytorch.org/whl/cu121 # full options are cpu/cu118/cu121/cu124
```
Nightly release:
```bash
pip install --pre torchao-nightly --index-url https://download.pytorch.org/whl/nightly/cu121 # full options are cpu/cu118/cu121/cu124
```
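
To complement the install commands, a minimal usage sketch of the low-bit optimizers this commit wires up. It assumes torchao's prototype `low_bit_optim` module and its `AdamW8bit` class; as a prototype API, the import path and names may shift between releases:

```python
import torch
from torchao.prototype.low_bit_optim import AdamW8bit  # prototype API; may change

model = torch.nn.Linear(512, 512).cuda()

# Drop-in replacement for torch.optim.AdamW that keeps optimizer state
# in 8 bits, shrinking optimizer-state VRAM roughly 4x versus fp32 state.
optim = AdamW8bit(model.parameters(), lr=1e-4)

loss = model(torch.randn(8, 512, device="cuda")).sum()
loss.backward()
optim.step()
optim.zero_grad()
```

Within axolotl, the commit exposes these through optimizer names such as `ao_adamw_fp8` in the training config, per the bullets in the commit message above.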