QAT and quantization w/torchao
This commit is contained in:
salman
2025-05-28 12:35:47 +01:00
committed by GitHub
parent 20fda75917
commit 5fca214108
26 changed files with 1372 additions and 13 deletions

@@ -209,6 +209,16 @@ axolotl delinearize-llama4 --model path/to/model_dir --output path/to/output_dir
This is necessary when using the model with other frameworks. If you have an adapter, merge it into the non-quantized linearized model before delinearizing.
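As a sketch of that workflow, assuming a trained LoRA adapter and placeholder paths (the directory names below are illustrative, not part of the docs):

```bash
# Merge the LoRA adapter into the (non-quantized) linearized base model first
axolotl merge-lora config.yml --lora-model-dir path/to/adapter_dir

# Then delinearize the merged model for use with other frameworks
axolotl delinearize-llama4 --model path/to/merged_model_dir --output path/to/output_dir
```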
### quantize
Quantizes a model using the quantization configuration specified in your YAML file.
```bash
axolotl quantize config.yml
```
See [Quantization](./quantize.qmd) for more details.
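As a rough sketch, the quantization section of the YAML might look like the following. The field names here are assumptions for illustration only; consult the Quantization guide above for the actual schema supported by your axolotl version.

```yaml
# Illustrative only — keys and values are assumptions, not the verified schema.
base_model: ./outputs/finetuned-model

quantization:
  weight_dtype: int4      # target dtype for quantized weights
  activation_dtype: int8  # optional activation quantization
  group_size: 32          # granularity of per-group quantization
```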
## Legacy CLI Usage