quick formatting fix for LoRA optims doc

This commit is contained in:
Dan Saunders
2025-02-19 14:17:20 +00:00
parent 8dfadc2b3c
commit 02efd7e83d


@@ -12,6 +12,7 @@ to leverage operator fusion and tensor re-use in order to improve speed and reduce
memory usage during the forward and backward passes of these calculations.
We currently support several common model architectures, including (but not limited to):
- `llama`
- `mistral`
- `qwen2`
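For intuition about what these optimized paths compute, here is a minimal NumPy sketch of a LoRA forward pass. This is an illustration of the underlying math only, not the library's actual fused kernels; all names (`lora_forward`, `alpha`, the shapes) are assumptions for the example. The key property the optimizations exploit is that the low-rank update `(x Aᵀ) Bᵀ` is mathematically equivalent to applying the merged weight `W + (alpha/r)·BA`, while the factored form touches far fewer parameters.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """LoRA linear layer: y = x W^T + (alpha/r) * (x A^T) B^T.

    x: (batch, d_in), W: (d_out, d_in),
    A: (r, d_in), B: (d_out, r) -- low-rank adapter factors.
    """
    r = A.shape[0]
    scaling = alpha / r
    # Factored form: two small matmuls instead of materializing B @ A.
    return x @ W.T + scaling * ((x @ A.T) @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r, batch = 32, 16, 4, 2
x = rng.standard_normal((batch, d_in))
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))
B = rng.standard_normal((d_out, r))

y = lora_forward(x, W, A, B)

# Equivalent "merged" computation used to sanity-check the factored form.
scaling = 16.0 / r
y_merged = x @ (W + scaling * (B @ A)).T
assert np.allclose(y, y_merged)
```

The fused implementations referenced above perform this same computation but combine the adapter matmuls, scaling, and any activation into fewer kernels, which is where the speed and memory savings come from.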