Dan Saunders
3d8425fa91
Activation function Triton kernels, LoRA custom autograd functions (#2324)
* LoRA + activation fn Triton kernels: initial commit
* implementing optims
* finalizing MLP LoRA kernels and progress on QKV / W kernels
* updates
* O projection optim
* adding monkey patching logic
* doc strings, typing, pre-commit fixes
* updates
* adding lora 8b kernels example
* working on fsdp support
* tests and fixes
* small fixes, getting tests to pass, adding doc strings
* integration tests for LoRA patching
* config.qmd
* remove unneeded pytest fixture
* fix
* review comments first pass
* improving tests, attention class agnostic patching
* adding support for more archs
* WIP SiLU / GELU implementations
* improved testing, small updates, etc.
* slightly updating docs
* rebase
* fixing test_attention_patching_integration
* additional review comments, fixing test in CI (hopefully)
* isolating problematic patching test
* relaxing allclose threshold to reduce flakiness
* fixing accidental change
* adding model arch agnostic attention class fetching
* removing unused activations
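The Triton kernels themselves are not reproduced here, but the "LoRA custom autograd functions" the PR title refers to all have to implement the same forward/backward math. As a rough illustration only, here is a minimal NumPy sketch of that math (function names and shapes are hypothetical, not taken from the PR):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass: y = x W^T + (alpha/r) * (x A^T) B^T.

    x: (batch, d_in), W: (d_out, d_in), A: (r, d_in), B: (d_out, r).
    Returns the output and the intermediate xA saved for backward.
    """
    scale = alpha / r
    xA = x @ A.T                      # (batch, r), cached for the backward pass
    return x @ W.T + scale * (xA @ B.T), xA

def lora_backward(dy, x, W, A, B, xA, alpha, r):
    """Manual gradients, as a custom autograd function would compute them."""
    scale = alpha / r
    dyB = dy @ B                      # (batch, r)
    dx = dy @ W + scale * (dyB @ A)   # gradient w.r.t. the input
    dA = scale * dyB.T @ x            # gradient w.r.t. the LoRA A matrix
    dB = scale * dy.T @ xA            # gradient w.r.t. the LoRA B matrix
    return dx, dA, dB
```

Because the output is linear in A and B, these gradients can be checked cheaply against finite differences; an actual implementation would wrap this math in `torch.autograd.Function` and fuse the matmuls into Triton kernels.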
2025-02-17 14:23:15 -05:00
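Several commits above mention "monkey patching logic" and "attention class agnostic patching". The general pattern, independent of this PR's actual code, is to swap a module's bound `forward` for a replacement while keeping a handle to the original; a minimal sketch with entirely hypothetical names:

```python
import types

class Attention:
    """Stand-in for any attention class; only a `forward` attribute is assumed."""
    def forward(self, x):
        return x  # placeholder for the original attention computation

def patch_forward(module, new_forward):
    # Class-agnostic: any object with a `forward` attribute can be patched
    # the same way, and the original stays reachable for fallback/unpatching.
    module._orig_forward = module.forward
    module.forward = types.MethodType(new_forward, module)

def fused_forward(self, x):
    # A real patch would dispatch to a fused Triton kernel here;
    # this sketch just delegates to the saved original.
    return self._orig_forward(x)

attn = Attention()
patch_forward(attn, fused_forward)
```

Patching the instance (rather than the class) keeps the change scoped, which matters when only some layers should use the fused kernels.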