allow the optimizer prune ratio for ReLoRA to be configurable (#1287)

* allow the optimizer prune ratio for relora to be configurable

* update docs for relora

* prevent circular imports
This commit is contained in:
Wing Lian
2024-02-12 11:39:51 -08:00
committed by GitHub
parent fac2d98c26
commit 4b997c3e1a
3 changed files with 23 additions and 4 deletions


@@ -734,6 +734,8 @@ peft:
# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed
relora_steps: # Number of steps per ReLoRA restart
relora_warmup_steps: # Number of per-restart warmup steps
relora_anneal_steps: # Number of anneal steps for each ReLoRA cycle
relora_prune_ratio: # Magnitude threshold (fraction of entries pruned) applied to optimizer states at each restart
relora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings
# wandb configuration if you're using it
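For context, the ReLoRA paper's optimizer reset prunes optimizer state entries by magnitude, keeping only the largest values; `relora_prune_ratio` controls the fraction pruned. The following is a minimal, hypothetical sketch of magnitude pruning over a flat list of values (a stand-in for an Adam moment tensor); the function name and signature are illustrative, not axolotl's actual API.

```python
def magnitude_prune(state, prune_ratio):
    """Return a copy of `state` with the smallest-magnitude entries zeroed.

    state: list of floats (stand-in for an optimizer moment tensor)
    prune_ratio: fraction of entries to zero (0.0 keeps all, 1.0 zeroes all)
    """
    n_prune = int(len(state) * prune_ratio)
    if n_prune == 0:
        return list(state)
    # Threshold is the magnitude of the n_prune-th smallest entry;
    # anything at or below it is zeroed out.
    threshold = sorted(abs(x) for x in state)[n_prune - 1]
    return [x if abs(x) > threshold else 0.0 for x in state]


moments = [0.5, -0.01, 0.2, 0.003, -0.8]
# With prune_ratio=0.4, the two smallest-magnitude entries are zeroed:
print(magnitude_prune(moments, 0.4))  # [0.5, 0.0, 0.2, 0.0, -0.8]
```

A higher `relora_prune_ratio` discards more of the accumulated optimizer state at each restart, which is the mechanism this commit makes configurable rather than hard-coded.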