Feat(readme): improve docs on multi-gpu

Author: NanoCode012
Date: 2023-07-16 01:07:27 +09:00
parent 168a7a09cc
commit cf5ae6b649


@@ -36,8 +36,6 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl
pip3 install -e .
pip3 install -U git+https://github.com/huggingface/peft.git
accelerate config
# finetune lora
accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
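`accelerate config` walks through an interactive setup. For multi-GPU runs it can also be bypassed by passing the settings as launch flags — a minimal sketch using the standard Accelerate CLI options (adjust `--num_processes` to your GPU count):
```bash
# Sketch: skip the interactive wizard and pass multi-GPU settings at launch time.
# Assumes 2 GPUs; set --num_processes to the number of GPUs on your machine.
accelerate launch --multi_gpu --num_processes 2 scripts/finetune.py examples/openllama-3b/lora.yml
```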
@@ -525,6 +523,21 @@ Run
accelerate launch scripts/finetune.py configs/your_config.yml
```
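To pin a run to specific devices, the standard CUDA environment variable can be prepended to the launch command — a sketch, not an axolotl-specific flag:
```bash
# Sketch: restrict the run to GPUs 0 and 1 via the standard CUDA env var.
CUDA_VISIBLE_DEVICES=0,1 accelerate launch scripts/finetune.py configs/your_config.yml
```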
#### Multi-GPU Config
- llama FSDP
```yaml
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_offload_params: true
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
```
- llama DeepSpeed: prepend `ACCELERATE_USE_DEEPSPEED=true` to the finetune command, as in the sketch below
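For example, reusing the launch command from above (the config path is a placeholder for your own config):
```bash
# Sketch: enable DeepSpeed through Accelerate for this invocation only.
ACCELERATE_USE_DEEPSPEED=true accelerate launch scripts/finetune.py configs/your_config.yml
```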
### Inference
Pass the appropriate flag to the train command:
@@ -575,6 +588,10 @@ Try setting `fp16: true`
Try to turn off xformers.
> Message about accelerate config missing
It's safe to ignore it.
## Need help? 🙋‍♂️
Join our [Discord server](https://discord.gg/HhrNrHJPRb) where we can help you