Merge pull request #279 from NanoCode012/feat/multi-gpu-readme

Feat(readme): improve docs on multi-gpu
This commit is contained in:
Wing Lian
2023-07-16 16:08:37 -04:00
committed by GitHub


@@ -36,8 +36,6 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl
```bash
pip3 install -e .
pip3 install -U git+https://github.com/huggingface/peft.git
accelerate config
# finetune lora
accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
```
@@ -532,6 +530,21 @@ Run
```bash
accelerate launch scripts/finetune.py configs/your_config.yml
```
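For multi-GPU runs, the process count can also be passed explicitly on the command line instead of relying on a saved `accelerate config` (a sketch; `--multi_gpu` and `--num_processes` are standard `accelerate launch` flags, and the config path is a placeholder):

```shell
# Launch across 2 GPUs without a saved accelerate config
# (adjust --num_processes and the YAML path to your setup)
accelerate launch --multi_gpu --num_processes 2 scripts/finetune.py configs/your_config.yml
```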
#### Multi-GPU Config
- llama FSDP
```yaml
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_offload_params: true
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
```
- llama Deepspeed: prepend `ACCELERATE_USE_DEEPSPEED=true` to the finetune command
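Concretely, the DeepSpeed launch then looks like this (a sketch; the config path is a placeholder for your own YAML):

```shell
# Enable accelerate's DeepSpeed integration via the environment variable
ACCELERATE_USE_DEEPSPEED=true accelerate launch scripts/finetune.py configs/your_config.yml
```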
### Inference
Pass the appropriate flag to the train command:
@@ -582,6 +595,10 @@ Try setting `fp16: true`
Try turning off xformers.
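Both tips correspond to keys in the training YAML (a sketch; `xformers_attention` is an assumed name for the xformers toggle and may differ in your axolotl version):

```yaml
fp16: true                 # tip 1: enable fp16
xformers_attention: false  # tip 2: assumed toggle name; disables xformers
```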
> Message about accelerate config missing
It's safe to ignore it.
## Need help? 🙋‍♂️
Join our [Discord server](https://discord.gg/HhrNrHJPRb) where we can help you.