Overview
This is a simple example of how to finetune TinyLlama 1.1B using either LoRA or QLoRA:
LoRA:

```shell
accelerate launch -m axolotl.cli.train examples/tiny-llama/lora.yml
```
QLoRA:

```shell
accelerate launch -m axolotl.cli.train examples/tiny-llama/qlora.yml
```
Both runs take about 10 minutes to complete on a single RTX 4090.
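For a rough sense of how the two configs differ, here is an illustrative sketch of the relevant fields (field names follow axolotl's config schema; the model id and values below are placeholders, not the exact contents of the shipped example files):

```yaml
# Illustrative excerpt only — see examples/tiny-llama/lora.yml and qlora.yml
# for the real, complete configs. The model id below is a placeholder.
base_model: TinyLlama/TinyLlama-1.1B

# lora.yml: LoRA adapter on top of (optionally 8-bit) base weights
adapter: lora
load_in_8bit: true

# qlora.yml instead quantizes the frozen base weights to 4-bit,
# which is what makes it fit in less VRAM:
# adapter: qlora
# load_in_4bit: true

# Adapter hyperparameters (shared shape between the two variants)
lora_r: 32          # rank of the low-rank update matrices
lora_alpha: 16      # scaling factor applied to the adapter output
lora_dropout: 0.05
```

In short, QLoRA trains the same low-rank adapters as LoRA but keeps the frozen base model in 4-bit precision, trading a small amount of quality for a large reduction in GPU memory.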