
Overview

This is a simple example of how to fine-tune TinyLlama-1.1B using either LoRA or QLoRA:

LoRA:

accelerate launch -m axolotl.cli.train examples/tiny-llama/lora.yml

QLoRA:

accelerate launch -m axolotl.cli.train examples/tiny-llama/qlora.yml

Both runs take about 10 minutes to complete on a single RTX 4090.
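
The two config files differ mainly in how the base model is loaded: LoRA trains adapters on top of a quantized-or-full-precision model, while QLoRA loads the base model in 4-bit. As a rough sketch of the adapter-related portion of such a config (key names follow Axolotl's config schema, but the specific values and model ID below are illustrative assumptions, not copied from lora.yml or qlora.yml):

```yaml
# Illustrative sketch — not the actual contents of examples/tiny-llama/lora.yml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0  # assumed model identifier

adapter: lora            # qlora.yml would use "qlora" here
load_in_8bit: true       # LoRA variant: 8-bit base model
# load_in_4bit: true     # QLoRA variant: 4-bit base model instead

# LoRA hyperparameters (values are illustrative)
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true # attach adapters to all linear layers
```

Consult the actual lora.yml and qlora.yml files in this directory for the exact settings used in these examples.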