Commit Graph

807 Commits

Wing Lian
f5a828aa20 Qwen2 (#1166)
* qwen2 multipack support

* fix qwen derived model check so it doesn't break qwen2

* fixes to ensure qwen2 packing works

* bump requirements for qwen2

* requirements typo
2024-01-22 18:24:15 -05:00
Wing Lian
fccb542b47 make sure the model config loader respects the model_revision too (#1160) [skip-ci] 2024-01-22 13:23:14 -05:00
Wing Lian
2ce5c0d68a Deprecate max packed sequence len (#1141) 2024-01-20 05:11:50 -05:00
NanoCode012
3db5f2fd17 feat(dataset): add config to keep processed dataset in memory (#1152) 2024-01-20 13:19:28 +09:00
Wing Lian
6910e6a8ca Multipack simplify for Mixtral (#1142) 2024-01-18 16:23:49 -05:00
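Multipack packs several short examples into a single max-length row to cut padding waste. A minimal sketch of the packing idea (first-fit decreasing over token lengths; not axolotl's actual sampler):

```python
def pack_greedy(lengths, max_len):
    """Pack sequence lengths into bins of capacity max_len using
    first-fit decreasing; returns lists of original indices per bin."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins, loads = [], []
    for i in order:
        for b, load in enumerate(loads):
            if load + lengths[i] <= max_len:
                bins[b].append(i)
                loads[b] += lengths[i]
                break
        else:  # no existing bin fits: open a new one
            bins.append([i])
            loads.append(lengths[i])
    return bins

# pack_greedy([900, 300, 200, 700, 100], 1024) -> [[0, 4], [3, 1], [2]]
```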
Joe Cummings
1d70f24b50 Add shifted sparse attention (#973) [skip-ci]
* Add s2_attn to hijack flash code

* Refactor code to account for s2_attn

* Add test for models utils

* Add ``s2_attention`` option to llama configs

* Add ``s2_attention`` option to README config

* Format code to appease linter

* chore: lint

* Remove xpos and llama-landmark [bad merge]

* add e2e smoke tests for shifted sparse attention

* remove stray patch from merge

* update yml with link to paper for s2_attention/longlora

* fix assertion check for full fine tune

* increase sequence len for tests and PR feedback updates

* reduce context len to 16k for tests

* reduce context len to 16k for tests

* reduce batch size for larger context len and update test to check message

* fix test for message

---------

Co-authored-by: joecummings <jrcummings@devvm050.nha0.facebook.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-18 10:16:07 -05:00
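Shifted sparse attention (LongLoRA) restricts attention to fixed-size groups and shifts half the heads by half a group so information still flows across group boundaries. A rough sketch of the shift, assuming (batch, seq, heads, dim) tensors; the real patch lives in the hijacked flash-attention path:

```python
import torch

def shift_half_heads(qkv: torch.Tensor, group_size: int) -> torch.Tensor:
    """Roll the second half of the heads back by half a group along the
    sequence axis, so their attention groups straddle group boundaries.
    qkv: (batch, seq_len, num_heads, head_dim)
    """
    out = qkv.clone()
    half = qkv.shape[2] // 2
    out[:, :, half:] = torch.roll(out[:, :, half:], shifts=-group_size // 2, dims=1)
    return out
```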
Wing Lian
317fa2555a fix bf16 check when preprocessing data (#1140) 2024-01-17 22:41:23 -05:00
NanoCode012
1e56b88cde fix(preprocess): Make sure dataset not loaded from cache when using preprocess cli (#1136) 2024-01-18 03:03:52 +09:00
Wing Lian
7570446596 Preprocess dataset size fix (#1131)
* overwrite cache on preprocess step
* don't cache the TokenizedPromptDataset at all
* load_from_cache_file no longer needed
2024-01-17 11:02:41 -05:00
xzuyn
8487b97cf3 Add layers_to_transform for lora_config (#1118) 2024-01-15 21:29:55 -05:00
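`layers_to_transform` lets PEFT apply LoRA to only a subset of decoder layers. A small example of the underlying `LoraConfig` field (the layer indices here are illustrative):

```python
from peft import LoraConfig

# Apply LoRA only to the last 8 layers of a 32-layer model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    layers_to_transform=list(range(24, 32)),
)
```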
Simon Hällqvist
086561326f Enable or disable bf16 support based on availability (#1116) 2024-01-14 12:06:56 -05:00
Casper
2202a20f60 Reverse caching PR (#1115) 2024-01-13 10:17:40 -05:00
Casper
d66b10141e Disable caching on --disable_caching in CLI (#1110)
* Disable caching on `--disable_caching` in CLI

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-13 10:13:35 +01:00
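The flag presumably maps onto the global switch in HF `datasets`; a sketch of the call it would wrap:

```python
from datasets import disable_caching

# Disable the on-disk Arrow cache process-wide before any .map() calls,
# so preprocessing recomputes rather than reusing stale cache files.
disable_caching()
```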
Wing Lian
da97285e63 keep gate in fp32 for 16 bit loras (#1105)
* keep gate in fp32 for loras

* add e2e check for lora w/o flash attention for mixtral to check gate

* add checks for gate in fp32 for mixtral, add typehints to train outputs

* mixtral doesn't support basic lora 🤦

add lora tests @ 16bit and fix gate layer check
fix the parameter name, was using the old disco name
don't lora over the gate so we can check that is in fp32
fix dtype check

* ensure we're using fp16/bf16 for 16bit and qlora is always going to be in uint8
2024-01-12 14:58:21 -05:00
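The fix keeps Mixtral's MoE router (the `gate` linear) in float32 even when the rest of the model is loaded in 16-bit for LoRA. A sketch of the upcast, assuming `model` is an already-loaded Mixtral:

```python
import torch

# Upcast the expert-routing linear layers back to fp32; the routing
# softmax is numerically fragile in fp16/bf16.
for name, module in model.named_modules():
    if name.endswith("block_sparse_moe.gate"):
        module.to(torch.float32)
```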
NanoCode012
b432889256 feat: enable trl's autounwrap (#1060)
* feat: test trl's autounwrap

* fix: add check for adapter

* feat: add config to disable autounwrap

* chore: fix lint
2024-01-11 08:43:41 -05:00
Wing Lian
78c5b1979e add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083) 2024-01-10 22:32:43 -05:00
Wing Lian
23495a80af misc fixes from #943 (#1086) [skip ci] 2024-01-10 22:31:36 -05:00
Wing Lian
90036ebbc6 optimize calculation of cu_seqlens from position_ids (#1084) [skip ci] 2024-01-10 11:54:50 -05:00
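With sample packing, `position_ids` restart at 0 for each packed sequence, so the `cu_seqlens` offsets that flash-attention's varlen kernels need can be derived directly. A minimal sketch for a single packed row (not the optimized implementation from the commit):

```python
import torch
import torch.nn.functional as F

def cu_seqlens_from_position_ids(position_ids: torch.Tensor) -> torch.Tensor:
    """position_ids like [0,1,2,3,0,1,2,0,1] -> cu_seqlens [0,4,7,9]."""
    starts = torch.nonzero(position_ids == 0, as_tuple=True)[0]
    total = torch.tensor([position_ids.numel()])
    seq_lens = torch.diff(starts, append=total)
    # Exclusive prefix sum, int32, as expected by flash-attn varlen kernels.
    return F.pad(seq_lens.cumsum(0), (1, 0)).to(torch.int32)
```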
NanoCode012
d69ba2b0b7 fix: warn user to install mamba_ssm package (#1019) 2024-01-10 02:50:56 -05:00
Wing Lian
2f2582e6ed additional logging to get maximum token length of a sequence in the dataset (#1066) [skip ci]
* additional logging to get maximum token length of a sequence in the dataset

* fix ordering to properly determine the max_len of tokens before dropping anything longer
2024-01-10 00:49:31 -05:00
Wing Lian
0ce1a6594e update sharegpt conversations when chatml chat template is set (#1075) [skip ci]
* update sharegpt conversations when chatml chat template is set

* add info log when updating sharegpt/chatml conversation
2024-01-10 00:49:07 -05:00
NanoCode012
043c3860cd fix: train_on_inputs: true ignored for sharegpt (#1045) [skip ci]
* fix: `train_on_inputs: true` ignored for sharegpt

* enable unit test for train_on_inputs for sharegpt

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-09 23:00:09 -05:00
Wing Lian
0f100800e3 be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]
* be more robust about checking embedding modules for lora finetunes

* update dynamic error message
2024-01-09 22:58:54 -05:00
Wing Lian
ead34c516a swap the data collator for evals if not using sample packing (#1076)
* swap the data collator for evals if not using sample packing

* drop last from dataloader to help with issues with evals
2024-01-09 22:16:24 -05:00
Wing Lian
d7057ccd36 paired kto support (#1069) 2024-01-09 13:30:45 -05:00
Johan Hansson
090c24dcb0 Add: mlflow for experiment tracking (#1059) [skip ci]
* Update requirements.txt

adding mlflow

* Update __init__.py

Imports for mlflow

* Update README.md

* Create mlflow_.py (#1)

* Update README.md

* fix precommits

* Update README.md

Update mlflow_tracking_uri

* Update trainer_builder.py

update trainer building

* chore: lint

* make ternary a bit more readable

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-09 09:34:09 -05:00
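The integration presumably rides on the HF Trainer's built-in MLflow reporter; a minimal sketch of the moving parts the new `mlflow_tracking_uri` option would configure (URI illustrative):

```python
import os
from transformers import TrainingArguments

# MLflow picks the server up from this environment variable.
os.environ["MLFLOW_TRACKING_URI"] = "http://localhost:5000"

# report_to=["mlflow"] enables transformers' MLflowCallback, which logs
# hyperparameters and training metrics for each run.
args = TrainingArguments(output_dir="out", report_to=["mlflow"])
```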
Wing Lian
651b7a31fc fix double eos token for chatml (#1054) [skip ci]
* fix double eos token for chatml

* isolate fix to chatml conversation

* fix add special tokens to include rstrip

* add test for train_on_inputs for sharegpt

* don't use rstrip for chatml
2024-01-09 09:33:38 -05:00
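The `rstrip` behavior comes from how special tokens are registered with the tokenizer: with `rstrip=True`, whitespace after the token is swallowed, which interacts badly with chatml's `<|im_end|>` plus newline turn endings. A sketch of the relevant knob:

```python
from tokenizers import AddedToken

# Register the chatml end-of-turn marker without stripping surrounding
# whitespace, so the trailing newline after each turn survives.
im_end = AddedToken("<|im_end|>", lstrip=False, rstrip=False)
```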
Ricardo Dominguez-Olmedo
04b978b428 Cosine learning rate schedule - minimum learning rate (#1062)
* Cosine min lr

* Cosine min lr - warn if using deepspeed

* cosine_min_lr_ratio readme

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-09 09:29:56 -05:00
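A standard cosine schedule decays to zero; the new option floors it at a fraction of the peak LR instead. A minimal sketch of the multiplier, assuming linear warmup (names are illustrative):

```python
import math

def cosine_with_min_lr(step, total_steps, warmup_steps, min_lr_ratio):
    """LR multiplier in [min_lr_ratio, 1]: linear warmup, then cosine
    decay toward min_lr_ratio instead of 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return min_lr_ratio + (1.0 - min_lr_ratio) * cosine
```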
NanoCode012
c3e8165f26 fix: torch_dtype mistral default to fp32 (#1050) 2024-01-09 07:48:15 -05:00
Ricardo Dominguez-Olmedo
81d384598e Efficiently get the length of the tokenized docs (#1063)
* Efficiently get the length of the tokenized docs

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-08 15:48:30 -05:00
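The efficient route is to let `datasets.map` compute per-example token counts in parallel rather than looping in Python. A sketch, assuming `tokenized_ds` is an already-tokenized `datasets.Dataset`:

```python
# Compute token counts across worker processes, then reduce.
lens = tokenized_ds.map(
    lambda ex: {"n_tokens": len(ex["input_ids"])},
    num_proc=8,
)
max_len = max(lens["n_tokens"])
```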
Wing Lian
732851f105 Phi2 rewrite (#1058)
* restore to current phi modeling code from phi-2

* enable gradient checkpointing

* don't cast everything to float32 all the time

* gradient checkpointing for phi2 ParallelBlock module too

* fix enabling flash attn for phi2

* add comment about import

* fix phi2 example

* fix model type check for tokenizer

* revert float32 -> bf16 casting changes

* support fused dense flash attn

* fix the repo for flash-attn

* add package name for subdir pkg

* fix the data collator when not using sample packing

* install packaging for pytests in ci

* also fix setup to not install flash attn fused dense subdir if not extras

* split out the fused-dense-lib in extra requires

* don't train w group_by_length for phi

* update integration test to use phi2

* set max steps and save steps for phi e2e tests

* try to work around save issue in ci

* skip phi2 e2e test for now
2024-01-08 14:04:22 -05:00
JinK
553c80f79a streaming multipack for pretraining dataset (#959)
* [Feat] streaming multipack

* WIP make continued pretraining work w multipack

* fix up hardcoding, lint

* fix dict check

* update test for updated pretraining multipack code

* fix hardcoded data collator fix for multipack pretraining

* fix the collator to be the max length for multipack pretraining

* don't bother with latest tag for test

* cleanup docker build/test

---------

Co-authored-by: jinwonkim93@github.com <jinwonkim>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-05 22:13:21 -05:00
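For pretraining corpora streamed from disk or the Hub, packing has to happen on the fly instead of over a materialized dataset. A simplified sketch of the idea (the real collator also handles attention masks and max lengths):

```python
def stream_pack(token_streams, seq_len):
    """Pack an iterator of tokenized examples into rows of at most
    seq_len tokens, yielding each row as soon as it is full."""
    buffer = []
    for tokens in token_streams:
        tokens = tokens[:seq_len]  # truncate overly long examples
        if len(buffer) + len(tokens) > seq_len:
            yield buffer
            buffer = []
        buffer += tokens
    if buffer:
        yield buffer
```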
NanoCode012
cbdbf9e6e5 feat: always push checkpoint to hub if set (#1049) [skip ci] 2024-01-05 13:09:42 -05:00
kallewoof
bdfefaf054 feature: better device mapping for large models (#918)
* fix: improved memory handling when model is bigger than existing VRAM

* feature: add lora_on_cpu flag to do LoRA loading on CPU (RAM)

For big models that take up the entire GPU VRAM, the LoRA part will fail unless it is loaded on CPU only.

* doc: add README

* fix: enable progress bars in do_merge_lora()

* doc: mention gpu_memory_limit and lora_on_cpu in merge part of README

* Update src/axolotl/utils/models.py

Co-authored-by: Wing Lian <wing.lian@gmail.com>

* fix: remove deletion of removed model_kwargs key

* fix: validate that gpu_memory_limit and max_memory are not both set

---------

Co-authored-by: Karl-Johan Alm <kalle@gmail.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-05 22:22:21 +09:00
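The `gpu_memory_limit` option presumably translates into the `max_memory` map consumed by accelerate's auto device placement, so weights spill to CPU RAM instead of OOMing; `lora_on_cpu` keeps the adapter load off the GPU. A sketch of the underlying `from_pretrained` call (model name and limits illustrative):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    device_map="auto",
    # Cap GPU 0 so the remainder of the weights is placed in CPU RAM.
    max_memory={0: "20GiB", "cpu": "96GiB"},
)
```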
Hamel Husain
63fb3eb426 set default for merge (#1044) 2024-01-04 18:14:20 -08:00
Hamel Husain
31d23504a5 fix model card upload for PEFT models (#1043) 2024-01-04 18:13:54 -08:00
Wing Lian
f243c2186d RL/DPO (#935)
* ipo-dpo trainer

* fix missing abstract method

* chatml template, grad checkpointing kwargs support

* fix steps calc for RL and add dataloader kwargs

* wip to fix dpo and start ppo

* more fixes

* refactor to generalize map fn

* fix dataset loop and handle argilla pref dataset

* set training args

* load reference model on separate gpu if more than one device

* no auto upload to hub for dpo, don't add lora adapters to ref model for dpo

* fixes for rl training

* support for ipo from yaml

* set dpo training args from the config, add tests

* chore: lint

* set sequence_len for model in test

* add RLHF docs
2024-01-04 18:22:55 -05:00
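The trainer builds on TRL's `DPOTrainer`; a minimal sketch of that API as it stood at the time (model, reference model, tokenizer, and the preference dataset assumed already constructed):

```python
from trl import DPOTrainer

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,         # or None to let TRL derive a reference copy
    beta=0.1,                    # DPO temperature; IPO uses its own loss_type
    args=training_args,
    train_dataset=paired_prefs,  # columns: prompt, chosen, rejected
    tokenizer=tokenizer,
)
trainer.train()
```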
xaviviro
59b2d302c8 Added chatglm3 conversation type for training models like TinyLLama (#1036)
* Added chatglm3 conversation type for training models like TinyLLama

* Added chatglm3 conversation type for training models like TinyLLama with lint

* Added chatglm3 conversation type for training models like TinyLLama with lint
2024-01-04 21:03:04 +09:00
Wing Lian
bcc78d8fa3 bump transformers and update attention class map name (#1023)
* bump transformers and update attention class map name

* also run the tests in docker

* add mixtral e2e smoke test

* fix base name for docker image in test

* mixtral lora doesn't seem to work, at least check qlora

* add testcase for mixtral w sample packing

* check monkeypatch for flash attn multipack

* also run the e2e tests in docker

* use all gpus to run tests in docker ci

* use privileged mode too for docker w gpus

* rename the docker e2e actions for gh ci

* set privileged mode for docker and update mixtral model self attn check

* use fp16/bf16 for mixtral w fa2

* skip e2e tests on docker w gpus for now

* tests to validate mistral and mixtral patches

* fix rel import
2024-01-03 12:11:04 -08:00
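transformers 4.36 replaced the per-model flash-attention flags with a class map keyed by `attn_implementation`, which is what this bump tracks. The new entry point, sketched (model name illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    torch_dtype=torch.bfloat16,  # FA2 requires fp16/bf16 weights
    attn_implementation="flash_attention_2",
)
```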
NanoCode012
74532ddc45 chore(config): clean up old log for Qwen (#1034) 2024-01-04 01:19:52 +09:00
Wing Lian
4d2e842e46 use recommended setting for use_reentrant w gradient checkpointing (#1021)
* use recommended setting for use_reentrant w gradient checkpointing

* add doc for gradient_checkpointing_kwargs
2024-01-01 22:17:27 -05:00
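PyTorch recommends the non-reentrant checkpointing variant, and transformers >= 4.35 lets you pass it through. A sketch of the call the config option wires up:

```python
# Non-reentrant checkpointing is PyTorch's recommended variant and plays
# better with gradients over partially frozen models (e.g. LoRA).
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
```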
Tazik Shahjahan
3678a6c41d Fix: bf16 support for inference (#981)
* Fix: bf16 torch dtype

* simplify casting to device and dtype

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2023-12-29 16:15:53 -06:00
mhenrichsen
f8ae59b0a8 Adds chat templates (#1022) 2023-12-29 15:44:23 -06:00
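Chat templates let the tokenizer itself render multi-turn conversations. A small example of the `transformers` API involved, assuming `tokenizer` ships a template (e.g. chatml):

```python
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]
# Render the conversation with the tokenizer's built-in chat template.
text = tokenizer.apply_chat_template(messages, tokenize=False)
```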
Hamel Husain
4f4d638b84 [WandB] Push axolotl config to top level wandb files (#1014) 2023-12-29 10:52:12 -08:00
Wing Lian
ba043a361e add ultrachat prompt strategies (#996) 2023-12-29 12:23:29 -06:00
NanoCode012
41353d2ea0 feat: expose bnb kwargs (#1018)
* feat: expose bnb kwargs

* chore: added examples and link per suggestion

* Uncomment defaults per suggestion for readability

Co-authored-by: Hamel Husain <hamel.husain@gmail.com>

---------

Co-authored-by: Hamel Husain <hamel.husain@gmail.com>
2023-12-29 18:16:26 +09:00
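The exposed knobs presumably map onto `BitsAndBytesConfig` fields; a sketch of a 4-bit setup with the commonly tuned options:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # or "fp4"
    bnb_4bit_use_double_quant=True,       # second quantization of the scales
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```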
NanoCode012
f6ecf14dd4 feat: remove need to add load_in* during merge (#1017) 2023-12-29 18:15:30 +09:00
Wing Lian
70b46ca4f4 remove landmark attn and xpos rope implementations (#1010) 2023-12-27 21:07:27 -08:00
Hamel Husain
85dd4d525b add config to model card (#1005)
* add config to model card

* rm space

* apply black formatting

* apply black formatting

* fix formatting

* check for cfg attribute

* add version

* add version

* put the config in a collapsible element

* put the config in a collapsible element
2023-12-27 21:25:33 -06:00
Younes Belkada
db9094df0f FEAT: add tagging support to axolotl (#1004)
* add tagging support to axolotl

* chore: lint

* fix method w self

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2023-12-27 16:25:20 -06:00