Commit Graph

571 Commits

Author SHA1 Message Date
Agung Baptiso Sorlawan
02f2c720fc Fix generation_config validation raising an Exception for do_merge_lora (#1184) 2024-01-24 00:42:15 -05:00
James Wade
71141deb18 Add support for offline mode with HF_HUB_OFFLINE envvar (#1182)
* Add support for offline mode with HF_HUB_OFFLINE envvar

* Apply styling

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-24 00:41:47 -05:00
Wing Lian
59a31fe613 DPO fixes v2 (#1174)
* check for length before trying to remove it

* add validation for sample packing with RLHF
2024-01-23 12:56:24 -05:00
Wing Lian
814aee6603 Phi2 multipack (#1173)
* phi2 multipack

* update validation and examples for phi

* more updates to phi examples

* make sure to use the correct collator for phi multipack

* phi needs attention mask now for multipack

* if the special token already exists in the tokenizer, don't require it in lora modules to save

* fix qlora yml for phi, fix phi test validation

* test qlora too

* make sure flash attention is enabled for the test

* don't use remote code for phi anymore

* reduce sequence len for sample packing phi
2024-01-23 12:54:36 -05:00
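For reference, a minimal sketch of how a phi-2 run with multipack might be configured after this change; the keys are standard axolotl options, but the model id and sequence length are illustrative assumptions, not taken from the PR:

```yaml
base_model: microsoft/phi-2   # assumed model id for illustration
trust_remote_code: false      # the PR stops using remote code for phi
sequence_len: 2048            # the PR reduces sequence len for sample packing; value assumed
sample_packing: true
pad_to_sequence_len: true
flash_attention: true         # the PR's smoke test requires flash attention to be enabled
```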
Wing Lian
fb7f9b9516 don't fail if can't cast weights due to offload when merging (#1172) [skip ci] 2024-01-23 09:17:08 -05:00
Wing Lian
7523d1f557 DPO cleanup (#1126)
* cleanup dpo to be a little more extensible, add zephyr/nectar strategy

* fix eos slash

* support for eval split

* fix kwargs

* handle empty evals

* don't load peft model for dpo

* ensure dpo training args get bf16 for peft if applicable

* fix duplicate kwargs for bf16

* make sure to respect the configured lr scheduler

* support trainer callback to push config to wandb

* set dataloader preload args

* ensure that we are loading the lora when merging

* Update src/axolotl/utils/data.py

Co-authored-by: Agus <agustin.piqueres@gmail.com>

* support local datasets for dpo

Co-authored-by: Agus <agustin.piqueres@gmail.com>

* chore: lint

* dpo/kto/ipo smoke tests w lora, simplify dpo dataset type names

* add split to dpo tests

* fix rebase/merging error

* handle edge case w logging

* use accelerator for dpo datasets so it doesn't break the logger

* missing args

* validate checkpoint is an adapter for now

* log warning when dataset strategy is not loadable

---------

Co-authored-by: Agus <agustin.piqueres@gmail.com>
2024-01-23 00:40:37 -05:00
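A hedged sketch of a DPO run using the simplified dataset type names this PR introduces; the dataset path and the type string are illustrative assumptions:

```yaml
rl: dpo
datasets:
  - path: your-org/preference-pairs   # hypothetical dataset
    type: chatml.intel                # assumed simplified type name
    split: train
```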
Casper
684038111e Add desc to map/filter (#1162)
* Add desc to map/filter

* update descriptions

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-22 21:30:53 -05:00
Wing Lian
cda52dc32b support for explicit test_dataset definition for evals (#786) 2024-01-22 21:29:56 -05:00
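A sketch of what an explicit eval split might look like under this change, assuming a plural `test_datasets` key and an illustrative local path:

```yaml
test_datasets:
  - path: ./data/eval.jsonl   # hypothetical local file
    ds_type: json             # assumed declaration for a local json dataset
    type: alpaca
    split: train
```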
Wing Lian
e799e08d3c Falcon embeddings (#1149) [skip docker]
* also fix multipack for falcon and add smoke tests

* make sure to handle special tokens and added tokens for lora

* fix reference to model_type

* fix tests for falcon

* fix stray typo

* fixes for smoke tests
2024-01-22 21:01:42 -05:00
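Since this PR touches special tokens and LoRA-saved modules for falcon, a hedged example of the relevant config keys; the token value and module names are assumptions for illustration:

```yaml
special_tokens:
  pad_token: "<|pad|>"    # hypothetical token; skipped if it already exists in the tokenizer
lora_modules_to_save:
  - word_embeddings       # falcon embedding module name is an assumption here
  - lm_head
```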
Wing Lian
32580c1ca7 Vram fix attempt (#1164) [skip ci]
* revert order of filter/drop_long step and handle calc for max_input_len only during preprocessing

* revert some changes to preparing for packing to allow more flexibility

* prepare dataset for packing during pre-processing step

* prepare dataset hash based on sample packing too

* enclose none check

* just cast straight to string for ds hash
2024-01-22 19:54:54 -05:00
Wing Lian
802f9667a2 improve vram use w gradient checkpointing (#1167) [skip ci] 2024-01-22 19:48:22 -05:00
JohanWork
b8e5603467 Add mlflow callback for pushing config to mlflow artifacts (#1125)
* Update callbacks.py

adding callback for mlflow

* Update trainer_builder.py

* clean up
2024-01-22 18:44:39 -05:00
Wing Lian
782b6a4216 set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci]
* set fp16 to false if bf16, update bf16: auto in example YAMLs

* unset fp16 so that it falls back properly if bf16 isn't available

* Update README.md [skip-ci]

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>

* test that bf16 disables fp16

---------

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2024-01-22 18:44:01 -05:00
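After this change, the precision block in the example YAMLs looks roughly like the following; `bf16: auto` lets axolotl pick bf16 when the hardware supports it, and fp16 is left unset so it can fall back cleanly:

```yaml
bf16: auto    # use bf16 when the GPU supports it; fp16 is forced off when bf16 is active
fp16:         # left unset so training can fall back to fp16 if bf16 isn't available
tf32: false
```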
Wing Lian
eaaeefce55 jupyter lab fixes (#1139) [skip ci]
* add a basic notebook for lab users in the root

* update notebook and fix cors for jupyter

* cell is code

* fix eval batch size check

* remove intro notebook
2024-01-22 18:42:40 -05:00
Wing Lian
f5a828aa20 Qwen2 (#1166)
* qwen2 multipack support

* fix qwen derived model check so it doesn't break qwen2

* fixes to ensure qwen2 packing works

* bump requirements for qwen2

* requirements typo
2024-01-22 18:24:15 -05:00
Wing Lian
fccb542b47 make sure the model config loader respects the model_revision too (#1160) [skip-ci] 2024-01-22 13:23:14 -05:00
Wing Lian
2ce5c0d68a Deprecate max packed sequence len (#1141) 2024-01-20 05:11:50 -05:00
NanoCode012
3db5f2fd17 feat(dataset): add config to keep processed dataset in memory (#1152) 2024-01-20 13:19:28 +09:00
Wing Lian
6910e6a8ca Multipack simplify for Mixtral (#1142) 2024-01-18 16:23:49 -05:00
Joe Cummings
1d70f24b50 Add shifted sparse attention (#973) [skip-ci]
* Add s2_attn to hijack flash code

* Refactor code to account for s2_attn

* Add test for models utils

* Add ``s2_attention`` option to llama configs

* Add ``s2_attention`` option to README config

* Format code to appease linter

* chore: lint

* Remove xpos and llama-landmark [bad merge]

* add e2e smoke tests for shifted sparse attention

* remove stray patch from merge

* update yml with link to paper for s2_attention/longlora

* fix assertion check for full fine tune

* increase sequence len for tests and PR feedback updates

* reduce context len to 16k for tests

* reduce context len to 16k for tests

* reduce batch size for larger context len and update test to check message

* fix test for message

---------

Co-authored-by: joecummings <jrcummings@devvm050.nha0.facebook.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-18 10:16:07 -05:00
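A hedged sketch of enabling the new option in a llama config; the sequence length is illustrative, and disabling sample packing here is an assumption rather than something stated in the PR:

```yaml
sequence_len: 16384     # long-context target; value is illustrative
sample_packing: false   # assumed off; shifted sparse attention targets long single sequences
flash_attention: true
s2_attention: true      # shifted sparse attention (LongLoRA)
```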
Wing Lian
317fa2555a fix bf16 check when preprocessing data (#1140) 2024-01-17 22:41:23 -05:00
NanoCode012
1e56b88cde fix(preprocess): Make sure dataset not loaded from cache when using preprocess cli (#1136) 2024-01-18 03:03:52 +09:00
Wing Lian
7570446596 Preprocess dataset size fix (#1131)
* overwrite cache on preprocess step
* don't cache the TokenizedPromptDataset at all
* load_from_cache_file no longer needed
2024-01-17 11:02:41 -05:00
xzuyn
8487b97cf3 Add layers_to_transform for lora_config (#1118) 2024-01-15 21:29:55 -05:00
Simon Hällqvist
086561326f Enable or disable bf16 support based on availability (#1116) 2024-01-14 12:06:56 -05:00
Casper
2202a20f60 Reverse caching PR (#1115) 2024-01-13 10:17:40 -05:00
Casper
d66b10141e Disable caching on --disable_caching in CLI (#1110)
* Disable caching on `--disable_caching` in CLI

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-13 10:13:35 +01:00
Wing Lian
da97285e63 keep gate in fp32 for 16 bit loras (#1105)
* keep gate in fp32 for loras

* add e2e check for lora w/o flash attention for mixtral to check gate

* add checks for gate in fp32 for mixtral, add typehints to train outputs

* mixtral doesn't support basic lora 🤦

* add lora tests @ 16bit and fix gate layer check

* fix the parameter name, which was using the old disco name

* don't lora over the gate so we can check that it is in fp32

* fix dtype check

* ensure we're using fp16/bf16 for 16bit and qlora is always going to be in uint8
2024-01-12 14:58:21 -05:00
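For context, a hedged snippet of a 16-bit mixtral LoRA where the MoE gate is left out of the target modules so it stays in fp32, as the PR describes; the listed projection modules are standard names included purely for illustration:

```yaml
adapter: lora
lora_r: 32
lora_alpha: 16
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  # the MoE gate layer is deliberately not targeted so it remains in fp32
```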
NanoCode012
b432889256 feat: enable trl's autounwrap (#1060)
* feat: test trl's autounwrap

* fix: add check for adapter

* feat: add config to disable autounwrap

* chore: fix lint
2024-01-11 08:43:41 -05:00
Wing Lian
78c5b1979e add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083) 2024-01-10 22:32:43 -05:00
Wing Lian
23495a80af misc fixes from #943 (#1086) [skip ci] 2024-01-10 22:31:36 -05:00
Wing Lian
90036ebbc6 optimize calculation of cu_seqlens from position_ids (#1084) [skip ci] 2024-01-10 11:54:50 -05:00
NanoCode012
d69ba2b0b7 fix: warn user to install mamba_ssm package (#1019) 2024-01-10 02:50:56 -05:00
Wing Lian
2f2582e6ed additional logging to get maximum token length of a sequence in the dataset (#1066) [skip ci]
* additional logging to get maximum token length of a sequence in the dataset

* fix ordering to properly determine the max_len of tokens before dropping anything longer
2024-01-10 00:49:31 -05:00
Wing Lian
0ce1a6594e update sharegpt conversations when chatml chat template is set (#1075) [skip ci]
* update sharegpt conversations when chatml chat template is set

* add info log when updating sharegpt/chatml conversation
2024-01-10 00:49:07 -05:00
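A minimal sketch of the combination this commit addresses, a sharegpt dataset with the chatml chat template set; the dataset path is a placeholder:

```yaml
chat_template: chatml
datasets:
  - path: ./data/conversations.jsonl   # hypothetical local file
    type: sharegpt
```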
NanoCode012
043c3860cd fix: train_on_inputs: true ignored for sharegpt (#1045) [skip ci]
* fix: `train_on_inputs: true` ignored for sharegpt

* enable unit test for train_on_inputs for sharegpt

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-09 23:00:09 -05:00
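With the fix, `train_on_inputs: true` is honored for sharegpt-style data, so human turns are no longer masked out of the loss. A hedged sketch, with a placeholder dataset path:

```yaml
datasets:
  - path: ./data/conversations.jsonl   # hypothetical local file
    type: sharegpt
train_on_inputs: true   # now honored for sharegpt: human turns contribute to the loss
```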
Wing Lian
0f100800e3 be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]
* be more robust about checking embedding modules for lora finetunes

* update dynamic error message
2024-01-09 22:58:54 -05:00
Wing Lian
ead34c516a swap the data collator for evals if not using sample packing (#1076)
* swap the data collator for evals if not using sample packing

* drop last from dataloader to help with issues with evals
2024-01-09 22:16:24 -05:00
Wing Lian
d7057ccd36 paired kto support (#1069) 2024-01-09 13:30:45 -05:00
Johan Hansson
090c24dcb0 Add: mlflow for experiment tracking (#1059) [skip ci]
* Update requirements.txt

adding mlflow

* Update __init__.py

Imports for mlflow

* Update README.md

* Create mlflow_.py (#1)

* Update README.md

* fix precommits

* Update README.md

Update mlflow_tracking_uri

* Update trainer_builder.py

update trainer building

* chore: lint

* make ternary a bit more readable

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-09 09:34:09 -05:00
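A hedged sketch of the mlflow wiring this PR adds; `mlflow_tracking_uri` is the key named in the PR, while the experiment-name key and both values are assumptions:

```yaml
mlflow_tracking_uri: http://127.0.0.1:5000   # hypothetical local tracking server
mlflow_experiment_name: axolotl-finetune     # assumed key name and value
```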
Wing Lian
651b7a31fc fix double eos token for chatml (#1054) [skip ci]
* fix double eos token for chatml

* isolate fix to chatml conversation

* fix add special tokens to include rstrip

* add test for train_on_inputs for sharegpt

* don't use rstrip for chatml
2024-01-09 09:33:38 -05:00
Ricardo Dominguez-Olmedo
04b978b428 Cosine learning rate schedule - minimum learning rate (#1062)
* Cosine min lr

* Cosine min lr - warn if using deepspeed

* cosine_min_lr_ratio readme

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-09 09:29:56 -05:00
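A sketch of the new scheduler option; `cosine_min_lr_ratio` comes from the PR, and the values are illustrative:

```yaml
learning_rate: 0.0002
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1   # floor the schedule at 10% of the peak learning rate instead of 0
```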
NanoCode012
c3e8165f26 fix: torch_dtype mistral default to fp32 (#1050) 2024-01-09 07:48:15 -05:00
Ricardo Dominguez-Olmedo
81d384598e Efficiently get the length of the tokenized docs (#1063)
* Efficiently get the length of the tokenized docs

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-08 15:48:30 -05:00
Wing Lian
732851f105 Phi2 rewrite (#1058)
* restore to current phi modeling code from phi-2

* enable gradient checkpointing

* don't cast everything to float32 all the time

* gradient checkpointing for phi2 ParallelBlock module too

* fix enabling flash attn for phi2

* add comment about import

* fix phi2 example

* fix model type check for tokenizer

* revert float32 -> bf16 casting changes

* support fused dense flash attn

* fix the repo for flash-attn

* add package name for subdir pkg

* fix the data collator when not using sample packing

* install packaging for pytests in ci

* also fix setup to not install flash attn fused dense subdir if not extras

* split out the fused-dense-lib in extra requires

* don't train w group_by_length for phi

* update integration test to use phi2

* set max steps and save steps for phi e2e tests

* try to work around save issue in ci

* skip phi2 e2e test for now
2024-01-08 14:04:22 -05:00
JinK
553c80f79a streaming multipack for pretraining dataset (#959)
* [Feat] streaming multipack

* WIP make continued pretraining work w multipack

* fix up hardcoding, lint

* fix dict check

* update test for updated pretraining multipack code

* fix hardcoded data collator fix for multipack pretraining

* fix the collator to be the max length for multipack pretraining

* don't bother with latest tag for test

* cleanup docker build/test

---------

Co-authored-by: jinwonkim93@github.com <jinwonkim>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-05 22:13:21 -05:00
NanoCode012
cbdbf9e6e5 feat: always push checkpoint to hub if set (#1049) [skip ci] 2024-01-05 13:09:42 -05:00
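A hedged sketch of the hub settings this behavior depends on; the repo id is a placeholder and `hub_strategy` is an assumption about which key controls checkpoint pushes:

```yaml
hub_model_id: your-user/your-model   # hypothetical target repo
hub_strategy: all_checkpoints        # assumed setting: push each checkpoint as it is saved
```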
kallewoof
bdfefaf054 feature: better device mapping for large models (#918)
* fix: improved memory handling when model is bigger than existing VRAM

* feature: add lora_on_cpu flag to do LoRA loading on CPU (RAM)

For big models that take up the entire GPU VRAM, loading the LoRA part will fail unless it is done on CPU only.

* doc: add README

* fix: enable progress bars in do_merge_lora()

* doc: mention gpu_memory_limit and lora_on_cpu in merge part of README

* Update src/axolotl/utils/models.py

Co-authored-by: Wing Lian <wing.lian@gmail.com>

* fix: remove deletion of removed model_kwargs key

* fix: validate that gpu_memory_limit and max_memory are not both set

---------

Co-authored-by: Karl-Johan Alm <kalle@gmail.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-05 22:22:21 +09:00
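A hedged example of the two options this PR documents for handling large models; the exact value formats are assumptions:

```yaml
gpu_memory_limit: 20GiB   # illustrative per-GPU cap so loading/merging has headroom
lora_on_cpu: true         # load LoRA weights in system RAM instead of VRAM
```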
Hamel Husain
63fb3eb426 set default for merge (#1044) 2024-01-04 18:14:20 -08:00
Hamel Husain
31d23504a5 fix model card upload for PEFT models (#1043) 2024-01-04 18:13:54 -08:00