Commit Graph

937 Commits

Brian Fitzgerald
b7d8a7dc4d Add Glaive conversation format support (#1365)
* Add Glaive conversation format support

* fix black formatting errors

* Fix black and pylint formatting errors

* only set role_key_tool if provided in the dataset constructor

* Update src/axolotl/prompt_strategies/sharegpt.py

Co-authored-by: Wing Lian <wing.lian@gmail.com>

* sharegpt test

* tokenizer test

* fix formatting

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-03-10 20:50:25 -04:00
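For context, a hedged sketch of how a Glaive-style dataset might be declared under the ShareGPT handler this PR extends. The dataset path, the conversation value, and whether the new format needs its own type string are all assumptions; the log does not show the final config syntax.

```python
import yaml  # requires PyYAML

# Hypothetical axolotl dataset stanza; path and values are placeholders.
cfg = yaml.safe_load(
    """
datasets:
  - path: glaiveai/glaive-function-calling-v2  # placeholder dataset
    type: sharegpt                             # the handler this PR extends
    conversation: chatml                       # assumed conversation style
"""
)
print(cfg["datasets"][0])
```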
David Baker
0bc114d2e1 Fix pydantic configuration for the max_memory input (#1385) [skip ci]
* Fix pydantic configuration for the max_memory input

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-03-10 20:50:04 -04:00
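A minimal pydantic 2 sketch of the shape this field needs: max_memory keys can be GPU indices or strings like "cpu" in the Hugging Face convention, so the annotation must accept both. The class and field layout here are illustrative, not axolotl's actual schema.

```python
from typing import Dict, Optional, Union

from pydantic import BaseModel


class LoadingConfig(BaseModel):  # hypothetical class name
    # Keys may be device indices (0, 1, ...) or "cpu"/"disk"; values are
    # capacity strings, following the Hugging Face max_memory convention.
    max_memory: Optional[Dict[Union[int, str], str]] = None


print(LoadingConfig(max_memory={0: "20GiB", "cpu": "60GiB"}).max_memory)
```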
Wing Lian
7659c001aa support for rslora (#1387) [skip ci] 2024-03-10 20:49:45 -04:00
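rsLoRA changes the adapter scaling from alpha/r to alpha/sqrt(r), which stabilizes training at higher ranks. A minimal sketch at the PEFT level; the axolotl yml key that toggles this is not shown in the log and is assumed to map onto this flag.

```python
from peft import LoraConfig

# use_rslora is PEFT's rank-stabilized LoRA switch (peft >= 0.9.0).
config = LoraConfig(r=16, lora_alpha=32, use_rslora=True)
```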
Wing Lian
3fd8093717 validation for fsdp and deepspeed (#1388) [skip ci]
* validation for fsdp and deepspeed

* make sure to return data
2024-03-10 20:49:25 -04:00
Wing Lian
9b6ee83a73 FSDP + QLoRA (#1378)
* wip qlora + fsdp fixes

* more fixes

* make sure to load the lora 🤦

* only set up quantized meta on non-zero rank

* only run setup_quantized_peft_meta_for_training for qlora+fsdp

* more fixes for qlora+fsdp

* chore: lint

* add example yml

* support mistral too

* fix for model_type and add mixtral support too

* set cpu_offload: false to reduce VRAM, constrain new accelerator logic to qlora + fsdp

* refactor for duplicate code
2024-03-08 14:31:01 -05:00
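The commit mentions an example yml; here is a hedged sketch of what a QLoRA + FSDP config could contain. The fsdp_config keys follow the Hugging Face Trainer naming, and the exact set used in the example is an assumption.

```python
import yaml  # requires PyYAML

cfg = yaml.safe_load(
    """
adapter: qlora
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_offload_params: false           # the commit sets cpu_offload: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_sync_module_states: true
"""
)
```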
Wing Lian
0cfdb2c90c support for DoRA w/ PEFT (#1363) 2024-03-05 21:20:15 -05:00
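DoRA decomposes each weight update into a magnitude and a direction component on top of the usual low-rank factors. A minimal sketch of the PEFT-level switch; the matching axolotl yml key is an assumption.

```python
from peft import LoraConfig

# use_dora enables Weight-Decomposed Low-Rank Adaptation (peft >= 0.9.0).
config = LoraConfig(r=16, lora_alpha=32, use_dora=True)
```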
Eric Hartford
e0f1895408 add starcoder2 (#1349)
* add starcoder2

* Apply suggestions from code review

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>

* chore: lint

* Apply suggestions from code review

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2024-03-05 19:49:17 -05:00
Wing Lian
2598c9f045 allow the sharegpt handler to also handle datasets destined for OpenAI finetuning (#1361)
* allow the sharegpt handler to also handle datasets destined for OpenAI finetuning

* make sure to support system role
2024-03-05 11:43:33 -05:00
Wing Lian
decb66e170 lora+ support (#1352)
* lora+ support

* optimizer should default to None

* include mit license
2024-03-05 07:29:23 -05:00
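LoRA+ trains the adapter's B matrices with a larger learning rate than the A matrices, controlled by a single ratio. A hedged guess at the knobs this exposes; the key names below follow the LoRA+ reference implementation and are not quoted from the PR.

```python
# Assumed yml keys, shown as a plain dict for illustration.
cfg = {
    "loraplus_lr_ratio": 16,        # LR(B) = ratio * LR(A); a commonly cited default
    "loraplus_lr_embedding": 1e-6,  # optional separate LR for embedding layers
}
```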
Wing Lian
4d09b42ee3 plain input/output prompt strategy w/o chat templates (#1346)
* plain input/output prompt strategy w/o chat templates

* disable duplicate code check

* make sure to add an eos/eot token to the end of the output so it will stop

* multi-turn segment support and test
2024-03-04 16:25:16 -05:00
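A hedged sketch of what a template-free input/output row might look like: text arrives in segments, each flagged for whether it contributes to the loss, and the output segment carries its own EOS so generation stops. The field names are assumptions based on the commit messages.

```python
# Hypothetical row shape; "segments", "label", and "text" are assumed names.
row = {
    "segments": [
        {"label": False, "text": "<s>Translate to French: cheese "},  # masked input
        {"label": True, "text": "fromage</s>"},  # trained output, ends with EOS
    ]
}
```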
Chirag Jain
b5b44925ec Fix validation for early stopping (#1358) 2024-03-03 22:15:18 -05:00
Wing Lian
6b3b271925 fix for protected model_ namespace w pydantic (#1345) 2024-02-28 15:07:49 -05:00
Chirag Jain
3a5a2d2f34 Fix use_mlflow to be bool instead of str (#1344) 2024-02-28 12:58:29 -05:00
Wing Lian
0f985e12fe more fixes 20240228 (#1342) [skip ci]
* add missing evals_per_epoch setting

* more pydantic fixes

* more fixes

* move test from normalization to validation

* increase eval size for sample packing tests
2024-02-28 12:57:45 -05:00
Wing Lian
c1a7b3dd69 add gemma instruct chat template (#1341)
* add gemma instruct chat template

* support for chat template strategy too
2024-02-27 17:20:01 -05:00
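A hedged sketch of how the two pieces might pair up in a config: chat_template selects the Gemma instruct template, and the dataset type routes through the chat-template strategy the second bullet mentions. Both values are assumptions based on the commit message.

```python
import yaml  # requires PyYAML

cfg = yaml.safe_load(
    """
chat_template: gemma      # assumed value added by this commit
datasets:
  - path: org/chat-data   # placeholder
    type: chat_template   # assumed strategy name
"""
)
```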
Ikko Eltociear Ashimine
2b9687f341 Update fastchat_conversation_turns.py (#1294) [skip ci]
seperated -> separated
2024-02-27 09:06:10 -05:00
Wing Lian
2c9c88b32a fix steps check for anneal on first cycle (#1316) 2024-02-27 08:56:08 -05:00
Wing Lian
3f69571943 more pydantic fixes (#1338) 2024-02-26 22:39:13 -05:00
nopperl
1e3d5305d3 Support user-defined prompt processing strategies for dpo (#1248)
* support user-defined prompt processing strategies for dpo

* interpret dict dataset types as user-defined

* fix lint errors

* setup pydantic config for validation of User defined DPO

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-02-26 18:49:34 -05:00
Maxime
16482796b0 add lion-pytorch optimizer (#1299) [skip ci]
* add lion-pytorch optimizer

* update pydantic to support lion optimizer

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-02-26 18:45:14 -05:00
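The lion-pytorch package provides a drop-in optimizer; common guidance is a learning rate several times smaller than AdamW's and a larger weight decay. The yml value axolotl uses to select it is not shown in the log.

```python
import torch
from lion_pytorch import Lion  # the package this commit wires in

model = torch.nn.Linear(16, 16)  # stand-in model
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)
```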
Wing Lian
269c5436ea hotfix to exclude_unset from pydantic config when converting back to a dict (#1334) 2024-02-26 15:06:25 -05:00
Wing Lian
e7eed203d8 hotfix for missing outputs params (#1333) 2024-02-26 14:36:37 -05:00
Wing Lian
cf002312e0 hotfix for lora rank (#1332) 2024-02-26 14:28:43 -05:00
Wing Lian
7de912e097 hotfix for capabilities loading (#1331) 2024-02-26 14:24:28 -05:00
JohanWork
d75653407c ADD: push checkpoints to mlflow artifact registry (#1295) [skip ci]
* Add checkpoint logging to mlflow artifact registry

* clean up

* Update README.md

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>

* update pydantic config from rebase

---------

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-02-26 13:32:39 -05:00
Wing Lian
cc3cebfa70 Pydantic 2.x cfg (#1239)
* WIP conversion to use pydantic for config validation

* wip, more fields, add capabilities

* wip

* update pydantic validation to match existing tests

* tweak requirements

* set up deprecated params pydantic model

* more validations

* wrap up rest of the validations

* flesh out the rest of the options from the readme into pydantic

* fix model validators as class methods

remember to return in validator
missing return
add missing relora attributes
fix test for DictDefault change
fix sys template for mistral from fastchat change in PR 2872
fix test for batch size warning

* more missing attributes for cfg

* updates from PR feedback

* fix validation for datasets and pretrain datasets

* fix test for lora check
2024-02-26 12:24:14 -05:00
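The commit body calls out two recurring pydantic 2 pitfalls: model validators must be classmethods, and they must return the data they validate. A minimal sketch of the pattern, with illustrative field names rather than axolotl's actual schema.

```python
from typing import Optional

from pydantic import BaseModel, model_validator


class QuantConfig(BaseModel):  # illustrative only
    load_in_8bit: Optional[bool] = False
    load_in_4bit: Optional[bool] = False

    @model_validator(mode="before")
    @classmethod  # model validators must be classmethods
    def check_exclusive(cls, data):
        if isinstance(data, dict) and data.get("load_in_8bit") and data.get("load_in_4bit"):
            raise ValueError("load_in_8bit and load_in_4bit are mutually exclusive")
        return data  # remember to return in the validator
```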
Wing Lian
5894f0e57e make mlflow optional (#1317)
* make mlflow optional

* fix xformers

don't patch swiglu if xformers not working
fix the check for xformers swiglu

* fix install of xformers with extra index url for docker builds

* fix docker build arg quoting
2024-02-26 11:41:33 -05:00
Wing Lian
2752d5f958 multipack for gemma (#1313)
* multipack for gemma

* chore: lint

* handle cache_position kwarg in updated llama modeling

* add position_ids to rotary embed call for updated llama modeling
2024-02-21 19:24:21 -05:00
David Meikle
3c00f406d6 Allow load_best_model_at_end to be configured for early stopping on custom evaluation datasets (#1291)
* Allow load_best_model_at_end when using test_datasets and val_set_size is zero for custom evaluation datasets

* Fixed formatting following failed Lint check
2024-02-22 00:57:18 +09:00
Leonardo Emili
e2786cce6a Validation always happens on first step (#1300) 2024-02-22 00:52:24 +09:00
Leonardo Emili
5a5d47458d Add seq2seq eval benchmark callback (#1274)
* Add CausalLMBenchEvalCallback for measuring seq2seq performance

* Fix code for pre-commit

* Fix typing and improve logging

* eval_sample_packing must be false with CausalLMBenchEvalCallback
2024-02-13 08:24:30 -08:00
김진원
8430db22e2 Scheduler implementation of Continual Pre-Training of Large Language Models: How to (re)warm your model? (#1273) 2024-02-12 21:23:28 -08:00
Wing Lian
4b997c3e1a allow the optimizer prune ratio for ReLoRA to be configurable (#1287)
* allow the optimizer prune ratio for relora to be configurable

* update docs for relora

* prevent circular imports
2024-02-12 11:39:51 -08:00
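A hedged guess at the option this adds; the key name is assumed from the feature description, not quoted from the PR.

```python
# Assumed yml key, shown as a plain dict for illustration.
cfg = {
    "relora_prune_ratio": 0.9,  # fraction of optimizer state zeroed at each ReLoRA reset
}
```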
Maxime
fac2d98c26 Add MPS support (#1264)
* add mps support

* linter stuff

* CI fixes

* install packaging for various tests

* Update setup.py

* Revert "install packaging for various tests"

This reverts commit 980e7aa44d.

* Revert "CI fixes"

This reverts commit 4609e3b166.

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-02-12 08:30:32 -05:00
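Standard PyTorch device detection covering the newly supported Apple Silicon path; axolotl's internal selection logic may differ from this sketch.

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():  # Apple Silicon (Metal) backend
    device = torch.device("mps")
else:
    device = torch.device("cpu")
print(device)
```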
Wing Lian
ea00dd0852 don't use load and push together (#1284) 2024-02-09 14:54:31 -05:00
Hamel Husain
9bca7db133 add support for https remote yamls (#1277) 2024-02-08 20:02:17 -08:00
Hamel Husain
91cf4ee72c allow remote data paths (#1278)
* allow remote data paths

* add docs about public url

* only allow https

* better docs

* better docs
2024-02-08 15:02:35 -08:00
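A hedged sketch of the resulting usage: a dataset path may now be a public https URL (http is rejected, per the "only allow https" bullet). The URL and type are placeholders.

```python
import yaml  # requires PyYAML

cfg = yaml.safe_load(
    """
datasets:
  - path: https://example.com/data/train.jsonl  # placeholder URL
    type: alpaca
"""
)
```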
Wing Lian
5698943263 simplify handling for newer multipack patches so they can be added in a single place (#1270) 2024-02-07 10:46:04 -05:00
Zac Brannelly
73f1bdaa15 Fix bug preventing model_kwargs being injected (#1262) 2024-02-07 09:38:35 -05:00
Philip May
13eea21f9b Add more save strategies for DPO training. (#1255)
* Set save_strategy and save_steps in HFDPOTrainerBuilder

* fix duplicate save_steps
2024-02-06 00:38:43 -05:00
Chirag Jain
1072f28874 Fix typo bloat16 -> bfloat16 (#1257) 2024-02-06 00:38:14 -05:00
Wing Lian
c7cf3810bd Pretrain transforms (#1261)
* wip for pretraining/iterable data with arbitrary prompt strategies

* more fixes, wip

* more fixes for custom pretraining

* iterable ds wrapper not needed

* remove extra features

* chore: lint

* update pretraining example yml

* fix order for partials

* fixup for tests
2024-02-06 00:37:03 -05:00
Wing Lian
8c2e05ade3 relora: magnitude pruning of the optimizer (#1245)
* magnitude pruning of the optimizer

* add alpaca chat template and fix relora patch

* fix handling of lora adapter for relora

* fix merge and save call

* fixes for 8-bit lora merge

* save intermediate checkpoint adapters

* auto merge

* fix eval check

* handle relora annealing

* fix anneal step logic

* chore: lint

* misc fix

* fix types

* Update tests/e2e/test_relora_llama.py

* check for safetensors saved from relora
2024-02-06 00:35:30 -05:00
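A toy illustration of the core idea, not axolotl's code: zero the smallest-magnitude fraction of an optimizer state tensor at each ReLoRA reset.

```python
import torch


def magnitude_prune(state: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    """Zero the smallest-magnitude prune_ratio fraction of a tensor."""
    k = int(state.numel() * prune_ratio)
    if k == 0:
        return state
    threshold = state.abs().flatten().kthvalue(k).values
    return torch.where(state.abs() > threshold, state, torch.zeros_like(state))


pruned = magnitude_prune(torch.randn(1000), prune_ratio=0.9)
print((pruned == 0).float().mean())  # roughly 0.9
```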
NanoCode012
2d65f470d5 fix(model): apply gate fp32 only for mixtral (#1241)
* fix(model): apply gate fp32 only for mixtral

* Update src/axolotl/utils/models.py

* fix gate layer check

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-02-01 13:55:05 -05:00
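A toy illustration of the targeted upcast the fix performs: only the MoE router ("gate") projections are kept in fp32, loosely modeled on Mixtral's block_sparse_moe.gate layout.

```python
import torch
from torch import nn

model = nn.ModuleDict(
    {"block_sparse_moe": nn.ModuleDict({"gate": nn.Linear(4, 8)})}
).half()
for name, module in model.named_modules():
    if name.endswith("gate") and isinstance(module, nn.Linear):
        module.float()  # upcast the router for numerical stability
```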
Wing Lian
00568c1539 support for true batches with multipack (#1230)
* support for true batches with multipack

* patch the map dataset fetcher to handle batches with packed indexes

* patch 4d mask creation for sdp attention

* better handling for BetterTransformer

* patch general case for 4d mask

* setup forward patch. WIP

* fix patch file

* support for multipack w/o flash attention for llama

* cleanup

* add warning about bf16 vs fp16 for multipack with sdpa

* bugfixes

* add 4d multipack tests, refactor patches

* update tests and add warnings

* fix e2e file check

* skip sdpa test if not at least torch 2.1.1, update docs
2024-02-01 10:18:42 -05:00
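A toy sketch of the packing idea behind the 4d masks: tokens from different samples packed into one row receive a block-diagonal mask so attention cannot cross sample boundaries. Shapes and names are illustrative.

```python
import torch

seq_ids = torch.tensor([0, 0, 0, 1, 1, 2])              # sample id per packed token
mask_2d = seq_ids.unsqueeze(0) == seq_ids.unsqueeze(1)  # (L, L), block diagonal
mask_4d = mask_2d[None, None]                           # (1, 1, L, L), broadcastable to (batch, heads, L, L)
```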
Wing Lian
c67fb71583 Peft deepspeed resume (#1227)
* import deepspeed integration

* monkeypatch peft adapter with deepspeed for resume from checkpoint

* fix patch

* fix patches attempt 2

* make sure to set lora_model_dir

* skip pylint for deepspeed.utils

* pick up upstream fix in transformers

* remove monkeypatch for deepspeed/peft fix

* no need to set the lora_model_dir on resume

* unset load_in_*bit when using quant config

* guard before del

* better handling of load_in* kwargs
2024-01-31 18:13:29 -05:00
DreamGenX
25e037fe2d Support for additional_special_tokens (#1221) [skip ci]
* Support for additional_special_tokens

* Support for additional_special_tokens. Adjust whitespace.

* Support for additional_special_tokens. Use correct quotes.

* Support for additional_special_tokens. Safe pop.

* Support for additional_special_tokens. nt.

* Support for additional_special_tokens. cfg.special_tokens may be None.

* add token if not in vocabulary when adding additional_special_tokens

* fix logic for copy/pasta

* bugfix for popping from config and tokenizer reload

* no need to add tokens manually now with previous bugfix

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-31 18:13:13 -05:00
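The underlying Hugging Face call this feature maps onto; the tokens are placeholders, and the exact yml nesting axolotl uses for them is assumed.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|im_start|>", "<|im_end|>"]}
)
print(num_added)  # embeddings must be resized if this is > 0
```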
DreamGenX
5787e1a23f Fix and document test_datasets (#1228)
* Make sure test_datasets are used and val_set_size is respected.

* Add test_datasets docs.

* Apply suggestions from code review

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-31 06:48:57 -05:00
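A hedged sketch of the documented pairing: a fixed test split with val_set_size left at zero. The path and split are placeholders.

```python
import yaml  # requires PyYAML

cfg = yaml.safe_load(
    """
val_set_size: 0            # use the explicit test split instead of a carve-out
test_datasets:
  - path: org/my-eval-set  # placeholder
    split: test
"""
)
```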
xhedit
8608d8003e Fix typo (#1231) [skip ci] 2024-01-31 06:46:55 -05:00
Wing Lian
4cb7900a56 Peft LoftQ (#1222)
* loftq support for lora

* fix loftq check

* update readme for loftq

* readability cleanup

* use peft main for loftq fixes, remove unnecessary special tokens

* remove unused test from older deprecation
2024-01-28 18:50:08 -05:00
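PEFT's documented LoftQ wiring, which this PR surfaces for LoRA: adapter weights are initialized from a low-rank decomposition of the quantized base weights. How axolotl names these knobs in yml is not shown in the log.

```python
from peft import LoftQConfig, LoraConfig

loftq_config = LoftQConfig(loftq_bits=4)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    init_lora_weights="loftq",
    loftq_config=loftq_config,
)
```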