Commit Graph

1534 Commits

Wing Lian
3ebf22464b qlora-fsdp ram efficient loading with hf trainer (#1791)
* fix 405b with lower cpu ram requirements

* make sure to use double quant and only skip output embeddings

* set model attributes

* more fixes for sharded fsdp loading

* update the base model in example to use pre-quantized nf4-bf16 weights

* upstream fixes for qlora+fsdp
2024-07-30 19:21:38 -04:00
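
The QLoRA + FSDP path this commit fixes is driven from the training YAML. A minimal sketch under stated assumptions: the base_model repo name stands in for the pre-quantized nf4-bf16 weights the bullets mention, and the fsdp keys are modeled on axolotl's example configs rather than taken from this commit.

```yaml
# Hedged sketch of a QLoRA + FSDP setup along the lines this commit targets.
# base_model is an assumed stand-in for the pre-quantized nf4-bf16 weights.
base_model: hugging-quants/Meta-Llama-3.1-405B-BNB-NF4-BF16
load_in_4bit: true   # double quant applied; output embeddings left unquantized
adapter: qlora
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_cpu_ram_efficient_loading: true     # the low-CPU-RAM loading path fixed here
  fsdp_state_dict_type: SHARDED_STATE_DICT
```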
Wing Lian
dbf8fb549e publish axolotl images without extras in the tag name (#1798) 2024-07-30 13:36:19 -04:00
Wing Lian
9a63884597 update test and main/nightly builds (#1797)
* update test and main/nightly builds

* don't install mamba-ssm on 2.4.0 since it has no wheels yet
2024-07-30 12:37:40 -04:00
Wing Lian
c5587b45ac use 12.4.1 instead of 12.4 [skip-ci] (#1796) 2024-07-30 08:50:23 -04:00
Wing Lian
d4f6a6b103 fix dockerfile and base builder (#1795) [skip-ci] 2024-07-30 08:34:37 -04:00
Wing Lian
d8d1788ffc move to supporting mostly 12.1 w 2.3.1 and add new 12.4 with 2.4.0 (#1793) 2024-07-30 08:06:11 -04:00
mhenrichsen
3bc8e64557 Update README.md (#1792) 2024-07-30 07:59:53 +02:00
Adam Brusselback
55cc214c76 Add flexible configuration options for chat_template dataset training (#1756)
* Add flexible configuration options for chat dataset training

- Introduce roles_to_train parameter to set training labels by role
- Add train_on_eos option to configure training on end-of-sequence tokens
- Implement per-message training configuration in dataset
- Allow fine-grained control over training specific portions of messages
- Add message_field_training and message_field_training_detail settings
- Implement mapping between dataset character offsets and tokenized prompt
- Enhance test suite to cover new functionality

* Fix missing field inits; things weren't working from YAML.

* chore: lint

* Revert test repo back to NousResearch after opening PR to fix the tokenizer_config.json.

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-07-28 21:48:57 -04:00
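
A hedged sketch of how these options might sit in a dataset config; the option names come from the commit bullets, while the values and the dataset path are illustrative assumptions.

```yaml
# Option names from the commit; values and path are illustrative assumptions.
datasets:
  - path: example/chat-dataset
    type: chat_template
    roles_to_train: ["assistant"]     # which roles get training labels
    train_on_eos: turn                # when EOS tokens are trained on
    message_field_training: training  # per-message on/off flag in the data
    message_field_training_detail: training_detail  # fine-grained spans within a message
```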
Wing Lian
94ba93259f various batch of fixes (#1785)
* various batch of fixes

* more tweaks

* fix autoawq requirement for torch flexibility

* simplify conditionals

* multi-node fixes wip

* bump transformers and include 405b qlora+fsdp yaml
2024-07-28 07:25:54 -04:00
Wing Lian
22680913f3 Bump deepspeed 20240727 (#1790)
* pin deepspeed to 0.14.4; otherwise it doesn't play nice with trl

* Add a test that imports the package to try to trigger import dependency issues
2024-07-27 10:24:11 -04:00
Wing Lian
6a9cfec222 add support for simpo via cpo trainer (#1772)
* add support for simpo via cpo trainer

* add cpo_alpha / sft_weight from the paper

* make sure to use the right builder for simpo
2024-07-23 21:22:16 -04:00
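
A minimal sketch of enabling SimPO through the CPO trainer: `cpo_alpha` is named in the bullets above, while the `rl: simpo` toggle and the value shown are assumptions.

```yaml
# cpo_alpha is from the commit; the rl value and 1.0 are assumptions.
rl: simpo
cpo_alpha: 1.0   # sft weight term from the SimPO paper
```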
Wing Lian
fe250ada78 fix fsdp loading of models, esp 70b (#1780) 2024-07-23 19:54:28 -04:00
Wing Lian
e6b299dd79 bump flash attention to 2.6.2 (#1781) [skip ci] 2024-07-23 19:54:15 -04:00
Wing Lian
608a2f3180 bump transformers for updated llama 3.1 (#1778)
* bump transformers for updated llama 3.1

* bump for patch fix
2024-07-23 13:21:03 -04:00
Wing Lian
87455e7f32 swaps to use newer sample packing for mistral (#1773)
* swaps to use newer sample packing for mistral

* fix multipack patch test

* patch the common fa utils

* update for refactor of flash attn unpad

* remove un-needed drop attn mask for mistral

* bump transformers to main to pick up latest mistral fix for 12b and refactor of fa2

* update test
2024-07-23 01:41:11 -04:00
Keith Stevens
985819d89b Add a chat_template prompt strategy for DPO (#1725)
* Implementing a basic chat_template strategy for DPO datasets

This mimics the sft chat_template strategy such that users can:
* Specify the messages field
* Specify the per message role and content fields
* Specify the chosen and rejected fields
* Let the tokenizer construct the raw prompt
* Ensure the chosen and rejected fields don't have any prefix tokens

* Adding additional dpo chat template unittests

* Rename test class
2024-07-21 09:10:42 -04:00
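
A hedged sketch of the DPO chat_template strategy described above; the exact field-name spellings are assumptions based on the bullets (messages field, per-message role/content, chosen/rejected).

```yaml
# Field-name spellings are assumptions modeled on the commit bullets.
rl: dpo
datasets:
  - path: example/preference-dataset   # illustrative path
    type: chat_template.default        # assumed strategy name
    field_messages: messages           # where the conversation lives
    field_chosen: chosen               # preferred completion
    field_rejected: rejected           # dispreferred completion
```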
Wing Lian
fa91b698e9 Fix untrained tokens (#1771)
* fix untrained reserved tokens

* save model after fixing untrained embeddings

* don't need fsdp conditional here
2024-07-19 12:21:37 -04:00
Wing Lian
e4063d60a7 bump transformers and set roundup_power2_divisions for more VRAM improvements, low bit ao optimizers (#1769)
* bump transformers and set roundup_power2_divisions for more VRAM improvements

* support for low bit optimizers from torch ao

* fix check for alternate optimizers and use nous models on hf for llama3

* add missing check for ao_adamw_fp8

* fix check when using custom optimizers w adamw
2024-07-19 00:47:07 -04:00
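
Opting into a torchao low-bit optimizer looks like a one-line config change; `ao_adamw_fp8` appears in the bullets above, and other low-bit variants presumably follow the same naming.

```yaml
# ao_adamw_fp8 is named in the commit; sibling variants are assumptions.
optimizer: ao_adamw_fp8   # low-bit AdamW from torch ao
```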
Wing Lian
7830fe04b5 Unsloth rope (#1767)
* Add unsloth rope embeddings support

* support for models weights in 4bit and do some memory gc

* use accelerate logger

* add unsloth llama rms norm optims

* update docs for unsloth

* more docs info
2024-07-18 14:54:41 -04:00
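
A hedged sketch of the unsloth optimization flags these bullets add; the key names are assumptions modeled on the features described, not confirmed by the commit.

```yaml
# Assumed key names, modeled on the features the bullets describe.
unsloth_rope: true       # patched RoPE embeddings (assumed key name)
unsloth_rms_norm: true   # patched llama RMSNorm (assumed key name)
```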
Wing Lian
c86c32a627 set the number of dataset processes on the DPO Config rather than the trainer (#1762) 2024-07-17 15:38:37 -04:00
Wing Lian
8731b95d04 re-enable PYTORCH_CUDA_ALLOC_CONF expandable_segments (#1765) [skip ci] 2024-07-17 15:38:26 -04:00
Wing Lian
8619b2d855 add torch_compile_mode options (#1763) [skip ci]
* add torch_compile_mode options

* make sure n_gpu is an int
2024-07-17 15:38:07 -04:00
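
`torch_compile_mode` maps onto PyTorch's standard compile modes; a minimal sketch, with the companion `torch_compile` toggle assumed.

```yaml
# torch_compile_mode is from the commit; the torch_compile toggle is assumed.
torch_compile: true
torch_compile_mode: max-autotune   # PyTorch modes: default, reduce-overhead, max-autotune
```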
Wing Lian
976f85195a fixes to accelerator so that iterable pretraining datasets work (#1759)
* fixes to accelerator so that iterable pretraining datasets work

* fix the pretraining test params

* split_batches, not dispatch_batches, needs to be set

* update c4 datasets

* set epochs in pretrain config test

* need to set both split_batches and dispatch_batches to false for pretraining

* fix bool val in comment
2024-07-17 10:58:38 -04:00
Wing Lian
152ab76623 fix num gpu check (#1760) 2024-07-17 10:58:14 -04:00
Wing Lian
5f58555bd0 support for llama multipack using updated code/patches (#1754)
* support for llama multipack using updated code/patches

* also support unsloth patches

* incorrect arg

* add config validation for unsloth

* add missing return to validation

* add another missing return to validation
2024-07-16 17:36:29 -04:00
Wing Lian
cfc533a7f7 torch compile and cuda alloc improvements (#1755)
* enable experimental expandable_segments

* hf trainer seems to be missing torch compile

* disable PYTORCH_CUDA_ALLOC_CONF to see if that fixes cicd
2024-07-16 16:00:23 -04:00
Wing Lian
e1725aef2b update modal package and don't cache pip install (#1757)
* update modal package and cleanup pip cache

* more verbosity on the test
2024-07-16 14:45:38 -04:00
Wing Lian
78e12f8ca5 add basic support for the optimi adamw optimizer (#1727)
* add support for optimi_adamw optimizer w kahan summation

* pydantic validator for optimi_adamw

* workaround for setting optimizer for fsdp

* make sure to install optimizer packages

* make sure to have parity for model parameters passed to optimizer

* add smoke test for optimi_adamw optimizer

* don't use foreach optimi by default
2024-07-14 19:12:57 -04:00
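
Selecting the new optimizer is presumably a one-line config change; a sketch assuming the config value matches the name in the commit title.

```yaml
# Assumes the config value matches the optimizer name in the commit title.
optimizer: optimi_adamw   # AdamW with Kahan summation from the optimi package
```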
Wing Lian
98af5388ba bump flash attention 2.5.8 -> 2.6.1 (#1738)
* bump flash attention 2.5.8 -> 2.6.1

* use triton implementation of cross entropy from flash attn

* add smoke test for flash attn cross entropy patch

* fix args to xentropy.apply

* handle tuple from triton loss fn

* ensure the patch tests run independently

* use the wrapper already built into flash attn for cross entropy

* mark pytest as forked for patches

* use pytest xdist instead of forked, since cuda doesn't like forking

* limit to 1 process and use dist loadfile for pytest

* change up pytest fixture to reload transformers w monkeypatch
2024-07-14 19:11:31 -04:00
RodriMora
219cd0d3c5 Fix eval_sample_packing in llama-3 lora example (#1716) [skip ci]
* Fix eval_sample_packing in llama-3 lora example

* Update examples/llama-3/lora-8b.yml

Co-authored-by: Wing Lian <wing.lian@gmail.com>

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-07-13 14:34:44 -04:00
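
The fix amounts to disabling sample packing for eval in the example config; a sketch of the relevant lines, assuming training-time packing stays on.

```yaml
# Sketch of the relevant example lines; training-time packing assumed on.
sample_packing: true        # packing stays on for training
eval_sample_packing: false  # the fix: don't pack the eval set
```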
David Meikle
634f384e06 Changed URL for dataset docs (#1744) 2024-07-13 14:34:28 -04:00
Akshaya Shanbhogue
4512738a73 bump xformers to 0.0.27 (#1740)
* Update requirements.txt

Preserve compatibility with torch 2.3.1. [Reference](https://github.com/facebookresearch/xformers/issues/1052)

* fix setup.py to extract the current xformers dep from requirements for replacement

* xformers 0.0.27 wheels not built for torch 2.3.0

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-07-13 14:04:31 -04:00
Wing Lian
1e57b4c562 update to pytorch 2.3.1 (#1746) [skip ci] 2024-07-13 13:28:17 -04:00
Wing Lian
a4a5bf057f fixes to prevent vram spike when train starts (#1742) 2024-07-13 09:53:13 -04:00
Wing Lian
137d84d1b4 add torch 2.3.1 base image (#1745) 2024-07-13 09:41:51 -04:00
Oliver Klingefjord
18abdb447a typo (#1685) [skip ci]
* typo

* typo 2

---------

Co-authored-by: mhenrichsen <mads.gade.henrichsen@live.dk>
2024-07-12 21:24:01 -04:00
Wing Lian
47e1916484 add tests so CI can catch updates where patches will break with unsloth (#1737) [skip ci] 2024-07-11 16:43:19 -04:00
mhenrichsen
1194c2e0b1 github urls (#1734)
Co-authored-by: Henrichsen, Mads (ext) <mads.henrichsen.ext@siemens-energy.com>
2024-07-11 09:19:29 -04:00
Wing Lian
a159724e44 bump trl and accelerate for latest releases (#1730)
* bump trl and accelerate for latest releases

* ensure that the CI runs on new gh org

* drop kto_pair support since removed upstream
2024-07-10 11:15:44 -04:00
Josh Bleecher Snyder
b3f680d305 sanity check ranges in freeze.py (#1686)
* sanity check ranges in freeze.py

this will catch problems earlier and more clearly.

in my case, it appears that deepspeed zero3 sets layer tensor shapes
to [0], which doesn't play well with automatically inferred ranges.
through a bit of luck, inverting ranges still appears to work correctly.

* simplify chained comparison
2024-07-05 09:24:07 -04:00
Wing Lian
c69b7eb2b5 full weights fsdp training seems broken with fsdp_cpu_ram_efficient_loading, disabling for now (#1726) 2024-07-05 09:15:36 -04:00
Wing Lian
c6d83a87c4 add support for .env files for env vars (#1724) 2024-07-02 13:17:40 -04:00
Wing Lian
5370cedf0c support for gemma2 w sample packing (#1718) 2024-06-29 01:38:55 -04:00
Josh Bleecher Snyder
f2480a1d91 improve Pre-Tokenized Dataset docs (#1684) [skip ci]
Fixes #1661
2024-06-26 13:13:21 -07:00
DavidFarago
559562d790 Allow "weight: 0" in messages to mask them (#1703)
Allow the additional key `weight` in message objects, which can be set
to 0 (or 1) to mask that message out (or leave it unmasked)
for training (similar to [1]). This is helpful for training the model to be robust and
capable of error recovery after a bad assistant message.
A missing `weight` key defaults to 1, to guarantee backward compatibility.

[1]: https://github.com/mistralai/mistral-finetune
2024-06-20 10:05:16 -04:00
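
A hedged sketch of a conversation sample using the new per-message `weight` key; the role/content field names are illustrative assumptions, and a missing `weight` defaults to 1.

```yaml
# role/content field names are illustrative assumptions.
conversations:
  - role: user
    content: "What's the capital of France?"
  - role: assistant
    content: "Berlin."        # bad assistant turn
    weight: 0                 # masked out of the loss
  - role: user
    content: "That's wrong."
  - role: assistant
    content: "Sorry, it's Paris."
    weight: 1                 # trained normally (same as omitting weight)
```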
Wing Lian
4de4b4089f add support for multipack for deepseek_v2 (#1712) 2024-06-20 10:02:55 -04:00
Wing Lian
3f1f5e3312 drop length column for issues with eval without packing (#1711) 2024-06-18 23:32:29 -04:00
Wing Lian
5783839c6e download model weights on preprocess step (#1693) 2024-06-09 20:10:17 -04:00
Wing Lian
cbbf039a46 verbose failure message (#1694) 2024-06-09 20:09:36 -04:00
Wing Lian
851ccb1237 bump deepspeed for fix for grad norm compute putting tensors on different devices (#1699) 2024-06-09 17:13:28 -04:00