Commit Graph

1443 Commits

Author SHA1 Message Date
Wing Lian
2501a371c6 fix setting the authorized keys when there is more than one in the env var (#1626) 2024-05-15 20:48:56 -04:00
Wing Lian
e6937e884b fix symlinks for axolotl outputs (#1625) 2024-05-15 19:41:45 -04:00
Wing Lian
039e2a0370 bump versions of deps (#1621)
* bump versions of deps

* bump transformers too

* fix xformers deps and include s3fs install
2024-05-15 13:27:44 -04:00
Wing Lian
4fde300e5f update outputs path so that we can mount workspace to /workspace/data (#1623)
* update outputs path so that we can mount workspace to /workspace/data

* fix ln order
2024-05-15 12:44:13 -04:00
Wing Lian
3319780300 update torch 2.2.1 -> 2.2.2 (#1622) 2024-05-15 09:45:27 -04:00
bofeng huang
81da7d2531 Fix total_num_steps (#1566)
* Fix `total_num_steps`

* Fix total_num_steps

* lint
2024-05-14 20:10:37 -04:00
Ali Mosavian
1e1921b794 FIX: max_length and max_prompt_length was not being sent to ORPOTrainer (#1584)
* FIX: TRL trainer preprocessing step was running in one process

* FIX: max_length and max_prompt_length was not being sent to ORPOTrainer

* FIX: Change ORPO max prompt length to 1/4 of max length, otherwise we get strange behaviour

* FIX: Removed change from a different PR

* FIX: Black fix

* explicitly set max prompt len for orpo config

---------

Co-authored-by: Ali Mosavian <ali.mosavian@kry.se>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-05-14 08:51:17 -04:00
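
For context on the two fields this commit wires through, here is a minimal sketch of a TRL `ORPOConfig`; the `sequence_len` value is illustrative, and the 1/4 prompt budget mirrors the commit message rather than axolotl's exact code:

```python
from trl import ORPOConfig

sequence_len = 2048  # illustrative value

orpo_args = ORPOConfig(
    output_dir="./outputs",
    max_length=sequence_len,
    # per the commit message: default the prompt budget to 1/4 of max_length
    max_prompt_length=sequence_len // 4,
)
```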
Wing Lian
1634ac82e0 make sure to save on the last step (#1615) 2024-05-14 08:48:39 -04:00
Wing Lian
02982733ec fix attention mask collation (#1603) 2024-05-14 08:17:30 -04:00
Chansung Park
5d97e65f95 add dstack section (#1612) [skip ci]
* add dstack section

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-05-14 08:13:45 -04:00
Wing Lian
2147cf6837 Llama3 dpo (#1610)
* add dpo llama3

* fix dpo bos and eos

* bos token gets added automatically by the tokenizer

* explicit <|end_of_text|> not needed, as eot_id is sufficient

---------

Co-authored-by: Nero10578 <owenarliawan@gmail.com>
2024-05-11 18:29:03 -04:00
Ram
50421c8b1d feat: Add LLaMA-3 instruct prompt strategies for fine-tuning (#1553)
* Add prompt strategies

* Update modified URL

* Update modified URL

* Update fastchat_conversation_turns.py

* Update register function

* Remove extra \n for system prompt

* Fix return

* Fix BOS

* Update requirements, pylint

* Linting

* Linting

* fix tuples, make sure to set system message in template

* tests for llama3 tokenization

* fix conditionals for loading chat template

---------

Co-authored-by: Ram <ram@Rams-MacBook-Pro.local>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-05-11 00:08:04 -04:00
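
For reference, the Llama-3 instruct turn layout these prompt strategies target (special tokens as released with the model; the surrounding string is illustrative). As the Llama3 dpo commit above notes, the tokenizer adds the BOS token `<|begin_of_text|>` automatically, so the strategy should not add it again:

```python
# Llama-3 instruct turn structure (layout for reference; whitespace matters)
LLAMA3_PROMPT = (
    "<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n{response}<|eot_id|>"
)
```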
Antoni-Joan Solergibert
b32c08f8cc adding llama3 fastchat conversation monkeypatch (#1539)
* adding llama3 fastchat conversation monkeypatch

* Updated conversation turns to work with PR3259 of FastChat

* fixed bos token

* bump fastchat version

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-05-10 10:40:05 -04:00
Wing Lian
fff06af8d0 ignore the fsdp_config section too (#1606) [skip ci] 2024-05-09 13:30:39 -04:00
Wing Lian
796a085b2f make sure to save the lora adapter at the end of RL/dpo training (#1573) 2024-05-08 10:39:33 -04:00
Wing Lian
cb78a36374 improve tool handling roles (#1587) 2024-05-07 11:30:40 -04:00
NanoCode012
8b9c15b17f feat: exclude mamba blocks for jamba (#1578) 2024-05-07 22:52:57 +09:00
Chirag Jain
9e1480e9ca Pass deepspeed and fsdp as None explicitly when merging adapters to allow custom device_map (#1575) 2024-05-07 22:47:55 +09:00
marijnfs
3367fca732 Gradio configuration parameters (#1591)
* Gradio Configuration Settings

* Making various Gradio variables configurable instead of hardcoded

* Remove overwriting behaviour of 'default tokens' that breaks tokenizer for llama3

* Fix type of gradio_temperature

* revert unnecessary change and lint

---------

Co-authored-by: Marijn Stollenga <stollenga@imfusion.de>
Co-authored-by: Marijn Stollenga <stollenga@imfusion.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-05-06 15:43:42 -04:00
tpoisonooo
1ac899800b docs(config.qmd): add loraplus example (#1577)
* Update qwen2-moe-lora.yaml

* feat(project): update
2024-05-06 14:05:28 +09:00
Wing Lian
70185763f6 add torch 2.3.0 to builds (#1593) 2024-05-05 18:45:45 -04:00
Wing Lian
120b809465 fix for jupyterlab on cloud start (#1594) 2024-05-05 10:08:43 -04:00
Wing Lian
29cf15a28c improve save callbacks (#1592) 2024-05-04 23:19:18 -04:00
Chirag Jain
dde02fcb94 Pass weakref to model in the SIGINT handler to free up model post train function (#1581)
* Pass weakref to model in the SIGINT handler to free up model post train()

* Fix lint issues

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-05-03 11:05:28 -04:00
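
A minimal sketch of the weak-reference pattern this commit describes; the handler body and names are illustrative, not axolotl's exact code:

```python
import signal
import sys
import weakref

def register_sigint_handler(model, output_dir):
    # Hold only a weak reference so the handler does not keep the model
    # alive (and its GPU memory pinned) after train() returns.
    model_ref = weakref.ref(model)

    def handler(signum, frame):
        live_model = model_ref()  # None once the model has been garbage collected
        if live_model is not None:
            live_model.save_pretrained(output_dir)
        sys.exit(0)

    signal.signal(signal.SIGINT, handler)
```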
Ali Mosavian
b9bb169602 FIX: TRL trainer preprocessing step was running in one process (#1583)
* FIX: TRL trainer preprocessing step was running in one process

* FIX: Changed so that dataset_num_proc is sent to CPO, KTO and ORPO trainer args and directly to the trainer when DPO

* FIX: Changed back to only support ORPO for now, since KTO is handled in another way

---------

Co-authored-by: Ali Mosavian <ali.mosavian@kry.se>
2024-05-03 11:02:59 -04:00
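
A sketch of the effect on the trainer config: TRL's `ORPOConfig` (and the CPO/KTO equivalents) accepts `dataset_num_proc`, which parallelizes the tokenization/preprocessing step; values here are illustrative:

```python
from trl import ORPOConfig

orpo_args = ORPOConfig(
    output_dir="./outputs",
    # run the trainer's dataset preprocessing across 8 CPU workers
    # instead of a single process
    dataset_num_proc=8,
)
```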
JohanWork
601c08b4c2 ADD: warning hub model (#1301)
* update warning for save_strategy

* update

* clean up

* update

* Update test_validation.py

* fix validation step

* update

* test_validation

* update

* fix

* fix

---------

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2024-05-01 01:05:12 +09:00
Abhinand
cc5d31e0d9 Add debug option for RL dataset preprocessing (#1404)
* adding debug option for RL dataset preprocessing

* Refine formatting of debugging code in RL dataset preprocessing

* Update __init__.py

* chore: fix lint

---------

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2024-05-01 00:36:04 +09:00
NanoCode012
1aeece6e24 chore(doc): clarify micro_batch_size (#1579) [skip ci] 2024-05-01 00:33:53 +09:00
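
The clarification concerns how the batch-size knobs compose; the effective (optimizer) batch size is commonly computed as below (numbers illustrative):

```python
micro_batch_size = 2               # per-GPU batch per forward pass
gradient_accumulation_steps = 4    # forward passes per optimizer step
num_gpus = 8

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 64 samples per optimizer step
```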
Wing Lian
5294653a2d PoSE context length ext (#1567)
* PoSE wip

* fixes for pose splitting

* set pose context len so we can pick that up separately from the usable training context len

* support min sample len and define num chunks

* fix chunk splitting

* support for curriculum/ordered learning with pose

* fix sequence len sort

* add curriculum_sampling to pydantic
2024-04-27 12:28:20 -04:00
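
A rough sketch of the PoSE idea behind this PR: split each (short) training sample into chunks and insert random skips between their position ids, so the model sees positions up to the longer pose context length. Illustrative only; axolotl's actual chunking and config options differ in detail:

```python
import random

def pose_position_ids(sample_len, pose_context_len, num_chunks=2):
    chunk_len = sample_len // num_chunks
    max_skip_per_gap = (pose_context_len - sample_len) // max(num_chunks - 1, 1)

    position_ids, offset = [], 0
    for i in range(num_chunks):
        start = i * chunk_len
        end = (i + 1) * chunk_len if i < num_chunks - 1 else sample_len
        position_ids.extend(range(start + offset, end + offset))
        offset += random.randint(0, max_skip_per_gap)  # skip ahead before the next chunk
    return position_ids

# e.g. train on 2048-token samples while exercising positions up to 8192
ids = pose_position_ids(2048, 8192)
assert len(ids) == 2048 and max(ids) < 8192
```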
Motoki Wu
98c25e15cb Add ORPO example and e2e test (#1572)
* add example for mistral orpo

* sample_packing: false for orpo

* go to load_dataset (since load_rl_datasets requires a transform_fn, which only dpo uses currently)
2024-04-27 12:07:06 -04:00
Wing Lian
68601ec6ad make sure everything stays in the same dtype when using dpo + FSDP (#1559) 2024-04-22 16:00:05 -04:00
Haoxiang Wang
60f5ce0569 Add support for Gemma chat template (#1530)
* Add support for Gemma chat template

* Update fschat version to include its newest support for Gemma chat style

* pin fastchat to current HEAD

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-04-21 19:55:40 -04:00
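
For reference, the Gemma chat turn layout the template support targets (control tokens as published with the model; the string is shown purely for illustration):

```python
GEMMA_TURN = (
    "<start_of_turn>user\n"
    "{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
    "{assistant_message}<end_of_turn>\n"
)
```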
Frank Ruis
7477a53287 wrap prepared_ds_path in str() to avoid TypeError in fsspec package (#1548)
* wrap prepared_ds_path in str() to avoid TypeError in fsspec package

`fsspec` calls `if "::" in path` on `prepared_ds_path`, which will throw an error if it is a `PosixPath` object.

* update test too

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-04-21 19:55:20 -04:00
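
The failure mode and fix from this commit, reproduced in a few lines (the path value is illustrative):

```python
from pathlib import Path

prepared_ds_path = Path("last_run_prepared") / "some_hash"

# fsspec's chained-filesystem check does `"::" in path`; on a PosixPath this
# raises TypeError: argument of type 'PosixPath' is not iterable
try:
    _ = "::" in prepared_ds_path
except TypeError as exc:
    print(exc)

# the fix: pass the path through str() before handing it to fsspec/datasets
assert "::" not in str(prepared_ds_path)
```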
Wing Lian
7d1d22f72f ORPO Trainer replacement (#1551)
* WIP use trl ORPOTrainer

* fixes to make orpo work with trl

* fix the chat template loading

* make sure to handle the special tokens and add_generation for assistant turn too
2024-04-19 17:25:36 -04:00
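
The last bullet refers to the assistant-turn handling in the Hugging Face chat-template API; a minimal illustration (the model id is an assumption for the example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [{"role": "user", "content": "Write a haiku about packing."}]

# add_generation_prompt=True appends the opening of the assistant turn,
# so the chosen/rejected completion lines up right after it
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```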
NanoCode012
0e8f340945 fix(yml): update llama-3 config (#1543) [skip ci] 2024-04-19 20:44:46 +09:00
NanoCode012
59ef25470c fix(packages): lock datasets version (#1545) 2024-04-19 20:42:10 +09:00
Wing Lian
c10563c444 fix broken linting (#1541)
* chore: lint

* include examples in yaml check

* mistral decided to gate their models...

* more mistral models that were gated
2024-04-19 01:03:04 -04:00
Monk (looking for PhD Fall’24)
37c037c69d Adding Llama-3 qlora (#1536)
* Create qlora.yml

* Update qlora.yml
2024-04-18 21:27:32 +02:00
Wing Lian
15f7910d33 llama-3 examples (#1537) 2024-04-18 14:28:03 -04:00
NanoCode012
d28ba2e405 feat(doc): Add example for pad_token (#1535) 2024-04-19 02:20:20 +09:00
Atlas
0eadfc8c86 Create mixtral_22.yml (#1514) [skip ci]
Code sourced from here:

https://twitter.com/mattshumer_/status/1778135774887567712
2024-04-17 01:16:00 -04:00
Atlas
bcaa92325d Update Readme to include support for Mixtral8X22B (#1518) [skip ci] 2024-04-17 01:15:30 -04:00
YTING
7d9bafcb88 Update README.md (#1521) [skip ci] 2024-04-17 01:15:05 -04:00
Wing Lian
e07dcb288c add docs around pre-processing (#1529) 2024-04-16 19:45:46 -04:00
Wing Lian
6319da1f9b Unsloth gradient checkpointing offload (#1528)
* unsloth gradient checkpointing

* fix validation too

* fixes to make it work with mistral

* monkeypatch the checkpoint fn earlier
2024-04-16 14:53:57 -04:00
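
For background, one common way CPU-offloaded gradient checkpointing is implemented is a custom autograd function that parks the layer input on CPU between forward and backward. This is a generic sketch of that idea, not Unsloth's or axolotl's actual patch, and it assumes `forward_fn` returns a single tensor:

```python
import torch

class OffloadedCheckpointFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, forward_fn, hidden_states, *args):
        # park the activation input on CPU so it does not hold GPU memory
        ctx.forward_fn = forward_fn
        ctx.saved_cpu = hidden_states.detach().to("cpu")
        ctx.extra_args = args
        with torch.no_grad():
            return forward_fn(hidden_states, *args)

    @staticmethod
    def backward(ctx, grad_output):
        # bring the input back to the GPU and recompute the forward pass
        hidden_states = ctx.saved_cpu.to(grad_output.device).requires_grad_(True)
        with torch.enable_grad():
            output = ctx.forward_fn(hidden_states, *ctx.extra_args)
        torch.autograd.backward(output, grad_output)
        return (None, hidden_states.grad) + (None,) * len(ctx.extra_args)

# usage sketch: out = OffloadedCheckpointFn.apply(layer.forward, hidden_states)
```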
Wing Lian
132eb740f0 DBRX Model Support (#1462)
* wip for dbrx finetuning

* add fastcore for parallel loading of sharded weights

* fix dtype for load, use PartialState instead of accelerator to init process group, remove redundant wandb callback

* update to use v2 of the converted model

* more fixes for dbrx loras

* make sure to enable fsdp activation checkpointing

* fix support for 8bit loras too for dbrx

* apply z3 leaf moe fix for DBRX with deepspeed

* don't raise value error since child module searches could fail and be ok

* revert a previous change to fix fsdp

* update mistral/mistral qlora+fsdp yamls

* fix qlora+fsdp quant storage type

* more edge cases for qlora-fsdp

* fixes for fsdp+qlora w optimizer in 8bit

* add bigstral z3 config and make sure to use full_state_dict for fsdp
2024-04-12 09:02:36 -04:00
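
One bullet concerns the qlora+fsdp quant storage type; the usual knob is `bnb_4bit_quant_storage` on the bitsandbytes config, which should match the dtype FSDP uses to flatten parameters. A sketch with illustrative values, not necessarily axolotl's defaults:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    # FSDP shards flat parameters; storing the 4-bit weights in the same
    # dtype keeps them shardable alongside the rest of the model
    bnb_4bit_quant_storage=torch.bfloat16,
)
```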
Thomas Capelle
5ed29393e3 Update SaveAxolotlConfigtoWandBCallback to use artifact instead of save (#1483)
* deprecated wandb.save

* also use wandb.save for axolotl yaml

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-04-09 18:58:38 -04:00
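
A minimal sketch of logging the config as a W&B artifact rather than via `wandb.save` (project and file names are illustrative):

```python
import wandb

run = wandb.init(project="axolotl")
artifact = wandb.Artifact(name="axolotl-config", type="config")
artifact.add_file("config.yml")
run.log_artifact(artifact)
```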
Wing Lian
da9b1a3196 use locale-agnostic separator to make large nums easier to read (#1503) 2024-04-09 17:28:43 -04:00
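
The relevant formatting trick: Python's `:,` format spec groups digits the same way in every locale (unlike `locale.format_string`):

```python
total_tokens = 1_443_000_000
print(f"total tokens: {total_tokens:,}")  # total tokens: 1,443,000,000
```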
DavidFarago
057fa44191 WIP: Support table logging for mlflow, too (#1506)
* WIP: Support table logging for mlflow, too

Create a `LogPredictionCallback` for both "wandb" and "mlflow" if
specified.

In `log_prediction_callback_factory`, create a generic table and make it
specific only if the newly added `logger` argument is set to "wandb"
resp. "mlflow".

See https://github.com/OpenAccess-AI-Collective/axolotl/issues/1505

* chore: lint

* add additional clause for mlflow as it's optional

* Fix circular imports

---------

Co-authored-by: Dave Farago <dfarago@innoopract.com>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-04-09 17:28:27 -04:00
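
A sketch of the factory shape the commit describes: build a backend-agnostic table and only branch on the `logger` argument at logging time. Names follow the commit message; the real implementation differs in detail:

```python
from transformers import TrainerCallback

def log_prediction_callback_factory(trainer, tokenizer, logger):
    class LogPredictionCallback(TrainerCallback):
        def on_evaluate(self, args, state, control, **kwargs):
            # generic table, independent of the logging backend
            columns = ["prompt", "prediction"]
            rows = [["example prompt", "example prediction"]]  # placeholder rows
            if logger == "wandb":
                import wandb
                wandb.log({"eval_predictions": wandb.Table(columns=columns, data=rows)})
            elif logger == "mlflow":
                import mlflow
                mlflow.log_table(
                    data={col: [row[i] for row in rows] for i, col in enumerate(columns)},
                    artifact_file="eval_predictions.json",
                )

    return LogPredictionCallback
```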
Scott Fleming
8fa0785f74 Correctly handle splits for datasets.arrow_dataset.Dataset objects (#1504)
* Correctly handle splits for datasets.arrow_dataset.Dataset objects

The `load_tokenized_prepared_datasets` function currently has logic for loading a dataset from local path that always checks if a split is in the dataset. The problem is, if the dataset is loaded using `load_from_disk` and it is an Arrow-based dataset, *there is no* split information. Instead what happens is, by calling `split in ds`, it presumably searches through all the rows and columns of the arrow dataset object to find e.g., 'train' assuming `split == 'train'`. This causes the program to hang.

See https://chat.openai.com/share/0d567dbd-d60b-4079-9040-e1de58a4dff3 for context.

* chore: lint

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-04-09 16:40:26 -04:00
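
The guard the fix amounts to, sketched below: only treat the loaded object as having named splits when it actually is a `DatasetDict` (the path is illustrative):

```python
from datasets import DatasetDict, load_from_disk

ds = load_from_disk("last_run_prepared/some_hash")

# A DatasetDict has named splits, so `"train" in ds` is a cheap key lookup.
# A plain arrow Dataset has none; `"train" in ds` would iterate its rows
# looking for the string, which is what made loading appear to hang.
train_ds = ds["train"] if isinstance(ds, DatasetDict) else ds
```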