Aman Karmani
dfe591435f
make lisa training example work on one 24gb gpu
2024-04-02 03:19:54 +00:00
Aman Karmani
5dd9364c00
example config for lisa
2024-04-01 07:27:16 +00:00
Aman Karmani
6185cd5227
fix LISA by ensuring params are not frozen during __init__
2024-04-01 06:57:28 +00:00
Aman Karmani
b357c93f23
improve lisa callback logging
2024-04-01 04:54:03 +00:00
Wing Lian
21a5094226
fix default and fix attribute traversal for layers
2024-03-31 00:27:04 -04:00
Wing Lian
3a9ad7c66e
add lisa support
2024-03-30 22:55:15 -04:00
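The six commits above (3a9ad7c66e through dfe591435f) land LISA (layerwise importance-sampled AdamW) support, including an example config sized for a single 24GB GPU. A minimal config sketch of what such an example might look like; the option names `lisa_n_layers`, `lisa_step_interval`, and `lisa_layers_attribute` are assumptions inferred from the commit subjects ("fix default and fix attribute traversal for layers"), not verified against the merged code:

```yaml
# Hypothetical LISA fine-tune fragment; key names and values are assumptions.
lisa_n_layers: 4                      # layers left unfrozen per sampling interval (assumed key)
lisa_step_interval: 20                # re-sample the active layers every N steps (assumed key)
lisa_layers_attribute: model.layers   # attribute path to the decoder layers (assumed key)
```

For the actual supported keys and defaults, consult the example config added in 5dd9364c00.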
Wing Lian
89134f2143
make sure to install causal_conv1d in docker (#1459)
2024-03-29 16:43:25 -04:00
Wing Lian
6086be85f7
qwen2_moe support w multipack (#1455)
2024-03-29 11:04:53 -04:00
Wing Lian
4a92a3b9ee
Nightlies fix v4 (#1458) [skip ci]
...
* another attempt at github actions
* try again
2024-03-29 11:04:34 -04:00
Wing Lian
46a73e3d1a
fix yaml parsing for workflow (#1457) [skip ci]
2024-03-29 10:21:08 -04:00
Wing Lian
da3415bb5a
fix how nightly tag is generated (#1456) [skip ci]
2024-03-29 09:29:17 -04:00
Wing Lian
8cb127abeb
configure nightly docker builds (#1454) [skip ci]
...
* configure nightly docker builds
* also test update pytorch in modal ci
2024-03-29 08:25:45 -04:00
Wing Lian
05b398a072
fix some of the edge cases for Jamba (#1452)
...
* fix some of the edge cases for Jamba
* update requirements for jamba
2024-03-29 02:38:02 -04:00
Keith Stevens
e634118f90
Support loading datasets saved via save_to_disk (#1432)
...
* Support loading datasets saved via save_to_disk
* Adding comprehensive unittests
* Fix dataset tests due to new hash changes
2024-03-29 00:19:36 -04:00
Wing Lian
02af0820f7
Jamba (#1451)
...
* fixes for larger models
* add qlora example for deepspeed
* add readme for jamba
2024-03-28 21:03:22 -04:00
Wing Lian
4155e9988f
fix layer_replication arg to peft (#1446)
2024-03-27 10:18:56 -04:00
Wing Lian
25afd35842
support layer replication for peft and fix rslora integration (#1445)
2024-03-27 10:16:47 -04:00
Wing Lian
da265dd796
fix for accelerate env var for auto bf16, add new base image and expand torch_cuda_arch_list support (#1413)
2024-03-26 16:46:19 -04:00
WenboPan
e07347b188
Remove seq_len arg in rotary_emb (#1443)
...
* remove seq_len in llama rotary_emb
* chore: lint
---------
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-03-26 15:19:44 -04:00
Far El
bcdc9b1601
Fix falcon tokenization step (#1441) [skip ci]
...
* Fix falcon tokenization step
* chore: lint
---------
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-03-26 15:19:34 -04:00
Satpal Singh Rathore
c19d060a74
turn sample_packing on for training (#1438) [skip ci]
2024-03-26 15:19:04 -04:00
Wing Lian
601b77bc9d
make sure to capture non-null defaults from config validation (#1415)
2024-03-26 15:18:47 -04:00
NanoCode012
ff939d8a64
fix(dataset): normalize tokenizer config and change hash from tokenizer class to tokenizer path (#1298)
...
* fix(dataset): normalize tokenizer config and change hash from tokenizer class to tokenizer path
* fix: normalize config
2024-03-25 15:34:54 +09:00
Phuc Van Phan
324d59ea0d
docs: update link to docs of advanced topics in README.md (#1437)
2024-03-24 21:49:27 -07:00
NanoCode012
f1ebaa07c6
chore(config): refactor old mistral config (#1435)
...
* chore(config): refactor old mistral config
* chore: add link to colab on readme
2024-03-25 12:00:44 +09:00
Wing Lian
34ba634b8c
Fix ORPO multi gpu (#1433)
...
* don't drop attention_mask for orpo
* handle multi-gpu cases better for orpo
* revert change to not drop the attention_mask from inputs for orpo
2024-03-22 15:22:58 -07:00
Hamel Husain
4e69aa48ab
Update docs.yml
2024-03-21 22:36:57 -07:00
Hamel Husain
629450cecd
Bootstrap Hosted Axolotl Docs w/Quarto (#1429)
...
* precommit
* mv styes.css
* fix links
2024-03-21 22:28:36 -07:00
Wing Lian
2a1589f6f6
strip out hacky qlora-fsdp workarounds now that qlora-fsdp fixes are upstreamed (#1428)
2024-03-21 11:56:13 -04:00
Younes Belkada
7d55607368
HF / FEAT: Optimize HF tags (#1425) [skip ci]
...
* optimize tags
* chore: lint
---------
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-03-21 11:55:56 -04:00
Wing Lian
7803f0934f
fixes for dpo and orpo template loading (#1424)
2024-03-20 11:36:24 -04:00
Wing Lian
dd449c5cd8
support galore once upstreamed into transformers (#1409)
...
* support galore once upstreamed into transformers
* update module name for llama in readme and fix typing for all linear
* bump trl for deprecation fixes from newer transformers
* include galore as an extra and install in docker image
* fix optim_args type
* fix optim_args
* update dependencies for galore
* add galore to cicd dockerfile
2024-03-19 09:26:35 -04:00
NanoCode012
40a88e8c4a
Feat: Add sharegpt multirole (#1137)
...
* feat(prompt): support multiple roles for sharegpt
* fix: add handling of empty role back
* feat: rebased and allowed more dynamic roles via config
* fix: variable
* chore: update message
* feat: add vicuna format
* fix: JSON serializable error
* fix: typing
* fix: don't remap for unknown keys
* fix: add roles to pydantic
* feat: add test
* chore: remove leftover print
* chore: remove leftover comment
* chore: remove print
* fix: update test to use chatml
2024-03-19 20:51:49 +09:00
Seungduk Kim
43bdc5d3de
Add a config not to shuffle merged dataset (#1394) [skip ci]
...
* Add a config not to shuffle merged dataset
* Update README.md
* Update src/axolotl/utils/config/models/input/v0_4_1/__init__.py
Co-authored-by: Wing Lian <wing.lian@gmail.com>
* invert the condition name
* update README
* info -> debug
---------
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-03-19 20:51:00 +09:00
NanoCode012
b1e3e1b25f
fix(config): passing gradient_checkpoint_kwargs (#1412)
...
* fix(config): change default use_reentrant to true
* Update trainer_builder.py
* fix: make sure to pass kwargs to enable checkpoint
* chore: lint
2024-03-19 12:57:43 +09:00
Wing Lian
2ea70ebbd8
ORPO (#1419)
...
* orpo trainer
* rl handling for orpo
* support for remove_unused_columns
* orpo fixes
* fix loader for orpo
* chore: lint
* fix default for remove_unused_columns
* roll ORPO into the main AxolotlTrainer so it can be compatible with some of the other techniques like relora
* better handling of system message for orpo
* revert system prompt changes for chat templates
* no need for else condition
* split dataset parsing into its own component
2024-03-18 13:10:00 -04:00
jbl
e8c8ea64b3
Update README.md (#1418)
...
Add Phorm AI Badge
2024-03-17 23:47:46 -04:00
NanoCode012
d485a08393
chore(script): remove redundant setting (#1411)
2024-03-16 21:10:38 +09:00
NanoCode012
f083aed2c7
Fix(readme): Improve README QuickStart info (#1408)
...
* Fix(readme): Improve README QuickStart info
* chore: add to toc
2024-03-16 21:10:22 +09:00
NanoCode012
868c33954d
Feat(readme): Add instructions for Google GPU VM instances (#1410)
2024-03-16 21:10:05 +09:00
Wing Lian
8df7b888ff
beta support for multipack with gemmoe (#1402)
2024-03-14 15:52:23 -04:00
Sebastian Raschka
6366b0c212
Fix Gemma 7b qlora.yml (#1405)
2024-03-14 15:44:38 -04:00
Seungduk Kim
05bcc9ea56
Train parameters exclusively in specific ranges (#1390)
...
* Train parameters exclusively in specific ranges
* Fix the style and update docs
* Update yaml example
2024-03-14 11:05:42 -04:00
Chirag Jain
3bd8203c35
Don't disable existing loggers when configuring axolotl logging (#1395)
2024-03-14 11:05:21 -04:00
Hamel Husain
8b12468230
Add QLoRA + FSDP Docs (#1403)
...
* pre commit
* Update fsdp_qlora.md
2024-03-14 11:04:51 -04:00
Chirag Jain
0976781e15
Update ChatTemplate enum to include alpaca and gemma (#1396)
2024-03-13 11:06:02 -04:00
Wing Lian
8a82d2e0a4
add handling for argilla dpo-mix (#1397)
2024-03-12 17:17:10 -04:00
Wing Lian
4326520829
chore: lint (#1389)
2024-03-10 21:02:55 -04:00
Brian Fitzgerald
b7d8a7dc4d
Add Glaive conversation format support (#1365)
...
* Add Glaive conversation format support
* fix black formatting errors
* Fix black and pylint formatting errors
* only set role_key_tool if provided in the dataset constructor
* Update src/axolotl/prompt_strategies/sharegpt.py
Co-authored-by: Wing Lian <wing.lian@gmail.com>
* sharegpt test
* tokenizer test
* fix formatting
---------
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-03-10 20:50:25 -04:00
Seungduk Kim
b0ee9ec734
Set gradient_clipping to auto in DeepSpeed configs (#1382) [skip ci]
2024-03-10 20:50:12 -04:00