Wing Lian
9f824ef76a
simplify the example configs to be more minimal and less daunting (#2486) [skip ci]
* simplify the example configs to be more minimal and less daunting
* drop empty s2_attention from example yamls
2025-04-04 13:47:26 -04:00
Sunny Liu
1c14c4a15c
Add hub model id config options to all example yml files (#2196) [skip ci]
* added hub model_id in example yml
* add hub model id to example yml
2024-12-17 11:24:30 -05:00
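A minimal sketch of the option this commit adds to the example configs, assuming axolotl's `hub_model_id` key (the repo name shown is hypothetical):

```yaml
# push checkpoints/final model to this Hugging Face Hub repo (hypothetical name)
hub_model_id: your-username/your-model
```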
Wing Lian
4fde300e5f
update outputs path so that we can mount workspace to /workspace/data (#1623)
* update outputs path so that we can mount workspace to /workspace/data
* fix ln order
2024-05-15 12:44:13 -04:00
NanoCode012
a7a9a1433a
fix(examples): remove is_*_derived as it's parsed automatically (#1297)
2024-02-22 00:52:46 +09:00
Wing Lian
782b6a4216
set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci]
* set fp16 to false if bf16, update bf16: auto in example YAMLs
* unset fp16 so that it falls back properly if bf16 isn't available
* Update README.md [skip-ci]
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
* test that bf16 disables fp16
---------
Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
2024-01-22 18:44:01 -05:00
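A minimal sketch of the pattern this commit rolls out to the example YAMLs, assuming axolotl's `bf16` key:

```yaml
# use bf16 when the hardware supports it; leave fp16 unset so training
# can fall back cleanly on GPUs without bf16 support
bf16: auto
```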
Wing Lian
5f79b8242f
new evals_per_epoch and saves_per_epoch to make things cleaner (#944)
* new evals_per_epoch and saves_per_epoch to make things cleaner
* update per PR feedback
2023-12-12 15:35:23 -05:00
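A minimal sketch of the new options, assuming the `evals_per_epoch` and `saves_per_epoch` keys this commit introduces (the counts shown are illustrative):

```yaml
# evaluate and checkpoint a fixed number of times per epoch
# instead of specifying raw step counts
evals_per_epoch: 4
saves_per_epoch: 1
```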
NanoCode012
a1da39cd48
Feat(wandb): Refactor to be more flexible (#767)
* Feat: Update to handle wandb env better
* chore: rename wandb_run_id to wandb_name
* feat: add new recommendation and update config
* fix: indent and pop disabled env if project passed
* feat: test env set for wandb and recommendation
* feat: update to use wandb_name and allow id
* chore: add info to readme
2023-12-04 22:17:25 +09:00
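A minimal sketch of the renamed option, assuming the `wandb_name` key that replaces `wandb_run_id` in this refactor (the project and run names are hypothetical):

```yaml
# wandb_run_id was renamed to wandb_name as part of this refactor
wandb_project: my-project   # hypothetical project name
wandb_name: my-run-name     # hypothetical run name
```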
Wing Lian
f544ab2bed
don't compile deepspeed or bitsandbytes from source (#837)
2023-11-08 19:49:55 -05:00
Wing Lian
8b79ff0e94
fix eval_steps to be a sane default (#797)
* fix eval_steps to be a sane default
* update docs for fractional eval_steps
2023-10-27 22:36:30 -04:00
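A minimal sketch of the fractional form this commit documents, assuming `eval_steps` follows the Hugging Face Trainer convention where a float in (0, 1) is interpreted as a ratio of total training steps (the value shown is illustrative):

```yaml
# a float below 1.0 evaluates at that fraction of total training steps
eval_steps: 0.05
```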
Wing Lian
2d8def68dc
simplify by removing duplicate base_model_config (#772)
2023-10-23 01:42:38 -04:00
Wing Lian
e50a64e85e
prepared dataset caching, other misc fixes (#665)
* prepared dataset caching, other misc fixes
* also don't load from disk cache unless explicit
2023-10-02 21:07:24 -04:00
Doan Minh Phuong
1aa400721e
Fix Codellama examples (#582)
* Fix seq_len
* Update lora.yml
* Update qlora.yml
* Update lora.yml
* Update lora.yml
* Update qlora.yml
2023-09-15 04:19:13 -04:00
Wing Lian
343714972b
recommend padding when using sample packing (#531)
2023-09-06 17:00:21 -04:00
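A minimal sketch of the recommended pairing, assuming axolotl's `sample_packing` and `pad_to_sequence_len` keys:

```yaml
# when packing multiple samples into one sequence, padding out to the
# full sequence length is recommended
sample_packing: true
pad_to_sequence_len: true
```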
mhenrichsen
35130711d6
Feat(cfg): Add code-llama configs for all sizes (#479)
* configs for all sizes
* update tokenizer type
---------
Co-authored-by: mhenrichsen <some_email@hey.com>
2023-08-27 10:20:17 +09:00