Wing Lian
b50d35bec9
Logging config for colab ( #2611 )
...
* only configure logging on cli to play nicely with colab
* allow reloading the config on the fly from a dict
* make sure to use dict for yaml
* reuse existing function for load
* make cli args optional
* mps fix and respect max_steps
2025-05-07 16:10:13 -04:00
Wing Lian
bc6dfa6899
add missing __init__ for lr monkeypatch fix ( #2609 )
2025-05-07 16:10:13 -04:00
Dhruv Mullick
9d6e8af622
Add num_completions_to_print for trl and grpo ( #2604 )
2025-05-07 16:10:12 -04:00
Wing Lian
d49a4268b8
additional args for grpo config/trainer ( #2598 )
2025-05-07 16:10:12 -04:00
Wing Lian
1d6e931115
replace zero_only with simpler if statement ( #2592 )
2025-05-07 16:10:12 -04:00
Wing Lian
24907533d1
don't automatically enable lora kernels for RL training ( #2600 )
2025-05-07 16:10:12 -04:00
Wing Lian
0e9d816d2e
only import vllm serve cli if it's being called ( #2597 ) [skip ci]
2025-05-07 16:10:12 -04:00
Wing Lian
72f142186a
Handle other reasoning trace dataset formats ( #2591 )
...
* Handle other reasoning trace dataset formats
* rename var to improve readability
* chore: refactor with comments
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
2025-05-07 16:10:11 -04:00
Wing Lian
87726322bf
upload the deepspeed json to wandb ( #2593 ) [skip ci]
2025-05-07 16:10:11 -04:00
NanoCode012
ae8ae7534c
feat: add qwen3 moe block for ds3 ( #2596 ) [skip ci]
2025-05-07 16:10:11 -04:00
Wing Lian
ee00142cb5
patch to convert LR from tensor to float when using DS ( #2595 ) [skip ci]
2025-05-07 16:10:11 -04:00
Aleksandr Dremov
097e7e3b5b
Plugins create_lr_scheduler support ( #2584 )
...
* lr_scheduler support
* fix
* Update scheduler.py
* Update scheduler.py
* cfg handling
* black
* remove debug
* remove adding the axolotl cfg to the scheduler mixin
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-05-07 16:10:11 -04:00
Dan Saunders
c714958181
auto-enable lora kernels where possible ( #2589 )
...
* auto-enable lora kernels where possible
* test
* revert change to example yaml
* naming
* remove print
* slight logic change
2025-05-07 16:10:11 -04:00
Wing Lian
c337ca0872
support for qwen3 with lora kernels ( #2588 )
...
* support for qwen3 with lora kernels
* fix patch
* typo
2025-05-07 16:10:10 -04:00
Dan Saunders
f04f7cf5ad
Fix eval + add smoke test ( #2586 )
...
* fix evaluate CLI
* add smoke test
* fix naming
* lint
2025-05-07 16:10:10 -04:00
Wing Lian
c64a951bc9
set config on the PluginManager for callback access ( #2587 )
2025-05-07 16:10:10 -04:00
Wing Lian
fc88cc56cb
Post release fixes ( #2581 )
...
* fix missing kwarg on child
* make the runpod test shorter
* update docs
* rename runpod test json file
* typing fixes and ordering of doc
2025-05-07 16:10:10 -04:00
Wing Lian
14d670dbf0
v0.9.0 release ( #2578 )
2025-04-28 18:23:17 -04:00
Wing Lian
2d77165dc0
automatically split out reasoning trace from dataset ( #2579 )
...
* automatically split out reasoning trace from dataset
* chore: lint
* fix import
2025-04-28 18:23:03 -04:00
Wing Lian
63b17e3109
chat template and example for qwen3 ( #2577 )
2025-04-28 15:09:41 -04:00
NanoCode012
1178a15ede
Feat: Add qwen3 and CCE for qwen family ( #2518 )
2025-04-28 12:18:46 -04:00
Wing Lian
c513487d1a
support val_set_size for splitting test split from train with DPO ( #2572 )
2025-04-28 12:12:15 -04:00
NanoCode012
7099343c56
feat: add eos_tokens and train_on_eot for chat_template EOT parsing ( #2364 )
...
* feat: add eos_tokens and train_on_eot for chat_template EOT parsing
* fix: comments
* chore: add some examples of tokens
* feat: add new potential errors for chat_template to faq
* feat: add examples for EOT handling
* fix: change error to warning for missing EOS
* fix: warning typo
* feat: add tests for eot token handling
* fix: remove broken caplog capture in test
* fix: chattemplate strategy with kd missing eot changes
2025-04-28 10:11:20 -04:00
Wing Lian
5000cb3fe7
grab sys prompt too from dataset ( #2397 ) [skip ci]
...
* grab sys prompt too from dataset
* chore: add field_system to docs
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
2025-04-28 10:11:06 -04:00
divyanshuaggarwal
170cdb5be9
Add Post_model_load, post_lora_load, post_train, post_train_unload function calls ( #2539 )
...
* Update train.py
add post_model_load and post_lora_load model calls.
* Update train.py
add post_train and post_train_unload function calls
* Update train.py
* Update base.py
* Update train.py
* chore: lint
* clarify plugin hooks
* Update src/axolotl/integrations/base.py
Co-authored-by: Dan Saunders <danjsaund@gmail.com>
* Update src/axolotl/utils/models.py
Co-authored-by: Dan Saunders <danjsaund@gmail.com>
* Update src/axolotl/utils/models.py
Co-authored-by: Dan Saunders <danjsaund@gmail.com>
* Update src/axolotl/integrations/base.py
Co-authored-by: Dan Saunders <danjsaund@gmail.com>
* Update models.py
* Update models.py
* remove extra call to post_model_load
* chore: lint
* add test for hooks and gc trainer
* disable duplicated code check for test
* fix the path and add better handling
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Dan Saunders <danjsaund@gmail.com>
2025-04-28 10:10:28 -04:00
Dhruv Mullick
8b33ae1c4f
Fix bug in grpo reward module import ( #2571 )
2025-04-28 00:31:56 -04:00
Wing Lian
f9c7c3bb72
don't use is_main_process during config validation ( #2569 )
2025-04-26 14:14:52 -04:00
Wing Lian
5dba5c82a8
fix support for wandb run_name for rl trainers ( #2566 ) [skip ci]
...
* fix support for wandb run_name for rl trainers
* prefer to use wandb random names for run_name
2025-04-25 21:10:54 -04:00
Chiwan Park
e3c9d541a7
fix: crash when pretraining_dataset with dispatch_batches is false ( #2558 )
2025-04-25 17:15:03 -04:00
Wing Lian
53dbf97d85
make cce default to true when using the plugin ( #2562 ) [skip ci]
2025-04-25 17:14:26 -04:00
Eko Julianto Salim
2c2563bc34
fix: gradient checkpointing functools.partial object has no attribute __self__ ( #2563 ) [skip ci]
...
* fix: gradient checkpointing causing functools.partial error
* lint
* chore: lint
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-04-25 17:02:37 -04:00
Dan Saunders
ae1c7ace63
Sequence parallel training context manager ( #2553 )
...
* ctx manager for SP
* updates
* update
* further simplifying
* accommodate both training context managers
* simplifying
* simplifying
* nit
* reorg
* tweak codecov yaml
* add gather post hook, simplify, fixes
* pytest
* pytest fix
2025-04-25 10:33:54 -04:00
Wing Lian
a4d5112ae1
builds for torch 2.7.0 ( #2552 )
...
* builds for torch==2.7.0
* use xformers==0.0.29.post3
* no vllm support with torch 2.7
* update default, fix conditional
* no xformers for 270
* no vllm on 2.7.0 for multigpu test too
* remove deprecated verbose arg from scheduler
* 2.7.0 tests on cpu
2025-04-24 00:39:31 -04:00
NanoCode012
a6d28d19b1
feat: add glm and glm4 multipack and cce ( #2546 )
...
* feat: add glm and glm4 multipack
* feat: add glm4 example
* feat: add cce for glm
2025-04-23 10:27:51 -04:00
Wing Lian
32e335dd51
fix missing host/port for vllm ( #2543 )
...
* fix missing host/port for vllm
* set tensor parallel size so it doesn't always default to cli override
2025-04-22 10:16:48 -04:00
Wing Lian
341e95aac9
prevent rate limiting to hf when using dispatch batches ( #2536 ) [skip ci]
2025-04-21 10:31:35 -04:00
Catgat
b882dfb63f
Fixed Rex Scheduler Warm Up ( #2535 ) [skip ci]
...
* Fixed Rex Scheduler Warm Up
* chore: lint
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-04-21 10:30:55 -04:00
Chiwan Park
4ce469d32e
fix: upgrade liger to 0.5.8 and use native Gemma3 patches ( #2527 )
...
* fix: upgrade liger to 0.5.8 and use native Gemma3 patches
* fix: make lint happy
* doc: update Liger Kernel FLCE support for Gemma 3
2025-04-18 09:57:40 -07:00
Wing Lian
60a8f0958d
zero val fix for beta ( #2538 )
2025-04-17 17:27:19 -07:00
NanoCode012
9da730d6a4
fix(doc): cut cross entropy installation instructions broken in qmd ( #2532 )
2025-04-16 15:02:51 -07:00
NanoCode012
32637fad00
fix: preprocess yielding whole dataset to each worker ( #2503 ) [skip ci]
2025-04-16 15:02:35 -07:00
Dan Saunders
b8c633aa97
batch api HF adapter for ring-flash-attn; cleanup and improvements ( #2520 )
...
* batch api HF adapter for ring-flash-attn; cleanup and improvements
* update
* adding all batch ring-flash-attn methods via single adapter
* removing pad_to_sequence_len=False for now
* fix
* updating docs to include batch SP
* review comments
* fixes for batch API funcs, simplify
* fixes
* fix
* updates
* add batch_zigzag smoke test
2025-04-16 13:50:48 -04:00
NanoCode012
682a9cf79b
Fix: add delinearization and make qlora work with fsdp2 ( #2515 )
...
* fixes for delinearization, and make qlora work with fsdp2
* Add back mistakenly removed lm_eval
* typo [skip ci]
* patch evals for torch.compile + fsdp2
* also check torch_compile w fsdp2
* lots of fixes for flex attn with llama4
* fix patch check and patch llama4 too
* attempt to make the patches stick
* use transformers 4.51.2
* update configs and README for llama4
* remove torch.compile for CI test
* cleanup any existing singletons
* set singleton cache to None instead of deleting
* use importlib reload with monkeypatch
* don't worry about transformers version, mark inputs with grads, fix regex
* make sure embeds aren't on cpu
* logging and mem improvements
* vllm version and add to docker, make sure to save processor on conversion
* fix ambiguous tensor bool check
* fix vllm to not use v1, upgrade hf transformers
* fix tests
* make flex_attn_compile_kwargs configurable, since this depends on model params
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Salman Mohammadi <salman.mohammadi@outlook.com>
2025-04-15 23:31:39 -07:00
NanoCode012
271b24cccc
feat: update cce to latest ( #2521 )
2025-04-15 22:17:10 -07:00
NanoCode012
e0420b3528
fix: allow merge lora on pre-quantized model ( #2511 )
...
* fix: allow merge lora on pre-quantized model
* fix: remove unused sections per comment
2025-04-09 14:01:42 -04:00
NanoCode012
f85861a0b2
fix: liger swiglu for llama4 ( #2504 )
...
* fix: liger swiglu for llama4
* feat: add liger to deepseek v3
* fix: unpack not found
* fix: spelling
* fix: comment out deepseek v3
* fix: retest deepseek
* fix: map glu
* fix: patch model forward
* chore: add temp code to save
* fix: remove deepseek to move into separate PR
2025-04-09 02:53:17 -04:00
Wing Lian
0dac2ddeac
Llama4 linearized ( #2502 )
...
* llama4 support for linearized experts
* clean up fsdp2 sharding to prevent hang
* add yaml config
* cleanup example [skip ci]
2025-04-07 20:47:00 -04:00
NanoCode012
a6c03217f5
feat: add llama4 CCE ( #2498 )
...
* feat: add llama4 CCE
* fix: update model support list doc
* feat: include llama4_text
2025-04-07 17:12:28 -04:00
Dan Saunders
59cd472504
SP cu_seqlens fix, refactor ( #2495 )
...
* working on masking fix
* refactor and fix multipack seqlens
* pre-commit fix
* adding smoke test
* using existing packed seqlens util
* log warning re: logged losses / gradient scaling per rank
2025-04-07 14:47:57 -04:00
NanoCode012
9b89591ead
Feat: Add doc on loading datasets and support for Azure/OCI ( #2482 )
...
* fix: remove unused config
* feat: add doc on dataset loading
* feat: enable azure and oci remote file system
* feat: add adlfs and ocifs to requirements
* fix: add links between dataset formats and dataset loading
* fix: remove unused condition
* Revert "fix: remove unused condition"
This reverts commit 5fe13be73e.
2025-04-07 12:41:13 -04:00