Wing Lian
0d691cc2a7
add base docker image with pytorch 2.7.0 and variant for cuda 12.8 (#2551)
* add base docker image with pytorch 2.7.0 and variant for cuda 12.8
* my bash is terrible
2025-04-23 14:59:03 -04:00
Dan Saunders
c4053481ff
Codecov fixes / improvements (#2549)
* adding codecov reporting
* random change
* codecov fixes
* adding missing dependency
* fix
---------
Co-authored-by: Dan Saunders <dan@axolotl.ai>
2025-04-23 10:33:30 -04:00
NanoCode012
a6d28d19b1
feat: add glm and glm4 multipack and cce (#2546)
* feat: add glm and glm4 multipack
* feat: add glm4 example
* feat: add cce for glm
2025-04-23 10:27:51 -04:00
Wing Lian
32e335dd51
fix missing host/port for vllm (#2543)
* fix missing host/port for vllm
* set tensor parallel size so it doesn't always default to cli override
2025-04-22 10:16:48 -04:00
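For context, the settings this touches live under the config's vLLM block; a minimal sketch, with key names illustrative of the pattern rather than the exact schema:

```yaml
# Hedged sketch: a vLLM server block with host/port passed through and
# tensor_parallel_size pinned so a CLI flag doesn't always override it.
# Key names are assumptions, not confirmed schema.
vllm:
  host: 0.0.0.0
  port: 8000
  tensor_parallel_size: 2
```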
Wing Lian
7651550850
make sure to download fixtures for kd test (#2541)
* make sure to download fixtures for kd test
* use same alpaca dataset
2025-04-21 10:31:50 -04:00
Wing Lian
341e95aac9
prevent rate limiting to hf when using dispatch batches (#2536) [skip ci]
2025-04-21 10:31:35 -04:00
Catgat
b882dfb63f
Fixed Rex Scheduler Warm Up (#2535) [skip ci]
* Fixed Rex Scheduler Warm Up
* chore: lint
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-04-21 10:30:55 -04:00
Wing Lian
b640db1dbc
don't run multigpu tests twice, run SP in separate test (#2542)
* don't run multigpu tests twice, run SP in separate test
* fix multiline
2025-04-21 10:24:13 -04:00
Chiwan Park
4ce469d32e
fix: upgrade liger to 0.5.8 and use native Gemma3 patches (#2527)
* fix: upgrade liger to 0.5.8 and use native Gemma3 patches
* fix: make lint happy
* doc: update Liger Kernel FLCE support for Gemma 3
2025-04-18 09:57:40 -07:00
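Enabling Liger in an axolotl config typically looks like the sketch below; the plugin path and flag names follow the project's Liger integration docs, so treat exact names as assumptions:

```yaml
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true  # FLCE, per the Gemma 3 doc update above
```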
Wing Lian
60a8f0958d
zero val fix for beta (#2538)
2025-04-17 17:27:19 -07:00
NanoCode012
9da730d6a4
fix(doc): cut cross entropy installation instructions broken in qmd (#2532)
2025-04-16 15:02:51 -07:00
NanoCode012
32637fad00
fix: preprocess yielding whole dataset to each worker (#2503) [skip ci]
2025-04-16 15:02:35 -07:00
Dan Saunders
f776f889a1
adding codecov reporting (#2372) [skip ci]
* adding codecov reporting
* update codecov-action to v5
* fix
---------
Co-authored-by: Dan Saunders <dan@axolotl.ai>
2025-04-16 15:02:17 -07:00
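A minimal sketch of the reporting step this wires up, assuming a standard GitHub Actions workflow (the action is pinned to v5 per the bullet above; the secret name is illustrative):

```yaml
# One step inside a workflow's `steps:` list.
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    token: ${{ secrets.CODECOV_TOKEN }}  # illustrative secret name
```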
Wing Lian
69eda209a6
re-enable DS zero3 ci with updated transformers (#2533)
2025-04-16 14:48:40 -07:00
Dan Saunders
b8c633aa97
batch api HF adapter for ring-flash-attn; cleanup and improvements (#2520)
* batch api HF adapter for ring-flash-attn; cleanup and improvements
* update
* adding all batch ring-flash-attn methods via single adapter
* removing pad_to_sequence_len=False for now
* fix
* updating docs to include batch SP
* review comments
* fixes for batch API funcs, simplify
* fixes
* fix
* updates
* add batch_zigzag smoke test
2025-04-16 13:50:48 -04:00
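For orientation, sequence parallelism is driven from the config; a hedged sketch, where `ring_attn_func` and its `batch_zigzag` value are inferred from this PR's smoke test rather than confirmed schema:

```yaml
sequence_parallel_degree: 2   # shard each sequence across 2 GPUs
ring_attn_func: batch_zigzag  # one of the new batch ring-flash-attn variants (assumed name)
flash_attention: true
```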
NanoCode012
682a9cf79b
Fix: add delinearization and make qlora work with fsdp2 (#2515)
* fixes for delinearization, and make qlora work with fsdp2
* Add back mistakenly removed lm_eval
* typo [skip ci]
* patch evals for torch.compile + fsdp2
* also check torch_compile w fsdp2
* lots of fixes for flex attn with llama4
* fix patch check and patch llama4 too
* attempt to make the patches stick
* use transformers 4.51.2
* update configs and README for llama4
* remove torch.compile for CI test
* cleanup any existing singletons
* set singleton cache to None instead of deleting
* use importlib reload with monkeypatch
* don't worry about transformers version, mark inputs with grads, fix regex
* make sure embeds aren't on cpu
* logging and mem improvements
* vllm version and add to docker, make sure to save processor on conversion
* fix ambiguous tensor bool check
* fix vllm to not use v1, upgrade hf transformers
* fix tests
* make flex_attn_compile_kwargs configurable, since this depends on model params
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Salman Mohammadi <salman.mohammadi@outlook.com>
2025-04-15 23:31:39 -07:00
NanoCode012
271b24cccc
feat: update cce to latest (#2521)
2025-04-15 22:17:10 -07:00
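Cut Cross Entropy (CCE) is enabled through axolotl's plugin system; a minimal sketch, assuming the plugin path from the integration docs:

```yaml
plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
```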
Wing Lian
198d775d6d
make sure all of the model is on the same device, so this test will pass on multigpu (#2524) [skip ci]
2025-04-15 22:15:42 -07:00
NanoCode012
e4307fb7d7
feat: add examples for deepcoder (#2517)
2025-04-12 07:25:23 -07:00
Wing Lian
dd8bad06d0
remove strict=false from example yamls (#2523) [skip ci]
2025-04-12 07:25:11 -07:00
Wing Lian
de8a625dd7
make e2e tests a bit faster by reducing test split size (#2522) [skip ci]
* [ci] make e2e tests a bit faster by reducing test split size
* use 10% split of alpaca dataset to speed up dataset loading/tokenization
* reduce gas 4->2 for most e2e tests
* increase val set size for packing
2025-04-12 07:24:43 -07:00
NanoCode012
51267ded04
chore: update doc links (#2509)
* chore: update doc links
* fix: address pr feedback
2025-04-11 09:53:18 -04:00
NanoCode012
756a0559c1
feat(doc): explain deepspeed configs (#2514) [skip ci]
* feat(doc): explain deepspeed configs
* fix: add fetch configs
2025-04-11 09:52:43 -04:00
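The docs this adds cover pointing a training config at one of the bundled DeepSpeed JSONs, roughly:

```yaml
# Stock configs can be pulled down first; the PR's "add fetch configs"
# note refers to `axolotl fetch deepspeed_configs`.
deepspeed: deepspeed_configs/zero2.json
```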
NanoCode012
9a8e3e9c7b
Feat(examples): add deepcogito (#2516) [skip ci]
* feat: add examples for deepcogito
* fix: reduce num evals per epoch
* fix: reduce num epochs
2025-04-11 09:52:23 -04:00
Wing Lian
7e7180fa10
add mocks for loading datasets in cli train tests (#2497) [skip ci]
* add mocks for loading datasets in cli train tests
* Apply suggestions from code review to fix patched module for preprocess
Co-authored-by: NanoCode012 <nano@axolotl.ai>
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
2025-04-11 09:51:59 -04:00
Sung Ching Liu
22c562533d
Update rlhf.qmd (#2519)
Fix typo in the command that spawns a vLLM server: it should be `axolotl vllm-serve`, not `axolotl vllm_serve`
2025-04-10 11:33:09 -04:00
NanoCode012
16823e1de6
feat: add CNAME (#2513)
2025-04-10 12:34:25 +07:00
NanoCode012
e0420b3528
fix: allow merge lora on pre-quantized model (#2511)
* fix: allow merge lora on pre-quantized model
* fix: remove unused sections per comment
2025-04-09 14:01:42 -04:00
Wing Lian
9f986f5e71
Add Llama4 maverick examples (#2512)
2025-04-09 14:01:28 -04:00
NanoCode012
f85861a0b2
fix: liger swiglu for llama4 (#2504)
* fix: liger swiglu for llama4
* feat: add liger to deepseek v3
* fix: unpack not found
* fix: spelling
* fix: comment out deepseek v3
* fix: retest deepseek
* fix: map glu
* fix: patch model forward
* chore: add temp code to save
* fix: remove deepseek to move into separate PR
2025-04-09 02:53:17 -04:00
Wing Lian
630e40dd13
upgrade transformers to 4.51.1 (#2508)
* upgrade transformers to 4.51.1
* multigpu longer timeout
2025-04-09 02:53:00 -04:00
Wing Lian
bf9efe2a09
[llama4] fix the mm yaml, add scout single gpu yaml (#2510)
* [llama4] fix the mm yaml, add scout single gpu yaml
* add README for llama4
* rename to specify fsdp
2025-04-09 02:52:45 -04:00
Wing Lian
0dac2ddeac
Llama4 linearized (#2502)
* llama4 support for linearized experts
* clean up fsdp2 sharding to prevent hang
* add yaml config
* cleanup example [skip ci]
2025-04-07 20:47:00 -04:00
NanoCode012
a6c03217f5
feat: add llama4 CCE (#2498)
* feat: add llama4 CCE
* fix: update model support list doc
* feat: include llama4_text
2025-04-07 17:12:28 -04:00
Dan Saunders
59cd472504
SP cu_seqlens fix, refactor (#2495)
* working on masking fix
* refactor and fix multipack seqlens
* pre-commit fix
* adding smoke test
* using existing packed seqlens util
* log warning re: logged losses / gradient scaling per rank
2025-04-07 14:47:57 -04:00
NanoCode012
9b89591ead
Feat: Add doc on loading datasets and support for Azure/OCI (#2482)
* fix: remove unused config
* feat: add doc on dataset loading
* feat: enable azure and oci remote file system
* feat: add adlfs and ocifs to requirements
* fix: add links between dataset formats and dataset loading
* fix: remove unused condition
* Revert "fix: remove unused condition"
This reverts commit 5fe13be73e.
2025-04-07 12:41:13 -04:00
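With adlfs and ocifs in the requirements, `datasets` entries can point at Azure or OCI object storage; a hedged sketch using those fsspec URL schemes (container/bucket paths are placeholders):

```yaml
datasets:
  - path: abfs://container/train.jsonl        # Azure via adlfs
    type: alpaca
  - path: oci://bucket@namespace/train.jsonl  # OCI via ocifs
    type: alpaca
```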
NanoCode012
31498d0230
fix(doc): clarify roles mapping in chat_template (#2490) [skip ci]
2025-04-07 12:40:32 -04:00
NanoCode012
d25daebea9
fix: duplicate llama4 chat_template enum (#2500)
* fix: duplicate llama4 chattemplate enum
* fix: duplicate chat_template string
2025-04-07 12:39:19 -04:00
NanoCode012
e0e5d9b1d6
feat: add llama4 multimodal (#2499)
* feat: add llama4 multimodal
* feat: add torchvision to base docker
* just use latest torchvision
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-04-07 10:49:29 -04:00
Wing Lian
8bbad21bfd
llama4 support (#2493)
* llama4 support
* add xet support [skip ci]
* be flexible on transformers version and skip test on version
* don't use deepspeed for the fix_untrained_tokens test
* reordering to trigger torch 2.6.0 tests first
* slightly smaller train set
* use 4.51.0 for now
* remove stray print, add llama4 chat template to schema, bump peft to 0.15.1
* patches to make llama4 performant
* add preliminary fp8 support
2025-04-07 10:49:15 -04:00
Wing Lian
5f4af3665d
FSDP2 support (#2469)
* fsdp2 support
* use accelerate release 1.6.0
* allow 8bit optims with fsdp2
* liger + torch compile fix
* add fsdp2 e2e tests
* use transformers commit with fsdp2 support
* skip zero3 tests for this PR for now
* fix fsdp2 config for ci
* make sure both flex and flash attn work with fsdp2, skip fix untrained tokens
* okay, actually use fsdp2...
* more fixes to flex for fsdp2
* make sure to patch all the loaded models
* additional validation for fsdp2, bump dep versions
2025-04-06 17:08:01 -04:00
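A minimal sketch of opting into FSDP2, assuming the `fsdp_version` switch and `fsdp_config` keys match axolotl's FSDP docs (names are illustrative, not exhaustive):

```yaml
fsdp_version: 2
fsdp_config:
  auto_wrap_policy: TRANSFORMER_BASED_WRAP
  transformer_layer_cls_to_wrap: LlamaDecoderLayer
  reshard_after_forward: true
```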
Sung Ching Liu
a8f38c367c
Flex Attention + Packing with BlockMask support (#2363)
2025-04-05 18:02:57 -04:00
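A hedged config sketch of what this enables, assuming a `flex_attention` flag alongside the existing packing option:

```yaml
flex_attention: true  # BlockMask-based attention (assumed flag name)
sample_packing: true
```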
Wing Lian
e7e0cd97ce
Update dependencies and show slow tests in CI (#2492)
* use latest torchao, gradio, schedule-free
* get info on slow tests
* speed up tests by avoiding gradient checkpointing and reducing eval size
2025-04-05 17:41:31 -04:00
Wing Lian
949471039f
fix tokenizer overrides w gemma3 (#2488)
* fix tokenizer overrides w gemma3
* fix offline wrapping
2025-04-05 01:25:44 -04:00
NanoCode012
de451f99a5
fix: cohere cce scaling wrong tensor (#2483)
2025-04-04 13:47:44 -04:00
Wing Lian
9f824ef76a
simplify the example configs to be more minimal and less daunting (#2486) [skip ci]
* simplify the example configs to be more minimal and less daunting
* drop empty s2_attention from example yamls
2025-04-04 13:47:26 -04:00
Wing Lian
dd66fb163c
check if fixture exists in the cache already (#2485)
* check if fixture exists in the cache already
* add docstring explaining what is going on
2025-04-04 13:47:01 -04:00
Dan Saunders
e0cc4f1a87
removing deepspeed guard for LoRA Triton kernels (#2480)
2025-04-03 14:50:56 -04:00
NanoCode012
64d8035f50
fix(example): align example to correct adapter (#2478)
* fix(example): align example to correct adapter
* fix: add missing load in 4 bit
2025-04-03 08:48:14 -04:00
Wing Lian
5249e98058
add additional tf32 opt for cudnn (#2477) [skip ci]
2025-04-03 08:47:52 -04:00
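This builds on axolotl's existing `tf32` config flag; a minimal sketch:

```yaml
# Enables TF32 matmuls; per this PR, the trainer additionally flips
# the cuDNN TF32 switch (torch.backends.cudnn.allow_tf32) when set.
tf32: true
```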