Wing Lian
08f287b57f
swap llama tests for 7m param model
2025-04-17 09:52:35 -07:00
Wing Lian
b4c7d9c29d
fix perplexity scores
2025-04-17 07:58:53 -07:00
Wing Lian
d2637fb01d
first pass at modifying tests to use llama-7m
2025-04-16 21:14:04 -07:00
NanoCode012
9da730d6a4
fix(doc): cut cross entropy installation instructions broken in qmd ( #2532 )
2025-04-16 15:02:51 -07:00
NanoCode012
32637fad00
fix: preprocess yielding whole dataset to each worker ( #2503 ) [skip ci]
2025-04-16 15:02:35 -07:00
Dan Saunders
f776f889a1
adding codecov reporting ( #2372 ) [skip ci]
* adding codecov reporting
* update codecov-action to v5
* fix
---------
Co-authored-by: Dan Saunders <dan@axolotl.ai>
2025-04-16 15:02:17 -07:00
Wing Lian
69eda209a6
re-enable DS zero3 ci with updated transformers ( #2533 )
2025-04-16 14:48:40 -07:00
Dan Saunders
b8c633aa97
batch api HF adapter for ring-flash-attn; cleanup and improvements ( #2520 )
* batch api HF adapter for ring-flash-attn; cleanup and improvements
* update
* adding all batch ring-flash-attn methods via single adapter
* removing pad_to_sequence_len=False for now
* fix
* updating docs to include batch SP
* review comments
* fixes for batch API funcs, simplify
* fixes
* fix
* updates
* add batch_zigzag smoke test
2025-04-16 13:50:48 -04:00
NanoCode012
682a9cf79b
Fix: add delinearization and make qlora work with fsdp2 ( #2515 )
* fixes for delinearization, and make qlora work with fsdp2
* Add back mistakenly removed lm_eval
* typo [skip ci]
* patch evals for torch.compile + fsdp2
* also check torch_compile w fsdp2
* lots of fixes for flex attn with llama4
* fix patch check and patch llama4 too
* attempt to make the patches stick
* use transformers 4.51.2
* update configs and README for llama4
* remove torch.compile for CI test
* cleanup any existing singletons
* set singleton cache to None instead of deleting
* use importlib reload with monkeypatch
* don't worry about transformers version, mark inputs with grads, fix regex
* make sure embeds aren't on cpu
* logging and mem improvements
* vllm version and add to docker, make sure to save processor on conversion
* fix ambiguous tensor bool check
* fix vllm to not use v1, upgrade hf transformers
* fix tests
* make flex_attn_compile_kwargs configurable, since this depends on model params
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Salman Mohammadi <salman.mohammadi@outlook.com>
2025-04-15 23:31:39 -07:00
NanoCode012
271b24cccc
feat: update cce to latest ( #2521 )
2025-04-15 22:17:10 -07:00
Wing Lian
198d775d6d
make sure all of the model is on the same device, so this test will pass on multi-GPU ( #2524 ) [skip ci]
2025-04-15 22:15:42 -07:00
NanoCode012
e4307fb7d7
feat: add examples for deepcoder ( #2517 )
2025-04-12 07:25:23 -07:00
Wing Lian
dd8bad06d0
remove strict=false from example yamls [skip ci] ( #2523 )
2025-04-12 07:25:11 -07:00
Wing Lian
de8a625dd7
make e2e tests a bit faster by reducing test split size ( #2522 ) [skip ci]
* [ci] make e2e tests a bit faster by reducing test split size
* use 10% split of alpaca dataset to speed up dataset loading/tokenization
* reduce gas 4->2 for most e2e tests
* increase val set size for packing
2025-04-12 07:24:43 -07:00
NanoCode012
51267ded04
chore: update doc links ( #2509 )
* chore: update doc links
* fix: address pr feedback
2025-04-11 09:53:18 -04:00
NanoCode012
756a0559c1
feat(doc): explain deepspeed configs ( #2514 ) [skip ci]
* feat(doc): explain deepspeed configs
* fix: add fetch configs
2025-04-11 09:52:43 -04:00
NanoCode012
9a8e3e9c7b
Feat(examples): add deepcogito ( #2516 ) [skip ci]
* feat: add examples for deepcogito
* fix: reduce num evals per epoch
* fix: reduce num epochs
2025-04-11 09:52:23 -04:00
Wing Lian
7e7180fa10
add mocks for loading datasets in cli train tests ( #2497 ) [skip ci]
* add mocks for loading datasets in cli train tests
* Apply suggestions from code review to fix patched module for preprocess
Co-authored-by: NanoCode012 <nano@axolotl.ai>
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
2025-04-11 09:51:59 -04:00
Sung Ching Liu
22c562533d
Update rlhf.qmd ( #2519 )
Fix typo in the command that spawns a vLLM server: it should be `axolotl vllm-serve`, not `axolotl vllm_serve`.
2025-04-10 11:33:09 -04:00
NanoCode012
16823e1de6
feat: add CNAME ( #2513 )
2025-04-10 12:34:25 +07:00
NanoCode012
e0420b3528
fix: allow merge lora on pre-quantized model ( #2511 )
* fix: allow merge lora on pre-quantized model
* fix: remove unused sections per comment
2025-04-09 14:01:42 -04:00
Wing Lian
9f986f5e71
Add Llama4 maverick examples ( #2512 )
2025-04-09 14:01:28 -04:00
NanoCode012
f85861a0b2
fix: liger swiglu for llama4 ( #2504 )
* fix: liger swiglu for llama4
* feat: add liger to deepseek v3
* fix: unpack not found
* fix: spelling
* fix: comment out deepseek v3
* fix: retest deepseek
* fix: map glu
* fix: patch model forward
* chore: add temp code to save
* fix: remove deepseek to move into separate PR
2025-04-09 02:53:17 -04:00
Wing Lian
630e40dd13
upgrade transformers to 4.51.1 ( #2508 )
* upgrade transformers to 4.51.1
* multigpu longer timeout
2025-04-09 02:53:00 -04:00
Wing Lian
bf9efe2a09
[llama4] fix the mm yaml, add scout single gpu yaml ( #2510 )
* [llama4] fix the mm yaml, add scout single gpu yaml
* add README for llama4
* rename to specify fsdp
2025-04-09 02:52:45 -04:00
Wing Lian
0dac2ddeac
Llama4 linearized ( #2502 )
* llama4 support for linearized experts
* clean up fsdp2 sharding to prevent hang
* add yaml config
* cleanup example [skip ci]
2025-04-07 20:47:00 -04:00
NanoCode012
a6c03217f5
feat: add llama4 CCE ( #2498 )
* feat: add llama4 CCE
* fix: update model support list doc
* feat: include llama4_text
2025-04-07 17:12:28 -04:00
Dan Saunders
59cd472504
SP cu_seqlens fix, refactor ( #2495 )
* working on masking fix
* refactor and fix multipack seqlens
* pre-commit fix
* adding smoke test
* using existing packed seqlens util
* log warning re: logged losses / gradient scaling per rank
2025-04-07 14:47:57 -04:00
NanoCode012
9b89591ead
Feat: Add doc on loading datasets and support for Azure/OCI ( #2482 )
* fix: remove unused config
* feat: add doc on dataset loading
* feat: enable azure and oci remote file system
* feat: add adlfs and ocifs to requirements
* fix: add links between dataset formats and dataset loading
* fix: remove unused condition
* Revert "fix: remove unused condition"
This reverts commit 5fe13be73e.
2025-04-07 12:41:13 -04:00
NanoCode012
31498d0230
fix(doc): clarify roles mapping in chat_template ( #2490 ) [skip ci]
2025-04-07 12:40:32 -04:00
NanoCode012
d25daebea9
fix: duplicate llama4 chat template enum ( #2500 )
* fix: duplicate llama4 chat template enum
* fix: duplicate chat_template string
2025-04-07 12:39:19 -04:00
NanoCode012
e0e5d9b1d6
feat: add llama4 multimodal ( #2499 )
* feat: add llama4 multimodal
* feat: add torchvision to base docker
* just use latest torchvision
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-04-07 10:49:29 -04:00
Wing Lian
8bbad21bfd
llama4 support ( #2493 )
* llama4 support
* add xet support [skip ci]
* be flexible on transformers version and skip test on version
* don't use deepspeed for the fix_untrained_tokens test
* reordering to trigger torch 2.6.0 tests first
* slightly smaller train set
* use 4.51.0 for now
* remove stray print, add llama4 chat template to schema, bump peft to 0.15.1
* patches to make llama4 performant
* add preliminary fp8 support
2025-04-07 10:49:15 -04:00
Wing Lian
5f4af3665d
FSDP2 support ( #2469 )
* fsdp2 support
* use accelerate release 1.6.0
* allow 8bit optims with fsdp2
* liger + torch compile fix
* add fsdp2 e2e tests
* use transformers commit with fsdp2 support
* skip zero3 tests for this PR for now
* fix fsdp2 config for ci
* make sure both flex and flash attn work with fsdp2, skip fix untrained tokens
* okay, actually use fsdp2...
* more fixes to flex for fsdp2
* make sure to patch all the loaded models
* additional validation for fsdp2, bump dep versions
2025-04-06 17:08:01 -04:00
Sung Ching Liu
a8f38c367c
Flex Attention + Packing with BlockMask support ( #2363 )
2025-04-05 18:02:57 -04:00
Wing Lian
e7e0cd97ce
Update dependencies and show slow tests in CI ( #2492 )
* use latest torchao, gradio, schedule-free
* get info on slow tests
* speed up tests by avoiding gradient checkpointing and reducing eval size
2025-04-05 17:41:31 -04:00
Wing Lian
949471039f
fix tokenizer overrides w gemma3 ( #2488 )
* fix tokenizer overrides w gemma3
* fix offline wrapping
2025-04-05 01:25:44 -04:00
NanoCode012
de451f99a5
fix: cohere cce scaling wrong tensor ( #2483 )
2025-04-04 13:47:44 -04:00
Wing Lian
9f824ef76a
simplify the example configs to be more minimal and less daunting ( #2486 ) [skip ci]
* simplify the example configs to be more minimal and less daunting
* drop empty s2_attention from example yamls
2025-04-04 13:47:26 -04:00
Wing Lian
dd66fb163c
check if fixture exists in the cache already ( #2485 )
* check if fixture exists in the cache already
* add docstring explaining what is going on
2025-04-04 13:47:01 -04:00
Dan Saunders
e0cc4f1a87
removing deepspeed guard for LoRA Triton kernels ( #2480 )
2025-04-03 14:50:56 -04:00
NanoCode012
64d8035f50
fix(example): align example to correct adapter ( #2478 )
* fix(example): align example to correct adapter
* fix: add missing load in 4 bit
2025-04-03 08:48:14 -04:00
Wing Lian
5249e98058
add additional tf32 opt for cudnn ( #2477 ) [skip ci]
2025-04-03 08:47:52 -04:00
Wing Lian
3877c5c69d
set release version 0.8.0 ( #2476 )
* set release version 0.8.0
* make sure to include ring-flash-attn in docker image build
v0.8.0
2025-04-02 09:50:56 -04:00
NanoCode012
adb593abac
fix: document offload gradient_checkpointing option ( #2475 )
2025-04-02 09:35:42 -04:00
NanoCode012
a0117c9bce
fix: separate gemma3 text and vision example config ( #2471 ) [skip ci]
* fix: separate gemma3 text and vision example config
* fix: update to use a text-only dataset
* fix: typo
2025-04-02 09:35:29 -04:00
NanoCode012
e6cfb093d2
fix: disable SP during merge ( #2470 ) [skip ci]
2025-04-02 09:35:00 -04:00
NanoCode012
7abc71dc0b
fix: gemma3 loss in forward pass ( #2473 ) [skip ci]
* fix: gemma3 loss in forward pass
* fix: lint
* fix: move patch before plugins
* Update src/axolotl/monkeypatch/gemma3.py
Co-authored-by: salman <salman.mohammadi@outlook.com>
---------
Co-authored-by: Wing Lian <wing.lian@gmail.com>
Co-authored-by: salman <salman.mohammadi@outlook.com>
2025-04-02 09:34:41 -04:00
NanoCode012
45bf634d17
feat: add support for multimodal in lora kernels ( #2472 ) [skip ci]
* feat: add support for multimodal in lora kernels
* fix: improve multimodal checks
* fix: add fallback for model config
* chore: add gemma3 to docs
2025-04-02 09:33:46 -04:00
NanoCode012
80ba4b69f1
fix: pydantic warning validator not returning self ( #2474 )
2025-04-02 07:40:49 -04:00