Salman Mohammadi
8065fed126
adding venv to prompt
2025-07-02 15:27:42 +01:00
NanoCode012
8ae5a2311b
feat: update handling for MistralTokenizer decode and multiprocessing pickling fix (#2790)
* feat: update handling for MistralTokenizer decode
* fix: update mistral-common package version
* fix: use correct release
* fix triton path
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-07-02 08:07:18 -04:00
NanoCode012
6383630155
Fix: tokenize stall due to not shuffling dataset (#2845)
* fix: shuffle dataset even if there is only one, to fix tokenize stall
* fix: warn if shuffling merged dataset with curriculum sampling
* chore: refactor
2025-07-02 08:06:00 -04:00
Vincenzo di Cicco
f2b352f2e5
Add sample_packing_sequentially to trainer args (#2853) [skip ci]
2025-07-02 08:05:35 -04:00
NanoCode012
bf5928d0ee
feat(doc): update docker tag examples (#2851) [skip ci]
* feat(doc): update docker tag examples
* chore: comment
2025-07-02 08:05:01 -04:00
Dhruv Mullick
d1224db8f4
Decouple generate_during_eval from wandb to support other visualizers (#2849) [skip ci]
* Add generate_during_eval for mlflow for dpo
* Decouple generate_during_eval from wandb
2025-07-02 08:04:40 -04:00
mhenrichsen
327b4e48e9
Add installation instructions for pip and Docker to README.md (#2854)
* Add installation instructions for pip and Docker to README.md
* Enhance README.md with Docker installation guidance for improved setup reliability.
2025-07-02 09:03:52 +02:00
Dan Saunders
35fdbce102
Ensure device mesh patching is applied (#2842)
* move patches; make patch stronger
* fix broken tests
* guard sequence_parallel_degree comparison against None
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-06-29 22:16:32 -04:00
Wing Lian
cb811f8bf1
upgrade to flash-attn 2.8.0.post2 (#2828)
* upgrade to flash-attn 2.8.0.post2
* use cu126 with torch 2.6
* vllm 0.8.5.post1 seems incompatible with CUDA 12.6.3 and torch 2.6
* cu126 + torch 2.6 as the default
* use cu126 for multigpu w torch 2.6 too
* drop vllm from ci for now
2025-06-29 22:11:16 -04:00
Wing Lian
7563e1bd30
set a different triton cache for each test to avoid blocking writes to cache (#2843)
* set a different triton cache for each test to avoid blocking writes to cache
* set log level
* disable debug logging for filelock
2025-06-29 22:05:21 -04:00
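The per-test cache idea in the commit above can be sketched as a small helper (a hypothetical illustration, not the PR's actual code; it assumes Triton honors the `TRITON_CACHE_DIR` environment variable):

```python
import os
import tempfile


def set_isolated_triton_cache(test_name: str) -> str:
    """Give one test its own Triton cache directory so parallel tests
    don't block on writes to a shared cache. Hypothetical sketch: assumes
    Triton reads the TRITON_CACHE_DIR environment variable."""
    cache_dir = tempfile.mkdtemp(prefix=f"triton-cache-{test_name}-")
    os.environ["TRITON_CACHE_DIR"] = cache_dir
    return cache_dir
```

In a pytest suite this would typically live in an autouse fixture (using `tmp_path` and `monkeypatch`) so every test gets a fresh directory automatically.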
Wing Lian
81893c775c
Accelerate 1.8.1 and BNB 0.46.0 update (#2815)
* update accelerate to v1.8.0
* update bnb also
* fix multigpu ci timeout
* fix test set size
* use latest accelerate 1.8.1
* disable default dtype
2025-06-28 15:29:19 -04:00
Wing Lian
a1a740608d
add assertion for packing patch to _get_unpad_data (#2840)
2025-06-27 11:20:23 -04:00
kallewoof
ec15a7a691
Support --lora-on-cpu flag for DPO model merging (#2766) [skip ci]
* Support --lora-on-cpu flag for DPO model merging
* fix: use device=cpu in _convert_embedding_modules_dtype when lora_on_cpu is set
2025-06-27 11:19:24 -04:00
Wing Lian
0a7a216b60
allow for different sequence_len for evaluations (#2836) [skip ci]
* allow for different sequence_len for evaluations
* reversed 🤦
* add more information to filter msg
2025-06-27 11:02:51 -04:00
NanoCode012
d8280d45c1
feat: add chat_template kwargs (#2837)
2025-06-27 10:38:46 -04:00
Wing Lian
24f2887e87
don't fail during preprocess for sampling from iterable dataset (#2825) [skip ci]
2025-06-27 10:37:53 -04:00
NanoCode012
29289a4de9
feat: replace old Colab notebook with newer one (#2838) [skip ci]
* feat: replace old Colab notebook with newer one
* fix: point to updated CCE fork
2025-06-27 10:35:47 -04:00
Wing Lian
a24957fa04
fix for iterable datasets and pickling (#2831) [skip ci]
* fix for iterable datasets and pickling
* more fixes for pretraining
* can't pickle mock generator dataset
2025-06-27 10:35:23 -04:00
NanoCode012
927bf530bc
fix(doc): default messages example used wrong key (#2832)
* fix(doc): default messages example used wrong key
* feat: add links to SP, multi-gpu, multi-node on readme
2025-06-26 10:47:31 -04:00
github-actions[bot]
18954ba100
chore: update pre-commit hooks (#2821) [skip ci]
Co-authored-by: djsaunde <1245942+djsaunde@users.noreply.github.com>
2025-06-26 10:46:53 -04:00
Wing Lian
d8cf66edbd
use fork for multiprocess start method for packing in parallel (#2830)
2025-06-25 13:17:33 -04:00
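As a rough illustration of the start-method choice in the commit above (hypothetical code, not the repository's): with the "fork" context, worker processes inherit the parent's state directly instead of re-importing and pickling it, which avoids pickling errors for state that "spawn" cannot serialize.

```python
import multiprocessing


def parallel_lengths(items, num_workers=2):
    # Hypothetical sketch: request the "fork" start method explicitly
    # (available on Unix-like platforms) so workers inherit parent state
    # without pickling; "spawn", the default elsewhere, re-imports the
    # module and requires all shared state to be picklable.
    ctx = multiprocessing.get_context("fork")
    with ctx.Pool(processes=num_workers) as pool:
        return pool.map(len, items)
```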
NanoCode012
181cc3106b
fix: catch HTTPError from HF rate limiting when checking user token (#2827)
2025-06-25 09:50:13 -04:00
NanoCode012
20106116da
fix: 'NoneType' object has no attribute 'column_names' (#2822) [skip ci]
* fix: 'NoneType' object has no attribute 'column_names'
* chore: typing
2025-06-25 09:49:55 -04:00
Younes B
a27c4f8771
feat: add falcon-h1 into axolotl (#2811) [skip ci]
* feat: add falcon-h1 into axolotl
* fix pre-commit
* review
* fix: remove packing
2025-06-25 09:49:42 -04:00
NanoCode012
bb1109b81d
feat: update CCE to use axolotl's fork (#2813) [skip ci]
* feat: update CCE to use axolotl's fork
* chore: improve error message
* feat: add eot token for gemma3 configs
* fix: only warn on more than 1 image
* fix: re-add gemma3 patch
* Revert "fix: re-add gemma3 patch"
This reverts commit f04db5e873.
* feat: add qwen25 vl example
* feat: point to upstream fork cce package
* feat: update cce commit
2025-06-25 09:49:22 -04:00
Dan Saunders
8c69ec3a1e
gating _gather_outputs (causes increased vram usage) (#2829)
* SP vram fix
* gating _gather_outputs (causes increased vram usage)
* reverting unneeded change
2025-06-25 08:33:55 -04:00
Dan Saunders
46675496a3
log config (#2819)
* log config
* moving text art; adding sensitive value redaction + sorting
* revert pre-commit changes
* remove none-valued config before dumping
* just redact api keys
2025-06-24 14:59:30 -04:00
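The last two bullets of the commit above could look roughly like this (a minimal sketch; the function name and the redaction rule are assumptions, not the PR's implementation):

```python
def sanitize_config(cfg: dict) -> dict:
    """Illustrative sketch of config logging hygiene: drop None-valued
    keys and redact anything that looks like an API key before the
    config is dumped to the logs. Names here are assumptions."""
    cleaned = {}
    for key, value in cfg.items():
        if value is None:
            continue  # remove none-valued config before dumping
        if "api_key" in key.lower():
            value = "********"  # just redact api keys
        cleaned[key] = value
    return cleaned
```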
NanoCode012
c6b5d35e5d
fix: re-add gemma3 patch (#2817)
2025-06-24 10:51:30 +07:00
Wing Lian
12c826816d
chunked cross entropy loss (#2625)
* chunked cross entropy loss
* refactor so we can add test
* use relative import
* update schema description
2025-06-23 23:08:46 -04:00
Dan Saunders
1d8f500709
deepspeed fix (#2820)
2025-06-23 09:07:57 -04:00
Wing Lian
0494359c6c
update trl to 0.18.2 (#2814)
2025-06-19 11:27:59 -04:00
NanoCode012
26c39e1ca7
fix(doc): address exitcode formatting to help search (#2809) [skip ci]
2025-06-19 11:19:52 -04:00
Dan Saunders
45adf1bfb9
get_logger use_environ fix (#2808)
* get_logger use_environ fix
* rethinking
* replacing old logger imports
* simplify
* fix boolean cond
2025-06-19 11:16:52 -04:00
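A minimal sketch of what a `use_environ`-aware logger helper might look like (the signature and the `AXOLOTL_LOG_LEVEL` variable name are assumptions for illustration, not the repository's code):

```python
import logging
import os


def get_logger(name: str, use_environ: bool = True) -> logging.Logger:
    """Hypothetical sketch: optionally let an environment variable
    override the default log level, falling back to INFO for values
    that don't name a real logging level."""
    level_name = "INFO"
    if use_environ:
        level_name = os.environ.get("AXOLOTL_LOG_LEVEL", level_name).upper()
    logger = logging.getLogger(name)
    logger.setLevel(getattr(logging, level_name, logging.INFO))
    return logger
```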
Carsten Kragelund Jørgensen
eb3a57eb17
Ignore generation/endgeneration tags when analyzing Jinja chat template (#2787)
* ignore generation/endgeneration tags
Axolotl handles calculating the mask for assistant turns on its own, so these tags are not needed; however, the analyzer currently does not recognize them at all and throws an error.
* feat: add phi4 tokenizer test and unblock gemma2
* fix: improve template
* chore: refactor
* chore: lint
---------
Co-authored-by: NanoCode012 <nano@axolotl.ai>
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-06-18 15:59:07 -04:00
Wing Lian
34da391391
Set dev version (#2807) [skip ci]
2025-06-18 15:49:05 -04:00
NanoCode012
0bb9077553
Fix: logging on py310 (#2802)
* feat: encourage py311
* fix: logging import on py310
* fix: do upper and simplify handling
2025-06-18 15:46:27 -04:00
Wing Lian
a85efffbef
bump transformers==4.52.4 (#2800) [skip ci]
* bump transformers==4.52.4
* don't use hf offline for qwen tokenizer
* increase timeout
* don't use methodtype
* increase timeout
* better assertion logging
* upgrade deepspeed version too
2025-06-18 15:46:14 -04:00
Dan Saunders
06a648263b
Config doc autogen: follow-up fix for docs build (#2806)
* config reference doc autogen
* improvements
* cleanup; still ugly but working
* reformat
* remove autogen config ref from git
* factor out validations
* rewrite
* rewrite
* cleanup
* progress
* progress
* progress
* lint and minifying somewhat
* remove unneeded
* coderabbit
* coderabbit
* update preview-docs workflow triggers
* installing with deps
* coderabbit
* update refs
* overwrote file accidentally
* docs install deps
2025-06-18 15:42:54 -04:00
Dan Saunders
9d5bfc127e
Config doc autogen (#2718)
* config reference doc autogen
* improvements
* cleanup; still ugly but working
* reformat
* remove autogen config ref from git
* factor out validations
* rewrite
* rewrite
* cleanup
* progress
* progress
* progress
* lint and minifying somewhat
* remove unneeded
* coderabbit
* coderabbit
* update preview-docs workflow triggers
* installing with deps
* coderabbit
* update refs
* overwrote file accidentally
2025-06-18 15:36:53 -04:00
Wing Lian
da8f6c32b9
update favicon (#2801)
* update favicon
* correct favicon size
2025-06-17 18:09:24 -04:00
Wing Lian
88c0e8d048
release tag (#2799)
v0.10.0
2025-06-17 12:13:27 -04:00
NanoCode012
d8e8cd8558
feat: remove evalfirst callback in favor of built-in trainer arg (#2797)
2025-06-17 12:09:33 -04:00
Wing Lian
ccc94da8ad
KD fix w/ online distillation (#2700) [skip ci]
* kd fixes
* fix collator setup
* fix input args
* better handling to drop string fields for kd with raw dataset
* kd trainer has kd temp as part of the init
* drop top_k before softmax
* simplify and remove zscore
* WIP chunked KD loss with autograd wrapper
* more fixes and liger-type chunked loss
* collator cls for plugins
* remove debugging
* additional plugin collator kwargs, don't scale up kd loss by t^2
* don't need temp arg to distill method
* online kd wip
* add close to comment block
* support sampling params/max new tokens
* handle when no custom collator is used in plugins
* logsumexp trick
* fix check
* shift off the first empty token
* fix length of padding
* use max not min
* temp scale kd loss at end
* support for dynamic plugin training args mixins and symmetric kl
* chore: lint
* fix trainer callback base class
* Fix decay
* accept compressed responses for smaller wire payload
* post-rebase lint
* more KD updates
* increase hyperparams_count for gradients for added normalize_topk
* fix to remove attention_mask
* rename vars for consistency
* fix rebase issues
* default to dropping last batch in multipack batch sampler
* improve handling of train len
* init collator_cls_and_kwargs
* explicit drop_last=False when checking for multipack completeness
* use separate v2 loader for kd
* fix kd tests to use subprocess so it picks up kd training args
* default value for kd_beta arg
* use updated dataset for ci
* longer timeout for e2e
2025-06-17 12:09:13 -04:00
Matt Cummins
ba62aa65ee
fixed the lora_target_modules syntax (#2793)
2025-06-15 16:47:02 -04:00
NanoCode012
21388cf615
Fix: lora kernel pre-patch applied despite post-patch not applied (#2772)
* fix: do not pre-patch self attention if lora dropout non-zero
* fix: add test to check patch not applied
* fix: test
* fix: test config check
* fix where we check so that tests don't break
* fix: test
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-06-14 11:54:06 -07:00
NanoCode012
80d5b066ec
Fix: add magistral fsdp config, fix not evaluating with test_datasets, handle mllama attention (#2789) [skip ci]
* feat: add fsdp config for magistral
* fix: add mllama self attention handling for lora kernels
* fix: no eval if val_set_size 0 despite having test_datasets
* fix: add note for cce for vlm in newer model
2025-06-14 11:53:43 -07:00
NanoCode012
a3c82e8cbb
fix: grpo doc link (#2788) [skip ci]
2025-06-13 12:03:47 -07:00
Wing Lian
b2274d430b
support for QAT w/ RL (DPO) (#2776)
2025-06-13 10:00:35 -04:00
NanoCode012
eac4a61f55
Feat: Add Magistral and mistral-common tokenizer support (#2780)
2025-06-12 19:18:33 -04:00
Wing Lian
ace9287c96
update loss value for flaky e2e test (#2786) [skip ci]
* update loss value for flaky e2e test
* use pytest skip
* parametrize combinations
2025-06-12 18:06:14 -04:00