Commit Graph

1977 Commits

Author SHA1 Message Date
Wing Lian
c7f1c191a3 additional validation for fsdp2, bump dep versions 2025-04-06 15:18:56 -04:00
Wing Lian
1a5d445413 make sure to patch all the loaded models 2025-04-06 14:45:30 -04:00
Wing Lian
7e410ab480 more fixes to flex for fsdp2 2025-04-06 14:24:50 -04:00
Wing Lian
b5a51c378b okay, actually use fsdp2... 2025-04-06 13:55:46 -04:00
Wing Lian
c902f4222d make sure both flex and flash attn work with fsdp2, skip fix untrained tokens 2025-04-06 12:30:14 -04:00
Wing Lian
9329db9c3a fix fsdp2 config for ci 2025-04-06 07:55:54 -04:00
Wing Lian
ad7293f617 skip zero3 tests for this PR for now 2025-04-06 07:49:38 -04:00
Wing Lian
475125e4ca use transformers commit with fsdp2 support 2025-04-06 07:49:06 -04:00
Wing Lian
2b5e546da0 add fsdp2 e2e tests 2025-04-06 07:49:06 -04:00
Wing Lian
252dc5c91b liger + torch compile fix 2025-04-06 07:49:06 -04:00
Wing Lian
af3f981f51 allow 8bit optims with fsdp2 2025-04-06 07:49:06 -04:00
Wing Lian
52b96031b4 use accelerate release 1.6.0 2025-04-06 07:49:05 -04:00
Wing Lian
03dcf1a5ea fsdp2 support 2025-04-06 07:49:05 -04:00
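
The run of commits above lands FSDP2 support end to end: the core integration, 8-bit optimizer compatibility, pins to accelerate 1.6.0 and an fsdp2-capable transformers commit, plus flex/flash-attention fixes. As a hedged sketch of what enabling it might look like in an axolotl YAML config (`fsdp_version` and the option names below are assumptions, not taken from these commits):

```yaml
# Hypothetical config fragment; key names are assumptions.
fsdp_version: 2                  # assumed switch selecting FSDP2 over FSDP1
fsdp_config:
  offload_params: false
  cpu_ram_efficient_loading: true
  transformer_layer_cls_to_wrap: LlamaDecoderLayer
  state_dict_type: FULL_STATE_DICT
  reshard_after_forward: true    # assumed FSDP2 analogue of FULL_SHARD
optimizer: adamw_8bit            # 8-bit optimizers allowed under FSDP2 per af3f981f51
```
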
Sung Ching Liu
a8f38c367c Flex Attention + Packing with BlockMask support (#2363) 2025-04-05 18:02:57 -04:00
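
#2363 wires PyTorch Flex Attention into packed training, using a BlockMask so tokens in one packed document cannot attend to another. A minimal sketch, assuming `flex_attention` is the config key:

```yaml
# Hypothetical fragment; flex_attention is an assumed key name.
flex_attention: true   # PyTorch Flex Attention backend
sample_packing: true   # packing made safe via a per-document BlockMask
```
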
Wing Lian
e7e0cd97ce Update dependencies and show slow tests in CI (#2492)
* use latest torchao, gradio, schedule-free

* get info on slow tests

* speed up tests by avoiding gradient checkpointing and reducing eval size
2025-04-05 17:41:31 -04:00
Wing Lian
949471039f fix tokenizer overrides w gemma3 (#2488)
* fix tokenizer overrides w gemma3

* fix offline wrapping
2025-04-05 01:25:44 -04:00
NanoCode012
de451f99a5 fix: cohere cce scaling wrong tensor (#2483) 2025-04-04 13:47:44 -04:00
Wing Lian
9f824ef76a simplify the example configs to be more minimal and less daunting (#2486) [skip ci]
* simplify the example configs to be more minimal and less daunting

* drop empty s2_attention from example yamls
2025-04-04 13:47:26 -04:00
Wing Lian
dd66fb163c check if fixture exists in the cache already (#2485)
* check if fixture exists in the cache already

* add docstring explaining what is going on
2025-04-04 13:47:01 -04:00
Dan Saunders
e0cc4f1a87 removing deepspeed guard for LoRA Triton kernels (#2480) 2025-04-03 14:50:56 -04:00
NanoCode012
64d8035f50 fix(example): align example to correct adapter (#2478)
* fix(example): align example to correct adapter

* fix: add missing load in 4 bit
2025-04-03 08:48:14 -04:00
Wing Lian
5249e98058 add additional tf32 opt for cudnn (#2477) [skip ci] 2025-04-03 08:47:52 -04:00
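
#2477 adds an "additional tf32 opt for cudnn", presumably enabling `torch.backends.cudnn.allow_tf32` alongside the existing `torch.backends.cuda.matmul.allow_tf32` setting when TF32 is requested. In config terms (assuming `tf32` is the relevant key):

```yaml
tf32: true   # with this change, presumably also enables the cuDNN TF32 path
```
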
Wing Lian
3877c5c69d set release version 0.8.0 (#2476)
* set release version 0.8.0

* make sure to include ring-flash-attn in docker image build
v0.8.0
2025-04-02 09:50:56 -04:00
NanoCode012
adb593abac fix: document offload gradient_checkpointing option (#2475) 2025-04-02 09:35:42 -04:00
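
#2475 documents the offload flavor of gradient checkpointing. A sketch of the documented option, assuming `offload` is accepted as a string value:

```yaml
gradient_checkpointing: offload   # assumed semantics: checkpoint activations, offload them to CPU
```
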
NanoCode012
a0117c9bce fix: separate gemma3 text and vision example config (#2471) [skip ci]
* fix: separate gemma3 text and vision example config

* fix: update to use a text-only dataset

* fix: typo
2025-04-02 09:35:29 -04:00
NanoCode012
e6cfb093d2 fix: disable SP during merge (#2470) [skip ci] 2025-04-02 09:35:00 -04:00
NanoCode012
7abc71dc0b fix: gemma3 loss in forward pass (#2473) [skip ci]
* fix: gemma3 loss in forward pass

* fix: lint

* fix: move patch before plugins

* Update src/axolotl/monkeypatch/gemma3.py

Co-authored-by: salman <salman.mohammadi@outlook.com>

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
Co-authored-by: salman <salman.mohammadi@outlook.com>
2025-04-02 09:34:41 -04:00
NanoCode012
45bf634d17 feat: add support for multimodal in lora kernels (#2472) [skip ci]
* feat: add support for multimodal in lora kernels

* fix: improve multimodal checks

* fix: add fallback for model config

* chore: add gemma3 to docs
2025-04-02 09:33:46 -04:00
NanoCode012
80ba4b69f1 fix: pydantic warning validator not returning self (#2474) 2025-04-02 07:40:49 -04:00
Wing Lian
0bfa180f7d torch 2.7.0 base image for testing (#2467) 2025-04-01 15:38:26 -04:00
NanoCode012
9e22c4ca6a fix: set rl=None during inference (#2463) 2025-04-01 12:25:53 -04:00
NanoCode012
990b5896bc fix: downgrade deepspeed to fix grad checkpoint oom (#2465) [skip ci] 2025-04-01 12:25:05 -04:00
Dan Saunders
7d0eb66b54 fixing eval for SP (#2468) 2025-04-01 11:59:08 -04:00
Wing Lian
df119e3724 Validation for Muon optimizer with DS/FSDP (#2464) 2025-04-01 09:39:12 -04:00
NanoCode012
f4ae8816bb Fix: remove the numerous sequential logs (#2461)
* fix: remove sequential logs

* feat(doc): add docs for sequential sample packing and curriculum sampling
2025-04-01 09:20:00 -04:00
NanoCode012
9b95e06cbb Fix(doc): Minor doc changes for peft and modal (#2462) [skip ci]
* fix(doc): document peft configs

* fix(doc): explain modal env vs secrets difference

* fix(doc): clarify evaluate vs lm-eval

* fix: clarify what performance means
2025-04-01 08:48:36 -04:00
Wing Lian
e0aba74dd0 Release update 20250331 (#2460) [skip ci]
* make torch 2.6.0 the default image

* fix tests against upstream main

* fix attribute access

* use fixture dataset

* fix dataset load

* correct the fixtures + tests

* more fixtures

* add accidentally removed shakespeare fixture

* fix conversion from unittest to pytest class

* nightly main ci caches

* build 12.6.3 cuda base image

* override for fix from huggingface/transformers#37162

* address PR feedback
2025-04-01 08:47:50 -04:00
Wing Lian
328d598114 gemma3 packing fixes (#2449)
* make gemma3 work with packing

* multi-gpu e2e for ci

* update gemma3 model namespace to use mirror

* add gradient checkpointing to multigpu e2e ci

* update gemma3 examples for use_reentrant and fix ddp find unused params

* fix tests for gemma3

* fix import for test utils

* set correct train loss for gemma3 e2e
2025-03-31 17:15:23 -04:00
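
The gemma3 packing series above also touches the example configs for reentrancy and DDP behavior; a sketch of the settings the bullets reference (exact values assumed):

```yaml
# Sketch of the gemma3 example settings; values are assumptions.
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false            # non-reentrant checkpointing per the example update
ddp_find_unused_parameters: false # the "ddp find unused params" fix
```
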
DreamGenX
4d36ecc724 Sequential sample packing (#2404) [skip ci]
* add sequential sample packing

* chore: lint

---------

Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-03-31 15:48:20 -04:00
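
#2404 adds a sequential mode for sample packing; the doc commit in #2461 further up pairs it with curriculum sampling. A hedged sketch, with both option names assumed from the commit messages:

```yaml
sample_packing: true
sample_packing_sequentially: true   # assumed key: pack samples in dataset order
curriculum_sampling: true           # assumed key: preserve dataset order when sampling
```
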
NanoCode012
7acf93b59f Fix(doc): Clarify doc on attention configs and missing pad_token (#2455) [skip ci]
* fix: clarify input type

* fix: handling of error message if data_files not available

* fix: clarify attention handling

* fix: add doc on missing pad token
2025-03-31 15:47:28 -04:00
Wing Lian
b6fc46ada8 Updates for trl 0.16.0 - mostly for GRPO (#2437) [skip ci]
* add grpo scale_rewards config for trl#3135

* options to connect to vllm server directly w grpo trl#3094

* temperature support trl#3029

* sampling/generation kwargs for grpo trl#2989

* make vllm_enable_prefix_caching a config param trl#2900

* grpo multi-step optimizations trl#2899

* remove overrides for grpo trainer

* bump trl to 0.16.0

* add cli to start vllm-serve via trl

* call the python module directly

* update to use vllm with 2.6.0 too now and call trl vllm serve from module

* vllm 0.8.1

* use python3

* use sys.executable

* remove context and wait for start

* fixes to make it actually work

* fixes so the grpo tests pass with new vllm paradigm

* explicit host/port and check in start vllm

* make sure that vllm doesn't hang by setting quiet so outputs go to dev null

* also bump bnb to latest release

* add option for wait from cli and nccl debugging for ci

* grpo + vllm test on separate devices for now

* make sure grpo + vllm tests run single worker since pynccl comms would conflict

* fix cli

* remove wait and add caching for argilla dataset

* refactoring configs

* chore: lint

* add vllm config

* fixup vllm grpo args

* fix one more incorrect schema/config path

* fix another vllm reference and increase timeout

* make the tests run a bit faster

* change mbsz back so it is correct for grpo

* another change mbsz back so it is correct for grpo

* fixing cli args

* nits

* adding docs

* docs

* include tensor parallel size for vllm in pydantic schema

* moving start_vllm, more docs

* limit output len for grpo vllm

* vllm enable_prefix_caching isn't a bool cli arg

* fix env ordering in tests and also use pid check when looking for vllm

---------

Co-authored-by: Salman Mohammadi <salman.mohammadi@outlook.com>
2025-03-31 15:47:11 -04:00
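
#2437 tracks trl 0.16.0's GRPO changes: reward scaling, talking to a standalone vLLM server (started via a new CLI entry point that invokes the trl vllm-serve module through `sys.executable`), temperature and generation kwargs, prefix caching, and multi-step optimization. A heavily hedged sketch of the resulting config surface; every key name below is inferred from the bullets, not copied from the code:

```yaml
rl: grpo
trl:
  use_vllm: true               # assumed: GRPO rollouts go to a standalone vLLM server
  vllm_server_host: 0.0.0.0    # "explicit host/port" bullet; names assumed
  vllm_server_port: 8000
  scale_rewards: true          # assumed flag for trl#3135
  temperature: 1.0             # temperature support, trl#3029
vllm:
  tensor_parallel_size: 2      # "include tensor parallel size for vllm in pydantic schema"
  enable_prefix_caching: true  # config param per trl#2900
```
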
Dan Saunders
b35992262e Ray train bugfix (#2458)
* fix nccl pg destroy warning

* update

* ray bugfix
2025-03-31 15:17:43 -04:00
Dan Saunders
ef6eb77cc8 destroy process group on Ctrl+C / training or eval run (#2457)
* fix nccl pg destroy warning

* update
2025-03-31 12:36:47 -04:00
Dan Saunders
5410195e0b Sequence parallelism quick follow-ups; remove ModelCallback (#2450)
* guard return if ring attn already registered

* add docs link, bits in multi-gpu docs, remove save model callback (subsumed by HF trainers)

* configurable heads_k_stride from ring-flash-attn hf adapter
2025-03-31 09:13:42 -04:00
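
#2450 guards against double registration of ring attention, links the sequence-parallelism docs, and makes `heads_k_stride` configurable in the ring-flash-attn adapter. A hedged config sketch (key names assumed from the commit message):

```yaml
sequence_parallel_degree: 2   # assumed key: split each sequence across 2 ranks
heads_k_stride: 1             # newly configurable per this commit; default value assumed
flash_attention: true         # SP support here rides on ring-flash-attn
```
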
NanoCode012
cf0c79d52e fix: minor patches for multimodal (#2441)
* fix: update chat_template

* fix: handle gemma3 showing a lot of no content for turn 0

* fix: remove unknown config from examples

* fix: test

* fix: temporary disable gemma2 test

* fix: stop overwriting config.text_config unnecessarily

* fix: handling of set cache to the text_config section

* feat: add liger gemma support and bump liger to 0.5.5

* fix: add double use_cache setting

* fix: add support for final_logit_softcap in CCE for gemma2/3

* fix: set use_cache before model load

* feat: add missing layernorm override

* fix: handle gemma3 rmsnorm

* fix: use wrapper to pass dim as hidden_size

* fix: change dim to positional

* fix: patch with wrong mlp

* chore: refactor use_cache handling

* fix import issues

* fix tests.e2e.utils import

---------

Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-03-31 13:40:12 +07:00
Wing Lian
4ba80a0e5a fix streaming packing test (#2454)
* fix streaming packing test

* constrain amount of text generated
2025-03-29 08:30:06 -04:00
Wing Lian
c49682132b use offline for precached stream dataset (#2453) 2025-03-28 23:39:09 -04:00
Wing Lian
e46239f8d3 bump liger to 0.5.5 (#2448) 2025-03-28 19:21:03 -04:00
Wing Lian
05f03b541a hf offline decorator for tests to workaround rate limits (#2452) [skip ci]
* hf offline decorator for tests to workaround rate limits

* fail quicker so we can see logs

* try new cache name

* limit files downloaded

* phi mini predownload

* offline decorator for phi tokenizer

* handle meta llama 8b offline too

* make sure to return fixtures if they are wrapped too

* more fixes

* more things offline

* more offline things

* fix the env var

* fix the model name

* handle gemma also

* force reload of modules to recheck offline status

* prefetch mistral too

* use reset_sessions so hub picks up offline mode

* more fixes

* rename so it doesn't seem like a context manager

* fix backoff

* switch out tinyshakespeare dataset since it runs a py script to fetch data and doesn't work offline

* include additional dataset

* more fixes

* more fixes

* replace tiny shakespeare dataset

* skip some tests for now

* use more robust check using snapshot download to determine if a dataset name is on the hub

* typo for skip reason

* use local_files_only

* more fixtures

* remove local only

* use tiny shakespeare as pretrain dataset and streaming can't be offline even if precached

* make sure fixtures aren't offline

* improve the offline reset

* try bumping version of datasets

* reorder reloading and setting

* prime a new cache

* run the tests now with fresh cache

* try with a static cache

* now run all the ci again with hopefully a correct cache

* skip wonky tests for now

* skip wonky tests for now

* handle offline mode for model card creation
2025-03-28 19:20:46 -04:00
Wing Lian
a4e430e7c4 add override of upstream fix for multi-gpu orpo (#2440)
* add override of upstream fix

* override batch loss metrics for CPO/Simpo as well
2025-03-26 18:14:59 -04:00