Dan Saunders
79ddaebe9a
Add ruff, remove black, isort, flake8, pylint (#3092)
* black, isort, flake8 -> ruff
* remove unused
* add back needed import
* fix
2025-08-23 23:37:33 -04:00
Wing Lian
c6d69d5c1b
release v0.11.0 (#2875)
* release v0.11.0
* don't build vllm into release for now
* remove 2.5.1 references
* smollm3 multipack support
* fix ordering of e2e tests
2025-07-09 09:22:35 -04:00
Wing Lian
a85efffbef
bump transformers==4.52.4 (#2800) [skip ci]
* bump transformers==4.52.4
* don't use hf offline for qwen tokenizer
* increase timeout
* don't use methodtype
* increase timeout
* better assertion logging
* upgrade deepspeed version too
2025-06-18 15:46:14 -04:00
Wing Lian
cb03c765a1
add uv tooling for e2e gpu tests (#2750)
* add uv tooling for e2e gpu tests
* fixes from PR feedback
* simplify check
* fix env var
* make sure to use uv for other install
* use raw_dockerfile_image
* Fix import
* fix args to experimental dockerfile image call
* use updated modal versions
2025-06-05 07:25:06 -07:00
Wing Lian
ecc719f5c7
add support for base image with uv (#2691)
2025-06-02 12:48:55 -07:00
Wing Lian
a27b909c5c
GRPO fixes (peft) (#2676)
* don't set peft_config on grpo to prevent double peft wrap
* remove overrides needed to support bug
* fix grpo tests
* require more CPU for multigpu to help with torch compile for vllm
2025-05-16 15:47:03 -04:00
Dan Saunders
c4053481ff
Codecov fixes / improvements (#2549)
* adding codecov reporting
* random change
* codecov fixes
* adding missing dependency
* fix
---------
Co-authored-by: Dan Saunders <dan@axolotl.ai>
2025-04-23 10:33:30 -04:00
Wing Lian
630e40dd13
upgrade transformers to 4.51.1 (#2508)
* upgrade transformers to 4.51.1
* multigpu longer timeout
2025-04-09 02:53:00 -04:00
Dan Saunders
c907ac173e
adding pre-commit auto-update GH action and bumping plugin versions (#2428)
* adding pre-commit auto-update GH action and bumping plugin versions
* running updated pre-commit plugins
* sorry to revert, but pylint complained
* Update .pre-commit-config.yaml
Co-authored-by: Wing Lian <wing.lian@gmail.com>
---------
Co-authored-by: Dan Saunders <dan@axolotl.ai>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2025-03-21 11:02:43 -04:00
Wing Lian
a4170030ab
don't install extraneous old version of pydantic in ci and make sure to run multigpu ci (#2355)
2025-02-21 22:06:29 -05:00
salman
c071a530f7
removing 2.3.1 (#2294)
2025-01-28 23:23:44 -05:00
Wing Lian
3c1921e400
add hf cache caching for GHA (#2247)
* add hf cache caching for GHA
* use modal volume to cache hf data
* make sure to update the cache as we add new fixtures in conftest
2025-01-09 20:59:54 +00:00
Wing Lian
3931a42763
change deprecated modal Stub to App (#2038)
2024-11-11 15:10:34 -05:00
Wing Lian
e12a2130e9
first pass at pytorch 2.5.0 support (#1982)
* first pass at pytorch 2.5.0 support
* attempt to install causal_conv1d with mamba
* gracefully handle missing xformers
* fix import
* fix incorrect version, add 2.5.0
* increase tests timeout
2024-10-21 11:00:45 -04:00
Wing Lian
54392ac8a6
Attempt to run multigpu in PR CI for now to ensure it works (#1815) [skip ci]
* Attempt to run multigpu in PR CI for now to ensure it works
* fix yaml file
* forgot to include multigpu tests
* fix call to cicd.multigpu
* dump dictdefault to dict for yaml conversion
* use to_dict instead of casting
* 16bit-lora w flash attention, 8bit lora seems problematic
* add llama fsdp test
* more tests
* Add test for qlora + fsdp with prequant
* limit accelerate to 2 processes and disable broken qlora+fsdp+bnb test
* move multigpu tests to biweekly
2024-08-09 11:50:13 -04:00