Commit Graph

109 Commits

Author SHA1 Message Date
Wing Lian
ef223519c9 update deps (#1663) [skip ci]
* update deps and tweak logic so axolotl is pip installable

* use vcs url format

* using dependency_links isn't supported per docs
2024-05-28 11:23:34 -04:00
Wing Lian
60113437e4 cloud image w/o tmux (#1628) 2024-05-15 22:27:40 -04:00
Wing Lian
3319780300 update torch 2.2.1 -> 2.2.2 (#1622) 2024-05-15 09:45:27 -04:00
Wing Lian
70185763f6 add torch 2.3.0 to builds (#1593) 2024-05-05 18:45:45 -04:00
Wing Lian
c10563c444 fix broken linting (#1541)
* chore: lint

* include examples in yaml check

* mistral decided to gate their models...

* more mistral models that were gated
2024-04-19 01:03:04 -04:00
Wing Lian
4a92a3b9ee Nightlies fix v4 (#1458) [skip ci]
* another attempt at github actions

* try again
2024-03-29 11:04:34 -04:00
Wing Lian
46a73e3d1a fix yaml parsing for workflow (#1457) [skip ci] 2024-03-29 10:21:08 -04:00
Wing Lian
da3415bb5a fix how nightly tag is generated (#1456) [skip ci] 2024-03-29 09:29:17 -04:00
Wing Lian
8cb127abeb configure nightly docker builds (#1454) [skip ci]
* configure nightly docker builds

* also test update pytorch in modal ci
2024-03-29 08:25:45 -04:00
Wing Lian
05b398a072 fix some of the edge cases for Jamba (#1452)
* fix some of the edge cases for Jamba

* update requirements for jamba
2024-03-29 02:38:02 -04:00
Wing Lian
da265dd796 fix for accelerate env var for auto bf16, add new base image and expand torch_cuda_arch_list support (#1413) 2024-03-26 16:46:19 -04:00
Hamel Husain
4e69aa48ab Update docs.yml 2024-03-21 22:36:57 -07:00
Hamel Husain
629450cecd Bootstrap Hosted Axolotl Docs w/Quarto (#1429)
* precommit

* mv styles.css

* fix links
2024-03-21 22:28:36 -07:00
Wing Lian
7803f0934f fixes for dpo and orpo template loading (#1424) 2024-03-20 11:36:24 -04:00
Wing Lian
00018629e7 run tests again on Modal (#1289) [skip ci]
* run tests again on Modal

* make sure to run the full suite of tests on modal

* run cicd steps via shell script

* run tests in different runs

* increase timeout

* split tests into steps on modal

* increase workflow timeout

* retry doing this with only a single script

* fix yml launch for modal ci

* reorder tests to run on modal

* skip dpo tests on modal

* run on L4s, A10G takes too long

* increase CPU and RAM for modal test

* run modal tests on A100s

* skip phi test on modal

* env not arg in modal dockerfile

* upgrade pydantic and fastapi for modal tests

* cleanup stray character

* use A10s instead of A100 for modal
2024-02-29 14:26:26 -05:00
Wing Lian
6d4bbb877f deprecate py 3.9 support, set min pytorch version (#1343) [skip ci] 2024-02-28 12:58:05 -05:00
Wing Lian
5894f0e57e make mlflow optional (#1317)
* make mlflow optional

* fix xformers

don't patch swiglu if xformers not working
fix the check for xformers swiglu

* fix install of xformers with extra index url for docker builds

* fix docker build arg quoting
2024-02-26 11:41:33 -05:00
NanoCode012
a359579371 deprecate: pytorch 2.0.1 image (#1315) [skip ci]
* deprecate: pytorch 2.0.1 image

* deprecate from main image

* Update main.yml

* Update tests.yml
2024-02-22 11:39:47 +09:00
Wing Lian
ea00dd0852 don't use load and push together (#1284) 2024-02-09 14:54:31 -05:00
Wing Lian
aaf54dc730 run the docker image builds and push on gh action gpu runners (#1218) 2024-02-09 10:32:54 -05:00
Wing Lian
8da1633124 Revert "run PR e2e docker CI tests in Modal" (#1220) [skip ci] 2024-01-26 16:50:44 -05:00
Wing Lian
36d053f6f0 run PR e2e docker CI tests in Modal (#1217) [skip ci]
* wip modal for ci

* handle falcon layernorms better

* update

* rebuild the template each time with the pseudo-ARGS

* fix ref

* update tests to use modal

* cleanup ci script

* make sure to install jinja2 also

* kickoff the gh action on gh hosted runners and specify num gpus
2024-01-26 16:13:27 -05:00
Wing Lian
1b180034c7 ensure the tests use the same version of torch as the latest base docker images (#1215) [skip ci] 2024-01-26 10:38:30 -05:00
Wing Lian
74c72ca5eb drop py39 docker images, add py311, upgrade pytorch to 2.1.2 (#1205)
* drop py39 docker images, add py311, upgrade pytorch to 2.1.2

* also allow the main build to be manually triggered

* fix workflow_dispatch in yaml
2024-01-26 00:38:49 -05:00
Wing Lian
badda3783b make sure to register the base chatml template even if no system message is provided (#1207) 2024-01-25 10:38:08 -05:00
Wing Lian
0f77b8d798 add commit message option to skip docker image builds in ci (#1168) [skip ci] 2024-01-22 19:55:36 -05:00
Wing Lian
ece0211996 Agnostic cloud gpu docker image and Jupyter lab (#1097) 2024-01-15 22:37:54 -05:00
Hamel Husain
2dc431078c Add link on README to Docker Debugging (#1107)
* add docker debug

* Update docs/debugging.md

Co-authored-by: Wing Lian <wing.lian@gmail.com>

* explain editable install

* explain editable install

* upload new video

* add link to README

* Update README.md

* Update README.md

* chore: lint

* make sure to lint markdown too

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-12 08:51:35 -05:00
Mark Saroufim
44ba616da2 Fix broken pypi.yml (#1099) [skip ci] 2024-01-11 12:35:31 -05:00
Wing Lian
6c19e9302a add python 3.11 to the matrix for unit tests (#1085) [skip ci] 2024-01-10 13:02:01 -05:00
Wing Lian
9032e610b1 use tags again for test image, only run docker e2e after pre-commit checks (#1081) 2024-01-10 09:04:56 -05:00
Wing Lian
788649fe95 attempt to also run e2e tests that needs gpus (#1070)
* attempt to also run e2e tests that needs gpus

* fix stray quote

* checkout specific github ref

* dockerfile for tests with proper checkout

ensure wandb is disabled for docker pytests
clear wandb env after testing
clear wandb env after testing
make sure to provide a default val for pop
try skipping wandb validation tests
explicitly disable wandb in the e2e tests
explicitly report_to None to see if that fixes the docker e2e tests
split gpu from non-gpu unit tests
skip bf16 check in test for now
build docker w/o cache since it uses branch name ref
revert some changes now that caching is fixed
skip bf16 check if on gpu w support

* pytest skip for auto-gptq requirements

* skip mamba tests for now, split multipack and non packed lora llama tests

* split tests that use monkeypatches

* fix relative import for prev commit

* move other tests using monkeypatches to the correct run
2024-01-09 21:23:23 -05:00
Hamel Husain
9ca358b671 Simplify Docker Unit Test CI (#1055) [skip ci]
* Update tests-docker.yml

* Update tests-docker.yml

* run ci tests on ci yaml updates

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-06 08:20:33 -05:00
JinK
553c80f79a streaming multipack for pretraining dataset (#959)
* [Feat] streaming multipack

* WIP make continued pretraining work w multipack

* fix up hardcoding, lint

* fix dict check

* update test for updated pretraining multipack code

* fix hardcoded data collator fix for multipack pretraining

* fix the collator to be the max length for multipack pretraining

* don't bother with latest tag for test

* cleanup docker build/test

---------

Co-authored-by: jinwonkim93@github.com <jinwonkim>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2024-01-05 22:13:21 -05:00
Hamel Husain
eb4c99431b Update tests-docker.yml (#1052) [skip ci] 2024-01-05 14:26:18 -05:00
Wing Lian
bcc78d8fa3 bump transformers and update attention class map name (#1023)
* bump transformers and update attention class map name

* also run the tests in docker

* add mixtral e2e smoke test

* fix base name for docker image in test

* mixtral lora doesn't seem to work, at least check qlora

* add testcase for mixtral w sample packing

* check monkeypatch for flash attn multipack

* also run the e2e tests in docker

* use all gpus to run tests in docker ci

* use privileged mode too for docker w gpus

* rename the docker e2e actions for gh ci

* set privileged mode for docker and update mixtral model self attn check

* use fp16/bf16 for mixtral w fa2

* skip e2e tests on docker w gpus for now

* tests to validate mistral and mixtral patches

* fix rel import
2024-01-03 12:11:04 -08:00
Wing Lian
37820f6540 support for cuda 12.1 (#989) 2023-12-22 11:08:22 -05:00
Hamel Husain
2e61dc3180 Add tests to Docker (#993) 2023-12-22 06:37:20 -08:00
Hamel Husain
62ba1609b6 bump actions versions 2023-12-21 08:54:08 -08:00
Wing Lian
161bcb6517 Dockerfile torch fix (#987)
* add torch to requirements.txt at build time to force version to stick

* fix xformers check

* better handling of xformers based on installed torch version

* fix for ci w/o torch
2023-12-21 09:38:20 -05:00
Wing Lian
40a6362c92 support for mamba (#915)
* support for mamba

* more mamba fixes

* use fork for mamba kwargs fix

* grad checkpointing doesn't work

* fix extras for mamba

* mamba loss fix

* use fp32 and remove verbose logging

* mamba fixes

* fix collator for mamba

* set model_type on training_args

* don't save safetensors for mamba

* update mamba config to disable safetensor checkpoints, install for tests

* no evals for mamba tests

* handle save_pretrained

* handle unused safetensors arg
2023-12-09 12:10:41 -05:00
Wing Lian
0de1457189 try #2: pin hf transformers and accelerate to latest release, don't reinstall pytorch (#867)
* isolate torch from the requirements.txt

* fix typo for removed line ending

* pin transformers and accelerate to latest releases

* try w auto-gptq==0.5.1

* update README to remove manual peft install

* pin xformers to 0.0.22

* bump flash-attn to 2.3.3

* pin flash attn to exact version
2023-11-16 10:42:36 -05:00
Wing Lian
70157ccb8f add a latest tag for regular axolotl image, cleanup extraneous print statement (#746) 2023-10-19 12:28:29 -04:00
Wing Lian
2aa1f71464 fix pytorch 2.1.0 build, add multipack docs (#722) 2023-10-13 08:57:28 -04:00
Wing Lian
7f2618b5f4 add docker images for pytorch 2.1.0 (#697) 2023-10-07 12:23:31 -04:00
Wing Lian
f4868d733c make sure we also run CI tests when requirements.txt changes (#663) 2023-10-02 08:43:40 -04:00
Wing Lian
5b0bc48fbc add mistral e2e tests (#649)
* mistral e2e tests

* make sure to enable flash attention for the e2e tests

* use latest transformers full sha

* uninstall first
2023-09-29 00:22:40 -04:00
Wing Lian
b6ab8aad62 Mistral flash attn packing (#646)
* add mistral monkeypatch

* add arg for decoder attention mask

* fix lint for duplicate code

* make sure to update transformers too

* tweak install for e2e

* move mistral patch to conditional
2023-09-27 18:41:00 -04:00
Maxime
5d931cc042 Only run tests when a change to python files is made (#614)
* Update tests.yml

* Update .github/workflows/tests.yml

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
2023-09-20 22:02:04 -04:00
Wing Lian
62a774140b Fix for check with cfg and merge_lora (#600) 2023-09-18 21:14:32 -04:00