Wing Lian
59a7ac427d
make sure to scale too
2025-01-24 13:11:25 -05:00
Wing Lian
e3393042e5
hopefully fix the lora/dora logic
2025-01-24 13:11:25 -05:00
Wing Lian
08a4e8a7fb
refactor a bit
2025-01-24 13:11:25 -05:00
Wing Lian
b582d340b0
save tokenizer too
2025-01-24 13:11:25 -05:00
Wing Lian
474ba1a1b8
chore: lint/formatting
2025-01-24 13:11:25 -05:00
Wing Lian
de771fcb05
fix convert logger and registration
2025-01-24 13:11:25 -05:00
Wing Lian
f32d429db5
fix import path to args
2025-01-24 13:11:25 -05:00
Wing Lian
82005f8eeb
auto modeling for rrt
2025-01-24 13:11:25 -05:00
Wing Lian
b439ed3345
support optional dora
2025-01-24 13:11:24 -05:00
Wing Lian
623eaca740
more fixes to conversion
2025-01-24 13:11:24 -05:00
Wing Lian
38dfd3fadb
wip conversion cli
2025-01-24 13:11:24 -05:00
Wing Lian
daa9408233
more wip
2025-01-24 13:11:24 -05:00
Wing Lian
257231ac46
wip rrt
2025-01-24 13:11:24 -05:00
Wing Lian
887513285d
support for custom lr groups for non-embedding modules ( #2213 )
* support for custom lr groups for non-embedding modules
invert name check for group modules
include lr_groups in training args
additional conditional for creating optimizer
fix regular params with weight decay
fix lookup and add docs
* address pr feedback
2025-01-24 12:56:28 -05:00
Wing Lian
20620771f1
Pretrain multipack ( #2278 )
* fix for pretrain with packing
* fix model name and loss expected
* make sure to check with micro batch size for pretraining
* change loss thresholds based on parametrization
* make tests smaller for CI
* fix pretrain packing
* fix pretrain packing test
* address pr feedback
2025-01-24 12:55:20 -05:00
NanoCode012
6086162488
chore(doc): improve explanation for *_steps and *_strategy ( #2270 )
2025-01-24 10:07:02 -05:00
mashdragon
b2774af66c
Take split param from config in all load_dataset instances ( #2281 )
2025-01-24 10:06:50 -05:00
NanoCode012
74f9782fc3
chore(doc): fix explanation on gcs creds retrieval ( #2272 )
2025-01-24 10:05:58 -05:00
Wing Lian
8a7a0b07dc
support for latest transformers release 4.48.1 ( #2256 )
2025-01-23 21:17:57 -05:00
Wing Lian
8fb72cbc0b
use the extracted field_messages to parse the role fields ( #2265 )
2025-01-21 15:39:30 -05:00
Adithya Kamath
bb9d4102c4
Add 5000 line history limit to tmux for docker cloud ( #2268 )
2025-01-21 15:39:17 -05:00
Wing Lian
af727eedf7
option to not concatenate during pretraining ( #2263 )
* option to not concatenate during pretraining
* simplify conditional and add doc to config.qmd
2025-01-20 14:07:34 -05:00
jwongTensora
8606093921
fix for indexing error from token/embeddings mismatch ( #2257 )
Co-authored-by: jwong <jwongTensora@gmail.com>
2025-01-14 22:09:29 -05:00
NanoCode012
cba5a457d9
fix: use text_column even when not packing for pretraining ( #2254 )
* fix: use text_column even when not packing for pretraining
* feat: update test to check when not packing
* chore: lint
* Update src/axolotl/utils/data/pretraining.py
Co-authored-by: Wing Lian <wing.lian@gmail.com>
---------
Co-authored-by: Wing Lian <wing@axolotl.ai>
Co-authored-by: Wing Lian <wing.lian@gmail.com>
2025-01-14 22:08:56 -05:00
Wing Lian
19cd83d408
rename references to dpo dataset prep to pref data ( #2258 )
2025-01-14 22:07:55 -05:00
Dan Saunders
1ed4de73b6
CLI cleanup and documentation ( #2244 )
* CLI init refactor
* fix
* cleanup and (partial) docs
* Adding documentation and continuing cleanup (in progress)
* remove finetune.py script
* continued cleanup and documentation
* pytest fixes
* review comments
* fix
* Fix
* typing fixes
* make sure the batch dataset patcher for multipack is always loaded when handling datasets
* review comments
* fix
---------
Co-authored-by: Dan Saunders <dan@axolotl.ai>
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-01-13 17:55:29 +00:00
Wing Lian
f89e962119
skip over rows in pretraining dataset ( #2223 )
* skip over rows in pretraining dataset
* update docs
2025-01-13 10:44:45 -05:00
Wing Lian
bc1c9c20e3
assume empty lora dropout means 0.0 and add tests ( #2243 )
* assume empty lora dropout means 0.0 and add tests
* remove unnecessary arg
* refactor based on pr feedback
* chore: lint
2025-01-13 10:44:11 -05:00
Wing Lian
dd26cc3c0f
add helper to verify the correct model output file exists ( #2245 )
* add helper to verify the correct model output file exists
* more checks using helper
* chore: lint
* fix import and relora model check
* workaround for trl trainer saves
* remove stray print
2025-01-13 10:43:29 -05:00
Wing Lian
d8b4027200
use 2.5.1 docker images as the latest tag since it seems stable ( #2198 )
2025-01-10 08:35:25 -05:00
Wing Lian
fb3352e21c
rename liger test so it properly runs in ci ( #2246 )
2025-01-09 17:31:43 -05:00
NanoCode012
ed77e7001e
feat: add support for data_files in pretraining ( #2238 )
2025-01-09 21:04:13 +00:00
Wing Lian
7669a03fb4
update upstream HF deps ( #2239 )
* bump axolotl contribs for upstream main conflicts:
* bump datasets, tokenizer, trl
* remove log workarounds in trl
* bump lm-eval
* remove unsloth_ import from critical path
* remove llama fa2 from conftest
* unsloth breaks with latest upstream
2025-01-09 21:01:59 +00:00
Vincenzo di Cicco
6553683170
Use SequentialSampler if curriculum_sampling is enabled with sample_packing ( #2235 )
2025-01-09 21:01:22 +00:00
Wing Lian
5e0124e2ab
update modal version for ci ( #2242 )
2025-01-09 21:01:02 +00:00
NanoCode012
2e8d7c1adb
fix: mistral nemo does not recognize token_type_ids in forward ( #2233 )
2025-01-09 21:00:36 +00:00
Wing Lian
3c1921e400
add hf cache caching for GHA ( #2247 )
* add hf cache caching for GHA
* use modal volume to cache hf data
* make sure to update the cache as we add new fixtures in conftest
2025-01-09 20:59:54 +00:00
Wing Lian
7faf2b6e8e
Merge group queue ( #2248 )
* add support for merge groups
* also lint merge groups
2025-01-09 15:49:00 -05:00
salman
c1b920f291
Fixing OSX installation ( #2231 )
* bumping version, removing non-osx compatible deps
* updating pylintrc
* fixing linters
* reverting changes
2025-01-07 13:42:01 +00:00
Wing Lian
3915abee4c
make sure padding is labeled as -100 for pretraining ( #2227 )
2024-12-31 15:22:18 -05:00
NJordan72
7a38dbe674
fix: allow trainer builder to use custom jinja chat template ( #2219 )
* fix: allow trainer builder to use custom jinja chat template
* chore: use get_chat_template_from_config
Co-authored-by: Chirag Jain <jain.chirag925@gmail.com>
* fix: swap imports
---------
Co-authored-by: Chirag Jain <jain.chirag925@gmail.com>
2024-12-24 16:18:50 -05:00
Wing Lian
e0a2eb2ebd
fix untrained tokens if specified explicitly from a list ( #2210 )
2024-12-23 09:08:28 -05:00
Wing Lian
d852d7af7a
inference - don't default to accelerate, fix base model ( #2216 ) [skip ci]
2024-12-23 07:48:41 -05:00
Wing Lian
3742deb1de
add deepspeed example with torch compile enabled ( #2212 ) [skip ci]
2024-12-22 12:11:39 -05:00
Wing Lian
2312caaa98
GC every n steps ( #2209 )
2024-12-21 17:38:33 -05:00
Wing Lian
307cf7c685
move the dataset loading from remote/disk to a shared function so we can re-use for RL ( #2204 )
2024-12-20 21:43:52 -05:00
Dan Saunders
70541145f1
adding test_datasets compat with pretraining_dataset (streaming) ( #2206 ) [skip ci]
2024-12-20 21:43:33 -05:00
Wing Lian
42bd32a233
add outputs (symlink) to gitignore [skip ci] ( #2205 )
2024-12-19 20:14:43 -05:00
Dan Saunders
5b8fb5e939
remove cicd pytest xdist args ( #2201 )
* remove cicd pytest xdist args
* Delete outputs
2024-12-19 11:44:53 -05:00
Wing Lian
bd2a594b89
use DataCollatorWithFlattening when not sample packing ( #2167 )
2024-12-17 17:46:44 -05:00