tocmo0nlord/axolotl
Files at commit aca03983157612918520c4cbb1a08a9c535daf01
Path: axolotl/src/axolotl
Latest commit: 669f1d052c by NanoCode012, 2023-10-06 12:33:43 -04:00
Fix: Higher vram usage for mistral and sample_packing (#691)

* Fix: Higher vram usage for mistral and sample_packing
* chore: update comment
* chore: lint
cli/                   2023-10-02 21:07:24 -04:00   prepared dataset caching, other misc fixes (#665)
common/                2023-08-31 14:26:52 -07:00   Debug tokenization output: Add ability to output text only (no tokens), and/or specify num samples to see (#511)
models/                2023-09-27 11:12:08 -04:00   attention_mask not needed for training (#642)
monkeypatch/           2023-10-05 16:03:43 -04:00   flash_attention + sample packing for stablelm 3b (#671)
prompt_strategies/     2023-09-28 20:14:14 -04:00   add support for defined train split (#654)
utils/                 2023-10-06 12:33:43 -04:00   Fix: Higher vram usage for mistral and sample_packing (#691)
__init__.py            2023-04-14 00:20:05 -04:00   WIP for axolotl trainer
convert.py             2023-05-31 02:53:53 +09:00   Lint convert.py
datasets.py            2023-09-27 12:12:10 -04:00   Correct typos in datasets.py (#639)
logging_config.py      2023-09-06 08:37:51 -04:00   log rank too (#527)
prompt_tokenizers.py   2023-09-28 12:21:51 -04:00   don't strip the prompt for check since we don't strip to tokenize anymore (#650)
prompters.py           2023-09-27 12:10:45 -04:00   use fastchat conversations template (#578)
train.py               2023-09-22 16:13:26 -04:00   create a model card with axolotl badge (#624)