* use fastchat conversations template
* require fastchat (fschat) pip install
* handle roles dynamically from conversation
* tweak fastchat conversation with a monkeypatch to get individual turns
* fix up so it works with multiple conversation styles, and don't strip the turns
* fix sharegpt fixture now that we're using a more correct tokenization
* use a new prompter and support fastchat conversation type
* use sharegpt from prompt strategies now
* update docs, add chatml template
* add a newline after im_end token
* ensure we correctly set system message
* update per PR feedback to handle deprecated sharegpt types
* don't add duplicate wandb req
* make sharegpt fields configurable from yml
* llama2 fixes
* don't fail fatally when turns are improper
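The chatml-related changes above (the template and the newline after the `<|im_end|>` token) can be illustrated with a minimal, self-contained sketch. This is not axolotl's actual prompter code; the helper name and role strings are illustrative, assuming only the standard ChatML delimiters:

```python
# Minimal sketch of ChatML-style prompt assembly (illustrative, not the
# repo's actual prompter). Note the newline appended after each
# <|im_end|> token, per the change described above.
def chatml_prompt(system, turns):
    """Render a system message and (role, message) turns as a ChatML string."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>\n"]
    for role, message in turns:
        parts.append(f"<|im_start|>{role}\n{message}<|im_end|>\n")
    return "".join(parts)

print(chatml_prompt("You are helpful.", [("user", "Hi"), ("assistant", "Hello!")]))
```

The per-turn rendering is what makes it possible to tokenize individual turns separately (e.g. to mask user turns out of the loss) rather than tokenizing one concatenated prompt string.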
--extra-index-url https://download.pytorch.org/whl/cu118
--extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
torch==2.0.1
auto-gptq
packaging
peft @ git+https://github.com/huggingface/peft.git
transformers @ git+https://github.com/huggingface/transformers.git@0ac3875011d32dc85e0e83970507e3afe8f0febb
bitsandbytes>=0.41.1
accelerate @ git+https://github.com/huggingface/accelerate@80da9cfb09bb3cc9f1b385cb55d6b90d025a5fd9
deepspeed
addict
evaluate
fire
PyYAML>=6.0
datasets
flash-attn>=2.2.1
sentencepiece
wandb
einops
xformers
optimum
hf_transfer
colorama
numba
numpy>=1.24.4
# qlora things
bert-score==0.3.13
evaluate==0.4.0
rouge-score==0.1.2
scipy
scikit-learn==1.2.2
pynvml
art
fschat==0.2.29
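A file like this is typically consumed with pip; the `--extra-index-url` lines inside it tell pip where to find the CUDA 11.8 wheels for `torch` and `auto-gptq`, so no extra flags are needed on the command line:

```shell
# Install the pinned dependencies; extra wheel indexes are declared
# inside the requirements file itself.
pip install -r requirements.txt
```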