--extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/

# START section of dependencies that don't install on Darwin/MacOS
bitsandbytes==0.45.3
triton>=3.0.0
mamba-ssm==1.2.0.post1
xformers>=0.0.23.post1
autoawq==0.2.7.post3
liger-kernel==0.5.3
# END section

packaging==23.2

peft==0.15.0
transformers==4.50.0
tokenizers>=0.21.1
accelerate==1.5.2
datasets==3.5.0
deepspeed==0.16.4
trl==0.15.1

optimum==1.16.2
hf_transfer
sentencepiece
gradio==3.50.2

modal==0.70.5
pydantic==2.10.6
addict
fire
PyYAML>=6.0
requests
wandb
einops
colorama
numba
numpy>=1.24.4,<=2.0.1

# qlora things
evaluate==0.4.1
scipy
scikit-learn==1.4.2
nvidia-ml-py==12.560.30
art
tensorboard
python-dotenv==1.0.1

# remote filesystems
s3fs>=2024.5.0
gcsfs>=2024.5.0
# adlfs

zstandard==0.22.0
fastcore

# lm eval harness
lm_eval==0.4.7
langdetect==1.0.9
immutabledict==4.2.0
antlr4-python3-runtime==4.13.2

torchao==0.7.0
schedulefree==1.3.0

axolotl-contribs-lgpl==0.0.6
axolotl-contribs-mit==0.0.3