try #2: pin hf transformers and accelerate to latest release, don't reinstall pytorch (#867)

* isolate torch from the requirements.txt

* fix typo for removed line ending

* pin transformers and accelerate to latest releases

* try with auto-gptq==0.5.1

* update README to remove manual peft install

* pin xformers to 0.0.22

* bump flash-attn to 2.3.3

* pin flash attn to exact version

This commit is contained in:
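Taken together, the bullets above would translate into requirements.txt entries roughly like the following (a sketch: the exact transformers and accelerate versions are whatever the latest releases were at commit time, and torch is deliberately absent from the file so it is not reinstalled):

```
# torch intentionally omitted: installed separately so pip does not reinstall it
transformers==<latest release at commit time>
accelerate==<latest release at commit time>
auto-gptq==0.5.1
xformers==0.0.22
flash-attn==2.3.3
```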
Wing Lian
2023-11-16 10:42:36 -05:00
committed by GitHub
parent 3cc67d2cdd
commit 0de1457189
3 changed files with 6 additions and 8 deletions

@@ -71,6 +71,7 @@ jobs:
       - name: Install dependencies
         run: |
           pip3 install --extra-index-url https://download.pytorch.org/whl/cu118 -U torch==2.0.1
+          pip3 uninstall -y transformers accelerate
           pip3 install -U -e .[flash-attn]
           pip3 install -r requirements-tests.txt