* feat: support new arg num_items_in_batch
* use kwargs to manage extra unknown kwargs for now
* upgrade against upstream transformers main
* make sure trl is on latest too
* fix for upgraded trl
* fix: handle trl and transformer signature change
* feat: update trl to handle transformer signature
* RewardDataCollatorWithPadding no longer has max_length
* handle updated signature for tokenizer vs processor class
* invert logic for tokenizer vs processor class
* processing_class, not processor class
* also handle processing class in dpo
* handle model name w model card creation
* upgrade transformers and add a loss check test
* fix install of tbparse requirements
* make sure to add tbparse to req
* feat: revert kwarg to positional kwarg to be explicit

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
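Two of the changes in that list are easiest to see in code. The first is the new num_items_in_batch argument that recent transformers releases pass to Trainer.compute_loss so the loss can be normalized by the true number of items under gradient accumulation. Below is a minimal sketch, not the exact axolotl patch, of accepting it explicitly as a positional kwarg while staying compatible with older releases; the class name is illustrative.

import inspect

from transformers import Trainer

# Does the installed transformers pass num_items_in_batch to compute_loss?
_HAS_NUM_ITEMS = (
    "num_items_in_batch" in inspect.signature(Trainer.compute_loss).parameters
)


class CompatTrainer(Trainer):
    # Declare the argument explicitly rather than hiding it in **kwargs,
    # mirroring the final "revert kwarg to positional kwarg" commit above.
    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        extra = {"num_items_in_batch": num_items_in_batch} if _HAS_NUM_ITEMS else {}
        return super().compute_loss(model, inputs, return_outputs=return_outputs, **extra)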
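The second is the tokenizer to processing_class rename in trl's trainer constructors, handled by the "processing_class, not processor class" and DPO commits above. A hedged sketch, assuming only that newer trl versions accept processing_class where older ones accepted tokenizer; the helper name is illustrative, not axolotl's API.

import inspect

from trl import DPOTrainer


def tokenizer_kwarg(tokenizer):
    # Newer trl trainers take `processing_class`; older ones take `tokenizer`.
    params = inspect.signature(DPOTrainer.__init__).parameters
    key = "processing_class" if "processing_class" in params else "tokenizer"
    return {key: tokenizer}


# Usage: DPOTrainer(model=model, args=training_args, **tokenizer_kwarg(tok), ...)

The "loss check test" referenced in the commit list appears to be the following test module.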
"""Test module for checking whether the integration of Unsloth with Hugging Face Transformers is working as expected."""
|
|
import unittest
|
|
|
|
from axolotl.monkeypatch.unsloth_ import check_self_attn_is_patchable
|
|
|
|
|
|
class TestUnslothIntegration(unittest.TestCase):
|
|
"""Unsloth monkeypatch integration tests."""
|
|
|
|
def test_is_self_attn_patchable(self):
|
|
# ensures the current version of transformers has loss code that matches our patching code
|
|
self.assertTrue(
|
|
check_self_attn_is_patchable(),
|
|
"HF transformers self attention code has changed and isn't patchable",
|
|
)
|
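Pinning patchability in a test like this is the usual guard for monkeypatches: check_self_attn_is_patchable verifies that the upstream self-attention code still matches what the patch expects, presumably by comparing against the known source, so a transformers upgrade that rewrites that code fails loudly in CI rather than silently applying a stale patch.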