diff --git a/README.md b/README.md
index a626635dc..55a11d6c1 100644
--- a/README.md
+++ b/README.md
@@ -22,39 +22,49 @@ Features:
## Table of Contents
-- [Introduction](#axolotl)
-- [Supported Features](#axolotl-supports)
-- [Quickstart](#quickstart-)
-- [Environment](#environment)
- - [Docker](#docker)
- - [Conda/Pip venv](#condapip-venv)
- - [Cloud GPU](#cloud-gpu) - Latitude.sh, JarvisLabs, RunPod
- - [Bare Metal Cloud GPU](#bare-metal-cloud-gpu)
- - [Windows](#windows)
- - [Mac](#mac)
- - [Google Colab](#google-colab)
- - [Launching on public clouds via SkyPilot](#launching-on-public-clouds-via-skypilot)
- - [Launching on public clouds via dstack](#launching-on-public-clouds-via-dstack)
-- [Dataset](#dataset)
-- [Config](#config)
- - [Train](#train)
- - [Inference](#inference-playground)
- - [Merge LORA to Base](#merge-lora-to-base)
- - [Special Tokens](#special-tokens)
- - [All Config Options](#all-config-options)
-- Advanced Topics
- - [Multipack](./docs/multipack.qmd)
- - [RLHF & DPO](./docs/rlhf.qmd)
- - [Dataset Pre-Processing](./docs/dataset_preprocessing.qmd)
- - [Unsloth](./docs/unsloth.qmd)
-- [Common Errors](#common-errors-)
- - [Tokenization Mismatch b/w Training & Inference](#tokenization-mismatch-bw-inference--training)
-- [Debugging Axolotl](#debugging-axolotl)
-- [Need Help?](#need-help-)
-- [Badge](#badge-)
-- [Community Showcase](#community-showcase)
-- [Contributing](#contributing-)
-- [Sponsors](#sponsors-)
+- [Axolotl](#axolotl)
+ - [Table of Contents](#table-of-contents)
+ - [Axolotl supports](#axolotl-supports)
+ - [Quickstart ⚡](#quickstart-)
+ - [Usage](#usage)
+ - [Advanced Setup](#advanced-setup)
+ - [Environment](#environment)
+ - [Docker](#docker)
+ - [Conda/Pip venv](#condapip-venv)
+ - [Cloud GPU](#cloud-gpu)
+ - [Bare Metal Cloud GPU](#bare-metal-cloud-gpu)
+ - [LambdaLabs](#lambdalabs)
+ - [GCP](#gcp)
+ - [Windows](#windows)
+ - [Mac](#mac)
+ - [Google Colab](#google-colab)
+ - [Launching on public clouds via SkyPilot](#launching-on-public-clouds-via-skypilot)
+ - [Launching on public clouds via dstack](#launching-on-public-clouds-via-dstack)
+ - [Dataset](#dataset)
+ - [Config](#config)
+ - [All Config Options](#all-config-options)
+ - [Train](#train)
+ - [Preprocess dataset](#preprocess-dataset)
+ - [Multi-GPU](#multi-gpu)
+ - [DeepSpeed](#deepspeed)
+ - [FSDP](#fsdp)
+ - [FSDP + QLoRA](#fsdp--qlora)
+ - [Weights \& Biases Logging](#weights--biases-logging)
+ - [Special Tokens](#special-tokens)
+ - [Inference Playground](#inference-playground)
+ - [Merge LORA to base](#merge-lora-to-base)
+ - [Common Errors 🧰](#common-errors-)
+ - [Tokenization Mismatch b/w Inference \& Training](#tokenization-mismatch-bw-inference--training)
+ - [Debugging Axolotl](#debugging-axolotl)
+ - [Need help? 🙋](#need-help-)
+ - [Badge ❤🏷️](#badge-️)
+ - [Community Showcase](#community-showcase)
+ - [Contributing 🤝](#contributing-)
+ - [Sponsors 🤝❤](#sponsors-)
+ - [💎 Diamond Sponsors - Contact directly](#-diamond-sponsors---contact-directly)
+ - [🥇 Gold Sponsors - $5000/mo](#-gold-sponsors---5000mo)
+ - [🥈 Silver Sponsors - $1000/mo](#-silver-sponsors---1000mo)
+ - [🥉 Bronze Sponsors - $500/mo](#-bronze-sponsors---500mo)
@@ -96,6 +106,7 @@ Features:
| RWKV | ✅ | ❓ | ❓ | ❓ | ❓ | ❓ | ❓ |
| Qwen | ✅ | ✅ | ✅ | ❓ | ❓ | ❓ | ❓ |
| Gemma | ✅ | ✅ | ✅ | ❓ | ❓ | ✅ | ❓ |
+| Jamba | ✅ | ✅ | ✅ | ❓ | ❓ | ✅ | ❓ |
✅: supported
❌: not supported
diff --git a/examples/jamba/README.md b/examples/jamba/README.md
index 54f5d1da9..4c9dc85a0 100644
--- a/examples/jamba/README.md
+++ b/examples/jamba/README.md
@@ -6,5 +6,5 @@
- ✅ qlora w/ deepspeed Zero-3 needs at least 2x GPUs and 67GiB VRAM (surprisingly high)
- ✅ qlora single-gpu, ~51GiB VRAM
- ✅ multipack
-- ❓ FSDP
+- ✅ FSDP
- ❓ 8-bit LoRA
diff --git a/examples/jamba/qlora_fsdp.yaml b/examples/jamba/qlora_fsdp_large.yaml
similarity index 94%
rename from examples/jamba/qlora_fsdp.yaml
rename to examples/jamba/qlora_fsdp_large.yaml
index 2ea268344..28316efd5 100644
--- a/examples/jamba/qlora_fsdp.yaml
+++ b/examples/jamba/qlora_fsdp_large.yaml
@@ -1,4 +1,4 @@
-base_model: ai21labs/Jamba-v0.1
+base_model: ai21labs/AI21-Jamba-1.5-Large
tokenizer_type: AutoTokenizer
load_in_4bit: true
@@ -11,7 +11,7 @@ datasets:
drop_system_message: true
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
-output_dir: jamba-fsdp-qlora-ft
+output_dir: jamba-large-fsdp-qlora-ft
save_safetensors: true
adapter: qlora
sequence_len: 2048
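Assembled from the hunks above, the renamed `examples/jamba/qlora_fsdp_large.yaml` now begins roughly as follows. This is a sketch reconstructed from the diff's context and added lines only; keys not visible in the hunks (including the `datasets:` entries) are elided rather than guessed:

```yaml
base_model: ai21labs/AI21-Jamba-1.5-Large
tokenizer_type: AutoTokenizer
load_in_4bit: true

# ... dataset entries omitted; only keys visible in the hunks are shown ...
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: jamba-large-fsdp-qlora-ft
save_safetensors: true
adapter: qlora
sequence_len: 2048
```

Per Axolotl's usual workflow, such a config would be launched with something like `accelerate launch -m axolotl.cli.train examples/jamba/qlora_fsdp_large.yaml`; the launch command itself is not part of this diff.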