diff --git a/README.md b/README.md
index b119f6e12..47750bc1a 100644
--- a/README.md
+++ b/README.md
@@ -1,383 +1,177 @@
- A powerful, flexible tool designed to streamline post-training for various AI models with enterprise-grade features and optimizations.
+
+
+
+
+Your ultimate toolkit for efficient, scalable, and versatile large language model fine-tuning.
+
+
+Axolotl is a powerful, flexible, and user-friendly tool designed to supercharge your post-training workflows for a wide range of cutting-edge AI models.
+
+```bash
+pip3 install -U packaging==23.2 setuptools==75.8.0 wheel ninja
+pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
+
+# Download example axolotl configs, deepspeed configs
+axolotl fetch examples
+axolotl fetch deepspeed_configs # OPTIONAL
+```
+
+Other installation approaches are described here.
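+
+As a quick sanity check after installing, you can list the CLI's subcommands (a minimal sketch; it assumes the install put the `axolotl` entrypoint on your PATH):
+
+```bash
+# Should print the available subcommands (fetch, train, inference, ...)
+axolotl --help
+```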
+
+```bash
+# Fetch axolotl examples
+axolotl fetch examples
+
+# Or, specify a custom path
+axolotl fetch examples --dest path/to/folder
+
+# Train a model using LoRA
+axolotl train examples/llama-3/lora-1b.yml
+```
+
+That's it! Check out our Getting Started Guide ➜ for a more detailed walkthrough.
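+
+Once training completes, a hedged follow-up sketch: run inference against the adapter, or merge it into the base model. The `./outputs/lora-out` path below is an assumption; use whatever `output_dir` your YAML config sets.
+
+```bash
+# Chat with the trained LoRA adapter (output path is an assumption)
+axolotl inference examples/llama-3/lora-1b.yml --lora-model-dir="./outputs/lora-out"
+
+# Optionally merge the adapter weights into the base model
+axolotl merge-lora examples/llama-3/lora-1b.yml --lora-model-dir="./outputs/lora-out"
+```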
+Dive deep into Axolotl's capabilities with our extensive documentation.
+
+Contributions are always welcome and highly appreciated! Axolotl thrives on community support. Please see our Contributing Guide for details on how you can help make Axolotl even better.
+
+A huge thank you to our visionary sponsors who provide the essential resources to keep Axolotl at the forefront of LLM fine-tuning:
+
+Modal: Revolutionizing cloud computing for Gen AI. Run jobs, deploy models, and fine-tune LLMs at scale with ease.
+
+Interested in powering the future of Axolotl? Become a sponsor! Contact us at wing@axolotl.ai.
-Magistral with mistral-common tokenizer support has been added to Axolotl.
-See examples →
-
-QAT support has been added to Axolotl.
-Explore the docs →
-
-Llama 4 support has been added to Axolotl.
-See examples →
-
-• Sequence Parallelism (SP) for scaling context length
-Blog | Docs
-
-• (Beta) Multimodal models fine-tuning
-Check docs →
-
-• LoRA optimizations for better memory usage and speed
-Docs →
-
-• GRPO support added
-Blog | Example
-
-Reward Modelling / Process Reward Modelling fine-tuning support added.
-See docs →
-Train LLaMA, Mistral, Mixtral, Pythia, and more. Full compatibility with HuggingFace transformers causal language models.
-Full fine-tuning, LoRA, QLoRA, GPTQ, QAT, Preference Tuning (DPO, IPO, KTO, ORPO), RL (GRPO), Multimodal, and Reward Modelling.
-Reuse a single YAML file across dataset preprocessing, training, evaluation, quantization, and inference.
-Multipacking, Flash Attention, Xformers, Flex Attention, Liger Kernel, Sequence Parallelism, and Multi-GPU training.
-Load from local files, HuggingFace datasets, and cloud storage (S3, Azure, GCP, OCI).
-Pre-built Docker images and PyPI packages for seamless deployment on cloud platforms and local hardware.
-```bash
-# Install dependencies
-pip3 install -U packaging==23.2 setuptools==75.8.0 wheel ninja
-
-# Install Axolotl with Flash Attention and DeepSpeed
-pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
-
-# Download examples and configs
-axolotl fetch examples
-axolotl fetch deepspeed_configs # OPTIONAL
-```
-
-Other installation methods available in our documentation.
-```bash
-# Fetch examples
-axolotl fetch examples
-
-# Or specify custom path
-axolotl fetch examples --dest path/to/folder
-
-# Start training with LoRA
-axolotl train examples/llama-3/lora-1b.yml
-```
-
-That's it! Check our Getting Started Guide for a detailed walkthrough.
-Detailed setup instructions for different environments
-Full configuration options and examples
-Loading datasets from various sources
-Supported formats and usage instructions
-Scale your training across multiple GPUs
-Distributed training across multiple machines
-Efficient batch packing for training
-Auto-generated code documentation
-Frequently asked questions
-We welcome contributions from the community! Whether it's bug fixes,
\ No newline at end of file
+
+This project is proudly licensed under the Apache 2.0 License. See the LICENSE file for full details.
\ No newline at end of file