diff --git a/.nojekyll b/.nojekyll index 3d627ddfd..a14f9e4e3 100644 --- a/.nojekyll +++ b/.nojekyll @@ -1 +1 @@ -e905cd86 \ No newline at end of file +5da99456 \ No newline at end of file diff --git a/docs/dataset-formats/conversation.html b/docs/dataset-formats/conversation.html index f335a0565..55953191f 100644 --- a/docs/dataset-formats/conversation.html +++ b/docs/dataset-formats/conversation.html @@ -552,6 +552,19 @@ Important type: chat_template roles_to_train: train_on_eos: +
+
+
+ +
+
+Tip +
+
+
+

If you receive an error like “chat_template choice is tokenizer_default but tokenizer’s chat_template is null.”, it means the tokenizer does not have a default chat_template. Follow the examples below instead to set a custom chat_template.

+
+
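The tip above can be checked programmatically: a tokenizer's default template, when present, lives under the chat_template key of its tokenizer_config.json. A minimal stdlib sketch (the helper name and file path are illustrative, not part of axolotl):

```python
import json

def has_default_chat_template(tokenizer_config_path: str) -> bool:
    """Return True if the tokenizer config defines a non-null chat_template."""
    with open(tokenizer_config_path) as f:
        config = json.load(f)
    # A missing key or a null/empty value both mean "no default template".
    return bool(config.get("chat_template"))
```

If this returns False for your tokenizer, follow the examples below to set a custom chat_template instead of tokenizer_default.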
  1. Using the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.
diff --git a/docs/faq.html b/docs/faq.html index 89f080c03..702ad5518 100644 --- a/docs/faq.html +++ b/docs/faq.html @@ -486,6 +486,10 @@ ul.task-list li input[type="checkbox"] {

A: This is caused by a mismatch between tokenizer.eos_token and the EOS/EOT token in the template. Make sure to set eos_token under special_tokens to the same EOS/EOT token used in the template.

+

Q: “chat_template choice is tokenizer_default but tokenizer’s chat_template is null. Please add a chat_template in tokenizer config”

+
+

A: This is because the tokenizer does not define a chat template. Please add a chat template to the tokenizer config, or set one via the chat_template option in your axolotl config. See chat_template for more details.
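If editing the tokenizer files is not an option, the same error can be resolved from the axolotl config by selecting one of the built-in templates described in chat_template; a sketch:

```yaml
# Option 1: force a known built-in template
chat_template: chatml

# Option 2: use the tokenizer's template, falling back to chatml when it is null
# chat_template: tokenizer_default_fallback_chatml
```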

+
diff --git a/docs/reward_modelling.html b/docs/reward_modelling.html index f21cc0021..20248e1ca 100644 --- a/docs/reward_modelling.html +++ b/docs/reward_modelling.html @@ -491,22 +491,30 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin val_set_size: 0.1 eval_steps: 100 +

Bradley-Terry chat templates expect single-turn conversations in the following format:

+
{
+    "system": "...", // optional
+    "input": "...",
+    "chosen": "...",
+    "rejected": "..."
+}
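As a concrete instance of the schema above (the // optional marker is illustrative — strict JSON has no comments, so an optional system field is simply omitted when unused), one .jsonl record might be built like this:

```python
import json

# A single Bradley-Terry preference record: one prompt, one preferred
# ("chosen") and one dispreferred ("rejected") response.
record = {
    "system": "You are a helpful assistant.",  # optional; omit if unused
    "input": "What is the capital of France?",
    "chosen": "The capital of France is Paris.",
    "rejected": "France does not have a capital.",
}

# Serialized as one line of a .jsonl dataset file.
line = json.dumps(record)
```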

Process Reward Models (PRM)

Process reward models are trained on data that contains a preference annotation for each step in a series of interactions. Typically, PRMs are trained to provide a reward signal for each step of a reasoning trace and are used for downstream reinforcement learning.

-
base_model: Qwen/Qwen2.5-3B
-model_type: AutoModelForTokenClassification
-num_labels: 2
-
-process_reward_model: true
-datasets:
-  - path: trl-lib/math_shepherd
-    type: stepwise_supervised
-    split: train
-
-val_set_size: 0.1
-eval_steps: 100
+
base_model: Qwen/Qwen2.5-3B
+model_type: AutoModelForTokenClassification
+num_labels: 2
+
+process_reward_model: true
+datasets:
+  - path: trl-lib/math_shepherd
+    type: stepwise_supervised
+    split: train
+
+val_set_size: 0.1
+eval_steps: 100
+

Please see stepwise_supervised for more details on the dataset format.
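For intuition, a stepwise-supervised sample pairs each reasoning step with its own label. The field names below mirror the trl-lib/math_shepherd columns but are an assumption here — verify them against the stepwise_supervised dataset-format docs:

```python
# A hypothetical stepwise-supervised record: each reasoning step gets its
# own per-step label (field names are an assumption; check the
# stepwise_supervised docs for the authoritative schema).
sample = {
    "prompt": "Janet has 3 apples and buys 2 more. How many does she have?",
    "completions": [
        "Step 1: Janet starts with 3 apples.",
        "Step 2: She buys 2 more, so 3 + 2 = 6.",  # deliberately wrong step
    ],
    "labels": [True, False],  # one good/bad label per step
}

# Binary per-step labels line up with num_labels: 2 in the config above.
assert len(sample["completions"]) == len(sample["labels"])
```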

diff --git a/search.json b/search.json index 7214fe3e8..217e8a473 100644 --- a/search.json +++ b/search.json @@ -317,7 +317,7 @@ "href": "docs/reward_modelling.html", "title": "Reward Modelling", "section": "", - "text": "Overview\nReward modelling is a technique used to train models to predict the reward or value of a given input. This is particularly useful in reinforcement learning scenarios where the model needs to evaluate the quality of its actions or predictions. We support the reward modelling techniques supported by trl.\n\n\n(Outcome) Reward Models\nOutcome reward models are trained using data which contains preference annotations for an entire interaction between the user and model (e.g. rather than per-turn or per-step).\nbase_model: google/gemma-2-2b\nmodel_type: AutoModelForSequenceClassification\nnum_labels: 1\ntokenizer_type: AutoTokenizer\n\nreward_model: true\nchat_template: gemma\ndatasets:\n - path: argilla/distilabel-intel-orca-dpo-pairs\n type: bradley_terry.chat_template\n\nval_set_size: 0.1\neval_steps: 100\n\n\nProcess Reward Models (PRM)\nProcess reward models are trained using data which contains preference annotations for each step in a series of interactions. Typically, PRMs are trained to provide reward signals over each step of a reasoning trace and are used for downstream reinforcement learning.\nbase_model: Qwen/Qwen2.5-3B\nmodel_type: AutoModelForTokenClassification\nnum_labels: 2\n\nprocess_reward_model: true\ndatasets:\n - path: trl-lib/math_shepherd\n type: stepwise_supervised\n split: train\n\nval_set_size: 0.1\neval_steps: 100", + "text": "Overview\nReward modelling is a technique used to train models to predict the reward or value of a given input. This is particularly useful in reinforcement learning scenarios where the model needs to evaluate the quality of its actions or predictions. 
We support the reward modelling techniques supported by trl.\n\n\n(Outcome) Reward Models\nOutcome reward models are trained using data which contains preference annotations for an entire interaction between the user and model (e.g. rather than per-turn or per-step).\nbase_model: google/gemma-2-2b\nmodel_type: AutoModelForSequenceClassification\nnum_labels: 1\ntokenizer_type: AutoTokenizer\n\nreward_model: true\nchat_template: gemma\ndatasets:\n - path: argilla/distilabel-intel-orca-dpo-pairs\n type: bradley_terry.chat_template\n\nval_set_size: 0.1\neval_steps: 100\nBradley-Terry chat templates expect single-turn conversations in the following format:\n{\n \"system\": \"...\", // optional\n \"input\": \"...\",\n \"chosen\": \"...\",\n \"rejected\": \"...\"\n}\n\n\nProcess Reward Models (PRM)\nProcess reward models are trained using data which contains preference annotations for each step in a series of interactions. Typically, PRMs are trained to provide reward signals over each step of a reasoning trace and are used for downstream reinforcement learning.\nbase_model: Qwen/Qwen2.5-3B\nmodel_type: AutoModelForTokenClassification\nnum_labels: 2\n\nprocess_reward_model: true\ndatasets:\n - path: trl-lib/math_shepherd\n type: stepwise_supervised\n split: train\n\nval_set_size: 0.1\neval_steps: 100\nPlease see stepwise_supervised for more details on the dataset format.", "crumbs": [ "How To Guides", "Reward Modelling" @@ -779,7 +779,7 @@ "href": "docs/faq.html", "title": "FAQ", "section": "", - "text": "General\nQ: The trainer stopped and hasn’t progressed in several minutes.\n\nA: Usually an issue with the GPUs communicating with each other. 
See the NCCL doc\n\nQ: Exitcode -9\n\nA: This usually happens when you run out of system RAM.\n\nQ: Exitcode -7 while using deepspeed\n\nA: Try upgrading deepspeed w: pip install -U deepspeed\n\nQ: AttributeError: ‘DummyOptim’ object has no attribute ‘step’\nQ: ModuleNotFoundError: No module named ‘mpi4py’ using single GPU with deepspeed\n\nA: You may be using deepspeed with single gpu. Please remove the deepspeed: section in the yaml file or --deepspeed CLI flag.\n\nQ: The codes is stuck on saving preprocessed datasets.\n\nA: This is usually an issue with the GPU. This can be resolved through setting the os environment variable CUDA_VISIBLE_DEVICES=0. If you are on runpod, this is usually a pod issue. Starting a new pod should take care of it.\n\n\n\nChat templates\nQ: jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'content' / 'role' / ____\n\nA: This means that the property mapping for the stated attribute does not exist when building chat_template prompt. For example, if no attribute 'content', please check you have added the correct mapping for content under message_property_mappings.\n\nQ: Empty template generated for turn ___\n\nA: The content is empty for that turn.\n\nQ: Could not find content start/end boundary for turn __\n\nA: The specific turn’s start/end could not be detected. Please ensure you have set the eos_token following your chat_template. Otherwise, this could be a chat_template which doesn’t use proper boundaries for each turn (like system). On the rare occurrence, make sure your content is not [[dummy_message]]. Please let us know about this.\n\nQ: Content end boundary is before start boundary for turn ___\n\nA: This is an edge case which should not occur. Please create an Issue if this happens.\n\nQ: Content end boundary is the same as start boundary for turn ___. 
This is likely an empty turn.\n\nA: This is likely an empty turn.\n\nQ: The EOS/EOT token is incorrectly being masked or not being masked.\n\nA: This is because of the mismatch between tokenizer.eos_token and EOS/EOT token in template. Please make sure to set eos_token under special_tokens to the same EOS/EOT token as in template.", + "text": "General\nQ: The trainer stopped and hasn’t progressed in several minutes.\n\nA: Usually an issue with the GPUs communicating with each other. See the NCCL doc\n\nQ: Exitcode -9\n\nA: This usually happens when you run out of system RAM.\n\nQ: Exitcode -7 while using deepspeed\n\nA: Try upgrading deepspeed w: pip install -U deepspeed\n\nQ: AttributeError: ‘DummyOptim’ object has no attribute ‘step’\nQ: ModuleNotFoundError: No module named ‘mpi4py’ using single GPU with deepspeed\n\nA: You may be using deepspeed with single gpu. Please remove the deepspeed: section in the yaml file or --deepspeed CLI flag.\n\nQ: The codes is stuck on saving preprocessed datasets.\n\nA: This is usually an issue with the GPU. This can be resolved through setting the os environment variable CUDA_VISIBLE_DEVICES=0. If you are on runpod, this is usually a pod issue. Starting a new pod should take care of it.\n\n\n\nChat templates\nQ: jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'content' / 'role' / ____\n\nA: This means that the property mapping for the stated attribute does not exist when building chat_template prompt. For example, if no attribute 'content', please check you have added the correct mapping for content under message_property_mappings.\n\nQ: Empty template generated for turn ___\n\nA: The content is empty for that turn.\n\nQ: Could not find content start/end boundary for turn __\n\nA: The specific turn’s start/end could not be detected. Please ensure you have set the eos_token following your chat_template. Otherwise, this could be a chat_template which doesn’t use proper boundaries for each turn (like system). 
On the rare occurrence, make sure your content is not [[dummy_message]]. Please let us know about this.\n\nQ: Content end boundary is before start boundary for turn ___\n\nA: This is an edge case which should not occur. Please create an Issue if this happens.\n\nQ: Content end boundary is the same as start boundary for turn ___. This is likely an empty turn.\n\nA: This is likely an empty turn.\n\nQ: The EOS/EOT token is incorrectly being masked or not being masked.\n\nA: This is because of the mismatch between tokenizer.eos_token and EOS/EOT token in template. Please make sure to set eos_token under special_tokens to the same EOS/EOT token as in template.\n\nQ: “chat_template choice is tokenizer_default but tokenizer’s chat_template is null. Please add a chat_template in tokenizer config”\n\nA: This is because the tokenizer does not have a chat template. Please add a chat template in the tokenizer config. See chat_template for more details.", "crumbs": [ "Troubleshooting", "FAQ" @@ -1314,7 +1314,7 @@ "href": "docs/dataset-formats/conversation.html#chat_template", "title": "Conversation", "section": "chat_template", - "text": "chat_template\nChat Template strategy uses a jinja2 template that converts a list of messages into a prompt. 
Support using tokenizer’s template, a supported template, or custom jinja2.\n\n\ndata.jsonl\n\n{\"conversations\": [{\"role\": \"...\", \"content\": \"...\"}]}\n\nSee configs for full configs and supported templates.\n\nMigrating from sharegpt\nMost configs can be adapted as follows:\n# old\nchat_template: chatml\ndatasets:\n - path: ...\n type: sharegpt\n conversation: chatml\n\n# new (if using tokenizer's chat_template)\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n\n# new (if setting a new chat_template like chatml, gemma, etc)\nchat_template: chatml\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\nWe recommend checking the below examples for other usecases.\n\n\nExamples\n\nUsing the default chat template in the tokenizer_config.json on OpenAI messages format, training on only last message.\n\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train:\n train_on_eos:\n\nUsing the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.\n\nchat_template: gemma # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train: [\"assistant\"] # default value\n\nUsing the tokenizer_config.json’s chat template or chatml as fallback if the former’s chat template does not exist, on OpenAI messages format, training on all assistant messages.\n\nchat_template: tokenizer_default_fallback_chatml # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n\nUsing a custom jinja template on OpenAI messages format, training on all assistant messages.\n\n# chat_template: jinja # `jinja` will be implied if the `chat_template_jinja` is set and this field is empty\nchat_template_jinja: \"{{ bos_token }}{% for message in messages 
%}{% if (message['role'] == 'system') %}{{'<|system|>' + '\\n' + message['content'] + '<|end|>' + '\\n'}}{% elif (message['role'] == 'user') %}{{'<|user|>' + '\\n' + message['content'] + '<|end|>' + '\\n' + '<|assistant|>' + '\\n'}}{% elif message['role'] == 'assistant' %}{{message['content'] + '<|end|>' + '\\n'}}{% endif %}{% endfor %}\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure that your tokenizer.eos_token is same as EOS/EOT token in template. Otherwise, set eos_token under special_tokens.\n\n\n\n(Advanced) Using fine-grained control over tokens and turns to train in a conversation\n\nFor a data sample that looks like:\n\n\ndata.jsonl\n\n{\n \"conversations\": [\n {\"from\": \"system\", \"value\": \"You are an AI assistant.\", \"train\": false},\n {\"from\": \"human\", \"value\": \"Hello\", \"train\": false},\n {\"from\": \"assistant\", \"value\": \"Hello\", \"train\": true},\n {\"from\": \"human\", \"value\": \"How are you?\", \"train\": true},\n {\n \"from\": \"assistant\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train_detail\": [\n {\"begin_offset\": 0, \"end_offset\": 8, \"train\": false},\n {\"begin_offset\": 9, \"end_offset\": 18, \"train\": true},\n {\"begin_offset\": 19, \"end_offset\": 30, \"train\": false},\n ],\n },\n {\n \"from\": \"human\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train\": true,\n },\n {\"from\": \"assistant\", \"value\": \"Hi there!\", \"train\": true}\n ]\n}\n\nThe configuration would look like:\ndatasets:\n - path: ...\n type: chat_template\n chat_template: tokenizer_default\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n roles_to_train: []\n train_on_eos: turn\n message_field_training: train\n message_field_training_detail: train_detail\n\n\n\n\n\n\nTip\n\n\n\nIt is not necessary to set both message_field_training and message_field_training_detail at once.", + "text": "chat_template\nChat Template 
strategy uses a jinja2 template that converts a list of messages into a prompt. Support using tokenizer’s template, a supported template, or custom jinja2.\n\n\ndata.jsonl\n\n{\"conversations\": [{\"role\": \"...\", \"content\": \"...\"}]}\n\nSee configs for full configs and supported templates.\n\nMigrating from sharegpt\nMost configs can be adapted as follows:\n# old\nchat_template: chatml\ndatasets:\n - path: ...\n type: sharegpt\n conversation: chatml\n\n# new (if using tokenizer's chat_template)\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n\n# new (if setting a new chat_template like chatml, gemma, etc)\nchat_template: chatml\ndatasets:\n - path: ...\n type: chat_template\n\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\nWe recommend checking the below examples for other usecases.\n\n\nExamples\n\nUsing the default chat template in the tokenizer_config.json on OpenAI messages format, training on only last message.\n\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train:\n train_on_eos:\n\n\n\n\n\n\nTip\n\n\n\nIf you receive an error like “chat_template choice is tokenizer_default but tokenizer’s chat_template is null.”, it means the tokenizer does not have a default chat_template. 
Follow the examples below instead to set a custom chat_template.\n\n\n\nUsing the gemma chat template to override the tokenizer_config.json’s chat template on OpenAI messages format, training on all assistant messages.\n\nchat_template: gemma # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n roles_to_train: [\"assistant\"] # default value\n\nUsing the tokenizer_config.json’s chat template or chatml as fallback if the former’s chat template does not exist, on OpenAI messages format, training on all assistant messages.\n\nchat_template: tokenizer_default_fallback_chatml # this overwrites the tokenizer's chat_template\ndatasets:\n - path: ...\n type: chat_template\n\nUsing a custom jinja template on OpenAI messages format, training on all assistant messages.\n\n# chat_template: jinja # `jinja` will be implied if the `chat_template_jinja` is set and this field is empty\nchat_template_jinja: \"{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|system|>' + '\\n' + message['content'] + '<|end|>' + '\\n'}}{% elif (message['role'] == 'user') %}{{'<|user|>' + '\\n' + message['content'] + '<|end|>' + '\\n' + '<|assistant|>' + '\\n'}}{% elif message['role'] == 'assistant' %}{{message['content'] + '<|end|>' + '\\n'}}{% endif %}{% endfor %}\"\n\ndatasets:\n - path: ...\n type: chat_template\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure that your tokenizer.eos_token is same as EOS/EOT token in template. 
Otherwise, set eos_token under special_tokens.\n\n\n\n(Advanced) Using fine-grained control over tokens and turns to train in a conversation\n\nFor a data sample that looks like:\n\n\ndata.jsonl\n\n{\n \"conversations\": [\n {\"from\": \"system\", \"value\": \"You are an AI assistant.\", \"train\": false},\n {\"from\": \"human\", \"value\": \"Hello\", \"train\": false},\n {\"from\": \"assistant\", \"value\": \"Hello\", \"train\": true},\n {\"from\": \"human\", \"value\": \"How are you?\", \"train\": true},\n {\n \"from\": \"assistant\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train_detail\": [\n {\"begin_offset\": 0, \"end_offset\": 8, \"train\": false},\n {\"begin_offset\": 9, \"end_offset\": 18, \"train\": true},\n {\"begin_offset\": 19, \"end_offset\": 30, \"train\": false},\n ],\n },\n {\n \"from\": \"human\",\n \"value\": \"I'm doing very well, thank you!\",\n \"train\": true,\n },\n {\"from\": \"assistant\", \"value\": \"Hi there!\", \"train\": true}\n ]\n}\n\nThe configuration would look like:\ndatasets:\n - path: ...\n type: chat_template\n chat_template: tokenizer_default\n field_messages: conversations\n message_property_mappings:\n role: from\n content: value\n roles_to_train: []\n train_on_eos: turn\n message_field_training: train\n message_field_training_detail: train_detail\n\n\n\n\n\n\nTip\n\n\n\nIt is not necessary to set both message_field_training and message_field_training_detail at once.", "crumbs": [ "Dataset Formats", "Conversation" diff --git a/sitemap.xml b/sitemap.xml index f8fd10b87..9f170d926 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,162 +2,162 @@ https://axolotl-ai-cloud.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html - 2025-03-07T13:59:04.910Z + 2025-03-10T09:26:02.164Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/stepwise_supervised.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/template_free.html - 
2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/tokenized.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/nccl.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/amd_hpc.html - 2025-03-07T13:59:04.905Z + 2025-03-10T09:26:02.159Z https://axolotl-ai-cloud.github.io/axolotl/docs/config.html - 2025-03-07T13:59:04.905Z + 2025-03-10T09:26:02.159Z https://axolotl-ai-cloud.github.io/axolotl/docs/multi-gpu.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/installation.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/torchao.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/reward_modelling.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/input_output.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/multimodal.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/getting-started.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/inference.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/multipack.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/debugging.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/lr_groups.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/TODO.html - 2025-03-07T13:59:04.904Z + 2025-03-10T09:26:02.158Z 
https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/LICENSE.html - 2025-03-07T13:59:04.924Z + 2025-03-10T09:26:02.178Z https://axolotl-ai-cloud.github.io/axolotl/index.html - 2025-03-07T13:59:04.921Z + 2025-03-10T09:26:02.175Z https://axolotl-ai-cloud.github.io/axolotl/src/axolotl/integrations/cut_cross_entropy/ACKNOWLEDGEMENTS.html - 2025-03-07T13:59:04.924Z + 2025-03-10T09:26:02.178Z https://axolotl-ai-cloud.github.io/axolotl/FAQS.html - 2025-03-07T13:59:04.904Z + 2025-03-10T09:26:02.158Z https://axolotl-ai-cloud.github.io/axolotl/docs/multi-node.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/faq.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/batch_vs_grad.html - 2025-03-07T13:59:04.905Z + 2025-03-10T09:26:02.159Z https://axolotl-ai-cloud.github.io/axolotl/docs/lora_optims.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/rlhf.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/cli.html - 2025-03-07T13:59:04.905Z + 2025-03-10T09:26:02.159Z https://axolotl-ai-cloud.github.io/axolotl/docs/unsloth.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/fsdp_qlora.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset_preprocessing.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/custom_integrations.html - 2025-03-07T13:59:04.905Z + 2025-03-10T09:26:02.159Z https://axolotl-ai-cloud.github.io/axolotl/docs/mac.html - 2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/docker.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/ray-integration.html - 
2025-03-07T13:59:04.909Z + 2025-03-10T09:26:02.163Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/index.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/conversation.html - 2025-03-07T13:59:04.905Z + 2025-03-10T09:26:02.159Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/pretraining.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html - 2025-03-07T13:59:04.906Z + 2025-03-10T09:26:02.160Z diff --git a/styles.css b/styles.css index 891349b4b..749ff4366 100644 --- a/styles.css +++ b/styles.css @@ -14,7 +14,7 @@ h1 { font-family: var(--font-title); font-weight: 400; - font-size: 6rem; + font-size: 5rem; line-height: 1.1; letter-spacing: -0.05em; font-feature-settings: "ss01" on;