# Advanced Formatting
The settings in this section give finer control over the prompt-building strategy, primarily for Text Completion APIs.
Most of these settings do not apply to Chat Completion APIs, which are governed by the Prompt Manager system instead.
## Backend-defined templates
Applies to: Text Completion APIs
Not applicable to Chat Completion APIs as they use a different prompt builder.
Some Text Completion sources can automatically choose the templates recommended by the model author. This works by comparing a hash of the chat template defined in the model's `tokenizer_config.json` file with the hashes of the default SillyTavern templates.
- The "Derive templates" option must be enabled in the Advanced Formatting menu. It can be applied to Context, Instruct, or both.
- A supported backend must be chosen as a Text Completion source. Currently only llama.cpp and KoboldCpp support deriving templates.
- The model must correctly report its metadata when the connection to the API is established. If this doesn't work, try updating the backend to the latest version.
- The reported chat template hash must match one of the known SillyTavern templates. Only default templates are covered, such as Llama 3, Gemma 2, and Mistral V7.
- If the hash matches, the template will be automatically selected if it exists in the templates list (i.e., not renamed or deleted).
## System Prompt
Applies to: Text Completion APIs
For equivalent settings in Chat Completion APIs, use the Prompt Manager; there, the Main Prompt is the equivalent of the System Prompt.
The System Prompt defines the general instructions for the model to follow. It sets the tone and context for the conversation. For example, it tells the model to act as an AI assistant, a writing partner, or a fictional character.
The System Prompt is a part of the Story String and usually the first part of the prompt that the model receives.
See the prompting guide to learn more about the System Prompt.
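To illustrate how the System Prompt leads the Story String, here is a minimal sketch of template substitution. The placeholder names and template shape are hypothetical, not SillyTavern's exact Story String syntax:

```python
# Hypothetical story-string template: the system prompt comes first,
# followed by character data.
STORY_STRING = "{system}\n\n{description}\n\n{personality}"

def build_story_string(system_prompt: str, character: dict) -> str:
    """Fill the template so the System Prompt is the first thing the
    model receives, followed by the character's fields."""
    return STORY_STRING.format(
        system=system_prompt,
        description=character.get("description", ""),
        personality=character.get("personality", ""),
    )
```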
## Context Template
Applies to: Text Completion APIs
For equivalent settings in Chat Completion APIs, use Prompt Manager.
AI models usually expect character data to be provided in a specific format. SillyTavern includes a list of pre-made conversion rules for different models, but you may customize them however you like.
The options for this section are explained in Context Template.
## Tokenizer
A tokenizer is a tool that breaks down a piece of text into smaller units called tokens. These tokens can be individual words or even parts of words, such as prefixes, suffixes, or punctuation. A rule of thumb is that one token generally corresponds to 3~4 characters of text.
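The rule of thumb above can be turned into a quick length estimate. This is only a heuristic sketch; an actual tokenizer gives the exact count:

```python
def estimate_tokens(text: str, chars_per_token: float = 3.5) -> int:
    """Rough token estimate using the 3-4 characters-per-token rule of
    thumb. A real tokenizer is needed for an exact count."""
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("SillyTavern breaks text into tokens."))  # → 10
```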
The options for this section are explained in Tokenizer.
## Custom Stopping Strings
Accepts a JSON-serialized array of stopping strings. Example: `["\n", "\nUser:", "\nChar:"]`. If you're unsure about the formatting, use an online JSON validator. If the model output ends with any of the stop strings, that string will be removed from the end of the output.
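The expected format and the trimming behavior can be sketched as follows. The function names are illustrative, not SillyTavern's internal API:

```python
import json

def parse_stopping_strings(raw: str) -> list[str]:
    """Validate a JSON-serialized array of stopping strings, the format
    the Custom Stopping Strings field accepts."""
    value = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    if not isinstance(value, list) or not all(isinstance(s, str) for s in value):
        raise ValueError("expected a JSON array of strings")
    return value

def trim_output(text: str, stops: list[str]) -> str:
    """Remove a trailing stop string from the model output, mirroring the
    behavior described above."""
    for stop in stops:
        if stop and text.endswith(stop):
            return text[: -len(stop)]
    return text
```

For example, with stops `["\n", "\nUser:"]`, an output ending in `"\nUser:"` is trimmed back to the text before it.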
Supported APIs:
- KoboldAI Classic (versions 1.2.2 and higher) or KoboldCpp
- AI Horde
- Text Completion APIs: Text Generation WebUI (ooba), Tabby, Aphrodite, Mancer, TogetherAI, Ollama, etc.
- NovelAI
- OpenAI (max 4 strings) and compatible APIs
- OpenRouter (both Text and Chat Completion)
- Claude
- Google AI Studio