Conversation Configuration

Each prompt should be specific and concise. Separate prompts with a new line so that each one is sent to the LLM separately.
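
As a minimal sketch of how newline-separated prompts could be handled (the splitting logic and the `send_prompt` helper below are illustrative assumptions, not this tool's actual implementation; the OpenAI Python SDK is used for the calls):

```python
from openai import OpenAI  # official OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical conversation configuration: one prompt per line.
prompt_config = """Summarize the user's last message in one sentence.
List three follow-up questions the user might ask."""

def send_prompt(prompt: str) -> str:
    """Send a single prompt as its own chat completion request."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Each non-empty line becomes a separate request to the LLM.
for line in prompt_config.splitlines():
    if line.strip():
        print(send_prompt(line.strip()))
```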

LLM Configuration

FAQs

What is the difference between the models?
GPT-4o: Optimized for efficiency, balancing performance and cost.
  • Input: $0.005 / 1K tokens
  • Output: $0.015 / 1K tokens

GPT-3.5 Turbo: Good for general tasks, but less powerful overall.
  • Input: $0.0005 / 1K tokens
  • Output: $0.0015 / 1K tokens

GPT-4 Turbo: Faster and cheaper than GPT-4.
  • Input: $0.01 / 1K tokens
  • Output: $0.03 / 1K tokens

GPT-4: Better context, reasoning, and creativity, but slower and more expensive.
  • Input: $0.03 / 1K tokens
  • Output: $0.06 / 1K tokens
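
To make the per-1K-token pricing concrete, here is a small sketch of how a request's cost could be estimated. The `PRICING` dictionary and `estimate_cost` helper are illustrative and simply copy the numbers above; actual billing is determined by OpenAI.

```python
# Per-1K-token prices (USD), copied from the list above.
PRICING = {
    "gpt-4o":        {"input": 0.005,  "output": 0.015},
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
    "gpt-4-turbo":   {"input": 0.01,   "output": 0.03},
    "gpt-4":         {"input": 0.03,   "output": 0.06},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost: tokens / 1000 * price per 1K tokens."""
    price = PRICING[model]
    return (input_tokens / 1000) * price["input"] + (output_tokens / 1000) * price["output"]

# Example: a GPT-4o call with 1,200 input tokens and 400 output tokens
# costs roughly 1.2 * $0.005 + 0.4 * $0.015 = $0.012.
print(f"${estimate_cost('gpt-4o', 1200, 400):.4f}")
```
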
What is a token?

A token in the context of GPT models is a unit of text, which can be as short as one character or as long as one word. For example, in the sentence "ChatGPT is great!", the text could be broken down into tokens such as "Chat", "GPT", "is", "great", and "!". The exact tokenization depends on the language model's encoding scheme.
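
For a hands-on look at tokenization, the sketch below uses OpenAI's tiktoken library (an assumption for illustration; this page does not prescribe a specific tokenizer) to split the example sentence into tokens:

```python
import tiktoken  # pip install tiktoken

# Use the encoding associated with a GPT-4-class model.
enc = tiktoken.encoding_for_model("gpt-4")

token_ids = enc.encode("ChatGPT is great!")
tokens = [enc.decode([t]) for t in token_ids]

print(len(token_ids))  # number of tokens this text counts as
print(tokens)          # the token strings; the exact split depends on the encoding
```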

What is "temperature"?

The temperature parameter in GPT models controls the randomness of the generated text. Lower temperatures (e.g., 0.2) result in more deterministic and focused outputs, while higher temperatures (e.g., 0.8) produce more diverse and creative responses.
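
As a minimal sketch of where this parameter is set when calling the API directly (using the OpenAI Python SDK; the model and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# temperature=0.2 keeps the output focused and nearly deterministic;
# raising it toward 0.8 makes responses more varied and creative.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Suggest a name for a note-taking app."}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```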

Tool Configuration