Each prompt should be specific and concise. Separate prompts with a newline so that each one is sent to the LLM as a separate request.
In the context of GPT models, a token is a unit of text, which can be as short as a single character or as long as a whole word (and occasionally longer). For example, the sentence "ChatGPT is great!" might be broken down into the tokens "Chat", "GPT", "is", "great", and "!". The exact tokenization depends on the model's encoding scheme.
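To make the idea concrete, here is a minimal sketch of greedy longest-match tokenization over a toy vocabulary. This is a deliberate simplification, not the actual byte-pair encoding GPT models use, and the tiny vocabulary is invented purely to reproduce the example above:

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenizer over a fixed vocabulary (toy example)."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible piece first, shrinking until a match is found.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            # Fall back to a single character for anything not in the vocabulary.
            tokens.append(text[i])
            i += 1
    return tokens

# Hypothetical vocabulary chosen to mirror the example in the text.
vocab = {"Chat", "GPT", "is", "great", "!", " "}
tokens = [t for t in tokenize("ChatGPT is great!", vocab) if t != " "]
print(tokens)  # → ['Chat', 'GPT', 'is', 'great', '!']
```

Real tokenizers such as GPT's BPE learn their vocabulary from data and typically fold spaces into the following token, so actual token boundaries will differ from this sketch.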
The temperature parameter in GPT models controls the randomness of the generated text. Lower temperatures (e.g., 0.2) result in more deterministic and focused outputs, while higher temperatures (e.g., 0.8) produce more diverse and creative responses.