
The `temperature` parameter (range: 0.0 - 1.0, default 0)

What is temperature?

Temperature is used for sampling during response generation, which occurs when top_p and top_k are applied. It controls the degree of randomness in token selection.
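Conceptually, temperature rescales the model's logits before they are turned into a probability distribution. A minimal sketch in Python, using made-up logits rather than any particular model's API:

```python
import math

def apply_temperature(logits, temperature):
    """Convert logits to a probability distribution, scaled by temperature."""
    if temperature == 0:
        # Temperature 0 degenerates to greedy decoding: all probability
        # mass goes to the highest-logit token.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.2]               # toy logits for three candidate tokens
print(apply_temperature(logits, 0.2))  # sharply peaked distribution
print(apply_temperature(logits, 0.9))  # flatter distribution
```

Dividing the logits by a value below 1 exaggerates the gaps between them, so the softmax concentrates probability on the top token; dividing by a larger value shrinks the gaps and spreads probability out.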

How does temperature affect the response?

Lower temperatures are good for prompts that require a more deterministic and less open-ended response. In comparison, higher temperatures can lead to more "creative" or diverse results. A temperature of 0 is deterministic: the highest probability response is always selected. For most use cases, try starting with a temperature of 0.2.

A higher temperature value will result in a more exploratory output, with a higher likelihood of generating rare or unusual words or phrases. Conversely, a lower temperature value will result in a more conservative output, with a higher likelihood of generating common or expected words or phrases.
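This effect is easy to see by sampling repeatedly from the same toy distribution at two temperatures (a sketch with made-up logits; a real model samples over its full vocabulary):

```python
import math
import random

def sample_token(logits, temperature, rng):
    # Rescale logits by 1/temperature, softmax, then draw one token index.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Toy logits: token 0 is the "common" continuation, the rest are rarer.
logits = [2.0, 1.0, 0.5, 0.2]
rng = random.Random(0)
for t in (0.2, 0.9):
    draws = [sample_token(logits, t, rng) for _ in range(1000)]
    share = draws.count(0) / len(draws)
    print(f"temperature={t}: common token sampled {share:.0%} of the time")
```

At the lower temperature the common token dominates almost every draw; at the higher temperature the rarer tokens are sampled noticeably more often.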

For example:

temperature = 0.0:

  • The cat sat on the couch, watching the birds outside.
  • The cat sat on the windowsill, basking in the sun.

temperature = 0.9:

  • The cat sat on the moon, meowing at the stars.
  • The cat sat on the cheeseburger, purring with delight.

Note: While a higher temperature can help generate more diverse and interesting text, it also increases the likelihood of generating nonsensical or inappropriate text (i.e., hallucinations). Use the parameter carefully, with the desired outcome in mind.

For more information on the temperature parameter for text models, please refer to the documentation on model parameters.
