#### The `temperature` parameter (range: 0.0 - 1.0, default 0)
##### What is _temperature_?
Temperature is used for sampling during response generation, which occurs when `top_p` and `top_k` are applied. It controls the degree of randomness in token selection.
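As an illustration of the general technique (not the service's actual implementation), the sketch below shows the usual way temperature is applied: the raw next-token logits are divided by the temperature before the softmax, so lower values sharpen the distribution and higher values flatten it. The vocabulary and logits are made-up toy values.

```python
import numpy as np

def sample_token(logits, temperature=0.2, rng=None):
    """Sample one token index from raw logits using temperature scaling."""
    rng = rng or np.random.default_rng()
    if temperature == 0:
        # Greedy decoding: temperature 0 always returns the most likely token.
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                         # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy next-token logits after "The cat sat on the ..." (hypothetical values).
vocab = ["couch", "windowsill", "moon", "cheeseburger"]
logits = [3.0, 2.5, 0.5, 0.1]

print(vocab[sample_token(logits, temperature=0.0)])  # always "couch"
print(vocab[sample_token(logits, temperature=0.9)])  # sometimes "moon", etc.
```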
##### How does _temperature_ affect the response?
Lower temperatures are good for prompts that require a more deterministic and less open-ended response. In comparison, higher temperatures can lead to more "creative" or diverse results. A temperature of `0` is deterministic: the highest probability response is always selected. For most use cases, try starting with a temperature of `0.2`.
A higher temperature value will result in a more exploratory output, with a higher likelihood of generating rare or unusual words or phrases. Conversely, a lower temperature value will result in a more conservative output, with a higher likelihood of generating common or expected words or phrases.
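If you call the text model through the Vertex AI Python SDK, temperature is passed as a keyword argument to `predict`. The snippet below is a minimal sketch, assuming the `vertexai` package, the `text-bison` model name from the linked documentation, and placeholder project/location values.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and location; replace with your own Google Cloud settings.
vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")
prompt = "Complete the sentence: The cat sat on the"

# Deterministic completion: temperature 0 always selects the most likely token.
deterministic = model.predict(prompt, temperature=0.0, max_output_tokens=32)

# More exploratory completion: higher temperature allows rarer continuations.
creative = model.predict(prompt, temperature=0.9, max_output_tokens=32)

print(deterministic.text)
print(creative.text)
```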
##### Example:
Sample outputs for the same prompt at two temperature settings:
`temperature = 0.0`:
* _The cat sat on the couch, watching the birds outside._
* _The cat sat on the windowsill, basking in the sun._
`temperature = 0.9`:
* _The cat sat on the moon, meowing at the stars._
* _The cat sat on the cheeseburger, purring with delight._
**Note**: While the temperature parameter can help generate more diverse and interesting text, it can also increase the likelihood of generating nonsensical or inappropriate text (i.e., hallucinations). Use it carefully and with consideration for the desired outcome.
For more information on the `temperature` parameter for text models, please refer to the [documentation on model parameters](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models#text_model_parameters).