How to control response randomness with temperature

The temperature parameter controls how random or deterministic the model's responses are.

Range: 0 to 2
Default: 0.2

  • Lower temperature → more focused, predictable answers (e.g. 0.2)
  • Higher temperature → more creative, varied responses (e.g. 0.8 or 1.0)

👉 Tip: For most use cases, values between 0.2 and 0.8 give good results.

How it works:

| Mode | Recommended temperature | Behavior |
| --- | --- | --- |
| Non-thinking mode | 0.2 (default) | Straightforward, factual responses |
| Thinking mode | 0.5 or higher | Deeper reasoning, more exploration |
| Highly creative | 0.8 - 1.0 | Storytelling, brainstorming, poetry |
| Very random / playful | > 1.0 | Unexpected, experimental output |
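The mode-to-temperature guidance above can be sketched as a small client-side helper. Note this is purely illustrative: the mode names, the lookup table, and the clamping function are assumptions for this sketch, not part of the SarvamAI SDK.

```python
# Illustrative helper (not part of the SarvamAI SDK): map a usage mode to a
# recommended temperature and clamp values to the API's valid range of 0 to 2.
RECOMMENDED_TEMPERATURE = {
    "non_thinking": 0.2,  # default: straightforward, factual responses
    "thinking": 0.5,      # deeper reasoning, more exploration
    "creative": 0.9,      # storytelling, brainstorming, poetry
    "playful": 1.2,       # unexpected, experimental output
}

def temperature_for(mode: str) -> float:
    """Return a recommended temperature for the mode, clamped to [0, 2]."""
    temperature = RECOMMENDED_TEMPERATURE.get(mode, 0.2)
    return max(0.0, min(2.0, temperature))

print(temperature_for("creative"))  # 0.9
```

Unknown modes fall back to the safe default of 0.2, and clamping guards against values outside the API's documented range.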

First, install the SDK:

$ pip install -Uqq sarvamai

Then use the following Python code:

from sarvamai import SarvamAI

# Initialize the SarvamAI client with your API key
client = SarvamAI(api_subscription_key="YOUR_SARVAM_API_KEY")

# Example 1: default temperature (0.2) for a straightforward, factual response
response = client.chat.completions(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the concept of gravity."},
    ]
    # temperature is not specified, so the default of 0.2 is used
)

print(response.choices[0].message.content)
For more creative, varied output, pass a higher temperature explicitly:

from sarvamai import SarvamAI

client = SarvamAI(api_subscription_key="YOUR_SARVAM_API_KEY")

# Example 2: temperature = 0.9 for a more creative, varied response
response = client.chat.completions(
    messages=[
        {"role": "system", "content": "You are a creative storyteller."},
        {"role": "user", "content": "Tell me a story about a magical tiger."},
    ],
    temperature=0.9,  # more creative storytelling
)

# Print the assistant's reply
print(response.choices[0].message.content)