How to control the response length with max_tokens

The max_tokens parameter sets an upper limit on the length of the model's response, measured in tokens.

  • A token can be a word, part of a word, or even punctuation
    (Example: “Hello!” = 2 tokens: "Hello" + "!")
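
To get a feel for how text splits into tokens, you can experiment with an open-source tokenizer such as tiktoken. Note that Sarvam's models use their own tokenizer, so exact counts may differ; this is only an illustration:

import tiktoken

# Illustration only: this uses tiktoken's cl100k_base encoding, not
# Sarvam's own tokenizer, so counts for Sarvam models may differ.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Hello!")
print(len(tokens))                        # 2
print([enc.decode([t]) for t in tokens])  # ['Hello', '!']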

Why use max_tokens?

  • To limit the size of the output
  • To control latency / cost (fewer tokens = faster and cheaper)
  • To avoid overly long answers if you want concise responses
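
Because generation time grows with the number of tokens produced, a smaller max_tokens generally returns faster. Here is a quick sketch to see this yourself; actual timings depend on the model and server load:

import time
from sarvamai import SarvamAI

client = SarvamAI(api_subscription_key="YOUR_SARVAM_API_KEY")

# Compare wall-clock latency for a small vs. large token budget.
# Numbers will vary with the model and current load.
for limit in (50, 500):
    start = time.perf_counter()
    client.chat.completions(
        messages=[{"role": "user", "content": "Tell me about the planet Mars."}],
        max_tokens=limit,
    )
    print(f"max_tokens={limit}: {time.perf_counter() - start:.2f}s")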

How to choose the value:

  • Use a small value when you want short, concise replies, and a larger one for detailed answers
  • If the limit is reached, the reply is cut off mid-generation rather than gracefully shortened
  • If you don't set it, the default shown below applies

Parameter details:

Parameter     Type      Default
max_tokens    Integer   2048

First, install the SDK:

$ pip install -Uqq sarvamai
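
If you prefer not to hardcode the key, you can read it from an environment variable. The name SARVAM_API_KEY below is our choice for this example, not an SDK convention:

import os
from sarvamai import SarvamAI

# Read the API key from an environment variable instead of hardcoding it.
# SARVAM_API_KEY is an arbitrary name chosen for this example.
client = SarvamAI(api_subscription_key=os.environ["SARVAM_API_KEY"])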

Then use the following Python code:

from sarvamai import SarvamAI

# Initialize the SarvamAI client with your API key
client = SarvamAI(api_subscription_key="YOUR_SARVAM_API_KEY")

# Example 1: max_tokens not specified, so the model falls back to the default
response = client.chat.completions(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me about the planet Mars."}
    ]
    # max_tokens not specified -> the default limit applies
)

print(response.choices[0].message.content)
To cap the response length instead, pass max_tokens explicitly:

from sarvamai import SarvamAI

client = SarvamAI(api_subscription_key="YOUR_SARVAM_API_KEY")

# Example 2: max_tokens=100 limits the response to at most 100 tokens
response = client.chat.completions(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Mahabharata."}
    ],
    max_tokens=100
)

# Print the assistant's reply
print(response.choices[0].message.content)
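
When the limit is hit, the reply is simply cut off. Assuming the response follows the OpenAI-style schema suggested by the response.choices[0].message.content access above, each choice may carry a finish_reason field you can inspect; treat that field name as an assumption, not confirmed Sarvam API behavior:

# Sketch: detect whether the reply was truncated by max_tokens.
# Assumes an OpenAI-style finish_reason ("length" when the limit is hit);
# this is not confirmed from Sarvam's documentation.
choice = response.choices[0]
if getattr(choice, "finish_reason", None) == "length":
    print("Note: the reply was cut off by max_tokens.")
print(choice.message.content)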