Speech-to-Text Translation API Using Saaras Model
🔗 Overview
This notebook provides a step-by-step guide to using the STT-Translate API to translate audio files into text with Saaras. The API automatically detects the input language, transcribes the speech, and translates the text to English.
It includes instructions for installation, setting up the API key, uploading audio files, and translating audio using the API.
1. Installation
Before you begin, ensure you have the necessary Python libraries installed. Run the following commands to install the required packages:
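The install cell isn't reproduced here; a minimal sketch, assuming the official SDK is published on PyPI under the name `sarvamai` (verify the package name against the Sarvam docs):

```shell
# Install the Sarvam AI Python SDK (package name assumed to be `sarvamai`)
pip install -Uq sarvamai
```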
2. Authentication
To use the API, you need an API subscription key. Follow these steps to set up your API key:
- Obtain your API key: If you don’t have an API key, sign up on the Sarvam AI Dashboard to get one.
- Replace the placeholder key: In the code below, replace “YOUR_SARVAM_AI_API_KEY” with your actual API key.
2.1 Initialize the Client
Create a Sarvam client instance using your API key. This client will be used to interact with the Saaras API.
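A minimal setup sketch. The `sarvamai` import path, the `SarvamAI` class, and the `api_subscription_key` argument are assumptions based on the Sarvam documentation, so check the SDK reference if they differ:

```python
import os

# Read the key from the environment so it never lands in the notebook itself;
# the SARVAM_API_KEY variable name is just a convention, not required by the SDK.
API_KEY = os.environ.get("SARVAM_API_KEY", "YOUR_SARVAM_AI_API_KEY")

try:
    from sarvamai import SarvamAI  # assumed import path for the official SDK
    client = SarvamAI(api_subscription_key=API_KEY)
except ImportError:
    client = None  # SDK not installed -- run the install step in section 1 first
```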
3. Uploading Audio Files
To translate audio, you need to provide a `.wav` or `.mp3` file.
✅ Supported Environments:
- Google Colab
- Jupyter Notebook (VS Code, JupyterLab, etc.)
📝 Instructions:
- Ensure your audio file is in `.wav` or `.mp3` format.
- Run the cell below. The uploader will automatically adjust based on your environment:
  - In Google Colab: you'll be prompted to upload a `.wav` or `.mp3` file via a file picker.
  - In Jupyter Notebook: you'll be prompted to enter the full file path of the `.wav` or `.mp3` file stored locally on your machine.
- Once provided, the file will be available for use in the next step.
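The environment-aware uploader can be sketched as below. The `google.colab.files.upload()` helper is the real Colab API; the validation helper around it is illustrative:

```python
SUPPORTED_EXTENSIONS = (".wav", ".mp3")

def is_supported_audio(path: str) -> bool:
    """Check the file extension against the formats the API accepts."""
    return path.lower().endswith(SUPPORTED_EXTENSIONS)

def get_audio_path() -> str:
    """Return a local audio file path, adapting to Colab vs. Jupyter."""
    try:
        from google.colab import files  # only importable inside Google Colab
        uploaded = files.upload()       # opens a file-picker widget
        path = next(iter(uploaded))     # first (and usually only) uploaded file
    except ImportError:
        # Plain Jupyter: ask for a path on the local machine instead
        path = input("Enter the full path to your .wav or .mp3 file: ").strip()
    if not is_supported_audio(path):
        raise ValueError(f"Unsupported format: {path} (expected .wav or .mp3)")
    return path
```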
4. Saaras-v2.5 Usage for STT Translate
The Saaras-v2.5 model can be used for converting speech to text across diverse, production-grade scenarios. It supports basic transcription, code-mixed Indian speech, automatic language detection, and domain-specific prompting — all optimized for real-world applications like telephony, multi-speaker audio, and more.
4.1 Basic Usage
Basic transcription with specified language code.
Perfect for single-language content with clear audio quality.
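A sketch of the basic call. The `speech_to_text.translate` method, the `saaras:v2.5` model identifier, and the `language_code` parameter are assumptions drawn from the Sarvam docs — verify them against the current API reference:

```python
def translate_basic(client, audio_path: str, language_code: str = "hi-IN"):
    """Translate one audio file to English text with a known source language.

    NOTE: method and parameter names here are assumptions; check the
    Sarvam API reference for the exact signature.
    """
    with open(audio_path, "rb") as audio_file:
        response = client.speech_to_text.translate(
            file=audio_file,
            model="saaras:v2.5",
            language_code=language_code,
        )
    return response.transcript
```

With a configured `client` from step 2.1, `translate_basic(client, "meeting.wav")` would return the English transcript (the file name is illustrative).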
4.2 Code-Mixed Speech
Handles mixed-language content with automatic detection of language switches within sentences.
Ideal for natural Indian conversations that mix multiple languages.
4.3 Automatic Language Detection
Let Saaras automatically detect the language being spoken.
Useful when the input language is unknown or for handling multi-language content.
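A sketch of the auto-detect flow: omit the language code and read the detected language back from the response. The `language_code` field on the response object is an assumption — check the API reference for the exact field name:

```python
def translate_auto_detect(client, audio_path: str):
    """Translate without specifying a source language; Saaras detects it.

    Returns (transcript, detected_language); the response field names
    are assumptions based on the Sarvam docs.
    """
    with open(audio_path, "rb") as audio_file:
        response = client.speech_to_text.translate(
            file=audio_file,
            model="saaras:v2.5",  # no language code: let the model detect it
        )
    return response.transcript, response.language_code
```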
4.4 Domain Prompting
Enhance transcription accuracy with domain-specific prompts and preserve important terms.
Perfect for specialized contexts like medical, legal, or technical content.
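A sketch of domain prompting, assuming the endpoint accepts a free-text `prompt` parameter (the parameter name is an assumption based on the docs' description of the feature):

```python
def translate_with_prompt(client, audio_path: str, domain_prompt: str):
    """Bias transcription toward domain vocabulary via a text prompt.

    The `prompt` parameter name is an assumption; verify it against
    the Sarvam API reference.
    """
    with open(audio_path, "rb") as audio_file:
        response = client.speech_to_text.translate(
            file=audio_file,
            model="saaras:v2.5",
            prompt=domain_prompt,  # e.g. "medical consultation, drug names"
        )
    return response.transcript
```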
5. Handling Long Audio Files
If your audio file exceeds the 30-second limit supported by the real-time transcription API, you must split it into smaller chunks for accurate and successful transcription. These smaller segments are then transcribed individually using the real-time API, and the results are stitched back together to form the final transcript.
👉 For large audio files, switch to the Batch API designed for longer durations.
🔗 Try the Batch API here
📝 When to Use
- Audio length >30 seconds
- Real-time API returns timeout or error due to size
- You want to batch process long audio files for better accuracy and reliability
⚙️ How It Works
- The full `.mp3` or `.wav` file is first split into smaller chunks (e.g., 29 seconds each)
- Each chunk is then transcribed individually using the real-time API
- The individual results are finally combined to form one seamless transcript
> ⚠️ For short audio files (<30 seconds), you can skip this step and directly proceed with transcription using the real-time API.
The functions below help with:
- Preventing real-time API timeouts
- Splitting large `.wav` or `.mp3` files into smaller chunks
- Transcribing each chunk using the Saaras-v2.5 model
- Collating the results into a single transcript
5.1 Define the split_audio Function
This function splits a long `.mp3` or `.wav` audio file into smaller chunks (default: 29 seconds) using FFmpeg.
It ensures each segment remains within the real-time API’s 30-second limit and stores them in the specified output directory.
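The cell that defines `split_audio` isn't reproduced here; a minimal sketch using FFmpeg's segment muxer (the exact flags, e.g. `-c copy`, may need adjusting for your codec):

```python
import os
import subprocess

def build_split_command(input_path: str, output_dir: str, chunk_seconds: int = 29):
    """Build the FFmpeg command that splits audio into fixed-length chunks."""
    stem, ext = os.path.splitext(os.path.basename(input_path))
    pattern = os.path.join(output_dir, f"{stem}_%03d{ext}")
    return [
        "ffmpeg", "-i", input_path,
        "-f", "segment",                     # enable the segment muxer
        "-segment_time", str(chunk_seconds), # max length of each chunk
        "-c", "copy",                        # copy streams without re-encoding
        pattern,
    ]

def split_audio(input_path: str, output_dir: str, chunk_seconds: int = 29):
    """Split a long .wav/.mp3 file into chunks and return their paths, sorted."""
    os.makedirs(output_dir, exist_ok=True)
    subprocess.run(build_split_command(input_path, output_dir, chunk_seconds), check=True)
    return sorted(
        os.path.join(output_dir, name)
        for name in os.listdir(output_dir)
        if name.lower().endswith((".wav", ".mp3"))
    )
```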
5.2 Define the translate_audio_chunks Function
This function takes the list of chunked audio file paths and uses the Saaras real-time API to translate each one individually. It collects all partial transcriptions and combines them into a single, complete transcript.
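The original cell isn't reproduced here; a minimal sketch, assuming the SDK's `speech_to_text.translate` method and the `saaras:v2.5` model identifier (both names are assumptions to verify against the API reference):

```python
def translate_audio_chunks(client, chunk_paths):
    """Translate each chunk in order and stitch the results together.

    Chunks must be passed in playback order so the joined transcript
    reads as one continuous piece of text.
    """
    parts = []
    for path in chunk_paths:
        with open(path, "rb") as audio_file:
            response = client.speech_to_text.translate(
                file=audio_file,
                model="saaras:v2.5",
            )
        parts.append(response.transcript.strip())
    return " ".join(parts)
```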
5.3 Putting It All Together
Call the split_audio_ffmpeg() function first to break the audio into chunks, and then pass those chunks to translate_audio_chunks() for transcription.
This two-step process ensures large audio files are handled smoothly using the real-time API.
6. Error Handling
You may encounter these errors while using the API:
- 403 Forbidden (`invalid_api_key_error`)
  - Cause: Invalid API key.
  - Solution: Use a valid API key from the Sarvam AI Dashboard.
- 429 Too Many Requests (`insufficient_quota_error`)
  - Cause: Exceeded API quota.
  - Solution: Check your usage, upgrade if needed, or implement exponential backoff when retrying.
- 500 Internal Server Error (`internal_server_error`)
  - Cause: Issue on our servers.
  - Solution: Try again later. If the problem persists, contact support.
- 400 Bad Request (`invalid_request_error`)
  - Cause: Incorrect request formatting.
  - Solution: Verify your request structure and parameters.
- 422 Unprocessable Entity (`unprocessable_entity_error`)
  - Cause: Unable to detect the language of the input text.
  - Solution: Explicitly pass the `source_language_code` parameter with a supported language.
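The exponential-backoff advice for 429s can be sketched as a generic retry helper. It is deliberately SDK-agnostic: catching bare `Exception` is for illustration only, and in real code you would catch the SDK's specific rate-limit error:

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a zero-argument callable with exponential backoff.

    Doubles the wait after each failed attempt and adds random jitter
    so parallel clients don't all retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

For example, `with_backoff(lambda: translate_some_chunk())` would retry a flaky call up to five times before giving up (`translate_some_chunk` is a placeholder).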
7. Additional Resources
For more details, refer to our official documentation. We are always available to support and help you on our Discord server:
- Documentation: docs.sarvam.ai
- Community: Join the Discord Community
8. Final Notes
- Keep your API key secure.
- Use clear audio for best results.
- Explore advanced features like diarization and translation.
Keep Building! 🚀