Together’s API endpoints for chat, vision, images, embeddings, and speech are fully compatible with OpenAI’s API.
If you have an application that uses one of OpenAI’s client libraries, you can easily configure it to point to Together’s API servers and start running your existing applications on our open-source models.
Configuring OpenAI to use Together’s API
To start using Together with OpenAI’s client libraries, pass in your Together API key to the api_key option, and change the base_url to https://api.together.xyz/v1:
import os
import openai

# Point the OpenAI client at Together's API servers
client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)
You can find your API key on your settings page. If you don’t have an account, you can register for free.
Querying a language model
Now that your OpenAI client is configured to point to Together, you can start using one of our open-source models for your inference queries.
For example, you can query one of our chat models, like Llama 3.1 8B:
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages=[
        {"role": "system", "content": "You are a travel agent. Be descriptive and helpful."},
        {"role": "user", "content": "Tell me the top 3 things to do in San Francisco"},
    ],
)
print(response.choices[0].message.content)
Streaming a response
You can also use OpenAI’s streaming capabilities to stream back your response:
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

stream = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct-Turbo",
    messages=[
        {"role": "system", "content": "You are a travel agent. Be descriptive and helpful."},
        {"role": "user", "content": "Tell me about San Francisco"},
    ],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
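If your application is asynchronous, the same pattern works with the OpenAI SDK’s AsyncOpenAI client. A minimal sketch, reusing the model from the example above:
import asyncio
import os
import openai

async def main():
    # Async client configured the same way as the sync one
    client = openai.AsyncOpenAI(
        api_key=os.environ.get("TOGETHER_API_KEY"),
        base_url="https://api.together.xyz/v1",
    )
    stream = await client.chat.completions.create(
        model="Qwen/Qwen2.5-7B-Instruct-Turbo",
        messages=[{"role": "user", "content": "Tell me about San Francisco"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="", flush=True)

asyncio.run(main())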
Using Vision Models
Vision models accept images alongside text in the same messages array. For example, you can pass an image by URL and ask the model to describe it:
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                },
            },
        ],
    }],
)
print(response.choices[0].message.content)
Output:
The image depicts a serene and idyllic scene of a wooden boardwalk winding through a lush, green field on a sunny day.
* **Sky:**
  * The sky is a brilliant blue with wispy white clouds scattered across it.
  * The clouds are thin and feathery, adding to the overall sense of tranquility.
* **Boardwalk:**
  * The boardwalk is made of weathered wooden planks, worn smooth by time and use.
  * It stretches out into the distance, disappearing into the horizon.
  * The boardwalk is flanked by tall grasses and reeds that reach up to the knees.
* **Field:**
  * The field is filled with tall, green grasses and reeds that sway gently in the breeze.
  * The grasses are so tall that they almost obscure the boardwalk, creating a sense of mystery and adventure.
  * In the distance, trees and bushes can be seen, adding depth and texture to the scene.
* **Atmosphere:**
  * The overall atmosphere is one of peace and serenity, inviting the viewer to step into the tranquil world depicted in the image.
  * The warm sunlight and gentle breeze create a sense of comfort and relaxation.
In summary, the image presents a picturesque scene of a wooden boardwalk meandering through a lush, green field on a sunny day, evoking feelings of peace and serenity.
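To send a local image instead of a URL, you can inline it as a base64 data URL, following OpenAI’s image format. A sketch, assuming data-URL support and using a hypothetical local file boardwalk.jpg:
import base64
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

# Encode a local image as a base64 data URL
with open("boardwalk.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)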
Image Generation
You can also generate images through the OpenAI client by calling the images endpoint with one of our image models:
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

prompt = """
A children's book drawing of a veterinarian using a stethoscope to
listen to the heartbeat of a baby otter.
"""

result = client.images.generate(
    model="black-forest-labs/FLUX.1-dev",
    prompt=prompt,
)

print(result.data[0].url)
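The endpoint responds with a URL rather than the image bytes. A quick way to save the image locally, using only the standard library (the output filename is arbitrary):
import urllib.request

# Download the generated image to disk
urllib.request.urlretrieve(result.data[0].url, "otter.png")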
Text-to-Speech
The audio endpoint works the same way: pass your text and a voice to one of our speech models:
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

speech_file_path = "speech.mp3"

response = client.audio.speech.create(
    model="cartesia/sonic-2",
    input="Today is a wonderful day to build something people love!",
    voice="helpful woman",
)

response.stream_to_file(speech_file_path)
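The call above buffers the full audio before stream_to_file writes it out. The OpenAI SDK also offers a streaming variant that writes bytes as they arrive; a minimal sketch with the same client and parameters:
# Stream the audio to disk as it is generated
with client.audio.speech.with_streaming_response.create(
    model="cartesia/sonic-2",
    input="Today is a wonderful day to build something people love!",
    voice="helpful woman",
) as response:
    response.stream_to_file("speech_streamed.mp3")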
Generating vector embeddings
Use our embedding models to generate an embedding for some text input:
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

response = client.embeddings.create(
    model="togethercomputer/m2-bert-80M-8k-retrieval",
    input="Our solar system orbits the Milky Way galaxy at about 515,000 mph",
)

print(response.data[0].embedding)
Output:
[0.2633975, 0.13856211, 0.14047204,... ]
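Embeddings are typically compared with cosine similarity. A small sketch that embeds two strings in one request and compares them, assuming the endpoint accepts a list of inputs as OpenAI’s does:
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

response = client.embeddings.create(
    model="togethercomputer/m2-bert-80M-8k-retrieval",
    input=["The cat sat on the mat", "A feline rested on the rug"],
)
vectors = [item.embedding for item in response.data]
print(cosine_similarity(vectors[0], vectors[1]))  # closer to 1.0 means more similar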
Structured Outputs
You can constrain the model’s output to a JSON schema, here derived from a Pydantic model:
import json
import os

from openai import OpenAI
from pydantic import BaseModel

client = OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday. Answer in JSON"},
    ],
    response_format={
        "type": "json_object",
        "schema": CalendarEvent.model_json_schema(),
    },
)

output = json.loads(completion.choices[0].message.content)
print(json.dumps(output, indent=2))
Output:
{
  "name": "Alice and Bob",
  "date": "Friday",
  "participants": [
    "Alice",
    "Bob"
  ]
}
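Because the schema came from CalendarEvent, you can validate the parsed JSON back into a typed object:
# Turn the raw dict into a validated Pydantic object
event = CalendarEvent.model_validate(output)
print(event.participants)  # ['Alice', 'Bob']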
Function Calling
Tool definitions and tool calls follow OpenAI’s function-calling format:
import json
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country e.g. Bogotá, Colombia",
                },
            },
            "required": ["location"],
            "additionalProperties": False,
        },
        "strict": True,
    },
}]

completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages=[{"role": "user", "content": "What is the weather like in Paris today?"}],
    tools=tools,
    tool_choice="auto",
)

print(json.dumps(completion.choices[0].message.model_dump()["tool_calls"], indent=2))
Output:
[
  {
    "id": "call_nu2ifnvqz083p5kngs3a3aqz",
    "function": {
      "arguments": "{\"location\":\"Paris, France\"}",
      "name": "get_weather"
    },
    "type": "function",
    "index": 0
  }
]
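The model only requests the call; your code executes the function and returns the result in a tool message so the model can answer in plain language. A sketch of the round trip, where get_weather is a hypothetical stub standing in for a real weather lookup:
def get_weather(location):
    # Hypothetical stub; a real implementation would query a weather service
    return f"18°C and partly cloudy in {location}"

tool_call = completion.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

followup = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages=[
        {"role": "user", "content": "What is the weather like in Paris today?"},
        completion.choices[0].message,  # the assistant turn that requested the tool call
        {"role": "tool", "tool_call_id": tool_call.id, "content": get_weather(**args)},
    ],
)
print(followup.choices[0].message.content)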
The Together API is also supported by most community-built OpenAI libraries.
Feel free to reach out to support if you come across any unexpected behavior when using our API.