Create job

POST /fine-tunes

Example request:
curl --request POST \
  --url https://api.together.xyz/v1/fine-tunes \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "training_file": "<string>",
  "validation_file": "<string>",
  "model": "<string>",
  "n_epochs": 1,
  "n_checkpoints": 1,
  "n_evals": 0,
  "batch_size": 123,
  "learning_rate": 0.00001,
  "lr_scheduler": "none",
  "warmup_ratio": 0,
  "max_grad_norm": 1,
  "weight_decay": 0,
  "suffix": "<string>",
  "wandb_api_key": "<string>",
  "wandb_base_url": "<string>",
  "wandb_project_name": "<string>",
  "wandb_name": "<string>",
  "train_on_inputs": true,
  "training_method": {
    "method": "sft"
  },
  "training_type": {
    "type": "Full"
  },
  "from_checkpoint": "<string>"
}'
Example response:

{
  "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "training_file": "<string>",
  "validation_file": "<string>",
  "model": "<string>",
  "model_output_name": "<string>",
  "model_output_path": "<string>",
  "trainingfile_numlines": 123,
  "trainingfile_size": 123,
  "created_at": "<string>",
  "updated_at": "<string>",
  "n_epochs": 123,
  "n_checkpoints": 123,
  "n_evals": 123,
  "batch_size": 123,
  "learning_rate": 123,
  "lr_scheduler": {
    "lr_scheduler_type": "linear",
    "lr_scheduler_args": {
      "min_lr_ratio": 0
    }
  },
  "warmup_ratio": 123,
  "max_grad_norm": 123,
  "weight_decay": 123,
  "eval_steps": 123,
  "train_on_inputs": true,
  "training_method": {
    "method": "sft"
  },
  "training_type": {
    "type": "Full"
  },
  "status": "pending",
  "job_id": "<string>",
  "events": [
    {
      "object": "fine-tune-event",
      "created_at": "<string>",
      "level": "info",
      "message": "<string>",
      "type": "job_pending",
      "param_count": 123,
      "token_count": 123,
      "total_steps": 123,
      "wandb_url": "<string>",
      "step": 123,
      "checkpoint_path": "<string>",
      "model_path": "<string>",
      "training_offset": 123,
      "hash": "<string>"
    }
  ],
  "token_count": 123,
  "param_count": 123,
  "total_price": 123,
  "epochs_completed": 123,
  "queue_depth": 123,
  "wandb_project_name": "<string>",
  "wandb_url": "<string>",
  "from_checkpoint": "<string>"
}
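The curl request above can also be sketched in Python. This is a minimal, hedged sketch — it assumes the third-party `requests` library, a `TOGETHER_API_KEY` environment variable, and placeholder file and model names (`file-abc123` and `example-org/example-base-model` are made up for illustration):

```python
import os

API_URL = "https://api.together.xyz/v1/fine-tunes"

def build_finetune_payload(training_file, model, **overrides):
    """Assemble a request body starting from the documented defaults."""
    payload = {
        "training_file": training_file,   # File-ID of an uploaded training file
        "model": model,                    # name of the base model to fine-tune
        "n_epochs": 1,
        "n_checkpoints": 1,
        "n_evals": 0,
        "batch_size": "max",
        "learning_rate": 1e-5,
        "training_method": {"method": "sft"},
    }
    payload.update(overrides)             # caller-supplied fields win
    return payload

def create_finetune_job(payload, token=None):
    """POST the payload and return the parsed JSON job object."""
    import requests  # third-party; imported lazily so the sketch loads without it
    resp = requests.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {token or os.environ['TOGETHER_API_KEY']}",
            "Content-Type": "application/json",
        },
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()  # contains "id", "status", "events", ...

payload = build_finetune_payload(
    "file-abc123", "example-org/example-base-model", n_epochs=3
)
# job = create_finetune_job(payload)  # requires a valid TOGETHER_API_KEY
```

Keeping payload construction separate from the HTTP call makes the defaults easy to inspect and override before anything is sent.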

Authorizations

Authorization
string
header
default:default
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
training_file
string
required

File-ID of a training file uploaded to the Together API

model
string
required

Name of the base model to run the fine-tuning job on

validation_file
string

File-ID of a validation file uploaded to the Together API

n_epochs
integer
default:1

Number of complete passes through the training dataset (higher values may improve results but increase cost and risk of overfitting)

n_checkpoints
integer
default:1

Number of intermediate model versions saved during training for evaluation

n_evals
integer
default:0

Number of evaluations to be run on a given validation set during training

batch_size
default:max

Number of training examples processed together (larger batches use more memory but may train faster). Defaults to "max". Because we apply training optimizations such as packing, the effective batch size may differ from the value you set.

learning_rate
number
default:0.00001

Controls how quickly the model adapts to new information (too high may cause instability, too low may slow convergence)

lr_scheduler
object

The learning rate scheduler to use. It specifies how the learning rate is adjusted during training.
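The scheduler object's fields are not broken out here, but the response example above shows its shape. A hedged sketch of one possible value, assuming the request mirrors the response schema (`min_lr_ratio: 0.1` is an illustrative choice, not a default):

```python
# Assumed lr_scheduler value, mirroring the shape in the response example:
# linear decay down to 10% of the peak learning_rate by the end of training.
lr_scheduler = {
    "lr_scheduler_type": "linear",
    "lr_scheduler_args": {"min_lr_ratio": 0.1},
}
```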

warmup_ratio
number
default:0

The fraction of steps at the start of training over which the learning rate is linearly increased.

max_grad_norm
number
default:1

Max gradient norm to be used for gradient clipping. Set to 0 to disable.

weight_decay
number
default:0

Weight decay. Regularization parameter for the optimizer.

suffix
string

Suffix that will be added to your fine-tuned model name

wandb_api_key
string

Integration key for tracking experiments and model metrics on W&B platform

wandb_base_url
string

The base URL of a dedicated Weights & Biases instance.

wandb_project_name
string

The Weights & Biases project for your run. If not specified, "together" will be used as the project name.

wandb_name
string

The Weights & Biases name for your run.

train_on_inputs
boolean
default:auto

Whether to mask the user messages in conversational data or prompts in instruction data.

training_method
object

The training method to use. 'sft' for Supervised Fine-Tuning or 'dpo' for Direct Preference Optimization.

training_type
object
from_checkpoint
string

The checkpoint identifier to continue training from a previous fine-tuning job. Format is {$JOB_ID} or {$OUTPUT_MODEL_NAME} or {$JOB_ID}:{$STEP} or {$OUTPUT_MODEL_NAME}:{$STEP}. The step value is optional; without it, the final checkpoint will be used.
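The four accepted formats can be captured by a tiny hypothetical helper (`ft-1234` and the step value below are made-up examples, not real identifiers):

```python
def checkpoint_identifier(job_id_or_model_name, step=None):
    """Build a from_checkpoint value.

    Returns "{ID}" when step is None (the final checkpoint is used),
    or "{ID}:{STEP}" to resume from a specific intermediate checkpoint.
    """
    if step is None:
        return job_id_or_model_name
    return f"{job_id_or_model_name}:{step}"

checkpoint_identifier("ft-1234")       # -> "ft-1234"
checkpoint_identifier("ft-1234", 500)  # -> "ft-1234:500"
```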

Response

200 - application/json

Fine-tuning job initiated successfully

id
string<uuid>
required
status
enum<string>
required
Available options:
pending,
queued,
running,
compressing,
uploading,
cancel_requested,
cancelled,
error,
completed
training_file
string
validation_file
string
model
string
model_output_name
string
model_output_path
string
trainingfile_numlines
integer
trainingfile_size
integer
created_at
string
updated_at
string
n_epochs
integer
n_checkpoints
integer
n_evals
integer
batch_size
default:max
learning_rate
number
lr_scheduler
object
warmup_ratio
number
max_grad_norm
number
weight_decay
number
eval_steps
integer
train_on_inputs
default:auto
training_method
object
training_type
object
job_id
string
events
object[]
token_count
integer
param_count
integer
total_price
integer
epochs_completed
integer
queue_depth
integer
wandb_project_name
string
wandb_url
string
from_checkpoint
string