GET /jobs/{job_id}/results

Retrieving Results
curl --request GET \
  --url https://api.sutro.sh/jobs/{job_id}/results \
  --header 'Authorization: <authorization>'
{
    "outputs": [
        "The capital of France is Paris.",
        "Quantum computing uses quantum mechanics principles to process information in ways that classical computers cannot..."
    ],
    "inputs": [
        "What is the capital of France?",
        "Explain quantum computing in simple terms"
    ],
    "cumulative_logprobs": [-0.5234, -1.2456]
}


Using the API directly is not recommended for most users. Instead, we recommend using the Python SDK.
Download the complete results of a batch inference job. Results can be downloaded in multiple formats optimized for different use cases.

Path Parameters

job_id
string
required
The job_id returned when you submitted the batch inference job

Query Parameters

result_format
enum
required
The format to download results in:
  • csv - CSV file (zipped for compression)
  • parquet - Parquet file
  • json - JSON object
include_inputs
boolean
default:"false"
Whether to include the input prompts in the results
include_cumulative_logprobs
boolean
default:"false"
Whether to include the cumulative log probabilities in the results
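Since the boolean query parameters default to the string "false", here is a minimal sketch (stdlib only; the job ID `job_12345` is a placeholder) of how the request URL is assembled from these parameters:

```python
from urllib.parse import urlencode

# Placeholder job ID for illustration
base_url = "https://api.sutro.sh/jobs/job_12345/results"

# Booleans are sent as lowercase strings, matching the
# documented default of "false"
params = {
    "result_format": "parquet",
    "include_inputs": "true",
    "include_cumulative_logprobs": "false",
}

url = f"{base_url}?{urlencode(params)}"
print(url)
```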

Headers

Authorization
string
required
Your Sutro API key using the Key authentication scheme.

Format: Key YOUR_API_KEY

Example: Authorization: Key sk_abc123...

Response

Returns a downloadable file in the requested format.

Parquet

  • Returns a single Parquet file
  • Recommended for large datasets

CSV

  • Returns a ZIP file containing a CSV
  • File is compressed for efficient transfer
  • Column names: inputs, {job_id} (containing the outputs), cumulative_logprobs (if requested)

JSON

  • Returns a JSON object
  • Best for smaller datasets
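When result_format is json, the response body has the shape shown in the example at the top of this page. A minimal sketch of reading it (the payload below is that sample, not a live response):

```python
import json

# Sample payload copied from the response example on this page
payload = json.loads("""
{
    "outputs": [
        "The capital of France is Paris.",
        "Quantum computing uses quantum mechanics principles to process information in ways that classical computers cannot..."
    ],
    "inputs": [
        "What is the capital of France?",
        "Explain quantum computing in simple terms"
    ],
    "cumulative_logprobs": [-0.5234, -1.2456]
}
""")

# Results are aligned by index: outputs[i] answers inputs[i]
for prompt, answer in zip(payload["inputs"], payload["outputs"]):
    print(f"{prompt!r} -> {answer!r}")
```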

Structured Outputs

When using structured outputs (by providing a json_schema when creating the job), the outputs will be JSON strings that conform to your specified schema.

Standard Models

For non-reasoning models, the output will be a JSON string following your schema:
{
  "outputs": [
    "{\"name\": \"John Doe\", \"age\": 30, \"email\": \"john@example.com\"}",
    "{\"name\": \"Jane Smith\", \"age\": 25, \"email\": \"jane@example.com\"}"
  ]
}
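Because each output is a JSON string rather than a parsed object, you decode it yourself. A minimal sketch using the example rows above:

```python
import json

# Output strings as returned by the API (from the example above)
outputs = [
    "{\"name\": \"John Doe\", \"age\": 30, \"email\": \"john@example.com\"}",
    "{\"name\": \"Jane Smith\", \"age\": 25, \"email\": \"jane@example.com\"}",
]

# Each element decodes to a dict conforming to the submitted json_schema
records = [json.loads(o) for o in outputs]
print(records[0]["name"], records[0]["age"])  # -> John Doe 30
```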

Reasoning Models

For reasoning models (like o1), the output includes both the structured content and the reasoning process:
{
  "outputs": [
    "{\"content\": {\"name\": \"John Doe\", \"age\": 30}, \"reasoning_content\": \"First, I identified the name from the text...\"}",
    "{\"content\": {\"name\": \"Jane Smith\", \"age\": 25}, \"reasoning_content\": \"I analyzed the passage and extracted...\"}"
  ]
}
The output structure for reasoning models:
  • content: The structured output following your JSON schema (can be a text string or JSON string containing an object matching your schema)
  • reasoning_content: The model’s step-by-step reasoning process (string)
Currently, when using structured outputs or reasoning models, you will need to run json.loads (or the equivalent in your language) on each output string, e.g. json.loads(outputs[0]), to convert it from a string to a dict.
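For reasoning-model outputs, the same json.loads step yields a dict with content and reasoning_content keys. A sketch using the first example row above (here content happens to be an object; per the note above it can also be a string):

```python
import json

# Output string as returned by the API (from the example above)
raw = "{\"content\": {\"name\": \"John Doe\", \"age\": 30}, \"reasoning_content\": \"First, I identified the name from the text...\"}"

parsed = json.loads(raw)
structured = parsed["content"]           # object matching your json_schema
reasoning = parsed["reasoning_content"]  # the model's step-by-step reasoning
print(structured["name"], structured["age"])
```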

Code Examples

import zipfile

import pandas as pd
import requests

response = requests.get(
    'https://api.sutro.sh/jobs/job_12345/results',
    headers={
        'Authorization': 'Key YOUR_SUTRO_API_KEY'
    },
    params={
        'result_format': 'csv',
        # Send booleans as lowercase strings to match the
        # documented default of "false"
        'include_inputs': 'true',
        'include_cumulative_logprobs': 'false'
    }
)
response.raise_for_status()

# Save the zip file
with open('results.zip', 'wb') as f:
    f.write(response.content)

# Extract and read with pandas
with zipfile.ZipFile('results.zip') as z:
    csv_filename = z.namelist()[0]
    with z.open(csv_filename) as csv_file:
        df = pd.read_csv(csv_file)
        print(df.head())

Notes

  • Results can only be retrieved for jobs that have completed successfully
  • The order of results matches the order of the original inputs
  • CSV format: outputs are in a column named after the job_id
  • Parquet format uses zstd compression internally
  • For very large datasets (>100MB), Parquet is recommended
  • CSV files are automatically zipped to reduce download size