Using Reasoning Models#
Reasoning models are specialized LLMs that excel at complex problem-solving by explicitly showing their thought process. They are particularly effective for tasks requiring multi-step logic, analytical thinking, and code generation.
We support several reasoning models that provide both the final answer and the full reasoning trace used to arrive at that answer. See our pricing page for a list of available reasoning models.
What Reasoning Models Excel At#
Reasoning models are ideal for:
Complex problem-solving: Multi-step mathematical problems, logic puzzles, and analytical tasks
Decision-making tasks: Evaluating options with highly interpretable and explicit thought processes
Code generation and debugging: Writing, analyzing, and fixing code with clear explanations
Scientific and technical analysis: Breaking down complex concepts and providing detailed explanations
Output Format#
For each input row, reasoning models return a special JSON format that includes both the reasoning process and the final answer.
```json
{
  "reasoning_content": "Let me work through this step by step...",
  "content": "The final answer or response"
}
```
`reasoning_content`: Contains the model's step-by-step thought process
`content`: Contains the final answer or output
Basic Example#
```python
import sutro as so

problems = [
    "What is 15% of 240?",
    "Explain why the sky appears blue"
]

results = so.infer(
    problems,
    model="qwen-3-14b-thinking",
    system_prompt="Solve this problem step by step"
)

# Each result contains both reasoning and final answer
for result in results:
    print(f"Reasoning: {result['reasoning_content']}")
    print(f"Answer: {result['content']}")
```
Using Reasoning Models with Structured Outputs#
Reasoning models fully support structured outputs. When using `output_schema`, the schema applies to the `content` field, while `reasoning_content` remains free-form text. This combination allows the model to fully explore the problem or task at hand, while also offering strict adherence to a specified output format:
Example with Pydantic Model
```python
import sutro as so
from pydantic import BaseModel

class MathSolution(BaseModel):
    numerical_answer: float
    unit: str
    is_exact: bool

problems = [
    "A car travels 120 miles in 2.5 hours. What is its average speed?"
]

results = so.infer(
    problems,
    model="qwen-3-32b-thinking",
    system_prompt="Solve this physics problem",
    output_schema=MathSolution
)

# Result format:
# {
#   "reasoning_content": "To find average speed, I need to divide distance by time...",
#   "content": {
#     "numerical_answer": 48.0,
#     "unit": "miles per hour",
#     "is_exact": true
#   }
# }
```
Note
When using structured outputs with reasoning models, only the `content` field is validated against the schema. The `reasoning_content` field always contains unstructured text showing the model's thought process.
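To illustrate the note above, here is a minimal sketch of how the two fields differ once a result comes back. The `result` dictionary is a hand-written stand-in for a single row returned by the structured-output example, not actual API output:

```python
from pydantic import BaseModel, ValidationError

class MathSolution(BaseModel):
    numerical_answer: float
    unit: str
    is_exact: bool

# Hypothetical result row, shaped like the structured-output example above
result = {
    "reasoning_content": "To find average speed, I need to divide distance by time...",
    "content": {"numerical_answer": 48.0, "unit": "miles per hour", "is_exact": True},
}

# The content field parses cleanly against the schema
solution = MathSolution.model_validate(result["content"])
print(solution.numerical_answer)

# reasoning_content is free-form text, so validating it against the schema fails
try:
    MathSolution.model_validate(result["reasoning_content"])
except ValidationError:
    print("reasoning_content is not schema-validated")
```

In practice this means you can rely on typed access (e.g. `solution.unit`) for the answer, while treating the reasoning trace as plain text for logging or display.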
Best Practices#
Leverage the reasoning: The `reasoning_content` field is valuable for debugging, education, and building trust in AI outputs
Crisp prompts: Reasoning models work best with explicit instructions that guide their thinking process. Phrases like "consider <important nuance of the problem>" can significantly boost performance and recall on nuanced tasks
Structured outputs: Use schemas when you need the final answer in a specific format while preserving a “thinking canvas” to explore the problem space
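As a sketch of the "crisp prompts" practice, the contrast below shows a vague system prompt next to one that names the nuance the model should consider. The task and wording are hypothetical; the point is that calling out the deciding detail steers the reasoning trace:

```python
# Vague: leaves the model to guess what distinguishes the categories
vague_prompt = "Classify this support ticket."

# Crisp: names the categories and the nuance to weigh before deciding
crisp_prompt = (
    "Classify this support ticket as 'billing', 'technical', or 'other'. "
    "Consider whether the user mentions a charge, invoice, or refund "
    "before deciding, and note any ambiguity in your reasoning."
)
```

Either string would be passed as the `system_prompt` argument to `so.infer`; the crisp version tends to produce more focused reasoning and more consistent final answers.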