Welcome to the Era of AI Agents with Trace2
Train AI agents with general feedback such as rewards, natural language, or compiler errors. Write ordinary Python code and optimize it the way you would train a neural network.
Familiar PyTorch-like Syntax
Define trainable parameters and optimize them with a simple, intuitive API
from opto.trace import node, bundle
from opto.optimizers import OptoPrime
# Define trainable function
@bundle(trainable=True)
def strange_sort_list(lst):
    '''Sort list in strange order: min, max, min, max...'''
    return sorted(lst)  # naive starting point; the optimizer will rewrite this
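
# Hypothetical helper (not part of Trace2), sketched only so the loop below is runnable:
# it turns the program output into textual feedback for the optimizer.
def check_correctness(output):
    data = output.data if hasattr(output, "data") else output  # unwrap a traced node if needed
    expected = [1, 4, 2, 3]  # strange order of [1, 2, 3, 4]: min, max, min, max
    return "Correct!" if data == expected else f"Wrong: expected {expected}, got {data}"
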
# Optimize with feedback
optimizer = OptoPrime(strange_sort_list.parameters())
for epoch in range(5):
    output = strange_sort_list([1, 2, 3, 4])
    feedback = check_correctness(output)
    optimizer.zero_feedback()
    optimizer.backward(output, feedback)
    optimizer.step()  # LLM updates the function!

Why Trace2?
A new paradigm for training AI systems end-to-end
Computation Graph
Automatically traces execution to build a computation graph, just like autograd but for AI agents.
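A rough sketch of what tracing looks like in practice, assuming the node wrapper imported in the quickstart above and that operations on nodes are recorded as in autograd:

from opto.trace import node

x = node(1, trainable=True)  # a trainable leaf in the graph
y = x + 2                    # operations on nodes are recorded
z = y * 3                    # z carries the graph that links it back to x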
Write Real Code
No need to wrap functions in strings. Write actual executable Python code with full IDE support.
Multiple Optimizers
Choose from OptoPrime, OPRO, or TextGrad. Switch optimizers with a single line of code.
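For instance, trying OPRO instead of OptoPrime should only touch the construction line. This is a sketch, assuming all three optimizers are exported from opto.optimizers like the quickstart import above:

from opto.optimizers import OPRO  # or TextGrad

optimizer = OPRO(strange_sort_list.parameters())  # the rest of the loop is unchanged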
Rich Feedback
Use any feedback: numerical rewards, natural language, compiler errors, or test results.
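The feedback variable in the loop above is simply whatever signal you have. A few illustrative examples (the exact strings are hypothetical, not required formats):

feedback = "Reward: 0.75"                                  # a numerical reward, passed as text
feedback = "IndexError: list index out of range"           # a runtime or compiler error
feedback = "2 of 5 tests failed; handle empty lists too"   # test results in natural language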
LLM Backend Agnostic
Works with OpenAI, Anthropic, or any LiteLLM-supported provider. Easy API key management.
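One common setup, an assumption based on how LiteLLM-backed tools usually read credentials rather than a confirmed Trace2 API: export the provider key as an environment variable before running.

import os

os.environ["OPENAI_API_KEY"] = "sk-..."            # OpenAI models
# os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."   # Anthropic models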
Research-Backed
Published at NeurIPS 2024. Battle-tested on NLP, robotics, and multi-agent tasks.
Ready to optimize your AI agents?
Install Trace2 and start building self-improving AI systems in minutes.