Trace2 Documentation
Complete guide to building self-optimizing AI systems with Trace2
Welcome to the Trace2 documentation! Trace2 is a PyTorch-like library for training AI agents end-to-end using general feedback.
What is Trace2?
Trace2 is a new AutoDiff-like tool for training AI systems with:
- 🎯 General Feedback: Use rewards, natural language, test results, or any feedback
- 📊 Computation Graphs: Automatic tracing of execution like PyTorch
- 🔧 Real Python Code: Write actual functions, not strings
- ⚡ Multiple Optimizers: Choose between OptoPrime, OPRO, and TextGrad
- 🤖 LLM Agnostic: Works with OpenAI, Anthropic, and 100+ providers
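To make the tracing idea above concrete, here is a minimal pure-Python mock (an illustration only, not Trace2's actual internals or API): each traced operation records its inputs as graph edges while the code runs, so feedback on an output can later be walked back to every node that contributed to it.

```python
# Minimal mock of execution tracing; illustration only, not Trace2's real API.
class Node:
    """Records one value plus the operation and inputs that produced it."""
    def __init__(self, value, parents=(), op="input"):
        self.value = value
        self.parents = parents
        self.op = op

def traced_add(a, b):
    # Each traced call returns a Node linked to its inputs,
    # so running the code also builds the computation graph.
    return Node(a.value + b.value, parents=(a, b), op="add")

def ancestors(node):
    """Walk the recorded edges back to every contributing node."""
    seen, stack = [], [node]
    while stack:
        n = stack.pop()
        seen.append(n)
        stack.extend(n.parents)
    return seen

x, y = Node(2), Node(3)
z = traced_add(x, y)
print(z.value)                       # 5
print([n.op for n in ancestors(z)])  # ['add', 'input', 'input']
```

In Trace2 this bookkeeping happens automatically when you call bundled functions; the mock only shows why recording parents at call time is enough to propagate feedback later.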
Getting Started
Install Trace2 from PyPI:

```bash
pip install trace-opt
```

Then write your first optimizable function:

```python
from opto.trace import bundle
from opto.optimizers import OptoPrime

@bundle(trainable=True)
def my_function(input):
    """This function will be optimized."""
    return input

optimizer = OptoPrime(my_function.parameters())
```

Documentation Structure
📚 Getting Started
Start here if you're new to Trace2. Learn installation, basic concepts, and configuration.
🧠 Core Concepts
Deep dive into computation graphs, nodes, bundles, and feedback functions.
🚀 Optimizers
Compare and learn about OptoPrime, OPRO, and TextGrad optimizers.
📖 Guides
Best practices, LLM backends, debugging, and production tips.
💡 Examples
Hands-on examples from code optimization to multi-agent systems.
🎓 Tutorials
Step-by-step tutorials converted from Jupyter notebooks.
📘 API Reference
Complete Python API documentation auto-generated from source.
Community
- GitHub - Source code and issues
- Discord - Community chat
- Paper - NeurIPS 2024
- Roadmap - Future plans
Citation
If you use Trace2 in your research, please cite:
```bibtex
@article{cheng2024trace,
  title={Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs},
  author={Cheng, Ching-An and Nie, Allen and Swaminathan, Adith},
  journal={arXiv preprint arXiv:2406.16218},
  year={2024}
}
```