
Mira Murati’s Thinking Machines Launches Training API Tinker
Thinking Machines Lab, the startup co-founded by former OpenAI Chief Technology Officer Mira Murati, announced its first product, a training API called Tinker, on Wednesday.
In a rush? Here are the quick facts:
- Thinking Machines launched its first product, a training API called Tinker.
- Tinker is in private beta but has already been tested by multiple organizations, including Redwood Research and teams at Stanford, Princeton, and Berkeley.
- The company also launched an open-source library, Tinker Cookbook.
According to Mira Murati, Tinker will allow researchers and developers to build new AI models, systems, and workflows through its API. The tool is currently in private beta but has already been tested by multiple organizations.
“Today we launched Tinker,” wrote Murati on the social media platform X. “Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines.”
Thinking Machines Lab, which launched in February, described the new product as “a flexible API for fine-tuning language models.” Tinker supports open-source models such as Alibaba’s Qwen and Meta’s LLaMA, aiming to provide infrastructure support so developers and researchers can focus on customizing and adapting models to their needs.
“We handle scheduling, resource allocation, and failure recovery,” wrote the company. “This allows you to get small or large runs started immediately, without worrying about managing infrastructure.”
Tinker uses Low-Rank Adaptation (LoRA), a fine-tuning method that cuts cost and speeds up training by updating a small set of added low-rank weights instead of the full model. The company also released an open-source library called Tinker Cookbook, which provides methodologies and post-training resources that run on the Tinker API to help developers and researchers achieve better results.
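For readers unfamiliar with LoRA, the sketch below illustrates the core idea in plain NumPy. It is a generic illustration of the technique, not Tinker’s actual API, and the layer dimensions and rank are arbitrary values chosen for the example.

```python
import numpy as np

# Low-Rank Adaptation (LoRA) in a nutshell: instead of updating a full
# weight matrix W (d_out x d_in), train two small matrices A (r x d_in)
# and B (d_out x r) with rank r much smaller than the layer dimensions.
# The adapted layer computes y = (W + B @ A) @ x, so only A and B need
# gradients and storage while W stays frozen.

d_in, d_out, r = 4096, 4096, 16            # illustrative layer size and rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero-initialized: no change at start

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)                    # forward pass with the LoRA update

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params:,} vs. full fine-tune: {full_params:,}")
# ~131K trainable parameters instead of ~16.8M for this single layer
```

The parameter savings shown in the last line are why LoRA makes it practical to run many fine-tuning experiments on shared infrastructure, which is the workload Tinker is aimed at.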
“We believe [Tinker] will help empower researchers and developers to experiment with models and will make frontier capabilities much more accessible to all people,” said Murati, the company’s cofounder and CEO, in a recent interview with Wired.
Other industry experts have shared their thoughts on Tinker as well. Andrej Karpathy, a former OpenAI researcher and founder of Eureka Labs, called the API “clever” and pointed out a few advantages and challenges in a post on X.
“Tinker is cool,” wrote Karpathy. “Compared to the more common and existing paradigm of ‘upload your data, we’ll post-train your LLM,’ this is, in my opinion, a more clever place to ‘slice up’ the complexity of post-training—delegating the heavy lifting while keeping the majority of the data and algorithmic creative control.”
He also emphasized that the community needs to better understand when and how fine-tuning makes sense.