AI is no longer a nice-to-have: customers expect personalized, intuitive experiences powered by the latest models. But getting those models into production, where they can deliver sustained business value, is notoriously complex. It demands deep expertise in infrastructure, latency, pipelines, monitoring, and many other dimensions that most teams aren't equipped to handle at production scale.

And that’s a huge problem.

That is where we believe Baseten is poised to thrive, and why we are proud to lead Baseten’s Series B.

Baseten’s inference engine helps companies simply and securely serve their models in production. Instead of spending valuable engineering time worrying about the complexities of managing GPUs in their cloud, reducing cold starts, or navigating frameworks like TRT-LLM, customers like Descript, Patreon and Picnic Health trust Baseten to provide the mission-critical infrastructure that enables AI at scale. These companies know that latency is important but reliability is non-negotiable.

Tuhin Srivastava, Amir Haghighat, Phil Howes and Pankaj Gupta are the complementary, deeply experienced founders we love to partner with. They spent years building and perfecting the foundations of Baseten, well ahead of the industry curve, and their common cause is an obsession with their customers. What else could explain Baseten’s 99.999% uptime over the last year while scaling inference workloads more than 100X? Or the numerous success stories of AI-native companies building magical product experiences on top of Baseten?

We’re thrilled to be part of Baseten’s journey and look forward to the transformational impact Tuhin, Amir, Phil and Pankaj will have on the mass adoption of AI and machine learning. Rather than relying on a tangled web of duct-taped infrastructure, just use Baseten.