Oscar Rovira from Mystic AI - Round 2

Host

Rod Rivera

🇬🇧 Chapter

Guests

Chris Wang

AI Innovation and Strategy Expert, CXC Innovation

Oscar Rovira

Co-Founder, Mystic AI

Oscar is co-founder of Mystic, and he discusses the journey of building an AI infrastructure startup. Mystic started as a hardware-focused company but pivoted to software, offering a serverless API for machine learning. The platform allows users to run AI models anywhere, with a focus on reducing friction and optimizing cost and performance. Mystic serves startups, mid-sized companies, and enterprises, with a particular emphasis on industries like healthcare, government, and finance. Oscar highlights the importance of education in the AI space and encourages companies to embrace technology to stay competitive.

Takeaways

  • Mystic is a platform that enables users to run AI models anywhere, with a focus on reducing friction and optimizing cost and performance.
  • The platform serves startups, mid-sized companies, and enterprises, with a particular emphasis on industries like healthcare, government, and finance.
  • Mystic differentiates itself by offering modularity, allowing users to customize their infrastructure setup, and providing peace of mind by handling the complexities of running ML models.
  • The platform helps users control costs by running on spot instances, optimizing GPU usage, and supporting GPU fractionalization.
  • Oscar encourages companies to embrace technology and upskill themselves in the AI space to stay competitive.

Episode Transcript

Introduction and Background

Rod Rivera: Welcome to the AI Product Engineer podcast, where we discuss building AI products with industry innovators. I'm joined by my co-host Chris, an innovator and former leader of the Lufthansa Innovation Hub in Berlin. Today, we're excited to have Oscar Rovira, co-founder of Mystic, an exciting startup in the AI infrastructure space. Oscar, could you introduce yourself and tell us how you got here?

Oscar: Certainly. I'm Oscar, one of the co-founders of Mystic. We started the company in 2019 after graduating from university. I had experience working on self-driving cars, and my co-founder had developed a new chip architecture for machine learning on FPGA. We initially focused on hardware, but as we progressed, we developed an API for remote access to our hardware. This evolved into what Mystic is today.

The Evolution of Mystic

Oscar: We realized that people wanted to run machine learning models as fast, cheaply, and easily as possible, regardless of the specific hardware. In 2020, we pivoted exclusively to software and went through Y Combinator. We launched the first serverless API for machine learning, allowing users to run models as simple API endpoints with pay-as-you-go pricing.

Over the years, we've iterated on this serverless API, optimizing for cost and performance. Recently, we've observed that companies starting with serverless APIs often hit scaling bottlenecks. They need more control over scaling and want to avoid premium fees associated with serverless platforms.

To address this, we've developed a "bring your own cloud" Mystic platform. This allows companies to deploy our platform on their own cloud accounts, whether it's Google, AWS, Azure, or even on-premises. It provides more control, cost-effectiveness, and the ability to use their own cloud credits.

Mystic's Unique Value Proposition

Rod Rivera: If you were to define Mystic in one sentence, how would you describe it?

Oscar: I'd say: "Run AI models anywhere as a simple API endpoint, with no DevOps required." We enable you to run models on our serverless infrastructure, on-premises, on your own cloud, or in a hybrid setup.
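To make "a simple API endpoint" concrete, here is a minimal sketch of what calling a deployed model typically looks like from the client side. The URL, path, and payload fields are hypothetical placeholders, not Mystic's actual API.

```python
import json
from urllib import request

def build_inference_request(base_url: str, model_id: str, prompt: str) -> request.Request:
    """Build a POST request to a hypothetical model-serving endpoint."""
    body = json.dumps({"inputs": prompt, "max_new_tokens": 64}).encode()
    return request.Request(
        url=f"{base_url}/v1/pipelines/{model_id}/run",  # illustrative path
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <YOUR_API_KEY>",  # placeholder token
        },
        method="POST",
    )

req = build_inference_request("https://api.example.com", "llama-2-7b", "Hello!")
print(req.full_url)
```

The point of the "no DevOps" pitch is that everything below this request, such as GPU provisioning, scaling, and batching, is handled by the platform.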

Technical Challenges and Industry Trends

Rod Rivera: What has been the most challenging aspect of building Mystic, from either a team or product perspective?

Oscar: One of the biggest challenges was developing a solution for running thousands of models simultaneously without dedicating a single GPU to each model. We had to innovate beyond the typical Kubernetes and Docker setup, creating new ways to share GPU resources efficiently across multiple models.

From a business perspective, we've observed that many startups underestimate the costs associated with running ML models at scale. We're seeing a trend towards companies and VCs investing in AI-first products with more carefully considered business plans and economics.

Mystic's User Base and Requirements

Chris: Can you tell us about your users and their requirements?

Oscar: We serve a range of customers, from startups to enterprises. Startups and mid-sized companies (fewer than 100 employees) typically move faster in adopting our solutions. Larger companies often focus more on performance and privacy concerns.

Many of our users have experienced the frustration of trying to build ML infrastructure themselves and prefer to use Mystic to avoid the need for a dedicated team of engineers. We offer solutions that cater to different scales, from serverless APIs for startups to more controlled, on-premises deployments for enterprises.

The Impact of Open-Source Models

Rod Rivera: How has the trend towards open-source models, like those on Hugging Face, impacted Mystic?

Oscar: The release of models like Llama 2 has increased awareness of the capabilities of open-source models. We've seen a mix of approaches: some companies start with the most powerful models to validate their products, then optimize for smaller, more efficient models later. Others, particularly those with more industry experience, choose the most appropriate model from the start.

We've also observed many newcomers to the industry who may not fully understand the nuances of these models. They often default to using the most powerful options, even when simpler solutions like linear regression or XGBoost might suffice for their needs.
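As a reminder of how lightweight those classical baselines can be, here is ordinary least-squares linear regression in a few lines of NumPy on synthetic data. For many tabular problems, something like this (or XGBoost) is a sensible first model before reaching for a large neural network.

```python
import numpy as np

# Synthetic regression problem with a known ground-truth weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.01, size=200)

# Fit w by solving the least-squares problem min ||Xw - y||^2.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 2))
```

The fitted weights recover the ground truth almost exactly, with no GPU and no serving infrastructure required.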

Mystic's Approach to Cost Optimization

Rod Rivera: How does Mystic help companies control costs in their ML initiatives?

Oscar: We employ several strategies:

  1. Running on spot instances, which can reduce costs by up to 75% depending on the GPU and cloud provider.
  2. Optimized autoscaling to minimize idle GPU time.
  3. GPU fractionalization, allowing multiple models to run or be cached on a single GPU.

These layers of optimization significantly reduce the cost of running machine learning models for our customers.
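The fractionalization idea above can be illustrated with a toy sketch (not Mystic's implementation): instead of dedicating one GPU per model, pack models onto shared GPUs by memory footprint. Even naive first-fit bin packing shows why sharing cuts the GPU count; the 24 GB GPU size and 7 GB model size are assumed for illustration.

```python
def pack_models(model_mem_gb, gpu_mem_gb=24.0):
    """Assign each model to the first GPU with enough free memory (first-fit)."""
    gpus = []  # free memory remaining on each provisioned GPU
    placement = []  # GPU index assigned to each model
    for mem in model_mem_gb:
        for i, free in enumerate(gpus):
            if mem <= free:
                gpus[i] -= mem
                placement.append(i)
                break
        else:  # no existing GPU fits: provision a new one
            gpus.append(gpu_mem_gb - mem)
            placement.append(len(gpus) - 1)
    return placement, len(gpus)

# Ten 7 GB models would need ten dedicated GPUs, but pack onto four 24 GB GPUs.
placement, n_gpus = pack_models([7.0] * 10)
print(n_gpus)  # 4
```

A production scheduler also has to account for activation memory, batching, and eviction of cached models, but the cost saving comes from the same packing principle.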

The Future of ML Infrastructure

Rod Rivera: With the trend towards smaller, locally-runnable models and embedded AI, where do you see Mystic's role in the future?

Oscar: While local experimentation with models will continue, scaling to hundreds or thousands of users often requires a remote API solution for optimal performance. Mystic bridges this gap by providing a scalable infrastructure that can handle high-volume requests efficiently.

We're also keeping an eye on emerging architectures like RWKV and Mamba, which could potentially offer more scalable alternatives to transformer-based models. Additionally, developments in hardware and compiler technologies, such as Mojo, could further optimize model performance across different platforms.

Mystic's Role in Regulated Industries

Rod Rivera: What use cases are you seeing in industries that require high levels of privacy and security?

Oscar: We're seeing significant adoption in healthcare, government, and finance sectors. These industries often require tight control over data and cannot send information to external parties. Our "bring your own cloud" and enterprise-level offerings allow us to integrate directly with their existing cloud setups or on-premises infrastructure, making it easier for these highly regulated industries to adopt AI technologies securely.

Advice for AI Adoption

Rod Rivera: Do you have any advice for professionals looking to upskill themselves in the AI world?

Oscar: I recommend:

  1. Familiarizing yourself with open-source models and running them locally if possible.
  2. Exploring serverless APIs to quickly test different models.
  3. Learning about AI architectures through tutorials from frameworks like LangChain or LlamaIndex.
  4. Experimenting with deploying models using services like Mystic to understand the infrastructure side.
  5. Exploring the vast array of models available on platforms like Hugging Face.

Closing Thoughts

Oscar: I'm very optimistic about AI's potential. I encourage people, especially in European markets, to adopt new technologies faster. Don't be afraid to dive in, even if it seems daunting at first. We need more people building innovative solutions, particularly in Europe, if we want to maintain a leadership position in the AI space.

Rod Rivera: Excellent advice, Oscar. For our audience, where can they find you and start using Mystic?

Oscar: You can find us at mystic.ai, and I'm also on LinkedIn as Oscar Rovira. We'd love to help you deploy ML models as simple API endpoints, whether on our serverless API or directly on your own cloud infrastructure.

Rod Rivera: Thank you for being here today, Oscar. For everyone listening, definitely consider Mystic AI if you're running AI applications with privacy requirements. Thank you for your insights.

Oscar: Thank you very much, Rod and Chris.

Chris: Thank you.

More from the Podcast

Greg Michaelson from Zerve

In this eye-opening episode, we dive deep into the world of data science with Greg, co-founder of Zerve. Discover how his winding career path led him from the pulpit to the cutting edge of AI. Greg reveals the hidden flaws in popular data science tools and explains how Zerve is tackling age-old problems with innovative solutions.
