
Greg Michaelson from Zerve

In this eye-opening episode, we dive deep into the world of data science with Greg, co-founder of Zerve. Discover how his winding career path led him from the pulpit to the cutting edge of AI. Greg reveals the hidden flaws in popular data science tools and explains how Zerve is tackling age-old problems with innovative solutions.

Host

Rod Rivera 🇬🇧

Guests

Greg Michaelson

Co-Founder, Zerve

Max Tee

VC Expert, AI Investor, BNY Mellon

Greg Michaelson from Zerve

In this conversation, Greg from Zerve discusses his background and his transition from DataRobot to Zerve, a startup. The main focus is the problem with notebooks in data science and how Zerve aims to solve it with its data science development environment. Greg explains Zerve's architecture and the benefits it offers, such as stability, language interoperability, and persistent artifacts. The response to the transition away from notebooks has been positive, although there is some inertia to overcome. Greg also shares advice for building notebook interfaces, emphasizing the importance of avoiding shortcuts and understanding the target users.

The conversation then turns to the notebook landscape and the market opportunity for Zerve's product. Greg emphasizes the importance of understanding the target user and their pain points. Topics include fragmentation in the notebook landscape, market sizing, regulation and risk management, business model and pricing, visualization and monitoring, self-hosted solutions for security, the impact of generative AI, unexpected use cases, assumptions and learnings, the role of experts and coding skills, and final thoughts and a call to action.

Takeaways

  • Notebooks have limitations in terms of stability and productionizability, which can be problematic for data scientists.
  • Zerve's data science development environment separates storage and compute, providing stability and language interoperability.
  • Zerve allows for collaboration and scalability, with the ability to run multiple blocks simultaneously and share persistent artifacts.
  • Building notebook interfaces requires careful consideration of architecture, user needs, and the option for self-hosting. Understanding the target user and their pain points is crucial for building a successful product.
  • The notebook landscape is fragmented, with different users working in different environments, which hinders collaboration.
  • The market opportunity for notebook solutions is significant, with millions of notebooks created and a growing demand for code-based data interaction.
  • Regulation and risk management are important considerations, especially in industries like financial services.
  • Zerve AI offers a free community version in its SaaS environment and a paid self-hosted version for enterprise customers.
  • Generative AI is a growing trend, and Zerve AI is exploring integrations and tools in this space.
  • Zerve AI has a diverse user base, including data engineers, data scientists, educators, and students.
  • Unexpected use cases for Zerve AI include building a two-legged cow weighing scale.
  • Assumptions about user problems and needs can be similar across different organizations, with data maturity being a differentiating factor.
  • Becoming an expert in a specific area and learning to code are valuable skills for anyone starting their career or finishing their studies.

Episode Transcript

Introduction

Rod: Welcome to a new episode of our show. I'm Rod, your host, joined today by our co-host Max. We have a great guest who has been very active in the AI space for many years - Greg from Zerve. Greg, welcome to the show.

Greg: Hey, thanks for having me. This is fun.

Greg's Career Path

Rod: Before we start, why don't you tell us more about your career path? Your LinkedIn profile shows a very long and interesting trajectory.

Greg: Yeah, it's been a winding road for sure. I actually started out pre-med in school, but it turns out I faint at the sight of blood, which is a problem if you're going to be a doctor. So I didn't go down that path. I studied philosophy in undergrad and ended up becoming a Baptist preacher for about 10 years.

That was an interesting experience. Turns out being a Baptist preacher is really just a PR gig - trying to convince people to do things they don't want to do. It's a great way to spend your life doing good, but not ultimately what I wanted. So I went back to school and tried to enter a mathematics program at the University of Alabama. They told me I needed to get a bachelor's in math before I could even get into the graduate program.

So I went over to the college of business, and they said, "Hey, we'll take you." I ended up getting a PhD in applied statistics. After that, I worked at Regions Financial, a regional bank in the US, doing commercial credit modeling. Then I worked at Travelers Insurance doing claim analytics.

At Travelers, I met the founder of a company called DataRobot before he founded the company. He went off and started DataRobot, which invented automated machine learning. I joined him in 2015 when the company had no revenue and no customers. When I left, we had $150 million in revenue, and I had a team of about 400 people responsible for customer success. I was the Chief Customer Officer there.

That was a great experience. I was there for seven years, learned a lot, and got the chance to do a lot. After leaving DataRobot, I met a couple of guys in Ireland. We ended up hitting it off and thinking about the space in really similar ways. So I joined them as co-founder of this company called Zerve. And that's where I am now.

Transition from DataRobot to Zerve

Rod: Going from DataRobot, which is this massive company, to Zerve, which is a very nimble startup, must have been a massive difference. How has this changed your role, and how does it compare to going from a very established organization to a very young startup starting from scratch?

Greg: Well, when I joined DataRobot, it was very startupy. We had about 30 people, half of them were probably in Ukraine, so the office was fairly small. We were in Boston, and our offices were on the third and fourth floors above a kind of karaoke dive bar. One winter, the roof caved in - it was very startupy at the beginning of DataRobot.

After I left DataRobot, I took about a year off and looked at what I wanted to do for the next chapter. For me, it's not as much about the size of the company as it is about the project being interesting. Certainly, the stuff we're doing at Zerve is solving a problem that's been around for quite a long time. So it's exciting.

The Problem Zerve is Solving

Max: That's really cool. I love the story of you being a pastor and eventually going into applied statistics, and now being a co-founder of an AI company. You mentioned that with Zerve, there's a problem that has been lingering for some time, and you and the team in Ireland are trying to solve it. Could you expand a little on the problem and provide some history on why it hasn't been solved yet?

Greg: Yeah. Are you guys familiar with notebooks like Jupyter notebooks?

Max: Yep.

Greg: Here's the thing: in the 2010s, there was this craze about low-code, no-code stuff. I think by now everybody has realized that the emperor has no clothes on that whole low-code, no-code thing. If you look at every one of those vendors, they've all added code-based offerings because the only people that are generating value from data today are coders, period.

I spent all of my time at DataRobot trying to convince the people that bought DataRobot that they should actually use it. And that's because we were selling to the wrong people. There's no such thing as a citizen data scientist. Every AI project requires code. That's our view at Zerve.

But if you look at the toolkit available for writing code to interact with your data, you've really got two options: notebooks and IDEs. Notebooks were actually designed for data analysis and exploration, but they were also designed by academics to be classroom scratch pads. So they are super brittle. It's really easy to get your notebook into a bad state.

Everyone that's ever used a notebook has made ample use of the "restart kernel" button because you get your notebook into a bad state. You've got to turn it off and turn it back on again and then rerun it from scratch. So what most data scientists do is they'll start in a notebook and they'll code and explore and restart their kernel and do all the things you do in a notebook.

And then at some point, they realize, "Oh, I've got to get serious here." And so they copy-paste their code out of Jupyter into VS Code or PyCharm or Spyder or whatever IDE they use. Then they're in a better spot because you can write stable code in a scripting environment. But of course, it sucks for data analysis. Data exploration is not a good experience in those applications.

So that's kind of the state of the art as far as data science tooling goes: do you want an unstable notebook that was actually designed for data analysis, or do you want to use software engineering tools and have to kludge your way through data exploration?

The Zerve Solution

That problem has existed for years. In 2018, there was a talk by Joel Grus at JupyterCon called "I Don't Like Notebooks." He's one of our angel investors now. He had the audience rolling and talked for like two hours on why notebooks were just completely not useful in some really significant ways.

So that's the problem we're trying to solve. Zerve is a data science development environment that gives you all of the interactivity and collaborative stuff - the good stuff out of notebooks - but it does it in a stable environment. There's no "restart kernel" button in Zerve. It's impossible to get it into a bad state. It always produces the same output every time.

Zerve's Architecture

Rod: Following up on that, I've been a user of notebooks for more than a decade, since Jupyter Notebook was still IPython Notebook. I've seen versions that try to improve on them by being more cloud-oriented and collaborative. But this problem of state control, the ability to break part of the logic without realizing it, has persisted across all the vendors that have appeared over time. Why hasn't this been addressed before, and why are you only now able to change it and make it better?

Greg: Yeah, I mean, part of it, I think, is that the market is thinking about this wrong. If you look at all the innovation that's happening in the cloud space, it's all about collaboration. Colab has been around for years. There's a company called Hex. Deepnote is another one. Datalore is JetBrains' notebook offering. They're all moving towards the cloud, putting the notebooks in the cloud so you can have a team log in and everybody can work together and collaborate.

But the big frailty with notebooks is run order. If you run the cells in the wrong order, or if you run one cell too many times, that's how you get it into a bad state. And so you can imagine, if you suddenly let 10 people log into a notebook online, they can all run code. You're going to be hitting that restart kernel button way more. It's an exponential type problem.

The main thing we do differently is that we separate storage and compute. In a Jupyter notebook, if I run code, it's on my laptop. I execute some code, it creates some objects in memory, and those objects stay in memory in a global memory space. So that's why if I run the cells in the wrong order, it's taking what's in memory and just monkeying with what's there. That's the cause of the frailty of notebooks.
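To make that failure mode concrete, here is a minimal sketch (illustrative, not from the episode) of how a shared global namespace plus out-of-order execution corrupts results:

```python
# Two notebook cells sharing one global namespace.

# --- Cell A: define the data ---
counts = [1, 2, 3]

# --- Cell B: transform it, reusing the same variable name ---
counts = [c * 2 for c in counts]
total = sum(counts)

# Run A then B once: total == 12 (correct).
# Re-run Cell B by accident and the doubling is applied again:
# total becomes 24 even though the code never changed, only the
# run count did. The usual escape hatch is "Restart kernel & run all".
```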

So we split them. Compute and storage are different. We take the cells of a Jupyter notebook and lay them out as a graph, like a pipeline basically, so that you have blocks that can execute code. And then all of our compute is serverless. So we run in the cloud. When you click run on a block, it's spinning up a Lambda function or a Fargate container, however you have it configured. You can select your compute on a block-by-block basis. That way, if you need GPUs or whatever, you can use them only where you want them.

What happens is you click run, and then it spins up that serverless compute, it executes, and then it takes the variable space inside that block at the end of the execution and serializes it, caches it, stores it on disk, and passes it downstream to the subsequent blocks.
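As a rough mental model of that split, here is a hedged sketch (the function names, cache layout, and use of pickle are assumptions for illustration, not Zerve's actual implementation): each block runs in a fresh namespace seeded only by the serialized variable spaces of its upstream blocks, then persists its own variable space for the blocks downstream.

```python
import pickle
from pathlib import Path

CACHE = Path("block_cache")  # stand-in for Zerve's cloud storage layer
CACHE.mkdir(exist_ok=True)

def _picklable(value) -> bool:
    """Return True if a value can be serialized with pickle."""
    try:
        pickle.dumps(value)
        return True
    except Exception:
        return False

def run_block(name: str, code: str, upstream: list[str]) -> None:
    """Execute one block in a fresh namespace seeded only with the
    serialized variable spaces of its upstream blocks."""
    scope: dict = {}
    for dep in upstream:
        scope.update(pickle.loads((CACHE / f"{dep}.pkl").read_bytes()))
    exec(code, scope)  # in Zerve this step would run on serverless compute
    # Persist this block's picklable variables for downstream blocks.
    snapshot = {k: v for k, v in scope.items()
                if not k.startswith("__") and _picklable(v)}
    (CACHE / f"{name}.pkl").write_bytes(pickle.dumps(snapshot))

# A two-block "pipeline": load data, then transform it.
run_block("load", "data = [1, 2, 3]", upstream=[])
run_block("transform", "doubled = [x * 2 for x in data]", upstream=["load"])
print(pickle.loads((CACHE / "transform.pkl").read_bytes())["doubled"])  # [2, 4, 6]
```

Because each block's output is a persisted artifact rather than live kernel memory, reruns reproduce the same result regardless of order, and a crashed session loses nothing, which is the stability and persistence Greg describes next.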

Benefits of Zerve's Architecture

That gives you tons of benefits:

  1. Stability: You have complete confidence that it will always produce the same output no matter how you run it, who runs it, what computer they run it from, what day it is, or how many people are running it.
  2. Multi-processing for free: I can run as many blocks as I want simultaneously (see the sketch after this list). In a Jupyter notebook, it's sequential. If I want to train 10 models, I'm going to train them one after another unless I write fancy multi-processing code.
  3. Language interoperability: You can pull in a block to write some SQL and maybe query a Snowflake database. And then that can connect directly to a Python block that can interact with the output of that SQL block. And then you can connect that Python block to an R block, and the R block can interact with the pandas data frames as if they were R data frames.
  4. Persistence: All of the artifacts of your analysis are persistent. In a Jupyter notebook, if my computer crashes, I lose everything and have to rerun everything. In Zerve, it's in the cloud and it's persistent. So when I share my canvas with you, you don't have to rerun everything in order to load it into memory. It's just there.
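To illustrate benefit 2, here is a minimal sketch (the executor choice and the stand-in training function are assumptions, not Zerve's API): once blocks share no live kernel state, independent blocks can simply run in separate processes.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def train_model(seed: int) -> float:
    """Stand-in for one independent model-training block."""
    rng = random.Random(seed)
    return rng.random()  # pretend this is a validation score

if __name__ == "__main__":
    # Ten "blocks" with no dependencies between them, so they can
    # execute simultaneously instead of one after another.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(train_model, range(10)))
    print(f"best score: {max(scores):.3f}")
```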

Market Response and User Base

Rod: What has been the response from users and potential users to this transition from a vertical set of blocks that you just move up and down, to the canvas approach that you can structure however you want?

Greg: The response has been amazing. We launched four weeks ago, and we got over 1,200 users in the first week, which we thought was a really good result. We got lots of really cool comments on social media. Somebody said, "This is finally the Google Colab update we've been waiting for." Someone else compared us to Mode. Lots of really cool feedback.

Of course, there is some inertia. Getting somebody to change their development environment, we've got some convincing to do. We call it the "data science Stockholm syndrome" because we've been held captive by these crappy tools for so long that we sort of just think that that's how it has to be. But it is bad. You shouldn't have to copy-paste your work out of Jupyter into VS Code. So we think it's a step change. We think it's a significant enough improvement that it'll catch fire. And it seems to be doing that for sure. So we're excited.

Market Size and Business Model

Max: In terms of business model, how are you charging for Zerve? Is it a freemium model? What's the pricing model?

Greg: The community version in our SaaS environment that runs in our AWS is free. Anybody can sign up at zerve.ai, and it will always be free. We may offer premium tiers going forward, like with GPUs and higher limits and stuff like that. But for now, it's free for everyone.

The self-hosted version is paid. It's on a per-seat basis. It's very simple. We're on the AWS marketplace. It's in your environment, so your compute and your storage costs are your cloud costs. But then the licensing model for us is just per seat.

Self-Hosted vs Cloud Version

Rod: Now you mentioned the topic of cloud versus self-hosted. Something I hear from some founders working on AI data companies is that if they can do a cloud version, they prefer that because it's much easier to release updates and upgrades. Whereas if they're working with self-hosted, then some customers might not upgrade, and there may not be ways to reach them. How did you decide to actually have a self-hosted tier?

Greg: You have to have it, there's no question. There's a huge amount of complexity around SOC 2 compliance and having customers send you sensitive data. That's years of work to achieve all of the controls you need to do it safely. And people don't want to do it in the first place. No CISO wants to work with a vendor that requires sending big data sets of proprietary data to a third party. The risk of a breach is significant.

So for a product like ours in the data science space, it has to be on-premises. Anything else is a complete non-starter in my view. It's just table stakes.

Impact of Generative AI

Rod: In the last year, we have seen the rise of everything connected to generative AI. How has that impacted the product? And how do you see the balance between classic use cases of data cleaning and analysis versus new use cases like chat with your data and RAG apps?

Greg: Well, it's everywhere. Everyone we talk to is talking about LLMs. In fact, we were on with a VC this week, and she asked, "What's your LLM plan?" There are lots of really cool things that we're going to be doing in that space. I talked a little bit about the IDE for LLMs. Lots of really cool integrations to make use of that technology.

But the hallucination problem is still really bad. There's enough nuance to some of these projects, and it's important enough that you not give a bad answer, that I don't think we're at the place where just anybody can do this sort of stuff safely. I think we'll probably get there at some point, but then we'll all be unemployed anyway.

I don't think the LLMs are good enough yet to be standalone. At this point, they're tools for experts to speed up their work. I use it all the time, especially with Matplotlib. I hate drawing charts in Python, so I'm like, "All right, this is the chart I want, give me the code," and I'll use that as a base and then tweak it to get it where I want it to be.

Advice for Newcomers

Rod: You have been in this space for a long time, but life has taken you through many different paths. For anyone who is starting their career or just finishing their studies, what do you recommend they do?

Greg: Don't take shortcuts. Learn the stuff, become an expert. There seems to be a shortage of experts these days. Lots of people know a little bit about a lot of things, but you don't want to be that person. You want to become an expert.

You just have to commit to doing the hard work that it's going to take to get there. You'll hear people say, "Well, with LLMs, you won't need that," but somebody has to know. You want to be that somebody because there's no substitute for an expert when there's a problem. There's no security like being the only person in the room who knows how anything works.

So that's my advice: do the hard work to become an expert at something that you love.

Rod: Should everyone learn how to code?

Greg: Absolutely, 100%. Definitely. No question. And in multiple languages.

Closing Thoughts

Rod: Is there anything else that you would like to leave our audience with, Greg? Anything that we may have missed that you would like to emphasize or shout out?

Greg: Come sign up, join the bandwagon. You can tell your grandkids that you were there when it all started. Reach out on LinkedIn, and if you encounter questions or want a tour of what we built, I'd love to hop on and show it to you. We're taking a very hands-on approach initially to make sure that our users are just absolutely thrilled. So we're available. Sign up, give it a go.

Rod: Thank you so much for being here today, Greg. It has been a great discussion. We have learned so much about the world of notebooks and all the nuances behind it. Great to have you here.

Greg: Thanks for having me, it was my pleasure.

Max: Thank you so much, Greg. I think one of the biggest takeaways from this is: don't take shortcuts. There are no shortcuts.

Greg: No free lunch.
