
E18: AI's $1 Billion Problem - Can Startups Compete in the New Tech Arms Race?

As AI development costs soar past $1 billion, we examine how companies big and small are adapting to the new landscape. From Salesforce's ambitious pivot to AI agents to the geopolitical factors affecting development costs, we break down the strategic decisions shaping the future of AI in business. Learn about the challenges startups face competing with big tech, the pros and cons of cheaper infrastructure in China, and how organizations are navigating high-stakes AI investment decisions in a rapidly evolving field.

Host

Rod Rivera

🇬🇧 Chapter

Guests

Max Tee

VC Expert, AI Investor, BNY Mellon

Chris Wang

AI Innovation and Strategy Expert, CXC Innovation

E18: AI's $1 Billion Problem - Can Startups Compete in the New Tech Arms Race?

In this episode, the hosts discuss the costs and implications of running AI startups and initiatives in organizations. They explore Salesforce's announcement of becoming an AI agent company and the potential excitement it may generate in the market. The conversation also touches on the challenges of retaining AI talent within organizations and the strategies companies can employ to keep knowledge in-house. The hosts delve into the difficulties faced by AI startups in keeping up with big tech and the changing business models in the industry. They also discuss the differences in AI chip rental costs between China and the US and the decision-making process for companies when choosing AI models and vendors.

Takeaways

  • Salesforce's announcement of becoming an AI agent company highlights the growing trend of using AI-powered tools to automate tasks and improve efficiency.
  • Retaining AI talent within organizations can be challenging, and companies need to find ways to incentivize and keep innovators in-house.
  • AI startups face difficulties in keeping up with big tech and may need to rethink their business models to stay competitive.
  • The cost of developing AI models is high, and companies need to carefully consider their investment and potential returns.
  • The decision-making process for choosing AI models and vendors should involve evaluating different options, considering reputation, and assessing the fit with the organization's strategy and goals.

Episode Transcript

Introduction

Rod: Welcome to another episode of the Chris Rod Max show, where we discuss the impact of AI on organizations and what it means for the world. As usual, I'm joined by my co-hosts. Chris, hello!

Chris: Hello everyone, it's great to be back.

Rod: And Max, welcome!

Max: Hello, hello! It's good to be here again.

Salesforce's Transformation into an AI Agent Company

Rod: This week's main topic is the costs and implications of running AI startups and initiatives in organizations. We're starting with Salesforce, the CRM company that essentially started the SaaS revolution over 20 years ago. According to its founder, Marc Benioff, Salesforce is becoming an AI agent company. AI agents are tools powered by artificial intelligence that act autonomously. For example, if I have a list of contacts I want to reach, instead of doing this manually, I can instruct the AI to reach out to the ten contacts most likely to engage with me in the next two weeks.

It seems this move might be driven by the markets not reacting positively to Salesforce's AI initiatives. So, what do you think? Will Salesforce's AI agents excite the markets? How do you see this development?

Max: Thank you for the question. I see AI agents as a natural evolution of the software we have today. Within Salesforce, you have multiple functions, and each of these functions will eventually turn into a smaller agent. Let me give you an example from my experience working for a software company selling enterprise software to banks.

We used Salesforce as a CRM for our sales team. My role was on the product side, working closely with the sales ops team to understand the incoming data. We had to create dashboards in Salesforce to understand, as a product manager, where people were leaning and what inputs were coming in.

With AI agents, you wouldn't need a person to do that manually. The whole idea is to get an agent and tell it to run a specific amount of data for you, and it will pull it from Salesforce automatically. So, in my mind, agents work as mini-software within larger software, performing smaller tasks for us. I think the world is moving in that direction because we already have a lot of software. We're just trying to figure out how to break down all the smaller features so they can perform one small task for a specific instance. What do you think?
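
To make the "mini-software inside larger software" idea concrete, here is a minimal sketch of the kind of task Rod described earlier - ranking CRM contacts by likelihood of engagement and picking the top ones for outreach. Everything here (field names, the scoring heuristic, the data) is a hypothetical illustration, not a real Salesforce or agent-framework API.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    days_since_last_reply: int   # recency of engagement
    opens_last_30d: int          # email opens in the last 30 days

def engagement_score(c: Contact) -> float:
    # Toy heuristic: recent replies and frequent opens suggest higher engagement.
    recency = 1.0 / (1 + c.days_since_last_reply)
    activity = min(c.opens_last_30d / 10, 1.0)
    return 0.7 * recency + 0.3 * activity

def pick_outreach_targets(contacts, n=10):
    # The "agent" step: rank every contact pulled from the CRM and keep the n most promising.
    return sorted(contacts, key=engagement_score, reverse=True)[:n]

if __name__ == "__main__":
    crm_contacts = [
        Contact("Ana", days_since_last_reply=2, opens_last_30d=8),
        Contact("Ben", days_since_last_reply=40, opens_last_30d=1),
        Contact("Caro", days_since_last_reply=5, opens_last_30d=12),
    ]
    for c in pick_outreach_targets(crm_contacts, n=2):
        print(f"Draft outreach to {c.name} (score {engagement_score(c):.2f})")
```

A production agent would pull these records through the CRM's APIs and hand the drafting to a language model, but the shape is the same: one small, scoped task executed automatically inside a much larger system.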

Retaining AI Talent in Organizations

Rod: I'm thinking not only about the product-related aspects of agents in an organization but also about keeping innovators in place. For example, someone might say, "I just developed these insights and knowledge, and now I'll go start my own company." This actually happened to Salesforce in 2016 when they acquired MetaMind, founded by Richard Socher, a well-known name in the AI space. But then in 2020, he left to start his own company called You.com, similar to Perplexity.

Chris, I'm thinking about large companies wanting to retain talent. For example, they might make acquisitions to bring in innovation and analytical minds to take the company to the next level. But then, suddenly, these founders find it more interesting to start their own new companies instead of staying with the organization. What advice do you have for them? How can they keep this talent or at least retain the knowledge these people are taking with them?

Chris: Honestly, I think money always attracts talent. So, I'm sure getting a pay raise will definitely help. But there are different components here. When I look at this interesting article about Salesforce and agents, I think "agent" is a very technical term. From a user perspective, what you're really trying to do is avoid menial tasks and manual, repetitive work.

But I think the real challenge, and this is not the first time we've talked about it, is the interface. How do you make that happen? I honestly think this is the right time for product managers to figure out how to build on top of the old world, like a Salesforce environment. How do you build that layer of features? How do you integrate it with clients that are also very old school?

We talked about Deutsche Bahn before, which was looking for an MS-DOS engineer. That's the kind of reality we're faced with today. Now you have this transition plan where larger organizations and tech companies like Salesforce are trying to bridge between the older world and the future. They're building on a fragmented but also very old IT landscape and adding features and AI capabilities on top of it, but doing so in a way that also enables them to become future-ready.

The article was also very interesting when it comes to innovation. It says that Salesforce used to be a startup, and now that it's really big, they're trying to infuse this startup mentality and AI mentality into the company again. But it's incredibly difficult. That's the reason why we see this innovator's dilemma where new entrants come in and take over the game. In this particular case with Salesforce, the new entrant with a CRM system built on newer tech is definitely HubSpot. We do see that HubSpot has already become very large with a lot of functionality, and it's taking away quite a bit of business from Salesforce.

So I think that's really what every company needs to navigate. It's not only about how to keep the talent they need, but also about attracting new talent, new product managers that can actually help bridge between these different roles. But then again, how do you infuse this startup mentality to change, which is obviously something we all resist? And at the same time, how do you acquire companies that are bringing in new technologies and integrate them? I think that's really the challenge organizations are facing.

The Importance of Data and AI-First Approaches

Rod: For established organizations, we have a situation where there's already some defensibility, a moat in terms of data they have accumulated over decades, in the case of Salesforce. All the companies using Salesforce have their commercial data inside. Max, do you think that gives them a head start, or would you say a scrappy new entrant, like some AI-first CRM, will have an edge given that they're built from the ground up for AI? Salesforce has this clunky interface that's hard to navigate, and nobody really understands how things work inside because it's now so complicated.

Max: I think data definitely plays a big role for larger companies. Especially for companies like Salesforce, they're embedded in some of the most complex companies, so the workflows, the know-how, and the ability to identify those steps are very important. When it comes to core systems of record, it's not very different; it's ultimately a database of information. But the Salesforce workflow is quite interesting, which plays into the point that Chris raised around UX and UI - how do you interface with that?

For example, you would have created a process around a sales method that you want to use, and there will be some idiosyncratic steps and workflow that you would have taken. How can you then install an agent within that process? Or do people need to redesign their entire process to fit that agent in? That knowledge is important because it will change how your AI model will be deployed internally. They might be small, menial tasks, but when do you interface those small menial tasks? It's a question that can only be answered by some of the organizations out there.

So I think having that knowledge is super important, and being able to train your AI models with that knowledge will give them a head start, if not a differentiating point down the line. At least that's how I think about it.

The Struggles of AI Startups Against Big Tech

Rod: We've established that UI/UX is important, data is crucial, and now we come to the topic of capital and differentiation. This brings me to our second article, which is about how AI startups are struggling to keep up with big tech. Bloomberg reports that Aleph Alpha, which raised more than $500 million just last November, is now announcing that they're shifting away from building advanced AI systems.

They're quoting their CEO, Jonas Andrulis, saying, "The world changed. Just having a European LLM (Large Language Model) is not sufficient as a business model. It doesn't justify the investment." European companies have historically tried to differentiate themselves around data, privacy, and being different from the US. But we're seeing that even for a company like Aleph Alpha, this doesn't seem to be enough. It doesn't seem to be a compelling argument for potential buyers to choose Aleph Alpha over, say, OpenAI.

What does this mean? For example, if a company is trying to position itself, what angles can it take? Is the European angle still a good one, but maybe the company wasn't able to exploit it enough?

Chris: I think we need to look at different layers here. We're talking about large language models, the basic infrastructure level when it comes to artificial intelligence. Developing these models is quite expensive from an energy perspective, data acquisition perspective, and also just for compute power.

But then again, I think there are different layers where you can actually build in and bake in security and policies, which I think you can also develop within the application layer. This is how you can differentiate yourself.

When it comes to this article, in my opinion, and having been in the innovation space for almost a decade, I think this is a very unique moment. It's quite expensive for all startups to actually build AI, especially the foundational models - to the extent that maybe this is really the first time that collaboration between corporates and startups is a huge opportunity and advantage.

At least in Europe and in Germany, we had this wave of bringing both worlds together sometime around 2010 or 2014 when I think there was a major push into FinTech and digital payments. But now I think it's even larger in terms of opportunity because bigger companies and corporates do have the capital that they can deploy.

In a startup, you normally have very entrepreneurial minds and very tech-savvy people. If you were to bring that together, that's probably a good way forward when it comes to being an AI startup or being a corporate trying to be innovative. And I believe we see these kinds of examples now in the market. Look at OpenAI collaborating with Microsoft, one of their biggest investors.

I think any major AI innovator these days actually needs that kind of capital injection provider. And if it's not VC, then definitely it's the corporates. What are your thoughts on that?

The Economics of AI Development

Rod: Well, I was thinking about the topic you mentioned, that we have to differentiate between the foundational models, which is the lowest layer in the application stack of an AI company, and then the application layer that you build on top of that. For example, Aleph Alpha raised $500 million, but they're trying to build these foundational models. It might not be that every company needs its own model.

The article also mentions Character AI, a startup that creates individual personas using AI. For example, if you want to chat with Elon Musk, or if you need a virtual psychologist or friend, Character AI provides these personas so you can engage with them and have virtual friends online. They're also struggling because larger competitors like OpenAI are including this functionality directly in their platforms. They're baking it into the model itself, into their offering, and as a result, it's becoming hard to differentiate.

It's quoted that companies need to spend at least $30,000 for a good GPU. On the other hand, $30,000 for a company doesn't sound like much. But when we think that they might need hundreds of them, Max, how can a company justify these capital expenditures? What are the options, especially if you don't have access to public markets? If you're, for example, a larger startup or scale-up, what are your options here?
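
As a rough back-of-the-envelope illustration of why "hundreds of GPUs" changes the picture: the only figure taken from the discussion is the $30,000 per-GPU price; the cluster sizes and the 40% overhead factor are illustrative assumptions.

```python
gpu_unit_cost = 30_000                 # per-GPU price mentioned above, in USD
cluster_sizes = [10, 100, 500, 1_000]  # hypothetical cluster sizes

for n in cluster_sizes:
    hardware = n * gpu_unit_cost
    # Assume power, hosting, and networking add roughly 40% per year (illustrative only).
    year_one_total = hardware * 1.4
    print(f"{n:>5} GPUs: hardware ${hardware/1e6:.1f}M, year-one total ~${year_one_total/1e6:.1f}M")
```

At ten GPUs this is a line item; at a thousand it is a funding round, which is exactly the capital-access question Rod is raising.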

Max: To your point, just to answer your question around the capital source, at the moment, or should I say for the past 20-40 years or so, a lot of software has been funded by venture capitalists, sometimes by corporates that have a little more foresight, like Salesforce Ventures, which is technically still venture capital but with a corporate backing.

I think what I have in mind is the economics of AI. Traditionally with software, the idea is that you put a huge amount of investment in upfront, and eventually, there are no marginal costs down the line. This means you build your software once, and you'll be able to resell it many times. However, with AI, it seems we haven't reached that point yet.

You could continuously spend a huge amount of money to build the model, but then you still have a lot of capital expenditure that you eventually need to amortize over time, which eats into whatever returns you're generating. My question is, where is that inflection point? How long do I need to develop an AI model before I get the returns that I would like to have - meaning a lower or non-existent marginal cost?

Given how much things have changed, I don't know where that line is yet. Hence, you have to continuously tap into bigger capital sources. So you have your venture capitalists, your governments - we talked about Mistral last week, and France is doing a great job pouring capital into helping AI startups grow.

The other way to do it is probably more of a technical challenge: is there a way to lower the cost of developing all these AI models? At least the world is currently telling us no. But it's good to think about what the alternative could be. So my general sense is that either you go find someone with a lot of money, like in America, or you find a different way of doing things - for example, you build a smaller model at a cheaper rate that gives you a return quicker, so you can use that to fund your larger model. It's almost like the Tesla strategy: you build a very expensive Tesla first, sell it, and eventually you have enough capital to produce a cheaper Tesla for the world. Can you do that for AI models? I don't know. What do you think about different approaches to building AI models?
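
Max's missing inflection point can be made concrete with a toy break-even calculation. The numbers below are purely illustrative assumptions: classic software pays its build cost once and sells at near-zero marginal cost, while an AI product keeps paying for inference (and periodic retraining) on every unit it sells.

```python
def breakeven_units(build_cost: float, price: float, marginal_cost: float) -> float:
    """Units sold before cumulative margin covers the upfront build cost."""
    margin = price - marginal_cost
    return float("inf") if margin <= 0 else build_cost / margin

# Illustrative numbers only: a $5M conventional product vs. a $100M foundation model,
# both sold at $100 per unit, with very different per-unit serving costs.
classic = breakeven_units(build_cost=5e6,   price=100, marginal_cost=1)
ai_app  = breakeven_units(build_cost=100e6, price=100, marginal_cost=60)

print(f"Classic software breaks even after ~{classic:,.0f} units")
print(f"AI product breaks even after ~{ai_app:,.0f} units")
```

The gap between roughly 50,000 and 2.5 million units is the "where is that line" question: as long as inference and retraining keep the marginal cost high, the zero-marginal-cost flywheel Max describes never quite kicks in.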

The Rising Costs of AI Development

Rod: What came to mind, and now that you're bringing up the topic of France, Chris, I know that you're very active in the APAC region. The CEO of Anthropic, Dario Amodei, was quoted saying that today it costs around $100 million to train a model. But he says that right now, this is just the start. The expected cost just by next year will be hitting around $1 billion, and potentially by 2025-2026, it will get towards five or ten billion.

In the APAC region, this type of sum is enormous. So let's say you need $10 billion to do these types of scientific undertakings. What does this mean for regions outside the US? For example, now with your perspective from APAC, does it mean that it will be pretty much just the US and China that can provide infrastructure for everyone else? Or do you see possibilities still for regional players to arise?

Chris: Yeah, that's a difficult question. I mean, that's why the US and China are the biggest countries when it comes to innovation, and I think it's for good reasons - not only because they have good talent but also the required capital. I think the Middle East can play a role, and I think they are. Obviously, I'm not very familiar with the region, but they definitely have the capital. However, I haven't heard of any large language models coming out of the Middle East. I suppose they'd rather invest in US and Chinese capabilities.

Honestly speaking, it's not like you can just come up with talent and develop something. I think one of the articles we're speaking about today really compared this to the race of shared bikes. Remember, back in 2016-2017, at least in China, that was a huge thing where you saw companies like Mobike, Ofo, and some other players pop up out of nowhere, trying to race for shared bike expansion.

When I thought about this, I don't think it's really true for the AI race that we're seeing right now. I think what we do see is that lots of AI players have started, and obviously, it always starts with a lot of fragmentation. But I think we see much faster consolidation in this market now. That's also because it is so expensive. So lots of startups are actually rethinking if it's really worth going into it. Especially now, we live in a world where profitability matters. It's not like 10 years ago where Uber could just burn money endlessly; it was just about growth.

So I think that's why we see that kind of speed of consolidation when it comes to AI models. However, I actually truly believe that there's so much more potential in the application layer. I mean, again, I think we have not cracked the nut yet on how corporates and enterprises can really adopt AI. Is it really a monopoly that bigger tech companies have, like Nvidia and Microsoft, to really build that kind of enterprise capability? Or will there be startups that are also serving B2B enterprises, maybe SMEs, with AI capabilities?

The Risk of Overinvestment in AI

Rod: Indeed, that's the case, and it ties into a quote in the article. Of course, you can tap various capital sources and so on, and the main companies keep building. But, for example, Meta CEO Mark Zuckerberg claims that he sees a lot of companies actually overbuilding. The argument is that you either over-invest now, maybe become a little bit irrational and take more onto your shoulders, or you risk being left behind in 10 to 15 years if you don't.

Max, what are you seeing in this space? Are you seeing that companies are indeed overbuilding, over-investing? For example, now that we see companies need billions to develop something, OpenAI is raising $100 billion to stay relevant in the market and come up with new developments. When we look at those companies that are in the application layer, are you seeing that companies are really over-investing in this space? Are they over-building? Or are they taking a more cautious approach and seeing what works, what doesn't, and trying things step by step?

Max: I think it depends on which corporate you go to. There are some corporates that are definitely going head-on. But when we say overbuilding, it's not just momentarily deploying a lot of resources, but also doing it consistently over time. I don't think we have given enough time to see who is truly committed to AI yet, because every AI program can be shut down in two years. It's the same argument that people talk about with corporate venture capital - the lifespan of a corporate venture capital initiative is three to four years because the person in charge decides to leave, and therefore it's gone.

So to answer your question about whether or not they're over-investing, I don't think so. I don't think we're throwing money at AI without looking at use cases, because corporates are really focused on the returns they can generate from AI. The question then becomes, can they sustain those losses over time so that they eventually can see the light at the end of the tunnel? It might take four or five years, but they still continuously invest. I don't know if that will happen. So we will just have to see in the next two or three years whether or not some of these programs continue to be where they are.

Secondly, I think the over-investing, the irrational exuberance, especially when it comes to the innovation economy, is almost a feature, not a bug. You need to invest so that you can keep building. Everyone will just keep trying things, and you will realize that eventually, there will be a path that works better compared to others. Either people pivot or they die, but that's part of capitalism, right? You just get weeded out, and then you go again.

So in my head, I agree with Mark Zuckerberg's sentiment. It's better to over-invest. You might get it wrong, but at least you try, and you will learn something and go again. It reminds me of Amazon. Now they are a giant corporation, but when the Fire Phone comes up, Jeff Bezos has always said they are going to make even bigger investments and bigger mistakes than the Fire Phone. Hence, in my head, it's almost a feature, not a bug, which means as innovators, we don't really care as much about over-investing; we just keep building. This is very reflective of the current climate for AI, with the billions that are pouring in.

The Geopolitics of AI Infrastructure

Rod: Yes, on one side is the topic of over-investing, but also the other one is, let's say, this bang for your buck - what you're getting for your money. This is written in our next article from the Financial Times, which says that the Nvidia AI chips are cheaper to rent in China than in the US, which is a bit surprising. We've been discussing in previous shows how there are all these local initiatives about AI in a box that's very cost-effective and so on. This was with local vendors and manufacturers.

But here we see that the Financial Times is claiming that Nvidia chips can be rented from local providers in China for around $6 an hour compared to, for example, $10 an hour in the US for the same setup. And here we have on one side the implications for costs, but also, for example, geopolitics, and the topic of data privacy and so on.

Chris, when you're talking to startups, do they have a framework for how to navigate this and say, actually, it's very, very cheap to do it in China, let's just go and find a provider there? Or are they trying to have some other considerations? How are startups making decisions about where to allocate their resources and where to find infrastructure and vendors for this infrastructure?
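
To put the $6 versus $10 per GPU-hour figures in context, here is a quick estimate of what the gap means for a sustained training run. Only the hourly rates come from the article; the cluster size and duration are illustrative assumptions.

```python
gpus = 256                      # assumed cluster size
hours = 30 * 24                 # assumed one-month training run
rate_china, rate_us = 6.0, 10.0 # USD per GPU-hour, as reported

cost_china = gpus * hours * rate_china
cost_us = gpus * hours * rate_us
print(f"China: ${cost_china:,.0f}   US: ${cost_us:,.0f}   difference: ${cost_us - cost_china:,.0f}")
```

Roughly $700,000 saved on a single month-long run is real money for a startup, which is why the question of whether data-privacy and geopolitical concerns outweigh the discount is worth asking.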

Chris: Honestly speaking, I think this is a geopolitical dynamic. In general, I think this will benefit the domestic market in China to develop AI services and products. The sentiment that I hear from startups is, we try to avoid China just at all costs, simply because you never know with your data, you never know with regulations, you never know how predictable or unpredictable China is. And I think that's just the general sentiment over the last couple of years.

So when we think about this particular case here, with chips being rented cheaper than in the US, I think this doesn't come as a surprise to me, honestly, simply because the total cost of chip rental - everything that flows into it - is obviously much cheaper in China than in the US.

But when it comes to availability and proliferation of Nvidia chips, of course, this is definitely something that will benefit domestic local players within the market. When I worked in an AI startup using computer vision to provide a service, and we were expanding into China, it was a very ad hoc, opportunistic move to look at the China market. Obviously, China is this huge market - if you were to capture just 0.1%, that's more revenue than you could generate in most other markets.

Yet at the same time, it was just very difficult to build in China simply because China works on a different internet model. You can't really use Google or Facebook or any other services you would normally build upon. So we had to make the decision, are we going to go full localization and really replicate everything from AWS to Ali Cloud and other providers of services in order to replicate our service and product over there?

There were big concerns in terms of data and privacy, and I think this has only increased over the last year. So in general, I think startups tend to avoid China, especially when it comes to operations. As a target market, only the ones with a really strong value proposition go in, where it makes sense - normally that sits within retail or pharmaceuticals.

Decision-Making in AI Infrastructure Choices

Rod: If we abstract this a little bit and think about the decision between a cheaper provider or a better-known one, how are companies deciding that? For example, Max, when you have startups coming to you and saying, 'Hey, we're providing a service, and we are, say, 20% cheaper than Amazon Web Services or 30% cheaper than Google,' do you say, 'Hey, this is exciting because they are much more affordable,' or do you think, 'Actually, I prefer to stick to Amazon, I prefer to use Microsoft because these are the names we know'? As the usual saying goes, 'Nobody got fired for buying IBM.' So what's the current thinking in larger organizations?

Max: I think the latter definitely still prevails. There's always a chance of being fired, especially in this current climate, if you make a wrong step - at least in most corporates. You can't compare that to Big Tech, for example. I gave the example of the Fire Phone because the Amazon executive who looked after it wasn't fired after it failed. In fact, he was given a bigger mandate to go try something else. But I don't think that exists so much in other corporates.

Coming back to your question specifically around the trade-off between costs and other considerations in larger corporates, the cost thing will always exist. But the bigger question that a lot of corporates are asking is, what are the bigger opportunities here? For at least the last 10 years, they have been battered. Some corporates are just too short-term focused, and some corporates are not thinking about the long term, not investing enough. Therefore, their lunch will get eaten by startups or smaller tech companies.

In the case of banks, they say big tech is coming in, taking all the retail business. And yeah, they are. So I think corporate executives today are not dumb, nor are they blind or deaf. They are thinking about investing in the long term. But they just have to balance it out and be able to convince the investor community they are doing the right thing. It's almost like telling a story from top-down because your CEO will go out there and say that we're continuously investing in the long-term, and then your internal managing directors or directors running the company, the managers, will still have to think about what makes more sense for the long-term. What's the biggest bang for the buck that we talked about? At least, that's the prevailing thinking in larger corporates.

And also, I think we mentioned this before, with corporates, it's always a bell curve decision-making process. With the X-axis being success and the Y-axis being the different types of bets, they would rather invest in things that have a higher probability of success but lower return - more tangible or more predictable - than in something less predictable but with outsized returns. Simply because of the time it takes, and because convincing someone else to give you the capital to do that is also quite hard.

So hence, the cost point will be there. But the decision-making process within corporates hasn't really changed much. It's just they have decided to sometimes invest in the longer term, trying to do things slightly differently. But if they're not in their space, they probably won't be the first one jumping in. For example, if Microsoft didn't jump into AI headfirst, a lot of larger corporates probably wouldn't pick up AI immediately. That's at least how I've seen it at the moment.

Mitigating Risks in AI Investments

Rod: There's the risk of being fired, and even staying on the application layer and saying, 'We're not building our own large language models in our organization, we will just build on top,' still requires a significant investment. Chris, do you have any frameworks or decision-making processes that executives can use to mitigate this risk? How can they say, 'I want to be innovative, I want to bring this innovation into my organization and maybe start some AI initiatives on my project,' but at the same time reduce the risk of putting quite a few millions into a project with unknown return on investment, unknown impact, and the potential to fail? How can they avoid the situation where they're told, 'You just used 10, 20, 50 million on this, it failed, so thank you, but you have to go'?

Chris: I think it depends on the size of the company, honestly speaking. If we talk about an SME or a corporate, generally speaking, IT is always a difficult animal because you touch upon core systems, depending on the industry, very critical systems even. And obviously, to innovate around that is very difficult just simply because it's so protected.

So normally, I would say if we think about a strategy or a framework, there are different aspects we should always look at. Generally speaking, what's your strategy? How much do you want to invest? What's the attitude of whatever decision board or governance system looks at AI - is it going to be a key priority or not? I think that's the first question to ask.

Then the second is obviously where do you want to employ it? How is this going to benefit everyone? Then there are things like talent - do you have the talents? Where do you actually want to source the talents? What's your organizational structure and operating model?

I think those questions are not really different from any other framework. Every time you try to bring in a new technology or innovation, you will have to answer these questions. I think when it comes to IT in particular, normally people try to avoid going into the core systems, but rather find some kind of way around. So either through an innovation center or with a smaller business unit or maybe with a smaller team to try out something. Build the case, build the pilot, build the evidence that it's a good technology, that the vendor is a good one.

Normally, also, if you think about external technologies, you would do some tendering - longlist and then shortlist some suppliers of certain technologies - and then run a pilot. And I think this is the reality; this is how new innovation normally comes into a company.

Conclusion

Rod: We've covered a lot of topics this week. We discussed Salesforce's transformation into an AI agent company, the challenges of retaining AI talent, the importance of data and AI-first approaches, the struggles of AI startups against big tech, the rising costs of AI development, the risk of over-investment in AI, the geopolitics of AI infrastructure, and decision-making processes in AI investments.

As usual, remember to like, subscribe, and leave your comments. We always answer your questions and appreciate when you share our content. Join us next week for another episode of the Chris Rod Max show. Until next time!

More from the Podcast


E21: Is OpenAI's $157 Billion Valuation Justified? Chris Rod Max Weigh In

In this episode, we dive deep into the recent challenges faced by OpenAI, including employee departures and leadership changes, and explore whether its $157 billion valuation is justified. Chris, Rod, and Max discuss the difficulties of rapidly scaling AI companies and provide insights on potential investments in the AI sector. We also examine Hippocratic AI, a healthcare AI startup valued at $500 million, and debate the future of AI in healthcare. Join us for a fascinating discussion on the current state and future potential of AI companies.

Rod Rivera

🇬🇧 Chapter


E17: AI Revolution Decoded: Eric Schmidt on Open Source, Big Tech's Edge, and Corporate Adoption

In this episode, we explore former Google CEO Eric Schmidt's insights on the rapidly evolving AI landscape. Key topics include the debate between open and closed source AI models, the massive capital investments driving innovation, and the challenges corporations face in adopting these technologies. From technical advancements like increased context windows to the potential of AI agents, we dive deep into the current state of AI and its implications for the future of technology and business. Join us for a thought-provoking discussion on how AI is reshaping our world and the obstacles that lie ahead.

Max Tee

VC Expert, AI Investor, BNY Mellon