
E17: AI Revolution Decoded: Eric Schmidt on Open Source, Big Tech's Edge, and Corporate Adoption

In this episode, we explore former Google CEO Eric Schmidt's insights on the rapidly evolving AI landscape. Key topics include the debate between open and closed source AI models, the massive capital investments driving innovation, and the challenges corporations face in adopting these technologies. From technical advancements like increased context windows to the potential of AI agents, we dive deep into the current state of AI and its implications for the future of technology and business. Join us for a thought-provoking discussion on how AI is reshaping our world and the obstacles that lie ahead.

Host

Max Tee

VC Expert, AI Investor, BNY Mellon

Guests

Rod Rivera

🇬🇧 Chapter

Chris Wang

AI Innovation and Strategy Expert, CXC Innovation


In this episode, Max, Chris, and Rod discuss Eric Schmidt's views on the AI revolution, working through clips from his recent talk. They explore the controversy surrounding his work-life balance comments and the future of work in the context of AI. They also delve into technical aspects of AI development, such as million-token context windows and the significance of CUDA in GPU programming, and into the challenges of adopting new AI technologies in large organizations, including the role of innovation units and pilot programs. The discussion then turns to the limits of knowledge systems in AI, the difficulty of understanding AI models, and the need for transparency and explainability. Finally, they cover the capital costs of AI, the trade-offs between open source and closed source models, the potential of combining context windows, agents, and text-to-action, and the case for teaching programming to everyone.

Takeaways

  • Eric Schmidt's views on AI revolution sparked controversy, particularly his comments on work-life balance and the importance of winning.
  • The development of AI has seen advancements in the use of token context windows, allowing for more extensive input and fresher information.
  • The adoption of new AI technologies in large organizations can be challenging due to factors such as cybersecurity concerns, fragmented IT systems, and the need for buy-in from stakeholders.
  • Innovation units and pilot programs can help bridge the gap between new AI technologies and large organizations, facilitating testing and proving the value of these technologies.
  • Understanding the political dynamics within an organization and finding champions for new AI technologies can increase the chances of successful adoption.
  • Understanding AI models and their limitations is an ongoing challenge.
  • Transparency and explainability are important considerations in AI development.
  • The capital costs of AI development can be immense, and different approaches to open source and closed source models have their pros and cons.
  • Combining context window, agents, and text-to-action can lead to interesting use cases in AI.
  • Programming skills should be taught as an essential class for everyone to enable better adoption of AI and efficient workflow automation.

Episode Transcript

Introduction

Max: Welcome to the Chris Rod Max show. I'm here today with Chris and Rod. Every week, we discuss AI stories so you can learn from the experts out there doing cool things with AI. Our goal is to break down the news for you. We hope you enjoy this week's episode, where we'll talk about Eric Schmidt, the former CEO of Google, and his views on the AI revolution. Today, we're also going to discuss clips from one of his talks, which is slightly different from previous episodes. The idea is to explore new ways of bringing you the latest news. So, Chris and Rod, are we ready?

Chris: Very excited. Let's do it.

Rod: Let's go.

Max: Great. For those who haven't seen it yet, there was a video labeled as controversial featuring Eric Schmidt, the former Google CEO, talking about the AI revolution. We've managed to secure some clips online that we'll discuss today to explore what's happening. Let me just share the quick video.

Rod: Max, what's the context? Where was he? What was the point of this interview? What's the story behind it?

Max: Eric Schmidt was giving a talk at Stanford, I believe. He was discussing the latest AI developments with Stanford University students. Something he said about Google's approach to winning was initially labeled as controversial. He basically stated that some Google employees decided that work-life balance was more important than winning. This caused quite an uproar in different news media. But to me, that wasn't the main issue. Rod and Chris, you've both seen the video before. Perhaps you could share your thoughts on why this was considered controversial.

Work-Life Balance vs. Innovation

Rod: I think it's controversial because we're seeing a divide between two perspectives. On one side, there are those who believe now is the time to be fully focused on building things and driving this revolution forward. They think we have to pour all our energy into that. On the other side, there are those saying that life is more than just work. There's more to us than that. Also, thanks to AI and automation, we're becoming more efficient, and the need to work long hours is becoming less relevant. In fact, thanks to AI, it might be possible to do just a few hours of work a day and still get the same output. I'd say this controversy reflects a cultural clash that's happening, not only in Silicon Valley but in many other places, about the future of work and ultimately, what we're supposed to be doing. Should we be living to work or working to live?

Chris: I think the topic of remote work is sensitive, not necessarily controversial. We're living in a post-COVID era, and many employers are trying to bring people back to the office. I think there's a spark, maybe even a revolution, on the employee side, wanting more life quality, which also means more flexibility around where to work. At the same time, I don't think this has much to do with the topic at hand, which is AI and its advancement. I think it's quite interesting what the former Google CEO is actually saying about the future, and I'm sure we're going to dive deeper into that today.

AI Development and CUDA

Max: Great. Let's move on to the first clip, where the former Google CEO talks about AI development happening every six months. Rod, from your technical perspective, what are some of the big things happening? In this video, he actually talked about token context windows. Could you get us up to speed on some of the latest developments over the past six months, especially from a tech standpoint?

Rod: Let's take a step back and try to understand why he raises this point. Why does he think a one-million-token context window is so relevant? When we write something to ChatGPT, that text is the context window: it's how much input we pass in terms of information and background. For those who remember the early days, it was very limited. You could pass maybe a couple of paragraphs before hitting the limit. As a result, you couldn't provide much background about your problem or question; you had to rely almost entirely on the historical knowledge ChatGPT was trained on to understand what was being asked.

Now, when he's talking about the one million token context window, it means we can put entire books as part of our input and then use this for GPT-4 or any other large language model to generate our answers or whatever we're asking. This means we can provide a lot of new information, including company data or personal information that isn't known to the wider world. This provides much more nuance and also allows us to introduce the freshest information. For example, if breaking news happened five minutes ago, it likely won't be in the model's training data. But by having these very long context windows, we can introduce this information so the system has an understanding of it.
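To make Rod's point concrete, here is a minimal sketch of the pattern he describes: passing a long document as context so the model can answer from information it was never trained on. The client library, model name, and file name are illustrative assumptions, not details from the episode.

```python
from openai import OpenAI

client = OpenAI()  # assumed setup: reads OPENAI_API_KEY from the environment

# Hypothetical long document; with a large context window this could be a whole book.
with open("annual_report.txt") as f:
    background = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed long-context model; any large-context LLM would do
    messages=[
        {"role": "system", "content": "Answer using only the provided document."},
        {
            "role": "user",
            "content": f"Document:\n{background}\n\nQuestion: What changed in the last quarter?",
        },
    ],
)
print(response.choices[0].message.content)
```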

Max: That's interesting. So on one side, it's about putting entire books into models and making larger models understand things a little better. Chris, from your perspective, when it comes to the application development of AI over the past six months, what are your thoughts? Is it refreshing every six months because of the technical updates we're seeing?

Chris: Honestly, I think the biggest bang was in November 2022 when ChatGPT came out to the public. Since then, yes, there have been optimizations and incremental improvements. We also see improved models from competitors battling each other on different fronts and key metrics. But honestly, I'm still waiting for the amazing voice feature they've demonstrated.

I wonder if the question of whether it's every six months or not is really relevant. I think what's now become more in the spotlight are the use cases and the commercialization potential. As we discussed last week, we're very interested in understanding how Gen AI can provide value to different corporations and companies. This is really the key to whether AI will be supported by the wider industry for further development.

My point is, it's not necessarily interesting to know whether something new comes out every six months. I think the change is actually gradual and incremental. But I think the question for me is really the rate of support that the topic actually has. If you look at the stock market, stocks in the AI space are quite volatile these days, simply because there's a controversial discussion around the real value of AI. You see some people arguing that we've reached the peak or the deflation moment in the hype cycle, but then there are others who strongly believe that it's still growing and that we're just at the beginning.

Rod: One thing I was thinking about, Chris, is that on one side, this shows how hard it is to really assess which things will have an impact and what becomes meaningful. You were expecting this flashy functionality with voice and so on that's very human-like. We're still waiting for that. And then it turns out that something that's hard to explain, such as this one million context input for a large language model, is what's having a higher impact and enabling these types of enterprise cases. It shows how difficult it is for companies to understand in advance what will have a meaningful impact and what they should be investing in.

Chris: Adding to this, Rod, I think even the simplest office tasks haven't been fully solved or implemented in a way that people understand how to use AI today. Just think about basic Excel operations. So many companies still rely on Excel, and you would argue that with so many more data points at hand, there must be a simpler way to use Gen AI and run through it. We actually see that the integration into existing workflows is one of the biggest hindering factors preventing companies from unlocking the real value or potential of AI.

Adoption of AI in Large Corporations

Max: That's a really good point. You talked about the interest and capital flowing into the system, and also about some of the limitations on translating technical capabilities into immediate tangible benefits. But over time, that will happen. We just have to trust the process.

Now, Eric Schmidt also talked about how CUDA works in the video. He dubbed it a C programming language for GPUs. Rod, could you share your thoughts on what CUDA is and why he says the open-source stack on top of it, libraries like vLLM, is very hard to replicate?

Rod: Actually, he's not entirely correct when he says nobody's talking about that. We have spoken about it on the show, noting that other companies have been trying to compete with NVIDIA on new hardware and new GPUs, like AMD, Graphcore, Groq, etc. We've seen new entrants as well as established players, like Intel, now offering AI-related technologies. And we've been wondering why they're not achieving this penetration.

The main insight is that it's not only about the hardware, but also the ecosystem you need to build on top of it. CUDA is NVIDIA's programming platform for its GPUs: it lets developers interact with the hardware at a very low level and run basic operations like matrix multiplication in a fast, efficient manner. By now, it's the industry standard. Pretty much anyone who wants to establish a new technology first needs to capture developer mindshare, the community building these tools. For that, they need very good software, and this software needs to be adopted, which becomes a true moat for these companies.

It's very, very hard for others to build on top. I've seen Intel and AMD trying to do something in this space, but it takes many years, if not decades. CUDA has been around since 2007, almost as long as general-purpose GPU computing itself. Even if others come up with something on the same level, or maybe slightly better, developers will always ask: why should I adopt it if I'm already so invested in the existing stack?
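As a rough illustration of the ecosystem effect Rod describes, here is a minimal sketch using CuPy, one of the many libraries built on top of CUDA (our choice for illustration; it is not named in the episode). A few lines of NumPy-style Python dispatch a large matrix multiplication to NVIDIA's highly tuned kernels, which is exactly the developer convenience a rival stack would have to replicate.

```python
import cupy as cp  # GPU array library built on CUDA; requires an NVIDIA GPU

# Two large random matrices, allocated directly in GPU memory.
a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)

# The @ operator dispatches to cuBLAS, NVIDIA's tuned CUDA matrix kernels.
c = a @ b

# GPU work is asynchronous; wait for it to finish before reading the result.
cp.cuda.Stream.null.synchronize()
print(float(c[0, 0]))
```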

Max: That's super interesting. It's about the moat they're building with all these different developers. Chris, from an adoption perspective, how do you see the development of all these languages as well as the depth of the development community? We always talk about the ability to translate some of the technical capabilities to use cases and tangible benefits. Here, it's more on the technical side. I wonder if it matters, especially when a large corporation is trying to adopt some of these different AI solutions for their own needs.

Chris: It's hard for me to comment on that directly, but as an example, think about what happens whenever you bring in a new language: it's very difficult from a cybersecurity point of view to get it in, and then also to find the right people within the company to build on top of it. I vaguely remember a meme about Deutsche Bahn looking for an MS-DOS programmer because some of their systems still run on it. And looking at the aviation industry, you still have systems from the seventies or eighties that, from a regulation point of view, remain the most widely used and adopted.

What I'm trying to illustrate here is that with every new language you bring out into the market, you actually face adoption risk. Even something trivial like Ruby, which is very common nowadays in the startup world, is very difficult for many companies to adopt. More and more, you see that gap of innovation where a lot of corporates or larger entities are living in a different system world, and then you have the more agile, maybe smaller ones and startups running in a very different world.

Rod: I'm curious, Chris, since you've been in innovation units, what's the decision-making process like when an innovation unit comes with proposals for new technologies, systems, or models? How is it decided whether to adopt them or not? How do large companies handle that?

Chris: Well, I don't think there's a silver bullet here. These innovation entities normally serve to bridge somehow. But when we talk about a serious enterprise-wide rollout, it takes forever to approve and even longer to implement because the existing systems are often very old.

Regarding stakeholders and decision-makers, it depends on who the business unit owner is, or even up to the executive board, depending on how large the enterprise software adoption is. Practically, it normally starts with a small team, a business unit that is a bit more open-minded and has more liberty in choosing the kind of software and systems they use. Then you would try to prove the case that an AI software or service actually brings value to the company. It might then get adopted by another business unit until it reaches group IT. That's normally how it works.

To be honest, my personal view is that it's never really great if it's just a sort of patch-up solution. You might work with mini-sites or websites or some workarounds. The problem is that this is how you create a very fragmented IT system landscape in a company. Every team finds workarounds because nobody wants to go through the formal approval process if possible, as it takes so much time. That's how you end up having a large bunch of different languages, systems, and solutions.

Rod: So, for example, if I'm in middle management at a large organization and I heard about a cool new library or tool on the Chris Rod Max show, how should I go about trying to roll it out at my company? Where should I start?

Chris: Normally, the most practical way is to try it out in a way that doesn't ring any alarms or go against any policies or cybersecurity measures, but still allows you to test it. The test case is super important. That's also where innovation labs or hubs really help to bridge and create that kind of trust and support these POCs or pilots to make the case. It's also a little bit about luck and finding people in the organization who are more entrepreneurial and open-minded, who can actually help support a certain case. That's normally the grassroots strategy that most people use.

Another approach is top-down, where the executive board actually recognizes the value of a certain technology. We see evidence of this in earnings calls and in how often AI or artificial intelligence is mentioned. Here the push is top-down: the executive board is open-minded or deems AI a crucial technology for the future, and then it normally comes down to a group IT person.

There's also another route where you go to the bigger players like NVIDIA or Microsoft, and you try to figure out if there's a way you can act within a specified or existing contract framework that you already have with a supplier, which makes things so much easier to implement.

Rod: So if I understood correctly, one way to do it is to reach out to my innovation unit in the organization and try to pitch them what I have in mind, and work with them on some rollout plan or pilot program that we can then later present to a CIO, CTO, or someone higher up in the hierarchy.

Chris: Exactly. That's one way. The other way is obviously the more formal top-down approach and going with existing suppliers that are also very big in the market.

AI Adoption Strategies for Startups

Max: Just to echo what Chris said, we've seen different examples of how organizations are deploying AI. There are folks that go from the top down - I know a few banks that have a head of AI, which is really a new thing. And then there are also folks that are trying to set up almost like a hub within the innovation labs, especially for the newer Gen AI applications.

Rod: Max, I want to ask from the startup side: if a startup wants to introduce new technologies to a large organization, say, in this CUDA example, a new startup builds a better CUDA with real benefits, how can it get in? How can it run a pilot with a large company?

Max: From my perspective, it really depends on whether the company has an existing vendor or not. Are you trying to solve a new problem or replace another vendor? If you're trying to replace a vendor, you might start internally. If you're trying to introduce something entirely new, I think you do need to have some sort of champion internally, as Chris pointed out. It doesn't matter if they're running the project from a grassroots perspective or if they're coming from the top down trying to push certain applications. That's what I've seen work.

Obviously, there's a key person risk here. Often, there are big projects or initiatives that have kicked off, sometimes even with investments around them. In the end, the key person might leave, and then the project kind of fizzles out until the next person comes in to take over.

If you're trying to solve a problem where you already have an existing vendor doing something similar but not the best at what they're doing, the best way I've seen is to do a comparison. You almost go straight to the person currently implementing that system and try to understand the politics around it. Why are they incentivized to keep the current systems in place? If they're not, they're probably going to search for new systems to replace it.

So to do that, it's all about proving the value. I have no doubt that a lot of the technology out there is very good. But it's just being aware of the political situation of the individuals using or trying to push the system that will be super helpful. It really depends on whether or not there are existing players within the organization.

From a startup perspective, I know a lot of the time it's all about build and then sell as straightforwardly as we can. Unfortunately, larger organizations contain many micro-organizations within them, so it's important to understand how all of them interact with each other. At least that's what I've seen.

Understanding AI Systems

Max: Now we've talked about how organizations could think about bridging the gap between what's happening outside from a technology perspective and what's coming in. One thing that Eric Schmidt talked about is knowledge systems and their limits. He has a quote that says people are creating things that they don't understand. I find that alarming, but what is he trying to say? Let me play the clip, and then we can discuss it.

[Clip plays]

So on this, what I find interesting is the question itself. We're seeing a lot of different AI models. Are we at the point where we're creating things that we don't understand? Eric talks about how we could potentially see this play out. But before we get to that, I'd love to get your thoughts, Rod, especially on the development side. We're building a lot of things very quickly. I'd love to understand, from a technical perspective, at what point do we reach a limit?

Rod: This criticism has been around for decades, pretty much since the advent of deep learning, the family of algorithms that underpins what we know today as large language models. We say they are black boxes. With earlier algorithms used in the community, it was very clear how to derive, step by step through the mathematics, what was happening. With these LLMs especially, things come in and other things come out, but we cannot fully trace what happens in between. That's largely because we do not yet have all the instruments.

This has been changing a lot. There has been a lot of theoretical work that explains how every model is functioning behind the scenes. So it's not a full black box anymore, maybe more of a gray box in a sense. It's true that compared to other methodologies, we do not yet have a fully transparent explanation. It's not like a clear box. But this will change over time. We have to remember that this is relatively new technology, relatively new models and algorithms. Whereas something like linear regression has existed for centuries, so of course, it has had much more time to develop its theory and explanations.

On one side, yes, we do not fully understand it, but things are changing fast. And in the end, it's a question of whether we need to fully understand it in order to assess its usefulness or derive value from it. That doesn't seem to be the case. We can also have mechanisms or safe checks that come after the output. So maybe we do not understand fully what is happening in the system, but once we get an output, we can assess if this output is relevant. For example, many people tell me they've noticed that ChatGPT is 'lying' more and more. This is a way to verify it, right? I don't know where it's getting its knowledge from, I don't know how it is working, but when I get the output, I can confirm if this is what I want or not.
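A minimal sketch of the post-hoc check Rod mentions: treat the model as a black box and validate only its output. The ask_model function is a hypothetical placeholder for any LLM call, and the required fields are invented for illustration.

```python
import json

REQUIRED_KEYS = {"company", "revenue_usd", "source"}  # invented schema for illustration


def ask_model(prompt: str) -> str:
    """Hypothetical placeholder; plug in any LLM client here."""
    raise NotImplementedError


def answer_with_check(question: str, retries: int = 3) -> dict:
    """Ask the model, then verify the output instead of the internals."""
    prompt = question + "\nRespond as JSON with keys: company, revenue_usd, source."
    for _ in range(retries):
        raw = ask_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: we cannot see why, but we can tell, so retry
        if isinstance(data, dict) and REQUIRED_KEYS <= data.keys():
            return data  # passes the post-hoc check
    raise ValueError("model never produced a verifiable answer")
```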

Max: Chris, I find that quite interesting. But large corporates, most of the time, want to understand this stuff, or at least be able to articulate it when it goes to a board meeting. What are your thoughts on this?

Chris: I don't want to answer this in a corporate context; I think it's more of a human question, isn't it? It's like the fascination with what intelligence is. And honestly speaking, I don't think we understand our own brains well enough to know where exactly intelligence sits. So what we are doing is basically mimicking the brain with neural networks, and that is the foundation of Gen AI, or AI in general.

I think another human trait is that we always want to be in control of things, so we get very nervous when we don't fully understand what's going on. There was this big sensation a few years ago when DeepMind developed AlphaGo and played that famous match against Lee Sedol, one of the world's strongest Go players at the time. They played five games, and AlphaGo won four of them. My understanding is that there are more possible games of Go than atoms in the observable universe, something crazy.

And of course, if we had to break it down and make it controllable enough for us to understand, we would have to go through each and every possibility as the tree branches out, examining every variation of the game and every possible next move. But AlphaGo cuts through this with heuristics, pruning away parts of the tree and making educated guesses.

And I think this is at least a bit of a way of how I explain it to myself of how larger models actually work - they make guesses and they have heuristics to work with. And so it's maybe a little bit like how a child would instinctively choose one decision or another. And it's maybe not fully understandable as an adult, but it's still working and it's still an intelligent being and we still trust the outcome of it. What do you think?

Rod: Indeed, one of the characteristics of all these systems is the ability to forecast into the future, and to do it in parallel streams, whereas we as humans are limited in how many things we can think about at the same time. I have to really, really focus on the task at hand, otherwise I can't do it. And we cannot think, say, 20 steps ahead, but maybe one, two, or three at most. That's where our limits are.

What happens very often is that, in the case of Go for example, yes, there may be millions of possibilities, but historically humans have explored only a subset of them. Not only that: the system can play out these historic options and estimate which move the opponent is likeliest to make. And this is very similar to how LLMs like ChatGPT work. In the end, it's about forecasting, predicting the next word, the next combination of words, and so on. If I say this first word, then most likely a certain second one will follow, and based on the second one, the third, and so on.

Why was it decided that this was the case? In theory, it was decided through probability rules. But what was the mechanism behind it? We do not fully understand it. So whenever people discuss transparent systems, black boxes, white boxes, and so on, I think of the philosophical reference of Plato's cave. In the end, we always see just a shadow of what's going on, and by interpreting the shadow we may get very close to the reality, but we will never be fully there.
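To ground Rod's description of next-word prediction, here is a small sketch using the Hugging Face transformers library with the small GPT-2 model (our choice of tooling and model; neither is mentioned in the episode). It prints the probabilities the model assigns to candidate next tokens.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The largest company in the world is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# Probability distribution over the next token, given everything so far.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode([int(i)])!r}: {float(p):.3f}")
```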

The Future of AI and Its Implications

Max: From an intelligence perspective, I totally agree. I think this reflects what's going on with all the investment into the overall understanding of intelligence. We may not have a very good understanding of it yet, but there is still an outcome we're trying to drive. Something beyond our understanding today doesn't mean it's beyond our understanding in the future. We just have to trust the process.

Now, Eric Schmidt also talked about Mistral's new model. The reason why this is interesting is because he's trying to outline that the capital costs for AI are so immense that some of the leaders are pulling away from the other smaller players. I don't know if this is true. I'd love to hear your thoughts, but before I hear your thoughts, let's just jump straight to the video.

[Video plays]

Rod: Before we dive into that, what's fantastic here is that very often, not only in our show but also in general in media and society, we discuss the lack of innovation in Europe or that Europe is being left behind in terms of AI and other technologies. But here we have an example of Eric Schmidt praising an AI startup based in Paris, France. France is not necessarily what we might think of as the cradle of innovation. And we can see that it's still possible to build meaningful and impactful companies outside of the US and Silicon Valley.

But to your point about open source models and closed source, this is what will define the race of AI. We have two schools, two philosophies. On one side, we have OpenAI, where everything is closed and you have to pay. You have no idea how in practice GPT-4 really works. Sure, OpenAI does a lot of research and academic publications where they provide some explanations, but we don't have access to the code or, more importantly, to the dataset they're using to train these models.

The second school of thought is represented by companies like Mistral and Meta. They're saying that we need trust in AI, and one way to establish this is by being transparent and offering pretty much everything available. You have access to the weights of the models, to how the model is built, and a lot of information on the data behind it. Also, access to the model is free. Anyone can just go to the internet, download it, and start using it.
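To make the open-weights point concrete, here is a minimal sketch of downloading and running one of Mistral's openly released models with the Hugging Face transformers library (our choice of tooling; the episode does not name it). No API key or payment is involved, only the published weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Mistral's openly published 7B base model; downloading it is free.
model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Open-weight models let anyone", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```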

But to Eric's point about the massive capital expenditures required to develop this - we need a lot of infrastructure, servers, and highly qualified experts. We need to spend a lot of energy doing all this computation. We're talking about really spending millions just to get one of these models ready. And the question will always be, where will this come from?

Potentially, it might be an undertaking that is societal, where we say we as a society need to foster this for the benefit of humanity. So it becomes something that governments might fund. Or it can be the idea, for example in the case of Meta, where they see this as a way to protect themselves from new entrants. If the future is AI and if all these eyeballs are spending time in ChatGPT instead of chatting on Facebook, then they need to do something.

Just this week, there was an announcement that Meta, across its multiple platforms, has 400 million users of its chatbots. So Meta has already created a relevant competitor to ChatGPT that we're not really aware of, because it isn't a specific product or brand; it's just something embedded in their multiple systems: Instagram, Facebook, and so on. For them, the money spent on these open source models is already paying off, with the additional benefit that they're planting the seeds for new OpenAI competitors, so that OpenAI never gets too strong, because it will always be competing with others powered by technologies Meta provided.

Max: Yeah, the seeding of new competitors is quite interesting to me. Chris, from your perspective, especially on adoption and the competitive landscape, have you seen differences between open source and closed source in other software industries? Because of the capital costs and support going into one company, my understanding is that more money doesn't always produce the best software. At least that's what I'd like to believe. So pouring a lot of money in and keeping it closed source may not actually yield the best software in the world. I guess my point is more about how the creation of the software gets supported. But from an adoption perspective, does open versus closed change the way you think about it?

Chris: I actually think that the logic Eric Schmidt used is not so sound. Essentially, he's saying, 'Listen, my entire career, I've been working on free stuff, and we can't do it this time around because it's so expensive to build AI.' I want to separate it because I think there's something around business models. Google can do all these things for free, like Gmail, Google Maps, etc., that we use day-to-day for free because they found a business model and someone else to pay for it - the advertisers.

The other thought I had was that open source versus closed source is basically what Rod described: whether or not you can look into the code, collaborate, fork or duplicate the software, and do something with it. We do have other companies with an open source approach. Linux is one example: companies around it provide the software but find other ways and means to monetize it.

So I think this is the way I would actually apply here in this case around AI, which is, okay, you can or cannot make a code available to others. And this is one part of your strategy, but still you have to figure out the other part, which is how do you make money and cover your costs? And so for me, this is a bit of a nuanced sort of distraction or sidestep, to be very honest.

And I think the overall question still stands: how do AI startups or companies make money? Usually the answer is some kind of service on top, right? Maybe you don't pay for the software code per se, but you pay for a beautiful interface called a chatbot, or even for consulting services, say NVIDIA helping your enterprise build something with cloud and AI. That's normally how to think about it.

So one way is to think about the investment costs, and for startups that money really comes from VC pockets; these days, obviously, very deep pockets are required to build some of these models. But then there's the front end, so to speak, where you need to figure out your business model. And that's a challenge for any new company, in technology or elsewhere.

The Future of AI: Context Windows, Agents, and Text-to-Action

Max: Great. Given that you talk about the different services, which I think Eric also touched on, he talked about three things: the combination of context window, agents, and text-to-action, which are really things that you can build on top of those models and those investments in order to get to some of the end outcomes that you want. With that, I would just play a quick clip about him talking about that, and I'd love to get both your thoughts around it.

[Video plays]

For me, what's interesting is that he was trying to bring some of the LLM concepts we've been discussing to life: context windows, which keep things current; agents, which run models for you day and night without you doing it yourself; and text-to-action, which turns whatever we say in natural language into code that then takes action in the digital world.

We have been spending some time talking about different use cases. I just wanted to hear from both of you: by combining all three of these elements, what are some of the interesting possible use cases we could be thinking about? Also, what are some of the things that corporations need to consider when it comes to adopting AI, taking all three of these things into account?

Rod: I want to provide a bit of context first for those in the audience who don't know what agents are. Agents are almost a whole new discipline emerging in generative AI, built on the idea that large language models can be treated as grunt workers. I can say: what if I have multiple GPT models, or any other models like Mistral, each with assigned tasks, and I coordinate them?

Normally, when we think about preparing a spreadsheet, say a list of the largest companies in the world by revenue, we need to carry out multiple tasks. We need to figure out on the internet who the largest companies are, then find sources for their revenue, so we need to plug in multiple sources. We also need to do some spreadsheet formatting. That's multiple associated tasks.

The idea with agents is: what if we could delegate these tasks, the research, the spreadsheet formatting, the combining of data sources, and so on, with each task given to a different AI model? Then another model coordinates them and presents the final product. This is seen as the next frontier in AI. We start to see these not as smart humans inside a computer, but as capable single-task grunt workers doing things in parallel and combining their output into full workflows.

What Eric is saying is, especially in life sciences, you need to carry out so many trials, you need to be exploring so many options. It's like what Chris was saying about Go, where there are so many combinations possible. What if we can have a full team of virtual researchers who are testing combinations of compounds and seeing what works? And then in the end, they come to you the next day with the combination of a compound that had the best results. What we don't see is that during the night, it wasn't necessarily one researcher doing this, but it was, quote-unquote, a team of millions of models trying different combinations in parallel.
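A toy sketch of the coordinator-and-workers pattern Rod outlines, scaled down from "millions of models" to a handful of calls. The worker function is a hypothetical stand-in for a single-task LLM call, and the spreadsheet task mirrors his example.

```python
from concurrent.futures import ThreadPoolExecutor


def worker(prompt: str) -> str:
    """Hypothetical stand-in for one single-task LLM 'grunt worker'."""
    raise NotImplementedError("plug in an LLM client here")


def build_revenue_sheet() -> str:
    # Step 1: one worker researches the list of companies.
    companies = worker(
        "List the 10 largest companies in the world, one per line"
    ).splitlines()

    # Step 2: many workers look up revenues in parallel, like the overnight
    # 'team' Rod describes.
    with ThreadPoolExecutor() as pool:
        revenues = list(
            pool.map(lambda c: worker(f"Find last year's revenue for {c}"), companies)
        )

    # Step 3: a coordinator model assembles the final product.
    rows = "\n".join(f"{c}: {r}" for c, r in zip(companies, revenues))
    return worker("Format these rows as a CSV spreadsheet:\n" + rows)
```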

Chris: I would actually love to see a world, and this is maybe taking it a bit further, where we wouldn't be doing the research manually or the Excel file manually or the PowerPoint or whatever file we're trying to modify. Instead, I hope that everyone, and hopefully already in school as a mandatory class, gets to learn about programming.

I wouldn't necessarily agree with this whole 'we have a couple of cheap programmers' idea. I actually believe that programming and understanding computers and building these kinds of workflows that Rod is describing should be an essential class like math or language for everyone to be able to build it. Because, going back to our previous conversation about companies, organizations, and all these challenges and obstacles of adopting new technologies, yes, of course, people at work don't know how to use AI. They don't know how to plug workflows together. And even using something like Zapier or Bourdain or whatever it is, is already really, really difficult because they maybe don't understand the logic or underlying infrastructure around it.

So I think we would be living in a more efficient, better world with less menial work if all of us were able to understand human-computer systems and to plug workflows together.
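As a rough sketch of the text-to-action idea raised in this segment: ask a model for code, then execute it. Everything here, the prompt, the helper name, and the minimal guard, is illustrative; a real system would need sandboxing far beyond this.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call that returns Python source code."""
    raise NotImplementedError


def text_to_action(instruction: str) -> None:
    code = ask_model(
        "Write plain Python, no markdown fences, that does the following:\n" + instruction
    )
    # WARNING: executing model output is inherently unsafe; real systems sandbox it.
    exec(code, {})


# Hypothetical usage:
# text_to_action("Rename every .txt file in ./reports to include today's date")
```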

Conclusion

Max: Great. I think I would love to live in that world one day, Chris, so I don't have to do all these menial tasks. But with that, thank you so much, everyone, for tuning in this week. We had a very engaging conversation, and we hope you'll tune in again next week when we break down more AI news. Thank you again for joining the Chris Rod Max show. We'll see you next week. Please like, subscribe, and share with whomever you think would be interested. Thank you.

Rod: And thank you so much to all who are leaving their comments, giving us feedback, and really supporting us in this journey of exploring AI. It always makes me very happy to see your comments.

Max: Awesome.
