
E10: AI bets & pragmatism, Service sector disruption, Investment woes & the Aleph Alpha controversy

Chris Rod Max dive deep into AI's current state and future potential.

Host

Max Tee

VC Expert, AI Investor, BNY Mellon

Guests

Chris Wang

AI Innovation and Strategy Expert, CXC Innovation

Rod Rivera


Chris Rod Max dive deep into AI's current state and future potential. They explore emerging use cases in wealth management and legal services, debate the merits of massive AI investments in light of Goldman Sachs' critical report, and discuss paths to overcome the "AI plateau." Plus, an exclusive look at Aleph Alpha, Germany's attempt to challenge OpenAI's dominance. A must-watch for anyone following AI's impact on business and society.

Like what you hear? Remember to smash that subscribe button for more insights every week!

Takeaways

  • AI is being utilized in the services sector, such as wealth management and contract management, to automate menial tasks and improve productivity.
  • While AI can enhance certain aspects of services businesses, there is still a need for human expertise and responsibility, especially in areas like financial and legal advice.
  • Goldman Sachs' report suggests that the current investment in AI may not be yielding significant benefits yet, but there is potential for future advancements and killer applications.
  • The high cost of AI implementation, including energy consumption and data acquisition, is a challenge that companies need to consider when allocating resources.
  • The AI plateau is a real phenomenon, but there are opportunities for breakthroughs and advancements to push AI to the next level. Access to public data is becoming limited, and the next phase of AI may require enterprise or proprietary data.
  • Corporates need to be open-minded and entrepreneurial to invest in AI and leverage their data effectively.
  • Hardware advancements are crucial for the future of AI, as current technology is inhibiting progress.
  • The performance and competitiveness of AI models, such as Aleph Alpha, are important considerations for the industry.
  • The German ecosystem is seeking to establish a competitive position in AI, but challenges remain.

Episode Transcript

Introduction

Max: Welcome to the CRM show. Joining us today are Rod and Chris. In this show, we dig into the latest news in AI and share insights from practitioners to help you start using AI. In this week's episode, we'll cover:

  1. AI use cases forming in the services sector
  2. Goldman Sachs' latest report on AI value
  3. How AI could overcome its current technical plateau
  4. The economics of AI in general
  5. A story about Aleph Alpha, which is particularly interesting for non-German speakers

Let's begin with our first topic.

AI Use Cases in Services

Max: Many businesses in the UK services space are considering how AI could change their operations. The Financial Times recently published two articles discussing how wealth managers and investors are utilizing AI, and how AI is being implemented in contract management. This raises questions about the future roles of lawyers and solicitors in contract management.

Chris, as a practitioner in this field, what are your thoughts on the impact of AI on service businesses, particularly in wealth management and legal services?

Chris: When we talk about service businesses, we're referring to business models and industries that are lightweight and don't have a physical component, unlike, say, transportation or mining. We're really speaking about pure services performed between humans, based on intellectual and mental activity.

Anything that is online and digital gets disrupted most. We've previously talked about copywriters, design, marketing, and anything in the creative space. Now, when it comes to legal or wealth management, the articles were interesting. One article mentioned someone who used AI for investments and seemingly achieved a 30% return, which is great for them.

However, the question is whether AI was really the key factor or just the randomness of a rising market. I believe that in the service industry, AI is replacing very menial tasks such as research and information gathering. These low-level tasks that you'd normally assign to an intern are being replaced by AI, resulting in productivity gains and time efficiency.

But when it comes to financial or legal advice, that's where the human element comes back in. The human plays the role of the responsible person for that particular advice, which is important from a legal standpoint. This aspect is not going to be disrupted or replaced by AI.

To summarize, I believe menial tasks are being replaced by AI, but there will always be a human component, if only from a legal point of view, to have someone responsible for the work or mental activity performed.

Max: Thank you, Chris. The separation between menial and complex tasks in every sort of service business is an interesting point to remember. Rod, as someone who's been in one of the most expensive service jobs in the world - software development - what are your views on services being disrupted by AI, especially in wealth management and legal services?

Rod: Focusing on legal services, specifically contract management, what I've observed in the last couple of years is how the industry has been adapting. I monitor what's happening in the job market in terms of AI opportunities, the types of profiles being sought after, and so on. What struck me was that more and more service businesses are building complete AI teams in-house. They're not just paying Microsoft or Accenture to build a solution for them, but rather developing this competency internally.

For example, large legal services firms like Linklaters are actively looking for developers to build AI applications. The same goes for the big four accounting firms like KPMG and Ernst & Young. LinkedIn is full of job ads for these companies. It's obvious why - in all these processes, it's pretty much knowledge management. You have a lot of documents coming in various formats: Excel files, PDFs, emails, and so on. Someone needs to make sense of it and produce a summary.

In the case of accounting firms, you get all the transactions for a business and need to create a spreadsheet summarizing the financial situation. For legal firms, it's about coming up with an opinion on a 600-page contract, advising what's good about it, what should be modified, and what should be negotiated. All of this is very intuitive for AI applications to solve.

Here in the UK, there's a company called Robin AI that focuses on managing all types of contracts and legal documents, providing lawyers with an assistant to help them craft and understand contracts better. Often, even for experts, the terminology and wording can be unclear or open to interpretation. So, rather than replacing legal experts, AI is magnifying them, making them more efficient, and enabling them to dig through more documents.

Max: That's great. It seems you both see AI as a co-pilot within service businesses rather than entirely replacing humans, at least not on complex tasks. With legal work, there's a lot of redlining going on back and forth. Having an AI that knows when and what to redline, especially on simple matters based on organizational preferences, would be super helpful.

Goldman Sachs Report on AI Value

Max: Rod, you mentioned how Linklaters is trying to hire many developers to work in-house. In light of that, Goldman Sachs recently wrote a report suggesting that there's too much being spent on AI with too little benefit. They noted that tech giants and beyond are spending over one trillion dollars on AI CapEx in the coming years, but so far, we haven't seen much return from a GDP or overall benefit perspective. What are your thoughts on this, Rod?

Rod: First, the article is very comprehensive. We could do an entire show dedicated to dissecting it. The general tone of the article is more critical and perhaps pessimistic about what's going on with generative AI. Some of this we have discussed in our previous episodes, where we've said that while there's a lot of potential, it's not being fully realized yet.

One thing I noticed is that they're focusing mostly on model capability, saying the next GPT model won't be incrementally that much better than the current one. They argue that if we extrapolate this trend, it won't be possible to recoup the investment. But they're missing a big part - nowadays, it's not just about the models but about the systems around them.

For example, if you ask ChatGPT what day it is, it can tell you. But this information isn't coming from the model itself; it's coming from other systems providing this information to the model. This is what's happening in areas like legal document processing. One part is the model summarizing and assessing documents, but a big part around it is the document processing pipelines in the background. These take PDF files, segment them, modify them, and prepare them for the models to process. These are very complex processes, and a lot of innovation is happening here.
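The segmentation step Rod describes can be sketched minimally: split a long document's text into overlapping chunks that fit a model's context window before any summarization happens. The function name and sizes below are purely illustrative, not any real product's API.

```python
def chunk_text(text: str, max_chars: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks for downstream model calls."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        # Each chunk repeats the tail of the previous one, so a clause
        # split across a boundary still appears whole in one chunk.
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

The overlap is the design choice that matters here: without it, a sentence cut at a chunk boundary is invisible to the model in both halves.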

The improvements and advancements are taking place around the model itself, in the architecture, in the infrastructure, more so than in model improvements. So, I think they might not be seeing the full picture.

Max: That's interesting. It's almost like we're fooled by the advancement of all the different technologies. There are some tasks that can and will be automated; we just haven't quite realized all of that yet. Your point about focusing on the models is interesting - we haven't seen all the uplift from it entirely, but that doesn't mean it won't come. Chris, what's your view on the topic of AI having too much spent on it with too little benefit?

Chris: Honestly, I think the report is refreshing because for the longest time we've seen a lot of buzz about how AI is going to change everything. We've talked about different industries getting disrupted and jobs being replaced. This report is a stark difference and it's refreshing because it really tries to show that maybe AI is not what we think it is.

The main gist of this article is that AI is incredibly expensive for two reasons we've discussed before. One is that it takes massive amounts of energy, so data centers are a big factor. The other part is that you need data somewhere, and getting that data is also getting more and more expensive. We're hitting a plateau in terms of more data that you can consume and get to train your models. This makes these models prohibitively expensive.

I think the question is, will we see the kind of upsides that everyone believes in to justify the CapEx? Zooming into the micro level, which is a corporate, I think a lot of corporate leaders are thinking about how much of their CapEx they're supposed to spend to develop their own application layers on different models or hiring a whole data science AI team. And I think that's also where we see a little bit of hesitance around what we're supposed to do with it and whether we actually see the benefits.

Max: It's a good question. I read somewhere that when the internet came, it was very much a cheap solution for people to reduce or do simple tasks. You and I could share knowledge, share items to make commerce work, etc. Whereas with AI, the initial spend is very high to do complex tasks.

From my personal use cases, I do a fair amount of research and searching for information and organizing that information. AI hasn't yet replaced my workflow entirely. I questioned how much of other workflows have been enhanced by AI. If I were to break down my day-to-day workflow, how much time am I saving by using generative AI? So far, I guess I'm still a bit of a novice, so it still takes me time to learn how to get the right answer.

The report talks about some sort of killer application. I haven't quite gotten to that killer application yet. Even for searching for better information, the cost to do a Google search is a lot cheaper compared to doing a search on ChatGPT, for example. Hence, I can do a lot more searches on Google, even if I get it wrong sometimes.

I'm very positive about the potential of the technology, but I haven't quite seen that in my day-to-day work. I'm happy to be proven wrong. If someone can tell me how to do my job better, I'll take that and learn from it.

Overcoming the AI Plateau

Max: There's another article that recently came out titled "The AI Plateau is Real: How We Jump to the Next Breakthrough." It discusses different ways we can make AI better and eventually help us get to the aspirational state we'd like to reach. Chris, what are your thoughts on this?

Chris: This is a continuation of the Goldman Sachs article we just talked about, discussing how AI is getting prohibitively expensive. One part is because of the data drain, which we've spoken about before. This article continues to talk about how all the publicly available data has now been absorbed, and how to unlock the next phase may involve getting access to enterprise or proprietary data.

What I found intriguing are some of the quotes around, for example, Zoom having so many more hours of conversation - I think it might have been in the trillions, like 3.3 trillion meeting minutes versus YouTube's 150 million hours. That's significantly more data that could potentially be tapped into.

Again, this comes back to the question of whether corporates now believe in AI and are willing to invest CapEx to do something with their data. Or are they just going to ignore it because they don't believe it's going to get that powerful? Because if you think about it, all these companies that have so much data, most of them don't really leverage any of that data in a way that significantly impacts their top line.

But if you were to build your own proprietary model, which I think a couple of them are doing, you could actually build something very specific. For example, that could be much better in your customer support because you have all these different hours of conversation, you know the patterns, etc. I think this is a very attractive and interesting way to go, but it really requires corporate leaders to be open-minded, entrepreneurial, and willing to take that risk.

Max: How about you, Rod? What do you think of the article?

Rod: This article from Emergence is a much more optimistic take compared to the Goldman Sachs one. They mention some things that we have been saying on the show. First is that we're running out of public data. For example, OpenAI has been using YouTube video transcripts and audio to train their newer models. But even there, there's a ceiling - there's not an unlimited number of YouTube videos.

One of the things they suggest is that we should start using internal data. The other thing we shouldn't forget is that part of our current limitations comes down to a lack of talent. People often don't realize that you can't do this type of AI work with your classic web developer or software engineer. This is a different technology with non-deterministic output.

Traditionally, when I develop an application, I have a specific input and get a specific output that I can test for. With AI products, this is not the case. The output can be very different each time for the same input. For that reason, the profile needed to build these products is different, and there aren't enough people with these skills.
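Rod's contrast can be made concrete: with a deterministic function you assert one exact output, but with a model-backed component you can only assert properties of the output. In this sketch, `fake_summarize` is a stand-in for a real model call, and the property checks are illustrative, not an established testing framework.

```python
import random

def fake_summarize(text: str) -> str:
    """Stand-in for a model call: the output varies from run to run."""
    words = text.split()
    k = random.randint(3, min(6, len(words)))
    return " ".join(random.sample(words, k))

def check_summary(source: str, summary: str) -> bool:
    """Property checks instead of exact matching: the summary must be
    shorter than the source and use only words that appear in it."""
    return (len(summary) < len(source)
            and all(w in source.split() for w in summary.split()))
```

A test suite for such a component runs the call many times and asserts the properties hold on every run, rather than comparing against a single expected string.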

I'm personally very interested in pushing for the creation of a role called a "product engineer" - someone who can bridge the gap between the lowered threshold for AI adoption and the skills needed to develop AI applications.

So while we might be hitting some limitations in some areas, I think there are solutions. We need to think not just about the models, but about the systems around the applications, the talent developing these applications, and the data available. Every company has hundreds of thousands of PDFs and other documents in some systems, often in legacy systems that are difficult to access. So there are many pillars we can tackle to move to the next phase.

Economics of AI

Max: Cool. I agree with both of your views on the AI plateau, especially around the data side, which is the raw material for building a good model, as well as the skills and processes needed to turn that data into something useful from a model perspective. How corporates approach this really comes down to the economics of whether or not it makes sense.

In my head, it raises the question: is this a long-term capital expenditure, and therefore will you be able to capture value from it? It's like investing in an infrastructure project where you'll be able to see the return because you own that asset. But a lot of this software and these models are generally not captured as physical assets on a balance sheet or P&L. I find that quite interesting. I'd love to see how we will eventually approach that, especially when it comes to AI. Technically, it's an intangible asset accounting-wise.

There's a recent article from Airstreet Press about the economics of general intelligence. It's in the vein of what we're talking about right now - if we're just throwing money at a problem, will we be able to leapfrog physics? We need to remain technically pragmatic even while spending a lot of money.

Chris, what are your thoughts on this article and its implications?

Chris: My first thought was, if there were aliens of higher intelligence who came to Earth and looked at our development and technology, don't you think they might find everything very manual? We build these data centers, we try to connect everything, we try to build these chips. The point I'm trying to make is that to really get AI to the next level, what we need is almost like a hardware evolution or a step change in terms of chip processing power.

The way we're doing it is not sustainable. We spoke about the prohibitive cost of running generative AI from a data perspective and an energy perspective. It seems like the biggest inhibitor for the next generation of AI is hardware-driven or battery-driven or energy-driven. Just imagine if we had some alien technology that would power things very differently. That's the initial thought I had when thinking about this article.

Max: Yeah, that's interesting. It reminds me of Marvel, where you have adamantium, a metal that doesn't exist but is stronger, faster, and cheaper. At the same time, you also have the tesseract, which allows you to have unlimited power. I guess for AI to become big, the article's premise is that we need a near-term utopia where development costs are low, inference is cheap, hardware bottlenecks cease to exist, pricing is low, and capabilities are extremely high. The question is, how many of those conditions are we hitting?

Rod, I'd love to get your thoughts on the overall view of the feasibility or technical pragmatism that we're trying to push for.

Rod: The gist of the article is pretty much about the dangers of extrapolation. They argue that if you look at some graph and extrapolate, you might think that if we continue investing in this path and getting more expensive but more capable models, maybe at some point we'll reach general superintelligence or unlock the next level of productivity. Then they come up with a series of arguments why this linear extrapolation doesn't make sense and won't happen in practice.

I largely agree with that. I think it's a mistake to think that with the current set of technologies and research, we're set on a path to general intelligence. Right now, what we have with current technology is just being very efficient and achieving high levels of productivity for individual tasks. We're not aiming at creating machines that think autonomously.

Very often in the general press and in day-to-day conversation, we refer to these AI models in a way that makes them sound very human. We anthropomorphize them. But in practice, these models are not intelligent. They don't know what they're generating. And I agree that we need some sort of step change in development, not only in terms of hardware and energy consumption but also in the theory of how these models work.

To put it bluntly, these models are nothing else but matrix multiplications on a massive scale, with a lot of engineering and tricks behind them. There are so many other theories and possibilities for how this could be done that need to be explored. Unfortunately, a lot of this research doesn't work in industrial settings or production environments. Maybe in a lab or academic environment, in some hypothetical situation, it can get better results. But in practice, what's giving us good results is what we have now.

Another topic connecting with this is the risk of over-optimization for specific hardware. At the moment, everyone is buying NVIDIA chips, so there's a tendency to focus on how to maximize performance from these chips. But the best ways to get the best performance from NVIDIA chips might not necessarily lead to achieving general intelligence or the next level of AI.

You can picture it like walking up a mountain - there are multiple paths to get to the top. Some of these paths might be more efficient than others. Once you get on one path, it becomes very hard to switch to another. It can be that we're setting out on a path towards the mountain peak, but is this path the best one? Is it the most efficient one? Who knows, but potentially there are other paths that could be better. For us, it will become close to impossible to get there given the circumstances we have at the moment with our current hardware and technologies.

Max: That's an interesting view. It's almost like we're at a point in time where there's a risk and reward for the path we choose. Just because we have chosen this path doesn't mean it's necessarily the right path for everything that's developing. Maybe through this path we choose, we might get to some sort of "alien technology" where we will become powerful, or maybe we will not. Should we have chosen a separate path, maybe that would have gotten us there.

So I guess the question is, are we over-committing ourselves, over-extrapolating what is happening right now? At least at the moment, we haven't quite seen all the use cases turn up yet, but who knows, maybe in a couple of weeks, something new might come up. We remain positive and bullish about it, but at the same time, we need to bring some pragmatism to that.

Aleph Alpha: A German AI Hope

Max: On the note of pragmatism, we have an interesting scoop for listeners outside of Germany. There has been an article about Aleph Alpha. Rod, could you give everyone a little background on Aleph Alpha and share your thoughts on the scoop, which is titled "Aleph Alpha's Inflated 500 Million Funding"?

Rod: Absolutely. There's so much to say here. I'll explain what Aleph Alpha is and its product, and then Chris can get deeper into this topic of their valuation.

Aleph Alpha is one of those cases where there's a bit of a dissonance. In Germany, it's known by almost everyone in the business, startup, and technology environments because it has a lot of presence in the media. For example, the national newspaper Handelsblatt ran a long feature, more than 15 pages, some months ago about the company. The front page of the issue said, "Europe should hope that this entrepreneur has success," referring to the founder of Aleph Alpha.

Aleph Alpha is a company based in Germany, started in 2019. It has evolved over time in terms of what they were doing when they started and what they're doing now. They're trying to position themselves as a more private, more transparent, more compliant version of OpenAI. They're saying we need to have sovereignty in Europe with technology and better control of the data, and ideally, we should not give these to US companies.

They're trying to do this not only by having their own models but also by developing their own data centers. They're claiming that yes, OpenAI through Microsoft can provide additional privacy, you can host your data in Azure instances hosted in Europe, and so on. But in the end, this is Microsoft. In the end, you can't really trust them. For example, if the FBI comes over and asks them for data, they likely need to comply because they are a US company, whereas with us, we're based in Germany, so we're safer, more secure.

The company has been in the news constantly over the last few years. Last year, I was using their model that was available as a demo, but now this is no longer possible. They've put up a registration wall, and I can't get access anymore. What struck me was how many of the claims they were touting I couldn't really verify. They were saying their model is much more grounded in reality, that it provides sourcing, it's fact-based, it's more transparent, and so on. I was testing it, and not only me, but also journalists who wrote about it found issues.

For example, something that made the news was that if you asked, "Where do women belong?", the model would answer, "They belong in the kitchen." Or if you asked why there's so much criminality in Berlin, the model would reply, "Because there are too many foreigners." So already, from the get-go, it was labeled as a bit of a sexist, racist model.

In my case, I just queried things that are more German-focused. I was asking about Oliver Samwer, who is a very famous German entrepreneur in the internet space. At the time, ChatGPT answered it very easily, providing the right date of birth, place, and occupation. Aleph Alpha couldn't answer any of that correctly. All of it was wrong. And that really surprised me because I was thinking, how can it be that a company touted as being the next OpenAI can have such a lousy product?

Meanwhile, in France, we have real contenders against OpenAI, such as Mistral, or now there was a new release of a startup that's also in the same category. Whereas Aleph Alpha, it seems that they're not in the same category, even though in Germany everyone would say this is the answer, the alternative to OpenAI.

Max: Thank you so much, Rod, for the explanation. Chris, what are your thoughts around the article, especially regarding the claim that they didn't actually raise half a billion, but really just raised a hundred million, with the other 400 million consisting of R&D commitments shared across multiple AI companies?

Chris: I have three perspectives on this Aleph Alpha story. First, a 500 million euro raise in Germany is a huge amount of money. We're not talking about the U.S. where money gets thrown at you. Germany tends to be more conservative on that side. So I think this was the reason why it was such a big deal that a startup was able to raise such a huge amount.

Honestly, I think the article we have here talking about this being a PR stunt and that it's inflated doesn't really matter in terms of how much money they've actually raised. I think it says more about the German ecosystem around raising and the amounts you can actually raise here.

But I think the more critical point is what Rod mentioned: how good is the model and can it actually compete with OpenAI and others? I think that is really where the disappointment stands. If we zoom out, a lot of people and companies in Germany are asking whether Germany is still competitive as a country when it comes to digitization, innovation, and technology. For that reason, Aleph Alpha was such a big hope that it could be the European answer, maybe next to Mistral, to really be a counterweight to the American models.

From a corporate perspective, there's been a lot of talk in board meetings about whether to collaborate with or try out Aleph Alpha, a homegrown German solution that is supposedly more secure and trustworthy than the American alternatives. But it seems that didn't really pan out, and I think this is where the disappointment really comes from.

So my point is, I don't think it matters how much money they've really raised when it comes to talking about Aleph Alpha. It's more a point about the ecosystem. But more importantly, the question is whether the model is really on par with the competitors. And here is where the real disappointment lies. I think that's also where companies might become more cautious when it comes to trying these models, because I don't know what happened with all the partnerships they've actually signed and whether people are trying to get out of those.

Rod: To add to that, for example, the website of the city of Heidelberg, where Aleph Alpha is located, is powered by their chatbot. You can, in theory, test it and see what it offers and its power. It's funny because I was looking at screenshots where it was asked, "Where can I go to have food in Heidelberg?" or "Where can I go to have a Michelin star experience in Heidelberg?" And it would recommend just regular restaurants, or restaurants that don't exist at all.

So here, we can see what we've been discussing in this whole episode: how one thing is the model, and maybe this model might be much better than others, but there are also things about the system around it. Maybe the model is good in the case of Aleph Alpha, but then the system built around it to have the application powered with all these extra details of good Michelin restaurants in Heidelberg from factual sources may not be as well developed.

What I see is that the debate is not just about Aleph Alpha, but about the consequences for the German ecosystem and whether it's possible to raise that amount of money. But just next to Germany, there's France with Mistral, which has also raised significant amounts but really does have a lot of things to show for it. Pretty much every three months, they have a massive model release with massive improvements, and this is transparent and useful for everyone.

For Aleph Alpha, not only do they not release anything to the public, but their platform itself is not really open and also has some issues in how it's developed. For example, I was playing with it earlier today, and even the internal links on their website are broken. If you want to sign up for their product and check the pricing, the page doesn't exist. How can it be that a company that supposedly raised 500 million euros hasn't fixed basic details like correct linking? It's very surprising. I'm wondering what's going on in this company, what is really behind the curtains.

Max: And folks, there you have it. We have insights here from Aleph Alpha. For those who are interested, please go and try it out and check what Rod said. If you have any screenshots, feel free to send them to us.

To wrap up today, thank you so much for joining us, Rod and Chris, and also to our listeners. Today, we talked about:

  1. Use cases in the services business, focusing on wealth management and legal
  2. Goldman Sachs' research on whether we're spending too much on AI with too little to show
  3. An Emergence article that discusses how we could overcome the AI plateau
  4. The economics of the hardware for different types of AI we're trying to build
  5. Aleph Alpha and its place in the German AI ecosystem

That's a lot to digest. If you like what you're listening to here, please give us a like, subscribe, and we'll see you next week on the Chris Rod Max show.
