
E06: AI Hype vs. Reality: $10M Gains for Klarna, But Security Fears Arise as Models Infiltrate Devices

In this episode, Chris, Rod, and Max discuss the hype cycle of AI, the cost and benefit analysis of AI implementation, and the security concerns surrounding AI in consumer devices.

Host

Chris Wang

AI Innovation and Strategy Expert, CXC Innovation

Guests

Max Tee

VC Expert, AI Investor, BNY Mellon

Rod Rivera

🇬🇧 Chapter


In this episode, Chris, Rod, and Max discuss the hype cycle of AI, the cost and benefit analysis of AI implementation, and the security concerns surrounding AI in consumer devices. They explore where AI currently sits on the hype cycle and the potential future developments of generative AI. They also examine the cost savings and productivity gains that companies can achieve through AI, using examples from Klarna and IKEA. The conversation concludes with a discussion on the privacy implications of AI in consumer devices and the potential impact on the market.

Like what you hear? Remember to smash that subscribe button for more insights every week!

Takeaways

  • AI is still in the early stages of the hype cycle, with expectations and excitement continuing to grow.
  • Companies can achieve significant cost savings and productivity gains through the implementation of AI, as demonstrated by Klarna and IKEA.
  • Chatbots are evolving with generative AI, allowing for more natural and flexible conversations.
  • Privacy concerns surrounding AI in consumer devices need to be addressed, but the trade-off between privacy and productivity is a complex issue.
  • The future of AI will depend on advancements in data availability, hardware capabilities, and user acceptance.

Episode Transcript

Introduction

Chris: Welcome back to another episode of the Chris Rod Max show where every week we talk about the latest trends in AI, discuss the second and third order consequences, and engage in interesting discussions with founders and startups within the space. Welcome back Rod and Max. How are you guys today?

Rod: Hi everyone, great to be here. Hi Max, hi Chris.

Max: Good. Good to be back.

The AI Hype Cycle

Chris: Today we have really interesting topics. We are going to talk about the hype cycle of AI from a macro perspective. We'll discuss the cost-benefit analysis for companies employing AI in their workflows. And last but not least, we'll touch on security, with Microsoft's recent Copilot Recall feature and the OpenAI-Apple collaboration that Elon Musk has been vocal about.

Let's start with the first one. Gartner's hype cycle shows how new technology gains awareness, goes through a peak of inflated expectations, then a trough of disillusionment as those expectations are not met, before steadily rising to a plateau of productivity as the technology matures and gets adopted.

Over the last two years, we've seen an explosion of excitement and hype around AI. My question is, where do we sit today on this curve and what do you expect will happen with AI going forward? Rod, let's start with you.

Rod: When I see the Gartner hype cycle curve, I'm skeptical we are at the peak of the hype as they claim. I believe it's just getting started. Most companies are just moving out of a pilot phase, and many have not even begun pilots yet. So to think the hype will slow down in a couple years - I'm quite doubtful. Quite the opposite, I think this is just the beginning.

Chris: Interesting. What about you Max?

Max: I concur with Rod. There's a difference between what the media portrays versus what's actually happening on the ground. Based on my conversations, people are trying AI but almost always at a smaller scale. The full rollout hasn't happened yet to make it available organization-wide or to entire customer bases.

However, I do see AI being part of roadmap discussions from board meetings down to weekly development meetings. The hype cycle in terms of rollout is just starting, because that's when we'll see if the economics actually work. Overall the productivity gain is a good thing, both for business economics and society at large.

Chris: So you both believe things are just accelerating rather than expectations deflating. We see this in the market too - Nvidia is now a trillion dollar company, there's significant VC funding going into AI, and startups like Anthropic are raising huge rounds at multi-billion dollar valuations.

Looking at Gartner's curve, only some narrow AI use cases like virtual assistants may be hitting the trough of disillusionment, as we question if users really prefer AI over humans. But I agree, for most other areas in the generative AI space, there is still much progress and expansion to come. Multi-modal AI combining images, tables, text, video is yet to be rolled out widely.

After the GPT-4 announcement, the question is whether AI models will get enough new data to keep improving. Rod, Max - what do you think about data as a limiting factor, alongside chips, data centers, and energy?

Rod: A few days ago I read an article questioning when we'll run out of data to train these large models. Currently we use almost all the data available publicly on the internet. There are rumors that for OpenAI's latest video model, they used YouTube transcripts to get more training data.

Some researchers say we could run out of new data to use in as little as 4 years depending on the quality threshold. While synthetic data generated by AI could be used, studies show model quality degrades with more synthetic rather than human-generated data. So in 5-6 years, we may need a new technological approach if we exhaust both human data and the utility of synthetic data.

Max: It's fascinating to consider that we may one day run out of data. It made me think - how do we humans train ourselves? We have thousands of years of data encoded in our DNA that triggers automatic responses.

The second order question becomes, will running out of data prevent us from achieving AGI? And do we need to train narrow AI models on all the world's data or are we better off using targeted subsets specific to each use case? It's like a WWII analogy - tanks were great until they ran out of fuel. As we fund more foundational models, where will they get the data and how will they differentiate if data is finite? Just some thoughts to consider.

Cost-Benefit Analysis for Companies Adopting AI

Chris: Let's shift to the second topic - is AI really helpful for businesses in terms of saving costs? Rod, you had an interesting article on Klarna - maybe you can elaborate.

Rod: Klarna is known as a leader in generative AI. Earlier this year, they released a customer support assistant that, by their own account, does the work of about 700 full-time agents. Just this month, they shared an update on using AI to automate their marketing.

Typically, global marketing campaigns require translators, copywriters, agencies, photo shoots, studios, etc. - a lot of logistics and expense. Now Klarna is using GPT models to generate campaign text instead of copywriters, and image models like Midjourney and DALL-E to create visuals instead of photo shoots.

Klarna claims this is saving them $10 million on an annualized basis. Not only from reduced costs, but it means far fewer people involved in creating campaigns. This will likely encourage many more companies to move from real photos to AI-generated images.
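To make the workflow concrete, here is a minimal, hypothetical sketch of how campaign copy could be drafted with a general-purpose LLM API. The model name, prompt, and campaign details are illustrative assumptions, not Klarna's actual pipeline:

```python
# Hypothetical sketch: drafting localized campaign copy with an LLM API.
# Model name, prompt wording, and campaign details are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_campaign_copy(product: str, audience: str, language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are a marketing copywriter. Write concise, on-brand campaign copy."},
            {"role": "user",
             "content": f"Write a two-sentence ad for {product}, aimed at {audience}, in {language}."},
        ],
    )
    return response.choices[0].message.content


print(draft_campaign_copy("a buy-now-pay-later checkout", "online shoppers", "German"))
```

The same loop could be run per market and per channel, which is where the claimed reduction in translators, agencies, and production logistics would come from.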

Chris: These numbers from Klarna are great to see published, as it helps build confidence in the business case for investing in AI. To recap a few key figures:

  • According to Klarna, AI accounts for 37% of the cost savings in their marketing operations, translating to roughly $10 million annually
  • It's not just about cost reduction, but also productivity gains - AI helps them create more campaigns, not just replace people

Max, from a financial perspective, what's your take?

Max: In financial services, it's always been about productivity gains. The more productivity we unlock, the more we expand what we can achieve as a society.

I'm wondering how much of those productivity gains will move beyond just marketing to other parts of the business. Can AI improve credit underwriting for example? That could be a huge time saver if the models are trusted.

My sense is AI will permeate entire company operations, from targeting customers to underwriting to customer service and marketing. This will reduce the workforce needed to achieve a certain revenue level, freeing up people's time for other innovations.

An interesting question is around consumer demand - if we create more marketing, can people actually consume more or do we just get better at targeting? There are still opportunities to expand, with half the global population not yet online. The productivity gains can help deliver more services to the developing world.

Rod: Max, you raise a good point about how many smaller companies lack the resources to create the polished campaigns larger firms can. Now with AI-generated text and images, they can achieve a more professional look despite limited means.

It's similar to how brands were initially reluctant to work with influencers taking casual mirror selfies compared to glossy magazine shoots. But today, brands embrace influencer content. As a society, we've come to accept a lower production quality and prefer more natural, less polished content. The same will happen with AI-generated media compared to expensive productions. More companies and individuals will be able to elevate the quality of what they produce.

Chris: These are really compelling examples of how AI can meaningfully help businesses in ways that not only cut costs but boost productivity, especially in marketing-related activities.

AI Chatbots and Customer Support

Chris: Let's look at another use case. Referring back to the hype cycle, one area that seems headed for the trough of disillusionment is chatbots. If we examine the customer support scenario, IKEA has some insightful numbers. They implemented an AI assistant to help navigate their vast product catalog, but only 1,500 users are active per month. Rod, how would you evaluate that?

Rod: You can take a glass half empty or half full view here. Pessimistically, only about 1,500 users engage with this chatbot each month, out of the 190 million monthly visitors IKEA gets according to SimilarWeb. It's a tiny fraction of their total user base.

However, we don't know the details - perhaps the chatbot is only available to registered users or for limited use cases, meaning it's just a subset who have access.

Taking an optimistic perspective, the fact IKEA is making headlines with cutting-edge tech is noteworthy. We don't typically think of IKEA as a tech company, yet here they are deploying advanced AI solutions that until recently were only feasible for the likes of Google, Meta and Microsoft.

Importantly, the 5% conversion rate they're seeing is actually quite good for the industry. So while the absolute usage numbers seem small, the results are promising.

Chris: Max, what's your take?

Max: AI chatbots have existed in some form in financial services, though not always powered by generative AI as they increasingly are now. I expect to see continued growth in this area.

The ability to engage with customers outside business hours and maintain a consistent experience even when human agents are off duty is quite powerful. It's akin to the 24/7 nature of the internet - you can have an ongoing conversation and get support anytime. Hiring round-the-clock human staff globally is the current model for many large financial services firms, which gets expensive.

I think this "always-on" dimension will enhance customer experience and extend across industries, not just tech companies. It eliminates the friction of having to wait until the next day if an issue arises after hours.

Whether chatbots are truly headed for disillusionment, I don't think so. We're just scratching the surface and removing the barriers that used to hinder us, like only being able to get help during limited working hours. This should go away as AI can engage with us to resolve issues anytime.

Chris: Maybe one of you can explain the difference between a conventional scripted chatbot versus a generative AI-powered one?

Rod: It comes down to whether the flow is pre-defined or not. Chatbots have been around for decades, but historically, creating one required mapping out the possible conversation flows and decision trees in advance.

As a result, you had a very narrow range of possibilities. You often had to restrict user input to something like "Press 1 for Yes, 2 for No." Following the pre-set paths made the experience feel very robotic and breaking the flow was difficult.

In contrast, with generative AI, you enable pure natural conversation. Users can type freely and even if what they enter doesn't exactly match a keyword or has synonyms, grammatical errors, or ambiguity, the AI can still recognize the intent and provide relevant information.
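To illustrate the difference Rod describes, here is a minimal sketch in Python: the first bot only understands pre-defined inputs, while the second forwards free-form text to a language model. The intents, model name, and wording are illustrative assumptions, not any vendor's actual implementation.

```python
# 1) Classic scripted bot: user input must match a pre-defined path.
SCRIPTED_FLOW = {
    "1": "Your order status is: shipped.",
    "2": "A return label has been emailed to you.",
}


def scripted_bot(user_input: str) -> str:
    # Anything outside the expected keys breaks the flow.
    return SCRIPTED_FLOW.get(
        user_input.strip(),
        "Sorry, please press 1 for order status or 2 for returns.",
    )


# 2) Generative bot: free-form input is interpreted by the model itself.
from openai import OpenAI

client = OpenAI()


def generative_bot(user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a retail support assistant. Answer briefly."},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content


# "wher is my ordr??" falls through the scripted bot's menu,
# but the generative bot can still recognize the intent and respond.
```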

Chris: Essentially you're saying it's like having a dialog with ChatGPT, which many of our listeners have likely tried. You can make mistakes and don't have to hit precise keywords, yet still get a sensible response.

My one concern would be the AI "hallucinating" or making up an inaccurate answer, as we've seen in cases where companies had to compensate customers because the AI provided incorrect information about a refund or entitlement.

Rod: Definitely, hallucinations are a constant challenge that companies face in implementing AI - not just for customer-facing apps like chatbots but also for internal tools. The core issue in areas like enterprise search is how to retrieve relevant results that are also factual based on the source documents.
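One common mitigation is retrieval-augmented generation: fetch the relevant source passages first, then instruct the model to answer only from them. Below is a minimal sketch under that assumption; the toy keyword retrieval, document snippets, prompt wording, and model name are all illustrative, not any specific product's implementation.

```python
# Minimal retrieval-augmented sketch to keep answers grounded in source documents.
from openai import OpenAI

client = OpenAI()

DOCUMENTS = [
    "Refunds are issued within 14 days of receiving the returned item.",
    "Store credit can be requested instead of a refund at checkout.",
]


def retrieve(query: str, docs: list[str]) -> list[str]:
    # Toy keyword overlap; real systems would use embeddings or a search index.
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]


def grounded_answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS)) or "No relevant documents found."
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the provided context. If the answer is not there, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content
```

Constraining the model to retrieved context does not eliminate hallucinations, but it narrows the gap between what the bot says and what the source documents actually support.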

Chris: I think part of the disillusionment with chatbots stems from our negative impressions formed by poor experiences in the past. How many times have we all struggled with unhelpful chatbots from banks, insurers or utilities? They've developed a bad reputation over time because the underlying tech was so limited.

It will take a while to overcome that stigma and convince people that engaging with an AI is now more like having an intelligent conversation and can be more efficient than past chatbots. The technology has fundamentally evolved.

Hardware Integration and Data Privacy

Chris: The last topic is Microsoft's recent Recall feature, which lets Copilot+ PCs take periodic screenshots of user activity so the assistant can learn from it - a capability that immediately raised security concerns. There were also announcements about Apple integrating OpenAI's ChatGPT into their phones, which Elon Musk has been very vocal about. What are your thoughts?

Rod: I think a lot comes down to communication and helping users understand when their data is sent to external servers. With Apple, they're taking a hybrid approach to revamp Siri, which has been around for about a decade but hasn't lived up to its initial hype as an intelligent assistant.

Now Apple is looking to integrate generative AI from OpenAI to power Siri. For more complex queries, data will be sent to OpenAI's servers. But for basic daily requests, the goal is to process those on-device, so the data never leaves the phone, requires no internet connection, and remains private.

Apple says that even when accessing OpenAI's servers, they will always ask for user consent first. And any data sent will have the IP address masked to prevent OpenAI from building user profiles based on query history.

Will this be enough to assure users their data is safe and not being misused? I'm not sure. But it shows the importance of transparency in explaining how these systems work. It also highlights how devices are becoming powerful enough to run many AI tasks on-board without relying on the cloud.
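The routing idea can be sketched roughly as follows; the complexity heuristic, consent check, and function names are illustrative assumptions, not Apple's or OpenAI's actual design.

```python
# Illustrative sketch of a hybrid on-device / cloud routing policy.
# Heuristics and function names are assumptions for the sake of the example.

def is_complex(query: str) -> bool:
    # Toy heuristic: long or open-ended requests go to the larger cloud model.
    return len(query.split()) > 12 or "write" in query.lower()


def answer_on_device(query: str) -> str:
    return f"[local model] handled: {query}"  # data never leaves the phone


def answer_in_cloud(query: str) -> str:
    return f"[cloud model, IP masked] handled: {query}"


def assistant(query: str, user_consents_to_cloud: bool) -> str:
    if not is_complex(query):
        return answer_on_device(query)
    if user_consents_to_cloud:  # explicit consent before any data leaves the device
        return answer_in_cloud(query)
    return "This request needs the cloud model; nothing was sent without your consent."
```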

I was at an Intel event recently, and even though we don't typically associate them with generative AI, they are releasing "AI PCs" with new chips to compete with NVIDIA's GPUs. They demonstrated running AI workloads on consumer laptops with results comparable to NVIDIA while using less power.

So in the coming months, we'll see this new hardware proliferate and see how Microsoft, Apple and others build software to harness on-device processing. Nevertheless, the concerns you mentioned about Recall capturing screenshots that can include passwords and other sensitive data will likely persist, because a lot comes down to how the software is designed. Bugs and poor implementation are often at the root of security issues.

Chris: So Rod, do you think Elon Musk is exaggerating by saying if Apple moves forward with OpenAI integration, he won't let any employees at his companies use iPhones?

Rod: Well, he never shies away from stirring up controversy. It's what drives engagement on X, his social platform. We should also remember he runs his own OpenAI competitor, xAI. So anything that makes OpenAI look worse is good for him.

I'm sure he's exaggerating to some degree. I don't think he'll actually ban iPhones from all his companies as he threatened.

Chris: What's your opinion, Max?

Max: Maybe we should make a bet just for fun - will he do it or not? I think there are enough alternative devices out there to get by without Apple in a corporate environment. So feasibility-wise, he could do it.

Whether he follows through likely depends on how serious he is about positioning xAI as a privacy-focused AI company. If this is about more than just headlines and he truly wants to build an AI business with strong privacy safeguards, he might go ahead with it. It would be a quintessential Elon move.

Banning iPhones would also undercut Apple significantly. There may be a secondary motive to hinder the Apple-OpenAI deal. If that partnership gives OpenAI some form of exclusivity on iOS, it boxes out competitors like xAI. So there are likely competitive dynamics at play here too.

Hard to know his true intentions, but if the goal is to keep sensitive company data away from OpenAI and available for training xAI's models, then he may well do it. If it's just a tactic to scare Apple away from an exclusivity arrangement with OpenAI, then perhaps not. Certainly an interesting situation to watch.

Chris: None of us can read his mind, but he did recently drop his lawsuit against OpenAI, so we may never know his full motivations.

Max, what's your general take on the privacy outcry we've seen with generative AI being embedded into consumer hardware and operating systems?

Max: This echoes privacy debates we've had many times before. The old adage was "if you're not paying for the product, you are the product." It referred to how Facebook and others captured our data to sell to advertisers for targeting.

I think from a consumer perspective, it comes down to the value exchange. Am I willing to give up some privacy in order to gain capabilities that significantly boost my productivity and quality of life?

For most people, if they're getting more value from sharing data to power services that uplift their lives, then they'll make that tradeoff. We've done this for years with Google's location tracking for instance.

Looking at it from Apple's view, by adding these AI enhancements, will it drive more device sales, even if it means reduced privacy because some data has to be collected to train the models? If so, you could justify it as a net positive - increasing consumer capabilities and productivity which then also benefits the broader economy.

So it's a balancing act. If we were so worried about privacy and tracking, we'd hardly be able to walk outside. Even in London, as soon as you exit the Underground, there are cameras everywhere capturing your face. We have little insight into how that data is used, but tolerate it as a public safety measure. China has taken that mass surveillance to another level.

The Western world also deploys these systems extensively, with the rationale that it's for our collective security and wellbeing. So in evaluating the adoption of AI technologies, it really comes down to the tradeoffs involved for individuals and society.

Conclusion

Chris: I think you raise a fair point - a lot comes down to people's awareness of the issues at stake.

To recap, we covered quite a few important topics today:

  1. The AI Hype Cycle: Our view is that expectations are not about to deflate anytime soon. Gartner anticipates a cooling-off driven by limits on training data and questions about return on investment, but we believe most domains still have significant room for progress and growth.
  2. Cost-Benefit Analysis: The business case for AI depends heavily on the specific use case. We see strong evidence of AI driving both cost savings and increased productivity in areas like marketing. Other applications like chatbots have a steeper adoption curve as they must overcome negative perceptions from earlier technologies. But the potential is immense.
  3. Consumer AI and Privacy: The integration of generative AI into devices and operating systems from Microsoft and Apple is renewing the classic debate around convenience and capabilities versus data privacy and security. As with prior technologies, much will depend on the value users perceive relative to the data they must share. Clearer communication from tech companies is essential.

Thank you both for sharing your insights. To our listeners, if you enjoyed the show, please subscribe, like, and share it on Twitter, LinkedIn, and your favorite podcast platforms. We'll be back again next week to discuss the latest developments in AI. Until then, farewell!