E01: Generative AI Moonshots, Cybersecurity Startups, and the Enterprise Adoption Dance

Chris, Rod, and Max discuss the latest developments in AI and the challenges businesses face in adopting generative AI. They explore incremental innovations versus bold initiatives and the factors that influence decision-making in large corporations. The conversation also touches on the potential adoption of hardware devices for AI assistants, the role of AI in cybersecurity, and funding news for AI agents in cybersecurity.

Host

Rod Rivera

Guests

Chris Wang

AI Innovation and Strategy Expert, CXC Innovation

Max Tee

VC Expert, AI Investor, BNY Mellon

Keywords

#AI, #generativeAI, #AIassistants, #cybersecurity, #fundingnews

Chapters

  • 00:00 Introduction and Overview
  • 02:16 Incremental Innovations vs. Bold Initiatives in AI
  • 10:44 Exploring the Potential of Hardware Devices for AI Assistants
  • 32:18 The Importance of AI Adoption in the Cybersecurity Industry

Listen on Spotify

https://open.spotify.com/episode/5JaeWwIX7u3YeZu4SUR1B3?si=xj1cpSvKSG-Fko8pn4o6QQ

Full Transcript of the Show

Rod (00:02)

Welcome to another episode of The Chris Rod Max Show. In this episode, we discuss the latest developments in AI, what they mean for businesses, and we talk with the movers and shakers in this space. I'm joined by my co-hosts, Christine Wang and Maxson J.Y. Tee. Hi Christine and Max!

Christine (00:20)

Hey, good to meet again.

Max (00:20)

Hello.

Rod (00:23)

As usual, a lot has happened this week. Over 30 companies have announced funding rounds. We'll discuss some of them, but first, there's been a recent article from Bain & Company called "How Generative AI Moonshots Can Reach Escape Velocity". For those who are not consultants (and Christine was a consultant), maybe you can explain, Christine, why this article is so relevant that we should discuss it this week.

Christine (00:55)

I think it's very interesting to consider the trends we're seeing in the AI space right now, with many new companies emerging. There are big questions for large corporations around how to leverage AI and these new startups. It's interesting to look at the market fragmentation. The article mentions that 30% of Global 500 companies are investing in generative AI.

From a corporate perspective, it's important to understand the actual use cases people are engaging with. For startups, it's valuable to see where the demand is and identify gaps they could fill. That's why we're looking at it.

Rod (01:46)

Yes, it's always intriguing to examine the different perspectives companies have. 30% investing in generative AI could be seen as a high or low number, considering this wave started about two years ago. More importantly, the majority are focusing on incremental innovations.

The article discusses moving away from incremental changes - automating workflows or digitalizing processes with some AI - and instead pursuing bold innovations that transform the business. Max, you've been active in large organizations. Are you seeing them lean towards incremental innovations or really bet on generative AI as something that will transform everything?

Max (02:59)

Great question. Traditionally, large organizations tend to be more risk-averse. The return profile of ambitious moonshots may not always align with the priorities of executives in large corporations. They often focus more on incremental improvements to existing products, services, and operational efficiency.

If companies are pursuing moonshots, they're likely incubating them at a relatively small scale without significant funding yet. Based on what I've seen, the focus is definitely more incremental. I think this comes down to two reasons:

  1. Understanding the potential uses of AI, especially generative AI, across the whole industry or company. They mainly understand their current workflows, so that's where AI tools get applied.

  2. Risk appetite. If an executive has been around a long time, they may have more leeway to take moonshots. But for newer execs, proposing a risky moonshot could potentially be career-limiting. The incentives don't quite line up for moonshots in large corporates yet. They always talk about quick wins - it's just the nature of large companies.

Personally, I believe if you really want to win, it doesn't matter if it's quick or not. Just a thought.

Christine (05:22)

To add to your points, it's an interesting debate - what constitutes a moonshot versus an incremental improvement? Typically, incremental means additional features or enhancements to the core business or current revenue-generating model. Moonshots are really the big things that could potentially even disrupt the core business or lead to a whole new revenue stream.

I agree it takes time to see the value and return of AI. We're seeing a standard technology adoption curve, with many use cases being tested incrementally. Some may stick and become more disruptive over time.

There are incremental shifts due to varying risk appetites. The other factor, as another article by Benedict Evans discusses, is building infrastructure. He compares it to how Excel enables many different use cases. This kind of foundation also takes time to develop before we see major shifts in ways of working.

I'm curious about your perspective on this from a more technical product angle, Rod.

Rod (07:28)

There's always a bit of a conundrum. The article lists multiple things organizations should do to move from incremental innovation to transformation. The first is using early wins to gain conviction.

But I often see the challenge in large organizations is first understanding the problem, and then aligning everyone and all the legacy systems and processes. It's not as simple as flipping a switch and magically transforming things. Even incremental changes require a lot to happen successfully.

We saw this in the previous AI wave back in 2014-2015. There were high expectations that weren't met because initiatives failed due to organizations lacking the right processes and talent to adopt the technology.

Having conviction to pursue quick wins and be bold in implementing technologies is one thing, but what happens on the ground tends to be very different.

Max (08:56)

Absolutely. Beyond just adoption, quick wins are usually tied to top-line revenue. Introducing bold solutions in certain industries, especially when selling to large corporate customers, may face slower adoption due to their risk aversion.

Consumer-focused innovations often work better because consumers pick up interesting, behavior-changing solutions much faster. But when selling to enterprises, your solution runs into the same risk aversion they apply everywhere else.

For example, offering an exciting new generative AI solution to a giant bank to improve compliance could easily take 6 months just for due diligence, another 6 months to understand potential risks, and require whole new processes to meet regulatory requirements around data sharing, etc.

The incentives don't quite align for moonshots in large corporates yet. Quick wins might be harder at the moment, but people and companies can change rapidly these days or risk getting disrupted.

Rod (10:44)

Going back to Christine's point about Benedict Evans' article, his main thesis is that it's been almost two years since the generative AI wave began, yet we haven't really seen new killer applications besides ChatGPT.

As we discuss the need to quickly show results through quick wins, the counterargument is that we don't yet have the use cases or applications that should be widely adopted. Is there a middle path where we can have quick wins that are actually relevant for organizations?

Christine (11:31)

In my opinion, quick wins can be quite hard to measure sometimes. 30-50% of incremental investment has gone into AI and generative AI, but how do you measure the return?

Many employees are using ChatGPT and may have better email flow or productivity because of it. But it's very hard to measure that incremental productivity on an individual level. The use cases we normally see at a corporate level lean more towards customer support chatbots. They're often not perfect and still require human agents to help.

But we did see a recent article about a fintech player that eliminated 700 people because they now use ChatGPT for customer support. I think those are the use cases that get counted.

However, I believe there's a huge gray zone where, as a sales manager, you incrementally get better at customer interactions or knowledge gathering because the initial analysis is done by ChatGPT or another tool. That's something we don't measure and is very hard to quantify.

So unless we see these bigger use cases emerge, it's challenging to tell if or how generative AI is actually adding value to a company on a broader level.

Rod (13:35)

Connecting the Bain & Company article with Benedict Evans' piece, plus our funding news this week, Evans brings up the topic of multi-agent systems. He suggests that what happened to roles like elevator operators, which were eliminated by automation, could happen to other professions.

One possibility for bold innovation beyond streamlining processes is multi-agent systems. We have multiple companies raising rounds in this space that we'll mention.

Have you looked into multi-agents? Have you seen them in action? Do you think companies will replace employees with multi-agents? What do you see happening on the organizational and innovation side?

Max (14:46)

I think the holy grail would be Jarvis for all. When we compete with each other, we'll just have Jarvis vs Jarvis vs Jarvis, like people with their pets.

The multi-agent use case is definitely interesting. We're seeing attempts to build solutions for organizational information retrieval, allowing you to access the entire organization's knowledge without setting up calls with random people just to learn something in 15 minutes.

I believe there are many opportunities to automate information retrieval, understanding, and producing more relevant content, which would save a lot of time as Christine mentioned.

So far, large organizations don't seem to be pursuing multi-agents. They're trying to centralize learnings, likely due to the scaling laws of AI: the more you train it, the faster and more efficient it becomes. Everyone is aiming for one agent for the whole company to show ROI before deploying multi-agents.

There seems to be a barbell effect at the moment. Either you believe in one model to rule them all, like Microsoft becoming the agent for the world, or you believe in multiple smaller models deployed to different devices for various tasks.

On one side of the barbell, there's a giant model for everything. On the other, there are many specialized models that could be used for multiple things, with the possibility of having a few models for one task because they target different customers.

Rod (17:41)

Speaking of Jarvis, there have been a couple hardware devices that came to market recently, like Humane and Rabbit, which are small gadgets or smart assistants. The reviews haven't been particularly positive and they seem to still be in the early stages.

Max, do you think devices like these could be adopted in enterprises? Could we see them issued alongside company smartphones and laptops to help with productivity? Or is it too early for widespread adoption?

Christine (18:28)

I want to hear your opinion first, Rod.

Rod (18:32)

I would say these devices are still very basic and there's a question of whether we really need them. We already have smartphones that can do almost everything these devices can. Smartphones are also starting to run some of these AI models locally on-device.

In contrast, these new devices, due to affordability or the teams behind them being startups with limited capacity, have to call the model from the cloud. So when you ask about the weather or the height of the Empire State Building, the device itself doesn't know - it has to call a remote service, get the answer, and relay it back. That makes the experience a bit clunky and not very smooth, with potential delays of 30 seconds or more.
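The round trip Rod describes (ask the gadget, relay the query to a cloud model, wait, relay the answer back) can be pictured as a simple latency budget. A minimal sketch follows; every number in it is an illustrative assumption for the comparison, not a measurement from any actual device:

```python
# Illustrative latency budget for the cloud-relay scenario Rod describes.
# All figures are assumptions for this sketch, not measured values.

NETWORK_RTT_MS = 800       # assumed round trip from the gadget to a cloud endpoint
CLOUD_INFERENCE_MS = 2000  # assumed time for the hosted model to produce an answer
LOCAL_INFERENCE_MS = 500   # assumed time for a small on-device model

def cloud_latency_ms(queries: int) -> int:
    """Total wait when every query is relayed to a remote model."""
    return queries * (NETWORK_RTT_MS + CLOUD_INFERENCE_MS)

def local_latency_ms(queries: int) -> int:
    """Total wait when a small model answers on the device itself."""
    return queries * LOCAL_INFERENCE_MS
```

Under these assumed numbers, ten queries cost 28 seconds of accumulated waiting via the cloud versus 5 seconds on-device, which is the clunky experience being described.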

It's still very early days, but I think what's more likely to happen is that these AI assistants will be integrated into the apps and platforms we already use. A few years ago we had Cortana from Microsoft and Siri from Apple. I could see smart assistants getting embedded into business applications and company intranets rather than people carrying around a separate hardware device.

The counterargument is that some people are moving away from do-everything smartphones. They want to avoid distractions and prefer single-purpose devices, which these new AI gadgets provide.

Which camp are you in - a dedicated device for everything or a smartphone for everything?

Christine (20:34)

This reminds me of the Quantified Self movement around 2010 when we had those first-generation step counters like Jawbone and Fitbit. They were pretty dumb devices, yet another thing to carry.

We went through some iterations and now most people either have a smartwatch, use their iPhone to track steps, or maybe have a ring or strap. It's fascinating because you'd think having a smartphone that can take pictures, make calls, and handle workflows would be enough. Yet there's still room for another device.

On the flip side, we've seen examples like Google Glass that didn't go so well in trying to really work for people, though that obviously veers into the fashion realm.

When we talk about a smart assistant device to carry around, I could see it going both ways. It could get integrated into smartphones - I believe Apple is already exploring bringing more AI to their devices and the new iPad is supposed to have some kind of integration. Or it could certainly be a separate device.

I think the key factor in which direction it goes is whether the device can provide so much value that it justifies carrying yet another gadget in your pocket or bag.

Rod (22:26)

You make a very interesting point. A decade or so ago, we had all these single-purpose devices like step counters and movement trackers. Most of that functionality has now been integrated into smartphones. But at the same time, we have things like Oura rings for sleep tracking that are quite popular, despite the substantial cost of the device plus a subscription.

I've been considering getting one myself. It seems to be a successful product even though it's a significant investment and commitment. I wonder which route these AI hardware devices will go.

Tying this back to enterprise adoption, do you think companies might start providing these devices for productivity purposes? Large organizations in banking, pharmaceuticals and such employ hundreds of thousands of people. These days, hardware is increasingly commoditized.

Could we imagine a scenario where Bank of America issues a smart assistant device to employees to help with their tasks? Or do we think that's something that won't find traction in the enterprise?

Max (24:09)

I think it will come, given enough time. Under the right circumstances, anything can be adopted, even if it's just hype. Even your Prime water bottle from Jake Paul can sell for $1,500 at times.

If it brings enough value to individuals, from a longevity and durability perspective, it can be done. Even today, a lot of innovation happening within large organizations is because one or two folks with vision decided to onboard certain solutions or hardware.

An example I heard was a trading desk before 2008 that wanted to spin up their quant model. They realized it would cost them more time and money to fill out tickets and request a new computer than to just go to a PC shop themselves, put it together, bring it in, and figure out how to connect it to the internal servers. That's what they did because the standard process took forever.

Because of situations like that, people will likely start doing more of this if internal processes become too clunky. Eventually, the organizations realized they could save time by just giving everyone powerful enough individual computers to run what they need.

I think this kind of thing will keep happening. Human behavior doesn't really change much. It's just the technology and hardware that human behavior manifests through that changes. So is it possible? Absolutely. When will it happen? I don't know.

Rod (26:31)

This discussion of hardware devices, agents, and innovation connects nicely with our first funding news of the week, which is exactly on the topic of agents.

A company called Bricklayer AI raised $2.5 million in pre-seed funding. They've built a security platform that combines multiple AI agents to form a "team of AI specialists". The idea is not to replace SecOps and DevOps teams, but to enable, enhance, and collaborate with their human counterparts to create a more efficient cybersecurity team.

It's about delegating certain tasks to an AI agent. They define operational roles that theoretically could be filled by a human, like a security analyst, intel analyst, or incident responder. Then they specify which tasks these agents should handle.

In an area as critical as security, which is core to any organization, do you think companies will embrace this? Or will they say it's too sensitive and they need people who are accountable?

Max (28:11)

Before we get to that, Christine, I'd like to quickly address the cybersecurity aspect, as I've been looking into it recently. The open secret in cybersecurity is that when you have a multitude of vulnerabilities, especially in large organizations, they don't catch everything. It's impossible due to the complexity of applications.

Additionally, there aren't enough security professionals to hire - they're expensive and have limited time. They can only focus on the biggest vulnerabilities, while smaller ones that may compound over time often get missed. That's why we're seeing security and data breaches left and right.

Introducing AI in this space makes a lot of sense to me because these issues simply aren't being solved today. Improving the productivity of application security teams with AI could lead to greater security assurances overall.

Accountability won't shift entirely to software or AI yet, but enhancing productivity would allow security personnel to take on more. It's a bit like VisiCalc for accountants. Christine, I interrupted you. What were you saying?

Christine (29:58)

I just wanted to take a step back and summarize the important points we've discussed. We touched on devices and whether AI merits another dedicated assistant device. We talked about use cases, productivity, and probably responsibility - who's accountable if things go wrong.

Regarding the cybersecurity startup you mentioned, from a productivity standpoint, this seems like a workflow augmentation to me. The Bain & Company article noted that 38% of initiatives focus on augmentation and automation. As long as the AI enhances employee productivity, I think it makes a lot of sense and could be applied in a cybersecurity environment.

Ultimately, accountability still needs to lie with whoever employs the AI or uses it to perform tasks. It's comparable to using software like Excel or a grammar tool - you're responsible for the final output, even if the tool assisted you.

As for whether it merits a separate device, I was pondering that. It could make sense if it somehow provides greater security to have a dedicated device for these interactions. But if not, I don't think it requires another gadget to host your smart assistant. A company-issued phone or laptop with an appropriate level of security should suffice.

Rod, as someone who works for an AI startup, what has been your experience selling to large companies? What kind of checklist do you need to go through?

Rod (32:18)

That makes a lot of sense. I was also thinking about another company, Apex, that raised funding this week in the cybersecurity space. Some OpenAI alumni are backing them. They raised $7 million.

While Bricklayer AI seems more focused on cybersecurity itself, enabling security operations teams to complete their tasks, Apex is taking a different approach. They're focused on ensuring that the adoption of AI happening in enterprises is done securely.

Apex claims that 92% of Fortune 500 organizations are using ChatGPT in some way, or at least their employees are, possibly in a shadow IT manner. Organizations lack visibility into who is doing what and which data is being shared.

Apex aims to establish a layer that focuses on controlling the security of AI adoption by also using AI agents. I think this could be a good balance. On one hand, agents make a lot of sense but may be early. On the other, cybersecurity is always a top priority for enterprises.

Having this specialized niche of securing enterprise AI usage with AI itself could be an interesting sweet spot. Christine, Max, what do you think about this approach of AI-powered security for AI adoption in contrast to Bricklayer AI's agent-based security operations?

Max (34:27)

When it comes to adopting these solutions, there will definitely be increasing demand for tools that help figure out the security aspects of AI systems. It's almost like having software, and then another software to check if that software is safe. Now we have AI, and another AI to check if that AI is safe. The parallel is similar.

The first wave of software aimed to convert many manual processes into code. Now we want to move from automation with traditional software to using AI for faster, better automation. But that also opens up new types of security vulnerabilities within organizations.

I could certainly see banks and other regulated industries doing this, especially when they have to answer to regulators. There's a need for these solutions because they provide the license to operate, both figuratively and literally. If you lose customer confidence, you won't be able to operate going forward.

Cybersecurity is definitely an area where people will keep using more solutions. It's essentially a cat and mouse game that never ends. Whenever there's a new technology, someone out there is trying to figure out how to exploit it to attack you. Then you have to figure out how to use new tech to counter that. It keeps going, driving this constant one-upmanship to become better and better. That's how I think it will be adopted, especially in cybersecurity.

Christine (38:06)

This is not an easy question because there are a lot of factors to consider. I think it depends on the strategy, the required security level, existing partners these companies have and whether those partners can provide a sufficient level of security or if it's really a greenfield situation where they have to look to the market for a new player.

Generally speaking, corporates and larger companies tend to gravitate towards known brands and companies that have been around for a while. There's always the risk that a startup may run out of funding. In a critical space like security, if there are data breaches or other issues, that's a major concern requiring a lot of trust and backing. People may stay a bit more conservative in this area.
