Episode Transcript
[00:00:01] Speaker A: Greetings everyone.
Welcome to this week's coffee with Digital Trailblazers. Happy Halloween and glad you are all here.
We are going to give our normal two minutes of grace time to get everybody clicking and connecting.
And I'm super excited for another one of our discussions around AI and AI agents and AI agents at work in this case and should be a fun and exciting discussion.
Just share a little bit of personal news. I'm excited to announce that my course on LinkedIn, which is called Digital Transformation for Leaders in the AI Era, that came out, I think it was in July and that has now surpassed 3,000 people who have taken it.
And I'm super excited for the response that the course has gotten. And if you have, if you have access to LinkedIn learning, you'll see a URL pop up on the whiteboard on how to access the course. It's about an hour and 15 minutes.
It's got a few sections on AI, it has a role playing test that you can go through that is AI oriented. It's got a section on AI strategy, it's got a whole bunch of other stuff. For all of you who are trying to sharpen your pencils about leading in this digital transformation era, want to say to hi to everybody who's joining on the on the comments train. Holo Juanita hello David, Kristen is here. Kristin, I know you're a big have a big interest on this topic and thank you for all your support.
Looking forward to your comments. Today and this week we are talking about AI agents at work, the IT.
[00:02:01] Speaker B: And HR alliance to drive adoption and value. I do have a confession to make today.
We do not have somebody representing HR on our speaking list. I tried really hard.
I have some thoughts around why I could not get HR folks to join us today. I have a feeling they're just very busy and so we're going to talk about driving adoption and value at work. And where I want to start today is a little bit about what's been in the news.
This has been a day or a week where lots of things have happened and if you're following the technology news and you're an investor, you're going to be reading into Nvidia becoming a $5 trillion company.
Microsoft, Google, Meta and Amazon all announcing significant revenue growth from AI and sort of, you know, I don't know. My, my, my position on this is inflating the bubble.
If in fact, even though they are all reaching and doubling their cloud capacity, it seems like every two to three years.
So they're clearly the winners in this AI equation.
But if you look just a few days ago there was a headline article in the Wall Street Journal, really summarizing the Amazon announcement about their layoffs, UPS layoffs, target layoffs, all being essentially triggered by AI.
And you know my post on Monday, I hope you'll go to drive.starcio.com to look at this.
But I'll be talking about what organizations have to do given that your boards and your CEOs are likely going to be looking at ways to reduce costs, reduce headcount and use AI as a catalyst for driving a more efficient organization.
And so that's where I want to start here. We're talking about bringing AI agents to work.
We've got the big tech companies benefiting from it.
And I want to start with my C leaders. I'm going to start with Martin and maybe go to Joe and Joanne.
Just get your feelings about today's announcements or this week's announcements.
And are there any, any, is there any advice you have for for IT leaders for digital Trailblazers on how to read into this week's announcements and what that should impact, how they should impact their AI strategies over the next six months? Joe, you're raising your hand first, I'm glad to hear your thoughts on this week's announcements.
[00:04:51] Speaker C: So my advice to to Trailblazers is don't forget the the basics. Don't get so caught up in all of this spin around AI and how it's going to have these hugely dramatic impacts on every organization.
You still have change management.
I, I posted this morning an interesting article that I saw in one of the one of the trade rags and you know, it pointed out that since time immemorial the cost of the software or the service you acquire in this case we're talking about the cost of putting in AI, whether it's copilot or whether you're building a small language model. The cost of that implementation and even the cost of the operation of that platform pales by comparison to the culture change, the change in the organization that's necessary to truly derive value, the new technology. So first and foremost, when I read all these announcements about billions of dollars of investment in infrastructure and and, and tuning the models and making them experts and so on.
Let's not forget the fundamentals.
Moving an organization in a different direction is a huge undertaking. Takes time, it takes effort and it takes people.
[00:06:13] Speaker A: So, so give me a step further, Joe. You know, I like this idea of owning the change management.
Clearly we're at another inflection Point with the this year's announcements around AI agents. I have a blog post that I published a few weeks ago listing AI agents from 50 different major SaaS and security companies. And when you say own the change management, give me a step toward the adoption. Give me a step toward the value that they should, that a digital trailblazer should be looking for.
[00:06:48] Speaker C: Well, understand that when you begin to automate trivial functions, low level functions, the initial contact with customer service, the audits, in the case of a financial institution, legal research, whatever the case may be, you're losing a raft of individuals who were trained who, who build certain knowledge and expertise around the industry by doing the grunt work for a year or two or three.
And so it begs the question, when your middle management or your senior management leaders begin to move on, whether they retire or just move to another organization, how do you replace them?
This whole concept of, you know, building the, building the team from the bottom up by virtue of experience and exposure to customers and processes and so on, that's a whole different morass that you have to be thinking about now as that senior leader. Where are you going to get your lieutenants? Where are you going to get your captains if there aren't any privates anymore?
That's a real concern.
[00:08:00] Speaker A: We talked a lot about that concern.
Joe and I were at an event this week called the Spark CXO Forum and that came up at a. In at least two or three different panels. Martin, your first thoughts on this week's news and leaning into how we're going to drive adoption and value as more agents enter our workforce?
[00:08:21] Speaker D: Well, I kind of with Joe on some of that. You know, my first reaction is, oh yeah, a bunch more announcements from the big guys on the it side of things. It, it's kind of bit like Joe, yeah, you're always getting something and always over the years and everything else. So you got to focus on what is more important to your company.
[00:08:41] Speaker E: Yeah.
[00:08:42] Speaker D: And how does the kind of current economic certainties or uncertainties impact upon your company and how do you actually drive. Yeah. The best results for the company. So I'm kind of looking more at a pragmatic view about where are your strategic investments given the aggressive nature of change on the AI and the agentic side of things, where are you really going to adopt some of those aspects to deliver to the bottom line for your company? And that's more and more important given some of the economic uncertainties and other things like that.
So I'm just kind of sitting there saying, yeah, okay, nice. It's nice to see all these announcements. It's not good to see the layoffs in different aspects. But I'm still going to go to the bottom line saying what is strategically important for my company and how am I going to deliver to the bottom line and deliver to the bottom line sooner because that's what really matters.
[00:09:46] Speaker A: Thanks, Martin. I'm going to go to Liz next. Derek, I see your hand. And Joanne. Liz, I know is got a tight timeline today, so we're going to let her cut the line and speak to us first. Liz, you know, my thoughts to ask you around is just about all the experimentation and all the, the, the toys that are coming out that are available to us. And you know, somewhere in here, the experimentation has to lead to value.
[00:10:17] Speaker F: Yeah, well, from a micro, if you're just looking within a company to understand how people are leveraging the toys, there's just a huge amount of value that people can be doing in terms of efficiencies and in terms of, you know, creating new ideas and working together and collaborating, which really could lead to, you know, amazing new opportunities for not only positions. I mean, this, this was originally about HR and how it could partner with the AI trend and how, you know, people could be, you know, creating new roles around prompting and, you know, new products that could be. That are leveraging AI and how we could be, you know, really using this to go forward.
All fantastic ideas and ways in way in which we could be better bringing value to the top and bottom lines.
Unfortunately, I think that there's a lot of.
Because of all the government shutdown and the economic uncertainty and basically what we're seeing right now because of the shutdown, the labor statistics are kind of a mess and we don't even understand what's going on with unemployment.
This bubble that AI is putting out there and the raft of unemployment that is, in my opinion, right behind it, I think it's a very real problem.
It's a very real problem. And I think that people learning and understanding AI, it's sort of, you know, the best thing people can do on an individual level, again on a micro level, is to understand as much as possible how to make yourself valuable and how to leverage yourself and what you can do for a company's bottom line at an individual level so that you can actually ride, you know, surf this wave that's coming down the pike.
[00:12:22] Speaker A: I like that message, Liz, about changing your value and looking to reinvent yourself. I mean, this is not the first time that technology is coming into the workplace and going to employees and saying you have to, you know, adapt a new set of tools or learn some new workflow or find a new creative way to deliver value.
And you know, I think more than ever what's probably changed is the employee has to step up and navigate that themselves a little bit. Right. Because all these capabilities and tools are being plugged into every platform that's out there.
You're not going to be able to wait for somebody to give you a training course on it. That's just, I think that's a, I think that's just a good message. Let's go to Derek. Derek, welcome back.
And your thoughts on this week's news. And you know, we're going to start bridging into AI agents at work and how we drive adoption and value.
[00:13:23] Speaker E: Yes, good morning. Yeah, I think I'd have to agree with Martin. I mean this is very strategic. When you look at all these big players, the Microsoft, the Googles, the Metas, I mean these organizations, they invested and planted AI seeds early on on to kind of see how it would help with automation and workflows and things that they were doing within their industry.
And now they're actually harvesting those benefits of what they see based on the fact that they could go through and they could work with decision making, they could work with automated workflows and is really looking at, you know, the resilience aspect of this because now they've got artificial intelligence adopted into the workforce and semi business culture because it's still, they're still being worked out but they're really not prioritizing, you know, things like automation, the governance, the architecture, all these things in compliance. But now they can do it in an automated fashion and they've established these AI risk frameworks allow them to do that to allow business to continue. So now you've got a continuous form of flow. I got to go back with Joe said, also with the business culture, it does take time to adapt, but also being the fact that this was two years ago, companies need to understand they need to change, they need to evolve, they need to be looking at how they can establish and maintain that business value in the context of the business they're working with that they can step up and part of the process and not just wallow in it and just kind of let it go by. And I think by looking at that, it's going to help them better plan for the future. It's AI is not going away and those that embrace it early, like these other companies, they're going to see the benefits early. On not everybody's going to like it, but, you know, it's a business decision. How can I become more resilient and keep my business operations flowing?
[00:14:58] Speaker A: Thank you, Derek. I really like just emphasizing again that we still have to figure out our organization's approach. Right. The why, the how, the why, and the when I think is really important.
I think when you look at how we're implementing things, the question has to be extended to how we're using AI to help us implement.
Joanne, that's a prompt to you. I just called AI agents a toy. I actually don't believe that.
I think some of them are maybe a little bit more marketing than substance. But the ones that I've seen that are substance are game changers.
And you can see where it's going, where, you know, workflows that we conceived even just two or three years ago with all the automation are going to get rewritten from an AI capability first. So, Joanne, how do we talk about, you know, the big IT companies who obviously have a vested interest in driving more companies to use their capabilities and to experiment with AI and then shift to what we have to do inside corporations around AI agents, right? Which comes down to adoption, it comes down to enticing employees, and it comes down to leaders in IT and HR and other departments finding ways to collaborate to evolve how the organization's running. Just love your overall thoughts on this. This.
[00:16:28] Speaker G: Okay, well, first of all, I would say that many organizations make the mistake of using productivity stats like, you know, I put some of them in your notes. But basically, if you look at Overall, people spend 27% of their time looking for information, AI changes that dramatically.
So while it may be 27% of their time and the cost of that time, getting people to actually use it is almost the same as getting them to be comfortable with a learning management system.
I'm now retraining you to work in a different way. But I'm also giving you with the stick, the carrots. It says, I'm allowing you to, to upskill yourself and make yourself more valuable to the employer. And I think as organizations come together from a strategic point of view, it's not just about, I can remove this number of employees and put AI in because I'm going to gain that productivity improvement automagically. It's about where the inefficiencies in workflows actually were and how antiquated in some situations, business processes are. So it's a mindset shift for the organization before that collaborative capability kicks in to get everyone aligned on the same page of this is why we want to use AI, this is how we're going to use AI. And then it's the adoption of enamoring people to use AI, to take some of their grunt work away, to stop having to look for, you know, disconnected information or whatever. Now, that's not to say that that couldn't be done in a different way without the AI nomenclature around it, but if their view is to utilize AI for resiliency, for true organizational value, shareholder value or whatever at that top level, as well as gain the efficiencies, that's when the collaboration starts to kick in. I mean, I'm going through this with a customer now where I've never seen so much disconnected information, but I've seen true alignment among senior executives to say, yeah, we really all have similar problems. It's like take the top five problems across.
You know, from my background in manufacturing of the coo, the top five problems resonate across. Do they then decompose into very specific issues within sectors? Absolutely. But you're starting to see that the C Suite is getting on the same page. They all face the same challenges. That starts to promote the notion of this is an enterprise initiative. This is not a departmental initiative. This is not playing in sandboxes and extensions, experimenting. We really need a strategy. It's about security, it's about intellectual property, it's about privacy, it's about giving people a way to make themselves more valuable. All of those things can be used to enamor the workforce, too. On board now, agents. I just want to make one last comment. Agents are designed to run autonomously.
And people believe that they don't in some camps, that they don't need human in the loop guardrails or that that means that the human can go away. That's a serious mistake. Because with large numbers of the workforce retiring, that tribal knowledge is out the door. You'll never get it back.
That's the serious mistake. And that, I think, has to be part of the conversation between the AI group, the IT group and the HR group more than any other.
[00:20:30] Speaker A: You know, Joanne, the most important part you're suggesting here is that I think leaders have to step up and communicate that strategy. Right? It's one thing for. It's one thing for employees to, you know, gain the skills, to recognize they're doing something inefficiently and they're going to try to prompt an AI to do this, that they're going to look at the tools they're using day to day and investigate where AI is turned on and how they might start using it.
Some of it's going to be guided through past experiences and others who have done this well. But if people are doing this, leadership needs to step up and say, look, this is what we're aiming for.
And here are some examples of where we're excelling at it. And I'm really talking about promoting the people who are doing the excelling, right, not just their example, but the people themselves. I think the organization needs to know where AI is having success with people's adoption.
I struggle with some of your language, Joanne. I mean, I think agentic AI is automated, but yet we're putting human in the loop as a core paradigm for most of them.
I get worried when I use the word autonomous right now. There's a question here in the comments. Are we really ready for agentic implementation at enterprises?
And for good or bad, Joanne, people translate agentic AI implementations to fully autonomous without humans, and that's what they're translating it to. And I think that's going to scare people away.
[00:22:18] Speaker G: It, it may and, and I'll be brief, it may scare people at the outset, but if you look at the actual formula for creating an agent, they're designed to run autonomously.
So, you know, in our case, we put human in the loop back in as not only the guardrail, but also because we wanted to capture that tribal knowledge.
That being said, the whole notion of it is that agents should run in your environment without any human intervention, unless it's mandated that humans have to be in the loop.
So if you want to use generative AI versus an agent, then you know, you're, you're making your choice there.
Agents can have the capability they're designed to run autonomously, but you can add various options into them programmatically to make them less autonomous. Nobody is going to trust an autonomous agent right out of the box.
So you have a trust curve that I think more people that start watching from an observability standpoint, what are the agents doing and really keeping a close eye on the monitoring. That's where the difference is going to come out. To those that are using autonomous agents well and getting great productivity gains, or great, you know, top line gains versus those that are using agents and not succeeding well because the agents don't function properly or the data issue rears its ugly head, that it's incomplete data.
I understand the struggle.
I would also say that lastly, if you're going to create a specialized language model, an slm, flip side that equation and think of it as a training device as well.
So this is how you get people to adapt.
[00:24:29] Speaker A: So I think I agree with you. From a, an engineering design perspective, you're saying, you know, when I'm thinking about creating or using an AI agent from the ground up, I'm building in the scaffolding and structure to become agentic and become fully autonomous. And then I'm fully plugging in humans either because of risk or trust or learning. I'm doing that intentionally.
And then I'm monitoring it and saying, when can I start feeling that trust in certain circumstances where I can really let that agent adopt autonomously? That's the part I agree with you, Joanne. I just don't know of the 50 agents that I reviewed in my post, number one, how many of them are heading down that path From a design perspective and an engineering perspective, my sense is maybe 10%. Maybe 10%. Okay. And number two, I don't think they share that same vision.
And I'm not sure I'm ready to share that vision with my staff just yet. I want to get them involved from the ground up and saying, look, we're going to give you AI agents because it's going to save that 27% of time that you're wasting searching. And I want you to feel that, I want you to understand that, you know, I'm using AI to plan my trip to Japan and that I'm doing in January.
I am not going to Expedia, I am not going to a whole bunch of other sites to find too much information. I'm asking several AIs to help me with this and it's pretty incredible. And at the end of the day, Joanne, that's a search problem, right? It's, it's, you know, given my interests, my time constraints, my budget constraints, geographic constraints, when museums are off and how do I triangulate around a plan and all of us do this every day. Joanne, I'm going to keep going down the line here and we'll circle back with you about enticing employees. John, welcome to the floor, Joanne. Joanne is saying AI agents are going to be autonomous from the ground up.
And I'm sure that's music to the tech companies ears. What do you say?
[00:26:49] Speaker H: Here's what I'm saying. The first one is right now the agents are running without humans in a loop for a lot of customer service interactions. And I think the future that we're going to is that we're going to have a future where the default interaction is going to be with an agent. With a virtual agent.
And then I think the exceptions are going to be with humans. And that's, I think the direction that we're going to.
I do unfortunately think that right now that like, unfortunately I think a lot of stuff's going out without testing. And so I think it's going to be really critical for people to figure out, you know, what testing is required for this. And then if they can figure out like what stuff do they really want to have humans do it and what do they want to be the robots? And so I just, I see a future right now where it's going to be a lot of interactions with virtual things.
[00:27:38] Speaker A: You know, when we did a session about the future, AI roles, testing, validation, business analysis, data oriented roles all came up as, you know, key areas for people to move laterally into.
And you know, I think that's a good note to bring up here, Derek, back at the helm.
Okay, I think I threw this question at you before.
Imagine a fully autonomous AI security agent. How real is that?
[00:28:17] Speaker E: Well, some of it is real. I mean with the AT or when you talk about security analysts, that's happening now. But the thing is it doesn't happen right away. So I agree with Joanne. And looking at this, when you look at agentic AI and that's really one of where the autonomy is going to take place. You really need to look at how you're going to train that person. And this is going to be working with like an HR person based on the skills you want that agent to do. You're not immediately going to give them the keys to the kingdom. And I think that's where companies fall short. They have all these expectations. They put the AI agent in place, agent, the agent place, and realize we did something we shouldn't have done because it's gone places it shouldn't have gone.
The mindset needs to be in place. As you're asking the questions based on resilience, what do I want this agent to do just like a new employee? And how long would to take that employee or this AI agent to come up to speed? And I think when you look at it from that perspective, it's really now more of an integrated IT HR strategy to get the syntax AI platforms in place to do what they need to do to help work with the cultural acceptance because it's going to take time. As I mentioned earlier, these large companies, they didn't just put things in place, they took time to train these systems, bring them up to speed to find out what's going to work before they allow them to run autonomously. And we've seen the result of this now. It's taken jobs. And unfortunately, that's reality that we're going to see down the road. But yes, eventually you're going to have agent, AI agents, they're going to start coming off the shelf and they're going to start working directly with these things because now they've been groomed, the guardrails been put in place, and they can hit the road running. And I think when you look at it from the perspective of the value that they can bring, I mean, it's going to be consistency in the way they do business and automation and things. But also, as mentioned, Martin, it's the bottom line from a strategic point of how can I keep generating revenue for this particular company entity at the best possible bang for the buck? And that's what they're looking at from the bottom line.
[00:30:08] Speaker A: Yeah, Derek, I asked you this question because I actually think security is a good place for people to visualize where this is going and where it's been going for a while, you know, so, you know, you think about intelligence, everything.
[00:30:22] Speaker E: Yeah, yeah.
[00:30:23] Speaker A: I mean, you think about your desktop. Right. And what it takes to deploy, you know, new signatures for viruses and new signatures for Internet security onto this.
And, you know, the race to keep up with it is faster. The breadth of different issues that are impacting your equipment is getting more complex.
But for the most part, this is all behind the scenes for you as an end user.
[00:30:49] Speaker B: Right.
[00:30:49] Speaker A: You come in and you, you know. Yeah, right. It's automated. Right.
[00:30:53] Speaker E: And that's what you want. And that's what you want it automated. Yeah. Because at the pace that artificial intelligence tax attacks are taking place right now, humans cannot keep up. And the space, the speed, the complexity, they're going to miss too much to get these agents in there that can now move at that automated speed to actually be in front of it. It's really going to help to keep those companies really and more secure as they move forward because the attacks are just going to get worse. They're already estimated right now just with the government shutdown. 555 million attacks just on the government agency, which is an 85% increase what's happened from September. And this is due to the fact that automation and these bots have taken place to make it more intrusive to those agencies that know how they have not prepared for it properly.
[00:31:34] Speaker A: Yeah, you know, I was going to finish my story, Derek. You know, I'll put that same story around security in the SOC today.
And, you know, I for one, do not believe the SOC is going to disappear.
No, I don't think anybody who's sitting in that job sees their role as, you know, systematically perusing and querying through log files and reports and alerts to find issues. I think they see themselves as, how do I protect the organization?
And when you look at it that way, then AI is a tool for finding outliers and patterns, to be able to query systems, to look for things that, you know, people can't look. And it's just a great predictive analysis. Yeah, it's just a great way to look at.
[00:32:22] Speaker B: All right.
[00:32:23] Speaker A: If, you know, what is my job, what is my function here? What am I trying to accomplish? Then scale up organizationally. What is a department or a team trying to accomplish as a group? And now let's rewrite the rules about how we go about doing it now that we have as AI as a capability. I'm going to come back to Martin and Joe next, folks. You're listening to this week's Coffee with Digital Trailblazers, our 148th episode.
We're talking today about AI agents at work, the IT and HR alliance to drive adoption and value. And we're really hanging off this week's announcements around layoffs and around IT companies just killing it when it comes to revenue and what does it mean for employees and for digital Trailblazers. I want to thank all my speakers today.
Liz, Martin, Jojo and John.
Who am I forgetting, Derek, for being here and talking this up with all of you. We have some really interesting coffee hours. Next week is going to be a little bit of an experiment.
Just in case. I will be at Berlin next week at the SAP Tech Ed Conference. If you happen to be there, do message me. I'd love to meet up, but I'll be broadcasting next week from Berlin and it will be a little bit of experiment. Apologies in advance if that experiment fails, but we're going to try to make this work. We'll be talking about how digital trailblazers reduce stress in their organizations, teams and themselves.
Next week is Mental Health Awareness Week, and we will be celebrating that by talking about a very hard subject for all of us. And I hope you will join us on the 14th. I have two special guests lined up so far for AI for social good insights from nonprofit leaders. I'm going to expand the notion beyond just CTOs. I've got two special guests for this one. And so if you are a nonprofit CTO or nonprofit leader working with AI or know of one. This is a way to just give back to the nonprofit community. With Giving Tuesday coming up toward the end of November. And we'll have a special session on the 14th around that the 21st, we'll be talking about digital to AI natives.
How is Gen Z using Gen AI? We'll do a little bit of reverse mentoring and I'm going to try to get some real young ones joining us here to tell us how they're using AI. And then the 28th is our Thanksgiving weekend. We'll be taking a break around that. Do visit drive.starcio.com Coffee that's where you can see recording episodes that I've released. You can listen to them on Apple podcasts on Spotify.
One episode per month is there. Two episodes per month are available on my website. And for those of you who join the Digital Trailblazer community, drive.starcio.com community, you can get access to to all of the episodes. Martin, we're going to come back to you. We're talking about driving adoption around AI agents. We're talking about how we're going to entice employees to get out of their comfort zones and how we're creating an AI strategy around this so that we're building awareness, managing up so that organization knows this is where we're spending our time and where we're driving value.
Martin, you're free to answer any one of those three areas.
[00:35:43] Speaker D: Yeah, I've got three separate thoughts. So the, the first thought that occurred to me, and I've said this before, you're gonna have unemployed people and employed people who use and can work with AI. So that's kind of the first thing. It's kind of a pretty obvious statement, but I, I still think some people kind of don't get that piece.
The second thought is, how long is it going to be before, yeah, I say, okay, I need a new HR person or whatever else and I go shop the agentic AI community to find a, an HR agentic AI as opposed to doing recruiting. So how long before we actually get to kind of that stage where for some of the, some of the roles where we don't necessarily need a person to do that?
[00:36:32] Speaker A: Yeah.
[00:36:33] Speaker D: Is that in our future? I suspect it is and happening right now, as Joanne kind of commented earlier.
But then I'm thinking, okay, on the adoption piece and I think the comment was made a little earlier, which is how do you look at how you become more valuable? So how do you work with the AI? And that comes back to my first Statement is how do you work with AI to make yourself more valuable so that you are seen as an asset and not just a number that is costing money.
So I think that's kind of the piece and all of the change management pieces apply to this. Yeah, the standard rules of change management is how do you communicate? How do you get people to understand what's going on? How do you get them to understand how they could develop so that they become more valuable with AI taking the more tasks that can easily be done by AI. So those are kind of the things bubbling around my head at the moment.
[00:37:39] Speaker A: So Martin, I'm going to put you on the spot here. Joe and I heard a speaker this week reference a world class CIO who is now a world class CEO and he's asking some very provocative questions in his company.
The question he asked was do I need an HR department anymore?
Can we just fully use AI to completely manage everything that's happening in the HR department?
So I'm going to ask you the reverse question, Martin, and then maybe Joe will comment on this. And Joanne, do I need an IT department?
Right. Why do I need an IT department when service desk requests and incidents and security issues, coding and you know, data pipelines are all going to be coded and responded to by an AI?
How would you respond to a board member around that?
[00:38:37] Speaker D: Well, you could always, you could ask the questions, do I need any employees at all? Can I not just let robots and AI do everything?
[00:38:45] Speaker A: And that's the, that's part of the answer, right? Because if you have robots doing everything, you're relying on those robots to change your company, to evolve your company and to transform your company. And guess what? They can't do that yet.
[00:39:00] Speaker D: Yeah, I was going to say that's, that's what I'm, that's what I'm going to come to next, which was the how do you want to drive strategically forward and how do you want to drive for something being different?
How do you want to make sure that you're keeping all of the different automated pieces you put in place in line? Yeah. Are you going to get agents to check agents for quality and who's going to check the checkers? Yeah, that whole kind of conversation. I think there is definitely an argument for lights out factories and other aspects like this, but most of the companies that have done that still having some people doing, checking on top to make sure, monitoring, making sure the quality, quality, making sure that we're not getting yet some of the drift in the actual decisions being made and other aspects like this.
So I, I think it is a case of taking out some of the pieces that can be trusted, but taking the right controls and making sure they're in place as well.
[00:40:06] Speaker A: Paul Parker asks a really funny response to my question. He's like, can the board members set up their own email on their laptop? How many CIOs?
[00:40:15] Speaker D: I was going to say most board.
Most board members and sea level execs have trouble setting up the video conferencings in, in the video room. Yeah. Let's face it, I speak for myself included at times. Yeah.
[00:40:29] Speaker A: So, Martin, the factory can go dark, but I guarantee you there's somebody outside of the factory occasionally stepping in, examining everything.
[00:40:39] Speaker D: Exactly. Exactly what I said. Yeah, exactly what I said.
[00:40:41] Speaker A: Relooking at what, what the factory of the future is going to look like and going back to something. Joe started with. Right. If you don't have people understanding how things are done, it's really hard to really rethink that future.
[00:40:56] Speaker B: Right.
[00:40:56] Speaker A: So if you roll back to the pandemic in those early couple of months and you said, you know what, AI was running HR or AI was running it, but we didn't plan for that scenario, or at least we only did superficially, that sort of strategic playbook that the business continuity folks create and don't have the time or energy or funds to actually execute and practice. We didn't have a playbook for the most part to be able to go do these things. And we do not have a playbook for what we're transforming to. I do think your AI strategy needs to be able to answer to that, Joe.
[00:41:36] Speaker B: We, we.
[00:41:37] Speaker A: I'm allowing us to go all over the place today. Where you want to go?
[00:41:40] Speaker C: Well, first I want to riff on what you just said because I think, as we've learned from the models, when AI feeds on AI generated output, everything starts to become homogeneous. There's no differentiation and decisions can become really bad if you let things run. Hey, you know, I have, I have three golden rules that I've, I've written about in terms of the way that I've always run the IT function. I want to take a minute to adapt those to what I think is the right way to adopt AI. My first rule still applies. I want to be the first to know when something goes wrong. When you talk about agents being autonomous, if you stop and think for a moment, we don't let people be autonomous.
We have customer service agents, right?
And we record everything they do and somebody monitors the tapes and the supervisor goes around and Listens in. So, you know, everything old is new again. We might have agents that are able to act on their own volition as, as, as Derek pointed out. You know, we can't keep up with the logs. And so we need automated ways of finding where things are going wrong and reacting to quickly to them. But we still need that stopgap of transparency.
So that first rule still applies. I, I want to be in the loop and humans have to be in the loop, as Joanne always says. The second is stay on task.
Let's, let's, let's not lose sight of the big picture. We can automate workflows, but as, as several of us have already pointed out, we really need to understand the big picture. We need to know how the entire organization functions as we're selectively automating pieces and parts of it.
You know, I'm reminded of the waitress that is so focused on making sure the salt shakers are refilled in every table that she never pours me a second cup of coffee.
The third is your opinion matters.
Let's not go blindly into the night.
Agents can make, can turn us into lemmings.
You know, we don't want to follow them over the cliff. Right. We have to exercise some judgment, challenge the recommendations with fact based arguments. So those are my three golden rules adapted to AI adoption.
[00:44:02] Speaker A: Look at that. Joe, you're going to write that blog post for us, right?
[00:44:06] Speaker C: The thought crossed my mind.
[00:44:08] Speaker F: Yeah.
[00:44:08] Speaker A: I just took notes for you and I'll give you the recording if you need to. And, and I hope you can see it so we can share it with everybody. Joanne, you're back on. We've been really sort homing in on the word autonomous that you, you said was our goal. I'm gonna let you go anywhere you want. It's just like everybody else. It's just been a fun conversation.
[00:44:31] Speaker G: Sure.
I think I want to say something that may be a little controversial here, but let's not forget that AI does not think.
It is not cognitive.
It parrots what it's being asked, it seeks, it finds and it responds.
Whether it's autonomous as an agentic agent or it's a generative AI using an LLM and it's been wrapped as an agent or packaged as an agent, irrespective. It does not think critical thinking skills are still required.
And it's not there yet. We're not at an AIG point point in, in time yet.
So the fear around AI taking jobs I think is very real. And I think that there are certain Jobs where AI could potentially do a better job than humans are doing, because the job is mundane, the job is repetitive. There's no rpa, there's no automation. People get bored, their minds wander. They get, you know, squirrel, shiny new object, whatever you want to call that. But the AI is not thinking for you.
So your critical thinking skills are still very important.
And that's where AI versus a human is really what it's really going to come down to. The other point that I wanted to make is in enticing employees. Employees entice the employee to use a productivity tool for which it has been designed. AI is a helper. It's not your replacement. It can be very creative in what it comes up with, but if you're very prescriptive in what you're asking it, you'll get back what you need. And that's one way people need to start thinking about AI is it's a tool.
It doesn't replace you.
[00:46:34] Speaker A: I think when we talk about enticing employees, I think one of the things we should try to do is have them get a little bit into the weeds about what AI is doing underneath the hood. And people don't fully grasp some of the science and engineering around its predictive nature, around the notion of boundary conditions.
We had conversations here, Joanne, about bias. You know, AIs are trained on a bias, okay, based on what data you provided to it, based on the quality of that data, based on its ability to interpolate and, you know, context. Unfortunately, when you just use AI as a black box and you join, you know, using it inside a workflow tool or using it, you know, as one of the LLMs, it tries to hide all that from you, all right? And the tech companies in some ways try to hide that from all of you. So when I talk about adoption and enticing employees, I think they really need to be educated and step up and learn on their. On their own what's really happening underneath the hood. Okay, it is.
Go ahead.
[00:47:49] Speaker G: Yeah, it is. It is a parrot. And, and the other point that I would also make is some of the models, I'm not saying which ones, for very specific reasons.
They are trained to be supportive. They are trained to have an.
The impression of an emotional capability.
That's part of that bias. And that also has to come into account. Context is really key. You know, even you and I offline have had a situation where, you know, you're. You were looking for something and you didn't get the answers that you expected. And I said to you it's about telling the AI who, which Persona to use, what tonality to use, how, how you phrase the question. And those border conditions, as you were, as you mentioned, are put earlier around the questions that you ask. You know, a lot of people say, oh, I'm going to become an AI prompt engineer. Don't.
Because the perspective that you bring is very different than the perspective that I would bring. And people have to understand that level just as much as the engineering underneath it.
Sorry if I cut you off. I didn't mean to.
[00:49:07] Speaker A: No, no, you didn't cut me off in that. That's just really great thinking. We're coming down for our last 10 minutes or so. Let's bring Derek and John back.
Derek, you know, I cut you off with my statement about the SOC will always be there. You know, I think the operators will always be there. I think we're constantly moving up the intelligence chain, and I think we're constantly moving up the velocity chain.
You know, let's just put yourself in your role. What do you. What is your AI strategy as a ciso?
[00:49:41] Speaker E: So AI strategy again is looking at transparency. What are the agents going to do for me? And you mentioned the SOC analyst working with the threatened tech intelligence, threat intrusion. All these things, looking at malware, looking at all the things that are coming through your system. These are things that the AI agents can do to improve the workflows in the existing job. Today, I think the part that Joanne and Joe both mentioned, the communication piece of and the transparency are going to be key. You have to train these agents to do these particular things in the fashion that you want to do them. And the good part about it is once you train them, they can accelerate and move much faster than any human ever can. But I think that the model that Jon painted is these are dumb machines that will take information you feed it and only move forward based on that. And this one, the challenges I see in some of the companies I work with today, they have all these old contexts of data living in their ecosystem, and they're trying to feed this old data into a new technology which doesn't work well, and you're going to get garbage in, garbage out type mentality.
So looking at how I would take it from a threat intelligence point of view, make it more secure. Is my data secure? Is it scrubbed? Is it something I can utilize for future use? If the answer is no, that's where they need to start. And getting employees to help work with, cleaning up the data, correlating the data, put it into some sort of data lakes formulate and secure the data. All these things need to come into play to help entice employees to say how can I work without an AI agent where world in this workplace, training, upskilling their certifications, the micro learning, the certifications and AI tools, understanding these are all going to be key parts of that security element, that resilience element. Because everybody's now looking at how can I maintain this pace, accelerated pace, which is only going to get faster to maintain that security awareness, to also monitor those threats, to also look at those different things.
It's, it's, it's the whole gamut of things that are going to come to play. And like I said, we're really just getting started. And these evolutions of these jobs being lost now is just the beginning of the pipeline of other companies following suit.
[00:51:43] Speaker A: Yeah, Derek, I have another theory around the companies, you know, shedding people here.
I think they can see that they are going to rebuild their companies from a, from AI ground floor. It's imagine if we imagine you rolled back to an IT department that still says that their mission is 99.99 uptime and responding to tickets in an hour. You know, that CIO isn't going to be there anymore, you know, and that circles who's like, you know, focused on, you know, the sort of ground floor security checking doesn't see themselves as risk mitigation as, you know, the ability to, you know, innovate in the company safely. You don't see, you don't see yourselves as a bigger picture, you know, you're going to miss out on the opportunity. And this is a sort of a softball for Martin. I mean I think this is in some ways companies trying to shortcut their way through change management.
Right. We're going to keep the folks around who are going to change with us at the speed of AI and will help us rebuild the company from the ground up. And we all know that part of the risk around that is losing a lot of what we call tribal knowledge, losing a lot of expertise, hand waving through a big six company spreadsheet about which people you need and you don't need. I just think we're losing a lot. By the way we're doing this. We got John, Martin and Joe for the next eight minutes. John, your thoughts about AI agents enticing employees and building your AI strategy?
[00:53:20] Speaker G: Yeah.
[00:53:21] Speaker H: The way you're going to have to entice people to use these things is just make the experience of using the AI agent faster, easier than the alternatives. And that's the way you're going to entice people. And so I think for now you should have a backup process.
But if you're gonna have to wait on hold for 20 minutes to do something, if you can just do it when one minute by using this A agent, people are going to be like, I don't want to spend 20 minutes waiting for that. I'm just going to use the easy button. So that's where you're going to tax these things. But back to the comment on these things. They don't think they don't have any ethics and they don't have any values. And the scary things is these AI models, they really are black boxes. And so I think there really has to be a lot of testing to ensure that they're behaving their way, that they should be, that they're behaving what the company views as the ethics that the company wants to.
And then they're behaving in the values of the company. And when anthropic and other people are testing these things, like when they get put in boxes, they often result to some pretty unethical stuff to meet their goals. And so you have to make sure that the goals of the AI agent are kind of in line with the company.
[00:54:23] Speaker A: So thank you, John and Derek, for joining us today. Martin, all over the place today around AI agents and strategy enticing employees adoption. Where do you want to go?
[00:54:34] Speaker D: Well, I think the first thing was the comment, the last comment there from John just a second ago about the ethics and I think you shared something, Isaac, which was.
I can't remember what it was now, but it was very interesting. Oh, it was the AI agent running a tuck shop and making a profit at a school or, or a college and the unethical behavior it started to resort to in order to increase its profits. And I think it was a fascinating article in terms of. Yeah, when asked, why did you do that, wasn't that against the rules? It said, well, I made more profit. Yeah, it was kind of, it was really interesting about the thought processes it went through.
And some of it was kind of almost deviant behavior that couldn't be explained. So I just want to throw that one out there to start with.
[00:55:27] Speaker A: Well, so let's educate folks around this. When you get into how reinforcement learning works, which is just one of the learning algorithms these tools are using, you are giving it a value equation.
And the best way you can see that is the early YouTube videos of how a bunch of college students trained in AI to play the Game of breakout, that value equation was based on score, right? You know, you have done the right decisions in your game when the score is higher. And so it eventually learned how to play the game of breakout based on a very simple value equation. It's really hard to code ethics into a value equation. It's very hard to, to code a balanced decision.
You know, cost and profits is easy, but it's not the only reason we're making decisions. Go ahead, Joe.
[00:56:22] Speaker C: I'll give you a couple of concepts about how to do this right.
It's, it's not a partnership with mandates.
It and HR form a quote, unquote alliance to drive adoption. You know, you, you can just picture the, the task force producing the PowerPoints that nobody reads. Anyway, here's where it really works. Everybody has a role to play it.
You're doing the infrastructure and the guardrails, the, the governance function. Right. And Derek has pointed out a lot of the security provisions. And HR really should deal with the people issues, communications, training, the, the realities of how the organization actually functions. They need focus on that. The business units have a role. They should be defining actual problems we're trying to solve. Let's understand what, what the company is going to get out of this from a, from a business perspective.
And everybody has the right to raise their hand and say, this agent isn't working. This just isn't working.
[00:57:26] Speaker E: Right.
[00:57:28] Speaker C: So I think those are the fundamental tenets that I would, you know, bringing this back to the original topic of IT and hr. I think everybody has a role to play. It is the tech, HR is the people.
And the business units themselves really should be talking about the outcomes.
[00:57:45] Speaker A: Love it, Joe. I mean, I love just how you simplify these complex topics. And I think that's a really good message to leave everyone. I got Joanne and Martin and then we'll close up. Hello, Joanne.
Last words.
[00:57:58] Speaker G: Last words.
Look at the biggest picture possible before you design your strategy for AI. And remember that every company has their own cadence at, with and and pace to adopt any kind of new technology.
So if you looked at automation and got a lot of pushback, two things. Communicate. Communicate. Communicate to mentor. Joe, who is always on spot, is spot on with that. But the other part of that is the more easily and the more quickly you bring your workforce together in the design of your strategy, the better results you're going to have.
[00:58:41] Speaker A: Everybody in your company should be participating in Blue sky thinking what we talked about earlier.
That factory you just built will be obsolete faster than you probably built it.
And that's just going to constantly require us. I wrote that article, I think a year ago saying bring your organization together frequently to do that level of blue sky. What if should we.
Where is there opportunity? I'll give everybody a hint.
Most of what we're talking about AI agents today is basically back office workflow. We have not tackled customer experience for the most part yet.
And that's a great opportunity for all of you to get involved with because how we're interfacing with your company's direct customers is going to change dramatically over the next few years. Martin, last word.
[00:59:33] Speaker D: I liked your comment earlier about get rid of resistance to changes. Fire anybody who resists.
I thought that was kind of an interesting approach to change management.
[00:59:42] Speaker A: I think it's just, just for the record, I think it's an awful approach to change.
[00:59:48] Speaker D: And my last thoughts are the end of the day. People resist change because they don't understand what is going on. And it's about communication, as Joanne says, as Joe says, as John said, etc, so getting them involved, clear communication.
Let them understand what's in it for them and how they can develop and let it take the more the more repetitive type tasks and make them more powerful, more appropriate and how they can use AI as opposed to be replaced by AI. So that's my kind of thought for you.
[01:00:19] Speaker A: Thank you Martin. And to all the speakers today, I let us go a little bit left and right of our topic. We did get to not only speaking about driving adoption and value, but what we need to do to create our strategies. Everybody has a role to play. AI is a parent, AI is a helper. But the bottom line is as leaders we have to define our strategy and get our employees involved. And as employees we need to challenge status quo and challenge what the AI is providing to us. You know, always step up and make sure people understand where these agents are working and not working. A lot of really good advice on this one. I'm going to push this one onto the podcast so if you missed part of it or you want your friends to hear it, it will be on Apple Podcasts, Spotify and my website, which you can get to for this drive.starcio.com Coffee next week we're going to be talking about reducing stress in ceremony of the of Mental Health Awareness week on the 14th we'll be talking about AI for social good, insights from nonprofit leaders around how they're using AI. The 21st digital to AI natives how is Gen Z using Gen AI? And then we'll be taking a break for Thanksgiving I will be in Berlin next week. If you happen to be there, do let me know. If you're going to SAP Tech ed, do let me know.
I will try to be having our coffee hour and hoping that we don't run into any technical issues. Folks, happy Halloween. Don't get spooked out by AI. It's there as the next progression of how we're going to be running our business.
But at the end of the day, it's about what we're doing next. And that's where I want to leave you all with that final thought. Thanks, everybody.