AI as a Teammate: Crafting Purposeful Digital Workplaces

August 22, 2025 01:01:14
Coffee With Digital Trailblazers

Hosted By

Isaac Sacolick

Show Notes

The 139th episode of "Coffee with Digital Trailblazers" featured Stephanie Sylvester discussing AI's role as a teammate in digital workplaces. The discussion included AI implementation strategies, organizational readiness, and security considerations, emphasizing productivity enhancement and maintaining human expertise. Future meetings aim to further explore AI's impact on digital transformation.


Episode Transcript

[00:00:10] Speaker B: Welcome to this week's Coffee with Digital Trailblazers, our 139th episode. We're almost eight months into this new format being on LinkedIn Live, and every week there's always something that trips us up. This one was a minor one. So just giving a few minutes for everybody to join. And thank you for joining us in this August session, in this week of some of us working through hurricanes and drop-offs at college, which is what I was focused on the last two weeks. I was in Tucson a week and a half ago with my son. I was in Albany this week with my daughter. Wow. East Lyme, Connecticut. [00:00:59] Speaker A: Derek. [00:00:59] Speaker B: We could have met up. I drove right through there yesterday. Oh my gosh. That's just too funny. And I'm totally excited. You know, we're always interested in exploring new topics around AI here at the coffee hour, and this week we're talking about AI as a teammate. I have Stephanie Sylvester. Did I pronounce that right? [00:01:24] Speaker A: Yes. [00:01:26] Speaker B: Stephanie is here as a special guest. She's the founder of Avatar Buddy. We'll hear more about Avatar Buddy midway into the session. Just giving us just a few more minutes for everybody to join in. Thank you for introducing yourself in the comment stream. Hello, Steve, thank you for joining. Thank you for the comment last week that made it into our whiteboard. It is published, but I haven't shared the URL yet. I have to remember to do that. I've been running around all week. Hey, Alan, good to see you again. Alan and I met when I was in Charlotte a few weeks ago. Alan is a good friend, and as you all know, when I get around town, I try to meet up with as many people as I can. I do have upcoming trips to Atlanta, to San Diego and San Francisco. And so if you are in those areas, or if you're attending Workday or Cisco WebexOne, do let me know so we can meet up.
Every time I do this, there's always somebody who reaches out afterward and I get to meet someone in real life for the first time. So it's always very exciting. This week we do have a special guest, and we are talking about AI as a teammate: crafting purposeful digital workplaces. This is a conversation of more than just looking at AI as a tool or AI as an agent. It's looking at the connection between people and the agents that they're going to be, and are, working with. And I'm starting to see some data around this, some things that our industry is publishing around what's happening with AI in the workplace. I want to share some data points for you that have come out from different reports. This one is from a Workday report that just came out, and it talks about our comfort level with using AI agents in different scenarios. 75% of people said they were comfortable being recommended skills development or areas of improvement by an AI agent. That sounds very much like a buddy to me. But only 30% welcomed the idea of being managed by an AI agent. And even fewer, 24%, said that they were happy to see AI agents operating in the background without their knowledge. This is from a Workday report. It's called "AI Agents Are Here, But Don't Call Them the Boss." Very interesting report that came out. And then also this week, MIT came out with a report on, what's it called, the State of AI in 2025. And the data point I was going to share for you in here, I just lost it. Darn it. I'm scrolling around trying to get the name of it, but they talk about just a lag effect. Our adoption of AI and large language models is much higher than what's happening with AI agents. They are still in the early adopter stages, with only 5% of companies reporting that they've embedded agents on tasks specific for generative AI. 20% have piloted. 60% are still investigating. And so that's the backdrop for our conversation today: AI as a teammate, crafting purposeful digital workplaces.
And this is looking at, you know, as agents become stronger, better, smarter, how do we empower employees to use them effectively, to be doing more purposeful work, and to use AI agents in a more trustworthy way? So, Stephanie, I want to welcome you to the floor. Welcome to the coffee hour. Just do a brief intro of yourself, and then tell us a little bit. My first question: how should we really be thinking about training AI agents the same way we onboard and develop employees? Stephanie, welcome to the floor. [00:05:49] Speaker A: Thank you, Isaac, for having me here today. I look forward to learning from you and your panel of hosts and speakers. My name is Stephanie Sylvester. I have over 30 years of IT experience, and I have a master's in Economic Development and International Studies from the University of Miami. I grew up in Belize, came to the US thinking that I was going to learn economic development and go back and develop Belize. Instead, I ended up living in Miami for the last 30-plus years. And I am super excited about this conversation today because I believe that AI is a social construct. And as a social construct, it means that we should be approaching it the way we approach interacting with humans. So I just want to make a clarification point: I am not saying that AI is human. I'm saying that if you approach it the way you approach humans, you get a much better result. We've been working on AI for nine years, and for the last two and a half years we have been selling our product, and whenever we approach it the way we approach interacting with a human, we have much better results. [00:07:10] Speaker B: What does that mean, Stephanie? Especially, you call it AI as a social construct. Can you break that down for us a little bit more? [00:07:18] Speaker A: Absolutely. It means that it's unlike other software that you would just implement and life carries on. You have a business process.
You bring in a piece of software, you automate your business process, and everybody is good to go. AI changes how we work, how we think, how we interact with each other. It also changes how we behave. And because of that, that's what I mean when I say it's a social construct. So you can't just say, oh, I'll just buy some AI agents, I'll go to company XYZ, give them my credit card, they'll allow me to create agents, and I'll be good to go. Because one, we have found that that is not effective, and people struggle with even figuring out how to configure the agents to have impact. Then secondly, because it impacts everything, your organization has to be ready to change. It has to be ready to change how people think, and it has to be ready to change how they behave. We've had, unfortunately, some customers that were not ready to change how they think and how they behave, and our implementations went sideways. And so because of those learnings, we now insist that we do an AI opportunity mapping session with you so that we can walk through how you behave. And if, let's say, it's a two-hour AI opportunity mapping session, an hour and 15 minutes of that is talking about everything but AI. We did a session and I could see this CEO getting impatient with me. She's like, when are we going to get to AI? And I'm like, we are getting to AI. And she was like, what do you mean? I was like, we are getting through to AI. So you have to understand what your employees' emotional responses are. This is a very emotionally fraught topic, and if you don't get an understanding of that, you could be implementing AI in a way that just doesn't become successful, because your employees will consciously or unconsciously undermine the implementation. And then the next thing is alignment on what the real problem is, alignment on how the business works. And again, normally we talk about that, we're like, yeah, yeah, management and frontline people need to be aligned.
But do we really make sure that happens? Now with AI, if that doesn't happen, it will magnify a thousand times. So all of a sudden, now you're seeing this huge, huge, huge problem in your organization that maybe everybody knew existed but wasn't that big. And that's what I mean when I say AI is a social construct, because it takes little small things, it magnifies them, it forces you to reckon. But all of these are good things. We have one customer where we ask for customer feedback, and part of their customer feedback was that it's reduced workplace conflict. And at first we were like, workplace conflict? We didn't even think about that. And then when we kind of unpacked that a little bit, we realized it's reducing workplace conflict for two reasons. One, people no longer get irritated because you ask them the same question 25 times. Two, people don't get defensive because they have to ask you the question 25 times. Because they're asking the AI, and the AI is not keeping track of how many times you ask it the question, people are feeling better about themselves. That tension around asking for help and getting support goes down. It looks different now. And that's what I mean by a social construct. Because now when I come and ask you for support, I'm like, Isaac, I read this. Is this really the way it's supposed to be? That's a completely different thing from, hey, Isaac, tell me this. And now you're like, oh my God, I gotta go explain this entire process. I don't have time for this today. Oh my God. Like, Stephanie is a sucky hire. We should just fire her. All of that stuff goes away. And that's exactly what I mean when I talk about AI as a social construct. We're changing how we're behaving, how we're interacting. Very powerful. So I'm going to pause there and see what other questions you have, or let somebody else have a point of comment on what I just said. [00:11:52] Speaker B: I have a question.
I'm going to ask it and ask you to think about it, because I want to bring our other speakers up just to comment on some of the things you've been speaking about. Everything you describe falls in the category of change management, but you're describing it very differently. We've done change management with technology before, with process change, with realigning employees, with new job descriptions, because technology is automating things they've done before. This just feels different. And you're bringing it in the construct of: I can use AI in a transactional way, I can ask AI to write code for me, for example, or I can use AI as a partner in saying, you know, what should I really be developing today, what problem should I be focusing on, and how should I go about solving a problem like this? So it feels like change management is a very different problem now, and I think you're alluding to that. So I want to give you a pause to listen to that. I have Derek raising his hand. Liz, Joanne, Joe, all here during their summer breaks. Derek, welcome back from your break, and tell us how you think we should be training AI agents and onboarding them as we develop our employees to use agents in a purposeful way. [00:13:16] Speaker C: Thank you. And I greatly appreciate Stephanie's comments, and I fully agree with her as far as the training process. I mean, when you look at AI agents, I look at them as more like a digital teammate. There's still going to be an onboarding structure, just like regular employees develop. But you also need to put context around them and guardrails, and really understand how this person is going to work with your job and work with your industry. But most of all, look at what kind of risk they may be.
So it's a learning process, and this takes time to understand, you know, how they're going to work with you, the information you share with them, the training process they're going to go through, how much they're going to be applied. So you're going to embed them with, you know, the company mandates, the company standards, the company protocols, the security protocols; all those different things come into play. But it's also interactive, and getting feedback to see how well they absorb it. So as Stephanie mentioned, the redundancy aspect of going through and asking the same question: you're looking at, can this person, or this AI agent, take this information and do what you need to do and give you what you need? So in essence, over a period of time, like the most important employees, you have a probationary period, you're developing trust, you're trying to understand what they can do. You're helping them align the vision of the business to what they need to do. But also you want to make sure they're not going to be a security risk. So you're helping them develop what that resilience mindset is going to be, to help them not only work with the organization, but also help keep the organization secure. So again, it's a process, and I think it's going to be similar, but with different guardrails, as the training procedure and process can be a little bit different for an AI agent versus a human. And, you know, some of the context that Stephanie mentioned also, I think those things come into play, because those are things that need to be learned. [00:14:57] Speaker B: Derek, I just typed like a fiend, because you had some really, really good questions about developing trust with your new teammate and how you're preparing them, how they're learning about your job, and what are the guardrails, policies, mission, regulations they have to know about. [00:15:18] Speaker C: Yes.
[00:15:19] Speaker B: And then ultimately, how are you judging their performance? That, I think, is really the question we have to train our employees on: okay, it is a new tool, it is a buddy, it is working with you, but when is it ready to actually give you worthy advice for you to go listen to? What do you think of this, Liz? [00:15:40] Speaker D: First of all, I just want to say this is an amazingly exciting way to think about AI agents. I'm very excited about this. Typically when I come on the call, I'm talking about business value and governance. That's typically my normal perspective: trying to make sure that we're thinking about how we're getting the value out, and how we're governing things in a way that gets us to the top line or the bottom line. But here I'm really hearing something that's even more valuable, which is how to integrate the impact of organizational change in a way that actually morphs the culture of a company, that actually integrates the culture of a company to maximize the value of the AI agent, and actually incorporates that as part of the infrastructure of the culture itself, which is just amazingly exciting. I'm just floored by this. It's sort of taking organizational change to the next level. So I'm pretty excited about it. [00:16:51] Speaker B: Thank you, Liz. Let's keep going. Joanne, there's a question here that I hope you'll double comment on. So first, you know, we spoke about this last week: the idea of bringing an agent through a learning phase from apprentice to journeyperson. You discussed that last week at our coffee hour, so I'm sure you want to comment on that. But I want you to answer Keith Plemons's question here: isn't AI just a component of digital transformation, and shouldn't it be treated as such within larger systems of people, processes and technologies? I have a feeling you have something to say about that too.
[00:17:33] Speaker A: Yeah. [00:17:34] Speaker E: Which would you like me to start with first? First of all, it is a component, but I look at agentic AI as, you know, something that goes through a cycle: it senses its environment, detects what it needs to do based on its programming, acts, and then learns. And it's a lather, rinse, repeat type process. Now, there are other elements involved in that as well. It is a component, but you can look at it in one part as an HMI, a human-machine interface. And that's where some of the humanity, and my newly coined phrase "humanify agents," comes from. And yes, they do need trust. They do need to be ingratiated with the workforce, but they are also a force multiplier. And the agent can be learning from the individuals just as much as it's learning from what it was trained upon. We view human in the loop as being integral to the training of the agent, because we're trying to capture in our systems the knowledge of the workforce, the expertise; stuff that will not necessarily be captured or curated in any other way, but that seeks to teach the agent more about not only the business, but the operations of the business, and making it better. So in one sense, you know, to Stephanie's point, it is being humanified to be more easily integrated into the operations, and that would affect change management and organizational structure. But from our perspective, it's also got a purpose. It's got a purpose in how it runs, how it operates, the information it gives back. And this is also one of the differences between large language models and small language models. Because if you're running it against the expertise needed to satisfy a requirement of an individual, two other points are needed. One is that the requirements come from different perspectives, meaning different roles in the organization. The data, and the answer, may be the same; it's the context that changes around the data.
And that's why, to the listener's comment, I would say yes, it's absolutely a part of digital transformation. On the other side of that, you know, we have a couple of different agents that are process driven and model based around the operations of the organization. We've given them names to humanify them, just so that we have something to refer to, but also as the way to ingratiate the user to share their expertise. So that's another part of the socialization of AI, I guess. We view the constructs as part of our engine, and whether we give it the name Nova or we give it the name Celeste or anything else that is out there, they are constructs and they are serving a business-value-driven purpose. [00:20:48] Speaker B: Well, Joanne, I really like that: AI agents need a purpose. I think they need to be trained on a specific role. Just to give everybody an example: I think innovation first, I think customer experience first, I think revenue first. I don't think security first. I would love a buddy who would sit next to me and say, Isaac, if you're going to work with client X or with company Y, and you're promoting these ideas around innovation, around transformation, around growth, here are some of the security concerns you should be thinking about that should be at the forefront of what you're recommending to your clients. I would love something like that, because I'm not a security expert. Derek could have the opposite of it. Right? Derek is a security expert, and I'm sure he's giving me a thumbs up. He would love a transformational buddy: here's how you can apply best practices in security that are potentially going to drive growth, particularly around your brand. [00:21:53] Speaker E: Well, Isaac, let me just jump in. I'm sorry, I don't mean to take other people's time, but in our case, we thought long and hard about the security aspects. And so we built the agents with both role-based authentication and also attribute-based authentication.
Where those fit in the system is kind of part of the secret sauce. But let's just say that we took that into account, and we tailored it to the exposure levels that each individual group in the corporation might have. So the C-suite may have very different parameters than a shop floor operator in a manufacturing facility, because they wouldn't necessarily be exposed to some of the information in one sense, and they may be overly exposed to some of the information in another. So there are ways to mitigate that. And I'm curious as to, you know, in Stephanie's case with the buddies, how they've managed to take security into account as well, particularly when it comes to voice of the customer or the individual. [00:23:03] Speaker B: Awesome. Stephanie, I have two questions teed up for you. We're going to go to Joe next. And Keith: is AI just a component of digital transformation? We are going to cover this. I love this question. My answer is it's not, and it is. I still think of this as, you know, overlapping circles, concentric circles here. But my issue with AI right now is it's not generating revenue for us. And the MIT report I alluded to earlier said that only 5% of companies have found ways to use AI to generate revenue. And what I've always said is, if you're not using it to generate revenue, it's just going to impact your cost factor. And I think we need to find some better value equations around AI. AI as a buddy is one of those areas, which is why I love discussing this. Joe, you're up. Welcome from the beach. You have a clear mind. What are we speaking about today, Joe, around training AI agents the way we onboard and develop employees? [00:24:06] Speaker F: Well, I just wondered, and I'll tee up another question for Stephanie. When I interact with agents, there's a marked difference between interacting with a customer service chatbot that's sort of cut and dried. And to Stephanie's point, it's infinitely patient.
I can ask it the same question over and over again, and I probably get frustrated because it doesn't answer my question. It gives me sort of the pat answer. But then I also interact with things like Alexa and Siri and other more, to use Joanne's term, humanified interfaces, and I find it easier to collaborate or to interact with those types of agents. And so my question is, when the agents that we put in the workplace lack personality, when they don't understand irony or humor, they don't have great memories for things that you told them that aren't necessarily business related, they don't understand nuanced context, when these issues arise, how do we best prepare our employees? How do we set those expectations that, you know, you're not working with Captain Kirk, you're working with Mr. Spock? [00:25:27] Speaker B: Wow. I'm going to go to John next, and then, Liz, I'm going to ask you to hold off. We have too many questions lined up for Stephanie, so I'm going to go back to Stephanie after John. Hi John, welcome to the floor. [00:25:41] Speaker G: Thank you for having me on here. When I kind of hear about this stuff and the change management related to this and getting somebody ramped up, I really think that there's a lot of parallels with working with, say, offshore teams or teams located in different countries. A lot of times people are really hesitant to work with people in different countries. But you ask, well, do you want to be doing that work in the middle of the night? And they're like, no, no, I don't want to be doing that work in the middle of the night. Do you want this other team to be doing this work in the middle of the night? They're like, oh yeah, I'd love for somebody else to do that. And then when it comes to working with these team members, if you don't take them through a structured process to onboard them and continue to work with them and include them as part of the team, like, it'll never work out.
And one of the things I was just really reflecting on is people are getting guidance from AI and machine learning on a daily basis. Like, every time somebody gets in a car, they say where they want to go, and basically a machine learning model will figure out the optimum route. And when I look at people a lot of times in the workplace, they're so overloaded, and there are starting to be products right now that will look at people's calendars and all their action items and all their tasks and, like, automatically start coming up with priorities for things. And so I think there's so much help that these things can provide to people, but they really just have to think about how they're going to interact with these things and whether they're actually going to take the help. [00:27:09] Speaker A: Okay, so Isaac, do you want me to start with the most recent question and go back to the first one, or do you want me to start with the first one and come down to the most recent? [00:27:20] Speaker B: You go in an order that tells a great story. How's that, Stephanie? [00:27:25] Speaker A: Okay, so let's start with the offshore team integration. It's a great example. And I remember working as a consultant, and whenever we started a new project, we would do this thing called forming, storming, norming and performing. It was a systematic way to get us to work together and provide the highest value for our customers very quickly. We were coming from a number of different offices. We didn't know each other. And sometimes our office norms were different depending on where in the country it was. And so you take that and you bring that forward, and you do the same thing with AI. So just as you'd give a new person volumes of data when onboarding them, you give the AI volumes of data. The good thing about the AI is you give it volumes of data, and it will be able to process it and consume it and do something with it.
I mean, how many of us have started at a company and they've given us binders and binders of information? I know when I started at a company, they gave me three huge binders chock full of information. I have yet to go through those binders, and I left that company about 10 years ago. And I don't believe that I was any more special than anybody else. So the AI has all that information. The second thing is that when you have a new employee, you don't just take that new employee and throw them on the floor and say, knock yourself out. You take that new employee and you usually have somebody that they're shadowing, normally the person that's doing a job that's like theirs or adjacent to theirs. And that's what you do with the AI. You find a subject matter expert, you sit with them, you configure the AI the way they first process their work, and then when they do that, they use it, they give you feedback. We use an Agile methodology. It's important to use Agile, because with any other methodology, it's going to be a mess. And so what you do is you sit there, you configure the AI, you have them use it for a week, so you're doing weekly sprints. You come back in, you look at it, you tweak the AI. You do that maybe two or three times, and now you've got a pretty solid AI agent. And then from that, that person uses the AI agent, and then you expand it to maybe their team, just like how you'd have a team meeting and introduce a new employee at the team meeting. And then maybe two months in you have an all-staff, and you introduce the new employee to the all-staff. Same process. And by doing that, you ensure that the AI is customized for your organization. It's built, tailored for you. And we've found that when we follow that process, we have great results. And when we don't follow that process, it's just chaos and confusion, and it's just a mess for everybody.
And so that's where we are, like, adamant now: if you don't want to go through our process, we can't work with you, because you're wasting your money and you're frustrating us and you're messing up our metrics. We want to be able to say 100% of our customers love us. So that's part of the how do you integrate. And the same thing if you're integrating an offshore team: you don't just hire the person and say, knock yourself out. There's a process of bringing the two teams together. Same thing. Somebody asked about how to use customer service agents. And I will say that I'm using a bunch of people's AI agents, and I stop and I think, these agents suck. And that's being generous. And why do they suck? And then I go and I use my agent, and I'm like, why is my agent better than their agent? What am I doing different? And I think that the difference is, and I'm not 100% sure, so don't quote me on this, but I believe the reason why our agents are different is that we're giving our agents the same information the way you would give a human the information. Instead of having these highly scripted things that you give to call center people, where it's like, if they say this, say this, if you say this, you say this, where the people don't know why they're saying what they're saying, we're saying, just give the AI the entire manual. It's a 2,000-page manual. No worries. The AI will figure it out. Otherwise, you've written this 2,000-page manual, and now you have to create these job aids that you're trying to contort into, like, one-pagers for the call center person to answer, or the chatbot to answer, and they're reusing those. But if you give them the manual, the AI goes through it and responds like a human. And that's what we advocate. And when we started, we started with security first, and we very quickly realized that a large language model was not the way to go. I think it's part of my inertia; I try to find the path of least resistance.
And so I thought a large language model is like going to the library, and it's huge. I went to USC for undergrad, and I remember going into the Doheny Library the first time, and I was like, whoa. When I finally got the door open, because it's a huge, beautiful, heavy door, I didn't know what to do. And then I was like, okay, I'm here, I want to do world history. So the librarian pointed me in the direction of world history and I got there, and that was a little bit more manageable, but I still didn't know what I was doing. So I went back to the librarian and said, I am studying Latin American affairs; where in world history can I get that information? So the librarian came over and gave me a few books that she said I could get started with. She asked me which professor's class I was in, and said, oh, these are the books that he normally recommends. And then I could go through. And that's how AI works. But when you're doing something like that, you need to make sure that the small language model is safe, that it's secure, that it's accurate, that it's bias free. And so we have our customers' small language models on an uber-secure, military-grade RAG system. And the way we've designed the small language model, it optimizes and it helps you. So just like the librarian goes, here are the three books, we say, by the way, here's a document that has this information. It's 200 pages; here are the four pages that you're going to need. It tells you the page number, and those kinds of things really partner with people and help them feel good, like, oh my God, I got the information, I don't have to go through 200-something pages. Here are the four pages. I read those, I feel good, I get more context. And it's all about the why. A lot of times we don't tell people why they should do something.
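The librarian analogy Stephanie describes, retrieval that hands you "the four pages you're going to need" rather than the whole manual, can be sketched in a few lines. This is a toy illustration, not Avatar Buddy's actual system: it scores pages by simple keyword overlap, where a real RAG pipeline would use embeddings, a language model, and access controls, and the function name and sample manual here are entirely hypothetical.

```python
import re

def top_pages(question, manual, k=4):
    """Return the k page numbers whose text best matches the question."""
    def words(s):
        # Lowercase word tokens, ignoring punctuation
        return set(re.findall(r"[a-z0-9']+", s.lower()))

    q_words = words(question)
    scored = [(len(q_words & words(text)), page) for page, text in manual.items()]
    # Highest-overlap pages first; ties broken by page order
    scored.sort(key=lambda s: (-s[0], s[1]))
    return [page for _, page in scored[:k]]

# A hypothetical four-page "manual"
manual = {
    1: "Welcome to the employee handbook and company overview",
    2: "Vacation policy: requesting time off and approval workflow",
    3: "Expense reports: submitting receipts for reimbursement",
    4: "Security protocols: passwords, badges, and data handling",
}

print(top_pages("How do I request time off for vacation?", manual, k=2))  # → [2, 3]
```

The point of the sketch is the shape of the interaction: the employee asks a question in their own words and gets back a short, cited list of pages instead of the full document, which is what makes the "I read those four pages, I feel good" experience possible.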
I found very early on in my career, when I was a new manager, I would sit and I would explain to people why I wanted something done, not how to get it done. Why. And I would sit there and I would co-dream with them, and they would go off and always bring back something better than what I wanted. And again, I hire interns, and people are like, I can't believe you hire interns and you're putting your entire company at risk with interns. And I'm like, we run on interns, because what we do is we sit and we co-dream with the interns, and we take our advisors' knowledge and wisdom and years of experience and we give it to the AI, and then the interns use the AI. Plus the fact that we're telling them why we want them to do the assignment, not necessarily how to do the assignment. Consistently rock-star performance. These are from 19- and 20-year-olds. Sometimes we had a 15-year-old in the mix. I'm not going to say all 15-year-olds are going to do that. But the 19-, 20-, 21-, 22-year-olds? Rock stars. Because we are explaining the why. So when you give the AI the entire context, and you put it in a secure place, and you're telling it the why, it can figure out the how. And it can figure out the how based on the question that your employees asked it. So all of that comes together in a nice, beautiful human-digital handshake, as we call it, and it works. So I think I answered a whole bunch of questions about taking your organization through this, about how to do security, how to bring it on for customer service, how to get better interaction. And yeah, this talks about change management, right? So we have to do change management. Change management is at the core of what we're doing, but not the way we normally do it. Normally we do change management from the system perspective: we're buying this new software, okay, let me tell you how your job's going to change. But we don't stop and say, do you have to change how you think about your job?
And with AI we're saying, wait, you need to stop and think about how you do your job. You need to use new lenses. Let's talk about these new lenses that you're going to use. And as you're using these new lenses, make sure you process things in a different way, and if you process things in a different way, then different things will happen. And I like to use this example because it speaks to the power of AI in the hands of marginalized, low-resource, and less-educated people, and how powerful it is in their hands if you just help them view the world slightly differently. Not teach them how to use the AI, but how to view the world differently. So I'll tell one little story and then I'll pause. I did a TEDx about this. I met this woman, and she wanted me to help her with some stuff for her organization. And I said to her, no, I can't help you, for two reasons. One, I don't do homework anymore if I can avoid it. And two, I am so overwhelmed with my own company, I don't have time to help you. Why don't you go use ChatGPT? And she was like, no, no, no. I said, no, let me tell you why ChatGPT is great. And she was like, okay, but I don't know how to use it. And I said, just like how you're talking to me, go talk to ChatGPT. So she goes. Four months later, she calls me uber excited, telling me about all the stuff that she's doing. But one thing she told me that resonates with me is that she is using AI to help the people in her community have a voice, to help them see their own awesomeness. What she does is put all their work history into the AI and say, write me a resume. And the resume comes out. And she said, this one woman just started crying. She said, I never thought about myself this way. And that's the kind of change management you have to do.
Because if you just look at this person and say, you're a low-end worker and now there's AI, so there's nothing you can do, then we're not going to get where we need to be. But if you say, you used to be a low-end worker, but you have lots of talents and skills and abilities, and we'll give you a team of AI agents, and because we've given you a team of AI agents, you're going to now be able to do XYZ. And so this woman is making people feel empowered in a community that's often overlooked in Miami. And that's the kind of change management we need. We have to double and triple down on our belief that everybody is inherently good and wants to do good, and if given the appropriate opportunity and the appropriate tools, will be able to do that. And that's the change management, and that's why it feels different. That's what we're doing at Avatar Buddy, and that's what we advocate for. This is why I am always grateful and honored and humbled when somebody lets me come on their podcast, because it's one more platform for me to push out, and maybe one person out there will hear me and take up the mantle and say, let's use AI to take people to the next level. So I'm going to pause there and let others speak, and then I'll come back. There are some human-in-the-loop conversations that I want to address as well. [00:41:12] Speaker B: Thank you, Stephanie. Of all the things you said, the one that really resonated with me is find your awesomeness. Imagine if you told that to every employee and made them think around that, and to have a conversation with a language model rather than asking it to do things for you. I think we sort of get there as we start using the tools, but those two pieces of advice can get a lot more people leapfrogging to that. I have Liz and Derek raising their hands. Liz, just hold on a second. Stephanie, I just want to take a second here.
Tell us, in 30 seconds, a little bit about Avatar Buddy. [00:41:53] Speaker A: Great, thank you, Isaac. Avatar Buddy is a managed-AI-as-a-service company. We create function-specific AI agents and digital twins that leverage a small language model to help amplify employees' awesomeness and improve operational excellence. And that is done with the support of our AI advisory team. So companies that want to maximize their investment in AI would work with an organization like Avatar Buddy to ensure that they're able to increase their profitability without having to go down the AI rabbit hole. [00:42:37] Speaker B: Thank you, Stephanie. And what's the URL for Avatar Buddy? [00:42:40] Speaker A: Oh yes, because everyone needs a buddy. Our website is AvatarBuddy.ai. You can experience a buddy at our website and also schedule some time with us. [00:42:54] Speaker B: Thank you. And folks, you're listening today to Coffee with Digital Trailblazers. We meet every week here at 11:00am Eastern Time to speak about topics facing digital transformation leaders. Today we're speaking about AI as a teammate: crafting purposeful digital workplaces. Thank you, Stephanie, founder of Avatar Buddy, for being our special guest. We will not have a session next Friday, going into Labor Day; we will be back in September. I have not announced our sessions for September yet, but I've gotten some pretty good ideas here that you can see in the bottom right-hand corner. So thank you, Keith, and thank you, Srinivas. And I think we're going to revisit change management in AI as another topic. It's too rich a topic for us to cover in just part of a session. Please visit starcio.com, Coffee Next Event, and that will redirect you to the upcoming events. Liz, we've got a lot of things here we've been talking about, and vertical integration is one I want to get to.
And then this last question about restoring dignity to work by supporting self-agency and employee learning. Where do you want to go with this, Liz? [00:44:13] Speaker D: Oh my God, there's so much here. This is exactly what I was hearing at the beginning, leaning into your awesomeness, and why I got so excited. I know that we spent a lot of time talking about change management around how to integrate the agents and bring them on board, almost as if you're bringing in a human. But how do you approach the employees? Because what I'm hearing is, and I love Isaac's example of, well, I know that I don't think security first, so I could use a security buddy. And in the what's-in-it-for-them sense: how do I bring in an AI buddy that is actually going to be non-confrontational, non-threatening, and supportive? Is that really your approach, have you been successful in that way, and where have you run into some difficulties with your clients? [00:45:15] Speaker B: Liz, I think that's a fantastic question, because there's a little bit of a reality in there. If I asked you that question, Liz, and said, okay, what are you advising a program manager or project manager who's got 20 years of experience, has done Agile for the last five years, and now you're going to bring them an AI buddy and tell them to find their awesomeness, when the very first problem the AI is going to face is that it doesn't have integrated, cleansed information to work with? That's the same struggle the program manager faces in the PMO and the value management office space. And I know Derek wants to speak next. It's the same problem for a security officer, right? You take somebody who's been managing a SOC, a security operations center, for the last 10 years, bringing all this data together, trying to find issues faster or find root causes faster, and now the AI has the same issue of finding and getting access to all the information and understanding context.
Stephanie, how would you address this issue of bringing in the relevant information so that your buddy can learn faster? [00:46:32] Speaker A: So first of all, I would say that AI, to date, is not a one-system solution, and I don't believe that we'll ever get there. I mean, there's no one person that can do everything in an organization. There are people that come close, but no one person that can do everything. So from that point of view, you're going to need different AI agents, different AI solutions, to be able to get your job done. And so, the way we are approaching it: let's take the security analyst. This security analyst has these uber-powerful security tools, and now those tools come with AI, and they're spitting out so much data that the person is overwhelmed and just doesn't know what to do with it. What our AI agent can do is, one, help them center themselves. It's okay to be overwhelmed. That's okay. Let's break this down. Let me tell you how to use the system, how to help those tools' AI produce something in a way that's appropriate, right? So what we're doing is putting a buddy next to you, helping you do your job better, making you feel better, and helping you navigate through. A couple of examples, right? I tried to add a person to Planner yesterday, and the person is outside of my environment. And if you try to marry somebody external with somebody internal in a Microsoft product, that's like a two- or three-hour exercise. But I asked my AI agent, I think it was Expert Buddy, to help me, and in about 20 minutes I got it done. And I don't know how to write the prompts for Gemini to create photos for me, but I asked my agent, Marketing Buddy, how to create the prompts, and then I gave Gemini the prompts.
And then Gemini produces the same beautiful picture that I have in my head, which I wasn't able to get when I prompted Gemini directly. So that's how the AI agents we are configuring work. It also reinforces your culture, because you tell it what your culture should be, and it always reinforces and delivers the information from a cultural standpoint. And here's what ends up happening: when most people struggle with their job, it's not a pure skill issue, and it's not a pure will issue. It's a mashup of those things, and trying to deconstruct that is so hard. What we often do instead is say, oh, Joanne doesn't know how to do her job. Let me just train Joanne again on it. And by the 10th time I've trained Joanne on how to do her job, I'm frustrated, and Joanne's irritated, because Joanne's like, why keep training me on something I already know how to do? Because it's not a skill issue. It was a will issue, but I did not take the time to unpack the will part of it. So our agents are configured, every single one of them, to provide both will and skill support in a way that improves people's self-esteem. They learn more, they retain it longer, and their self-esteem improves. That's the study that we did: we did research last summer, and that was the result a researcher came back with on what's happening with Avatar Buddy's solutions. So that's how you configure your tools to make sure that they're a role model. And I will say to you, we just configured a digital twin with somebody's personality, and as a social worker. Now, in real life, that person is not a social worker, but her digital twin is. Then we did a demo: we had her digital twin answer a question, and another person on her team's digital twin answer the same question. The essence of the answers was the same, but the way they answered was completely different. So does the AI have personality?
I would argue yes; you can go to our YouTube channel and look at our digital twin, and you will see the demo of what we did. So if we do stuff like that, then you have this warm environment where people aren't afraid of the AI. The AI is being responsive to them. And because we're sitting with the people to configure the AI to begin with, they know what's going into the AI, and they feel more comfortable, because if you're building something, you're more comfortable and confident using it than if somebody just gave it to you as a black box. So I'll pause there, and then we can continue. [00:52:24] Speaker B: I'm just going to say this, Stephanie, and hopefully it will not draw a response, but I think you just killed AGI. AI is not a one-system solution; you're going to need multiple AIs. It sounds to me like you think AGI is not possible. We may need another conversation around AGI to help you answer that one, because I need to get Derek and Joanne back in, and we have seven minutes left. Derek, go ahead. [00:52:54] Speaker C: Great, great dialogue, and the examples Stephanie gave were actually incredible. The fact that you used AI to help you with Gemini, I think, is great, because it helps make you more efficient in what you're doing. But I think also, looking at the vertical integration piece of it, we really need to look at using vertical integration as a cyber resilience strategy to work with the data, because you're only going to be as strong as your weakest layer. And I say layer because the AI model will be able to go across all your different units within your business to understand what areas are going to be the weakest point, or where you're going to have the greatest exposure. And it can do it much faster. It can be detecting that threat intelligence and giving you the response you need to move forward.
But overall, it's going to help you align those business goals with your risk posture to move things forward. And on the dignity aspect, for those working in IT or SOC-type services, as you mentioned earlier, Isaac: automating those mundane tasks to help free up employees to be more creative is going to be key. So you're going to be looking at personalizing for those particular areas where AI can have the most impact. The other thing is really helping employees upskill. As in the example Stephanie gave earlier, people figure out what they can empower themselves to do: to be more efficient, to be more powerful, to get better answers and make better decisions. These are all the things that are going to come about to help employees with the learning process and help them be more self-sufficient, so they don't have to rely on somebody else. So the AI buddy that Stephanie's mentioning here, I think, is absolutely awesome. [00:54:27] Speaker A: Thank you. [00:54:28] Speaker B: Let me bring in Joanne. Joanne, you're working in a space where I actually think it's a little bit easier. Finding your awesomeness in manufacturing, construction, and the industrials, and saying AI is your buddy but you can't be displaced because you live and work in a physical world, sounds like an easier pill to swallow, and it may get more traction in industries that have an aging and incredibly knowledgeable workforce. I'm just wondering if you can comment on that. [00:55:03] Speaker E: Yeah, I think in certain parts of manufacturing, yes, it might be easier, but overall I would say it's equally as difficult.
One of the biggest challenges we've come across with people onboarding and using the agents is making sure that the individual's bias is never transferred into the agent. That requires a lot of skill, and using not only different small language models but the larger ones as well. And I would say that while some of these agents may be very good for productivity, you can't necessarily replace the human-in-the-loop knowledge, the SME expertise. So I think it's a mixed bag. I can see the value of the productivity agents that Stephanie is creating from a functional point of view. As for aggregating those together to get overall business value, in terms of top line and bottom line, I'm not sure I quite get it yet, but this is definitely advantageous for other parts of the organization. I can see using it where there's more human interaction, in a B2C kind of productivity, but maybe later on in a B2B way as well. [00:56:36] Speaker B: Thank you, Joanne. I've got Stephanie raising her hand. What's the last word for today's session on crafting purposeful digital workplaces? Folks, thank you for joining. Stephanie, wrap up for us. [00:56:48] Speaker A: So I totally agree with Joanne, and we've actually just now started stringing AI agents together. It's still manual, and we're still human-in-the-loop, a digital handshake. But I'll just leave you with this example. My marketing team said, I need you to write four blog posts for me. And I said, I don't write blog posts. And they're like, you have to write the blog posts, because you're the only one that knows this stuff. So then I said, okay, fine, I'll have Researcher Buddy do some research. And then, based on that research, I had Marketing Buddy take that and write my four blog posts for me.
And because I know Marketing Buddy loves hyperbole, I asked Editor Buddy to check what Marketing Buddy did, make sure it was okay, and do some edits. Once that was finished, I asked Social Media Buddy to write posts for LinkedIn, Instagram, and TikTok so I could introduce my articles. All of that took about 15 minutes, and there were checks and balances. I gave the resulting product to our communications officer, and for the first time I got a "well done" as opposed to "no, let me rewrite this for you." And I think that is because I added an Editor Buddy that was configured to edit the way she edits. So yes, there's lots of productivity, but it's also taking people to a higher order of thinking. That's our goal: to have people be the best version of themselves because they're using AI, because they tap into their creativity and their awesomeness. And thank you again for letting me be on this podcast. [00:58:34] Speaker B: Thank you, Stephanie. We wish you great success and luck with Avatar Buddy. I just want to echo something you said that everybody here can take away, which is that you mentioned a social media buddy, an editor buddy. This is essentially how I see AI playing out with agents: giving them a role like that, right? We know that we as humans get conflicted when we think about a problem or an idea from too many perspectives. We get lost in our thoughts. Should I be more safety conscious? Should I be more innovation conscious? Should I go slower or faster? Think of two or three agents who are working with you, playing out different roles, and ask, how would you solve this problem from the perspective of a security officer, an innovation officer, a data governance officer?
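Stephanie's blog-post workflow is a sequential agent pipeline: each role-specific agent transforms the previous agent's output, and a human reviews the final bundle. Here is a toy sketch of that pattern with stub functions standing in for the role-prompted model calls. The buddy names are Avatar Buddy products, but nothing here reflects their actual implementation; every function body is a placeholder:

```python
# Toy sketch of a sequential "buddy" pipeline: research -> draft -> edit -> social.
# Each stage is a stub; a real system would call a role-prompted model here.

def researcher(topic: str) -> str:
    return f"Notes on {topic}: key points A, B, C."

def marketer(notes: str) -> str:
    # Marketing Buddy tends toward hyperbole, so the draft is flagged.
    return f"DRAFT POST (with hyperbole!!) based on: {notes}"

def editor(draft: str) -> str:
    # Editor Buddy tones down the hyperbole before anything ships.
    return draft.replace(" (with hyperbole!!)", "")

def social(post: str) -> dict:
    return {ch: f"[{ch}] teaser for: {post}"
            for ch in ("LinkedIn", "Instagram", "TikTok")}

def pipeline(topic: str) -> dict:
    """Chain the role agents; a human reviews the bundle (human in the loop)."""
    final = editor(marketer(researcher(topic)))
    return {"post": final, "teasers": social(final)}

result = pipeline("AI as a teammate")
print(result["post"])
# -> DRAFT POST based on: Notes on AI as a teammate: key points A, B, C.
```

The design point carried over from the conversation: each stage has a narrow role, a later stage checks an earlier one's known weakness, and the output still goes to a person for the final sign-off.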
They're going to give you very different answers from those perspectives. Then think about the agentic world that says, okay, bring this all together for me and provide some trade-offs for how I should approach my particular problem or solution, given my objectives. It's going to give you some rounded-out answers. I loved this conversation. We've got four areas in the bottom right-hand square that we will cover. I want to thank Irina for the testimonial today; no AI needed to see how awesome all the participants are. Thank you to all my guests. And folks, I do not have September sessions lined up yet. That's deliberate; I was hoping to get some out of today, and we've got four to work with. So do visit starcio.com, Coffee Next Event, which will redirect to our next event. Also visit drive.starcio.com, Coffee-with-Digital Trailblazers. Sorry for the long URL; I'll get that up there sooner. You can watch or listen there to any of our previous episodes that are open to the public, and if you want access to all of them, join the StarCIO community; you get access to all of the recordings as a community member. Everybody, have a safe August and a safe weekend. If you're in a hurricane's way, stay safe. We'll see you back here in two weeks with our next topic for Coffee with Digital Trailblazers. Everybody have a great weekend.
