Kelly Fitzpatrick: Hello and welcome. This is Kelly Fitzpatrick with RedMonk, here with another RedMonk Conversation on AI. Today we'll be talking generative AI and IT automation. With me today is Kaete Piccirilli from Red Hat. Kaete, I know that you are keynote stage famous for the Ansible community, but for folks in our audience who are not already familiar with the Ansible universe or, dare I say, galaxy, can you tell us a little bit about who you are and what you do?

Kaete Piccirilli: Yeah, thanks so much, Kelly, for having me. Really excited to be here. Little bit about me: I have been with Red Hat for over seven years, and I have had the luck, I call it -- but I would say the fun, joy, and pleasure -- of being a part of the Ansible team the whole time. I currently lead product marketing for the Ansible business unit, and our team is focused on helping automators and developers and others get connected to using Ansible automation to solve their IT challenges. I always say that we just want to help other people get just as excited about Ansible as we are.

Kelly: Thank you for joining me today. And I have questions for you -- I have so many questions. Before we jump into those, I think it might be useful to start with a little bit of context on Ansible's recent relationship to AI. I have been following Ansible for quite a while, ever since I started as an analyst five years ago now, which means that I was at the in-person AnsibleFest in 2022, at which there was an announcement around something called Project Wisdom -- more on that later. But the very next month, OpenAI makes an announcement about ChatGPT. And since then, AI and generative AI have been on just everybody's mind, not just in the tech industry, but in the general population as well. So now we talk about generative AI and large language models, etc.
So given that, Red Hat and Ansible were clearly thinking about this technology even before LLMs became common parlance. What can you tell us about Red Hat Ansible's approach to AI, which so far I have personally found refreshingly domain specific?

Kaete: Yeah, it has been an amazing journey. We started it probably close to two years ago now, and we were collaborating with IBM Research to see what was possible. We've got a great partner in IBM, and we went in with a few things in mind. First, how do we bring the power of AI to the Ansible code experience? Now, that's kind of generic in nature, but we also found as we talked to customers and the community that there was a bit of a challenge with automation skills -- with hiring people to be a part of the team and being able to accelerate automation as much as people wanted to, right? So we were thinking, okay, how do we make it more accessible for more IT professionals to be able to utilize automation at scale? Additionally, we wanted to make our Ansible creators, our automation developers and creators, more productive, more efficient, maybe error-free -- able to accelerate the path to building automation, and maybe even do some of the automation that they have a little bit better. Last but not least, we really wanted a purpose-built model -- that's where IBM comes in -- that was more efficient, more accurate, and very specific to the Ansible domain. One of the cool things about Ansible YAML is that it's pretty structured in nature, so it was actually a great place to start when thinking about how you train on a language. The outcome, what we have here today, is Red Hat Ansible Lightspeed with the integration with IBM watsonx Code Assistant.
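That structured nature is easy to see in even a tiny playbook: every task follows the same name/module/arguments shape. A minimal, hypothetical example (not from the conversation) of what that structure looks like:

```yaml
---
- name: Example of Ansible's structured YAML
  hosts: all
  tasks:
    - name: Create an application directory   # human-readable intent
      ansible.builtin.file:                   # module to invoke
        path: /opt/example-app                # module arguments
        state: directory
        mode: "0755"
```

Because intent (the task name) and implementation (the module and its arguments) sit in a predictable layout, the format is a comparatively tractable target for model training.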
Kelly: Yeah, and I feel like the skills shortage conversation is one that I know you and I have been having, and that we at RedMonk have been talking about all over the place. I think it may have been overshadowed by generative AI coming in, but they're definitely related in a lot of interesting ways. And going back to how prevalent generative AI and AI in general have been in popular conversation and imagination: we have seen what we call AI washing everywhere, where people are just kind of like, okay, now we have AI, even if it's not something that has really been baked into their offerings in any way. Where do you see Red Hat and Ansible's idea of automation fitting into this world of prevalent AI washing?

Kaete: Yeah, it's a great question and a great point. We've been really thinking about it -- kind of going back to how do we fill some of those gaps, right? And to make it very usable every single day. We know that teams are just under more pressure to do different things, and we wanna make sure that code is able to scale in different ways. So it's not just for one area of the business; we wanted to make sure that it was possible to go into every area of IT organizations, right? And in order to do that, we had to use what we had in front of us, right, which is Galaxy and our contributors and the community, and to be able to meet those different teams that have come together to use automation at scale. Ansible Lightspeed is very accessible for anyone, and teams can trust the data that comes out of it. As we thought about this whole process, we thought about collaboration across those different teams. How do you have choice in how you use the automation, right? And then also, can you trust what has been put in front of you? So thinking about that, we are not here to just add AI to everything we do.
We wanna make sure that the people who are using automation today, and maybe the people who will be using automation tomorrow but aren't doing it just yet, are able to do it in a really, really meaningful way. And so that's what we have been really thinking about as we add AI to the experience for Ansible.

Kelly: And I do like that longer term view on AI, as opposed to "what can I do with it right now?" What are some of the longer term usages and patterns and use cases and ramifications, even, of something like AI?

Kaete: Exactly. Yeah.

Kelly: Yeah, and that leads me to another question coming out of what you were just talking about: data. A lot of recent buzz around generative AI has been larger concerns around who has access to data, what data is getting used to train models, what is happening with intellectual property, how all of this data is getting used. What kind of concerns are you seeing from customers and potential customers, and what is Red Hat doing to address these concerns?

Kaete: Super great question. So we talked to a lot of people before we even jumped into some of this, right? We talked to community contributors, we talked to folks like you, we talked to folks inside and outside of Red Hat. And among the questions that we got the most -- which I would love to hear from you as well, if this is stuff that you're seeing -- was: where is the generated code coming from? So if I go ahead and I put in a natural language prompt and I get a result back, why is that the answer? Why is that the result? That was one thing we really tried to tackle head on with some of the content source matching that we have in the experience. Another question was, how can teams trust the data, and trust that their data stays private? That's also been a tenet in what we've been thinking about and how we built it. And then, what exactly are the models being trained on, right?
Where does that data come from? What were you training on? And if some of that data is maybe mine and I don't want it there, can I make sure it's not there? So those were some of the things and concerns that we saw. Is that similar to what you hear from your clients?

Kelly: You know, absolutely. And of course, that applies to so many other different types of offerings and ways that generative AI is being used. And there's a bunch of reasons for it. It has to do not just with the integrity of data and keeping data private, but also the quality of answers that you get based on the data that has been put into it. Because one thing we've seen is that generative AI can be notoriously confident in its wrong answers.

Kaete: That's very true.

Kelly: Yeah, it's fascinating. You know, I used to teach university writing, and it's kind of like having that sense of wrongness coming with that absolute confidence that it's right. I'm like, I think I've seen that from a few students, but it's not something you often see. So it can be a bit unsettling. I think there are so many concerns around that. And then even getting into grounding AI models -- in terms of, you know, what is the information that this stuff is being trained on, and are there any chains of attribution for where it's coming from? So there are so many things that the folks we talk to have concerns about when it comes to the data involved with models.

Kaete: And we've thought about all of that as we have built Ansible Lightspeed. A lot of the data has been trained on the content collections and the things that exist in Ansible Galaxy, and if any contributor does not want their contributions to be a part of it, we will pull that out of the model, right? And we actually show attribution -- we call it content source matching, but some folks might see it as attribution.
Taking a look at the result, you can actually click through and you'll see the top three best matches where those results came from, and you can click in and see the source author and the licensing behind even the code that is showing up. And that's where we think about collaboration, right? Like I was saying, we're keeping collaboration in mind and how more people can get involved: if you get a result and you maybe know that individual, you can reach out and ask questions and get to know them, or participate in contributing in some way. And so we've really built with that in mind when we think about what we're putting in front of our automation enthusiasts every day. Ansible Galaxy is a hosted place where contributors can share the automation content that they have written. And when you think about Ansible and you think about automation, you need those integration points to happen. So it could be content that's written by a partner, like Cisco, or a contributor, like a Jeff Geerling. All of those folks can host their content for other people to utilize and begin to build automation from. And so Galaxy is a place where all of that content is hosted and shared and able to be used by automation fanatics around the world.

Kelly: So I want to change gears a little bit and talk about Ansible Lightspeed a little bit more in depth. What should folks know about it from an offering standpoint at this point? And also, how is Ansible Lightspeed maybe a manifestation of Ansible's larger approach to AI and IT automation?

Kaete: Great question. So Ansible Lightspeed is available for any user today to go and download the tech preview. To date, I think we've had close to 5,000 people actually go in and look and utilize it and add it to their IDE -- Visual Studio Code, which is where the plugin exists. And so we've gotten tons of great feedback. That's the awesome part of open source software, right?
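In practice, pulling Galaxy-hosted content into a project typically looks something like this: a `requirements.yml` file lists the collections the project depends on, which are then installed with the `ansible-galaxy` CLI. A minimal sketch (the collection names are real public examples; the version pin is purely illustrative):

```yaml
# requirements.yml -- collections this project pulls from Ansible Galaxy
collections:
  - name: cisco.ios          # partner-maintained network modules
    version: ">=5.0.0"       # hypothetical version pin for illustration
  - name: community.general  # broad community-maintained collection
```

These would be installed with `ansible-galaxy collection install -r requirements.yml`, after which the collections' modules and roles are available to playbooks in the project.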
You get feedback very quickly. And it was purpose built specifically with developers in mind, right? We wanted to build an experience that's integrated into where developers live and breathe and work every day. We don't want them to have to go to another place to input their generative AI request. So it is built into our experience today with the tenets that we've talked about: collaboration, transparency, and choice. And it goes beyond just taking a natural language prompt. It integrates pre-processing and post-processing so that the model may be able to better understand the request and provide a result that is really usable, with some best practices in mind for how you might actually write automation. Additionally, it has some great contextual awareness that leverages the unique insights we get from contributors and users around the world, as part of our code base, to bring in that subject matter expertise in a generative AI way. If you're the SME, you're able to make your content even better than you have today. And if you're the new person -- we talked about that skills gap -- it's almost like you have someone who's an expert in automation sitting right next to you and helping you navigate building content, maybe for the first time.

Kelly: Yeah, especially for onboarding scenarios, which has to do with skills and skills acquisition. I don't know, skills gap is not always my favorite term to use when we're talking about onboarding, because that's something that you do when you start a new job.

Kaete: You have to develop new skills, yeah.

Kelly: Yes! It's something that needs to be done.
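The prompt flow Kaete describes works off a task name written in natural language, with the tool suggesting the module code beneath it inside the editor. A hypothetical sketch of what that exchange might look like (the suggested body is illustrative, not actual Lightspeed output):

```yaml
---
- name: Web server setup
  hosts: webservers
  become: true
  tasks:
    # The user types a task name as a natural-language prompt, e.g.:
    - name: Install nginx and make sure it is running
      # ...and code along these lines is suggested below it:
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The suggestion is a starting point for the author to review and adapt, which is consistent with the review-before-production point both speakers make later in the conversation.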
Kaete: Additionally, we've added the ability, coming in the future, to actually scan some of the content that you might have in your organization today and tell you if there's some risk in how you're running that automation, or ways to actually improve the quality of the automation that you have already built. So we're really excited about not only meeting people where they are today and putting generative AI in their hands for the work that they do, but helping them think about risk in their organization, and even think about how the work they have today could be made better, to get those results that we all know leadership is looking for, right?

Kelly: Yeah, and you had used the phrase that this was built with developers in mind, and the fact that you start off with this as something that is in the IDE. Because to your point, meeting developers where they are is the best way to help developers, to limit that context switching and application switching and tool switching that we often see. So I like that you started with that, instead of putting it in a completely foreign UI and only eventually making it something that can be used in, say, Visual Studio Code.

Kaete: We want people to use it, right? We know it can help them every day, whether it's starting something new or doing something better. You know, we want them to use it. We know it can help them. So meet them where they are.

Kelly: Absolutely. So shifting gears a little bit again: to your mind -- and we talked a little bit about meeting developers where they are -- what might Ansible Lightspeed mean for developers in the automation space? And going back to that skills gap question, what does this mean for teams and organizations and that larger collaboration effort that you spoke about?
Kaete: Yeah, so we want to be able to help teams enhance their efficiency, their productivity. I know that sometimes those can be scary words, right? It's like, oh, gotta be more efficient, gotta be more productive, gotta do more. But it is a little bit of what we all have to work within every day as a part of our jobs, right? And so if we can help streamline that content creation process and accelerate creation directly in that environment -- like we talked about, right where they are today -- we can help them build code that's more accurate and very specific to automation. We kind of talked about this before: Lightspeed is a very purpose-built model, specifically for automation. And so we want to be able to expand the aperture beyond our subject matter experts who are writing and building automation today, and add to teams. One thing I hear about a lot when I talk with customers is, my team loves automation, we're all in -- but these three other teams, they're not all in. And so we want to be able to reduce those barriers to code creation, because maybe they're just not comfortable with automation or how to write it or how to use it. And if we can empower those individuals and those teams to work together, you're able to see an amazing thing happen across teams, right? That collaboration that really takes place helps expand what teams can do. And last but not least -- we talked about this before, and I'd love to hear your perspective -- is the trust that comes with those results. We have been keeping that in mind with both the accelerated user, someone who's using it every day, and then the new user. That's where we get lots of questions: we're just gonna toss this into the hands of someone who doesn't know how to use automation? And so it's really to help people get a little bit further every day, right?
We don't expect the code to be used exactly as it comes out of the prompt. It just gives people a place to start, right? And that's sometimes a challenge when bringing in people who are developing new skills: they just need a place to start. And so if they can trust the start, then they can begin to build that framework within their organization of kind of an automation first mindset, right? So we think it can do a lot of things for teams and individuals today. And I'd love to hear from you: are those some of the things that you're hearing are needed in a generative AI solution?

Kelly: You know, I think one of the things that automation and AI have in common is that people can be very apprehensive around both. Automation was always the, okay, I'm gonna get automated out of a job. Now it's like, will I get, you know, gen-AI'ed out of a job? I don't know if that turns into a verb in the same way. So there's often a lot of apprehension around that. And we do know that there are individuals and teams who intentionally create silos of information and things like that, because it's seen as job security. But to your point, the way automation and AI should be used is to take out the toilsome parts of a job that nobody wants to do and that, even more so, nobody should have to learn. Right? So if these become methods and ways of looking at the world that make it easier for people to get started on the parts of their job that are important to them and important to the business, that's all the better. And I do love your point about how we're giving generative AI to people who are just starting. Because they have no idea if...

Kaete: How would they know?

Kelly: ...what they're getting is good code. Like, they wouldn't.
And I think this is not a substitute for folks who have been there for a while going through it, in the same way that you would not just take anything that your new hire has thrown together and put it into production without anyone looking at it. That's just not a good practice to begin with.

Kaete: Not a good practice.

Kelly: And so we are getting close to time, and I have one last question for you. When it comes to IT automation -- and you've touched on this a bit -- why not just use a more all-purpose generative AI tool, especially one that you've already been kicking the tires on for, I don't know, say your kitten meme generation, right? Why not just use that for IT automation?

Kaete: Well, you know, it's possible to use it for IT automation, but I would say, you have the opportunity to work with something that has a singular focus on automation, right? If you are putting something into your production environment, you're going to want to use the AI that has the most knowledge about what you're trying to do. And so, we've said it from the beginning: we're not trying to make poems or make art. We want you to be able to do automation, do it better, and do more automation. Get those teams involved. So our focus is strictly automation here. Additionally, we have added in a couple of features that address those concerns teams raised earlier, right? The content source matching: we're always going to attempt to accurately identify the source of the generated content, with the license and the author shown alongside the result that you get. It's natively integrated -- we've talked about that before, meet developers where they are -- and the data is isolated to each individual customer and encrypted in transit.
In the future, you'll actually be able to utilize Red Hat Ansible Lightspeed with IBM watsonx Code Assistant to train on your own data, and that stays completely separate from the generic model. So we've kept all the data protections in mind. You have a choice about what data is provided to those trained models. Like I said before, you can opt your content out if you are a Galaxy contributor. And we want to make this enterprise ready, so we want to answer some of those enterprise questions that companies have and help get more people using automation, and using it as effectively as possible. Those are some things to think about when you're trying to compare which tool to use and why you would use one versus another. But we're just really excited about what we can do for automation as a whole.

Kelly: So Kaete, I know we've covered a lot of ground in a relatively short amount of time. These are both very big topics, IT automation and generative AI. Thank you so much for joining me today.

Kaete: Thank you so much for having me. It was great to chat with you, and it's always so much fun to hang out with you, Kelly.

Kelly: Likewise.