AI-Powered Development: Your IBM i Copilot
COMMON POWERUp 2025 – Anaheim, CA
Aaron Magid, Chief Architect, Eradani
AI is transforming software development across all platforms, and IBM i is no exception. But how do you effectively leverage these powerful tools while maintaining the high standards IBM i users expect? In this session, we’ll cut through the hype and show you practical applications of AI in daily IBM i development. From code generation to documentation to testing to cool new AI-based application capabilities, you’ll see real-world examples of how AI can enhance productivity while preserving code quality. We’ll demonstrate best practices for AI integration and provide guidance for successful implementation in your environment.
Video Transcript
Aaron Magid 00:01
All right, welcome everybody.
AI Voice 00:05
Hello, everyone. Welcome. Please take your seats. We’ll get started soon.
Aaron Magid 00:19
Welcome.
AI Voice 00:24
Welcome. Great to see everyone here. Let’s get started in just a moment.
Aaron Magid 00:31
I think that makes sense. That was pretty cool, wasn’t it?
AI Voice 00:37
Yeah, I think so. Should we get started with the session?
Aaron Magid 00:43
Yeah, I think we should get started. Okay, so we’re going to be talking.
Aaron Magid 00:49
About how to get the most out of LLMs in your daily work. Our session today is going to cover six main points. First, we’ll do a live demonstration of LLM usage at work. Second, we’ll give an introduction to LLM technology. Third, we’ll talk about using LLMs for runtime data processing. Fourth, we’ll use LLMs to generate code for an IBM i application. Fifth, we’ll use an LLM to act as an on-call subject matter expert. And sixth, we’ll talk about how you can get started with these technologies today. Hopefully by the end of this session you’ll be ready to use these technologies right when you get back to work.
Aaron Magid 01:30
You know, sometimes I have a scary thought that, you know, I’m really just here to be the eye candy, but, you know, it does a pretty good job. You know, I think that, I think that GPT’s got this stuff pretty well down. But that actually, all jokes aside, that actually is a pretty good introduction to what we’re going to be talking about today. And you know, I wanted to, I wanted to start off, you know, with actually using the technologies, right, because we’re using these things all day and they’re, they’re very important. So just going, going back here, right, Our agenda. The main thing that I want you guys to get out of this presentation is that you actually can use these technologies right now. If you, for example, find yourself in front of a room of, I don’t know how many people are in here, and you find yourself giving a presentation and you need, you know, help with it or you need some research done or anything like that. These tools are incredibly powerful for our, for our everyday work. And the other thing that I want you guys to leave this presentation with, ideally, as my lovely assistant said, is actionable steps that you can take immediately after walking out of this room to start using this technology, because I believe we all have to. I think it’s very much a do or die kind of situation with this technology. And I’m hopefully going to make all of you believe that too by the end of the session. Okay, you guys ready to get started? All right, well, welcome
everybody to AI Powered Development. My name is Aaron Magid. For anyone who hasn’t met me before or seen any of my presentations, I wanted to give a little bit of my background just so that you can understand where I’m coming from when I give these kinds of presentations. I actually started as an open source developer, a full stack JavaScript developer. That was sort of my first thing, and then I moved into the IBM i environment about seven years ago. And since then I have seen my role in my company, and with the companies that I work with, as helping the IBM i do modern things in modern ways. And I just want to take a second, because you might notice that on my slide that phrase is italicized, because I want to emphasize it for a second: we want to do modern things in modern ways. I’ll explain a little bit more about that as we go on in this session. But what that means essentially is using the tools the way they were meant to be used, using the right tool for the job. And more than anything, and this might be a slightly controversial statement here, I hope nobody hates me for this, it means not doing everything in RPG. That’s really one of the central parts of the argument here, right? It means that each tool that’s in our arsenal has a distinct purpose. It was invented for a reason. It was brought out into the marketplace to solve a particular problem. And if we try to use it for a problem that it was not designed to solve, what we get is a bad solution, or we get an equivalent solution that just took a really, really, really long time, with a lot of hard work and a lot of wasted money, to get implemented. So we want to make sure that we’re doing things in the modern ways. And that’s hopefully going to be something that I make clear in this presentation. Side note about me.
Aaron Magid 04:52
You can go to sleep now. That’s hilarious. So we’ll actually go through those technologies. But I wanted to make a little side note about me: I actually grew up around an IBM i company. For those of you who know or have used Aldon LM(i), the change management system: just so you know I’m not totally fresh to the IBM i, Aldon got their company name from Al and Don, the two founders of the company. Albert was my grandfather. So I grew up around the IBM i, actually going into the office and, you know, seeing the machines and tinkering with things, and so I think it’s kind of poetic that I’ve come back to it. So anyway, that’s me. All right, so just a quick review of what we’re going to do here. I want to show you how these technologies work. I want to actually use them in our environment here. I want to prove that we actually can do what we’ve been talking about: an AI-assisted development environment for advanced IBM i applications. I want to give you guys an introduction to LLMs, some of the key terms and concepts here, so that we are all clear on how to use them. And then we’re going to talk about some major use cases for how to use them. And then at the end, if you guys have any questions, I am absolutely here for them. So first of all, I want to prove it. I want to just actually use this environment. We’ll talk more about the specific environment and the specific ways that we use these technologies, but there are three main ways that we’ll use them. One is runtime data processing: being able to actually process data that we otherwise might not be able to handle. What I’m going to do here is come over here and run a program. I’ve got a program here that’s going to talk to an LLM and ask it, how is the user feeling?
Aaron Magid 06:51
And I can give it a prompt here. I’m going to send that data out and I’m going to take a look at the spool file that it generates, and it’s going to come back and say, wow, that’s a 10 out of 10. Right. So I’ve got my application integrated with my LLM that’s doing that processing. So that’s the first piece. The second piece: how did I actually build that? Well, I have an RPG program here that I’ve been using in integrations like this. And I just want to point out, we’re just going to burn through this because I want to show that I can actually do this. This is my environment. This is where I do this work. I’ve got an RPG program here, and I’ve got this parameter, in mode. What I’m going to do is say, hey, can you change the in mode parameter on my RPG program so that, let’s see, in mode, sorry, that’s actually not a parameter. Oh yeah, it’s a. It’s a parameter. No, sorry, it’s a variable. So that it is a parameter to the program instead of a local variable. Just going to do that. Right. And I can go over here. I’ve got my RPG code here, I’ve got my assistant. I can give it a request. It’s going to go reason about what exactly it needs to do to make the update that I would like to my RPG program. It’s going to go pull up the code. It actually is figuring out a couple of other related objects, including that I’ve got a command related to this program, and then it’s going to go out and actually try to make the updates to the RPG program. Right. So we’re not going to spend too much time on this. I just wanted to point out that it is now actually working on the thing that I have asked it to do. I used to say that you can’t use LLM coding assistants for RPG, that the results weren’t good. I’m not actually so sure about that anymore. I mean, I know that watsonx is coming around and that it’s doing amazing stuff around those technologies, and that’s great.
And I’m sure that it’ll get better results than these tools. But I also have actually seen surprisingly good results with these tools, especially with free-format code. So there’s already a certain level that you can get to without anything else, right, with the things that are on the market right now. And when we get the real live version, I’m really excited to get into watsonx and be able to use those models, those tools, so that we can get to the next level. But again, I don’t want anybody to be waiting for a next release to use this technology, because you can use it right now. Case in point, this is going over and it’s editing my program and getting the changes in there. So we don’t need to wait for this entire thing to go through. If you notice here, it actually just went over to the command that’s around this program and added the parm into that command, because it realized that if you update the RPG program, it had better update that command, get the related objects in there, and actually have that applied to my application. So that’s my updated program. It’s still working on it, but I’m going to leave it there; you get the point. So given that environment, right, where I can talk to a system, as we did at the beginning of this session, where it can actually give me some actionable information, where we can use it for code development or for code generation, and where we can actually use it for runtime data processing, let’s talk through some key information that we need to understand in order to actually utilize this. So the first thing that I wanted to talk through in this presentation is: what are we talking about here? What are these LLM things? Right.
I just showed you a development environment where I could type in a prompt and actually get actionable changes made to my RPG programs, where I can utilize combined technologies from multiple languages generated by an AI assistant, and take advantage of all these technologies. But in order to actually use these things, we can’t just be blind users of the technology. We have to understand what we’re talking about. So first I wanted to give us a foundation in the terms and technologies that we need to understand. Number one, first thing: LLM. What is an LLM? People have been talking about these things lately, and I just wanted to take a moment to explain what that is. You know, I hear people these days say AI, AI, AI, as if it’s a new thing. It’s not a new thing. AI is not a new thing; Alan Turing talked about AI. But what’s new is these large language models that are giving artificial intelligence a human-like, or human-compatible, interface. That’s what’s new. So when you hear people talking about AI and they have that little sparkle logo on their applications, what they really mean is LLM, most of the time. There are actually companies from before the GPT revolution that have other AI algorithms, which is just kind of funny to me, because there’s a lot of value there, but nobody thinks about it anymore. It’s just kind of a funny thing. One of the key things that these large language models do really well is natural language processing. For anyone not familiar with that, natural language processing is the process of reading natural language, human-written text, and extracting useful meaning from it. If anyone in here has ever tried to do that programmatically without the use of AI, it is extremely difficult. I hate doing those kinds of things. I’ve actually had to do them before, and it is incredibly hard. Right? LLMs do it, no problem.
They are super easy. You feed the data in, you ask them a question about it, and they’ll tell you. So that’s one of the key things that we can use them for. APIs: if you’ve been coming to COMMON, you should know what an API is, right? An API gives us an interface over the web to talk to an external system. It is the backbone of all integrations these days. Right. Even other things like messaging systems: if you talk to Kafka, you’re usually talking to it over a web service call; you’re actually talking to your message brokers. So these are very important to understand, to be able to talk to other systems, which again is the core of being able to use the latest technology. There’s a new entrant into this field as of very recently. So on a quick show of hands, who here has heard of MCP, or the Model Context Protocol? Okay, cool, nice. There are some terms coming out that are so new that if you ask GPT about them, it doesn’t give you an accurate answer. MCP is not one of them; vibe coding is where I actually saw this. They’re so new that GPT actually uses them, but if you ask it about them, it talks about something else, because it doesn’t yet know about these things; they’re arising so quickly. But the Model Context Protocol is really important. Basically what happened is the developers at Anthropic, the makers of Claude, realized that they needed a standard way to get LLMs to be able to understand tools and to be able to remember things. And that’s what the Model Context Protocol does. It provides a standard way for these AI tools to retain memory and context across the disparate conversations they’re having, and also to use tools. And I’m focusing on that use tools part, because if you want to build the really cool stuff, this is one of the most important things to understand.
If you’re building a custom AI-based system, remember that the Model Context Protocol is how you’re going to get it to understand how to use your system. Right. MCP is what allows the AI assistant that I showed you a second ago to talk to my Visual Studio Code and read files out of it and interact with the files. That’s what allows it to do that autonomously. So it’s very, very important. Another one, kind of a buzzword: agentic AI. Anybody heard that one? Okay, right. Fundamentally, the idea is that with an LLM, instead of asking it one-off questions and getting one-off answers, we’re going to give it autonomy to make decisions, plan out an execution plan, and then execute those steps. Right. Basically giving it a complete task rather than an individual question and answer. So again, cranking up the capabilities here. I don’t know that we need to talk about model training, although I do usually talk about that here. It’s at this point a fairly low-level thing that most of us aren’t actually involved in. But it is important to note that beyond just giving context to an AI, which is kind of like giving documentation to a new developer who hasn’t seen your system, you can also teach it about your systems. And it can actually incorporate that knowledge into its weights, so that it’s able to really effectively and efficiently talk about your systems. So that’s also an important thing if you’re getting really into the weeds.
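The tool-use idea behind MCP can be sketched in a few lines: the application publishes a tool descriptor the model can read, the model emits a tool call, and the application dispatches it. The tool name, schema shape, and dispatcher below are hypothetical illustrations of the pattern, not the actual MCP wire protocol.

```python
# Hypothetical sketch of MCP-style tool use. The tool name, schema, and
# dispatch logic are illustrative assumptions, not the real MCP protocol.

# Tool descriptor: tells the model what the tool does and what input it takes.
READ_SOURCE_TOOL = {
    "name": "read_source_member",
    "description": "Read an RPG source member from an IBM i library.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "library": {"type": "string"},
            "member": {"type": "string"},
        },
        "required": ["library", "member"],
    },
}

def handle_tool_call(call: dict, sources: dict) -> str:
    """Dispatch a tool call emitted by the model back to application code."""
    if call["name"] != READ_SOURCE_TOOL["name"]:
        raise ValueError(f"unknown tool: {call['name']}")
    args = call["arguments"]
    return sources.get((args["library"], args["member"]), "*** not found ***")

# Simulated model output requesting a tool invocation.
model_call = {
    "name": "read_source_member",
    "arguments": {"library": "MYLIB", "member": "HELLOSRC"},
}
sources = {("MYLIB", "HELLOSRC"): "dcl-s inMode char(10);"}
print(handle_tool_call(model_call, sources))
```

The key design point is that the model never touches the files itself; it only describes the call it wants, and your code decides whether and how to execute it.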
Aaron Magid 15:58
Let’s just take one moment about LLMs. I think one of the things that’s important to understand about LLMs is why they’re different from the AI technologies that preceded them. One of the major differences is their human-like interface, right, which allows you to talk to them the way you would talk to another human being. That’s what allowed the technology to be accessible to every user on the planet. And also, they require relatively little training data, right? You’re talking hundreds or thousands of examples to really get one to understand something, as opposed to hundreds of millions to get traditional systems to accurately figure out the patterns in data. So these things are very easy to train, they are very fast, and they’re very flexible. So as you all are thinking about this, one of the things that I want to make sure to impress upon everybody is: you can use this for code generation, but you can also use it in your applications.
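In practice, the "very little training data" point is usually exploited through few-shot prompting rather than retraining: a handful of labeled examples in the prompt is often enough to steer the model. A minimal sketch; the classification task and wording here are made up for illustration:

```python
def build_few_shot_prompt(examples, new_input):
    """Build a few-shot classification prompt. A handful of labeled
    examples is often enough to steer an LLM, where a classical model
    would need a large labeled training set."""
    lines = ["Classify each support message as BILLING, OUTAGE, or OTHER.", ""]
    for text, label in examples:
        lines.append(f"Message: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The unlabeled message goes last; the model completes the label.
    lines.append(f"Message: {new_input}")
    lines.append("Label:")
    return "\n".join(lines)

examples = [
    ("My invoice total looks wrong this month.", "BILLING"),
    ("The green screen sessions all dropped at 9am.", "OUTAGE"),
]
prompt = build_few_shot_prompt(examples, "Why was I charged twice?")
print(prompt)
```

The resulting string is what you would send as the user message in an LLM API call; two examples is the floor, and adding a few more usually improves consistency.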
Aaron Magid 17:05
But why should we care, right? We’ve talked about LLMs, we’ve talked about the technology, all these terms, all these little bits of information. Why is it important to us right now? I made the fairly provocative statement at the beginning that this is do or die. We have to do this. You must be using these technologies. I believe that. I believe that anyone who’s not using these technologies in the next three to five years is going to be having some very difficult conversations; that’s just how this is going to go. But why? Well, the first thing is the ability to develop extremely rapidly, right? These LLMs can generate code very, very quickly. But more importantly, and I talk about this a lot whenever I talk about AI, is this second point on my list here, which is reduced cognitive load. What I mean by that is, and anyone here who has generated code with AI has probably felt this, I hope; I feel it all the time: it is a lot easier to skim a sorting algorithm generated by AI and make sure that, yeah, that roughly looks right, than to sit down with a blank canvas and try to remember how to implement a sorting algorithm, think through it, map it out, pencil it out, and then write the code and then do the review and all the testing. What I have found is that even in the scenarios where an LLM is slower than me at doing something, I can still get more done in a day using these technologies, substantially more, many times more. Because essentially what I’m doing is the job of a senior or chief engineer, not the job of a junior engineer. I’m not writing code. I almost never write code anymore. I am reviewing code. I am developing plans. I am designing features. I’m almost never writing code. And that’s just a key point to understand here. And what I found is that I have more energy.
I’m actually more awake in the second half of the day than I used to be because I’m not spending all this time. Yeah, go ahead. You had a question?
Aaron Magid 19:24
Yes, that’s a really good question. Thank you. If I had candy, I would throw you one. I should have candy next time, that’s what I should do. Yes, it’s a really important point. The thing is, there is a real tension there, right? Because, yes, I see people becoming complacent. I’m watching it happen, actually, to two people, and they are unlearning certain things; they’re not advancing because they’re relying on AI. But at the same time, it can do things, in a lot of cases, faster and better than we might be able to do them. So we need to use it. So the question is, how do you resolve that? For me, I think it’s actually mostly just a personal exercise, meaning I’m not using the LLM to make my job easier so that I can go home earlier and just kind of relax and kick back and, you know, watch a movie while the AI is writing all my code for me. If the AI is writing my code for me, what I’m doing is designing the next feature while it writes the code. Meaning you won’t become complacent, you won’t forget your skills, if you’re still pushing and using them. I think it’s actually just a perfect personal challenge now.
Aaron Magid 20:47
Okay.
Speaker 2 20:58
And if we mostly just trust the AI, how do junior developers really learn to code?
Aaron Magid 21:07
So yeah, I would say that.
Speaker 2 21:13
It can code stuff for me, and I can see the flaws in its code, but a new person, who can also do this, doesn’t have the ability to see the flaws in the code.
Aaron Magid 21:30
Right. Yeah, no, you’re totally right. And I think it’s going to be a very difficult thing for junior developers. I’ll be honest, I don’t actually have a great answer for you there. I think junior developers are going to have a really hard time in the next couple of years, because they’re going to have to figure out how to learn this. Now, what I’ve done personally, and maybe this will help, is, because what I found is I’m branching out into technologies that I didn’t know before, in a couple of cases I’ve flipped the roles. Meaning, whereas normally I’m reviewing the code and reviewing the feature and the LLM is generating it, sometimes what I’ve done is actually tried my hand at writing the code and then had the AI review it. And even though it’s not perfectly brilliant in all things, it can get you a second pair of eyes, so that it can actually help you learn in a one-on-one environment. But yeah, junior developers are going to have a really tough time. Actually, let me jump ahead; I was planning to say this later, but it’s fine, it doesn’t matter. One of the things that has happened at my company is we don’t do coding interviews anymore. We actually do not interview with a programming challenge. So what do we do? How do we validate if people can do the job? Well, we do an architecture interview. We give people a diagram; we use Lucidchart. And on one side is one of our questions. It says: I have this data in my IBM i. It’s in this database table, filling up over time. I need to get it into this system, because we do a lot of integrations. Fill in the arrows; tell me how we can make it work. Right? And we have them diagram it out. And what we’re gauging is: can you design the integration, think through what you need, and design a bulletproof system capturing all those edge cases?
And can you explain it to another person? Because if you can do those things, you can explain it to an LLM. And what we have seen is that that’s actually the more important skill. So that’s one very direct impact that this technology has had on our teams in our company, and it hasn’t let us down so far. I think that there are going to be some bumpy changes over the next couple of years. It’s going to be hard for junior developers. It’s going to be hard for everybody, because we’re going to have to learn this new way of working. I saw an article recently that was talking about how there’s an epidemic of cheating in American schools. Right. Students all over the place are having GPT write their essays for them. Right. I mean, why not, if it can do it, and in a convincing way that the teacher can’t figure out? What, are you just going to have a pesky moral compass? Is a teenager going to think about their personal growth and realize that writing this 20 page paper is actually for me, not for the teacher? No kid in school understands that. Right. It’s all busy work to them. Well, that’s the thing. What this article was saying is, maybe that’s not actually a bad thing. Maybe the paradigm needs to shift to where, instead of expecting the student to just jump in and write the paper, what they need to do is talk to AI, have a conversation, and focus on critical thinking skills, right, the things that are more non-AI-able at this particular time. And I think as developers, that’s part of what we need to think about: what are the skills that are very valuable in the era of AI, rather than what are the skills I thought I would be learning? Because it’s changing.
One of the key things that I always like to mention here, and I see a couple of chuckles, I appreciate that, goes back to the don’t-be-complacent thing. I believe that if we keep learning, we don’t have anything to fear. And I actually had GPT generate a chart for me once to illustrate this point. I should have it in this presentation, but I don’t. Basically the gist of it was: let’s say you as a developer are capable of doing this much, a bar on a bar chart, and an LLM is able to do that much, right? A little bit less than you, but it can still do some things. There are two options when you use this technology. One is you can use the AI to fill in the lower part of your bar so that you only have to do the top part, and then you can, you know, kick back and watch a movie while it’s coding. The other option is you can have the LLM handle the stuff that you don’t want to do, and then you can focus on all of the rest; suddenly your capacity is up here, right? That’s why I like this cartoon. If you find yourself with lots of time on your hands, great. That means you’re using AI really well. Now you need to go do the next thing, right? You don’t want to become complacent, because if you do, then the person who’s doing that more advanced thing is going to come in and take your job, and the one of the person next to you, right? And that’s one of the key points here. Actually, one of the best quotes that I heard recently was: AI will not replace people. People who use AI will replace people who don’t.
Again, I think it comes down to that relationship: you need to be leveraging it for all that it’s worth in improving your capabilities and your skills. Great discussion, by the way. Thank you for that. I really appreciate it. Okay, let’s talk about three ways that we can use LLMs in our daily life. And again, I really want to give you guys practical stuff. I don’t want to stand here talking at you and giving you, whatever, encyclopedia entries on things; that’s not really helpful. So, three ways. One, runtime data processing. LLMs are really good at processing and matching data, right? If I have users coming in and giving me instructions, if I have user information, user feedback, or even if I just have patterns in my data, if I just have data and I want the LLM to do something with it, they are really, really good at that. They’re really good at extracting meaning from complex data. So I have seen people do integrations where they do things like what I showed at the beginning: sentiment analysis, right? Taking customer messages and checking how is this customer feeling, so that I can alert a customer service representative if they are not happy. Or taking a whole record for a customer from my database, with all the joins on the 300 different tables that I need to get all the data out about that particular person, and extracting meaningful information from that data. They’re really good at that kind of thing. So that’s one thing we can do with them, something you would use in your applications at runtime via API calls to the LLM APIs. Another one is code generation, obviously, right? That’s what everybody’s talking about, because Google, Apple, Microsoft, Amazon, all those guys are chasing artificial general intelligence.
They are trying to basically automate programming. Because, you know, I read once that in a typical tech company, 80% of the company’s expenses go to software developer salaries, right? So you can imagine how powerful it would be if you actually had an ability to reduce that. So that’s what they’re going for. The third piece, which I added recently to my list, is expanding developer knowledge. And I’m going to talk more about that one, because I think it’s a little bit less intuitive. But just in a nutshell: I have the GPT voice chat open on my phone basically the entire day when I’m working. And what it is doing is answering my questions
about the technologies that I run into in my job. I do about 50 Zoom meetings a week with various different companies all over the place. And each of them is using different technologies. It is impossible for me to know everything there is to know about every different MFA provider on the planet, right? One of the things I do is MFA integrations. I can’t know all of them. There are new ones every week; it’s just not possible. So somebody comes up and they mention a new system, right? The first thing that I do is go to the GPT voice chat and say, what is this? And it goes: this vendor is an MFA platform that does this, this and this; they have these features. And I say, great. Now I understand it within the context of my job. Now I can reason about it. That’s what it does for me, right? Or I’m doing an integration and a new database technology comes up. I ask it, what is that? Instead of reading the documentation, I have a subject matter expert that is always available for me to ask questions at any time. And that means that I can, in a conversation, talk in real time about a technology that I have never heard of, right? I don’t have to say, I’ll get back to you after I’ve had a few hours to read their documentation, because it can distill it for me in seconds. So that’s another key piece here that again allows me to do a lot more with my day. And that’s really what I’m targeting here, right? Use the technologies to do a lot more with your day. You got a question?
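The runtime sentiment-analysis integration described earlier boils down to building a request payload and sending it to an LLM API over HTTPS. A sketch of the payload side, using the common chat-completions shape; the model name and prompt wording are illustrative assumptions, not the session's actual code:

```python
import json

def build_sentiment_request(customer_message: str) -> dict:
    """Build a chat-completion payload asking the model to rate customer
    sentiment from 1 to 10. The model name and message format follow the
    widely used OpenAI-style chat API; adapt to your actual provider."""
    return {
        "model": "gpt-4o-mini",  # illustrative; pick your provider's model
        "messages": [
            {"role": "system",
             "content": "Rate the customer's sentiment from 1 (angry) to 10 "
                        "(delighted). Reply with only the number."},
            {"role": "user", "content": customer_message},
        ],
        "temperature": 0,  # deterministic scoring, not creative writing
    }

payload = build_sentiment_request("This new release is fantastic, thank you!")
print(json.dumps(payload, indent=2))
```

An RPG or any other program can produce this same JSON and POST it through an HTTP client; the response's single-number answer is what the demo wrote to the spool file.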
Aaron Magid 31:53
Yeah, good question.
Aaron Magid 31:58
I’m so glad that somebody asked that. So the term for that, right, is hallucination. I’m big on one-liners; I like notable quotes. And one of the things that I like to say is: I don’t call it hallucination, I call it imagination. What it’s doing is thinking; it’s coming up with a suggested answer. And the thing is, we’ve all done this, whether as adults or as children, right? You think in your head about something, whatever, a conversation, an event, a story, and it goes on a wild, fantastical journey, right? It goes way out into the realm of the totally implausible. The reason that happens is there’s no reality check on it. Your thoughts have no reality check on them, unless you impose one, right? If you’re just sort of fantasizing, they just kind of go wherever. LLMs are able to hallucinate, I believe, and this is just my personal philosophy, because they fundamentally do not have a feedback loop. If you go to GPT and you ask it about a technology and it makes something up, there’s no actual way for that to be proven wrong directly to it. And there’s no penalty, there’s no test. And so it’s able to spin out. What I do with LLMs is I always make sure there’s a feedback loop, meaning you don’t just ask it and then blindly accept its answer. There are a couple of concrete ways that you can do this. The most powerful thing that you can do with an LLM, and I recommend that everybody do this: anytime I ask GPT a question, the immediate next message that I send it is three words: are you sure? That is the immediate next thing. I don’t even read its answer before I send it. Because what that does is it gives it that feedback loop, where it’s now going to check its answer as though it were somebody else looking at it. And all the time it comes back and says, you’re right to question that. Here’s the actual explanation.
And my next message is, are you sure? Sometimes it’ll come back and say, oh, you’re right, that’s also not true, here’s the real answer. And I’ll say, are you sure? And when it comes back and says, yes, I am sure, I’ll usually take it. If I’m still not trusting it, I might say, give me your sources. But the other piece is, as an architect, and having seen a lot of technologies, I can usually tell when it’s BSing me. I can see when a technology doesn’t actually make sense. And that’s the key skill that I think we as engineers need to be focused on, because if you can sense the BS, then you can use these tools. Yeah, go ahead.
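The “are you sure?” loop Aaron describes can be sketched in a few lines of Python. This is just a conceptual sketch: the `ask_llm` callable is a stand-in for whatever chat API you actually use, and the “did it stand by its answer” check is a deliberately naive heuristic.

```python
def verify_with_are_you_sure(ask_llm, question, max_rounds=3):
    """Press the model with 'Are you sure?' until it stops revising.

    ask_llm(history) -> reply string, where history is a list of
    {"role": ..., "content": ...} messages as in most chat APIs.
    """
    history = [{"role": "user", "content": question}]
    answer = ask_llm(history)
    history.append({"role": "assistant", "content": answer})

    for _ in range(max_rounds):
        history.append({"role": "user", "content": "Are you sure?"})
        recheck = ask_llm(history)
        history.append({"role": "assistant", "content": recheck})
        # Naive heuristic: if the model stands by its answer, stop pressing.
        if "yes" in recheck.lower() and "sure" in recheck.lower():
            return answer
        answer = recheck  # the model revised itself; keep the new answer
    return answer
```

The key design point is exactly the one from the talk: the follow-up is sent unconditionally, without reading the first answer, so the model always gets one forced self-review.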
Speaker 2 35:02
[inaudible] I think the biggest strength of AI is also its weakness: we all know what’s been written on the Internet, the good and the bad.
Aaron Magid 35:18
And those are its sources of evidence. Yeah.
Speaker 2 35:21
So how, or who, will tell the AI that its next answer is wrong?
Aaron Magid 35:26
Yeah.
Speaker 2 35:27
[inaudible]
Aaron Magid 35:31
Yeah, yeah, yeah. OpenAI. Yeah.
Speaker 2 35:38
You can’t [inaudible] everything.
Aaron Magid 35:43
Yeah, that’s true.
Speaker 2 35:44
I think that’s also on the user to do, as you said.
Aaron Magid 35:54
Yeah. And another thing that I just want to point out is: don’t fall into the trap of expecting AI to be perfect. It was designed to interact like humans, and guess what? Humans lie all the freaking time. Right.
Aaron Magid 36:14
And humans hallucinate too; that one you might want to get checked out, but it happens all the time. Why do we have all kinds of fraud prevention methodologies in our companies, like, don’t give one person access to these two pieces of data at the same time? Because otherwise someone is going to steal with that access. So there’s a key point here, which is: if you expect it to be perfect, you’re going to be disappointed. One of the things that I have on one of these slides, but I love that we’re just having a conversation here, so I don’t mind going off of my slides, one of the things I say on here is: treat an LLM like your personal junior developer. And I mean every word of that. Treat it like a junior developer, not a senior developer. A junior developer, right? If you were bringing in a kid out of college to work directly next to you at your desk writing code, when you asked that person to write code, you would hopefully not just take their code and throw it blindly into production, right? Same thing here. Treat it like your personal junior developer and you’re going to get good results. But again, we can’t get complacent. It goes back to that same thing: AI is not an excuse for us to kick back, sit on the couch and watch a movie while it’s doing our work for us.
Aaron Magid 37:42
Best case scenario, even if that worked, your boss would say, why am I paying you so much? Right? That’s not how this is going to go. The way it’s going to go is: you’re going to use the technologies, they’re going to supercharge your capabilities and allow you to step up to a level that you have never previously been able to get to, or someone else is going to take your job. Which is why I say do or die. Nice full-circle moment there. Go ahead, Matt.
Aaron Magid 38:14
It’s a really good question. And actually, here’s a quick inside view of my company: we actually do that. We have AIs talk to each other, and we have them do exactly that. But we don’t blindly push their stuff into production; we always have a review. But you’re right that what’s happening here is you can step up on top of whatever level the AI has achieved. That’s the key thing, and that’s why I used this analogy, you know, stacking things, right? The AI figures out how to write basic code. Great. You don’t need to do that anymore. Now you’re not a junior engineer, now you’re a senior engineer. What does that mean? You’re writing specs and giving them to the junior engineer, and sometimes writing the complicated code.
Aaron Magid 39:00
Once the AI gets to the senior engineer level, then you’re going up to the chief engineer level. What does a chief engineer do? They generally don’t write much code; sometimes they might, but it’s really only for prototyping. You write prototypes, you design things, you give that to the senior engineers, they write specs, then they give those to the junior engineers; you’re doing the pull request reviews, and you only look closely when something catches your eye that might be wrong, or on a high-profile project. Eventually it will get to that point, and you’re going to move up to the executive level, where you’re saying, what’s the business problem I need to solve? And then you’re delegating to that entire AI organization. That’s what I believe is going to happen. We’re just stepping up. Each time it gets smarter, you step up on top of it and you use it. That’s the key thing. So, to answer your question in all seriousness: you can absolutely have the AI review your pull requests. Just make sure that the next level up is competent humans. The danger is when you hit unexpected pockets of less competent people, and AI actually has a tendency to reveal that. That can be where things get dangerous. One of the things that I wanted to point out here, a question that’s been echoing through my head, is: are text-based interfaces going to make a comeback? And I’ve actually decided recently that I think the answer is yes. What I mean by that is: I started as a web app developer; that was the first sort of thing that I did. And I always thought web apps are how we’re going to do things, right? We’re going to build web apps that give people pretty, flashy buttons for things.
What I have seen recently is that you are able to build much faster, much cheaper, much more sophisticated business process flows out of agentic AI than out of a web app. The question that I’ve been batting around the last couple of weeks is: are web apps a thing of the past? I’m not actually sure that we need them anymore, but that’s speculation. So I always like to put that question in there, because it’s something that I think about in my copious amounts of spare time. Yeah. But here’s the thing. I used to do this, and I would talk to companies and they would say, I want a web app. And I would say, great. And they would say, I want it for $10,000. And I would say, your starting cost is going to be a quarter million. That’s going to be your V1 prototype, getting that live. And honestly, that’s relatively cheap. Is it expensive to build? Well, it actually costs a lot to build a good web app, because it takes a whole bunch of work to figure out the flows, figure out how you’re going to do things, build all the forms, fix all the bugs, and get the UI designers in there. I have seen people set up, in hours, entire applications out of AI agents that are then able to do basically everything that that prototype web app would have done, including, and this is crazy, sometimes rendering a web-like screen for you out of the AI in real time, without actually having to dump all that money and time into building the web app. So anyway, I don’t want to make any enemies here; I probably already have. But that’s just a question that I want to put out there. And I am sure that I will be back at a
conference eventually giving a session one way or the other, whatever we come to on that technology. But that to me is the next question. We actually talked about a lot of this already, but here are some suggestions for you as you use this technology. Runtime usage: how can we use it? Sentiment analysis is a great way to get your feet wet, because it’s a really simple prompt. It’s just: here’s a message from the user, how are they feeling? Give me a rating on a scale of 1 to 10. And that’s what I did in that previous demo. Actually, let me show it again. That’s what I did in this previous session here. I bet you my session timed out. Oh, wow, it didn’t. Awesome. Right, so if I come in here and I say, oops, and I say, this is a.
Aaron Magid 43:33
Get that apostrophe out of there. Right. So if I give it a message and I say, ah, you know, this is kind of middle of the road, I’m not sure if I really like this or not, and I come over here, I get this: oh, four out of ten, right? I’d better rewrite this presentation. That’s the kind of thing that we can do really, really quickly. Sentiment analysis used to be a really hard thing to do. It’s not a hard thing to do anymore. All you have to do is send the user’s message over to an LLM and ask it: on a scale of 1 to 10, how’s this going? And I’ve actually seen companies implement that very successfully in customer service, so that each time an email comes in, it’s going to a junior customer service rep, and maybe you don’t trust that person to sense when the customer is unhappy and run it up the chain. But these tools can do it. So that’s just a powerful thing that I’ve seen used out there. Another piece is data extraction: pulling out patterns, analyzing data, pulling out all the records for a particular customer, looking for patterns. They do that really, really well. It’s not as good as purpose-built AI algorithms that are not LLMs. But the key thing about the LLM is that I can dump the data in and say, tell me about it, and a few seconds later I’ve got those analyses coming back. Again, not as good as the purpose-built systems, but the speed of these tools is unbelievable. And another piece, which I think is very powerful, is just adding human tones to things, instead of hard-coded messages, instead of very robotic things. Actually, what I did at the beginning of the presentation, that little voice thing where it spoke up: I told it, if I say welcome more than once, vary the message a little bit, so that it’s not just saying the same thing each time. And it’ll do that. All I had to do was say, just vary the text a little bit.
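The sentiment check from the demo really is just a prompt plus a little response parsing. Here is a minimal sketch of both halves; the exact prompt wording is illustrative, and the parsing assumes the model was told to reply with a number.

```python
import re

def sentiment_prompt(message):
    """Wrap a user's message in a simple 1-to-10 sentiment prompt."""
    return (
        "Here is a message from a user:\n"
        f"---\n{message}\n---\n"
        "On a scale of 1 to 10, how positive is this message? "
        "Reply with just the number."
    )

def parse_score(reply):
    """Pull the first 1-10 number out of the model's reply, or None."""
    match = re.search(r"\b(10|[1-9])\b", reply)
    return int(match.group(1)) if match else None
```

You would send `sentiment_prompt(user_message)` to whichever LLM API you use and run `parse_score` on the reply; models sometimes add prose around the number even when asked not to, which is why the parser scans rather than calling `int()` directly.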
And it’s able to do that. So we can use that kind of thing to make much friendlier, much less robotic interfaces. So how can we actually get into these technologies? One thing: LLM APIs. If anyone hasn’t done this already, you can go to ChatGPT, you can go to Claude, you can go to any of the major models, and there will be APIs that you can call to send data in and get the model’s response back. That’s the basis of everything that you do here. You set up an account, you put in some credits, 10 bucks, whatever you need, and start sending data. As long as you can make API calls and receive the responses, you can integrate with the AI systems. So that’s one of the things I want to recommend to everybody right now: when you go back to work, or before you go back to work, if you know how to call an API from your systems, go get an account, take some data from your programs, send it in, get the response back. It’s really straightforward to set up, and at that point you can start to get a sense for what you can do with these technologies at runtime.
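For anyone who wants a concrete picture of what “send data in, get the response back” looks like, here is a sketch using only the standard library. The request shape follows the widely used OpenAI-style chat completions format; the URL and model name are illustrative placeholders, and you would supply your own API key from your provider account.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # provider-specific

def build_request(user_text, model="gpt-4o-mini"):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

def send(payload, api_key):
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any language that can make an HTTPS POST with a JSON body can do the equivalent, which is the whole point of the talk’s “if you can call an API, you can integrate” claim.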
Aaron Magid 46:46
Much more advanced, but if you are really getting into this: MCP servers, the Model Context Protocol. If you have proprietary data or proprietary tools that you want your AI assistants to be able to use, MCP is how you do it. That’s how you can teach them how to use your applications, your products, your tools, your custom environments, all that stuff. Again, that’s a developing technology; it is very new. I gave this presentation four months ago and I did not talk about MCP, because I hadn’t heard about it yet. It is very new, and it takes a lot of coding, but it is very, very powerful. So that’s another critical piece. I don’t know if we actually have to demo this again; I already did it, as I noticed, so I don’t know if you need to see it twice. Okay, so let’s move on to the next type of usage, and I know we’re actually running low on time here: generating code. We talked a lot about runtime data processing. One of the key points that I want to make here that I haven’t already talked about is how generating code speeds up your development cycles. It doesn’t replace developers, but again, it’s like you have your own personal junior developers at your disposal, at any time, to ask to do things. And actually, I just want to show you something: that change that I made a second ago to my RPG program, where I had it change the request, or, sorry, the in mode to a parameter, and it updated it. This tool actually tells me how much it cost. It cost me 30 cents. And when you think about money, that can feel like, oh, I’m actually spending money on this.
But if you think about this at the actual cost level: take a typical developer salary, a mid-level engineer, where the all-in cost typically works out to something like 75 or 80 dollars an hour. Let’s just round that to a dollar a minute. So if this saves you 20 seconds of coding, that’s roughly the 30 cents, and I promise you this saved me more than 20 seconds of coding. So again, these tools are very effective, and I’m actually shocked at how cheap they are. Personally, I thought they were going to be way more expensive. So anyway, just some powerful things here. But there’s an important point here that I really want to make, and that is: you have to not become the bottleneck. The LLM is able to generate code, it’s able to accept prompts, but you have to make sure that you are ready, and you have to make sure that your tools are ready. This agentic assistant that I’ve been using in here, by the way, is called Cline, C-L-I-N-E. I strongly recommend that all of you try it out. It is mind-blowing how powerful this thing is, and it’s free, except for the AI charges from the back-end models that it uses. It is awesome. I heard about it a week ago and I rewrote my presentation around it, because it is that awesome. So, Cline: very, very powerful. But here’s the thing. Cline has no idea what a source member is. Cline has no idea what a library is. It has no idea what an object is. It knows VS Code, it knows files, IFS files, and it knows git. So if your tooling is not at that level, you can’t use this tool. And that’s one of the key points: don’t be the bottleneck.
We have to make sure that we are using the right tools for the job, so that we can take advantage of all of these technologies as they come out. And some of my colleagues and I are doing more sessions here at COMMON where we’re going to talk about how you get your IBM i into these technologies, how you get into git, how you get into VS Code. It’s not actually about git, and it’s not actually about VS Code. You may not like VS Code; that’s totally fine. But VS Code is the standard, and that means that when somebody comes along and invents something crazy that revolutionizes development, they’re not going to add support for PDM; they’re going to put support in for VS Code, and they’re going to put support in for git. So if you’re not in VS Code and you’re not in git, you miss out on that, and you, or your tools, become the limiting factor on that technology. As much as we may love the tools that we use, and I know plenty of people who will die on the SEU hill, and honestly I respect it, those people will not be able to use this technology. And so when this technology gets to the place where it can write all the RPG code for you, and everyone is ready to step up to be senior engineers, all those people are going to get left behind, because they won’t be able to use the tools, or they’ll be stuck paying exorbitant fees to be able to use them, and even then they won’t ever keep up; they’ll always be behind. You’ve got to use the new technologies. Like I said at the beginning, we’ve got to do modern things in modern ways. We’ve got to be using these technologies the way they were designed to be used.
Aaron Magid 52:57
So, I mentioned Cline, and actually, on that note about generated code: open source is another critical piece of this. LLMs are going to generate code for you, and they’re going to want to take advantage of standards. That’s what they’re trained on. They’re trained on standard code. They’re not trained as much on how to make an API call using a low-level sockets library; they’re trained on how to make an API call using open source packages like Axios for JavaScript. So if we’re not using open source technologies, we’re also not getting the benefit of that. But there’s another very important point here, which I’m really hoping the watsonx technology changes: up until now, the only way to get into these assistants has been to use them with open source languages. And while we have support coming for RPG, and I’m really looking forward to that, these open source languages, honestly, are I think always going to have a larger training data set. And so one of the key points that I want to make to everybody here is: for things that are appropriate to do in open source technologies, do them in open source technologies. And one of the really powerful things is that we can use AI assistants to write that code for us, so that we don’t have to actually be experts in those languages. We can write in languages that do what we need them to do without having to learn them really deeply. You need to understand the code at a high level, but you don’t need to be an expert JavaScript developer to write JavaScript applications anymore. And so that’s one of the key points here: LLMs, just like people, will be more efficient if they are using the tools that are designed to do the job that they are being asked to do.
Just like a person, they will spend a lot more time struggling, a lot more time trying to generate code, incur a lot more cost, and make a lot more errors if they try to do it in a language that wasn’t designed to do it. And I could give some examples, but like I said earlier, I don’t really want to make enemies. So let’s talk about some challenges here that people run into, and then I think we’ll wrap up. There are traditional ways of doing things that don’t fit well with these technologies, and we need to make sure that the technologies we’re using, and the way that we’re using them, are designed for what we’re trying to do. When we are doing runtime LLM integrations, I’m going to make another fairly provocative statement: we should be doing that not in RPG, but in JavaScript or Python or a similar language. And there are a couple of reasons I say that. One is that LLMs are fundamentally the essence of flexible. They work in human language. They are designed to take data in any format and transform it to any format. They do not care about fixed-length character fields, they do not care about decimal data errors, they do not care about those kinds of things. Those are all very foreign concepts to them. Working in a language that has very strict types is great when I’m building a core database application, because it reduces the errors I’m going to run into. It does not work well, in my experience, when I am working with an LLM, because the LLM wants to be flexible, and so what we need is a technology that can handle that flexibility. That doesn’t mean that RPG doesn’t have a place in our modern applications. I know some people will hear what I say and think that’s what I’m saying; it’s not. But I want RPG to occupy its place, and not someone else’s place. And that’s one of the key points here.
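One concrete way to see the flexibility point: LLM replies rarely come back in a rigid format, so the glue code around them needs to tolerate variation. Here is a small sketch of the kind of tolerant parsing a dynamic language makes easy; the fence-stripping behavior it assumes (models wrapping JSON in a markdown code fence) is common but not guaranteed.

```python
import json
import re

def extract_json(reply):
    """Pull a JSON object out of an LLM reply that may wrap the data
    in prose or a markdown code fence."""
    # Strip a ```json ... ``` fence if the model added one.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", reply, re.S)
    candidate = fenced.group(1) if fenced else reply
    # Otherwise fall back to the first {...} block in the text.
    if not fenced:
        brace = re.search(r"\{.*\}", candidate, re.S)
        if brace:
            candidate = brace.group(0)
    return json.loads(candidate)
```

This is exactly the category of code that is a few lines in Python or JavaScript and a genuine project in a strictly typed, fixed-length world.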
Your core database integrations, your core systems, all that logic, any financial processing and things like that: 100% do it in RPG or COBOL. Those are great languages for that. LLM integrations, API calls, JSON processing, those kinds of things: not a great place for those languages, and you’ll get better results in these other technologies. One of the main reasons for that is open source technology. If I talk to an LLM and I say, make an API call in RPG, the statistic that I have seen in the applications that I have worked with is that it takes, on average, approximately 300 times as much code to call an API effectively, at a production-worthy level, in RPG as it takes in JavaScript. That doesn’t mean that RPG is bad. It’s not that JavaScript is great; it’s that JavaScript is designed to call APIs. When you get to real production integrations, it does it really elegantly, typically with a single line of code. Again, nothing wrong with RPG there; this is just not what it was designed to do. Those open source modules allow us to do incredible things with a single line of code. Which means that when my AI assistant is trying to generate that call, if I’m generating it in RPG, and by the way, I’m including JSON parsing, authentication, and security in the API call, that’s where I got the 300 from. I saw some people wondering why it’s so much; that’s why, I’m counting all of those things.
Aaron Magid 58:11
If the LLM is generating 300 lines of code, that’s 300 opportunities for bugs, and that’s what I’ve seen: you will get worse code out of it. If it’s generating one line of code, it’s usually a lot better. And so again, nothing that I say here should be taken to mean that RPG is a bad language, or that COBOL is a bad language, or anything negative about the IBM i. I am not here to say any of that. But there are things that these technologies are designed to do, and there are things that they are not designed to do. And in the interest of not being the bottleneck, we need to take a pragmatic, practical view of those things and say: this is what this technology can and should be used for, and so that’s what I’m going to use it for. And again, I have worked with people, I am working with people right now, who have never seen an open source language in their entire careers, who are building sophisticated applications in those same open source languages, because these assistants are writing all the code for them and they’re reviewing it at a high level. So we really are able to do that. I’m just going to hammer this a little bit more: tooling. Standard tooling is very, very important. Again, if I want to use Cline, I need to be in VS Code, and I need to have my code in git. If I’m not using those tools, if I’m using a proprietary change management system that does not support that, I can’t use it. If I don’t have a sophisticated, modern build process, I can’t test the changes, and I can’t ship them out. If I don’t have unit testing and regression testing, I can’t test my changes at all. I can’t use the things that the AI is generating for me.
I have to wait for human QA, which means that our human QA now becomes the bottleneck over our AI, and we can’t actually move as fast as the AI. So again, we have to use these tools, and we’ll be doing some other sessions on that later on in the conference. I’ve talked through a lot of this already, but I like this slide, so I’ll put it up on the screen for a second; take a second to read it. But
Aaron Magid 1:00:35
Last thing that I want to mention to everybody before I shut off here, because I know I’m actually a minute over time, is prompt engineering. I just want to put that term out there; I want everybody to remember it. There are a lot of different things that go into it, but prompt engineering, fundamentally, is the science of writing a prompt for an LLM which will give you the outputs that you actually want. It is actually hard to get really good outputs, and there are a lot of pieces there that are important. This is not technically a session on prompt engineering, but I’m happy to talk anybody through it after the session if you’d like. But that’s the last thing that I wanted to mention: once we’ve got the LLMs, once we’ve got the tools, once we are no longer the bottleneck, we also need to make sure that we can use them effectively. And I would say that prompt engineering is a discipline at least as complicated as coding; that gives you a sense of the way that I look at it, what the capabilities are, and how important it is to learn it. Otherwise you’re going to get outputs like this, and I have actually seen some pretty egregious outputs from some of the models that I work with. So anyway, whoops, I just went way too far; let me get my thank-you slide up here. So, a quick recap. Thank you all for coming; I hope that was useful. A quick review of what we did: we used an actual AI assistant to generate some code. That was Cline, C-L-I-N-E; again, I strongly recommend that everybody take a look at it. I was using that in VS Code, and I was using all these technologies that we’re talking about.
We also did some sentiment analysis on some real-time data from an RPG program, to show how to use these for runtime processing. And we also talked to GPT. One thing that I didn’t get to, but it’s okay, we’re out of time, I just want to mention what I do with GPT: having the voice chat open on my phone, asking it questions, having it there as that on-call subject matter expert to answer your questions. It does a really good job. I know we talked about that. But anyway, that’s what we’ve done here, and hopefully that has given you a picture of how we can actually use these technologies moving forward, how we can get the best of all worlds out of this, and really supercharge our development. So thank you all. I hope that was useful.
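On the prompt engineering point from a few minutes ago, a small sketch can make the idea concrete. Three of the levers that usually matter most are an explicit role, explicit rules, and an explicit output format; the specific wording below is purely illustrative, not a canonical template.

```python
def build_prompt(task, output_format, constraints=()):
    """Assemble a prompt with an explicit role, rules, and output format.

    Being explicit about the output format is one of the simplest ways
    to get parseable, consistent replies instead of free-form prose.
    """
    lines = [
        "You are a careful assistant. If you are not sure, say so "
        "rather than guessing.",
        "",
        f"Task: {task}",
    ]
    if constraints:
        lines.append("Rules:")
        lines.extend(f"- {rule}" for rule in constraints)
    lines.append(f"Respond with {output_format} and nothing else.")
    return "\n".join(lines)
```

For example, the session’s sentiment check could be phrased as `build_prompt("Rate the sentiment of the message below on a scale of 1 to 10.", "a single integer from 1 to 10", ["Do not explain your rating."])`, which pins down the reply enough to parse it reliably.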