Yes, I Can: Modernizing RPG Applications with AI and Real-Time Analytics
Step into the future of IBM i development by learning how to enhance your RPG applications with artificial intelligence and real-time, graphical dashboard monitoring capabilities. This demo-driven session showcases a complete modern development stack that combines the power of AI with industry-standard observability tools. Watch as we rapidly transform a traditional RPG application using AI-based code generators into an observable system that can process natural language commands and provide real-time business insights through dashboards.
Through practical demonstrations, you’ll see how to:
- Integrate AI capabilities directly into your RPG applications
- Process natural language commands for order processing
- Generate new RPG code using AI assistance
- Instrument your applications for real-time monitoring
- Create dynamic dashboards using Grafana
- Track key business metrics and performance indicators
- Set up automated alerts and notifications
We’ll walk through a real-world scenario of modernizing an order processing system, demonstrating how to add AI-powered text message processing while simultaneously monitoring order volumes, transaction values, and system performance through custom dashboards. You’ll learn how to leverage both AI and open-source monitoring tools to create a fully modern development environment for your IBM i applications.
Key Takeaways
- Practical understanding of AI integration patterns for IBM i
- Working knowledge of modern monitoring tools and techniques
- Real-world examples of successful modernization
- Implementation strategies for your own applications
- Best practices for combining AI and monitoring capabilities
We look forward to seeing you there!
Video Transcript
Maia Samboy 00:03
Hi, everybody. Thank you for joining us today. We still got a couple more people coming in, so I’ll just give it a couple more seconds.
Maia Samboy 00:16
Welcome to Yes, I Can: Modernizing RPG Applications with AI and Real-Time Analytics. You all probably know the drill by now, but if you would like to ask a question, the Q&A box is in the Zoom toolbar, and we are recording this session. You will all receive a copy of that recording tomorrow morning. In this session we do have quite a few of us and a lot to cover. We will be going over how to integrate AI capabilities directly into your RPG applications, process natural language commands for order processing, generate new RPG code using AI assistance, instrument your applications for real-time monitoring, create dynamic dashboards using Grafana, track key business metrics and performance indicators, and set up automated alerts and notifications. And without further ado, I'm going to pass over to you, Dan, so you can get us started.
Daniel Magid 01:13
Great, thanks, Maia. Let me go ahead and share my screen here.
Daniel Magid 01:25
All right, great. Welcome, everybody. I just want to add one thing to what Maia said, because we're not just for RPG programmers. So all you COBOL programmers out there, this works for you as well. In fact, it was very funny. We had a call earlier from a COBOL programmer who asked us, gee, should I come attend the webinar? And we said yes. And he said, well, I don't know, being around RPG programmers makes me itchy. So I'm not quite sure what that's about. Anyway, it will work with RPG, it will work with COBOL, whatever applications are running on your IBM i. And I'm really excited about this webinar presentation. I think it's going to be fun because it's going to be much more show than tell. I'm going to go through a few slides just to introduce things and set some context, and then we're really going to get into actually working with the IBM i and working with integrations using our AI-assisted generator. We're also going to take a look at modern dashboards for monitoring all of the communications with your IBM i and keeping track of what people are sending you, what's in the messages you're getting, what the volume of messages is, what the errors are, all the kinds of things you might need to know, not only to create APIs and create integrations, but also to manage them and monitor them. And by the way, it's going to be me and Aaron. The reason the two of us like to do these things together is we want to be able to show legacy systems. I've been around the IBM i for decades. For those of you who know me, I've been around since the early eighties, so I've been around this sort of older technology for a long time. Aaron grew up in the whole open source world, so he's been around the open source side. So we've got kind of old and young, and I'll let you figure out who is who. So let's talk a little bit about what we're going to talk about. We're going to talk about connectivity as the foundation for modernization. 
So our philosophy at Eradani is that what really makes it possible to do all the modern things with the IBM i is creating the connectivity, the layers around the IBM i that allow you to talk to whatever technology you might want to work with, so you don't have to go in and replace all those RPG applications. They're running the core business functions that you have in your business. They work, they've been around for a long time, they are optimized for how your business operates. But what we want to do is make it possible for them to talk to everything else. And that's really what the connectivity layer is about. Let's make it so the IBM i can talk to and with any technology that might come down the pipe. Then we're going to talk a little bit about the different kinds of ways you can connect to the IBM i and some of the things that you can do. Then we'll take a look at the actual generative AI and how it can help you build the applications, again, RPG or COBOL applications. And then we're going to look at a lot of sample use cases. So we're going to show you some examples of actually doing this work in real time. And then, as I said, we're also going to talk about the operations side, actually managing and monitoring those connections. And as Maia said, if you've got questions, please put them in the chat. We'll be monitoring the chat, and at the end we'll also have some time for Q&A. So just real quickly, those of you who've seen presentations that I give know that I like to start with this slide, which is the Eradani philosophy of the world: there are a lot of people out there who think that the IBM i is this old technology box that's limited to green screens and Db2 and COBOL and RPG programs, and basically that's all it can do, when the reality is you can do anything with the IBM i that you can do with any other system. You can do all the modern things. You can do mobile and web and Internet of Things. 
You can do APIs and AI, as we're going to see today. You can do all that with the IBM i and take advantage of all the architectural advantages that the IBM i has always provided: its reliability, its security, and its low cost of ownership. So you can do all that stuff with the IBM i and take advantage of the new technology as it comes out. That's really Eradani's purpose in life: to help IBM i users do that. And so the latest thing is real-time dashboards and AI, so that you can now do the kind of latest thing that people are doing and do it with your IBM i. So basically, underlying what we're going to be talking about today is the Eradani Connect Integration Hub, which is this framework for creating integrations. Building an API is much more than simply creating an HTTP endpoint. You actually want to make sure that any connection you're creating to your IBM i is secure, that you know who's coming in and what it is that they're allowed to do, that you're monitoring it, that you can troubleshoot if things go wrong, that you can manage the operations. The Integration Hub is really the framework that all of this is built on. It provides lots and lots of different integrations. We'll talk a little bit about APIs and messaging, so if you want to connect to message brokers like Kafka or Amazon SNS and SQS or Google Pub/Sub, you can talk to different kinds of messaging layers, and you can do things like EDI. So you can actually modernize some of the connections that we've been using for years and years. And you can see on the left and the right there the different kinds of things that we've worked with customers to connect. So let's talk a little bit about the underlying technology, what makes all this integration work. So the purple box here is really that Eradani Connect framework. 
When we generate these connections, these APIs, these integrations, we provide a lot of functions that come with the things that we're building. So we generate the code to do your authentication and authorization. If you want to use JSON Web Tokens, or you want to use OAuth 2.0, or you want to use single sign-on technologies like Active Directory or LDAP or SAML or Kerberos, we generate all that code for you so you don't have to write any of that. We do all the encryption. We generate the code to do data transformations between unstructured formats like XML, JSON, and comma-delimited files, and we'll turn that into the IBM i structured data types and vice versa. So we can do those translations back and forth; you'll see some examples of that. We also look at the data that's coming in to make sure that nobody's sending in injection attacks, so they're not sending in executable code with the data. We're going to make sure the data is formatted properly, so that they're not trying to send alphabetic data into a numeric field where they're going to crash your programs. So we generate the code to look at the data and make sure that it's correct. And we also generate all of the logging code, all of the monitoring code, all of the error handling code. So all that's happening for you automatically. We can connect to the IBM i in a variety of different ways. We can use ODBC; we can use things like XMLSERVICE. We also have special connectors for special use cases. If you're sending very, very large messages, messages that maybe have gigabytes of information in them, we have a connector specifically for sending large messages. If you want to send a lot of records, like you want to populate a data lake or a data warehouse with data from your IBM i and you're sending out millions and millions of records, we have a connector that is optimized for that. 
We have an event-driven connector that allows you to turn the IBM i into a publisher or subscriber to event brokers. And then we also have a built-in FTP/SFTP server, and then support for EDI. So we have a lot of different connectors, and these are just configuration options that you use when you generate your APIs. If you decide to change connectors, you can just change the configuration option, and that will change the connector that your system is using. So you don't have to write that code, you don't have to maintain that code. So we're going to talk a little bit about generating these connections. How do I generate these integrations? How do I secure the integrations? And then how do I manage them and how do I monitor them? So let's talk first about this whole idea of generating these connections. The way we've done this is we've set up a workbench inside of VS Code. Now, anything I'm going to show you in VS Code you actually can do through the IBM i command line, but we created a plugin for VS Code. We know that IBM is really moving towards the VS Code environment, and we wanted to be on board with that, so we put the plugin directly into VS Code. You can create these integrations right inside VS Code, and we're going to take a quick look at exactly how that works. To get into the show part, I'm going to bring up my VS Code workbench. Now I've got my VS Code workbench here. I'm going to start out with the simplest thing we do, which is generating an API that will allow me to access a database query. We can call programs, we can call queries, we can call series of programs, we can call stored procedures. I'm going to start out just with a database query. I'm going to bring up the plugin, and I'm going to say I want to generate an inbound API in Eradani. 
An inbound API means I want to set up a connection, an API, so that people can access something on my IBM i. Somebody's going to come from the outside and call that API to get some access to the IBM i. Again, we can call programs or stored procedures; I'm going to just do a query here. I just need to tell it where the code is going to go, where this endpoint is going to be. So I'm going to say it's /rpg or /api... actually, let's put it at demo/customers. I can use whatever method I want to use, and then I can just give it the query. And the query can be as sophisticated, as complex as you want it to be, but I'm going to make a real simple one here
Daniel Magid 11:23
and I'm just going to call this function customers all, because I'm going to get all the customers from my customer table. I could also give it a bunch of input parameters if I wanted to, but I'm just going to go ahead and generate just like that. So basically now Eradani Connect is going to generate the code for that API. And again, if we go back to the slides for just a second, it's generating all the stuff we talked about here: the security code, the encryption code, the data transformation code, the data sanitization code, logging, error handling. All of that stuff is being generated right now along with this API. It has done that; that's all done. Now I'm going to go ahead and just build it. We're basically building now the basic API that we're working with. Actually, I can follow that over here, where it's actually doing the build. It's done. Now I'm going to start up my API server.
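To make the "data sanitization" idea above concrete, here is a minimal JavaScript sketch of the kind of check-and-convert logic such a generated layer performs before data reaches an RPG program. The schema and field names are invented for illustration; the code Eradani Connect actually generates will differ.

```javascript
// Hypothetical field schema for a record -- names and lengths invented
// for illustration; the real layout comes from the program's parameters.
const schema = [
  { name: 'CUSTNO', type: 'numeric', length: 7 },
  { name: 'ORDQTY', type: 'numeric', length: 5 },
  { name: 'ITEMDS', type: 'char',    length: 20 },
];

// Reject bad data (e.g. letters in a numeric field) before it can crash
// the RPG program, then render the input as a fixed-format record.
function toFixedRecord(input, schema) {
  return schema.map(({ name, type, length }) => {
    const raw = String(input[name] ?? '');
    if (type === 'numeric') {
      if (!/^\d*$/.test(raw)) {
        throw new Error(`${name}: expected digits, got "${raw}"`);
      }
      return raw.padStart(length, '0');      // right-adjust, zero-fill
    }
    return raw.slice(0, length).padEnd(length, ' '); // left-adjust, blank-fill
  }).join('');
}
```

The same idea runs in reverse on the way out: fixed-format fields from the IBM i are split back apart and emitted as JSON.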
Daniel Magid 12:26
That's done. Now I'm going to go over to my browser here. We'll go out and we'll hit that API endpoint. I'm going to go to the API demo customers endpoint, and I get back my JSON payload. I sent in a query. It would have given me back IBM i formatted data, but the engine automatically transformed that into JSON for me. So now I have the JSON from that IBM i query. So basically now I've set up the stub of the API, and it's actually a working API. If all I wanted to do was get the information from the query, I'm done now. But let's say I actually want to add some business logic to this process. What I'm going to do is go back over to my workbench. Notice I have this box here in my workbench that says, once Eradani Connect has received the input from my API call, it should... What I'm doing now is asking it to go ahead and perform some business logic, asking it to do something else with that API information. I've actually already built some stuff here, so let me just go ahead and bring up my
Daniel Magid 13:49
document with the information in it.
Daniel Magid 14:02
Okay, so here I have it. So here’s the document. Oops. Let’s go over here
Daniel Magid 14:18
and I'm going to just copy this text out of here, go back into my workbench, and paste it into the box. Okay, so basically what I'm doing is talking to the AI engine in English. I'm using a large language model, and I'm just saying, in English: the query returns address information for each customer. Use the returned address, city, state, and zip code fields as input to a call to the geocode.xyz API. Notice I'm not giving it a URL or anything about how to call that API. I'm just saying, call that API with that information. And what I want back is the latitude and longitude for each address. Now, to call that API, it needs an API key, so I'm giving it the API key as well. And I'm saying, just add that to the list of data that I'm getting back. So I'm going to go ahead and generate that now.
Daniel Magid 15:17
So again, now it's going to generate for me. It's going to add to my API code, which, by the way, I can see right over here. I can see exactly what it's doing, so I can follow along as it's generating the code for the things that I'm doing.
Daniel Magid 15:39
We’ll give it a second to generate that code.
Daniel Magid 15:45
Now you can see it's added a whole bunch of code to my API. Now I'm going to go over here and build it again, same process. I can just watch it building.
Daniel Magid 16:03
So notice it's generating JavaScript code for me, but I don't need to know anything about the JavaScript code; it's generating it for me because I can just tell it in English what I want it to do. Now I'm going to refresh that, get the new code into my API, then go back over to my browser and hit that API again. So I'll just refresh the API now. It might take a little longer here, because it actually has to go out now and make an external call. Oh, now there it is. It came back. You can see on each record now I have the latitude and longitude. So it's added the latitude and longitude to that API call, and I can continue to iterate on this and add more and more information.
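As a rough illustration of what generated enrichment code like this does, here is a hedged JavaScript sketch. geocode.xyz is the service named in the demo, but the parameter names (`locate`, `json`, `auth`) and response fields (`latt`, `longt`) are assumptions based on that service's public documentation, not taken from the generated code; the record field names are invented.

```javascript
// Build the geocode.xyz request for one customer record.
function geocodeUrl(rec, apiKey) {
  const address = `${rec.ADDRESS}, ${rec.CITY}, ${rec.STATE} ${rec.ZIP}`;
  const u = new URL('https://geocode.xyz/');
  u.searchParams.set('locate', address); // assumed parameter names
  u.searchParams.set('json', '1');
  u.searchParams.set('auth', apiKey);
  return u.toString();
}

// Merge the service's response onto the record. geocode.xyz is assumed
// here to return latitude/longitude as `latt` and `longt`.
function withCoordinates(rec, resp) {
  return { ...rec, latitude: resp.latt, longitude: resp.longt };
}

// In generated code the actual call would look something like:
//   const resp = await (await fetch(geocodeUrl(rec, key))).json();
//   const enriched = withCoordinates(rec, resp);
```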
Aaron Magid 16:50
Right. And actually, Dan, I just want to focus on one of the things that you said before, because we kind of went by it quickly, which is that you didn't have to actually get in and modify the code for that integration. If you've been to our presentations before, you've seen us say this for years: our philosophy is that we want to use the right tool for the job. So if we're doing API logic, we're going to do that in a language like JavaScript or TypeScript, because those languages are designed for that task. They're going to do much more sophisticated integrations with a lot less code than, for example, trying to do it directly in RPG. The hurdle that we've seen a lot of companies struggling with, which I think is totally understandable, is: if you want to use the right technology for the job, that means you actually have to know how to use that other technology. You have to know how to use multiple technologies. And one of the really powerful things here with the whole AI revolution that's been going on is the ability to work in languages that you don't necessarily know or are not necessarily an expert in. I just wanted to make sure to focus on that. That's something that Dan did here. Dan is not a JavaScript programmer.
Daniel Magid 18:06
Actually, Dan is not a programmer.
Aaron Magid 18:09
I was trying to be nice, but he can build this, right? And that's, I think, very powerful, in that it means that we can use the right tool for the job even if we don't necessarily have any experience with that particular tool.
Daniel Magid 18:29
Great. Now there are a lot of other really cool things you can do with the AI. For example, if I look over here and I say, gee, I don't know what this logger.debug thing is, what does that do? I can say: add comments to the code to explain what the logger.debug function does, in a way that can be understood by a non-programmer like Dan. I guess I won't put that in there exactly.
Aaron Magid 19:13
It’ll say, who’s Dan?
Daniel Magid 19:14
Exactly?
Aaron Magid 19:16
Actually, you know, he probably would, that’d be a funny thing to test. It actually probably would try to put the name in there. I bet it would. It would probably address all the comments to you.
Daniel Magid 19:28
So again, now it's going back through, and it's generated, and notice it's adding to the code. So I can just continue to add to the code. I actually could also tell it to remove code, say, oh wait, remove what I just asked you to do, or remove this part of the code. So you can actually tell it not to do things as well. And here you can see it has now added the comments: log a debug message with the input arguments; this allows seeing what arguments were passed, to help with troubleshooting, without having to set breakpoints. So that's what it's doing. I can actually tell it: go through the entire set of code and add comments to help me understand what's happening inside the code. So again, you can iterate on this and continue to add stuff. Now, we also talked a little bit about how you can monitor exactly what's happening with your API calls. I'm going to flip back over here, go back to my browser.
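The per-endpoint monitoring being described (request counts, response times, errors) can be sketched with a small in-memory recorder. This is purely an illustration of what gets tracked, not Eradani's implementation; a production setup would export such metrics to something like Prometheus for Grafana to chart.

```javascript
// Minimal in-memory metrics recorder: per-endpoint request count,
// error count, and latency (average and max).
class EndpointMetrics {
  constructor() { this.stats = new Map(); }

  // Record one request: endpoint path, elapsed milliseconds, success flag.
  record(endpoint, ms, ok = true) {
    const s = this.stats.get(endpoint) ??
      { requests: 0, errors: 0, totalMs: 0, maxMs: 0 };
    s.requests += 1;
    if (!ok) s.errors += 1;
    s.totalMs += ms;
    s.maxMs = Math.max(s.maxMs, ms);
    this.stats.set(endpoint, s);
  }

  // Summarize one endpoint for display on a dashboard panel.
  summary(endpoint) {
    const s = this.stats.get(endpoint);
    return s && { ...s, avgMs: s.totalMs / s.requests };
  }
}
```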
Daniel Magid 20:35
Aaron is actually going to show you a lot more about this in a minute, but I'm just going to show you the basics real quickly. Here's the monitoring dashboard, where you can keep track of how many requests are coming in, what kind of response time you're getting on your requests, how much CPU you're using, how much memory. If there are errors, you can go in and look at the errors. You can actually drill into each error and see what was sent out, what data was sent out, and what data I got back from that particular call. So you can see the data that came back, and you can also see what IP address it's coming from. So there's a lot of information here for monitoring what's happening with your API environment. You can actually look at every single API endpoint and see what's happening with it. But again, Aaron's going to go into more detail on that, so I'm going to skip over it for now. Let's go back to our workbench. What I'm going to do now is take it up a level of complexity and say, this time, instead of calling a database query, we're going to call a program. So this time I want to call an RPG program. Let's take a look at that. I actually have an RPG program out here. Hmm, it keeps going back to asking me to reshare. All right, there we go. Okay, so now I'm going to just close off these things we had before. Now, we've got an RPG program out here, a sample program to call. It's this GETORD RPG program, and if I look at that program, you can see that it takes in a customer number and an order number, and then it returns a dimensioned data structure with the order information. So it's got the order header, and then it's got detailed line items. So it's got a bunch of data that it's going to return. Over here, I can see there's a YAML file. 
That YAML file is basically the translation table we use for translating between things like JSON and XML and the IBM i data structures. We'll generate that; I'm going to show you generating a YAML file in a minute. But let's go in and say I want to create an inbound API now that is going to call that program. Notice I'm going to give it a program instead of the query, and I give it the location
Daniel Magid 23:12
orders, and it's a GET, and we'll just call this get orders, and I'm going to have it generate the code now for that. So now it's generating the code for calling a program instead of calling a query. Again, I could do a stored procedure as well, but it can generate code to call basically anything on the IBM i. Again, I get that same AI box here where I could add more information. Let me close this to get some more space on the screen. So it has now generated that for me, and same as before, I'm going to go ahead and build it.
Daniel Magid 23:58
So now it’s actually compiling the code again.
Daniel Magid 24:06
Okay. And now I’m going to go ahead and start up the API server,
Daniel Magid 24:15
go back over here and say, this time I want to call my demo orders. As we talked about, it takes in a customer number and an order ID, and it gets me back this very complex JSON data structure. You can see, if I open it up here, there are arrays within arrays within arrays. So it's now called a program and gotten back the JSON response from calling that program. Now, one of the main reasons I did this is because it gives me a JSON payload to show you the next thing, which is, let's say I want to go the other way. I now want to call from RPG out to an external website, something that gives me back a JSON payload. I'm going to copy this, and let's just pretend for a moment that I didn't actually get this JSON payload from the IBM i, that I got it from some Google API or some other business partner API. What I have is just a sample of the JSON. What I can do now is go to my AI engine and say I want to translate that into a YAML file. Remember, we work with YAML; that allows us to do the translations to the IBM i. So I want it to give me back YAML. I am actually here directly in Claude. I've created a generator in Claude that will take JSON and turn it into YAML specifically for working with Eradani. I'm going to go ahead and do this. By the way, it took me about five minutes to create that generator. So now I've given it the JSON, and I'm going to run this inside of Claude. And here it says: here's the Eradani YAML file created from the provided JSON. So it has created that YAML file for me. So now, as soon as that's done,
Daniel Magid 26:12
give it another minute to finish. Okay, it’s done. Now I’m going to grab all that YAML, go back over to my workbench,
Daniel Magid 26:27
and I’m going to create a new file out here,
Daniel Magid 26:33
and I’m going to call it get orders.
Daniel Magid 26:41
And then I'm simply going to paste in that YAML file. Now I've given it that YAML file. Now I can go to that YAML file and say, okay, Eradani, create for me an outbound call from my RPG code. I'm going to put that at demo API orders, and the function is get orders, and we'll say RPG this time. I'm going to have it create an RPG program called GETORD, and I'm going to tell it to generate that code. So now it's going to generate the actual RPG code. And this, again, could be COBOL code. It's going to generate the code that I need in order to call out. You can use this as a copy member that you copy into an existing program, or you can run it by itself. So it's up to you; you can copy and paste it into the program. It's completely up to you how you use it. So now it's created all of that, and we'll just take a quick look at what it did here. So I've got this GETORD RPG program. Let me close some of these windows so we have some more space on the screen. I can go in and look at this RPG program, and here's the RPG program that it generated. Here's the program, these are the data descriptions, and it gives me all the information about what it did and what this program is for. And then it's created all the code that allows me to call out to an API. So down here, you'll see I have the actual Eradani procedure calls: a send request call to send out a request with the data, and then a receive command that says, okay, get the data back and process it. So basically, I've created a whole call-out system now that I can add to my RPG application, or again, my COBOL application, where it will call out to an external API, get the data back, and then perform whatever business logic I need. 
And I could use the generator to generate business logic to actually process that incoming data that I'm getting back from my call. So you can go both directions, inbound and outbound. That's the basics of generating these integrations using Eradani Connect and the AI assist engine. What we're going to do now is look at some specific use cases. We're going to actually... (Aaron: If I can jump in?) Please do.
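The JSON-to-YAML translation-table step demonstrated above boils down to walking a sample payload and describing each field's path, type, and array nesting. Here is a minimal sketch of that inference; the output shape is invented for illustration and is not Eradani's actual YAML format.

```javascript
// Walk a sample JSON payload and emit a flat list of field definitions
// (dotted path, inferred type, array nesting depth) -- the same kind of
// information a translation table has to capture to map JSON onto
// fixed IBM i data structures.
function describeFields(value, path = '', dims = 0, out = []) {
  if (Array.isArray(value)) {
    // Infer the element layout from the first element of the array.
    describeFields(value[0], path, dims + 1, out);
  } else if (value !== null && typeof value === 'object') {
    for (const [k, v] of Object.entries(value)) {
      describeFields(v, path ? `${path}.${k}` : k, dims, out);
    }
  } else {
    out.push({ path, type: typeof value, dims });
  }
  return out;
}
```

Feeding this a payload with "arrays within arrays" yields one row per leaf field, each tagged with how deeply it is nested, which is exactly what a downstream RPG data-structure mapping needs to know.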
Aaron Magid 29:18
Before you jump in, we just had some questions come in, so I wanted to.
Daniel Magid 29:22
Yeah, yeah, yeah, go ahead.
Aaron Magid 29:23
Yeah. So, a couple of things here. I wanted to get through the questions that we have in here. So just to wrap up one of the things that Dan was doing in there: he actually pulled up the code, but I do just want to drive home the point that that is not something you actually have to do. One of the key points that we wanted to achieve in these integrations is to generate a program that can be called immediately once it's generated and act as an interface to these newer technologies. The reason we do that is so that your RPG or your COBOL or whatever else you're writing in can have access to these technologies. Again, the fundamental philosophy behind this entire system is to say: there are a whole lot of capabilities out there, and sometimes we have difficulty integrating those directly with RPG. Other languages are going to have more access to some of these technologies. So the strategy here is to say, let's have RPG talk to another layer, which we can generate with an LLM, and that layer is then going to do whatever it is that we need it to do. In that way, we're able to give RPG access to all of that wide-open world of newer technology that's available. There were a couple of questions that came in, and I wanted to just address those really quickly. One was: which large language models are supported by the natural language interface? So I wanted to answer that one for everybody here. When we were developing this, we actually tried out several different models, and none of them were reliably generating what we needed in order to build really good integrations. So the natural language interface that you're seeing on Dan's screen is actually a proprietary Eradani interface. There are several different components in there. We do use some public LLMs for some of the generation steps; some of the prompt processing is being done specifically by Claude, from Anthropic. So we do use that one as part of the process. 
One key point that's always important to mention when we work with this is that our system does not learn from any of the inputs that you give it as part of your generate operations. I don't know if this is the question that's behind that question, but it's a key point: while it's aware of your code as you're generating integrations, it's aware of your entire application, it's not going to incorporate that into its knowledge base. Meaning, if you're an Eradani Connect user, you're not going to run a generate operation and have someone else's code spit out into your application, because it's not incorporating your code into the model. So I do just want to mention that. So, I guess, the shorter answer to that question, in terms of which language models are supported: it's a proprietary process that we're using there that has some public LLMs and also some proprietary generators in there.
Daniel Magid 32:44
Basically, we're creating a framework around the LLM so that we can do the specific things we need to do to support IBM i users. That matches a move in the whole AI industry, where you're seeing more and more of these very specific use cases for the AI, so that the AI doesn't just go off and generate anything. It has some guardrails about the kinds of things it's supposed to be doing, and it gives you much, much more reliable results.
Aaron Magid 33:10
Right. The second question here, which is related, is: does the AI only build, or can it also tell you what might be wrong with your code, if applicable? Absolutely, it can help you with errors. I might do that later. One of the things that I like doing with it is saying: I'm getting an error on line 34 that says cannot find name XYZ; fix it. And it generally does a pretty good job of finding those errors and fixing them. So that is absolutely something that we do here. And when you're in that iterative process, that's a fairly common thing. You give it a prompt, you might describe something that doesn't quite work, it's going to generate something, your IDE might report some errors, and that's where we iterate and say, okay, let's take it from there.
Daniel Magid 34:03
By the way, under the covers, in the Eradani use of the LLM, we're not just going out and generating the code. We actually have a multistep process that generates the code and then runs it through a second LLM to check it: is this code valid? Is there anything wrong with it? So we're doing a lot of checking to increase the odds that what you generate is just going to run. We're working very hard to get you as close as possible to an environment where you can just describe what you want and get code that runs, so you don't have to get into the JavaScript. But on the other side of that, what I want to make really clear is that we are generating standard JavaScript code. That means you actually can get in and work with it if you want to. As Aaron mentioned, the AI engine is aware of your code, so if it regenerates, it won't lose the changes you've made. That's a big advantage over traditional low-code platforms, which are proprietary vendor environments where you can only do what their functions or buttons allow. Doing it this way, you can generate using the AI assist engine, but if there's something you want to do that Eradani doesn't yet know about, you can go get that code, drop it in, and use it. You can use whatever technology comes down the pipe, so you're not limited by what Eradani can keep up with.
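To make the multistep idea concrete, here is a minimal sketch of a generate-then-validate loop. This is an illustrative reconstruction, not Eradani's proprietary pipeline (which is not public): the function names, the retry count, and the `callLlm`-style stand-ins for real model API calls are all assumptions.

```typescript
// Hypothetical sketch: generate code with one model pass, then have a
// second pass review it, feeding any reported issues back into the
// next generation attempt.
type LlmCall = (prompt: string) => string;

interface GenerationResult {
  code: string;
  issues: string[];
}

function generateWithValidation(
  request: string,
  generator: LlmCall,
  reviewer: LlmCall,
  maxAttempts = 2
): GenerationResult {
  let code = "";
  let issues: string[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Step 1: generate code from the natural-language request.
    code = generator(`Generate integration code for: ${request}`);
    // Step 2: a second model pass checks the generated code.
    const review = reviewer(`Check this code for problems:\n${code}`);
    issues = review === "OK" ? [] : review.split("\n");
    if (issues.length === 0) break; // valid on this attempt
    // Otherwise include the reported issues in the next attempt.
    request = `${request}\nFix these issues: ${review}`;
  }
  return { code, issues };
}
```

The design point is that the reviewer pass acts as a guardrail: invalid output is not returned to the user, it is routed back into another generation attempt.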
Aaron Magid 35:37
Quick side note here, another one that came up: would you expand on where the Eradani LLM is hosted? Those services are hosted in Eradani infrastructure. Essentially, we provide endpoints that are accessible to Eradani users. You have a product key in your application that allows you to access our endpoints. That system is all API driven.
Daniel Magid 36:01
And that’s just for the generation. We are not actually hosting your runtime environment, right, exactly.
Aaron Magid 36:06
We're not hosting the runtime environment, we're not hosting data, we're not processing any of that. We're hosting generator services.
Daniel Magid 36:17
Okay, is that it?
Aaron Magid 36:19
One other question here was about the generated RPG code: the style of the generated RPG is basically a hybrid of fixed format and free format, and the question is whether it will generate fully free-form. That is actually an option in the generators. You can go to free form and have it convert to free form if you want. That being said, I think it's important to note that this RPG code is not code you actually interact with. It will be generated and compiled and will run, and it essentially sits there and does what you need it to do. If it's a critical need, if you have a corporate policy that it has to be free-format code even if you're not looking at it, then you can use the converter to switch it over to free format. That converter is bundled in the development environment, so it'll do that. But again, it shouldn't really be code that's part of your daily work.
Daniel Magid 37:17
We need to keep moving.
Aaron Magid 37:20
Sorry, one other quick question I just want to answer, a follow-up on the hosted environment. Your runtime Eradani Connect application, once you generate this code, is all hosted on your servers. The generator services, the actual system you're talking to that's doing the generation, stay in our infrastructure; that's a proprietary process that stays in our services, so we don't host that on customer machines. But once you generate the code, that code is yours. It runs in a containerized environment wherever you want: on your servers, in the cloud, wherever you want.
Daniel Magid 37:56
Right. And the reason is that we have to have this hybrid environment where we're creating, as I talked about, the guardrails around what the AI is doing. We have a whole lot of machinery that tells the AI very specifically what it's supposed to be doing. We then talk to the AI engine, bring the result back, and run it through a whole series of checks before we return the code. So that infrastructure is what we host. Everything else, once it's generated and deployed, can run on Windows servers, Linux servers, or on your IBM i. It's completely up to you where you want to run it.
Aaron Magid 38:31
Yeah.
Daniel Magid 38:33
Okay, so let's talk quickly about the next use case, and that is security. As I talked about, when we first generate that integration code, we generate it with whatever security systems and authentication methods you've said you want to use. That's one of the advantages of doing this in JavaScript: we can use the latest version of OAuth, the latest version of encrypted token authentication, rather than trying to keep up by writing something custom in RPG or COBOL. So it's all happening with the latest versions of these things. But maybe you want to add to that, say a multi-factor authentication layer. Maybe you already have a multi-factor standard, something like Duo, or you're using Okta, that you're already using for multi-factor authentication, and you just want to add the IBM i to it. You can actually generate the code to do that, and that's what Aaron is going to show you an example of now.
Maia Samboy 39:43
Right.
Aaron Magid 39:43
So this was an interesting project we worked on a couple of years ago: bringing MFA to IBM i services. Because again, with this philosophy of bridging the gap between the technologies, it's a fairly straightforward thing to do. So what I'm going to do here is go through and generate that, and I'm going to use those same tools Dan was using, because those tools are very sophisticated and able to do a lot. When you couple the generative capacity of these LLMs and the generator services with all of the available technology in the open source community, you end up with a lot of components that you can bring together in an intelligent interface; you end up with very powerful tools. What I'm going to do here is show you a demo of working with Duo. Dan, can you stop your screen share so I can share? I'm going to use Duo specifically as my multi-factor authentication layer. I'm going into my workbench here, same thing Dan was doing earlier. I'm going to generate an API. We'll go through this quickly because you've already seen some of this. I'm going to call it "API demo balances." I like using this sample file that some of you will recognize.
Aaron Magid 41:17
We're going to say: I want to get the account balances that are in this file on my IBM i. That's information I might actually want to secure; I might want to be a little more careful about who I give access to it. I go through and generate my API just like Dan did earlier, run the build, start up my application, and then come over to Postman. I run this, and there are my records. Same thing Dan did before, just generating an API. Now, from there, I'm going to go back to this box. I've written out some of the things I need for the Duo process; I'm going to paste that in and explain what I'm doing while it's generating. This part is actually very important: I want to make sure it takes advantage of this open source module. This is one of the key points here. When I wanted to build this integration, I went to Duo and asked, how can I talk to you? I went to their site (I might actually have asked ChatGPT), and however I got the information, the answer was: well, you can talk to the API directly, but that's a pretty complicated process. You have to set up your signatures, do the whole encryption and hashing process, get everything right, generate your keys. Honestly, I didn't want to do any of that. I think I read the first couple of sentences of the documentation and brushed it off. So I looked for another option, and it turns out there is an open source module from Duo Security that handles that authentication for me: the duo_api module.
Again, this is a critical point, because what we want to do is fuse that open source code and all the open source technology that's available out there with the power of the natural language interface, so that we can build these applications super quickly. There's a critical piece of that philosophy here: even with an LLM, if I were doing this directly in RPG, I wouldn't be able to use these modules. I would have to generate all the code myself, which would be a huge amount of code. Again, the fusion of open source plus the generator services makes really powerful systems. You can see here it updated my code, and I gave it an API key and demo account information it can use to authenticate. I'm going to go back and repackage this. Actually, let me just explain the end of this prompt. What I said here is: use this key and this secret that I got from Duo. Run a multi-factor call via push notification. If the MFA call is approved, then run the database operation and return the result, and go get the account balances. If the MFA call is rejected, then return a JSON response with approved set to false, saying "MFA rejected." So once I build this and restart it,
Aaron Magid 45:13
I'm going to go over here to Postman and send this call again. Give it a second to come through.
Aaron Magid 45:26
And over here, I don't know if you can see this, but I actually have Duo open on my phone, and it's asking: do you want to accept or reject this request? I'm going to say approve, and when I do, you can see on my screen in Postman that it came back, ran the database query, and got me those records. Similarly, if I send that again and wait for Duo to come up, I come back here and deny it. Then I say no, that was not suspicious, so it doesn't report my demo, and it comes back with approved: false, MFA rejected. That's the exact message I asked it to send if the MFA is rejected in my prompt. Again, this is a fusion of the open source technology, using this Duo Security module to run a push MFA call (formatted, it's essentially four or five lines of actual code in here), plus the natural language to generate it. That's what I had to show there. Dan, I know we're getting a little low on time, so I'm going to jump back to you.
Daniel Magid 46:46
Okay, so let's see. The next one we were going to do is a texting one, but you just kind of showed that. Well, let's go ahead and do the text.
Aaron Magid 46:56
Yeah, yeah. Okay. You know, I can just jump in with that.
Daniel Magid 46:58
Yeah, go ahead.
Aaron Magid 47:00
So, sorry, I went a little over. There's another key use case of this technology that we haven't discussed so far. We've been using LLMs and generative AI for generating integration code: I've got an RPG program, I want to run this function, go generate the logic for me; or I've got an API, I want to do this, go generate it for me. But there's another use case, which is runtime generative AI, and there's a very powerful set of use cases there. Just to illustrate the difference: one use case is generating API code from one of these generative AI systems and putting it in production, and that's fine. But another interesting case is confirming information. We've talked about things like: I've got order data from a customer, and I'm going to send it back to them for confirmation and ask, "Do you need any changes to this?" Then allow the customer to say, "Oh yeah, you got my name wrong, I'm Aaron with two a's, not one, can you just change that for me?" and take that back and actually update our information in a reliable way. So there's another key point here. I have a program I'm going to show, and some of you may have seen this in prior demos, because I like this program and I've been using it for my generative AI demos. Basically, it's going to send a text message from an RPG program, again using that open source technology, to my phone, asking whether my program should continue or not. Based on what I say in my answer, it's going to translate that into actionable steps for my RPG program. So I'm going to pull up my 5250 here. Let me just connect and get this session going.
Aaron Magid 49:39
And I'm just going to put my library list together here. I know you can't see what I'm doing.
Aaron Magid 49:50
Okay,
Aaron Magid 49:54
so what I’m going to do here is I’m going to run this confirm program.
Aaron Magid 50:13
Okay, cool. You'll notice I've got a couple of prior chats here, but I've actually got a message on my phone that says, "Should we continue?" Same message I had in that prompt. I'm going to reply, "Sure." I just sent that back to my program, and you can see my program came back. I'm going to give it a 4, that's the spool file, and you can see "confirmation received, continuing the operation." My program understood what that meant. Now, okay, I could have pre-coded that "sure" answer in there. But one of the things I can do, because I'm taking that reply through an AI process, is handle much more sophisticated messages in my RPG program. So, one that I like to do: I just ran it again, so I got that "Should we continue?" One I really like is giving it a thumbs-up emoji, which is what I just sent in my messaging app. I come over here, take a look at that one, and it says "confirmation received, continuing the operation." And just for completeness, if I come back and say something harder to understand, something along the lines of, "Well, I thought about it and eventually decided that I'd rather not continue here," a much longer message in actual natural language, I can come back here, take a look at that spool file, and it says: yep, cancellation. Got it.
So again, one of the key points here is that there's another use case for this: not just generating code for developers, but also processing data at runtime. That allows me to take in natural language from my users and actually know what to do with it, so I can produce real, usable interactions much faster in my applications.
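The pattern in this demo is to constrain a runtime model call so that free-form user text (including emoji) collapses to one of a small set of actions an RPG program can switch on. Here is a hedged sketch of that idea; the prompt wording, the two-token answer set, and the injected `askLlm` stand-in for the real model call are all assumptions, not Eradani's actual implementation.

```typescript
// Sketch: map an arbitrary user reply to exactly one action token.
type AskLlm = (prompt: string) => Promise<string>;

async function interpretReply(
  userText: string,
  askLlm: AskLlm
): Promise<"CONTINUE" | "CANCEL"> {
  const prompt =
    `The user was asked "Should we continue?" and replied:\n` +
    `"${userText}"\n` +
    `Answer with exactly one word: CONTINUE or CANCEL.`;
  const answer = (await askLlm(prompt)).trim().toUpperCase();
  // Default to CANCEL on anything unexpected, which is the safe choice
  // before handing the decision back to the RPG program.
  return answer === "CONTINUE" ? "CONTINUE" : "CANCEL";
}
```

The RPG side then only ever sees one of two fixed values, which is what makes the natural-language input "actionable" for a fixed-logic program.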
Daniel Magid 52:40
One of the cool things about this is that instead of writing a complex mobile user interface with forms that parse data looking for certain values and fields, or drop-down lists, you simply have a natural language conversation: "Do you want to add anything to this order?" or "We've got this special deal going on; if you order ten more of these, you'll get a bigger discount." You let people converse with you through these natural language interfaces, rather than writing a whole UI application to accomplish the same thing. Right?
Aaron Magid 53:16
And again, going back to that original philosophy: working directly with these LLMs from RPG, doing the string processing, building prompts, and extracting outputs in an effective way, is going to take a lot more code and frankly not be as stable as doing it in a language that's actually designed for those kinds of operations. That's why we generate applications that have one foot on each side. My RPG takes in the inputs and does all my business logic, and that's great. Then, when it needs to talk to the LLM, it shifts over to the TypeScript layer, which is generated code to talk to an LLM, generated by an LLM, which is an interesting point. It handles that because it can do it in a couple of lines of code. So, getting back to that philosophy, it's a really important part of the process to make sure we're using tools suited to what we're trying to do, in addition to the natural language interface.
Aaron Magid 54:29
So Dan, I know you had some.
Daniel Magid 54:32
Points on here. Just a couple of things. We're actually not going to have time to do all the demos we had planned, so let me just go through a couple of other ideas, things that you can do. We'll probably have to spin up another session to go through more of these use cases. The other thing we support is EDI processing. You can send in an EDI document, and we can parse it, pull the field data out, and then use the AI engine, with the Eradani framework built around it, to take that data and map it into your database. That's another piece we support: the ability to work with your EDI documents. And then there's the ability to work with event brokers like Kafka, Azure Service Bus, or SNS and SQS from Amazon, so that you can publish messages. Eradani Connect will read those messages and turn them into something the IBM i applications can read. Instead of writing individual integrations to every one of your applications, you can simply have messages posted to topics in an event broker and have them read and processed by the backend application. So you can turn the IBM i into both a publisher and a subscriber to those messages.
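The broker-to-IBM i bridge described here boils down to a subscriber that receives a JSON message from a topic and reshapes it into a record an RPG program can consume. Below is a minimal sketch of that transformation step only; the message shape, field names, and fixed-width layout are invented for illustration, and the actual transport (Kafka, SNS/SQS, Service Bus) and data queue write are abstracted behind callbacks.

```typescript
// Sketch: turn a JSON topic message into a fixed-width record for RPG.
interface OrderMessage {
  orderId: string;
  customer: string;
  total: number;
}

// Hypothetical layout: ORDERID(10) + CUSTOMER(20) + TOTAL(9, cents,
// zero-padded) = 39 characters.
function toFixedWidthRecord(msg: OrderMessage): string {
  const orderId = msg.orderId.padEnd(10).slice(0, 10);
  const customer = msg.customer.padEnd(20).slice(0, 20);
  const cents = Math.round(msg.total * 100).toString().padStart(9, "0");
  return orderId + customer + cents;
}

// Called by whatever broker client delivers the message; the data
// queue writer is injected so the mapping stays testable.
function handleTopicMessage(
  rawJson: string,
  sendToDataQueue: (rec: string) => void
): void {
  const msg: OrderMessage = JSON.parse(rawJson);
  sendToDataQueue(toFixedWidthRecord(msg));
}
```

The same function works regardless of which broker delivered the message, which is the point of publishing to topics instead of writing one integration per application.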
Aaron Magid 56:02
Right. And Dan, if I can jump in on the EDI: one of the key things with EDI is that the process for an EDI integration is typically very complex. The big problem we see a lot of companies run into is onboarding new trading partners, or making changes to trading partners, because the mappings for the data are so complicated. We have all this non-standard data being transmitted back and forth, and we need to be able to process it as it comes in. While you were going through that piece, I actually went in and generated an EDI integration, and I just want to mention what I've got here really quickly in our last minute or two: the way we're approaching this problem of EDI mappings. Can you stop your screen share for a second?
Daniel Magid 57:05
Done.
Aaron Magid 57:06
The way that we're approaching this EDI process is by saying, just like with everything else, let's use these advanced tools to generate the mapping we need. In reality, the mapping process is usually not deeply complex; it's usually just very tedious in the number of field definitions involved. One of the things we have, and I know I'm doing this pretty quickly, is an EDI option in our generator that says: I want to generate a business process that's going to handle documents that look like this. What I did here is paste in a 204 EDI document that I have as a sample. What it gave me is a series of components for my integration layer. For example, it defined the database for IBM i Db2 that will store the appropriate data from this EDI document: it gave me the CREATE TABLE statements I need, with appropriate fields and lengths for that document. It gave me mapping code with the queries and configurations I need to actually run this integration and ingest the data into my application. It's doing all the mappings for me, doing all the processing, parsing out data using default values, pulling things out of looping fields. It handles that entire process so that I can send a document into an endpoint and have it processed from minute one, without having to figure out all of it myself. This is something we've been working on that is showing a lot of promise for developing EDI integrations and doing onboardings, reducing the time it traditionally takes to read through all the documentation and work with trading partners to get integrations up and running.
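For readers unfamiliar with what "parsing a 204" involves at the lowest level, here is a deliberately minimal sketch of X12 segment splitting. The generated integration does far more (schema inference, Db2 DDL, looping segments, default values); this only shows the first step, and it assumes the common delimiters of `~` for segments and `*` for elements, which real documents declare in their ISA header.

```typescript
// Minimal X12 tokenizer: split a document into segments, and each
// segment into its ID plus positional elements.
type Segment = { id: string; elements: string[] };

function parseX12(document: string): Segment[] {
  return document
    .split("~")                      // assumed segment terminator
    .map((s) => s.trim())
    .filter((s) => s.length > 0)     // drop trailing empty segment
    .map((seg) => {
      const parts = seg.split("*");  // assumed element separator
      return { id: parts[0], elements: parts.slice(1) };
    });
}
```

Even this toy version shows why hand mapping is tedious: every field is positional, empty elements are significant, and the meaning of each position differs per segment ID, which is exactly the bookkeeping the generator is automating.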
Daniel Magid 59:29
So in the 30 seconds we have left, you don't happen to have the EDI dashboard in a state that you could show?
Aaron Magid 59:36
You know what, that's actually a really good question. One second here; let me start this guy up and go over to my summary. I actually do have this. So I do have a dashboard over here. Dan showed you earlier a dashboard that we provide bundled with the Eradani Connect application; that's the Swagger Stats one. One of the key points about monitoring that we did want to mention, and I know I'm going over, is that all of the metrics provided by this application are in a standard format called OpenMetrics, or the Prometheus format, if you're familiar with that. The reason we do that is that basically every dashboarding system on the planet knows how to work with that data. I've got a dashboarding system here called Grafana that's showing me the data coming through my EDI system and giving me all kinds of metrics: average processing time, document size, data transfer sizes, number of documents per trading partner. It's giving me document processing rates (I'm processing documents at an extreme rate here), and it's capable of alerting me based on different conditions. So again, going back to that original philosophy: getting our tools into standard formats instead of trying to reinvent the wheel allows us to work with the latest technologies and with tools designed to do what we're trying to do. Instead of building my own alerting or monitoring system, I'm using the open source Grafana that I got for free, configured on my system, reading metrics from my IBM i applications in real time, because they're using the standards. That's the key point there. I know we've done presentations on that in the past, but I'm going to.
Daniel Magid 1:01:43
Thanks, Aaron. We may have to get back together and open this up again, but I notice that Maia has reappeared on the screen, so that probably means we're out of time, right?
Maia Samboy 1:01:51
We are just about two minutes over, but yes, we can definitely schedule another session if we want to get the rest of those demos done. Keep an eye on your email for the recording of this session, and if you have any additional questions, feel free to send them to me at any time and I'll pass them along to Dan and Aaron.
Daniel Magid 1:02:09
Just one quick answer on resource use on the IBM i: typically, it's very, very little. It's a very lightweight server, so it does not use much IBM i resource at all.
Aaron Magid 1:02:21
What we usually see is that when it's idle, it uses about 40 megabytes of memory and usually less than a tenth of a percent of a typical CPU, so it usually shows up as a zero in Work with Active Jobs when it's not doing anything. Obviously, the more you're doing with it, the more resources it's going to need, but even in high volume, thousands of calls per second, we rarely see it break 500 megabytes of memory or use substantial resources.
Daniel Magid 1:02:57
Great. All right. Maia, sorry.
Maia Samboy 1:02:59
No, you're totally fine. I'm glad people could stick around for those last couple of points. And again, if you have any follow-up questions, feel free to reach out to me. Thank you, everyone, for coming, and we will see you next time.
Daniel Magid 1:03:13
Thanks, everybody.
Aaron Magid 1:03:14
All right. Thank you all.
Maia Samboy 1:03:15
Have a good one.
Aaron Magid 1:03:17
All right.