Beyond Git - DevOps Pipelines for IBM i

Session Time

60 Minutes

November 8th, 2023

11 AM PDT | 12 PM MDT | 1 PM CDT | 2 PM EDT

You’ve learned the fundamentals of Git. You may even have your RPG source code in a Git repository. Now what do you do? 

You leverage continuous integration and delivery pipelines, of course!

Join IBM i expert Dan Magid for Part 2 of our “Git Demystified” series to learn how you can leverage the latest tools to automate builds and deployments, and manage move-to-production processes while maintaining control, compliance, and release governance.

Dan will provide recommendations for getting started plus live demos tailored to the IBM i environment. 

Save your seat for this advanced follow-up to “Git Demystified” and take your IBM i DevOps to the next level!

Transcript Text

Great, let’s get into this. So hello everybody, and welcome to Beyond Git: DevOps Pipelines for IBM i. I’m Mike Samboy and I’ll be hosting this session. So you’ve learned the fundamentals of Git. You understand how to manage development using Git for RPG and COBOL. You know the basics of a Git-based repository. But now what do you do? Well, you leverage delivery pipelines, of course. And that’s what we’re here today to talk about. So in part two of our Git Demystified series, Dan Magid is going to show how, with a little help from Eradani, you can leverage the latest tools to automate builds and deployments and manage your move-to-production process, all while maintaining control, compliance, and release governance.

Now I know all of you know the webinar drill by now. We’ve been on enough of them. But I do want to remind you that you can click on the Q&A button to ask a question at any time. We’ll get through as many of the questions as we can and follow up with you if we run out of time before answering yours. This session is being recorded, and I will send the recording out to you along with Dan’s slides from today.

Now for those of you who don’t know Dan, Dan Magid has decades of experience with the IBM midrange and is a frequent speaker on IBM i modernization. Dan began his career at IBM working with System/34s, System/36s, and System/38s. He went on to become CEO of Aldon and...

By the way, I started in kindergarten, just so everybody knows.

Yeah, that’s good. And I hope you don’t mind me hitting “decades” like that. I did hit “decades” a little hard. But anyway, Dan went on to be the CEO of Aldon. The Aldon change management system went on to become the leading software for change management on the IBM i at the time. And then Dan did a bunch of executive work for Rocket Software pertaining to the IBM i, and then he came and started Eradani, and we’re very glad he did. And Mitch, as you see, everybody knows Mitch. Almost everybody on this webinar, I’m sure, knows Mitch by now. Mitch Hoffman has agreed to be our Q&A guy in the background, because the last few webinars we’ve had so many questions that we need somebody dedicated to answering those. So thank you, Mitch, for joining us.

So Dan, that’s the basics. They’re all yours. Take it away.

Great, thanks so much, Mike. All right, great. Well, welcome everybody to today’s webinar. As Mike said, this is our second in the series of presentations on demystifying Git and working with the open source DevOps tool chain. Our objective is really to show everybody how IBM i users, just like everybody else, can use all of these very powerful open source tools for managing software development. There are literally tens of millions of people using these tools, and there’s no reason you can’t use them with the IBM i.

So like in the first webinar, I’m going to be using some slides and some demonstrations to show you what’s possible. Now, I also know that some of you who are here did not attend the first session. So I’m going to start out with a little bit of a review of Git, just so that everybody is familiar with where we’re starting and you understand the basics of what we’re doing. And then I’m going to get into the main topic, which is working with pipelines, that is, automating the whole application development lifecycle: automating everything that happens from the time you start making changes, through testing and building the objects, to deploying them to the places where the code actually needs to run.

So we’re going to talk about builds, how you do builds in the IBM i environment using the open source tools, how you promote things through the lifecycle, and how you deploy the objects to the places where they actually need to run. We’re going to talk a little bit about parallel development and branching, because one of the very powerful things you can do in a Git environment is have multiple people working on the same things at the same time without worrying about losing changes.

So if you think about the history of Git, Git came out of the project to develop the Linux operating system, where there were hundreds of developers working all around the world. Each person had their own copy of the code base; they would make their changes, and when they submitted their changes, those changes would get merged into the code base. So there was a whole lot of comparing and merging and making sure that all the changes were saved, and that’s something Git is very good at.

And then also, as Mike said, we’ll have some time for questions at the end. Now, one of the things people asked me after the last webinar was: how much of what you showed us is actually open source, and which components do we actually have to get from Eradani? So I wanted to give you a sense of that.

So this is a diagram of the open source lifecycle for managing software development. And our objective at Eradani is really just to fill in the gaps where the open source tools don’t work with IBM i artifacts natively, where they don’t really have an understanding of how things work in the IBM i world. Otherwise, we want you to be able to choose to work with whichever tools make the most sense for your organization, or whatever your company has chosen as its enterprise change management tool. So we want to be able to work with those things.

So for example, we want you to be able to choose your development environment. Do you want to work in VS Code? Do you want to work in PDM and SEU? Do you want to work in RDI? You can do that, and we’ll take care of making those tools talk to Git. And you can have Git hosted locally on your IBM i, or you can have it hosted up in the cloud with Azure DevOps or Bitbucket or GitHub or GitLab or any number of Git hosting environments. So we want you to be able to choose which hosting environment you want to work with, or, again, to just host it on your IBM i.

We also want you to be able to work with whichever issue tracking systems you want. And then, when it comes time to actually move things through the process, we want you to be able to use the open source tools like Azure Pipelines or Bitbucket Pipelines or GitHub Actions or Jenkins and have those tools talk to the IBM i. So we feed them the information they need in order to do things like build IBM i objects, right? Those tools don’t understand IBM i create commands, and they don’t understand dependencies among objects. So we feed that information to them so that you can automate your processes with those tools, just like you’d automate processes for any other platform.
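To make that concrete, a pipeline definition in one of these tools is just a text file checked into the repository. The sketch below uses GitHub Actions syntax; the secret names, paths, and the remote build command are invented placeholders for illustration, not a documented Eradani interface.

```yaml
# Hypothetical GitHub Actions workflow: on a push to the integration
# branch, connect to the IBM i and run the build there. Secrets,
# paths, and the make target are all illustrative placeholders.
name: ibmi-build
on:
  push:
    branches: [integration]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build changed objects on the IBM i
        env:
          IBMI_HOST: ${{ secrets.IBMI_HOST }}
          IBMI_USER: ${{ secrets.IBMI_USER }}
        run: |
          # Refresh the source on the IBM i side, then run make
          # against the dependency rules for the changed objects.
          ssh "$IBMI_USER@$IBMI_HOST" \
            'cd /home/demo/repo && git pull && /QOpenSys/pkgs/bin/make all'
```

The point is that the workflow itself is ordinary open source tooling; only the step that runs on the IBM i needs to know about create commands and dependencies.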

And then the other piece is packaging up objects and deploying them. So not just building things, but also saying: I want to take these objects and move them to some other place. I have multiple testing environments that I want to populate with the objects, or I have multiple production LPARs, and I want to be able to ship the objects to those places. So we want you to be able to do that, and again, that’s something specific to the IBM i: how you package things, how you send them, and how you install them. So the Eradani pieces are really the pieces that allow these open source tools to work with IBM i code natively.

So just real quick, a little bit about how Git is different from how we typically think of things in the IBM i world. If you’ve ever used the traditional change management tools on the IBM i, what they typically do is track things as they move through the lifecycle. You’ve got everything in your libraries, source files, and source members, and everything is managed within that structure. As you move things from development to test to QA to production, things are moving through those libraries, and the tools are keeping track of where those things are and automating the movement from place to place among those libraries, source files, and source members.

Git works differently. It has a separate repository where it keeps track of all of the versions of every file, a repository that is separate from where you’re actually working on things. It’s like a file cabinet: everything goes into the file cabinet so that you have this sort of secure repository. What’s cool about that is, with the traditional tools, if you wanted to take your whole code base and give it to somebody, you’d basically have to copy all those libraries and hand over that entire structure. With Git you can just copy the repository, and you have all the code and all of the folder structures; that’s all in the repository, and you can then rebuild the system directly from it.

So everything is in that file cabinet, but the file cabinet is separate from where you’re actually working. You make your changes over here, in the places where you’re working, and then as you do things like git add, where you’re saying “here’s something I’ve changed and I’m ready to commit it,” or as you do your git commits, things are actually being updated in that repository, in that file cabinet. And everybody has their own copy of the file cabinet; each person can have their own copy of that repository. We’re going to take a look at how all that works in the actual lifecycle of a change.

Now, there are a lot of things that Git is not. Git is a source version control tool; it only manages source code. It does not care about the generated objects, the build results; those are not part of what it deals with, so it doesn’t manage them, it doesn’t create them, and it doesn’t do the build process for you. It certainly doesn’t understand IBM i object dependencies: tables and views and how they’re related, how programs use them, how modules are related to programs. It doesn’t understand any of that. It also doesn’t promote things automatically through a lifecycle. We’ll see how it keeps track of what’s happening inside the lifecycle and supports moving things through a lifecycle, but that’s not its main job. And it doesn’t do any kind of deployment; it has no mechanism to deploy objects from one place to another. Those are the things you need your DevOps pipelines for. The pipeline tools use Git as the source, the place where all your source code is, and they pull from it to put things into the appropriate places as they move through the lifecycle. They automate the build process, they automate the promote process, they automate the deployment process. That’s what pipelines are for, and that’s really what we’re going to be focusing on today.

So if you set up some kind of a lifecycle like you have here, where you have your tasks, you’re working on your tasks, and then when you’re finished with a task, you might move it to a testing environment. And when you’re working on the task, you need to be able to create things in your development area to do your unit testing. And then when you move into QA, you need to be able to create things at the QA level to be able to do your QA testing. And then you move to production, and maybe you do a create to production, maybe you don’t, maybe you just distribute the objects to production. That’s kind of a decision you make, but that’s kind of the whole lifecycle process of making a change.

So Git is keeping track of the versions, and it knows which versions of all the files are currently associated with your test environment and which versions are associated with the production environment. But it’s not managing the process of moving things through that lifecycle; that’s what the tools are for.

So as part of that process, there is this question of how do I do a build? These tools, tools like Jenkins, like GitHub Actions, like Azure DevOps, don’t really know how to deal with IBM i objects. They need to have some understanding of how those things are built. And the way they work is by using something called a makefile. A makefile is really a set of rules about how to do a create. In the makefile, all of the dependencies are identified, and in the makefile, the create commands, the build rules, are all identified. So that’s the approach we’ve taken for automating this at Eradani: to use that same makefile process.

And so if you’ve got an environment like this, where you’ve got physical files, logical files, tables, views, modules, and programs, and they’re related, and you want to be able to build them, you need to have a makefile that understands those relationships. A makefile is basically, like I said, a set of rules, and a rule has three parts. The rule has the target: that’s the thing you want to build, the end result of the build. Then the prerequisites: what are the things the target needs in order to actually be built? And then what are called recipes. Recipes are basically the create commands: the functions that have to be performed in order to build the target.

So you have this idea of the target you’re building. What are the things it needs to be built, and then what are the instructions for actually doing the build? And then the make function uses that file. When you say, here’s a set of things I’ve changed, now go make those, build those, and all the things that use them. And so it uses that make file to do that.

So basically, you can create this kind of an environment where you’ve got those dependencies. And this is just what a makefile might look like, right? I’ve got the target, the thing that I’m building. Here are the things that it needs, its prerequisites, and here’s its create command. And for everything that I have to build, I have that set of rules in my makefile. So that’s the input to the actual build process.
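As a concrete sketch of that target/prerequisites/recipe shape, here is a minimal makefile for an IBM i build. The library, object, and source names are invented for illustration, and a generated makefile would be much richer; the structure of the rules is the point.

```make
# Illustrative makefile for an IBM i build. Each rule reads:
#   target: prerequisites
#           recipe (a create command, run via the PASE `system` utility)
# Recipe lines must be indented with a tab character.

# The program depends on its module and a service program.
CUSTMAINT.PGM: CUSTMAINT.MODULE CUSTUTIL.SRVPGM
	system "CRTPGM PGM(DEMOLIB/CUSTMAINT) MODULE(DEMOLIB/CUSTMAINT) BNDSRVPGM((DEMOLIB/CUSTUTIL))"

# The module depends on its RPG source and on the file it reads.
CUSTMAINT.MODULE: QRPGLESRC/CUSTMAINT.RPGLE CUSTOMER.FILE
	system "CRTRPGMOD MODULE(DEMOLIB/CUSTMAINT) SRCSTMF('QRPGLESRC/CUSTMAINT.RPGLE')"

# The physical file depends on its DDS source.
CUSTOMER.FILE: QDDSSRC/CUSTOMER.PF
	system "CRTPF FILE(DEMOLIB/CUSTOMER) SRCFILE(DEMOLIB/QDDSSRC) SRCMBR(CUSTOMER)"
```

With rules like these, changing CUSTOMER.PF and running make causes the file, the module that uses it, and the program above it to be rebuilt in dependency order.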

So we actually build that information for you. One of the functions Eradani provides is the build function, and you can explore that dependency information; we make it available. So if I go over here to my RDI workspace, here I am in RDI. I can go to the Eradani explore dependencies view here, and I can open up a program and say: what does it use? And I can go in and see. Okay, so here are some modules. Here’s a service program. Here are the modules the service program uses. So I can explore those dependency relationships right here through RDI. And I can go the other direction. Here I’ve got a module, and I want to see what uses that module. So there’s this service program, and then there’s this program that uses the service program. So I can see and explore those dependency relationships directly through RDI.

And I can also access all the git functions directly through RDI. So if I need to do git things, I have the ability to do standard git functions directly through RDI. So I can do my work directly from here if I want to do that.

Let’s go back over. So now one of the things that came up after the last... oh, the view is not changing? It’s not? You still see the IBM i screen? Yeah. Okay, let me try resharing again.

How about now? Yeah, thank you. Okay, so one of the things that came up at the last presentation was how do I get started with this? Because I’ve got things in libraries and source files and source members, and I want to get things into the directory structures that git needs. And so basically what happens is we have a function that does that for you. So you’re working here in your libraries. In your libraries you’ve got source files and source members, and you may have multiple libraries that make up your environment and you need to get it into this directory structure.

So we have a function called git init for the IBM i. Now, git init is a standard Git function, but what our git init will do is take all of the stuff in your libraries and build a Git repository for you. It can do it locally in the IFS, so you can have a local repository, and it can also build that repository for you in the cloud. If you’re using something like GitHub or GitLab or Bitbucket, it will create the repository for you in the cloud.

So all you need to do is point it at the library or libraries, and it could be many libraries. You point it at the libraries you want to manage and run the git init function, tell it which repository you want it to build, and it will then build that repository for you. So that’s basically how you get started so that you end up then with a git environment built both locally potentially, and potentially in the cloud as well.
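On the plain Git side, the steps under the covers look roughly like this; the directory, file, and remote names are invented for illustration, and the Eradani function adds the library-to-directory extraction on top of the standard commands shown here.

```shell
# Sketch: turn a directory of extracted source into a Git repository.
# Paths, names, and the remote URL are illustrative placeholders.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main                      # create the local repository
mkdir -p QRPGLESRC
echo '// hello world program' > QRPGLESRC/HELLO.RPGLE
git add .                                # stage everything copied from the library
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Initial import from IBM i libraries"
# To mirror it to a cloud host, you would then run something like:
#   git remote add origin https://github.com/yourorg/yourrepo.git
#   git push -u origin main
git log --oneline
```

After this, the repository is a normal Git repository that any hosting service or pipeline tool can work with.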

So let’s take a look real quick at what that looks like.

So hang on, I’m just going to move the Zoom controls out of my way here. So here you can see I’ve got my browser view up, and I’m looking at a Git repository that we built from an IBM i. And you can see a lot of stuff here that looks just like IBM i source files, right? And in the source files, if I open them up, I can see the source members within them. And if I want to, I can go into a source member here and see the actual source code. If I hit the blame button, I can go in and look at the whole history of changes. So I can say, here’s the last change made to this code and who made it. And if I want to, I can go backwards in history and see the change before that, and the change before that, and just keep going backwards to see all the changes. So I can take advantage of all the capabilities of Git, or in this case GitHub, for working with this. And again, I’m going to use GitHub today for our demo, but all the things you see me do, we can do with Bitbucket, with GitLab, with Azure DevOps, with all of the different kinds of repositories that you might work with.
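The history browsing in the GitHub UI maps directly onto plain Git commands, so the same questions can be answered from a shell; the file and author names below are invented for illustration.

```shell
# Two commits to one member, then walk its history with log and blame.
hist=$(mktemp -d)
cd "$hist"
git init -q -b main
echo "dsply 'hello';" > HELLO.RPGLE
git add HELLO.RPGLE
git -c user.name=dev1 -c user.email=dev1@example.com \
    commit -q -m "Original version"
echo "dsply 'hello world';" > HELLO.RPGLE
git -c user.name=dev2 -c user.email=dev2@example.com \
    commit -qam "Expanded the greeting"
git log --oneline -- HELLO.RPGLE   # every change ever made to this member
git blame HELLO.RPGLE              # who last touched each line, and when
```

This is exactly the information the blame button renders, just without the web front end.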

So anyway, so this is working here in my environment now. In my environment I can have many what we call branches, so I can be working on task one. So let’s take a quick look back over here and talk about the DevOps process. So now we’re going to actually get into what does it look like to go through the lifecycle of a change. And this is the environment I have set up. So I’ve got a base release. This might be like code that you get from a software vendor. So this is their base code. And then I’ve got my production environment, which in this case I might have. Release one is where I’m putting things in right now for production. And later on maybe I’m going to move to a release two. But I want to keep release one out there just as history. So I always have it. But that’s going to be my production environment. I’ve got an integration test stage that I go through where I test things together and then I’ve got a place where I’m keeping all the changes for this task. So that I can always see exactly what I changed for a particular task.

Now, again, I could have multiple tasks at this level that I then move into this integration environment and integrate together. But this is basically the environment I have. So as a developer, I work on my own code, work in my own little environment. I can then push them to this integrated environment where the work of multiple developers might get pushed together. Then I have this testing environment where I might have multiple tasks that are being tested together and then I move things into production. You could have as many of these stages as you want.

So you could go from the task environment to the integration test, to quality assurance, to user acceptance, to release, to staging to production. You can have as many of these as you want. But basically that’s the process that we’re going to be working with.

So the first thing we want to do is go in and make some changes. I want to make some changes to source members, and I’m going to do it in my development library, my DEMODEV1 library. And what I’m going to do first is just go over here. So now I’m in Visual Studio Code. We saw that you can work in RDI, but you can also work in Visual Studio Code, so that’s where I’m going to go for my first set of changes.

So I can go in and say, okay, I want to change this physical file. So I’m going to open that up and I’m going to expand the size of these things. So I’m going to make this one bigger and this one bigger. So I’ve made a couple of changes to the physical file and then I’m going to make a change here to this RPG program and I’m going to put in a comment here that says

increased size of customer fields, whatever. So that’s my comment that I made. So I’ve made some changes, let’s say we made some changes to the program. I’m going to go ahead and close that and then I’m going to make a change also to this SQL table. Now in the SQL table, I’m going to actually decrease the size. I’m going to make these smaller. So this is a different change. This is a change to make things smaller.

And I’m going to close that. Now, at this point I need to tell Git when I’m happy with where this code is. If I say, you know what, I want to mark this as a set of changes that are in a known state, I would commit them. But I can decide: do I want to commit everything I’ve been working on, or just some of it? What I’m going to do is say, you know what, I only want to commit the changes I made to increase the size of the fields.

So I’m going to go ahead and do an add, which means stage this particular physical file, the customer file, and I’m going to stage the RPG program. So I’ve done that. I’m not going to stage the SQL table at this point. I’m just going to stage those things and say, okay, go ahead and commit to my local repository, to my development repository, the things I just staged. So these are the changes to increase the size of the customer table.

So I’ve gone ahead and committed those. If I go in now and do my git status, if I say, okay, I want to know the status, I can come in here, go into my status function, and get the normal status. And it tells me: you’re working on task one, and you’re ahead of the origin by one commit. So that’s where I am right now.
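In plain Git commands, the selective staging just demonstrated looks like this; the file names are invented for illustration.

```shell
# Stage and commit only two of three changed files; the third stays
# out of the commit. All names are illustrative.
task=$(mktemp -d)
cd "$task"
git init -q -b task1
printf 'CUSTNAME  60A\n' > CUSTOMER.PF                 # increased field size
printf '// increased size of customer fields\n' > CUSTMAINT.RPGLE
printf 'CREATE TABLE items (item_id INT)\n' > ITEMS.SQL
git add CUSTOMER.PF CUSTMAINT.RPGLE                    # stage only these two
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Changes to increase size of customer table"
git status --short   # ITEMS.SQL still shows as not yet committed
```

The SQL table change stays in the working directory, ready to be staged and committed separately.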

Now what I want to do is make another change to that RPG program, and this time I’m going to say that what I did is decrease the size of the items.

So I’m going to go ahead and save that.

And now I’m going to do another git add. This time I’m going to add that RPG program again.

And this time I’m also going to do an add for that table, the SQL table, and I’m going to do a commit of those. So now I’m going to commit those things

and these are the changes to decrease the size of the items.

So I’ve committed those separately. So again, now if I go in and look at my git status

I can see that I now have two commits. So I have two committed things that are waiting to go. So I’ve now committed them to my local repository. It’s updated but I haven’t yet shared them. So if I want to share these changes I need to push them up to the shared repository. And in my case I have that shared repository up on GitHub. So I haven’t yet sent those things up there yet.

And maybe before I send them up, I actually want to do some testing. I want to try these things out and work with them. So I’m going to flip back over here to my green screen, and I’m going to do a WRKLIB on my demo testing library. It’s DEMODEV1OBJ; it’s where I put my objects for testing. And if I look at that, there’s nothing there. What I want to do is a build of the changes I just made.

So I’m going to come in here now and do that through the menu. I could actually just run a command, but I’m going to go through the Eradani menu and run option seven, which runs the Eradani make, which again says: I want to do a build.

So I’m going to say I want to build into my DEMODEV1OBJ library, and my source library is DEMODEV1. Now, there could be multiple libraries here, so you could be building into multiple libraries using multiple source libraries, but I’m going to just use this one.

And so now it’s actually doing those builds. It’s looking at all the dependency information and saying: what do I need to build in order to make this an operating application that you can actually run? Now it’s done. So if I go back into my DEMODEV1OBJ library, it’s now populated with all of the objects; it created all those objects for me. And if I add that library to my library list, I can now run my hello world program, and it runs and sends out some messages. So basically I’ve set things up so that I can do my testing.

So I’ve done my testing and I say, okay, that’s great, I’m happy with that. I need to keep moving things through my process, through my lifecycle. So now I’m going to come back over here. The next thing I need to do: I’ve done those commits, and here I am. I’ve simply moved over to RDI, and if I want to, I can do my git status here directly through RDI to see where I am.

I hope I didn’t lose my connection here.

Okay, there we go. So here’s my git status. It says, okay, you’re ahead by two commits. So great, I want to go ahead now and push those up. If we go back to look at our process: I’ve made the changes and done the local commits; I’ve committed them to my development repository. Now I want to push them up to the shared repository, to the place where we share the code.

So I’m now going to do a git push

that says move things up to another repository. So push moves from one repository to another. So now if I go over here and look at my cloud repository, so I go down and I look, notice I don’t see those changes. But if I refresh my screen

I now see those changes that I just made. Here’s the last change that I made, so I can see the changes. And if I go into my commits, I can see, oh, okay, here are the changes that I made, and I can go in and ask: what did I actually change? So in one commit I can see: you changed the RPG program to say “increase the size of customer fields,” and I made these changes to the customer fields. And if I go back here and look at the other change

now I can see here’s the RPG change that said I decreased the fields, and here are the changes to the items. So I can separate out those changes; I can see which changes were made in which commit. I now have that information available to me, and again, that’s a standard Git function that you would use to see the differences, the changes you’re making in the code.

So that’s making a change and setting it up to be shared. So now those changes are shared. Now let’s see the next thing I want to do. Let’s say I’ve made those changes, but now another developer wants to go in and work on that code. They want to make some changes, and they’re also working on task one, but it’s going to be a different set of changes.

So what I’m going to do is change my current library to DEMODEV2. I’m now a different developer, and I’m going to go into WRKMBRPDM; we’ll try PDM this time. So I go in as developer two, and if I take a look at the source code, notice I don’t have those comments I added that showed increasing and decreasing field sizes, right? I don’t have those changes, because those were made by developer one and pushed to the shared repository, but I never pulled that code down here to my environment.

So the next thing I would do, if I’m a good developer, is probably a pull to get the latest changes, so I’m always working with the latest code. But let’s say I don’t do that. Let’s say I just go in and start changing code even though I don’t have the latest changes.

So I’ll just say here: here’s a change made for new regulations, for the webinar on 11/8. All right, so now I’ve made a change. I’m going to go ahead and do my git add to say, okay, I’m happy with that change. I could do my build and my testing, but now I’m going to do my git commit to say I’m done with that; this is an additional change

and I’m going to go ahead and run that. So I’ve now committed those locally to my development repository, so it doesn’t impact anybody else yet. But now I want to push it up to the shared repository. And in the shared repository I’m going to have a problem, right? Because developer one made some changes that I, as developer two, don’t have. So I’m going to go ahead and do that git push that says move this up to the shared repository, just like we saw before. And the git push fails, and I can go look at the job log and see what happened.

And what it tells me is your updates were rejected because the remote contains changes you don’t have locally. So the next thing you need to do is a git pull which will pull those changes down to your development repository, merge the code together for you. So that’s what I’m going to do. I’m going to go ahead and do the git pull, say give me those latest changes.
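That whole exchange can be simulated end to end with a bare repository standing in for the shared GitHub repo and two clones standing in for the two developers; all names below are invented for illustration.

```shell
# Developer 2's push is rejected because developer 1 pushed first;
# a pull merges the histories, after which the push succeeds.
base=$(mktemp -d)
git init -q --bare "$base/shared.git"

# Developer 1: first commit, pushed up to the shared repository.
git clone -q "$base/shared.git" "$base/dev1" 2>/dev/null
cd "$base/dev1"
echo 'original line' > PROG.RPGLE
git add PROG.RPGLE
git -c user.name=dev1 -c user.email=d1@example.com commit -q -m "Base version"
git push -q -u origin HEAD

# Developer 2 clones now, so both start from the same commit.
git clone -q "$base/shared.git" "$base/dev2"

# Developer 1 pushes another change.
echo 'change from developer 1' > NEWREG.RPGLE
git add NEWREG.RPGLE
git -c user.name=dev1 -c user.email=d1@example.com commit -q -m "Dev 1 change"
git push -q origin HEAD

# Developer 2 commits a change to a different file and tries to push.
cd "$base/dev2"
echo 'change from developer 2' > OTHER.RPGLE
git add OTHER.RPGLE
git -c user.name=dev2 -c user.email=d2@example.com commit -q -m "Dev 2 change"
if ! git push -q origin HEAD >/dev/null 2>&1; then
    echo 'push rejected: the remote contains changes we do not have locally'
fi

# A pull merges developer 1's commit in; then the push succeeds.
git -c user.name=dev2 -c user.email=d2@example.com \
    pull -q --no-rebase --no-edit
git push -q origin HEAD
git log --oneline   # base commit, the two changes, and the merge commit
```

Because the two developers touched different files, Git completes the merge automatically, which is exactly the situation described next.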

So now it’s pulled those things down to me. If I do my git status,

it now says I have two commits that are ready to go. Now, why do I have two commits? Well, Git looked at the changes and said: you know what, these changes aren’t in the same places, so I’m pretty confident I can merge them. So it went ahead and did the merge. It committed first the change I made as developer two, and then it also committed the merged version that it created. That’s why I have two commits here. Since that’s been done, I’m going to go ahead now and do my git push. And now the git push is successful, and if I go back over to my main repository and refresh my screen,

I’m going to have that code. And the last thing that happened was this merge. But again, if I go into my commit list, I can actually go in and see, well, here’s the merge, here’s that additional change that I made so I can see each of the commits, each of the things that were done so I can track those kinds of merges.

So that’s giving me the ability to have more than one person work on the same thing at the same time. Now if I had made changes in exactly the same place, git would have said, okay, you need to go look at the source code and resolve those merges manually. So you’d have to actually go in and look at the code and decide which changes you want to keep and which changes you don’t want to keep. So there’d be an additional step of actually working with that merged code.
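When both developers do touch the same line, the merge stops with conflict markers, and the manual resolution step looks like the following. This is a single-repo sketch with hypothetical file and branch names, not the demo’s code:

```shell
#!/bin/sh
# One repository, two branches editing the same line (illustrative).
set -e
export GIT_MERGE_AUTOEDIT=no
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git config user.email demo@example.com && git config user.name demo

echo "rate = 5" > calc.rpgle
git add . && git commit -qm "base"

git checkout -qb task
echo "rate = 7" > calc.rpgle
git commit -qam "task: rate 7"

git checkout -q main
echo "rate = 9" > calc.rpgle
git commit -qam "main: rate 9"

# The merge cannot auto-resolve; git writes <<<<<<< markers into the file.
git merge task || true
grep -c "<<<<<<<" calc.rpgle

# You edit the file to the version you want to keep, then commit the merge.
echo "rate = 9" > calc.rpgle
git add calc.rpgle
git commit -qm "merge task, keeping rate 9"
```

The `git merge task` exits nonzero and leaves the file in a conflicted state; nothing moves forward until you stage the resolved version and commit.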

Okay, so now I’ve got the changes; they’re sitting in my task environment. The next thing I want to do is move them up through my process. So if we go back over here: we did this process where we moved stuff from developer one to the shared repository and from developer two to the shared repository. We did the pushes, we did the pulls to merge things.

So the next thing I want to do is move things to the Release One integration environment. From a git standpoint, I want to move it into that branch so that I have the code that’s currently related to integration tests.

So I’m going to move that through the lifecycle and I’m going to do that using the git functions to do that. So if I come over here again, go back into my repository, what I want to do now is create what’s called a pull request. A pull request says I’d like to move stuff to the integration test stage, but I’m not allowed to do that. I want to create a request to do that.

So I’m going to go ahead and create the pull request.


Now, when you create a pull request,

hang on, let’s go back here. You can provide rules around the pull request, so you can create different rules for every stage of your lifecycle. And again, you can have as many of those as you want. Maybe you want to have a code review done before this code can move. Maybe you want to do code scans to look for vulnerabilities before you move this code forward. Maybe you want to have end-user department heads sign off on it. So you can set up those rules so that I can submit the pull request, but those rules will have to be satisfied before that pull request can be completed.

So here I can go in, I’m saying go ahead, let’s go ahead and create this pull request.


And so here I can go in and say, you know what? I want to add some reviewers. So here I want these reviewers to be part of this. Or I can say, I want to add a new rule. So here are some rules that I want to add. And these are different kinds of rules that you can create. And you can create your own rules. So there are lots and lots of different kinds of rules you can create. You can also create approvers. You can also say code can only go from the task branch to the integration branch to the QA branch to user acceptance. That has to follow that path. So you can set your process up to do that. But I’m going to go ahead and say, just go ahead and move this and confirm that.

So now it said, okay, you’ve now moved that code forward. Actually, hang on, let me redo that pull request; now that I think about it, I used a different process. Let me create a new pull request.


And what I want to do is I want to do a pull request from that task branch into my integration branch.


So let’s create that pull request.
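Under the covers, completing that pull request is a branch merge. Here is a minimal local sketch of the same promotion, with illustrative branch names rather than the demo’s exact ones:

```shell
#!/bin/sh
# Promoting commits from a task branch into an integration branch.
set -e
export GIT_MERGE_AUTOEDIT=no
cd "$(mktemp -d)"
git init -q -b task1 repo && cd repo
git config user.email demo@example.com && git config user.name demo

echo "field size change" > change.rpgle
git add . && git commit -qm "task: field size change"

# The integration branch starts from the last promoted state.
git branch release1-integration

# More work lands on the task branch.
echo "new regulations" > regs.rpgle
git add . && git commit -qm "task: new regulations"

# "Merging the pull request" is this merge into the integration branch.
git checkout -q release1-integration
git merge -q task1
git log --oneline         # the task commits are now on integration
```

The pull request adds the review rules, approvers, and audit trail on top of this merge; the merge itself is all that changes the branch contents.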


All right, so now I’ve moved that into my integration branch. So if I now go in and look back at my code view and I say, show me the integration branch,

I can see I have those changes. So here are the changes that I made. They’re now related to that integration branch. But at this point those changes are only in my repository, right? I’ve just put them into the repository. Now I actually want to build the code on my IBM i, and I’m going to do this using a series of commands. In the next step, I’m going to show you how you can automate all of that, but I’m going to show you the commands first so you’ll know what’s happening under the covers when we actually do the automation.

So I’m going to come back over here to my green screen, and I’m going to first do a quick work lib on DMR One Int OBJ, which is where I put my integration test objects. And there’s nothing there, so I have nothing in it. And then I also want to deploy things to another testing environment. In that case, I just want to move the objects; maybe I’ve got a bunch of testing environments I want to populate. So I’ve got another library called DMR One Int DEP for deployment. If I look at this one, it’s got a bunch of SQL stuff in it, but it doesn’t have any of my application files. There’s none of my RPG code. None of that code is here.

So what I’m going to do now is go ahead and get the code there. First I’m going to go into Work with Members using PDM and look at the code in my DMR One Int library. So if I go and I look at it, what we’re going to see is I don’t have that code yet. I don’t have the increase/decrease changes yet in this environment.

So what I need to do first is change my current library to DMR One Int. And then I’m going to do that git pull: okay, pull that stuff down to this environment for me.

So I did the git pull, which got the code from the repository for the integration environment. And if I go take a look now, I’ve got those changes, right? Increase the size of the fields, decrease the size of the fields. I now have that code here.

So now what I’m going to do, like we saw before, is my make, and this time I’ll just use the command. So I’m going to do EDO make, and I want it for that DMR One Int OBJ library.


And I’m using my source library and I’m going to do that build. So this is just like we saw before, but this time I’m doing it for my integration environment.


And so now if I go in and look at that library,


all the objects are there. And again, I could run that if I want to. But this time I also want to move it to that deployment library. I don’t want to do a build there; I simply want to move the objects. So again, I’m going to use a command here, but I’m going to show you in a minute how you can automate this so you don’t have to go and use the commands directly.

So I’m going to use this Eradani command called package. The package function has instructions for how to actually do the deployment: what things do I want to deploy, where do they have to go, what has to happen once they get there? So you can have rules for that, and they’re held in a control file. So I’m going to just tell it where to find that control file: it’s in QSYS lib, in the DMR One Int lib, in the ETXTRC file, and it’s the package member. So that’s the control file, and I’ll show you that in just a second. But that’s the control file that says here’s how to do this deployment. And you don’t have to recreate that every time; that’s something you set up once. When you move to that environment, it uses that control file, and you could have multiple versions of it if you want different things to happen at different times.

I’m going to have it create a save file called Demo One, and I’ll just put it into my DMR One Int OBJ library. And so now if I go back in and look at that library, I now have that save file created for me. So the save file is out there, and I can have it sent wherever I want it to be sent. And the next thing I’m going to do is actually unpackage that and install it.

So now what I’m going to do is run the unpackage function. And the save file is that Demo One in DMR One Int OBJ. Now I can also tell it I want it to create a backout file, which says: be prepared so that if things go wrong, you can restore everything to where it was. So that’s part of the process; it’ll automatically do that for you. But I’m just going to go ahead and run this now. And so now it’s done that.

So now if I go and do a work lib on my DMR One Int DEP, that deployment environment,

I can now see I have all those objects in the deployment environment as well. And if I do an ADDLIBLE of my DMR One Int DEP, adding that deployment library here, I can then run my hello world program directly there. So basically, in that case, I created the objects into a library and then I deployed the objects to another library. So I did all of that with commands.
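Conceptually, the package and unpackage steps are doing a save-file round trip. In plain IBM i CL terms it is roughly the following; these library and save file names are hypothetical stand-ins (shortened to fit the ten-character object-name limit), and the Eradani commands layer the control-file rules and backout handling on top of this:

```
CRTSAVF  FILE(DMR1INT/DEMOONE)
SAVOBJ   OBJ(*ALL) LIB(DMR1INTOBJ) DEV(*SAVF) SAVF(DMR1INT/DEMOONE)
RSTOBJ   OBJ(*ALL) SAVLIB(DMR1INTOBJ) DEV(*SAVF) SAVF(DMR1INT/DEMOONE)
         RSTLIB(DMR1INTDEP)
```

The save file can just as easily be shipped to another partition before the restore, which is what makes this the natural packaging unit for multi-environment deployment.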

So the last thing I’m going to do here is actually do that in a more automated fashion. Before I go over and do that, I’m going to look at the next stage: my Release One libraries. So if I go back over here to my process, I’m going to move things to Release One, which in this case is my production library. I’m going to move the code, I’m going to do the make, I’m going to package it up, and I’m going to send the code to another environment.

So if I go here real quick and look at work lib DMR One OBJ, my Release One object library, again, right now it’s empty. Same thing if I do a work lib DMR One DEP, the deployment library; I think I’ve got the SQL files in there, but again, none of the application code is there.

So now what I’m going to do is I’m simply going to do a pull request. So I’m going to do a pull request here and I’m going to say I want to go from release one integration to that release one library. So I want to go to that next stage of my lifecycle. So I’m going to create that pull request and then I’m going to run the pull request.

And so now what I’m going to do is go to this Actions button. Actions are things that you can automate. And look, there’s an active action going on. If I go in and look at it, I can actually track what it’s doing as it’s sending things over to the IBM i. So you can have the automated workflow happening as it’s moving things from one stage to the next stage of the lifecycle.

And the code for that: what it’s actually running here is a script that I created. You can see it does the pull to get the code, does the make, and deploys the code out. All of that is part of the script the system is running so that it can automate that process.
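As a rough illustration only (this is not Dan’s actual script; the secret names, paths, and deploy step are invented for the sketch), a GitHub Actions workflow driving those same pull/make/deploy steps over SSH might look something like this:

```yaml
# Hypothetical sketch: runs when a pull request into the integration
# branch is merged, then executes pull/make/deploy on the IBM i.
name: deploy-to-integration
on:
  pull_request:
    branches: [release1-integration]
    types: [closed]

jobs:
  build-and-deploy:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - name: Pull, make, and deploy on the IBM i over SSH
        env:
          IBMI_HOST: ${{ secrets.IBMI_HOST }}
          IBMI_USER: ${{ secrets.IBMI_USER }}
        run: |
          # Illustrative commands only: pull the latest source, build the
          # objects, then run a deployment script on the target system.
          ssh "$IBMI_USER@$IBMI_HOST" 'cd ~/repo && git pull && gmake && ./deploy.sh'
```

The same trigger-then-script shape applies whether the runner is GitHub Actions, Jenkins, or Azure Pipelines; only the workflow syntax changes.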

So the idea here is that you can have it do that stuff for you automatically, so you don’t have to do it manually each time. So now let’s go ahead and take a look at what’s happened here.

Hang on here. So here I can see that process is finished; it did that whole deploy process for me. And let’s see. Yeah, so here I can go in and actually see the whole log of everything that it did in that workflow process.

And then the last thing here: now I can go and look at my DMR One OBJ library, and I can see it built the code there. And I can look at my DMR One DEP library, and it built all the code there. So it completely automated that process.

So basically the idea here is we’re creating that environment where you can work in whatever development tool you want. You can make your changes, store them in git, use git repositories in the cloud if you want to. You can track all the stages of the lifecycle, and then you can use your pipeline tools to move things through to do the build, to do the deploy, to have that all be automated.

So like that last step, where I simply said I want to do a pull request to this environment: it actually automated getting the latest version of the code, doing the build of the objects, and then deploying the objects out. So that’s the idea: to create that automated pipeline environment. And with that, I will turn things back over to Mike. That’s basically automating pipelines using Git.

And again, I just want you to know you don’t have to use GitHub Actions, which is what I use. It could be Jenkins, it could be Bitbucket, it could be Azure Pipelines, whichever tools you want to use. Mike, I will turn it back to you.

Well, thank you, Dan. That’s a lot of information. Thank you so much. We have some good questions that have come in. Before we get to those, just very quickly to the group on the webinar: as you know, we’re winding down the year. This is our last webinar for this year, and we’re planning an exciting 2024. We do have a webinar coming up at the end of January, so stay tuned for that. We’re also mapping out our webinars for the whole of next year, and I would love to hear from you. Any sessions, topics, things that you would want to learn about, you can put them in the chat here, or you can email me. Last two things: we’re also planning some road shows next year. If you want us to come to your town, just put your city and state in the chat so we can add that to our potential list. And we’ll see you at all the normal user groups next year; we’ll talk more about that once we get to the new year. So, Dan, I think the questions should be up for you.

Can you see the questions, Dan? I can, I’m looking at them. So, hey Doug, I’ve got one from Doug Peterson: do you need to exclusively issue Eradani DevOps commands or git commands within the IBM i environment, or is it necessary to handle change management processes from both the IBM i and external repositories like GitHub or Bitbucket? And the answer is: you can run those commands directly inside the IBM i environment. You don’t need to use GitHub or GitLab or any of the hosted repositories; everything can be done locally on the IBM i. Or you can use those cloud repositories, and you can mix and match commands. You can have commands that are running directly from GitHub and other things running directly in the IBM i environment. So there’s a lot of flexibility in how you can do that. And I think if we have another webinar, we’ll go into more detail on the underlying configuration files I was using to determine what actually happens during the make process and during the deploy process. A follow-up: “You previously mentioned a make file, in which I defined the rules for creating my programs. I’d like to confirm: is the test library you build locally to test the changes also built using the make file?” Yes, that is also built using the make file. Actually, what’s really cool is you can maintain the make file and promote it in the process, so you always have an updated version of that make file traveling with the code. One of the cool things about that is you could actually deploy the code along with the make file to a new IBM i partition and it could build your entire application for you. So yes, it is using that make file for each of the builds. From James: are there any segregation-of-duties controls built into the system? At what point does the developer no longer have access? So you actually set that up, and yes, you can have segregation of duties.
You can say the developers can make their changes, and maybe they can push up to, like, that original shared task environment I had. So you can say, well, developers can do that, but they can’t actually move it to QA. They can request a move; they can post a pull request that says, hey, my code is ready to move. And then you can decide who actually has to sign off on pulling that code to the next stage and then building the code in the next stage of the lifecycle. So you can decide who can do that, and you can have different people at QA, user acceptance, and production; you can actually set up different rules for each of the stages. From Glenn: is VS Code pointing to the source physical file or to the IFS? So in the case I was showing you there, with VS Code, we’re using the actual IBM plugin, Code for i, so it’s actually using RSE. It’s actually pointing at the library source files and source members, and we’re keeping things in sync with that and the directory.

Next question, on the YAML code: where did the YAML source come from? So that was a script we created. The YAML source you saw there, where I showed the main YAML, is a script we created. That is something you would create to say: here’s what I want to have happen. Now, that’s why all of the functions in Eradani DevOps are command driven. If you want to do a make, you can run a command to do a make. If you want to do a pull, where you want to get the latest version of the code before you do the make, that’s a command-driven function; you can run that command. If you want to do a package to deploy something, it’s a command. If you want to do a deploy, it’s a command. So everything is a command that you would then add to that YAML file. That’s one of the things we do: we have a training program for working with these tools, and one of the things we teach is how to create those scripts, those YAML files.


Great, guys.

We’re right at time. Go ahead and wrap up. You go ahead.

No, I saw Kent had that comment: there’s a lot of overhead to simply move a change. So again, what you would typically do is automate all of that, right? You’d simply say, I want to move to QA, and you would set it up so that I just had to do a pull request to QA, and then all of the stuff to do the pull to the environment, to do the create, to do the make process, to do the deploy if it has to deploy, would all be scripted. So I, as the user, would simply do the pull request, and everything else would be automated. Anyway, sorry, it’s a really good place to wrap up, because that’s what we’re really trying to do: create an environment where you can use these standard tools but automate everything you’re doing on the IBM i.

Wonderful. All right, everyone. Well, thank you, everybody, for joining us. Dan, thank you very much. Mitch, thank you for manning the Q&A department; I appreciate it. Everyone, since we’re getting into the holiday season, if we don’t talk to you, have a great holiday season. We’ll be back with our webinars starting again in January. Have a great day, everyone.

Great. Thanks, everybody. Bye.