WEBINAR

Warp Speed IBM i Modernization with APIs

Date

February 24, 2021

Wednesday
Time

2:00 pm - 3:00 pm

Eastern Standard Time

Free


Session Time

60 Minutes

Overview

With high-speed, fault-tolerant, loosely coupled APIs you can now rapidly extend your IBM i applications with the latest technologies. Until recently, if you wanted to support high-speed connections between applications you had to write direct calls to the IBM i database. These were often fragile and hard to maintain. If a change was made to the IBM i data schema, it would break the interface—often in production.

With Eradani’s new high-speed connection technology, you can connect via loosely coupled APIs. The loose coupling allows you to make changes to either side without breaking the connection. When you do make changes, Eradani will automatically regenerate the connection code for both the IBM i side and the open source side to ensure everything keeps working.

Now, you can easily take advantage of the millions of open source components to enhance your IBM i applications without slowing down your business processes.

You’ll Learn


Easily enhance IBM i applications without slowing down your business processes


Fully take advantage of millions of open source components


Quickly use high speed, loosely coupled APIs to extend your applications (with real world examples)

Presenters

Dan Magid

Chief Executive Officer &
IBM Champion, Eradani

Dan has spent over thirty years leading companies that help customers implement new technologies in legacy environments. Previously, Dan led worldwide software development groups that built highly successful modernization and DevOps tools and was the CEO of Aldon, the leading provider of DevOps tools to the IBM i marketplace.

Aaron Magid

Vice President, Open Source Technologies &
IBM Champion, Eradani

Aaron has been writing modern applications to leverage open-source technologies on the IBM i for more than 10 years. His applications are part of commercial products that are installed in thousands of IBM i shops. His work combines open-source languages such as PHP, Java, Node.js, and Python with traditional IBM i technologies to create leading-edge IBM i solutions.

Video Transcript

Welcome, everybody, to this webinar on warp speed modernization using APIs. We’re doing this because over the past several months, we’ve seen a big shift in our conversation with IBM i customers around the topic of modernization. The conversation has broadened from a singular focus on user interfaces, creating new web and mobile user interfaces, to encompass the many challenges that people face in dealing with this increasingly interconnected world of business, where systems are talking to each other and you have to be able to communicate with other people to be able to do business, to participate in the supply chain. Many of our customers have Windows and Linux applications that they need to connect up to their IBM i system, or they want to be able to call out to public web services to add data to their IBM i systems. So what we’re going to be talking about today is how you can use new technology within your IBM i applications. And this has been really a great move because we’re seeing a real turn away from the idea that you have to move away from the IBM i in order to take advantage of new technology. But in fact, you can API enable your IBM i, and it can then take advantage of all the great new things that are happening out in the IT environment. So we’re going to start out by looking at this move to APIs because it’s happening across the industry. There is an accelerating move to connecting things via APIs. And then we’re going to talk about how you can take advantage of it by rapidly enabling your IBM i programs and data, by API enabling things quickly. And we’re going to also talk about how you can add open source components to your IBM i applications. So what makes this really possible is now we can have very, very high speed API connectors so that we can have connectors that are high performance enough that you can actually integrate calls to open source modules right within a business function.
So right within the middle of a business process, you can call out to an API and get results fast enough so it doesn’t slow down your response time. So you can create these very, very high performance and reliable APIs, and we’re going to take a look at how you do that. And then we’re going to talk about how you can make sure that they are easy to maintain over the long term. Because one of the things that customers have called us about is they said, you know what, it was easy to create a couple of APIs on my IBM i, but now that I have lots of them, it becomes very, very hard to keep them up to date. So as we change the data schema or as we change the business functions or as the web services change on the other side, how do we make sure that we keep those things up to date? So I’m going to talk a little bit about those functions using PowerPoint, and then Aaron is actually going to dive in and show you a live application, and he’s actually going to web enable things and show you how the API enablement process works. So just a little bit about APIs. This is actually the growth in API collections that are on the Postman site. So Postman is the most popular place where people set up APIs and create their API collections, the actual code around their APIs. And as you can see, it’s increasing, and it’s increasing at an increasing rate. And this is just the creation of APIs at one particular place. This is Postman. There are lots and lots of other places where people are putting their APIs. This is just giving you a sense of how fast the creation of APIs is growing. The adoption is growing even faster. So the number of API calls that are happening every day are in the many, many billions. So we’re seeing this rapid, rapid growth in the number of APIs that are out there and then the number of calls that are being made to APIs, so the use of these APIs. This is a survey of organizations as to how important are APIs as part of their whole digital transformation strategy. 
So as you can see, APIs are critical to almost everybody. It’s really a very, very important part of the whole digital transformation strategy, and that’s because APIs play a role in almost everything you want to do in transforming your existing environment. And there are lots and lots of use cases. I’ll talk about just some of the use cases that we’ve heard from our customers, so the many things that they’re doing around modernizing their systems.

So the first one, as I talked about earlier, is the first thing we heard from people is they want to be able to use the latest technology to build really great user experiences and responsive GUIs. And so the first thing they wanted to do with APIs was say, I need to API enable my IBM i functions so that I can access them via a modern user interface. So for example, we had a customer, their use case was they said, we have these warehouse managers and we ship our products out to our customers, and the warehouse managers want to know when the shipment is going to arrive. And the way it works currently is they call our company, they get our customer support person, they look up the shipment information on our system, which just tells them the shipment number. They then have to call the transportation company to give them that shipment information. The transportation company then goes and finds out where that truck is, and then they give that information back to our customer support person who then calls the customer to give them the information. The problem is that round trip took a really long time. What they wanted was to give their users a mobile device interface so that the warehouse manager could simply be in the warehouse and he could click on a button and see where the shipment was going to arrive. And so that’s what they did. And that’s actually this little screen you see here on mobile device is actually that application where now a warehouse manager can go on a handheld mobile device and look up his shipment, hit a button, and it will give him a map that shows exactly where the truck is at that moment in time. So it’s actually going out to the IBM i, getting the shipment information, sending that out via an API call to the trucking company, and then getting the map data back. So again, using APIs to get that really, really modern user experience. Then people are also looking to modernize business processes by integrating applications. 
So for example, we had another customer that they would have customers that would call and ask for copies of old invoices. And the problem is all the invoices were archived on a Windows system. And so what they’d have to do is look up the order on their green screen, get the order number, then go over to the Windows system, key in the order number in order to find the invoice, and then email it out. Now using APIs, they have now made it so that they simply hit a function key on the IBM i, which goes out and calls to the Windows system, sends them the order information, generates the invoice, and emails out the invoice, and then sends a confirmation back to the IBM i. So they’ve automated that entire business process by using APIs. And then we have customers who want to be able to modernize their communication with their end users, with their customers through these APIs. So we have customers now who have their entire order process automated. So the POs come into their system. It automatically then sends information to accounting. So it sets up the billing. It sends information out to the ordering area in order to set up the order to be fulfilled. And all of that is happening automatically in machine-to-machine conversations. We have other customers who have told us that their vendors in their supply chain are now requiring, or their customers in their supply chain are now requiring that they communicate via APIs. In fact, a lot of people are replacing proprietary EDI systems with now open APIs. So even in order to participate in the supply chain, you have to be able to talk to them via these APIs. And then there’s so many great sources of information, of things that you can get and integrate into your IBM i applications by simply calling out to publicly available APIs. These are just some of the ones that our customers have been using, calling to UPS to get transportation information, or calling Magento for point-of-sale integration, or Venmo for payments. 
All of these are just different things that people are using, calling out to public APIs from their IBM i.

And the other thing people are looking at doing is, can we modernize our developers? Can we start to get our developers into using some of this new technology? One of the people I talked to recently, one of our IBM i customers, told us, they said, it’s amazing to him that RPG programmers are worried about learning things like JavaScript and PHP. He said, if you can learn RPG, which is a language that has developed over decades and has huge amounts of functionality and is very, very sophisticated, if you can learn RPG, you can learn JavaScript. JavaScript was a language that was built to be easy to learn. So you can learn these open source languages very, very quickly. So many customers are looking and say, can we start to get our RPG developers to adopt this new technology and take advantage of the huge value of their domain knowledge to help us to move into new technology? The other thing we’re seeing is that people want to take advantage of the millions, literally millions, of open source components that are available. So rather than having to write all my code, I can just go out and grab some application code that’s already available and use that in my application. So I can just add that to my application. And actually, it’s another example of this over here. This little screen over here is actually the interface for IBM’s Cloud Connector product, which allows you to send IBM i files directly up to things like Amazon S3 or to the IBM cloud. So the user interface to that was actually built using open source components. So we were able to pull down components directly from Amazon for connecting to Amazon and pull down things from IBM to connect to the IBM Cloud. So we were able to assemble that very, very quickly. In fact, the green screen version of that took us about nine months to build, whereas the JavaScript piece of it took about four days, because we were able to simply wire together the components.
And the other interesting thing we’re seeing people talk about is modernizing the licensing on the IBM i. And this is something that we talk to a lot of the IBM i vendors about, which is we have to move away from licensing that is locked to a particular model or serial number. Because the problem is that then prevents us from doing things like moving to the cloud, where your applications and your tools may be moving across machines all the time. Or it gets in the way of HA/DR, where you have a disaster and you have to key in all kinds of license keys in order to get things to get up and running again. So we need to modernize away so that we can take advantage of all the things that they do in the open source world, which is in the open source world, you can very easily spin up new machines, and you can spin up new environments, and you don’t have to worry about, gee, how am I going to get my software to run in those environments? So there are lots and lots of parts of this whole API or this whole modernization environment, and all of it is enabled by these APIs. So by creating APIs, you’re enabling these things. And so you can start by doing the user interface thing, but once you’ve created those APIs, you’re now in a position to take advantage of all of these other kinds of capabilities. So APIs become very, very useful ways of modernizing your systems. And that’s really what Eradani is in the business doing. We’re in the business of helping people take advantage of this. So to give you these very, very high-speed connectors to your IBM i and from your IBM i so you can call into the IBM i very easily from open source, and you can call out from your IBM i very easily to open source or to other kinds of functions.
And what we really have focused on in our last couple of releases is making sure these are really high-speed because, again, we’ve started to hear from customers who say, I want to be able to integrate open source modules directly into a business process, and if I’m going to do that, it has to be extremely fast. It has to be a really fast connection. So what Eradani does is we wrapper your IBM i with basically a high-speed middleware layer that makes it so that open source people can call into the IBM i doing things the way they’re used to doing it, and IBM i users can call out to the open source world doing things they’re used to doing. So as an RPG programmer, I can call out to open source using RPG code, so doing a standard RPG code. And as an open source developer, I can call into an RPG program or into a stored procedure or a command. I can write a function, call it the way I normally would do it, and we’ll take care of translating that into something the IBM i can understand. So we want to make it easy to go back and forth between the two environments. And as part of that, we integrate things like the whole security authentication layer so that you can ensure that people who are talking to the system are talking to you in a secure way, because most of us know that our IBM i systems are running our core business applications. So we want to make sure we know who’s getting in there and what they’re doing. So as part of doing that secure connection, we use modern web-based techniques for doing security. So we don’t do this sort of basic authentication thing, which is you send the IBM i credentials, user ID, and password every time you make an API call. The problem is, every time you send those, that’s a potential security vulnerability. And the other problem with using basic authentication is that you’re storing the credentials in the browser so that the user doesn’t have to keep re-signing on every time.
The problem with that is that that’s another place where those could be discovered. So we don’t do things that way. We actually support the modern, the latest way of doing security, which is we’re using encrypted tokens. So you send the user ID and password down once, then we send back the encrypted token. And from that time forward, all the communication is done via these encrypted tokens. And you can manage them. You can have them expire. You can decide how long they’re good for. And we take care of all the managing of the certificates and managing that communication. So the way things are done sort of basically on the IBM i, the IBM i does provide tools for doing this.

This is sort of the path of doing a call from the IBM i out to a web service and all the things that have to happen. You call out to the database, you do your HTTP GET through IWS, you spin up a JVM. You have to deal with the digital certificate manager and manage the certificates. You call the web service, you get the data back into a CLOB on the IBM i, and you have to run something like YAJL in order to parse it. So there’s a whole bunch of steps. And so what we’re trying to do is reduce the number of steps that are involved by eliminating this whole middle section and replacing that really with code that’s really designed to do this. And so we’re doing this using Eradani Connect, but it’s using JavaScript to do this. And JavaScript is really designed. That’s what it was built for. It is built to do this kind of web service processing. And so Aaron’s going to show you a little bit about how that gets done. So basically, I’m going to turn it over to Aaron now, and he’s going to show you a demo of actually doing this. And the way he’s going to do this, he’s going to do a lot of the things that I’ve talked about. So very short demo, he’s going to actually cover a whole bunch of what I just talked about. So he’s going to show you calling from a web UI, a very simple web UI, calling via an API to the IBM i, having the IBM i run a function, and that function is to say, oh, I need to actually get some information from Google Maps. So it’s going to call out to Google Maps, and basically what it’s going to do is it’s going to send an address to Google Maps and have Google Maps then give it a latitude and longitude, which is what he needs in order to call the weather service to get the weather. So at the end of the day, what he’s trying to do here is he wants to go from a graphical user interface through the IBM i and get weather data. So he’s going to enter the weather data through a web user interface, call the IBM i.
The IBM i is then going to call the Google Maps to get the latitude and longitude, and then it’s going to use the latitude and longitude to call the weather service and then pass that information back to the web user interface. So with that, Aaron, oh, and one other thing. He’s actually going to do this in two ways. He’s going to show you doing this using the standard tools that IBM provides, using the standard things, and then he’s going to show it to you also doing it with Eradani Connect. So you can sort of see what are some of the, what’s the value we’re trying to add to that process.
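The call chain described here (web UI to IBM i, IBM i to Google Maps for coordinates, coordinates to the weather service, results back to the UI) can be sketched as a simple composition in Node.js. The function names and the stub fetchers below are hypothetical stand-ins; a real version would make HTTP requests to the actual Google Maps and weather APIs:

```javascript
// Sketch of the demo's call chain: address -> geocode -> forecast.
// The two fetcher functions are injected so the flow runs offline;
// in the real demo they would call the Google Maps and weather APIs.
async function getForecastForAddress(address, geocode, getForecast) {
  const { lat, lng } = await geocode(address);  // e.g. Google Maps Geocoding API
  return getForecast(lat, lng);                 // e.g. a weather forecast API
}

// Stand-in fetchers for illustration (a real one would use fetch/https).
const fakeGeocode = async (addr) => ({ lat: 37.89, lng: -122.28 });
const fakeForecast = async (lat, lng) => [{ day: "Wed", high: 61, low: 48 }];

getForecastForAddress("Berkeley, CA", fakeGeocode, fakeForecast)
  .then((forecast) => console.log(forecast[0].high)); // 61
```

Injecting the fetchers also mirrors the loose-coupling point from earlier: either side of the chain can change without the composition itself breaking.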

So Aaron, I’m going to turn it over to you. All right. Well, thank you for that awesome introduction and going through all of those topics for us. So as Dan mentioned, I’m going to go through a demonstration of how this stuff actually works. I’m going to go through this application that Dan has up on the screen here where we’re going from a web user interface to an API to the Google Maps API into our RPG programs to get weather forecasting data. And like, again, like Dan mentioned, we’re going through two strategies. But what I want to point out before I start saying anything else is that everything that I say in this entire process is going to go back to one key idea. And that is that we always want to make sure we’re using the right tool for the job. There are a lot of tools that can perform similar functions, right? There’s a lot of capabilities out there that can do just about anything. But the tools that were designed to perform a specific use case are typically going to be more maintainable, more secure, more productive, and more performant than tools that weren’t designed to do that. And that’s part of what I’m going to show you here. So remember, key idea is we want to use the right tool for the job. And if you’ve seen us do any other presentations, you’ve probably heard us say that about 10,000 times. But it really is important. So I’m going to go ahead and share my screen here. And so as Dan mentioned, right, our core business logic right now, we are a weather forecasting company that we’re taking on here. And our core business logic is that we have this get weather forecast program. What that’s going to do is it’s going to take in a latitude and a longitude, and it is going to get us back the weather forecast for the upcoming week for those coordinates. And I’m going to show you real quick, if I hop over to my IBM i, I do a DSPWF, and I’m going to run this with the coordinates of the Eradani address, which are 37.89 and negative 122.28.
If I run that, it’s going to store my weather forecast in a spool file. And if I open that up, you can see that in my 5250 here, I have dates, low temperatures, high temperatures, and a bit about the weather for today and the following seven days. We’ve got our web user, or sorry, not web user, I jumped ahead of myself there. We’ve got our program here with our core business logic. And again, what we’re trying to get to in modernizing is something more like this, right, where we’ve got this interface where I can go in and I can, I as a web user from my machine, I can run an API call from these applications. I can say, get me the weather, and it’s going to get me that information. So this is our end goal here. From that, from this over to this is what we’re trying to get to. And we’re going to take, over the course of this presentation, we’re going to take two different paths. And I want to reiterate that to make sure that we’re clear about the structure here. We’re going to take two different paths. One is going to be calling through the SQL HTTP functions in RPG. We’re going to be using the HTTP post-clob built-in in our RPG code. And in the other case, we’re going to have our RPG program. Everything is running from RPG. We’re going to have RPG reach out to an open source program that is going to execute our Google Maps call, bring the data back, and then go back in and continue with that get weather forecast. So the first step here, again, is we’ve got our program. What we’re going to do is we’re going to change this system so that it looks a little bit more like this, right? We’ve got our ADDRWF at the top, goes to HTTP post-clob, calls to Google Maps, and then continues in with the get weather forecast logic, right? So take the address instead of coordinates, takes an address, uses Google Maps to get the coordinates, calls our original business logic.
The other option is DSPWF is going to go to Node, call Google Maps that way, and then get the coordinates and send it back, right? So it’s the same process, same implementation, same input, same output. The only difference is the top path uses HTTP post-clob, and the bottom uses an open source integration via Eradani Connect. So the first point that I want to cover in here in comparing these two paths, as we’ve already seen that it’s possible, right? I’ve got the web user interface here. I can run this. The first key point that I want to cover is productivity, right? When we talk about different methods of developing something, one of the most important questions is which method is going to allow us to build robust applications fast, right? Because the faster we can build our applications, as long as they’re bug-free, the more time we get to spend on building other features and extending our applications rather than spending a whole bunch of time focused on one particular thing. So in this case- So Aaron, let me just make sure something’s clear, ask a question. So I see here on the web page, you’ve got the address, and then you’ve got the result, but this is not calling from the web page directly to the weather service to get that. It’s actually sending the address to the IBM i. The IBM i is then sending the address to Google Maps. It’s getting then back the latitude and longitude, which it is then sending on to the weather service, and the weather service is then returning that data, and you’re then reading that into this web user interface.

Is that- Yes, you are absolutely right, and thank you for clarifying that. Yeah. What’s happening here, there’s an API layer on my IBM i. Everything here is on the IBM i. It’s sending my address. The RPG program is then getting the coordinates from Google Maps, and then calling the weather forecast, and then coming back full circle and giving me back my weather forecast. Yeah. So thank you for clarifying that. Yes. Everything here is RPG-centric in this application. And speaking of that, I have here actually the source code for that RPG program, and so you can see that the ADDRWF version, right, we’ve got this exec SQL. We’re running that HTTP post-clob to get into the Google Maps API to get that information, right? So this is happening in my RPG code. And one thing that I want to make really clear here, again, is the HTTP post-clob system, the SYSTOOLS HTTP functions, are well-built. They are really effective in their implementation. They’re not bad code, but they also are using tools that weren’t designed to do what they’re trying to do with it. And that’s going to be at, again, the core of what I’m going to show here, right? So the first question is productivity. The first point is productivity. And what I want to point out in this code is that we’re making our API call to the Google Maps system, and then we have about 300 or 400 lines of business logic and error handling around that API call. And again, this works, and it gets the API call done, and it’s actually really clean, nice code. But if I flip over to the JavaScript version, and remember, we’ve got about 300 or 400 lines of code here. If I flip over to the JavaScript version, this is the Google Maps call. Right there. That’s it. That’s the whole thing. Right? It is one line of code, and it’s done.
And the reason that that’s possible with one line of code, or depending on how you read this, it might be up to five, but the reason that this is possible that we’re able to build this in such little code is because of the open source modules. Google has actually created, as have many other providers, most providers, Google has created an open source module for JavaScript that communicates with their APIs. So for the RPG side, the process for implementing something like this requires that we go over to the Google Maps documentation. We look through it. We read it. We understand it. We make sure that we go through all of this information so that we know how to talk to this Google Maps API. And then we have to make sure that we’re handling all the potential edge cases in our RPG code, and that the JSON parsing is working, and that everything is functioning. On the JavaScript side, what we do is we go over to NPM, the Node Package Manager, where there are a little over a million and a half open source packages available that are downloaded 124 billion times a month. And I come in here and I say, at Google Maps, and I search on that, and I go to Google Maps Services, which is a module that is built by Google, provided by the same team that built it. It’s the same operator. It’s got the big Google logo. It’s the same team that makes the APIs. And we download this module, and that allows us to do this in one line of code. And I think really the, you know, proof is in the pudding here. When we implemented the RPG code, it took us several hours over a few days to make sure that that program was working properly. The JavaScript version was implemented in six minutes, 47 seconds. And we timed that because we knew that it was going to be a lot faster. But the point here is that when we pull in the open source code, we’re able to be exponentially more productive with our code because we’re pulling in all this stuff that’s already been written for us.
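For reference, the shape of that one-line geocode call, following the documented interface of Google's @googlemaps/google-maps-services-js module, looks roughly like this. The stub client below stands in for `new Client({})` from that module so the sketch runs without a network connection or API key; the coordinates and key are illustrative:

```javascript
// With Google's own module, the geocode request is a single call:
//   const { Client } = require("@googlemaps/google-maps-services-js");
//   const client = new Client({});
// The stub below mimics that interface so this sketch runs offline.
const client = {
  geocode: async ({ params }) => ({
    data: { results: [{ geometry: { location: { lat: 37.89, lng: -122.28 } } }] },
  }),
};

async function geocodeAddress(address) {
  // This is the "one line": hand the module an address, get coordinates back.
  const res = await client.geocode({ params: { address, key: "API_KEY" } });
  return res.data.results[0].geometry.location;
}

geocodeAddress("Berkeley, CA").then(({ lat, lng }) =>
  console.log(lat, lng)); // 37.89 -122.28
```

All of the request formatting, JSON parsing, and error surfaces that the RPG version had to handle by hand live inside the module, which is where the productivity difference comes from.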
And the last thing I want to mention about this, you know, Google — most API providers won’t admit this, but it is true. Google actually admits it. They have a note here that says, the reference documentation can be found here. The TypeScript types are the authoritative documentation and may differ from the descriptions. And what that note is telling us is Google is basically saying our documentation might be wrong. Don’t trust it. The authoritative documentation is the open source module.

And I just want to make sure that what I’m saying here is clear. Google is telling us not to use their documentation. They are telling us to use the open source module because that’s what they’re expecting you to do. And the same goes for every other provider out there, every open source shop that is working and making these kinds of services, they’re expecting you to use the modules. You know, and if you’ve been to one of our webinars before, you’ve probably heard us talk about the Cloud Connector project that our team worked on. And in that project, we actually ran into this directly. We were building a connector to Amazon Web Services. And after dealing with bugs and trying to fix these problems for weeks, what we found out is their documentation was just wrong. And we were trying to implement it from RPG and make the API calls directly, and they just had old documentation. This is from Amazon. Their documentation was just wrong because they were not expecting us to do that. They were expecting us to pull in the open source module and use that. And that’s why when we implemented the JavaScript side of that project, it took four days to do the entire project. The Amazon integration was a couple hours because we used the open source code. So again, the productivity of working with these is massively higher if we can get into these tools. And again, going back to that core idea that I was mentioning, we can use HTTP post-clob. We can do that. It does work. But we are massively more productive if we use a tool that was designed to do the job that we are trying to perform here. And that is the power of going to the open source code. Several hours over a couple of days versus six minutes and 47 seconds. And that, I think, is the best way to demonstrate that. So what I just talked about there was productivity of development. So speed of development is really what we’re talking about there.
Aaron, I don’t know if you’re going to talk about it, but the implications for that also on maintenance, which means that as things change, the modules are getting updated automatically. So you don’t have to worry about that. Yeah. I’m actually going to cover maintenance a little bit more a little bit later. But yeah, that is a really important point. And Google is maintaining this module, so we don’t have to. So OK, so we’ve talked about productivity there. What I want to focus on now, we’re going to switch over to performance. We were talking about speed of development. We’re going to talk about speed of execution now between these two methods. And again, we’re going to keep on comparing the HTTP post-clob with the open source connectivity layer. Now, I actually have a little tool built into this call right here, where basically what I can do is you see I’ve got the address box there where I can run my get weather call. But I can also work with these two boxes on the side here. And basically, if I hit this test button, what it’s going to do is it’s going to run the program using the method that I selected. And it’s going to report back some statistics. So just to run this real quick, I just want to show that this is working. If I run this with ECC, which is short for Eradani Connect Client, that’s the open source path through the Node.js and the open source module, it can run. And similarly, I can run it with the SQL HTTP function. So that’s going through that same path, and it’s getting me my weather forecast. But there’s something that I want to draw your attention to here, and that is the difference in timing between the two calls. So I’m going to reset this interface real quick by doing a get weather call. And I’m going to run the ECC call, and I’m going to hit the test button. So I don’t know if you could tell the difference if there was a lag, but that was very much less than a second on that call to make the call using the open source side.
Now I'm going to switch it to the SQL HTTPPOSTCLOB function. And actually, first I'm going to run get weather to reset the interface, and I am running my test. So there we go. By my count, that was about five and a half seconds on that call. And again, what I want to make sure to mention is that it's not that HTTPPOSTCLOB is a bad tool. It's not that it's not well built. It actually is; I've seen some of the code, and it's really, really effective. The problem is that it's using technologies that were not designed to do this. It's built on top of Java. Java was designed for big, honking enterprise programs: one big system that is constantly running. It wasn't designed for these lightweight little interactions, which is what we're using this API call for. Node.js was designed for lightweight interactions, which is why, when we try to run this through the Java layer, it's incredibly slow, and when we run it through the open source layer, it just shoots across that execution time. And the thing is, I only ran one call there. The reality is that when you're in production, it's not going to be one call, hopefully. Your APIs are going to be servicing, hopefully, thousands, or tens of thousands, or hundreds of thousands, or millions of calls. And these performance issues get worse the larger we make our tests, the more calls we do.
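The kind of multi-call timing test being described can be sketched in a few lines of Node.js. The `callApi` stub below is a local stand-in for a real API call, so the numbers only illustrate the shape of the measurement, not real latencies.

```javascript
// Minimal latency harness, the same shape as the "10 trials" test:
// fire N calls concurrently, as a production workload would, and
// measure the total wall-clock time.
async function timeTrials(callApi, trials) {
  const start = process.hrtime.bigint();
  await Promise.all(Array.from({ length: trials }, () => callApi()));
  return Number(process.hrtime.bigint() - start) / 1e6; // milliseconds
}

// Stub: resolves after ~5 ms, simulating a fast in-process call.
// A real test would swap in the actual API call here.
const stubCall = () => new Promise((resolve) => setTimeout(resolve, 5));

timeTrials(stubCall, 10).then((ms) =>
  console.log(`10 concurrent calls in ${ms.toFixed(1)} ms`)
);
```

Because the calls run concurrently, a lightweight runtime finishes the batch in roughly the time of one call, while a heavyweight per-call startup cost multiplies across the batch.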

So I'm going to show you real quick an ECC test. I'm going to run it with 10 trials this time. And again, you can see it comes back in a similar amount of time, actually. It comes back pretty quickly. And I'm going to run the SQL HTTPPOSTCLOB version, and I'm going to test it. And there's actually a lot going on behind the scenes while we're waiting for this SQL HTTP function to return to us. And again, this is 10 calls simultaneously. If I go back and do a WRKACTJOB, assuming I spelled that right, my partition here is at 100% CPU, right? It's not that the Java code is slow, or that the Java code is bad, or that anything is poorly built in there. It's actually really, really well built. But when I run my API calls this way, I am starting up a Java program. And so doing those 10 calls is throttling my partition. It's using up my entire CPU capacity, and probably a massive portion of the memory too, though I'm not showing that here. And that's what makes it run slow. Again, HTTPPOSTCLOB is very well built, but it's using Java, which was not designed to do this. And so the system has to work really, really hard to make it function. That's why we run into these problems; that's why we have these performance issues. So it's come back since then. Usually when I run this with 10 trials, it takes about 40 or 50 seconds to get back to me. And again, that's compared with the web API world, where I started my career, where an API call that takes more than 300 milliseconds, a little less than a third of a second, is unacceptable. So we want to make sure that when we are using these tools, we're using a tool that was designed to do what we want to do. And I have worked with many, many companies, many shops, that stick to the SQL HTTP functions or tools like that because it's in RPG, and it's what they know, and it's simple. And that is a wonderful benefit of these tools.
But what I have seen with these shops time and time again is that when they go into production, the performance just doesn't hold up. And again, one thing that I want to point out is that most of this code is already written for us. The JavaScript code here is really one line of code. And going back to what Dan mentioned earlier, JavaScript is a much simpler language than RPG. I really want to reassure people that if you have learned RPG, you can learn these technologies, and the benefits will be incredible for the systems that you're running. So we talked about productivity. We talked about performance. The third thing that I wanted to focus on, just for a second, is reliability. And again, we're going to go back to using the right tool for the job. RPG as a language is a wonderful tool for our business logic. Our core applications are, I believe, best written in that language. It's a great tool. It's really, really powerful, really well organized. It's really effective on the IBM i. But it was also designed with the idea in mind that your data is going to be consistent and well structured at all times. And the truth is that web APIs don't work that way. They return differently structured data depending on what you do. And if you're using a rigid system where you are tightly coupled to the exact payload structure that you get back from the API, what you're going to find is that eventually that API is going to return a different structure than what you're expecting, and it's going to break the call. And oftentimes that will happen because of differing inputs, which shows up as a bug or a program crash in production. And I'm actually going to show you that real quick, because it just so happens that this Google Maps API does that, just like most other open-source-based APIs.
If I come back in here, I'm going to run my ADDRWF program, which is the one that goes through the HTTPPOSTCLOB function, and I'm going to type in 833 Mendocino Avenue, Berkeley, CA 94707. That is the Eradani address right there. And I'm going to wait for that call to come back, and it's going to get me my weather forecast. But now, for anyone who appreciates 30 Rock, I'm going to change the address to this one. And this time, my RPG program is not as happy, right? What's sort of interesting about the problem happening here is that I actually stumbled across it while I was working with the APIs and preparing this demonstration. I just tried that address, and it did this. And what's happening, after I found the error and went through the debugging process, is that for some addresses, the Google Maps API returns a different JSON structure. It fundamentally changes the object structure that it sends back from the API. And so, because my RPG program is focused on picking out particular fields from the JSON, and it's actually parsing the JSON pretty effectively, when the JSON structure changes, my RPG code doesn't know how to handle that. Handling it would take a large amount of business logic and a large amount of translation code to cover all the different object structures that I might get back from here.
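For contrast, in JavaScript that kind of structural variation can be absorbed with one line of optional chaining and a fallback. The two payload shapes below are invented for illustration; they are not the actual Google Maps responses.

```javascript
// Two shapes for the "same" API response: the kind of variation a
// geocoding API can exhibit. Both objects here are made up.
const usual = { results: [{ geometry: { location: { lat: 37.9, lng: -122.3 } } }] };
const variant = { result: { geometry: { location: { lat: 40.75, lng: -73.98 } } } };

// One line of fallback logic: try the usual path, else the alternative,
// else return null instead of crashing.
function extractLocation(payload) {
  return payload.results?.[0]?.geometry?.location ?? payload.result?.geometry?.location ?? null;
}

console.log(extractLocation(usual));   // { lat: 37.9, lng: -122.3 }
console.log(extractLocation(variant)); // { lat: 40.75, lng: -73.98 }
console.log(extractLocation({}));      // null
```

A rigid parser tied to one exact structure fails on the second shape; the optional-chaining version degrades gracefully on all three inputs.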

On the flip side, if I run this from my DSPWF version, which is going through the open source side, I immediately get a result back. And I can display my spool files, and I've got a weather forecast. What's happening here is that the Google Maps open source module is handling the uncertainty for us, and the JavaScript code is handling the uncertainty for us. I'm going to look at the JavaScript code for a second just to show that we run our geocode operation, which gets us our coordinates, and then we are simply checking all throughout this process. We're making sure that our objects are the way we want them to be, and if they're not, we're using the alternative structures. It all happens in one line of code. And then we have another line or two that creates a fallback for us to make sure we're handling this. The point here is that JavaScript was designed to handle this uncertainty. It was designed for these kinds of environments, where you never know what's going to happen next. You never know what you're going to get from the API. You never know what you're going to get from users. And so handling these exact issues is built into the language. And so when we do this with the JavaScript program, it's the same RPG code; it's just that the Google Maps API call is being made by the JavaScript. We don't have those errors anymore. We don't run into those problems, because this is the right tool for the job. And the other thing that I wanted to mention here on the reliability note, which Dan brought up earlier, is that I don't have to maintain this code. If Google changes their geocoding API, if they push an update and fundamentally change the way the API works, I don't care. I don't have to re-implement my API. Actually, I was just working with one of our customers on this exact issue. They're providing an API, and now the structure of their API has changed.
And now they've got consumers who are complaining, because those consumers are calling the API directly, and now the structure is different and it's breaking their code. With Google, with this open source integration, we never have to worry about that, because if Google changes their API, they will update their module. And then what we do is run one command to update our version of the Google Maps module, and we're done. We have the new version, and we are ready to go with their updated APIs. So the reliability of this system is so much higher. It's so much more stable. And again, all of this ties together. Going back to productivity, that means I can focus on building features. I can focus on expanding our systems and streamlining our systems, rather than focusing on just maintaining calls that we've already created because Google decided to change their APIs. That is one of the key values here. These systems are reliable in production, and they are really easy to maintain, because you have Google's engineering team backing you up. It's not just you. We did a webinar about a year ago called "11 Million Developers on Your Team" that was all about that. You are backed by the whole open source JavaScript community, not just by the people on your team. And that allows you to be massively productive. So we've covered three things so far: productivity, performance, and reliability. One thing that I wanted to add is a quick note about security, because during Dan's presentation earlier I saw a question about Okta. I'm not sure if that's been answered. But along with that open source idea, integrating with something like Okta is extremely easy. If I wanted to get Okta-based authentication into my system, I happen to know what I would do, but a quick Google search would lead you to the same place.
I'd probably use Passport.js, which is an open source module, currently downloaded about 1.1 million times a week, whose whole job is authenticating users. And it supports basically everything that you need. The thing is that all of these companies that create authentication methods add strategies for Passport because it's so popular. I think there are something like 430 authentication methods supported: Facebook authentication, Twitter, OpenID, OAuth, Basic Auth. Whatever it is that you need is already supported in here. And this is the thing: when you have a question about whether a particular authentication method is supported, the answer is pretty much always yes, because the companies that create the authentication standards create the modules. That's how they expect you to interface with their systems. And so integrating with these other authentication methods is extremely simple, because we just pull in the module that was built by those companies and integrate it into our application. And effectively, that's it. So again: productivity, performance, reliability, security. The last thing that I wanted to mention about these open source modules is a bit of parting wisdom around maintenance. The most important point when we talk about maintenance, I believe, and one that I've seen in practice at the many, many shops that we've worked with, is that we have to be standard. And what does that mean? Passport is a great example of this. Passport.js is an open source module that handles authentication to your web APIs for you. And it supports, I think, 430, 480, somewhere around there, different authentication mechanisms that you can just bring in for free. But in order to do that, it expects that you are using a standard web application architecture.
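That "standard web application architecture" largely comes down to one shared interface: a middleware is a function of (req, res, next). Rather than pulling in Express and Passport themselves, this self-contained sketch reimplements the convention in plain Node to show why any authentication strategy can slot into a standard pipeline; the `fakeAuth` check is purely illustrative.

```javascript
// A tiny middleware pipeline in the Express style: each middleware
// receives (req, res, next) and either handles the request or calls
// next() to pass it along.
function runPipeline(middlewares, req, res) {
  let i = 0;
  const next = () => {
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  };
  next();
}

// A stand-in "strategy": in a real app this slot would be filled by a
// Passport strategy instead of this hard-coded token check.
const fakeAuth = (req, res, next) => {
  if (req.headers.authorization === "Bearer secret") next();
  else res.body = "401 Unauthorized";
};

// The actual business handler, untouched by auth concerns.
const handler = (req, res) => { res.body = "weather forecast"; };

const res1 = {};
runPipeline([fakeAuth, handler], { headers: { authorization: "Bearer secret" } }, res1);
console.log(res1.body); // "weather forecast"

const res2 = {};
runPipeline([fakeAuth, handler], { headers: {} }, res2);
console.log(res2.body); // "401 Unauthorized"
```

Because every strategy speaks the same (req, res, next) contract, swapping one auth mechanism for another means replacing one middleware, not rewriting the pipeline.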
And if you are stuck in a proprietary system that is totally locked in to a very tightly controlled low code or no code system, you can't do this. The reality is that when you work with a non-standard architecture, that's when you start to run into problems integrating with systems, because you have to do the integration manually. That's when you run into the problems. But consider designing your API systems like we do at Eradani, using Express as the core. Express is a web framework that is currently being downloaded almost 17 million times every single week, and Passport was actually built to work with Express. That means that if you use this framework, if you're standard, you have those 430, 480, whatever that number is, authentication mechanisms already implemented for you. And I just want to point out that there is always a lot of temptation in the development community to go the drag and drop, low code, no code route. But the truth, which I've seen at company after company across the different generations of low code and no code applications, is that when you work with a proprietary framework or tool that is not open source, then if you need Okta, if you need Facebook authentication, if you need Google Maps, whatever it is, you have to wait for your proprietary vendor to implement those functions. That means everybody who's using that low code, no code platform is sharing the vendor's development team. If there are, let's say, 100 developers at that vendor working on the platform, every single company that is using that non-standard proprietary system is sharing those 100 developers. And so you're going to be waiting a very long time for the implementations of the new features that you need.
But when you work with the open source community, things are usually implemented there before they are even standardized. And so if you're running on a standard architecture, using open source standard tools, you will be able to implement things faster than you would in a low code, no code platform, and you'll also have access to every open source module in the community, to every feature of those one and a half million open source modules. So just to wrap that up, I really want to drive that point home, because it really is critical. I have worked with a lot of companies that have gone down the low code, no code route and ended up locked into very bad situations. So I want to make sure to point that out. So just to bring this full circle: we talked about productivity. We talked about performance. We talked about maintenance, reliability, and security, and how we can implement those in this application, how we can make sure they're there. And the key points that I really want to drive home are, one, use the right tool for the job. You can hammer in a nail with the butt of a screwdriver. You can do that. It's not what it was built to do, and it's not necessarily a great idea, but you can do it. But we want to make sure we're using the right tool for the job. That will get us the most efficient, most effective implementation. And the second thing is we have to stay standard, because if we don't stay standard, we lose the power of the open source community, and we're stuck in that proprietary bubble again. But if we're able to follow those rules, we can leverage the full power of these systems and be massively more productive. So that's what I've got for you.

I'm going to pass it back to Dan so that he can wrap this up. Great. Thank you, Aaron. Thanks for taking us through that. I actually brought up some slides because of one of the questions we got. I know you focused a lot on the ability to integrate open source modules, because that is so powerful. But we had a question: if you're not going to use open source modules, let's say you just want to call between your own systems, does this still create value, or is the value only in using open source modules? And the answer is that a lot of the key value actually is just in the ability to create these APIs very, very quickly. Included in Eradani Connect is an API generator. So basically, all you need to do is say, I want to create an API. This is doing it through the command interface, so I want to run this command. Basically, you tell it what method you want: am I going to get some information, read some information, do I want to post, am I updating, am I sending new information, what's the function that I want to run, what is the open source function that I want to run. And then there's this thing called the data model. The data model is what helps us understand what the data looks like. This is an example of a data model over here on the left, and it is built from your RPG data description. So this is just a sample description of an RPG data structure, and you can see you've got customer ID here as 8P. You can see it's over here as well: customer ID, 8P. The reason we create this data model is that from that point forward we generate both the RPG side and the open source side of the data description, so we can make sure they stay in sync. This becomes critically important because, as you start to build, if you end up with tens or scores or even hundreds of APIs, what happens when somebody goes in and makes a schema change on the IBM i side? How do I know what I have to change?
Well, we'll track that for you, and we'll make sure that the two sides stay in sync, so you don't break the API because you made a change on one side that isn't showing up on the other side. So basically, you now have the ability to maintain all of the data descriptions and ensure that they stay in sync. So again, you run the command. It says here: create the API. We'll generate the API based on the information in this data model. Oh, actually, I didn't mean to have this in here. This is a screenshot from our new graphical user interface that's coming out. So instead of running a command, you'll be able to fill that data in and then say, just generate the API. So the productivity is here as well. You just generate the API. It builds it for you, and you don't have to worry about writing all that code yourself. Now, included in that generation, there's a whole lot of stuff. When I do that generation, it not only creates that API definition, but it builds all the code for validating the data. It builds the authentication code for authenticating the user and determining what they are able to do. It generates the connection code to talk to the appropriate connector on the IBM i. It generates all the code for parsing the JSON. It creates the code to actually execute the program, and it creates the code to translate back from parameter data into JSON. So all of that is part of that generation step. You go in and you say, here's what I want to generate, and it builds that for you automatically. So just to wrap up, the main things are that with Eradani Connect, the security is built in and you can use the latest security modules. It's very easy to maintain, because you maintain the information about the API in one place and it generates the code for you. All of the error handling code is built in, so it's handling anything that comes back, and it's logging everything so you can see exactly what's happening.
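As a rough illustration of the single-data-model idea, here is a JavaScript sketch of generating a validator from one shared field description. The model format, field names, and type codes below are invented for this example; they are not Eradani Connect's actual schema.

```javascript
// Hypothetical data model in the spirit of the demo: one description,
// derived from an RPG data structure, drives the open source side too.
const customerModel = {
  customerId: { rpgType: "8P 0", jsType: "number", maxDigits: 8 },
  name:       { rpgType: "30A",  jsType: "string", maxLength: 30 },
};

// Generate a validator from the model, so a schema change in one place
// updates the checks on both sides of the API.
function makeValidator(model) {
  return (payload) => {
    const errors = [];
    for (const [field, spec] of Object.entries(model)) {
      const value = payload[field];
      if (typeof value !== spec.jsType) {
        errors.push(`${field}: expected ${spec.jsType}`);
      } else if (spec.maxLength && value.length > spec.maxLength) {
        errors.push(`${field}: longer than ${spec.maxLength}`);
      } else if (spec.maxDigits && String(Math.trunc(Math.abs(value))).length > spec.maxDigits) {
        errors.push(`${field}: more than ${spec.maxDigits} digits`);
      }
    }
    return errors;
  };
}

const validate = makeValidator(customerModel);
console.log(validate({ customerId: 12345678, name: "Acme" })); // []
console.log(validate({ customerId: "oops", name: "Acme" }));   // ["customerId: expected number"]
```

Because the validator is generated rather than hand-written, regenerating after a schema change keeps every API that uses the model consistent.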
And I know we are using very simple data sets here, but it will handle very complex JSON data structures. It’ll handle very complex IBM i data structures, and it can handle all of the translations between the two. And it’s designed to be very, very high performance and handle the asynchronous nature of the open source environment. And again, our people are available to help you with those things. And with that, I’m going to turn it back over to Mitch. Mitch, any other questions? That was the only question I saw. Any other questions come up? Yeah, there is one, a good one from David.

He wrote, it's about DevOps. He said: I need to create APIs on dev, then send them to QA and other non-production systems, then deploy out to 12 systems. How easy is it for me to create on one system and send and deploy those APIs to another system? Who wants to answer that? Well, let me take a stab at it, and you can fix it up for me. What's really interesting is that typically, in developing APIs, you're developing them on your own machine. You're actually starting with APIs that are running on your machine, calling out to the IBM i. So you may start there, and then you may deploy to a server where the code will run for test or for production. At Eradani, we actually manage all of our code using open source tools. We've got everything running inside of Git, and we use Make for doing the builds and the deployment. So that's our open source web code, all of our API code, as well as our RPG, our Db2, and our CL code. Everything is managed in Git; we actually keep it up on GitHub. And when we do a build, it builds everything that actually has to be built because of the changes that were made: the open source stuff as well as the RPG code. It doesn't build everything, just everything that needs to be built. We do that via Make, and Make also does the deployment out to the places where the code needs to run for that particular stage of the lifecycle. Aaron, do you want to add to that? Yeah, I just wanted to bring a real-life example to it. There is the internal example at Eradani, but one point I wanted to mention is that we've been working with one of our customers in particular on setting up Azure DevOps. They're now set up, and they are making APIs using Eradani Connect on their IBM i. What they do is write the code on their local development machine, just make the code changes, and then commit it to Git.
And then they go to Azure DevOps, and they click the Go button. And that handles the entire deployment process, everything that needs to happen: the build process, formatting, the code checking, all the validation, deploying it, restarting the systems. It's all automated. So to answer the original question of how complex it is to do this, the answer is that it's extremely simple. Usually what ends up happening is you write your code, you hit a button, and you're done. And we can help you set that up, right? I know we're out of time here, Mitch. So I know there's one more question. Let me just answer the last question, which is about the pricing model. We've tried to make a very, very simple pricing model. It is not based on how many API calls you make. We don't count stuff. We don't care about your IBM i serial number or your machine model number. We don't tie it to any of that information. It's really just based on the number of production LPARs that you're running against. And we can get a more detailed answer for your particular environment.