Transitioning Mainframe Workloads into Agile Services with AWS and Micro Focus
By Guy Sofer, Product Manager, Application Intelligence and Modernization – Micro Focus
By Phil de Valence, WW Tech Leader, Mainframe Modernization – AWS
Increasing the agility of mainframe workloads is critical for organizations whose core business processes and data run on mainframes.
Together, Amazon Web Services (AWS) and Micro Focus can help you accelerate the transition to agile services with a tool-based evolutionary approach, avoiding the expensive, risky, and slow rip-and-replace approach.
In this post, we describe the overall approach to agility and detail the transitions and Micro Focus tools for transforming mainframe workloads into agile services on AWS. We specifically highlight how to refactor towards both macroservices and microservices.
Micro Focus is an AWS Advanced Technology Partner with more than 40 years of experience in modernizing and replatforming mainframe workloads with thousands of successful projects, including a recent modernization of Kmart’s mainframe to AWS.
Mainframe Workload Modernization
Mainframe workload modernization may seem long and risky, and that can indeed be the case with the all-or-nothing or rip-and-replace approach.
This old methodology, much like the waterfall model, involved a complete overhaul and manual re-engineering, causing analysis paralysis at best and failed projects at worst.
There is a better way.
More and more companies are choosing a completely different path, implementing an agile, iterative modernization.
These organizations are benefitting from low cost, low risk, and gradual modernization that’s easier to justify, plan, and implement.
This approach has proven to have the highest success rate based on research covering 50,000 projects detailed in a report by the Standish Group.
Figure 1 – Success rate by project type. Compiled by Standish Group International.
With Micro Focus and AWS technology, companies can quickly replatform their mainframe workloads to the cloud with high return on investment (ROI), increased agility, and tangible business benefits in the short term. The efficiency and budget savings earned can then fund the modernization of the remaining mainframe workloads.
The figure below represents the overall approach from a mainframe monolithic workload to agile services, by modernizing to macroservices first, and then optimizing to microservices when suitable.
Figure 2 – Overall approach from mainframe monolith to agile services.
This evolution maximizes the benefits because it brings agility to all of the functions within a large mainframe workload. It provides this agility in the least amount of time by using the automated, mature, and fit-for-purpose Micro Focus tools.
12 Agility Attributes for Mainframe Workloads
Agility is the ability of a business to respond to change quickly and inexpensively. To become agile, mainframe workloads need to adopt 12 attributes.
In a previous blog post, we described these agility attributes, including agile development with CI/CD, modern development tools, knowledge-based development, elasticity, managed services, consumption-based pricing, and more. Each attribute facilitates change at higher speed and lower cost.
In the next sections, you will notice that these agility attributes are the main driver and objective of the first phase of the modernization approach.
Short-Term Replatforming to Elastic Compute
During the first phase, the mainframe workload is replatformed and modernized to run on elastic compute or containers. The application code is ported and recompiled to execute on Micro Focus Enterprise Server.
Optionally, some of the application data can be converted and moved to a relational database for better availability and elasticity, and to enable further modernization.
Micro Focus Enterprise Server provides a mainframe-compatible production engine that allows mainframe COBOL and PL/I applications to run virtually unchanged on Amazon Elastic Compute Cloud (Amazon EC2) instances or in Docker containers. You can learn more about the modernization approaches in this Micro Focus brochure.
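To make the data conversion idea concrete, here is a minimal Python sketch that loads fixed-width records from a flat file into a relational table. The record layout and file names are hypothetical, and SQLite is used only as a self-contained stand-in for a relational target such as Amazon Aurora.

```python
import sqlite3

# Hypothetical fixed-width layout for a VSAM-style customer record:
# positions 0-5 = account ID, 6-35 = name, 36-45 = balance (plain digits here).
RECORD_LAYOUT = [("account_id", 0, 6), ("name", 6, 36), ("balance", 36, 46)]

def parse_record(line: str) -> dict:
    """Slice one fixed-width record into named fields."""
    return {name: line[start:end].strip() for name, start, end in RECORD_LAYOUT}

# SQLite stands in for a managed relational database in this sketch.
conn = sqlite3.connect("customers.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS customer "
    "(account_id TEXT PRIMARY KEY, name TEXT, balance INTEGER)"
)

with open("customers.dat") as flat_file:   # hypothetical exported data file
    for line in flat_file:
        rec = parse_record(line.rstrip("\n"))
        conn.execute(
            "INSERT OR REPLACE INTO customer VALUES (?, ?, ?)",
            (rec["account_id"], rec["name"], int(rec["balance"] or 0)),
        )
conn.commit()
```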
The architecture below shows a mainframe workload using elastic compute spread across multiple AWS Availability Zones and data centers. It benefits from a Micro Focus Performance and Availability Cluster (PAC) combined with AWS Auto Scaling.
Figure 3 – Elastic compute with Micro Focus Enterprise Server on Amazon EC2 or EKS.
This architecture relies on Amazon EC2 instances or Amazon Elastic Kubernetes Service (Amazon EKS) containers. It is designed to meet or exceed mainframe workloads' non-functional requirements for high security, high availability, elasticity, and strong systems management.
You can learn more about the quality of service in this blog post: Empowering Enterprise Mainframe Workloads on AWS with Micro Focus.
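To illustrate the elasticity side of this architecture, the following sketch uses the AWS SDK for Python (boto3) to create an Auto Scaling group spanning subnets in two Availability Zones, with a target tracking scaling policy. The group name, launch template, and subnet IDs are hypothetical placeholders, not values from a real deployment.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group for Enterprise Server instances across two AZs.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mf-enterprise-server-asg",      # placeholder name
    LaunchTemplate={"LaunchTemplateName": "mf-enterprise-server", "Version": "$Latest"},
    MinSize=2,
    MaxSize=8,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # subnets in different AZs
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)

# Scale on average CPU so capacity follows the transaction load.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="mf-enterprise-server-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```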
Agile Development and Deployment
During the first phase, we also bring agility by introducing a CI/CD pipeline following DevOps best practices combined with a modern Integrated Development Environment (IDE).
The CI/CD pipeline increases speed, agility, quality, and cost-efficiency with as much automation as possible. It’s flexible and designed to be modular based on tool preferences for each stage of the pipeline.
Figure 4 – Example CI/CD pipeline for a mainframe workload or macroservices.
One example of such a CI/CD pipeline is shown above. It leverages Micro Focus Enterprise Developer as a modern IDE, Enterprise Analyzer for code analysis, Unified Functional Testing (UFT) for test automation, and Enterprise Test Server for test execution.
It also uses the fully managed AWS Code services (AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy) for source code management, pipeline orchestration, build, and deployment. You can learn more about such a COBOL and PL/I CI/CD pipeline in this blog post: Enable Agile Mainframe Development, Test, and CI/CD with AWS and Micro Focus.
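As a small illustration of operating such a pipeline programmatically, the sketch below uses boto3 to start a run of a hypothetical pipeline and report the latest status of each stage. The pipeline name is an assumption for illustration only.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Start a run of a hypothetical COBOL macroservice pipeline (e.g., after a hotfix merge).
execution = codepipeline.start_pipeline_execution(name="cobol-macroservice-pipeline")
print("Started execution:", execution["pipelineExecutionId"])

# Report the latest status of each stage (Source, Build, Test, Deploy, ...).
state = codepipeline.get_pipeline_state(name="cobol-macroservice-pipeline")
for stage in state["stageStates"]:
    latest = stage.get("latestExecution", {})
    print(stage["stageName"], "->", latest.get("status", "NOT_RUN"))
```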
Evolution to Agile Services
Initially, the typical mainframe workload is composed of tightly coupled programs using a shared data store. During the first phase, we perform a tool-based, short-term modernization to agile macroservices.
Macroservices are coarse-grained services, which typically share a central database. They possess the desired 12 agility attributes and avoid the challenges of microservices up front. As soon as services are created with Application Programming Interfaces (APIs), they become accessible for reuse and innovation.
During the second phase, we optimize further by creating microservices when appropriate. A microservice is an independently deployable service that owns its own data for information hiding, and consequently requires one database per microservice.
This microservice extraction is labor-intensive, with a lot of manual re-engineering for both program and data extraction. Microservices are not suitable for all workloads.
There are benefits and challenges with microservices, which we’ll discuss briefly in this post, and consequently we only extract them when and where it makes sense.
This evolution is not just a migration but a modernization on many dimensions.
Figure 5 – Evolutions towards agility.
A key benefit of this move to a macroservices architecture is that the 12 agility attributes are built in, as shown below.
Figure 6 – Agility gains with the short-term macroservices architecture.
You can learn more about the overall evolution in this AWS re:Invent session: Mainframe Workloads’ Fast Track to Agility.
Macroservices Creation
Macroservices identification and scoping can be done at any stage: when the application is on the mainframe, during the replatforming phase, or once a complete workload is deployed on AWS.
A macroservice is a portion of a mainframe workload exposed as a service or group of services. That means macroservices can be created by merely exposing some programs as services while keeping the mainframe workload monolith unchanged.
Macroservices can also be created by identifying and separating groups of programs or modules which are fairly independent from the rest of the workload monolith. Macroservices should be created focusing on high business value and using tools to minimize manual efforts and risks.
If a macroservice takes too long or is too complex to create, it should likely be postponed to the subsequent optimization phase so the first phase can be completed quickly. This is one of the reasons we keep a shared database for macroservices, avoiding the complexities of data extraction and database splits upfront.
Micro Focus Enterprise Analyzer facilitates the identification and scoping of macroservices. It provides the ability to query and investigate the application code, find the needed logic, map and visualize boundaries and dependencies, document the business rules and logic, and define the macroservice.
At this point, we invest most of the time identifying macroservice candidates, mapping their logic, and prioritizing the work based on high business value and a low degree of required code changes. The result should be minimal required testing, increased speed, and lower risk.
The figure below shows the overall call map generated by Micro Focus Enterprise Analyzer for a large mainframe application with millions of lines of code. You can see some programs are heavily interconnected while other groups are more independent.
Figure 7 – Micro Focus Enterprise Analyzer call map with automatic clustering.
We can use these clustering algorithms to get a better view of program dependencies and suggested macroservice scopes. We can refine the scoping by drilling down further into Business Area, Business Function, and Business Rules reports. We get an extensive view of a potential macroservice with the Call Map, Source Dependencies, and Data Dependencies views.
Once the macroservice scope is clearly defined, the needed resources are gathered, and the development and deployment pipelines are created. These resources include programs, copybooks, dependencies, data files, and data tables. The macroservice's resources are deployed either collocated with the rest of the workload monolith or onto separate compute instances.
Next, we create the service interface for the macroservice. We have multiple options to create and expose the service as a web service, as a RESTful service, or via queue messaging. We illustrate two options here:
- Enterprise Developer Interface Mapping Toolkit (IMTK) detects the linkage section content and maps it into a new RESTful service interface with customizable HTTP methods and paths.
Figure 8 – Enterprise Developer Interface Mapping Toolkit (IMTK).
- For applications that use CICS, Enterprise Developer provides the CICS Web Service Wizard. It creates REST or SOAP web services while using and preserving the existing CICS configuration for other existing use cases.
Figure 9 – Enterprise Developer CICS Web Service Wizard.
The COBOL or PL/I macroservice is then deployed onto AWS elastic compute. Its service interface is readily available for reuse and innovation.
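To show what that reuse looks like from a consumer's perspective, here is a minimal sketch that calls such a REST interface using Python's requests library. The host, path, and field names are illustrative assumptions, as the actual interface depends on the program's linkage section and the mapping you define in IMTK.

```python
import requests

# Hypothetical REST endpoint exposed for a COBOL account-inquiry program.
BASE_URL = "https://macroservices.example.com"

response = requests.post(
    f"{BASE_URL}/accounts/balance",
    json={"accountId": "000123456"},   # fields mapped from the program's linkage section
    timeout=5,
)
response.raise_for_status()
print(response.json())  # e.g., {"accountId": "000123456", "balance": 1542.75}
```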
Microservices?
A microservice is an independently deployable service. That means it can be developed, tested, deployed, and scaled independently of other services. It is loosely coupled and communicates with other services using lightweight mechanisms.
A microservice is modeled around a business capability, and this allows grouping strongly-related functions and aligning them with business outcomes.
A microservice owns its own data. This is so that each microservice is independent and decoupled from other services. It’s also for information hiding, which prevents unintended coupling. If you see a database shared across services, these are not microservices yet.
Figure 10 – Macroservice versus microservice comparison.
There are many benefits around using microservices, such as autonomy, technology choices, scalability options, reduced time to market, and smaller blast radius. There are also drawbacks and challenges with microservices.
Network latency and failures can make things unpredictable. Microservices’ operational complexity can increase exponentially with the proliferation of functions, databases, and interfaces. That’s the reason why microservices should not be the default target architecture.
In fact, coarse-grained agile macroservices often meet customer business needs very well. We should incrementally create microservices only when and where it makes sense, with a specific goal and scope for each.
Microservice Extraction
Once we have sound business and technical goals for creating a specific microservice, we can begin scoping the microservice around a particular business capability with specific program and data elements.
One popular technique for scoping microservices is Domain-Driven Design, which helps model the business domain, divide it into Bounded Contexts and Aggregates, and map these to individual microservices.
Event Storming is a lightweight method to model business processes following Domain-Driven Design. The business capability scope then needs to be mapped to a technical scope with specific program and data objects supporting it.
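As a lightweight illustration of this mapping, the sketch below captures Event Storming output (commands and events per Aggregate) in Python and maps each Bounded Context to a candidate microservice scope. The Payments domain names are hypothetical, not taken from a real workload.

```python
from dataclasses import dataclass, field

@dataclass
class Aggregate:
    """An Aggregate discovered during Event Storming: its commands and events."""
    name: str
    commands: list[str]
    events: list[str]

@dataclass
class BoundedContext:
    name: str
    aggregates: list[Aggregate] = field(default_factory=list)

    def candidate_microservice(self) -> str:
        # One Bounded Context maps to one candidate microservice scope.
        return f"{self.name.lower()}-service"

payments = BoundedContext("Payments", [
    Aggregate("Payment", ["AuthorizePayment"], ["PaymentAuthorized", "PaymentDeclined"]),
])
print(payments.candidate_microservice())  # payments-service
```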
Micro Focus Enterprise Analyzer provides knowledge and insights into the COBOL and PL/I application code to refine and decide the exact technical scope for a microservice. For example, it can display Create, Read, Update, Delete (CRUD) reports to highlight the data access and usage. We can also use views such as call maps, source dependencies, and data dependencies.
To understand the complete data lifecycle for a business process, we can do an impact analysis using the data flow diagram as shown in Figure 11. In this diagram, we can easily identify how the three programs (in blue) access data files (in cyan) and database tables (in yellow).
Information about data relationships like used and shared data and read-write relationships is readily available.
Figure 11 – Impact analysis with the data flow diagram.
Once specific programs and data elements are identified for the microservice, they guide the logic and data extraction from the existing application or macroservice.
With Micro Focus Enterprise Developer, we use the code slicing capability, which creates new independent programs based on existing code sections or paragraphs. It refactors source code, including dependencies such as data divisions or linkage sections, and provides the option to change the code of the original program to point to the newly created programs.
During this activity, we can also decide to strip away mainframe-specific constructs such as EXEC CICS APIs or legacy mainframe APIs. You can find details about code slicing and see it in action in AMC Tech Tips: Innovative Refactoring in COBOL, and in this webinar.
Data extraction is required for a proper microservice with one database per service. Splitting the database schema is a more manual process, and a more complex one because it can impact data consistency and ACID transactions.
Some patterns can be used such as database views, wrappers, replicas, or synchronization. If transactionality is impacted or broken, we can use compensation patterns, such as Sagas or Try-Confirm-Cancel (TCC). You can learn more about program and data extraction in Sam Newman’s book, Monolith to Microservices.
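To make the compensation idea concrete, here is a minimal Saga sketch in Python: each step pairs an action with a compensating action, and a failure part-way through triggers the compensations of completed steps in reverse order. The step names and stubbed "services" are hypothetical.

```python
from typing import Callable

class SagaStep:
    def __init__(self, name: str, action: Callable[[], None], compensate: Callable[[], None]):
        self.name, self.action, self.compensate = name, action, compensate

def run_saga(steps: list[SagaStep]) -> bool:
    """Run steps in order; on failure, undo completed steps in reverse."""
    completed = []
    for step in steps:
        try:
            step.action()
            completed.append(step)
        except Exception as err:
            print(f"{step.name} failed ({err}); compensating...")
            for done in reversed(completed):
                done.compensate()
            return False
    return True

# Hypothetical stubs standing in for calls to separate services/databases.
def debit_account(): print("debit account")
def credit_account(): raise RuntimeError("credit service unavailable")
def refund_account(): print("refund account")

ok = run_saga([
    SagaStep("debit", debit_account, refund_account),
    SagaStep("credit", credit_account, lambda: print("reverse credit")),
])
print("saga committed" if ok else "saga rolled back")
```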
With logic and data extracted, we can use the same techniques described above to build service interfaces. Finally, we deploy the newly formed COBOL or PL/I microservice onto elastic compute. It can be deployed on Amazon EC2 or in a managed container service such as Amazon EKS, Amazon Elastic Container Service (Amazon ECS), or AWS Fargate.
The COBOL microservice can also be deployed as a serverless AWS Lambda function. In this case, under the hood, it’s compiled into Java bytecode which can run in a Java Virtual Machine within a Lambda function.
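As an illustration of consuming such a function, this sketch synchronously invokes a hypothetical Lambda-deployed COBOL microservice with boto3; the function name and payload shape are assumptions for illustration.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Synchronously invoke a hypothetical COBOL microservice packaged as a Lambda function.
response = lambda_client.invoke(
    FunctionName="cobol-account-microservice",   # placeholder name
    InvocationType="RequestResponse",
    Payload=json.dumps({"accountId": "000123456"}).encode(),
)
print(json.load(response["Payload"]))
```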
Extracting microservices is an incremental effort for both program and data extraction. It is done in small steps to learn from the process and adjust where needed.
Go Build Agile Services
Macroservices bring the 12 agility attributes to the many business functions within a mainframe workload. That means there's a steep increase in overall agility for all business functions as soon as the workload is replatformed in the short term.
This is very different from a rewrite to microservices straight from a mainframe, where incremental small microservices for small business functions provide small agility gains over a long period of time.
The macroservices-then-microservices evolutionary approach minimizes risk and accelerates transformation by leveraging proven Micro Focus tools. It benefits from the unique AWS Cloud capabilities for reliability and agility.
We invite you to learn more about the value and robustness of this approach by following the many links available throughout this post. We recommend demonstrating the value for a particular mainframe workload via a proof of concept (POC) or pilot. A good first step is the Micro Focus Value Profile Day.
Micro Focus – AWS Partner Spotlight
Micro Focus is an AWS Competency Partner that enables customers to utilize new technology solutions while maximizing the value of their investments in critical IT infrastructure and business applications.
Contact Micro Focus | Partner Overview | AWS Marketplace