Following my last post, Steve Heffner of Pennington Systems emailed me a very comprehensive text about his approach to legacy language conversions. He makes some interesting points, so I have decided to post Steve's comments in their entirety.
I agree that fading languages represent a major obstacle for CIOs in moving to more modern IT practices, as well as presenting staffing and maintenance problems. I also agree that RPG is on that list of languages, along with Pascal, Fortran, and, to a lesser degree, PL/I. COBOL is a different matter. There is such a huge quantity of COBOL out there that sheer inertia will keep the COBOL ship afloat for a long time. In addition, recent COBOL standards enhancements have made it a more modernizable language, and COBOL vendors such as Micro Focus are adapting it even further to the brave new world of SOA, Web 2.0, and cloud computing. (Full disclosure: My company is a member of Micro Focus's Migration & Transformation Consortium [MTC], and we are doing some work with Micro Focus.)
Back to fading languages. I understand your "no JOBOL" stance, but I also agree with your pragmatic embrace of language translation as a "bridge" strategy. There are a number of reasons for such a bridge.
Regarding the "JOBOL" issue, it's true that a straightforward, literal translation of one language to another ends up with results that are neither one nor the other, and likely worse than either. This is exacerbated by the fact that some language translators produce output that can be very difficult to read or maintain. In addition, application modernization involves much more than just code translation. The code must be assessed thoroughly before any porting activity begins. And the port is very likely to involve issues that are not language-specific, such as code quality, a move to a different hardware platform and/or operating system, and repurposing the code to adapt it to a new environment such as SOA.
What does this mean? It means what you need is NOT a simple, dedicated translation tool, but instead what I call a software engineering "meta-tool", which can be tailored and "tuned" to get the best possible result, including accommodating the vagaries of a particular body of code. Such a meta-tool must have a powerful rules language you can use to tell it what you want it to automate and how, and that rules language must provide significant leverage, so that a substantial job can be automated with minimal effort. The rules language must also be usable by any competent senior systems software engineer, not just by mad geniuses hidden in the vendor's back room.
I happen to be the author and vendor of such a meta-tool, named XTRAN. It is an expert system for the symbolic manipulation of what I call Unambiguously Parsable Languages (UPLs) -- typically computer languages. XTRAN's powerful rules language allows the automation of virtually any software engineering task involving existing code, with the leverage mentioned above. XTRAN accommodates a broad range of languages, including many assemblers, 3GLs, 4GLs, XML, and HTML, as well as proprietary, scripting, data base, and Domain Specific languages.
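XTRAN's actual rules language is proprietary and far more capable than anything shown here. Purely as an illustration of the meta-tool idea -- a generic engine driven by a replaceable rule table, rather than a hard-coded one-shot translator -- here is a toy sketch in Python using the standard ast module. The function names in the rule table are invented for the example:

```python
# Toy "meta-tool" sketch: a generic rewrite engine whose behavior is
# determined entirely by a rule table, not by hard-coded translations.
import ast

# Each rule maps a legacy function name to its modern replacement.
# The table -- not the engine -- carries the migration knowledge,
# so tailoring the tool to a new body of code means editing rules.
RENAME_RULES = {"getstr": "input", "writeln": "print"}

class RuleDrivenRewriter(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id in RENAME_RULES:
            node.func = ast.copy_location(
                ast.Name(id=RENAME_RULES[node.func.id], ctx=ast.Load()),
                node.func)
        return node

def apply_rules(source: str) -> str:
    """Parse source, apply the rule table over the tree, re-emit code."""
    tree = RuleDrivenRewriter().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(apply_rules("writeln(getstr())"))  # -> print(input())
```

The point of the sketch is the separation of concerns: the engine parses, walks, and re-emits code mechanically, while the rules express what a particular job needs -- the same division of labor a meta-tool provides at full scale.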
We classify software engineering tasks that involve existing code into three broad categories: analysis, re-engineering (modernization), and translation. (Note that "existing code" includes code that has just been created.)
Of course, some software engineering tasks involve combinations of these three categories. An application modernization effort is virtually guaranteed to involve all three to varying degrees: analysis to assess the code, re-engineering to improve it, and translation that may be needed to move the code to a modern, standard, and portable language. A meta-tool like XTRAN can be used to automate all of the aspects of application modernization with a single tool, single training, and single skill set.
You can try to cobble together a tool set from different vendors' "point solutions" for analysis, modernization, and translation. But you will inevitably encounter both gaps in tool coverage and mismatches among tools -- the output of one doesn't fit the input requirements of another. With a meta-tool's rules language, you can create your own tool set, with no gaps and no mismatches. And the leverage provided by a good meta-tool means that you can do this with reasonable effort. Not only that, but the resulting tool set is yours; if you're in the software services business, this can represent a substantial competitive advantage.
One reason automation of the software engineering work is so important is that it minimizes the introduction of bugs resulting from human error during the activity. In other words, the more you automate, the less debugging you'll have to do. For large projects, this can even mean the difference between success and abject failure.
Even after the modernization effort, a meta-tool such as XTRAN can be used to automate ongoing software engineering efforts such as monitoring (and perhaps remediating) adherence to quality standards or coding conventions, as well as providing occasionally needed ad hoc analysis and re-engineering automation.
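As a rough sketch of what automated convention monitoring can look like, here is a minimal checker in Python. The rules themselves (line length, tab characters, flagging GOTO for review) are illustrative placeholders, not any particular coding standard:

```python
# Minimal sketch of automated coding-convention monitoring.
# The rules below are illustrative examples only.
import re

MAX_LINE = 100  # assumed shop limit, purely for illustration

def check_conventions(source: str):
    """Return a list of (line_no, message) convention violations."""
    findings = []
    for n, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            findings.append((n, f"line exceeds {MAX_LINE} characters"))
        if "\t" in line:
            findings.append((n, "tab character used for indentation"))
        if re.search(r"\bGOTO\b", line, re.IGNORECASE):
            findings.append((n, "GOTO flagged for re-engineering review"))
    return findings

sample = "MOVE A TO B\n\tGOTO PARA-1\n"
for line_no, msg in check_conventions(sample):
    print(line_no, msg)
```

Run against a code base on a schedule, a checker like this turns adherence monitoring into a report instead of a manual review -- and because the rules are data-driven, ad hoc analyses can be added as one-off rules.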
Even if a decision is made to scrap an existing application in favor of off-the-shelf software, a thorough assessment of the existing system must be done to ensure that you know what you must replace. Many functional specifications are woefully stale, if they even exist at all. It is critical to understand what your systems do before trying to modernize or replace them.
The current economic distress will motivate a lot of CIOs to try to do more with what they've got, to minimize ongoing costs, to adapt their current systems to new needs, and (most importantly) to improve their IT systems to provide the business agility mandated by the speed of market change in the Internet age. All of these goals call for automating the software engineering workload as much as possible.
For more information about my company, XTRAN, and me, please visit WWW.Pennington.com.
The impact of Web 2.0 development on legacy modernization strategies
Computing on the Edge – creating modernization pull
Cloud computing and Software as a Service (SaaS) are about to change the way we look at legacy assets and modernization initiatives. New IT development will be mostly delivered at the Edge of an organization, focused on providing new functionality to user communities and trading partners using Web 2.0 capabilities, delivered through cloud computing as a software service.
Platform as a Service (PaaS) solutions provide development tools and frameworks for rapid cloud computing application development. PaaS solutions such as Force.com, Google App Engine and likely similar offerings from Facebook.com, Amazon.com and others provide a rich portfolio of application functionality in the form of services, and a cloud computing platform to host any applications developed using the PaaS framework. New application development will simply be a task of filling in the blanks and creating the specific logic required for the application being developed.
Oh yes, it’s going to be a pretty exciting place out on the Edge. However, these new developments will always be limited in their value if they cannot interact with Core transaction-processing applications in the organization. At the simplest level, this will be about sharing data and the need for managed data integration solutions. At a more sophisticated level, new cloud computing applications could be developed using a combination of PaaS solutions and internally created Legacy Platform as a Service (LPaaS) solutions.
LPaaS provides a new view of legacy application modernization. The goal is not to radically re-architect existing Core applications, but to allow demand from the Edge to drive the creation of a framework of Core computing services made available through the cloud to support Edge application development.
What needs to be in an LPaaS?
The important issue is that the LPaaS provides a platform of Core computing functionality through the cloud on an as-needed basis. What makes up the functionality set should be driven largely by the requirements of the Edge. The LPaaS should not be a monolithic development project in its own right. Having said this, it is a reasonable assumption that any organization’s LPaaS should contain some basic service functions.
Unlike a PaaS, which consists of a set of application services specifically created for cloud application development, an organization’s LPaaS will be a conduit to non-cloud functionality that already exists in the company’s Core applications. Therefore, the LPaaS requires an enabling infrastructure that allows LPaaS services to be made available through the cloud to new applications being developed on the Edge.
Service Oriented Architecture (SOA) is the logical mechanism to facilitate the creation of a company’s LPaaS.
Using SOA and WOA enablement to create LPaaS solutions
There are two ways to look at SOA. The first and more common way of considering SOA is as an application development standard, the pragmatic answer to object-oriented development’s weaknesses. This is of course correct, but it represents only one aspect of the true value of SOA. The second, and in my view much more important, aspect of SOA is that it facilitates the integration of functionality at a granular service level, through the standards of Web Services and cloud computing. The most important part of this second point is that SOA as a web-driven integration standard only cares about how the interfaces are exposed to be consumed, not how they are actually delivered at the service level.
This means that SOA, and the easier-to-implement Web Oriented Architecture (WOA), have enormous implications for modernizing, or at least making available, functionality buried in Core legacy applications. As long as the functionality can be exposed as a Web Service or WOA service, it doesn’t matter how the functionality is actually performed at the service level.
To enable an LPaaS, an integration framework must be created that allows Core data and legacy application functionality to be rendered to new Edge solutions as reusable services. A services approach provides the required abstraction from the legacy technology and the range of granularity that will be required by new application development assuming a PaaS-like architecture.
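As a small sketch of this service-enablement idea, the following Python (standard library only) wraps a hypothetical legacy pricing routine as a WOA-style HTTP/JSON service. Everything here is invented for illustration: legacy_net_price stands in for a call into a real Core program (a CICS transaction, RPG program, stored procedure, and so on), and the endpoint shape is an assumption, not any product's API:

```python
# Sketch: exposing one piece of Core functionality as a WOA-style
# (HTTP/JSON) service.  legacy_net_price is a hypothetical stand-in
# for a call into an existing legacy routine.
import json
from urllib.parse import parse_qs

def legacy_net_price(list_price: float, customer_class: str) -> float:
    # Placeholder for the real Core pricing logic; the discount
    # table is made up for the example.
    discount = {"A": 0.20, "B": 0.10}.get(customer_class, 0.0)
    return round(list_price * (1 - discount), 2)

def app(environ, start_response):
    # WSGI application: parse the query string, call the "legacy"
    # routine, and return its result as JSON.
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    price = legacy_net_price(float(qs["list_price"][0]),
                             qs.get("class", ["C"])[0])
    body = json.dumps({"net_price": price}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# To serve for real:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
# then e.g.  GET /price?list_price=100&class=A

print(legacy_net_price(100.0, "A"))  # -> 80.0
```

The key property the sketch illustrates is the one argued above: the Edge consumer sees only the service interface (a URL and a JSON document), and how the work is performed behind that interface is invisible to it.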
Many Core applications, particularly Commercial Off the Shelf (COTS) packages, will already provide some level of SOA enablement, and there should be little difficulty in making services from these applications part of the LPaaS. Older, frequently in-house-developed legacy applications, such as those running on the mainframe or departmental servers, will need some form of SOA enablement in order to make their services available as part of the LPaaS.
There are several ‘legacy to service’ enabling solutions available on the market, ranging from making services available using sophisticated screen-scraping techniques to wrapping legacy application programs so they can be reached through SOA adapters. Many organizations have already employed these products to enable legacy application integration.
When identifying which service-enabling solution is right for your needs, it is important to consider the nature of the LPaaS your organization is likely to require, both in the short term and over time, together with obvious issues such as the availability of specific service integration adapters for your legacy technology stack.
The LPaaS Strategy
So, what are the steps to creating an LPaaS for your organization?
First, build the team. This could be called the LPaaS organization, or LPaaSO! The LPaaSO should have the necessary technical and business-analyst resources to build the LPaaS and manage its further development. This will be an eclectic group, consisting of members representing Edge computing initiatives and members representing the Core legacy applications.
Developing an LPaaS strategy will not necessarily diminish the need for traditional large-scale modernization projects, such as cost-saving rehosting initiatives or legacy language conversion strategies, but an LPaaS strategy will complement these other modernization projects and ensure adequate focus is given to matching modernization steps to new Edge initiatives.