neomotus
BLOG.NEOMOTUS.COM

2009 modernization market prediction from Geoff Baker of Transoft

In response to my request for Application Modernization predictions for 2009, Geoff Baker of Transoft submitted the following message of both economic doom and migration project hope:

January 2009 Geoff Baker of Transoft (www.transoft.com) writes:
 
In 2009, global businesses are facing a refinancing timebomb as they renew or restructure existing loan facilities, and the paucity of available debt financing will squeeze even blue chip companies. What will this backdrop mean for application modernization and migration strategies and vendor companies, such as Transoft? In summary, the large COTS legacy replacement projects are less likely, so migration will continue to be (more) attractive. There will be more companies risking the ‘do nothing’ option. But the best businesses – small and large – will continue to require business improvement, and for modernization and migration vendors there will continue to be project work in 2009. However, the projects must be well justified, generally fixed price, and are likely to be somewhat smaller than in the past, with some implemented iteratively, based on achieving each milestone in turn. Such projects might include replacing legacy data sources with an RDBMS, adding Business Intelligence, or implementing a legacy SOA strategy to progressively provide core business services to new internal facilities and external partners. Transoft, with its rich portfolio of modernization products and migration tools together with 20+ years’ experience, is well placed to take advantage of this new, darker dawn.

Guest Blog Post by Steve Heffner of Pennington Systems


Following my last post, Steve Heffner of Pennington Systems emailed me a very comprehensive piece about his approach to legacy language conversions. He makes some interesting points, so I have decided to post Steve's comments in their entirety.

Steve Heffner's comments in response to my post "Get Out Your Crystal Balls":

Paul,

I agree that fading languages represent a major obstacle for CIOs in moving to more modern IT practices, as well as presenting staffing and maintenance problems.  I also agree that RPG is on that list of languages, along with Pascal, Fortran, and, to a lesser degree, PL/I.
COBOL is a different matter.  There is such a huge quantity of COBOL out there that sheer inertia will keep the COBOL ship afloat for a long time.  In addition, recent COBOL standards enhancements have made it a more modernizable language, and COBOL vendors such as Micro Focus are adapting it even further to the brave new world of SOA, Web 2.0, and cloud computing.  (Full disclosure: My company is a member of Micro Focus's Migration & Transformation Consortium [MTC], and we are doing some work with Micro Focus.)


Back to fading languages.  I understand your "no JOBOL" stance, but I also agree with your pragmatic embrace of language translation as a "bridge" strategy. There are a number of reasons for such a bridge:

  • To get code out of proprietary languages (or dialects of standardized languages), in order to get it off fading platforms for which maintenance and repairs are becoming increasingly expensive or even impossible.
  • To get code out of a fading language (proprietary or not) for which it is getting increasingly difficult to find programmers -- either experienced ones or new ones willing to enter what they see as a career dead-end.
  • To consolidate bodies of code in different languages, perhaps acquired as a result of mergers or acquisitions, and get them into a common language to minimize training and maximize skill sharing, common run-time libraries, etc.

Regarding the "JOBOL" issue, it's true that a straightforward, literal translation of one language to another ends up with results that are neither one nor the other, and likely worse than either.  This is exacerbated by the fact that some language translators produce output that can be very difficult to read or maintain. In addition, application modernization involves much more than just code translation.  The code must be assessed thoroughly before any porting activity begins.  And the port is very likely to involve issues that are not language-specific, such as code quality, a move to a different hardware platform and/or operating system, and repurposing the code to adapt it to a new environment such as SOA.


What does this mean?  It means what you need is NOT a simple, dedicated translation tool, but instead what I call a software engineering "meta-tool", which can be tailored and "tuned" to get the best possible result, including accommodating the vagaries of a particular body of code.  Such a meta-tool must have a powerful rules language you can use to tell it what you want it to automate and how, and that rules language must provide significant leverage, so that a substantial job can be automated with minimal effort.  The rules language must also be usable by any competent senior systems software engineer, not just by mad geniuses hidden in the vendor's back room.


I happen to be the author and vendor of such a meta-tool, named XTRAN.  It is an expert system for the symbolic manipulation of what I call Unambiguously Parsable Languages (UPLs) -- typically computer languages.  XTRAN's powerful rules language allows the automation of virtually any software engineering task involving existing code, with the leverage mentioned above.  XTRAN accommodates a broad range of languages, including many assemblers, 3GLs, 4GLs, XML, and HTML, as well as proprietary, scripting, data base, and Domain Specific languages.


We classify software engineering tasks that involve existing code into three broad categories:

  • Analysis -- pulling information out of the code, such as code quality measures, platform dependencies, etc.  This is especially important when assessing code in anticipation of a change, be it re-engineering or translation or both.
  • Re-engineering -- applying systematic changes to a body of code. Examples include automated code improvement, a hardware and/or operating system shift, a change to a different 3rd-party software vendor's API, or adapting the code to an SOA.
  • Translation -- changing the code to a different language with the same functionality, for any of the reasons given above.

(Note that "existing code" includes code that has just been created.)
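As a rough, non-XTRAN illustration of the "Analysis" category, a minimal stand-alone scan for platform-dependent calls might look like the following sketch. The .cbl file extension and the call-name patterns are assumptions chosen purely for illustration.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.regex.Pattern;
    import java.util.stream.Stream;

    // Not XTRAN: a deliberately simple, stand-alone sketch of the "Analysis"
    // category -- walk a source tree and flag lines that reference
    // platform-dependent calls. The .cbl extension and the call-name patterns
    // below are assumptions chosen purely for illustration.
    public class PlatformDependencyScan {

        private static final Pattern PLATFORM_CALLS =
                Pattern.compile("\\b(EXEC CICS|EXEC SQL|QCMDEXC|SYS\\$\\w+)");

        public static void main(String[] args) throws IOException {
            Path root = Paths.get(args.length > 0 ? args[0] : ".");
            try (Stream<Path> files = Files.walk(root)) {
                files.filter(p -> p.toString().endsWith(".cbl"))
                     .forEach(PlatformDependencyScan::scanFile);
            }
        }

        private static void scanFile(Path file) {
            try {
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                    if (PLATFORM_CALLS.matcher(lines.get(i)).find()) {
                        System.out.printf("%s:%d: %s%n", file, i + 1, lines.get(i).trim());
                    }
                }
            } catch (IOException e) {
                System.err.println("Could not read " + file + ": " + e.getMessage());
            }
        }
    }

A real assessment would of course parse the code rather than pattern-match it; providing that parsing, across many languages, is exactly the leverage a meta-tool's rules language is meant to supply.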


Of course, some software engineering tasks involve combinations of these three categories.  An application modernization effort is virtually guaranteed to involve all three to varying degrees.
A meta-tool like XTRAN can be used to automate all of the aspects of application modernization with a single tool, single training, and single skill set:

  • The original system assessment, which (especially for large bodies of code) must be automated as much as possible in order to reduce it to a manageable task and allow as much analysis as possible.
  • The re-engineering needed to repurpose the code, be it a platform shift or adaptation to SOA or Web 2.0.
  • The re-engineering needed to improve the code's quality, either before or after any porting activities, to minimize ongoing maintenance effort.

  • The translation that may be needed to move the code to a modern, standard, and portable language.

You can try to cobble together a tool set from different vendors' "point solutions" for analysis, modernization, and translation.  But you will inevitably encounter both gaps in tool coverage and mismatches among tools -- the output of one doesn't fit the input requirements of another.  With a meta-tool's rules language, you can create your own tool set, with no gaps and no mismatches.  And the leverage provided by a good meta-tool means that you can do this with reasonable effort.  Not only that, but the resulting tool set is yours; if you're in the software services business, this can represent a substantial competitive advantage.


One reason automation of the software engineering work is so important is that it minimizes the introduction of bugs caused by human error during the activity.  In other words, the more you automate, the less debugging you'll have to do.  For large projects, this can even mean the difference between success and abject failure.


Even after the modernization effort, a meta-tool such as XTRAN can be used to automate ongoing software engineering efforts such as monitoring (and perhaps remediating) adherence to quality standards or coding conventions, as well as providing occasionally needed ad hoc analysis and re-engineering automation.


Even if a decision is made to scrap an existing application in favor of off-the-shelf software, a thorough assessment of the existing system must be done to ensure that you know what you must replace.  Many functional specifications are woefully stale, if they even exist at all.  It is critical to understand what your systems do before trying to modernize or replace them.

The current economic distress will motivate a lot of CIOs to try to do more with what they've got, to minimize ongoing costs, to adapt their current systems to new needs, and (most importantly) to improve their IT systems to provide the business agility mandated by the speed of market change in the Internet age.  All of these goals call for automating the software engineering workload as much as possible.


For more information about my company, XTRAN, and me, please visit www.pennington.com.

 

Get out your crystal balls

 

New Year Predictions

Well, the festive season is over, and with many of us making and breaking resolutions for the New Year, I thought I would make January “Modernization Prediction Month” at Neomotus. I have reached out to my friends and colleagues in the modernization and migration world and asked them to send me their predictions for market trends and modernization strategies for the coming 12 months. As predictions come in, I will post them here.

2009 is code conversion time
To get the ball rolling, here is one of my modernization strategy predictions for the next 12 months. I think 2009 is going to be the year that COBOL and RPG application owners get serious about converting their application code base to either Java or C#.

What, this can’t be – Paul Holland telling people to convert COBOL to Java!  Anyone who has worked with me will know that I have spent the last 10 years advocating strongly against such a strategy, but the times they are a-changing.

For the last couple of years I have been involved with several DEC VMS migration projects. In many cases the applications were written in PASCAL or FORTRAN. Naturally, no one wanted to preserve the PASCAL or FORTRAN source code during the modernization project. In fact, the goals were not only to migrate the application functionality but also to convert the code base to a more modern language. No argument from me. The lack of skilled PASCAL and FORTRAN resources meant that the systems were becoming impossible to maintain, the application cost of ownership was increasing as scarce resources became more expensive and difficult to recruit, and the lack of standards compliance and development flexibility became a risk to the business.
I believe that we are now seeing the same factors starting to apply to RPG and COBOL applications.
 
In my mind there is no doubt that these issues would have an even higher profile if the offshore outsourcing industry hadn’t taken off during the Y2K boom. Indian outsourcing firms built up large numbers of young COBOL and RPG programmers at that time, and it is largely this resource pool that has postponed a more rapid erosion of these legacy programming skills.

I am not saying that the lack of COBOL skills has reached a crisis level, although I certainly do think that there is an acute lack of RPG skills. What I am saying is that CIOs must anticipate that RPG and COBOL programming resources are going to be increasingly difficult to find, and good ones are going to be expensive. How long is it going to be before COBOL and RPG have the same issues as PASCAL and FORTRAN? In 2007 Micro Focus, the open systems COBOL giant, surveyed its customers and found that 75% of CIOs expected to need more COBOL resources over the next 5 years, and 73% were already having a hard time finding trained COBOL professionals. As for RPG, the availability of skilled resources is even more of an issue, and the conversion exodus may have already started. This is even being recognized by IBM in some of its recent announcements.

But conversion doesn’t work

Reading from the old Paul Holland hymn book – "if you try to convert COBOL to Java you get JOBOL". It’s true that I have yet to see a COBOL conversion tool that produces object-oriented, non-verbose, well-structured Java or C# code. However, if you accept the premise that you need to get away from where you are, automated conversion into a code base that can be maintained by Java and C# programmers, even if they complain about its structure, is a far less expensive and less risky route than rewriting. Once you have made the change, you can chip away at the structure during the ongoing maintenance of the application. However, for automated conversion to be viable, it must ensure that the application’s functionality is at least preserved and, where possible, improved.
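To make the "JOBOL" point concrete, here is an invented illustration, not the output of any particular conversion tool: the flat, procedural Java that a literal conversion typically emits for a simple COBOL-style discount calculation, followed by what a Java programmer would write by hand. The names and the discount rule are hypothetical.

    import java.math.BigDecimal;

    // Invented illustration of the "JOBOL" effect: a literal conversion turns
    // working storage into mutable static fields and a COBOL paragraph into a
    // void method with side effects, while the idiomatic version is a pure
    // function. The field names and the 5%-over-1000 rule are hypothetical.
    public class JobolExample {

        // --- literal, "JOBOL" style output ---
        static BigDecimal WS_ORDER_TOTAL = BigDecimal.ZERO;
        static BigDecimal WS_DISCOUNT = BigDecimal.ZERO;

        static void CALC_DISCOUNT_PARA() {
            if (WS_ORDER_TOTAL.compareTo(new BigDecimal("1000")) > 0) {
                WS_DISCOUNT = WS_ORDER_TOTAL.multiply(new BigDecimal("0.05"));
            } else {
                WS_DISCOUNT = BigDecimal.ZERO;
            }
        }

        // --- what a Java programmer would write by hand ---
        static BigDecimal discountFor(BigDecimal orderTotal) {
            return orderTotal.compareTo(new BigDecimal("1000")) > 0
                    ? orderTotal.multiply(new BigDecimal("0.05"))
                    : BigDecimal.ZERO;
        }
    }

Both versions produce the same result, which is really the point: the converted code can be maintained, and gradually refactored toward the second form, by any Java programmer.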

Furthermore, I think the market is currently getting the conversion tools it deserves. I have no doubt that with more market demand, conversion tool vendors will produce better products. Certainly there are more players in the market. Look at the work my good friend Thomas Sykora is doing with his ML-iMPACT RPG conversion product, or Steve Heffner’s XTRAN converter. Veryant has already developed a COBOL to Java converter; it just chooses to market it as a COBOL compiler. How long before it's positioned as a viable COBOL conversion solution? Vendors such as Software Mining and Jazillian have been working on converters for years, and TSRI has a history of successful code conversion projects. Who knows, even Micro Focus might be working on the ultimate COBOL to Java converter. They certainly have all the pieces they need to produce such a product. After all, wouldn’t that be a bit like Exxon developing alternative energy sources?

Legacy Platform as a Service - LPaaS

The impact of Web 2.0 development on legacy modernization strategies

Computing on the Edge – creating modernization pull

Cloud computing and Software as a Service (SaaS) are about to change the way we look at legacy assets and modernization initiatives. New IT development will be mostly delivered at the Edge of an organization, focused on providing new functionality to user communities and trading partners using Web 2.0 capabilities, delivered through cloud computing as a software service.

Platform as a Service (PaaS) solutions provide development tools and frameworks for rapid cloud computing application development. PaaS solutions such as Force.com, Google App Engine and likely similar offerings from Facebook.com, Amazon.com and others provide a rich portfolio of application functionality in the form of services, together with a cloud computing platform to host any applications developed using the PaaS framework. New application development then becomes largely a task of filling in the blanks and creating the specific logic required for the application being built.
 
Oh yes, it’s going to be a pretty exciting place out on the Edge. However, these new developments will always be limited in their value if they cannot interact with the Core transaction processing applications in the organization. At the simplest level, this will be about sharing data and the need for managed data integration solutions. At a more sophisticated level, new cloud computing applications could be developed using a combination of PaaS solutions and internally created Legacy Platform as a Service (LPaaS) solutions.

LPaaS provides a new view of legacy application modernization. The goal is not to radically re-architect existing Core applications, but to allow demand from the Edge to drive the creation of a framework of Core computing services made available through the cloud to support Edge application development.





What needs to be in an LPaaS?

The important point is that the LPaaS provides a platform of Core computing functionality through the cloud on an as-needed basis. What makes up the functionality set should be driven largely by the requirements of the Edge; the LPaaS should not be a monolithic development project in its own right. Having said this, it is a reasonable assumption that any organization’s LPaaS should contain some basic service functions (sketched in code after the list below). These might include:

  • Authentication services
  • Security and governance services
  • Transaction processing and cross-application referential integrity services
  • Critical business logic services
  • Data mapping services
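As a rough sketch only – the interface and method names below are invented, not a standard – such a starter catalogue could be expressed as a cloud-facing contract whose implementations delegate to existing Core applications:

    import java.math.BigDecimal;
    import java.util.List;

    // Invented sketch of an initial LPaaS service catalogue. Each method is a
    // cloud-facing contract; the implementation behind it delegates to existing
    // Core applications rather than re-implementing the logic.
    public interface LegacyPlatformServices {

        // Authentication, security and governance services
        boolean authenticate(String userId, String credential);
        boolean isAuthorized(String userId, String serviceName);

        // Transaction processing with cross-application referential integrity
        String postOrder(String customerId, List<String> orderLines);

        // Critical business logic exposed as-is from the Core
        BigDecimal priceQuote(String productCode, int quantity);

        // Data mapping between Core record layouts and Edge-friendly formats
        String customerAsJson(String customerId);
    }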

Unlike a PaaS, which consists of a set of application services specifically created for cloud application development, an organization’s LPaaS will be a conduit to non-cloud functionality that already exists in the company’s Core applications. Therefore the LPaaS requires an enabling infrastructure that allows LPaaS services to be made available through the cloud to new applications being developed on the Edge.

Service Oriented Architecture (SOA) is the logical mechanism to facilitate the creation of a company’s LPaaS.

Using SOA and WOA enablement to create LPaaS solutions

There are two ways to look at SOA. The first and more common way of considering SOA is as an application development standard, the pragmatic answer to object-oriented development’s weaknesses. This is of course correct, but it represents only one aspect of the true value of SOA. The second, and in my view much more important, aspect of SOA is that it facilitates the integration of functionality at a granular service level, through the standards of Web Services and cloud computing. The key part of this second point is that SOA, as a web-driven integration standard, only cares about how the interfaces are exposed to be consumed, not how they are actually delivered at the service level.

This means that SOA, and the easier-to-implement Web Oriented Architecture (WOA), have enormous implications for modernizing, or at least making available, functionality buried in Core legacy applications. So long as the functionality can be exposed as a Web Service or a WOA service, it doesn’t matter how it is actually performed at the service level.

To enable an LPaaS, an integration framework must be created that allows Core data and legacy application functionality to be rendered to new Edge solutions as reusable services. A services approach provides the required abstraction from the legacy technology, and the range of granularity that new application development will require, assuming a PaaS-like architecture.
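As a minimal sketch of what "rendering" one piece of Core functionality as a WOA-style service might look like, here is a JAX-RS resource – one possible enabling framework among many – in which the legacy adapter is a placeholder for whatever wrapping or screen-scraping product actually reaches the Core application:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Minimal sketch of exposing one Core function as a WOA-style (RESTful)
    // service. LegacyOrderAdapter is a placeholder for whatever legacy
    // integration product actually reaches the Core application.
    @Path("/lpaas/orders")
    public class OrderStatusService {

        private final LegacyOrderAdapter adapter = new LegacyOrderAdapter();

        @GET
        @Path("/{orderId}/status")
        @Produces(MediaType.APPLICATION_JSON)
        public String orderStatus(@PathParam("orderId") String orderId) {
            // The Edge caller sees a simple HTTP/JSON service; how the status
            // is actually retrieved from the Core is hidden behind the adapter.
            String status = adapter.fetchOrderStatus(orderId);
            return "{\"orderId\":\"" + orderId + "\",\"status\":\"" + status + "\"}";
        }
    }

    // Stub adapter for the sketch; a real one would call the chosen legacy
    // integration product (program wrapper, screen scraper, message bridge).
    class LegacyOrderAdapter {
        String fetchOrderStatus(String orderId) {
            return "SHIPPED";
        }
    }

The point of the sketch is the shape, not the framework: the Edge consumes a plain service interface, and the enabling product behind the adapter can change without the Edge noticing.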

Many Core applications, particularly Commercial Off the Shelf (COTS) packages, will already provide some level of SOA enablement, and there should be little difficulty in making services from these applications part of the LPaaS. Older, frequently in-house developed, legacy applications, such as those running on the mainframe or on departmental servers, will need some form of SOA enablement in order to make their services available as part of the LPaaS.

There are several ‘legacy to service’ enabling solutions available on the market, ranging from sophisticated screen-scraping techniques to wrapping legacy application programs so that they can be reached through SOA adapters. Many organizations have already employed these products to enable legacy application integration.

When identifying which service-enabling solution is right for your needs, it is important to consider the nature of the LPaaS your organization is likely to require, both in the short term and over time. The following key factors should be considered in conjunction with obvious issues such as the availability of specific service integration adapters for your legacy technology stack:

  • Range of service functions
    • Data, Business Logic, Composite services
    • Stateless transactions
    • Service granularity required
  • Invasive versus non-invasive adapters
  • SOA versus WOA protocols

The LPaaS Strategy

So, what are the steps to creating an LPaaS for your organization?

First, build the team. This could be called the LPaaS organization, or LPaaSO! The LPaaSO should have the necessary technical and business analyst resources to build the LPaaS and manage its further development. This is going to be an eclectic group, consisting of members representing Edge computing initiatives and members representing the Core legacy applications.

  • Recognize the initial ‘pull’ for LPaaS services from existing Edge initiatives
  • Create the enabling SOA infrastructure. Recognize where applications and data can already be rendered as services. Identify solutions to enable other Core legacy applications and data to be rendered as services.
  • Create LPaaS standards, incorporating mechanisms for presenting complex composite services made up of multiple legacy functions (see the sketch after this list).
  • Integrate the LPaaS team into Edge computing initiatives, with a mandate to respond to pull from these projects.
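As a hedged sketch of what such a composite-service standard might produce – every name here is invented – one Edge-facing call can aggregate several individually wrapped legacy functions into a single response:

    import java.math.BigDecimal;

    // Invented sketch of a composite LPaaS service: one Edge-facing call that
    // aggregates several underlying legacy functions (credit check, pricing,
    // stock availability) into a single response object. All names are
    // hypothetical; each interface stands in for an individually wrapped
    // legacy function.
    public class QuoteCompositeService {

        public interface CreditService    { boolean isWithinLimit(String customerId); }
        public interface PricingService   { BigDecimal priceFor(String productCode, int quantity); }
        public interface InventoryService { boolean available(String productCode, int quantity); }

        private final CreditService credit;
        private final PricingService pricing;
        private final InventoryService inventory;

        public QuoteCompositeService(CreditService credit, PricingService pricing,
                                     InventoryService inventory) {
            this.credit = credit;
            this.pricing = pricing;
            this.inventory = inventory;
        }

        // One round trip for the Edge caller instead of three separate legacy calls.
        public Quote quote(String customerId, String productCode, int quantity) {
            boolean creditOk = credit.isWithinLimit(customerId);
            BigDecimal price = pricing.priceFor(productCode, quantity);
            boolean inStock = inventory.available(productCode, quantity);
            return new Quote(creditOk, price, inStock);
        }

        // Simple result object returned to the Edge caller.
        public static class Quote {
            public final boolean creditApproved;
            public final BigDecimal price;
            public final boolean inStock;

            public Quote(boolean creditApproved, BigDecimal price, boolean inStock) {
                this.creditApproved = creditApproved;
                this.price = price;
                this.inStock = inStock;
            }
        }
    }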

Developing an LPaaS strategy will not necessarily diminish the need for traditional large-scale modernization projects, such as cost-saving rehosting initiatives or legacy language conversion strategies, but an LPaaS strategy will complement these other modernization projects and ensure adequate focus is given to matching modernization steps to new Edge initiatives.
