Legacy Platform as a Service - LPaaS

The impact of Web 2.0 development on legacy modernization strategies

Computing on the Edge – creating modernization pull

Cloud computing and Software as a Service (SaaS) are about to change the way we look at legacy assets and modernization initiatives. Most new IT development will be delivered at the Edge of the organization, focused on providing new functionality to user communities and trading partners using Web 2.0 capabilities, delivered through cloud computing as a software service.

Platform as a Service (PaaS) solutions provide development tools and frameworks for rapid cloud computing application development. PaaS solutions such as Force.com, Google App Engine and likely similar offerings from Facebook.com, Amazon.com and others provide a rich portfolio of application functionality in the form of services, together with a cloud computing platform to host any applications developed using the PaaS framework. New application development becomes largely a task of filling in the blanks and creating the specific logic required for the application being developed.
 
Oh yes, it's going to be a pretty exciting place out on the Edge. However, these new developments will always be limited in their value if they cannot interact with Core transaction processing applications in the organization. At the simplest level this is about sharing data and the need for managed data integration solutions. At a more sophisticated level, new cloud computing applications could be developed using a combination of PaaS solutions and internally created Legacy Platform as a Service (LPaaS) solutions.

LPaaS provides a new view of legacy application modernization. The goal is not to radically re-architect existing Core applications, but to allow demand from the Edge to drive the creation of a framework of Core computing services made available through the cloud to support Edge application development.

What needs to be in an LPaaS?

The important point is that the LPaaS provides a platform of Core computing functionality through the cloud on an as-needed basis. What makes up the functionality set should be driven largely by the requirements of the Edge. The LPaaS should not be a monolithic development project in its own right. Having said this, it is a reasonable assumption that any organization's LPaaS will contain some basic service functions. These might include:

  • Authentication services
  • Security and governance services
  • Transaction processing and cross-application referential integrity services
  • Critical business logic services
  • Data mapping services

Unlike a PaaS, which consists of a set of application services specifically created for cloud application development, an organization's LPaaS will be a conduit to non-cloud functionality that already exists in the company's Core applications. The LPaaS therefore requires an enabling infrastructure that allows its services to be made available through the cloud to new applications being developed on the Edge.
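
To make the "conduit" idea a little more concrete, here is a minimal sketch, in Python, of how an LPaaS team might catalogue the Core functions it exposes. Every name in it (the services, the backing systems, the adapter styles) is purely illustrative, not a prescription; the point is only that Edge developers see a uniform list of services rather than the systems behind them.

    # A hypothetical LPaaS service catalogue: each cloud-facing service name
    # maps to the Core system that really provides it and the mechanism used
    # to expose it. All names are illustrative.
    LPAAS_CATALOGUE = {
        "GetCustomer": {
            "backing_system": "mainframe transaction system",
            "exposure": "program wrapper",   # invasive adapter
            "granularity": "fine",
        },
        "UpdateSalesOrder": {
            "backing_system": "in-house order application",
            "exposure": "screen adapter",    # non-invasive adapter
            "granularity": "coarse",
        },
    }

    def describe(service_name: str) -> str:
        """Return the one-line description an Edge developer might see."""
        entry = LPAAS_CATALOGUE[service_name]
        return (f"{service_name}: {entry['granularity']}-grained, "
                f"via {entry['exposure']} on {entry['backing_system']}")

    if __name__ == "__main__":
        for name in LPAAS_CATALOGUE:
            print(describe(name))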

Service Oriented Architecture (SOA) is the logical mechanism for creating a company's LPaaS.

Using SOA and WOA enablement to create LPaaS solutions

There are two ways to look at SOA. The first and more common way is to see it as an application development standard, the pragmatic answer to object-oriented development's weaknesses. This is of course correct, but it represents only one aspect of the true value of SOA. The second, and in my view much more important, aspect of SOA is that it facilitates the integration of functionality at a granular service level, through the standards of Web Services and cloud computing. The most important part of this second point is that SOA, as a web-driven integration standard, only cares about how the interfaces are exposed for consumption, not how the functionality is actually delivered at the service level.

This means that SOA, and the easier-to-implement Web Oriented Architecture (WOA), have enormous implications for modernizing, or at least making available, functionality buried in Core legacy applications. So long as the functionality can be exposed as a Web Service or WOA service, it doesn't matter how that functionality is actually performed at the service level.
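
As a rough illustration of that point, the following sketch (Python standard library only, all names hypothetical) exposes a legacy customer lookup as a simple WOA-style HTTP/JSON service. The call_legacy_customer_lookup function is just a stand-in for whatever mechanism, program wrapper, screen adapter or message queue, actually reaches the Core application; the consumer only ever sees the HTTP interface.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def call_legacy_customer_lookup(customer_id: str) -> dict:
        # Stand-in for the real adapter call into the Core application.
        return {"customerId": customer_id, "name": "Example Customer", "status": "ACTIVE"}

    class CustomerService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expected path: /customers/<id>
            parts = self.path.strip("/").split("/")
            if len(parts) == 2 and parts[0] == "customers":
                body = json.dumps(call_legacy_customer_lookup(parts[1])).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Serve the legacy-backed service on a local port for demonstration.
        HTTPServer(("localhost", 8080), CustomerService).serve_forever()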

To enable an LPaaS, an integration framework must be created that allows Core data and legacy application functionality to be rendered to new Edge solutions as reusable services. A services approach provides the necessary abstraction from the legacy technology, along with the range of granularity that new application development built on a PaaS-like architecture will require.

Many Core applications, particularly Commercial Off the Shelf (COTS) packages, will already provide some level of SOA enablement, and there should be little difficulty in making services from these applications part of the LPaaS. Older, frequently in-house developed legacy applications, such as those running on the mainframe or on departmental servers, will need some form of SOA enablement before their services can be made available as part of the LPaaS.

There are several 'legacy to service' enabling solutions on the market, ranging from those that expose services through sophisticated screen-scraping techniques to those that wrap legacy application programs so they can be reached through SOA adapters. Many organizations have already employed these products to enable legacy application integration.
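
To show the shape of these two approaches without pointing at any particular product, here is a hypothetical Python sketch of a screen-based adapter and a program-wrapping adapter behind one common interface. The LPaaS layer, and therefore the Edge application, is insulated from which style is in use for a given Core function.

    from abc import ABC, abstractmethod

    class LegacyServiceAdapter(ABC):
        """Common interface the LPaaS layer sees, whatever the adapter style."""

        @abstractmethod
        def invoke(self, operation: str, params: dict) -> dict:
            ...

    class ProgramWrapperAdapter(LegacyServiceAdapter):
        """Invasive style: call the legacy program (or a generated wrapper) directly."""

        def invoke(self, operation: str, params: dict) -> dict:
            # Placeholder for a real call into the Core application,
            # e.g. through a transaction gateway or generated stub.
            return {"operation": operation, "params": params, "via": "program wrapper"}

    class ScreenAdapter(LegacyServiceAdapter):
        """Non-invasive style: drive the existing terminal screens and read the results."""

        def invoke(self, operation: str, params: dict) -> dict:
            # Placeholder for navigating screens, filling fields and scraping output.
            return {"operation": operation, "params": params, "via": "screen adapter"}

    def call_service(adapter: LegacyServiceAdapter, operation: str, **params) -> dict:
        # The caller neither knows nor cares how the Core function is reached.
        return adapter.invoke(operation, params)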

When identifying which service-enabling solution is right for your needs, it is important to consider the nature of the LPaaS your organization is likely to require, both in the short term and over time. The following key factors should be considered alongside obvious issues such as the availability of specific service integration adapters for your legacy technology stack:

  • Range of service functions
    • Data, Business Logic, Composite services
    • Stateless transactions
    • Service granularity required
  • Invasive versus non-invasive adapters
  • SOA versus WOA protocols

The LPaaS Strategy

So, what are the steps to creating an LPaaS for your organization?

First, build the team. This could be called the LPaaS organization, or LPaaSO! The LPaaSO should have the technical and business analyst resources needed to build the LPaaS and manage its further development. It will be an eclectic group, with members representing Edge computing initiatives and members representing the Core legacy applications.

  • Recognize the initial ‘pull’ for LPaaS services from existing Edge initiatives
  • Create the enabling SOA infrastructure. Recognize where applications and data can already be rendered as services. Identify solutions to enable other Core legacy applications and data to be rendered as services.
  • Create LPaaS standards, incorporating mechanisms for presenting complex composite services made up of multiple legacy functions (see the sketch after this list)
  • Integrate the LPaaS team into Edge computing initiatives, with a mandate to respond to pull from these projects
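
As a rough sketch of what such a composite service might look like (all function names here are hypothetical), the example below combines two fine-grained legacy services into one coarse-grained LPaaS operation that an Edge application can call in a single request.

    def get_customer(customer_id: str) -> dict:
        # Hypothetical fine-grained data service backed by the Core database.
        return {"customerId": customer_id, "creditLimit": 5000}

    def get_open_orders(customer_id: str) -> list:
        # Hypothetical fine-grained service backed by the Core order application.
        return [{"orderId": "A-100", "value": 1200}, {"orderId": "A-101", "value": 300}]

    def customer_credit_position(customer_id: str) -> dict:
        """Composite LPaaS service: several legacy functions behind one call."""
        customer = get_customer(customer_id)
        orders = get_open_orders(customer_id)
        open_order_value = sum(order["value"] for order in orders)
        return {
            "customerId": customer_id,
            "creditLimit": customer["creditLimit"],
            "openOrderValue": open_order_value,
            "creditAvailable": customer["creditLimit"] - open_order_value,
        }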

Developing an LPaaS strategy will not necessarily diminish the need for traditional large-scale modernization projects, such as cost-saving rehosting initiatives or legacy language conversions, but an LPaaS strategy will complement these other modernization projects and ensure adequate focus is given to matching modernization steps to new Edge initiatives.

 

Comments

  • 12/18/2008 9:22 PM Randy Brown wrote:
    LPaaS and cloud processing services are another step on from the distributed processing architecture brought about in the 1980s and beyond with smaller mainframes and then servers. Though I don't doubt their importance in further bringing forward applications to give the business more flexibility, there are still many steps, as hinted at in this article.

    SOA enablement of legacy applications is the paramount step in legacy modernization. This endeavor requires a great deal of work from both the IT professional and the business SME. Without fully understanding the application, laying out the road map for success and delivering the expected outcome in usable sub-systems, this is just a clog in the never-ending quest for legacy modernization.

    If SOA enablement can be achieved without much bloodshed, legacy systems can be modernized to a degree that enables the business to survive in its marketplace. Once this step is conquered, cloud processing will be the eventual destination for the business, but not until then.
  • 12/18/2008 11:18 PM Dave Roth wrote:
    Very interesting article
  • 12/20/2008 3:19 PM Geoff Baker wrote:
    Paul Holland's article demonstrates once again the importance of having an SOA strategy for your Core legacy applications; such a strategy is central to the introduction of an LPaaS.

    Some of the key issues to consider when applying SOA to legacy applications include:

    • How to ensure that an application that normally supports an internal community of a few hundred users can handle, with adequate performance, potentially many thousands of users when opened up to provide services to Web users, mobile users and LPaaS users. It is essential to choose middleware that not only delivers the legacy-based services but also provides scalability and performance.
    • Most core applications need to know about each user via some form of security logon, which also dictates what the user is or is not allowed to do within the system. Although the aim should be to create stateless services, it will be necessary to maintain some knowledge of who the user is (i.e. the context) and to pass this additional information automatically with the service request parameters to the relevant service. With this contextual information the service may disallow use for that user, for example an unauthorized user trying to access the Salary Service.
    • Paul Holland has already touched on the granularity of the service. The larger and more monolithic the service, the longer it is likely to take to process and the more instances of that service component may need to be running simultaneously to meet the needs of the enlarged user community. Fine-grained services may take as little as 20 ms to satisfy a user's request. But how granular a service should be will depend on considerations such as how often the service will be used, and how easy or difficult it is to encapsulate the business logic required to satisfy it.
    • The choice between invasive and non-invasive SOA adapters for your legacy application will depend on various factors. Do you have access to the application source code? If not, some form of virtual screen adapter technology will be required to gain access to the business logic of the application. Data services (such as GetCustomer or GetProduct) can usually be provided directly from the database, whereas the UpdateSalesOrder service will require access to the application's core business logic. Invasively created services will tend to perform much better than those using virtual screens, which will be important for larger user communities.

    SOA and WOA are the fundamental building blocks for successfully delivering core legacy business services, as an LPaaS, into new SaaS Edge applications.

    Finally, being part of a team called LPaaSO sounds fun!

    Geoff Baker
    gbaker@transoft.com
    www.transoft.com
  • 12/28/2008 9:12 AM Vaughan Merlyn wrote:
    Applying a 2.0 solution to a 1.0 problem is not intuitive, but is, I believe, spot on! Unfortunately, from my experience, many of the people charged with taking care of legacy systems (especially the older systems) are very much stuck in the 1.0 mindset they are maintaining, so taking a 2.0 approach is a huge stretch for them. We need to take some of our "edgier" architects and developers, who really understand the 2.0 world, and approaches such as SOA and SaaS, and get them deeply involved in legacy transformation. Perhaps the economic climate will spur this type of activity?

    Anyway, a good article and a great perspective, Paul!
  • 1/2/2009 5:15 PM Thomas Sykora wrote:
    This is a very interesting article and certainly a very good topic for a blog. Paul makes the most relevant points at a general level.
    The underlying ideas of "cloud computing" and SaaS, like SOA (and any of the other acronyms for attempts to standardize an m:n relationship and the many intermediate layers between hosts and users), are in fact very old and have appeared in the IT industry for decades under many different names. With each fashion wave they came back using different programming standards. This just proves that they are right and provide benefits. It is only that the underlying technologies have proven to be incompatible, so realizing these concepts has always taken so much effort that in many cases it has not been economical.
    The other extreme is a dedicated platform, database, programming language and development tools, fit for developing quickly and to the point without having to worry about keeping everything universal and accessible from other platforms. A good example is the proprietary AS/400 environment, which served its users' purposes very well and was in fact compatible with nothing else; for decades, by IBM's design, these applications were not meant to combine with anything. This was acceptable, since anything else would only have risked spoiling the simplicity and reliability.
    All the above-mentioned acronyms are good for IT managers, but programmers need no flip-chart concepts; they want tools and reliable standards, because their deliverables are not meeting minutes but functioning software.
    What do they get? Searching for standards in the IT industry sometimes feels like mountain-climbing on a heap of pudding, but for more than 10 years we have had Java, and with .NET (combined with Microsoft's market dominance) another standard well adopted by programmers.
    Anything else means the hard task of mapping different objects and methods between different platforms and middleware systems, and this has always been the obstacle. "Wrapping" is easy to say, but if a piece of software was never meant to be "wrapped" it is hard to "wrap" it the right way for all purposes without changing its logic.
    With regard to legacy systems such as those on the AS/400, I wonder whether the market will really jump from 1980s technologies to the ones now emerging and adopt SOA and WOA wrappers to combine legacy with software developed for a specific PaaS, or whether a step-by-step approach that first migrates legacy applications to modern languages would not be the more reasonable one.
    In summary: say SOA, SaaS and any other acronym three times a meeting; these concepts definitely make sense, but insist on real standards.
    In Paul Holland's blog I am looking forward to the response from the market. I think there will be a lot of experimenting. Will large companies choose Google for their "cloud"? Who has really tried it, and what results did they get? If this blog attracts people to share their experience, it will be a great initiative.