The article discusses:
- The challenge of End of Life software
- The related risks in data management
- The need to move to cloud
- Steps to the cloud-first strategy
Do you have End of Life (EOL) software in use at your organization?
Very few organizations are free of at least some technical debt. It is often viewed from an infrastructure point of view, because hardware has a clear date of acquisition. But software ages in much the same way, and some of the impacts of software ageing are larger and pose greater risk to an organization than hardware does.
Custom applications are software too. We all know about the planned EOL dates published by large vendors such as Microsoft, but don’t forget your internal applications. They usually carry even greater risk, through staff attrition, lack of documentation, and lost tribal knowledge. Smaller software companies often go bankrupt. What do you do then? What exactly is EOL for a custom app?
I have seen this demonstrated at many of the clients I have worked with over the years, and in this post I will discuss a Financial Services company where a number of these issues came together.
The following characteristics existed in this financial services company’s application environment:
- Older software
- Old hardware
- No documentation
- Unencrypted data at rest and in flight
- Lack of communication between Infrastructure and Application groups
- Lack of direction from IT leadership
- Lack of adequate testing
- Fear of failure
What existed in their landscape was a brittle application set with poor application architecture, and development staff who knew that no one could validate what they said. How long would it take to code something? Two days or two weeks? The answer depended on the developer’s mood at the time. Would it be tested fully? Development practices were old and stale, with multiple repositories, manual code promotion, and no documentation of any kind.
And what about the data? Were there risks in how it was managed? Yes, there certainly were. So let’s start there.
The risks in data management
As technology permeates every aspect of our lives, an ever-increasing stream of information about individuals is generated, gathered, and tracked. This Personally Identifiable Information (PII), unique data that helps identify an individual, needs to be handled and stored with a focus on data privacy.
It is in an organization’s best interest to keep the PII it collects and maintains fully secure. Government regulations such as the European GDPR mandate that organizations comply with safe methods of data collection and storage.
When PII is stored on EOL systems such as databases, there is a high potential for exposure. If the data was not encrypted at rest, whether because of poor application design or legacy constraints that required data such as Social Security Numbers to be stored as plain text, then you are only at the beginning of the problem statement. Has that data been archived over time into many, many copies? Are there test environments that hold multiple copies of the same PII data? If data sprawl is an issue because departments are too afraid to delete anything, your risk may be 100 times bigger than you ever expected. This was the case at this particular organization.
Your exposure to that risk of attack needs to be quantified. A common rule of thumb is that each complete copy of an individual’s PII represents roughly $200 of exposure to the organization holding it. Doesn’t sound like much? Do you have a million customer records? That’s $200 million in exposure. And how many database servers hold an active copy? Production for sure, then BI reporting, billing, CRM? What about testing copies? Whichever way you add up the numbers, you are exposed to significant risk.
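The arithmetic above can be sketched in a few lines. The environment names and record counts below are hypothetical, and $200 per record is just the rule of thumb quoted above, not an audited figure:

```python
# Hypothetical sketch: quantify PII exposure using the $200-per-record
# rule of thumb, multiplied across every environment holding a full copy.

COST_PER_RECORD = 200  # USD per complete PII record (rule of thumb)

# Illustrative environments, each assumed to hold a full copy of the data
copies = {
    "production": 1_000_000,
    "bi_reporting": 1_000_000,
    "billing": 1_000_000,
    "crm": 1_000_000,
    "test_env_1": 1_000_000,
    "test_env_2": 1_000_000,
}


def total_exposure(copies, cost_per_record=COST_PER_RECORD):
    """Sum dollar exposure across all environments holding PII copies."""
    return sum(records * cost_per_record for records in copies.values())


if __name__ == "__main__":
    print(f"Exposure across {len(copies)} copies: ${total_exposure(copies):,}")
```

With six full copies of a million records, the headline $200 million figure becomes $1.2 billion, which is why counting the copies matters as much as counting the records.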
The solution: move to cloud?
The solution for the financial services company was to move to the cloud, and to transform along the way, via a combination of lift-and-shift to IaaS and transformation to PaaS-based solutions. Testing practices needed to become more comprehensive. Development processes would need to change. All new software would be designed to be cloud native.
But did they have to move to the cloud? Were there options to remain on-premises? Yes, there were. To solve the data problems, it was possible to implement Transparent Data Encryption at the database level to remove the security exposures. But that would not have solved the EOL problems; in fact, it would have prolonged them.
Performance could have been improved on-premises with new hardware, but buying newer hardware would lock the company into another extended stay in its own data center. Having their own data center, which was really just a sophisticated computer room run by a large IT staff, meant more of the same costs would continue. Much of the work could be automated in the cloud, reducing staffing overhead, but new hardware would have reduced the ability to adopt those new practices.
New software could be written to take advantage of new hardware performance, but without new development practices, and especially on-demand testing capability, old habits would continue.
A move to the cloud was essential!
Making the move to cloud
The financial institution understood that the changes would need to be managed as both a technological and a cultural transformation. People, process, and tools would all be affected. A program of work was defined, suitable owners were assigned to all work streams, and leadership gave a cloud-first mandate.
The following steps were taken:
- Update enterprise architecture guidance to adopt cloud-first best practices
- Update security guidance to encrypt PII data in flight and at rest
- Develop cloud operating model and roadmap for adoption
- Update data governance policies to encompass cloud
- Develop a services framework
- Ensure application architectures reflect all of this guidance
- Train and implement DevOps best practices
- Implement SaaS based CI/CD capabilities
- Develop and maintain an inventory of all technology assets
- Create and adhere to a board-approved “sunset” policy for all identified assets
- Devise a roadmap for replacing technology approaching obsolescence
- Track availability of updates and vendors’ end-of-life plans
- Develop procedures for secure removal of data from hardware being returned to vendors
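Several of the steps above, such as maintaining an inventory of technology assets, tracking vendor end-of-life dates, and enforcing a sunset policy, lend themselves to simple automation. The following is a minimal sketch; the asset names, dates, and one-year warning window are invented for illustration, not taken from any real sunset policy:

```python
# Hypothetical sketch: flag assets that are past, or approaching, their
# vendor end-of-life date, in support of a board-approved sunset policy.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Asset:
    name: str
    eol_date: date  # vendor-published end-of-life date


# Illustrative inventory entries, not real vendor EOL dates
inventory = [
    Asset("legacy-billing-db", date(2020, 1, 14)),
    Asset("crm-app-server", date(2030, 10, 1)),
]


def sunset_report(assets, today, warn_window=timedelta(days=365)):
    """Classify each asset as past EOL, approaching EOL, or supported."""
    report = {"past_eol": [], "approaching": [], "supported": []}
    for asset in assets:
        if asset.eol_date <= today:
            report["past_eol"].append(asset.name)
        elif asset.eol_date <= today + warn_window:
            report["approaching"].append(asset.name)
        else:
            report["supported"].append(asset.name)
    return report


if __name__ == "__main__":
    print(sunset_report(inventory, date.today()))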
The net result was an organization that fit the template of a cloud-centric approach and enjoyed all of its benefits. Agility was increased in software development and deployment. Disaster recovery and business continuity were enhanced. Testing was improved through automation and the ability to run performance tests on equivalent compute capacity. Data security was improved, and risk was therefore reduced.
The cultural impacts were felt in greater communication between stakeholders across IT and the business, and a sense of ‘team’ and common goals was amplified. Financially, many costs were reduced while visibility into the consumption of resources increased.
Obsolescence was the driver that brought about change in this organization, but it could just as easily have been data security or hardware refresh cycles. Cloud-based services are among the most effective tools for combating all of these kinds of problems.