Refactoring takes time, money, and expertise. Keeping legacy applications running as-is takes 'merely' documentation and a sense of pragmatism.
I submit that the best approach for dealing with a legacy application is to Leave It The Fuck Alone (LITFA). Don't extend it. Don't try to understand its inner workings any more than you have to in order to meet business needs. Just don't. Leave it the fuck alone. You can keep it running a long, long time, generating revenue, while you plan out a replacement greenfield app for the time when you really, honestly need one.
If you have a problem with it, fix it and document the solution. It will happen again. Put a little extra time in and fix it a little better this time, updating the documentation. Over time you will come to understand its behavior. Under no circumstances make any major changes to it. Keep everything tiny and fix exactly one thing at a time.
Refactoring legacy apps is good in one and only one situation: you've got massive amounts of money riding on it, enough that it's worth doing everything right.
If anyone wants to offer me a book deal and reserve, I'd be happy to write it.
Though honestly I think the best path forward is to start a company offering fixed-price legacy app maintenance. Another thing I'd consider doing if I got the right offer.
Reminds me of a passage from "Working Effectively With Legacy Code" by Michael Feathers. The book is about dealing with those times when you absolutely must make changes to that old system.
This is from the end of Chapter 1:
> It's tempting to think that we can minimize software problems by avoiding them, but, unfortunately, it always catches up with us... In poorly structured code, the move from figuring things out to making changes feels like jumping off a cliff to avoid a tiger. You hesitate and hesitate. "Am I ready to do it? Well, I guess I have to."
This article seems to uncritically assume that microservices are a good idea. If you want to microserviceize a legacy app, go ahead, but I doubt that using containers is a good excuse to do so.
Also, it seems to me that containers would be perfect for code written in weird no-longer-supported languages since you could presumably containerize the compiler without having to fight with incompatibilities caused by newer distros.
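A minimal sketch of that idea, assuming the legacy toolchain is an old GCC on a pinned distro image (the image tag and package names here are illustrative, not a tested recipe):

```dockerfile
# Pin an old distro whose frozen package archive still carries the
# legacy compiler. (centos:5 is an illustrative tag; pick whatever
# matches the toolchain your code was built with.)
FROM centos:5

# Install the ancient compiler from that frozen archive.
RUN yum install -y gcc make

# Builds run inside the container, isolated from the modern host userland.
WORKDIR /src
CMD ["make"]
```

Something like `docker run --rm -v "$PWD":/src legacy-toolchain` would then build the project with the old compiler regardless of what distro the host runs, sidestepping exactly the newer-distro incompatibilities mentioned above.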
An honest question: I am just beginning to understand containers, so this might be naive.
Containers share the kernel of the host operating system, so when the kernel is updated to a new, incompatible version with breaking changes, how will a containerized application continue to work?
The kernel interface tends to be super stable (e.g. on Linux, kernel ABI breakage is treated as a blocker bug, and Linus rants about it every now and then when people ignore this).
So it shouldn't be too painful to run RedHat 4.1 (from 1997) in a container on top of current Linux 4.8 and use that to run software linked against that ancient set of libraries. Probably easier than getting all those libraries to run in a current era userland.
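You can see the shared kernel directly: whatever image you run, `uname -r` inside the container reports the host's kernel, because only the userland comes from the image. A quick illustration (needs a Docker daemon; the image tag is just an example of an old userland):

```shell
# Kernel version on the host
uname -r

# Same command inside an old-userland container: it prints the same
# kernel version, because only the libraries and binaries around the
# process come from the image.
docker run --rm centos:6 uname -r
```

This is why an ancient userland keeps working on a modern kernel, and also why kernel ABI stability (mentioned above) is what the whole scheme quietly depends on.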
I am experiencing so many issues with Docker and current kernels. I can only envision an attempt to run RedHat 4.1 over a modern system as a recipe for disaster.
What kind of issues have you run into? I would like to hear about them.
Though I don't have any experience managing containers, the argument does not make a lot of sense to me. They are pitched as a holy grail that will get rid of the entire IT team managing the operating system, i.e. upgrades, patching, and configuration management. It just looks counter-intuitive to me, as any new upgrade, bug fix, or security patch still needs to be applied to the base OS. A containerized application can move freely from one host OS to another, but it will still depend on the same underlying environment.
> They are pitched as a holy grail that will get rid of the entire IT team managing the operating system, i.e. upgrades, patching, and configuration management.
They don't. Containers push a lot of that responsibility out to the developers.
You still need operators if you're running your own IaaS substrate.
Disclosure: I work for Pivotal, we do some stuff with containers.
With respect to your "Future Roadmap", I'm obliged to encourage you to look at Cloud Foundry.
It has some code in common with Docker at the lowest level, but the bulk of it is written from the ground up, fully TDD, fully pair-programmed, proved in production for several years running.
I'm also obliged to disclose that I work for Pivotal, the majority donor of engineering to Cloud Foundry.
I actually did this a while ago. I wanted to try out creating a Docker base image from scratch, so I created one for RHEL 4 and then launched it as a container on RHEL 7. I didn't do too much with it other than prove out that I could run processes in it, but as far as I could tell it was functional. And the weird looks I got from people when I told them about it were great!
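For anyone curious, the mechanics are simpler than they sound: a base image is essentially a root filesystem tarball fed to `docker import`. A hedged sketch (paths, excludes, and tags are illustrative, and you'd run the tar step on the old box or against an install tree):

```shell
# On a running RHEL 4 box, tar up the root filesystem, skipping the
# pseudo-filesystems that make no sense inside a container.
tar --numeric-owner -czf rhel4-rootfs.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev /

# Turn the tarball into a base image on the Docker host.
docker import rhel4-rootfs.tar.gz rhel4:base

# Processes now run against the ancient userland on the modern kernel.
docker run --rm -it rhel4:base /bin/bash
```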
RHEL 4 is about 7 years newer than Redhat 4.1... It may very well be possible to get Redhat 4.1 running too, but RHEL 4 is not quite in the same league.
> Me: Hey Boss, it's time to containerise legacy application x!
> Boss: Sounds like a good idea.
> Me: Ok, so we need a complete redesign into microservices to be 100% webscale, let's split it into 30 smaller projects then we can deploy it with swarm or kubernetes or whatever. Give me a team, 3 months and plenty of coffee.
> Boss: It's a fricking legacy CRUD app. Just shove the thing in a dockerfile and move on.
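In fairness, the boss's version really is often this short. A hedged sketch for a self-contained legacy CRUD app, where the base image, paths, port, and start script are all placeholders for whatever the old server actually ran:

```dockerfile
# Pick a base image close to the OS the app was certified on.
FROM debian:8

# Copy the app and its config exactly as they lived on the old server.
COPY ./legacy-app /opt/legacy-app

# Expose whatever port the app already listens on.
EXPOSE 8080

# Run it the same way the old init script did.
CMD ["/opt/legacy-app/bin/start.sh"]
```

No microservices, no redesign: one Dockerfile, one container, the same monolith as before, just portable.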
So use a volume on a Gluster or Ceph volume or similar. There are plenty of solutions.
I do agree that state is a challenge, but Docker at least pushes people towards being explicit about state rather than leaving data all over the place.
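Concretely, "being explicit about state" mostly means declaring where the app writes and mounting that path from outside the container. A sketch (the volume name, mount point, and image tag are illustrative):

```shell
# State lives in a named volume (or a bind mount onto a Gluster/Ceph
# mount on the host); everything else in the container is disposable.
docker volume create legacy-data

docker run -d \
    -v legacy-data:/var/lib/legacy-app \
    legacy-app:latest
```

Once the writable paths are pinned down like this, the container itself can be destroyed and recreated freely, which is the discipline Docker nudges you toward.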
Of course this doesn't weigh the cost in man-hours needed to port it....
The reason shoving legacy apps into VMs was/is so compelling is that it required little work and gave you great things (like easy backups, point-in-time recovery, stop-the-world stepping, cloning for dev environments, lower running costs, etc.).
Porting something effectively to containers will, in a large number of circumstances, require effort to make networking usable. Programming time is expensive. Testing is expensive.
What does containerising actually give you for a legacy app? Will it give you HA? (Not unless it's capable of it already.) Will it give you pain-free backups? (Well, that depends how you map and manage your volumes.)
Just hoover it up into a VM, set it, and forget it. It'll be much cheaper in the long term.
This is one of the companies I've been working with. We've built a container aggregation and automation platform for this exact use-case.
We can take pretty much any application, new, or legacy, wrap it in a container, and then automate all the business services required to turn it into a trial-upsell cloud service.
> Most legacy apps in their original states are far from being microservice-oriented.
An understatement of the year, especially when it comes to transactional apps, which a lot of legacy ones are. Distributed transactions across microservices... sounds delightful, almost like CORBA distributed transactions.