7 steps to re-engineer legacy software

Building a legacy worth remembering

Anyone who’s ever opened a .exe or .dll file with a text editor has probably noticed that the first two bytes of the file header are equal to “MZ” (hexadecimal 0x4D5A). It turns out that Mark Zbikowski, a Software Architect at Microsoft who worked on the pre-Windows MS-DOS executable file format in the early 80s, chose his initials “MZ” to be part of the header information on every binary. These two letters, in essence, have become his legacy. Mark’s initials, now firmly entrenched in computing history, served as a building block of MS-DOS and future operating systems.

Looking at this legacy from another angle, a dozen operating systems and 40 or so years later, Microsoft still uses the same file header. Through multiple additions and hacks, the presence of 0x4D5A remains firmly embedded in all sorts of operating systems, antivirus scanners, virtualization platforms, ATMs, embedded devices, and Xboxes. The legacy file header is obsolete, yet irreplaceable.

When planning a new project, no one strives to build legacy software. The reality, however, is that after several years or decades of service, systems that are not well maintained become obsolete. In other cases, app deterioration relates to broader factors such as organizational changes, a shift in the target market, staff rotation, or even a third-party vendor’s decision to sunset a vital operating system, build tooling, API, or programming language.

Many organizations struggle to successfully modernize outdated software. Devbridge’s Legacy Software Modernization white paper unpacks many of these struggles, as well as how to resolve them.

“Fixing aging systems takes courage, calculated risks, and an iterative approach in planning, funding, and execution. Untangling the issues of an aged system and modernizing can significantly shift the trajectory of organizations for the better.”

– Ed Price, Director of Technology at Devbridge.

This blog targets the technical nuances of legacy work for developers. Having spent 20+ months of my professional career on several modernization and refactoring projects, I share insights on how to transform legacy applications successfully.

* The contents of this article are geared towards those looking to rebuild or refactor existing systems, not those replacing legacy software with something completely new.

1. Audit the application.

On any legacy project, start by evaluating the application. Identify the obvious flaws or weak spots to remedy first. Most legacy applications suffer from one or more of the flaws noted below.

THE ISSUE: A rarely used, but still vital, admin panel or old UI

The application contains elements built on old frameworks and libraries (e.g., jQuery 1.9, Handlebars, Bootstrap 2.1, and VB.NET).

THE SOLUTION: The dev team needs to take time to understand the complexity of a legacy subsection of the product and how it impacts refactoring effort, build and release pipelines, and testing. Consider the value of completely rewriting outdated functionality with the tech stack used everywhere else in the application.

THE ISSUE: Odd hacks and dead code

The product features hacky lines of code with comments like // Workaround for a well-known bug from Microsoft or // https://stackoverflow.com/a/12345; dead code that is never deployed or used in production; and obsolete configuration file entries no longer referenced by the current code.

THE SOLUTION: Document how many times these instances occur. Analyzing these issues and removing dead code or outdated hacks saves valuable hours otherwise spent documenting, refactoring, deploying, and regression-testing code that does nothing.

A real-world example of a quirky hack

When refactoring an ASP.NET MVC 5 app, the team discovered an odd HTML model binder hack explained only by a Stack Overflow URL in a comment. Initially, removing the hack was expected to require a full regression test of the whole application. The team determined that the hack was designed to fix the native checkbox behavior of ASP.NET MVC 3 and was puzzled why it would still be needed in version 5. While the application was checkbox-heavy, and dozens of places used checkboxes for data filter selection, most of them were rendered dynamically via a JavaScript framework.

A more in-depth analysis uncovered only one native checkbox in the whole application: the 'remember me' checkbox on the login page. Removing the surplus code and re-testing the authentication functionality proved almost effortless. Furthermore, the hack had no effect on version 5 of ASP.NET MVC; it was essentially just dead code.

THE ISSUE: Outdated core operational functions

Many applications have core functionality that is a key reason why the app even exists. As in Mark Zbikowski’s case, the code may have been produced by a famous colleague who now holds a prominent title in the company. Such core code cascades throughout the system with every end-user transaction and, if left untouched, jeopardizes the success of a modernization initiative.

THE SOLUTION: Identify and evaluate all the pros and cons of the current version. Assess if the code is understood and well-documented, runs on the supported OS/frameworks, and is scalable.

THE ISSUE: A neglected automated test suite

Sadly, it is quite typical for an automated test suite in a legacy product to be slow, brittle, outdated, or not run at all. The test suite code needs to be audited: look out for hacks and dead code, unscalable “core” unit test setup routines, and non-uniform coding practices. If you don’t have a test suite yet, come up with a plan for creating one.

THE SOLUTION: Bring automated test suite components up to date. Make sure the test suite is operational on the CI/CD server. Simplify or remove tricky and unmaintainable test setups or assertion code.

THE ISSUE: Non-uniform coding practices

Major code differences, whether in front-end, back-end, or even automated test suite code, increase complexity and add effort to refactoring work.

THE SOLUTION: The effort to rewrite non-conformant code needs to be documented and estimated so that the team has visibility into it. Express risks of non-uniform code in business terms (e.g., security risk, increased maintenance time, flawed end-user experience) so that these Agile backlog items are weighted and prioritized in upcoming planning sessions.

A real-world example of non-uniform code

AngularJS 1.x is very different from Angular 2+. In many AngularJS application upgrades, our teams found that the JavaScript code was not uniform. The code included:

  • a mix of newer .component() constructs vs. older .directive() declarations

  • $scope vs. controllerAs syntax

  • varying unit testing approaches for different scopes of functionality

  • partial TypeScript or ECMAScript 2015 support

These non-uniform code patterns worsened the overall health of the code and increased the initial estimates for the code rewrite.

2. Set up source control.

Think about how to set up the source control to support the code highlighted in the refactoring. Is the team continuing to use the same git repo just with another branch? Or should they create a separate repository? Which repository/branch needs purging after final deployment, and which one will become a new source of truth?

Do not lose any git history while refactoring. Even if you are doing a massive reorganization of code (e.g., splitting code into multiple repositories or merging code from separate repositories into one), there are git techniques that preserve all history. Look up the documentation for git filter-branch --subdirectory-filter (or its modern replacement, git filter-repo) and git merge --allow-unrelated-histories.

To avoid inadvertently losing git history while modifying subtrees of the code, make each move as a single commit using git mv. If the situation warrants hosting code temporarily in two locations, move the code subtree to the new location in one commit, copy it back to the old location, refactor as needed, and then delete the code from the former location.
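As a minimal illustration of the single-commit git mv approach (the throwaway repository and paths below are hypothetical), this sketch moves a subtree and then verifies that git log --follow still walks through the rename:

```shell
# Sketch: move a subtree in one commit so history survives the rename.
# The repo, file, and paths are illustrative only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name dev

mkdir -p src/legacy
echo 'core logic' > src/legacy/app.txt
git add .
git commit -qm "initial commit"

# Single-commit move: git records this as a rename, not a delete + add
mkdir -p src/modern
git mv src/legacy/app.txt src/modern/app.txt
git commit -qm "move legacy subtree to src/modern"

# --follow continues past the rename, so both commits show up
git log --follow --oneline -- src/modern/app.txt
```

Running the sketch prints both commits for src/modern/app.txt; without --follow, the file’s history would appear to start at the move.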

3. Unify tools.

Teams working on legacy application re-architecting, refactoring, or re-platforming need to move, rewrite, upgrade, and delete a lot of code. For ease, unify tooling, including, but not limited to:

  • runtime versions (e.g., JVM/.NET, Node.js, IIS/Tomcat, Docker)

  • coding rules (e.g., ESLint, StyleCop, IntelliJ CheckStyle, PMD)

  • artifact management (e.g., Azure DevOps Artifacts, JFrog Artifactory, Nexus, private npm feed)

Teams, in general, should prefer code-based configuration files that reside in the root of the repository (e.g., .editorconfig, .nvmrc) along with a shared IDE coding style ruleset. By contrast, they should avoid documentation that exists only on the company’s intranet or development wiki, or, even worse, undocumented coding standards.
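For instance, a minimal .editorconfig at the repository root might look like the following (the values shown are illustrative, not a recommendation):

```ini
# .editorconfig — picked up automatically by most modern IDEs and editors
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

# Front-end files often use a narrower indent (illustrative choice)
[*.{js,ts,json,yml}]
indent_size = 2
```

Because the file travels with the code, every clone gets the same formatting rules without consulting a wiki.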

A real-world example of tools unification

For a .NET application transformation, the team chose to use private NuGet and npm feeds hosted in Azure DevOps. The .eslintrc, .npmrc, sass-lint, and StyleCop rules and code snippets live in the code repository, with documentation in a wiki referring to these files. Standard .NET compilation routines are enhanced with Directory.Build.props and Directory.Build.targets files (a relatively new addition in MSBuild version 15).
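To illustrate the Directory.Build.props technique (the property values and ruleset path below are hypothetical, not the actual project’s settings): MSBuild 15+ automatically imports a file like this for every project beneath it, so shared compiler settings no longer need to be repeated in each .csproj:

```xml
<!-- Directory.Build.props at the repo root; imported automatically
     by MSBuild 15+ for every project below this directory -->
<Project>
  <PropertyGroup>
    <LangVersion>latest</LangVersion>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    <!-- Shared StyleCop ruleset referenced by all projects (illustrative path) -->
    <CodeAnalysisRuleSet>$(MSBuildThisFileDirectory)StyleCop.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
</Project>
```

Per-project files can still override individual properties when a project genuinely needs different settings.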

* Developers may opt to use either Visual Studio 2019 / JetBrains Rider for back-end code and VS Code / JetBrains WebStorm for front-end code, as long as they conform to the coding standards defined.

4. Build fast and lean.

Build time is probably the most critical factor influencing team behavior. A slow build on a local machine causes each individual to become less effective; slow builds on CI/CD incapacitate the whole team. Pull requests become too big and contain unrelated changes as developers subconsciously fight longer build times by batching more code into fewer feature branches.

More and more features or bug fixes bounce back because simple smoke checks fail on new code, which could be a sign that the local and CI/CD build definitions do not match. Drill deeper and try to understand whether the local production-like build process is too manual and tedious or, on the contrary, the CI/CD build process is more complicated than it needs to be and should be simplified.

5. Set up pull requests.

Pull requests are a must for modern development processes. For details on how to adopt pull requests, check out Improving code quality with pull requests. In addition, evaluate how pull requests impact:

  • Technology. Use the technical capabilities of the source control server and CI/CD server to full effect. First, decide on a branching strategy; usually, a lightweight flavor of git-flow is all you need. Second, create effective git policy rules (e.g., prohibit direct pushes to sensitive branches). Third, introduce git hooks to enforce quality checks. Fourth, run CI/CD on feature branches and test as much as possible. Lastly, provide an environment where developers can deploy feature branch code before creating a pull request.

  • Operations. Map out efficient operations. Answer these questions to ensure the team follows best practices and is set up for success. Some qualifying questions to consider:

    • What is the minimum number of approvers for a PR?

    • Who can approve what (e.g., can a database developer review and approve JavaScript code)?

    • How will the sprint item # link to a pull request?

    • How does the versioning rule look for the feature branch vs. the main branch?

    • Should the version or build # link back to the deployed application version?

  • Psychology. Be positive and encouraging when reviewing a pull request. At the same time, be firm. Be mindful of the initial agreements made with the whole team, not allowing subpar quality code to pass through. Encourage even the quietest or least experienced team members to contribute to PR reviews. Learn and improve along the way. A professional attitude and good manners strengthen the team, help each member become more productive, and advance everyone’s skills.
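As one concrete example of the git hooks mentioned above, the sketch below installs a client-side pre-push hook that refuses direct pushes to protected branches. The branch names are illustrative, and client-side hooks are easy to bypass, so treat this as a convenience layered on top of authoritative server-side branch policies:

```shell
# Sketch: install a pre-push hook that blocks direct pushes to protected
# branches. Run from the repository root; branch names are illustrative.
mkdir -p .git/hooks
cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
protected="main release"
# git feeds the hook one line per ref being pushed:
#   <local_ref> <local_sha> <remote_ref> <remote_sha>
while read local_ref local_sha remote_ref remote_sha; do
  branch=${remote_ref#refs/heads/}
  for p in $protected; do
    if [ "$branch" = "$p" ]; then
      echo "Direct push to '$branch' is blocked; open a pull request instead." >&2
      exit 1
    fi
  done
done
exit 0
EOF
chmod +x .git/hooks/pre-push
```

A git push to main then fails locally with the message above, nudging the developer toward a pull request.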

6. Establish environments.

Ensure there are enough testing environments. Most likely, a team working on product refactoring will need even more environments than they currently have in order to frequently test old vs. new behavior. For example, when evaluating a crucial ORM upgrade for an application, the team might spin up a dedicated environment to test a proof of concept, only to find out that the performance results are unacceptable. Later on, to properly tackle the task, use two feature branches and two environments: the first branch and environment contain the code changes related to the ORM upgrade, while the other pair contains the performance-related fixes. Monitoring and continuously comparing both environments brings visibility and certainty to each code commit. Finally, once happy with application performance, merge the latter feature branch into the main codebase.

The effort and corresponding costs to spin up new environments vary depending on the current system architecture and deployment scenarios (e.g., next to impossible for a mainframe application vs. very easy for containerized apps).

Keep the application in a working state and limit the number of breaking changes to one at a time. While you may be tempted to break and fix multiple things at once, this approach rarely succeeds in reality: failures start to cascade, you lose clear visibility of progress, and it takes more time than anticipated to stabilize the application. Red, Green, Refactor, the approach taken in TDD, is perfectly valid in this scenario. Keep a single track of potentially breaking changes and advance to the next one in sequence. Each partial refactoring or upgrade attempt needs to meet the agreed quality standards before moving on to the next one.

7. Collaborate continuously.

When assembling the project team, include stakeholders, subject matter experts, project managers, end-user representatives, and specialists from various engineering areas. Operations and support staff are sometimes under-represented on the team and shouldn’t be. These people maintain the legacy software, and chances are they have a lot to suggest regarding application resilience, monitoring, and logging. If working with several teams (e.g., one team still working on feature delivery and another dedicated solely to refactoring), think of ways to rotate team members to help with knowledge transfer.

Wrapping up

Some of the biggest companies in the world rely on legacy software every day. This article reviewed the most critical factors contributing to applications becoming legacy. The engineering team tasked with tackling outdated application issues may decide either to rebuild the system completely or to refactor the current system through component upgrades, in-place remediations, and face-lifts of critical areas. No matter the decision, when upgrading or refactoring existing applications:

Do an external audit of your application first and make sure any concerns raised are materialized as Agile backlog items.

Don’t lose the git history.

Do unify coding rules, tooling, and external dependencies management.

Don’t make engineers suffer through an inefficient and lengthy pull request process.

Do feature enough environments for deployments and A/B testing.

Do Red-Green-Refactor and limit work tracks for breaking changes.

Don’t act as an isolated engineers-only team.

Additional resources

Legacy software modernization

Five steps for a successful legacy rebuild

Improving code quality with pull requests
