Navigating beyond digital transformation

How to accelerate organizational change and transform at scale with the right elements in place

Lean tech for nimble teams

Successful leaders recognize that a transformation requires not only process change, but a shift in thinking around technology and architecture. The software a team builds is a digital artifact—be it a mobile application, web app, or microservices implementation—whose value is realized as a meaningful customer experience. Organizing around product and empowering teams will not alone guarantee speed. The underlying technology architecture, and the selection of that technology, must be extensible and conducive to frequent release cycles.

The risks of legacy architecture

Legacy architecture and infrastructure are frequent obstacles to digital transformation. As defined by Gartner, a research advisory company, legacy architecture is “a set of information systems that may be based on outdated technologies, but they remain critical to day-to-day operations.” Many organizations (in core banking, for example) have years of investment in monolithic systems that have grown unwieldy over time.

Touched by hundreds of software engineers over several decades, these systems have been the subject of numerous last-minute hacks to meet regulatory or business requests. As a result, they have become a convoluted, unmanageable web of tightly coupled systems with growing technical debt.

Some of these legacy platforms have achieved an industrialized state of resilience to changing business requirements and market factors, but numerous artifacts still plague the business. Ancient terminals, green screens, and user experiences defined more by the system’s evolution than by the needs of employees are common. Elements that create a single point of failure remain. Complicating matters, fewer and fewer engineers are equipped to support these systems or understand the baked-in business logic, and the colloquial knowledge of the code has been lost under layer after layer of changes. No scimitar is sharp enough to slice through these complexities.

How do companies unravel this ball of yarn without risking their core operational functions? Create an abstraction layer through the use of web services, product information management systems, and redundant data stores outside the target systems of record. This initial step addresses the challenges in adapting to modern business needs, but cannot be seen as anything other than temporary relief. In order to cure the patient, the ship’s doctor (frequently with the title of Enterprise Architect or CIO) must develop strategies to address the architecture of legacy systems beyond abstraction alone.
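As a sketch of that abstraction layer, the snippet below wraps a legacy system's fixed-width records behind a modern service interface, so callers never touch the system of record directly. All names and the record format are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

class LegacyCoreBanking:
    """Stand-in for a legacy core-banking system (hypothetical)."""
    def qry_acct(self, acct_no: str) -> str:
        # Returns a fixed-width record: 10-char account number,
        # then balance in cents right-aligned in 12 chars.
        return f"{acct_no:<10}{123456:>12}"

@dataclass
class Account:
    number: str
    balance: float  # dollars

class AccountService:
    """Abstraction layer: modern callers depend on this interface,
    not on the legacy record format, so the system of record can
    later be replaced without touching those callers."""
    def __init__(self, legacy: LegacyCoreBanking):
        self._legacy = legacy

    def get_account(self, number: str) -> Account:
        record = self._legacy.qry_acct(number)
        return Account(number=record[:10].strip(),
                       balance=int(record[10:22]) / 100)
```

In practice the abstraction would be exposed as a web service; the point is that the translation from legacy format to modern model happens in exactly one place.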

Diagnosing the problem

The first step to a cure is to review the symptoms. Which legacy applications can the business tolerate? Which systems should receive investment to adapt, or in many cases, be migrated to a new platform? In some cases, these systems can be eliminated altogether by the process of application rationalization.

When it’s evident that maintenance is too costly, and the aforementioned artifacts are too detrimental, the best path forward may be to upgrade rather than to put another bandage on old problem applications. A phased approach may be used, with abstraction as the first phase and a full rewrite as the second.

When choosing architecture and systems, avoid targeting a specific vendor or technology stack. Staying vendor-neutral provides maximum flexibility (avoiding vendor lock-in) and the ability to choose products with the best business and architecture fit. An added benefit is flexibility when hiring: companies aren’t locked into a specific technology and can draw from a larger pool of talent.

The long-term benefits of modular architecture

Software and systems architecture should ensure that the design is modular—constructed out of standalone modules with defined functionality and interfaces. Any module can be replaced with another (whether refactored or implemented in another programming language) that implements a matching set of interfaces. This becomes increasingly important in the long term, as systems scale and integrated applications need to be sunsetted. A modular architecture keeps the maintenance costs induced by future changes lower than in comparable monolithic applications.
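The interface-driven replaceability described above can be sketched in a few lines. The module names and tax rules here are invented purely for illustration; any implementation that satisfies the interface can stand in for another:

```python
from typing import Protocol

class TaxCalculator(Protocol):
    """A module boundary expressed as an interface (illustrative)."""
    def tax(self, amount: float) -> float: ...

class FlatTax:
    def tax(self, amount: float) -> float:
        return amount * 0.10

class ProgressiveTax:
    def tax(self, amount: float) -> float:
        return amount * 0.05 if amount <= 100 else 5.0 + (amount - 100) * 0.20

def checkout(total: float, calculator: TaxCalculator) -> float:
    # The caller depends only on the interface, so either module
    # (or a future rewrite exposed behind the same contract) fits.
    return total + calculator.tax(total)
```

Swapping `FlatTax` for `ProgressiveTax` requires no change to `checkout`, which is exactly the property that keeps future maintenance cheap.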

A modular architecture means splitting the product into smaller, domain-specific services that are independent of each other (including storage and protocols/formats). This separation yields several benefits:

Replaceable: The team can rewrite the whole service independently from the rest of the system as long as it provides the same interface.

Technology agnostic: Services can be written without dependence on specific tech stacks and with the best storage fit.

More resilient to downtime: One service failure does not bring the whole system down, which allows for a variety of fall-back strategies to be employed.

Scalable: Separate services can be scaled as required, without impacting other parts of the system. For example, infrastructure costs can be managed on a per-service basis, making it possible to maintain an ideal balance of utilization and cost.

More efficient resource usage: Each service can have its own scaling and resiliency configuration, so the resources are distributed more evenly.

Easier extension and versioning: Extensions for a single service can work independently, transitioning dependencies to a new version incrementally instead of a “big-bang” release. The result is a higher tolerance for failure, and lower risk in deployment and maintenance.

Team decoupling: It’s easier to decouple dependencies between different teams and decouple release cadence for services.

The above benefits require a certain architectural maturity and visibility. Having a proper infrastructure in place will significantly simplify rollout and maintenance.
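As one example of the resilience benefit above, a simple fall-back wrapper lets the system degrade gracefully when a single service fails. The service names are hypothetical:

```python
def with_fallback(primary, fallback):
    """Wrap a service call so a failure degrades gracefully
    instead of propagating and taking the whole system down."""
    def call(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception:
            return fallback(*args, **kwargs)
    return call

def recommendations_service(user_id: str) -> list:
    # Hypothetical downstream service that is currently failing.
    raise ConnectionError("recommendation service is down")

def popular_items_fallback(user_id: str) -> list:
    # A cached, generic response keeps the page rendering.
    return ["bestseller-1", "bestseller-2"]

get_recommendations = with_fallback(recommendations_service,
                                    popular_items_fallback)
```

Production systems typically layer in timeouts and circuit breakers, but the principle is the same: one failing service yields a degraded response, not an outage.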

Divide and conquer.

Implementing the above approach means deciding which subsystems or specific functionality will be exposed. A service-oriented architecture (SOA) is best implemented with atomic, single-purpose services. These services can be orchestrated or combined at a higher level to produce more complex applications. Companies that utilize this atomic, service-oriented approach will be well-positioned for the next level—microservices and serverless architectures.
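The idea of composing atomic, single-purpose services into a more complex application can be sketched as follows; the services shown are invented for illustration:

```python
# Two atomic, single-purpose services (hypothetical examples).
def inventory_service(sku: str) -> bool:
    return sku in {"SKU-1", "SKU-2"}

def pricing_service(sku: str) -> float:
    return {"SKU-1": 9.99, "SKU-2": 19.99}.get(sku, 0.0)

# A higher-level orchestration combines them into a more
# complex capability without coupling them to each other.
def quote(sku: str) -> dict:
    if not inventory_service(sku):
        return {"sku": sku, "available": False}
    return {"sku": sku, "available": True, "price": pricing_service(sku)}
```

Each underlying service stays single-purpose and independently replaceable; only the orchestration layer knows how they combine.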

The next stage in the journey is to enact a purposeful and systematic approach to modernizing applications. This begins with taking inventory and analyzing applications, dividing the code base into two categories. The first category, of most import to a Lean-Agile/Lean-Build-Measure approach, consists of an application’s unique features. These features should be architected into SOA, exposing the business capabilities to the enterprise via web services. The second category contains the application’s features that could or should be obtained elsewhere: commodity services provided by other internal or third-party systems.

Modernized applications then link to internal, SOA-compliant modules as a set of web services. The result is a complete application refresh that doesn’t require the organization to refactor the entire application, a process that often derails the effort before it even begins.

By using this categorization method, organizations will expedite the delivery of business value and be adaptable as stakeholder requirements change, all while moving the organizational vision and mission forward.

Large organizations may need to attack the problem one domain at a time, slowly and methodically changing systems architecture. Most enterprises are too big to do it all at once. An optimal approach is to outline a strategic 1- to 3-year plan. The plan can then react to ever-changing business and technology improvements.

Continue to embrace the Lean-Agile/Lean-Build-Measure loop during the adaptation of these systems. By involving product and development, the true needs of the system will be realized. Take those learnings and successes as feedback into the next domain to tackle.

Embrace continuous deployment. Deliver immediate value.

The metrics are in place, the teams are empowered, and the architecture is evolving alongside product development. The value of these efforts is best realized in delivery. The best way to equip your organization is to enact practices of Continuous Integration (CI) and Continuous Delivery (CD) (but not necessarily rapid or real-time deployment). The next step is to establish Continuous Deployment (CD).

Many organizations will cite the various reasons they can’t adopt DevOps. These reasons can include risk, regulation, information security, and other controls that prevent them from going beyond traditional CI, where the product is automatically built with test automation for unit, integration, and regression, then packaged for deployment to a targeted environment. These organizations should recognize that, like any manual process that is automated, proper checks and balances can be established to make Continuous Deployment (CD) a reality.

One might ask, “What’s the difference between continuous delivery and continuous deployment?” Continuous deployment is the next step in the CI/CD evolution. Continuous deployment takes delivery and automatically deploys it into environments, even all the way to production. At this advanced state, responsive teams are set up for success and organizations see the greatest benefit.
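A minimal sketch of that distinction, with illustrative stage names: continuous integration and delivery always run through packaging, while continuous deployment removes the manual gate in front of the environments:

```python
def run_pipeline(auto_deploy: bool) -> list:
    """Illustrative pipeline stages only. CI builds and tests the
    product; continuous delivery packages a deployable artifact;
    continuous deployment removes the manual gate and pushes the
    artifact on through environments automatically."""
    stages = ["build", "unit-tests", "integration-tests", "package"]
    if auto_deploy:
        # Continuous deployment: every successful build flows
        # through to production with no human approval step.
        stages += ["deploy-staging", "deploy-production"]
    return stages
```

With `auto_deploy=False` the pipeline stops at a releasable artifact (continuous delivery); with `auto_deploy=True` the same artifact proceeds to production automatically (continuous deployment).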

To be fully automated, organizations must set up their CI and CD, inclusive of continuous deployment, in the early phase of a product’s development—a phase we at Devbridge call Sprint 0. Baking continuous deployment into processes early vastly reduces the infrastructure and deployment resources required for each life cycle of the development effort. This results in lower costs and faster delivery.

Continuous deployment works. It gets products to market sooner, at lower cost and higher quality. We’ve learned this through work with clients over the years, and it’s backed up by industry practice: Amazon deploys code roughly every 11 seconds, Netflix runs its deployments through CI/CD pipelines, and others do the same.

How do companies achieve continuous delivery?

We advocate for and assist our clients by suggesting the following:

  • Select a pilot program that can leverage DevOps. The pilot’s budget should fall in the $1-5 million range so that results can be demonstrated quickly.

  • Build a beachhead team from two organizations. This should be a cross-functional, blended team with people from both sides who can work together. Internal teams pick up skills from an external resource while operating within the context of a real project—not just theory.

  • Use delivered product as a foundation of DevOps and a driver of cultural change. Advertise engagement across the organization as a metric of success.

Organizations that embrace DevOps as part of the Agile process—not a separate add-on—can react to market or product failures. By embracing continuous deployment, organizations will deliver immediate value to their customers. As a result, these organizations will see exponential business value.

The journey has just begun

When digital transformations begin, they often involve a small and empowered group protected from the rest of the organization. The team is enabled to focus on the task at hand. Distractions are deflected. They’re evaluated on their ability to adapt to change and deliver instead of navigating internal structures and funding models.

Transformed teams deliver products that become flagships for the rest of the organization. These teams’ successes regularly exceed average delivery estimates. With ownership of the product, employees feel engaged. The delivery effort is supported by a responsive and integrated DevOps effort to support the continuous realization of value. As a result, businesses see better results, faster—what’s not to like?

Scaling teams and aligning them around product delivery is not without its challenges. Product, especially product design, often becomes a centralized service that teams call upon when needed. Product ownership becomes distributed amongst analysts and managers. Engineers are allocated but often shuffled—robbing Peter to pay Paul. This is where empowered, top-down leadership is so important. If the highest levels of a company are not aligned, departments will scramble for resources, reverting to old ways of business. The transformation will stagnate and ultimately fail.

Instead of requiring (or allowing) a line of business to come up with artificial projects, leadership should have each line of business propose products designed to address the opportunities and goals communicated directly from the top. These products will naturally cross line-of-business boundaries, requiring cross-functional teams to deliver and operate them.

As we’ve detailed, even after a transformation is underway, the journey is far from over—it’s just begun.
