With the advent of cheap, pay-as-you-go compute and all things “cloud,” the role of software engineering has never been more critical or varied. But there are anti-patterns: common design traps that are easy to fall into and can wreck any cloud effort.
It takes planning and effort to steer clear of anti-patterns in real time. In the software engineering world, constant refactoring goes on to correct these anti-patterns and protect against them. Agile teams use events like retrospectives to understand what has worked, what needs to be corrected and what action items are needed. This constant process improvement matters because cloud anti-patterns must be recognized before they become prohibitively expensive to correct or refactor.
The "Lift-and-Shift" anti-pattern
The classic “Lift and Shift” anti-pattern occurs when a project- or PMO-led initiative tries to achieve cost savings by shifting applications from a local data center to the cloud, with its promises of cheap compute and easy elasticity. Senior IT leadership and project managers hear these promises, and a project is spun up to lift an application and its tendrils out of the local data center and shift it into public cloud infrastructure.
There are several problems with this approach:
- Organizations invariably treat public cloud infrastructure exactly like the local data center, meaning that the patching burden, the lack of horizontal scale and every other existing problem carries over intact, but now into a much more expensive environment.
- The shift amounts to moving from one data center to another (in this case, the cloud), so the expense, labor model and application complexities remain the same post-shift and there is generally no return on investment (ROI).
- Beyond ROI, the move has financial-classification consequences. Local data center costs are capital expenditures (CapEx), which are depreciable for tax purposes, while cloud costs are operational expenditures (OpEx), which can be more expensive once the tax implications are factored in.
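To make the CapEx-versus-OpEx point concrete, here is a minimal sketch of the comparison. All dollar figures are hypothetical assumptions chosen for illustration, not real hardware or cloud pricing:

```python
# Illustrative-only comparison of on-prem CapEx depreciation vs. cloud OpEx.
# Every figure below is a hypothetical assumption, not real pricing.

def straight_line_depreciation(capex: float, years: int) -> float:
    """Annual depreciation expense for on-prem hardware (straight-line)."""
    return capex / years

# Hypothetical: $300k of servers depreciated over 5 years.
on_prem_capex = 300_000
annual_depreciation = straight_line_depreciation(on_prem_capex, years=5)

# Hypothetical: equivalent cloud VMs at $6,000/month, billed entirely as OpEx.
cloud_monthly = 6_000
annual_cloud_opex = cloud_monthly * 12

print(f"On-prem annual depreciation expense: ${annual_depreciation:,.0f}")
print(f"Cloud annual OpEx:                   ${annual_cloud_opex:,.0f}")
```

Under these made-up numbers the cloud bill is larger even before tax treatment is considered, which is exactly why a lift-and-shift business case needs real numbers, not just the promise of elasticity.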
The "Refactor to Micro-Monolith" anti-pattern
This anti-pattern starts out well but can become a pitfall very quickly. Refactoring a monolithic application into multiple distinct APIs along domain or business-service boundaries is a great way to start a well-designed microservices architecture. The pitfall lies in the infrastructure deployment: if all of these APIs still have to live on the same set of virtual machines in the cloud in order to communicate, the result is a micro-monolith.
Some problems with this approach:
- Although the software has been refactored, it has not been architected for independent deployment, meaning there will be tight interdependencies between APIs, and releases will be tightly coupled.
- If VMs remain the discrete unit of deployment, horizontal scale is achievable only at the whole-system level, not at the unit-of-work level. In other words, to scale one hot API, one must deploy additional pre-provisioned VMs carrying every API.
- This solution may become more expensive than the problem it solves, because scale is achieved only at the VM level.
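The cost of VM-level scaling can be sketched with simple arithmetic. The sizing and pricing numbers below are hypothetical assumptions purely to illustrate the over-provisioning effect:

```python
# Hypothetical sizing to show why VM-level scaling over-provisions.
# Ten co-located APIs share one VM; only one of them needs more capacity.

services_per_vm = 10
vm_monthly_cost = 500  # hypothetical monthly cost of one full VM

# Micro-monolith: to add capacity for one hot API we must clone the entire
# VM, paying for nine idle copies of the other services as well.
micro_monolith_scale_out = vm_monthly_cost

# Independently deployable services: scale only the hot unit of work,
# paying for roughly a tenth of a VM's capacity.
independent_scale_out = vm_monthly_cost / services_per_vm

print(f"Scale one API, micro-monolith: ${micro_monolith_scale_out}/mo")
print(f"Scale one API, independent:    ${independent_scale_out:.0f}/mo")
```

Under these assumptions, every scale-out event in the micro-monolith costs ten times what a true unit-of-work deployment would.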
"Platform Thinking" as the answer
When rethinking these anti-patterns into a cloud-native architecture, one is drawn to the concept of a digital platform. A digital platform is a runtime environment that enables development teams to quickly deploy software and cloud-native microservices. The promise of a digital platform goes to the heart of software engineering: code reuse, or the Don’t Repeat Yourself (DRY) principle.
Capabilities like logging, API resurrection, observability and elasticity become shared services across all APIs on the digital platform. The cost of development is reduced to writing the business logic itself, not infrastructure provisioning, DevOps activities or other platform services.
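A minimal sketch of that DRY principle in action, using only the Python standard library: a hypothetical platform-provided wrapper supplies logging and timing once, so each API handler contains nothing but business logic. The `platform_service` name and the sample handler are illustrative assumptions, not a real platform API:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("platform")

def platform_service(handler):
    """Hypothetical platform wrapper: shared logging and timing for any API."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        log.info("calling %s", handler.__name__)
        result = handler(*args, **kwargs)
        log.info("%s took %.4fs", handler.__name__, time.perf_counter() - start)
        return result
    return wrapper

# The API itself is pure business logic; observability comes from the platform.
@platform_service
def get_order_total(prices):
    return sum(prices)

print(get_order_total([20, 5]))
```

The same idea scales up: a real platform provides these cross-cutting services at the infrastructure layer rather than as decorators, but the economics are identical, since the shared concern is written once and every API inherits it.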
Technologies like Kubernetes, Cloud Foundry, Mulesoft, Heroku or Netlify can all be leveraged in this way, giving companies a foundation for platform enablement.
Many still think of DevOps as just the set of tools, technologies and practices that move code from lower to higher environments. That is certainly one component of DevOps, but two other areas must be considered: Infrastructure-as-Code and Configuration-as-Code. Smart, mature organizations ruthlessly attack manual effort and automate as much as possible.
Infrastructure-as-Code treats cloud infrastructure as ephemeral, deterministic and repeatable. Companies that want a successful cloud journey need to adopt this principle from the very beginning; it is only through routinely repaving infrastructure that they begin to reap the promised benefits of cloud computing.
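The core of the Infrastructure-as-Code idea can be sketched in a few lines: infrastructure is expressed as declarative desired state, and an idempotent "apply" step reconciles reality toward it, so repaving is just re-running the same code. This is a toy model of what tools in this space do, with hypothetical resource names; it is not any real tool's API:

```python
# Toy model of Infrastructure-as-Code: declarative desired state plus an
# idempotent reconcile step. Resource names and specs are hypothetical.

desired_state = {
    "web-vm": {"size": "small", "count": 2},
    "db-vm": {"size": "large", "count": 1},
}

def apply(current: dict, desired: dict) -> dict:
    """Idempotent reconcile: running it twice yields the same infrastructure."""
    for name, spec in desired.items():
        if current.get(name) != spec:
            print(f"create/update {name} -> {spec}")
    for name in current:
        if name not in desired:
            print(f"destroy {name}")
    return dict(desired)  # infrastructure now matches the declaration

infra = apply({}, desired_state)     # first run provisions everything
infra = apply(infra, desired_state)  # second run makes no changes (idempotent)
```

Because the declaration, not a human, is the source of truth, tearing down and repaving an environment is deterministic and cheap, which is precisely the property the paragraph above describes.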
Like everything in software engineering, avoiding anti-patterns involves trade-offs. One simply cannot make business decisions on architectural purity alone; in the world of dollars and cents, technology decisions need to be based on ROI and software shelf life.
A strategic partnership
For companies to succeed in their cloud journey, they need technical resources that are both tactical and strategic in nature. These resources can take many forms, from full-time employees to niche consulting. It is incumbent upon IT leadership to have valued partners who can assist with software assessment activities and business goals. It’s through understanding a company’s cloud goals, coupled with tactical assessments, that cloud journeys become successful.
It’s not going to be just one quick win, but several small wins that move a company forward to cloud success.
What can Blueprint do for you? Contact us to talk about your journey, goals, and how we may be able to help you write your success story.