In Agile shops especially, “build vs. buy” is an ongoing and evolving debate among CIOs and DevOps teams. You see a demo of a new product that solves a problem, you look at the talent on your bench, and you think: Yeah, but we could do this ourselves. Then you remember the mountainous backlog waiting for your team in Jira, and you quickly realize that you may not have the luxury of building it OR buying it right now. So it’s actually “build vs. buy vs. the status quo”. And it’s in that calculation that some real opportunities and challenges lie for your business, especially when it comes to continuing to get your features to market faster than your competition.
Adding to the challenge, with the recent Cambrian-like explosion of containers everywhere from startups to enterprises and beyond, the time-tested cloud architecture pattern of “letting the platform do the hard stuff” is being tested anew. Don’t get us wrong: We heart containers! Containers are amazing! Traditional app deployments vs. containerized app deployments is like the difference between mom sending you a recipe and mom sending you a cake. Everyone loves cake! However, the reimagining of deploying and running app workloads in containers is having unintended consequences for all of us, some of which may have been hard to anticipate when Kubernetes was initially released in June 2014, let alone when Docker debuted all the way back in March 2013. But indeed, containers are inspiring new and urgent conversations about longstanding operational concerns like scheduling, security, monitoring, and performance, not to mention somewhat newer topics like containerized service discovery and overlay networking.
But when you’re a developer in a hurry, like all of us are these days, you probably tend to treat almost any blocker you encounter for the first time by asking yourself whether there’s a quick workaround that will leave that supposed “blocker” right where you found it, and off you go. All you really want to do is get the feature you set out to build in this sprint working, with minimal refactoring in production. You’re already writing a containerized microservice, so what’s the harm in writing another containerized microservice, or at least a one-off API integration, to support it? It’s just a little more code, after all. And containers are supposed to fix all of those “works on my laptop” issues, right? You’re not necessarily thinking about this blocking problem as a platform problem, per se. It’s just a dependency problem. You resolve or work around dependencies for a living, and this is just one more of those. And since you’re likely not running a fully realized container orchestration environment on your laptop today (you’re just running a basic container engine), you may not have all of the container orchestration concepts at your fingertips to step back for a minute and ask why something that does this doesn’t already exist on the platform. All of which can mean that the leap from containers to container orchestration isn’t necessarily a straightforward one.
Given that you’re in an established cloud shop, and you’re already running simple containers at some kind of production scale, one of the first conceptual hurdles you have to clear when you begin looking at the various enterprise container orchestration offerings on the market is a constraint that can initially look like a cloud anti-pattern. In such a case, you have probably followed another of the time-tested cloud architecture patterns to a tee: horizontal scale. So it’s entirely conceivable that you’re containerizing apps and running them 1:1 on dedicated instance roles. (These cloud instance roles might predate your adoption of containers by years, and have subsequently been containerized. No judgments! These things happen.) But this can make the math of actually licensing an enterprise container orchestration solution at the instance/host level somewhat cost-prohibitive for you.
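To make that math concrete, here’s a back-of-the-envelope sketch with purely hypothetical numbers; the per-host license price, host counts, and container density below are assumptions for illustration, not quotes from any vendor:

```python
# Back-of-the-envelope license math with hypothetical numbers.
HOST_LICENSE_PER_YEAR = 2_000  # assumed per-instance/host license cost (USD)

# Today: horizontal scale at the instance level, one container per host.
hosts_today = 200
cost_today = hosts_today * HOST_LICENSE_PER_YEAR    # 200 * 2,000 = 400,000

# After consolidation: the same 200 containers packed ~10 per host.
containers = 200
density = 10                    # containers per host after consolidation
hosts_after = containers // density                 # 20 hosts
cost_after = hosts_after * HOST_LICENSE_PER_YEAR    # 20 * 2,000 = 40,000

print(f"1:1 hosts: ${cost_today:,}/yr vs. consolidated: ${cost_after:,}/yr")
# -> 1:1 hosts: $400,000/yr vs. consolidated: $40,000/yr
```

That order-of-magnitude gap is exactly what the consolidation strategy below is meant to close.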
In our view, perhaps the best way to overcome such a financial constraint is to maximize the ROI of instance/host-based enterprise pricing and simply embrace the idea that your horizontal scale is now going to happen at the container level, not at the instance/host level, and to treat all of your container engine cloud instances like you would bare metal: consolidate now, push for higher and higher densities over time, and trust your new container orchestration solution to maintain the container counts and availability/replication schemes that you need to keep your apps running and the business happy.
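As a minimal sketch of what “horizontal scale at the container level” looks like in practice, here’s how you might ask an orchestrator for a fixed replica count using the official Kubernetes Python client; the deployment name, namespace, and replica count are hypothetical placeholders, and we’re assuming a cluster reachable through your local kubeconfig:

```python
# A minimal sketch, assuming a Kubernetes cluster reachable via your local
# kubeconfig and the official "kubernetes" Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
apps = client.AppsV1Api()

# Horizontal scale now happens at the container level: we ask the
# orchestrator for 12 replicas of our app, and it maintains that count
# across however many (densely packed) hosts we give it.
apps.patch_namespaced_deployment_scale(
    name="orders-api",        # hypothetical deployment name
    namespace="production",   # hypothetical namespace
    body={"spec": {"replicas": 12}},
)
```

The replica count, not the instance count, is now the unit of horizontal scale; the scheduler decides which densely packed hosts those twelve containers actually land on.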
So whenever you’re ready to begin evaluating enterprise container orchestration solutions in your organization, be forewarned that they have all come a long way, and some have amazing differentiating features. Thus, it may end up being a tough decision for you and your fellow stakeholders. As always, it’s important to keep the architecture discussion as requirements-driven as possible.
We’d recommend that you seek consensus on minimum requirements such as the following (we sketch a rough scorecard after the list):
- Automation
- Self-service
- Advanced networking support
- Service discovery
- Persistent storage support
- Secure design
- App-centric point of view
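One way to keep that consensus discussion requirements-driven rather than demo-driven is a simple weighted scorecard across those minimums. A rough sketch, where the candidate names, weights, and 1-5 scores are all hypothetical placeholders for your own evaluation data:

```python
# A hypothetical weighted scorecard for comparing orchestration offerings.
# Weights and 1-5 scores below are placeholders, not real product ratings.
REQUIREMENTS = {              # requirement -> weight your team agrees on
    "automation": 5,
    "self_service": 4,
    "advanced_networking": 4,
    "service_discovery": 5,
    "persistent_storage": 3,
    "secure_design": 5,
    "app_centric_pov": 3,
}

candidates = {                # candidate -> requirement -> score (1-5)
    "Vendor A": {"automation": 4, "self_service": 5, "advanced_networking": 3,
                 "service_discovery": 5, "persistent_storage": 4,
                 "secure_design": 4, "app_centric_pov": 5},
    "Vendor B": {"automation": 5, "self_service": 3, "advanced_networking": 5,
                 "service_discovery": 4, "persistent_storage": 3,
                 "secure_design": 5, "app_centric_pov": 3},
}

def weighted_score(scores: dict) -> int:
    """Sum each requirement's score multiplied by its agreed weight."""
    return sum(REQUIREMENTS[req] * scores[req] for req in REQUIREMENTS)

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores)}")
```

Even a crude scorecard like this tends to steer the conversation away from differentiating-feature excitement and back to the requirements list above.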
And if you still have trouble reaching that consensus, it sometimes helps to remind ourselves that inaction is a decision too.
Bottom line: enterprise-level container orchestration is totally ready for prime time, it’s likely to be totally worth the investment, and your organization totally has an opportunity to do better than just maintaining the status quo in how you deliver your containers at production scale going forward.