I received a lot of feedback on my recent blog regarding “Public vs. Private Cloud”, in which I argued that private clouds, shrink-wrapped software, and — in general — on-prem infrastructure are going the way of the Dodo.
There is nothing new or earth shattering in what I wrote. Many experts have been saying the same thing for years. We all seem to agree that we’re moving towards the cloud. Yet, for some reason, enterprise companies continue to invest in, and perpetuate, the old model for infrastructure deployment. With all the hype around cloud adoption, it’s easy to forget that over 90% of all IT spend still goes to traditional on-prem deployments. Inertia continues to be a big factor in Enterprise IT organizations just as incrementalism reigns supreme in the R&D organizations of “old school” system and software providers.
I spent most of my career building operating systems and distributed system software delivered as shrink-wrapped software meant to be deployed on-prem. I’m proud of what we all accomplished as an industry. But we’ve come a long way. Those battles are pretty much over and the industry has moved “up the stack” so it can continue to innovate in new and uncharted territories.
Very few companies are starting new processor architectures or building operating systems from scratch. The world standardized on one of two processor types (x86 or ARM), one of two operating systems (Linux or Windows), one of two relational databases (Oracle or SQL Server), and so on. There is no longer any point in arguing that MIPS was a better, cleaner processor architecture than Intel’s x86. I personally spent a huge chunk of my career on that processor and am proud of the work we did, but there are no longer any companies out there building systems based on the MIPS architecture. More importantly, there are no companies offering applications compiled for that instruction set. It’s time to move on.
The same logic should be applied to on-prem infrastructure hardware and software. We need to agree, as an industry, that the Enterprise-IT-owned-and-operated data center will also soon go the way of the Dodo.
“One of the insights from our research about commoditization is that whenever it is at work somewhere in a value chain, a reciprocal process of de-commoditization is at work somewhere else in the value chain. … The reciprocality of these processes means that the locus of the ability to differentiate shifts continuously in a value chain as new waves of disruption wash over an industry. As this happens companies that position themselves at a spot in the value chain where performance is not yet good enough will capture the profit.” — Clayton Christensen. The Innovator’s Solution.
By the time you take all the variables in the equation into account, the total cost of ownership of any such solution far surpasses any cloud-based solution. Here, I’m including all the hidden costs of installing, managing, patching, upgrading, securing, and testing infrastructure hardware and software in support of enterprise application delivery. Note that I’m not comparing the cost of moving your hardware to the cloud but rather your applications. There’s a big difference. You must abstract away the hardware in order to gain the advantages of cloud, not simply replicate it off-site.
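To make the hidden-cost argument concrete, here is a minimal back-of-the-envelope sketch. Every figure and rate in it is a hypothetical placeholder, not data from this article; substitute your own organization’s numbers.

```python
# Back-of-the-envelope TCO comparison, on-prem vs. cloud.
# All figures are illustrative placeholders.

def on_prem_tco(hardware_cost, years=5, ops_staff=2, staff_cost=150_000,
                power_cooling_rate=0.10, maintenance_rate=0.18):
    """Hidden costs scale with the hardware you own: staffing for
    install/patch/upgrade/security work, power and cooling, and
    vendor maintenance contracts, not just the purchase price."""
    staffing = ops_staff * staff_cost * years
    power_cooling = hardware_cost * power_cooling_rate * years
    maintenance = hardware_cost * maintenance_rate * years
    return hardware_cost + staffing + power_cooling + maintenance

def cloud_tco(monthly_spend, years=5):
    """With a cloud service, patching, hardware refresh, and facilities
    are the provider's problem; the subscription is most of the bill."""
    return monthly_spend * 12 * years

print(on_prem_tco(hardware_cost=1_000_000))  # 3,900,000 over 5 years
print(cloud_tco(monthly_spend=30_000))       # 1,800,000 over 5 years
```

The exact rates will vary wildly by organization; the point is that the purchase price is often the smallest term in the on-prem sum.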
Perhaps the only factors in favor of on-prem infrastructure are compatibility and familiarity. But at the rate this industry is moving, you will be rethinking that particular application or service in five years anyway, so why worry about compatibility with what you were running five years ago? Continuing to invest in on-prem infrastructure is the equivalent of throwing good money after bad down a bottomless well.
The typical smart Enterprise IT person usually spends a large percentage of their time and effort getting close to purveyors of on-prem hardware and software. As the organization grows, they are constantly pressured into buying more servers, improving security, increasing storage, adding better email archival and compliance tools, and adding load balancer appliances and firewalls, all while being asked to offer better service availability for employees as well as customers. They are also constrained by their budget and the strategic decisions of upper management.
The easiest answer is to ask for the same budget as last year and keep buying more “stuff”. It’s the path of least resistance. And because most enterprise applications run on-prem today, it’s often easiest to just add to the existing infrastructure rather than completely overhaul the application.
These same smart guys often end up creating a symbiotic relationship not just with the sales teams at those hardware and software vendors but also with their respective R&D organizations. And they pressure these well-meaning R&D organizations with the promise of huge enterprise license agreements “if only” they can get a specific feature added to the product. The PM in charge shakes hands, promises to “look into it” for the next release of the OS or the appliance or the firewall, and creates the appropriate specs to get it into the next release. This is how incremental improvements that only solve the problems of a single organization end up as “requirements” that shape major releases of all platform software.
“You know, I have one simple request. And that is to have sharks with frickin’ laser beams attached to their heads! Now evidently, my cycloptic colleague informs me that that can’t be done. Can you remind me what I pay you people for? Honestly, throw me a bone here. What do we have?” — Mike Myers. Austin Powers: International Man of Mystery.
Meanwhile, back at the ranch, that IT organization has gone through three re-orgs and leadership changes, has changed direction four times, and has laid off or churned most of its staff.
Finally, a year or two later, the fateful moment arrives and we deploy the new version of software on all our servers. And, of course, they all promptly crash. The engineers spend all weekend debugging the problem in the customer’s environment and come back with their verdict: “We ran into a specific bug that only manifests itself when you run version x.y of that firmware on the network controller as well as version a.b of the network driver from the vendor and you have to add more than 5000 firewall rules through this API that the customer requested. We had accounted for two out of three variables but had to cut the testing for that particular combination of variables in order to meet our schedules.”
So, basically, your data center is the first time all these pieces of hardware and software have come together — and I’ve only described the simplest of scenarios.
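The combinatorics behind that verdict are worth spelling out. A quick sketch, with invented component counts, shows how fast the test matrix outruns any vendor’s QA budget:

```python
from math import prod

# Each independently versioned component multiplies the configuration
# space a vendor would have to qualify. Counts are hypothetical.
variants = {
    "network controller firmware": 4,
    "network driver": 3,
    "OS patch level": 6,
    "firewall appliance firmware": 3,
    "storage array firmware": 5,
}

print(prod(variants.values()), "distinct configurations to test")  # 1080

# No vendor tests them all, so your data center is where the
# untested combinations get exercised for the first time.
```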
Every enterprise deployment is bespoke.
By this point, the smart Enterprise IT Guy has polished up his resume and quietly moved across the street to a competitor. The developers who worked on the software are long gone as well, so some poor engineer in a maintenance team gets to “fix” the problem — which usually means introducing a hack because he or she doesn’t really understand the intent of the original author.
Multiply this by two dozen hardware and software vendors and you see why private clouds, local data centers, and the whole on-prem enterprise application deployment model are doomed to failure. The costs associated with the “old” model of computing are often not included in the math when opting for on-prem solutions.
Playing System Integrator to dozens of disparate pieces of hardware and software, owning and operating every level of the stack without access to the underlying code, no longer makes sense.
The war is over. Just like we gave up and standardized on one or two processor architectures and moved up the stack, it’s time to admit that there is much better hygiene in the public cloud world than there is in the spaghetti world of shrink-wrapped on-prem software. Reducing the combinatorics increases reliability by reducing complexity.
Go up the stack, young man! Stop fretting about infrastructure, outsource it all to cloud-based services, and move up the stack if you really want to add value to the business. And, in doing so, think at the application level, at the service level, not at the infrastructure level.
The only investment I would make in on-prem software at this point would be to improve utilization of existing infrastructure and applications. If it helps squeeze more out of the existing hardware and software, go for it. Otherwise, stop. Stop buying hardware, stop buying software, stop upgrading (except for security fixes), just stop!
Instead, go spend the time to understand the real requirements from the organization on the specific enterprise application. Take the top five requirements, find the cloud vendor that offers them most effectively, and start using it as-a-Service. Don’t worry about every random and esoteric feature that your employees currently use. They’ll figure out how to do their job some new way. Worry only about really nailing the top five requirements. If the rest of your requirements are really important, the software-as-a-service provider will sooner or later offer them — after the rest of the community-at-large has thoroughly tested them — and not as some one-off feature that you get to be the guinea pig for.
The sooner we all abandon the “old” model and move up the stack, the better off we all are.
And, if you’re in the on-prem infrastructure hardware or software business: Stop listening to your enterprise customers when they ask for bespoke features. You’re not helping. Chances are, you will build a feature that doesn’t actually do what the customer really had in mind, will divert crucial development and test efforts that are doomed because they are guaranteed not to mimic the customer’s kaleidoscopic and unique environment, and will disappoint everyone in the end.
IT personnel will correctly point out that they are often powerless when it comes to making such major architectural changes. The purse strings are held by lines of business within the corporation; IT only gets to implement what the various Business Units want. Having spent millions of dollars on data centers and related infrastructure, those decision makers are reluctant to abandon the status quo for the promise of the cloud.
The good news here is that developers in those same BUs are already moving to the public cloud in droves. They don’t want to be bothered with infrastructure details or delays in hardware procurement, storage allocation, and network reconfiguration required to shoehorn a new application into an existing data center. It’s so much easier to pull out your credit card and buy some capacity on a public cloud. Why fill out forms and wait weeks when you can start coding in minutes? It’s only when the application is ready for deployment that the IT team is consulted.
Given this trend, it will only be a matter of a few years before all new applications are cloud native and the on-prem infrastructure is relegated to the dustbin of history or, at best, begrudgingly maintained for legacy application support.
My recommendation to IT personnel, in this case, is to avoid adding more load, more users, more applications to the existing on-prem infrastructure. Cap the investment and aggressively move new applications and users to the cloud. Buying more hardware will only increase your depreciation budget over the next few years, thereby further reducing your ability to cut overall Capex. If you need additional capacity during the transition, try renting bare metal servers from public clouds instead of building new data centers or upgrading existing ones with new hardware. This at least gets rid of part of the Capex problem and gives you a chance to validate the cloud provider’s reliability and availability while you wait for the next hardware refresh cycle. This, by the way, is the only way in which a “hybrid cloud” strategy makes sense: the outsourcing of hardware to public cloud providers for bursting or failover instead of amassing vast quantities of hardware on-prem just in case you need it.
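To see why new purchases work against you, consider a straight-line depreciation schedule. The purchase price and useful life here are hypothetical:

```python
# Straight-line depreciation: every hardware purchase commits Capex
# to the books for years. Figures are hypothetical.
def depreciation_schedule(purchase_price, useful_life_years=4):
    """Annual depreciation charges for a single hardware purchase."""
    return [purchase_price / useful_life_years] * useful_life_years

# A $2M refresh today locks in $500K/year of depreciation through
# year 4, shrinking the budget available to fund the migration.
for year, charge in enumerate(depreciation_schedule(2_000_000), start=1):
    print(f"Year {year}: ${charge:,.0f}")
```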
I fully recognize that the enterprise application market has a very long tail. There are still many companies out there benefiting from the IBM mainframe market. Many others will continue to flourish for the next decade or two (at least) in the on-prem infrastructure hardware and software market.
But that market’s days are numbered and we all need to step up and rethink our Enterprise applications in the process. We might as well start with a platform (the cloud) that is twenty years newer and fresher in terms of architecture, instead of continuing to spend 80–90% of our budgets on perpetuating the legacy enterprise stacks that were designed and implemented in the eighties and nineties.
Here’s the rub, though: To do so will require really sitting down and understanding the top requirements at a per-application level, instead of assuming that 100% backwards compatibility trumps everything else.
So much has changed over the past two decades. We have learned so much about availability, about reliability, about distributed systems architecture, about telemetry, about analytics and about security. Trying to shoehorn all of those learnings into a dated deployment architecture and a monolithic code base is like wearing a straitjacket and then picking a fight with Mike Tyson. You know it’s not going to end well.
A new generation of startups is disrupting every industry on the planet: not just consumer brands but also enterprise brands like Workday, Salesforce, and Atlassian are becoming the standards. I can’t think of a single new startup that concentrates on on-prem software alone. They may offer an on-prem version of their product but all of their development and testing efforts are geared towards cloud-based solutions. Carry that trend forward a couple of years and you will see the end of the traditional model.
The startup community and the VC community have spoken clearly. The former are now running world-wide operations and delivering services to millions of customers while the latter have bet heavily on their eventual success. Some small subset of these startups will become the IBMs, Microsofts, and Oracles of tomorrow — and they will get there with 100% born-in-the-cloud software stacks. In fact, they are already delivering enterprise-class software to thousands of enterprise companies and millions of end users.
Who do you think will be more agile five years from now? The enterprise companies who amassed their own data centers and spent their time being System Integrators for the old guard or the ones who bet on the next generation computing platform — the cloud?
[Originally published here in slightly different format.]
Other related blogs:
- Enterprise Apps: "Bring Out Your Dead!"
- Public Cloud or Private, that is the Question!
- The Public Cloud: A Defense
- Dr. StrangeCloud - Or, How I Learned to Stop Worrying and Love the (Hybrid) Cloud - AirWatch Blog