That's what they say anyway. It has a beginning, middle, and end.
In the beginning, when it's a fledgling project with some business goal or other, the software demands a lot of flexibility. Developers are experimenting with new functionality, fixing bugs, and refining the system to perform under load. At the same time, users have (a prior version of) the software and are using it to solve business problems related to the project's goals. Ideally these groups are actively communicating, probably mediated by a support team, and the production deployment(s) dynamically accommodate frequent changes to the software.
This is where the public cloud shines. Before public cloud, you might have had a collection of servers in a data center somewhere nearby. If you needed to update the software, you checked the release notes, applied the updates / migrations / recommendations / etc. to a test environment, noted what needed to change, fixed that up in prod, scheduled a maintenance window: all kinds of operations. With public cloud, suddenly you could automate that whole process and have the cloud provide the necessary resources for the new version; then you just move the load balancer target and you're golden. The cloud takes care of that too.
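To make that cutover concrete, here's a minimal sketch in Python with boto3, assuming an AWS Application Load Balancer with the old and new versions registered behind separate target groups. The ARNs are placeholders, and provisioning, health checks, and error handling are left out.

```python
# Minimal sketch of "move the load balancer target": shift listener traffic
# from the old target group to the new one using weighted forwarding.
# All ARNs are placeholders; this assumes the new version is already
# provisioned, registered, and healthy.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-app/..."    # placeholder
OLD_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/my-app-blue/..."  # placeholder
NEW_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/my-app-green/..." # placeholder


def cut_over(new_weight: int = 100) -> None:
    """Send new_weight percent of traffic to the new version, the rest to the old."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[
            {
                "Type": "forward",
                "ForwardConfig": {
                    "TargetGroups": [
                        {"TargetGroupArn": OLD_TG_ARN, "Weight": 100 - new_weight},
                        {"TargetGroupArn": NEW_TG_ARN, "Weight": new_weight},
                    ]
                },
            }
        ],
    )


if __name__ == "__main__":
    cut_over(10)   # canary: send 10% of traffic to the new version
    # ...watch the dashboards...
    cut_over(100)  # all traffic to the new version
```

The same handful of calls also rolls you back (cut_over(0)), which is a big part of why the maintenance-window ritual shrinks.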
And they bill you for that.
In the middle of the software development lifecycle, our fledgling project must spread its wings and fly. The project flies out of the nest where it's been incubating in early feedback, user-centered design, and intended use cases, and into a more general user base with the potential for more edge cases, security threats, and unintended outcomes. This is usually the part where the software's creators have to explain in writing how to set the software up correctly. So it's common at this stage to simplify the deployment process, and to make recommendations about scaling parameters so that admins can plan to provide those resources. The scaling parameters are typically based on organizational feedback and heuristics from earlier in the software's life.
A lot of times the automation and recommendations are written against a specific cloud provider. A lot of times, that provider is AWS. Sometimes this is because AWS has some specific service that the software needs. Sometimes the cause for choosing a particular cloud provider is related to the deployment automation and not the software itself; for complex distributed systems this distinction is not always clear. Whatever the reason, the assumption is that AWS has got this, and that AWS will have the capacity to provide resources for the software more-or-less on demand.
And they bill you for that.
In fact, for most data-centric applications, if you keep the app there you'll have a place for its growing data in the cloud indefinitely.
And they bill you for that too.
If, on the other hand, you move the software deployment out of the cloud, and into your own private or hybrid cloud on your own hardware, you're going to need a "cloudy" interface to your infra stack. So you set up an Outpost?
And you get the idea.
That's why Kubernetes, why Helm, why Platform as a Service, and why Cloud Native in general. Because before you went to the cloud you probably had a bunch of ESX servers running Windows VMs that hosted .NET code. Now that whole ecosystem is essentially a subset of cloud native. Linux won. Woot.
Boy howdy is it complicated to set up your own Linux-based cloud. It's not at all like installing ESX a few times, importing a vCenter image, and clicking around a hierarchical view of your data center resources. Where to even start? Canonical Juju? RHEV? Proxmox? OpenStack? ClusterAPI with a custom libvirt driver? Hmm.
AWS would just bill us for that. Which at least leaves us an actionable business decision: one that weighs the cost of making Linux business-consumable against the uncertainty of being able to successfully implement cloud in-house at scale. So how do we get the same kind of confidence in our ability to implement a Linux-based cloud and run it with an in-house team as we have in Amazon's ability to deliver (and bill for) arbitrary cloud resources on demand?
One way to build confidence is repetition. AWS with any sort of infrastructure-as-code yields repeatable operations that can be triggered on demand and fully automated. It would be nice if, say, I could apt-get cloud-orchestrator and then install a ClusterAPI management plane that would talk to that. OpenStack might be the closest thing I can think of to this, but OpenStack is notoriously complex and can consume a lot of hardware, making it difficult to use as a test environment. Repeatedly deploying OpenStack on real hardware in an automated fashion is also difficult.
Another way to build confidence is simplicity. Simple solution paths are obvious from a "thinking" standpoint and generate minimal cognitive load. This could be because the solution is actually easy to explain, or because the solution appears similar to other historically successful paths (like the apt-get example above; simple, right?).
If we take a close look at our assumptions so far, we've made some educated guesses about sizing and the application's needs in different scenarios. Once those assumptions are validated, and the operating ranges are readily defined, how much value is there in the elasticity of the cloud? The value lies mostly in those early stages of the lifecycle. For a lot of mature software, you just need access to a pretty fixed set of resources. Maintaining software that automatically scales its components independently, such that it triggers Kubernetes to add or remove the appropriate nodes automatically? That's not simple.
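To put numbers on what a "pretty fixed set of resources" can look like, here's a back-of-the-napkin Python sketch that turns observed operating ranges into a node count. Every figure in it (peaks, node shape, headroom) is a hypothetical placeholder, not a measurement from any real system.

```python
# Back-of-the-napkin sizing for a fixed-size cluster, using peak usage
# observed while the app still ran elastically in the cloud.
# All numbers below are hypothetical placeholders.
import math

# Observed peaks across dev, staging, and prod (hypothetical)
peak_cpu_cores = 180
peak_memory_gib = 640

# What one node in the fixed fleet provides (hypothetical node shape)
node_cpu_cores = 16
node_memory_gib = 64

headroom = 1.25  # 25% over observed peak for growth and failover

nodes_for_cpu = math.ceil(peak_cpu_cores * headroom / node_cpu_cores)
nodes_for_memory = math.ceil(peak_memory_gib * headroom / node_memory_gib)

# Size to the tighter constraint, plus a spare node for maintenance drains
cluster_size = max(nodes_for_cpu, nodes_for_memory) + 1

print(f"CPU wants {nodes_for_cpu} nodes, memory wants {nodes_for_memory} nodes")
print(f"Fixed cluster size: {cluster_size} nodes")
```

The elastic phase is what makes those peak numbers trustworthy in the first place.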
So if our mature software leaves the "nest" of the public cloud, where changes are quick and easy, to take flight on its own infrastructure, then let's not make the infrastructure complicated. Rather than build something infinitely scalable, resource-conscious organizations would likely choose to build something that's flexible enough to accommodate dev, staging, and prod versions of the app through their normal operating parameters. It'll change size over time, and it may even span both on-premises and cloud environments, but it'll be predictable and repeatable, and the operators of the software will know how to make it perform as needed. It will, in fact, be pretty simple to operate after many repetitions on a well-understood platform.
At that point you have a mature software project that requires well-understood care and produces well-understood business results. Naturally you optimize as much cost out of it as you can, and run it that way for as long as you need to.
In the end, it will likely be supplanted by a different piece of software with a superset of the functionality. It may make sense to move the software back to the cloud at that point. Or maybe just host the data in place and use cloud compute. Regardless, the cloud infrastructure that's in your data center is still yours, and will be waiting to accept the next app as it stabilizes and needs a simpler, more cost-effective hosting environment.
But there's a significant gap in the simplicity equation here: can we factor out elasticity and still have a cloudy interface? Kubernetes could do this for us now, with fixed-size clusters that are large enough to accommodate the app at its largest, data stores with plenty of room for live data at appropriate protection levels, and network connections that are secure.
Not even dealing with node auto-scaling at all? That's pretty simple. But it does rely on the kind of careful measurements and flexibility that you have in the cloud in order to figure out how to size your hardware. Once you're at that point, rethink the value of infinite elasticity.
Because they bill you for that.