If you talk to anyone who sells cloud computing resources, you typically hear the same message: “The cloud will save you money!” But the truth is, it’s very hard to know how much money the cloud can save you. Cloud providers often do a good job of telling you how much an instance with a hypothetical workload will cost. However, they often fail to tell you how much your environment will really cost in the cloud, and how you can actually save money.
Enter the cloud computing calculator. These tools typically ask you for a hypothetical workload size consisting of memory, CPU, and storage; they map it to a cloud instance size and return a cost. The problem is that these tools either rely on you to specify the size of your workloads (which you usually match one-to-one with your current configurations), or they do a straight import of the configured workload size and ignore actual utilization. Both approaches result in workload sizing that is probably far larger than what you really need.
Truth be told, moving your workloads to the cloud as they are currently configured will most likely cost you significantly more. The reason? We often overprovision our workloads, intending to grow into them as we mature. How long will this growth period be? In most cases, you may never reach the full capacity you initially estimated, yet you will be paying for cloud resources you never use. A little overhead is great for the peak bursts certain VMs need from time to time for complex or long-running tasks. Unfortunately, that overhead comes at a cost, so you need to either plan for it or apply the extra resources on a case-by-case basis.
To find real value in a cloud migration, we need more information. We want to know the long-running performance characteristics of a workload, along with some meaningful metrics. At minimum, we want to look at seven days of history, and ideally 30 to 365 days to account for scheduled and seasonal peaks. Looking at peak load, 99th percentile, and 95th percentile adds context: how much your workload can consume at its worst, and where it performs the vast majority of the time.
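To make those metrics concrete, here is a minimal sketch (not CloudPhysics code) of the summary described above, assuming a list of CPU-utilization samples in percent and a simple nearest-rank percentile:

```python
# Summarize a workload's utilization history into peak, 99th, and
# 95th percentile values. Sample data and helper names are illustrative.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def summarize(samples):
    return {
        "peak": max(samples),
        "p99": percentile(samples, 99),
        "p95": percentile(samples, 95),
    }

# Example: a mostly idle workload with a handful of bursts.
history = [20] * 95 + [90, 92, 95, 97, 99]
print(summarize(history))  # → {'peak': 99, 'p99': 97, 'p95': 20}
```

Note how the peak (99%) and the 95th percentile (20%) diverge for a bursty workload; that gap is exactly where rightsizing decisions are made.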
If you are willing to take a minor performance hit on long-running tasks, you can trade it for potentially significant savings by running workloads scaled to the 95th percentile of their CPU and RAM utilization. Knowing the workload, when its utilization spikes occur, and what level of performance you are willing to live with can save a lot of money in the cloud.
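The trade-off above can be illustrated with some back-of-the-envelope arithmetic. All prices and sizes here are invented for illustration, not real cloud rates:

```python
# Compare the monthly cost of sizing for peak versus sizing for the
# 95th percentile. Rates are hypothetical placeholders.
HOURLY_RATE_PER_VCPU = 0.05   # assumed $/vCPU-hour
HOURS_PER_MONTH = 730

peak_vcpus = 8   # sized to absorb the occasional burst
p95_vcpus = 2    # sized to the 95th percentile of utilization

def monthly_cost(vcpus):
    return vcpus * HOURLY_RATE_PER_VCPU * HOURS_PER_MONTH

savings = monthly_cost(peak_vcpus) - monthly_cost(p95_vcpus)
print(f"monthly savings: ${savings:.2f}")  # prints "monthly savings: $219.00"
```

Multiplied across hundreds of VMs, this is the kind of difference that decides whether a migration saves money at all.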
The CloudPhysics Private Cloud Cost Calculator can baseline your current data center cost per workload. As with all cloud models, the details you put in determine the accuracy of your output. Too many inputs can make the process difficult—and may not even apply to all workloads. Too many cloud configuration options make the calculations even less accurate, as each VM potentially has its own unique costing scenario.
We started by looking at what we really need to know when moving to the cloud. When we evaluated what to measure, we quickly distilled the cost down to the same attributes used to price a workload: CPU, memory, storage, and some infrastructure licenses. Additional savings in power and cooling could also be achieved; we feel this is a driver for cloud adoption as well, so it was included.
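A per-workload baseline along those lines can be sketched as a simple sum over the attributes named above. The unit rates below are hypothetical placeholders, not CloudPhysics or provider pricing:

```python
# Baseline the monthly cost of one workload from CPU, memory,
# storage, and licensing. All rates are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Workload:
    vcpus: int
    ram_gb: int
    storage_gb: int
    license_monthly: float = 0.0

# Assumed $/month unit rates.
RATES = {"vcpu": 15.0, "ram_gb": 2.0, "storage_gb": 0.10}

def monthly_cost(w: Workload) -> float:
    return (w.vcpus * RATES["vcpu"]
            + w.ram_gb * RATES["ram_gb"]
            + w.storage_gb * RATES["storage_gb"]
            + w.license_monthly)

vm = Workload(vcpus=2, ram_gb=8, storage_gb=100, license_monthly=20.0)
print(f"${monthly_cost(vm):.2f}")  # 2*15 + 8*2 + 100*0.10 + 20 → "$76.00"
```

Summing this across an inventory gives a baseline to compare against cloud pricing, which is the comparison the rest of this article is driving toward.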
In our initial release, we are looking at a near "apples-to-apples" comparison of compute and storage resources. We do have room for improvement, such as backups and network bandwidth, but these are time-based, configurable variables that vary dramatically from cloud to cloud and traditionally require some degree of consideration.
We are confident that the combination of rightsizing and private cloud cost modeling will add significant value for most organizations. While there are always additional costs, we needed a baseline, and we believe this one provides the knowledge and costs IT managers are craving and technology partners have traditionally lacked.