Welcoming the VMware vCenter Server on IBM Cloud Simulator, bringing superhero insights to mere mortals.
In the movie Avengers: Infinity War, Doctor Strange takes on the daunting superhuman task of viewing all the possible outcomes of the Avengers' battle against their nemesis Thanos. Among millions of possibilities, Doctor Strange saw only one outcome in which they succeeded. If only we possessed this skill in IT, we could model the ideal hardware configuration with the ideal performance, size, and cost. The power to see the ideal outcome should not be reserved for superheroes.
Introducing a SuperPower for us mortal IT engineers
CloudPhysics is happy to announce the completion and availability of the VMware vCenter Server on IBM Cloud analytic, which helps customers and partners size their current virtual environment and map it to the ideal server configuration on IBM Cloud. You can select individual workloads, hosts, clusters, or entire data centers and map those resources onto IBM Cloud server hardware. Each iteration completes within seconds and quickly shows the ideal configuration and the solution's cost per year.
The VMware vCenter Server on IBM Cloud Simulator
What once took a mortal engineer weeks of analysis and computation to even consider a solution can now be completed within seconds with pinpoint accuracy. Doctor Strange could have a new rival in CloudPhysics users.
Why do I need a simulator for my data center analysis? This service comes with a complex array of options and sizing configurations to meet even the most demanding enterprise needs, but it also introduces a nearly infinite number of combinations for a user to configure. With numerous customization points comes complexity.
CloudPhysics started working with IBM and Intel early in the year with a vision to provide a simulator that delivered the perfect balance between capacity, performance, and cost. With millions of combinations, the chances of any individual finding the ideal outcome by hand would have been close to zero. With multiple processor configurations, complex drive options, and Intel Optane choices, finding the lowest-cost solution with the right configuration and performance would have taken months of labor in spreadsheets and calculators. The flexibility to change variables such as drive size, quantity, and performance options made storage a complex calculation. Add to this dozens of memory configurations, multiple Intel processor options, and the fact that each of these choices incurs a different cost, and price and performance modeling simply could not be done manually without significant effort.
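To give a feel for how quickly the combinations multiply, here is a back-of-the-envelope count. Every number below is illustrative, not IBM's actual catalog:

```python
# Hypothetical per-host option counts (invented for illustration).
cpu_models = 7        # distinct Intel CPU models
socket_configs = 2    # dual- or quad-socket
memory_configs = 12   # RAM size options
drive_sizes = 6       # capacity tiers per drive
drive_counts = 12     # number of drives per host
cache_options = 3     # e.g., Optane tiers
sw_editions = 4       # VMware software bundles

# Each choice multiplies the configuration space.
total = (cpu_models * socket_configs * memory_configs
         * drive_sizes * drive_counts * cache_options * sw_editions)

print(total)  # 145152 per-host configurations
```

Even before multiplying by possible cluster sizes, the space is far too large to explore in a spreadsheet.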
In addition, we needed to account for the variables in your existing data center's performance: IOPS, throughput, and VM size. We quickly found that no human could identify a cost-effective solution without over-purchasing. Don't get us wrong: the power of choice is incredibly attractive, but needing a degree in mathematics made modeling real workload performance against a new environment difficult.
This would be the work of data science, not superheroes. CloudPhysics first modeled this scenario with a VMware host-packing model in 2017, at the launch of VMware's first bare-metal cloud solution. We bring it forward today with upgraded hardware, performance, and software options.
Why does this matter? Let’s get technical…
VMware's own initial bare-metal cloud release was far simpler, with a single host configuration at the time. That left us free to focus on the sizing and packing model to ensure the cluster size met the needs of the workloads. Sizing against a histogram of VM performance ensured we were not looking at the sum of PEAKS but rather at the sum of performance over time. Thirty days of analytics and performance data at 20-second granularity quickly revealed the environment's needs, but mapping to the host model required a little extra effort. We quickly realized that VMware vSAN could be the limiting factor for many clusters. A single VM with 20 TB of defined storage would require multiple hosts in a vSAN cluster to offer sufficient capacity. This meant the sizing model for VMware on AWS was dictated by disk capacity rather than CPU, which was not a great fit for very large storage workloads when it launched.
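The sum-of-peaks point above is worth a quick sketch. The samples here are invented 20-second CPU readings, not real telemetry, but they show why summing time-aligned demand gives a much smaller (and more accurate) cluster requirement than adding up each VM's individual peak:

```python
# Hypothetical 20-second CPU demand samples (GHz) for three VMs.
vm_cpu_ghz = {
    "vm1": [2.0, 8.0, 1.0, 2.0],
    "vm2": [7.0, 1.0, 2.0, 1.5],
    "vm3": [1.0, 2.0, 6.5, 1.0],
}

# Naive sizing: add each VM's individual peak, as if they all spike at once.
sum_of_peaks = sum(max(samples) for samples in vm_cpu_ghz.values())

# Histogram-style sizing: sum demand per interval, then take the peak of the sum.
per_interval_totals = [sum(col) for col in zip(*vm_cpu_ghz.values())]
peak_of_sum = max(per_interval_totals)

print(sum_of_peaks)  # 21.5 GHz -- overestimates the need
print(peak_of_sum)   # 11.0 GHz -- the actual concurrent peak
```

Because real VMs rarely peak simultaneously, the time-aligned sum is almost always well below the sum of peaks, which directly reduces the hosts a cluster needs.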
IBM takes a different approach
One size did not fit all, and IBM realized that choice offers the flexibility to scale a cluster by its constraining resource. If we were storage constrained, we could add more drives or larger drives. If we were constrained by system RAM, we could increase the memory per host. If IO and storage IOPS were critical, the addition of Intel Optane SSDs would benefit the solution. Lastly, compute density and the maximum available vCPUs dictated the size of the host and the number of workloads we could manage. IBM addressed this by offering seven CPU options in dual- and quad-socket configurations. This made for a lot of variables when modeling a customer's current production VMware environment.
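The scale-by-constraining-resource idea reduces to a simple calculation. The host specs and workload totals below are hypothetical (real sizing would also fold in vSAN overhead, failure tolerance, and headroom):

```python
import math

# Hypothetical host SKU and aggregate workload demand.
host = {"vcpus": 72, "ram_gb": 768, "storage_tb": 30.0}
demand = {"vcpus": 400, "ram_gb": 6000, "storage_tb": 120.0}

# Hosts needed if each resource were the only constraint.
hosts_per_resource = {
    res: math.ceil(demand[res] / host[res]) for res in host
}

# The cluster must satisfy every constraint, so the tightest one wins.
required_hosts = max(hosts_per_resource.values())
constraining = max(hosts_per_resource, key=hosts_per_resource.get)

print(hosts_per_resource)            # {'vcpus': 6, 'ram_gb': 8, 'storage_tb': 4}
print(required_hosts, constraining)  # 8 ram_gb
```

In this sketch the cluster is RAM-bound at eight hosts; swapping in a SKU with more memory per host (or more drives, if storage had won) changes which resource constrains the design, which is exactly the flexibility the IBM catalog provides.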
CloudPhysics looked at the data, the peak resource requirement for each constraint, and the choices available. We allowed the user to specify headroom, capacity, and performance options to accommodate growth, then took all the VMware software choices and started modeling solutions. The idea seemed simple, but the permutations were enormous.
Reining in the options
To help the user find the ideal cluster configuration, we simplified the options by identifying two distinct scenarios:
- What is the lowest cost solution that meets my needs and future growth?
- What is the fewest host configuration that meets my needs?
Each scenario offers a distinct angle on cluster design and can have significant cost implications for an organization. For some, cost is critical. The lowest-cost cluster does not necessarily mean the fewest hosts; you may find you can save tens of thousands of dollars with a larger number of smaller hosts. The alternative is the fewest hosts, to reduce per-host license costs and maximize server density. That option may result in larger CPU configurations with more cores, more system RAM, and more storage, all of which typically command their own premiums.
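The two scenarios are just two objective functions over the same set of feasible clusters. The candidate configurations below are invented (names and prices are not real IBM Cloud SKUs), but they show how the two answers can diverge:

```python
# Hypothetical feasible cluster configurations: (sku, hosts, annual cost USD).
# Assume each already satisfies the workload's capacity and performance needs.
candidates = [
    ("small-dual-socket", 10, 180_000),
    ("large-quad-socket", 4, 220_000),
    ("medium-dual-socket", 6, 195_000),
]

# Scenario 1: lowest total cost that meets the need.
lowest_cost = min(candidates, key=lambda c: c[2])

# Scenario 2: fewest hosts that meet the need.
fewest_hosts = min(candidates, key=lambda c: c[1])

print(lowest_cost[0])   # small-dual-socket: cheapest, but ten hosts
print(fewest_hosts[0])  # large-quad-socket: densest, but pricier
```

Here the cheapest design uses ten small hosts while the densest uses four large ones, a $40,000-per-year spread, which is why the simulator surfaces both answers rather than picking one for you.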
The model shows that designing a cluster is a balancing act between cost, performance, density, and software licensing. But the exercise need not be difficult with the right tools and visibility into the needs of the data center.