A primer on our performance tiers and allocation

Anyone who has dealt with hypervisors at scale knows all too well the challenges that come with effective resource allocation. How do you make the most efficient use of memory and CPU? How do you avoid wasting one or the other once you’ve allocated all that’s available? This post explores how OrionVM tackles these issues, and how newer generations of server hardware have enabled an evolution in our approach.

How OrionVM allocates resources

For anyone coming from legacy cloud providers, OrionVM’s resource allocation system is easy to grok. Rather than pulling from a list of predefined SKUs for memory and CPU, OrionVM virtual machines are assigned a performance tier, and the partner chooses the memory and CPU required.

Custom tiers exist for specific workloads (we’ll cover that shortly), but broadly these performance tiers include:

  • High Memory: 1 vCPU core per 8 GiB of memory
  • Standard: 2 vCPU cores per 8 GiB of memory
  • High CPU: 4 vCPU cores per 8 GiB of memory

You can see how provisioning compute can be as easy as choosing memory and/or CPU, then scaling within a performance tier as required.
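
As a rough illustration, the following Python sketch maps a requested memory size and tier to a vCPU count using the ratios above. The tier labels, dictionary, and vcpus_for helper are hypothetical stand-ins for illustration, not part of OrionVM’s actual provisioning API.

    import math

    # Hypothetical tier labels; values are vCPU cores allocated per 8 GiB of memory
    CORES_PER_8_GIB = {
        "high_memory": 1,  # 1:8 vCPU-to-memory ratio
        "standard": 2,     # 1:4
        "high_cpu": 4,     # 1:2
    }

    def vcpus_for(memory_gib: float, tier: str) -> int:
        """vCPU count for a given memory size and tier, rounded up to a whole core."""
        return math.ceil(memory_gib / 8 * CORES_PER_8_GIB[tier])

    # The same 32 GiB allocation on each tier:
    print(vcpus_for(32, "high_memory"))  # 4 vCPUs
    print(vcpus_for(32, "standard"))     # 8 vCPUs
    print(vcpus_for(32, "high_cpu"))     # 16 vCPUs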

From a customer perspective, these ratios allow closer matching of existing resources on another provider, and make quoting and provisioning resources significantly easier. Virtual machines can also be moved to a different performance tier, or scaled up and down within an existing tier, as workloads and requirements evolve.

Methodology

Allocating resources in this way appears simple, but it comes with some important benefits from an architecture perspective. When OrionVM first launched its cloud in 2010, resources were allocated in powers of two, meaning memory and vCPU cores tessellated efficiently across the cluster.

Take a hypothetical server with 128 GiB of memory allocated for virtual machines (hypervisors require their own memory as well). Two 64 GiB virtual machines could theoretically boot into this space for full utilisation. A 64 GiB, a 32 GiB, and a 16 GiB machine still leave 16 GiB free, which a fourth 16 GiB machine can fill exactly. But if you start allocating arbitrary sizes (say, 11.5 GiB), you can end up with leftover memory you can’t meaningfully allocate anywhere. This results in wasted capacity, which erodes any cost benefit.
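
To make the packing argument concrete, here is a toy sketch using the same hypothetical 128 GiB host. The remaining_after helper and the figures fed to it are illustrative only, not how the platform’s scheduler actually works.

    HOST_GIB = 128  # memory set aside for guests on the hypothetical server

    def remaining_after(vm_sizes_gib):
        """Memory left on the host after placing the given VMs, if they fit."""
        used = sum(vm_sizes_gib)
        if used > HOST_GIB:
            raise ValueError("VMs do not fit on this host")
        return HOST_GIB - used

    print(remaining_after([64, 64]))      # 0 GiB free: full utilisation
    print(remaining_after([64, 32, 16]))  # 16 GiB free, filled exactly by one more 16 GiB VM
    print(remaining_after([11.5] * 11))   # 1.5 GiB free, too small to allocate meaningfully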

This approach was carried across to the current version of the platform, with the addition of the Standard and High Memory tiers.

An evolution in methodology

Server hardware densities have evolved since we first launched the platform, with significantly more cores per socket and memory per chassis. These gains have facilitated the expansion of our core tiers to include memory levels beyond powers of two, while still maintaining an efficient allocation ratio across hypervisors.

These additional RAM levels now include:

  • High CPU and Standard: 24, 40, 48, and 56 GiB
  • High Memory: 80, 96, and 112 GiB

These are now available on the platform for allocation. Existing clients can also adjust their compute to conform to these new memory sizes. vCPUs continue to be offered at 1:8, 1:4, and 1:2 ratios to memory, with fractional results rounded up to the nearest core.
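
To show how the new levels line up with the existing ratios, the short sketch below derives the vCPU count each size implies for its tier. The tier labels and output format are assumptions for illustration, not output from the platform itself.

    import math

    GIB_PER_VCPU = {"high_memory": 8, "standard": 4, "high_cpu": 2}  # 1:8, 1:4, 1:2 ratios

    NEW_SIZES_GIB = {
        "standard":    [24, 40, 48, 56],
        "high_cpu":    [24, 40, 48, 56],
        "high_memory": [80, 96, 112],
    }

    for tier, sizes in NEW_SIZES_GIB.items():
        for gib in sizes:
            vcpus = math.ceil(gib / GIB_PER_VCPU[tier])  # rounded up to a whole core
            print(f"{tier:>12}: {gib:>3} GiB -> {vcpus:>2} vCPUs")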