HANA News Blog

Beware of cloud costs

Jens Gleichmann • 13 September 2024

Unforeseen cloud cost increases

The TCO calculation of cloud costs is a moving target. In the past, all hyperscalers have increased their prices or, like other cloud service providers such as SAP, have built an annual increase directly into the contract. The focus is usually on the services consumed directly, but associated costs that are still small today and have never been on the radar can explode. This happens when a provider changes its licensing/subscription model. The most recent case is Broadcom/VMware, but now another vendor has announced a change: Red Hat.

Most SAP systems still run on SUSE, and as far as I know so do all RISE (and GROW) with SAP systems (public and private). In recent years, however, more and more customers changed their OS strategy due to lower or equal costs and RHEL features, and migrated their systems to RHEL.

Red Hat announced back in January 2024 that the costs for cloud partners would change effective April 1, 2024. They call it scalable pricing. The previous pricing structure was a two-tier system that categorized VMs as either "small" or "large" based on the number of cores or vCPUs, with a fixed price assigned to each category regardless of the VM's actual size. Subscription fees were determined by the duration of the VM allocation; the number of cores or vCPUs did not influence the price, so the cost of a RHEL subscription was effectively capped.

Old RHEL Subscription model categories


  • Red Hat Enterprise Linux Server Small Virtual Node (1-4 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Large Virtual Node (5 or more vCPUs or Cores)

Under the new model, the price depends on the number of vCPUs. Unlike before, the cost of a subscription is no longer capped. Some of you will remember the chaos of the socket-based subscription or the Oracle licensing model, including the parking-spot analogy. A physical core now costs as much as a virtual core. Fair enough: if I use more cores, I have to pay more. But that also means costs can be lower for systems with fewer vCPUs, right? This new fancy "scalable pricing" could be a good thing. So when do costs drop, when do they rise, and who is affected? (A short sketch after the new category list below compares the two tier schemes.)

Red Hat:

"In general, we anticipate that the new RHEL pricing to cloud partners will be lower than the current pricing for small VM/instance sizes; at parity for some small and medium VM/instance sizes; and potentially higher than the current pricing for large and very large VM/instance sizes."


New RHEL Subscription model categories


  • Red Hat Enterprise Linux Server Small Virtual Node (1-8 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Medium Virtual Node (9-128 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Large Virtual Node (129 or more vCPUs or Cores)
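
To make the two category schemes easier to compare, here is a minimal sketch of the tier logic listed above. The function names are mine, the snippet only classifies an instance, and the actual per-vCPU rates come from your cloud provider's price list:

    def old_tier(vcpus: int) -> str:
        # Old model: two tiers with fixed prices, regardless of actual VM size.
        return "Small Virtual Node" if vcpus <= 4 else "Large Virtual Node"

    def new_tier(vcpus: int) -> str:
        # New model: three tiers; the subscription now scales with the vCPU count.
        if vcpus <= 8:
            return "Small Virtual Node"
        if vcpus <= 128:
            return "Medium Virtual Node"
        return "Large Virtual Node"

    for v in (4, 8, 20, 64, 128, 129):
        print(f"{v:>3} vCPUs: old = {old_tier(v):<18} new = {new_tier(v)}")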


Amazon's statement:


"Any new product (such as a new instance type, new instance size, new region, etc.) launched after July 1, 2024, will use the new RHEL pricing. Therefore, any new rates (such as new regions or new instance types) launched after July 1, 2024, will be on the new RHEL pricing, even if the Savings Plans were purchased prior to July 1, 2024."

Microsoft Azure statement:


"In response to Red Hat’s price changes, Azure will also be rolling out price changes for all Red Hat Enterprise Linux instances. These changes will start occurring on April 1, 2024, with price decreases for Red Hat Enterprise Linux and RHEL for SAP Business Applications licenses for vCPU sizes less than 12. All other price updates will be effective July 1, 2024."

GCP statement:


"On January 26, 2024, Red Hat announced a price model update on RHEL and RHEL for SAP for all Cloud providers that scales image subscription costs according to vCPU count. As a result, starting July 1, 2024, any active commitments for RHEL and RHEL for SAP licenses will be canceled and will not be charged for the remainder of the commitment's term duration."

This means that many customers first noticed the change when they received the July invoice in August. The smart cloud architects practicing cloud FinOps noticed it earlier ;)

Let's have a look at the MS Azure pricing calculator, since Azure still has the largest market share for SAP workloads. With 12 or fewer vCPUs you will save money. But what does this mean for customers running SAP HANA instances, given that the smallest certified MS Azure HANA instance has 20 vCPUs? Previously, RHEL for SAP cost $94.90 per month (the constant line at the bottom of the graph) for all certified HANA instances.
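
A quick back-of-the-envelope check for that smallest SKU, assuming the medium-tier rate of $7.88 per vCPU reported by the calculator (see the findings below); the numbers are illustrative, not official list prices:

    # Illustrative only: rates are taken from this post's calculator findings.
    OLD_FLAT = 94.90     # former flat RHEL for SAP price per instance per month
    RATE_MEDIUM = 7.88   # $/vCPU/month, medium virtual node (assumption)
    vcpus = 20           # smallest certified HANA instance size on MS Azure
    new_monthly = vcpus * RATE_MEDIUM
    print(f"old: ${OLD_FLAT:.2f}, new: ${new_monthly:.2f}, "
          f"delta: ${new_monthly - OLD_FLAT:+.2f} per month")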

MS Azure: RHEL price per certified HANA SKU [source: MS price calc: europe west / germany west central]


If you are running large instances with 128 or more vCPUs, you will see that the costs can differ by more than $10,000 per month! If you are running large sandboxes or using features like HANA system replication (HSR), you can multiply these additional costs accordingly.
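
That multiplication is easy to sketch for a hypothetical landscape; the instance mix and the per-vCPU rate below are assumptions, so substitute your own SKUs and calculator figures:

    # Hypothetical landscape: production with an HSR secondary plus a sandbox.
    OLD_FLAT = 94.90   # former flat price per instance per month
    RATE = 7.88        # $/vCPU/month (assumed medium-tier rate)
    landscape = {
        "PRD primary": 128,
        "PRD HSR secondary": 128,
        "Sandbox": 64,
    }
    extra = sum(v * RATE - OLD_FLAT for v in landscape.values())
    print(f"additional RHEL cost: ${extra:,.2f} per month")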

MS Azure: Difference of 3 years reserved instances (3YRI) RHEL for SAP total instance costs [source: MS price calc: europe west / germany west central]


If we drill down to the RHEL-only costs per month, we can also see the price per vCPU and the price switch from the 64 vCPU to the 128 vCPU SKUs.

MS Azure: Difference of 3 years reserved instances (3YRI) RHEL for SAP costs per month (before vs. now) [source: MS price calc: europe west / germany west central]
MS Azure: total RHEL pricing per SKU with costs per vCPU [source: MS price calc: europe west / germany west central]


Summary

For those using RHEL on small instances with fewer than 12 vCPUs, the new pricing model brings savings. For SAP customers with larger instances it is a nightmare of unplanned additional costs, which took effect on July 1, 2024.

Paying the same price for a physical core as for a virtual core makes for a simple calculation, but the two do not deliver the same value. Anyone using multithreading is penalized more heavily, because with SMT each physical core is billed as two vCPUs, and once again the customer will question whether Intel is worth it when comparing the cost against the performance gained.
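
To see why multithreading stings, consider a hypothetical host: with SMT enabled, each physical core is presented as two vCPUs, and the new model bills the hyperthread like a full core (again using the assumed medium-tier rate from above):

    # Hypothetical: SMT doubles the billable vCPU count per physical core.
    RATE = 7.88                   # $/vCPU/month (assumption, see above)
    physical_cores = 64
    vcpus = physical_cores * 2    # two hyperthreads per physical core
    print(f"{physical_cores} physical cores are billed as {vcpus} vCPUs: "
          f"${vcpus * RATE:,.2f} instead of ${physical_cores * RATE:,.2f} per month")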

Will customers now stop migrating to RHEL in cloud projects because of this heavy price difference?

Is it possible to get a RHEL volume discount?

The previous TCO calculation can be thrown in the trash.

Strange findings:


  • The price calculators of Google and MS Azure are not up to date with the new RHEL pricing model
  • If you choose RHEL for SAP with the HA option, the MS Azure price calculator still shows the old prices of $87.60 for <= 8 vCPUs and $204.40 for larger instances
  • MS Azure applies the large virtual node price ($7.01 per vCPU) to the 128 vCPU SKUs, which by definition should get the medium virtual node price ($7.88 per vCPU); see the quick check below
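
The third finding in numbers, assuming the calculator's per-vCPU rates are accurate:

    # 128 vCPUs falls into the medium tier (9-128 vCPUs) by definition,
    # yet the calculator applies the large-tier rate.
    MEDIUM_RATE, LARGE_RATE = 7.88, 7.01   # $/vCPU/month, from the calculator
    vcpus = 128
    expected = vcpus * MEDIUM_RATE
    observed = vcpus * LARGE_RATE
    print(f"expected ${expected:,.2f}, calculator shows ${observed:,.2f} "
          f"({observed - expected:+,.2f} per month, in the customer's favor)")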

Options:


  1. AWS and maybe some others can provide RHEL volume discounts in exchange for an annual spend commitment. Please work with your Technical Account Manager to assess whether you qualify.
  2. Resize instances that do not currently need their allocated number of vCPUs.
  3. Migrate to SUSE (but SUSE will surely also adapt its pricing in the near future; it is just a matter of time).

