HANA News Blog

Beware of cloud costs

Jens Gleichmann • Sept. 13, 2024

Unforeseen cloud cost increases

    The TCO calculation of cloud costs is a moving target. In the past, all hyperscalers have increased their prices or, like other cloud service providers such as SAP, have built an annual increase directly into the contract. The focus is usually on the services consumed, but associated costs that are still small today and have never been on the radar can explode. This happens when a provider changes its licensing/subscription model. The most recent case is Broadcom/VMware, but now another vendor has announced a change: Red Hat.

Most SAP systems still run on SUSE, and as far as I know, so do all RISE (+GROW) with SAP systems (public + private). In the past, more and more customers changed their OS strategy due to lower or equal costs and RHEL features, and migrated their systems to RHEL.

Red Hat announced back in January of this year that the costs for cloud partners would change effective April 1, 2024. They call it "scalable pricing". The previous pricing structure was a two-tiered system that categorized VMs as either "small" or "large" based on the number of cores or vCPUs, with fixed prices assigned to each category regardless of the VM's actual size. Subscription fees were determined by the duration of VM allocation; the number of cores or vCPUs did not influence the price, resulting in a capped cost for RHEL subscriptions.

Old RHEL Subscription model categories


  • Red Hat Enterprise Linux Server Small Virtual Node (1-4 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Large Virtual Node (5 or more vCPUs or Cores)

This means the new price depends on the number of vCPUs. Unlike before, the cost of a subscription is no longer capped. Some of you will remember the chaos of the socket subscription or the Oracle licensing model, including the analogy of the parking spots. A physical core now costs as much as a virtual core. OK, fair enough: if I use more cores, I have to pay more. But this also means costs can be lower for systems with fewer vCPUs, right? This new fancy "scalable pricing" can be a good thing. So when do costs drop, when do they rise, and who is affected?

Red Hat:

"In general, we anticipate that the new RHEL pricing to cloud partners will be lower than the current pricing for small VM/instance sizes; at parity for some small and medium VM/instance sizes; and potentially higher than the current pricing for large and very large VM/instance sizes."


    New RHEL Subscription model categories:


  • Red Hat Enterprise Linux Server Small Virtual Node (1-8 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Medium Virtual Node (9 - 128 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Large Virtual Node (129 or more vCPUs or Cores)
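The new tiers above boil down to a simple classification by vCPU count. Here is a minimal sketch in Python; the tier boundaries are taken from the lists above, and the shortened category names are for illustration only:

```python
def rhel_tier(vcpus: int) -> str:
    """Classify a VM into the new RHEL subscription categories by vCPU count."""
    if vcpus <= 8:
        return "Small Virtual Node"   # 1-8 vCPUs or cores
    if vcpus <= 128:
        return "Medium Virtual Node"  # 9-128 vCPUs or cores
    return "Large Virtual Node"       # 129 or more vCPUs or cores

# Under the old model there were only two tiers, split at 4 vCPUs,
# each with a fixed price regardless of actual VM size.
print(rhel_tier(8))    # → Small Virtual Node
print(rhel_tier(20))   # → Medium Virtual Node
print(rhel_tier(129))  # → Large Virtual Node
```

Note that a 20 vCPU VM, "large" (and therefore price-capped) under the old model, now falls into the uncapped per-vCPU medium tier.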


Amazon's statement:


"Any new product (such as a new instance type, new instance size, new region, etc.) launched after July 1, 2024, will use the new RHEL pricing. Therefore, any new rates (such as new regions or new instance types) launched after July 1, 2024, will be on the new RHEL pricing, even if the Savings Plans were purchased prior to July 1, 2024."

Microsoft Azure statement:


"In response to Red Hat’s price changes, Azure will also be rolling out price changes for all Red Hat Enterprise Linux instances. These changes will start occurring on April 1, 2024, with price decreases for Red Hat Enterprise Linux and RHEL for SAP Business Applications licenses for vCPU sizes less than 12. All other price updates will be effective July 1, 2024."

GCP statement:


"On January 26, 2024, Red Hat announced a price model update on RHEL and RHEL for SAP for all Cloud providers that scales image subscription costs according to vCPU count. As a result, starting July 1, 2024, any active commitments for RHEL and RHEL for SAP licenses will be canceled and will not be charged for the remainder of the commitment's term duration."

This means the first customers only noticed the change when they received the July invoice in August. The smart cloud architects who practice cloud FinOps noticed it earlier ;)

Let's have a look at the MS Azure pricing calculator, because Azure still has the largest market share. With 12 or fewer vCPUs you will save money. But what does it mean for customers running SAP HANA instances, given that the smallest certified MS Azure HANA instance has 20 vCPUs? The cost of RHEL for SAP was previously a flat $94.90 per month (the constant line at the bottom of the graph) for all certified HANA instances.

MS Azure: RHEL price per certified HANA SKU [source: MS price calc: europe west / germany west central]


If you are running large instances with 128 or more vCPUs, you will see that the costs differ by more than $10,000 per month! If you are using large sandboxes or features like HANA system replication (HSR), you can multiply these additional costs.

MS Azure: Difference of 3 years reserved instances (3YRI) RHEL for SAP total instance costs [source: MS price calc: europe west / germany west central]


If we drill down to the RHEL-only costs per month, we can also see the price per vCPU and the switch from the 64 vCPU to the 128 vCPU price.
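To make the per-vCPU effect concrete, here is a small sketch comparing the old flat price with the new per-vCPU pricing. The rates are assumptions read off the Azure 3YRI figures discussed in this article (a flat $94.90/month before; roughly $7.88 per vCPU for medium and $7.01 per vCPU for large virtual nodes), not official list prices:

```python
OLD_FLAT = 94.90    # USD/month: old capped RHEL for SAP price (all HANA SKUs)
MEDIUM_RATE = 7.88  # USD per vCPU/month, 9-128 vCPUs (read off the Azure figures)
LARGE_RATE = 7.01   # USD per vCPU/month, 129+ vCPUs (read off the Azure figures)

def new_monthly_cost(vcpus: int) -> float:
    """RHEL for SAP cost per month under the new per-vCPU model (sketch)."""
    rate = MEDIUM_RATE if vcpus <= 128 else LARGE_RATE
    return vcpus * rate

for vcpus in (20, 64, 128, 192, 416):
    new = new_monthly_cost(vcpus)
    print(f"{vcpus:>4} vCPUs: ${new:8.2f}/month vs ${OLD_FLAT}/month before "
          f"(+${new - OLD_FLAT:,.2f})")
```

Multiply the delta by the number of nodes you run (HSR replicas, sandboxes, DR) to estimate the total impact on your landscape.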

MS Azure: Difference of 3 years reserved instances (3YRI) RHEL for SAP costs per month (before vs. now) [source: MS price calc: europe west / germany west central]
MS Azure: total RHEL pricing per SKU with costs per vCPU [source: MS price calc: europe west / germany west central]


Summary

For those using RHEL on small instances with fewer than 12 vCPUs, the new pricing model is a saving. For SAP customers with larger instances it is a nightmare: unplanned additional costs that took effect directly on July 1, 2024.

Paying the same price for a physical core as for a virtual core is a simple calculation, but the two do not deliver the same value. Anyone using multithreading is penalized more heavily, and once again customers will question whether Intel is worth it when comparing the cost against the performance gained.

Will customers now stop migrating to RHEL in cloud projects because of this heavy price difference?

Is it possible to get some RHEL volume discount?

The previous TCO calculation can be thrown in the trash.

Strange findings:


  • The price calculators of Google and MS Azure are not up to date with the new RHEL pricing model
  • If you choose RHEL for SAP with the HA option, the MS Azure price calculator still shows the old prices of $87.60 for <= 8 vCPUs and $204.40 for larger instances
  • MS Azure calculates the large virtual node price ($7.01 per vCPU) for the 128 vCPU SKUs, which by definition should get the medium virtual node price ($7.88 per vCPU)

Options:


  1. AWS and maybe some other providers can offer RHEL volume discounts in exchange for an annual spend commitment. Work with your Technical Account Manager to assess whether you qualify.
  2. Resize instances that do not currently need their number of vCPUs
  3. Migrate to SUSE (but SUSE will surely also adapt their prices in the near future; it is just a matter of time)

