HANA News Blog

Is the effort for NSE worth it?

Jens Gleichmann • 18 April 2025

Is NSE worth the effort? Or is the better question: do you know your cost per GB of RAM?

Most of our presentations on data tiering projects end with these typical questions:

  • How much will we save?
  • How fast can it be implemented?
  • Is the effort worth it over time?


My counter question:

"Do you know how much 1 GB of memory costs your company per month or year?"


=> How much memory do we have to save for the project to be beneficial?


Most customers cannot answer this question right away because the costs are mixed (hardware, software, maintenance, extra services, etc.). Do you know your memory costs? Over the years we have collected such data, covering only the infrastructure, for on-prem and cloud systems.

For on-prem systems we are in the range of 1.30€ to 2.00€ per GB per month, which averages out at 1.80€ across our customers.

For IaaS systems (3-year reserved instances) we are in the range of $0.77 to $4.80 per GB per month for MS Azure (customer average: $2.56) and $1.64 to $5.67 per GB per month for GCP (customer average: $3.47). Please note this is not a rating for or against either hyperscaler, because the offerings lack direct comparability. You cannot compare the instances 1:1 due to differences in CPU processor, number of CPUs, interfaces, costs per OS, different contract discounts, etc. Also be aware of currency conversion effects between $ and €.

In the end it is a simple calculation: if the value of the memory you save exceeds the cost of the data tiering project, you have your benefit and the answer to whether the effort is worth it.
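As a rough sketch of that calculation (our simplification, not an official formula):

break-even (months) = project cost ÷ (GB saved per system × cost per GB per month × number of systems)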

The cost of an NSE project? As always, it depends. Usually we do more than just NSE: we also optimize partitioning designs and index memory usage and tune some SQL statements. So NSE alone does not account for all the memory saved, but a large share of the benefit belongs to NSE.

An NSE project normally costs in the range of 15,000€ to 30,000€. Assuming the maximum, the benefit of NSE has to exceed 30,000€ over time.


How much memory saving can you expect?

Typically, depending on the archiving strategy, the aggressiveness of the NSE design, and system growth, we achieve savings of 27% to 35% per system. For most customers this covers PRD, QAS, and even the secondary HANA system replication site, so one NSE project usually saves the memory three times over.

Suppose we save 500GB in the HANA sizing, which corresponds to roughly 250-300GB of payload (our smallest project achieved a saving of 600GB in the sizing at a cost of about 13,000€). When do we reach break-even with project costs of 30,000€? Do we need multiple years? No! For most systems the break-even point is reached in less than 10 months.

[Chart: break-even timeline with a 500GB saving per system over 12 months]

If we assume a saving of 800GB per system in the HANA sizing, we reach the break-even point for all systems in 8 months.

[Chart: break-even timeline with an 800GB saving per system]
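To make the charts concrete, here is the formula applied to both scenarios (illustrative only: we take the cost averages quoted above, assume the saving applies to three systems, and ignore $/€ conversion):

500GB × 3 systems × 1.80€/GB/month ≈ 2,700€/month → 30,000€ ÷ 2,700€ ≈ 11 months
500GB × 3 systems × $2.56/GB/month ≈ $3,840/month → 30,000 ÷ 3,840 ≈ 8 months
800GB × 3 systems × 1.80€/GB/month ≈ 4,320€/month → 30,000€ ÷ 4,320€ ≈ 7 months

Depending on whether on-prem or IaaS rates apply, break-even lands somewhere between roughly half a year and one year.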

As you can see, even with "high" NSE project costs you achieve ROI in about one year.

Rule: the more systems are affected and the more memory is saved per system, the faster the investment in an NSE project pays off.


The best aspect: you not only save system resources and money, you also improve your energy efficiency and reduce your business’ carbon footprint. With the EU’s CSRD now requiring that compute energy be measured as an emission, the environmental cost of HANA may soon overshadow its monetary cost.


If you are planning a RISE project, you should implement NSE before you migrate to RISE, because currently there is no CAS for an NSE design.

SAP HANA News by XLC

Book: SAP HANA Deepdive
by Jens Gleichmann and Matthias Sander, 30 March 2025
Our first book, "SAP HANA Deepdive: Optimierung und Stabilität im Betrieb", has been published.
More time to switch from BSoH to S/4HANA
by Jens Gleichmann, 7 February 2025
Recently, Handelsblatt published an article about a new SAP RISE option called SAP ERP, private edition, transition option. This option includes extended maintenance until the end of 2033, i.e. three years more than the original on-prem extended maintenance. SAP confirmed this at Handelsblatt's request, but customers will only receive further details, such as the price, in the first half of the year. This is a quite unusual move by SAP, without any official statement on its news page. Just to raise more doubts? Strategy? In any case, it is a good move against the critics and the ever shorter timeline. Perhaps it is also a consequence of the growing shortage of experts for operating and migrating the large number of systems.
Performance degradation after upgrade to SPS07
by Jens Gleichmann, 3 February 2025
With SPS06, and even more so with SPS07, the HEX engine was pushed to be used more often. In simple scenarios this leads to perfect results with lower memory and CPU consumption, ending up in faster response times. But in scenarios with FAE (for all entries) together with FDA (fast data access), it can result in bad performance. After some customers upgraded their first systems to SPS07, I recommended waiting for Rev. 73/74. But some started early with Rev. 71/72, and we had to troubleshoot many statements. If you have similar performance issues after the upgrade to SPS07, feel free to contact us! Our current recommendation is to use Rev. 74 with some workarounds. The performance degradation is extreme in systems like EWM and BW with high analytical workload.
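For affected statements, a typical workaround is to steer the query away from the HEX engine with the NO_USE_HEX_PLAN hint. The statement below is only an illustration (table and predicate are placeholders), so test it on your own revision:

-- force the optimizer to avoid a HEX plan for this statement
SELECT "MATNR", "WERKS"
  FROM "MARD"
 WHERE "MATNR" = ?
WITH HINT (NO_USE_HEX_PLAN);

-- sketch: the same hint can also be pinned on the database side via a statement
-- hint, so no application code has to change:
-- ALTER SYSTEM ADD STATEMENT HINT (NO_USE_HEX_PLAN) FOR SELECT "MATNR", "WERKS" FROM "MARD" WHERE "MATNR" = ?;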
Optimize your SAP HANA with NSE
by Matthias Sander, 15 January 2025
When it comes to optimizing SAP HANA, the balance between performance and cost efficiency is critical. I am happy to share a success story where we used the Native Storage Extension (NSE) to significantly optimize memory usage while being able to adjust the sizing at the end. The Challenge: our client was operating a 4 TB memory SAP HANA system, where increasing data loads were driving up costs and memory usage. They needed a solution to right-size their system without compromising performance or scalability, and they wanted to use less hardware in the future. The Solution: we implemented NSE to offload less frequently accessed data from memory. The activation was customized based on table usage patterns: six tables were fully transitioned to NSE, one table partially (a single partition), and one table by specific columns.
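For illustration, the three granularities used in this project map to three variants of the same DDL (a sketch with hypothetical object names, not the client's real tables):

-- whole table, including all columns and partitions, moved to NSE (page loadable)
ALTER TABLE "ZSALES_DOC" PAGE LOADABLE CASCADE;

-- a single partition only
ALTER TABLE "ZSALES_HIST" ALTER PARTITION 2 PAGE LOADABLE;

-- a specific column only (the column definition is restated)
ALTER TABLE "ZAPP_LOG" ALTER ("PAYLOAD" NVARCHAR(5000) PAGE LOADABLE);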
SAP HANA NSE - a technical deepdive with Q&A
by Jens Gleichmann, 6 January 2025
SAP NSE was introduced with HANA 2.0 SPS04 and is based on a similar approach to data aging. Data aging is an application-level approach, which has a side effect if you use a lot of Z-coding: you have to use special BAdIs to access the correct data. This means you have to adapt your coding if you use it for Z-tables or if your Z-coding accesses the data through non-standard SAP functions. In this blog we talk about the technical aspects in more detail.
The SAP Enterprise Cloud Services Private Cloud Customer Center (PC3) - a new digital delivery
by Jens Gleichmann, 5 January 2025
The SAP Enterprise Cloud Services Private Cloud Customer Center (PC3) is a new digital delivery engagement model dedicated to managing service delivery for RISE with SAP S/4HANA Cloud, private edition customers.
Proactive maintenance for SAP RISE will start now in 2025
by Jens Gleichmann, 5 January 2025
Proactive maintenance for SAP RISE starts in 2025 with minor tasks like updating SPAM/SAINT and ST-PI / ST-A/PI. Companies that are familiar with frequent maintenance windows will be glad to have such time frames to keep their systems up to date and secure. However, for larger companies where such frequent maintenance windows are not common, because every minute of downtime is costly and maintenance may only really be necessary once, the situation is quite different.
Dynamic Aging for NSE - combined with Threshold and Interval option
by Jens Gleichmann, 28 December 2024
As an extension of the Dynamic Range Partitioning feature, Dynamic Aging makes it possible to automatically manage at which point in time older partitions are moved to the 'warm' data store (Native Storage Extension), with the load-unit attribute of the partition set to PAGE LOADABLE. The data in a new OTHERS partition is 'hot' data, that is, stored in memory with the load-unit attribute implicitly set to COLUMN LOADABLE. Warm data is stored on disk and only loaded into memory when required. Dynamic Aging can be used with both THRESHOLD mode (defining a maximum row count for the OTHERS partition) and INTERVAL mode (defining a maximum time or other numeric interval between each new partition). For example, for a partitioned table that is managed by dynamic partitioning and contains date/time information, you can specify an age limit (for example six months) so that data in an older partition is automatically moved to the warm store once it exceeds that age.
Automatic maintenance of the 'others' partition
by Jens Gleichmann, 28 December 2024
You can create partitions with a dynamic others partition by including the DYNAMIC keyword in the command when you create the partition. This can be used with either a THRESHOLD value, which defines a maximum row count, or an INTERVAL value, which defines a maximum time or other numeric 'distance' value. The partition can be either a single-level or a second-level RANGE partition, and dynamic ranges can be used with both balanced and heterogeneous partitioning scenarios.
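A minimal sketch of both variants (hypothetical tables; the exact options depend on your HANA revision):

-- THRESHOLD: the others partition is split automatically once it exceeds a row count
CREATE COLUMN TABLE "SALES_DOC" ("ID" BIGINT, "DOC_DATE" DATE)
PARTITION BY RANGE ("ID")
((PARTITION 1 <= VALUES < 100000000,
  PARTITION OTHERS DYNAMIC THRESHOLD 100000000));

-- INTERVAL: a new range partition is opened per time interval
CREATE COLUMN TABLE "SALES_HIST" ("ID" BIGINT, "DOC_DATE" DATE)
PARTITION BY RANGE ("DOC_DATE")
((PARTITION '2024-01-01' <= VALUES < '2025-01-01',
  PARTITION OTHERS DYNAMIC INTERVAL 1 MONTH));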
HANA Range Partitioning details
by Jens Gleichmann, 23 December 2024
For heterogeneous partitioning schemas, Dynamic Range Partitioning is available to support the automatic maintenance of the 'others' partition. When you create an OTHERS partition, there is a risk that over time it could overflow and require further maintenance. With the dynamic range feature, the others partition is monitored by a background job and automatically split into an additional range partition when it reaches a predefined size threshold. The background job also checks for empty partitions: if a range partition is found to be empty, it is automatically merged with neighboring empty partitions (the others partition itself is never automatically deleted).
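To see how close the others partition is to the next automatic split, you can watch its row count per partition, for example via the monitoring view M_CS_TABLES (schema and table name are placeholders):

SELECT "SCHEMA_NAME", "TABLE_NAME", "PART_ID", "RECORD_COUNT"
  FROM "M_CS_TABLES"
 WHERE "SCHEMA_NAME" = 'MYSCHEMA'
   AND "TABLE_NAME"  = 'SALES_HIST'
 ORDER BY "PART_ID";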