The first thing to understand about SAP HANA is that, thanks to its advanced hardware capabilities, the programming model can accommodate data processing at the database level. All the data processing that developers used to perform in SAP by pulling data from the database layer into the application layer can now be done in the database layer itself. So, instead of pulling data up, we push computational code down to the database. This brings us to the big question: is HANA capable of handling these computations? The answer, fortunately, is yes.
One of the direct impacts of processing data in HANA is faster execution. Since heavy-volume data transfer from the database layer to the application layer is no longer needed, only the result set is transferred. Joins are one effective way to achieve this, and the impact is even more pronounced with aggregate and grouping functions. For example, to get the total price of all invoices, instead of pulling every item to the application server (AS), an aggregate function such as SUM() can be applied directly in the database. This capability enables real-time analytics, which has found great utility in many domains.
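The difference between the two styles can be sketched with SQLite in Python as a stand-in for HANA; the table and column names here are invented for illustration, not real SAP objects:

```python
import sqlite3

# In-memory database standing in for the database layer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice_items (invoice_id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO invoice_items VALUES (?, ?)",
    [(1, 10.0), (1, 5.5), (2, 20.0), (2, 4.5)],
)

# "Data-to-code": pull every row to the application layer, then sum there.
rows = conn.execute("SELECT price FROM invoice_items").fetchall()
total_app_side = sum(price for (price,) in rows)  # N rows transferred

# "Code-to-data": push the aggregation down; only one row comes back.
(total_db_side,) = conn.execute(
    "SELECT SUM(price) FROM invoice_items"
).fetchone()

print(total_app_side, total_db_side)  # same total, but the second transfers 1 row
```

Both approaches compute the same total; the pushed-down version simply moves the computation to where the data lives, so the network carries one row instead of the whole table.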
These advantages come at a cost. From the development perspective, it is trickier for a developer to handle both HANA and the AS and ensure there are no conflicts between them. This demands careful navigation through the development lifecycle until the object settles in Production, and a correspondingly cautious approach. The upside, however, is significant. Consider a report built on data spread across a dozen tables holding significant transactional volume: its performance can be improved dramatically by this new approach. That is why, when ABAP optimization comes into the picture, the Code-to-Data paradigm (code pushdown) is one of the first things development teams address.
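The multi-table report case can be sketched the same way, again with SQLite standing in for HANA and invented table names: a single pushed-down statement with a join, grouping, and aggregation replaces a nested application-layer loop over the tables, and only the small report result travels back.

```python
import sqlite3

# Two illustrative tables standing in for a multi-table report source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, customer TEXT)")
conn.execute("CREATE TABLE invoice_items (invoice_id INTEGER, price REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [(1, "ACME"), (2, "Globex")])
conn.executemany("INSERT INTO invoice_items VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 20.0)])

# Join, grouping, and aggregation all execute inside the database;
# the application layer receives only the finished report rows.
report = conn.execute(
    """
    SELECT i.customer, SUM(t.price) AS total
    FROM invoices AS i
    JOIN invoice_items AS t ON t.invoice_id = i.id
    GROUP BY i.customer
    ORDER BY i.customer
    """
).fetchall()

print(report)  # [('ACME', 15.0), ('Globex', 20.0)]
```

With a dozen joined tables instead of two, the same principle holds: the database engine resolves the joins near the data, which is exactly what code pushdown exploits.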