Databases have long served as the lifeline of the business, so it is no surprise that performance has always been top of mind. Whether it is a traditional row-formatted database handling millions of transactions a day or a columnar database powering advanced analytics to uncover deep insights about the business, the goal is to service every request as quickly as possible. This is especially true as organizations look to gain an edge on the competition by analyzing data from their transactional (OLTP) databases to make more informed business decisions. The traditional model for doing this (see Figure 1) uses two separate sets of resources, with an ETL process required to move data from the OLTP database to a data warehouse for analysis. Two obvious problems exist with this approach: first, I/O bottlenecks arise quickly because both databases reside on disk; second, analysis is always being done on stale data, since the warehouse only reflects the last ETL run.
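The staleness problem can be made concrete with a minimal sketch of the traditional ETL model. The schema and table names here are hypothetical, and SQLite stands in for both the OLTP database and the warehouse; the point is only that analysts see the data as of the last ETL run, not the live transactional state.

```python
# Minimal ETL sketch (hypothetical schema; SQLite stands in for both systems).
import sqlite3

oltp = sqlite3.connect(":memory:")       # stands in for the transactional DB
warehouse = sqlite3.connect(":memory:")  # stands in for the data warehouse

oltp.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
oltp.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 25.5), (3, 4.5)])
warehouse.execute("CREATE TABLE daily_sales (total REAL)")

def run_etl():
    # Extract from OLTP, transform (here, a simple aggregate), load into
    # the warehouse. Everything committed afterward is invisible to analysts.
    (total,) = oltp.execute("SELECT SUM(amount) FROM orders").fetchone()
    warehouse.execute("DELETE FROM daily_sales")
    warehouse.execute("INSERT INTO daily_sales VALUES (?)", (total,))

run_etl()
# A transaction arriving after the ETL run is not reflected in the warehouse.
oltp.execute("INSERT INTO orders VALUES (4, 100.0)")
print(warehouse.execute("SELECT total FROM daily_sales").fetchone()[0])  # 40.0
```

Until `run_etl` executes again, every analytic query against the warehouse operates on this stale snapshot.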
In-memory databases have helped address these performance concerns by using main memory to deliver high throughput and low latency for analytic workloads. With an in-memory analytics approach, the high-throughput ETL processes, data warehouse servers, and disk-resident data warehouses can be eliminated, yielding obvious CapEx savings, and real-time analytics can run on the transactional data without impacting OLTP performance. This, however, introduces a new set of problems. The constant growth of data forces these in-memory databases to compress their data to fit in memory, which means the data must first be decompressed whenever analysis is performed. The result is a bottleneck at both the memory and the processor level: the same resources must handle compression, decompression, and querying of the analytics data while continuing to serve other application workloads that need memory capacity and cycles from the same processor.
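The decompress-before-query overhead can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation: a single integer column is held in memory as a zlib-compressed blob, and even a trivial scan (a SUM) must first pay the full decompression cost in CPU cycles and transient memory.

```python
# Minimal sketch of querying a compressed in-memory column
# (illustrative names; zlib stands in for the database's codec).
import array
import zlib

def compress_column(values):
    """Store a column of 64-bit ints as a zlib-compressed byte blob."""
    return zlib.compress(array.array("q", values).tobytes())

def sum_column(blob):
    """A simple scan: the blob must be fully decompressed before the
    SUM can run, burning processor cycles and transient memory."""
    col = array.array("q")
    col.frombytes(zlib.decompress(blob))
    return sum(col)

blob = compress_column(range(1_000_000))
print(sum_column(blob))  # every scan repeats the decompression work
```

Every query repeats this decompress-then-scan cycle, which is why the burden lands on the processor and memory subsystem already serving other workloads.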