Over the last 20 years or so I’ve had the opportunity to help build a variety of business intelligence solutions for large organizations. It will be no surprise that one of the core requirements is fast query response times. This requirement seems pretty obvious – faster response times lead to greater end-user satisfaction. The performance of a database can also determine what types of applications are practical. Breakthroughs in performance often lead to innovative new applications that can change how people work. A more efficient application is usually less expensive to operate since it might not require as large a server.
Oracle Database In-Memory, combined with the availability of more powerful servers with many cores and plenty of memory, really is a game changer when it comes to improving query response times. With Database In-Memory, far fewer CPU cycles are used to scan, filter, join, and aggregate data. Modern servers have plenty of CPU cycles available for processing, and DRAM is relatively inexpensive. So Database In-Memory can do more with less, and since it is a feature of the Oracle Database that works with all of the other performance-enhancing features of the Oracle Database, it has become yet another tool in my big toolbox of performance-improving techniques.
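As a quick illustration of how little is involved in putting Database In-Memory to work, a table is populated into the In-Memory column store with a single declarative clause (the SALES table and its columns here are hypothetical, just for the sketch):

```sql
-- Mark the (hypothetical) SALES table for population
-- into the In-Memory column store.
ALTER TABLE sales INMEMORY;

-- Analytic queries like this can then scan, filter, and
-- aggregate the columnar in-memory data rather than reading
-- row-format blocks from the buffer cache or disk.
SELECT prod_id, SUM(amount_sold)
FROM   sales
WHERE  channel_id = 3
GROUP BY prod_id;
```

Population status can be checked in the V$IM_SEGMENTS view; no application changes are required, which is part of what makes it such a convenient tool.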
Recently, I was presented with a challenge. Could I provide extremely
fast query response times for a large number of concurrent users running
an ad-hoc business intelligence workload on a database running on a single
server? More specifically, at least 100 concurrent users querying a 500
million row table with an average response time of just a few seconds.
This is the first post in a series where I will walk you through my solution to this performance challenge put before me. I’m actively working on this challenge as I
write this first post, so I’m not exactly sure how
the numbers will work out, but I’m pretty confident I’ll be successful. And you can be sure that Database In-Memory
will be at the core of the solution!
I have a plan in mind, but perhaps some of you will have some suggestions. Feel free to let me know via the comment section below if you do.
Here’s what I know today:
I’m going to step through various designs and optimizations,
documenting the improvements along the way. I’m going to start with a baseline using only features that are
available in Oracle 11.2. That, of
course, means the baseline will not use Database In-Memory.
Since I can organize the data and write the queries any way I want, I’m going to use a prescriptive approach to this challenge. My implementation will be designed to be CPU efficient (that is, use as few CPU cycles as possible) and to avoid I/O and other wait events. I will decide what SQL execution plan I want first and design the implementation around that. This is usually a much better approach than trying to compensate for an inefficient schema or queries that do not lead to the best execution plan.
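Checking a candidate plan up front is straightforward with EXPLAIN PLAN and DBMS_XPLAN; a minimal sketch (the query and the SALES table are placeholders, not the actual challenge schema) looks like this:

```sql
-- Ask the optimizer for its plan on a candidate query,
-- before committing to a schema design around it.
EXPLAIN PLAN FOR
  SELECT prod_id, SUM(amount_sold)
  FROM   sales
  GROUP BY prod_id;

-- Display the plan just generated.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Iterating on the schema and query until this output shows the plan I want, and only then building out the implementation, is the essence of the prescriptive approach.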
Let me know if you have thoughts. In my next post, I’ll share my starting point
and baseline results.