Help! My once-fast data lookups are now slow.
As a database grows, data access can take longer if you are not careful. This is very undesirable, and it can be avoided by understanding the issues and tuning your database accordingly.
There are several reasons why data access can get slower over time:

- hash index chaining caused by under-sized hash indexes
- stale optimizer statistics leading to bad query plans
- the operating system paging or swapping the shared memory segment

We will cover each of these issues in turn.
Remember that computer science class about hash collisions, load factors and chaining?
TimesTen hash indexes use a form of chaining when a hash collision occurs. Instead of a constant-time lookup, an overloaded hash index must first do the hash lookup and then follow a chain of entries. Chain traversal is linear: the longer the chain, the longer the access time.
The way to avoid hash index chaining is to correctly size your hash indexes:
Range indexes do not have buckets/pages that need to be sized; they are self-managing.
If your table uses a hash index for its primary key, you can re-size the number of pages:
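As a sketch, here is what that can look like (the table name and sizes are hypothetical; the usual TimesTen guideline is PAGES of roughly the expected row count divided by 256 - check your release's SQL reference for the exact syntax):

```sql
-- Hypothetical table: primary key backed by a hash index sized for ~1M rows.
-- Rough guideline: PAGES is about the expected row count divided by 256.
CREATE TABLE orders (
  order_id NUMBER NOT NULL PRIMARY KEY,
  customer VARCHAR2(64)
) UNIQUE HASH ON (order_id) PAGES = 4096;

-- If the table has since grown to ~4M rows, resize the primary key
-- hash index in place; no drop/re-create is needed for the primary key.
ALTER TABLE orders SET PAGES = 16384;
```

Re-sizing when the table has grown well beyond its original estimate keeps the chains short and the lookups constant time.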
For hash indexes that are not for the primary key, you need to drop and re-create the hash index to change the number of pages.
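A sketch of that drop and re-create, assuming a hypothetical secondary hash index (CREATE HASH INDEX ... PAGES is available in recent TimesTen releases; verify against your version's SQL reference):

```sql
-- Hypothetical secondary hash index on a non-primary-key column.
-- There is no in-place resize for these, so drop and re-create larger.
DROP INDEX orders_cust_ix;
CREATE HASH INDEX orders_cust_ix ON orders (customer) PAGES = 16384;
```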
TimesTen uses a cost-based optimizer for its query plans. All cost-based SQL optimizers want up-to-date statistics: stale statistics can result in bad query plans, and bad query plans give slow performance. The TimesTen In-Memory RDBMS is all about low latency, so help TimesTen give you the lowest possible latency by keeping your statistics up to date. There are several ways to do this:
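For example, statistics can be refreshed with the built-in procedures from ttIsql; this is a sketch using a hypothetical table name (ttOptUpdateStats scans the whole table, while ttOptEstimateStats samples a subset of rows - see the TimesTen reference for the exact parameters in your release):

```sql
-- Full statistics for one table; the second argument (1) invalidates
-- already-prepared statements so they are re-optimized with fresh stats.
CALL ttOptUpdateStats('orders', 1);

-- Cheaper alternative for very large tables: estimate from a 10% sample.
CALL ttOptEstimateStats('orders', 1, '10 PERCENT');
```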
TimesTen stores its tables, indexes, sequences, materialized views, PL/SQL objects, etc. in shared memory.
Unless the shared memory segment is locked, it can be subject to paging [or worse, swapping]. A well tuned TimesTen database should wait for neither network nor disk IO, so waiting for the operating system to page back in parts of your TimesTen shared memory segment is very annoying. Paging could add milliseconds to your original microsecond queries; swapping could add seconds! You can lock the shared memory segment via two techniques: the MemoryLock connection attribute, or huge/large pages, which the operating system keeps locked in memory.
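As a sketch, assuming a Linux host and a DSN named mydb (both hypothetical), the relevant settings look something like this; check the TimesTen reference for the MemoryLock values and large-page options supported by your release:

```ini
; sys.odbc.ini -- hypothetical DSN definition
[mydb]
DataStore=/var/tt/mydb
PermSize=4096
; MemoryLock=4 asks TimesTen to lock the entire shared memory segment
MemoryLock=4

; For huge/large pages, the TimesTen daemon is configured separately in
; ttendaemon.options, with an option such as:
;   -linuxLargePageAlignment 2   (2 MB pages; verify against your release)
; plus the usual OS-level setup (vm.nr_hugepages, memlock limits).
```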
So plan ahead: correctly size your hash index pages, keep your statistics up to date, lock the shared memory segment, and use huge/large pages.
Disclaimer: these are my personal thoughts and do not represent Oracle's official viewpoint in any way, shape, or form.