
Why are my TimesTen queries getting slower?

Doug Hood
TimesTen Cloud Product Manager

Help! My once-fast data lookups are now slowing down.

As databases grow, if you are not careful, data access can take longer. This is very undesirable and can be avoided by understanding the issues and tuning your database accordingly.

There can be several reasons for data access getting slower over time:

  • Incorrectly sized hash index pages
  • Statistics not being updated
  • Shared memory getting paged in/out

We will cover each of these issues:

Incorrectly Sized Hash Index Pages

Hash index chaining

Remember that computer science class about hash collisions, load factors, and chaining?

TimesTen hash indexes use a form of chaining when a hash collision occurs. This means that instead of a constant-time lookup, an overloaded hash index first does the hash lookup and then has to follow a chain. Chaining gives linear access time: the longer the chain, the longer the access time.

The way to avoid hash index chaining is to correctly size your hash indexes:

  • The hash buckets each have 256 entries and are called pages
  • You want to size the pages based on the expected table rows and leave some room for overhead
    • Divide the expected number of rows in your table by 256
    • Add some head room, say 10% for growth
    • Use this number for pages. For example:
      • Your table is expected to have 1 million rows
      • 1,000,000 / 256 = 3906.25
      • 3906.25 * 1.1 = 4296.875
      • Round it up to 4300 pages
      • CREATE HASH INDEX my_hash_index ON my_table (my_columns) PAGES = 4300;
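Putting the example above together into a minimal, runnable sketch (the table, column, and index names are hypothetical):

  -- Table expected to hold ~1,000,000 rows
  CREATE TABLE my_table (
    my_column NUMBER NOT NULL,
    payload   VARCHAR2(100)
  );

  -- 1,000,000 / 256 = 3906.25; * 1.1 head room = 4296.875; round up to 4300
  CREATE HASH INDEX my_hash_index ON my_table (my_column) PAGES = 4300;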

Range indexes do not have buckets/pages that need to be sized, as they are self-managing.

If your table uses a hash index for its primary key, you can change the number of pages in place:

Alter the table's primary key hash index page size
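For instance, if the primary key was declared as a hash index when the table was created, the resize is a single statement. A sketch, with hypothetical names and page counts (verify the ALTER TABLE syntax against the TimesTen SQL reference for your release):

  -- Primary key backed by a hash index, sized at creation time
  CREATE TABLE my_pk_table (
    my_id   NUMBER NOT NULL PRIMARY KEY,
    payload VARCHAR2(100)
  ) UNIQUE HASH ON (my_id) PAGES = 4300;

  -- Later, when the table has outgrown the original estimate,
  -- resize the primary key hash index in place
  ALTER TABLE my_pk_table SET PAGES = 8600;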

For hash indexes that are not for the primary key, you need to drop and re-create the hash index to change the number of pages.
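A sketch of that drop and re-create cycle, reusing the hypothetical names from the sizing example above:

  -- Non-primary-key hash indexes cannot be resized in place
  DROP INDEX my_hash_index;
  CREATE HASH INDEX my_hash_index ON my_table (my_column) PAGES = 8600;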

Statistics not being updated

TimesTen uses a cost-based optimizer for its query plans. All cost-based SQL optimizers need up-to-date statistics. Stale statistics can result in bad query plans, and bad query plans give slow performance. The TimesTen In-Memory RDBMS is all about low latency, so help TimesTen give you the lowest possible latency by keeping your statistics up to date. There are several ways to do this:

  • The simplest and least efficient is the ttIsql statsupdate command.
    • This will update stats for all tables and check all rows
  • The ttIsql statsupdate [schema.]tablename command updates statistics for just the specified table
  • The ttIsql statsestimate [schema.]tablename { n rows | n percent } command lets you estimate the statistics based on a number of rows or a percentage. For large tables, estimates can be 'good enough'. Estimating statistics has the advantage of completing faster than checking all rows in a table; for instance, estimating '3 percent' [of rows] will be a lot faster than checking 100% of the rows in a table with billions of rows.
  • The builtin ttOptEstimateStats is like ttIsql statsestimate, but can be called at runtime from APIs like JDBC, OCI, PL/SQL, ODBC, etc. (see the sketch after this list)
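In this sketch, the schema, table name, and sampling percentage are hypothetical; check the exact statsestimate and ttOptEstimateStats syntax against the TimesTen reference for your release:

  -- In ttIsql:
  statsupdate;                          -- all tables, all rows (simplest, slowest)
  statsupdate my_schema.my_table;       -- one table, all rows
  statsestimate my_table 3 percent;     -- one table, sampled

  -- At runtime, from JDBC/OCI/ODBC/PL/SQL, via the built-in procedure.
  -- The second argument (1) invalidates already-prepared statements that
  -- reference the table, so they get re-optimized with the new statistics.
  CALL ttOptEstimateStats('my_table', 1, '3 PERCENT');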

Shared Memory being paged in/out

TimesTen stores its tables, indexes, sequences, materialized views, PL/SQL objects, etc. in shared memory.

TimesTen uses Shared Memory

Unless the shared memory segment is locked, it can be subject to paging [or worse, swapping]. A well-tuned TimesTen database should wait for neither network nor disk I/O, so having to wait for the operating system to page needed parts of the TimesTen shared memory segment back in defeats the purpose. Paging could add milliseconds to your original microsecond queries. Swapping could add seconds! You can lock the shared memory segment via two techniques, sketched after the list below:

  • Using the MemoryLock attribute in your sys.odbc.ini config file
  • Using huge/large pages
    • Whenever your TimesTen database is larger than 256 GB, you must use huge/large pages
    • Using huge/large pages for databases smaller than 256 GB can also give a performance improvement
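Here is a minimal sketch of both techniques, assuming a Linux install. The DSN name, path, and sizes are hypothetical, and MemoryLock=4 plus the -linuxLargePageAlignment daemon option are the settings the TimesTen documentation pairs with locked segments and huge pages on Linux; verify the exact values for your platform and release.

  # sys.odbc.ini -- lock the database's shared memory segment
  [my_dsn]
  DataStore=/disk1/timesten/my_dsn
  PermSize=32768
  MemoryLock=4

  # ttendaemon.options -- ask the daemon to use 2 MB huge pages
  -linuxLargePageAlignment 2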

So plan ahead.  Correctly size your hash index pages, keep your stats up to date, lock the shared memory segments and use huge/large pages.

Disclaimer: these are my personal thoughts and do not represent Oracle's official viewpoint in any way, shape, or form.
