This blog covers TimesTen memory management - what to look out for and how to tune it.
The concepts and actions apply to TimesTen 11.2.2 and 18.1 (both TimesTen Classic and TimesTen Scaleout).
This blog covers running out of memory in a correctly configured database. When you run out of memory in a TimesTen database, the error looks like either:
There are four main things to consider for TimesTen memory management:
Tuning memory for TimesTen performance is a separate subject that will be covered another day.
TimesTen is a relational database with objects like tables, indexes, sequences, views, materialized views and PLSQL packages and procedures. As TimesTen is an In-Memory Database, these objects are always in memory all of the time to enable extreme performance. These objects are also stored in the file system for persistence.
As these objects are always in memory all of the time, care must be taken to not run out of memory. Although modern computers can have Terabytes of memory, you still need to carefully manage TimesTen Database memory. Most of the time, the TimesTen memory management 'will just work'. When there is a memory management issue, this blog should cover the issues and how to address them.
You should have an estimate of how much data you expect to store in your TimesTen Database. You will need more memory than is needed just to store the data. When TimesTen creates the shared memory segment for a database, it uses memory for the following:
TimesTen also uses a small amount of memory for the PLSQL runtime.
The memory used by the TimesTen database can be thought of as a swimming pool.
You need to put water into it to be able to swim, but you do not want to overfill the pool.
If you keep adding water to a swimming pool it will eventually become full. For databases, if you keep adding data [eg SQL Inserts] and never remove any data [eg SQL Deletes] then your database will eventually become full. This is true whether the database is an In-Memory Database [like TimesTen] or a disk based database [like Oracle, MySQL or SQL Server etc].
If you keep adding water to a swimming pool at the same rate that you remove water, then the pool will tend to stay at the same water level. For databases, if you have a sustained SQL insert rate which is the same as the sustained SQL delete rate, then the database will tend to have a steady number of rows.
If the sustained rate at which you add water to a swimming pool is greater than the sustained rate at which you remove water from a swimming pool, then eventually the pool will fill up with water. For databases, if the sustained SQL insert rate is greater than the sustained SQL delete rate, then eventually the database will become full. Likewise if the sustained SQL insert rate is less than the sustained SQL delete rate then those tables will eventually become empty.
You can get insight into the sustained SQL insert and delete rates via the ttStats command line utility. eg
Running ttStats in monitor mode will look at a bunch of performance metrics and show the trends in operations per second over time.
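For example, assuming a DSN named sampledb (the DSN name is illustrative), monitoring looks like:

```shell
# Monitor mode is the default; metrics refresh periodically (Ctrl-C to stop)
ttStats sampledb
```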
You can determine how full (as a percentage) a database is with the following SQL query from ttIsql:
Select round(((PERM_IN_USE_SIZE * 1.0) / (PERM_ALLOCATED_SIZE * 1.0)) * 100, 1) as percentFull
from sys.monitor;
For any database, you need to manage the sustained SQL insert and delete rates so that you do not end up with too much or too little data. If you have a large TimesTen database [eg a PermSize of many TB] and a low insert rate, then it will take a long time for the database to become full. However if your TimesTen database is small [eg a few Gigabytes] and you are inserting millions of rows per second then you could quickly run out of space in your database.
TimesTen allows you to optionally define tables to have automatic 'aging'. This table aging is an automatic way of deleting rows from a table. The table aging algorithm can either be LRU (Least Recently Used) or time based. In addition to 'local' tables, TimesTen cache groups can optionally have LRU or time based aging defined for cache tables. Dynamic cache groups must use LRU aging.
For LRU based aging, a single dedicated 'aging' sub-daemon thread will automatically delete batches of rows based on the LRU configuration via the ttAgingLRUConfig built-in. This aging sub-daemon thread is called 'aging' in the ttStatus output.
The input parameters to ttAgingLRUConfig are:
The default values are lowUsageThreshold 80%, highUsageThreshold 90% and agingCycle 0 (ie every second).
Using the default LRU aging configuration, if the database is 50% full then no LRU aging will occur as 50% < 90%.
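As a sketch, the thresholds can be lowered from ttIsql so that LRU aging starts sooner (the values here are illustrative, not recommendations):

```sql
-- Start LRU aging once the permanent region is 60% full, delete rows
-- until usage drops back to 50%, and check every second (agingCycle 0).
call ttAgingLRUConfig(50, 60, 0);
```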
If you have a database with a large sustained SQL insert rate [eg many connections each inserting in batches of rows in parallel], then it is possible that the single LRU aging thread with the default configuration will not be able to keep up. You can increase the sustained SQL delete rate by either:
You decide on a table by table basis whether to use LRU table aging. The LRU aging configuration is defined at the database level. Usually the default LRU aging configuration is OK for most tables.
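A minimal sketch of per-table LRU aging, using a hypothetical table name:

```sql
-- Cold rows in this table are candidates for automatic LRU deletion.
CREATE TABLE session_cache (
  session_id NUMBER NOT NULL PRIMARY KEY,
  data       VARCHAR2(256)
) AGING LRU;

-- LRU aging can also be added to, or switched off for, an existing table:
-- ALTER TABLE session_cache ADD AGING LRU;
-- ALTER TABLE session_cache SET AGING OFF;
```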
You can also optionally use time based aging for tables. When you use time based aging, you define the data lifetime for rows and how often the aging process checks for old data to delete.
Time based aging also uses the 'aging' sub-daemon thread.
If you have a database with a large sustained SQL insert rate then using time based aging where the data is only checked once per day would not be a good idea. Instead for this workload, using an AgingCycle defined in seconds and a lifetime of seconds or minutes would be more appropriate.
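A sketch of time based aging for such a high-ingest workload (the table, column names and durations are hypothetical):

```sql
-- Rows older than 2 minutes (based on the ts column) are deleted,
-- and the aging thread checks for expired rows every 10 seconds.
CREATE TABLE events (
  event_id NUMBER NOT NULL PRIMARY KEY,
  payload  VARCHAR2(100),
  ts       TIMESTAMP NOT NULL
) AGING USE ts LIFETIME 2 MINUTES CYCLE 10 SECONDS;
```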
The TimesTen In-Memory Database has multiple different memory objects and algorithms. TimesTen can be thought of as using pools [or regions] for some memory management algorithms. User visible memory objects include the permanent memory region, the temporary memory region, the log buffer, the PLSQL SGA, perm blocks and heaps.
SQL inserts tend to result in [perm] blocks being allocated and SQL deletes tend to result in [perm] blocks being de-allocated. To enable fast block allocations and de-allocations, TimesTen uses free lists. Perm blocks can vary in size. An issue that all memory managers need to deal with is memory fragmentation.
Memory fragmentation can seem strange if you are not aware that it is happening. For example, if your 100 GB database is 99% full but you cannot insert a row that uses 100KB due to 'insufficient' permanent memory then you will likely be confused and annoyed. Although your database in theory has 1GB of free space, in practice due to excessive memory fragmentation (in blocks and/or heaps), it may be that an allocation of 100KB is not possible at that time [even though smaller allocations are possible].
The ttBlockInfo built-in provides visibility for block fragmentation.
The output of ttBlockInfo is:
A symptom of excessive block fragmentation is that 'large' block allocations are not possible even though there are sufficient FreeBytes, because the LargestFree block is not big enough. Block fragmentation in TimesTen is possible, but not being able to allocate a large block due to fragmentation is rare.
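The built-in can be invoked from ttIsql; a minimal sketch:

```sql
-- Report permanent block usage, including FreeBytes and LargestFree,
-- to check whether large allocations could fail due to fragmentation.
call ttBlockInfo;
```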
TimesTen memory heaps dynamically grow/shrink and use freelists for fast heap allocations/de-allocations. Each heap's deferred freelists can also suffer from memory fragmentation. Freelist compaction is used to minimize memory fragmentation.
The ttHeapInfo built-in provides visibility into the various TimesTen heaps [eg tables, indexes and compiled SQL statements]. For each heap, ttHeapInfo output includes:
Freelist compaction can either be done within the scope of a transaction [the default] or in the background. The default memory management configurations and algorithms are sufficient for most workloads.
There are several ways to address block fragmentation and heap freelist management.
TimesTen 11.2.2 and 18.1 support two algorithms for compaction of heaps and blocks.
If you are experiencing memory fragmentation, it is advisable to test your workloads with both heap management algorithms. Make sure that you measure the latency and throughput for the expected concurrency for both algorithms so that you can make an informed decision on which heap management algorithm to use.
If needed, a dedicated memory compaction thread can be enabled via the ttDBCompactConfig built-in:
'Large' block allocation failures due to memory fragmentation may be reduced by using out of line storage for variable length table columns.
Note, the actual memory allocated per row is far more complex than this in both cases.
In this example, about the same total amount of memory is allocated in both cases, but in the second case, many smaller allocations are needed as compared with fewer larger allocations, and the smaller allocations are more likely to succeed in the presence of severe memory fragmentation.
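A sketch of out of line storage for a variable length column (the table and column names are hypothetical):

```sql
-- The wide body column is stored out of line, so each row results in
-- several smaller allocations rather than one large inline allocation,
-- which are more likely to succeed when memory is fragmented.
CREATE TABLE documents (
  doc_id NUMBER NOT NULL PRIMARY KEY,
  body   VARCHAR2(4000) NOT INLINE
);
```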
Most of the time, memory management is not of concern for the TimesTen In-Memory Database as everything just works. When your workload is experiencing memory management issues, you should look at the following things in this order:
Disclaimer: These are my personal thoughts and do not represent Oracle's official viewpoint in any way, shape, or form.