Oracle Database In-Memory Population

Thanks to all those who have sent me questions on the new Oracle Database In-Memory. Below are the answers to some of the questions you have sent. Don't worry if you don't see the answer to your question in this particular blog post, as there has been a huge interest and a lot of questions!

I have grouped the questions into categories and will publish a different category every couple of days for the next few weeks. In this post I focus on questions around the population of the In-Memory column store (IM column store).

What does the term “Populate” mean?
Populate is the term used to bring data into the IM column store. We use the term “populate” instead of “load” because load is commonly used to mean inserting new data into the database. Populate doesn’t bring new data into the database, it brings existing data into memory and formats it in an optimized column format.

Who decides what objects get populated into the IM column store?
You get to decide which objects get populated into the IM column store. Only objects with the INMEMORY attribute are populated into the IM column store. The INMEMORY attribute can be specified on a tablespace, table, (sub)partition, or materialized view. If it is enabled at the tablespace level, then all tables and materialized views in the tablespace are enabled for the IM column store by default.
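As a sketch, the INMEMORY attribute can be set at any of these levels (the table and tablespace names here are hypothetical):

```sql
-- Enable an individual table for the IM column store
ALTER TABLE sales INMEMORY;

-- Enable a single partition of a partitioned table
ALTER TABLE sales MODIFY PARTITION sales_q1 INMEMORY;

-- Enable all tables and materialized views in a tablespace by default
-- (note the DEFAULT keyword is required at the tablespace level)
ALTER TABLESPACE users DEFAULT INMEMORY;
```

Tables created later in that tablespace inherit the attribute, but it can still be overridden per object with NO INMEMORY.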


When do objects get populated into the IM column store?
Objects are populated into the IM column store either in a prioritized list immediately after the database is opened or after they are scanned (queried) for the first time. Again you get to control this by specifying a sub-clause to the INMEMORY attribute called PRIORITY.
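For illustration (table names are hypothetical), the PRIORITY sub-clause looks like this:

```sql
-- Populate immediately after the database is opened
ALTER TABLE sales INMEMORY PRIORITY CRITICAL;

-- Populate after higher-priority objects, without waiting for a scan
ALTER TABLE customers INMEMORY PRIORITY LOW;

-- The default: populate only after the object is first scanned
ALTER TABLE orders INMEMORY PRIORITY NONE;
```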


Why does the column store populate complete tables or partitions into memory instead of caching frequently accessed blocks like the buffer cache?
Standard caches such as a buffer cache rely on non-uniform access patterns that are common in OLTP applications. Some rows within tables are accessed much more frequently than other rows. That is why you can have a buffer cache that is 10% the size of a table that captures 95% of the (OLTP) accesses to the table. In this example, you get a 20x speedup with a cache that is 10% the size of the table.

The main use case for the In-Memory column store is fast analytics, which has very different access patterns. Columnar scan access patterns are sequential and uniform. You scan a whole table or segment, not a small part of it. Let's say a scan of a table is 100x faster using the in-memory column store. If only 10% of the table is in the column store, then you will get a 1.1x speedup instead of a 100x speedup. This is because 90% of the table will be read from the conventional row store, and this is no faster than before. That is a huge difference from the buffer cache example above, where having 10% of the table in cache produced a 20x speedup for OLTP workloads.

Therefore, for the in-memory column store to work effectively, all of a table or partition should be populated in memory.

What happens when the In-Memory column store becomes full?
It is best practice to declare a segment “inmemory” only if there is enough capacity in the in-memory column store area for the segment. However the database will operate correctly even if too many segments are declared in-memory. The In-Memory column store is not a cache; it does not operate with a least recently used (LRU) algorithm. Once full, no more data can be populated into the column store until another object is dropped or declared as not “inmemory”. A warning message will be posted to the alert log to make you aware that an attempt was made to populate data in-memory but the In-Memory column store is full. The DBA or other privileged user can easily remove objects from the In-Memory column store to make room for new objects.

The database can function with 100% correctness when only part of a table is populated in the column store. In fact, this partial population will happen in a number of normal cases. For example, when you start a database instance, population of the column store will run in the background and "select" statements will read some parts of the table from the normal row store, and some from the column store. Another case is when an instance crashes and some of the in-memory column store is lost. In all cases, the database knows exactly what parts of the table are in the column store and what parts are not, and returns correct results.
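One way to see how much of each object has been populated is to query the V$IM_SEGMENTS view, for example:

```sql
-- Check population status; BYTES_NOT_POPULATED shows how much of each
-- segment still has to be read from the row store
SELECT segment_name,
       populate_status,
       bytes_not_populated
FROM   v$im_segments;
```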

How do I remove or un-populate an object from the IM column store?
To indicate an object is no longer a candidate, and to instantly remove it from the IM column store, simply specify the NO INMEMORY clause.
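For example, with a hypothetical table name:

```sql
-- Remove the table from the IM column store and clear its INMEMORY attribute
ALTER TABLE sales NO INMEMORY;
```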


Are there any restrictions on the column data types that can be populated into the IM column store?
The IM column store supports all datatypes except the LONG datatype (which is deprecated in any case) and out-of-line LOBs. Out-of-line LOBs would generally see minimal benefits while consuming a large amount of memory. This means that all Oracle-supplied datatypes such as spatial, XML, and Media are supported. For tables that contain LONG columns or out-of-line LOBs, it is still possible to populate the rest of the columns in the table into the In-Memory column store. Note that simpler column types such as numbers, dates, and chars will generally benefit more from column store optimizations such as vector processing than complex column types.
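Because the INMEMORY attribute can also be applied selectively at the column level, a table containing a LOB can still have its other columns populated. A sketch, assuming a hypothetical orders table with a CLOB column called notes:

```sql
-- Populate the table, but exclude the LOB column from the column store
ALTER TABLE orders INMEMORY NO INMEMORY (notes);
```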

Are there any restrictions on the table types that can be used with the IM column store?
Tables, partitions, sub-partitions, materialized views, and materialized view logs are all candidates to be populated into the IM column store. It is not possible to populate external tables, temporary tables, index-organized tables, or hash clusters in the In-Memory column store.

I will have more answers and worked examples of Oracle Database In-Memory in action in the coming weeks. But feel free to email me if you have a burning question on In-Memory that hasn't been answered yet.


Will the ADDM be able to suggest In-Memory population for specific objects?

I think that dropping the "right" indexes is also important for best results... and it is not as easy as it seems.

Posted by meo bogliolo on July 08, 2014 at 12:47 PM PDT #


Can you please elaborate on the differences between the In-Memory option and the age-old pin-in-cache option?


Posted by Joy on July 19, 2014 at 05:56 AM PDT #

In-Memory vs. cache:

In-Memory allows you to put data in RAM at instance startup. Queries run very fast even the first time.

In-Memory uses a columnar organization of the data and maintains a storage index for each column. You get fast response times even when putting fields that are not "traditionally" indexed in the WHERE and ORDER BY sections of SELECT statements.

Posted by Leopoldo Gonzalez on October 24, 2014 at 09:44 AM PDT #

Hi Santosh,

You are absolutely right: there was a typo in my original post where I was missing the keyword DEFAULT in the ALTER TABLESPACE command.

I've made the correction in the post.

Thanks for noticing!


Posted by Maria Colgan on December 30, 2014 at 04:27 PM PST #


Does populating the IM column store require populating the database buffer cache with the corresponding data blocks?


Posted by guest on January 28, 2015 at 07:44 PM PST #


No, populating data into the IM column store does not require the data be read into the database buffer cache.

There is no relationship or dependence between the IM column store and the buffer cache.

That said, there is no reason that the same data can't be populated in both the IM column store and the buffer cache.


Posted by Maria Colgan on January 28, 2015 at 08:37 PM PST #

Hi Maria,
Working with a customer here in the UK at the moment... Is there a way to aggressively load data into the IM store? We've set the in-memory workers to 10 and the in-memory trickle percent to 50, but it still takes (comparatively) a long time to get all the data loaded into the IM store.

Posted by guest on November 13, 2015 at 08:41 AM PST #

Hi Mike,

In-Memory population is typically CPU bound. If that is the case on your system, the only way to speed it up is to increase the initialization parameter INMEMORY_MAX_POPULATE_SERVERS, which will increase the number of in-memory worker processes.
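Assuming population really is CPU bound, the two parameters mentioned in this thread can be adjusted with ALTER SYSTEM; the values below are illustrative, not recommendations:

```sql
-- Allow more background worker processes to populate the column store
ALTER SYSTEM SET inmemory_max_populate_servers = 10;

-- Let trickle repopulation use a larger share of those workers
ALTER SYSTEM SET inmemory_trickle_repopulate_servers_percent = 50;
```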


Posted by Maria Colgan on November 25, 2015 at 03:05 PM PST #

Hi Maria,

Thank you for all your articles on the inmemory functionality!

We have tables with lots of partitions where only the newest partition contains relevant data which we want to see in the inmemory area.

Reading the docs I cannot find any other way to do it besides the manual control:

-- put the newest partition inmemory
alter table <table_name> modify partition PART_XYZ99 INMEMORY;

-- remove the older partitions from inmemory
alter table <table_name> modify partition PART_XYZ98 NO INMEMORY;

Is there any setup option which will automatically populate the newest partition into the In-Memory area and purge the previous one?


Posted by guest on November 27, 2015 at 08:05 AM PST #

We have a situation where we want to further improve the performance of our queries going against Oracle IM. One of the aspects considered is to partition a table using a string column.

The question I have, fundamentally, is: does partitioning the base table improve Oracle IM performance? On the same note, does creating partitioned indexes on IM columns of the base table improve performance? Your response is appreciated.


Posted by Santosh S on April 21, 2016 at 09:49 AM PDT #


The Oracle Database In-Memory blog is written by the Oracle product management team and sheds light on all things In-Memory.

