Getting started with Oracle Database In-Memory Part I - Installing & Enabling

Now that Oracle Database 12c has been officially released, I can finally start sharing more technical details on how Oracle Database In-Memory (Database In-Memory) works.

I thought we should start this series of posts right at the very beginning by answering the most fundamental question: how and when is Database In-Memory installed and enabled?

Let’s start by doing a clean install of Oracle Database 12c and allowing the installer to create a typical single-instance database.

So, has In-Memory been installed?

Yes, Oracle Database In-Memory is installed. How do I know? Oracle Database In-Memory is not a bolt on technology to the Oracle Database. It has been seamlessly integrated into the core of the database as a new component of the Shared Global Area (SGA). When the Oracle Database is installed, Oracle Database In-Memory is installed. They are one and the same. You can’t unlink it or choose not to install it.

The more important question is whether In-Memory is automatically enabled.

The answer is NO. In order to prove this, we need to look at some of the new initialization parameters that control In-Memory.

Six new initialization parameters with the INMEMORY prefix have been introduced to directly control the different aspects of the new in-memory functionality. There is also a new optimizer parameter that controls whether queries can use the IM column store or not.

Right now we are only interested in one of these parameters, INMEMORY_SIZE, to determine whether In-Memory is enabled.

Database In-Memory uses an In-Memory column store (IM column store), which is a new component of the SGA called the In-Memory Area. Data in the IM column store does not reside in the traditional row format used by the Oracle Database; instead it uses a new column format. The size of the IM column store is controlled by the INMEMORY_SIZE parameter. As you can see, the INMEMORY_SIZE parameter is set to 0, and therefore Database In-Memory is not enabled, as no IM column store has been allocated. We can also confirm the In-Memory Area is not allocated by querying V$SGA.
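If you want to check this yourself, something along these lines will show the parameter value and confirm that no "In-Memory Area" row appears in the SGA (the exact output will vary by system):

```sql
-- Check the size of the IM column store (0 means In-Memory is disabled)
SHOW PARAMETER inmemory_size

-- Confirm no "In-Memory Area" component is allocated in the SGA
SELECT * FROM v$sga;
```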

Still don't trust me? Let’s confirm Database In-Memory is not enabled by examining the feature tracking information. I’m going to force the feature-tracking table to be updated before I query it.
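One common way to do this (DBMS_FEATURE_USAGE_INTERNAL is an internal package, so treat this as a sketch rather than a supported API) is to force the sample to refresh and then query DBA_FEATURE_USAGE_STATS:

```sql
-- Force the feature usage sample to be refreshed (internal package)
EXEC DBMS_FEATURE_USAGE_INTERNAL.EXEC_DB_USAGE_SAMPLING(SYSDATE);

-- Check whether the In-Memory option is reported as used
SELECT name, detected_usages, currently_used
FROM   dba_feature_usage_stats
WHERE  name LIKE 'In-Memory%';
```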

If Database In-Memory isn’t enabled out of the box how do you enable it?

Enabling Database In-Memory is actually a multi-step process.

Step 1: First we must allocate memory for the IM column store by setting the INMEMORY_SIZE parameter to a non-zero value of at least 100MB.

ALTER SYSTEM SET inmemory_size = 20G scope=spfile;

Since the IM column store is part of the SGA, we also need to ensure the SGA_TARGET parameter is set large enough to accommodate the new IM column store and all of the other existing components (buffer cache, shared pool, large pool etc.). By default, the installer set the SGA_TARGET to 71G, so I’m going to bump it by 20G.

ALTER SYSTEM SET sga_target = 91G scope=spfile;

Now let’s bounce the database so these parameter changes can take effect.
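In SQL*Plus the restart and verification would look something like this (output omitted):

```sql
-- INMEMORY_SIZE requires an instance restart to take effect
SHUTDOWN IMMEDIATE
STARTUP

-- Verify the new settings are in place
SHOW PARAMETER inmemory_size
SHOW PARAMETER sga_target
```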

As you can see, we now have an IM column store. But Database In-Memory is still not in use because no objects have been populated into the IM column store. To confirm this we can look at two new v$ views, V$IM_SEGMENTS and V$IM_USER_SEGMENTS, which indicate what objects are in the In-Memory column store.
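A quick way to confirm the column store is still empty is to query the new view; at this point no rows are expected:

```sql
-- No objects have been populated yet, so this returns no rows
SELECT owner, segment_name, bytes, inmemory_size
FROM   v$im_segments;
```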

We can also confirm it by checking the feature tracking information again.

Step 2: Unlike a pure in-memory database, not all of the objects in an Oracle database need to be populated in the IM column store. The IM column store should be populated with the most performance-critical data in the database. Less performance-critical data can reside on lower-cost flash or disk. Of course, if your database is small enough, you can populate all of your tables into the IM column store. Only objects with the INMEMORY attribute are populated into the IM column store. The INMEMORY attribute can be specified on a tablespace, table, (sub)partition, or materialized view. In this case let's enable the INMEMORY attribute on one of the user tables, CUSTOMERS.
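The statement to do that is simply:

```sql
-- Mark CUSTOMERS as a candidate for population into the IM column store
ALTER TABLE customers INMEMORY;
```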


By default Oracle automatically decides when to populate the table into the In-Memory Column Store. This is also referred to as “on demand”, as Oracle typically populates the table after it has been accessed for the first time. So, let’s run a query on the CUSTOMERS table.

SELECT cust_valid, COUNT(*)
FROM   customers
GROUP  BY cust_valid;

Now if we check v$IM_SEGMENTS we see the CUSTOMERS table has been populated in the IM column store.
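The check against V$IM_SEGMENTS looks like this; the POPULATE_STATUS column shows whether population has completed:

```sql
-- CUSTOMERS should now appear in the IM column store
SELECT segment_name, inmemory_size, bytes_not_populated, populate_status
FROM   v$im_segments
WHERE  segment_name = 'CUSTOMERS';
```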

If we check the feature tracking information now, we will see Database In-Memory is enabled and being used.

If you want to remove a table from the IM column store you simply need to specify the NO INMEMORY attribute.
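For example:

```sql
-- Remove CUSTOMERS from the IM column store
ALTER TABLE customers NO INMEMORY;
```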


In next week's post, I will explain in a lot more detail all of the different sub-clauses of the INMEMORY attribute and what all of the columns in the new v$ views represent.


Thanks for this Maria, I know there have been some concerned bloggers expressing uncertainty as to how easy it might be to accidentally enable the new IN_MEMORY option and what the implications of INMEMORY_QUERY defaulting to ENABLE might mean come an Oracle license audit. I expect this post will be referenced many times in that it clearly shows how the IN_MEMORY option is enabled and gives visibility to the not so well known DBMS Feature Usage Statistics that DBAs can add to their monitoring and alerting processes.

I look forward to you perhaps discussing some straightforward high impact use cases that we all can use to demonstrate the powerful capabilities of Oracle 12c IN_MEMORY option. @DBAStorage

Posted by Peter Herdman-Grant on July 28, 2014 at 03:27 PM PDT #

I would have been happier if the INMEMORY attribute could be set at the Column level. I may have a table with 100 columns of which I need only 2 columns to be In-Memory. Is that possible ?

Posted by Hemant K Chitale on July 28, 2014 at 08:39 PM PDT #


Does the inmemory_size parameter need to be explicitly set before someone can create a table with the inmemory clause, or will Oracle in that case automatically allocate space within the SGA?

Posted by Freek on July 29, 2014 at 12:41 AM PDT #


No, setting the INMEMORY attribute on a table does not automatically allocate space within the SGA.

Setting the INMEMORY attribute when the INMEMORY_SIZE parameter is set to zero means the attribute is effectively ignored, as there is no IM column store that the table can be populated into.


Posted by Maria Colgan on July 29, 2014 at 10:44 AM PDT #

Hi Hemant,

I am sure Maria can answer your question better than me, but I think what you are asking for is possible.


Posted by guest on July 29, 2014 at 11:32 AM PDT #

That's possible.

ALTER TABLE oe.product_information
INMEMORY MEMCOMPRESS FOR QUERY
(product_id, product_name, category_id, supplier_id, min_price);

Posted by guest on July 29, 2014 at 05:14 PM PDT #

Hi Hemant,

It is possible to populate just a subset of columns from a table into the IM column store but perhaps not the way you think.
By default all of the columns in a table or partition inherit their In-Memory attribute and sub-attributes from the table. All of the columns are marked INMEMORY if the table is.

However, you do have the ability to override the inherited In-Memory attributes on a column. For example, you can exclude a column by marking it NO INMEMORY, or you can change its compression type, etc.

But a column can't have the INMEMORY attribute if the table doesn't.

For example,

ALTER TABLE sales INMEMORY NO INMEMORY (prod_id);

This statement marks the SALES table INMEMORY but excludes the prod_id column from being populated into memory.

I will post a lot more about the different aspects of the syntax in next week's blog.


Posted by Maria Colgan on July 29, 2014 at 09:30 PM PDT #

@Hemant K Chitale:

You can enable In-Memory even at the column level.


Posted by Rajeshwaran, Jeyabal on July 29, 2014 at 09:31 PM PDT #

I have a standard Oracle table called EMPLOYEE. I first executed


I then ran the query

SELECT ename FROM employee WHERE salary > 10

The feature tracking view reports that I was using the in-memory option for this query, even though I never set the inmemory_size parameter, so I don't have a column store. Is this correct behavior?

Posted by guest on July 30, 2014 at 09:20 AM PDT #

Since the inmemory_size parameter is set to 0, there is no IM column store so we cannot populate the employee table into memory.

Recording that the In-Memory option is in use in this case is a bug and we will fix it in the first patchset update coming in October.


Posted by guest on July 30, 2014 at 09:46 AM PDT #

Hi Maria,

In the example you talked about SGA_TARGET; what about automatic memory management and inmemory_size? Can you use AMM with inmemory_size, and does this mean the defined size is a hard limit?


Posted by guest on August 22, 2014 at 01:13 PM PDT #

Excellent post - thanks for the clarification, Maria. The URL for this article will be widely shared, I'm sure.

One slight suggestion: if you could include the actual text of cool SQL (like that query against dba_feature_usage_stats) in-line, your readers would appreciate the typing savings... :-)

Posted by Jim Carter on September 17, 2014 at 06:57 AM PDT #

If there is a system failure or the database crashes, is there 100% recovery/consistency for the in-memory data?

Posted by guest on October 23, 2014 at 05:18 PM PDT #

If both the inmemory_size and inmemory_max_populate_servers parameters are set to zero, Oracle should not allow the inmemory feature to be associated with either table creation/alter or at the tablespace level. In other words, since these two parameters can be changed dynamically, it should check during run time and block the SQL. But as per the Metalink doc (With INMEMORY_SIZE 0, IN-MEMORY option is reported as USED (Doc ID 1924832.1)), it does allow tables to be created with the inmemory setting. The optimizer might not arrive at a plan using the cache, but the dba_feature_usage_stats view then shows the value as TRUE.

Posted by GovindanK on January 29, 2015 at 11:41 AM PST #


Your suggestion that we generate an error if someone sets the INMEMORY attribute on a table when both the inmemory_size and inmemory_max_populate_servers parameters are set to zero is very interesting. But I don't think it's technically correct.

Setting the INMEMORY attribute on an object means that the object is a candidate to be populated into the In-Memory column store. It does not mean it will be populated into memory immediately. Just because the object can't be populated at this moment doesn't necessarily warrant failing the statement.

Regarding the MOS note Doc_ID 1924832.1, the behavior it is describing is due to Bug 19317899, which was fixed in the first bundle patch for Oracle Database In-Memory (Patch 19303936) that was released in Oct 2014.


Posted by Maria Colgan on January 29, 2015 at 03:22 PM PST #

Hi Maria,
Here is my scenario. I have 2 node RAC.

My table size on disk is
select owner, segment_name, sum(bytes) "Bytes"
from dba_segments
where owner = 'XXCCS_O'
and segment_name = 'XXCCS_DS_SL_CVR'
group by owner, segment_name;

XXCCS_O  XXCCS_DS_SL_CVR  1,946,157,056
When I load a subset of columns from this table in memory, I get the following output from this query (select list reconstructed to match the output shown):

SELECT v.inst_id, v.owner, v.segment_name, v.segment_type,
       v.bytes, v.inmemory_size, v.inmemory_duplicate
FROM gv$im_segments v
WHERE owner = 'XXCCS_O'
AND segment_name = 'XXCCS_DS_SL_CVR';

2  XXCCS_O  XXCCS_DS_SL_CVR  TABLE  1,946,157,056  356,909,056  NO DUPLICATE
1  XXCCS_O  XXCCS_DS_SL_CVR  TABLE  1,946,157,056  356,909,056  NO DUPLICATE
Table Size from dba_segments is - 1,946,157,056 bytes
Now both RAC instances show bytes loaded as 1,946,157,056 bytes, as shown in the above output.

This is confusing me because I thought that it would distribute the data across both the nodes as the data is loaded with NO DUPLICATE as shown in the gv$im_segments. But looking at the numbers it shows that the same bytes have been loaded across both the RAC Nodes.

Can you please explain the numbers here?

I don't want to load the same data inmemory across both the Nodes but want to divide and distribute the data across the nodes as I have limited memory space and want to be sure I am not doing anything wrong here.

Posted by Tarun on February 05, 2015 at 02:03 PM PST #

Hi Tarun,

The BYTES column in v$im_segments represents the total size of the object on disk. In your case that is 1,946,157,056 bytes and is the expected value for this column on both nodes in gv$im_segments.

The INMEMORY_SIZE column in v$im_segments represents the size of the object in the In-Memory column store. That means your object is taking up 356,909,056 bytes on each of the RAC nodes. Again, this is most likely expected: you have only populated a subset of columns from the table, those columns are compressed, and only half of the data resides on each node.

As you pointed out, we will automatically distribute the data from your table across the two RAC nodes, with half of it ending up on node 1 and the other half on node 2. So the total footprint of your table in memory would actually be (356,909,056 + 356,909,056) 713,818,112 bytes.

You could verify this by shutting down one of the RAC nodes, populating the table on just one node, and checking the same columns in v$im_segments.


Posted by guest on February 08, 2015 at 06:17 PM PST #

Hi Maria,

I have a 2 node RAC on Exadata. I put a partition of a table in memory with NO DUPLICATE. gv$im_segments shows that the subpartition I selected from was loaded in only one of the instances, instance 1.

Now if my session is connected to instance 1, the performance of a simple query on this subpartition is fantastic. The exact same query on instance 2 does not perform. Both execution plans are identical and show an in-memory full scan of the subpartition. The "session logical reads - IM" statistic for the connection to instance 1 grows after every execution, while for the connection to instance 2 it does not. That makes me think the instance 2 connection does not use the in-memory scan even though that's what the execution plan states.

I believe this is not the expected behavior, or am I missing something?

Thank you,


Posted by Ivan Radoslavov on November 04, 2015 at 06:07 PM PST #

Hi Ivan,

By default all objects populated into memory will be distributed across all of the IM column stores in the cluster. Oracle decides the best way to distribute the object across the cluster given the type of partitioning used (if any). If the object is partitioned, we round-robin the partitions across the nodes in a RAC cluster. In your case you only marked one partition as INMEMORY, so we populated that one partition on one node. You can override this default behavior by specifying DISTRIBUTE BY ROWID RANGE, which would have distributed your partition by rowid range across your two RAC nodes.
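As a sketch (the table and partition names here are hypothetical, not from the question):

```sql
-- Spread a single partition's data across all IM column stores
-- in the cluster by rowid range instead of keeping it on one node
ALTER TABLE sales MODIFY PARTITION sales_q1
  INMEMORY DISTRIBUTE BY ROWID RANGE;
```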

With the partition only populated on Node 1 and since we do not ship IMCUs across the interconnect on RAC, your query on Node 2 must be able to start a process on Node 1 to access the data In-Memory. Otherwise we will have to get the data from disk.

How can a query start a process on a remote node?

The query must use parallel execution. Without parallel execution we are forced to read the necessary data from disk, as it is not populated in the local In-Memory column store.

Remember seeing TABLE ACCESS INMEMORY FULL in the execution plan is an indication that some or all of the data may come from memory. It is not a guarantee.


Posted by guest on November 06, 2015 at 10:57 AM PST #

Hi Maria

How do I put a partitioned table that already exists on disk in memory? It has a huge amount of data and 16 partitions. When I do the following:

alter table xyz inmemory;

it comes up in V$IM_SEGMENTS, but only one partition appears in the list; the rest do not.

When querying the table, the explain plan shows INMEMORY.


Posted by guest on March 22, 2016 at 04:54 AM PDT #

Hi Amit,

Without more information it's hard to say for sure what is happening on your system but I'm guessing you have run out of space in the In-Memory column store after the first partition was populated, which is why the other 15 partitions have not been populated.

You can confirm my suspicion by running SELECT * FROM V$INMEMORY_AREA and checking the value of the POPULATE_STATUS column.

Remember, seeing TABLE ACCESS INMEMORY FULL in the execution plan is an indication that some or all of the data may come from memory. It is not a guarantee. In your case this indicates that partition 1 will come from the In-Memory column store, while partitions 2 through 16 will have to come from disk or the buffer cache since they do not currently reside in the In-Memory column store.


Posted by Maria Colgan on March 31, 2016 at 10:50 AM PDT #

I've observed a scenario where a table is fully populated, with 0 in the bytes_not_populated column, but after some time the bytes_not_populated column shows a non-zero value, indicating part of the table has been purged out of memory for some reason. But between those two events, I didn't see any 'Insufficient memory to populate table to inmemory area' messages in the alert log. The total inmemory_size reported is below the setting. I'm wondering what could cause this. Is there any way to see which columns the unpopulated data belongs to? Is there any data dictionary table to monitor the usage of populated columns?

Posted by Jason on April 12, 2016 at 12:40 PM PDT #

Hi Jason,

The In-Memory Column store is a store and not a cache. Once data is populated it is never "aged out".

In your scenario I suspect a data load operation occurred on the table in question after it was fully populated into the In-Memory column store. Once the data load operation is committed, the In-Memory column store will realize that the newly inserted data is missing from the column store and change the value in the bytes_not_populated column to reflect this.

If the table has an In-Memory priority specified on it, the IMCO will trigger the population of the missing data into the In-Memory column store the next time it wakes up. Assuming there is enough free space available in the column store the missing data will be fully populated and the bytes_not_populated column will return to zero.

If the table does not have an In-Memory priority specified on it, then we will wait for the table to be queried before populating the missing data. Again, assuming there is enough free space available in the column store, the missing data will be fully populated and the bytes_not_populated column will return to zero.
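For example (table name hypothetical), setting any priority other than the default of NONE means the IMCO handles population in the background rather than waiting for a query:

```sql
-- Population (and repopulation after data loads) is driven by the
-- IMCO background process instead of being deferred to first access
ALTER TABLE sales INMEMORY PRIORITY CRITICAL;
```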


Posted by Maria Colgan on April 13, 2016 at 10:24 AM PDT #

Hi Maria,

Thanks for the explanation. That makes sense for a very small number in the bytes_not_populated column. However, we also saw a couple of cases where bytes_not_populated shows 6G for a table having 30G on disk. I don't think our data changes could be that much.

On a different topic, we set our inmemory_size 64G on container level and each pdb level. The data populated is around 40G. However, we are seeing 'Insufficient memory to populate table to inmemory area' message in alert file randomly. What's a possible reason for that?


Posted by Jason on April 13, 2016 at 11:12 AM PDT #

Hi Jason,

If you allocated 64GB to the In-Memory column store, not all of that space will be available for data. Remember, the In-Memory column store has two pools: a 1MB pool for data and a 64KB pool for metadata. To see exactly how much space is allocated to each pool you need to query V$INMEMORY_AREA. This view only has two rows in it, so a SELECT * is the easiest way to query it. That will show you exactly how much of the 64GB was allocated to each pool and how much memory is actually used by each pool at this point in time. It's the best place to look if you are getting the insufficient memory message in your alert log.
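The query is simply:

```sql
-- Two rows: the 1MB data pool and the 64KB metadata pool
SELECT pool, alloc_bytes, used_bytes, populate_status
FROM   v$inmemory_area;
```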


Posted by Maria Colgan on April 14, 2016 at 09:22 AM PDT #


The Oracle Database In-Memory blog is written by the Oracle product management team and sheds light on all things In-Memory.

