Enabling Real-Time Analytics With Database In-Memory

Getting started with Oracle Database In-Memory Part I - Installing & Enabling

Maria Colgan
Distinguished Product Manager

Now that Oracle Database has been officially released, I can finally start sharing more technical details on how Oracle Database In-Memory (Database In-Memory) works.

I thought we should start this series of posts right at the very beginning by answering the most fundamental question: how and when is Database In-Memory installed and enabled?

Let’s start by doing a clean install and allowing the installer to create a typical single-instance database.

So, has In-Memory been installed?

Yes, Oracle Database In-Memory is installed. How do I know? Oracle Database In-Memory is not a bolt on technology to the Oracle Database. It has been seamlessly integrated into the core of the database as a new component of the Shared Global Area (SGA). When the Oracle Database is installed, Oracle Database In-Memory is installed. They are one and the same. You can’t unlink it or choose not to install it.

The more important question is whether In-Memory is automatically enabled.

The answer is NO. In order to prove this we are going to need to look at some of the new initialization parameters that control In-Memory.

Six new initialization parameters with the INMEMORY prefix have been introduced to directly control the different aspects of the new in-memory functionality. There is also a new optimizer parameter that controls whether queries can use the IM column store or not.

Right now we are only interested in one of these parameters, INMEMORY_SIZE, to determine whether In-Memory is enabled.

Database In-Memory uses an In-Memory column store (IM column store), which is a new component of the SGA called the In-Memory Area. Data in the IM column store does not reside in the traditional row format used by the Oracle Database; instead it uses a new column format. The size of the IM column store is controlled by the INMEMORY_SIZE parameter. As you can see, the INMEMORY_SIZE parameter is set to 0, and therefore Database In-Memory is not enabled, as no IM column store has been allocated. We can also confirm the In-Memory Area is not allocated by querying V$SGA.
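These two checks can be run as follows (output will vary by environment; in a fresh install INMEMORY_SIZE defaults to 0):

```sql
-- INMEMORY_SIZE defaults to 0, meaning no IM column store is allocated
SHOW PARAMETER inmemory_size

-- The In-Memory Area will not appear as an SGA component while it is 0
SELECT name, value FROM v$sga;
```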

Still don't trust me? Let’s confirm Database In-Memory is not enabled by examining the feature tracking information. I’m going to force the feature-tracking table to be updated before I query it.
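A sketch of that check (the sampling procedure is an internal, undocumented call, so treat it as illustrative rather than supported):

```sql
-- Force the feature usage data to be refreshed (internal procedure)
EXEC dbms_feature_usage_internal.exec_db_usage_sampling(SYSDATE);

-- Check whether the In-Memory option is recorded as used
SELECT name, detected_usages, currently_used
FROM   dba_feature_usage_stats
WHERE  name LIKE 'In-%';
```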

If Database In-Memory isn’t enabled out of the box how do you enable it?

Enabling Database In-Memory is actually a multi-step process.

Step 1: First we must allocate memory for the IM column store by setting the INMEMORY_SIZE parameter to a non-zero value, which must be at least 100MB.

ALTER SYSTEM SET inmemory_size = 20G scope=spfile;

Since the IM column store is part of the SGA, we also need to ensure the SGA_TARGET parameter is set large enough to accommodate the new IM column store and all of the other existing components (buffer cache, shared pool, large pool etc.). By default, the installer set the SGA_TARGET to 71G, so I’m going to bump it by 20G.

ALTER SYSTEM SET sga_target = 91G scope=spfile;

Now let’s bounce the database so these parameter changes can take effect.
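Since INMEMORY_SIZE is a static parameter in this release, the changes only take effect on restart. For example:

```sql
SHUTDOWN IMMEDIATE
STARTUP

-- Verify the new settings took effect
SHOW PARAMETER inmemory_size
SHOW PARAMETER sga_target
```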

As you can see, we now have an IM column store. But Database In-Memory is still not in use because no objects have been populated into the IM column store. To confirm this we can look at two new v$ views, V$IM_SEGMENTS and V$IM_USER_SEGMENTS, which indicate what objects are in the IM column store.
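At this point the view should return no rows:

```sql
-- An empty result confirms nothing has been populated yet
SELECT segment_name, populate_status, inmemory_size
FROM   v$im_segments;
```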

We can also confirm it by checking the feature tracking information again.

Step 2: Unlike a pure in-memory database, not all of the objects in an Oracle database need to be populated in the IM column store. The IM column store should be populated with the most performance-critical data in the database. Less performance-critical data can reside on lower cost flash or disk. Of course, if your database is small enough, you can populate all of your tables into the IM column store. Only objects with the INMEMORY attribute are populated into the IM column store. The INMEMORY attribute can be specified on a tablespace, table, (sub)partition, or materialized view. In this case let's enable the INMEMORY attribute on one of the user tables, CUSTOMERS.
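Marking the table is a simple attribute change and does not move any data by itself:

```sql
ALTER TABLE customers INMEMORY;
```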


By default Oracle automatically decides when to populate the table into the In-Memory Column Store. This is also referred to as “on demand”, as Oracle typically populates the table after it has been accessed for the first time. So, let’s run a query on the CUSTOMERS table.

SELECT cust_valid, COUNT(*)
FROM   customers
GROUP  BY cust_valid;

Now if we check v$IM_SEGMENTS we see the CUSTOMERS table has been populated in the IM column store.

If we check the feature tracking information now, we will see Database In-Memory is enabled and being used.

If you want to remove a table from the IM column store you simply need to specify the NO INMEMORY attribute.
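For example:

```sql
ALTER TABLE customers NO INMEMORY;
```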


In next week's post, I will explain in a lot more detail all of the different sub-clauses of the INMEMORY attribute and what all of the columns in the new v$ views represent.

Join the discussion

Comments ( 32 )
  • Peter Herdman-Grant Monday, July 28, 2014

    Thanks for this Maria, I know there have been some concerned bloggers expressing uncertainty as to how easy it might be to accidentally enable the new IN_MEMORY option and what the implications of INMEMORY_QUERY defaulting to ENABLE might mean come an Oracle license audit. I expect this post will be referenced many times in that it clearly shows how the IN_MEMORY option is enabled and gives visibility to the not so well known DBMS Feature Usage Statistics that DBAs can add to their monitoring and alerting processes.

    I look forward to you perhaps discussing some straightforward high impact use cases that we all can use to demonstrate the powerful capabilities of Oracle 12c IN_MEMORY option. @DBAStorage

  • Hemant K Chitale Tuesday, July 29, 2014

    I would have been happier if the INMEMORY attribute could be set at the Column level. I may have a table with 100 columns of which I need only 2 columns to be In-Memory. Is that possible ?

  • Freek Tuesday, July 29, 2014


Does the inmemory_size parameter need to be explicitly set before someone can create a table with the inmemory clause, or will Oracle in that case automatically allocate space within the SGA?

  • Maria Colgan Tuesday, July 29, 2014


No, setting the INMEMORY attribute on a table does not automatically allocate space within the SGA.

    Setting the INMEMORY attribute when the INMEMORY_SIZE parameter is set to zero means the attribute is effectively ignored, as there is no IM column store that the table can be populated into.



  • guest Tuesday, July 29, 2014

Hi Hemant,

I am sure Maria can answer your question better than me, but I think what you are asking for is possible.




  • guest Wednesday, July 30, 2014


That's possible.

ALTER TABLE oe.product_information
INMEMORY MEMCOMPRESS FOR QUERY (
product_id, product_name, category_id, supplier_id, min_price);

  • Maria Colgan Wednesday, July 30, 2014

Hi Hemant,

    It is possible to populate just a subset of columns from a table into the IM column store but perhaps not the way you think.

By default all of the columns in a table or partition inherit their In-Memory attribute and sub-attributes from the table. All of the columns are marked INMEMORY if the table is.

    However, you do have the ability to override the inherited In-Memory attributes on a column. For example you can exclude the column by marking a column NO INMEMORY. Or you can change the compression type etc.

But a column can't have the INMEMORY attribute if the table doesn't.

    For example,

ALTER TABLE sales INMEMORY NO INMEMORY (prod_id);

    This statement marks the SALES table INMEMORY but excludes the prod_id column from being populated into memory.

I will post a lot more about the different aspects of the syntax in next week's blog.



  • Rajeshwaran, Jeyabal Wednesday, July 30, 2014
  • guest Wednesday, July 30, 2014

    I have a standard Oracle table called EMPLOYEE. I first executed


    I then ran the query

    SELECT ename FROM employee WHERE salary > 10

The feature tracking view reports that I was using the in-memory option for this query, even though I never set the inmemory_size parameter, so I don’t have a column store. Is this correct behavior?

  • guest Wednesday, July 30, 2014

    Since the inmemory_size parameter is set to 0, there is no IM column store so we cannot populate the employee table into memory.

    Recording that the In-Memory option is in use in this case is a bug and we will fix it in the first patchset update coming in October.



  • guest Friday, August 22, 2014

    Hi Maria,

In the example you talked about SGA_TARGET; what about automatic memory management and inmemory_size? Can you use AMM with inmemory_size, and does this mean the defined size is a hard limit?


  • Jim Carter Wednesday, September 17, 2014

    Excellent post - thanks for the clarification, Maria. The URL for this article will be widely shared, I'm sure.

    One slight suggestion: if you could include the actual text of cool SQL (like that query against dba_feature_usage_stats) in-line, your readers would appreciate the typing savings... :-)

  • guest Friday, October 24, 2014

If there is a system failure or the database crashes, is there 100% recovery/consistency for the in-memory data?

  • GovindanK Thursday, January 29, 2015

If both the inmemory_size and inmemory_max_populate_servers parameters are set to a zero value, Oracle should not allow the inmemory feature to be associated with either table creation/alter or at the tablespace level. In other words, since these two parameters can be dynamically changed, it should check during run time and block the SQL. But as per the Metalink doc (With INMEMORY_SIZE 0, IN-MEMORY option is reported as USED (Doc ID 1924832.1)), it does allow for tables to be created with the inmemory setting. The optimizer might not arrive at a plan using the column store, but the dba_feature_usage_stats view then shows the value as TRUE.

  • Maria Colgan Thursday, January 29, 2015


Your suggestion that we generate an error if someone sets the INMEMORY attribute on a table when both the inmemory_size and inmemory_max_populate_servers parameters are set to zero is very interesting. But I don't think it's technically correct.

Setting the INMEMORY attribute on an object means that this object is a candidate to be populated into the In-Memory column store. It does not mean it should be populated into memory immediately. Just because the object can't be populated at this moment doesn't necessarily mean the statement should fail.

    Regarding the MOS note Doc_ID 1924832.1, the behavior it is describing is due to Bug 19317899, which was fixed in the first bundle patch for Oracle Database In-Memory (Patch 19303936) that was released in Oct 2014.



  • Tarun Thursday, February 5, 2015

    Hi Maria,

    Here is my scenario. I have 2 node RAC.

    My table size on disk is


    select owner , segment_name,sum(bytes) "Bytes"

    from dba_segments

    where owner='XXCCS_O'

    and segment_name='XXCCS_DS_SL_CVR'

    group by owner, segment_name;

    When I load a subset of columns from this table inmemory I get the following output from the following query


SELECT v.inst_id, v.owner,
...
FROM gv$im_segments v
WHERE owner = 'XXCCS_O'
AND segment_name = 'XXCCS_DS_SL_CVR';
Table size from dba_segments is 1,946,157,056 bytes.

Now on both the RAC instances it shows bytes loaded as 1,946,157,056 bytes, as shown in the above output.

    This is confusing me because I thought that it would distribute the data across both the nodes as the data is loaded with NO DUPLICATE as shown in the gv$im_segments. But looking at the numbers it shows that the same bytes have been loaded across both the RAC Nodes.

    Can you please explain the numbers here?

    I don't want to load the same data inmemory across both the Nodes but want to divide and distribute the data across the nodes as I have limited memory space and want to be sure I am not doing anything wrong here.



  • guest Monday, February 9, 2015

    Hi Tarun,

    The BYTES column in v$im_segments represents the total size of the object on disk. In your case that is 1,946,157,056 bytes and is the expected value for this column on both nodes in gv$im_segments.

The INMEMORY_SIZE column in v$im_segments represents the size of the object in the In-Memory column store. That means your object is taking up 356,909,056 bytes on each of the RAC nodes. Again this is most likely expected, as you have only populated a subset of columns from the table; those columns will be compressed, and each node holds only half of the data.

As you pointed out, we will automatically distribute the data from your table across the two RAC nodes, with half of it ending up on node 1 and the other half on node 2. So the total footprint of your table in-memory would actually be (356,909,056 + 356,909,056) 713,818,112 bytes.

You could verify this by shutting down one of the RAC nodes, populating the table on just one node, and checking the same columns in v$im_segments.



  • Ivan Radoslavov Thursday, November 5, 2015

    Hi Maria,

I have a 2 node RAC on Exadata. I put a partition of a table in memory with NO DUPLICATE. gv$im_segments shows that a subpartition which I selected from was loaded in only one of the instances, instance 1.

    Now if my session is connected to instance 1 the performance of a simple query from this subpartition is fantastic. The exact same query on instance 2 does not perform. Both execution plans are identical and are showing in memory full scan of the subpartition. The "session logical reads - IM" stat for the connection to instance 1 grows after every execution and for the connection to instance 2 it does not. That makes me think that the instance 2 connection does not use the in memory scan even though that's what the execution plan states.

    I believe this is not the expected behavior or am i missing something?

    Thank you,


  • guest Friday, November 6, 2015

    Hi Ivan,

By default all objects populated into memory will be distributed across all of the IM column stores in the cluster. Oracle decides the best way to distribute the object across the cluster given the type of partitioning used (if any). If the object is partitioned we will round robin the partitions across the nodes in a RAC cluster. In your case you only marked one partition as INMEMORY, so we populated that one partition on one node. You could override this default behavior by specifying DISTRIBUTE BY ROWID RANGE, which would have distributed your partition by rowid range across your two RAC nodes.
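For example (hypothetical table name; the DISTRIBUTE sub-clause is only meaningful on RAC):

```sql
-- Spread the data by rowid range across the IM column stores in the cluster
ALTER TABLE sales INMEMORY DISTRIBUTE BY ROWID RANGE;
```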

    With the partition only populated on Node 1 and since we do not ship IMCUs across the interconnect on RAC, your query on Node 2 must be able to start a process on Node 1 to access the data In-Memory. Otherwise we will have to get the data from disk.

    How can a query start a process on a remote node?

The query must use parallel execution. Without parallel execution we are forced to read the necessary data from disk, as it is not populated in the local In-Memory column store.

    Remember seeing TABLE ACCESS INMEMORY FULL in the execution plan is an indication that some or all of the data may come from memory. It is not a guarantee.



  • guest Tuesday, March 22, 2016

    Hi Maria

How do I put a partitioned table in-memory which is already present on disk? It has huge data and 16 partitions. When I do the following:

    alter table xyz inmemory;

it comes up in V$IM_SEGMENTS, but only one partition is coming in the list; the rest are not coming, while querying the table the explain plan shows inmemory.



  • Maria Colgan Thursday, March 31, 2016

    Hi Amit,

    Without more information it's hard to say for sure what is happening on your system but I'm guessing you have run out of space in the In-Memory column store after the first partition was populated, which is why the other 15 partitions have not been populated.

    You can confirm my suspicion by doing a SELECT * FROM V$INMEMORY_AREA and check the value of the POPULATE_STATUS column.
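For example:

```sql
-- POPULATE_STATUS of OUT OF MEMORY indicates the column store is full
SELECT * FROM v$inmemory_area;
```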

Remember, seeing TABLE ACCESS INMEMORY FULL in the execution plan is an indication that some or all of the data may come from memory. It is not a guarantee. In your case this indicates that partition 1 will come from the In-Memory column store, while partitions 2 through 16 will have to come from disk or the buffer cache since they currently do not reside in the In-Memory column store.



  • Jason Tuesday, April 12, 2016

I've observed a scenario where a table is fully populated with 0 in the bytes_not_populated column, but after some time, the bytes_not_populated column shows a non-zero value, indicating part of the table was purged out of memory for some reason. But between those two events, I didn't see any 'Insufficient memory to populate table to inmemory area' messages in the alert log. The total inmemory_size reported is below the setting. I'm wondering what could cause this. Is there any way to see which columns the unpopulated data belongs to? Is there any data dictionary table to monitor the usage of populated columns?

  • Maria Colgan Wednesday, April 13, 2016

    Hi Jason,

    The In-Memory Column store is a store and not a cache. Once data is populated it is never "aged out".

In your scenario I suspect a data load operation occurred on the table in question after it was fully populated into the In-Memory column store. Once the data load operation is committed, the In-Memory column store will realize that the newly inserted data is missing from the column store and change the value in the bytes_not_populated column to reflect this.

    If the table has an In-Memory priority specified on it, the IMCO will trigger the population of the missing data into the In-Memory column store the next time it wakes up. Assuming there is enough free space available in the column store the missing data will be fully populated and the bytes_not_populated column will return to zero.

If the table does not have an In-Memory priority specified on it, then we will wait for the table to be queried before populating the missing data. Again, assuming there is enough free space available in the column store, the missing data will be fully populated and the bytes_not_populated column will return to zero.
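A priority can be assigned when marking the table (hypothetical table name):

```sql
-- With a priority other than NONE, population happens automatically
-- without waiting for the table to be queried first
ALTER TABLE customers INMEMORY PRIORITY CRITICAL;
```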



  • Jason Wednesday, April 13, 2016

    Hi Maria,

    Thanks for the explanation. That makes sense for the very small number in the bytes_not_populated column. However, we also saw a couple of cases where bytes_not_populated shows 6G for a table having 30G on disk. I can't think our data change can be that much.

    On a different topic, we set our inmemory_size 64G on container level and each pdb level. The data populated is around 40G. However, we are seeing 'Insufficient memory to populate table to inmemory area' message in alert file randomly. What's a possible reason for that?



  • Maria Colgan Thursday, April 14, 2016

    Hi Jason,

If you allocated 64GB to the In-Memory column store, not all of that space will be available for data. Remember, the In-Memory column store has two pools: a 1MB pool for data and a 64KB pool for metadata. To see exactly how much space is allocated to each pool you need to query v$inmemory_area. This view only has two rows in it, so a SELECT * is the easy way to query it. It will show you exactly how much of the 64GB was allocated to each pool and how much memory is actually used by each pool at this point in time. It's the best place to look if you are getting the insufficient memory message in your alert log.



  • Simon Wednesday, July 20, 2016

    Hello Maria,

    I have encountered some problems since I tried to enable the IM Feature. I use the desktop class and I tried the SQL commands you mentioned:

    ALTER SYSTEM SET inmemory_size = 20G scope=spfile;

    ALTER SYSTEM SET sga_target = 91G scope=spfile;

    After I restarted my database I couldn't connect to my DB anymore (using SQL Developer).

    I get the ORA-12505, TNS:listener does not currently know of SID given in connect descriptor.

    What can I do?

    I appreciate any advice.

    Best Wishes


  • Maria Colgan Saturday, July 30, 2016

    Hi Simon,

    The error ORA-12505 means that the listener was up and you could connect to it, but it couldn't connect you to the database because it doesn't know that that database is up. There are two reasons for this:

    1. The database has not been started up

    2. The database has not registered with the listener, e.g. because the database was started before the listener. (When the database starts, it registers itself with a listener if it is already running. If the listener isn't running, the database doesn't register itself, and if the listener starts, it doesn't go looking for databases that might register with it.)
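If the database came up before the listener, you can force it to register right away with a standard command:

```sql
-- Ask the database to register with the listener immediately
ALTER SYSTEM REGISTER;
```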

    Can you connect to the database in SQL*Plus to confirm it's really up?



  • Simon Thursday, August 18, 2016

Thanks Maria, now everything is working perfectly.

    Still I have one more question.

I have now set the inmemory_size parameter to 6G and the sga_target to 7G.

    I created a very large table for testing purposes. When I wanted to populate that table into the IM column store it didn't fit in. Okay that's not really the problem. The thing is it says in v$im_segments:

INMEMORY_SIZE: 3,869,638,656

BYTES: 33,478,934,528

BYTES_NOT_POPULATED: 11,001,651,200

Looking at v$inmemory_area tells me:

1MB POOL: 5,149,556,736 (bytes allocated)

64KB POOL: (bytes allocated)

But 3,869,638,656 < 5,149,556,736.

How can that be? What happened with the remaining 5,149,556,736 - 3,869,638,656 = 1,279,918,080 bytes?

    Thank you very much

  • guest Friday, October 7, 2016

How can I identify whether a query is executing against the in-memory column store or the disk-based row store?

  • Chakravarthi Tuesday, January 3, 2017

    Hi Maria,

    In Oracle 12c Release 1 database, 'inmemory_size' instance parameter is immediately modifiable (scope=both) as per 'ISSYS_MODIFIABLE' column in 'v$parameter' but I am unable to change it dynamically. Also, I am unable to increase the inmemory_size instance parameter to any higher value.

SQL> sho parameter inmemory_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_size                        big integer 100M

    SQL> select name, value, ISMODIFIED, ISSYS_MODIFIABLE from v$parameter where name = 'inmemory_size';


    ------------------------------ ----------------------------------- ---------- ---------

    inmemory_size 104857600 FALSE IMMEDIATE

    SQL> alter system set inmemory_size = 200m scope = both;

    alter system set inmemory_size = 200m scope = both


    ERROR at line 1:

    ORA-02097: parameter cannot be modified because specified value is invalid

    ORA-02095: specified initialization parameter cannot be modified

    SQL> alter system set inmemory_size = 250m scope = both;

    alter system set inmemory_size = 250m scope = both


    ERROR at line 1:

    ORA-02097: parameter cannot be modified because specified value is invalid

    ORA-02095: specified initialization parameter cannot be modified

SQL> sho parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 768M

    Could you please help me in understanding.

  • guest Friday, January 20, 2017

    I believe you have encountered a bug.

    The correct value for ISSYS_MODIFIABLE is FALSE for the INMEMORY_SIZE parameter in Oracle Database 12c Release 1 (

    In 12.1 the In-Memory column store is a static pool.

    As a static pool, any changes to the INMEMORY_SIZE parameter will not take effect until the database instance is restarted. It is also not impacted or controlled by Automatic Memory Management (AMM). The INMEMORY_SIZE parameter must have a minimum size of 100MB.



  • Niels Mikuta Tuesday, May 7, 2019
    I know this is a pretty old thread. Is it possible to disable INMEMORY once tables are populated?