Monday Oct 06, 2014

Tiered Storage

Since this is a blog about Oracle database storage optimization, this post is an attempt to explain what we mean by tiered storage. You would think that everyone would have the same definition of tiered storage, but I've found that not to be the case. So, what is tiered storage and how does it relate to an Oracle database storage solution?

Tiered storage is the idea of creating different classes of storage with different performance, capacity, and price points. An Information Lifecycle Management (ILM) architecture recognizes that data has different lifecycle characteristics, centered around how it is modified and accessed. Based on this knowledge, a database storage architecture can be created to take advantage of that lifecycle, provided the right tools and infrastructure are available.

This is where Oracle Database 12c Automatic Data Optimization (ADO) and Heat Map come into play. These are the tools needed to provide a database-centric ILM strategy. But wait a minute, storage vendors have already built tiering into their devices, so why do we need anything in the database? The answer lies in the nature of how database data is accessed. Storage is aware of blocks or files, but has no information about the meaning of the data in those blocks or files. The Oracle database, on the other hand, understands that a table or partition is a logical entity, and with Heat Map it can track that entity's usage characteristics. Why is that important? Again, the answer lies in what happens to all of the data associated with a segment at the storage level. Since storage doesn't understand database data, it can't know that the data it just moved to slower storage will be needed every 30 days for the month-end closing cycle. Heat Map, on the other hand, will know this, so moving the segment can be avoided along with the resulting performance impact. The converse is true as well: an ADO policy can identify segments that should be eligible to move to a different storage tier sooner than space pressure alone would dictate. The policy can be created with an understanding of the data's lifecycle, something a hardware device cannot do.
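
As a simple illustration, a database-centric policy pair might look like the following sketch. The table and tablespace names are hypothetical; treat this as an outline of the 12c syntax rather than a tested configuration.

-- Sketch only: enable Heat Map tracking, then let ADO compress segments
-- after 30 days without modification and tier them to lower cost storage
-- under space pressure. Names (orders, low_cost_tbs) are hypothetical.
alter system set heat_map = ON;

alter table orders ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT
AFTER 30 DAYS OF NO MODIFICATION;

alter table orders ILM ADD POLICY TIER TO low_cost_tbs;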

To conclude, I would like to stress that the storage tiers I've been describing here are specifically for active Oracle database files. Since storage "tiering" seems to be an overloaded term, I want to distinguish the use of storage tiering in a database environment to help achieve Information Lifecycle Management from the storage tiering that storage vendors refer to when differentiating production storage from development/test storage or even backup storage.

Friday Sep 12, 2014

Oracle Open World 2014 Sessions

These are the sessions that the Database Performance Product Management team will be giving at Open World this year:

  • Oracle Database 12c New Features, OAUG DB SIG, Sunday 11:00 - 11:45 am, Moscone West - 3009
  • CON8372 - 12c: Heat Map and ADO, Monday 11:45 - 12:30 pm, Moscone South - 305
  • CON8376 - Compression Best Practices, Tuesday 10:45 - 11:30 am, Moscone South - 305
  • CON3020 - Oracle Database via Direct NFS Client, Wednesday 2:00 - 2:45 pm, Intercontinental C
  • CON8379 - How to Use Oracle Database Temporal Features to Build Smarter Applications Faster, Thursday 12:00 - 12:45 pm, Moscone South - 305

In addition to these presentations we will be helping out with the Oracle Database In-Memory Hands On Labs:

  • HOL9346 - Oracle Database In-Memory Boot Camp
    • Monday 4:15 - 5:15 pm, Hotel Nikko - Peninsula
    • Tuesday 6:45 - 7:45 pm, Hotel Nikko - Peninsula
    • Wednesday 4:15 - 5:15 pm, Hotel Nikko - Peninsula
    • Thursday 1:00 - 2:00 pm, Hotel Nikko - Peninsula

Please come see us and introduce yourself.


Monday Jun 23, 2014

Automatic Data Optimization – How Do You Know It's Working?

This post is about Automatic Data Optimization (ADO) and how to tell if your policies are working. It's fairly easy to display ADO policies and even to see if they've run. It's not quite as straightforward, though, to see if anything actually happened. Unfortunately there is no "log" of what events or actions actually occurred when the policy ran. There are a couple of events that can be set, but I've had only hit-or-miss success with them detailing what actually happened. For segment-level compression and storage tiering it's fairly easy to just look at the results. It's a little harder with policies involving advanced row compression, because Oracle intermixes compressed blocks with uncompressed ones. The easiest way that I've found is to use the function dbms_compression.get_compression_type. This function returns a value that identifies the compression type of each row passed in. This may not be practical on a very large number of rows, but once you've prototyped the behavior you can be pretty sure that the feature is actually working in practice.
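
For reference, the idea behind the row_compression script used later in this post can be sketched as follows. The DECODE mappings assume the Oracle 12.1 DBMS_COMPRESSION constants (1 = no compression, 2 = advanced row, 64 = basic); verify the constants for your release before relying on them.

-- Sketch of a row_compression-style query: classify every row in EMP3
-- by its compression type and count the rows in each class.
-- The numeric mappings assume the 12.1 DBMS_COMPRESSION constants.
select decode(dbms_compression.get_compression_type(user, 'EMP3', rowid),
              1,  'No Compression',
              2,  'Advanced compression level',
              64, 'Basic compression level',
                  'Other') compression_type,
       count(*) num_rows
  from emp3
 group by decode(dbms_compression.get_compression_type(user, 'EMP3', rowid),
              1,  'No Compression',
              2,  'Advanced compression level',
              64, 'Basic compression level',
                  'Other');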

The following example was run on database version 12.1.0.1.0. It creates a table similar to emp, adds some rows to it, enables an ADO policy to compress rows that have not been modified, and then invokes that policy and shows what happens. The one "trick" performed is to set POLICY CONDITION to 1, which changes the interpretation of DAYS to SECONDS. This is documented for the DBMS_ILM_ADMIN package in the PL/SQL Packages and Types Reference; the CUSTOMIZE_ILM procedure is used to set the parameter. Note that this is done only to make testing simpler.

SQL> col name format a40;
SQL> select * from dba_ilmparameters;

NAME                                          VALUE
---------------------------------------- ----------
ENABLED                                           1
JOB LIMIT                                        10
EXECUTION MODE                                    3
EXECUTION INTERVAL                               15
TBS PERCENT USED                                 85
TBS PERCENT FREE                                 25
POLICY CONDITION                                  0

7 rows selected.

SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.POLICY_TIME,
 dbms_ilm_admin.ILM_POLICY_IN_SECONDS);

PL/SQL procedure successfully completed.

SQL> select * from dba_ilmparameters;

NAME                                          VALUE
---------------------------------------- ----------
ENABLED                                           1
JOB LIMIT                                        10
EXECUTION MODE                                    3
EXECUTION INTERVAL                               15
TBS PERCENT USED                                 85
TBS PERCENT FREE                                 25
POLICY CONDITION                                  1

7 rows selected.

SQL>

Next a new table is created, populated, row compression status is checked and then an ADO policy is created.

CREATE TABLE emp3
  ( empno number,
    salary number,
    deptid number,
    name VARCHAR2(100)
  )
/

SQL> desc emp3
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
EMPNO                                              NUMBER
SALARY                                             NUMBER
DEPTID                                             NUMBER
NAME                                               VARCHAR2(100)

SQL>

insert into emp3
(empno, salary, deptid, name)
select 
  empno, sal, deptno, ename 
  from emp;
commit;

begin
  for i in 1..5 loop
    INSERT INTO emp3
    SELECT
      empno,
      salary,
      deptid,
      name
    FROM emp3;
    commit;
  end loop;
end;
/

SQL> select count(*) from emp3;

  COUNT(*)
----------
       448

SQL> @row_compression

COMPRESSION_TYPE                                     NUM_ROWS
-------------------------------------------------- ----------
No Compression                                            448

SQL>

alter table emp3 ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW 
AFTER 1 DAY OF NO MODIFICATION;


set lines 200
set pages 9999
col policy_name format a11;
col object_owner format a10;
col object_name format a20;
col object_type format a20;
col subobject_name format a20;
col enabled format a7;
select * from user_ilmobjects;

POLICY_NAME OBJECT_OWN OBJECT_NAME          SUBOBJECT_NAME       OBJECT_TYPE          
----------- ---------- -------------------- -------------------- -------------------- 
P137        SCOTT      EMP3                                      TABLE                

  INHERITED_FROM       ENABLED
  -------------------- -------
  POLICY NOT INHERITED YES

col policy_name format a11;
col tablespace format a20;
col enabled format a7;
select * from user_ilmpolicies;

POLICY_NAME POLICY_TYPE   TABLESPACE           ENABLED
----------- ------------- -------------------- -------
P137        DATA MOVEMENT                      YES

col policy_name format a11;
col tier_tablespace format a20;
col compression_level format a20;
col tier_status format a20;
col custom_function format a20 wrapped;
select * from user_ilmdatamovementpolicies;

POLICY_NAME ACTION_TYPE SCOPE   COMPRESSION_LEVEL    TIER_TABLESPACE      TIER_STATUS          
----------- ----------- ------- -------------------- -------------------- -------------------- 
P137        COMPRESSION ROW     ADVANCED                                                       

  CONDITION_TYPE         CONDITION_DAYS CUSTOM_FUNCTION
  ---------------------- -------------- --------------------
  LAST MODIFICATION TIME              1

Next we'll force the running of the ADO policy and check to see if any rows were compressed. Remember that we specified no modifications in the last 1 day and that days are being interpreted as seconds.

declare
  v_executionid number;
begin
  dbms_ilm.execute_ilm (
    owner=>'SCOTT',
    object_name=>'EMP3',
    policy_name=>'P137',
    execution_mode=>dbms_ilm.ilm_execution_online,
    task_id=>v_executionid);
end;
/

SQL> @row_compression

COMPRESSION_TYPE                                     NUM_ROWS
-------------------------------------------------- ----------
Basic compression level                                   448

SQL>

We see that all of the rows have been compressed. Based on our policy this is what we would expect, but why does the compression type show "Basic compression level"? Apparently there is a bug, bug number 17947871, which reports basic compression rather than advanced row compression. If the policy is defined at the segment level then "Advanced compression level" is reported, which is more in line with what you would expect. However, since the two compression types are equivalent in practice, this should not be a problem.

One other comment: my example wound up choosing all of the rows to compress. In reality we would expect that only some of the table blocks would have rows that meet the policy criteria. Remember that the entire block is compressed, so all of the rows in the block must meet the policy criteria for the block to be eligible for compression.

NOTE: The row_compression.sql script is available here.

Monday Jun 09, 2014

ADO and Two Way Storage Tiering

We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM) where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold). I think the reason this question gets asked is that customers realize that many of their business processes are cyclical, and the thinking goes that segments that only get used during month-end or year-end cycles could sit on lower cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model.

ADO storage tiering is based on the amount of free and used space in the source tablespace. Two parameters control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the used space in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clause(s) can be moved until the percentage of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace. Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage because there is typically more tier 2 storage available. At least that's the premise, since it is supposed to be less costly, lower performing and higher capacity storage. In either case though, if there isn't enough space then the operation fails.
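
To make that concrete, here is a hedged sketch of what the threshold settings and a custom-condition tiering policy might look like. The tablespace and function names are hypothetical, and the (objn IN NUMBER) RETURN BOOLEAN signature is assumed here for the custom condition function; treat this as an outline rather than a tested implementation.

-- Adjust the tablespace fullness thresholds that drive tiering:
exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_USED, 80);
exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_FREE, 30);

-- Hypothetical condition function, evaluated when the policy runs:
create or replace function my_tiering_check (objn in number)
return boolean is
begin
  -- e.g. consult a business calendar before allowing the move
  return true;
end;
/

-- Tiering policy gated on the custom condition:
alter table sales ILM ADD POLICY TIER TO low_cost_tbs ON my_tiering_check;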

In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return FALSE so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using Heat Map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.
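
If you do go down that road, the Heat Map views provide the raw access data to drive your own tiering decisions. A sketch, assuming Heat Map has been enabled (heat_map = ON):

-- Last write/read times and scan activity per segment for one schema.
select object_name, subobject_name, segment_write_time,
       segment_read_time, full_scan
  from dba_heat_map_segment
 where owner = 'SCOTT'
 order by segment_write_time;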

Wednesday May 14, 2014

Updates in Row Compressed Tables

Updates in compressed tables, whether using basic table compression or advanced row compression, are not well understood, and this article will attempt to explain how they work. To do this, some background will be covered about how Oracle performs row-based compression and why the structure of the data is so important to the effectiveness of the compression. More importantly, space usage will be detailed for various insert and update scenarios, along with the performance impacts that occur when updating compressed tables.

Basic table compression was first introduced in Oracle 9i Release 2, and it originally supported only direct path inserts to achieve compression. In later releases ALTER TABLE … MOVE operations and online table redefinition are also supported. It produces better compression than advanced row compression, uses less CPU and generates less undo and redo, but subsequent single row inserts and updates are not compressed, and updates can lead to significant row migration. This is a particular problem because the default PCTFREE setting is 0, which leaves no space in the block for the larger rows that result when updates decompress them.

Advanced row compression, or OLTP table compression as it was known in 11g, allows single row inserts and updates to be compressed, and is therefore more suitable for an OLTP environment. Advanced row compression requires the Advanced Compression Option and works by attempting compression only when the block is full. This makes the compression batch-like and means that inserts and updates are made uncompressed into the block. If the block is successfully compressed then it is eligible for more inserts or updates. If the block is full then the PCTFREE setting governs how much free space is available for updates, but at this point the updates will no longer be compressed because the compression mechanism will not be triggered. Due to the batch nature of the compression algorithm, advanced row compression will generally not compress as well as basic table compression because there will typically be uncompressed rows left over in the block.

Both basic table compression and advanced row compression use the same compression algorithm. Compression is achieved by replacing duplicate values with a pointer to a symbol table entry stored at the beginning of the block. The duplicate values can occur in one or more columns and there can be multiple entries in the symbol table. All of the information needed to decompress a row is stored in the database block itself so Oracle doesn’t have to access any other information, and since the data is part of the block, decompression really just consists of substituting the symbol table entry or entries so it is very fast. It is not at all like uncompressing a zip file.

The format of the compression in a block looks like the following (note that this was created from a block dump):

block_row_dump:
tab 0, row 0, @0x1f1f <- Symbol table entry
tl: 105 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 02 d1 fa 00 64 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42
tab 1, row 0, @0x1f1b <- Compressed row
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 00 01 00

In this example, we see that we have a symbol table entry at tab 0, row 0. All symbol table entries are associated with table 0 and row data is part of table 1 in row compression blocks. The actual row data starts at tab 1, row 0 in the example above. We see from the “tl” entry that the symbol table entry has a total length of 105 bytes, and the compressed row has a total length of only 4 bytes. If we look a little more closely at this example we see that the “bindmp” (binary dump) field for the compressed row has exactly 4 bytes which is also the same as the total length of the row. In this example the first byte is the flag byte, the second byte the lock byte, the third byte the column count and the fourth byte the entry in the symbol table.
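
Laying those four bytes out against that description:

bindmp: 2c 00 01 00
        |  |  |  |
        |  |  |  +-- symbol table reference (entry 0, the 105-byte value above)
        |  |  +----- column count (1)
        |  +-------- lock byte
        +----------- flag byte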

Update Tests

Note: All tests have been performed on an Oracle 12c (12.1.0.1) database, and all table definitions and SQL used to perform the tests have been included in the file located here.

A series of tests has been performed to show how well Oracle compresses data and how it handles various update scenarios. The first set of tests involves three tables: one with no compression (table T1), one with basic table compression (table T2) and a third with advanced row compression (table T3). First we’ll load the tables and verify the space usage, and then we will update all of the rows in each table and show the resulting space usage. To keep things simple the tables have only one column and the values used are highly compressible, just a series of 5000 rows with 100 A’s and 5000 rows with 100 B’s.
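
The exact script is in the linked file; a minimal reconstruction of the setup, with an illustrative column name, might look like this:

-- Sketch of the test setup. Basic compression only applies to
-- direct path loads, hence the APPEND hint for T2.
create table t1 (c varchar2(100));                              -- no compression
create table t2 (c varchar2(100)) compress basic;               -- basic table compression
create table t3 (c varchar2(100)) row store compress advanced;  -- advanced row compression

insert into t1 select rpad('A', 100, 'A') from dual connect by level <= 5000;
insert into t1 select rpad('B', 100, 'B') from dual connect by level <= 5000;
commit;
insert /*+ APPEND */ into t2 select c from t1;
commit;
insert into t3 select c from t1;
commit;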

The result after creating and loading the tables:

TABLE_NAME     PCT_FREE SPC_MGMT BLOCKS USED_BLOCKS FREE_BLOCKS UNUSED_BLOCKS COMPRESS TYPE
T1                   10 ASSM        256         194          62             0 DISABLED
T2                    0 ASSM         24          18           0             6 ENABLED  BASIC
T3                   10 ASSM         24          24           0             0 ENABLED  ADVANCED

We see that the best compression is obtained with basic table compression (T2). Advanced row compression is not quite as good, but with a PCTFREE of 10 versus 0 for basic table compression (the defaults), it is still pretty good (T3). Now let’s update the tables, and after updating the 5000 rows with 100 A’s to 100 B’s we get the following:

TABLE_NAME     PCT_FREE SPC_MGMT BLOCKS USED_BLOCKS FREE_BLOCKS UNUSED_BLOCKS COMPRESS TYPE
T1                   10 ASSM        256         194          62             0 DISABLED
T2                    0 ASSM         96          96           0             0 ENABLED  BASIC
T3                   10 ASSM         32          32           0             0 ENABLED  ADVANCED

We see that the space usage for the table without compression has not changed, but the two tables with compression have changed quite a bit. So what’s happened?

Updates in Basic Table Compression Tables

We already know that updates are not going to be compressed in tables using basic table compression, but why is our test table now 4 times larger, with over 5 times more used space? The answer lies in how Oracle processes updates on compressed tables. If we consider that Oracle will not re-compress the rows, or at least the columns that were updated, and if we notice that basic table compression tables are created with a default PCTFREE value of 0, then we can suspect that Oracle may have chained the rows into new blocks. Where else can it put the updated column values? The following proves our theory:

SQL> analyze table T2 list chained rows;

Table analyzed.

SQL> select table_name, count(*) from chained_rows group by table_name;

TABLE_NAME                       COUNT(*)
------------------------------ ----------
T2                                   5000
                               ----------
sum                                  5000

SQL>

We see that Oracle has indeed chained our 5000 updated rows. If we dump one of the blocks we see that Oracle has actually migrated the rows to new blocks, and since the migrated rows won’t be compressed we can begin to see why so much space was used.
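
For reference, a dump like the one below can be produced along these lines. The block header DBA 0x008000a3 decodes to relative file 2, block 163, and CHAINED_ROWS is the default table created by utlchain.sql:

-- Locate the block of one of the chained rows, then dump it.
select dbms_rowid.rowid_relative_fno(head_rowid) file#,
       dbms_rowid.rowid_block_number(head_rowid) block#
  from chained_rows
 where table_name = 'T2' and rownum = 1;

alter system dump datafile 2 block 163;
-- The formatted dump is written to the session's trace file.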

Block header dump:  0x008000a3

tab 1, row 0, @0x1f0e
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x008000b2.0
bindmp: 20 02 00 00 80 00 b2 00 00

The “nrid” field points us to the migrated row. If we look at that row:

Block header dump:  0x008000b2

block_row_dump:
tab 0, row 0, @0x1d7a
tl: 110 fb: ----FL-- lb: 0x1  cc: 1
hrid: 0x008000a3.0
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42

We see that the total length of the column is 110 bytes, so no compression. We also see that the hrid field points back to our original row, so we know these two pieces go together. The last question is: what if we increase PCTFREE to a higher value? Can we avoid row migrations, and how much space does that require? The answer is that it takes a PCTFREE of 90 (I tested in increments of 10) to avoid row migrations.
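
A sketch of how the T2A test table might be defined for this (the definition is implied rather than shown in the original):

create table t2a (c varchar2(100)) pctfree 90 compress basic;
insert /*+ APPEND */ into t2a select c from t1;
commit;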

TABLE_NAME     PCT_FREE SPC_MGMT BLOCKS USED_BLOCKS FREE_BLOCKS UNUSED_BLOCKS COMPRESS TYPE
T1                   10 ASSM        256         194          62             0 DISABLED
T2                    0 ASSM         96          96           0             0 ENABLED  BASIC
T2A                  90 ASSM        256         177           0            79 ENABLED  BASIC
T3                   10 ASSM         32          32           0             0 ENABLED  ADVANCED

Table T2A is the test table, and the problem is that it requires just as much space as the non-compressed table T1. Granted, this is an extreme example, but the takeaway is that tables compressed with basic table compression should rarely, if ever, be updated.

Updates in Advanced Row Compression Tables

We have seen that updates are not compressed in basic table compression tables, so what about advanced row compression tables? After all, the whole point of advanced row compression is to support OLTP workloads with single row inserts and updates. If we recall our simple test above, our advanced row compression table grew from 24 blocks to 32 blocks after our update. Our update just changed the values in 5000 of the rows to the values of the other 5000 rows. In a perfect scenario the space shouldn’t have grown at all. So what’s going on?

I realized that the original test had just inserted 5000 rows with 100 A’s and then 5000 rows with 100 B’s. I was concerned that this might have skewed the update results, since most blocks would have had either all A’s or all B’s with only one symbol table entry. So I performed a second test that looped 5000 times, inserting a row with 100 A’s and then a row with 100 B’s. The second test effectively interleaves the rows and therefore ensures that most blocks will have both rows with A’s and rows with B’s. The total space usage was the same for both tables, but there was a slight difference in the number of rows per block, and that appears to be due to the space required in each block for two symbol table entries versus only one (at least that’s the only difference that I could find).

After the updates the space usage for both tests was also the same: growth from 24 blocks to 32 blocks. However, in both cases the end result matches what occurred in the basic table compression test: the updates are not compressed. There is a difference with advanced row compression in that it uses a default PCTFREE value of 10 rather than 0. This means that some rows can be updated in place, and for the rest of the rows we see the same row migrations, but now those migrated rows are compressed! This explains why the space usage is quite a bit less than with basic table compression.

Here’s an example block dump from the second table with interleaved rows:

Block header dump:  0x008001a5

block_row_dump:
tab 0, row 0, @0x1f2f
tl: 105 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 00 00 fa 00 64 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
tab 0, row 1, @0x1ec6
tl: 105 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 01 13 fa 00 64 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
tab 1, row 0, @0x1206
tl: 106 fb: --H-FL-- lb: 0x2  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 02 01 fa 00 64 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
tab 1, row 1, @0x1ebe
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 00 01 01

< data skipped >

tab 1, row 15, @0x1e86
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 00 01 01
tab 1, row 16, @0xf17
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x008001ff.f0
bindmp: 20 02 00 00 80 01 ff 00 f0
tab 1, row 17, @0x1e7e
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 00 01 01

tab 1, row 18, @0xf0e
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x008001ff.f1                
bindmp: 20 02 00 00 80 01 ff 00 f1

The block dump excerpt above shows us that we have two symbol table entries for our 100 character A’s and B’s at tab 0, row 0 and tab 0, row 1. We also see that our first data row at tab 1, row 0 is uncompressed at 106 bytes, but the very next row is compressed at only 4 bytes. This alternating behavior continues through the rest of the block dump, but when we get to row 16 we see our first row migration. We can assume that this occurs as we exhaust the free space reserved by the PCTFREE setting of 10, and based on our data interleaving this seems to make sense. The row migration for row 18 was followed to its destination block below. Note that the block header DBA matches the "nrid" DBA (next rowid), and the row number f1 in hex is row 241 in decimal (the "nrid" field is of the format <data block address>.<row number> in hex).

This entry is also shown below:

Block header dump:  0x008001ff

block_row_dump:
tab 0, row 0, @0x1d7f
tl: 105 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 01 df fa 00 64 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42

tab 1, row 241, @0x140b
tl: 10 fb: ----FL-- lb: 0x1  cc: 1
hrid: 0x008001a5.12
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 0c 01 01 00 80 01 a5 00 12 00

It’s very interesting to note that the migrated row is compressed. The “tl” field, or total length, is only 10 bytes for a 100 byte column, and we see that there is a symbol table entry at tab 0, row 0.

This begs the question: are updates ever compressed in the same block? The answer lies in how advanced row compression works. Compression is only attempted when the free space in the block is exhausted. If a compression can take place then the rows in the block are compressed. Before the first compression, an update is really the same as an insert because there are no compressed rows to update. However, after compression has taken place, if a row that has been compressed is updated, then the resulting row can be re-compressed only if another compression is triggered. This is a key point: another compression has to be triggered by filling the block again. If, however, all free space has been exhausted, or subsequent inserts don’t trigger another compression, then the rows that have been updated will stay uncompressed, and subsequent updates will cause row migrations once all space in the block is exhausted (i.e. the space reserved by the PCTFREE setting). I said inserts in the previous sentence because in my testing I only observed compression being triggered by inserts.

Let’s show an example of an update getting compressed. The idea here is that as a block is loaded it will reach a point where all free space will be exhausted. Assuming that inserts are what is filling the block a compression will be triggered. Once the block is compressed then more rows can be added to the block. Depending on the data and how compressible it is, there can be many compressions triggered. This test will trigger an initial compression, update compressed rows, and then trigger additional compressions with more inserts. We will then dump the block to prove that our updated rows were indeed compressed.

Based on dumping table T1 we find that we get about 68 uncompressed rows per block. A new table is created with 50 rows of 100 A’s, 3 rows of 100 B’s and 20 more rows of 100 A’s. This gives us a total of 73 rows, and they will only fit in one block with compression. We can verify that compression has been invoked by dumping the block and checking the rows.
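
The load just described can be sketched as follows (the table name T4 is illustrative; the original script isn't shown):

create table t4 (c varchar2(100)) row store compress advanced;
insert into t4 select rpad('A', 100, 'A') from dual connect by level <= 50;
insert into t4 select rpad('B', 100, 'B') from dual connect by level <= 3;
insert into t4 select rpad('A', 100, 'A') from dual connect by level <= 20;
commit;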

Block header dump:  0x008000a5

block_row_dump:
tab 0, row 0, @0x1f2f
tl: 105 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 00 40 fa 00 64 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41
tab 0, row 1, @0x1ec6
tl: 105 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 00 03 fa 00 64 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42
tab 1, row 0, @0x1ec2
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00
tab 1, row 1, @0x1ebe
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00

tab 1, row 49, @0x1dfe
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00
tab 1, row 50, @0x1dfa
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 00 01 01
tab 1, row 51, @0x1df6
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 00 01 01
tab 1, row 52, @0x1df2
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
bindmp: 2c 00 01 01
tab 1, row 53, @0x1dee
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00

This dump shows us that we have two symbol table entries, and that all rows are compressed except rows 67 through 72 (not shown). We also see that our three rows with B’s are at rows 50, 51 and 52, and they are compressed with total lengths of 4 bytes. Next we’ll update the three rows of B’s to A’s and verify that the updates have taken place.
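
Continuing the sketch with the illustrative table name, the update is simply:

update t4 set c = rpad('A', 100, 'A') where c = rpad('B', 100, 'B');
commit;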

Block header dump:  0x008000a5

tab 1, row 49, @0x1dfe
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00
tab 1, row 50, @0x1ad4
tl: 106 fb: --H-FL-- lb: 0x2  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 02 01 fa 00 64 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41
tab 1, row 51, @0x1a6a
tl: 106 fb: --H-FL-- lb: 0x2  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 02 01 fa 00 64 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41
tab 1, row 52, @0x1a00
tl: 106 fb: --H-FL-- lb: 0x2  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 02 01 fa 00 64 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41
tab 1, row 53, @0x1dee
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00

We see that rows 50, 51 and 52 have been updated and are no longer compressed because their total lengths are now 106 bytes. They also have not been migrated so they reside completely within the block. We do see that the surrounding rows are still compressed and have not been changed. Lastly we insert an additional 600 rows of 100 A’s with a commit after each, and see if we can trigger additional compressions.
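
Continuing the sketch, the row-by-row load with a commit after each insert might look like:

begin
  for i in 1 .. 600 loop
    insert into t4 values (rpad('A', 100, 'A'));
    commit;
  end loop;
end;
/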

Block header dump:  0x008000a5

block_row_dump:
tab 0, row 0, @0x1f2f
tl: 105 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 02 2e fa 00 64 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41
tab 1, row 0, @0x1f2b
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00


tab 1, row 50, @0x1e63
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00
tab 1, row 51, @0x1e5f
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00
tab 1, row 52, @0x1e5b
tl: 4 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [100]
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
bindmp: 2c 00 01 00

Now all rows are compressed up to row 557 with rows 558 through 566 remaining uncompressed. The inserts have caused a new block to be added with an additional 106 rows (not shown). We also see that there are no row migrations (from analyzing for chained rows – not shown).

Another way of verifying whether compressions have been triggered is to examine the "HSC" statistics for the session performing the DML (see Master Note for OLTP Compression (Doc ID 1223705.1) for a list of the specific statistics associated with row compression).
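
These can be captured for the current session with a query along these lines:

select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and (n.name like 'HSC%' or n.name = 'heap block compress')
 order by n.name;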

In this case the statistics were captured below:

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
HSC Compressed Segment Block Changes                                    620
HSC Heap Segment Block Changes                                          621
HSC OLTP Compressed Blocks                                                1
HSC OLTP Non Compressible Blocks                                          1
HSC OLTP Space Saving                                                 54271
HSC OLTP positive compression                                            20
HSC OLTP recursive compression                                           20
heap block compress                                                      20

We see that the statistic “HSC OLTP positive compression” has a value of 20 which means that 20 successful compressions were performed during the insertion of the additional 600 rows. Now these weren’t all performed on the original block because an additional block was added to hold the additional rows with a total of 567 rows in the first block and 106 rows in the second. An inspection of the second block shows that 67 rows were compressed and the remaining 39 uncompressed (block dump not shown).

This behavior of inserts triggering compression when a space threshold is reached also helps to explain why advanced row compression rarely compresses as well as basic table compression. There are usually uncompressed rows left over in the blocks of advanced row compression tables that could have been compressed, and this reduces the space savings compared to basic table compression.

Summary

So we have seen that updates can be compressed in tables using advanced row compression. This seems to be most beneficial for tables that see a mixture of DML activity rather than tables that are loaded first and then updated. This is a key takeaway about the behavior of advanced row compression, because we have seen that it takes an insert to trigger a block compression. Although updates are never compressed with basic table compression, if a table is going to be loaded and not updated then basic table compression can achieve the best compression possible. If a table will experience updates then advanced row compression should be used. If there are issues with single row insert performance on tables with advanced row compression, then the table should be loaded first without compression, and once most insert activity is complete it can be compressed with advanced row compression.

About

The Database Storage Optimization blog is written by the Oracle product management team and highlights features and issues associated with database storage optimization. The views expressed on this blog are those of the author(s) and do not necessarily reflect the views of Oracle.
