
Master Note for OLTP Compression



Master Note for OLTP Compression [ID 1223705.1]

 Modified 18-NOV-2010     Type BULLETIN     Status PUBLISHED 

In this Document

  Purpose

  Scope and Application

  Master Note for OLTP Compression

     Concepts

     Compression Effectiveness and Performance

     Concurrency / Locking

     Administration

     Limitations / Restrictions

     Testing and Results

     Waits

     Statistics

     Bugs

     Notes

     Other References


Applies to:


Oracle Server - Enterprise Edition - Version: 11.1.0.6 to 11.2.0.0 - Release: 11.1 to 11.2
Information in this document applies to any platform.

Purpose


This document describes OLTP Compression, a feature introduced in Oracle Database 11g, and provides usage guidelines that help maximize compression ratio and performance.

Scope and Application


The Oracle 11g Advanced Compression Option includes OLTP Compression, SecureFiles Deduplication and Compression, Recovery Manager Compression, Data Pump Compression, and Data Guard redo compression. OLTP Compression is one of several types of heap segment compression (HSC). Because shared terminology is important for this topic, an excerpt from the manual is provided below for clarity.

    * When you enable table compression by specifying either COMPRESS or COMPRESS BASIC, you enable basic table compression. Oracle Database attempts to compress data during direct-path INSERT operations when it is productive to do so. The original import utility (imp) does not support direct-path INSERT, and therefore cannot import data in a compressed format.

      In earlier releases, this type of compression was called DSS table compression and was enabled using COMPRESS FOR DIRECT_LOAD OPERATIONS. This syntax has been deprecated.

      See Also: "Conventional and Direct-Path INSERT" for information on direct-path INSERT operations, including restrictions

    * When you enable table compression by specifying COMPRESS FOR OLTP, you enable OLTP table compression. Oracle Database compresses data during all DML operations on the table. This form of compression is recommended for OLTP environments.

      In earlier releases, OLTP table compression was enabled using COMPRESS FOR ALL OPERATIONS. This syntax has been deprecated.

      Note: Tables with COMPRESS or COMPRESS BASIC use a PCTFREE value of 0 to maximize compression, unless you explicitly set a value for PCTFREE in the physical_attributes_clause. Tables with COMPRESS FOR OLTP or NOCOMPRESS use the PCTFREE default value of 10, to maximize compression while still allowing for some future DML changes to the data, unless you override this default explicitly.

    * When you specify COMPRESS FOR QUERY or COMPRESS FOR ARCHIVE, you enable hybrid columnar compression. With hybrid columnar compression, data can be compressed during bulk load operations. During the load process, data is transformed into a column-oriented format and then compressed. Oracle Database uses a compression algorithm appropriate for the level you specify. In general, the higher the level, the greater the compression ratio.

      Hybrid columnar compression can result in higher compression ratios, at a greater CPU cost. Therefore, this form of compression is recommended for data that is not frequently updated.

      COMPRESS FOR QUERY is useful in data warehousing environments. Valid values are LOW and HIGH, with HIGH providing a higher compression ratio. The default is HIGH.

      COMPRESS FOR ARCHIVE uses higher compression ratios than COMPRESS FOR QUERY, and is useful for compressing data that will be stored for long periods of time. Valid values are LOW and HIGH, with HIGH providing the highest possible compression ratio. The default is LOW.

      See Also: Oracle Exadata Storage Server Software documentation for more information on hybrid columnar compression, which is a feature of Oracle Exadata.


This document focuses on OLTP compression and discusses basic compression only briefly, as it applies to tables using OLTP compression.
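
For reference, a minimal sketch of the syntax forms described above (the table names and columns here are hypothetical, not part of the manual excerpt):

CREATE TABLE orders_hist      -- hypothetical table
( id     NUMBER
, status VARCHAR2(10)
) COMPRESS BASIC;             -- basic compression: direct-path loads only

CREATE TABLE orders           -- hypothetical table
( id     NUMBER
, status VARCHAR2(10)
) COMPRESS FOR OLTP;          -- compression during all DML operations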

Master Note for OLTP Compression


Concepts

OLTP compression differs from the basic compression found in prior versions in that it allows compression during DML operations, not just during direct path loads. When a table is set to use OLTP compression and rows are inserted into a new block in that table, they are inserted uncompressed, just as with a non-compressed table. When enough inserts fill the block for it to be considered full, it would ordinarily become unavailable for further inserts. At this point compression is attempted on the block. If the compression gains sufficient free space, the compression is retained and the block remains a candidate for further insertions. If the compression fails to gain sufficient free space, the compression attempt is discarded and the block is considered full.

The block itself contains the symbol table that facilitates the one-time storage of repeated values. Each block's compression is completely self-contained; no information from outside the block is required to access the information in the block. When a block is accessed it is not decompressed; it remains compressed when it is read into the buffer cache. This extends the benefits of the data compression by allowing more rows to be retained in the buffer cache without having to increase the memory allocation. It also means that reading a compressed block has no additional overhead when compared to a non-compressed block.

Each time a block is compressed successfully, an image of the block prior to compression must be retained in undo for consistent read and transaction rollback. For this reason the amount of undo created when updating a table set for OLTP compression will be greater than the amount of undo created for a non-compressed table. Likewise, whatever is written to undo is also written to the redo logs, so there is a corresponding increase in redo when OLTP compression is used.

Updates generally involve the same operations as inserts. Like inserts, updates are not immediately compressed. When updating an already compressed block, any columns that are not updated are not uncompressed; they continue to use the symbol table and retain their compression. Updated columns are initially stored as in any uncompressed block. When the block approaches full, compression is attempted, with the same implications as for the inserts described above.

Deletes from a compressed block are identical to deletes from a non-compressed block; there is no change to consider. If deletes cause a block that was full to regain sufficient free space, the block is again considered for new inserts.

Because blocks are retained in compressed form in the buffer cache, queries and deletes are expected to function at least as quickly as against non-compressed tables and may gain in performance due to fewer physical reads. Inserts and updates will often have no additional overhead, but will occasionally trigger a compression of a block. At that time there is an increase in CPU consumption and, if the compression is successful, a corresponding increase in undo and redo generation.

Compression Effectiveness and Performance

The concepts described above suggest factors that will improve or degrade the effectiveness and performance of OLTP compression. Some of those factors are listed below and are verified by focused test cases that manipulate those factors and monitor the results.

Improving Compression Ratio

  • Use larger blocks (a sketch follows this list).
  • Increase data redundancy within a given block (order of load, partitioning).
  • Offload non-redundant columns (particularly larger columns).
  • Use direct path loads when possible.
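
A minimal sketch of the first point (the tablespace name, datafile, and cache size below are illustrative assumptions, not part of the original tests): a table can be rebuilt in a tablespace that uses a larger block size.

-- A buffer cache for the non-default block size must exist first.
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Hypothetical 16K-block tablespace.
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/ts_16k01.dbf' SIZE 1G
  BLOCKSIZE 16K;

-- Rebuild the table in the larger-block tablespace.
ALTER TABLE actest MOVE TABLESPACE ts_16k;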

Improving Performance

  • When loading many rows use direct path inserts (one compression per block instead of many).
  • Order inserts to take best advantage of redundancy (a sketch follows below).
  • Leave sufficient free space (PCTFREE) so that updates that lengthen the row fit within the block.
  • Improving the compression ratio will lead to fewer physical reads and improved performance, particularly with queries and deletes.

Performance is expected to improve for queries and deletes due to the reduction in physical reads and the absence of additional overhead for those operations. Concurrency may suffer due to increased rows per block and so should be monitored and addressed if needed. Performance for updates can benefit from fewer physical reads, but there is occasional overhead when blocks must be compressed. Performance for inserts has the occasional overhead of block compression as well, so it is important to use direct path when possible and otherwise to minimize the number of times a given block is compressed.
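
A minimal sketch of the first two points, assuming the rows to be loaded already sit in a staging table (staging_actest is hypothetical):

-- Direct-path load (at most one compression per block) with rows
-- ordered on the redundant columns so that repeated values land
-- in the same blocks.
INSERT /*+ APPEND */ INTO actest
SELECT pk, fk1, fk2, fk3, nr_payload
FROM   staging_actest
ORDER  BY fk1, fk2, fk3;
COMMIT;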

Query optimization via the cost-based optimizer is the same as with non-compressed tables. The statistics for a compressed table, however, are different from those of the same table in non-compressed form, so execution plan differences can occur and are to be expected.

Concurrency / Locking

Locking works the same with compressed and uncompressed blocks, except that there are often far more rows per block due to the benefits of compression. Consideration should be given to the number of active transactions expected in a given block at any moment. If ITL waits are observed, increase the ITLs in the block. This can be accomplished by raising PCTFREE so that blocks are not completely filled by updates that increase row length, which helps prevent row migration as well. Another option is to raise INITRANS to force a minimum number of ITLs in each block.

If there is contention during inserts, with too many sessions attempting to insert into the same blocks, and if MSSM (Manual Segment Space Management) is used, then consider raising the number of freelists. This will spread the inserts over more blocks and reduce contention.
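
A hedged sketch of these adjustments, using the test table from this note (the values shown are illustrative only):

-- Force a minimum number of ITL slots in each new block.
ALTER TABLE actest INITRANS 8;

-- Leave more free space for ITL growth and for updates that lengthen rows.
ALTER TABLE actest PCTFREE 20;

-- MSSM only: spread concurrent inserts over more blocks.
ALTER TABLE actest STORAGE (FREELISTS 4);

Note that these attributes affect new blocks; a reorganization (for example, ALTER TABLE ... MOVE) is needed to apply them to existing blocks.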

Administration

Table attributes for compression can be changed by altering the table; once this is done, new activity will honor the setting. Existing blocks are not immediately modified, so if that is needed the table must be reorganized; ALTER TABLE ... MOVE is one common way to do this. The [dba|user|all]_tables views have two columns for compression: 1) COMPRESSION, which indicates whether the table is compressed, and 2) COMPRESS_FOR, which indicates the type of compression used for the table.

Limitations / Restrictions

[OLTP] Table compression is subject to the following restrictions:

    * COMPRESS FOR OLTP and COMPRESS BASIC are not supported for tables with more than 255 columns.

    * You cannot drop a column from a table that uses COMPRESS BASIC, although you can set such a column as unused. All of the operations of the ALTER TABLE ... drop_column_clause are valid for tables that use COMPRESS FOR OLTP, COMPRESS FOR QUERY, and COMPRESS FOR ARCHIVE.

    * If you specify COMPRESS FOR OLTP, then chained rows are not compressed unless the header for the row remains in the original block and all row columns are moved to another block. If the row chaining results in leaving just the row header in the block and moving all of the row's columns to the next block, and they all fit in the next block, then the columns can be compressed.

    * You cannot specify any type of table compression for an index-organized table, any overflow segment or partition of an overflow segment, or any mapping table segment of an index-organized table.

    * You cannot specify any type of table compression for external tables or for tables that are part of a cluster.
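
As an illustration of the first restriction above, the following query (a sketch, not part of the original note) lists tables in the current schema whose column count rules out COMPRESS FOR OLTP and COMPRESS BASIC:

SELECT table_name, COUNT(*) AS column_count
FROM   user_tab_columns
GROUP  BY table_name
HAVING COUNT(*) > 255;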


Testing and Results

Overview

Some simple testing was conducted to support the information and recommended practices in this note. This testing is not intended to establish benchmarks or to be particularly thorough, but it is sufficient to illustrate some of the concepts and points mentioned above.

The following practices were observed during the testing in order to improve the value of the statistics (a sketch of these practices follows the list):
  • Statistics were collected from another session, not the test session.
  • If a given test modified the test table it was recreated and loaded prior to subsequent tests.
  • The buffer cache was cleared prior to each test to avoid the effect of a warm cache on the results.
  • The log buffer was raised to 64M to minimize the effect of log buffer space activity on the test results.
  • Log switches and checkpoints were managed so that they did not occur during the tests.
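
A sketch of how these practices might be scripted (the bind variable :test_sid and the statistic list are illustrative assumptions):

-- Clear the buffer cache before each test run.
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- From a second session, sample the test session's statistics
-- before and after each test and take the deltas.
SELECT n.name, s.value
FROM   v$sesstat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  s.sid = :test_sid
AND    ( n.name LIKE 'HSC%'
      OR n.name IN ('CPU used by this session',
                    'redo size',
                    'undo change vector size') );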

Test Scenarios

Scenario                              Redundancy  Load Method   Compression  Percent Free
------------------------------------  ----------  ------------  -----------  ------------
#1: Non-Compressed Table              Yes         Conventional  None         10%
#2: Compressed Without Redundancy     No          Conventional  OLTP         10%
#3: Compressed With Redundancy 10%    Yes         Conventional  OLTP         10%
#4: Compressed With Redundancy 20%    Yes         Conventional  OLTP         20%
#5: Compressed With Direct Path Load  Yes         Direct Path   OLTP         10%

Test Schema

The testing was performed with the
following table. The compression and storage were modified for each test
scenario by replacement of the &1 variable. Note that there are no
true foreign key relationships here as there are no other tables
involved in the testing. The column names were simply meant to convey
the nature of the data in the column, i.e., the frequency of redundant
values. There were no indexes on this table during testing, and none of
the test SQL were reliant on indexes being present for an appropriate
execution plan.
CREATE TABLE actest
( pk NUMBER
, fk1 VARCHAR2(15)
, fk2 DATE
, fk3 NUMBER
, nr_payload VARCHAR2(80)
)
&1;
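
For example, for scenario #3 the &1 would presumably have been replaced so that the statement reads:

CREATE TABLE actest
( pk NUMBER
, fk1 VARCHAR2(15)
, fk2 DATE
, fk3 NUMBER
, nr_payload VARCHAR2(80)
)
PCTFREE 10 COMPRESS FOR OLTP;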

Test Results

TEST #1: Inserts
The data for the testing was generated with two different insert statements, one intended to generate data with some redundancy within each block and the other intended to avoid any redundancy within the blocks. The same statements were used for direct path (append) and conventional path inserts, so the &2 was replaced with either /*+ APPEND */ or /* no append */ as called for by the test scenario.
INSERT &2 /* nonredundant */ INTO actest
SELECT
    level-1,
    TO_CHAR(MOD(level-1,200),'09999999999999'),
    TRUNC(SYSDATE + MOD(level-1,500)),
    MOD(level-1,1000),
    LPAD(TO_CHAR(level-1,'099999999999999'),80,'X')
FROM dual
CONNECT BY LEVEL <= 1000000;

INSERT &2 /* redundant */ INTO actest
SELECT
    level-1,
    TO_CHAR(TRUNC((level-1)/5000),'09999999999999'),
    TRUNC(SYSDATE + TRUNC((level-1)/2000)),
    TRUNC((level-1)/1000),
    LPAD(TO_CHAR(level-1,'099999999999999'),80,'X')
FROM dual
CONNECT BY LEVEL <= 1000000;

The following statistics show the results of the data loads / inserts from the five scenarios.

Storage is reflected in the "Segment blocks used" row.

  • Note that the storage for the non-compressed table and the
    storage for the compressed table with non-redundant data are comparable.
    OLTP compression attempts to compress a block and when the data in that
    block does not occupy less space than before, the block remains
    uncompressed.

  • When the table was set to PCTFREE 10 the data occupied about 4%
    fewer blocks than when it was set to PCTFREE 20. While it is expected
    that more blocks would be required in the PCTFREE 20 case, this is a
    smaller penalty than might be expected from a change from 10% to 20%
    free space per block. This is highly dependent on row length and block
    size; larger blocks and/or shorter rows would make the addition of each
    row a smaller change in the percentage of the block occupied and so
    would lead to numbers nearer expectations.

  • The insert direct load (IDL) scenario used the least storage of all the scenarios tested.

Performance is reflected primarily in the "CPU used by this session" statistic, though some of the other statistics directly impact this evaluation.

  • Compression overhead is related to database configuration,
    system resources, and the data involved; it is application specific. It
    is best measured in terms of CPU time as this is the most direct
    resource involved in compression. There is also an increase in undo and
    redo when using compression so log related waits can play a vital role
    in compression performance tuning and analysis.

  • For this specific test case the inserts into the compressed
    table required 7.11, 7.23, and 7.96 seconds of CPU depending on the
    scenario involved, whereas the inserts into the non-compressed table
    required 5.73 seconds of CPU. All that can be done to tune this specific
    difference is to reduce the number of times that blocks are compressed.
    Direct path loads assure a single compression per block of data loaded
    and so are optimal in this regard, causing the least overhead in terms
    of CPU and reducing undo and redo usage due to the nature of direct
    path.

  • The increases in undo and redo in the compression cases are due
    to the pre-images of the blocks prior to compression being saved into
    undo and subsequently being written to redo.

                                      Scenario #1     Scenario #2     Scenario #3     Scenario #4     Scenario #5
                                      --------------  --------------  --------------  --------------  --------------
                                      Non-Compressed  Compressed      Compressed      Compressed      Compressed
                                      Redundant       Non-Redundant   Redundant       Redundant       Redundant
                                      Conventional    Conventional    Conventional    Conventional    Direct Path
                                      PCTFREE 10      PCTFREE 10      PCTFREE 10      PCTFREE 20      PCTFREE 10
                                      --------------  --------------  --------------  --------------  --------------

Segment blocks used                            8,195           8,197           7,223           7,498           6,500
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
Elapsed time (seconds)                   00:00:06.06     00:00:07.17     00:00:07.62     00:00:07.96     00:00:06.53
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
CPU used by this session (seconds)              5.73            7.11            7.23            7.96            6.02
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP positive compression                      0               0           7,920          14,444               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP negative compression                      0           7,230               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC IDL Compressed Blocks                          0               0               0               0           6,504
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP Space Saving                              0               0       7,341,774      11,875,780               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
redo size                                 67,254,244     129,084,064     136,293,420     195,226,432         214,684
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
undo change vector size                    2,068,532      61,757,140      68,351,796     122,220,732          48,736

TEST #2: Query performing a full table scan
SELECT COUNT(1) FROM (SELECT /*+ NO_MERGE */ * FROM actest);


Full table scans generally benefit from smaller segments. The actual read of a compressed row is no more involved than the read of a non-compressed row, as the column piece for any compressed value is simply read from the symbol table in the block instead of from within the row piece. For that reason there is no expectation of greater CPU usage attributable to decompression when reading compressed data than when reading non-compressed data. To the contrary, it is expected that CPU and IO consumption will decrease due to retrieving the same data from fewer blocks.

                                      Scenario #1     Scenario #2     Scenario #3     Scenario #4     Scenario #5
                                      --------------  --------------  --------------  --------------  --------------
                                      Non-Compressed  Compressed      Compressed      Compressed      Compressed
                                      Redundant       Non-Redundant   Redundant       Redundant       Redundant
                                      Conventional    Conventional    Conventional    Conventional    Direct Path
                                      PCTFREE 10      PCTFREE 10      PCTFREE 10      PCTFREE 20      PCTFREE 10
                                      --------------  --------------  --------------  --------------  --------------

Segment blocks used                            8,195           8,197           7,223           7,498           6,500
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
Elapsed time (seconds)                   00:00:01.62     00:00:01.32     00:00:01.38     00:00:01.32     00:00:01.29
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
CPU used by this session (seconds)              1.42            1.32            1.38            1.32            1.30
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP positive compression                      0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP negative compression                      0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC IDL Compressed Blocks                          0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP Space Saving                              0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
redo size                                          0               0               0               0          11,388
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
undo change vector size                            0               0               0               0               0
TEST #3: Query repeated 1,000,000 times reading single rows by rowid in random order
The random sequence for row retrieval was set prior to testing and maintained throughout all the test scenarios so that the results would be consistent. The following code was used to generate the random values. This was done only once, prior to all testing.

CREATE TABLE acrandom
( pk NUMBER
, rn NUMBER
, pkrowid ROWID
);

INSERT INTO acrandom SELECT LEVEL-1 pk, DBMS_RANDOM.RANDOM(), NULL FROM dual CONNECT BY LEVEL <= 1000000;

The following code was used after each load to associate the rowids from the test table with the random ordering in the acrandom table.

UPDATE acrandom r
SET pkrowid = (
        SELECT t.ROWID trowid
        FROM actest t
        WHERE r.pk = t.pk
);

The following code was used to access the rows from the actest table by rowid, in the random order established above.

DECLARE
    vRows NUMBER;
BEGIN
    FOR r IN (SELECT pkrowid FROM acrandom ORDER BY rn, pk) LOOP
        SELECT COUNT(1) INTO vRows FROM (SELECT /*+ NO_MERGE */ * FROM actest WHERE rowid = r.pkrowid);
    END LOOP;
END;

This test exercised the same number of random reads in each case, so elapsed time and CPU time are roughly the same across the scenarios. This is another verification that reading a compressed block is no more involved than reading a non-compressed block, as discussed in the prior test. On a larger system the compression will have benefits for random reads as long as physical reads are decreased as a result of a higher buffer cache hit ratio. Clearly the same number of rows is accessed from fewer blocks, leading to fewer physical reads and the same data preserved in the buffer cache using less memory.

                                      Scenario #1     Scenario #2     Scenario #3     Scenario #4     Scenario #5
                                      --------------  --------------  --------------  --------------  --------------
                                      Non-Compressed  Compressed      Compressed      Compressed      Compressed
                                      Redundant       Non-Redundant   Redundant       Redundant       Redundant
                                      Conventional    Conventional    Conventional    Conventional    Direct Path
                                      PCTFREE 10      PCTFREE 10      PCTFREE 10      PCTFREE 20      PCTFREE 10
                                      --------------  --------------  --------------  --------------  --------------

Segment blocks used                            8,195           8,197           7,223           7,498           6,500
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
Elapsed time (seconds)                   00:02:24.38     00:02:23.73     00:02:23.97     00:02:27.22     00:02:23.96
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
CPU used by this session (seconds)            143.95          143.28          142.80          143.24          143.17
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP positive compression                      0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP negative compression                      0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC IDL Compressed Blocks                          0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP Space Saving                              0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
redo size                                        188               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
undo change vector size                            0               0               0               0               0
TEST #4: Update all rows setting a non-redundant column to the same value
The following statement was run:
UPDATE actest SET fk3 = fk3;

This test shows that updating a column to the same value does (as expected) generate an update to the row and the ensuing undo and redo that entails. This is not a change of behavior for compression; it works this way with non-compressed tables as well. It is necessary in order to provide expected behavior in other areas, such as triggers, auditing, recovery, and so on. In the third scenario the time increased; this is because when a column is updated it is then stored in the row piece instead of the symbol table. Columns that are not updated still reside in the symbol table unchanged. In particular, the PCTFREE 10 scenario resulted in more block compressions, as the column updates filled the free space, triggering the compressions. The PCTFREE 20 case had more free space to work with and so had fewer block compressions. The direct path table took the longest to update and had the most block compressions.

                                      Scenario #1     Scenario #2     Scenario #3     Scenario #4     Scenario #5
                                      --------------  --------------  --------------  --------------  --------------
                                      Non-Compressed  Compressed      Compressed      Compressed      Compressed
                                      Redundant       Non-Redundant   Redundant       Redundant       Redundant
                                      Conventional    Conventional    Conventional    Conventional    Direct Path
                                      PCTFREE 10      PCTFREE 10      PCTFREE 10      PCTFREE 20      PCTFREE 10
                                      --------------  --------------  --------------  --------------  --------------

Segment blocks used                            8,195           8,197           7,223           7,498           6,500
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
Elapsed time (seconds)                   00:00:06.64     00:00:04.98     00:00:12.85     00:00:11.76     00:00:21.02
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
CPU used by this session (seconds)              6.24            4.76           11.23           11.54           18.14
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP positive compression                      0               0           1,425             536           2,504
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP negative compression                      0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC IDL Compressed Blocks                          0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP Space Saving                              0               0       1,095,027         343,593       2,065,327
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
redo size                                153,024,936     128,612,352     291,122,420     358,759,236               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
undo change vector size                   56,320,708      45,937,872     118,919,436     146,953,284     170,251,100
TEST #5: Update all rows setting a redundant column to a new value not present prior to the update
UPDATE actest SET fk2 = TRUNC(SYSDATE)-1;


This test showed results similar to the prior test, as it did not matter whether the column update value was in the symbol table or not; the value would be stored in the row, consume some free space, and lead to a block compression. What is different about this test is that the new value was redundant in all the tests, and more redundant than even the prior redundant data. The timings in the third and fifth scenarios were high for this test, and those timings were not due to waits, as shown by the CPU time consumed in each. This result is being reviewed internally.

                                      Scenario #1     Scenario #2     Scenario #3     Scenario #4     Scenario #5
                                      --------------  --------------  --------------  --------------  --------------
                                      Non-Compressed  Compressed      Compressed      Compressed      Compressed
                                      Redundant       Non-Redundant   Redundant       Redundant       Redundant
                                      Conventional    Conventional    Conventional    Conventional    Direct Path
                                      PCTFREE 10      PCTFREE 10      PCTFREE 10      PCTFREE 20      PCTFREE 10
                                      --------------  --------------  --------------  --------------  --------------

Segment blocks used                            8,195           8,197           7,223           7,498           6,500
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
Elapsed time (seconds)                   00:00:03.04     00:00:04.66     00:00:33.15     00:00:11.37     00:01:27.11
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
CPU used by this session (seconds)              3.04            4.58           32.82            9.34           86.50
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP positive compression                      0               0             947           1,498           1,184
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP negative compression                      0               0               0               1               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC IDL Compressed Blocks                          0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP Space Saving                              0               0         876,612         341,639       1,148,648
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
redo size                                 52,710,596     132,373,360     223,392,200     250,096,808     401,957,704
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
undo change vector size                   24,488,364      48,002,288      94,185,364     113,112,500     122,999,392
TEST #6: Delete all rows
This test was included in order to illustrate the impact of compression on deletes. In essence, deletes enjoy the same advantages as queries in that they access fewer blocks to accomplish their task. This leads to fewer physical reads and an advantage in the buffer cache as discussed earlier. There is also the advantage that the same rows are deleted from fewer blocks, leading to fewer writes.
DELETE actest;

The test showed about a 10% improvement in the third and fourth scenarios, which is expected to carry through to larger real-life load profiles. The first scenario implies that some time was lost due to waits, as shown by the greater difference between its elapsed time and CPU time, so the 10% was calculated based on CPU time. The direct path case was not as much of an improvement, though still better than the non-compressed case, as it generated a greater amount of redo than the other scenarios.

                                      Scenario #1     Scenario #2     Scenario #3     Scenario #4     Scenario #5
                                      --------------  --------------  --------------  --------------  --------------
                                      Non-Compressed  Compressed      Compressed      Compressed      Compressed
                                      Redundant       Non-Redundant   Redundant       Redundant       Redundant
                                      Conventional    Conventional    Conventional    Conventional    Direct Path
                                      PCTFREE 10      PCTFREE 10      PCTFREE 10      PCTFREE 20      PCTFREE 10
                                      --------------  --------------  --------------  --------------  --------------

Segment blocks used                                0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
Elapsed time (seconds)                   00:00:09.45     00:00:08.86     00:00:06.53     00:00:05.73     00:00:07.05
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
CPU used by this session (seconds)              6.40            6.50            5.83            5.64            6.24
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP positive compression                      0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP negative compression                      0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC IDL Compressed Blocks                          0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
HSC OLTP Space Saving                              0               0               0               0               0
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
redo size                                182,547,824     182,540,852     193,953,064     186,564,480     284,471,432
           -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
undo change vector size                  112,003,216     112,003,456     116,767,100     113,505,684     120,694,960

Waits

There are no added wait events specific to OLTP compression. If a block is being compressed and another session attempts to access it, the wait event will be "buffer busy waits", as it is with other block manipulations in the buffer cache.

Statistics


Statistic #  Statistic                             Description
-----------  ------------------------------------  ----------------------------------------------------------------
407          HSC OLTP Space Saving                 Total number of bytes saved using OLTP compression; the delta
                                                   from each compression is added to this total
408          HSC OLTP Compressed Blocks            Number of blocks compressed using OLTP compression; counts only
                                                   the first compression in the OLTP code
409          HSC IDL Compressed Blocks             Number of blocks compressed using insert direct load (using DSS
                                                   compression)
410          HSC Compressed Segment Block Changes  Total number of block changes to tables/heaps (compressed only)
411          HSC Heap Segment Block Changes        Total number of block changes to tables/heaps (compressed or
                                                   non-compressed)
412          HSC OLTP Non Compressible Blocks      Blocks marked as final (not to be compressed again)
413          HSC OLTP positive compression         Number of times compression released some space
414          HSC OLTP negative compression         Number of times compression consumed more space than before and
                                                   was reverted to the pre-image
415          HSC OLTP recursive compression        Number of recursive compressions
416          HSC OLTP inline compression           Number of inline compressions
417          HSC OLTP Drop Column                  Number of compressions happening due to drop column
418          HSC OLTP Compression skipped rows     Number of rows skipped for compression (could be deleted or
                                                   chained)
419          HSC OLTP compression block checked    Number of times OLTP compressed blocks have been validated by
                                                   block checking
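
These statistics can be sampled for the current session with a query such as the following sketch:

SELECT n.name, s.value
FROM   v$mystat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name LIKE 'HSC%'
ORDER  BY n.statistic#;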

Bugs

Note 1061366.1 List of Critical Patches Required For Oracle 11g Table Compression







Notes

Note 785787.1     11g new features: Advanced Compression overview and advantages
Note 882712.1     11g New feature: All About Advanced Table Compression (Overview, Use, Examples, Restrictions)
Note 1101900.1    Performance Issue with Update Statement in Compressed Tablespace
Note 987049.1     Performance Issue After Enabling Compression
Note 829068.1     COMPRESS FOR ALL OPERATIONS generates lot of redo
Note 1149283.1    The OLTP Compression Saves No Space As Expected Using A Row Too Big
Note 805205.1     My Data are not compressed

Other References

Oracle Technology Network - Advanced Compression
