Database storage optimization best practices, tips, tricks, and guidance from Database Compression Product Management

ADO – Automating Storage Tiering for Information Lifecycle Management

Gregg Christman
Product Manager

In this blog
we’re going to continue the ADO discussion and talk about the role ADO plays in
database storage tiering. The use of ADO requires the Advanced Compression option.

How does ADO
storage tiering differ from storage-level tiering? Good question!

Storage-level tiering
is blind to database I/O types, so it can't tell whether a read is for an OLTP
transaction, a DW scan, a stats-gathering job, or a backup (for example). As a result,
storage-level tiering can sometimes get it wrong, moving segments of data to the
wrong tier at the wrong time. Heat Map is fully aware of the different types of
database I/O, and the different reasons for performing those I/Os, so it will
automatically exclude things like RMAN backups or database maintenance tasks
when deciding whether a segment is hot or cold.

Keep in mind that
ADO storage tiering operates at the segment level: when an ADO policy
implements storage tiering, the entire segment is moved, and this movement is
one-directional, meaning that ADO storage tiering moves colder segments
from high-performance storage to slower, lower-cost storage.

ADO Insight: If
an ADO compression policy AND a storage tiering policy both qualify, the
database will execute both in a single segment reorganization step.
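
As an illustrative sketch of that situation (the table and tablespace names here are hypothetical, not from the original post), a single table could carry both a segment-level compression policy and a "TIER TO" storage tiering policy:

```sql
-- Hypothetical names, for illustration only.
-- A segment-level ADO compression policy...
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;

-- ...and a storage tiering policy on the same table.
-- If both qualify, the compression and the move happen
-- in a single segment reorganization.
ALTER TABLE orders ILM ADD POLICY
  TIER TO low_cost_tbs;
```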

Unlike compression
tiering, ADO-based storage tiering is not based upon the ADO condition clause
(i.e. after “x” days of NO MODIFICATION); instead, it is based upon tablespace
space pressure. The justification for making storage tiering dependent on
"space pressure" is exactly what you might imagine: the belief that
users will want to keep as much data as possible on their high-performance (and
most expensive) storage tier, and not move data to a lower-performance storage
tier until it is absolutely required.

Two important ADO
parameters govern storage tiering.

The value for the
ADO parameter TBS_PERCENT_USED specifies the percentage of the tablespace quota
at which a tablespace is considered full. The value for TBS_PERCENT_FREE specifies
the targeted free percentage for the tablespace. When the used percentage of the
tablespace quota reaches the value of TBS_PERCENT_USED, ADO begins to move
segments (that have TIER TO policies) until the free percentage of the tablespace
quota approaches the value of TBS_PERCENT_FREE. This action by ADO is a best
effort, not a guarantee.

You can set ILM
ADO parameters with the CUSTOMIZE_ILM procedure in the DBMS_ILM_ADMIN PL/SQL
package, for example:
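
The original code example did not survive extraction; a minimal sketch of the calls, using the threshold values (85 percent used, 25 percent free) discussed in this post, would look like this:

```sql
BEGIN
  -- Consider the tablespace "full" at 85 percent of its quota...
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
  -- ...and move TIER TO segments until 25 percent of the quota is free.
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);
END;
/
```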





In this example,
when a tablespace reaches the used threshold (85%) defined by the user, the
database will automatically move the coldest table(s)/partition(s) in the
tablespace to the target tablespace until the tablespace quota has at least 25
percent free. Of course, this only applies to tables and partitions that have a
"TIER TO" ADO policy defined. This frees up space on your Tier 1
storage for the segments that would truly benefit from the performance, while
moving colder segments that don’t need Tier 1 performance to lower-cost Tier
2 storage.

ADO Insight: Space
pressure does not apply to storage tiering policies with the Read Only option
specified, which are driven by a Heat Map-based condition clause, nor to tiering
policies with conditions based on custom functions. In both of these cases, the
user has explicitly instructed ADO on the event upon which the policy is to be
executed.
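
As a sketch of the first case (hypothetical table and tablespace names), a Read Only tiering policy carries its own Heat Map condition clause:

```sql
-- Hypothetical names, for illustration only.
-- The segment is tiered (and made read only) once its own
-- condition is met, independent of space pressure on the
-- source tablespace.
ALTER TABLE orders ILM ADD POLICY
  TIER TO archive_tbs READ ONLY SEGMENT
  AFTER 180 DAYS OF NO MODIFICATION;
```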

All of this
behavior can be overridden with custom conditions on "TIER TO"
policies: the DBA can write their own conditions using PL/SQL and implement
segment movement based on any conditions they want to encode.
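
A minimal sketch of such a custom condition (the function name, table name, and placeholder logic are all hypothetical): ADO passes the object number of the segment being evaluated, and the segment is moved when the function returns TRUE.

```sql
-- Hypothetical user-defined condition function.
CREATE OR REPLACE FUNCTION orders_ready_to_tier (objn IN NUMBER)
RETURN BOOLEAN IS
BEGIN
  -- Encode any business rule here, e.g. check a
  -- close-of-quarter flag for the object.
  RETURN FALSE;  -- placeholder logic
END;
/

-- Attach the custom condition to a TIER TO policy.
ALTER TABLE orders ILM ADD POLICY
  TIER TO low_cost_tbs ON orders_ready_to_tier;
```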

But for now the
database storage optimization adventure continues in my next blog, in which we
will discuss something totally different: best practices for Oracle compression.
