Monday Mar 16, 2015

Row-Level (Block) Compression Tiering

I was at the Hotsos Symposium giving a presentation on updates in Advanced Row Compression tables and mentioned that there are times when it can be more efficient to allow high-volume inserts and updates to take place on uncompressed tables, and then to enable an Automatic Data Optimization (ADO) policy that compresses the blocks in those tables later, in the background, once Heat Map data shows that most activity has slowed. A question was asked about whether that would actually save any space. Unfortunately I didn't answer the question very well and probably confused the asker, so here's my attempt at explaining how this really works.
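
To make that concrete, a row-level compression tiering policy looks something like the sketch below. This is just an illustration: the table name ORDERS and the 30-day threshold are placeholders, not a recommendation.

  -- Heat Map must be enabled so ADO can track modification times.
  ALTER SYSTEM SET heat_map = ON;

  -- Row-level (block) compression tiering policy. ORDERS and the
  -- 30-day window are placeholder values for illustration.
  ALTER TABLE orders ILM ADD POLICY
    ROW STORE COMPRESS ADVANCED
    ROW AFTER 30 DAYS OF NO MODIFICATION;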

[Read More]

Monday Oct 06, 2014

Tiered Storage

Since this is a blog related to Oracle database storage optimization, this post is an attempt to explain what we mean by tiered storage. You would think that everyone would have the same definition of tiered storage, but I've found that not to be the case. So, what is tiered storage, and how does it relate to an Oracle database storage solution?


[Read More]

Friday Sep 12, 2014

Oracle Open World 2014 Sessions

These are the sessions that the Database Performance Product Management team will be giving at Open World this year:

  • Oracle Database 12c New Features, OAUG DB SIG, Sunday 11:00 - 11:45 am, Moscone West - 3009
  • CON8372 - 12c: Heat Map and ADO, Monday 11:45 - 12:30 pm, Moscone South - 305
  • CON8376 - Compression Best Practices, Tuesday 10:45 - 11:30 am, Moscone South - 305
  • CON3020 - Oracle Database via Direct NFS Client, Wednesday 2:00 - 2:45 pm, Intercontinental C
  • CON8379 - How to Use Oracle Database Temporal Features to Build Smarter Applications Faster, Thursday 12:00 - 12:45 pm, Moscone South - 305

In addition to these presentations, we will be helping out with the Oracle Database In-Memory Hands-On Labs:

  • HOL9346 - Oracle Database In-Memory Boot Camp
    • Monday 4:15 - 5:15 pm, Hotel Nikko - Peninsula
    • Tuesday 6:45 - 7:45 pm, Hotel Nikko - Peninsula
    • Wednesday 4:15 - 5:15 pm, Hotel Nikko - Peninsula
    • Thursday 1:00 - 2:00 pm, Hotel Nikko - Peninsula

Please come see us and introduce yourself.


Monday Jun 23, 2014

Automatic Data Optimization – How Do You Know It's Working?

This post is about Automatic Data Optimization (ADO) and how to tell if your policies are working. It's fairly easy to display ADO policies and even to see whether they've run. It's not quite as straightforward, though, to see whether anything actually happened. Unfortunately there is no "log" of the events or actions that occurred when the policy ran. There are a couple of events that can be set, but I've had only hit-or-miss success getting them to detail what actually happened. For segment-level compression and storage tiering it's fairly easy to just look at the results. It's a little harder with policies involving Advanced Row Compression, because Oracle intermixes compressed blocks with uncompressed ones. The easiest way I've found is to use the function dbms_compression.get_compression_type, which returns a value identifying the compression type of each row passed to it[1]. This may not be practical on a very large number of rows, but I think that once you've prototyped the behavior you can be pretty sure that the feature is actually working in practice.
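
As a rough sketch, a query along these lines tallies a sample of rows by compression type. The table name EMP_HIST and the sample size are placeholders, and you should verify the return-value constants in DBMS_COMPRESSION for your release (in 12c, 1 is COMP_NOCOMPRESS and 2 is COMP_ADVANCED).

  -- Tally a sample of rows by compression type (names are placeholders).
  SELECT comp_type, COUNT(*) AS row_count
  FROM  (SELECT dbms_compression.get_compression_type(
                  ownname => user,
                  tabname => 'EMP_HIST',
                  row_id  => rowid) AS comp_type
         FROM   emp_hist
         WHERE  rownum <= 10000)  -- sample; checking every row can be slow
  GROUP BY comp_type;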

[Read More]

Monday Jun 09, 2014

ADO and Two Way Storage Tiering

We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not reliably with standard Automatic Data Optimization policies; that's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM), where data is very volatile when first created, becomes less active or cool, and eventually ceases to be accessed at all (i.e. cold). I think the reason this question gets asked is that customers realize that many of their business processes are cyclical, and the thinking goes that segments used only during month-end or year-end cycles could sit on lower-cost storage the rest of the time. Unfortunately this doesn't fit very well with the ADO storage tiering model.

ADO storage tiering is based on the amount of free and used space in the source tablespace. Two parameters control this behavior: TBS_PERCENT_USED and TBS_PERCENT_FREE. When the percentage of used space in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clause(s) can be moved until the percentage of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace. Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available; at least that's the premise, since it is supposed to be less costly, lower-performing, higher-capacity storage. In either case, though, if there isn't enough space then the operation fails.
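
For concreteness, here is a rough sketch of setting those thresholds and attaching a tiering policy. The 85/25 values, the ORDERS table, and the LOW_COST_TBS tablespace are all placeholders.

  -- Set the database-wide ILM thresholds (placeholder values).
  BEGIN
    dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used, 85);
    dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_free, 25);
  END;
  /

  -- Move the segment to a (placeholder) lower-cost tablespace once the
  -- source tablespace crosses the TBS_PERCENT_USED threshold.
  ALTER TABLE orders ILM ADD POLICY TIER TO low_cost_tbs;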

In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return FALSE so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage management, and the whole point of ADO was to provide automatic storage tiering. If you really want two-way storage tiering, you're probably better off using Heat Map data and/or business access requirements and building your own hierarchical storage management infrastructure.
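
For reference, a custom condition is attached with the ON clause, and ADO tiers the segment only when the function returns TRUE. The sketch below is entirely hypothetical; the function name and its calendar-based logic only illustrate the shape of such a function, not a recommendation.

  -- Hypothetical custom tiering condition: allow the move only outside
  -- a made-up first-five-days-of-the-month busy window. ADO passes in
  -- the object number of the candidate segment.
  CREATE OR REPLACE FUNCTION outside_busy_window (objn IN NUMBER)
  RETURN BOOLEAN IS
  BEGIN
    RETURN EXTRACT(DAY FROM SYSDATE) > 5;
  END;
  /

  ALTER TABLE orders ILM ADD POLICY
    TIER TO low_cost_tbs ON outside_busy_window;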

Wednesday May 14, 2014

Updates in Row Compressed Tables

Updates in compressed tables, both basic table compression and advanced row compression, are not well understood, and this article will attempt to explain how they work. To do this, some background will be covered on how Oracle performs row-based compression and why the structure of the data is so important to the effectiveness of the compression. More importantly, space usage will be detailed for various insert and update scenarios, along with the performance impacts that occur when updating compressed tables.

[Read More]
About

The Database Storage Optimization blog is written by the Oracle product management team and highlights features and issues associated with database storage optimization. The views expressed on this blog are those of the author(s) and do not necessarily reflect the views of Oracle.
