I was at the Hotsos Symposium giving a presentation on updates in tables using Advanced Row Compression, and mentioned that it can sometimes be more efficient to allow high-volume inserts and updates to take place on uncompressed tables, and then use an Automatic Data Optimization (ADO) policy to compress the blocks in those tables later, in the background, once Heat Map data shows that most activity has slowed. A question came up about whether that would actually save any space. Unfortunately I didn't answer the question very well and probably confused the asker, so here's my attempt at explaining how this really works.
As I explained in my presentation, Advanced Row Compression uses a "batch-like" algorithm: compression is triggered when an insert causes a block to become "full". Inserts and updates are written in an uncompressed format, so they perform just as they would against an uncompressed block; actual compression of a block happens only when an insert triggers it. This has the advantage of amortizing the overhead of compression across many inserts and updates. During periods of very high-volume inserts and updates, however, even this amortized overhead may be too much.

If the high-volume DML activity is cyclical rather than constant, it may make more sense to use row-level (or, more properly, block-level) compression tiering instead. The reasoning goes like this: leave the table uncompressed so that inserts and updates are made without compression. This gives the highest possible performance during high-volume periods of activity. Then, with ADO, create a policy that enables row-level compression after a period of no modification or no access, depending on your circumstances. Blocks in which all of the rows meet the policy condition are then compressed (hence the block-level compression tiering). This frees up space in those blocks for more inserts, and the cycle can continue indefinitely across the blocks in the table.

What happens over time is that blocks keep being compressed and refilled until they cannot be compressed any further, the same basic process that occurs in tables with Advanced Row Compression enabled. It just doesn't happen in real time or on blocks that are active. Space is saved as blocks are compressed, but no actual space is returned from the segment; the row density of the blocks in the table simply increases, up to the limits of the compression.
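To make this concrete, here is a minimal sketch of such a policy using Oracle's ILM DDL. The orders table name and the 30-day window are hypothetical, chosen only for illustration; pick a window that matches your own activity cycle.

```sql
-- Heat Map must be enabled for ADO policies to be evaluated
ALTER SYSTEM SET HEAT_MAP = ON SCOPE = BOTH;

-- Leave the table uncompressed for high-volume DML, but ask ADO to
-- compress blocks whose rows have gone quiet, in the background
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED
  ROW AFTER 30 DAYS OF NO MODIFICATION;
```

Note that row-level policies only support the NO MODIFICATION condition; the policy is evaluated block by block, which is what makes this block-level tiering rather than a whole-segment operation.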
This will of course save space in the long run, but based on some other questions there seems to be a desire to see space actually returned from the segment as a result of the compression. That will only happen when using segment-level compression. I also haven't touched on partitioning or whether this applies to generic situations; this is not a one-size-fits-all approach. It is just one tool that can be used when trying to leverage the options available with Advanced Compression, ADO, and Heat Map.
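For comparison, a segment-level policy along the following lines is the variant that rewrites the whole segment and can therefore shrink it. Again, the table name and the 90-day window are hypothetical, for illustration only.

```sql
-- Segment-level policy: when the condition is met, the entire segment
-- is compressed (a rewrite), so space can be returned from the segment
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED
  SEGMENT AFTER 90 DAYS OF NO MODIFICATION;
```

Segment-level policies also accept conditions such as NO ACCESS or time since CREATION, which makes them a better fit for data that has truly gone cold rather than for blocks inside an actively growing table.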