To start, we create an 11g database in the Oracle Public Cloud. We create the instance, wait an hour or so, change the network configuration to open port 1521 to the public, and connect using sys as sysdba to the instance. We are going to use SQL Developer in our Windows 2012 instance to make the connection. To get the connection information, we go to the database console and get the IP address of the instance.
We then go to our SQL Developer tool and add this database connection. We can use ssh tunneling or open port 1521 to the world to make the connection.
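If we go the tunneling route, a typical command looks like the following; the key path and IP address are placeholders for your own values:
ssh -i <private_key> -L 1521:localhost:1521 opc@<instance_ip>
With the tunnel in place, SQL Developer connects to localhost port 1521 rather than the public IP.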
The first step the tutorial tells us to do is execute the setup.sql file that it provides. We are not going to execute this script but instead do everything by hand through SQL Developer. The purpose of the script is to unlock the user sh, set a password, and grant privileges to the user. We can do this from SQL Developer. The code that it recommends using is
connect / as sysdba
set echo on
alter user sh identified by sh account unlock;
grant create tablespace to sh;
grant drop tablespace to sh;
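To verify the change, a quick query against the data dictionary (run while still connected as sys) should show the account as OPEN:
select username, account_status from dba_users where username = 'SH';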
At this point, we have a user that can create and drop tablespaces. We now want to load the create_sales_tbls.sql code from the tutorial.
The create script first drops the existing tables. This might generate an error if the tables do not exist. The error is not significant and won't stop the rest of the script from executing. We then create a non-compressed and a compressed table by selecting from the demo sales table, which exists if you installed the demo data during your install.
drop table sales_nocompress purge;
drop table sales_compress purge;
set echo on
set timing on
create table sales_nocompress
as select * from sales;
create table sales_compress compress for all operations
as select * from sales where 1=0;
When we do the select, the new table should contain zero rows based on the where clause. We then do an insert into the table to create a table of the same size. This is done by executing
set timing off
select count(*) from sales_compress;
select count(*) from sales_nocompress;
set timing on
declare
  commit_after  integer := 0 ;
  loop_variable integer ;
  cursor c_sales is
    select prod_id, cust_id, time_id, channel_id, promo_id, quantity_sold, amount_sold
    from sales ;
begin
  for r_sales in c_sales
  loop
    if commit_after = 0
    then
      loop_variable := 0 ;
      -- commit after a random number of rows (interval range assumed)
      commit_after := round(dbms_random.value(1,100)) ;
    end if ;
    insert into sales_compress
      (prod_id, cust_id, time_id, channel_id, promo_id, quantity_sold, amount_sold)
    values
      (r_sales.prod_id, r_sales.cust_id, r_sales.time_id, r_sales.channel_id,
       r_sales.promo_id, r_sales.quantity_sold, r_sales.amount_sold) ;
    if loop_variable = commit_after
    then
      commit ;
      commit_after := 0 ;
    end if ;
    loop_variable := loop_variable + 1 ;
  end loop ;
  commit ;
end ;
/
We can execute the examine_storage.sql script to see that the compressed table takes up about half the storage of the uncompressed table. We can also see that the table is enabled for OLTP compression by looking at the parameters of the table from a select statement.
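If you prefer not to run the script, a rough equivalent (a sketch of the checks examine_storage.sql performs, run as the sh user) is:
select segment_name, bytes/1024/1024 as size_mb
from user_segments
where segment_name in ('SALES_COMPRESS', 'SALES_NOCOMPRESS');
select table_name, compression, compress_for
from user_tables
where table_name in ('SALES_COMPRESS', 'SALES_NOCOMPRESS');
The first query compares the space allocated to each table; the second shows the compression setting on each table.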
We can also look at the select time differences by reading all of the data from the compressed and uncompressed tables. Note that the compressed table takes about 3/4 of the time that the uncompressed table takes to scan.
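One simple way to drive a full read of each table (a sketch; any full-scan query works here) is:
set timing on
select sum(amount_sold) from sales_nocompress;
select sum(amount_sold) from sales_compress;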
In summary, we were able to create an 11g database, create a compressed and a non-compressed table, and look at the relative size and timing when retrieving data from each. We can experiment with this data and grow the table size to see if we still get the same improvements as the table gets larger. We can try different compression algorithms to see if that affects performance or compression ratios. We have done all of this in a database as a service public cloud instance. The only tools that we needed were a SQL Developer connection and an Oracle Cloud account. We could have done this with Amazon RDS as well as EC2 and Microsoft Azure Compute. The key difference is that this experiment took about two hours to execute and we only consumed about $15 to learn and play with compression on 11g (or 12c), given that a low memory option for the database is only $6.720 per OCPU per hour. With the pay as you go option we burn less than $15 and turn off the service. We could have uploaded our own data sets into the database instance and played with the compression advisor in a sandbox without affecting our production environment. If we were using database backup as a service, we could have restored a single table from our backup and played with the compression variations and the compression advisor.
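As a closing sketch, the compression advisor mentioned above can be called directly through the DBMS_COMPRESSION package; the scratch tablespace name (USERS here) is an assumption and should be one you have quota on:
set serveroutput on
declare
  l_blkcnt_cmp   pls_integer ;
  l_blkcnt_uncmp pls_integer ;
  l_row_cmp      pls_integer ;
  l_row_uncmp    pls_integer ;
  l_cmp_ratio    number ;
  l_comptype_str varchar2(100) ;
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',  -- assumed scratch tablespace
    ownname        => 'SH',
    tabname        => 'SALES_NOCOMPRESS',
    partname       => null,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str) ;
  dbms_output.put_line('estimated ' || l_comptype_str || ' ratio: ' || l_cmp_ratio) ;
end ;
/
This estimates the OLTP compression ratio for a table without changing it, which is exactly the kind of sandbox experiment described above.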