Tuesday Feb 24, 2015

Oracle Linux and Database Smart Flash Cache

One, sometimes overlooked, cool feature of the Oracle Database running on Oracle Linux is called Database Smart Flash Cache.

You can find an overview of the feature in the Oracle Database Administrator's Guide. Basically, if you have flash devices attached to your server, you can use this flash memory to increase the size of the buffer cache. So instead of aging blocks out of the buffer cache and having to go back to reading them from disk, they move to the much, much faster flash storage as a secondary fast buffer cache (for reads, not writes).

Some scenarios where this is very useful: you have huge tables and huge amounts of data, a very large database with tons of query activity (let's say many TB), and your server is limited to a relatively small amount of main RAM (let's say 128 or 256GB). In that case, if you purchase and add a flash storage device of, for example, 256GB or 512GB and attach it to the database with the Database Smart Flash Cache feature, you can grow the effective buffer cache on that same server from 100GB or 200GB to 300-700GB. In a good number of cases this will give you a significant performance improvement without having to purchase a new server that handles more memory, or enough flash storage for your many TB of data to live in flash instead of rotational storage.

It is also incredibly easy to configure.

-1 install Oracle Linux (I installed Oracle Linux 6 with UEK3)
-2 install Oracle Database 12c (this would also work with 11g - I installed EE)
-3 add a flash device to your system (for the example I just added a 1GB device showing up as /dev/sdb)
-4 attach the storage to the database in sqlplus

$ ls /dev/sdb

$ sqlplus '/ as sysdba'

SQL*Plus: Release Production on Tue Feb 24 05:46:08 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  alter system set db_flash_cache_file='/dev/sdb' scope=spfile;

System altered.

SQL> alter system set db_flash_cache_size=1G scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 4932501504 bytes
Fixed Size		    2934456 bytes
Variable Size		 1023412552 bytes
Database Buffers	 3892314112 bytes
Redo Buffers		   13840384 bytes
Database mounted.
Database opened.

SQL> show parameters flash

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_flash_cache_file		     string	 /dev/sdb
db_flash_cache_size		     big integer 1G
db_flashback_retention_target	     integer	 1440

SQL> select * from v$flashfilestat; 

     BYTES    ENABLED SINGLEBLKRDS SINGLEBLKRDTIM_MICRO     CON_ID
---------- ---------- ------------ -------------------- ----------
1073741824	    1		 0		      0 	 0
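
Once the instance is back up, you can keep an eye on how the flash cache is actually being used. As a rough sketch (the exact statistic names vary a bit between database versions), the flash cache related statistics show up in v$sysstat:

SQL> select name, value from v$sysstat where name like 'flash cache%';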

You can get more information on configuration and guidelines/tuning here. If you want selective control over which tables can or will use the Database Smart Flash Cache, you can use the ALTER TABLE command. See here, specifically the FLASH_CACHE option of the STORAGE clause. By default, blocks from any table can age out into the flash cache, but if you don't want certain tables to be cached there you can use the NONE option.

alter table foo storage (flash_cache none);
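
As a small sketch of the other FLASH_CACHE settings (foo is just the example table from above): you can explicitly keep a table in the flash cache, put it back to the default behavior, and check the current setting in the data dictionary.

alter table foo storage (flash_cache keep);
alter table foo storage (flash_cache default);
select table_name, flash_cache from user_tables where table_name = 'FOO';
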
This feature can really make a big difference in a number of database environments and I highly recommend taking a look at how Oracle Linux and Oracle Database 12c can help you enhance your setup. It's included with the database running on Oracle Linux.

Here is a link to a white paper that gives a bit of a performance overview.

Thursday Mar 29, 2012

4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

We released a new set of benchmarks today. One is an updated TPC-C result from a few months ago: we had just over 4.8M tpmc at $0.98, and we just updated it to 5.055M at $0.89. The other one is related to Java middleware performance. You can find the press release here.

Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not in the benchmark team. I want to talk about why these numbers and these efforts matter to customers, independent of what they mean for your specific workload. The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as one big virtual team. Having that virtual team be within a single company of course helps tremendously... We start with a very big server setup: tons of storage, many disks, lots of RAM, lots of CPUs, cores and threads, and large database setups. Getting the whole setup going so that tuning can even start is by itself no easy task, and then the real fun begins: tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high rpm, it's actually hitting the circuit. The tests require long runs and require surviving availability tests, such as surviving crashes -and- recovery under load.

In the TPC-C example, the x4800 system had 4TB of RAM, 160 threads (8 sockets, 10 cores/socket, hyperthreaded), tons of storage attached and tons of LUNs visible to the OS: flash storage, non-flash storage... many things at high scale that all have to be perfectly synchronized.

During this process we find bugs and fix bugs, we find performance issues and fix them, we spot interesting potential features to investigate, and we start new development projects for future releases, and all of this goes back into the products. As more and more Oracle Linux customers run larger, faster, more mission-critical, more highly available databases, this work is absolutely critical. Whatever anyone's specific opinion of TPC-C, TPC-H or SPECjEnterprise, there is a ton of effort here that the customer benefits from. All this work makes Oracle Linux and Oracle Solaris better platforms, whether that means faster, more stable, more scalable or more resilient. It helps.

Another point that I always like to reiterate around UEK and UEK2 : we have our kernel source git repository online, with the complete changelog of the mainline kernel plus our changes. It is easy to pull, easy to dissect, and easy to see what went in when, why and where. No need to log into a website and manually click through pages hoping to discover changes or patches, and no need to untar two tarballs and run a diff.
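
Just as an illustration of what that means in practice (the repository URL below is a placeholder, not the real location, and the paths and tags are only examples):

$ git clone git://example.com/linux-uek.git
$ cd linux-uek
$ git log --oneline | head
$ git log --oneline --since="6 months ago" -- drivers/block/
$ git diff v2.6.39..HEAD -- kernel/sched.c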

Saturday Feb 04, 2012

Changing database repositories in Oracle VM 3

At home I have a small atom-based server that was running Oracle VM Manager 3, installed using simple installation. Simple installation is the option where you just enter a password and the Oracle VM Manager installer installs : Oracle XE database, WebLogic Server and the Oracle VM Manager container. The same password is used for the database user, Oracle VM Manager database schema user, weblogic user and admin user for the manager instance.

The manager instance stores its data as objects inside the database. To do that, there is something called a datasource defined in weblogic during installation. It's basically a jdbc connection from weblogic to the database. This DS requires the following information : database hostname, database instance name, database listener port number, schema username and schema password. In my default install this was localhost, XE, 1521, ovs, mypassword.
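
For illustration, with the default values above the underlying connection is the usual Oracle thin JDBC URL (this is just the generic format, not necessarily the exact string stored in the datasource):

jdbc:oracle:thin:@localhost:1521:XE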

Now that I re-organized my machines a bit, I have a larger server that runs a normal database, which I also happen to use for EM12c. So I figured I would take some load off the little atom server, keep it running Oracle VM Manager but shut down XE and move the schema over to my dedicated database host. This is a straightforward process so I just wanted to list the steps.

1) shut down Oracle VM Manager so that it does not continue updating the repository.
as root : /etc/init.d/ovmm stop

2) export the schema user using the exp command for Oracle XE
as oracle : 
cd /u01/app/oracle/product/11.2.0/xe
export ORACLE_HOME=`pwd`
export ORACLE_SID=XE
$ORACLE_HOME/bin/exp
(enter user ovs and its password)
export user (option 2)
export everything including data
this will create (by default) a file called expdat.dmp
copy this file over to the other server with the other database
The schema name is also in /u01/app/oracle/ovm-manager-3/.config (OVSSCHEMA)
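
If you prefer not to walk through the interactive prompts, a roughly equivalent non-interactive invocation would look something like this (a sketch; adjust the password, file name and log name to your setup):

$ORACLE_HOME/bin/exp ovs/MyPassword owner=ovs file=expdat.dmp log=exp_ovs.log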

3) shutdown oracle-xe as it's no longer needed  
as root : /etc/init.d/oracle-xe stop

4) import the ovs user into the new database. I like to do it as the user, 
so I simply pre-create the schema before starting the import
as oracle : 
sqlplus '/ as sysdba'
create user ovs identified by MyPassword;
grant connect,resource to ovs;
at this point, run the imp utility on the box to import the expdat.dmp
import asks for username/password, enter ovs and its password
answer yes to importing all data, tables and content.
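
The same goes for the import; a non-interactive sketch, assuming the dump file is in the current directory and the target database environment is already set:

imp ovs/MyPassword fromuser=ovs touser=ovs file=expdat.dmp log=imp_ovs.log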

At this point you have a good complete repository. 
Now let's make the Oracle VM Manager weblogic instance point to the new database.

5) on the original system, restart weblogic
as root :/etc/init.d/ovmm start
wait a few minutes for the instance to come online

6) use the ovm_admin tool
as oracle : 
cd /u01/app/oracle/ovm-manager-3/bin
./ovm_admin --modifyds orcl wopr 1521 ovs mypassword
My new host name for the database is called wopr, 
the database instance is orcl and listener is still 1521 with schema ovs
The admin tool asks for a password, this is the weblogic user password. 
In a simple install, this would be the same as your admin or ovs account password.

7) restart to have everything take effect.
as root : 
/etc/init.d/ovmm stop  ; sleep 5 ;/etc/init.d/ovmm start ;

8) edit the config file and update it with the new data 
vi /u01/app/oracle/ovm-manager-3/.config 
modify : the database host, instance name and listener port entries 
so they point to the new database (in my case wopr, orcl, 1521) 
and leave the rest as is. 

that should do it !


Wim Coekaerts is the Senior Vice President of Linux and Virtualization Engineering for Oracle. He is responsible for Oracle's complete desktop to data center virtualization product line and the Oracle Linux support program.

You can follow him on Twitter at @wimcoekaerts

