Wednesday Jun 27, 2012

FairScheduling Conventions in Hadoop

While scheduling and resource allocation controls have been present in Hadoop since 0.20, a lot of people haven't discovered or utilized them in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:

  • Organizations are still determining what their dataflow and analysis workloads will comprise
  • Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options
  • The bundled scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.

However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask what the off-the-rack scheduling options are. We have some choices:

  • The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
  • The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
  • Writing your own implementation of the abstract class org.apache.hadoop.mapred.TaskScheduler is an option, but usually overkill.

If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can define user-specific pools so that default users get their fair share while specific users are given the resources their workloads require.

To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker which scheduler we want to use and where we're going to define our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
  <name>mapred.fairscheduler.allocation.file</name>
  <!-- point this at wherever you keep the allocation file, e.g. HADOOP_HOME/conf/allocations.xml -->
  <value>/path/to/allocations.xml</value>
</property>
What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue, and that we're going to define our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is re-read every 15 seconds or so, which allows allocations to be tuned without having to take down the JobTracker.

Our allocation file is now going to look a little like this:

<?xml version="1.0"?>
<allocations>
  <pool name="dan">
    <!-- illustrative values; tune these to your own workload -->
    <minMaps>10</minMaps>
    <maxMaps>40</maxMaps>
    <minReduces>5</minReduces>
    <maxReduces>20</maxReduces>
    <maxRunningJobs>6</maxRunningJobs>
  </pool>
</allocations>
In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. There's a lot more tweaking that can be done to the allocations file, so it's best to dig into the documentation and start trying out allocations that might fit your workload.

[Read More]

Tuesday Apr 10, 2012

Self-Study course on ODI Application Adapter for Hadoop

For those of you who want to work with Oracle Data Integrator and its Hadoop capabilities, a good way to start is the newly released self-study course from Oracle University. You can find the course here.

Enjoy, and if you have any feedback, do send it in to Oracle University by logging in (so we can unleash our big data analytics on it ;-) ).

Monday Jul 18, 2011

Is Parallelism married to Partitioning?

In the last couple of months I have been asked a number of times about the dependency of parallelism on partitioning. There seem to be some misconceptions and confusion about their relationship, and some people still believe that partitioning is required to perform operations in parallel. Well, this is not the case, so let me try to briefly explain how Oracle parallelizes operations. Note that 'parallel operation' in this context means breaking down a single operation – e.g. a single SQL statement – into smaller units of work that are processed in parallel to leverage more or even all resources in a system to return the result faster. More in-depth information and whitepapers on Parallel Execution and Partitioning can be found on OTN.

Let’s begin with a quick architecture recap. Oracle is a shared everything system, and yes, it can run single operations in a massively parallel fashion. All of the data is shared between all nodes in a cluster, and how operations can be parallelized is not restricted to any predetermined, static data distribution done at database/table setup (creation) time. In a pure shared nothing architecture, database files (or tables) have to be partitioned to map disjoint data sets – “partitions” of data – to specific “nodes” of a multi-core or multi-node system to enable parallel processing, as illustrated below (note that the distribution in a shared nothing system is normally done based on a hash algorithm; I just used letter ranges for illustration purposes here):

Design strategies for parallelization: Shared Nothing v Shared Everything architecture

In other words, with the Oracle Database any node in a cluster configuration can access any piece of data in an arbitrary parallel fashion, and the data does not have to be “partitioned” at creation time for parallelization.

Oracle’s parallel execution determines the optimal set of smaller parallel units of work based on the user’s request – namely the SQL statement. Oracle does not rely on Oracle Partitioning to enable parallel execution, but takes advantage of partitioning wherever it makes sense, just like a shared nothing system; we will talk about this a bit later.

So how does Oracle dynamically determine these smaller units of work? In Oracle, the basic unit of work in parallelism is called a granule. Oracle divides the operation being parallelized (for example a table scan, table update, or index creation) into granules. Oracle distinguishes two different kinds of granules for initial data access: Block range granules and Partition based granules.

Block range granules are ranges of physical blocks from a table. Each parallel execution server works on its own block range granule. Block range granules are the basic unit of most parallel operations, even on partitioned tables. Therefore, from an Oracle Database perspective, the degree of parallelism (DOP) is not related to the number of partitions or even to whether a table is partitioned at all. The number of granules and their size correlate with the DOP and the object size.

That said, Oracle is flexible in its decisions: some operations can benefit from an underlying partitioned data structure and leverage individual partitions as granules of work, which brings me to the second kind of granule. With partition based granules, only one parallel execution server performs the work for all of the data in a single partition or subpartition (the maximum allowable DOP then obviously becomes the number of partitions).

The best known operation that can benefit from partition based granules is a join between large equi-partitioned tables, a so-called partition-wise join. A partition-wise join takes place when the tables to be joined are equi-partitioned on the join key, or when the operation is executed in parallel and the larger of the two tables being joined is partitioned on the join key.

By joining the equivalent partitions, the database has to do less data distribution and the sort/join resource requirements are reduced. The picture below illustrates the execution of a partition-wise join. Instead of joining all data of SALES with all data of CUSTOMERS, partition pairs are joined in parallel by multiple parallel execution servers, with each parallel execution server working on one pair of partitions. You can think of this as shared-nothing-like processing at run time.

Now that we know about Oracle’s choice of granules, how do we tell which type of granule was chosen? The answer lies in the execution plan. In the example below, operation ID 7 (above the table access of SALES) says PX BLOCK ITERATOR, which means that block range granules have been used to access table SALES in parallel.
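As a quick sketch of how you can see this for yourself (assuming the SH sample schema's SALES table and a statement-level PARALLEL hint; your plan will of course differ):

-- Force a parallel scan and inspect the plan; SALES is assumed to be the SH sample schema table.
EXPLAIN PLAN FOR
SELECT /*+ PARALLEL(s, 4) */ COUNT(*)
FROM sales s;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Look for a PX BLOCK ITERATOR operation directly above the TABLE ACCESS FULL of SALES:
-- that operation indicates block range granules were used for the scan.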

To illustrate partition-wise joins and the use of partition based granules, consider the following example: table SALES is range partitioned by date (8 partitions) and subpartitioned by hash on cust_id using 16 subpartitions for each range partition, for a total of 128 subpartitions. The CUSTOMERS table is hash partitioned on cust_id with 16 hash partitions.
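For readers who want to play along, here is a rough sketch of DDL that would produce such a layout. It is not from the original post: the column lists are trimmed, and the partition names and date boundaries are illustrative assumptions; only the partitioning scheme itself follows the description above.

-- Sketch only: SALES range partitioned by date (8 partitions), hash subpartitioned on cust_id (16 each).
CREATE TABLE sales (
  cust_id NUMBER,
  time_id DATE,
  amount  NUMBER
)
PARTITION BY RANGE (time_id)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 16
( PARTITION sales_1998_q1 VALUES LESS THAN (TO_DATE('01-APR-1998','DD-MON-YYYY')),
  PARTITION sales_1998_q2 VALUES LESS THAN (TO_DATE('01-JUL-1998','DD-MON-YYYY')),
  PARTITION sales_1998_q3 VALUES LESS THAN (TO_DATE('01-OCT-1998','DD-MON-YYYY')),
  PARTITION sales_1998_q4 VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')),
  PARTITION sales_1999_q1 VALUES LESS THAN (TO_DATE('01-APR-1999','DD-MON-YYYY')),
  PARTITION sales_1999_q2 VALUES LESS THAN (TO_DATE('01-JUL-1999','DD-MON-YYYY')),
  PARTITION sales_1999_q3 VALUES LESS THAN (TO_DATE('01-OCT-1999','DD-MON-YYYY')),
  PARTITION sales_1999_q4 VALUES LESS THAN (MAXVALUE)
);

-- CUSTOMERS hash partitioned on the join key, matching the hash subpartitioning of SALES.
CREATE TABLE customers (
  cust_id        NUMBER,
  cust_last_name VARCHAR2(40)
)
PARTITION BY HASH (cust_id) PARTITIONS 16;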

Below is the execution plan for the following query:

SELECT c.cust_last_name, COUNT(*)
FROM sales s, customers c
WHERE s.cust_id = c.cust_id
AND s.time_id BETWEEN TO_DATE('01-JUL-1999', 'DD-MON-YYYY')
                  AND TO_DATE('01-OCT-1999', 'DD-MON-YYYY')
GROUP BY c.cust_last_name
HAVING COUNT(*) > 100;

Operation ID 11, above the table access of SALES, says PX PARTITION RANGE ITERATOR, which means partition based granules were used to access partitions 8 and 9 and subpartitions 113 to 144 (partition pruning will be discussed in another blog).

Operation ID 8 says PX PARTITIONS HASH ALL, which means the corresponding 16 hash subpartitions of the SALES table are joined to the CUSTOMERS table using a full partition-wise join. (For more information on partition-wise joins refer to the Oracle® Database VLDB and Partitioning Guide.)

As we have seen, unlike shared nothing systems, Oracle parallel execution combines the best of both worlds: it provides the most flexible way of parallelizing an operation while still taking full advantage of pre-existing partitioning strategies for optimal execution when needed.

Wednesday May 25, 2011

Parallel Load: Uniform or AutoAllocate extents?

Over the last couple of years there has been a lot of debate about space management in data warehousing environments and the benefits of having fewer, larger extents. Many believe the easiest way to achieve this is to use uniform extents, but the obvious benefits can often be outweighed by some not-so-obvious drawbacks.

For performance reasons most loads leverage direct path operations, where the load process formats and writes Oracle blocks directly to disk instead of going through the buffer cache. This means that the loading process allocates extents and fills them with data during the load. During parallel loads, each loader process allocates its own extent and no two processes work on the same extent. When loading data into a table with UNIFORM extents, each loader process allocates its own uniform extent and begins loading the data; if the extents are not fully populated, the table is left with a lot of partially filled extents, effectively creating ‘HOLES’ in the table. Not only does this waste space, it also impacts query performance, as subsequent queries that scan the table have to scan all of the extents even if they are not fully filled.

What is different with AUTOALLOCATE? AUTOALLOCATE will dynamically adjust the size of an extent and trim the extent after the load in case it is not fully loaded (extent trimming).

Tom Kyte covers this problem in great detail in his post Loading and Extents, but below is a simple example just to illustrate what a huge difference there can be in space management when you load into a table with uniform extents versus a table with autoallocated extents.

1) Create two tablespaces: Test_Uniform (using uniform extent management), Test_Allocate (using auto allocate)

create tablespace test_uniform datafile '+DATA/uniform.dbf' SIZE 1048640K
extent management local uniform size 100m;

create tablespace test_allocate datafile '+DATA/allocate2.dbf' SIZE 1048640K
extent management local autoallocate segment space management auto;

2) Create a flat file with 10,000,000 records.

-rw-r--r-- 1 oracle dba 1077320689 May 17 16:59 TEST.dat

3) Do a parallel direct path load of this file into each tablespace:

create table UNIFORM_TEST parallel
tablespace test_uniform
as select * from big_table_ext;

create table ALLOCATE_TEST parallel   -- second table name assumed
tablespace test_allocate
as select * from big_table_ext;

Let's view the space usage using a PL/SQL package called show_space.
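show_space is not a built-in package but Tom Kyte's helper procedure (available from AskTom), so the exact parameter list depends on the version of the script you install. A typical invocation, assuming the procedure has been compiled in the current schema, looks roughly like this:

set serveroutput on
-- show_space is Tom Kyte's utility; the signature may vary between versions of the script.
exec show_space('UNIFORM_TEST')
exec show_space('ALLOCATE_TEST')   -- second table name assumed, matching the load step above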


As you can see from the results, there are no unformatted blocks in the autoallocate table, as extent trimming has taken place after the load completed. The same cannot be said for the uniform_test table: it is quite evident from the numbers that there is substantial space wastage there. Although the count of full blocks is the same in both cases, the total MBytes used is 10x greater in the uniform table.
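If you don't have show_space installed, a rough version of the same comparison can be pulled straight from the data dictionary. This is just a sketch; as above, the name ALLOCATE_TEST is an assumption for the second table.

-- Compare the number of extents and the allocated space for the two test tables.
SELECT segment_name,
       COUNT(*)                    AS extents,
       ROUND(SUM(bytes)/1024/1024) AS allocated_mb
FROM   user_extents
WHERE  segment_name IN ('UNIFORM_TEST', 'ALLOCATE_TEST')
GROUP BY segment_name;

Note that this only shows allocated space, not how full each block is, which is why show_space (or DBMS_SPACE) tells the fuller story.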


Space utilization is much better with autoallocate because of extent trimming. As I said before, more information on this topic can be found in Tom Kyte's post.


Wednesday Apr 20, 2011

Calculating Parameter Values for Parallelism

[Read More]

Wednesday Mar 23, 2011

Best Practices: Data Warehouse in Archivelog mode?

[Read More]

Monday Feb 14, 2011

Auto DOP and Queuing Parameter Settings

[Read More]

Wednesday Sep 08, 2010

Session Queuing and Workload Management

[Read More]

Friday Apr 23, 2010

Oracle Communications Data Model

[Read More]

Friday Apr 09, 2010

Auto DOP and Concurrency

[Read More]

Monday Nov 30, 2009

TechCast: Data Warehouse Best Practices – Dec 2nd

[Read More]

Friday Jul 24, 2009

Why does it take forever to build ETL processes?

[Read More]

Thursday Jun 25, 2009

Best Practices at ODTUG

[Read More]

Friday Apr 10, 2009

Compressing Individual Partitions in the warehouse

[Read More]

The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.

