Monday Oct 13, 2008

Siebel 8.0 on Sun SPARC Enterprise T5440 - More Bang for the Buck!!

Today Sun announced the 14,000 user Siebel 8.0 PSPP benchmark results on a single Sun SPARC Enterprise T5440. An Oracle white paper with Sun's 14,000 user benchmark results is available on Oracle's Siebel benchmark web site. The content in this blog post complements the benchmark white paper.

Some of the notes and highlights from this competitive benchmark are as follows:

  • Key specifications for the Sun SPARC Enterprise T5440 system under test are: 4 x UltraSPARC T2 Plus processors, 32 cores, 256 compute threads, and 128 GB of memory in a 4RU space.

  • The entire Siebel 8.0 solution was deployed on a single Sun SPARC Enterprise T5440 including the web, gateway, application, and database servers.

      9 load driver clients with dual-core Opteron and Xeon processors were used to generate the 14,000 concurrent user load.

  • The web, application, and database servers were isolated from each other in three Solaris Containers (non-global zones, also known as local zones), one zone dedicated to each server. A minimal zone creation sketch appears at the end of this list.

      The Solaris 10 Binary Application Guarantee Program assures binary compatibility for applications running under a native Solaris host operating system environment as well as under Solaris 10 running as a guest operating system in a virtualized platform environment.

  • The Siebel Gateway server and the Siebel Application servers were installed and configured in one of the three Solaris Containers. Two identical Siebel Application server instances were configured, each handling a 7,000 user load.

      From our experiments with the Siebel 8.0 benchmark workload, it appears that a single instance of the Siebel Application server can scale up to 10,000 active users; at peak load, the Siebel Connection Broker (SCBroker) component becomes the bottleneck within a single Siebel Application server instance.

  • To keep it simple, the benchmark publication white paper limits itself to an overview of the system configuration. The full details are available in the diagram below.

    Topology Diagram



    The breakdown of the approximate averages of CPU and memory utilization by each tier is shown below.

    Tier    CPU    Memory
    Web     78%     4.50 GB
    App     76%    69.00 GB
    DB      72%    20.00 GB

    System-wide averages are as follows:

    Tier             CPU    Memory
    Web + App + DB   82%    93.5 GB


  • 1,276 watts is the average power consumption with all 14,000 concurrent users in the steady state of the benchmark test. That is, for similarly configured workloads, the T5440 supports 10.97 users per watt of power consumed, and 3,500 users per rack unit.
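
    For illustration, one such non-global zone can be created with the zonecfg and zoneadm utilities. The following is a minimal sketch; the zone name, zonepath, network interface, and IP address are hypothetical placeholders, not the benchmark's actual configuration.

      % zonecfg -z dbzone
      zonecfg:dbzone> create
      zonecfg:dbzone> set zonepath=/zones/dbzone
      zonecfg:dbzone> set autoboot=true
      zonecfg:dbzone> add net
      zonecfg:dbzone:net> set physical=e1000g0
      zonecfg:dbzone:net> set address=10.0.0.10/24
      zonecfg:dbzone:net> end
      zonecfg:dbzone> verify
      zonecfg:dbzone> commit
      zonecfg:dbzone> exit
      % zoneadm -z dbzone install
      % zoneadm -z dbzone boot

    Repeat for the web and application tiers. Resource pools or processor sets can additionally be bound to each zone to firm up the CPU isolation between tiers.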

Based on the above notes, the Sun SPARC Enterprise T5440 is inexpensive, requires less power and a smaller data center footprint, is ideal for consolidation and, equally importantly, scales well.



Vendor-to-Vendor comparison

How does our new 14,000 user benchmark result compare with the high watermark benchmark results published by other vendors using the same Siebel 8.0 PSPP workload?

Besides Sun, IBM and HP are the only other vendors who have published benchmark results so far with the Siebel 8.0 PSPP benchmark workload. IBM's highest user count is 7,000, whereas HP's is 5,200. Here is a quick comparison of the throughputs based on the highest-user-count results published by Sun, IBM and HP.

Sun Microsystems' 14,000 user benchmark on a single T5440 outperformed:

  • IBM's 7,000 user benchmark result by 1.9x

  • HP's 5,200 user benchmark result by 2.5x
      HP published the 5,200 user result with a combination of 2 x BL460c running Windows Server 2003 and 1 x rx6600 HP system running HP-UX.

  • Sun's own 10,000 user benchmark result on a combination of 2 x T5120 and 2 x T5220s by 1.4x

From the operating system perspective, Solaris outperformed AIX, Windows Server 2003 and HP-UX. Linux is nowhere to be found in the competitive landscape.

A simple comparison of all the Siebel 8.0 benchmark results published by all vendors (as of today) justifies the title of this blog post. As IBM and HP do not post the list prices of all of their servers, I am not even attempting a price/performance comparison here. Sun, on the other hand, openly lists all of its prices at the store.sun.com web site.

CAUTION

Although the T5440 possesses a ton of great qualities, it might not be suitable for deploying workloads with heavy single-threaded dependencies. The T5440 is an excellent hardware platform for multi-threaded and moderately single-threaded/multi-process workloads. When in doubt, it is a good idea to leverage Sun Microsystems' Try & Buy program to try the workloads on this new and shiny T5440 before making the final call.



I would like to share the tuning information from the OS and underlying hardware perspective for a couple of reasons: 1. Oracle's benchmark white paper does not include any of the system specific tuning information, and 2. it may take quite a bit of time for Oracle Corporation to update the Siebel Tuning Guide for Solaris with some of the tuning information that you will find here.

Check the second part of this blog post for the best practices for running Siebel on Sun CMT hardware.

Sunday May 25, 2008

Deploying TWiki 4.2.0 on Sun Java Web Server 7.0

(The following instructions are based on Manish Kapur's Installing TWiki on Sun Java System Web Server, and complement Manish's 2005 blog post.)

The goal is to get TWiki up and running on the Sun Java Web Server (formerly known as Sun ONE Web Server / iPlanet Web Server). The assumption is that the web server is running on the Solaris operating system.

Steps:
  1. Create 'twiki' user.
    % mkdir /export/twiki
    % useradd -d /export/twiki -s /bin/bash twiki
    % cd /export
    % chown twiki:other twiki
    % passwd twiki

  2. Install Sun Java Web Server 7.0 Update 2 in the /export/twiki/sjws7u2 directory. Select the 'Custom' installation, and choose port '8080' for the web server.

    Sun Java Web Server 7.0 Update 2 is available for free at Sun Downloads web site.
            View by Category -> Web & Proxy Servers -> Web Servers -> Web Server 7.0 Update 2.

  3. Install TWiki 4.2.0

    Download TWiki tgz file
    % mkdir /export/twiki/sjws7u2/https-<hostname>/docs/twiki
    % cp TWiki-4.2.0.tgz /export/twiki/sjws7u2/https-<hostname>/docs/twiki
    % cd /export/twiki/sjws7u2/https-<hostname>/docs/twiki
    % gunzip TWiki-4.2.0.tgz
    % tar xf TWiki-4.2.0.tar

  4. Enable CGI on the web server

    • cd /export/twiki/sjws7u2/https-<hostname>/config

    • Edit the 'default' object section of the obj.conf file to include the following two lines:

      NameTrans fn="pfx2dir" from="/twiki/view" dir="/export/twiki/sjws7u2/https-<hostname>/docs/twiki/bin/view" name="cgi"

      Service fn="send-cgi" type="magnus-internal/cgi" nice="10"


      Hopefully the following example gives an idea of where to insert those two lines. In the example, 't2000-240-08' is the hostname.

       % diff -C 3 obj.conf obj.conf.orig
      
      *** obj.conf    Wed May 14 01:36:34 2008
      --- obj.conf.orig       Wed May 14 01:02:52 2008
      ***************
      *** 10,17 ****
        AuthTrans fn="match-browser" browser="*MSIE*" ssl-unclean-shutdown="true"
        NameTrans fn="ntrans-j2ee" name="j2ee"
        NameTrans fn="pfx2dir" from="/mc-icons" dir="/export/twiki/sjws7u2/lib/icons" name="es-internal"
      - # Consider anything in the directory /export/twiki/sjws7u2/https-t2000-240-08/docs/twiki/bin to be a CGI
      - NameTrans fn="pfx2dir" from="/twiki/view" dir="/export/twiki/sjws7u2/https-t2000-240-08/docs/twiki/bin/view" name="cgi"
        PathCheck fn="uri-clean"
        PathCheck fn="check-acl" acl="default"
        PathCheck fn="find-pathinfo"
      --- 10,15 ----
      ***************
      *** 20,27 ****
        ObjectType fn="type-j2ee"
        ObjectType fn="type-by-extension"
        ObjectType fn="force-type" type="text/plain"
      - # Run CGI processes with a "Nice" level of 10
      - Service fn="send-cgi" type="magnus-internal/cgi" nice="10"
        Service method="(GET|HEAD)" type="magnus-internal/directory" fn="index-common"
        Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
        Service method="TRACE" fn="service-trace"
      --- 18,23 ----

  5. Install the Revision Control System (RCS) and GNU diff utilities (diffutils), and update the PATH variable in the user's .profile.

    Those packages are available in 'ready to install' form at the sunfreeware.com and blastwave.org web sites. Perhaps the easiest way is to use pkg-get to pull them, along with their required dependencies, from Blastwave, as shown below.
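
    For example, with Blastwave's pkg-get already set up (typically under /opt/csw/bin), something along these lines pulls both packages and their dependencies. The short package names here are a best guess and may differ:

    % /opt/csw/bin/pkg-get -i rcs
    % /opt/csw/bin/pkg-get -i diffutils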

    Assuming RCS and diffutils are available under /usr/csw/bin directory,
    % cat ~/.profile | grep PATH
    export PATH=/usr/bin:/usr/sbin:/usr/local/bin:/usr/sfw/bin:/usr/csw/bin:$PATH

  6. Configure TWiki by editing its *.cfg files.

    1. Create the config file LocalLib.cfg.

      1. There is a template for this config file in twiki/bin/LocalLib.cfg.txt. Copy LocalLib.cfg.txt to LocalLib.cfg.
        % cd /export/twiki/sjws7u2/https-<hostname>/docs/twiki/bin
        % cp LocalLib.cfg.txt LocalLib.cfg
      2. Edit LocalLib.cfg to update the $twikiLibPath variable.
        $twikiLibPath = "/export/twiki/sjws7u2/https-<hostname>/docs/twiki/lib";

    2. Manually create the config file LocalSite.cfg.
      % cd /export/twiki/sjws7u2/https-<hostname>/docs/twiki/lib
      % cp TWiki.spec LocalSite.cfg

    3. Update the following variables with appropriate values in LocalSite.cfg.

      $TWiki::cfg{DefaultUrlHost}, $TWiki::cfg{ScriptUrlPath}, $TWiki::cfg{PubUrlPath}, $TWiki::cfg{PubDir}, $TWiki::cfg{TemplateDir}, $TWiki::cfg{DataDir}, $TWiki::cfg{LocalesDir}, $TWiki::cfg{OS}, $TWiki::cfg{RCS}{EgrepCmd}, and $TWiki::cfg{RCS}{FgrepCmd}.

      The demo TWiki used for this Proof-Of-Concept (POC) was deployed on a Sun Fire T2000, so Cool Stack was installed on the server to take advantage of its optimized Perl.
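
      For illustration, the relevant LocalSite.cfg entries might end up looking something like the following. The hostname, port, and paths are hypothetical and must match your web server instance; note that ScriptUrlPath should agree with the pfx2dir mapping added to obj.conf earlier, so that /twiki/view resolves to the view script.

        $TWiki::cfg{DefaultUrlHost} = 'http://t2000-240-08:8080';
        $TWiki::cfg{ScriptUrlPath}  = '/twiki';
        $TWiki::cfg{PubUrlPath}     = '/twiki/pub';
        $TWiki::cfg{PubDir}         = '/export/twiki/sjws7u2/https-<hostname>/docs/twiki/pub';
        $TWiki::cfg{TemplateDir}    = '/export/twiki/sjws7u2/https-<hostname>/docs/twiki/templates';
        $TWiki::cfg{DataDir}        = '/export/twiki/sjws7u2/https-<hostname>/docs/twiki/data';
        $TWiki::cfg{LocalesDir}     = '/export/twiki/sjws7u2/https-<hostname>/docs/twiki/locale';
        $TWiki::cfg{OS}             = 'UNIX';
        # For EgrepCmd and FgrepCmd, keep the argument templates that come with
        # TWiki.spec and change only the leading command paths, e.g. to
        # /usr/csw/bin/egrep and /usr/csw/bin/fgrep.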

  7. Exit the current shell; and reconnect.

  8. Restart the Sun Java Web Server.
    % cd /export/twiki/sjws7u2/https-<hostname>/bin
    % ./stopserv
    % ./startserv
  9. Manually execute the 'view' script from the command line.
    % cd /export/twiki/sjws7u2/https-<hostname>/docs/twiki/bin
    % ./view
    You should see valid HTML (not errors) on stdout. If you see errors, check /export/twiki/sjws7u2/https-<hostname>/logs/errors for the error messages. Fix the issues and re-run 'view' from the command line. Repeat this exercise until the script returns valid HTML.

  10. Finally verify the TWiki installation through the web browser.
            Open http://<webhost>:<port>/twiki/view in a web browser.
References:
  1. Installing TWiki on Sun Java System Web Server by Manish Kapur, Sun Microsystems
  2. Using PHP on Sun Java System Web Server by Joe McCabe, Sun Microsystems


A copy of these instructions is also available at my other weblog:
http://technopark02.blogspot.com/2008/05/deploying-twiki-420-on-sun-java-web.html

Tuesday Apr 08, 2008

Running Batch Workloads on Sun's CMT Servers

Ever since Sun introduced Chip Multi-Threading (CMT) hardware in the form of the UltraSPARC T1 based T1000/T2000, our internal mail aliases have been inundated with a variety of customer stories, the majority of which go like 'batch jobs are taking 12+ hours on a T2000, whereas they take only 3 or 4 hours on a US-IV+ based v490'. Even two and a half years after the introduction of this revolutionary CMT hardware, it appears that the majority of Sun customers are still under the impression that Sun's CMT systems like the T2000 and T5220 are not capable of handling CPU intensive batch workloads. That concern is not valid. CMT processors like the UltraSPARC T1, T2, and T2 Plus can handle batch workloads just as well as any traditional/conventional processor, viz. UltraSPARC-IV+, SPARC64-VI, AMD Opteron, Intel Xeon, IBM POWER6. However, CMT awareness and a little effort are required at the customer end to achieve good throughput on CMT systems.

First of all, end users must realize that the maximum clock speed of the existing CMT processor line-up (UltraSPARC T1, UltraSPARC T2, UltraSPARC T2 Plus) is only 1.4 GHz, and on top of that, each strand (individual hardware thread) within a core shares the CPU cycles with the other strands operating on the same core (note: each core operates at the speed of the processor). Given these facts, it is no surprise to see batch jobs take longer to complete when only one or a very few single-threaded batch jobs are submitted to the system. In such cases, the system resources are fairly under-utilized in addition to the elapsed times being longer. One possible trick to achieve the required throughput in the expected time frame is to split up the workload into multiple jobs, as the shell sketch below illustrates. For example, if an EDU customer needs to generate 1,000 transcripts, the customer should consider submitting 4 individual jobs with 250 transcripts each, or 8 jobs with 125 transcripts each, rather than one job for all 1,000 transcripts. Ideally, the customer should observe the resource utilization (CPU%, for example) and experiment with the number of jobs until the system achieves the desired throughput within the expected time frame.
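
The same idea, expressed as a minimal Bourne shell sketch. Note that gen_transcripts and its -start/-count arguments are hypothetical placeholders for a site's actual batch tool:

    #!/bin/sh
    # Split a 1,000-transcript run into 4 parallel jobs of 250 transcripts each.
    i=0
    while [ $i -lt 4 ]
    do
        gen_transcripts -start `expr $i \* 250 + 1` -count 250 &
        i=`expr $i + 1`
    done
    wait    # returns when all four background jobs have completed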

Case study: Oracle E-Business Suite Payroll 11i workload on Sun SPARC Enterprise T5220

In order to prove beyond a reasonable doubt that the aforementioned methodology works, let's take Oracle's E-Business Suite 11.5.10 Payroll workload as an example. On a single T5220 with one 1.4 GHz UltraSPARC T2 processor acting as the batch, application and database server, 4 payroll threads generated 5,000 paychecks in 31.53 minutes while consuming only 6.04% CPU on average; ~9,500 paychecks is the projected hourly throughput. This is a classic example of what the majority of Sun's CMT customers are experiencing today, i.e., long batch processing times with little resource consumption. Keep in mind that the UltraSPARC T2 and UltraSPARC T2 Plus processors can each execute up to 64 jobs in parallel (on a side note, the UltraSPARC T1 processor can execute up to 32 jobs in parallel). So, to put the idling resources to effective use, and thereby improve the elapsed times and the overall throughput, a few experiments were conducted with 64 payroll threads, and the results are very impressive. With a maximum of 64 payroll threads, it took only 4.63 minutes to process 5,000 paychecks at an average of 40.77% CPU utilization. In other words, a similarly configured T5220 can process ~64,700 paychecks per hour using less than half of the available CPU cycles. Here is a word of caution: just because the processor can execute 64 threads in parallel doesn't mean it is always optimal to submit 64 parallel jobs on systems like the T5220. A very high number of batch jobs (payroll threads in this particular scenario) might be overkill for simple tasks like the NACHA step of the Payroll process.

The following white paper has more detailed information about the nature of the workload and the results from the experiments with various numbers of threads for the different components of the Oracle Applications Payroll batch workload. Refer to the same white paper for the exact tuning information as well.

Link to the white paper:
     E-Business Suite Payroll 11i (11.5.10) using Oracle 10g on a Sun SPARC Enterprise T5220

Here is the summary of the results that were extracted from the white paper:

Hardware configuration
          1x Sun SPARC Enterprise T5220 for running the application, batch and the database servers
              Specifications: 1x 1.4 GHz 8-core UltraSPARC T2 processor with 64 GB memory

Software configuration
          Oracle E-Business Suite 11.5.10
          Oracle 10g R1 10.1.0.4 RDBMS
          Solaris 10 8/07

Results
Oracle E-Business Suite 11i Payroll - Number of employees: 5,000
Component            #Threads   Time (min)   Avg. CPU%   Hourly Throughput
Payroll process         64         1.87        90.56           160,714
PrePayments             64         0.20        46.33         1,500,000
Ext. Proc. Archive      64         1.90        90.77           157,895
NACHA                    8         0.05         2.52         6,000,000
Check Writer            24         0.38         9.00           782,609
Costing                 48         0.23        32.50         1,285,714
Total or Average        NA         4.63        40.77            64,748
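
The hourly throughput figures above follow directly from the elapsed times:

    hourly throughput = number of employees * 60 / elapsed time in minutes

For example, for NACHA: 5,000 * 60 / 0.05 = 6,000,000. (The small differences for some of the other components are presumably due to rounding of the published elapsed times.)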

It is evident from the average CPU% that the Payroll process and External Process Archive components are extremely CPU intensive and hence take longer to complete. That is why 64 threads were configured for those components, to run the system at its full potential. Lightweight components like NACHA need fewer threads to complete the job efficiently; configuring 64 threads for NACHA would have a negative impact on the throughput. In other words, we would be wasting CPU cycles for no apparent improvement.

It is the responsibility of the customers to tune the application and the workload appropriately. One size doesn't fit all.

The Payroll 11i results on the T5220 demonstrate clearly that Sun's CMT systems are capable of handling batch workloads well. It would be interesting to see how they perform against systems equipped with traditional processors with higher clock speeds. For this comparison, we can use a couple of results published by UNISYS and IBM with the same workload. The following table summarizes the results from the following two white papers; for the sake of completeness, Sun's CMT results are included as well.

Source URLs:
  1. E-Business Suite Payroll 11i (11.5.10) using Oracle 10g on a UNISYS ES7000/one Enterprise Server
  2. E-Business Suite Payroll 11i (11.5.10) using Oracle 10g for Novell SUSE Linux on IBM eServer xSeries 366 Servers
Oracle E-Business Suite 11i Payroll - Number of employees: 5,000

Vendor   OS                                          #Threads   Time (min)   Avg. CPU%   Hourly Throughput
UNISYS   Linux: RHEL 4 Update 3                      12 [1]        5.18       53.22%          57,915
IBM      Novell SUSE Linux Enterprise Server 9 SP1   12            8.42       50+% [2]        35,644
Sun      Solaris 10 8/07                             8 to 64       4.63       40.77%          64,748

Hardware configurations:
  UNISYS: DB/App/Batch server: 1x Unisys ES7000/one Enterprise Server
          (4x 3.0 GHz Dual-Core Intel Xeon 7041 processors, 32 GB memory)
  IBM:    DB and App servers: 2x IBM eServer xSeries 366 4-way servers
          (each 4x 3.66 GHz Intel Xeon MP (EM64T) processors, 32 GB memory)
  Sun:    DB/App/Batch server: 1x Sun SPARC Enterprise T5220
          (1x 1.4 GHz 8-core UltraSPARC T2 processor, 64 GB memory)

Sun's numbers, the best in each of the time, CPU, and throughput columns, speak for themselves. One 1.4 GHz UltraSPARC T2 processor outperformed four 3 GHz / 3.66 GHz processors in average CPU utilization and, most importantly, in hourly throughput (the hourly throughput calculation relies on the total elapsed time).

Before we conclude, let us reiterate a few things, purely based on the factual evidence presented in this blog post:

  • Sun's CMT servers like the T2000, T5220, and T5240 (a two-socket system with UltraSPARC T2 Plus processors) are well suited to running batch workloads like Oracle Applications Payroll 11i,

  • Sun's CMT servers like the T2000, T5220, and T5240 are well suited to running the Oracle 10g RDBMS when the DML/DDL/SQL statements that make up the majority of the workload are not very complex, and

  • When the application is tuned appropriately, CMT processors can outperform some of the traditional processors that were touted to deliver the best single-thread performance.

Footnotes

[1] There is a note in the UNISYS Payroll 11i white paper that says "[...] the gains {from running increased numbers of threads} decline at higher numbers of parallel threads." This is quite contrary to what Sun observed in its Payroll 11i experiments on the UltraSPARC T2 based T5220: a higher number of parallel threads (maximum: 64) improved the throughput on the T5220, whereas UNISYS' observation is based on experiments with a maximum of 12 parallel threads. Moral of the story: do NOT treat all hardware alike.

[2] IBM's Payroll 11i white paper has no references to the average CPU numbers; 50+% was derived from its "Figure 3: Average CPU Utilization".

Friday Jan 04, 2008

Sun publishes 10,000 user Siebel 8.0 PSPP benchmark on Niagara 2 systems

Here is the link to the benchmark white paper publication:

Siebel CRM Release 8.0 Industry Applications and Oracle 10gR2 DB on Sun SPARC Enterprise T5120/T5220 servers running the Solaris 10 OS

Some key highlights from the white paper:


  • All tiers of the Siebel CRM Release 8.0 architecture ran on UltraSPARC T2 (a chip multithreading, or CMT, processor) based T5120/T5220 systems running Solaris 10 8/07.
  • The multithreading capability of the UltraSPARC T2 processor allowed each of the active Object Manager (OM) processes to run hundreds of Light Weight Processes (LWPs), thereby utilizing the available resources very effectively. A total of 30 Object Managers serviced the workload of 10,000 concurrent users; a quick way to observe this behavior is shown after this list.
  • While supporting 10,000 concurrent Siebel users, the entire Sun solution based on Sun SPARC Enterprise T5120/T5220 servers running Siebel CRM Release 8.0 and Oracle 10g R2 on top of Solaris 10 consumed 1,202 watts in a 6U rack space. As a result, the T5120/T5220 solution supports 8.3 users per watt of energy consumed and 1,666 users per rack unit.
  • Apparently this is the best Siebel 8.0 benchmark result out there in terms of the number of users and price/performance. Feel free to compare Sun's 10,000 user result with the other published results at the Oracle Siebel Benchmark White Papers web page.
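
To observe the LWP behavior on a live system, prstat can report per-LWP microstate accounting for a given Object Manager process. A minimal sketch, where the PID is a hypothetical placeholder:

    % prstat -mL -p 12345

With the -L option, each line of output corresponds to one LWP of the process, which makes it easy to see how many threads an OM is actually running and where they spend their time.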

Saturday Jan 13, 2007

US Energy Policy Act of 2005 - Daylight Saving Time changes 2007

Due to the Energy Policy Act of 2005, there will be changes to daylight saving time in the US starting in 2007. As a hardware/software vendor, Sun Microsystems is ready with a solution (a set of patches) to accommodate those changes, and the required information is well documented in the form of SunSolve alerts.

Documentation


  1. Required patches for Solaris, Java and Sun Fire firmware:

    BigAdmin tech note: 2007 Daylight Saving Time Changes in the U.S


  2. The above tech note refers to Sun Alert 102178, which was obsoleted and replaced by a more comprehensive Sun Alert, 102775:

    Daylight Saving Time (DST) Changes for Australia (2006), Canada (2007), United States (2007) and Others


  3. The required firmware for timezone changes due to the Energy Policy Act of 2005 is at:

    Required Server Firmware for Timezone Changes Due to U.S. Energy Policy Act of 2005
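
After applying the patches, a quick sanity check from the shell confirms that the new rules are in effect. Both commands below ship with Solaris; the patch ID is a placeholder to be taken from the applicable Sun Alert:

    % showrev -p | grep <patch-id>
    % zdump -v US/Eastern | grep 2007

In the zdump output, the 2007 spring transition should fall on Sun Mar 11 (the second Sunday in March) rather than on the first Sunday in April.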

If Sun didn't address some areas that may be impacted by the changes in the Energy Policy Act of 2005, feel free to file a bug at bugs.sun.com (Java bugs) or at bugs.opensolaris.org (Solaris bugs).

(Original blog post is at:
http://technopark02.blogspot.com/2007/01/energy-policy-act-of-2005-daylight.html
)
