Tuesday Dec 23, 2014

Solaris Studio : C/C++ Dynamic Analysis

First, a reminder - Oracle Solaris Studio 12.4 is now generally available. Check the Solaris Studio 12.4 Data Sheet before downloading the software from Oracle Technology Network.

Dynamic Memory Usage Analysis

The Code Analyzer tool in the Oracle Solaris Studio compiler suite can analyze static data, dynamic memory access data, and code coverage data collected from binaries that were compiled with the C/C++ compilers in Solaris Studio 12.3 or later. Code Analyzer is supported on Solaris and Oracle Enterprise Linux.

Refer to the static code analysis blog entry for a quick summary of the steps involved in performing static analysis. The focus of this blog entry is the dynamic portion of the analysis. In this context, dynamic analysis is the evaluation of an application during runtime for memory-related errors. The main objective is to find and debug memory management errors; robustness and security assurance are nice side effects, however limited their extent may be.

Code Analyzer relies on another primary Solaris Studio tool, discover, to find runtime errors that are often caused by memory mismanagement. discover looks for potential errors such as accessing memory outside the bounds of the stack or an array, unallocated memory reads and writes, NULL pointer dereferences, memory leaks and double frees. The full list of memory management issues analyzed by Code Analyzer/discover is at: Dynamic Memory Access Issues

discover performs the dynamic analysis by instrumenting the code so that it can keep track of memory operations while the binary is running. During runtime, discover monitors the application's use of memory by interposing on standard memory allocation calls such as malloc(), calloc(), memalign(), valloc() and free(). Fatal memory access errors are detected and reported at the instant the incident occurs, so it is easy to correlate the failure with the actual source. This behavior makes it somewhat easier to detect and fix memory management problems in large applications. However, the effectiveness of this kind of analysis depends heavily on the flow of control and data during the execution of the target code, so it is important to exercise the application with a variety of test inputs that maximize code coverage.

High-level steps in using Code Analyzer for Dynamic Analysis

Given the enhancements and incremental improvements in analytical tools, Solaris Studio 12.4 is recommended for this exercise.

  1. Build the application with debug flags

    The -g (C) or -g0 (C++) option generates debug information. It enables Code Analyzer to display source code and line number information for errors and warnings.

    • Linux users: specify the -xannotate option on the compile/link line in addition to -g and other options
  2. Instrument the binary with discover

    % discover -a -H <filename>.%p.html -o <instrumented_binary> <original_binary>

    where:

    • -a : write the error data to the <binary-name>.analyze/dynamic directory for use by Code Analyzer
    • -H : write the analysis report to <filename>.<pid>.html when the instrumented binary is executed. %p expands to the process id of the application. If you prefer the analysis report as a plain text file, use -w <filename>.%p.txt instead
    • -o : write the instrumented binary to <instrumented_binary>

    Check the Command-Line Options page for the full list of supported discover options.

  3. Run the instrumented binary

    .. to collect the dynamic memory access data.

    % ./<instrumented_binary> <args>

  4. Finally examine the analysis report for errors and warnings

Example

The following example demonstrates the above steps using the Solaris Studio 12.4 C compiler and the discover command-line tool. The same code was used to demonstrate the static analysis steps as well.
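Putting the four steps together, a minimal session might look like the following sketch (myapp and input.dat are hypothetical names; the discover options are the ones described above):

```shell
# 1. build with debug information (Linux users: add -xannotate)
cc -g -o myapp myapp.c

# 2. instrument the binary; error data goes to myapp.analyze/dynamic,
#    and the report to myapp.<pid>.html
discover -a -H myapp.%p.html -o myapp.disc myapp

# 3. run the instrumented binary to collect dynamic memory access data
./myapp.disc input.dat

# 4. examine the generated HTML report for errors and warnings
```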

A few things to be aware of:

  • If the target application uses the LD_PRELOAD environment variable to preload one or more functions that discover needs to interpose on for dynamic analysis, the resulting analysis may not be accurate.
  • If the target application uses runtime auditing via the LD_AUDIT environment variable, that auditing will conflict with discover's own use of auditing and may result in undefined behavior.

Reference & Recommended Reading:

  1. Oracle Solaris Studio 12.4 : Code Analyzer User's Guide
  2. Oracle Solaris Studio 12.4 : Discover and Uncover User's Guide

Friday Nov 28, 2014

Solaris Studio 12.4 : C/C++ Static Code Analysis

First things first -- Oracle Solaris Studio 12.4 is now generally available. One of the key features of this release is the support for the latest industry standards including C++11, C11 and OpenMP 4.0. Check the Solaris Studio 12.4 Data Sheet before downloading the software from Oracle Technology Network.

Static Code Analysis

The Code Analyzer tool in the Oracle Solaris Studio compiler suite can analyze static data, dynamic memory access data, and code coverage data collected from binaries that were compiled with the C/C++ compilers in Solaris Studio 12.3 or later. Code Analyzer is supported on Solaris and Oracle Enterprise Linux.

The primary focus of this blog entry is static code analysis.

Static code analysis is the process of detecting common programming errors in code during compilation. The static code checking component in Code Analyzer looks for potential errors such as accessing outside the bounds of an array, out-of-scope variable use, NULL pointer dereferences, infinite loops, uninitialized variables, memory leaks and double frees. The following webpage in the Solaris Studio 12.4: Code Analyzer User's Guide has the complete list of errors with examples.

    Static Code Issues analyzed by Code Analyzer

High-level steps in using Code Analyzer for Static Code analysis

Given the enhancements and incremental improvements in analysis tools, Solaris Studio 12.4 is recommended for this exercise.

  1. Collect static data

    Compile [all source] and link with the -xprevise=yes option.

    • when using Solaris Studio 12.3 compilers, compile with the -xanalyze=code option.
    • Linux users: specify the -xannotate option on the compile/link line in addition to -xprevise=yes|-xanalyze=code.

    During compilation, the C/C++ compiler extracts static errors automatically, and writes the error information to a sub-directory of the <binary-name>.analyze directory.

  2. Analyze the static data

    Two options are available to analyze the data and display the errors in a report format: the Code Analyzer GUI and the codean command-line tool.

Example

The following example demonstrates the above steps using the Solaris Studio 12.4 C compiler and the codean command-line tool.
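A minimal sketch of those steps, assuming a hypothetical source file myapp.c (the exact codean options are documented in the Code Analyzer User's Guide):

```shell
# collect static data during compilation; the compiler writes it
# under the myapp.analyze directory (Linux users: add -xannotate)
cc -g -xprevise=yes -o myapp myapp.c

# display the collected static errors with the command-line tool
codean myapp
```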

A few things to be aware of:

  • compilers may not be able to detect all of the static errors in the target code, especially if the errors are complex.
  • some errors depend on data that is available only at runtime -- perform dynamic analysis as well.
  • some errors are ambiguous or might not be actual errors -- expect a few false positives.

Reference & Recommended Reading:
    Oracle Solaris Studio 12.4 Code Analyzer User's Guide

Monday Oct 14, 2013

[Script] Breakdown of Oracle SGA into Solaris Locality Groups

Goal: for a given process, find out how the SGA was allocated across the different locality groups on a system running the Solaris operating system.

Download the shell script, sga_in_lgrp.sh. The script accepts any Oracle database process id as input, and prints out the memory allocated in each locality group.

Usage: ./sga_in_lgrp.sh <pid>

eg.,

# prstat -p 12820

   PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP
 12820 oracle     32G   32G sleep    60  -20   0:00:16 0.0% oracle/2

# ./sga_in_lgrp.sh 12820

Number of Locality Groups (lgrp): 4
------------------------------------

lgroup 1 :   8.56 GB
lgroup 2 :   6.56 GB
lgroup 3 :   6.81 GB
lgroup 4 :  10.07 GB

Total allocated memory:  32.00 GB

For those who want to have a quick look at the source code, here it is.

# cat sga_in_lgrp.sh

#!/bin/bash

# check the argument count
if [ $# -lt 1 ]
then
        echo "usage: ./sga_in_lgrp.sh <oracle pid>"
        exit 1
fi

# find the number of locality groups
lgrp_count=$(kstat -l lgrp | tail -1 | awk -F':' '{ print $2 }')
echo -e "\nNumber of Locality Groups (lgrp): $lgrp_count"
echo -e "------------------------------------\n"

# save the ism output using pmap
pmap -sL $1 | grep ism | sort -k5 > /tmp/tmp_pmap_$1

# calculate the total amount of memory allocated in each lgroup
for i in `seq 1 $lgrp_count`
do
        echo -n "lgroup $i : "
        grep "$i   \[" /tmp/tmp_pmap_$1 | awk '{ print $2 }' | sed 's/K//g' | 
               awk '{ sum+=$1} END {printf ("%6.2f GB\n", sum/(1024*1024))}'
done

echo
echo -n "Total allocated memory: "
awk '{ print $2 }' /tmp/tmp_pmap_$1 | sed 's/K//g' | \
         awk '{ sum+=$1 } END { printf ("%6.2f GB\n\n", sum/(1024*1024)) }'

rm /tmp/tmp_pmap_$1

Like many things in life, there will always be a better or simpler way to achieve this. If you find one, do not fret over this approach. Please share, if possible.

Tuesday Jul 30, 2013

Oracle Tips : Solaris lgroups, CT optimization, Data Pump, Recompilation of Objects, ..

1. [Re]compiling all objects in a schema
exec DBMS_UTILITY.compile_schema(schema => 'SCHEMA');

To recompile only the invalid objects in parallel:

exec UTL_RECOMP.recomp_parallel(<NUM_PARALLEL_THREADS>, 'SCHEMA');

A NULL value for SCHEMA recompiles all invalid objects in the database.


2. SGA breakdown in Solaris Locality Groups (lgroup)

To find the breakdown, execute pmap -L <pid> | grep shm. Then separate the lines that are related to each locality group and sum up the values in the 2nd column to arrive at the total SGA memory allocated in that locality group.

(I'm pretty sure there will be a much easier way that I am not currently aware of.)
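The summation can be done in one awk pass instead of a loop per lgroup. The sketch below assumes the size is in column 2 (with a K suffix) and the lgroup id in column 5, as in Solaris pmap -sL output; the sample function stands in for real pmap output:

```shell
# stand-in for: pmap -sL <pid> | grep ism
pmap_filter() {
  cat <<'EOF'
0000000380000000 1048576K 256M rwxsR 1 [ ism shmid=0xf ]
00000003c0000000 2097152K 256M rwxsR 2 [ ism shmid=0xf ]
0000000440000000 1048576K 256M rwxsR 1 [ ism shmid=0xf ]
EOF
}

# strip the K suffix, accumulate per-lgroup totals, print them in GB
pmap_filter | awk '{ gsub(/K/, "", $2); sum[$5] += $2 }
  END { for (g in sum) printf "lgroup %s : %.2f GB\n", g, sum[g]/(1024*1024) }'
```

On a live system, replace the sample function with the real pmap pipeline.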


3. Default values for shared pool, java pool, large pool, ..

If the *pool parameters were not set explicitly, executing the following query is one way to find out what they are currently set to.

eg.,
SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2171296 No
Redo Buffers                      373620736 No
Buffer Cache Size                8.2410E+10 Yes
Shared Pool Size                 1.7180E+10 Yes
Large Pool Size                   536870912 Yes
Java Pool Size                   1879048192 Yes
Streams Pool Size                 268435456 Yes
Shared IO Pool Size                       0 Yes
Granule Size                      268435456 No
Maximum SGA Size                 1.0265E+11 No
Startup overhead in Shared Pool  2717729536 No
Free SGA Memory Available                 0
12 rows selected.

4. Fix to PLS-00201: identifier 'GV$SESSION' must be declared error

Grant select privilege on gv_$SESSION to the owner of the database object that failed to compile.

eg.,
SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Warning: Package Body altered with compilation errors.

SQL> show errors
Errors for PACKAGE BODY OWF_MGR.FND_SVC_COMPONENT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
390/22   PL/SQL: Item ignored
390/22   PLS-00201: identifier 'GV$SESSION' must be declared

SQL> grant select on gv_$SESSION to OWF_MGR;
Grant succeeded.

SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Package body altered.

5. Solaris Critical Thread (CT) optimization for Oracle log writer (lgwr)

Critical Thread is a scheduler optimization available in Solaris 10 Update 10 and later releases. Latency-sensitive, single-threaded components of software, such as the Oracle database log writer, benefit from CT optimization.

At a high level, LWPs marked as critical are granted more exclusive access to the hardware. For example, on SPARC T4 and T5 systems, such a thread is given exclusive access to a core as much as possible. CT optimization won't delay scheduling of any runnable thread in the system.

Critical Thread optimization is enabled by default. However, the users of the system have to hint the OS by explicitly marking a thread or two "critical" as shown below.

priocntl -s -c FX -m 60 -p 60 -i pid <pid_of_critical_single_threaded_process>

From the database point of view, the log writer (lgwr) is one such process that can benefit from CT optimization on the Solaris platform. Oracle DBAs can either mark the lgwr process 'critical' once the database is up and running, or simply patch the 11.2.0.3 database software by installing RDBMS patch 12951619 to let the database take care of it automatically. I believe Oracle 12c does it by default. Future releases of the 11g software may make lgwr critical out of the box.
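For the manual route, the lgwr pid can be looked up and passed to priocntl in one step. This is only a sketch: the ora_lgwr process-name pattern is an assumption (adjust it for your SID), and the command must run with sufficient privileges.

```shell
# mark the Oracle log writer critical: fixed-priority (FX) class,
# user priority 60, priority ceiling 60
# 'ora_lgwr' pattern is assumed; run as a privileged user
priocntl -s -c FX -m 60 -p 60 -i pid $(pgrep -f ora_lgwr)
```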

Those who install the database patch 12951619 need to carefully follow the post installation steps documented in the patch README to avoid running into unwanted surprises.


6. ORA-14519 error while importing a table from a Data Pump export dump
ORA-14519: Conflicting tablespace blocksizes for table : Tablespace XXX block \
size 32768 [partition specification] conflicts with previously specified/implied \
tablespace YYY block size 8192 [object-level default]
Failing sql is:
CREATE TABLE XYZ
..

All partitions in table XYZ use 32K blocks, whereas the implicit default partition points to an 8K block tablespace. The workaround is to use the REMAP_TABLESPACE option on the Data Pump impdp command line to remap the implicit default tablespace of the partitioned table to the tablespace where the rest of the partitions reside.
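As a hypothetical sketch, reusing the placeholder names from the error message above, the remap might look like this:

```shell
# remap the 8K-block object-level default tablespace YYY to the
# 32K-block tablespace XXX used by the partitions of table XYZ
impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp \
    tables=XYZ REMAP_TABLESPACE=YYY:XXX
```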


7. Index building task in Data Pump import process

When a Data Pump import process is running, index building is performed with just one thread by default. This becomes a bottleneck and causes the import to take a long time, especially if many large tables with millions of rows are being imported into the target database. One way to speed up the import is to skip index building during the data import task with the EXCLUDE=INDEX impdp command-line option, and then extract the index definitions for all the skipped indexes from the Data Pump dump file as shown below.

impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp \
    sqlfile=<index_def_file>.sql INCLUDE=INDEX

Edit <index_def_file>.sql to set the desired number of parallel threads for building each index. Finally, execute <index_def_file>.sql to build the indexes once the data import task is complete.

Friday Aug 03, 2012

Enabling 2 GB Large Pages on Solaris 10

A few facts:

  • 8 KB is the default page size on Solaris 10 and 11 as of this writing
  • both hardware and software must support 2 GB large pages
  • SPARC T4 hardware is capable of supporting 2 GB pages
  • Solaris 11 kernel has built-in support for 2 GB pages
  • Solaris 10 has no default support for 2 GB pages
  • memory-intensive 64-bit applications may benefit the most from using 2 GB pages

Prerequisites:

OS: Solaris 10 8/11 (Update 10) or later
Hardware: SPARC T4. eg., SPARC T4-1, T4-2 or T4-4

Steps to enable 2 GB large pages on Solaris 10:

  1. Install the latest kernel patch or ensure that 147440-04 or later was installed

  2. Add the following line to /etc/system and reboot
    • set max_uheap_lpsize=0x80000000

  3. Finally check the output of the following command when the system is back online
    • pagesize -a

    eg.,
    % pagesize -a
    8192		<-- 8K
    65536		<-- 64K
    4194304		<-- 4M
    268435456	<-- 256M
    2147483648	<-- 2G
    
    % uname -a
    SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v
    


Saturday Aug 27, 2011

Oracle 11g: Travel back in time with the Database Flashback

Error recovery, historical reporting, trend analysis, data forensics and fraud detection are just some of the business problems that can be solved by using the Flashback Data Archive feature in Oracle 11g database. The Flashback option can be enabled for the entire database or for a selected set of tables. It can be enabled in the database with no application changes.

At work I usually run performance tests starting with a clean copy of the database. I analyze the test results at the end of the test, determine the next course of action (tuning), restore the clean copy of the database from a backup, apply the tuning and re-run the performance test. This goes on in a cycle until I'm happy with the overall test result. In some cases, especially with large data sets, restoring the database from a backup becomes one of the most time consuming tasks. In such situations, using the database flashback to go back to a previously saved restore point saves quite a bit of time. The rest of this blog post demonstrates how to enable database flashback and how to go back to a specified restore point. Check the Oracle Total Recall with Oracle Database 11g Release 2 white paper for more information on Flashback Data Archive (FDA).

Objective

Revert the entire database to a previously saved state at will

Steps to perform

  • Configure the following initialization parameters: db_recovery_file_dest & db_recovery_file_dest_size
  • Enable Archive Log mode
  • Enable database Flashback option
  • Create a restore point. Decide whether to create a normal or a guaranteed restore point
  • Finally flashback database to the created restore point when required

Be aware that there will be some performance and storage overhead in using the database flashback. Evaluate all your options carefully before configuring database flashback.

Example

The following example uses guaranteed restore point to flashback the database in a two-node RAC environment. Most of the example is self-explanatory.

% srvctl status database -d DEMO
Instance DEMO1 is running on node racnode01
Instance DEMO2 is running on node racnode02

/* stop all the database instances except one (anyone) in RAC config */

% srvctl stop instance -d DEMO -i DEMO2

% export ORACLE_SID=DEMO1

/* put one of the instances in non-cluster mode */

% sqlplus / as sysdba
SQL> alter system set cluster_database=false scope=spfile;

% srvctl stop instance -d DEMO -i DEMO1

% sqlplus / as sysdba
SQL> startup mount

/* enable archive log mode */

SQL> alter database archivelog;

SQL> archive log list
Database log mode	       Archive Mode
Automatic archival	       Enabled
Archive destination	       USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     2
Next log sequence to archive   4
Current log sequence	       4

SQL> show parameter db_recovery_file

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest		     string	 +FRA
db_recovery_file_dest_size	     big integer 512G

/* enable flashback option */

SQL> alter database flashback on;

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

/* put the instance back in cluster mode and restart the database */

SQL> alter system set cluster_database=true scope=spfile;

SQL> alter database open;

% srvctl stop instance -d DEMO -i DEMO1

% srvctl start database -d DEMO

/* create a guaranteed restore point */

% sqlplus / as sysdba
SQL> create restore point demo_clean_before_test guarantee flashback database;

Restore point created.

SQL> column NAME format A25
SQL> column TIME format A40
SQL> set lines 120
SQL> select NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE 
  2  from V$RESTORE_POINT where GUARANTEE_FLASHBACK_DATABASE='YES';

NAME				 SCN TIME		              GUA STORAGE_SIZE
------------------------- ---------- -------------------------------- --- ------------
DEMO_CLEAN_BEFORE_TEST     17460960 21-AUG-11 01.01.20.000 AM	      YES     67125248

/* flashback database to the saved restore point */

% srvctl stop database -d DEMO

% export ORACLE_SID=DEMO1

% rman TARGET /

RMAN> STARTUP MOUNT;
RMAN> FLASHBACK DATABASE TO RESTORE POINT 'DEMO_CLEAN_BEFORE_TEST';

Starting flashback at 21-AUG-11
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:25

Finished flashback at 21-AUG-11

RMAN> ALTER DATABASE OPEN RESETLOGS;

database opened

RMAN> SHUTDOWN IMMEDIATE;

% srvctl start database -d DEMO

/* ============================================================================== */

/* alternatively run the following RMAN script as shown below */

% cat restore.rman
RUN {
        STARTUP MOUNT;
        FLASHBACK DATABASE TO RESTORE POINT 'DEMO_CLEAN_BEFORE_TEST';
        ALTER DATABASE OPEN RESETLOGS;
        SHUTDOWN IMMEDIATE;
}

EXIT

% rman TARGET / cmdfile=restore.rman

Note:
It is not mandatory to enable logging for flashback database in order to create and use restore points. The requirement in that case is to put the database in ARCHIVELOG mode and to create the first guaranteed restore point while the database is in the mounted state.

Friday Jul 01, 2011

Be Eco-Responsible. Use Resource Management features in Solaris, Oracle Database and Go Green

This blog post is about a white paper that introduces the resource management features in Oracle Solaris and Oracle Database.

Who needs more hardware when under-utilized systems are sitting idle in datacenters, consuming valuable space and energy and polluting the environment with carbon emissions? These days virtualization and consolidation are more popular than ever before. And then there is Oracle Corporation, with great products offering rich features for resource manageability in consolidated, virtualized and isolated environments. Put all these pieces together and what do we get? - a complete green solution that helps make this planet a better place.

In an attempt to increase awareness of the resource management features available in Oracle Solaris and Oracle Database, we just published a four-part white paper covering Oracle Solaris Resource Manager and Oracle Database Resource Manager. These resource managers are not really a product, a process or an add-on to Solaris and the Oracle Database. "Resource Manager" is the abstract term for the set of software modules that are integrated into the Solaris operating system and the Oracle database management system to facilitate resource management from the OS and RDBMS perspectives.

The first part of the series introduces the concept of resource management, Solaris Resource Manager and Oracle Database Resource Manager; compares and contrasts those two resource managers; and ends the discussion with a brief overview of resource management in high-availability (HA) environments. Access this paper from the following URL:

        Part 1: Introduction to Resource Management in Oracle Solaris and Oracle Database

The second part is all about the available resource management features in Oracle Solaris, including resource management in virtualized environments. The topics range from simple process binding to the dynamic reconfiguration of CPU and memory resources in a logical domain (LDom). The value of this paper lies in the volume of examples that follow each introduced feature. Access the second part from the following URL:

        Part 2: Effective Resource Management Using Oracle Solaris Resource Manager

Similar to the second part, the third part covers the available resource management features in the Oracle Database Management System. The majority of the discussion revolves around creating resource plans in an Oracle database. This part also contains plenty of examples. Access the third part from the following URL:

        Part 3: Effective Resource Management Using Oracle Database Resource Manager

The final part briefly introduces the various hardware and software products that make Oracle Corporation an ideal consolidation platform. An imaginary case study demonstrates how to consolidate multiple applications and databases onto a single server using Oracle virtualization technologies and the resource management features found in Oracle Solaris and Oracle Database. The description of the consolidation plan and its implementation takes up the major portion of this final part. Access the fourth and final part of this series from the following URL:

        Part 4: Resource Management Case Study for Mixed Workloads and Server Sharing

Happy reading.

Friday May 27, 2011

PeopleSoft Application Server : Binding JSL Port to Multiple IP Addresses

(Pardon the formatting. Legible copy of this blog post is available at:
http://technopark02.blogspot.com/2011/05/peoplesoft-application-server-binding.html)

For the impatient:

On any multi-homed[1] host, replace the %PS_MACH% variable in the "Jolt Listener" section of the application server domain's psappsrv.cfg file with the special IP address "0.0.0.0" to get the desired effect. It enables the TCP/IP stack to listen on all available network interfaces on the system. In other words, if JSL is listening on 0.0.0.0 on a multi-homed system, PIA traffic can flow over any of the IP addresses assigned to that system.

For the rest:

A little background first.

PeopleSoft application server relies on Jolt, a companion product that co-exists with Tuxedo, to handle all web requests. That is, Jolt is the bridge between PeopleSoft application server and the web server (any supported one) that facilitates web communication. Tuxedo helps schedule PeopleSoft application server processes to perform the actual transactions. When the application server is booted up, Jolt server listener (JSL) is bound to a pre-configured port number and is actively monitored for incoming web requests. On the other hand, web server instance(s) are made aware of the existence of all Jolt listeners in a PeopleSoft Enterprise by configuring the hostname:port# pairs in each of the web domain's configuration.properties file.

By default the variable %PS_MACH% in each of the application server domain configuration file, psappsrv.cfg, gets resolved to the hostname of the system during application server boot-up time. The following example demonstrates that.

eg.,

/* Application server configuration file */
% cat psappsrv.cfg
..
[JOLT Listener]
Address=%PS_MACH%
Port=9000
..

/* Boot up the application server domain */
% psadmin -c boot -d HRHX
..
exec JSL -A -- -d /dev/tcp -n //ben01:9000 -m 100 -M 100 -I 5 -j ANY -x 20 -S 10 -c 1000000 -w JSH :
        process id=20077 ... Started.
..

% hostname
ben01

% netstat -a | grep 9000
ben01.9000                 *.*                0      0 49152      0 LISTEN

% netstat -an | grep 9000
17.16.221.106.9000          *.*                0      0 49152      0 LISTEN

% ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
 inet 127.0.0.1 netmask ff000000 
bge0: flags=1000843 mtu 1500 index 2
 inet 17.16.221.106 netmask ffffff00 broadcast 17.16.221.255
bge1: flags=1000843 mtu 1500 index 3
 inet 18.1.1.1 netmask ffffff00 broadcast 18.1.1.255
e1000g0: flags=1000843 mtu 1500 index 4
 inet 18.1.1.201 netmask ffffff00 broadcast 18.1.1.255

% telnet 17.16.221.106 9000
Trying 17.16.221.106...
Connected to 17.16.221.106.
Escape character is '^]'.

% telnet 18.1.1.1 9000
Trying 18.1.1.1...
telnet: Unable to connect to remote host: Connection refused

% telnet 18.1.1.201 9000
Trying 18.1.1.201...
telnet: Unable to connect to remote host: Connection refused

Notice that %PS_MACH% was replaced by the actual hostname and the Jolt listener created the server socket using the IP address 17.16.221.106 and port number 9000. From the outputs of netstat, ifconfig and telnet, it is apparent that "bge0" is the only network interface that is being used by the Jolt listener. It means web server can communicate to JSL using the IP address 17.16.221.106 over port 9000 but not using any of the other two IP addresses 18.1.1.1 or 18.1.1.201. This is the default behavior.

However some customers may wish to have the ability to connect to the application services from different/multiple networks. This is possible in case of multi-homed systems -- servers with multiple network interfaces that are connected to a single or multiple networks. For example, such a host could be part of a public network, a private network where only those clients that can communicate over private links can connect or an InfiniBand network, a low latency high throughput network. The default behavior of JSL can be changed by using a special IP address "0.0.0.0" in place of the variable %PS_MACH% in application server domains' configuration file. The IP address 0.0.0.0 hints the Jolt listener (JSL) to listen on all available IPv4 network interfaces on the system. (I read somewhere that "::0" is the equivalent for IPv6 interfaces. Didn't get a chance to test it out yet). The following example demonstrates how the default behavior changes with the IP address 0.0.0.0.

% cat psappsrv.cfg
..
[JOLT Listener]
Address=0.0.0.0
Port=9000
..

/* Update the binary configuration by reloading the config file */
% psadmin -c configure -d HRHX

% psadmin -c boot -d HRHX
..
exec JSL -A -- -d /dev/tcp -n //0.0.0.0:9000 -m 100 -M 100 -I 5 -j ANY -x 20 -S 10 -c 1000000 -w JSH :
 process id=20874 ... Started.

% netstat -a | grep 9000
      *.9000               *.*                0      0 49152      0 LISTEN

% telnet 17.16.221.106 9000
Trying 17.16.221.106...
Connected to 17.16.221.106.
..

% telnet 18.1.1.1 9000
Trying 18.1.1.1...
Connected to 18.1.1.1.
..

% telnet 18.1.1.201 9000
Trying 18.1.1.201...
Connected to 18.1.1.201.

Footnote:
[1] Any system with more than one network interface is considered a multi-homed host.

Thursday Apr 14, 2011

Oracle Solaris: Show Me the CPU, vCPU, Core Counts and the Socket-Core-vCPU Mapping

[Replaced old code with new code on 10/03/11]

It should be easy to find this information just by running an OS command. However for some reason it ain't the case as of today. The user must know a few details about the underlying hardware and run multiple commands to figure out the exact number of physical processors, cores, etc.

For the benefit of our customers, here is a simple shell script that displays the number of physical processors, cores, virtual processors, cores per physical processor, hardware threads (vCPUs) per core and the virtual CPU mapping for all physical processors and cores on a Solaris system (SPARC or x86/x64). This script produced valid output on recent T-series and M-series hardware as well as on some older hardware such as the Sun Fire 4800 and x4600. Due to changes in the cpu_info output over the years, it is possible that the script may return incorrect information in some cases. Since it is just a shell script, tweak the code as you like. The script can be executed by any OS user.

Download the script : showcpucount


% cat showcpucount

--------------------------------------- CUT HERE -------------------------------------------
#!/bin/bash

/usr/bin/kstat -m cpu_info | egrep "chip_id|core_id|module: cpu_info" > /var/tmp/cpu_info.log

nproc=`(grep chip_id /var/tmp/cpu_info.log | awk '{ print $2 }' | sort -u | wc -l | tr -d ' ')`
ncore=`(grep core_id /var/tmp/cpu_info.log | awk '{ print $2 }' | sort -u | wc -l | tr -d ' ')`
vproc=`(grep 'module: cpu_info' /var/tmp/cpu_info.log | awk '{ print $4 }' | sort -u | wc -l | tr -d ' ')`

nstrandspercore=$(($vproc/$ncore))
ncoresperproc=$(($ncore/$nproc))

speedinmhz=`(/usr/bin/kstat -m cpu_info | grep clock_MHz | awk '{ print $2 }' | sort -u)`
speedinghz=`echo "scale=2; $speedinmhz/1000" | bc`

echo "Total number of physical processors: $nproc"
echo "Number of virtual processors: $vproc"
echo "Total number of cores: $ncore"
echo "Number of cores per physical processor: $ncoresperproc"
echo "Number of hardware threads (strands or vCPUs) per core: $nstrandspercore"
echo "Processor speed: $speedinmhz MHz ($speedinghz GHz)"

# now derive the vcpu-to-core mapping based on above information #

echo -e "\n** Socket-Core-vCPU mapping **"
let linenum=2

for ((i = 1; i <= ${nproc}; ++i ))
do
        chipid=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $2 }'`
        echo -e "\nPhysical Processor $i (chip id: $chipid):"

        for ((j = 1; j <= ${ncoresperproc}; ++j ))
        do
                let linenum=($linenum + 1)
                coreid=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $2 }'`
                echo -e "\tCore $j (core id: $coreid):"

                # step back to this core's first "module:" header line to
                # read the starting vCPU id (kstat instance number)
                let linenum=($linenum - 2)
                vcpustart=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $4 }'`

                # jump ahead (3 log lines per strand) to the header line of
                # the core's last strand to read the ending vCPU id
                let linenum=(3 * $nstrandspercore + $linenum - 3)
                vcpuend=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $4 }'`

                echo -e "\t\tvCPU ids: $vcpustart - $vcpuend"
                let linenum=($linenum + 4)
        done
done

rm /var/tmp/cpu_info.log
--------------------------------------- CUT HERE -------------------------------------------
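The counting pipelines at the heart of the script can be tried on any system by feeding them a captured kstat excerpt instead of live kstat output. The excerpt below is hypothetical (two sockets, two cores per socket, one strand per core); only the field positions matter:

```shell
# Exercise the script's counting pipelines against a canned kstat excerpt.
# The excerpt is made up: chip_id groups cores into sockets, core_id
# identifies cores, and the 4th field of the "module:" line is the vCPU id.
cat > /tmp/cpu_info.sample << 'EOF'
module: cpu_info                        instance: 0
        chip_id                         0
        core_id                         0
module: cpu_info                        instance: 1
        chip_id                         0
        core_id                         2
module: cpu_info                        instance: 2
        chip_id                         1
        core_id                         4
module: cpu_info                        instance: 3
        chip_id                         1
        core_id                         6
EOF

# same pipelines as in showcpucount: count unique chip ids, core ids, vCPU ids
nproc=`grep chip_id /tmp/cpu_info.sample | awk '{ print $2 }' | sort -u | wc -l | tr -d ' '`
ncore=`grep core_id /tmp/cpu_info.sample | awk '{ print $2 }' | sort -u | wc -l | tr -d ' '`
vproc=`grep 'module: cpu_info' /tmp/cpu_info.sample | awk '{ print $4 }' | sort -u | wc -l | tr -d ' '`

echo "sockets=$nproc cores=$ncore vcpus=$vproc"   # sockets=2 cores=4 vcpus=4
rm /tmp/cpu_info.sample
```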

# prtdiag | head -1
System Configuration:  Sun Microsystems  sun4u SPARC Enterprise M4000 Server

# ./showcpucount
Total number of physical processors: 4
Number of virtual processors: 32
Total number of cores: 16
Number of cores per physical processor: 4
Number of hardware threads (strands or vCPUs) per core: 2
Processor speed: 2660 MHz (2.66 GHz)

** Socket-Core-vCPU mapping **

Physical Processor 1 (chip id: 1024):
        Core 1 (core id: 0):
                vCPU ids: 0 - 1
        Core 2 (core id: 2):
                vCPU ids: 2 - 3
        Core 3 (core id: 4):
                vCPU ids: 4 - 5
        Core 4 (core id: 6):
                vCPU ids: 6 - 7

Physical Processor 2 (chip id: 1032):
        Core 1 (core id: 8):
                vCPU ids: 8 - 9
        Core 2 (core id: 10):
                vCPU ids: 10 - 11
        Core 3 (core id: 12):
                vCPU ids: 12 - 13
        Core 4 (core id: 14):
                vCPU ids: 14 - 15

Physical Processor 3 (chip id: 1040):
        Core 1 (core id: 16):
                vCPU ids: 16 - 17
        Core 2 (core id: 18):
                vCPU ids: 18 - 19
        Core 3 (core id: 20):
                vCPU ids: 20 - 21
        Core 4 (core id: 22):
                vCPU ids: 22 - 23

Physical Processor 4 (chip id: 1048):
        Core 1 (core id: 24):
                vCPU ids: 24 - 25
        Core 2 (core id: 26):
                vCPU ids: 26 - 27
        Core 3 (core id: 28):
                vCPU ids: 28 - 29
        Core 4 (core id: 30):
                vCPU ids: 30 - 31

Monday Feb 14, 2011

Oracle Solaris Studio C/C++: Tuning iropt for inline control

It is desirable to inline as many hot routines as possible to reduce the runtime overhead of CPU-intensive applications. In general, compilers follow their own rules for when to inline a routine and when not to. This blog post introduces some not-widely-known (or used) internal compiler flags for tweaking those pre-defined rules.

Consider the following trivial C code:


% cat inline.c

#include <stdio.h>
#include <stdlib.h>

inline void freememory(int *ptr)
{
        free(ptr);
}

extern inline void swapdata(int *ptr1, int *ptr2)
{
        int *temp;

        temp = (int *) malloc (sizeof (int));
        printf("\nswapdata(): before swap ->");

        *temp = *ptr1;
        *ptr1 = *ptr2;
        *ptr2 = *temp;

        printf("\nswapdata(): after swap ->");

        free (temp);
}

inline void printdata(int *ptr)
{
        printf("\nAddress = %p\tStored Data = %d", (void *) ptr, *ptr);
}

inline void storedata(int *ptr, int data)
{
        *ptr = data;
}

inline int *getintptr()
{
        int *ptr;
        ptr = (int *) malloc (sizeof(int));
        return (ptr);
}

inline void AllocLoadAndSwap(int val1, int val2)
{
        int *intptr1, *intptr2;

        intptr1 = getintptr();
        intptr2 = getintptr();
        storedata(intptr1, val1);
        storedata(intptr2, val2);
        printf("\nBefore swapping .. ->");
        printdata(intptr1);
        printdata(intptr2);
        swapdata(intptr1, intptr2);
        printf("\nAfter swapping .. ->");
        printdata(intptr1);
        printdata(intptr2);
        freememory(intptr1);
        freememory(intptr2);
}

inline void InitAllocLoadAndSwap()
{
        printf("\nSnapshot 1\n___________");
        AllocLoadAndSwap(100, 200);
        printf("\n\nSnapshot 2\n___________");
        AllocLoadAndSwap(435, 135);
}

int main() {
        InitAllocLoadAndSwap();
        return (0);
}

By default, auto inlining is turned off in Oracle Studio compilers; to turn it on, the code must be compiled at -xO4 or higher optimization. This example instead attempts to hint the compiler to inline all the routines with the help of the inline keyword. Note that the inline keyword is only a suggestion/request to the compiler; there is no guarantee that the compiler honors it. The compiler has a pre-defined set of rules, and it tries to do its best as long as those rules are not violated. If the compiler chooses to inline a routine, the function body is expanded at all the call sites (much like a macro expansion).

When this code is compiled with the Oracle Studio C compiler, it doesn't print any diagnostic information on stdout or stderr, so using nm or elfdump is one way to find which routines are inlined and which are not.


% cc -xO3 -c inline.c
% nm inline.o

inline.o:

[Index]   Value      Size      Type  Bind  Other Shndx   Name

[4]     |         0|       0|NOTY |LOCL |0    |3      |Bbss.bss
[6]     |         0|       0|NOTY |LOCL |0    |4      |Ddata.data
[8]     |         0|       0|NOTY |LOCL |0    |5      |Drodata.rodata
[16]    |         0|       0|NOTY |GLOB |0    |ABS    |__fsr_init_value
[14]    |         0|       0|FUNC |GLOB |0    |UNDEF  |InitAllocLoadAndSwap
[1]     |         0|       0|FILE |LOCL |0    |ABS    |inline.c
[15]    |         0|      20|FUNC |GLOB |0    |2      |main

From this output, we can see that InitAllocLoadAndSwap() was not inlined (it appears only as an undefined function symbol), yet there is no information as to why this function was not inlined.

Compiler commentary with er_src

To get some useful diagnostic information, the Oracle Studio compiler collection offers a utility called er_src. When the source code is compiled with a debug flag (-g or -g0), er_src can print the compiler commentary. However, since the compiler does auto inlining only at -xO4 or higher optimization levels, compiler commentary for inlining is unfortunately not available at the -xO3 level used above.
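As a sketch of that workflow (assuming a system with Solaris Studio installed and the inline.c example above in the current directory; the guard makes it a no-op elsewhere):

```shell
# Hypothetical er_src workflow for viewing inlining commentary, guarded so it
# is a no-op on systems without Solaris Studio. Commentary is recorded only
# when compiling with -g/-g0, and inlining commentary only appears at -xO4
# or higher, where auto inlining is enabled.
if command -v er_src >/dev/null 2>&1; then
    cc -g -xO4 -c inline.c   # -g keeps the commentary in the object file
    er_src inline.o          # prints annotated source with compiler commentary
else
    echo "er_src (Solaris Studio) not found; skipping"
fi
```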

iropt's inlining report

"iropt" is the global optimizer in the Oracle Solaris Studio compiler collection, and inlining is handled by iropt. It performs inlining for callees defined in the same file unless compiler options for cross-file optimization, such as -xipo or -xcrossfile, are specified on the compile line. Some of iropt's options can be used to control the inlining heuristics, and these options have no dependency on the optimization level.

Finding the list of iropt phases and the supported options

Oracle Studio C/C++ compilers on SPARC support a variety of options for function inline control. Running iropt with -help displays the list of supported flags/options.


% CC -V
CC: Sun C++ 5.9 SunOS_sparc Patch 124863-01 2007/07/25

% cc -V
cc: Sun C 5.9 SunOS_sparc Patch 124867-01 2007/07/12

% iropt -help

  ******  General Usage Information about IROPT  ******

To get general help information about IROPT, use -help
To list all the optimization phases in IROPT, use -phases
To get help on a particular phase, use -help=<phase>
To turn on phases, use -A<phase>+<phase>+...+<phase>
To turn off phases, use -R<phase>+<phase>+...+<phase>
To use phase-specific flags, use -A<phase>:<flag>

% iropt -phases


  ****** List of Optimization Phases in IROPT ******

    Phase Name          Description
-------------------------------------------------------------
bitfield	     Bitfield transformations
iv		     Strength Reduction
loop		     Loop Invariant Code Motion
cse		     Common Subexpression Elimination
copy		     Copy Propagation
const		     Const Propagation and Folding
reg		     Virtual Register Allocation
unroll		     Data Dependence Loop Unrolling
merge		     Merge Basic Blocks
reassoc		     Reconstruction of associative and/or distributive expressions
composite_breaker	     
tail		     Tail Recursion Optimization
rename		     Scalar Rename
reduction	     
mvl		     Two-version loops for parallelization
loop_dist	     Loop Distribution
ivsub		     Induction Variables Substitution: New Algorithm
ddint		     Loop Interchange
fusion		     Loop Fusion
eliminate	     Scalar Replacement on def-def and def-use
private		     Private Array Analysis
scalarrep	     Scalar Replacement for use-use
tile		     Cache Blocking
ujam		     Register Blocking
ddrefs		     Loop Invariant Array References Moving
invcc		     Invariant Conditional Code Motion
sprof		     Synthetic Profiling
restrict_g	     Assume global pointers as restrict
dead		     Dead code elimination
pde		     Partial dead code elimination
reassoc2	     loop invariant reassociative tranfsformations
distr		     distributive reassociative tranfsformations
height2		     tree height reassociative reduction
ansi_alias	     Apply ANSI Aliase Rules to Pointer References
perfect		     
yas		     Scalar Replacement for reduction arrays
pf		     Prefetch Analysis
cond_elim	     Conditional Code Elimination
vector		     Vectorizing Some Intrinsics Functions Calls in Loops
whole		     Whole Program Mode
bopt		     Branches Reordering based on Profile Data
invccexp	     Invariant Conditional Code Expansion
bcopy		     Memcpy and Memset Transformations
ccse		     Cross Iteration CSE
data_access	     Array Access Regions Analysis
ipa		     Interprocedual Analysis
contract	     Array Contraction Analysis
symbol		     Symbolic Analysis
ppg2		     optimistic strategy of constant propagation
parallel	     Parallelization
pcg		     Parallel Code Generator
lazy		     Lazy Code Motion
region		     Region-based Optimization
loop_peeling	     Loop Peeling
loop_reform	     Loop Reformulation
loop_shifting	     Loop Shifting
loop_collapsing	     Loop Collapsing
memopt		     Merge memory allocations
inline		     IPA-based inlining phase
clone		     Routine cloning phase
norm_ir		     clean-up and normalize ir
ipa_ppg		     interprocedural constant propagation
sr		     Strength reduction (new)
ivsub3		     Induction Variable Substitution
icall_opt	     indirect call optimization
cha		     Class Hierarchy Analysis
ippt		     Interprocedual pointer tracking
reverse_invcc	     reverse invariant condition code hoisting
crit		     Critical path optimisations
loop_norm	     loop normalization
loop_unimodular	     loop unimodular transformation
scalar_repl	     Scalar Replacement
loop_bound	     Redundant Loop Bound Checking Elimination
loop_condition	     Invariant Loop Bound Checking Hoisting
memopt_pattern	     Memory Access Optimization
loop_improvement	     Loop structure improvement by code specialization
pbranch_opt	     C++ Java Pbranch Optimizations
norm_ldst	     short ld/st normalisation
micro_vector	     Micro vectorization for x86
ipa_symbol_ppg	     interprocedural symbolic analysis
optinfo		     Compile-time information about loop and inlining transformations
vp		     Value profiling and code specialization
pass_ti		     Pass IR type trees to the backend
fully_unroll	     Fully Loop Unrolling
builtin_opt	     Builtin Optimization


% iropt -help=inline

NAME
    inline - Qoption for IPA-based inlining phase.

SYNOPSIS
    -Ainline[:<sub-option>][:<sub-option>]:...[:<sub-option>] - turn on inline.
    -Rinline                             - turn off inline

DESCRIPTION
    inline is on by default now. -Ainline turns it on.
    -Rinline turns it off. 
    
    NOTE: the following is a brief description of the old inliner qoptions
          1. Old inliner qoptions that do not have equivalent 
             options in the new inliner--avoid to use them later: 
             -Ml -Mi -Mm -Ma -Mc -Me -Mg -Mw -Mx -Mx -MC -MS 

          2. Old inliner qoptions that have equivalent option 
             in the new inliner--use the new options later: 
             Old options     new options 
                -Msn          recursion=n 
                -Mrn          irs=n      
                -Mtn          cs=n       
                -Mpn          cp=n       
                -MA           chk_alias  
                -MR           chk_reshape 
                -MI           chk_reshape=no 
                -MF           mi         
 
    The acceptable sub-options are:

      report[=n] - dump inlining report.
                  n=chain: 
                        show to-be-inlined call chains.
                  n=user_request: 
                        show the inlining status of user-requests.
                  n=0:  show inlined calls only.
                  n=1:  (default):  show both inlined and 
                        non-inlined calls and reasons for 
                        inlining/non-inlining.
                  n=2:  n=1 plus call id and node id
                  n=3:  show inlining summary only
                  n=4:  n=2 and iropt aborts after the 
                        inlining report is dumped out.
      cgraph     - dump cgraph.
      call_in_pragma[=no|yes]:
                 - call_in_pragma or call_in_pragma=yes: 
                   Inline a call that in the Parallel region 
		      into the original routine 
                 - call_in_pragma=no: (default) 
                   Don't inline a call that in the Parallel region
		      into the original routine 
      inline_into_mfunction[=no|yes]:(only for Fortran) 
		    - inline_into_mfunction or inline_into_mfunction=yes:(default) 
		      Inline a call into the mfunction if it is in the
		      Parallel Region
                 - inline_into_mfunction=no: 
                   Don't inline a call into the mfunction if it 
                   in the Parallel Region
NOTE: for other languages, if you specify inline_into_mfunction=yes 
	 The compiler will silently ignore this qoption. As a result, 
	 Calls in parallel region will still be inlined into pragma constructs
      rs=n       - max number of triples in inlinable routines.
                   iropt defines a routine as inlinable or not
                   based on this number. So no routines over 
                   this limit will be considered for inlining.
      irs=n      - max number of triples in a inlining routine,
                   including size increase by inlining its calls
      cs=n       - max number of triples in a callee. 
                   In general, iropt only inline calls whose 
                   callee is inlinable (defined by rs) AND 
                   whose callee size is not greater than n.
                   But some calls to inlinable routines are 
                   actually inlined because of other factors
                   such as constant actuals, etc. 
      recursion=n  
                 - max level of resursive call that is 
                   considered for inlining.
      cp=n       - minimum profile feedback counter of a call.
                   No call with counter less than this limit 
                   would be inlined.
      inc=n      - percentage of the total number of triples 
                   increased after inlining. No inlining over
                   this percentage. For instance, 'inc=30' 
                   means inlining is allowed to increase the 
                   total program size by 30%.
      create_iconf=<file>:
      use_iconf=<file>:
                   This creates/uses an inlining configuration.
                   The file lists calls and routines that are
                   inlined and routines that inline their calls.
                   Its format is:
                      air      /* actual inlining routines */
                      r11 r12 r13 ...
                      r21 r22 r23 ...
                      .....
                      ari      /* actual routines inlined */
                      r11 r12 r13 ...
                      r21 r22 r23 ...
                      .....
                      aci      /* actual calls inlined */
                      (r11,c11) (r12,c12) (r13,c13) ...
                      (r21,c21) (r22,c22) (r23,c23) ...
                      .....
                   The numbers are callids (cxx) and nodeids(rxx) 
                   printed out when report=2. It is used for
                   debugging. The usual usage is to use
                    create_iconf=<file> to create a config file.
                   then, comment (by preceding numbers line
                   with #) to disallow inlining for those 
                   calls or routines. For instance, 
                       aci
                       (2,3) (2,5) (2,6) (3,9)
                       (3,10) (6,4) (6,7) (7,6)
                       #(7,10) (8,21) (8,22)
                   with the above config file, calls whose
                   nodeids and callids are (7,10),(8,21) and 
                   (8,22) will not be inlined.

                   NOTE:for the aci part of the configure file,
                        in each pair (rij,cij), the parentheses
                        are not necessary, but the comma is necessary 
                        and there should not be any space between
                        rji and comma, comma and cij.
      do_inline=<routine>:
                 - guide inliner to do inlining for a given
                   routine only.
      mi:
                 - Do maximum inlining for given routines if do_inline
                   is used; otherwise, do maximum inlining for main routine.
                    (The inliner will not check inlining parameters.)
      inline_level[=1|2|3]: 
                 - specify the level of inline: 
                     inline_level=1    basic inlining 
                     inline_level or inline_level=2    medium inlining (default) 
                     inline_level=3 or inline_level=4,5...   aggressive inlining 
      remove_ip[=no|yes]:
                 - remove_ip or remove_ip=yes:
                      removing inliningPlan after inlining.
                 - remove_ip=no [default]:
                      keep inliningPlan after inlining.
      chk_alias[=no|yes]:
                 - chk_alias or chk_alias=yes [default]:
                      Don't inline a call if inlining it causes
                      aliases among callee's formal arrays.
                 - chk_alias=no:
                      Ignore such checking.
      chk_reshape[=no|yes]:
                 - chk_reshape or chk_reshape=yes [default]:
                      Don't inline a call if its array argument
                      is reshaped between caller and callee.
                 - chk_reshape=no:
                      Ignore such checking.
      chk_mismatch[=no|yes]:
                 - chk_mismatch or chk_mismatch=yes [default]:
                      Don't inline a call if any real argument
                      mismatches with its formal in type.
                 - chk_mismatch=no:
                      Ignore such checking.
      do_chain[=no|yes]:
                 - do_chain or do_chain=yes [default]:
                      Enable inlining for call chains.
                 - do_chain=no:
                      Disable inlining for call chains.
      callonce[=no|yes]:
                 - callonce=no [default]:
                      Disable inlining a routine that is
                      called only once.
                 - callonce or callonce=yes:
                      Enable inlining a routine that is
                      called only once.
      icall_recurse[=no|yes]:
                 - icall_recurse=no [default]:
                      Disable recursive inlining of indirect
                      and virtual call sites
                 - icall_recurse=yes:
                      Enable recursive inlining of indirect
                      and virtual call sites
      formal_dbgsym[=no|yes]: (default = no)
                 - Specify to preserve the debug information for
                   formal parameter of inlined funcion

Some of these options can be used to get diagnostic information at compile time. In particular, the report sub-option to -Ainline is useful for obtaining the inlining report. To pass these flags to iropt, specify -W2,<option>:<sub-option> on the compile line.

Here is an example.


% cc -xO3 -c -W2,-Ainline:report=2 inline.c

INLINING SUMMARY

   inc=400: percentage of program size increase.
   irs=4096: max number of triples allowed per routine after inlining.
   rs=450: max routine size for an inlinable routine.
   cs=400: call size for inlinable call.
   recursion=1: max level for inlining recursive calls.
   Auto inlining: OFF

   Total inlinable calls: 14
   Total inlined calls: 36
   Total inlined routines: 7
   Total inlinable routines: 7
   Total inlining routines: 3
   Program size: 199
   Program size increase: 744
   Total number of call graph nodes: 11

   Notes for selecting inlining parameters

    1. "Not inlined, compiler decision":
       If a call is not inlined by this reason, try to
       increase inc in order to inline it by
          -Qoption iropt -Ainline:inc=<n>  for FORTRAN, C++
          -W2,-Ainline:inc=<n>  for C

    2. "Not inlined, routine too big after inlining":
       If a call is not inlined by this reason, try to
       increase irs in order to inline it by
          -Qoption iropt -Ainline:irs=<n>  for FORTRAN, C++
          -W2,-Ainline:irs=<n>  for C

    3. "Not inlined, callee's size too big":
       If a call is not inlined by this reason, try to
       increase cs in order to inline it by
          -Qoption iropt -Ainline:cs=<n>  for FORTRAN, C++
          -W2,-Ainline:cs=<n>  for C

    4. "Not inlined, recursive call":
       If a call is not inlined by this reason, try to
       increase recursion level in order to inline it by
          -Qoption iropt -Ainline:recursion=<n>  for FORTRAN, C++
          -W2,-Ainline:recursion=<n>  for C

    5. "Routine not inlined, too many operations":
       If a routine is not inlinable by this reason, try to
       increase rs in order to make it inlinable by
          -Qoption iropt -Ainline:rs=<n>  for FORTRAN, C++
          -W2,-Ainline:rs=<n>  for C


ROUTINES NOT INLINABLE:

 main [id=7] (inline.c)
   Routine not inlined, user requested

CALL INLINING REPORT:

 Routine: freememory [id=0] (inline.c)
  Nothing inlined.

 Routine: swapdata [id=1] (inline.c)
  Nothing inlined.

 Routine: printdata [id=2] (inline.c)
  Nothing inlined.

 Routine: storedata [id=3] (inline.c)
  Nothing inlined.

 Routine: getintptr [id=4] (inline.c)
  Nothing inlined.

 Routine: AllocLoadAndSwap [id=5] (inline.c)
   getintptr [call_id=8], line 46: Auto inlined
   getintptr [call_id=9], line 47: Auto inlined
   storedata [call_id=10], line 48: Auto inlined
   storedata [call_id=11], line 49: Auto inlined
   printdata [call_id=13], line 51: Auto inlined
   printdata [call_id=14], line 52: Auto inlined
   swapdata [call_id=15], line 53: Auto inlined
   printdata [call_id=17], line 55: Auto inlined
   printdata [call_id=18], line 56: Auto inlined
   freememory [call_id=19], line 57: Auto inlined
   freememory [call_id=20], line 58: Auto inlined

 Routine: InitAllocLoadAndSwap [id=6] (inline.c)
   AllocLoadAndSwap [call_id=22], line 64: Not inlined, compiler decision
     (inc limit reached. See INLININING SUMMARY)
   AllocLoadAndSwap [call_id=24], line 66: Auto inlined
     swapdata [call_id=15], line 53: Auto inlined
     getintptr [call_id=8], line 46: Auto inlined
     getintptr [call_id=9], line 47: Auto inlined
     printdata [call_id=13], line 51: Auto inlined
     printdata [call_id=14], line 52: Auto inlined
     printdata [call_id=17], line 55: Auto inlined
     printdata [call_id=18], line 56: Auto inlined
     freememory [call_id=19], line 57: Auto inlined
     freememory [call_id=20], line 58: Auto inlined
     storedata [call_id=10], line 48: Auto inlined
     storedata [call_id=11], line 49: Auto inlined

 Routine: main [id=7] (inline.c)
   InitAllocLoadAndSwap [call_id=25], line 70: Auto inlined
     AllocLoadAndSwap [call_id=22], line 64: Not inlined, compiler decision
       (inc limit reached. See INLININING SUMMARY)
     AllocLoadAndSwap [call_id=24], line 66: Auto inlined
       swapdata [call_id=15], line 53: Auto inlined
       getintptr [call_id=8], line 46: Auto inlined
       getintptr [call_id=9], line 47: Auto inlined
       printdata [call_id=13], line 51: Auto inlined
       printdata [call_id=14], line 52: Auto inlined
       printdata [call_id=17], line 55: Auto inlined
       printdata [call_id=18], line 56: Auto inlined
       freememory [call_id=19], line 57: Auto inlined
       freememory [call_id=20], line 58: Auto inlined
       storedata [call_id=10], line 48: Auto inlined
       storedata [call_id=11], line 49: Auto inlined

The above report shows the threshold values used in making the decisions, lists all the routines, and indicates for each call whether it was inlined; if not, it gives the reason along with suggestions on how to make the inlining succeed. This is cool stuff.

Going back to the example: based on the report, the compiler tries to inline all the routines as long as the program size doesn't grow beyond 400% of the original size (i.e., without inlining). Unfortunately, inlining AllocLoadAndSwap() would exceed that limit, so the compiler decided not to inline it. Fair enough. If we don't care about the size of the binary and we really want this routine inlined, one solution is to increase the value of inc so that AllocLoadAndSwap()'s inclusion fits within the new limit.

eg.,


% cc -xO3 -c -W2,-Ainline:report=2,-Ainline:inc=650 inline.c
INLINING SUMMARY

   inc=650: percentage of program size increase.
   irs=4096: max number of triples allowed per routine after inlining.
   rs=450: max routine size for an inlinable routine.
   cs=400: call size for inlinable call.
   recursion=1: max level for inlining recursive calls.
   Auto inlining: OFF

   Total inlinable calls: 14
   Total inlined calls: 60
   Total inlined routines: 7
   Total inlinable routines: 7
   Total inlining routines: 3
   Program size: 199
   Program size increase: 1260
   Total number of call graph nodes: 11

   Notes for selecting inlining parameters

    ... skip ... (see prev reports for the text that goes here)

ROUTINES NOT INLINABLE:

 main [id=7] (inline.c)
   Routine not inlined, user requested


CALL INLINING REPORT:

 Routine: freememory [id=0] (inline.c)
  Nothing inlined.

 Routine: swapdata [id=1] (inline.c)
  Nothing inlined.

 Routine: printdata [id=2] (inline.c)
  Nothing inlined.

 Routine: storedata [id=3] (inline.c)
  Nothing inlined.

 Routine: getintptr [id=4] (inline.c)
  Nothing inlined.

 Routine: AllocLoadAndSwap [id=5] (inline.c)
   getintptr [call_id=8], line 46: Auto inlined
   getintptr [call_id=9], line 47: Auto inlined
   storedata [call_id=10], line 48: Auto inlined
   storedata [call_id=11], line 49: Auto inlined
   printdata [call_id=13], line 51: Auto inlined
   printdata [call_id=14], line 52: Auto inlined
   swapdata [call_id=15], line 53: Auto inlined
   printdata [call_id=17], line 55: Auto inlined
   printdata [call_id=18], line 56: Auto inlined
   freememory [call_id=19], line 57: Auto inlined
   freememory [call_id=20], line 58: Auto inlined

 Routine: InitAllocLoadAndSwap [id=6] (inline.c)
   AllocLoadAndSwap [call_id=22], line 64: Auto inlined
     swapdata [call_id=15], line 53: Auto inlined
     getintptr [call_id=8], line 46: Auto inlined
     getintptr [call_id=9], line 47: Auto inlined
     printdata [call_id=13], line 51: Auto inlined
     printdata [call_id=14], line 52: Auto inlined
     printdata [call_id=17], line 55: Auto inlined
     printdata [call_id=18], line 56: Auto inlined
     freememory [call_id=19], line 57: Auto inlined
     freememory [call_id=20], line 58: Auto inlined
     storedata [call_id=10], line 48: Auto inlined
     storedata [call_id=11], line 49: Auto inlined
   AllocLoadAndSwap [call_id=24], line 66: Auto inlined
     swapdata [call_id=15], line 53: Auto inlined
     getintptr [call_id=8], line 46: Auto inlined
     getintptr [call_id=9], line 47: Auto inlined
     printdata [call_id=13], line 51: Auto inlined
     printdata [call_id=14], line 52: Auto inlined
     printdata [call_id=17], line 55: Auto inlined
     printdata [call_id=18], line 56: Auto inlined
     freememory [call_id=19], line 57: Auto inlined
     freememory [call_id=20], line 58: Auto inlined
     storedata [call_id=10], line 48: Auto inlined
     storedata [call_id=11], line 49: Auto inlined

 Routine: main [id=7] (inline.c)
   InitAllocLoadAndSwap [call_id=25], line 70: Auto inlined
     AllocLoadAndSwap [call_id=22], line 64: Auto inlined
       swapdata [call_id=15], line 53: Auto inlined
       getintptr [call_id=8], line 46: Auto inlined
       getintptr [call_id=9], line 47: Auto inlined
       printdata [call_id=13], line 51: Auto inlined
       printdata [call_id=14], line 52: Auto inlined
       printdata [call_id=17], line 55: Auto inlined
       printdata [call_id=18], line 56: Auto inlined
       freememory [call_id=19], line 57: Auto inlined
       freememory [call_id=20], line 58: Auto inlined
       storedata [call_id=10], line 48: Auto inlined
       storedata [call_id=11], line 49: Auto inlined
     AllocLoadAndSwap [call_id=24], line 66: Auto inlined
       swapdata [call_id=15], line 53: Auto inlined
       getintptr [call_id=8], line 46: Auto inlined
       getintptr [call_id=9], line 47: Auto inlined
       printdata [call_id=13], line 51: Auto inlined
       printdata [call_id=14], line 52: Auto inlined
       printdata [call_id=17], line 55: Auto inlined
       printdata [call_id=18], line 56: Auto inlined
       freememory [call_id=19], line 57: Auto inlined
       freememory [call_id=20], line 58: Auto inlined
       storedata [call_id=10], line 48: Auto inlined
       storedata [call_id=11], line 49: Auto inlined

From the above output we can conclude that AllocLoadAndSwap() was inlined by the compiler once we allowed the program size to increase by up to 650%.
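A quick sanity check of the inc arithmetic, assuming the report's "Program size" and "Program size increase" are counted in the same units (triples): the tuned run's growth lands under the 650% budget, while at the default inc=400 the extra copy of AllocLoadAndSwap would have pushed growth past 400%, which is why that call was rejected.

```shell
# Back-of-the-envelope check of the inc budget using numbers from the two
# inlining summaries above (program size 199 triples).
size=199
growth_default=744    # "Program size increase" reported with the default inc=400
growth_tuned=1260     # "Program size increase" reported with inc=650
echo "default run: $(( growth_default * 100 / size ))% growth (budget 400%)"
echo "tuned run: $(( growth_tuned * 100 / size ))% growth (budget 650%)"
```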

Notes:

  • Multiple iropt options separated by a comma (,) can be specified after -W2
    eg., -W2,-Ainline:report=2,-Ainline:inc=650

  • For C++ code, use -Qoption to specify iropt options.
    eg., -Qoption iropt -Ainline:report=2
    -Qoption iropt -Ainline:report=2,-Ainline:inc=650

  • Inlining functions whose call overhead is large relative to their code size improves performance. The improvement comes from eliminating the function call, the stack frame manipulation and the function return

  • Even though inlining may increase the runtime performance of an application, do not try to inline too many functions. Inline only those functions (identified from profiling data) that would actually benefit from inlining

  • In general, the compiler's threshold values are good enough for inlining decisions. Use iropt's options only if some very hot routines are not being inlined. Turn on auto inlining with the -xO4 option

  • Inlined functions increase build time and program size. Some very large routines, when inlined, may no longer fit into the processor's instruction cache, leading to poor performance mainly due to an increased cache miss rate

ALSO SEE
Oracle Solaris Studio: Advanced Compiler Options for Performance

(Original blog post is at:
http://technopark02.blogspot.com/2005/11/sun-studio-cc-tuning-iropt-for-inline.html)
