Saturday Jan 31, 2015

Programming in C: Few Tidbits #4

1. Using Wildcards in Filename Pattern Matching

Relying on the *stat() family of APIs is not much of an option when using wildcards to match a filename pattern. Some of the alternatives involve traversing a directory and checking each file for a match using fnmatch(), or using the system() function to execute an equivalent shell command. Another option that is well suited for this task is the glob() API, found in the standard C library on Solaris [glob(3C)]. (I believe glob() depends on fnmatch() internally for finding matches.)

Here is a simple example that displays the number of matches found for the pattern "/tmp/lint_*" along with a listing of the matches.

% ls -1 /tmp/lint_*
/tmp/lint_AAA.21549.0vaOfQ
/tmp/lint_BAA.21549.1vaOfQ
/tmp/lint_CAA.21549.2vaOfQ
/tmp/lint_DAA.21549.3vaOfQ
/tmp/lint_EAA.21549.4vaOfQ
/tmp/lint_FAA.21549.5vaOfQ
/tmp/lint_GAA.21549.6vaOfQ


% cat match.c
#include <stdio.h>
#include <glob.h>

...
glob_t buf;

if (argc == 1) return 0;

/* a return value of 0 means at least one match was found */
if (glob(argv[1], 0, NULL, &buf) != 0) return 1;

printf("\nNumber of matches found for pattern '%s': %zu\n",
      argv[1], buf.gl_pathc);

for (size_t i = 0; i < buf.gl_pathc; ++i) {
    printf("\n\t%zu. %s", (i + 1), buf.gl_pathv[i]);
}

globfree(&buf);
...


% ./<executable> /tmp/lint_\*

Number of matches found for pattern '/tmp/lint_*': 7

        1. /tmp/lint_AAA.21549.0vaOfQ
        2. /tmp/lint_BAA.21549.1vaOfQ
        3. /tmp/lint_CAA.21549.2vaOfQ
        4. /tmp/lint_DAA.21549.3vaOfQ
        5. /tmp/lint_EAA.21549.4vaOfQ
        6. /tmp/lint_FAA.21549.5vaOfQ
        7. /tmp/lint_GAA.21549.6vaOfQ

Please check the man page out for details -- glob(3C).


2. Microtime[stamp]

One of the old blog posts has an example of extracting the current timestamp using the time API; it shows the timestamp in the standard month-date-year hour:min:sec format. In this post, let's add microseconds to the timestamp.

Here is the sample code.

% cat microtime.c
#include <stdio.h>
#include <sys/time.h>	/* gettimeofday() */
#include <time.h>

...
char timestamp[80], etimestamp[80];
struct timeval tmval;
struct tm *curtime;

gettimeofday(&tmval, NULL);

curtime = localtime(&tmval.tv_sec);
if (curtime == NULL) return 1;

strftime(timestamp, sizeof(timestamp), "%m-%d-%Y %X.%%06ld", curtime);
snprintf(etimestamp, sizeof(etimestamp), timestamp, (long)tmval.tv_usec);

printf("\ncurrent time: %s\n", etimestamp);
...

% ./<executable>
current time: 01-31-2015 15:49:26.041111

% ./<executable>
current time: 01-31-2015 15:49:34.575214

One major change from the old approach is the reliance on gettimeofday(), since it returns a structure [timeval] with a member [tv_usec] holding the microseconds portion of the current time.

strftime() fills the timestamp variable with date/time data as per the specifiers used in the time format (third argument). By the time strftime() completes, timestamp holds month-date-year hr:min:sec plus a literal conversion specifier left behind by the escaped "%%" sequence. The subsequent snprintf() then uses timestamp as its format string to fill in the only remaining piece of time data -- the microseconds, taken from the tv_usec member of the timeval structure -- and writes the completed timestamp to a new variable, etimestamp.

Credit: stackoverflow user unwind.


3. Concatenating Multi-Formatted Strings

I have my doubts about this heading -- so let me show an example first. The following rudimentary example attempts to construct a sentence something like "value of pi = (22/7) = 3.14". In other words, the sentence has a mixture of character strings, integers, a floating-point number and special characters.

% cat fmt.c
#include <stdio.h>
#include <string.h>

...
char tstr[48];
char pistr[] = "value of pi = ";
int num = 22, den = 7;
float pi = ((float)num/den);

char snum[8], sden[8], spi[8];

sprintf(sden, "%d", den);
sprintf(snum, "%d", num);
sprintf(spi, "%0.2f", pi);

strcpy(tstr, pistr);
strcat(tstr, "(");
strcat(tstr, snum);
strcat(tstr, "/");
strcat(tstr, sden);
strcat(tstr, ") = ");
strcat(tstr, spi);

puts(tstr);
...

% ./<executable>
value of pi = (22/7) = 3.14

Nothing is seriously wrong with the above code; it just uses a bunch of sprintf(), strcpy() and strcat() calls to construct the target string, and it overallocates the memory required for the actual string.

The same effect can be achieved using asprintf(), and the resulting code is much smaller and easier to maintain. This function also relieves the developer of the burden of allocating a buffer of the appropriate size -- in general, overallocation wastes memory, while underallocation likely leads to buffer overflows, posing unnecessary security risks. asprintf() does not relieve developers of two responsibilities though: checking the return value to see whether the call succeeded, and freeing the buffer when done with it. Ignoring the former leads to program failures in the worst case, and ignoring the latter all but guarantees memory leaks.

Here is the alternate version that achieves the desired effect by making use of asprintf().

% cat ifmt.c
#include <stdio.h>
#include <stdlib.h>

...
char *tstr;
int num = 22, den = 7;
float pi = ((float)num/den);

int ret = asprintf(&tstr, "value of pi = (%d/%d) = %0.2f", num, den, pi);

if (ret == -1) return 1;

puts(tstr);
free(tstr);
...

% ./<executable>
value of pi = (22/7) = 3.14

(Full copy of the same blog post with complete examples can be found at:
http://technopark02.blogspot.com/2015/01/programming-in-c-few-tidbits-4.html)

Tuesday Dec 23, 2014

Solaris Studio : C/C++ Dynamic Analysis

First, a reminder - Oracle Solaris Studio 12.4 is now generally available. Check the Solaris Studio 12.4 Data Sheet before downloading the software from Oracle Technology Network.

Dynamic Memory Usage Analysis

Code Analyzer tool in Oracle Solaris Studio compiler suite can analyze static data, dynamic memory access data, and code coverage data collected from binaries that were compiled with the C/C++ compilers in Solaris Studio 12.3 or later. Code Analyzer is supported on Solaris and Oracle Enterprise Linux.

Refer to the static code analysis blog entry for a quick summary of the steps involved in performing static analysis. The focus of this blog entry is the dynamic portion of the analysis. In this context, dynamic analysis is the evaluation of an application during runtime for memory related errors. The main objective is to find and debug memory management errors -- improved robustness and security are nice side effects, however limited their extent may be.

Code Analyzer relies on another primary Solaris Studio tool, discover, to find runtime errors that are often caused by memory mismanagement. discover looks for potential errors such as accessing outside the bounds of the stack or an array, unallocated memory reads and writes, NULL pointer dereferences, memory leaks and double frees. The full list of memory management issues analyzed by Code Analyzer/discover is at: Dynamic Memory Access Issues

discover performs the dynamic analysis by instrumenting the code so that it can keep track of memory operations while the binary is running. During runtime, discover monitors the application's use of memory by interposing on standard memory allocation calls such as malloc(), calloc(), memalign(), valloc() and free(). Fatal memory access errors are detected and reported at the instant the incident occurs, so it is easy to correlate the failure with the actual source. This behavior helps in detecting and fixing memory management problems in large applications with relative ease. However, the effectiveness of this kind of analysis depends heavily on the flow of control and data during execution of the target code -- hence it is important to test the application with a variety of test inputs to maximize code coverage.

High-level steps in using Code Analyzer for Dynamic Analysis

Given the enhancements and incremental improvements in analytical tools, Solaris Studio 12.4 is recommended for this exercise.

  1. Build the application with debug flags

    -g (C) or -g0 (C++) options generate debug information. They enable Code Analyzer to display source code and line number information for errors and warnings.

    • Linux users: specify the -xannotate option on the compile/link line in addition to -g and other options
  2. Instrument the binary with discover

    % discover -a -H <filename>.%p.html -o <instrumented_binary> <original_binary>

    where:

    • -a : write the error data to binary-name.analyze/dynamic directory for use by Code Analyzer
    • -H : write the analysis report to <filename>.<pid>.html when the instrumented binary was executed. %p expands to the process id of the application. If you prefer the analysis report in a plain text file, use -w <filename>.%p.txt instead
    • -o : write the instrumented binary to <instrumented_binary>

    Check Command-Line Options page for the full list of discover supported options.

  3. Run the instrumented binary

    .. to collect the dynamic memory access data.

    % ./<instrumented_binary> <args>

  4. Finally examine the analysis report for errors and warnings

Example

The following example demonstrates the above steps using the Solaris Studio 12.4 C compiler and the discover command-line tool. The same code was used to demonstrate the static analysis steps as well.

Few things to be aware of:

  • If the target application uses the LD_PRELOAD environment variable to preload one or more functions that the discover tool needs to interpose on for dynamic analysis, the resulting analysis may not be accurate.
  • If the target application performs runtime auditing using the LD_AUDIT environment variable, that auditing will conflict with the discover tool's own use of auditing and may result in undefined behavior.

Reference & Recommended Reading:

  1. Oracle Solaris Studio 12.4 : Code Analyzer User's Guide
  2. Oracle Solaris Studio 12.4 : Discover and Uncover User's Guide

Friday Nov 28, 2014

Solaris Studio 12.4 : C/C++ Static Code Analysis

First things first -- Oracle Solaris Studio 12.4 is now generally available. One of the key features of this release is the support for the latest industry standards including C++11, C11 and OpenMP 4.0. Check the Solaris Studio 12.4 Data Sheet before downloading the software from Oracle Technology Network.

Static Code Analysis

Code Analyzer tool in Oracle Solaris Studio compiler suite can analyze static data, dynamic memory access data, and code coverage data collected from binaries that were compiled with the C/C++ compilers in Solaris Studio 12.3 or later. Code Analyzer is supported on Solaris and Oracle Enterprise Linux.

The primary focus of this blog entry is static code analysis.

Static code analysis is the process of detecting common programming errors in code during compilation. The static code checking component in Code Analyzer looks for potential errors such as accessing outside the bounds of an array, out-of-scope variable use, NULL pointer dereferences, infinite loops, uninitialized variables, memory leaks and double frees. The following webpage in the Solaris Studio 12.4: Code Analyzer User's Guide has the complete list of errors with examples.

    Static Code Issues analyzed by Code Analyzer

High-level steps in using Code Analyzer for Static Code analysis

Given the enhancements and incremental improvements in analysis tools, Solaris Studio 12.4 is recommended for this exercise.

  1. Collect static data

    Compile [all sources] and link with the -xprevise=yes option.

    • when using Solaris Studio 12.3 compilers, compile with -xanalyze=code option.
    • Linux users: specify the -xannotate option on the compile/link line in addition to -xprevise=yes|-xanalyze=code.

    During compilation, the C/C++ compiler extracts static errors automatically, and writes the error information to the sub-directory in <binary-name>.analyze directory.

  2. Analyze the static data

    Two options are available to analyze and display the errors in a report format.

Example

The following example demonstrates the above steps using Solaris Studio 12.4 C compiler and codean command-line tool.

Few things to be aware of:

  • compilers may not be able to detect all of the static errors in the target code, especially if the errors are complex.
  • some errors depend on data that is available only at runtime -- perform dynamic analysis as well.
  • some errors are ambiguous, and some might not be actual errors -- expect a few false positives.

Reference & Recommended Reading:
    Oracle Solaris Studio 12.4 Code Analyzer User's Guide

Tuesday Sep 30, 2014

Programming in C: Few Tidbits #3

1) Not able to redirect the stdout output from a C program/application to a file

Possible cause:

The buffered nature of the standard output (stdout) stream. It might be waiting for a newline character, for the buffer to fill up, or for some other condition to be met, depending on the implementation.

A few potential workarounds:

  • Explicit flushing of standard output stream where needed.
        fflush(stdout);

            -or-

  • Writing to the unbuffered standard error (stderr) stream in place of the stdout stream.

            -or-

  • Turning off buffering explicitly by calling setbuf() or setvbuf().
    eg.,
    
    /* just need one of the following two calls, but not both */
    setbuf (stdout, NULL);
    setvbuf(stdout, NULL, _IONBF, 0);  // last argument value need not really be zero
    

2) Printing ("escaping", maybe?) a percent sign (%) in a printf format string

Conversion/format specifiers start with a % sign, and using a backslash to escape a % sign in a format string does not work. Check the following example out.

eg.,

Executing the following code:

        int pct = 35;
        printf("\n%d%", pct);

.. results in:

35, but not 35% as one would expect.

The conversion specification "%%" simply prints a percent sign -- so the desired result can be achieved by replacing "%d%" with "%d%%" in the printf statement. (Strictly speaking, a format string ending with a lone % is undefined behavior; many implementations just drop it.)

        int pct = 35;
        printf("\n%d%%", pct);

.. shows:

35% as expected

(web search keywords: C printf conversion specification)


3) Duplicating a structure

If the structure has no pointer members, assigning one struct to another duplicates the structure. The same effect can be achieved with memcpy(), but it is not really necessary. After the assignment, there are two independent copies of the struct -- each can be modified without affecting the other. The following sample code illustrates this point.

eg., #1
	...
	...

	typedef struct human {
		int accno;
        	int age;
	} person;

	...
	...

	person guy1, guy2;

	guy1.accno = 20202;
	guy1.age = 10;

	guy2 = guy1;

	printf("\nAddress of:\n\t-> guy1: %p. guy2: %p", (void *)&guy1, (void *)&guy2);

	printf("\n\nBefore update:\n");
	printf("\naccno of:\n\t-> guy1: %d. guy2: %d", guy1.accno, guy2.accno);
	printf("\nage of:\n\t-> guy1: %d. guy2: %d", guy1.age, guy2.age);

	guy1.age = 15;
	guy2.accno = 30303;

	printf("\n\nAfter update:\n");
	printf("\naccno of:\n\t-> guy1: %d. guy2: %d", guy1.accno, guy2.accno);
	printf("\nage of:\n\t-> guy1: %d. guy2: %d", guy1.age, guy2.age);

	...
	...

Execution outcome:

Address of:
        -> guy1: ffbffc38. guy2: ffbffc30

Before update:

accno of:
        -> guy1: 20202. guy2: 20202
age of:
        -> guy1: 10. guy2: 10

After update:

accno of:
        -> guy1: 20202. guy2: 30303
age of:
        -> guy1: 15. guy2: 10

On the other hand, if the structure has pointer member(s), duplicating it with the assignment operator leaves the pointer members of both the original and the copied structure pointing to the same block of memory -- creating a dependency in which a write through either pointer affects both structures, with potentially unintended consequences. The following sample code illustrates this.

eg., #2
	...
	...

	typedef struct human {
        	int *accno;
        	int age;
	} person;

	...
	...

	person guy1, guy2;

	guy1.accno = malloc(sizeof(int));
	*(guy1.accno) = 20202;

	guy1.age = 10;
	guy2 = guy1;
	
	...
	...

	guy1.age = 15;
	*(guy2.accno) = 30303;

	...
	...
Execution outcome:

Address of:
        -> guy1: ffbffb48. guy2: ffbffb40

Before update:

accno of:
        -> guy1: 20202. guy2: 20202
age of:
        -> guy1: 10. guy2: 10

After update:

accno of:
        -> guy1: 30303. guy2: 30303
age of:
        -> guy1: 15. guy2: 10

Some people refer to this kind of duplication as a shallow copy, though not everyone agrees with the terminology.

If the idea is to clone an existing struct variable that has one or more pointer members, and then work on the clone independently without impacting the variable it was cloned from, one has to allocate memory manually for the pointer members and copy the data from the source structure to the destination. The following sample code illustrates this.

eg., #3
	...
	...

	typedef struct human {
        	int *accno;
        	int age;
	} person;

	...
	...

	person guy1, guy2;

	guy1.accno = malloc(sizeof(int));
	*(guy1.accno) = 20202;

	guy1.age = 10;

	guy2.age = guy1.age;
	guy2.accno = malloc(sizeof(int));
	*(guy2.accno) = *(guy1.accno);

	...
	...

	guy1.age = 15;
	*(guy2.accno) = 30303;

	...
	...

Execution outcome:

Address of:
        -> guy1: ffbffaa8. guy2: ffbffaa0

Before update:

accno of:
        -> guy1: 20202. guy2: 20202
age of:
        -> guy1: 10. guy2: 10

After update:

accno of:
        -> guy1: 20202. guy2: 30303
age of:
        -> guy1: 15. guy2: 10

This style of explicit duplication is referred to as a deep copy by some, though not everyone agrees with the terminology.

Thursday Jul 31, 2014

Programming in C: Few Tidbits #2

(1) ceil() returns an incorrect value?

ceil() rounds its argument upward to the nearest integer value, returned in floating-point format. For example, calling ceil() with an argument of (2/3) should return 1.

printf("\nceil(2/3) = %f", ceil(2/3));

results in:

ceil(2/3) = 0.000000

.. which is not the expected result.

However:

printf("\nceil((float)2/3) = %f", ceil((float)2/3));

shows the expected result.

ceil((float)2/3) = 1.000000

The incorrect result in the first attempt can be attributed to integer division: since both operands of the division are integers, the operation is performed as integer division, which discards the fractional part.

The desired result can be achieved by casting one of the operands to float or double, as shown in the subsequent attempt.

One final example for the sake of completeness.

printf("\nceil(2/(float)3) = %f", ceil(2/(float)3));
..
ceil(2/(float)3) = 1.000000

(2) Main difference between abort() and exit() calls

On a very high level: abort() raises the SIGABRT signal, causing abnormal termination of the target process without calling the functions registered with atexit(), and typically results in a core dump. Some cleanup activity may still happen.

exit() causes normal process termination after executing functions registered with the atexit() handler, and after performing cleanup activity such as flushing and closing all open streams.

If it is desirable to bypass atexit() registered routine(s) during a process termination, one way is to call _exit() rather than exit().

Of course, this is all high level and the information provided here is incomplete. Please check relevant man pages for detailed information.


(3) Current timestamp

The following sample code shows the current timestamp in two different formats. Check relevant man pages for more information.

#include <time.h>
..
char timestamp[80];
time_t now;
struct tm *curtime;

now = time(NULL);
curtime = localtime(&now);

strftime(timestamp, sizeof(timestamp), "%m-%d-%Y %X", curtime);

printf("\ncurrent time: %s", timestamp);
printf("\ncurrent time in a different format: %s", asctime(curtime));
..

Executing this code shows output

current time: 07-31-2014 22:05:42
current time in a different format: Thu Jul 31 22:05:42 2014

Monday Jun 30, 2014

Programming in C: Few Tidbits

.. with little commentary aside. Target audience: new programmers. These tips are equally applicable in C and C++ programming environments.


1. Duplicating a file pointer

Steps: find the integer file descriptor associated with the file stream using the fileno() call, make a copy of the descriptor using the dup() call, and finally associate a new file stream with the duplicated descriptor by calling fdopen().

eg.,
FILE *fptr = fopen("file", "mode");

FILE *fptrcopy = fdopen( dup( fileno(fptr) ), "mode");

2. Capturing the exit code of a command that was invoked using popen()

Using pipes is one way of executing commands programmatically that are otherwise invoked from a shell. While pipes are useful in performing tasks other than executing shell commands, this tip is mainly about the exit code of a command (to figure out whether it succeeded or failed) that was executed using popen() API.

To capture the exit code, simply use the value returned by pclose(). This function returns the termination status of the command that was executed as a child process, encoded in the same format that wait() uses. The exit code occupies the high-order 8 bits of that 16-bit status, so the portable way to extract it is the WEXITSTATUS() macro from <sys/wait.h>; dividing the raw pclose() return value by 256 achieves the same result on most systems.

eg.,
...
#include <sys/wait.h>	/* for WEXITSTATUS() */
...
FILE *ptr;
int rc;

if ((ptr = popen("ls", "r")) != NULL) {
	/* WEXITSTATUS() extracts the exit code from the wait status */
	rc = WEXITSTATUS(pclose(ptr));
	printf("\nls: exit code = %d", rc);
}

if ((ptr = popen("ls -W", "r")) != NULL) {
	rc = WEXITSTATUS(pclose(ptr));
	printf("\nls -W: exit code = %d", rc);
}
...

% ./<executable>

ls: exit code = 0
ls: illegal option -- W
ls -W: exit code = 2

3. Converting an integer to a string

The standard C library has a function for converting a string to an integer (atoi()), but none for converting an integer to a string. One way to achieve the desired result is the sprintf() function, which writes formatted data to a string.

eg.,
int weight = 30;
char *wtstr = malloc(12);	/* large enough for any 32-bit int, sign and NUL included */

sprintf(wtstr, "%d", weight);
...

sprintf() can also be used to convert data of other types, such as float or double, to a string. Also see the man page for snprintf().


4. Finding the length of a statically allocated array

When the size is not specified explicitly, simply divide the total size of the array by the size of the first array element.

eg.,
static const char *greeting[] = { "Hi", "Hello", "Hola", "Bonjour", \
                                    "Namaste", "Ciao", "Ni Hao" };
int numgreetings = sizeof(greeting)/sizeof(greeting[0]);

After execution, the numgreetings variable holds a value of 7. Note that sizeof(greeting[0]) is actually the size of a pointer to char.

  • sizeof is not a function but an operator -- hence the parentheses are optional when the operand is an expression (they are required only around type names).
  • Though not particularly useful, this technique works even when the size was explicitly specified.

Saturday May 10, 2014

Solaris 11.2 Highlights [Part 2] in 4 Minutes or Less

Part 1: Solaris 11.2 Highlights in 6 Minutes or Less

Highlights contd.,

Package related ..

Minimal Set of System Packages

For the past few years, one of the hot topics has been: what is the bare minimum [set of packages] needed to run applications? There have been a number of blog posts and a few technical articles on creating minimal Solaris configurations. Finally, users/customers who wish to have their OS installed with the minimal set of system packages required to run most applications can just install the solaris-minimal-server package and not worry about anything else, such as removing unwanted packages.

# pkg install pkg:/group/system/solaris-minimal-server

Oracle Database Pre-requisite Package

Until Solaris 11.1, it was up to users to check the package dependencies and make sure those were installed before attempting to install the Oracle Database software, especially when using the graphical installer. Solaris 11.2 frees users from the burden of checking and installing individual [required] packages by providing a brand new package called oracle-rdbms-server-12cR1-preinstall. Users just need to install this one package for a smoother database software installation later.

# pkg install pkg:/group/prerequisite/oracle/oracle-rdbms-server-12cR1-preinstall

Mirroring a Package Repository

11.2 provides the ability to create local IPS package repositories and keep them in sync with the IPS package repositories hosted publicly by Oracle Corporation. The key to achieving this is the SMF service svc:/application/pkg/mirror. The following webpage lists the essential steps at a high level.

How to Automatically Copy a Repository From the Internet

Another enhancement is the cloning of a package repository using --clone option of pkgrecv command.

Observability related ..

Network traffic diagnostics:

A brand new command, ipstat(1M), reports IP traffic statistics.

# ipstat -?
Usage:	ipstat [-cmnrt] [-a address[,address...]] [-A address[,address...]]
[-d d|u] [-i interface[,interface...]] [-l nlines] [-p protocol[,protocol...]]
[-s key | -S key] [-u R|K|M|G|T|P] [-x opt[=val][,opt[=val]...]]

# ipstat -uM 5

SOURCE                     DEST                       PROTO    INT        BYTES
etc5mdbadm01.us.oracle.com etc2m-appadm01.us.oracle.c TCP      net8       76.3M
etc2m-appadm01.us.oracle.c etc5mdbadm01.us.oracle.com TCP      net8        0.6M
dns1.us.oracle.com         etc2m-appadm01.us.oracle.c UDP      net8        0.0M
169.254.182.76             169.254.182.77             UDP      net20       0.0M
...

Total: bytes in: 76.3M bytes out:  0.6M

Another new command, tcpstat(1M), reports TCP and UDP traffic statistics.

# tcpstat -?
Usage:	tcpstat [-cmnrt] [-a address[,...]] [-A address[,...]] [-d d|u] [-i pid[,...]] 
[-l nlines] [-p port[,...]] [-P port[,...]] [-s key | -S key] [-x opt[=val][,...]] 
[-z zonename[,...]] [interval [count]]

# tcpstat 5

ZONE         PID PROTO  SADDR             SPORT DADDR             DPORT   BYTES
global      1267 TCP    etc5mdbadm01.us.  42972 etc2m-appadm01.u     22   84.3M
global      1267 TCP    etc2m-appadm01.u     22 etc5mdbadm01.us.  42972   48.0K
global      1089 UDP    169.254.182.76      161 169.254.182.77    33436  137.0 
global      1089 UDP    169.254.182.77    33436 169.254.182.76      161   44.0 
...
...

Total: bytes in: 84.3M bytes out: 48.4K

# tcpstat -i 43982 5		<-- TCP stats for a given pid

ZONE         PID PROTO  SADDR             SPORT DADDR             DPORT   BYTES
global     43982 TCP    etc2m-appadm01.u  43524 etc5mdbadm02.us.     22   73.7M
global     43982 TCP    etc5mdbadm02.us.     22 etc2m-appadm01.u  43524   41.9K

Total: bytes in: 42.1K bytes out: 73.7M

Up until 11.1, it was not so straightforward to figure out which process created a network endpoint -- one had to rely on a combination of commands such as netstat, pfiles or lsof, plus the proc filesystem (/proc), to extract that information. Solaris 11.2 makes this easy by enhancing the existing netstat(1M) tool: the enhanced netstat(1M) shows the user and pid that created and control a network endpoint. -u is the magic flag.

#  netstat -aun			<-- notice the -u flag in netstat command; and User, Pid, Command columns in the output

UDP: IPv4
   Local Address        Remote Address      User    Pid      Command       State
-------------------- -------------------- -------- ------ -------------- ----------
      *.*                                 root        162 in.mpathd      Unbound
      *.*                                 netadm      765 nwamd          Unbound
      *.55388                             root        805 picld          Idle
	...
	...

TCP: IPv4
   Local Address        Remote Address      User     Pid     Command     Swind  Send-Q  Rwind  Recv-Q    State
-------------------- -------------------- -------- ------ ------------- ------- ------ ------- ------ -----------
10.129.101.1.22      10.129.158.100.38096 root       1267 sshd           128872      0  128872      0 ESTABLISHED
192.168.28.2.49540   192.168.28.1.3260    root          0       2094176      0 1177974      0 ESTABLISHED
127.0.0.1.49118            *.*            root       2943 nmz                 0      0 1048576      0 LISTEN
127.0.0.1.1008             *.*            pkg5srv   16012 httpd.worker        0      0 1048576      0 LISTEN
	...

[x86 only] Memory Access Locality Characterization and Analysis

Solaris 11.2 introduces another brand new tool, numatop(1M), that helps characterize the NUMA behavior of processes and threads on systems with Intel Westmere, Sandy Bridge and Ivy Bridge processors. If it is not installed by default, install the numatop package as shown below.

# pkg install pkg:/diagnostic/numatop

Performance related ..

This is a grey area - so, just be informed that there are some ZFS and Oracle database related performance enhancements.

Starting with 11.2, ZFS synchronous write transactions are committed in parallel, which should help improve the I/O throughput.

Database startup time has been greatly improved over the Solaris 11 releases -- and it's been further improved in 11.2. Customers with databases that use hundreds of gigabytes or terabytes of memory will notice the improvement in database startup times. Other changes -- to asynchronous I/O, inter-process communication using event ports, and so on -- help improve the performance of recent releases of Oracle Database such as 12c.

Miscellaneous ..

Java 8

Java 7 is still the default in Solaris 11.2 release, but Java 8 can be installed from the IPS package repository.

eg.,

# pkg install pkg:/developer/java/jdk-8		<-- Java Development Kit
# pkg install pkg:/runtime/java/jre-8		<-- Java Runtime

Bootable USB Media

Solaris 11.2 introduces the support for booting SPARC systems from USB media. Use Solaris Distribution Constructor (requires distribution-constructor package) to create the USB bootable media, or copy a bootable/installation image to the USB media using usbcopy(1M) and dd(1M) commands.

Oracle Hardware Management Pack

Oracle Hardware Management Pack is a set of tools, integrated into the Solaris OS distribution, that show the existing hardware configuration, help configure hardware RAID volumes, update server firmware, configure the ILOM service processor, enable monitoring the hardware using existing tools, and so on. Look for the pkg:/system/management/hmp/hmp-* packages.

Few other interesting packages:

Parallel implementation of bzip2 : compress/pbzip2
NVM Express (nvme) utility : system/storage/nvme-utilities
Utility to administer cluster of servers : terminal/cssh

Tuesday Apr 29, 2014

Solaris 11.2 Highlights [Part 1] in 6 Minutes or Less

This is not the complete list, of course. Just a few hand-picked ones.

First things first, Solaris 11.2 beta is out.

URLs: Download | What's New in Solaris 11.2 | Information Library (documentation)

Highlights:

Zones related ..

Kernel Zones

Kernel Zones bring the ability to run a non-global/local zone with a kernel version different from the global zone's; a kernel zone can be patched or updated independently, without rebooting the global zone. In other words, kernel zones are independent and isolated environments with a full kernel and user environment.

Creating a Kernel Zone:

  1. If not available, install the kernel zone brand
    # pkg install brand/brand-solaris-kz
    
  2. Create and install a kernel zone using the existing zonecfg and zoneadm commands. The only difference compared to creating a non-kernel zone (the zones we have been creating for the past 10 years) is the template to be used -- by default, the SYSdefault template is used; to create a kernel zone, use the SYSsolaris-kz template instead.

    # zonecfg -z <zone-name> create -t SYSsolaris-kz
    # zoneadm -z <zone-name> install
    # .. continue with the rest of the steps to complete zone configuration ..
    

Kernel Zones can be used in combination with logical domains (Oracle VM for SPARC), but cannot be used in combination with virtualization solutions, such as Oracle VM VirtualBox, that do not support nested virtualization.

Live Zone Re-configuration

This release (11.2) adds support for dynamic re-configuration of local zones. The following configuration changes no longer require a zone reboot.

  • Resource controls and pools
  • Network configuration
  • Adding or removing file systems
  • Adding or removing virtual and physical devices
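A quick sketch of how such a change might be applied live with the existing tools. This is only an illustration: the zone name myzone is made up, and it assumes the -r option of zonecfg for editing the running configuration and zoneadm apply for pushing persistent changes into a running zone.

```shell
# Edit the live configuration of the running zone directly:
# zonecfg -z myzone -r
#   zonecfg:myzone> add anet
#   zonecfg:myzone:anet> set linkname=net1
#   zonecfg:myzone:anet> end
#   zonecfg:myzone> commit

# Alternatively, change the persistent configuration first, then
# apply it to the running zone without a reboot:
# zonecfg -z myzone "add anet; set linkname=net1; end"
# zoneadm -z myzone apply
```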

Read-Only Global Zones

Recent releases of Solaris have support for Immutable Non-Global Zones already. Solaris 11.2 extends the immutable zone support to Global Zones. Immutable zones will have a read-only zone root.

Make a Global Zone Read-Only/Immutable by:

# zonecfg -z global set file-mac-profile=fixed-configuration

Installing Packages across multiple Non-Global Zones from the Global Zone

  • The -r option of pkg(1) can be used to install, update, or uninstall software packages in all non-global zones from the global zone.
  • Use the -Z option along with -r to exclude a zone from the package operation. Similarly, use -z along with -r to apply the intended package operation only to a specific zone.
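For instance (a sketch; zone1 and some/package are placeholder names):

```shell
# Install a package in the global zone and all non-global zones:
# pkg install -r some/package

# Same operation, but exclude zone1:
# pkg install -r -Z zone1 some/package

# Apply the operation only to zone1:
# pkg install -r -z zone1 some/package
```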

Multiple Boot Environments for Solaris 10 Zones

Multiple BE support has been extended to Solaris 10 Zones in this release. This feature is useful when performing operations such as patching within a Solaris 10 environment running on a Solaris 11 system.

CMT Aware Zones and Resource Pool Configuration

It is now possible to allocate CMT based resources -- vCPUs, cores, and sockets -- using the existing zonecfg and poolcfg commands. This is useful from a performance and/or licensing point of view, as it provides the flexibility and control needed to manage licensing boundaries or to dedicate hardware resources solely to a zone.
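As a sketch only -- this assumes the dedicated-cpu resource of zonecfg gained cores (and sockets) properties in this release, and myzone is a made-up zone name:

```shell
# Dedicate whole cores, rather than individual vCPUs, to a zone:
# zonecfg -z myzone
#   zonecfg:myzone> add dedicated-cpu
#   zonecfg:myzone:dedicated-cpu> set cores=1-4
#   zonecfg:myzone:dedicated-cpu> end
#   zonecfg:myzone> commit
```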

Cloud related ..

Centralized Cloud Management with OpenStack

Solaris 11.2 is the first release to incorporate a complete OpenStack distribution. OpenStack allows managing and sharing compute, network and storage resources in the data center through a centralized web portal. In other words, now administrators can set up an enterprise ready private cloud Infrastructure-as-a-Service (IaaS) environment with ease.

Check this quick How-To article out at Oracle Technology Network -- Getting Started with OpenStack on Oracle Solaris 11.2

Cloning and Disaster Recovery with Unified Archives

Unified Archives is a new native archive type that enables quick cloning for rapid application deployment in the cloud, as well as fast and reliable disaster recovery. Both bare-metal and virtual environments are supported. Check the archiveadm(1M) man page for details.

eg.,
Create a clone archive of a system
# archiveadm create ./clone.uar

Create bootable media
# archiveadm create-media ./archive.uar				/* USB image */
# archiveadm create-media -f iso <other options> ./bootarch.uar	/* ISO image */

Create a full system recovery archive
# archiveadm create --recovery ./recovery.uar

Extract information from a Unified Archive
# archiveadm info somearchive.uar

To be continued .. Stay tuned.

Monday Mar 31, 2014

[Solaris] ZFS Pool History, Writing to System Log, Persistent TCP/IP Tuning, ..

.. with plenty of examples and little comments aside.

[1] Check existing DNS client configuration

Solaris 11 and later:

% svccfg -s network/dns/client listprop config
config                      application        
config/value_authorization astring     solaris.smf.value.name-service.dns.client
config/options             astring     "ndots:2 timeout:3 retrans:3 retry:1"
config/search              astring     "sfbay.sun.com" "us.oracle.com" "oraclecorp.com" "oracle.com" "sun.com"
config/nameserver          net_address xxx.xx.xxx.xx xxx.xx.xxx.xx xxx.xx.xxx.xx

Solaris 10 and prior:

Check the contents of /etc/resolv.conf

% cat /etc/resolv.conf
search  sfbay.sun.com us.oracle.com oraclecorp.com oracle.com sun.com
options ndots:2 timeout:3 retrans:3 retry:1
nameserver      xxx.xx.xxx.xx
nameserver      xxx.xx.xxx.xx
nameserver      xxx.xx.xxx.xx

Note that the /etc/resolv.conf file exists on Solaris 11.x releases too, as of today.

[2] Logical domains: finding out the hostname of control domain

Use virtinfo(1M) command.

root@ppst58-cn1-app:~# virtinfo -a
Domain role: LDoms guest I/O service root
Domain name: n1d2
Domain UUID: 02ea1fbe-80f9-e0cf-ecd1-934cf9bbeffa
Control domain: ppst58-01
Chassis serial#: AK00083297

The above output shows that the n1d2 domain is a guest domain, which is also an I/O domain, a service domain, and a root I/O domain. The control domain is running on host ppst58-01.

Output from control domain:

root@ppst58-01:~# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active     -n-cv-  UART    64    130304M  0.1%  0.1%  243d 2h 
n1d1             active     -n----  5001    448   916992M  0.2%  0.2%  3d 15h 26m
n1d2             active     -n--v-  5002    512   1T       0.0%  0.0%  3d 15h 29m

root@ppst58-01:~# virtinfo -a
Domain role: LDoms control I/O service root
Domain name: primary
Domain UUID: 19337210-285a-6ea4-df8f-9dc65714e3ea
Control domain: ppst58-01
Chassis serial#: AK00083297

[3] Administering NFS configuration

Solaris 11 and later:

Use the sharectl(1M) command. Solaris 11.x releases include the sharectl administrative tool to configure and manage file-sharing protocols such as NFS, SMB, and autofs.

eg.,
Display all property values of NFS:

# sharectl get nfs
servers=1024
lockd_listen_backlog=32
lockd_servers=1024
grace_period=90
server_versmin=2
server_versmax=4
client_versmin=2
client_versmax=4
server_delegation=on
nfsmapid_domain=
max_connections=-1
listen_backlog=32
..
..

# sharectl status
autofs  online client
nfs     disabled

eg.,
Modifying the nfs v4 grace period from the default 90s to 30s:

# sharectl get -p grace_period nfs
grace_period=90
# sharectl set -p grace_period=30 nfs
# sharectl get -p grace_period nfs
grace_period=30

Solaris 10 and prior:

Edit /etc/default/nfs file, and restart NFS related service(s).
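For example, to pin the NFS server version range on Solaris 10 (a config fragment; the values shown are illustrative):

```shell
# In /etc/default/nfs, set:
# NFS_SERVER_VERSMIN=2
# NFS_SERVER_VERSMAX=4

# Then restart the NFS server service:
# svcadm restart network/nfs/server
```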

[4] Examining ZFS Storage Pool command history

Solaris 10 8/07 and later releases log successful zfs and zpool commands that modify the underlying pool state. All those commands can be examined by running the zpool history command. Because it shows the actual commands exactly as they were executed, the 'history' feature is really useful in troubleshooting an error scenario that resulted from running some zfs command.

# zpool list
NAME       SIZE  ALLOC  FREE  CAP  DEDUP   HEALTH  ALTROOT
rpool      416G   152G  264G  36%  1.00x   ONLINE  -
zs3actact  848G  17.4G  831G   2%  1.00x   ONLINE  -

# zpool history -l zs3actact
History for 'zs3actact':
2014-03-19.22:02:32 zpool create -f zs3actact c0t600144F0AC6B9D2900005328B7570001d0 [user root on etc25-appadm05:global]
2014-03-19.22:03:12 zfs create zs3actact/iscsivol1 [user root on etc25-appadm05:global]
2014-03-19.22:03:33 zfs set recordsize=128k zs3actact/iscsivol1 [user root on etc25-appadm05:global]

Note that this log is enabled by default, and cannot be disabled.

[5] Modifying TCP/IP configuration parameters

Using ndd(1M) is the old way of tuning TCP/IP parameters, and it is still supported as of today (in Solaris 11.x releases). However, the ipadm(1M) command is the recommended way to modify or retrieve TCP/IP protocol properties on Solaris 11.x and later releases.

# ipadm show-prop -p max_buf tcp
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   max_buf               rw   1048576      --           1048576      128000-1073741824

# ipadm set-prop -p max_buf=2097152 tcp

# ipadm show-prop -p max_buf tcp
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   max_buf               rw   2097152      2097152      1048576      128000-1073741824

ndd style (still valid):

# ndd -get /dev/tcp tcp_max_buf
1048576

# ndd -set /dev/tcp tcp_max_buf 2097152

# ndd -get /dev/tcp tcp_max_buf
2097152

One of the advantages of using ipadm over ndd is that the configured non-default values persist across reboots. In the case of ndd, we have to re-apply those values either manually or by creating a Run Control script (/etc/rc*.d/S*) to make sure the intended values are set automatically when the system reboots.
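Such a Run Control script might look like this minimal sketch (the script name S99nddtune is a made-up example; the tunable and value match the ndd example above):

```shell
#!/sbin/sh
# /etc/rc2.d/S99nddtune -- re-apply non-default TCP tunables at boot,
# since ndd settings do not persist across reboots.

/usr/sbin/ndd -set /dev/tcp tcp_max_buf 2097152
```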

[6] Writing to system log from a shell script

Use logger(1) command as shown in the following example.

eg.,

# logger -p local0.warning Big Brother is watching you

# dmesg | tail -1
Mar 30 18:42:14 etc27zadm01 root: [ID 702911 local0.warning] Big Brother is watching you

Check syslog.conf(4) man page for the list of available system facilities and the severity of the condition being logged (levels).

BONUS:

[*] Forceful NFS unmount on Linux

Try the lazy unmount option (-l) on systems running Linux kernel 2.4.11 or later to forcefully unmount a filesystem that keeps throwing 'Device or resource busy' and/or 'device is busy' errors.

eg.,

# umount -f /bkp
umount2: Device or resource busy
umount: /bkp: device is busy
umount2: Device or resource busy
umount: /bkp: device is busy

# umount -l /bkp
#

Wednesday Mar 26, 2014

Software Availability : Solaris Studio 12.4 Beta & ORAchk

First off, these are two unrelated pieces of software.

Solaris Studio 12.4 Beta

Nearly two and a half years after the release of Solaris Studio 12.3, Oracle is gearing up for the next major release, 12.4. In addition to compiler and library optimizations supporting the latest and greatest SPARC & Intel x64 hardware such as SPARC T5, M5, M6, Fujitsu's M10, and Intel's Ivy Bridge and Haswell lines of servers, support for the C++ 2011 language standard is one of the highlights of this forthcoming release. The complete list of features and enhancements in release 12.4 is documented in the What's New page.

Those who feel compelled to give the updated/enhanced compilers and tools a try can get started right away by downloading the beta bits from the following location. This software is available for Solaris 10 & 11 running on SPARC and x86 hardware, and for Linux 5 & 6 running on x86/x64 hardware. Anyone can download this software for free.

     Oracle Solaris Studio 12.4 Beta Download

Don't forget to check the Release Notes out for the installation instructions, known issues, limitations and workarounds, features that were removed in this release and so on.

Here's a pointer to the documentation (preview): Oracle Solaris Studio 12.4 Information Library

Finally, should you run into any issue(s) or if you have questions about anything related, feel free to use the Solaris Studio Community Forum.




ORAchk 2.2.4 (formerly known as EXAchk)

ORAchk, the Oracle Configuration Audit Tool, enhances the EXAchk tool's functionality and replaces the existing & popular RACcheck tool. In addition to the top issues reported by users/customers, ORAchk proactively scans for known problems within Oracle Database, Sun systems (especially engineered systems), and Oracle E-Business Suite Financials.

While checking, ORAchk covers a wide range of areas such as OS kernel settings, database installations (single instance and RAC), performance, backup and recovery, storage setting, and so on.

ORAchk generated reports (mostly high level) show the system health risks with the ability to drill down into specific problems and offers recommendations specific to the environment and product configuration. Those who do not like sending this data back to Oracle should be happy to know that there is no phone home feature in this release.

Note that ORAchk is available only to Oracle Premier Support customers -- meaning only those customers with appropriate support contracts can use this tool. So, if you are an Oracle customer with the ability to access the Oracle Support website, check the following pages out for additional information.

     ORAchk - Oracle Configuration Audit Tool
     ORAchk user's guide

Feel free to use the community forum to ask any related questions.
