Wednesday Jun 25, 2014

Helping Your Compiler Handle the Size of Your Constants

by Darryl Gove

When I use a constant in the following code, I get a warning:
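A minimal sketch of the kind of code in question (the essential part is the expression 1<<31):

    #include <stdio.h>

    int main(void)
    {
        /* 1 and 31 are both plain ints, so the shift is evaluated
           in signed int arithmetic and overflows; this is the line
           the compiler warns about */
        long long value = 1 << 31;
        printf("%lld\n", value);   /* typically prints -2147483648 */
        return 0;
    }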

On the other hand if I wrote:
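For example, a sketch with the same value written out as a decimal literal:

    #include <stdio.h>

    int main(void)
    {
        /* 2147483648 is 2^31; it does not fit in an int, so the
           compiler quietly gives the constant a larger type */
        long long value = 2147483648;
        printf("%lld\n", value);   /* prints 2147483648 */
        return 0;
    }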

Then the compiler will quite happily handle the constant.

The problem with the first bit of code is that the constant is treated as a signed int, and a 32-bit signed int can only hold 31 bits of precision plus a sign bit.
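In limits.h terms, for a 32-bit int:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* 31 bits of precision plus a sign bit: INT_MAX is 2^31 - 1,
           so 2^31 itself is not representable as a signed int */
        printf("%d\n", INT_MAX);   /* prints 2147483647 */
        return 0;
    }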

So how does the compiler decide how to represent a constant? The answer is interesting.

The compiler will attempt to fit a constant into the smallest type that can hold it. So it will try the value against these types, in order: first an int, then a long int, and then a long long int.
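One way to watch the selection happen is to print the size of the type the compiler picked; a minimal sketch, assuming a typical system where int is 4 bytes:

    #include <stdio.h>

    int main(void)
    {
        printf("%zu\n", sizeof(2147483647));   /* fits in an int: prints 4 */
        printf("%zu\n", sizeof(2147483648));   /* needs long or long long:
                                                  prints 8 */
        return 0;
    }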

In the first code sample, the compiler finds that 1 and 31 both fit very nicely into signed ints. The left shift operation (<<) in the expression produces a result of the same type as its left operand. So the whole expression (1<<31) has type signed int, which leads to the warning.
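Since the result takes its type from the left operand, widening the 1 widens the whole expression, which is one way to see the rule in action:

    /* the left operand is a long long, so the shift is done in
       64 bits and nothing overflows */
    long long value = 1LL << 31;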

To avoid the warning we can tell the compiler that this is an unsigned value. Either by typecasting the 1 to be unsigned in this manner:
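    /* the cast makes the left operand unsigned, so the shift is
       done in unsigned arithmetic and bit 31 is representable */
    long long value = (unsigned)1 << 31;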

or by declaring it as an unsigned value, like this:
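    /* the u suffix makes the constant itself unsigned */
    long long value = 1u << 31;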

More About Oracle Solaris Studio

Oracle Solaris Studio is a C, C++ and Fortran development tool suite, with compiler optimizations, multithread performance, and analysis tools for application development on Oracle Solaris, Oracle Linux, and Red Hat Enterprise Linux operating systems. Find out more about the Oracle Solaris Studio 12.4 Beta program here.

About the Photograph

Photograph of Zion National Park, Utah taken by Rick Ramsey in May 2014 on The Ride to the Sun Reunion.

- Darryl
