by Darryl Gove
When I use a constant in the following code, I get a warning:
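The code sample itself is not preserved in this copy; a minimal reconstruction that produces the same warning (the variable name is my own) is:

```c
#include <stdio.h>

int main(void)
{
    unsigned int mask = 1 << 31;   /* both operands are ints, so the shift
                                      overflows signed int and the compiler warns */
    printf("%x\n", mask);
    return 0;
}
```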
On the other hand if I wrote:
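That snippet is also missing here; a reconstruction consistent with the explanation below is to write the value out as a plain constant, which the compiler can give a type wide enough to hold it (again, the variable name is mine):

```c
unsigned int mask = 2147483648;   /* 2^31 written directly as a decimal constant */
```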
Then the compiler will quite happily handle the constant.
The problem with the first bit of code is that it treats the value as a signed integer, and a 32-bit signed integer can only hold 31 bits of magnitude plus a sign bit, so shifting a 1 into bit 31 overflows it.
So how does the compiler decide how to represent a constant? The answer is interesting.
The compiler will attempt to fit a constant into the smallest type that it can, so it tries these types in order: int, then long int, and then long long int.
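A quick way to see this selection in action is to check the size of each constant's type; the sizes shown assume a typical LP64 system where int is 32 bits:

```c
#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof(1));            /* fits in an int         -> typically 4 */
    printf("%zu\n", sizeof(2147483648));   /* too big for 32-bit int -> long or
                                              long long, typically 8 */
    return 0;
}
```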
In the first code sample, the compiler finds that 1 and 31 both fit comfortably into signed ints. The left-shift operation (<<) produces a result of the same type as its left operand, so the whole expression (1<<31) has type signed int, which leads to the warning.
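The left-operand rule is easy to confirm: widening just the left operand widens the whole result. A small check, again assuming a typical LP64 system:

```c
#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof(1 << 3));    /* int << int       -> int,       typically 4 */
    printf("%zu\n", sizeof(1LL << 3));  /* long long << int -> long long, typically 8 */
    return 0;
}
```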
To avoid the warning, we can tell the compiler that this is an unsigned value, either by typecasting the 1 to be unsigned in this manner:
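(The original snippets are not preserved in this copy; the two reconstructions below follow the description, with the variable name being my own.)

```c
unsigned int mask = (unsigned)1 << 31;   /* cast the 1 to unsigned before the shift */
```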
or by declaring it as an unsigned value, like this:
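Here "declaring it as an unsigned value" is my reading of using the u suffix on the constant:

```c
unsigned int mask = 1u << 31;   /* the u suffix gives the constant type unsigned int */
```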
Photograph of Zion National Park, Utah taken by Rick Ramsey in May 2014 on The Ride to the Sun Reunion.