bitNot = (sbyte)(~bitNot)
VS.
myInt = Int32.Parse(myInput);
Hello, I'm a bit confused about the two statements above. It seems like both are trying to convert something, so why is the syntax bitNot = (sbyte)(~bitNot) for the first statement?
Why can't we write bitNot = sbyte.Parse(~bitNot), like the syntax in the second statement? Thanks
The first statement takes bitNot, which is presumably some form of integer, inverts all the bits, casts the result to an sbyte, and stores it back in bitNot.
The second statement takes myInput, which is most likely a string of some sort, parses it from a human-readable form into an Int32 type, and stores that in myInt.
The major difference is the types you are operating on; you only need Parse if you are dealing with strings. In the first statement, a cast operation is being done instead; this usually means converting, for example, from a 32-bit integer to an 8-bit integer. It is a very different kind of operation.
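To make the contrast concrete, here is a minimal sketch (the values are made up for illustration):
sbyte bitNot = 5;
bitNot = (sbyte)(~bitNot);       // ~ produces an int, so the cast narrows it back to sbyte
Console.WriteLine(bitNot);       // -6
string myInput = "42";
int myInt = int.Parse(myInput);  // Parse turns human-readable text into a number
Console.WriteLine(myInt);        // 42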
Related
I want to declare -1 literal using the new binary literal feature:
int x = 0b1111_1111_1111_1111_1111_1111_1111_1111;
Console.WriteLine(x);
However, this doesn't work because C# treats this as a uint literal and we get Cannot implicitly convert type 'uint' to 'int'... which is a bit strange to me since we are dealing with binary data.
Is there a way to declare -1 integer value using binary literal in C#?
After trying a few cases, I finally found this one:
int x = -0b000_0000_0000_0000_0000_0000_0000_0001;
Console.WriteLine(x);
And the result printed is -1.
If I understand correctly, the most significant bit acts as the sign bit, so when you write 32 ones the value no longer fits in int and the literal is typed as uint.
You can explicitly cast it, but because there's a constant term involved, I believe you have to manually specify unchecked:
int x = unchecked((int)0b1111_1111_1111_1111_1111_1111_1111_1111);
(Edited to include Jeff Mercado's suggestion.)
You can also use something like int x = -0b1 as pointed out in S.Petrosov's answer, but of course that doesn't show the actual bit representation of -1, which might defeat the purpose of declaring it using a binary literal in the first place.
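For what it's worth, a quick check (just a sketch) confirms both forms produce the same value and bit pattern:
int a = unchecked((int)0b1111_1111_1111_1111_1111_1111_1111_1111);
int b = -0b1;
Console.WriteLine(a);                       // -1
Console.WriteLine(b);                       // -1
Console.WriteLine(Convert.ToString(a, 2));  // 11111111111111111111111111111111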
I always come across code that uses int for things like .Count, etc, even in the framework classes, instead of uint.
What's the reason for this?
UInt32 is not CLS compliant so it might not be available in all languages that target the Common Language Specification. Int32 is CLS compliant and therefore is guaranteed to exist in all languages.
int, in C, is specifically defined to be the default integer type of the processor, and is therefore held to be the fastest for general numeric operations.
Unsigned types only behave like whole numbers if the sum or product of a signed and unsigned value will be a signed type large enough to hold either operand, and if the difference between two unsigned values is a signed value large enough to hold any result. Thus, code which makes significant use of UInt32 will frequently need to compute values as Int64.
Operations on signed integer types may fail to operate like whole numbers when the operands are overly large, but they'll behave sensibly when operands are small. Operations on unpromoted arguments of unsigned types pose problems even when operands are small. Given UInt32 x, for example, the inequality x-1 < x will fail for x==0 if the result type is UInt32, and the inequality x<=0 || x-1>=0 will fail for large x values if the result type is Int32. Only if the operation is performed on type Int64 can both inequalities be upheld.
While it is sometimes useful to define unsigned-type behavior in ways that differ from whole-number arithmetic, values which represent things like counts should generally use types that will behave like whole numbers--something unsigned types generally don't do unless they're smaller than the basic integer type.
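A small demonstration of the wraparound described above (my own sketch, not part of the original answer):
uint x = 0;
Console.WriteLine(x - 1);      // 4294967295 -- the subtraction stays unsigned and wraps
Console.WriteLine(x - 1 < x);  // False: the inequality fails for x == 0
int y = 0;
Console.WriteLine(y - 1);      // -1 -- signed arithmetic behaves like whole numbers here
Console.WriteLine(y - 1 < y);  // True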
UInt32 isn't CLS-Compliant. http://msdn.microsoft.com/en-us/library/system.uint32.aspx
I think that over the years people have come to the conclusion that using unsigned types doesn't really offer that much benefit. The better question is: what would you gain by making Count a UInt32?
Some things use int so that they can return -1 as if it were "null" or something like that. A ComboBox, for example, returns -1 for its SelectedIndex if no item is selected.
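String.IndexOf follows the same sentinel pattern (a one-line illustration):
Console.WriteLine("hello".IndexOf('z'));  // -1 means "not found" -- a value uint could not represent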
If the number is truly unsigned by its intrinsic nature then I would declare it an unsigned int. However, if I just happen to be using a number (for the time being) in the positive range then I would call it an int.
The main reasons being that:
It avoids having to do a lot of type-casting as most methods/functions are written to take an int and not an unsigned int.
It eliminates possible truncation warnings.
You invariably end up wishing you could assign a negative value to the number that you had originally thought would always be positive.
Those are just a few quick thoughts that came to mind.
I used to try to be very careful about choosing the proper unsigned/signed type, and I finally realized that it doesn't really produce a positive benefit; it just creates extra work. So why make things hard by mixing and matching?
Some old libraries, and even InStr, use negative numbers to mean special cases. I believe it's either laziness or that there really are negative special values.
In the following snippet:
long frameRate = (long)(_frameCounter / this._stopwatch.Elapsed.TotalSeconds);
Why is there an additional (long)(...) to the right of the assignment operator?
The division creates a double-precision floating point value (since TimeSpan.TotalSeconds is a double), so the cast truncates the resulting value to be integral instead of floating point. You end up with an approximate but whole number of frames-per-second instead of an exact answer with fractional frames-per-second.
If frameRate is used for display or logging, the cast might just be to make the output look nicer.
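A sketch with made-up numbers shows the truncation (frameCounter and elapsedSeconds stand in for the fields in the snippet):
long frameCounter = 605;
double elapsedSeconds = 10.0;                  // stands in for _stopwatch.Elapsed.TotalSeconds
double exact = frameCounter / elapsedSeconds;  // 60.5 -- the division yields a double
long frameRate = (long)exact;                  // 60 -- the fractional part is discarded, not rounded
Console.WriteLine($"{exact} -> {frameRate}");  // 60.5 -> 60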
It's an explicit conversion (cast) that converts the result of the division operation to a long.
See: Casting and Type Conversions
Because the result of the calculation depends on the types of the variables being used. If the compiler decides the result type is not a long, because of the types being operated on, then you need to cast your result.
Note that casting your result may incur a loss of accuracy or value. The parenthesized cast (long) is an explicit cast and will not generate any errors if, say, you tried to fit 1.234 into a long, which can only store 1.
In my opinion there could be a few reasons:
1. At least one of the types in the expression is not an integer type (I don't think so).
2. The developer wanted to highlight that the result is of type long (it makes the result type clear for the reader -- a good reason).
3. The developer was not sure what the result type of the expression would be and wanted to make sure it would be long (it's better to make sure than to hope it will work).
I believe it was reason 3 :).
Possible Duplicate:
byte + byte = int… why?
I have a method like this:
void Method(short parameter)
{
short localVariable = 0;
var result = localVariable - parameter;
}
Why is the result an Int32 instead of an Int16?
It's not just subtraction: there simply exists no short (or byte/sbyte) arithmetic in C#.
short a = 2, b = 3;
short c = a + b;
This will give the error that it cannot convert int (a + b) to short (c).
One more reason to almost never use short.
Additional: in any calculation, short, sbyte, byte, and ushort are always 'widened' to int. This behavior goes back to K&R C (and probably is even older than that).
The (old) reason for this was, as far as I know, efficiency and overflow problems when dealing with char. That last reason doesn't hold as strongly for C# anymore, where a char is 16 bits and int is not implicitly convertible to char. But it is very fortunate that C# numerical expressions remain compatible with C and C++ to a very high degree.
All operations with integral numbers smaller than Int32 are widened to 32 bits before calculation by default. The reason the result is Int32 is simply that it is left as-is after the calculation. If you check the MSIL arithmetic opcodes, the only integral numeric types they operate on are Int32 and Int64. It's "by design".
If you want the result back in Int16 format, it is irrelevant whether you perform the cast in code or the compiler (hypothetically) emits the conversion "under the hood".
Also, the example above can easily be solved with the cast
short a = 2, b = 3;
short c = (short) (a + b);
The two numbers would expand to 32 bits, get subtracted, then truncated back to 16 bits, which is how MS intended it to be.
The advantage of using short (or byte) is primarily storage in cases where you have massive amounts of data (graphical data, streaming, etc.)
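To put rough numbers on the storage argument (a sketch):
Console.WriteLine(sizeof(short));        // 2 bytes per element
Console.WriteLine(sizeof(int));          // 4 bytes per element
short[] samples = new short[1_000_000];  // roughly 2 MB instead of 4 MB for an int[]
Console.WriteLine(samples.Length);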
P.S. Oh, and the article is "a" for words whose pronunciation starts with a consonant sound, and "an" for words whose pronunciation starts with a vowel sound. A number, AN int. ;)
The other answers in this thread, as well as the discussions linked here, are instructive:
(1) Why is a cast required for byte subtraction in C#?
(2) byte + byte = int… why?
But just to throw another wrinkle into it, it can depend on which operators you use. The increment (++) and decrement (--) operators as well as the addition assignment (+=) and subtraction assignment (-=) operators are overloaded for a variety of numeric types, and they perform the extra step of converting the result back to the operand's type when returning the result.
For example, using short:
short s = 0;
s++; // <-- Ok
s += 1; // <-- Ok
s = s + 1; // <-- Compile time error!
s = s + s; // <-- Compile time error!
Using byte:
byte b = 0;
b++; // <-- Ok
b += 1; // <-- Ok
b = b + 1; // <-- Compile time error!
b = b + b; // <-- Compile time error!
If they didn't do it this way, calls using the increment operator (++) would be impossible and calls to the addition assignment operator would be awkward at best, e.g.:
short s = 0;
s += (short)1;
Anyway, just another aspect to this whole discussion...
I think it's done automatically to avoid overflow.
Let's say you do something like this:
short result = short.MaxValue + short.MaxValue;  // compile-time error: the constant 65534 cannot be converted to short
The result clearly wouldn't fit in a short.
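If you force the sum back into a short anyway, the overflow becomes visible (a sketch):
int sum = short.MaxValue + short.MaxValue;  // 65534, computed as int
short wrapped = unchecked((short)sum);      // -2: the value wraps when truncated to 16 bits
Console.WriteLine($"{sum} -> {wrapped}");   // 65534 -> -2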
One thing I don't understand, then, is why not do the same for Int32 too, which would automatically convert to long?
The effect you are seeing...
short - short = int
...is discussed extensively in this Stack Overflow question: byte + byte = int… why?
There is a lot of good information and some interesting discussions as to why it is that way.
Here is a highly-voted answer:
I believe it's basically for the sake of performance. (In terms of "why it happens at all" it's because there aren't any operators defined by C# for arithmetic with byte, sbyte, short or ushort, just as others have said. This answer is about why those operators aren't defined.)
Processors have native operations to do arithmetic with 32 bits very quickly. Doing the conversion back from the result to a byte automatically could be done, but would result in performance penalties in the case where you don't actually want that behaviour.
-- Jon Skeet
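You can see the promotion directly by inspecting the result type (a quick sketch of my own):
byte a = 1, b = 2;
var sum = a + b;
Console.WriteLine(sum.GetType());  // System.Int32 -- the operands were promoted before the addition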