As you can see here, I have an unsigned variable and a signed one, both with the same binary value but different decimal results.
uint unsigned = 0xFFFFFFFF; // 4,294,967,295
int signed = unchecked((int)0xFFFFFFFF); // -1
I am really confused! How does the machine tell the difference between signed and unsigned? After all, those are 32/64 bits, all turned on. Is there a CPU flag that is used for exactly this? How does the CPU know the difference between an unsigned and a signed number? I've been trying to understand the concept for hours!
Or... another example...let's take 4 bits:
// 1 1 0 1 <--- -3
// 0 0 1 0 <--- 2
If we subtract 2 from -3 the result will be:
// 1 0 1 1 <--- -5 or 11???
The CPU will yield that result, but how does my application know whether it is a -5 or an 11? Is the expected result type known at compile time?
another example...
int negative = -1;
Console.WriteLine(negative.ToString("X")); //output FFFFFFFF
uint positive = 0xFFFFFFFF;
Console.WriteLine(positive.ToString("X")); //output FFFFFFFF
Later edit:
I played around with this a bit more and I think I've got it now; please correct me if I am wrong:
We have the following example:
sbyte positive = 15; //0xF ( 0000 1111)
sbyte negative = unchecked((sbyte)-positive); //0xF1 ( 1111 0001)
sbyte anotherPositive = 7; //0x7 ( 0000 0111)
sbyte result = (sbyte)(negative + anotherPositive);
//The cpu will add the numbers 1111 0001
// 0000 0111
// 1111 1000
// (The raw result is 248, but our variable is
// SIGNED and the sign bit is ON (1), so the value will be interpreted as -8)
I am really confused! How does the machine make the difference at a low level? After all, those are 32 bits, all turned on.
The machine doesn't; the compiler does. It is the compiler that knows the type of the signed and unsigned variables (ignoring for a moment the fact that both signed and unsigned are keywords in C and C++). Therefore, the compiler knows what instructions to generate and what functions to call based on these types.
The distinction between the types is made at compile time, changing the interpretation of possibly identical raw data based on its compile-time type.
Consider an example of printing the value of a variable. When you write
cout << mySignedVariable << " " << myUnsignedVariable << endl;
the compiler sees two overloads of << operator being applied:
The first << makes a call to ostream& operator<< (int val);
The second << makes a call to ostream& operator<< (unsigned int val);
Once the code reaches the proper operator implementation, you are done: the code generated for the specific overload has the information on how to handle the signed or unsigned value "baked into" its machine code. The implementation that takes an int uses signed machine instructions for comparisons, divisions, multiplications, etc. while the unsigned implementation uses different instructions, producing the desired results.
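To make this concrete in the C# terms of the original question, here is a small sketch of my own (not part of this answer): the same 32-bit pattern is divided and compared using different instructions depending on its compile-time type.
uint u = 0xFFFFFFFF;          // 4,294,967,295
int s = unchecked((int)u);    // -1, same 32 bits
Console.WriteLine(u / 2);     // 2147483647 (unsigned divide)
Console.WriteLine(s / 2);     // 0 (signed divide truncates toward zero)
Console.WriteLine(u > 0);     // True
Console.WriteLine(s > 0);     // False (signed comparison)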
It's the job of the language's type system to interpret what it sees.
All the CPU really sees is a bunch of zeros and ones.
Interestingly, two's complement arithmetic can use the same instructions as unsigned arithmetic. One's complement arithmetic requires an annoying complementing subtractor.
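A small sketch of my own to illustrate that point: adding the raw bit patterns produces the same result bits whether they are treated as signed or unsigned, which is why one adder can serve both.
int a = -3, b = 2;
uint ua = unchecked((uint)a), ub = (uint)b;
Console.WriteLine((a + b).ToString("X8"));            // FFFFFFFF (read as -1)
Console.WriteLine(unchecked(ua + ub).ToString("X8")); // FFFFFFFF (read as 4,294,967,295)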
The computer doesn't care; as far as it's concerned, they are both the same. It's the compiler and the language runtime that decide how to interpret and handle the bit pattern, and they know whether it's a uint, int, double, float, Unicode char, etc.
That's why type safety and managed memory are so important: if your program writes a uint to a memory address and another program reads it as a char, you get all sorts of strange behaviour. You can see this when you open a binary file in Notepad.
I just started coding in C++ and I saw this symbol in some example code: <<
Is there an equivalent in C#? If so, what is it?
Thank you in advance.
Disclaimer: I don't know anything about C#; this answer just describes the operator in C++.
It depends on context; that operator is often overloaded to mean different things for different types.
For integer types, it's the bitwise left shift operator; it takes the bit pattern of a value, and moves it to the left, inserting zero into the less significant bits:
unsigned x = 6; // decimal 6, binary 00110
unsigned y = x << 2; // decimal 24, binary 11000
In general, a left-shift by N bits is equivalent to multiplying by 2^N (so here, shifting by 2 bits multiplies by 2^2 = 4).
I'm fairly sure this use of the operator is the same in C# as in C++.
The standard library overloads the operator to insert a value into an output stream, in order to produce formatted output on the console, or in files, or in other ways.
#include <iostream> // declare standard input/output streams
std::cout << 42 << std::endl; // print 42 to the console, end the line, and flush.
I think C# has a TextWriter or something to handle formatted output, with Console.Out or something being equivalent to std::cout; but C# uses normal method calls rather than an overloaded operator.
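For what it's worth, a rough C# counterpart of that output line (my sketch, not part of the original answer):
Console.Out.WriteLine(42);   // roughly: std::cout << 42 << std::endl;
Console.WriteLine(42);       // the more common shorthand for the same thing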
operator<< means exactly the same in C++ as it does in C#; it is the left-shift operator and shifts all the bits in a number to the left by the specified number of places.
But, in C++, you can overload most operators to make them do whatever you like for user-defined types. Perhaps most commonly, the left- and right-shift operators are overloaded for streams to mean 'stuff this thing into that stream' (left-shift) or 'extract a variable of this type from that stream' (right-shift).
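As an aside, C# also lets you overload the shift operator for your own types, although the stream-output idiom never caught on there. A toy sketch of my own (the Logger type is made up):
class Logger
{
    // Before C# 11, the right-hand operand of a user-defined << must be an int
    public static Logger operator <<(Logger log, int value)
    {
        Console.WriteLine(value);
        return log;   // returning the logger lets calls chain, C++-style
    }
}
// usage: var log = new Logger(); _ = log << 42 << 7;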
I have a C# application in which I have this code:
public static void Main()
{
    int i = 2147483647;
    int j = i + 1;
    Console.WriteLine(j);
    Console.ReadKey();
}
The result is: -2147483648
I know that every int must be < 2147483648. So:
Why don't I get a compilation or runtime error, like in this example?
What is the reason for the negative sign?
Thanks
The compiler defaults to unchecked arithmetic; you have simply overflowed and wrapped around, thanks to two's-complement storage.
This fails at runtime:
public static void Main()
{
    int i = 2147483647;
    int j = checked((int)(i + 1)); // <==== note "checked"
    Console.WriteLine(j);
    Console.ReadKey();
}
This can also be enabled globally as a compiler-switch.
As Christos says, the negative sign comes from integer overflow. The reason you do not get an error is that the compiler does not evaluate non-constant expressions to check for overflowing values.
0111 1111 1111 1111 1111 1111 1111 1111 2^31-1
+0000 0000 0000 0000 0000 0000 0000 0001 1
=1000 0000 0000 0000 0000 0000 0000 0000 -2^31
The reason for this is that the leftmost bit is the sign bit: it determines whether the int is positive or negative (0 means positive, 1 means negative). If you add one to the largest possible number, you flip the sign bit and get the smallest representable number. This happens because integers use two's complement storage.
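A one-line demonstration of that wrap-around (my own example; unchecked is needed here because both operands are constants, so the compiler would otherwise reject the overflow at compile time):
Console.WriteLine(unchecked(int.MaxValue + 1));   // prints -2147483648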
To check if the value overflows, do:
int j = checked(i + 1);
Why don't I have a compilation or runtime error?
Because the compiler can only determine that you have assigned a value larger than int.MaxValue when the value is hard-coded as a constant. For i + 1 the compiler doesn't execute the code, so it can't determine that the result of the calculation would be greater than int.MaxValue.
What is the reason for the negative sign?
It is because of integer overflow.
See: checked (C# Reference)
By default, an expression that contains only constant values causes a compiler error if the expression produces a value that is outside the range of the destination type. If the expression contains one or more non-constant values, the compiler does not detect the overflow.
What is the reason for the negative sign?
You have a negative sign, because you have exceeded the maximum integer value and the next integer is the lowest integer that can be represented.
Why don't I have a compilation or runtime error?
You don't have a compilation error because this is not an error, and it is not a runtime error either: you just add one to i at runtime. Since the value of i is the maximum value that can be stored in a variable of type int, and since integer arithmetic wraps around, you get the lowest value that can be stored in a variable of type int.
(A variable of type int can store 32-bit integers).
Furthermore, by default, integer operations in C# don't throw exceptions on overflow. You can change this either from the project settings or by using a checked statement, as has already been pointed out here.
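For completeness, a small sketch of my own showing the checked statement form together with the exception it produces:
int i = int.MaxValue;
try
{
    checked
    {
        int j = i + 1;           // throws OverflowException at runtime in a checked context
        Console.WriteLine(j);
    }
}
catch (OverflowException ex)
{
    Console.WriteLine(ex.Message);
}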
(uint)Convert.ToInt32(elements[0]) << 24;
The << is the left shift operator.
It takes the binary representation of the number and shifts all the bits to the left by the specified amount.
If we have
2 << 1
This will take the number 2 in binary (00000010) and shift it to the left one bit. This gives you 4 (00000100).
Overflows
Note that once bits are pushed off the very left, they are discarded. So assume you are working with an 8-bit integer (I know the C# uint in your example is 32 bits; I don't want to type out a 32-bit value, so just assume we are on 8 bits):
255 << 1
will return 254 (11111110).
Use
Provided you are careful about the overflows mentioned before, bit shifting is a very fast way to multiply or divide by powers of 2. In a highly optimised environment (such as games) this is a very useful way to perform arithmetic quickly.
However, in your example, it takes only the rightmost 8 bits of the number and makes them the leftmost 8 bits (multiplying by 16,777,216). Why you would want to do this, I can only guess.
I guess you are referring to Shift operators.
As Mongus Pong said, shifts are usually used to multiply and divide very fast. (And can cause weird problems due to overflow).
I'm going to go out on a limb and try to guess what your code is doing.
If elements[0] is a byte-sized element (that is to say, it contains only 8 bits), then this code results in a straightforward multiplication by 2^24. Otherwise, it drops the 24 high-order bits and multiplies by 2^24.
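For context, a minimal sketch of the usual pattern such code belongs to (my own example with made-up data): reassembling four bytes into one uint by shifting each byte into position.
byte[] bytes = { 0x12, 0x34, 0x56, 0x78 };    // hypothetical input
uint value = ((uint)bytes[0] << 24)
           | ((uint)bytes[1] << 16)
           | ((uint)bytes[2] << 8)
           |  (uint)bytes[3];
Console.WriteLine(value.ToString("X8"));      // 12345678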
I was studying shift operators in C#, trying to find out when to use them in my code.
I found an answer, but for Java. You could:
a) Make faster integer multiplication and division operations:
4839534 * 4 can be done like this:
4839534 << 2
or
543894 / 2 can be done like this: 543894 >> 1
Shift operations are much faster than multiplication on most processors.
b) Reassembling byte streams to int values
c) For accelerating operations with graphics since Red, Green and Blue colors coded by separate bytes.
d) Packing small numbers into one single long...
For b, c and d I can't imagine a real example.
Does anyone know if we can accomplish all these items in C#?
Is there more practical use for shift operators in C#?
There is no need to use them for optimisation purposes because the compiler will take care of this for you.
Only use them when shifting bits is the real intent of your code (as in the remaining examples in your question). The rest of the time just use multiply and divide so readers of your code can understand it at a glance.
Unless there is a very compelling reason, my opinion is that using clever tricks like that typically just makes for more confusing code with little added value. The compiler writers are a smart bunch of developers and know a lot more of those tricks than the average programmer does. For example, dividing an integer by a power of 2 is faster with the shift operator than with a division, but it probably isn't necessary, since the compiler will do that for you. You can see, by looking at the generated assembly, that both the Microsoft C/C++ compiler and gcc perform these optimizations.
I will share an interesting use I've stumbled across in the past. This example is shamelessly copied from a supplemental answer to the question, "What does the [Flags] Enum Attribute mean in C#?"
[Flags]
public enum MyEnum
{
    None = 0,
    First = 1 << 0,
    Second = 1 << 1,
    Third = 1 << 2,
    Fourth = 1 << 3
}
This can be easier to expand upon than writing literal 1, 2, 4, 8, ... values, especially once you get past 17 flags.
The tradeoff is, if you need more than 31 flags (1 << 30), you also need to be careful to specify your enum as something with a higher upper bound than a signed integer (by declaring it as public enum MyEnum : ulong, for example, which will give you up to 64 flags). This is because...
1 << 29 == 536870912
1 << 30 == 1073741824
1 << 31 == -2147483648
1 << 32 == 1
1 << 33 == 2
By contrast, if you set an enum value directly to 2147483648, the compiler will throw an error.
As pointed out by ClickRick, even if your enum derives from ulong, your bit shift operation has to be performed against a ulong or your enum values will still be broken.
[Flags]
public enum MyEnum : ulong
{
    None = 0,
    First = 1 << 0,
    Second = 1 << 1,
    Third = 1 << 2,
    Fourth = 1 << 3,
    // Compiler error:
    // Constant value '-2147483648' cannot be converted to a 'ulong'
    // (Note this wouldn't be thrown if MyEnum derived from long)
    ThirtySecond = 1 << 31,
    // so what you would have to do instead is...
    ThirtySecond = 1UL << 31,
    ThirtyThird = 1UL << 32,
    ThirtyFourth = 1UL << 33
}
Check out these Wikipedia articles about the binary number system and the arithmetic shift. I think they will answer your questions.
The shift operators are rarely encountered in business applications today. They will appear frequently in low-level code that interacts with hardware or manipulates packed data. They were more common back in the days of 64k memory segments.
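To make item (c) from the question concrete, here is a small sketch of my own for packing and unpacking an RGB colour with shifts and masks:
// Pack three 8-bit channels into one int laid out as 0x00RRGGBB
int Pack(byte r, byte g, byte b) => (r << 16) | (g << 8) | b;
int colour = Pack(0x12, 0x34, 0x56);          // 0x123456
byte red   = (byte)((colour >> 16) & 0xFF);   // 0x12
byte green = (byte)((colour >> 8) & 0xFF);    // 0x34
byte blue  = (byte)(colour & 0xFF);           // 0x56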
Given this field:
char lookup_ext[8192] = {0}; // Gets filled later
And this statement:
unsigned short *slt = (unsigned short*) lookup_ext;
What happens behind the scenes?
lookup_ext[1669] returns 67 = 0100 0011 (C), lookup_ext[1670] returns 78 = 0100 1110 (N) and lookup_ext[1671] returns 68 = 0100 0100 (D); yet slt[1670] returns 18273 = 0100 0111 0110 0001.
I'm trying to port this to C#, so besides an easy way out of this, I'm also wondering what really happens here. Been a while since I used C++ regularly.
Thanks!
The statement that you show doesn't cast a char to an unsigned short, it casts a pointer to a char to a pointer to an unsigned short. This means that the usual arithmetic conversions of the pointed-to-data are not going to happen and that the underlying char data will just be interpreted as unsigned shorts when accessed through the slt variable.
Note that sizeof(unsigned short) is unlikely to be one, so that slt[1670] won't necessarily correspond to lookup_ext[1670]. It is more likely - if, say, sizeof(unsigned short) is two - to correspond to lookup_ext[3340] and lookup_ext[3341].
Do you know why the original code is using this aliasing? If it's not necessary, it might be worth trying to make the C++ code cleaner and verifying that the behaviour is unchanged before porting it.
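Since the question mentions porting to C#, here is a hedged sketch of one way to get the same reinterpretation there, assuming the C++ char buffer is available as a byte[] (the names are mine):
byte[] lookupExt = new byte[8192];   // hypothetical C# counterpart, filled later
// Read the two bytes at offset 2 * i and interpret them as a ushort, mirroring
// slt[i] in the C++ code (byte order follows the machine's endianness).
ushort ReadSlt(byte[] data, int i) => BitConverter.ToUInt16(data, 2 * i);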
If I understand correctly, the type conversion will be converting a char array of size 8192 to a short int array of size half of that, which is 4096.
So I don't understand what you are comparing in your question. slt[1670] should correspond to lookup_ext[1670*2] and lookup_ext[1670*2+1].
Well, this statement
char lookup_ext[8192] = {0}; // Gets filled later
creates an array either locally or at namespace scope, depending on where the definition occurs. Initializing it like that, with an aggregate initializer, will set all of its elements to zero (the first explicitly, the remaining ones implicitly). Therefore I wondered why your program outputs non-zero values; but since the fill happens before the read, that makes sense.
unsigned short *slt = (unsigned short*) lookup_ext;
That will interpret the bytes making up the array as unsigned short objects when you read through that pointer. Strictly speaking, the above is undefined behaviour, because you can't be sure the array is suitably aligned, and you are reading through a pointer whose type does not match the type of the original object (char vs. unsigned short). In C++, the only portable way to read a value out of some other POD (plain old data; broadly speaking, the structs and simple types, such as short, that are also possible in C) is to use library functions such as memcpy or memmove.
So if you read *slt above, you interpret the first sizeof(*slt) bytes of the array as an unsigned short (that's called type punning).
When you write unsigned short *slt = (unsigned short*) lookup_ext;, slt points at the same memory as lookup_ext. When you then read slt[0], a number of bytes equal to sizeof(unsigned short) is picked up from the location given by lookup_ext and interpreted as an unsigned short. Since an unsigned short is typically 2 bytes, the first two bytes of lookup_ext make up slt[0].