When you left-shift (<<) or right-shift (>>) an integer register, zeros are pulled in from the opposite side. It stands to reason that if you shift by at least the width of the register (e.g. shift a uint by 32 or more bits), it should become all zeros. In C#, this is not what happens. Instead, the shift count is effectively interpreted mod the register width, so it is impossible to reliably zero a register with a single shift. This is a pain.
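To illustrate what I mean:

uint x = 0xDEADBEEF;
Console.WriteLine(x << 32);  // prints 3735928559 -- x unchanged, since the count is 32 & 31 == 0
Console.WriteLine(x << 33);  // same as x << 1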
I got hit by this because I am representing an extra-large integer via a uint[]. When you perform bit shifts on the extra-large register, in general bits from different uint instances in the array must "mix", so the code does stuff like:
s[i] = (s[i + k] << n) | (s[i + k + 1] >> (32 - n));
but this code fails when n = 0 or 32, i.e. when the shift corresponds to a pure shift among uint values with no mixing. (It's easy to write the code so that n = 32 never appears, but then the shift by (32 - n) still fails when n = 0.) For now, I have got around this problem by special-casing n = 0, but this is extra code (ugly) and an extra test-and-branch in a performance-critical spot (bad).
Can anyone suggest a modification (maybe using masking instead of or in addition to bit shifting?) to get around this difficulty? Can anyone explain why C# made the choice to make shift counts not behave according to naive expectations?
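(One branch-free possibility, sketched here for illustration: split the out-of-range shift into two in-range shifts, so every shift count stays within 0..31:

s[i] = (s[i + k] << n) | ((s[i + k + 1] >> 1) >> (31 - n));

For n in 1..31 this equals the original expression, and for n = 0 the right-hand term becomes zero, which is exactly the pure-shift case.)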
First off, no, I am not a student... just a C# guy porting a C++ library.
What do these two crazy lines mean? What are they equivalent to in C#? I'm mostly concerned with size_t and sizeof; I'm not concerned about static_cast or assert, since I know how to deal with those.
size_t Index = static_cast<size_t>((y - 1620) / 2);
assert(Index < sizeof(DeltaTTable)/sizeof(double));
y is a double and DeltaTTable is a double[]. Thanks in advance!
size_t is a typedef for an unsigned integer type. It is used for sizes of things, and may be 32 or 64 bits in size. The particular size of a size_t is implementation defined, but it is unsigned.
I suppose in C# you could use a 64-bit unsigned integer type.
All sizeof does is return the size in bytes of a C++ type. Every type takes up a certain quantity of room, and sizeof returns that size.
What your code is doing is computing the number of doubles (64-bit floats) in DeltaTTable: sizeof(DeltaTTable) is the size of the whole array in bytes, and dividing by sizeof(double) gives the element count. The assert then checks that Index falls within the table.
You don't need that idiom in C#: a C# array knows its own Length, and the runtime bounds-checks every access anyway, so there is no reason to port these lines literally.
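For illustration, a rough C# equivalent of the two lines (assuming y is a double and DeltaTTable is a double[], as the question says):

using System.Diagnostics;  // for Debug.Assert

int index = (int)((y - 1620) / 2);
Debug.Assert(index >= 0 && index < DeltaTTable.Length);  // Length replaces sizeof(DeltaTTable)/sizeof(double)

Note the explicit index >= 0: in the C++ version Index is unsigned, so a negative value wraps around to a huge number and fails the upper-bound check; in C# you have to test for it directly.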
The bad news first: you can't do that in C#; there is no static_cast, only ordinary casts. However, the good news is that it doesn't matter.
The two lines of code assert that the index is within the bounds of the table, so that the code won't accidentally read some arbitrary memory location. The CLR takes care of that for you, so when porting you can just drop those lines; the check is effectively there anyway.
Of course, this is an assumption based on the pattern of the code; there's no information on what y represents or how Index is used.
sizeof calculates how much memory, in bytes, the DeltaTTable array takes.
There is no equivalent way to calculate a size like this in C#, AFAIK.
I'd guess size_t is just some typedef'd unsigned type in the C++ code.
What actually happens when a Byte overflows?
Say we have
byte byte1 = 150; // 10010110
byte byte2 = 199; // 11000111
If we now do this addition
byte byte3 = byte1 + byte2;
I think we'll end up with byte3 = 93, but what actually happens? Did I overwrite some other memory somehow, or is this totally harmless?
It's quite simple: it just does the addition and ends up with a number that needs more than 8 bits. The ninth bit (being one) just 'falls off', and you are left with the remaining 8 bits, which form the number 93.
(yes it's harmless)
The top bits will be truncated. It is not harmful to any other memory, it is only harmful in terms of unintended results.
In C# if you have
checked { byte byte3 = (byte)(byte1 + byte2); }
It will throw an OverflowException. Code is compiled unchecked by default. As the other answers are saying, the value will 'wrap around', i.e. byte3 = (byte)((byte1 + byte2) & 0xFF);
The carry flag gets set... but besides the result not being what you expect, there should be no ill effects.
Typically (and the exact behaviour will depend on the language and platform), the result will be taken modulo 256, i.e. 150 + 199 = 349, and 349 mod 256 = 93.
This shouldn't affect any other storage.
Since you have tagged your question C#, C++ and C, I'll answer about C and C++. In C++, overflow on signed types, including sbyte (which, I believe, is signed char in C/C++), results in undefined behavior. However, for unsigned types, such as byte (which is unsigned char in C++), the result is taken modulo 2^n, where n is the number of bits in the unsigned type. In C# the second rule holds, and the signed types generate an exception if they are in a checked block. I may be wrong in the C# part.
Overflow is harmless in C# - you won't overflow memory - you simply get the last 8 bits of the result. If you want this to throw an exception, use the 'checked' keyword. Note also that byte + byte gives an int, so you may need to cast back to byte.
The behavior depends on the language.
In C and C++, signed overflow is undefined and unsigned overflow has the behavior you mentioned (although there is no byte type).
In C#, you can use the checked keyword to explicitly say you want to receive an exception if there is overflow and the unchecked keyword to explicitly say you want to ignore it.
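A small sketch of the two modes, reusing the bytes from the question:

byte byte1 = 150;
byte byte2 = 199;

byte wrapped = unchecked((byte)(byte1 + byte2));  // 93: only the low 8 bits are kept
byte guarded = checked((byte)(byte1 + byte2));    // throws OverflowException at run time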
The leading bit just drops off.
An arithmetic overflow occurs: since 150 + 199 = 349, binary 1 0101 1101, the upper 1 bit is dropped and the byte becomes 0101 1101, i.e. 93; the result overflowed the number of bits a byte can hold.
No damage was done - e.g. the memory didn't overflow to another location.
Let's look at what actually happens, in C (assuming an appropriate 8-bit datatype; as some have pointed out, C doesn't have a "byte" datatype, but there are 8-bit types that can be added). If these bytes are declared on the stack, they exist in main memory; at some point the bytes get copied to the processor for the operation (I'm skipping over several important steps, such as processor caching).

Once in the processor, they are stored in registers, and the processor executes an add operation on those two registers. Here's where the confusion arises: the CPU performs the add in its native (or sometimes a specified) datatype. Say the native type is a 32-bit word and that is what the add uses; the bytes are then stored in 32-bit words with the upper 24 bits unset, and the add does indeed "overflow" within the target 32-bit word. But (and here's the important bit) when the data is copied back from the register to the stack, only the lowest 8 bits (the byte) are copied back to the target variable's location. (Note that there's some complexity involving byte packing and the stack here as well.)

So, here's the upshot: the add causes an overflow (depending on the specific processor instruction chosen); the data, however, is copied out of the processor into a datatype of the appropriate size, so the overflow is unseen (and harmless, assuming a properly written compiler).
As far as C# goes, adding two values of type byte together results in a value of type int which must then be cast back to byte.
Therefore your code sample will result in a compiler error without a cast back to byte as in the following.
byte byte1 = 150; // 10010110
byte byte2 = 199; // 11000111
byte byte3 = (byte)(byte1 + byte2);
See MSDN for more details on this. Also, see the C# language specification, section 7.3.6 Numeric promotions.
I have an odd scenario (see this answer for more details), where I need to add two bytes of data together. Obviously this is not normal adding. Here is the scenario:
I am trying to get a coordinate out of a control. When the control is less than 256 in width, the x coordinate takes one byte; otherwise it takes two bytes.
So, I now have an instance of that control that is larger than 256 in width. How do I add these two numbers together?
So for example:
2 + 0 is not 2, because the 2 is the high byte, so the result should really be 512 (or maybe the 2 is the low byte and the answer is 2 after all...)
Am I making sense? If so, how can I do this kind of addition in C#?
Update: Sorry for the confusing question. I think I got it figured out. See my answer below.
The approach with multiplying is clear enough but not idiomatic in the bitwise world, and the approach with BitConverter takes a byte array, which is not convenient in many cases.
The most common (and easy way) to perform this - use bitwise operators:
var r = (high << 8) | low;
And keep byte ordering in mind, because it's not always obvious which byte is the high one and which is the low one.
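For example (a sketch, with hypothetical high/low variables holding the two halves):

byte high = 2, low = 0;
int combined = (high << 8) | low;  // 512

// And splitting a value back into its two halves:
byte lowAgain  = (byte)(combined & 0xFF);
byte highAgain = (byte)((combined >> 8) & 0xFF);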
You mean something like
256 * high + low
?
Just in case anyone else needs this, I was looking for:
BitConverter.ToInt16
It takes two bytes from a byte array (starting at a given offset) and converts them to a 16-bit integer (a short). Which byte is treated as the low one depends on the machine's endianness; see BitConverter.IsLittleEndian.
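For example, on little-endian hardware (the common case), the low byte comes first in the array:

byte[] pair = { 0, 2 };                       // low byte first on little-endian machines
short value = BitConverter.ToInt16(pair, 0);  // 512 on little-endian hardware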
What I mean is: imagine we have an 8-byte variable that holds a high value and a low value. I can make one pointer point to the upper 4 bytes and another point to the lower 4 bytes, and set/retrieve their values without problems. Now, is there a way to get/set values for anything smaller than a byte? If, instead of dividing it into two 4-byte "variables", I wanted to treat it as eight 1-byte variables, I could use a bool, but there is no smaller variable type defined in C#. Would it be possible to divide it into 16 pieces just with pointers? Or even into 32 or 64? It wouldn't, right?
This is a pretty academic question; I know this can be achieved otherwise with bit shifting, unions (LayoutKind.Explicit), etc. Thanks!
No, C# does not support bit fields and a byte is the minimum amount of addressable memory. You can manually provide properties that change one or several specific bits but you have to provide packing/unpacking logic yourself:
private int field;  // backing storage for the packed bits

public bool Bit5 {
    get { return (field & 32) != 0; }                    // 32 == 1 << 5
    set { if (value) field |= 32; else field &= ~32; }   // set or clear bit 5
}
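Usage is then just (assuming the property and its backing field live on some type, here a hypothetical flags object):

flags.Bit5 = true;             // sets bit 5 (value 32) in the backing field
Console.WriteLine(flags.Bit5); // True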
By the way, I don't know how you achieve it using LayoutKind.Explicit as the minimum FieldOffset you can specify is one byte.
As a side note, even C++, which can do this with bit fields, just hides the bitwise tricks and makes the compiler do them for you. There's no way to grab something smaller than a byte from memory into a register, at least on the x86 architecture.