I was playing with stackalloc and it throws an OverflowException when I run the following piece of code:
var test = 0x40000000;
unsafe
{
int* result = stackalloc int[test];
result[0] = 5;
}
and when I change the test variable as shown below, it throws a StackOverflowException instead.
var test = 0x30000000;
unsafe
{
int* result = stackalloc int[test];
result[0] = 5;
}
What is happening in both scenarios?
The runtime tries to calculate how much space is needed to store your array, and the first array is too large for the result to fit in the range of the type used.
The first array needs 0x40000000 * 4 = 0x0100000000 bytes of storage, while the second one needs only 0xc0000000 bytes.
Tweaking the array size and the array type (e.g. long[] or char[]) indicates that whenever the required space for the array goes over 0xffffffff you get the OverflowException; otherwise, the runtime attempts to create the array and crashes.
Based on the above I think it's fairly safe to conclude that the required space is being calculated using a value of type uint in a checked context. If the calculation overflows you unsurprisingly get the OverflowException; otherwise the runtime blows up due to the invalid unsafe operation.
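The runtime's internal computation isn't directly visible, but the observed behavior can be reproduced with the same arithmetic; a sketch, assuming the size is computed as a checked uint multiplication:

```csharp
using System;

class Program
{
    static void Main()
    {
        // 0x40000000 elements * sizeof(int) = 0x100000000 bytes, which does
        // not fit in 32 bits, so a checked uint multiplication overflows.
        uint elementCount = 0x40000000;
        try
        {
            uint bytesNeeded = checked(elementCount * (uint)sizeof(int));
            Console.WriteLine(bytesNeeded);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException"); // this branch is taken
        }

        // 0x30000000 elements * 4 = 0xC0000000 bytes still fits in a uint,
        // so no OverflowException is thrown -- the runtime then attempts the
        // doomed stack allocation and the process dies with a stack overflow.
        uint elementCount2 = 0x30000000;
        uint smaller = checked(elementCount2 * (uint)sizeof(int));
        Console.WriteLine(smaller == 0xC0000000u); // True
    }
}
```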
Both arrays are larger than the largest allowed size of a single object in .NET (which is 2GB regardless of the platform), meaning that this would fail even if you didn't try to allocate it on the stack.
In the first case, you need 4 * 0x40000000 bytes (int is 32-bit in .NET), which doesn't even fit in 32 bits and causes an arithmetic overflow. The second case needs ~3GB, which is still way larger than the 2GB limit.
Additionally, since you're trying to allocate on the stack, you need to know that the stack size for a single thread is ~1MB by default, so using stackalloc for anything larger than that will also fail.
For example, this will also throw a StackOverflowException:
unsafe
{
byte* result = stackalloc byte[1024 * 1024];
result[0] = 5;
}
Related
I'm getting an OutOfMemoryException and I don't know why. This is my C# code:
List<byte> testlist = new List<byte>();
for (byte i = 0; i <= 255; i++)
{
testlist.Add(i); //exception thrown here in the last cycle
}
Your loop never terminates because byte is an unsigned, 8-bit integer with valid values between 0 and 255.
So, when i == 255 and the loop body completes, another increment occurs. However, due to the range of byte, this does not cause i to equal 256 (it can't!), which would in turn cause the loop to terminate. Instead, it overflows, and rolls around to 0. So, the loop goes on (and on and on...). This is a relatively common bug when using unsigned loop counters.
In the meantime, your list keeps growing until you run out of memory. There's no reason to use a byte counter here; just use int and cast i when adding it to the list.
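A minimal corrected version of the loop, using an int counter as suggested:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var testlist = new List<byte>();

        // An int counter can reach 256, so the condition i <= 255 can
        // actually become false; cast back to byte only when storing.
        for (int i = 0; i <= 255; i++)
        {
            testlist.Add((byte)i);
        }

        Console.WriteLine(testlist.Count); // 256
    }
}
```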
I'm doing an experiment as part of an R & D process. I need to be able to set values in a struct and retrieve and set them as a byte[].
Here's my struct:
[StructLayout(LayoutKind.Explicit, Size = 17)]
unsafe internal struct MyBuffer
{
[FieldOffset(0)]
internal fixed byte Bytes[17];
[FieldOffset(0)]
internal long L1;
[FieldOffset(8)]
internal long L2;
[FieldOffset(16)]
internal byte B;
}
Setting the values will obviously automatically set the byte[]:
MyBuffer test = new MyBuffer();
test.L1 = 100;
test.L2 = 200;
test.B = 150;
Inspecting test in debug mode yields what I expect.
What I need is as follows:
To be able to read the unmanaged fixed byte array as a 17 byte long managed array.
To be able to set the unmanaged fixed byte array from a 17 byte managed array.
NOTES:
If at all possible, I don't want to use marshalling as this is a time sensitive operation.
I can't omit the fixed directive as that throws a runtime error due to the overlapping of objects and non-objects in the struct.
You're already using unsafe code, so why not simply get a pointer to the structure and pass it? Doesn't this work?
MyBuffer bf = new MyBuffer();
bf.L1 = 23;
unsafe
{
MyBuffer* pStruct = &bf;
YourNativeMethod(pStruct);
}
[DllImport("yourlib.dll")] // placeholder library name
static extern unsafe void YourNativeMethod(MyBuffer* pStruct);
To avoid all marshalling, you might have to write a C++/CLI wrapper, I'm not sure if .NET does marshalling even if you pass an unsafe pointer.
You don't even need the byte-array, the native method certainly doesn't care whether you're passing a pointer to a byte array or a structure. Everything is a byte array :D
EDIT: Since your case doesn't explicitly call a native method, we have to go around this.
The problem is, fixed byte[] isn't actually a byte array at all. It's simply a sequence of 17 bytes, nothing more. That's not enough for a .NET array. So we have to copy it to a new array (it might be worthwhile to keep "buffer" byte arrays ready and recycle them to avoid allocations and deallocations). This can be done either through Marshal.Copy or some unsafe pointer fun:
byte[] bytes = new byte[17];
unsafe
{
    fixed (byte* srcPtr = bf.Bytes)
    {
        Marshal.Copy((IntPtr)srcPtr, bytes, 0, 17);
    }
}
This uses direct memory copying, but does some checks. In my testing, it's a great way to copy bigger arrays (for me the break-even point was somewhere around 50 bytes). If your array is smaller, the overhead of those checks gets higher compared to total copy time, so you might want to use byte-by-byte copying instead:
byte[] bytes = new byte[17];
unsafe
{
    fixed (byte* srcPtr = bf.Bytes)
    fixed (byte* bPtr = bytes)
    {
        for (var j = 0; j < 17; j++)
        {
            bPtr[j] = srcPtr[j];
        }
    }
}
I hope I don't have to tell you to be careful around this kind of code :)
In any case, I wouldn't worry about the performance too much, and I'd use the Marshal.Copy variant, simply because your DB call is going to be the bottleneck anyway. The safer option is better :)
There are also a few tricks you can use to speed this up, for example copying a whole int or long at a time, which the CPU handles much better, although it's trickier. Trying a simple variant (a length that's a multiple of 4, copying a whole int at a time) cut my test runtime to a quarter. If your data length is not a multiple of four, you'd simply copy the remainder as bytes.
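For completeness, the other direction from the question (filling the fixed buffer from a 17-byte managed array) is the same copy in reverse; a sketch using the Marshal.Copy overload that writes from a managed array to a pointer (the struct is the one from the question, the field values are illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit, Size = 17)]
unsafe internal struct MyBuffer
{
    [FieldOffset(0)]
    internal fixed byte Bytes[17];
    [FieldOffset(0)]
    internal long L1;
    [FieldOffset(8)]
    internal long L2;
    [FieldOffset(16)]
    internal byte B;
}

class Program
{
    static void Main()
    {
        var bf = new MyBuffer();

        // A 17-byte managed array to load into the struct.
        byte[] bytes = new byte[17];
        bytes[0] = 100; // overlaps the low byte of L1
        bytes[16] = 7;  // overlaps B

        unsafe
        {
            fixed (byte* dstPtr = bf.Bytes)
            {
                // Copy the managed array into the fixed buffer.
                Marshal.Copy(bytes, 0, (IntPtr)dstPtr, 17);
            }
        }

        Console.WriteLine(bf.L1); // 100 on a little-endian machine
        Console.WriteLine(bf.B);  // 7
    }
}
```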
I really appreciate this community and all the help it has provided towards my programming problems that I've had in the past.
Now, unfortunately, I cannot seem to find an answer to this problem which, at first glance, seems like a no-brainer. Please note that I am currently using C++ 6.0.
Here is the code that I am trying to convert from C#:
byte[] Data = new byte[0x200000];
uint Length = (uint)Data.Length;
In C++, I declared the new byte array Data as follows:
BYTE Data[0x200000];
DWORD Length = sizeof(Data) / sizeof(DWORD);
When I run my program, I receive stack overflow errors (go figure). I believe this is because the array is so large (2 MB if I'm not mistaken).
Is there any way to implement this size array in C++ 6.0?
Defining the array this way places it on the stack, which ends in a stack overflow. You can create very big arrays on the heap by using pointers. For example:
BYTE *Data = new BYTE[0x200000];
Currently, you are allocating a lot of memory on the thread's stack, which will cause stack overflow, as stack space is usually limited to a few megabytes. You can create the array on the heap with new (by the way, you are calculating the array length incorrectly):
DWORD length = 0x200000;
BYTE* Data = new BYTE[length];
You might as well use vector<BYTE> instead of a raw array; its storage lives on the heap, and you pass the size to the constructor:
vector<BYTE> Data(0x200000);
size_t length = Data.size();
What actually happens when a Byte overflows?
Say we have
byte byte1 = 150; // 10010110
byte byte2 = 199; // 11000111
If we now do this addition
byte byte3 = byte1 + byte2;
I think we'll end up with byte3 = 93, but what actually happens? Did I overwrite some other memory somehow, or is this totally harmless?
It's quite simple. It just does the addition and comes up with a number that needs more than 8 bits. The ninth bit (being one) just 'falls off', and you are left with the remaining 8 bits, which form the number 93.
(yes it's harmless)
The top bits will be truncated. It is not harmful to any other memory, it is only harmful in terms of unintended results.
In C# if you have
checked { byte byte3 = (byte)(byte1 + byte2); }
It will throw an OverflowException. Code is compiled unchecked by default. As the other answers are saying, the value will 'wrap around', i.e. byte3 = (byte)((byte1 + byte2) & 0xFF);
The carry flag gets set... but besides the result not being what you expect, there should be no ill effects.
Typically (and the exact behaviour will depend on the language and platform), the result will be taken modulo-256. i.e. 150+199 = 349. 349 mod 256 = 93.
This shouldn't affect any other storage.
Since you have tagged your question C#, C++ and C, I'll answer about C and C++. In C++, overflow on signed types, including sbyte (which, I believe, corresponds to signed char in C/C++), results in undefined behavior. However, for unsigned types, such as byte (which is unsigned char in C++), the result is taken modulo 2^n, where n is the number of bits in the unsigned type. In C# the second rule holds, and signed types generate an exception if they are in a checked block. I may be wrong about the C# part.
Overflow is harmless in C# - you won't overwrite memory - you simply get the last 8 bits of the result. If you want this to throw an exception, use the 'checked' keyword. Note also that byte + byte gives int, so you may need to cast back to byte.
The behavior depends on the language.
In C and C++, signed overflow is undefined and unsigned overflow has the behavior you mentioned (although there is no byte type).
In C#, you can use the checked keyword to explicitly say you want to receive an exception if there is overflow and the unchecked keyword to explicitly say you want to ignore it.
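The C# side of this is easy to see directly; a small sketch contrasting the default wrap-around with a checked conversion:

```csharp
using System;

class Program
{
    static void Main()
    {
        byte byte1 = 150;
        byte byte2 = 199;

        // Unchecked (the default): the result wraps modulo 256.
        // 150 + 199 = 349, and 349 % 256 = 93.
        byte wrapped = unchecked((byte)(byte1 + byte2));
        Console.WriteLine(wrapped); // 93

        // Checked: the same narrowing conversion throws instead of wrapping.
        try
        {
            int sum = byte1 + byte2; // byte + byte yields int
            byte overflowed = checked((byte)sum);
            Console.WriteLine(overflowed);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException");
        }
    }
}
```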
The leading bit just drops off, and an arithmetic overflow occurs. Since 150 + 199 = 349, binary 1 0101 1101, the upper bit is dropped and the byte becomes 0101 1101, i.e. 93; the number of bits a byte can hold overflowed.
No damage was done - e.g. the memory didn't overflow to another location.
Let's look at what actually happens (in C, assuming you've got an appropriate datatype; as some have pointed out, C doesn't have a "byte" type, but there are 8-bit datatypes which can be added all the same). If these bytes are declared on the stack, they exist in main memory; at some point, the bytes will get copied to the processor for the operation (I'm skipping over several important steps, such as processor caching...). Once in the processor, they will be stored in registers; the processor will execute an add operation on those two registers to add the data together. Here's where the cause of confusion occurs. The CPU will perform the add operation in the native (or sometimes a specified) datatype. Let's say the native type of the CPU is a 32-bit word (and that this datatype is what's used for the add operation); that means these bytes will be stored in 32-bit words with the upper 24 bits unset, and the add operation will simply carry into the upper bits of the 32-bit result. But (and here's the important bit) when the data is copied back from the register to the stack, only the lowest 8 bits (the byte) will be copied back to the target variable's location on the stack. (Note that there's some complexity involved with byte packing and the stack here as well.)
So, here's the upshot: the add causes an overflow (depending on the specific processor instruction chosen); the data, however, is copied out of the processor into a datatype of the appropriate size, so the overflow goes unseen (and is harmless, assuming a properly written compiler).
As far as C# goes, adding two values of type byte together results in a value of type int which must then be cast back to byte.
Therefore your code sample will result in a compiler error without a cast back to byte as in the following.
byte byte1 = 150; // 10010110
byte byte2 = 199; // 11000111
byte byte3 = (byte)(byte1 + byte2);
See MSDN for more details on this. Also, see the C# language specification, section 7.3.6 Numeric promotions.
Why does the following code throw the exception "Arithmetic operation resulted in an overflow."?
UInt64[] arr=new UInt64[UInt64.MaxValue];
I guess it's because a total of 8 * UInt64.MaxValue bytes is requested to be allocated, and this multiplication obviously overflows a 64-bit register.
Because indexers only take Int32 values. You can do
UInt64[] arr=new UInt64[Int32.MaxValue];
But that's the limit.
EDIT: Technically you can index an array with values that can theoretically be higher than Int32.MaxValue (because you can index an array with a long or a uint, for example); however, you will get that runtime error when the value exceeds Int32.MaxValue.
Because
a) all objects are limited to 2GB in .NET
b) you don't have 128 exabytes of memory to spend (UInt64.MaxValue elements at 8 bytes each)
According to Microsoft's documentation, with .NET Framework 4.5 (in a 64-bit process with very large objects enabled) these limitations apply:
The maximum number of elements in an array is UInt32.MaxValue.
The maximum index in any single dimension is 2,147,483,591 (0x7FFFFFC7) for byte arrays and arrays of single-byte structures, and 2,146,435,071 (0x7FEFFFFF) for other types.
The maximum size for strings and other non-array objects is unchanged
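Those larger limits only take effect when you explicitly opt in; in a 64-bit process this is done with the gcAllowVeryLargeObjects element in the application configuration file:

```xml
<configuration>
  <runtime>
    <!-- Allows arrays larger than 2 GB on 64-bit platforms (.NET 4.5+). -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```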