OutOfMemoryException when filling List<byte> in C#

I'm getting an out-of-memory exception and I don't know why.
This is my C# code:
List<byte> testlist = new List<byte>();
for (byte i = 0; i <= 255; i++)
{
    testlist.Add(i); // exception thrown here in the last cycle
}

Your loop never terminates because byte is an unsigned, 8-bit integer with valid values between 0 and 255.
So, when i == 255 and the loop body completes, another increment occurs. However, due to the range of byte, this does not cause i to equal 256 (it can't!), which would in turn cause the loop to terminate. Instead, it overflows, and rolls around to 0. So, the loop goes on (and on and on...). This is a relatively common bug when using unsigned loop counters.
In the meantime, your list keeps growing until you run out of memory. There's no reason to use a byte here; just use an int and cast i when adding it to the list.
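For example, a minimal sketch of that fix:
using System.Collections.Generic;

List<byte> testlist = new List<byte>();
for (int i = 0; i <= 255; i++)   // an int can reach 256, so the loop terminates
{
    testlist.Add((byte)i);       // cast back down to byte when storing
}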

Related

Reading time of arrays with an equal number of elements but different dimensions

I have 3 arrays that hold integer values: a 4-dimensional array, a 2-dimensional array, and a single-dimensional array, all with the same total number of elements. I'm going to print all the elements in these arrays to the console. Which one prints fastest, or are the printing times equal?
int[,,,] Q = new int[4, 4, 4, 4];
int[,] W = new int[16,16];
int[] X = new int[256];
Unless I'm missing something, there are two main ways you could be iterating over the multi-dimensional arrays.
The first is:
int[,] W = new int[16,16];
for (int i = 0; i < 16; i++)
{
    for (int j = 0; j < 16; j++)
        Console.WriteLine(W[i, j]);
}
This method is slower than iterating over the single-dimensional array, as the only difference is that for every 16 members you need to start a new iteration of the outer loop and reinitialize the inner one.
The second is:
for (int i = 0; i < 256; i++)
{
    Console.WriteLine(W[i / 16, i % 16]);
}
This method is slower because every iteration you need to calculate both (i / 16) and (i % 16).
Ignoring the iteration factor, there is also the time it takes to access another pointer every iteration.
To the extent of my knowledge of boolean functions*, given two pairs of integers, one pair holding bigger numbers but both occupying the same size in memory (as is the case for all values of type int in C#), adding each pair takes exactly the same time in clock ticks. This being the case, the time to calculate the address of an array member does not depend on how big its index is.
So to summarize, unless I'm missing something or I'm way rustier than I think: one factor is guaranteed to lengthen iteration over multidimensional arrays (the extra pointers to access); a second factor does the same, though you can choose which form it takes (multiple loops, or additional calculations on every iteration); and no factor slows down the single-dimensional approach (there is no "tax" for an extra-long index).
CONCLUSIONS:
That makes two factors working in favor of a single-dimensional array, and none for a multi-dimensional one.
Thus, I would assume the single-dimensional array is faster.
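If you'd rather measure than reason, here is a rough benchmark sketch (the class name and repetition count are my own invention; the running sum exists only so the JIT can't eliminate the loops, and Console.WriteLine is kept out of the timed work because I/O would dominate it):
using System;
using System.Diagnostics;

class ArrayTiming
{
    static void Main()
    {
        int[,] W = new int[16, 16];
        int[] X = new int[256];
        const int Reps = 1000000;
        long sum = 0;

        // Rectangular array, nested loops.
        var sw = Stopwatch.StartNew();
        for (int rep = 0; rep < Reps; rep++)
            for (int i = 0; i < 16; i++)
                for (int j = 0; j < 16; j++)
                    sum += W[i, j];
        sw.Stop();
        Console.WriteLine("2-D: " + sw.ElapsedMilliseconds + " ms");

        // Single-dimensional array, one flat loop.
        sw.Restart();
        for (int rep = 0; rep < Reps; rep++)
            for (int i = 0; i < 256; i++)
                sum += X[i];
        sw.Stop();
        Console.WriteLine("1-D: " + sw.ElapsedMilliseconds + " ms");

        Console.WriteLine(sum); // keep the sum observable
    }
}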
That being said, you're using C#, so you're probably not after an edge that insignificant, or you'd be using a low-level language. And if you are, you should either switch to a low-level language or seriously reconsider whether you're doing whatever you're trying to do in the best way possible. (The only case I can think of where this could make an actual difference is loading a database of a million-plus records into your code, and that's really bad practice.)
However, if you're just starting out in C# then you're probably just overthinking it.
Whichever it is, this was a fun hypothetical, so thanks for asking it!
*by boolean functions, I mean functions at the binary level, not C# functions returning a bool value

Any way to limit values without the use of if statements?

I'm working on a program that works with images.
I'm having trouble with a function that adjusts contrast and brightness.
I need to calculate a value for each RGB component based on the input I receive from the user.
The problem is, I need to make sure the final value after said calculation isn't greater than 255 or less than 0, so it fits inside a byte.
temp = c * dataPtr[0] + b; // variable temp is of type double
if (temp > 255)
{
    temp = 255;
}
else if (temp < 0)
{
    temp = 0;
}
dataPtr[0] = (byte)Math.Round(temp);
I'm repeating this for every RGB component of every pixel, so the ifs get executed millions of times, most of the time needlessly.
I thought about just casting the double back to byte, but the cast doesn't saturate: a value greater than what a byte can hold simply wraps instead of being capped at 255.
Is there any obvious way to optimize this range check that I'm just missing?
Thank you.
No. if is the "gold standard" way of comparing values. If you need to make sure a value is in a range, if is the way to do it.
If you must have an alternative: use an existing type that can only hold values 0-255, but then the "overflow" behavior is less well defined. And if you make your own type, it probably uses an if inside anyway.
Your reason, "lots of needless ifs", is nothing to worry about.

Integer overflow in for-loop causing odd behaviour

Consider the following code. The first 5 iterations go well, and then it falls into an infinite loop spamming 0s.
for (int i = 2; i < 100000; i *= i)
{
    Console.WriteLine(i);
}
I suspect it's because the integer overflows, and I'm guessing that the default value produced on overflow is 0.
How would you write this code, so that it does what it is intended to do?
Edit: This is kind of a theoretical example, and I'm just playing around to learn. Is there no way to do this without changing the datatype or checking for 0? It would be nice if it actually threw an exception.
I believe you're right that it overflows.
Rather than just telling you to use checked to force it to throw an exception, here is an explanation of WHY you get zeros after the 5th result.
Here is the series:
2
4
16
256
65536
4294967296 <- this won't fit into an int
In an int, 2,147,483,647 + 1 wraps around to -2,147,483,648.
The 6th value overflows and becomes 0: 4,294,967,296 is exactly 2^32, so it wraps all the way around the 32-bit range and lands back on 0.
Your for iterator clause then runs i *= i with i == 0 indefinitely, which is why you see endless zeros.
EDIT: as per your update, use the checked keyword to get an exception on the overflow:
checked
{
    for (int i = 2; i < 100000; i *= i)
    {
        Console.WriteLine(i);
    }
}
Otherwise, use a long or ulong instead of int, as both of those can safely store the result of 100000 * 100000.
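For instance, with a long counter the loop simply grows out of range and stops on its own:
for (long i = 2; i < 100000; i *= i)
{
    Console.WriteLine(i); // prints 2, 4, 16, 256, 65536; then i becomes 4294967296 and the loop exits
}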
Use checked if you want an overflow exception:
checked
{
    for (int i = 2; i < 100000; i *= i)
    {
        Console.WriteLine(i);
    }
}
The checked keyword is used to explicitly enable overflow checking for
integral-type arithmetic operations and conversions.
By default, an expression that contains only constant values causes a
compiler error if the expression produces a value that is outside the
range of the destination type. If the expression contains one or more
non-constant values, the compiler does not detect the overflow.
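To see both behaviours from that quote side by side, a small sketch of my own:
// int x = int.MaxValue + 1;  // won't compile: a constant expression that overflows is a compiler error
int y = int.MaxValue;
int z = y + 1;                 // non-constant: compiles and silently wraps to int.MinValue
int w = checked(y + 1);        // compiles, but throws OverflowException at run time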

2 different exceptions generated by a minor change

I was playing with stackalloc, and it throws an OverflowException when I run the piece of code below:
var test = 0x40000000;
unsafe
{
    int* result = stackalloc int[test];
    result[0] = 5;
}
When I change the test variable as shown below, it throws a StackOverflowException instead:
var test = 0x30000000;
unsafe
{
    int* result = stackalloc int[test];
    result[0] = 5;
}
What is happening in these two scenarios?
The runtime tries to calculate how much space is needed to store your array, and the first array is too large for the result to fit in the range of the type used.
The first array needs 0x40000000 * 4 = 0x0100000000 bytes of storage, while the second one needs only 0xc0000000 bytes.
Tweaking the array size and the array type (e.g. long[] or char[]) indicates that whenever the required space for the array goes over 0xffffffff you get the OverflowException; otherwise, the runtime attempts to create the array and crashes.
Based on the above I think it's fairly safe to conclude that the required space is being calculated using a value of type uint in a checked context. If the calculation overflows you unsurprisingly get the OverflowException; otherwise the runtime blows up due to the invalid unsafe operation.
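A quick way to reproduce just the size arithmetic described above (my own illustration, assuming the uint/checked behaviour inferred there):
uint count = 0x40000000;
uint elementSize = sizeof(int);              // 4 bytes per int
// 0x40000000 * 4 = 0x100000000, one more than uint.MaxValue:
uint bytes = checked(count * elementSize);   // throws OverflowException

uint count2 = 0x30000000;
uint bytes2 = checked(count2 * elementSize); // fine: 0xC0000000 fits in a uint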
Both arrays are larger than the largest allowed size of a single object in .NET (which is 2GB regardless of the platform), meaning that this would fail even if you didn't try to allocate in on the stack.
In the first case, you need 4 * 0x40000000 bytes (int is 32-bit in .NET), which doesn't even fit inside 32-bits and creates an arithmetic overflow. The second case needs ~3GB but is still way larger than the 2GB limit.
Additionally, since you're trying to allocate on the stack, you need to know that the stack size for a single thread is ~1MB by default, so using stackalloc for anything larger than that will also fail.
For example, this will also throw a StackOverflowException:
unsafe
{
    byte* result = stackalloc byte[1024 * 1024];
    result[0] = 5;
}

int or uint or what

Consider this
int i = 2147483647;
var n = i + 3;
i = n;
Console.WriteLine(i); // prints -2147483646 (1)
Console.WriteLine(n); // prints -2147483646 (2)
Console.WriteLine(n.GetType()); // prints System.Int32 (3)
I am confused by the following:
(1) How could an int hold the value -2147483646? (int range = -2,147,483,648 to 2,147,483,647)
(2) Why does this print -2147483646 and not 2147483650? (the compiler should pick a better type when the int range is exceeded)
(3) If it is converted somewhere, why does n.GetType() give System.Int32?
Edit 1: Made the correction; now you will get what I am getting (sorry for that). Changed
var n = i + 1;
to
var n = i + 3;
Edit 2: One more thing: if it is an overflow, why is an exception not raised?
Addition: since the overflow occurs, wouldn't it be right to set the type of var n in the statement var n = i + 3; to another type accordingly?
You are welcome to suggest a better title, as this one is not making sense to... me, at least.
Thanks
Update: Poster fixed his question.
1) This output is expected because you added 3 to int.MaxValue, causing an overflow. In .NET this is by default a legal operation in unchecked code, giving wrap-around to negative values, but if you add a checked block around the code it will throw an OverflowException instead.
2) The type of a variable declared with var is determined at compile time not runtime. It's a rule that adding two Int32s gives an Int32, not a UInt32, an Int64 or something else. So even though at runtime you can see that the result is too big for an Int32, it still has to return an Int32.
3) It's not converted to another type.
1) -2147483646 is bigger than -2,147,483,648
2) 2147483648 is out of range
3) int is an alias for Int32
1)
With the original i + 1 the value would have been -2147483648; with the edited i + 3 it is indeed -2147483646.
Either way, there is no reason an int could not hold the value -2147483646. It's within the range -2147483648..2147483647.
2)
The compiler chooses the data type of the variable to be the type of the result of the expression. The expression produces an int, and even if the compiler chose a larger data type for the variable, the expression would still produce an int, so you would get the same value as the result.
It's the operation in the expression that overflows, it's not when the result is assigned to the variable that it overflows.
3)
It's not converted anywhere.
This is an overflow: your number wrapped around and went negative.
Catching this isn't the compiler's job, as a loop at runtime can cause the same thing.
int is an alias for System.Int32; they are equivalent in .NET.
This is because of the bit representation.
You use Int32, but the same goes for any signed size, e.g. sbyte (8 bits; note that char in C# is 16-bit and unsigned, so sbyte is the better small example).
The first bit holds the sign, and the remaining bits hold the number.
With the remaining 7 bits you can represent 128 non-negative values, 0 through 127; the maximum is 0111 1111.
When you add 1 to that, you get 1000 0000: the sign bit is now set, so the computer reads the value as -128 instead.
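In C# that flip looks like this (a small sketch of my own, not from the original answer):
sbyte s = 127;                  // 0111 1111, the largest sbyte
s = unchecked((sbyte)(s + 1));  // 1000 0000, which two's complement reads as -128
Console.WriteLine(s);           // prints -128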
Arithmetic operations in .NET don't change the actual type.
You start off with a (32-bit) int, and the + 3 isn't going to change that.
That's also why you get an unexpectedly round number when you do this:
int a = 2147483647;
double b = a / 4;
or
int a = 2147483647;
var b = a / 4;
for that matter: a / 4 is evaluated as integer division (both operands are int) before anything is assigned, so the fraction is truncated.
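If a fractional result is what you expected there, force floating-point division by widening one operand first:
int a = 2147483647;
double b = (double)a / 4;   // 536870911.75, instead of the truncated 536870911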
EDIT:
There is no exception because .NET simply lets the number overflow and wrap around by default.
An OverflowException only occurs when you opt in to overflow checking, as Mark explains.
If you want an exception to be thrown, write
var n = checked(i + 3);
instead. That will check for overflow.
Also, in C# the default setting is to not throw exceptions on overflow, but you can turn checking on in your project's build properties (the "Check for arithmetic overflow/underflow" option).
You could make this easier on us all by using hex notation.
Not everyone knows off-hand that the eighth Mersenne prime is 0x7FFFFFFF.
Just sayin'
