A problem with using System.Collections.Specialized.BitVector32: a bug?

I'm trying to do a short simulation where I needed a small bit array, and I chose System.Collections.Specialized.BitVector32.
I'm running it inside a single-threaded object, in a single-threaded loop about 1,000,000 times, each time for indexes {0,1,2}.
Here is the code:
private System.Collections.Specialized.BitVector32 currentCalc
= new System.Collections.Specialized.BitVector32();
private void storeInCurrent(int idx, bool val)
{
currentCalc[idx] = val;
if (currentCalc[idx] != val)
{
throw new Exception("Inconceivable!");
}
}
To my understanding, the exception should never be thrown, but sometimes it is! It is not thrown every time, but a fair percentage of the time - a CONSTANT 1/6 of the time! (which is even stranger)
What am I doing wrong?

Look at MSDN; the indexer takes the mask, not the index. So that is:
int mask = 1 << idx;
then use currentCalc[mask]
This is odd, though; if you are happy enough to use masks, why would one use BitVector32 rather than just an int? I also assumed the indexer would take the index. A VERY odd design decision.
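For completeness, here is a corrected version of the helper from the question; only the mask computation is new (BitVector32.CreateMask could be used instead of the shift):
private System.Collections.Specialized.BitVector32 currentCalc
    = new System.Collections.Specialized.BitVector32();

private void storeInCurrent(int idx, bool val)
{
    int mask = 1 << idx;        // the bool indexer expects a bit mask, not a bit index
    currentCalc[mask] = val;

    if (currentCalc[mask] != val)
    {
        throw new Exception("Inconceivable!");
    }
}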

Are numeric comparisons as uint faster than normal int comparisons in C#?

I was looking at how some of the .NET Core libraries are implemented, and one of the many things that caught my eye was that in the Dictionary<TKey, TValue> class some numeric comparisons are done by casting to (uint), even though to my naive eyes this didn't impact the logic.
For example on
do { // some magic } while (collisionCount <= (uint)entries.Length);
collisionCount is initialized to 0 and only ever incremented (collisionCount++), and since entries is an array, its length cannot be negative either (see the source code),
as opposed to
if ((uint)i >= (uint)entries.Length) { // some code }
source code line
where i can occasionally become negative after the following assignment (see the debug image):
i = entry.next;
and thus treating it as unsigned would change the program flow (because of the two's-complement representation).
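For instance (a tiny illustrative snippet, not from the Dictionary source), a negative int reinterpreted as uint becomes a very large value, so a single unsigned comparison also catches negative indices:
int i = -1;
Console.WriteLine((uint)i);          // prints 4294967295
Console.WriteLine((uint)i >= 10u);   // True: a negative index also fails a single unsigned range test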
See an extract of the class code:
// Some code and black magic
uint hashCode = (uint)key.GetHashCode();
int i = GetBucket(hashCode);
Entry[]? entries = _entries;
uint collisionCount = 0;
if (typeof(TKey).IsValueType)
{
i--;
do
{
if ((uint)i >= (uint)entries.Length) // <--- Workflow impact
{
goto ReturnNotFound;
}
entry = ref entries[i];
if (entry.hashCode == hashCode && EqualityComparer<TKey>.Default.Equals(entry.key, key))
{
goto ReturnFound;
}
i = entry.next;
collisionCount++;
} while (collisionCount <= (uint)entries.Length);
}
// More cool stuffs
Is there any performance gain, or what is the reason for this?
The linked Dictionary source contains this comment:
// Should be a while loop https://github.com/dotnet/runtime/issues/9422
// Test in if to drop range check for following array access
if ((uint)i >= (uint)entries.Length)
{
goto ReturnNotFound;
}
entry = ref entries[i];
The uint comparison here isn't faster in itself, but it helps speed up the array access. The linked GitHub issue talks about a limitation in the runtime compiler and how this loop structure allows further optimisations. Since this uint comparison has been performed explicitly, the compiler can prove that 0 <= i < entries.Length. That allows it to leave out the array bounds check, and the associated IndexOutOfRangeException throw, that would otherwise be required.
In other words, at the time this code was written and its performance was profiled, the compiler wasn't smart enough to make the simpler, more readable code run as fast as possible. So someone with a deep understanding of the compiler's limitations tweaked the code to make it faster.
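To make the effect concrete, here is a small sketch of the same pattern outside Dictionary (the method and array names are invented for the example):
// One unsigned comparison covers both "i < 0" (which wraps to a huge uint)
// and "i >= data.Length"; after it, the JIT can prove the index is in range
// and omit its own bounds check on data[i].
static bool TryGetAt(int[] data, int i, out int value)
{
    if ((uint)i >= (uint)data.Length)
    {
        value = 0;
        return false;
    }
    value = data[i]; // no additional range check or IndexOutOfRangeException path needed
    return true;
}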

How to properly Clear a Queue containing structs?

I have declared a basic struct like this
private struct ValLine {
public string val;
public ulong linenum;
}
and declared a Queue like this
Queue<ValLine> check = new Queue<ValLine>();
Then in a using StreamReader setup where I'm reading through the lines of an input file using ReadLine in a while loop, among other things, I'm doing this to populate the Queue:
check.Enqueue(new ValLine { val = line, linenum = linenum });
("line" is a string containing the text of each line, "linenum" is just a counter that is initialized at 0 and is incremented each time through the loop.)
The purpose of the "check" Queue is that if a particular line meets some criteria, then I store that line in "check" along with the line number that it occurs on in the input file.
After I've finished reading through the input file, I use "check" for various things, but then when I'm finished using it I clear it out in the obvious manner:
check.Clear();
(Alternatively, in my final loop through "check" I could just use .Dequeue(), instead of foreach'ing it.)
But then I got to thinking - wait a minute, what about all those "new ValLine" I generated when populating the Queue in the first place??? Have I created a memory leak? I'm pretty new to C#, so it's not clear to me how to deal with this - or even whether it needs to be dealt with (perhaps .Clear() or .Dequeue() deals with the now-obsolete structs automatically?). I've spent over an hour with our dear friend Google and just haven't found any specific discussion of this kind of example in regard to clearing a collection of structs.
So... In C# do we need to deal with wiping out the individual structs before clearing the queue (or as we are dequeueing), or not? And if so, then what is the proper way to do this?
(Just in case it's relevant, I'm using .NET 4.5 in Visual Studio 2013.)
UPDATE: This is for future reference (you know, like if this page comes up in a Google search) in regard to proper coding. To make the struct immutable as per recommendation, this is what I've ended up with:
private struct ValLine {
private readonly string _val;
private readonly ulong _linenum;
public string val { get { return _val; } }
public ulong linenum { get { return _linenum; } }
public ValLine(string x, ulong n) { _val = x; _linenum = n; }
}
Corresponding to that change, the queue population line is now this:
check.Enqueue(new ValLine(line,linenum));
Also, though not strictly necessary, I got rid of my foreach over the queue (and the check.Clear();) and changed it to this:
while (check.Count > 0) {
ValLine ll = check.Dequeue();
writer.WriteLine("[{0}] {1}", ll.linenum, ll.val);
}
so that the queue is emptied out as the information is output.
UPDATE 2: Okay, yes, I'm still a C# newbie (less than a year). I learn a lot from the Internet, but of course, I'm often looking at examples from more than a year ago. I have changed my struct so now it looks like this:
private struct ValLine {
public string val { get; private set; }
public ulong linenum { get; private set; }
public ValLine(string x, ulong n): this()
{ this.val = x; this.linenum = n; }
}
Interestingly enough, I had actually tried exactly this off the top of my head before coming up with what's in the first update (above), but got a compile error (because I did not have the : this() on the constructor). Following a further suggestion, I checked again and found a recent example showing : this() used to make it work the way I had tried, plugged that in, and - voilà! - a clean compile. I like the cleaner look of the code. What the private variables are called is irrelevant to me.
No, you won't have created a memory leak. Calling Clear or Dequeue will clear the memory appropriately - for example, if you had a List<T> then a clear operation might use:
for (int i = 0; i < capacity; i++)
{
array[i] = default(T);
}
I don't know offhand whether Queue<T> is implemented with a circular buffer built on an array, or a linked list - but either way, you'll be fine.
Having said that, I would strongly recommend against using mutable structs as you're doing here, along with mutable fields. While it's not causing the particular problem you're envisaging, they can behave in confusing ways.
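As a quick illustration of that confusion (a minimal sketch reusing the question's mutable ValLine struct): structs are copied on assignment, so mutating the copy never touches the element stored in the queue.
var check = new Queue<ValLine>();
check.Enqueue(new ValLine { val = "first", linenum = 1 });

ValLine copy = check.Peek();   // Peek returns a copy of the struct
copy.linenum = 99;             // only the local copy changes

Console.WriteLine(check.Peek().linenum);   // still prints 1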

If - return is a huge bottleneck in my application

This is a snippet of code from my C# application:
public Player GetSquareCache(int x, int y)
{
if (squaresCacheValid)
return (Player)SquaresCache[x,y];
else
//generate square cache and retry...
}
squaresCacheValid is a private bool and SquaresCache is a private uint[,].
The problem is that the application runs extremely slowly, and every optimization I tried just made it slower, so I ran a tracing session.
I found that GetSquareCache() gets 94.41% own time, with the if and the return splitting that value roughly evenly (46% for the if and 44.82% for the return statement). The method is also hit about 15,000 times in 30 seconds, in some tests going up to 20,000.
Before I added the methods that call GetSquareCache(), the program performed pretty well, but it was using a random value instead of actual GetSquareCache() calls.
My questions are: is it possible that these if/return statements really use up that much CPU time? How can the if statements that GetSquareCache() is called from (which in total are hit the same number of times) have minimal own time? And is it possible to speed up something as fundamental as an if?
Edit: Player is defined as
public enum Player
{
None = 0,
PL1 = 1,
PL2 = 2,
Both = 3
}
I would suggest a different approach: under the assumption that most of the squares hold no player and that the board is very large, remember only the locations where there are players.
It should look something like this:
public class PlayerLocation
{
    private readonly Dictionary<Point, Player> _playerLocation = new Dictionary<Point, Player>();

    public void SetPlayer(int x, int y, Player p)
    {
        _playerLocation[new Point(x, y)] = p;
    }

    public Player GetSquareCache(int x, int y)
    {
        if (squaresCacheValid)
        {
            Player value;
            if (_playerLocation.TryGetValue(new Point(x, y), out value))
            {
                return value;
            }
            return Player.None;
        }
        else
        {
            //generate square cache and retry...
        }
    }
}
The problem is simply that the method is called far too many times. And indeed, the 34,637 ms it gets in the last trace, over the 34,122 hits it received, is a little over 1 ms per hit. In the decompiled CIL there are also some assignments to local variables, not present in the source, in both if branches, because the method needs a single ret statement. The algorithm itself is what needs to be modified, and such modifications were planned anyway.
Change the return type of this method to int and remove the cast to Player.
If the cache is only set once, remove the if from this method so that it is always true when the method is called.
Replace the two-dimensional array with a single-dimension array and access it the unsafe/fixed way (see the sketch below).
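A minimal sketch of that last suggestion, minus the unsafe part (the field name and board width here are assumptions, not from the question): a rectangular uint[,] flattened to a uint[] and indexed as y * width + x avoids the slower multi-dimensional array accessor.
private const int Width = 8;                          // assumed board width
private uint[] squaresCache1D = new uint[Width * Width];

public Player GetSquareCache(int x, int y)
{
    // single-dimension index; no rank-2 array accessor involved
    return (Player)squaresCache1D[y * Width + x];
}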

Stack Overflow in random number generator

For some reason, this code works fine when I don't use a seed in the Random class, but if I try to use DateTime.Now to get a more random number, I get a StackOverflowException! My class is really simple. Could someone tell me what I'm doing wrong here? See MakeUniqueFileName.
public class TempUtil
{
private int strcmp(string s1, string s2)
{
try
{
for (int i = 0; i < s1.Length; i++)
if (s1[i] != s2[i]) return 0;
return 1;
}
catch (IndexOutOfRangeException)
{
return 0;
}
}
private int Uniqueness(object randomObj)
{
switch (randomObj.ToString())
{
case "System.Object":
case "System.String":
return randomObj.ToString()[0];
case "System.Int32":
return int.Parse(randomObj.ToString());
case "System.Boolean":
return strcmp(randomObj.ToString(), "True");
default:
return Uniqueness(randomObj.ToString());
}
}
public string MakeUniqueFileName()
{
return "C:\\windows\\temp\\" + new Random(Uniqueness(DateTime.Now)).NextDouble() + ".tmp";
}
}
You're calling DateTime.Now.ToString(), which doesn't give you one of the strings you're checking for... so you're recursing, calling it with the same string... which still isn't one of the strings you're looking for.
You don't need to use Random to demonstrate the problem. This will do it very easily:
Uniqueness(""); // Tick, tick, tick... stack overflow
What did you expect it to be doing? It's entirely unclear what your code is meant to be doing, but I suggest you ditch the Uniqueness method completely. In fact, I suggest you get rid of the whole class, and use Path.GetTempFileName instead.
In short:
It should say
switch (randomObj.GetType().ToString())
instead of
switch (randomObj.ToString())
But even then this isn't very clever.
You are passing a DateTime instance to your Uniqueness method.
This falls through to the default case, which calls the method again with the result of ToString() - on a DateTime instance that is a formatted date/time string (such as "21/01/2011 13:13:01").
Since this string doesn't match any of your switch cases (again), the method calls itself again, but the result of calling ToString on a string is the same string.
You have caused an infinite call stack that results in the StackOverflowException.
There is no need to call Uniqueness - when you create a Random instance, it is based on the current time anyway.
I suggest reading Random numbers from the C# in depth website.
The parameterless constructor of Random already uses the current time as the seed value; it seeds from the system tick count.
A problem with this approach, however, is that the system clock ticks very slowly compared to the CPU clock frequency. If you create a new instance of Random each time you need a random value, it may be that the clock has not ticked between two calls, so you end up generating the same random number twice.
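A minimal sketch of that pitfall, as it behaves on the classic .NET Framework where the parameterless constructor seeds from the system tick count (newer .NET versions seed each instance differently):
var a = new Random();
var b = new Random();              // created within the same clock tick => same seed
Console.WriteLine(a.Next());
Console.WriteLine(b.Next());       // very likely prints the same value as the line above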
You can simply solve this problem by creating a single instance of Random.
public class TempUtil {
private static readonly Random random = new Random();
public string MakeUniqueFileName()
{
return @"C:\windows\temp\" + random.NextDouble() + ".tmp";
}
}
This will generate very good random numbers.
By the way
System.IO.Path.GetTempFileName()
automatically creates an empty temporary file with a unique name and returns the full path of that file.
Where to begin.
1. There is already a string compare. Use it. It has been debugged.
2. Your Uniqueness function is illogical. The first two case labels return an 'S' (the first character of the string) implicitly converted to an int, with the first case simply falling through to the second.
Your third case is like this:
if (x =="System.Int32") return int.Parse("System.Int32");
That would throw a FormatException rather than return a useful number.
Your fourth case is like this:
if (x == "System.Boolean") return strcmp("System.Boolean", "True");
Your default case calls the method recursively, causing the stack overflow (see the answer above).
To fix this program, I recommend you read at least one good book on C#, rethink your program, and then write it. Perhaps JavaScript would be a better fit.

The performance cost to using ref instead of returning same types?

Hi, this is something that's really bothering me and I'm hoping someone has an answer for me. I've been reading about ref (and out), and I'm trying to figure out whether I'm slowing down my code by using refs. Commonly I will replace something like:
int AddToInt(int original, int add){ return original+add; }
with
void AddToInt(ref int original, int add){ original+=add; } // 1st parameter gets the result
because to my eyes this
AddToInt(ref _value, _add);
is easier to read AND code than this
_value = AddToInt(_value, _add);
I know precisely what I'm doing in the code when using ref, as opposed to returning a value. However, performance is something I take seriously, and apparently dereferencing and cleanup are a lot slower when you use refs.
What I'd like to know is why every post I read says there are very few places where you would typically pass a ref (I know the examples are contrived, but I hope you get the idea), when it seems to me that the ref example is smaller, cleaner and more exact.
I'd also love to know why ref really is slower than returning a value type - it would seem to me that, if I were going to edit the value a lot inside the function before returning it, it would be quicker to work on the actual variable through a reference than on a copy that gets cleaned from memory shortly after the call.
The main time that "ref" is used in the same sentence as performance is when discussing some very atypical cases, for example in XNA scenarios where the game "objects" are quite commonly represented by structs rather than classes to avoid problems with GC (which has a disproportionate impact on XNA). This becomes useful to:
prevent copying an oversized struct multiple times on the stack
prevent data loss due to mutating a struct copy (XNA structs are commonly mutable, against normal practice)
allow a struct stored in an array to be passed directly, rather than ever copying it out and back in (see the small sketch after this list)
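A small sketch of that last point (the Particle struct and Advance method are invented for illustration): passing the array element by ref lets the callee mutate it in place.
struct Particle { public float X, Y; }

static void Advance(ref Particle p) => p.X += 1f;   // mutates the caller's struct directly

static void Demo()
{
    var particles = new Particle[1000];
    Advance(ref particles[0]);   // the element in the array is updated in place, no copy out and back
}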
In all other cases, "ref" is more commonly associated with an additional side-effect, not easily expressed in the return value (for example see Monitor.TryEnter).
If you don't have a scenario like the XNA/struct one, and there is no awkward side effect, then just use the return value. In addition to being more typical (which in itself has value), it could well involve passing less data (and int is smaller than a ref on x64 for example), and could require less dereferencing.
Finally, the return approach is more versatile; you don't always want to update the source. Contrast:
// want to accumulate, no ref
x = Add(x, 5);
// want to accumulate, ref
Add(ref x, 5);
// no accumulate, no ref
y = Add(x, 5);
// no accumulate, ref
y = x;
Add(ref y, x);
I think the last is the least clear (with the other "ref" one close behind it) and ref usage is even less clear in languages where it is not explicit (VB for example).
The main purpose of using the ref keyword is to signify that the variable's value can be changed by the function it's being passed into. When you pass a variable by value, updates from within the function don't affect the original.
It's extremely useful (and faster) in situations where you want multiple return values and building a special struct or class for them would be overkill. For example:
// a method on a Quaternion class
public void GetRollPitchYaw(ref double roll, ref double pitch, ref double yaw) {
roll = something;
pitch = something;
yaw = something;
}
This is a pretty fundamental pattern in languages that allow unrestricted use of pointers. In C/C++ you frequently see primitives passed around by value, with classes and arrays passed as pointers. C# does just the opposite, so ref is handy in situations like the one above.
When you pass a variable you want updated into a function by ref, only one write operation is needed to give you your result. When returning values, however, you normally write to some variable inside the function, return it, and then write it again to the destination variable. Depending on the data, this could add unnecessary overhead. Anyhow, these are the main things I typically consider before using the ref keyword.
Sometimes ref is a little faster when used like this in C#, but not enough to make it a go-to justification on performance grounds.
Here's what I got on a seven-year-old machine using the code below, passing and updating a 100k-character string by ref and by value.
Output:
iterations: 10000000
byref: 165ms
byval: 417ms
private void m_btnTest_Click(object sender, EventArgs e) {
Stopwatch sw = new Stopwatch();
string s = "";
string value = new string ('x', 100000); // 100k string
int iterations = 10000000;
//-----------------------------------------------------
// Update by ref
//-----------------------------------------------------
sw.Start();
for (var n = 0; n < iterations; n++) {
SetStringValue(ref s, ref value);
}
sw.Stop();
long proc1 = sw.ElapsedMilliseconds;
sw.Reset();
//-----------------------------------------------------
// Update by value
//-----------------------------------------------------
sw.Start();
for (var n = 0; n < iterations; n++) {
s = SetStringValue(s, value);
}
sw.Stop();
long proc2 = sw.ElapsedMilliseconds;
//-----------------------------------------------------
Console.WriteLine("iterations: {0} \nbyref: {1}ms \nbyval: {2}ms", iterations, proc1, proc2);
}
public string SetStringValue(string input, string value) {
input = value;
return input;
}
public void SetStringValue(ref string input, ref string value) {
input = value;
}
I have to agree with Ondrej here. From a stylistic view, if you start passing everything with ref you will eventually end up working with devs who will want to strangle you for designing an API like this!
Just return stuff from the method; don't have 100% of your methods returning void. What you are doing will lead to very unclean code and might confuse other devs who end up working on your code. Favour clarity over performance here, since you won't gain much from the optimization anyway.
check this SO post: C# 'ref' keyword, performance
and this article from Jon Skeet: http://www.yoda.arachsys.com/csharp/parameters.html
In your case, using ref is a bad idea.
When using ref, the program has to read a pointer off the stack and then read the value the pointer points at.
But if you were to pass by value, it would only need to read the value off the stack, basically halving the number of reads.
Passing by reference should only be used for medium to large data structures, such as 3D models or arrays.
Using ref for basic data types is not a good idea, especially for simple methods with only a few lines of code.
First off, the C# compiler does lots of optimizations to make the code faster.
As per my benchmark https://rextester.com/CQJR12339, passing by ref degraded the performance.
When a reference is passed, you are copying 8 bytes for the pointer (assuming a 64-bit processor), so why not pass an 8-byte double directly?
Passing by ref is useful for larger objects, for example a string with lots of characters.
First, don't worry about whether using ref is slower or faster; that's premature optimization. In 99.9999% of cases you won't run into a situation where this causes a performance bottleneck.
Second, returning the result of the calculation as a return value, rather than through ref, is preferred because of the usual 'functional' nature of C-like languages; it leads to better chaining of statements and calls.
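A small illustration of that chaining point, using the AddToInt overloads from the question:
// With a return value, calls compose directly:
int value = AddToInt(AddToInt(x, 5), 3);

// With ref, each step needs its own statement:
int value2 = x;
AddToInt(ref value2, 5);
AddToInt(ref value2, 3);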
Update: adding evidence from an actual performance benchmark, which shows the difference is ~1%, so readability over premature optimization is the suggested approach. Plus, ref turns out to be slightly slower than a return value here; however, as the difference is so small, repeated runs of the benchmark may come out the other way.
Results for .NET 6.0.7:
Method                 Mean      Error    StdDev
AddToIntBase           375.6 us  5.36 us  4.18 us
AddToIntReturn         378.6 us  7.22 us  7.41 us
AddToIntReturnInline   375.5 us  4.84 us  4.04 us
AddToIntRef            383.7 us  5.92 us  5.25 us
AddToIntRefInline      384.2 us  7.29 us  8.96 us
Results for .NET 4.8 (because this is an 11-year-old question) are essentially the same:
Method                 Mean      Error    StdDev
AddToIntBase           381.3 us  7.40 us  7.92 us
AddToIntReturn         380.8 us  7.00 us  6.55 us
AddToIntReturnInline   380.0 us  5.03 us  4.20 us
AddToIntRef            378.5 us  6.62 us  5.87 us
AddToIntRefInline      381.5 us  4.50 us  3.76 us
Benchmark code:
using System.Runtime.CompilerServices;
using BenchmarkDotNet.Attributes;

public class AddToIntBenchmark
{
[Benchmark]
public void AddToIntBase()
{
int result = 0;
for (int i = 0; i < 1_000_000; i++) result += i;
}
[Benchmark]
public void AddToIntReturn()
{
int result = 0;
for (int i = 0; i < 1_000_000; i++) result = AddToInt(result, i);
}
[Benchmark]
public void AddToIntReturnInline()
{
int result = 0;
for (int i = 0; i < 1_000_000; i++) result = AddToIntInline(result, i);
}
[Benchmark]
public void AddToIntRef()
{
int result = 0;
for (int i = 0; i < 1_000_000; i++) AddToInt(ref result, i);
}
[Benchmark]
public void AddToIntRefInline()
{
int result = 0;
for (int i = 0; i < 1_000_000; i++) AddToIntInline(ref result, i);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private int AddToIntInline(int original, int add) { return original + add; }
private int AddToInt(int original, int add) { return original + add; }
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private void AddToIntInline(ref int original, int add) { original += add; }
private void AddToInt(ref int original, int add) { original += add; }
}
Environment: .NET 6.0.7, Intel Xeon Gold 16-core 2.4 GHz, WS2019 virtual machine
