This is a paraphrasing of a question I had before. It's a simple threading question but I can't seem to understand it.
If I have shared code:
private static object objSync = new object();
private int a = 0;

public void function()
{
    lock (objSync)
        a += 2;
    lock (objSync)
        a += 3;
    Console.WriteLine(a.ToString());
    a = 0;
}
I can't expect 'a' to equal 5 at the end, because the first thread takes the lock and sets 'a' to 2, then the next thread can take the lock and set it to 4 before the first thread is able to add 3, so you can end up with 7 at the end.
The solution as I understand it would be to put one lock around the entire thing, and then you could always expect 5. Now my question is: what if between the two locks there are a million lines of code? I can't imagine putting a lock around a million lines of code. How would you ensure thread safety without paying the performance cost of locking a million and two lines of code?
EDIT:
This is nonsense code that I wrote for the question. The actual application is a line monitoring system. There are two screens that show the current line number, one for the clerks and one for the public. The clerk screen accepts 'clicks' through the serial port, which progress the line by one and then update the public screen through a notify event (see the observer design pattern). The problem is that they aren't synchronized if you spam the clicker. I imagine what's happening is that the first screen adds to the line number, displays it, and then updates the public screen, but before the public screen has a chance to show the line number from the database, the clerk clicks again and the number goes out of sync. The 'a' above represents a value I retrieve from the DB, not just a simple int. There are a few places where I change the value, and I need all of it to happen atomically.
It all depends on what you define as "success". If it is a counter, 7 could still be the right result. If you only want to update it if it is what you thought it was, then take a snapshot inside the first lock, and compare against that snapshot in the final lock. In that scenario you can replace much of the code with a call to Interlocked.CompareExchange. Likewise, in the "counter" scenario, Interlocked.Increment is your friend.
If the code must match, then you have a few options:
if the compare fails, run the entire million lines again; repeat until you win the race (by matching)
if the compare fails, throw an exception
block for the million lines; it might still be a fast million lines...
think of some other resolution strategy that makes sense for your scenario
In the first two you should watch for polluting things by leaving any related data in an intermediate state.
And btw; the final assignment to 0 should probably also be part of the same locking strategy. Locks only work if all access respects them.
But only your scenario can tell us what the "right" behaviour in a conflict is.
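To make the snapshot/compare idea concrete, here is a minimal sketch of the Interlocked.CompareExchange approach described above. The names and the trivial "computation" are illustrative, not from the question's code; the real work would stand in for the million lines:

```csharp
using System;
using System.Threading;

class CompareExchangeSketch
{
    private static int a = 0;

    // Snapshot the value, do the long computation outside any lock, then
    // publish only if nobody else changed 'a' in the meantime.
    public static bool TryUpdate()
    {
        int snapshot = a;
        int computed = snapshot + 2 + 3; // stands in for the million lines of work
        return Interlocked.CompareExchange(ref a, computed, snapshot) == snapshot;
    }

    static void Main()
    {
        while (!TryUpdate()) { } // resolution strategy 1 above: retry until we win the race
        Console.WriteLine(a);    // prints 5 (uncontended here)
    }
}
```

When the compare fails, the caller decides the resolution: retry (as here), throw, or something else that fits the scenario.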
It is a matter of design actually. If your method calculates different values for "a", depending on certain circumstances, but you do not want another thread changing "a"'s value while an operation is active, something like this is more appropriate:
private static object objSync = new object();
private int a = 0;

public void function()
{
    int b;
    lock (objSync)
        b = a;
    b += 2;
    // million lines of code
    b += 3;
    lock (objSync)
    {
        a = b;
        Console.WriteLine(a.ToString());
        a = 0;
    }
}
I want to ask the user to enter a value less than 10, using the following code. Which one is better to use: a loop or a recursive method? Someone told me that using a recursive method may cause a memory leak. Is it true?
class Program
{
    static void Main(string[] args)
    {
        int x;
        do
        {
            Console.WriteLine("Please enter a value less than 10.");
            x = int.Parse(Console.ReadLine());
        } while (x >= 10);
        // Uncomment the method below and comment out the loop above to test the recursive method
        //Value();
    }

    static string Value()
    {
        Console.WriteLine("Please enter a value less than 10.");
        return int.Parse(Console.ReadLine()) > 9 ? Value() : "";
    }
}
It would probably be a long time before recursion became an issue in this example.
Recursive methods run the risk of causing stack overflow exceptions if they keep running for a long time without completing. This is because each method call results in data being stored on the stack (which has very limited space) - more info here:
https://en.wikipedia.org/wiki/Stack_overflow#:~:text=The%20most%2Dcommon%20cause%20of%20stack%20overflow%20is%20excessively%20deep%20or%20infinite%20recursion%2C%20in%20which%20a%20function%20calls%20itself%20so%20many%20times%20that%20the%20space%20needed%20to%20store%20the%20variables%20and%20information%20associated%20with%20each%20call%20is%20more%20than%20can%20fit%20on%20the%20stack.
In your case, unless they enter a number greater than or equal to 10 loads of times, or you have very little memory, it should be fine.
Generally it's better to use loops than recursion as they are simpler to understand. Recursion is a useful tool for achieving good performance in certain scenarios but generally loops should be preferred.
Recursive functions are used when you have a base case and a regular case. The base case is vital, since it marks the end of recursion. For example, you can create a function factorial(n) to calculate the factorial of a number n. The base case happens when n reaches 1 and you just return 1, while in the regular case you just multiply n by factorial(n - 1).
In general (there are a few optimization cases in which you can save memory), recursive functions create a new stack frame for each call. So for factorial(3) there are at least three stack frames created: the one for factorial(3) itself, the one for factorial(2), and finally the one for factorial(1), which ends the recursion.
At least in that case you know when the recursion will finish and how much memory you need, so a compiler can plan for that in advance.
All of the above means you are misunderstanding recursion if you think you should use it to validate user input. There might be only a single call (the correct answer, i.e. the base case), or hundreds or thousands of regular-case frames as well. That has the potential to overflow your program's stack, with no way for the compiler, or for you, to prevent it.
Another way to see this is that recursion is used as a form of abstraction: you specify what needs to happen, close to the mathematical definition, not how it should happen. In your example, that level of abstraction is unneeded.
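For completeness, here is a loop-based sketch of the validation from the question (a hypothetical rewrite; it also guards against non-numeric input, which the original int.Parse would crash on). The input is passed in as a delegate only so the loop can be exercised without a console:

```csharp
using System;
using System.Collections.Generic;

class Validation
{
    // Reads lines until one parses as an int below 10. A plain loop:
    // no call-stack growth no matter how many attempts the user needs.
    public static int ReadValueBelowTen(Func<string> readLine)
    {
        int x;
        do
        {
            Console.WriteLine("Please enter a value less than 10.");
        }
        while (!int.TryParse(readLine(), out x) || x >= 10);
        return x;
    }

    static void Main()
    {
        // Simulated console input: garbage, too big, then valid.
        var inputs = new Queue<string>(new[] { "abc", "42", "7" });
        Console.WriteLine(ReadValueBelowTen(() => inputs.Dequeue())); // prints 7
    }
}
```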
After reading so much about how to do it, I'm quite confused.
So here is what I want to do:
I have a datastructure/object that holds all kinds of information. I treat the datastructure as if it were immutable. Whenever I need to update information, I make a deep copy and apply the changes to it. Then I swap the old and the newly created object.
Now I don't know how to do everything right.
Let's look at it from the side of the reader/consumer threads.
MyObj temp = dataSource;
var a = temp.a;
... // many instructions
var b = temp.b;
....
As I understand it, reading references is atomic. So I don't need any volatile or locking to assign the current reference of dataSource to temp. But what about garbage collection? As I understand it, the GC has some kind of reference counter to know when to free memory. So when another thread updates dataSource at exactly the moment when dataSource is assigned to temp, does the GC increase the counter on the right memory block?
The other thing is compiler/CLR optimization. I assign dataSource to temp and use temp to access data members. What does the CLR do? Does it really make a copy of dataSource, or does the optimizer just use dataSource to access .a and .b? Let's assume that between temp.a and temp.b are lots of instructions, so that the reference to temp/dataSource cannot be held in a CPU register. So is temp.b really temp.b, or is it optimized to dataSource.b because the copy to temp can be optimized away? This is especially important if another thread updates dataSource to point to a new object.
Do I really need volatile, lock, ReadWriterLockSlim, Thread.MemoryBarrier or something else?
The important thing to me is that I want to make sure that temp.a and temp.b access the old datastructure even when another thread updates dataSource to another newly created data structure. I never change data inside an existing structure. Updates are always done by creating a copy, updating the data, and then updating the reference to the new copy of the datastructure.
Maybe one more question. If I don't use volatile, how long does it take until all cores on all CPUs see the updated reference?
When it comes to volatile please have a look here: When should the volatile keyword be used in C#?
I have done a little test program:
namespace test1 {
    public partial class Form1 : Form {
        public Form1() { InitializeComponent(); }

        Something sharedObj = new Something();

        private void button1_Click(object sender, EventArgs e) {
            Thread t = new Thread(Do); // Kick off a new thread
            t.Start();                 // running Do()
            for (int i = 0; i < 1000; i++) {
                Something reference = sharedObj;
                int x = reference.x; // sharedObj.x;
                System.Threading.Thread.Sleep(1);
                int y = reference.y; // sharedObj.y;
                if (x != y) {
                    button1.Text = x.ToString() + "/" + y.ToString();
                    Update();
                }
            }
        }

        private void Do() {
            for (int i = 0; i < 1000000; i++) {
                Something someNewObject = sharedObj.Clone(); // clone from immutable
                someNewObject.Do();
                sharedObj = someNewObject; // atomic reference write
            }
        }
    }

    class Something {
        public Something Clone() { return (Something)MemberwiseClone(); }
        public void Do() { x++; System.Threading.Thread.Sleep(0); y++; }
        public int x = 0;
        public int y = 0;
    }
}
In button1_Click there is a for loop, and inside it I access the datastructure/object once using sharedObj directly and once using the temporarily created reference. Using the reference is enough to make sure that x and y are initialized with values from the same object.
The only thing I don't understand is, why is "Something reference = sharedObj;" not optimized away and "int x = reference.x;" not replaced by "int x = sharedObj.x;"?
How do the compiler and the jitter know not to optimize this? Or are temporary references never optimized away in C#?
But most important: Is my example running as intended because it is correct or is it running as intended only by accident?
As I understand reading references is atomic.
Correct. This is a very limited property though. It means reading a reference will work; you'll never get the bits of half an old reference mixed with the bits of half a new reference resulting in a reference that doesn't work. If there's a concurrent change it promises nothing about whether you get the old or the new reference (what would such a promise even mean?)
So I don't need any volatile or locking to assign the current reference of dataSource to temp.
Maybe, though there are cases where this can have problems.
But what about the Garbage Collection. As I understand the GC has some kind of reference counter to know when to free memory.
Incorrect. There is no reference counting in .NET garbage collection.
If there is a static reference to an object, then it is not eligible for reclamation.
If there is an active local reference to an object, then it is not eligible for reclamation.
If there is a reference to an object in a field of an object that is not eligible for reclamation, then it too is not eligible for reclamation, recursively.
There's no counting here. Either there is an active strong reference prohibiting reclamation, or there isn't.
This has a great many very important implications. Of relevance here is that there can never be any incorrect reference counting, since there is no reference counting. Strong references will not disappear under your feet.
The other thing is compiler/CLR optimization. I assign dataSource to temp and use temp to access data members. What does the CLR do? Does it really make a copy of dataSource or does the optimizer just use dataSource to access .a and .b?
That depends on what dataSource and temp are as far as whether they are local or not, and how they are used.
If dataSource and temp are both local, then it is perfectly possible that either the compiler or the jitter would optimise the assignment away. If they are both local though, they are both local to the same thread, so this isn't going to impact multi-threaded use.
If dataSource is a field (static or instance), then since temp is definitely a local in the code shown (because its initialised in the code fragment shown) the assignment cannot be optimised away. For one thing, grabbing a local copy of a field is in itself a possible optimisation, being faster to do several operations on a local reference than to continually access a field or static. There's not much point having a compiler or jitter "optimisation" that just makes things slower.
Consider what actually happens if you were to not use temp:
var a = dataSource.a;
... // many instructions
var b = dataSource.b;
To access dataSource.a the code must first obtain a reference to dataSource and then access a. Afterwards it obtains a reference to dataSource and then accesses b.
Optimising by not using a local makes no sense, since there's going to be an implicit local anyway.
And there is the simple fact that the fear you have is something considered: After temp = dataSource there's no assumption that temp == dataSource because there could be other threads changing dataSource, so it's not valid to make optimisations predicated on temp == dataSource.*
Really the optimisations you are concerned about are either not relevant or not valid and hence not going to happen.
There is a case that could cause you problems though. It is just about possible for a thread running on one core not to see a change to dataSource made by a thread running on another core. As such, if you have:
/* Thread A */
dataSource = null;
/* Some time has passed */
/* Thread B */
var isNull = dataSource == null;
Then there's no guarantee that just because Thread A had finished setting dataSource to null, that Thread B would see this. Ever.
The memory models in use in .NET itself and in the processors .NET generally runs on (x86 and x86-64) would prevent that happening, but in terms of possible future optimisations, this is something that's possible. You need memory barriers to ensure that Thread A's publishing definitely affects Thread B's reading. lock and volatile are both ways to ensure that.
*One doesn't even need to be multi-threaded for this to not follow, though it is possible to prove in particular cases that there are no single-thread changes that would break that assumption. That doesn't really matter though, because the multi-threaded case still applies.
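A minimal sketch of the publish-by-reference pattern the answer describes, using a volatile field so a reader on another core is guaranteed to see the writer's new reference. The Config type and member names are illustrative, not from the question's code:

```csharp
using System;

class Config
{
    public readonly int A;
    public readonly int B;
    public Config(int a, int b) { A = a; B = b; }
}

class Publication
{
    // volatile gives the write release semantics and the read acquire
    // semantics, so readers on other cores will observe the new reference.
    private static volatile Config dataSource = new Config(0, 0);

    // Writer: build a new immutable object, then swap the reference in one atomic step.
    public static void Publish(int a, int b)
    {
        dataSource = new Config(a, b);
    }

    // Reader: copy the reference once, then read all fields from that snapshot;
    // A and B always come from the same object even if a writer swaps meanwhile.
    public static int ReadSum()
    {
        Config temp = dataSource;
        return temp.A + temp.B;
    }

    static void Main()
    {
        Publish(2, 3);
        Console.WriteLine(ReadSum()); // prints 5
    }
}
```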
This is the first time I have faced a problem like this. This is not my profession, only my hobby, so I have no previous experience to draw on.
In my program I have added, one by one, several functions to control a machine. After I added the last function (temperature measurement), I started experiencing problems with the other functions (approx. 8 of them running all together). The problem I am experiencing is on a chart (RPM of a motor) that is not related to this function but is affected by it. You can see the difference between these two charts with and without the temperature measurement running. The real speed of the motor is the same in both charts, but in the second one I lose pieces on the fly because the application slows down.
Without the temperature function.
With the temperature function.
This function in particular is disturbing the control above, and I think it is because the workload is becoming too heavy for the application and/or because I need samples, so there is some waiting time to collect them:
private void AddT(decimal valueTemp)
{
    sumTemp += valueTemp;
    countTemp += 1;
    if (countTemp >= 20) // take 20 samples and make an average
    {
        OnAvarerageChangedTemp(sumTemp / countTemp);
        sumTemp = 0;
        countTemp = 0;
    }
}

private void OnAvarerageChangedTemp(decimal avTemp)
{
    float val3 = (float)avTemp;
    decimal alarm = avTemp;
    textBox2.Text = avTemp.ToString("F");
    if (alarm > 230)
    {
        System.Media.SoundPlayer player = new System.Media.SoundPlayer();
        player.Stream = Properties.Resources.alarma;
        player.Play();
        timer4.Start();
    }
    else
    {
        timer4.Stop();
        panel2.BackColor = SystemColors.Control;
    }
}
I am wondering if running this function on a different thread would solve the problem, and how I could do that? Or is there a different way to solve the problem? Sample code would be appreciated.
Update, added method call.
This is how I call the method AddT
if (b != "")
{
    decimal convTemp; //corrente resistenza
    decimal.TryParse(b, out convTemp);
    AddT(convTemp);
}
This is how I receive the data from the serial port and pass it to the class that strips out unwanted chars and returns the values to the different variables.
This is the class that strips out the unwanted chars and returns the values.
And this is how I manage the incoming serial data. Please do not laugh at me after seeing my coding; I do a different job and I am learning on my own.
It's very hard to tell if there's anything wrong and what it might be - it looks like a subtle problem.
However, it might be easier to get a handle on these things if you refactor your code. There are many things in the code you've shown that make it harder than necessary to reason about what's happening.
You're using float and decimal - float isn't that accurate, but it's small and fast; decimal tries to be precise, and especially predictable, since it rounds errors the way a human might in base 10 - but it is quite slow, and is usually intended for calculations where precise reproducibility is necessary (e.g. financial stuff). You should probably use double everywhere.
You've got useless else {} code in the Stripper class.
Your Stripper is an instantiable class, when it should simply be a static class with a static method - Stripper is stateless.
You're catching exceptions just to rethrow them.
You're using TryParse, and not checking for success. Normally you'd only use TryParse if you (a) expect parsing to fail sometimes, and (b) can deal with that parse failure. If you don't expect failure or can't deal with it, you're better off with a crash you learn about soon than with subtly incorrect values.
In stripper, you're duplicating variables such as _currentMot, currentMot, and param4 but they're identical - use only the parameter, and give it a logical name.
You're using out parameters. It's almost always a better idea to define a simple struct and return that instead - this also allows you to ensure you can't easily mix up variable names, and it's much easier to encapsulate and reuse functionality since you don't need to duplicate a long call and argument definition.
Your string parsing logic is too fragile. You should probably avoid Replace entirely, and instead explicitly make a Substring without the characters you've checked for, and you have some oddly named things like test1 and test2 which refer to a lastChar that's not the last character - this might be OK, but better names can help keep things straight in your head too.
You have incorrect code comments (decimal convTemp; //corrente resistenza). I usually avoid all purely technical code comments; it's better to use descriptive variable names which are another form of self-documenting code but one in which the compiler can at least check if you use them consistently.
Rather than returning 4 possibly empty values, your Stripper should probably accept a parameter "sink" object on which it can call AddT, AddD and AddA directly.
I don't think any of the above will fix your issue, but I do believe they'll help keep your code a little cleaner and (in the long run) make it easier to find the issues.
Your problem is in the parsing of the values. You have:
decimal.TryParse(a, out convRes);
AddA(convRes);
which doesn't check for failed parses. Only accept the value if TryParse returns true:
if (decimal.TryParse(a, out convRes))
{
    AddA(convRes);
}
You may have more errors, but this one is making you process a value of 0 every time TryParse fails.
Let's think of it as a family tree, a father has kids, those kids have kids, those kids have kids, etc...
So I have a recursive function that gets the father, uses recursion to get the children, and for now just prints them to the debug output window... But at some point (after one hour of letting it run and printing about 26000 rows) it gives me a StackOverflowException.
So am I really running out of memory? Then shouldn't I get an "Out of memory" exception? On other posts I found people saying that if the number of recursive calls is too high, you might still get an SOF exception...
Anyway, my first thought was to break the tree into smaller sub-trees. I know for a fact that my root father always has these five kids, so instead of calling my method once with the root passed to it, I said OK, call it five times with the kids of the root passed to it. It helped, I think... but one of them is still so big - 26000 rows when it crashes - that I still have this issue.
How about application domains and creating new processes at run time at a certain level of depth? Does that help?
How about creating my own Stack and using that instead of recursive methods? does that help?
Here is also a high-level version of my code. Please take a look; maybe there is actually something silly wrong with it that causes the SOF error:
private void MyLoadMethod(string conceptCKI)
{
    // Make some script calls to the DB, so that moTargetConceptList2 will have
    // the concept-relations for the current node.
    // When this is zero, it means it's a leaf.
    int numberofKids = moTargetConceptList2.ConceptReltns.Count();
    if (numberofKids == 0)
        return;
    for (int i = 1; i <= numberofKids; i++)
    {
        oUCMRConceptReltn = moTargetConceptList2.ConceptReltns.get_ItemByIndex(i, false);
        // Get the concept linked to the relation concept
        if (oUCMRConceptReltn.SourceCKI == sConceptCKI)
        {
            oConcept = moTargetConceptList2.ItemByKeyConceptCKI(oUCMRConceptReltn.TargetCKI, false);
        }
        else
        {
            oConcept = moTargetConceptList2.ItemByKeyConceptCKI(oUCMRConceptReltn.SourceCKI, false);
        }
        //builder.AppendLine("\t" + oConcept.PrimaryCTerm.SourceString);
        Debug.WriteLine(oConcept.PrimaryCTerm.SourceString);
        MyLoadMethod(oConcept.ConceptCKI);
    }
}
How about creating my own Stack and using that instead of recursive methods? does that help?
Yes!
When you instantiate a Stack<T> this will live on the heap and can grow arbitrarily large (until you run out of addressable memory).
If you use recursion you use the call stack. The call stack is much smaller than the heap. The default is 1 MB of call stack space per thread. Note this can be changed, but it's not advisable.
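To illustrate the swap in miniature, here is a generic sketch using an illustrative Node type (not the question's COM objects): the pending work moves from the call stack into a heap-allocated Stack&lt;T&gt;, which can grow far beyond the roughly 1 MB per-thread call-stack limit:

```csharp
using System;
using System.Collections.Generic;

class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();
}

class Traversal
{
    // Visits every node without recursion; the pending nodes live in a
    // heap-allocated Stack<T> instead of as stack frames.
    public static List<string> Visit(Node root)
    {
        var names = new List<string>();
        var pending = new Stack<Node>();
        pending.Push(root);
        while (pending.Count != 0)
        {
            Node current = pending.Pop();
            names.Add(current.Name);
            foreach (Node child in current.Children)
                pending.Push(child);
        }
        return names;
    }

    static void Main()
    {
        var root = new Node { Name = "father" };
        root.Children.Add(new Node { Name = "kid" });
        Console.WriteLine(string.Join(", ", Visit(root))); // prints "father, kid"
    }
}
```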
StackOverflowException is quite different to OutOfMemoryException.
OOME means that there is no memory available to the process at all. This could be upon trying to create a new thread with a new stack, or in trying to create a new object on the heap (and a few other cases).
SOE means that the thread's stack - by default 1M, though it can be set differently in thread creation or if the executable has a different default; hence ASP.NET threads have 256k as a default rather than 1M - was exhausted. This could be upon calling a method, or allocating a local.
When you call a function (method or property), the arguments of the call are placed on the stack, and the address the function should return to when it finishes is put on the stack; then execution jumps to the function called. Then some locals will be placed on the stack. Some more may be placed on it as the function continues to execute. stackalloc will also explicitly use some stack space where otherwise heap allocation would be used.
Then it calls another function, and the same happens again. Then that function returns, and execution jumps back to the stored return address, and the pointer within the stack moves back up (no need to clean up the values placed on the stack, they're just ignored now) and that space is available again.
If you use up that 1M of space, you get a StackOverflowException. Because 1M (or even 256k) is a large amount of memory for such use (we don't put really large objects on the stack), the four things that are likely to cause an SOE are:
Someone thought it would be a good idea to optimise by using stackalloc when it wasn't, and they used up that 1M fast.
Someone thought it would be a good idea to optimise by creating a thread with a smaller than usual stack when it wasn't, and they use up that tiny stack.
A recursive (whether directly or through several steps) call falls into an infinite loop.
It wasn't quite infinite, but it was large enough.
You've got case 4. 1 and 2 are quite rare (and you need to be quite deliberate to risk them). Case 3 is by far the most common, and indicates a bug in that the recursion shouldn't be infinite, but a mistake means it is.
Ironically, in this case you should be glad you took the recursive approach rather than iterative - the SOE reveals the bug and where it is, while with an iterative approach you'd probably have an infinite loop bringing everything to a halt, and that can be harder to find.
Now for case 4, we've got two options. In the very very rare cases where we've got just slightly too many calls, we can run it on a thread with a larger stack. This doesn't apply to you.
Instead, you need to change from a recursive approach to an iterative one. Most of the time this isn't very hard, though it can be fiddly. Instead of calling itself again, the method uses a loop. For example, consider the classic teaching example of a factorial method:
private static int Fac(int n)
{
    return n <= 1 ? 1 : n * Fac(n - 1);
}
Instead of using recursion we loop in the same method:
private static int Fac(int n)
{
    int ret = 1;
    for (int i = 1; i <= n; ++i)
        ret *= i;
    return ret;
}
You can see why there's less stack space here. The iterative version will also be faster 99% of the time. Now, imagine we accidentally call Fac(n) in the first, and leave out the ++i in the second - the equivalent bug in each, and it causes an SOE in the first and a program that never stops in the second.
For the sort of code you're talking about, where you keep producing more and more results as you go based on previous results, you can place the results you've got in a data structure (Queue<T> and Stack<T> both serve well for a lot of cases), so the code becomes something like:
private void MyLoadMethod(string firstConceptCKI)
{
    Queue<string> pendingItems = new Queue<string>();
    pendingItems.Enqueue(firstConceptCKI);
    while (pendingItems.Count != 0)
    {
        string conceptCKI = pendingItems.Dequeue();
        // Make some script calls to the DB, so that moTargetConceptList2 will have
        // the concept-relations for the current node.
        // When this is zero, it means it's a leaf.
        int numberofKids = moTargetConceptList2.ConceptReltns.Count();
        for (int i = 1; i <= numberofKids; i++)
        {
            oUCMRConceptReltn = moTargetConceptList2.ConceptReltns.get_ItemByIndex(i, false);
            // Get the concept linked to the relation concept
            if (oUCMRConceptReltn.SourceCKI == sConceptCKI)
            {
                oConcept = moTargetConceptList2.ItemByKeyConceptCKI(oUCMRConceptReltn.TargetCKI, false);
            }
            else
            {
                oConcept = moTargetConceptList2.ItemByKeyConceptCKI(oUCMRConceptReltn.SourceCKI, false);
            }
            //builder.AppendLine("\t" + oConcept.PrimaryCTerm.SourceString);
            Debug.WriteLine(oConcept.PrimaryCTerm.SourceString);
            pendingItems.Enqueue(oConcept.ConceptCKI);
        }
    }
}
(I haven't completely checked this, just added the queuing instead of recursing to the code in your question).
This should then do more or less the same as your code, but iteratively. Hopefully that means it'll work. Note that there is a possible infinite loop in this code if the data you are retrieving has a loop. In that case this code will throw an exception when it fills the queue with far too much stuff to cope. You can either debug the source data, or use a HashSet to avoid enqueuing items that have already been processed.
Edit: Better add how to use a HashSet to catch duplicates. First set up a HashSet, this could just be:
HashSet<string> seen = new HashSet<string>();
Or if the strings are used case-insensitively, you'd be better with:
HashSet<string> seen = new HashSet<string>(StringComparer.InvariantCultureIgnoreCase); // or StringComparer.CurrentCultureIgnoreCase if that's closer to how the string is used in the rest of the code.
Then, before you go to use the string (or perhaps before you go to add it to the queue), you have one of the following:
If duplicate strings shouldn't happen:
if (!seen.Add(conceptCKI))
    throw new InvalidOperationException("Attempt to use \"" + conceptCKI + "\" which was already seen.");
Or if duplicate strings are valid, and we just want to skip performing the second call:
if (!seen.Add(conceptCKI))
    continue; // skip the rest of the loop, and move on to the next one.
I think you have a recursion cycle (a ring, i.e. infinite recursion), not really a too-deep-but-finite recursion. If you gave the stack more memory, you would still get the overflow error.
To test it:
Declare a global variable for storing the objects being processed:
private Dictionary<int, object> _operableIds = new Dictionary<int, object>();
...
private void Start()
{
    _operableIds.Clear();
    Recursion(start_id);
}
...
private void Recursion(int object_id)
{
    if (_operableIds.ContainsKey(object_id))
        throw new Exception("Have a ring!");
    else
        _operableIds.Add(object_id, null /* or the object */);
    ...
    Recursion(other_id);
    ...
    _operableIds.Remove(object_id);
}
Possible Duplicate:
System.Random keeps on returning the same value
I'm refactoring and expanding a small C# agent-based model to help some biology professors predict the spread of a disease. Each year of the simulation each individual agent randomly travels to a nearby population node, possibly spreading the disease. I'm brand new to C#, but I've read about potential problems with Random.Next returning the same value if re-initialized with the same system time. To avoid this I've created a static instance which is referenced for each new random value.
The specifics:
In my efforts to scale up the model I've changed it to compute the "travel" information for each population node in parallel. When testing the model I noticed that in the new version the disease would not spread past the first year. Further inquiry narrowed the problem down to the travel between nodes. After the first year all the agents remained immobile. I examined the function responsible for their travel and found that it works by creating a list of all nearby nodes, generating a random number <= the number of elements in the list, and travelling to listOfNearbyNodes[myRandomNumber].
The problem:
I then added a print statement to output the value of the random index for each iteration. I found that the whole model works exactly as expected for the first year of the simulation, with the random numbers being generated in an acceptable range. However, after the first year ends and the simulation loops the exact same code will only return a "random" index of 0. Every thread, every iteration, every node, every agent, always 0. As the current node of an agent is always the first item in the list the agents never move again.
I thought this might be another manifestation of the system time seed error, so I've tried three different ways of implementing a static random object, but it doesn't help. Every time I run the simulation the first year always works correctly and then Random.Next() starts to only return 0's.
Does anyone have ideas as to where I should look next for the bug? Thanks!
I suspect you're using the same instance of Random concurrently in multiple threads. Don't do that - it's not thread-safe.
Options:
Create a new instance of Random per thread (ThreadStatic can help here)
Use a single instance of Random, but only ever use it in a lock.
I have a blog post with some sample code, but please read the comments as well as there are good suggestions for improving it. I'm planning on writing another article about randomness at some point in the near-ish future...
I don't believe that the Random class is designed to be thread safe (concurrently usable from multiple threads) - so if you're sharing a single instance in this manner, you may corrupt the state of the random generator, preventing it from operating correctly.
You can decorate the static variable that holds the reference to the Random class as ThreadStatic, which will allow you to maintain a separate instance per thread:
[ThreadStatic]
private static Random m_Random; // don't attempt to initialize this here...

public void YourThreadStartMethod()
{
    // initialize each random instance as each thread starts...
    m_Random = new Random();
}
If you're using .NET 4.0, there's also the ThreadLocal<T> class, which helps make initializing one instance per thread easier.
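A sketch of that ThreadLocal&lt;T&gt; approach: the factory runs once per thread, so each thread gets its own Random and no locking is needed. The seed derivation is an illustrative assumption, there only to avoid identical time-based seeds on threads that start at the same instant:

```csharp
using System;
using System.Threading;

class PerThreadRandom
{
    private static int seed = Environment.TickCount;

    // Each thread that touches 'random.Value' lazily gets its own instance,
    // seeded differently so simultaneous threads don't share a sequence.
    private static readonly ThreadLocal<Random> random =
        new ThreadLocal<Random>(() => new Random(Interlocked.Increment(ref seed)));

    public static int Next(int maxValue)
    {
        return random.Value.Next(maxValue);
    }

    static void Main()
    {
        int n = Next(10);
        Console.WriteLine(n >= 0 && n < 10); // prints True
    }
}
```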
The Random object is not thread safe. To get around that, you could use this code stolen from this answer:
class ThreadSafeRandom
{
    private static Random random = new Random();

    public static int Next()
    {
        lock (random)
        {
            return random.Next();
        }
    }
}
You could also use the RNGCryptoServiceProvider, which is thread safe and also produces better random data.
I think you should instead use the RNGCryptoServiceProvider
http://msdn.microsoft.com/en-us/library/system.security.cryptography.rngcryptoserviceprovider.aspx
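As a hedged sketch of that suggestion: RNGCryptoServiceProvider exposes GetBytes, which is safe to call from multiple threads, and you can reduce the raw bytes to a bounded index yourself. Note the modulo step introduces a small bias, which is usually acceptable outside cryptography (e.g. for a simulation like this):

```csharp
using System;
using System.Security.Cryptography;

class CryptoRandomSketch
{
    private static readonly RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();

    // Returns a value in [0, maxValue). GetBytes is thread safe, so no lock
    // is needed even when many simulation threads call this concurrently.
    public static int Next(int maxValue)
    {
        var bytes = new byte[4];
        rng.GetBytes(bytes);
        int value = BitConverter.ToInt32(bytes, 0) & int.MaxValue; // force non-negative
        return value % maxValue;
    }

    static void Main()
    {
        int n = Next(10);
        Console.WriteLine(n >= 0 && n < 10); // prints True
    }
}
```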