C# optimizations and side effects

Can optimizations done by the C# compiler or the JITter have visible side effects?
One example I've thought of:
var x = new Something();
A(x);
B(x);
When calling A(x), x is guaranteed to be kept alive until the end of A, because B uses the same parameter. But if B is defined as
public void B(Something x) { }
Then the call to B(x) can be eliminated by the optimizer, and a GC.KeepAlive(x) call might be necessary instead.
Can this optimization actually be done by the JITter?
Are there other optimizations that might have visible side effects, except stack trace changes?

If your function B does not use the parameter x, then eliminating it and collecting x early does not have any visible side effects.
To be "visible side effects", they have to be visible to the program, not to an external tool like a debugger or object viewer.

When calling A(x), x is guaranteed to be kept alive until the end of A, because B uses the same parameter.
This statement is false. Suppose method A always throws an exception. The jitter could know that B will never be reached, and therefore x can be released immediately. Suppose method A goes into an unconditional infinite loop after its last reference to x; again, the jitter could know that via static analysis, determine that x will never be referenced again, and schedule it to be cleaned up. I do not know if the jitter actually performs these optimizations; they seem dodgy, but they are legal.
Can this optimization (namely, doing early cleanup of a reference that is not used anywhere) actually be done by the JITter?
Yes, and in practice, it is done. That is not an observable side effect.
This is justified by section 3.9 of the specification, which I quote for your convenience:
If the object, or any part of it, cannot be accessed by any possible continuation of execution, other than the running of destructors, the object is considered no longer in use, and it becomes eligible for destruction. The C# compiler and the garbage collector may choose to analyze code to determine which references to an object may be used in the future. For instance, if a local variable that is in scope is the only existing reference to an object, but that local variable is never referred to in any possible continuation of execution from the current execution point in the procedure, the garbage collector may (but is not required to) treat the object as no longer in use.
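A minimal sketch of this rule in action (the Watched type is hypothetical; note that debug builds typically extend local lifetimes to the end of the method, so early finalization is only expected in an optimized release build):
using System;

class Watched
{
    ~Watched() { Console.WriteLine("Finalized before end of Main"); }
}

class Program
{
    static void Main()
    {
        var w = new Watched();
        // 'w' is never read again, so the jitter may treat it as dead here;
        // the finalizer can then run even though 'w' is still in scope.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine("End of Main");
        // Uncommenting the next line would keep 'w' alive to this point:
        // GC.KeepAlive(w);
    }
}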
Can optimizations done by the C# compiler or the JITter have visible side effects?
Your question is answered in section 3.10 of the specification, which I quote here for your convenience:
Execution of a C# program proceeds such that the side effects of each executing thread are preserved at critical execution points. A side effect is defined as a read or write of a volatile field, a write to a non-volatile variable, a write to an external resource, and the throwing of an exception. The critical execution points at which the order of these side effects must be preserved are references to volatile fields, lock statements, and thread creation and termination. The execution environment is free to change the order of execution of a C# program, subject to the following constraints:
Data dependence is preserved within a thread of execution. That is, the value of each variable is computed as if all statements in the thread were executed in original program order.
Initialization ordering rules are preserved.
The ordering of side effects is preserved with respect to volatile reads and writes.
Additionally, the execution environment need not evaluate part of an expression if it can deduce that that expression’s value is not used and that no needed side effects are produced (including any caused by calling a method or accessing a volatile field).
When program execution is interrupted by an asynchronous event (such as an exception thrown by another thread), it is not guaranteed that the observable side effects are visible in the original program order.
The second-to-last paragraph is I believe the one you are most concerned about; that is, what optimizations is the runtime allowed to perform with respect to affecting observable side effects? The runtime is permitted to perform any optimization which does not affect an observable side effect.
Note that in particular data dependence is only preserved within a thread of execution. Data dependence is not guaranteed to be preserved when observed from another thread of execution.
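To make that last point concrete, here is a minimal sketch (fields hypothetical; the claim is only that the specification permits this outcome, not that any particular runtime will produce it):
class ReorderDemo
{
    static int a, b;

    static void Writer()
    {
        a = 1; // on the writer thread, 'a' is assigned before 'b'...
        b = 1;
    }

    static void Reader()
    {
        if (b == 1 && a == 0)
        {
            // ...yet this branch is reachable: without volatile fields or
            // locks, no cross-thread ordering of the two writes is promised.
        }
    }
}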
If that doesn't answer your question, ask a more specific question. In particular, a careful and precise definition of "observable side effect" will be necessary to answer your question in more detail, if you do not consider the definition given above to match your definition of "observable side effect".

Including B in your question just confuses the matter. Given this code:
var x = new Something();
A(x);
Assuming that A(x) is managed code, calling A(x) maintains a reference to x, so the garbage collector can't collect x until after A returns -- or at least until A no longer needs it. The optimizations done by the JITter (absent bugs) will not prematurely collect x.
You should define what you mean by "visible side effects." One would hope that JITter optimizations at least have the side effect of making your code smaller or faster. Are those "visible"? Or do you mean "undesirable"?

Eric Lippert has started a great series about refactoring which leads me to believe that the C# compiler and JITter make sure not to introduce side effects. Part 1 and Part 2 are currently online.

Related

Is volatile needed with indirect access?

Please note I am not asking about replacing volatile with other means (like lock); I am asking about the nature of volatile itself -- thus I use "needed" in the sense of volatile versus no volatile.
Consider the case where one thread only writes to a variable x (an Int32) and the other thread only reads it. The access in both cases is direct.
volatile is needed to avoid caching, correct?
But what if the access to x is indirect -- for example via property:
int x;
int access_x { get { return x; } set { x = value; } }
So both threads now use only access_x, not x. Does x still need to be marked as volatile? If yes, is there some limit of indirection beyond which volatile is no longer needed?
Update: consider this code on the reader's side (there are no writes):
if (x>10)
...
// second thread changes `x`
if (x>10)
...
In the second if, the compiler could use the old value, because without volatile x may still be cached and there is no need to refetch it. My question is about this change:
if (access_x>10)
...
// second thread changes `x`
if (access_x>10)
...
And let's say I skip volatile for x. What will happen / what can happen?
Does x still need to be marked as volatile?
Yes, and no (but mostly yes).
"Yes", because technically you have no guarantees here. The compilers (C# and JIT) are permitted to make any optimization they see fit, as long as the optimization would not change the behavior of the code when executed in a single-thread. One obvious optimization is to omit the call to the property setter and getter and just directly access the field (i.e. inlining). Of course, the compiler is allowed to do whatever analysis it wants and make further optimizations.
"No", because in practice this usually is not a problem. With the access wrapped in a method, the C# compiler won't optimize away the field, and the JIT compiler is unlikely to do so (even if the methods are inlined…again, no guarantees, but AFAIK such an optimization isn't performed, and I think it not likely a future version would). So all you're left with is memory coherency issues (the other reason volatile is used…i.e. essentially dealing with optimizations performed at the hardware level).
As long as your code is only ever going to run on Intel x86-compatible hardware, that hardware treats all reads and writes as volatile.
However, other platforms may be different. Itanium and ARM being a couple of common examples which have different memory models.
Personally, I prefer to write to the technicalities. Writing code that just happens to work, in spite of a lack of guarantee, is just asking to find some time in the future that the code stops working for some mysterious reason.
So IMHO you really should mark the field as volatile.
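A minimal sketch of that conservative approach (type and member names hypothetical):
class Shared
{
    // Marking the backing field volatile gives every read acquire
    // semantics and every write release semantics, even if the
    // accessors below are inlined by the jitter.
    private volatile int x;

    public int AccessX
    {
        get { return x; }
        set { x = value; }
    }
}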
If yes, is there some limit of indirection beyond which volatile is no longer needed?
No. If such a limit existed, it would have to be documented to be useful. In practice, the limit in terms of compiler optimizations is really just a "level of indirection" (as you put it). But no amount of indirection avoids the hardware-level optimizations, and even the limit in terms of compiler optimizations is strictly "in practice". You have no guarantee that the compiler will never analyze the code more deeply and perform more aggressive optimizations, to any arbitrarily deep level of calls.
More generally, my rule of thumb is this: if I am trying to decide whether I should use some particular feature that I know is normally used to protect against bugs in concurrency-related scenarios, and I have to ask "do I really need this feature, or will the code work fine without it?", then I probably don't know enough about the way that feature works for me to safely avoid using it.
Consider it a variation on the old "if you have to ask how much it costs, you can't afford it."

Implicit limits of compiler instruction reordering?

When learning about locking mechanisms and concurrency recently, I came across compiler instruction reordering. Since then, I am a lot more suspicious about the correctness of the code I write even if there is no concurrent access to fields. I just encountered a piece of code like this:
var now = DateTime.Now;
var newValue = CalculateCachedValue();
cachedValue = newValue;
lastUpdate = now;
Is it possible that lastUpdate = now is executed before cachedValue is assigned the new value? This would mean that if the thread running this code was cancelled I would have logged an update that was not saved. From what I know now I have to assume this is the case and I need a memory barrier.
But is it even possible that the first statement is executed after the second? This would mean now is the time after the calculation and not before. I guess this is not the case because a method call is involved. However, there is no other clear dependency that prevents reordering. Is a method call/property access an implicit barrier? Are there other implicit constraints for instruction reordering that I should be aware of?
The .NET jitter can reorder instructions, yes. Invariant code motion and common sub-expression elimination are important optimizations and can make code a great deal faster.
But that does not just happen willy-nilly. The optimizer will only ever contemplate such an optimization if it knows that reordering will not have any undesirable side effects. In order for it to know, it first has to inline a method or property getter call. And that will never happen for DateTime.Now; it requires an operating-system call, and those can never be inlined. So you have a hard guarantee that no statement ever moves before or after var now = DateTime.Now;
And it actually has to make sense to move code and result in a measurable benefit. There is no point in reordering the assignment statements, it does not make the code any faster. Invariant code motion is an optimization that's applied to statements that are inside a loop, moving such a statement ahead of the loop so it does not get executed repeatedly pays off. No risk of this at all in this snippet. Sub-expression elimination is also nowhere in sight here.
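For illustration, a sketch of the loop case described above (list and sum are arbitrary locals; the transformation is conceptual, showing what the jitter may effectively do):
var list = new System.Collections.Generic.List<int> { 1, 2, 3 };
int sum = 0;

// Before: the invariant expression list.Count is re-read on every iteration.
for (int i = 0; i < list.Count; i++)
    sum += list[i];

// After (conceptually): the invariant read is hoisted out of the loop
// and evaluated once.
int count = list.Count;
sum = 0;
for (int i = 0; i < count; i++)
    sum += list[i];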
Being afraid of optimizer-induced bugs is a bit like being afraid to step outside because you might be struck by a bolt of lightning. That happens. Odds are just very, very low. A nice guarantee you get with the .NET jitter is that it gets tested millions of times every day.
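That said, if you would rather rely on a specified guarantee than on current jitter behavior, one conservative option (a sketch using the names from the question, not the only possible pattern) is an explicit full fence between the two assignments:
var now = DateTime.Now;
var newValue = CalculateCachedValue();
cachedValue = newValue;
// Full fence: no read or write may move across it in either direction,
// so the cachedValue assignment cannot be observed after lastUpdate's.
System.Threading.Thread.MemoryBarrier();
lastUpdate = now;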

C# TPL: Invoke method on outer scoped instance

So my title was fairly obscure; here is what I'm worried about: can I invoke a method on an instance of a class that is declared outside of the block without suffering pitfalls? That is, are there concurrency issues for code structured as below?
HeavyLifter hl = new HeavyLifter();
var someActionBlock = new ActionBlock<string>(n =>
{
    int liftedStuff = hl.DoSomeHeavyLifting(n);
    if (liftedStuff > 0)
        .....
});
The source of my concerns are below.
The Block may have multiple threads running at the same time, and each of these threads may enter the DoSomeHeavyLifting method. Does each function invocation get its own frame pointer? Should I make sure I don't reference any variables outside of the CountWords scope?
Is there a better way to do this than to instantiate a HeavyLifter in my block?
Any help is greatly appreciated, I'm not too lost, but I know Concurrency is the King of latent errors and corner cases.
Assuming that by frame pointer you mean stack frame, then yes, each invocation gets its own stack frame and associated variables. If parameters to the function are reference types, then the parameters in different frames may all refer to the same object.
Whether or not it's safe to use the same HeavyLifter instance for all invocations depends on whether the DoSomeHeavyLifting method has side effects. That is, whether DoSomeHeavyLifting modifies any of the contents of the HeavyLifter object's state. (or any other referenced objects)
Ultimately whether it is safe to do this depends largely on what DoSomeHeavyLifting does internally. If it's carefully constructed in order to be reentrant, then there are no problems calling it the way you have it. If, however, DoSomeHeavyLifting modifies the state, or the state is modified as a side effect of any other operation, then the decision would have to be made in the context of the overall architecture how to handle it. For example, do you allow the state change and enforce atomicity, or do you prevent any state change that affects the operation? Without knowing what the method is actually doing, it's impossible to give any specific advice.
In general when designing for concurrency it's usually best to assume the worst:
If a race condition can happen, it will.
When a race condition happens, you will lose the race in the most complex way your code allows.
Non-atomic state updates will corrupt each other, and leave your object in an undefined state.
If you use a lock there will be a case where you could deadlock.
Something that doesn't ever happen in debug, will always happen in release.
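On the "better way" question: if DoSomeHeavyLifting does turn out to mutate instance state, one simple fix is to give each invocation its own instance. A hedged sketch (HeavyLifter is the poster's hypothetical type; the parallelism setting is an arbitrary illustration):
using System.Threading.Tasks.Dataflow;

var someActionBlock = new ActionBlock<string>(n =>
{
    var hl = new HeavyLifter(); // no state shared between invocations
    int liftedStuff = hl.DoSomeHeavyLifting(n);
    if (liftedStuff > 0)
    {
        // handle result
    }
},
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });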

Why is the standard C# event invocation pattern thread-safe without a memory barrier or cache invalidation? What about similar code?

In C#, this is the standard code for invoking an event in a thread-safe way:
var handler = SomethingHappened;
if(handler != null)
handler(this, e);
Where, potentially on another thread, the compiler-generated add method uses Delegate.Combine to create a new multicast delegate instance, which it then sets on the compiler-generated field (using interlocked compare-exchange).
(Note: for the purposes of this question, we don't care about code that runs in the event subscribers. Assume that it's thread-safe and robust in the face of removal.)
In my own code, I want to do something similar, along these lines:
var localFoo = this.memberFoo;
if(localFoo != null)
localFoo.Bar(localFoo.baz);
Where this.memberFoo could be set by another thread. (It's just one thread, so I don't think it needs to be interlocked - but maybe there's a side-effect here?)
(And, obviously, assume that Foo is "immutable enough" that we're not actively modifying it while it is in use on this thread.)
Now I understand the obvious reason that this is thread-safe: reads from reference fields are atomic. Copying to a local ensures we don't get two different values. (Apparently only guaranteed from .NET 2.0, but I assume it's safe in any sane .NET implementation?)
But what I don't understand is: What about the memory occupied by the object instance that is being referenced? Particularly in regards to cache coherency? If a "writer" thread does this on one CPU:
thing.memberFoo = new Foo(1234);
What guarantees that the memory where the new Foo is allocated doesn't happen to be in the cache of the CPU the "reader" is running on, with uninitialized values? What ensures that localFoo.baz (above) doesn't read garbage? (And how well guaranteed is this across platforms? On Mono? On ARM?)
And what if the newly created foo happens to come from a pool?
thing.memberFoo = FooPool.Get().Reset(1234);
This seems no different, from a memory perspective, to a fresh allocation - but maybe the .NET allocator does some magic to make the first case work?
My thinking, in asking this, is that a memory barrier would be required to ensure - not so much that memory accesses cannot be moved around, given the read is dependent - but as a signal to the CPU to flush any cache invalidations.
My source for this is Wikipedia, so make of that what you will.
(I might speculate that maybe the interlocked-compare-exchange on the writer thread invalidates the cache on the reader? Or maybe all reads cause invalidation? Or pointer dereferences cause invalidation? I'm particularly concerned how platform-specific these things sound.)
Update: Just to make it more explicit that the question is about CPU cache invalidation and what guarantees .NET provides (and how those guarantees might depend on CPU architecture):
Say we have a reference stored in field Q (a memory location).
On CPU A (writer) we initialize an object at memory location R, and write a reference to R into Q
On CPU B (reader), we dereference field Q, and get back memory location R
Then, on CPU B, we read a value from R
Assume the GC does not run at any point. Nothing else interesting happens.
Question: What prevents R from being in B's cache, from before A has modified it during initialisation, such that when B reads from R it gets stale values, in spite of it getting a fresh version of Q to know where R is in the first place?
(Alternate wording: what makes the modification to R visible to CPU B at or before the point that the change to Q is visible to CPU B.)
(And does this only apply to memory allocated with new, or to any memory?)
Note: I've posted a self-answer here.
This is a really good question. Let us consider your first example.
var handler = SomethingHappened;
if(handler != null)
handler(this, e);
Why is this safe? To answer that question you first have to define what you mean by "safe". Is it safe from a NullReferenceException? Yes, it is pretty trivial to see that caching the delegate reference locally eliminates that pesky race between the null check and the invocation. Is it safe to have more than one thread touching the delegate? Yes, delegates are immutable so there is no way that one thread can cause the delegate to get into a half-baked state. The first two are obvious. But, what about a scenario where thread A is doing this invocation in a loop and thread B at some later point in time assigns the first event handler? Is that safe in the sense that thread A will eventually see a non-null value for the delegate? The somewhat surprising answer to this is probably. The reason is that the default implementations of the add and remove accessors for the event create memory barriers. I believe the early version of the CLR took an explicit lock and later versions used Interlocked.CompareExchange. If you implemented your own accessors and omitted a memory barrier then the answer could be no. I think in reality it highly depends on whether Microsoft added memory barriers to the construction of the multicast delegate itself.
On to the second and more interesting example.
var localFoo = this.memberFoo;
if(localFoo != null)
localFoo.Bar(localFoo.baz);
Nope. Sorry, this actually is not safe. Let us assume memberFoo is of type Foo which is defined like the following.
public class Foo
{
    public int baz = 0;
    public int daz = 0;

    public Foo()
    {
        baz = 5;
        daz = 10;
    }

    public void Bar(int x)
    {
        int result = x / daz; // throws DivideByZeroException if daz is still 0
    }
}
And then let us assume another thread does the following.
this.memberFoo = new Foo();
Despite what some may think there is nothing that mandates that instructions have to be executed in the order that they were defined in the code as long as the intent of the programmer is logically preserved. The C# or JIT compilers could actually formulate the following sequence of instructions.
/* 1 */ set register = alloc-memory-and-return-reference(typeof(Foo));
/* 2 */ set register.baz = 0;
/* 3 */ set register.daz = 0;
/* 4 */ set this.memberFoo = register;
/* 5 */ set register.baz = 5; // Foo.ctor
/* 6 */ set register.daz = 10; // Foo.ctor
Notice how the assignment to memberFoo occurs before the constructor is run. That is valid because it does not have any unintended side effects from the perspective of the thread executing it. It could, however, have a major impact on other threads. What happens if your null check of memberFoo on the reading thread occurred when the writing thread had just finished instruction #4? The reader will see a non-null value and then attempt to invoke Bar before the daz variable got set to 10. daz will still hold its default value of 0, thus leading to a divide by zero error. Of course, this is mostly theoretical because Microsoft's implementation of the CLR creates a release-fence on writes that would prevent this. But, the specification would technically allow for it. See this question for related content.
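One way to rule this out on all platforms, rather than relying on the CLR's stronger-than-specified behavior, is to publish and consume the reference with Volatile.Write/Volatile.Read. A sketch (Holder is hypothetical; Foo is as defined above):
using System.Threading;

class Holder
{
    private Foo memberFoo;

    // Writer thread: the release semantics of Volatile.Write guarantee the
    // constructor's field assignments are visible before the reference is.
    public void Publish()
    {
        Volatile.Write(ref memberFoo, new Foo());
    }

    // Reader thread: the acquire semantics of Volatile.Read guarantee we
    // observe the fields the writer initialized before publishing.
    public void Consume()
    {
        var localFoo = Volatile.Read(ref memberFoo);
        if (localFoo != null)
            localFoo.Bar(localFoo.baz);
    }
}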
I think I have figured out what the answer is. But I'm not a hardware guy, so I'm open to being corrected by someone more familiar with how CPUs work.
The .NET 2.0 memory model guarantees:
Writes cannot move past other writes from the same thread.
This means that the writing CPU (A in the example), will never write a reference to an object into memory (to Q), until after it has written out contents of that object being constructed (to R). So far, so good. This cannot be re-ordered:
R = <data>
Q = &R
Let's consider the reading CPU (B). What is to stop it reading from R before it reads from Q?
On a sufficiently naïve CPU, one would expect it to be impossible to read from R without first reading from Q. We must first read Q to get the address of R. (Note: it is safe to assume that the C# compiler and JIT behave this way.)
But, if the reading CPU has a cache, couldn't it have stale memory for R in its cache, but receive the updated Q?
The answer seems to be no. For sane cache coherency protocols, invalidation is implemented as a queue (hence "invalidation queue"). So R will always be invalidated before Q is invalidated.
Apparently the only hardware where this is not the case is the DEC Alpha (according to Table 1, here). It is the only listed architecture where dependent reads can be re-ordered. (Further reading.)
Capturing a reference to an immutable object guarantees thread safety (in the sense of consistency; it does not guarantee that you get the latest value).
The list of event handlers is immutable, and thus it is enough for thread safety to capture a reference to the current value. The whole object is consistent, as it never changes after initial creation.
Your sample code does not explicitly state whether Foo is immutable, so you get all sorts of problems figuring out whether the object can change or not, i.e. directly by setting properties. Note that the code would be "unsafe" even in the single-threaded case, as you can't guarantee that a particular instance of Foo does not change.
On CPU caches and the like: the only change that can invalidate data at the actual memory location of a truly immutable object is the GC's compaction. That code ensures all necessary locks/cache consistency, so managed code will never observe a change in the bytes referenced by your cached pointer to an immutable object.
When this is evaluated:
thing.memberFoo = new Foo(1234);
First new Foo(1234) is evaluated, which means that the Foo constructor executes to completion. Then thing.memberFoo is assigned the value. This means that any other thread reading from thing.memberFoo is not going to read an incomplete object. It's either going to read the old value, or it's going to read the reference to the new Foo object after its constructor has completed. Whether this new object is in the cache or not is irrelevant; the reference being read won't point to the new object until after the constructor has completed.
The same thing happens with the object pool. Everything on the right evaluates completely before the assignment happens.
In your example, B will never get the reference to R before R's constructor has run, because A does not write R to Q until A has completed constructing R. If B reads Q before that, it will get whatever value was already in Q. If R's constructor throws an exception, then Q will never be written to.
C# order of operations guarantees this will happen this way. Assignment operators have the lowest precedence, and new and function call operators have the highest precedence. This guarantees that the new will evaluate before the assignment is evaluated. This is required for things like exceptions -- if an exception is thrown by the constructor then the object being allocated will be in an invalid state and you don't want that assignment to occur regardless of whether you're multithreaded or not.
It seems to me you should be using volatile (see this article) in this case. This ensures the compiler doesn't perform optimisations that assume access by a single thread.
Events used to use locks, but as of C# 4 use lock-free synchronisation - I'm not sure exactly what (see this article).
EDIT:
The Interlocked methods use memory barriers which will ensure all threads read the updated value (on any sane system). So long as you perform all updates with Interlocked you can safely read the value from any thread without a memory barrier. This is the pattern used in the System.Collections.Concurrent classes.
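A sketch of that pattern (names hypothetical): all writers publish through Interlocked, after which readers can safely read the reference.
using System.Threading;

class LatestHolder
{
    private object latest = new object();

    public void Publish(object newValue)
    {
        // Interlocked.Exchange is a full fence: everything written into
        // newValue before this call is visible to threads that see the
        // new reference.
        Interlocked.Exchange(ref latest, newValue);
    }

    public object Read()
    {
        // Reference reads are atomic; Volatile.Read adds acquire semantics.
        return Volatile.Read(ref latest);
    }
}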

What is thread safe (C#) ? (Strings, arrays, ... ?)

I'm quite new to C#, so please bear with me. I'm a bit confused about thread safety. When is something thread safe, and when is it not?
Is reading (just reading from something that was initialized before) from a field always thread safe?
//EXAMPLE
RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
rsa.FromXmlString(xmlString);
//Is this thread safe if xmlString is predefined
//and this code can be called from multiple threads?
Is accessing an object from an array or list always thread safe (in case you use a for loop for enumeration)?
//EXAMPLE (a is local to thread, array and list are global)
int a = 0;
for(int i=0; i<10; i++)
{
a += array[i];
a -= list.ElementAt(i);
}
Is enumeration always/ever thread safe?
//EXAMPLE
foreach(Object o in list)
{
//do something with o
}
Can writing and reading to a particular field ever result in a corrupted read (half of the field is changed and half is still unchanged) ?
Thank you for all your answers and time.
EDIT: I meant the case where all threads are only reading and using (not writing or changing) the object (except in the last question, where I obviously mean threads that both read and write). I ask because I do not know whether plain access or enumeration is thread safe.
It's different for different cases, but in general, reading is safe if all threads are reading. If any are writing, neither reading nor writing is safe unless it can be done atomically (inside a synchronized block or with an atomic type).
It isn't a given that reading is OK -- you never know what is happening under the hood -- for example, a getter might need to initialize data on first use (and therefore write to local fields).
For Strings, you are in luck -- they are immutable, so all you can do is read them. With other types, you will have to take precautions against them changing in other threads while you are reading them.
Is reading (just reading from something that was initialized before) from a field always thread safe?
The C# language guarantees, in section 3.10 of the specification, that reads and writes are consistently ordered when they occur on a single thread:
Data dependence is preserved within a thread of execution. That is, the value of each variable is computed as if all statements in the thread were executed in original program order. Initialization ordering rules are preserved.
Events in a multithreaded, multiprocessor system do not necessarily have a well-defined consistent ordering in time with respect to each other. The C# language does not guarantee there to be a consistent ordering. A sequence of writes observed by one thread may be observed to be in a completely different order when observed from another thread, so long as no critical execution point is involved.
The question is therefore unanswerable because it contains an undefined word. Can you give a precise definition of what "before" means to you with respect to events in a multithreaded, multiprocessor system?
The language guarantees that side effects are ordered only with respect to critical execution points, and even then, does not make any strong guarantees when exceptions are involved. Again, to quote from section 3.10:
Execution of a C# program proceeds such that the side effects of each executing thread are preserved at critical execution points. A side effect is defined as a read or write of a volatile field, a write to a non-volatile variable, a write to an external resource, and the throwing of an exception. The critical execution points at which the order of these side effects must be preserved are references to volatile fields, lock statements, and thread creation and termination. [...] The ordering of side effects is preserved with respect to volatile reads and writes.
Additionally, the execution environment need not evaluate part of an expression if it can deduce that that expression’s value is not used and that no needed side effects are produced (including any caused by calling a method or accessing a volatile field). When program execution is interrupted by an asynchronous event (such as an exception thrown by another thread), it is not guaranteed that the observable side effects are visible in the original program order.
Is accessing an object from an array or list always thread safe (in case you use a for loop for enumeration)?
By "thread safe" do you mean that two threads will always observe consistent results when reading from a list? As noted above, the C# language makes very limited guarantees about observation of results when reading from variables. Can you give a precise definition of what "thread safe" means to you with respect to non-volatile reading?
Is enumeration always/ever thread safe?
Even in single threaded scenarios it is illegal to modify a collection while enumerating it. It is certainly unsafe to do so in multithreaded scenarios.
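For the single-threaded case, a minimal sketch of what "illegal" means here: List<T> invalidates its live enumerator when the collection is mutated.
using System;
using System.Collections.Generic;

var list = new List<int> { 1, 2, 3 };
try
{
    foreach (var item in list)
    {
        if (item == 2)
            list.Remove(item); // invalidates the live enumerator
    }
}
catch (InvalidOperationException e)
{
    Console.WriteLine(e.Message); // "Collection was modified; ..."
}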
Can writing and reading to a particular field ever result in a corrupted read (half of the field is changed and half is still unchanged) ?
Yes. I refer you to section 5.5, which states:
Reads and writes of the following data types are atomic: bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types. In addition, reads and writes of enum types with an underlying type in the previous list are also atomic. Reads and writes of other types, including long, ulong, double, and decimal, as well as user-defined types, are not guaranteed to be atomic. Aside from the library functions designed for that purpose, there is no guarantee of atomic read-modify-write, such as in the case of increment or decrement.
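To make the hazard concrete, a hedged sketch of a torn read. This is only expected to show tearing on a 32-bit runtime, where 64-bit writes are not atomic; on 64-bit runtimes it will typically print nothing:
using System;
using System.Threading;

class TornReadDemo
{
    // Deliberately not volatile: on a 32-bit runtime a 64-bit write is two
    // 32-bit writes, so a concurrent reader can observe half of each value.
    static long value;

    static void Main()
    {
        var writer = new Thread(() =>
        {
            for (int i = 0; i < 10_000_000; i++)
                value = (i % 2 == 0) ? 0L : -1L; // all-zero vs. all-one bits
        });
        writer.Start();

        while (writer.IsAlive)
        {
            long snapshot = value; // plain read: may be torn (the JIT is even free to hoist it)
            if (snapshot != 0L && snapshot != -1L)
                Console.WriteLine("Torn read: 0x" + snapshot.ToString("X16"));
        }
        // Interlocked.Read(ref value) would make each read atomic everywhere.
    }
}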
Well, I generally assume everything is thread unsafe. For quick and dirty access to global objects in a threaded environment I use the lock(object) keyword. .NET has an extensive set of synchronization methods, like different semaphores and such.
Reading can be thread-unsafe if there are any threads writing (if a write happens in the middle of a read, for example, the reader may be hit with an exception).
If you must do this, then you can do:
lock (i)
{
    i.GetElementAt(a);
}
This will force thread safety on i (as long as other threads similarly lock i before they use it; only one thread can hold the lock on a given reference type at a time).
In terms of enumeration, I'll refer to the MSDN:
The enumerator does not have exclusive access to the collection; therefore, enumerating through a collection is intrinsically not a thread-safe procedure. To guarantee thread safety during enumeration, you can lock the collection during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization.
An example of no thread safety: several threads incrementing an integer. You can set it up so that there is a predetermined number of increments. What you may observe, though, is that the int has not been incremented as much as you expected. What happens is that two threads may increment from the same value of the integer. This is but one example of the plethora of effects you may observe when working with several threads.
PS
A thread-safe increment is available through Interlocked.Increment(ref i)
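A minimal sketch of the lost-update race described above, alongside the Interlocked fix:
using System;
using System.Threading;
using System.Threading.Tasks;

int unsafeCount = 0, safeCount = 0;

Parallel.For(0, 1_000_000, _ =>
{
    unsafeCount++;                        // read-modify-write race: increments get lost
    Interlocked.Increment(ref safeCount); // atomic: always ends at 1,000,000
});

Console.WriteLine($"unsafe: {unsafeCount}, safe: {safeCount}");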
