What value is the ThreadState Property? - c#

This question got me thinking about the .NET equivalent. What value is there to the ThreadState property of the Thread class? In this code example:
if (someThread.ThreadState != System.Threading.ThreadState.Running)
{
someThread = new Thread(SomeMethod);
someThread.Start();
}
The someThread's ThreadState property could switch to Running between the if and the code inside the if, right?

ThreadState is one of those fantastic properties that look promising at first, but once you dive deep into the way in which it functions, you find that it's almost totally useless.
The main problem with ThreadState is that the enumeration names are very misleading. Take ThreadState.Running, for instance. This is a really bad name because it does not actually indicate that the thread is running right now. Instead it indicates the thread was running at some point in the recent past and may or may not still be running.
This may seem trivial, but it's not. It's really easy to take the enumeration names literally and produce really nice-looking code. However well the code reads, though, it's often based on flawed logic.
I really only use this property for debugging purposes.
This value can be useful in a very limited set of scenarios where some other mechanism, such as a lock or WaitHandle, already controls the thread you are looking at. But it's usually better to use another form of synchronization than to rely on this property.
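To sketch the alternative: instead of inspecting ThreadState (which can go stale between the check and the restart), guard the restart decision with a lock and your own state flag. The names here (Worker, StartIfNotRunning) are illustrative, not from the question:

```csharp
using System.Threading;

class Worker
{
    private readonly object _gate = new object();
    private Thread _thread;
    private bool _running; // our own flag, only ever updated under the lock

    // Illustrative: restart the worker only when our own state says it stopped.
    // Unlike ThreadState, _running cannot change between the check and the start,
    // because both happen inside the same lock.
    public void StartIfNotRunning(ThreadStart body)
    {
        lock (_gate)
        {
            if (_running) return;
            _running = true;
            _thread = new Thread(() =>
            {
                try { body(); }
                finally { lock (_gate) { _running = false; } }
            });
            _thread.Start();
        }
    }
}
```

The key point is that the same lock protects both the read of the flag and the decision based on it, which is exactly the atomicity a bare ThreadState check lacks.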

From MSDN:
Important Note:
There are two thread state enumerations,
System.Threading.ThreadState and
System.Diagnostics.ThreadState.
The thread state enumerations are only of interest in a few debugging
scenarios.
Your code should never use thread state to synchronize the activities of
threads.

ThreadState is useful to look at in the debugger, to help understand and debug certain types of blocking/synchronization bugs. For example, you can tell whether a specific thread is blocked by looking at this property in the debugger and seeing ThreadState set to ThreadState.WaitSleepJoin.
That being said, it's something I almost never rely upon. I've had mixed results trying to debug with it, so in general I think it's most often best to pretend that it doesn't exist.
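For illustration, a debug-only helper along these lines can log a thread's state; the value may already be stale by the time it is printed, so this is for diagnostics only, never for synchronization (the helper name is made up):

```csharp
using System;
using System.Threading;

static class Diag
{
    // Debug-only: snapshot a thread's state for logging.
    // A thread blocked in Sleep/Wait/Join will typically report WaitSleepJoin,
    // but the value can change the instant after it is read.
    public static ThreadState DumpState(Thread t)
    {
        var state = t.ThreadState;
        Console.WriteLine($"Thread {t.ManagedThreadId}: {state}");
        return state;
    }
}
```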


Is .NET 6 PriorityQueue thread-safe?

.NET 6 now has PriorityQueue<TElement,TPriority>, which is very useful. The document is not very clear yet (granted, at the time of the question the documentation still covers RC1) about whether it is thread-safe. Two things:
It resides in System.Collections.Generic, and there doesn't seem to be an equivalent in System.Collections.Concurrent.
It does have methods named TryDequeue and TryPeek. Granted, they are probably just methods that don't throw an exception when the queue is empty, but they do give an impression of the concurrent collections.
Can I use it for multithreaded environment without wrapping/locking (for example in an ASP.NET Core website)? Any concurrent equivalent that I am not aware of (I try not to use 3rd-party package if possible)?
With a look at the source code for PriorityQueue.Enqueue, for instance, it is immediately apparent that the code is not thread-safe:
public void Enqueue(TElement element, TPriority priority)
{
// Virtually add the node at the end of the underlying array.
// Note that the node being enqueued does not need to be physically placed
// there at this point, as such an assignment would be redundant.
int currentSize = _size++; // <-- BOOM
The document is not very clear yet
It actually is. Anything in .NET is NOT thread-safe unless it is EXPLICITLY mentioned in the documentation. Period.
Thread safety comes with a (significant) performance overhead, particularly when done generically (i.e., without assuming specific usage patterns). As such, it would be extremely stupid to make everything thread-safe "just in case". Hence the general rule in .NET (since back in 1.0) that NOTHING is thread-safe unless it is explicitly mentioned in the documentation.
As you say, the documentation has no mention of thread safety. As such, it is extremely clear about NOT being thread-safe.
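Since the type makes no thread-safety promise, the usual approach is a small lock-guarded wrapper. This ConcurrentPriorityQueue type is a sketch, not a framework class:

```csharp
using System.Collections.Generic;

// Sketch: serialize all access to the non-thread-safe PriorityQueue
// behind a single lock.
public class ConcurrentPriorityQueue<TElement, TPriority>
{
    private readonly PriorityQueue<TElement, TPriority> _inner = new();
    private readonly object _gate = new();

    public void Enqueue(TElement element, TPriority priority)
    {
        lock (_gate) _inner.Enqueue(element, priority);
    }

    public bool TryDequeue(out TElement element, out TPriority priority)
    {
        lock (_gate) return _inner.TryDequeue(out element, out priority);
    }

    public int Count
    {
        get { lock (_gate) return _inner.Count; }
    }
}
```

Note that compound operations (e.g. "peek, then conditionally dequeue") still need their own lock around both calls; wrapping each method individually only makes single calls safe.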

C# Multithreading: Do I have to use locks when only getting objects from a list/dictionary?

I am currently working on a multithreaded c# application.
In my case, I have a list/dictionary which is assigned and filled on the main thread while the application is starting up. The list will never be modified again. I only use the list to get objects.
Do I have to use locks?
lock(list) { var test = list[0]; }
or can I access the object directly?
I know, if I access the object in the list, the object has to be thread-safe.
Reading is not a problem. But be aware that unexpected behavior can appear if someone else is writing or deleting while you are reading.
If the list is prepared beforehand and never changed afterwards, you can access the objects without locking, which is also faster.
But be really careful to ensure that no modification can happen while you are reading from the collection.
If you also need write operations, then you need synchronization, for example with ReaderWriterLockSlim, or have a look at the System.Collections.Concurrent namespace.
As long as you don't change the content of the list/array, there is no immediate need for locks.
But I would suggest implementing some synchronization (like locks) anyway. Can you be sure you won't change the application in the coming years in a way that modifies the content at runtime after all?
A lock is used to avoid dirty reads when another thread might be writing. Since you won't change the list, it can be read lock-free.
If you really want to guard against unexpected changes for debugging (and get an error when one happens), expose the list through a read-only wrapper such as ReadOnlyCollection<T>; note that const won't work here, since in C# it only applies to compile-time constants.
As others have mentioned, reading is not a problem. But as you said, you are populating this collection at startup, and you have not mentioned at what point you start reading. So there can presumably be unexpected behavior if reads overlap the populating phase. You have to use thread-safe collections for that; for example, you can use a blocking collection for this purpose.
Here is the MSDN article which explains more about thread-safe collections: Link.
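If the collection really is filled once at startup and only read afterwards, one way to make that contract explicit is to publish it through a read-only wrapper. This is a sketch; the Config name and the startup/reader split are assumed from the question:

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

static class Config
{
    private static IReadOnlyList<string> _items;

    // Called once on the main thread during startup,
    // before any reader threads are started.
    public static void Initialize(List<string> loaded)
    {
        _items = new ReadOnlyCollection<string>(loaded);
    }

    // Readers on any thread: no lock needed, because the collection never
    // changes after Initialize and the wrapper forbids modification.
    public static string Get(int index) => _items[index];
}
```

Starting the reader threads only after Initialize returns also gives the necessary memory-visibility guarantee, since Thread.Start establishes a happens-before relationship with the code the new thread runs.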

Is it possible to make a piece of code atomic (C#)?

When I say atomic, I mean the set of instructions will execute without any context switch to another thread in the same process (other kinds of switches have to happen, of course). The only solution I have come up with is to suspend all threads except the currently executing one before the block and resume them afterwards. Is there a more elegant way?
The reason I want to do this is to collect a coherent state of objects running on multiple threads. However, their code cannot be changed (they're already compiled), so I cannot insert mutexes, semaphores, etc. into it. The atomic operation is, of course, the state collection (i.e., copying some variables).
There are some atomic operations in the Interlocked class, but it provides only a few very simple operations. It can't be used to create an entire atomic block of code.
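To illustrate that limit: Interlocked covers single-word operations such as increments and exchanges, but two related updates have to be guarded together with a lock to stay consistent. The Counters type below is a made-up example:

```csharp
using System.Threading;

class Counters
{
    private int _count;
    private long _total;
    private readonly object _gate = new object();

    // Atomic on its own: a single increment needs no lock.
    public void Bump() => Interlocked.Increment(ref _count);

    // NOT expressible with Interlocked alone: _count and _total must be
    // updated together, so the pair is guarded by one lock.
    public void Add(long value)
    {
        lock (_gate)
        {
            _count++;
            _total += value;
        }
    }

    public int Count => Volatile.Read(ref _count);
    public long Total { get { lock (_gate) return _total; } }
}
```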
I'd advise using locking carefully to make sure that your code will still work even when a context switch happens.
Well, you can use locks, but you can't exactly prevent context switching. However, if your threads all lock on the same object, then the waiting threads obviously won't be running, so no context switch into them occurs, since there's nothing for them to run.
You might want to look at this page too.
No. You can surround a block of code with a Monitor to make it thread-safe, but you cannot make general code snippets atomic.
object lck = new object();
lock (lck)
{
    // thread-safe code goes in here
}
No; that would run contrary to the whole idea of multitasking.
Unless you mean very simple operations like incrementing, which are not the subject of your question.
It is possible to obtain a global state from shared memory composed of a collection (array) of atomic single-reader/multi-writer registers. The solution is simple but not trivial. You can read the algorithm published in the paper "Atomic Snapshots of Shared Memory", or read chapter 4 of The Art of Multiprocessor Programming, where you can find ideas for an implementation in Java. Once you are familiar with the idea, you should be able to port it to any other language.

different thread accessing MemoryStream

There's a bit of code which writes data to a MemoryStream object directly into its data buffer, obtained by calling GetBuffer(). It also reads and updates the Position property and calls SetLength() appropriately.
This code works properly 99.9999% of the time. Literally: only once every so many hundreds of thousands of iterations will it barf. The specific problem is that the Position property of the MemoryStream suddenly returns zero instead of the appropriate value.
However, code was added that checks for the 0 and throws an exception, logging MemoryStream properties like Position and Length from a separate method. Those return the correct value. Further logging within the same method shows that when this rare condition occurs, Position is zero only inside this particular method.
Okay. Obviously, this must be a threading issue. And most likely a compiler optimization issue.
However, the nature of this software is that it's organized into "tasks" with a scheduler, so any one of several actual O/S threads may run this code at any given time, but never more than one at a time.
So my guess is that ordinarily the same thread happens to keep getting used for this method, and then on a rare occasion a different thread gets used. (I'll just code up a test of this theory by capturing and comparing the thread ID.)
Then, due to compiler optimizations, the different thread never gets the correct value. It gets a "stale" value.
Ordinarily in a situation like this I would apply the volatile keyword to the variable in question to see if that fixes it. But in this case the variables are inside the MemoryStream object.
Does anyone have any other idea? Or does this mean we have to implement our own MemoryStream object?
Sincerely,
Wayne
EDIT: Just ran a test which counts the total number of calls to this method and counts the number of times the ManagedThreadId is different than the last call. It's almost exactly 50% of the time that it switches threads--alternating between them. So my theory above is almost certainly wrong or the error would occur far more often.
EDIT: This bug occurs so rarely that the program would have to run for nearly a week without the bug before we could feel any confidence it's really gone. Instead, it's better to run experiments to confirm precisely the nature of the problem.
EDIT: Locking currently is handled via lock() statements in each of 5 methods that use the MemoryStream.
(Really need exemplar code to confirm this.)
MemoryStream members are not documented as thread-safe (e.g. Position), so you need to ensure you are only accessing the instance (or any object logically part of the MemoryStream) from one thread at a time.
But MemoryStream is not documented as having thread affinity either, so you can access an instance from different threads, as long as such accesses are not concurrent.
Threading is hard (axiomatic for this Q&A).
I would suggest you have some concurrent access going on, with two threads both accessing the same instance concurrently and this is, occasionally, corrupting some aspect of the instance state.
I would ensure I keep the locking as simple as possible (trying to be extra clever and limiting locking is often a cause of very hard to find bugs) and get things working. Testing on a multi-core system may also help. Only try and optimise the locking if profiling shows there is potential for significant net (application as a whole) gain.
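One way to keep the locking simple, as suggested, is to funnel every MemoryStream access through a single lock object shared by all of the methods that touch it. The class and method names below are illustrative, not from the question:

```csharp
using System.IO;

class StreamHolder
{
    private readonly MemoryStream _stream = new MemoryStream();

    // ONE lock object, used by every method that touches _stream.
    // Splitting this into per-method locks would allow concurrent access
    // and reintroduce the rare corruption described above.
    private readonly object _streamGate = new object();

    public void Append(byte[] data)
    {
        lock (_streamGate)
        {
            _stream.Position = _stream.Length;
            _stream.Write(data, 0, data.Length);
        }
    }

    public long SnapshotPosition()
    {
        lock (_streamGate) return _stream.Position;
    }
}
```

The lock also acts as a memory barrier, so whichever thread runs next sees the Position and Length values the previous thread wrote, which addresses the "stale value" theory without resorting to volatile.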

Is there any circumstance in which calling EnterWriteLock on a ReaderWriterLockSlim should enter a Read lock instead?

I have a seemingly very simple case where I'm using System.Threading.ReaderWriterLockSlim in the 3.5 version of the .NET Framework. I first declare one, as shown here:
Lock Declaration http://odeh.temp.s3.amazonaws.com/lock_declaration.bmp
I put a break point right before the lock is acquired and took a screen shot so you can see (in the watch window) that there are currently no locks held:
pre lock acquisition http://odeh.temp.s3.amazonaws.com/prelock.bmp
Then, after calling EnterWriteLock, as you can see I am holding a Read Lock.
post lock acquisition http://odeh.temp.s3.amazonaws.com/postlock.bmp
This seems like truly unexpected behavior and I can't find it documented anywhere. Does anyone else know why this happens? In other places in my code (earlier), this exact same line of code correctly obtains a write lock. Consistently, however, across multiple systems it instead obtains a read lock at this place in the call stack. Hope I've made this clear and thanks for taking the time to look at this.
--- EDIT---
For those mentioning asserts... this just confuses me further:
post assert http://odeh.temp.s3.amazonaws.com/assert.bmp
I really can't say how it got past this assertion, except that perhaps the Watch window and the Immediate window are wrong (perhaps the value is stored thread-locally, as another poster mentioned). This seems like an obvious case for a volatile variable and a happens-before relationship to be established. Either way, several lines later there is code that asserts for a write lock and does not have one. I have set a breakpoint on the only line of code in the entire program that releases this lock, and it doesn't get called after the acquisition shown here, so that must mean the lock was never actually acquired... right?
This could be a debugger side-effect. The ReaderWriterLockSlim class is very sensitive to the current thread ID (Thread.ManagedThreadId). I can't state for a fact that the debugger will always use the current active thread to evaluate the watch expressions. It usually does, but there might be different behavior, say, if you entered the debugger with a hard break.
Trust what the code does first of all, your Debug.Assert proves the point.
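To rule out debugger-evaluation artifacts, assert the lock state in code, on the same thread that took the lock, rather than in the Watch window. A minimal sketch:

```csharp
using System;
using System.Threading;

class Sample
{
    private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();

    public void WriteSomething()
    {
        _rwLock.EnterWriteLock();
        try
        {
            // IsWriteLockHeld / IsReadLockHeld are per-thread properties,
            // so checking them here reflects the true state for this thread,
            // unlike a watch expression that may be evaluated elsewhere.
            if (!_rwLock.IsWriteLockHeld)
                throw new InvalidOperationException("write lock not held");
            if (_rwLock.IsReadLockHeld)
                throw new InvalidOperationException("unexpected read lock");
            // ... mutate shared state here ...
        }
        finally
        {
            _rwLock.ExitWriteLock();
        }
    }
}
```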
