public class SampleData
{
    private static readonly Semaphore pool = new Semaphore(0, 1);

    public string Data => getFromFile();

    private static string getFromFile()
    {
        pool.WaitOne();
        var data = File.ReadAllText("somefilepath");
        pool.Release();
        return data;
    }
}
In Program.cs:
var tasks = new List<Task<string>>();
for (int i = 0; i <= 5; i++)
{
    tasks.Add(Task.Run<string>(() => new SampleData().Data));
}
Task.WaitAll(tasks.ToArray());
When I run this, the tasks never complete. Can anyone tell me what the issue is here?
Thanks.
If getFromFile() throws an exception, the semaphore will never be released and Task.WaitAll will wait forever; you need to move pool.Release() into a finally block. That might be the reason for the indefinite wait.
private static string getFromFile()
{
    pool.WaitOne();
    try
    {
        var data = File.ReadAllText("somefilepath");
        return data;
    }
    finally
    {
        pool.Release();
    }
}
First of all
You should consider using SemaphoreSlim if this does not need to work across processes.
Secondly
Regarding your initialization of Semaphore(0,1): if initialCount is less than maximumCount, the effect is the same as if the current thread had called WaitOne (maximumCount minus initialCount) times.
That is to say, with your current configuration, the very first call to pool.WaitOne() will block until someone else calls Release.
E.g.:
private static readonly Semaphore pool = new Semaphore(0, 1);

static void Main(string[] args)
{
    pool.WaitOne();
    Console.WriteLine("This will never get executed");
}
It's likely you want:
Semaphore(1,1)
// or
SemaphoreSlim(1,1)
Lastly
You must always wrap these synchronization primitives in a try/finally, or you will eventually deadlock if an exception gets thrown before Release.
SemaphoreSlim
await pool.WaitAsync();
try
{
    // do something
}
finally
{
    pool.Release();
}
Or if you really need to use a Semaphore
pool.WaitOne();
try
{
    // do something
}
finally
{
    pool.Release();
}
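Putting the pieces together, a minimal corrected sketch of the class from the question (using SemaphoreSlim(1, 1) and a try/finally; "somefilepath" is just the placeholder from the question) could look like this:
public class SampleData
{
    // One slot, initially available, so the first caller can enter immediately
    private static readonly SemaphoreSlim pool = new SemaphoreSlim(1, 1);

    public string Data => getFromFile();

    private static string getFromFile()
    {
        pool.Wait();
        try
        {
            return File.ReadAllText("somefilepath");
        }
        finally
        {
            // always give the slot back, even if the read throws
            pool.Release();
        }
    }
}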
I have this pair of methods:
private static bool loaded = false;
private static bool replaying = false;
private static string wIndex = String.Empty;
private static WorldData wData;
private static ConcurrentDictionary<int, List<long>> streamPosition = new ConcurrentDictionary<int, List<long>>();
private static ConcurrentDictionary<int, List<string>> collectionNames = new ConcurrentDictionary<int, List<string>>();
private static async void StartReplay()
{
    try
    {
        Stopwatch st = new Stopwatch();
        while (loaded)
        {
            while (replaying)
            {
                st.Start();
                for (int i = 0; i < collectionNames.Count; i++)
                {
                    XLogger.Log(toConsole.Debug, Thread.CurrentThread.ManagedThreadId.ToString());
                    wData.CopyCollection(await DeserializeListFromStreamAsync(
                        wData.GetCollectionByName(collectionNames[Thread.CurrentThread.ManagedThreadId][i]),
                        i, new CancellationToken()));
                }
                st.Stop();
                int sleepTime = DebriefingManager.replayRate - (int)st.ElapsedMilliseconds;
                if (sleepTime > 0)
                {
                    Thread.Sleep(sleepTime);
                }
                else
                {
                    XLogger.Log(toConsole.Bad, "Debriefing is slow, video may lag.");
                    XLogger.Log(toFile.System, "Debriefing is slow, video may lag.");
                }
                st.Reset();
            }
        }
    }
    catch (Exception e)
    {
        XLogger.Log(toConsole.Bad, e.ToString());
        XLogger.Log(toFile.Error, e.ToString());
    }
}
private static async Task<ConcurrentDictionary<string, T>> DeserializeListFromStreamAsync<T>(
    ConcurrentDictionary<string, T> coll, int i, CancellationToken cancellationToken)
{
    var dataStructures = new ConcurrentDictionary<string, T>();
    using (FileStream stream = File.OpenRead(DebriefingManager.GetReadingStreamByCollection(coll)))
    {
        stream.Position = streamPosition[Thread.CurrentThread.ManagedThreadId][i];
        using (var streamReader = new MessagePackStreamReader(stream))
        {
            XLogger.Log(toConsole.Debug, $"{Thread.CurrentThread.ManagedThreadId} --- test 1");
            ReadOnlySequence<byte>? msgpack = await streamReader.ReadAsync(cancellationToken);
            XLogger.Log(toConsole.Debug, $"{Thread.CurrentThread.ManagedThreadId} --- test 2");
            if (msgpack is null) return null;
            dataStructures = MessagePackSerializer.Deserialize<ConcurrentDictionary<string, T>>(
                (ReadOnlySequence<byte>)msgpack, cancellationToken: cancellationToken);
        }
        streamPosition[Thread.CurrentThread.ManagedThreadId][i] = stream.Position;
    }
    return dataStructures;
}
StartReplay is run by three different threads.
I need to have a unique id for each thread as I need the List<long> and List<string> to be unique for each one. So I thought about using ConcurrentDictionaries and the Thread.CurrentThread.ManagedThreadId as a key.
The first thing I tried was to use Thread.CurrentThread.ManagedThreadId, but I discovered that after this line: ReadOnlySequence<byte>? msgpack = await streamReader.ReadAsync(cancellationToken); the Id changed. Not knowing whether it was supposed to be immutable, I thought nothing of it and tried to use the [ThreadStatic] attribute instead, but after that same line the value of the tagged variable was reset to 0.
After using the Thread debug window I found out that the threads that ran my code were "killed" after that line and new ones were used to continue the code.
My question then is: why does this happen? How do I prevent it? And might this be impacting performance?
EDIT: I should also add that the method is a modified version of the one in the MessagePack documentation, in the "Multiple MessagePack structures on a single Stream" section.
Why does this happen?
Because this is the nature of the beast (asynchrony). The completion of asynchronous operations happens on a thread that is usually different than the thread that initiated the asynchronous operation. This is especially true for Console applications, that are not equipped with any specialized mechanism that restores the original thread after the await. It would be different if you had, for example, a Windows Forms app. These applications start by installing a specialized scheduler on the UI thread, called WindowsFormsSynchronizationContext, which intervenes after the await, and schedules the continuation back on the UI thread. You don't have such a thing in a Console application, so you are experiencing the effects of asynchrony in its purest form.
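A minimal console sketch that demonstrates this (it just prints the managed thread id before and after an await):
static async Task Main()
{
    Console.WriteLine($"Before await: thread {Thread.CurrentThread.ManagedThreadId}");
    await Task.Delay(100);
    // In a console app there is no SynchronizationContext, so the continuation
    // typically resumes on a different ThreadPool thread.
    Console.WriteLine($"After await: thread {Thread.CurrentThread.ManagedThreadId}");
}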
How do I prevent it?
By not having asynchronous execution flows. Just wait synchronously for all the asynchronous operations, and you'll be on the same thread from start to finish:
ReadOnlySequence<byte>? msgpack = streamReader
.ReadAsync(cancellationToken).GetAwaiter().GetResult();
If you find it tiresome to write .GetAwaiter().GetResult() everywhere, you can shorten it to .Wait2() with these two extension methods:
public static void Wait2(this Task task) => task.GetAwaiter().GetResult();
public static TResult Wait2<TResult>(this Task<TResult> task) => task.GetAwaiter().GetResult();
Might this be impacting performance?
It might impact memory efficiency. Your threads will be blocked during the asynchronous operations, so your program might need more threads than usual. This could have some temporary effect on performance if the ThreadPool becomes saturated and needs to spawn more threads. The thread-injection heuristics are a bit conservative, injecting at most one new thread per second. If this becomes a problem, you can configure the ThreadPool in advance with the ThreadPool.SetMinThreads method.
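For example, a sketch of raising the worker-thread floor at startup (the 16 here is an arbitrary number; pick it based on how many threads you expect to be blocked at once):
ThreadPool.GetMinThreads(out int workers, out int ioThreads);
// Keep the completion-port value as-is and only raise the worker-thread minimum.
ThreadPool.SetMinThreads(workerThreads: 16, completionPortThreads: ioThreads);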
Note: Blocking the current thread with .GetAwaiter().GetResult() is not a good way to write code in general. It violates the common wisdom of not blocking on async code. Here I am just answering directly your question about how to prevent the thread from changing. I am not advising you to actually do it. If you asked for my advice, I would say to rethink everything that you have done so far, and maybe restart your project from scratch.
What is the main difference between ReaderWriterLockSlim and a regular lock?
Since I am trying to use async/await, I can't use a regular lock,
so I am trying to make it work with ReaderWriterLockSlim.
However, I am getting this error:
System.Threading.LockRecursionException: 'Recursive write lock acquisitions not allowed in this mode.'
Isn't ReaderWriterLockSlim supposed to make queued threads/tasks wait until the lock is released? So when the thread currently working in the method is done, isn't the next queued thread/task supposed to enter the method?
Let's say I have 5 tasks that want to access the method written below, and let's name them A, B, C, D, and E.
Let's say B enters the method and _lockRootAdd is now locked. Aren't tasks A, C, D, and E supposed to wait until the lock is released, and once it is released, aren't they supposed to enter the method one by one?
private static readonly ReaderWriterLockSlim _lockRootAdd = new ReaderWriterLockSlim();

private static async Task<int> returnRootDomainId(this string srUrl)
{
    _lockRootAdd.EnterWriteLock();
    try
    {
        using ExampleCrawlerContext _context = new ExampleCrawlerContext();
        string rootDomain = srUrl.NormalizeUrl().returnRootDomainUrl();
        var rootDomainHash = rootDomain.SHA256Hash();
        var result = await _context.RootDomains
            .Where(pr => pr.RootDomainUrlHash == rootDomainHash)
            .FirstOrDefaultAsync().ConfigureAwait(false);
        if (result == null)
        {
            RootDomains _RootDomain = new RootDomains();
            _RootDomain.RootDomainUrlHash = rootDomainHash;
            _context.Add(_RootDomain);
            await _context.SaveChangesAsync().ConfigureAwait(false);
            await addUrl(rootDomain).ConfigureAwait(false);
        }
        var result2 = await _context.RootDomains
            .Where(pr => pr.RootDomainUrlHash == rootDomainHash)
            .FirstOrDefaultAsync().ConfigureAwait(false);
        return result2.RootDomainId;
    }
    finally
    {
        _lockRootAdd.ExitWriteLock();
    }
}
As noted in the comments, ReaderWriterLockSlim doesn't support await. The core problem is that traditional locks are thread-affine, and await can change threads.
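To illustrate the thread-affinity problem, here is a minimal sketch (not your exact code) of why holding a ReaderWriterLockSlim across an await is invalid:
private static readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

private static async Task BrokenAsync()
{
    rwLock.EnterWriteLock();   // the write lock is owned by the calling thread
    await Task.Delay(100);     // the continuation may resume on a different thread pool thread
    rwLock.ExitWriteLock();    // that thread does not own the lock, so this can throw; interleaved
                               // calls on the same pool thread can also trip LockRecursionException
}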
The only out-of-the-box solution for await-compatible mutual exclusion is SemaphoreSlim:
private static readonly SemaphoreSlim _lockRootAdd = new SemaphoreSlim(1, 1);

private static async Task<int> returnRootDomainId(this string srUrl)
{
    await _lockRootAdd.WaitAsync();
    try
    {
        ...
    }
    finally
    {
        _lockRootAdd.Release();
    }
}
I am coding an application that starts and stops a thread on command. If the client receives "start" it starts the thread, and if the client receives "stop" it stops it. My concern is the thread safety of my code and the proper practice for this problem.
My code:
Thread thread = new Thread(doStuff);
...
// wait for the commands
if (command.Equals("start"))
{
    thread.Start();
}
if (command.Equals("stop"))
{
    // this code won't work of course
    thread.Abort();
}

public static void doStuff()
{
    while (true)
    {
        // do stuff
    }
}
The problem is that Abort will not work, because it does not know whether the thread was even started. Also, I need to somehow know whether the thread is actually alive.
Maybe I need to wrap the Abort call and check the Thread.IsAlive status.
What I tried instead was to create a shared variable for the function.
bool running = true;
Thread thread = new Thread(doStuff);
...
// wait for the commands
if (command.Equals("start"))
{
    thread.Start();
}
if (command.Equals("stop"))
{
    // this code won't work of course
    running = false;
}

public void doStuff()
{
    while (running)
    {
        // do stuff
    }
}
This implementation is horrible and causes a crash sometimes within 15 seconds.
Could someone please show me an appropriate way to achieve my goal? Thank you
You cannot call Thread.Abort() safely or at all - depending on your version of the framework.
The way to do this safely is to have co-operative cancellation.
Here's how:
public static void doStuff(CancellationToken ct)
{
    while (true)
    {
        // do stuff
        if (ct.IsCancellationRequested)
        {
            return;
        }
    }
}
And then call it like this:
var cts = new CancellationTokenSource();
Thread thread = new Thread(() => doStuff(cts.Token));

// wait for the commands
if (command.Equals("start"))
{
    thread.Start();
}
if (command.Equals("stop"))
{
    cts.Cancel();
}
Do note that CancellationTokenSource is disposable, so it should be disposed of when you are finished with it.
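For example, a minimal sketch of the stop path (assuming you also want to wait for the worker thread before disposing):
if (command.Equals("stop"))
{
    cts.Cancel();
    thread.Join();   // wait for doStuff to observe the cancellation and return
    cts.Dispose();
}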
Requirement: at any given point in time, only 4 threads should be calling four different functions. As soon as these threads complete, the next available threads should call the same functions.
Current code: this seems to be the worst possible way to achieve something like this. The while(true) loop causes unnecessary CPU spikes, and I could see CPU usage rising to 70% when running the following code.
Question: how can I use an AutoResetEvent to signal the main thread's Process() function to start the next 4 threads once the first 4 worker threads are done processing, without wasting CPU cycles? Please suggest.
public class Demo
{
    object protect = new object();
    private int counter;

    public void Process()
    {
        int maxthread = 4;
        while (true)
        {
            if (counter <= maxthread)
            {
                counter++;
                Thread t = new Thread(new ThreadStart(DoSomething));
                t.Start();
            }
        }
    }

    private void DoSomething()
    {
        try
        {
            Thread.Sleep(50000); // simulate long running process
        }
        finally
        {
            lock (protect)
            {
                counter--;
            }
        }
    }
}
You can use the TPL to achieve what you want in a simpler way. If you run the code below, you'll notice that an entry is written after each thread terminates, and only after all four threads terminate is the "Finished batch" entry written.
This sample uses Task.WaitAll to wait for the completion of all tasks. The code uses an infinite loop for illustration purposes only; you should calculate the hasPendingWork condition based on your requirements so that you only start a new batch of tasks if required.
For example:
private static void Main(string[] args)
{
    bool hasPendingWork = true;
    do
    {
        var tasks = InitiateTasks();

        Task.WaitAll(tasks);

        Console.WriteLine("Finished batch...");
    } while (hasPendingWork);
}

private static Task[] InitiateTasks()
{
    var tasks = new Task[4];
    for (int i = 0; i < tasks.Length; i++)
    {
        int wait = 1000 * i;
        tasks[i] = Task.Factory.StartNew(() =>
        {
            Thread.Sleep(wait);
            Console.WriteLine("Finished waiting: {0}", wait);
        });
    }
    return tasks;
}
One other thing: from the requirement section of your question I'm led to believe that a batch of four new threads should only start after all four previous threads have completed. However, the code you posted is not compatible with that requirement, since it starts a new thread immediately after a previous thread terminates. You should clarify exactly what your requirement is.
UPDATE:
If you want to start a thread immediately after one of the four threads terminates, you can still use the TPL instead of starting new threads explicitly, and you can limit the number of concurrently running tasks to four by using a SemaphoreSlim. For example:
private static SemaphoreSlim TaskController = new SemaphoreSlim(4);

private static void Main(string[] args)
{
    var random = new Random(570);

    while (true)
    {
        // Blocks thread without wasting CPU
        // if the number of resources (4) is exhausted
        TaskController.Wait();

        Task.Factory.StartNew(() =>
        {
            Console.WriteLine("Started");
            Thread.Sleep(random.Next(1000, 3000));
            Console.WriteLine("Completed");
            // Releases a resource meaning TaskController.Wait will unblock
            TaskController.Release();
        });
    }
}
I want to check the state of a Semaphore to see whether it is signalled or not (so that if it is signalled, I can release it). How can I do this?
EDIT1:
I have two threads: one waits on the semaphore and the other should release it. The problem is that the second thread may call Release() several times while the first thread is not waiting on it. So the second thread needs to detect whether calling Release() would generate an error or not (it generates an error if you try to release a semaphore when nobody is waiting on it). How can I do this? I know that I can use a flag to do this, but it is ugly. Is there any better way?
You can check to see if a Semaphore is signaled by calling WaitOne and passing a timeout value of 0 as a parameter. This will cause WaitOne to return immediately with a true or false value indicating whether the semaphore was signaled. This, of course, could change the state of the semaphore which makes it cumbersome to use.
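For illustration, a sketch of that probe (assuming pool is your Semaphore; note that a successful probe actually takes a count, which you then have to give back):
if (pool.WaitOne(0))   // returns immediately; true means a count was available
{
    // the probe consumed a count, so restore it before moving on
    pool.Release();
}
else
{
    // no count was available at the instant of the probe
}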
Another reason why this trick will not help you is because a semaphore is said to be signaled when at least one count is available. It sounds like you want to know when the semaphore has all counts available. The Semaphore class does not have that exact ability. You can use the return value from Release to infer what the count is, but that causes the semaphore to change its state and, of course, it will still throw an exception if the semaphore already had all counts available prior to making the call.
What we need is a semaphore with a release operation that does not throw. This is not terribly difficult. The TryRelease method below will return true if a count became available or false if the semaphore was already at the maximumCount. Either way it will never throw an exception.
public class Semaphore
{
    private int count = 0;
    private int limit = 0;
    private object locker = new object();

    public Semaphore(int initialCount, int maximumCount)
    {
        count = initialCount;
        limit = maximumCount;
    }

    public void Wait()
    {
        lock (locker)
        {
            // Block until at least one count is available, then take it.
            while (count == 0)
            {
                Monitor.Wait(locker);
            }
            count--;
        }
    }

    public bool TryRelease()
    {
        lock (locker)
        {
            // Only add a count if we are below the maximum; never throw.
            if (count < limit)
            {
                count++;
                Monitor.PulseAll(locker);
                return true;
            }
            return false;
        }
    }
}
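For example, a usage sketch matching the two-thread scenario in the question (variable names are illustrative):
var gate = new Semaphore(initialCount: 0, maximumCount: 1);

// Thread 1: waits for the signal
gate.Wait();

// Thread 2: may signal repeatedly; an extra release is simply reported as false
if (!gate.TryRelease())
{
    // the semaphore was already at its maximum count; no exception is thrown
}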
It looks like you need another synchronization object, because Semaphore does not provide a way to check whether it is signalled at a specific moment in time.
Semaphore only allows code that is waiting for the signalled state to be triggered automatically, via the WaitOne()/Release() methods.
You can take a look at the .NET 4 class SemaphoreSlim, which exposes a CurrentCount property; perhaps you can leverage that.
CurrentCount: Gets the number of threads that will be allowed to enter the SemaphoreSlim.
EDIT: Updated due to updated question
As a quick solution you can wrap semaphore.Release() in a try/catch and handle SemaphoreFullException; does that work as you expect?
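That quick fix could look like this (assuming semaphore is the Semaphore that the second thread releases):
try
{
    semaphore.Release();
}
catch (SemaphoreFullException)
{
    // the semaphore was already at its maximum count; ignore the extra release
}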
Using SemaphoreSlim, you can check CurrentCount like this:
int maxCount = 5;
SemaphoreSlim slim = new SemaphoreSlim(5, maxCount);

if (slim.CurrentCount == maxCount)
{
    // generate error
}
else
{
    slim.Release();
}
The way to implement a semaphore using signalling is as follows. It doesn't make sense to be able to query the state outside of this, as it wouldn't be thread-safe.
Create an instance with maxThreads slots, initially all available:
var threadLimit = new Semaphore(maxThreads, maxThreads);
Use the following to wait (block) for a spare slot (in case maxThreads have already been taken):
threadLimit.WaitOne();
Use the following to release a slot:
threadLimit.Release(1);
There's a full example here.
Knowing when all counts are available in a semaphore is useful. I have used the following logic/solution, and I am sharing it here because I haven't seen any other solution addressing this.
// List to add a variable number of handles
private List<WaitHandle> waitHandles;

// Using a mutex to make sure that only one thread/process enters this section
using (new Mutex(....))
{
    waitHandles = new List<WaitHandle>();
    int x = [Maximum number of slots available in the semaphore];

    // In this for loop we spin a thread per each slot of the semaphore.
    // The idea is to consume all the slots in this process,
    // not allowing anything else to enter the code protected by the semaphore.
    for (int i = 0; i < x; i++)
    {
        Thread t = new Thread(new ParameterizedThreadStart(TWorker));
        ManualResetEvent mre = new ManualResetEvent(false);
        waitHandles.Add(mre);
        t.Start(mre);
    }

    WaitHandle.WaitAll(waitHandles.ToArray());

    [... do stuff here, all semaphore slots are blocked now ...]

    // Release all slots
    semaphore.Release(x);
}

private void TWorker(object sObject)
{
    ManualResetEvent mre = (ManualResetEvent)sObject;
    // This is a static Semaphore declared and instantiated somewhere else
    semaphore.WaitOne();
    mre.Set();
}