Is System.Threading.Semaphore weak? - c#

There are two types of semaphores:
Strong semaphore: maintains an internal ordering of waiting threads.
Weak semaphore: provides no ordered access to the critical section, which can cause starvation.
"There is no guaranteed order, such as FIFO or LIFO, in which blocked threads enter the semaphore." (from the MSDN remarks for System.Threading.Semaphore)
I would like to confirm which type of semaphore implementation the .NET Framework provides.

static int count = 0;
static Semaphore writerSem = new Semaphore(0, 10);

static void Main(string[] args)
{
    Thread[] readers = new Thread[10];
    for (int i = 0; i < readers.Length; i++)
    {
        readers[i] = new Thread(new ThreadStart(Reader));
        readers[i].Name = "Reader: " + i;
        readers[i].Start();
    }
    Thread writer = new Thread(new ThreadStart(Writer));
    writer.Start();
    for (int i = 0; i < readers.Length; i++)
    {
        readers[i].Join();
    }
    writer.Join();
}

static void Reader()
{
    while (true)
    {
        writerSem.WaitOne();
        Console.WriteLine(count + " " + Thread.CurrentThread.Name);
    }
}

static void Writer()
{
    while (true)
    {
        count++;
        writerSem.Release(10);
        Thread.Sleep(1000);
    }
}
I tested this by writing the program above; the unordered output is consistent with the MSDN remark and confirms that System.Threading.Semaphore is implemented as a weak semaphore.
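For comparison, a "strong" (FIFO) semaphore can be approximated by queuing one wait handle per blocked thread and waking waiters in arrival order. The following is only a minimal sketch of that idea (my own illustration, not anything shipped with the .NET Framework; the FifoSemaphore name is made up):

using System.Collections.Generic;
using System.Threading;

class FifoSemaphore
{
    private readonly object _gate = new object();
    private readonly Queue<ManualResetEventSlim> _waiters = new Queue<ManualResetEventSlim>();
    private int _count;

    public FifoSemaphore(int initialCount) { _count = initialCount; }

    public void Wait()
    {
        ManualResetEventSlim waiter;
        lock (_gate)
        {
            if (_count > 0) { _count--; return; } // a permit is free: take it immediately
            waiter = new ManualResetEventSlim(false);
            _waiters.Enqueue(waiter);             // otherwise queue up in arrival order
        }
        waiter.Wait();                            // block until Release hands this thread a permit
    }

    public void Release()
    {
        lock (_gate)
        {
            if (_waiters.Count > 0)
                _waiters.Dequeue().Set();         // wake the oldest waiter first (FIFO)
            else
                _count++;
        }
    }
}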

Related

Am I using the semaphore wrong?

I need to do some tasks in parallel using a semaphore. I tried this:
Semaphore sema = new Semaphore(2, 2);
Thread[] Threads = new Thread[5];
for (int k = 0; k < 5; k++) {
    sema.WaitOne();
    Console.WriteLine((k + 1) + " started");
    Threads[k] = new Thread(ThreadMethod1);
    Threads[k].Start(k + 1);
    sema.Release();
}

static void ThreadMethod1(object id) {
    Thread.Sleep(50);
    Console.WriteLine(id + " completed");
}
Output looks like:
1 started
2 started
3 started
4 started
5 started
1 completed
2 completed
4 completed
3 completed
5 completed
Isn't the semaphore supposed to let only 2 threads run? I don't get it; am I doing something wrong?
You are entering/exiting the semaphore in the "main" thread. That accomplishes nothing, because in each iteration of the loop you both enter and exit it. In the modified example below, you enter the semaphore in the main thread and the worker thread exits it when it finishes.
Note that I had to pass the semaphore to the worker thread (I used a Tuple, but other approaches work too).
static void Main(string[] args) {
    Semaphore sema = new Semaphore(2, 2);
    Thread[] Threads = new Thread[5];
    for (int k = 0; k < 5; k++) {
        sema.WaitOne();
        Console.WriteLine((k + 1) + " started");
        Threads[k] = new Thread(ThreadMethod1);
        Threads[k].Start(Tuple.Create(k + 1, sema));
    }
}

static void ThreadMethod1(object tuple) {
    Tuple<int, Semaphore> tuple2 = (Tuple<int, Semaphore>)tuple;
    Thread.Sleep(50);
    Console.WriteLine(tuple2.Item1 + " completed");
    tuple2.Item2.Release();
}
You could move the sema.WaitOne "inside" ThreadMethod1, but the behavior would be different: all the threads would be created up front and would wait, with only two at a time doing the real work (that is what the next answer shows). As written above, at most two worker threads exist (and do the work) at any time.
All you have to do is move the semaphore operations out of the main thread. A small correction to your code solves it:
public static Semaphore sema = new Semaphore(2, 2);

static void Main(string[] args)
{
    Thread[] Threads = new Thread[5];
    for (int k = 0; k < 5; k++)
    {
        Console.WriteLine((k + 1) + " started");
        Threads[k] = new Thread(ThreadMethod1);
        Threads[k].Start(k + 1);
    }
}

static void ThreadMethod1(object id)
{
    sema.WaitOne();
    Thread.Sleep(1000);
    Console.WriteLine(id + " completed");
    sema.Release();
}

Why does this C# code throw SemaphoreFullException?

I have the following code, which throws a SemaphoreFullException, and I don't understand why.
If I change _semaphore = new SemaphoreSlim(0, 2) to
_semaphore = new SemaphoreSlim(0, int.MaxValue)
then everything works fine.
Can anyone please find the fault in this code and explain it to me?
class BlockingQueue<T>
{
    private Queue<T> _queue = new Queue<T>();
    private SemaphoreSlim _semaphore = new SemaphoreSlim(0, 2);

    public void Enqueue(T data)
    {
        if (data == null) throw new ArgumentNullException("data");
        lock (_queue)
        {
            _queue.Enqueue(data);
        }
        _semaphore.Release();
    }

    public T Dequeue()
    {
        _semaphore.Wait();
        lock (_queue)
        {
            return _queue.Dequeue();
        }
    }
}

public class Test
{
    private static BlockingQueue<string> _bq = new BlockingQueue<string>();

    public static void Main()
    {
        for (int i = 0; i < 100; i++)
        {
            _bq.Enqueue("item-" + i);
        }
        for (int i = 0; i < 5; i++)
        {
            Thread t = new Thread(Produce);
            t.Start();
        }
        for (int i = 0; i < 100; i++)
        {
            Thread t = new Thread(Consume);
            t.Start();
        }
        Console.ReadLine();
    }

    private static Random _random = new Random();

    private static void Produce()
    {
        while (true)
        {
            _bq.Enqueue("item-" + _random.Next());
            Thread.Sleep(2000);
        }
    }

    private static void Consume()
    {
        while (true)
        {
            Console.WriteLine("Consumed-" + _bq.Dequeue());
            Thread.Sleep(1000);
        }
    }
}
If you want to use the semaphore to control the number of concurrent threads, you're using it wrong. You should acquire the semaphore when you dequeue an item, and release the semaphore when the thread is done processing that item.
What you have right now is a system that allows only two un-consumed items in the queue at any one time. Your semaphore starts with a count of 0 and a maximum of 2, and each Release in Enqueue increments the count. After two items the count has reached the maximum, and the next Release throws a SemaphoreFullException.
If you really want to do this with a semaphore, you need to remove the Release call from the Enqueue method and add a Release method to the BlockingQueue class. You would then write:
private static void Consume()
{
    while (true)
    {
        Console.WriteLine("Consumed-" + _bq.Dequeue());
        Thread.Sleep(1000);
        _bq.Release();
    }
}
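For completeness, here is roughly what the modified BlockingQueue would look like under that change (a sketch based on my reading of the answer: the semaphore now starts with two available slots and only limits concurrent processing; note that Dequeue can now throw on an empty queue, which is part of why this is not a very good solution):

class BlockingQueue<T>
{
    private Queue<T> _queue = new Queue<T>();
    // Starts full: the semaphore now limits how many items are processed
    // concurrently (2) instead of counting items waiting in the queue.
    private SemaphoreSlim _semaphore = new SemaphoreSlim(2, 2);

    public void Enqueue(T data)
    {
        if (data == null) throw new ArgumentNullException("data");
        lock (_queue)
        {
            _queue.Enqueue(data);
        }
        // No Release here any more.
    }

    public T Dequeue()
    {
        _semaphore.Wait();             // acquire a processing slot
        lock (_queue)
        {
            return _queue.Dequeue();   // throws if the queue is empty
        }
    }

    public void Release()
    {
        _semaphore.Release();          // called by the consumer when it is done with the item
    }
}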
That would make your code work, but it's not a very good solution. A much better solution would be to use BlockingCollection<T> and two persistent consumers. Something like:
private BlockingCollection<int> bq = new BlockingCollection<int>();

void Test()
{
    // create two consumers
    var c1 = new Thread(Consume);
    var c2 = new Thread(Consume);
    c1.Start();
    c2.Start();

    // produce
    for (var i = 0; i < 100; ++i)
    {
        bq.Add(i);
    }
    bq.CompleteAdding();
    c1.Join();
    c2.Join();
}

void Consume()
{
    foreach (var i in bq.GetConsumingEnumerable())
    {
        Console.WriteLine("Consumed-" + i);
        Thread.Sleep(1000);
    }
}
That gives you two persistent threads consuming the items. The benefit is that you avoid the cost of spinning up a new thread (or having the runtime assign a pool thread) for each item. Instead, the threads do non-busy waits on the queue. You also don't have to worry about explicit locking, etc. The code is simpler, more robust, and much less likely to contain a bug.

Multithreaded code to do work using a configured number of threads

I want to create a multithreaded application. I want to start a configured number of threads and have each thread do some work. I want to know whether this is the right approach or whether there is a better one. All the threads need to execute asynchronously.
public static bool keepThreadsAlive = false;

static void Main(string[] args)
{
    Program pgm = new Program();
    int noOfThreads = 4;
    keepThreadsAlive = true;
    for (int i = 1; i <= noOfThreads; i++)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), (object)i);
    }
    System.Console.ReadLine();
    StopAllThreads();
    System.Console.ReadLine();
}

private static void DoWork(object threadNumber)
{
    int threadNum = (int)threadNumber;
    int counter = 1;
    while (keepThreadsAlive)
    {
        counter = ProcessACK(threadNum, counter);
    }
}

private static int ProcessACK(int threadNum, int counter)
{
    System.Console.WriteLine("Thread {0} count {1}", threadNum, counter++);
    Random ran = new Random();
    int randomNumber = ran.Next(5000, 100000);
    for (int i = 0; i < randomNumber; i++) ;
    Thread.Sleep(2000);
    return counter;
}
As others have pointed out, the methods you are using are dated and not as elegant as the more modern C# approaches to accomplishing the same tasks.
Have a look at System.Threading.Tasks for an overview of what is available to you these days. There is even a way to set the maximum number of threads used in a parallel operation. Here is a simple (pseudocode) example:
Parallel.ForEach(someListOfItems, new ParallelOptions { MaxDegreeOfParallelism = 8 }, item =>
{
    // do stuff for each item in "someListOfItems" using a maximum of 8 threads.
});
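To mirror the original keep-alive worker loop more directly, a Task-based version could look roughly like this (a sketch only; replacing the keepThreadsAlive flag with a CancellationToken and the DoWork/worker names are my assumptions, not part of the original code):

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        int noOfThreads = 4;
        var cts = new CancellationTokenSource();

        // Start the configured number of long-running workers.
        var tasks = new Task[noOfThreads];
        for (int i = 0; i < noOfThreads; i++)
        {
            int workerId = i + 1;
            tasks[i] = Task.Factory.StartNew(
                () => DoWork(workerId, cts.Token),
                TaskCreationOptions.LongRunning);
        }

        Console.ReadLine();
        cts.Cancel();           // replaces setting keepThreadsAlive = false
        Task.WaitAll(tasks);    // wait for the workers to observe cancellation
    }

    static void DoWork(int workerId, CancellationToken token)
    {
        int counter = 1;
        while (!token.IsCancellationRequested)
        {
            Console.WriteLine("Thread {0} count {1}", workerId, counter++);
            Thread.Sleep(2000);
        }
    }
}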
Hope this helps.

C# using loop to start threads and pass parameters

In the sample code below, I use lambda functions to make 3 threads do different things. My goal is to make the thread count configurable, so I was thinking of using a loop to start the threads. But I always get a "static function can't call non-static members" error. Can the community help me or direct me to a tutorial? Thanks a lot!
My Code:
internal class FeedClient
{
    private static void Main(string[] args)
    {
        int iteration = 10;
        int ranSleepTime = 1000;
        var obj = new MyClass();
        var threads = new Thread[3];

        (threads[0] = new Thread(() =>
        {
            Random random = new System.Random();
            for (int i = 0; i < iteration; i++)
            {
                obj.MyMethod("my string 1");
                Thread.Sleep(random.Next(ranSleepTime));
            }
        })).Start();

        (threads[1] = new Thread(() =>
        {
            Random random = new System.Random();
            for (int i = 0; i < iteration; i++)
            {
                obj.MyMethod("my string 2");
                Thread.Sleep(random.Next(ranSleepTime));
            }
        })).Start();

        (threads[2] = new Thread(() =>
        {
            Random random = new System.Random();
            for (int i = 0; i < iteration; i++)
            {
                obj.MyMethod("my string 3");
                Thread.Sleep(random.Next(ranSleepTime));
            }
        })).Start();

        foreach (Thread thread in threads)
        {
            thread.Join();
        }
        obj.Close(false);
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }
}
Desired look:
for (int i = 0; i < 3; i++) {
    threads[i] = new Thread(func);   // func is the lambda function
    threads[i].Start(myData[i]);     // myData[] may be a string array
}
The error message seems to indicate that you are attempting to use an instance member from a static method somewhere. Naturally that is not allowed since a static method does not have a this reference. Here is how I would refactor your code.
public static void Main()
{
    string[] myData = GetStringArray();
    int iteration = 10;
    int ranSleepTime = 1000;
    var obj = new MyClass();
    var threads = new Thread[myData.Length];
    for (int i = 0; i < threads.Length; i++)
    {
        int captured = i; // This is required to avoid capturing the loop variable.
        threads[i] = new Thread(
            () =>
            {
                var random = new Random();
                for (int n = 0; n < iteration; n++) // renamed from 'i' so it does not collide with the outer loop variable
                {
                    obj.MyMethod(myData[captured]);
                    Thread.Sleep(random.Next(ranSleepTime));
                }
            });
        threads[i].Start();
    }
    foreach (Thread thread in threads)
    {
        thread.Join();
    }
    obj.Close(false);
}
I must mention, however, that creating new threads in an unbounded loop is generally undesirable. If the loop has a tight bound then it may be acceptable, but I would have to get a better understanding of the problem before making any further comments on this point.
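If you prefer the literal shape of the "desired look" above, a parameterized-thread variant is also possible. This is only a sketch (my own illustration; MyClass, MyMethod, and Close come from the question, and holding obj in a static field is my assumption):

using System;
using System.Threading;

internal class FeedClient
{
    private static MyClass obj = new MyClass();   // shared instance, as in the question
    private static int iteration = 10;
    private static int ranSleepTime = 1000;

    private static void Main(string[] args)
    {
        string[] myData = { "my string 1", "my string 2", "my string 3" };
        var threads = new Thread[myData.Length];

        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(ThreadBody);   // ParameterizedThreadStart overload
            threads[i].Start(myData[i]);           // pass each thread its own string
        }

        foreach (Thread thread in threads)
        {
            thread.Join();
        }
        obj.Close(false);
    }

    private static void ThreadBody(object data)
    {
        string s = (string)data;
        var random = new Random();
        for (int n = 0; n < iteration; n++)
        {
            obj.MyMethod(s);
            Thread.Sleep(random.Next(ranSleepTime));
        }
    }
}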

ThreadQueue problems in "Accelerated C# 2008"

The thread-queue example from the book "Accelerated C# 2008" (the CrudeThreadPool class) does not work correctly. If I put a long job in WorkFunction(), on a 2-processor machine the next task does not start before the first one finishes. How can I solve this problem? I want to load the processor to 100 percent.
public class CrudeThreadPool
{
    static readonly int MAX_WORK_THREADS = 4;
    static readonly int WAIT_TIMEOUT = 2000;

    public delegate void WorkDelegate();

    public CrudeThreadPool()
    {
        stop = 0;
        workLock = new Object();
        workQueue = new Queue();
        threads = new Thread[MAX_WORK_THREADS];
        for (int i = 0; i < MAX_WORK_THREADS; ++i)
        {
            threads[i] = new Thread(new ThreadStart(this.ThreadFunc));
            threads[i].Start();
        }
    }

    private void ThreadFunc()
    {
        lock (workLock)
        {
            int shouldStop = 0;
            do
            {
                shouldStop = Interlocked.Exchange(ref stop, stop);
                if (shouldStop == 0)
                {
                    WorkDelegate workItem = null;
                    if (Monitor.Wait(workLock, WAIT_TIMEOUT))
                    {
                        // Process the item on the front of the queue
                        lock (workQueue)
                        {
                            workItem = (WorkDelegate)workQueue.Dequeue();
                        }
                        workItem();
                    }
                }
            } while (shouldStop == 0);
        }
    }

    public void SubmitWorkItem(WorkDelegate item)
    {
        lock (workLock)
        {
            lock (workQueue)
            {
                workQueue.Enqueue(item);
            }
            Monitor.Pulse(workLock);
        }
    }

    public void Shutdown()
    {
        Interlocked.Exchange(ref stop, 1);
    }

    private Queue workQueue;
    private Object workLock;
    private Thread[] threads;
    private int stop;
}
public class EntryPoint
{
    static void WorkFunction()
    {
        Console.WriteLine("WorkFunction() called on Thread {0}", Thread.CurrentThread.GetHashCode());
        // some long job
        double s = 0;
        for (int i = 0; i < 100000000; i++)
            s += Math.Sin(i);
    }

    static void Main()
    {
        CrudeThreadPool pool = new CrudeThreadPool();
        for (int i = 0; i < 10; ++i)
        {
            pool.SubmitWorkItem(
                new CrudeThreadPool.WorkDelegate(EntryPoint.WorkFunction));
        }
        pool.Shutdown();
    }
}
I can see two problems:
Inside ThreadFunc() you take lock (workLock) for the duration of the method, meaning your thread pool is no longer asynchronous (a possible rework is sketched after this list).
In the Main() method, you shut down the thread pool without waiting for it to finish. Oddly enough, that is why it appears to work at all right now: each ThreadFunc stops after one loop.
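To address the first point, the pool thread only needs to hold workLock while it waits and dequeues; the work item itself should run outside the lock so the other pool threads can pick up items in parallel. A minimal sketch of such a ThreadFunc (my own adaptation, not the book's code):

private void ThreadFunc()
{
    // Loop until Shutdown() sets the stop flag.
    while (Interlocked.CompareExchange(ref stop, 0, 0) == 0)
    {
        WorkDelegate workItem = null;
        lock (workLock)
        {
            // Wait for a pulse (or the timeout) while holding the lock, and
            // dequeue while still inside it so enqueue/dequeue stay consistent.
            if (Monitor.Wait(workLock, WAIT_TIMEOUT) && workQueue.Count > 0)
            {
                workItem = (WorkDelegate)workQueue.Dequeue();
            }
        } // The lock is released here, before the work item runs.

        if (workItem != null)
        {
            workItem();
        }
    }
}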
It looks to me like the work item is executed while workLock is still held, which is basically going to serialize all the work.
If at all possible, I suggest you start using the Parallel Extensions framework in .NET 4, which has obviously had rather more time spent on it. Otherwise, there's the existing thread pool in the framework, and there are other implementations around if you're willing to have a look. I have one in MiscUtil, although I haven't looked at the code for quite a while; it's pretty primitive.
