Concurrent processing with a limit on thread count in an infinite loop - C#

I wrote an infinite loop that pulls from a queue (RabbitMQ) and processes each pulled item in concurrent threads, with a cap on the number of running threads.
Now I need a way to enforce that limit on the thread count. Here is an example of my loop:
public class ThreadWorker<T>
{
public List<T> _lst;
private int _threadCount;
private int _maxThreadCount;
public ThreadWorker(List<T> lst, int maxThreadCount)
{
_lst = lst;
_maxThreadCount = maxThreadCount;
}
public void Start()
{
var i = 0;
while (i < _lst.Count)
{
var pull = _lst[i];
i++;
Process(pull);
}
}
public void Process(T item)
{
if (_threadCount > _maxThreadCount)
{
//wait any opration be done
// How to wait for one thread?
Interlocked.Decrement(ref _threadCount);
}
var t = new Thread(() => Opration(item));
t.Start();
Interlocked.Increment(ref _threadCount);
}
public void Opration(T item)
{
Console.WriteLine(item.ToString());
}
}
Notice that when I use a semaphore for the limit, the Start() method doesn't wait for the running threads. Once _maxThreadCount threads are running, my loop should wait until one of them releases, and only then start a new thread for concurrent processing.

I would use a semaphore this way to control the number of threads:
public class ThreadWorker<T>
{
SemaphoreSlim _sem = null;
List<T> _lst;
public ThreadWorker(List<T> lst, int maxThreadCount)
{
_lst = lst;
_sem = new SemaphoreSlim(maxThreadCount);
}
public void Start()
{
var i = 0;
while (i < _lst.Count)
{
var pull = _lst[i];
i++;
_sem.Wait(); /*****/
Process(pull);
}
}
public void Process(T item)
{
var t = new Thread(() => Opration(item));
t.Start();
}
public void Opration(T item)
{
Console.WriteLine(item.ToString());
_sem.Release(); /*****/
}
}
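If Start() should also block until every in-flight operation has completed (which is what the question asks for), one sketch of that, assuming the worker keeps maxThreadCount in a field, is to re-acquire every permit after the loop:
using System;
using System.Collections.Generic;
using System.Threading;

public class ThreadWorker<T>
{
    private readonly SemaphoreSlim _sem;
    private readonly List<T> _lst;
    private readonly int _maxThreadCount;

    public ThreadWorker(List<T> lst, int maxThreadCount)
    {
        _lst = lst;
        _maxThreadCount = maxThreadCount;
        _sem = new SemaphoreSlim(maxThreadCount);
    }

    public void Start()
    {
        foreach (var item in _lst)
        {
            _sem.Wait();                  // blocks while _maxThreadCount operations are already running
            var local = item;
            new Thread(() =>
            {
                try { Operation(local); }
                finally { _sem.Release(); }   // release even if the operation throws
            }).Start();
        }

        // Drain: once all permits can be re-acquired, every started thread has released its permit.
        for (var n = 0; n < _maxThreadCount; n++)
            _sem.Wait();
    }

    private void Operation(T item)
    {
        Console.WriteLine(item);
    }
}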

Related

Thread executes "slowly"

I am starting out with threads and, for the sake of learning, wrote the following simple program, which would later be used to calculate a formula about 100,000 times (it is a relatively simple formula, but it takes an iterated range of values).
The problem is that I expected every thread to execute in almost no time, and thus the whole program to finish nearly immediately, but in fact everything runs very slowly (about 10 s)...
static readonly double TotalIterations = 1000;
public static Iterations ActualIterations = new Iterations();
public static void Main()
{
var par1 = "foo";
var par2 = "boo";
var par3 = 3;
for (int i = 0; i < TotalIterations; i++)
{
new Thread(() => new Calculations().Calculate(par1, par2, par3)).Start();
}
AwaitThreads();
}
static void AwaitThreads()
{
Console.WriteLine("Awaiting threads to finished...");
while (true)
{
lock (ActualIterations)
{
if (ActualIterations.Progress() == TotalIterations) break;
}
Thread.Sleep(1 * 1000);
}
Console.WriteLine("All threads finished!");
}
public class Calculations {
public bool Calculate(string par1, string par2, int par3)
{
// ...
bool result = false;
lock (ActualIterations)
{
ActualIterations.Incr();
}
return result;
}
}
public class Iterations
{
int progress = 0;
public void Incr()
{
progress++;
}
public int Progress()
{
return progress;
}
}
I also tried using a ThreadPool like this, but there was no improvement...
static readonly double TotalIterations = 1000;
static string par1 = "foo";
static string par2 = "boo";
static int par3 = 3;
public static Iterations ActualIterations = new Iterations();
public static void Main()
{
ThreadPool.QueueUserWorkItem(MyThreadPool);
AwaitThreads();
}
static void AwaitThreads()
{
Console.WriteLine("Awaiting threads to finished...");
while (true)
{
lock (ActualIterations)
{
if (ActualIterations.Progress() == TotalIterations) break;
}
Thread.Sleep(1 * 1000);
}
Console.WriteLine("All threads finished!");
}
static void MyThreadPool(Object stateInfo)
{
for (int i = 0; i < TotalIterations; i++)
{
new Thread(() => new Calculations().Calculate(par1, par2, par3)).Start();
}
}
public class Calculations {
public bool Calculate(string par1, string par2, int par3)
{
// ...
bool result = false;
lock (ActualIterations)
{
ActualIterations.Incr();
}
return result;
}
}
public class Iterations
{
int progress = 0;
public void Incr()
{
progress++;
}
public int Progress()
{
return progress;
}
}
When I quit using threads in this example and use a static method, executing it sequentially in my for loop, the program finishes in 1s...
Can anybody enlighten me what I am doing wrong here with those threads?
The problem with it is that I expected every thread to execute in almost no time
Right. You're ignoring the fact that creating a new thread is a relatively expensive operation. Far, far more expensive than "acquiring a lock and incrementing an integer" which is the work you're doing in the thread.
To give a real-world comparison, it's a little like ordering a new car, waiting for it to be delivered, and then driving it 1 km. That's going to be slower than just walking the 1 km.
Using the thread pool would be faster, but you're not using it correctly - you're launching one thread pool task which then creates all the other threads again.
I would encourage you to look at using Task<T> instead, which normally uses the thread pool under the hood, and is a generally more modern abstraction for this sort of work.
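A minimal sketch of that suggestion (not from the original answer; the names and the dummy calculation are illustrative): queue the calculations as Task<bool> instances, then wait for all of them and read the results.
using System;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static bool Calculate(string par1, string par2, int par3)
    {
        // placeholder for the real formula
        return par3 % 2 == 0;
    }

    static void Main()
    {
        var tasks = Enumerable.Range(0, 1000)
            .Select(i => Task.Run(() => Calculate("foo", "boo", i)))
            .ToArray();

        Task.WaitAll(tasks);                            // block until every task has finished
        Console.WriteLine(tasks.Count(t => t.Result));  // results come back through Task<bool>.Result
    }
}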
This is one way to proceed with what you wanted to do:
class Program
{
static void Main(string[] args)
{
List<Task> tasks = new List<Task>();
for (int i = 0; i < 1000; i++)
{
tasks.Add(Task.Run(() =>
{
Console.WriteLine("Calculations " + DateTime.Now);
}));
}
Task.WaitAll(tasks.ToArray());
}
}
Tasks are optimized and programmer-friendly when you need to work with threads.
Another piece of advice: create an object used only for locking purposes, for example:
class Program
{
private static Object _locker = new Object();
static void Main(string[] args)
{
List<Task> tasks = new List<Task>();
for (int i = 0; i < 1000; i++)
{
tasks.Add(Task.Run(() =>
{
lock (_locker)
{
Console.WriteLine("Calculations " + DateTime.Now);
}
}));
}
Task.WaitAll(tasks.ToArray());
}
}
I see a problem in the AwaitThreads method.
It takes the same lock (ActualIterations) as the worker threads, so the workers additionally have to wait on that shared resource.
Also (as @Euphoric mentioned), the per-thread work you have shown is just a single increment, and it uses a resource shared between all threads.
You should restructure it and try to avoid sharing state between threads.
For example, if you need to run some calculation over a huge data array, feed each thread its own part of the data and then wait for all tasks to finish. That is what Task and Task.WaitAll are for.
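A sketch of that partitioning idea (the array contents, chunk count, and formula are made up): each task owns one slice of the data, so no lock is needed inside the hot loop, and the caller waits exactly once at the end.
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var data = new double[100000];
        const int chunks = 4;
        int chunkSize = data.Length / chunks;

        var tasks = new Task[chunks];
        for (int c = 0; c < chunks; c++)
        {
            int start = c * chunkSize;
            int end = (c == chunks - 1) ? data.Length : start + chunkSize;
            tasks[c] = Task.Run(() =>
            {
                for (int i = start; i < end; i++)
                    data[i] = Math.Sqrt(i);   // placeholder for the real formula
            });
        }

        Task.WaitAll(tasks);   // wait once at the end instead of polling with Thread.Sleep
        Console.WriteLine("All chunks processed");
    }
}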

How to wait for a method to finish on another thread?

I am new to multi-thread programming in C#. My problem is that I don't know how to wait for a method that is being run on another thread to finish, before it can continue to the next line. For example, something like this
public class A
{
int i;
public A()
{
i = 0;
}
protected void RunLoop()
{
while(i < 100)
{
i++;
}
}
public void Start()
{
TimerResolution.TimeBeginPeriod(1);
runThread = new Thread(new ThreadStart(RunLoop));
running = true;
runThread.Start();
}
}
public class B
{
A classAInstance = new A();
classAInstance.Start();
Console.WriteLine(i);
}
Right now, it prints 0 on the console, which is not what I want (i.e. i = 100).
What is the best way to do this? BTW, I don't have access to the runThread that is created in class A
Thanks.
EDIT:
It was a bit difficult to solve this problem without modifying a lot of code. Therefore, we ended up adding a condition to public void Start() that decides whether to run the RunLoop in a separate thread or not. The condition is based on an enum field.
public void Start()
{
TimerResolution.TimeBeginPeriod(1);
running = true;
if (runningMode == RunningMode.Asynchronous)
{
runThread = new Thread(new ThreadStart(RunLoop));
runThread.Start();
}
else
{
RunLoop();
}
}
And
public enum RunningMode { Asynchronous, Synchronous };
Thanks everyone for help.
The preferred method is to use the Task Parallel Library (TPL) and use Task with await.
If you must use Threads, then use a ManualResetEvent or ManualResetEventSlim to signal the end of a method.
void Main()
{
var a = new A();
a.Start();
a.FinishedEvent.WaitOne();
Console.WriteLine(a.Index);
}
// Define other methods and classes here
public class A
{
ManualResetEvent mre = new ManualResetEvent(false);
int i;
public EventWaitHandle FinishedEvent
{
get { return mre; }
}
public int Index
{
get { return i; }
}
public A()
{
i = 0;
}
protected void RunLoop()
{
while (i < 1000)
{
i++;
}
mre.Set();
}
public void Start()
{
var runThread = new Thread(new ThreadStart(RunLoop));
runThread.Start();
}
}
Your life would be so much better with tasks.
Your code could be this simple:
var task = Task.Factory.StartNew(() =>
{
var i = 0;
while (i < 100)
{
i++;
}
return i;
});
Console.WriteLine(task.Result);
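If the calling method can be made async, the same idea reads even more directly with await (a sketch, not from the original answer):
static async Task RunAsync()
{
    var i = await Task.Run(() =>
    {
        var n = 0;
        while (n < 100)
        {
            n++;
        }
        return n;
    });
    Console.WriteLine(i);   // runs only after the task has completed, so it prints 100
}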
I like to use Monitor.Wait() and Monitor.Pulse() in conjunction with the lock statement. It works, but you must be careful with this technique: a Pulse that fires before anyone is waiting is lost, which is why the caller below takes the lock before calling Start().
I've added some changes to your code to demonstrate it. The code below prints i == 100, as you want.
public class A
{
int i;
public object SyncObject
{ get; private set; }
public A()
{
SyncObject = new object();
i = 0;
}
protected void RunLoop()
{
while (i < 100)
{
i++;
}
lock (SyncObject)
{
Monitor.Pulse(SyncObject);
}
}
public void Start()
{
var runThread = new Thread(new ThreadStart(RunLoop));
runThread.Start();
}
public void PrintI()
{
Console.WriteLine("I == " + i);
}
}
public class B
{
public static void Run()
{
A classAInstance = new A();
lock (classAInstance.SyncObject)
{
classAInstance.Start();
Monitor.Wait(classAInstance.SyncObject);
}
classAInstance.PrintI();
}
}

How to clean the queue from all of the processed items?

There is a global queue of objects that have to be sent to your customers. The queue is continually filled with new elements (one element per second), so you have to send constantly. Every client is served in a separate thread. After an object has been sent to all clients it must be removed from the queue. It seems easy, but how do you know that all of the threads have already sent a particular object?
I do everything over sockets.
Thread threadForClientSending = new Thread(delegate()
{
while (true)
{
try
{
List<SymbolsTable> [] localArrayList ;
//main.que -- global queue
foreach (var eachlist in localArrayList = main.que.ToArray())
{
foreach (var item in eachlist)
{
byte[] message =
encoding.GetBytes((item.GetHashCode()%100).ToString() + " "+item.SDate +"\n\r");
client.Send(message);
}
Thread.Sleep(500);
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
break;
}
}
});
This code sends everything to everyone, but it doesn't clean the queue.
How to clean the queue from all of the processed items?
public static ConcurrentQueue<List<SymbolsTable>> que = new ConcurrentQueue<List<SymbolsTable>>();
public partial class SymbolsTable
{
public string SName { get; set; }
public Nullable<double> SPrice { get; set; }
public Nullable<int> SVolume { get; set; }
public System.DateTime SDate { get; set; }
}
NOTE: I highly recommend defining a local queue for each client (task in the server) to achieve maximum concurrency and cleaner code.
You can achieve what you need with a CountdownEvent, which holds the threads together for each item. Its count should be set to the number of workers that send data to clients.
Here is how we can do it:
Definitions:
public static ConcurrentQueue<List<SymbolsTable>> que = new ConcurrentQueue<List<SymbolsTable>>();
public static CountdownEvent counter = new CountdownEvent(NumberOfThreads);
private const int NumberOfThreads = 3; //for example we have 3 clients here
Thread:
Thread threadForClientSending = new Thread(delegate()
{
while (true)
{
try
{
List<SymbolsTable> list;
var peek = que.TryPeek(out list);
if (!peek)
{
Thread.Sleep(100); //nothing to pull
continue;
}
foreach (var item in list)
{
byte[] message =
encoding.GetBytes((item.GetHashCode() % 100).ToString() + " " + item.SDate + "\n\r");
client.Send(message);
Thread.Sleep(500);
}
counter.Signal(); //this thread signals that it has finished the current item
counter.Wait(); //and waits here until every other worker has signalled too
lock (que)
{
List<SymbolsTable> lastList;
if (que.TryPeek(out lastList) && lastList.Equals(list))
{
//just one of the threads would dequeue the item
que.TryDequeue(out lastList);
counter.Reset(); //reset counter for next iteration
}
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
break;
}
}
});
Here we used TryPeek so we only read the item at the head of the queue without removing it. Then, at the end, in:
lock (que)
{
List<SymbolsTable> lastList;
if (que.TryPeek(out lastList) && lastList.Equals(list))
{
//just one of the threads would dequeue the item
que.TryDequeue(out lastList);
counter.Reset(); //reset counter for next iteration
}
}
we lock the que so that only one thread at a time can access it, then we check whether the processed item has already been removed from the queue, and if not we remove it here.
More Elegant Solution (in my Humble Opinion):
As you saw in the previous solution, the threads are blocked so that they finish each item together. Giving each thread its own local queue removes this blocking mechanism, so we can achieve maximum concurrency.
I suggest something like:
class GlobalQueue
{
private readonly List<IMyTask> _subscribers=new List<IMyTask>();
public void Subscribe(IMyTask task)
{
_subscribers.Add(task);
}
public void Unsubscribe(IMyTask task)
{
_subscribers.Remove(task);
}
public void Enqueue(List<SymbolsTable> table)
{
foreach (var s in _subscribers)
s.Enqueue(table);
}
}
interface IMyTask
{
void Enqueue(List<SymbolsTable> table);
}
and your task would look roughly like this:
class MyTask : IMyTask
{
private readonly ConcurrentQueue<List<SymbolsTable>> _localQueue = new ConcurrentQueue<List<SymbolsTable>>();
private readonly Thread _thread;
private bool _started;
public void Enqueue(List<SymbolsTable> table)
{
_localQueue.Enqueue(table);
}
public MyTask()
{
_thread = new Thread(Execute);
}
public void Start()
{
_started = true;
_thread.Start();
}
public void Stop()
{
_started = false;
}
private void Execute()
{
while (_started)
{
try
{
List<SymbolsTable> list;
var peek = _localQueue.TryDequeue(out list);
if (!peek)
{
Thread.Sleep(100); //nothing to pull
continue;
}
foreach (var item in list)
{
byte[] message =
encoding.GetBytes((item.GetHashCode() % 100).ToString() + " " + item.SDate + "\n\r");
client.Send(message);
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
break;
}
}
}
}
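A hedged sketch of how these pieces might be wired together (the client/socket details are left out, just as in MyTask above):
var queue = new GlobalQueue();
var clients = new List<MyTask>();

for (int i = 0; i < 3; i++)     // one worker per connected client
{
    var task = new MyTask();
    queue.Subscribe(task);
    task.Start();
    clients.Add(task);
}

// The producer only touches the global queue; Enqueue fans the list out to every local queue.
queue.Enqueue(new List<SymbolsTable>
{
    new SymbolsTable { SName = "ABC", SDate = DateTime.UtcNow }
});

// On shutdown: unsubscribe and stop each worker.
foreach (var task in clients)
{
    queue.Unsubscribe(task);
    task.Stop();
}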

Blocking collections + Multiple Worker threads per blocking collection + Wait For Work Completion

I have to perform actions in batches of 1000 messages, say Action A, B, C. I can do these actions in parallel.
I created groups for them. To increase parallelism, I created subgroups within each group. Tasks within a subgroup need to be executed serially, but two subgroups can execute in parallel.
After a batch of 1000 finishes, I have to do some processing, i.e. save to the database. But I cannot work out how to wait for all the tasks to finish (I am not interested in waiting in the middle, just at the end of the 1000 tasks). Any suggestions are welcome.
public class OrderlyThreadPool<t> : IDisposable
{
BlockingCollection<t> _workingqueue = null;
Action<t> _handler = null;
public OrderlyThreadPool(int wrkerCount, Action<t> handler)
{
_workingqueue = new BlockingCollection<t>();
_handler = handler;
Worker worker = new Worker(wrkerCount, Process); //WorkerCount is always 1
worker.Start();
}
public void AddItem(t item)
{
_workingqueue.Add(item);
}
private void Process()
{
foreach (t item in _workingqueue.GetConsumingEnumerable())
{
_handler(item);
}
}
public void Dispose()
{
_workingqueue.CompleteAdding();
_workingqueue = null;
}
}
public class Worker
{
int _wrkerCount = 0;
Action _action = null;
public Worker(int workerCount, Action action)
{
_wrkerCount = workerCount;
_action = action;
}
public void Start()
{
// Create and start a separate Task for each consumer:
for (int i = 0; i < _wrkerCount; i++)
{
Task.Factory.StartNew(_action);
}
}
}
So basically I will create an OrderlyThreadPool for each subgroup.
I receive messages from a source, which blocks if no message is available. So my code looks like:
while(true)
{
var message = GetMsg();
foreach(OrderlyThreadPool<Msg> a in myList)
{
a.AddMsg(message);
}
if(msgCount > 1000)
{
Wait for all threads to finish work;
}
else
{
msgCount =msgCount+1;
}
}
You start your tasks but you don't keep a reference. Simply store these tasks, expose them through the Worker and OrderlyThreadPool and use Task.WhenAll to wait for all of them to complete:
public class Worker
{
//...
List<Task> _tasks = new List<Task>();
public Task Completion { get { return Task.WhenAll(_tasks); } }
public void Start()
{
// Create and start a separate Task for each consumer:
for (int i = 0; i < _wrkerCount; i++)
{
_tasks.Add(Task.Factory.StartNew(_action));
}
}
}
public class OrderlyThreadPool<t> : IDisposable
{
//...
public Task Completion { get { return _worker.Completion; }}
}
await Task.WhenAll(myList.Select(orderlyThreadPool => orderlyThreadPool.Completion));
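Note that Completion only finishes after GetConsumingEnumerable has been drained, which in this design happens once Dispose calls CompleteAdding. A sketch of the call site after the 1000th message (SaveBatch is a hypothetical stand-in for the "save in db" step):
if (msgCount >= 1000)
{
    foreach (var pool in myList)
        pool.Dispose();   // CompleteAdding lets each consumer loop end
    await Task.WhenAll(myList.Select(pool => pool.Completion));
    SaveBatch();          // hypothetical post-batch processing (e.g. save to db)
    msgCount = 0;
    // the pools would have to be recreated here for the next batch, since their queues are now completed
}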
However, you should probably consider using TPL Dataflow instead. It's an actor-based framework that encapsulates completion, batching, concurrency levels and so forth...
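For reference, a hedged TPL Dataflow sketch of just the batching part (requires the System.Threading.Tasks.Dataflow package; the Msg type and the batch handling are illustrative):
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class Msg { public string Body { get; set; } }

class Example
{
    static async Task Main()
    {
        // Collect messages into batches of 1000, then handle each full batch in one place.
        var batcher = new BatchBlock<Msg>(1000);
        var saver = new ActionBlock<Msg[]>(batch =>
        {
            Console.WriteLine("Saving " + batch.Length + " messages");   // e.g. save to db
        });
        batcher.LinkTo(saver, new DataflowLinkOptions { PropagateCompletion = true });

        for (int i = 0; i < 2500; i++)
            batcher.Post(new Msg { Body = "m" + i });

        batcher.Complete();       // flush the final, possibly smaller, batch
        await saver.Completion;   // completion propagates from batcher to saver
    }
}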

Can this code be refactored by using the reactive framework?

Copy and paste the following code into a new C# console app.
class Program
{
static void Main(string[] args)
{
var enumerator = new QueuedEnumerator<long>();
var listenerWaitHandle = Listener(enumerator);
Publisher(enumerator);
listenerWaitHandle.WaitOne();
}
private static AutoResetEvent Listener(IEnumerator<long> items)
{
var @event = new AutoResetEvent(false);
ThreadPool.QueueUserWorkItem((o) =>
{
while (items.MoveNext())
{
Console.WriteLine("Received : " + items.Current);
Thread.Sleep(2 * 1000);
}
(o as AutoResetEvent).Set();
}, @event);
return @event;
}
private static void Publisher(QueuedEnumerator<long> enumerator)
{
for (int i = 0; i < 10; i++)
{
enumerator.Set(i);
Console.WriteLine("Sended : " + i);
Thread.Sleep(1 * 1000);
}
enumerator.Finish();
}
class QueuedEnumerator<T> : IEnumerator<T>
{
private Queue _internal = Queue.Synchronized(new Queue());
private T _current;
private bool _finished;
private AutoResetEvent _setted = new AutoResetEvent(false);
public void Finish()
{
_finished = true;
_setted.Set();
}
public void Set(T item)
{
if (_internal.Count > 3)
{
Console.WriteLine("I'm full, give the listener some slack !");
Thread.Sleep(3 * 1000);
Set(item);
}
else
{
_internal.Enqueue(item);
_setted.Set();
}
}
public T Current
{
get { return _current; }
}
public void Dispose()
{
}
object System.Collections.IEnumerator.Current
{
get { return _current; }
}
public bool MoveNext()
{
if (_finished && _internal.Count == 0)
return false;
else if (_internal.Count > 0)
{
_current = (T)_internal.Dequeue();
return true;
}
else
{
_setted.WaitOne();
return MoveNext();
}
}
public void Reset()
{
}
}
}
There are 2 threads (A, B).
Thread A can provide one instance at a time and calls the Set method.
Thread B wants to receive a sequence of instances (provided by thread A).
So it is literally transforming Add(item), Add(item), ... into an IEnumerable across threads.
Other solutions are also welcome, of course!
Sure - this code might not be the best way to do it, but here was my initial stab at it:
Subject<Item> toAddObservable;
ListObservable<Item> buffer;
void Init()
{
// Subjects are an IObservable we can trigger by-hand, they're the
// mutable variables of Rx
toAddObservable = new Subject<Item>(Scheduler.TaskPool);
// ListObservable will hold all our items until someone asks for them
// It will yield exactly *one* item, but only when toAddObservable
// is completed.
buffer = new ListObservable<Item>(toAddObservable);
}
void Add(Item to_add)
{
lock (this) {
// Subjects themselves are thread-safe, but we still need the lock
// to protect against the reset in FetchResults
toAddObservable.OnNext(to_add);
}
}
IEnumerable<Item> FetchResults()
{
IEnumerable<Item> ret = null;
buffer.Subscribe(x => ret = x);
lock (this) {
toAddObservable.OnCompleted();
Init(); // Recreate everything
}
return ret;
}
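Outside of Rx, the same Add(item)-to-IEnumerable hand-off between two threads is what BlockingCollection<T> provides out of the box. A sketch (the bound of 3 mimics the back-pressure in QueuedEnumerator):
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Bounded capacity makes Add block when the consumer falls behind.
        var queue = new BlockingCollection<long>(boundedCapacity: 3);

        var listener = Task.Run(() =>
        {
            foreach (var item in queue.GetConsumingEnumerable())   // ends after CompleteAdding
            {
                Console.WriteLine("Received : " + item);
                Thread.Sleep(2000);
            }
        });

        for (int i = 0; i < 10; i++)
        {
            queue.Add(i);   // blocks while 3 unconsumed items are already queued
            Console.WriteLine("Sent : " + i);
            Thread.Sleep(1000);
        }

        queue.CompleteAdding();
        listener.Wait();
    }
}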
