Calling a non-thread-safe method in a thread from another thread - C#

Suppose I have a non-thread-safe class X on the main thread, and another class Y running on a different thread that needs to call a method doX() of class X.
Normally I would simply pass a reference to X into Y and call doX() from Y, but X is not thread-safe and behaves strangely when called from another thread.
How can I let Y call doX() on X's own thread? In the SSCCE below the ManagedThreadId should always be the same (but it isn't).
using System;
using System.Threading;

namespace ThreadApp
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            int managedThreadId = Thread.CurrentThread.ManagedThreadId;
            System.Diagnostics.Debug.WriteLine("Main ManagedThreadId = " + managedThreadId);

            X x = new X();
            x.doX();

            Y y = new Y();
            y.fun(x);
        }
    }

    class X
    {
        public void doX()
        {
            int managedThreadId = Thread.CurrentThread.ManagedThreadId;
            System.Diagnostics.Debug.WriteLine("X ManagedThreadId = " + managedThreadId);
        }
    }

    class Y
    {
        public void fun(X x)
        {
            Thread t = new Thread(x.doX);
            t.Start();
        }
    }
}
EDIT: This page explains my problem better than I can: http://mikehadlow.blogspot.it/2012/11/using-blockingcollection-to-communicate.html
Consider these (somewhat) common programming challenges:

- I'm using a third party library that is not thread safe, but I want my application to share work between multiple threads. How do I marshal calls between my multi-threaded code to the single threaded library?
- I have a single source of events on a single thread, but I want to share the work between a pool of multiple threads.
- I have multiple threads emitting events, but I want to consume them on a single thread.

One way of doing this would be to have some shared state, a field or a property on a static class, and wrap locks around it so that multiple threads can access it safely. This is a pretty common way of trying to skin this particular cat, but it's shot through with traps for the unwary. Also, it can hurt performance because access to the shared resource is serialized, even though the things accessing it are running in parallel.

A better way is to use a BlockingCollection and have your threads communicate via message classes.
Here's a working solution based on that article's suggestion of using a BlockingCollection:
using System;
using System.Collections.Concurrent;
using System.Threading;

namespace ThreadApp
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            int managedThreadId = Thread.CurrentThread.ManagedThreadId;
            System.Diagnostics.Debug.WriteLine("Main ManagedThreadId = " + managedThreadId);

            X x = new X();
            Y y = new Y();
            y.fun(x);
            x.doX();
        }
    }

    class X
    {
        private BlockingCollection<string> queue = new BlockingCollection<string>();

        public void Produce(string item)
        {
            queue.Add(item);
        }

        public void doX()
        {
            while (true)
            {
                string item = queue.Take();
                int managedThreadId = Thread.CurrentThread.ManagedThreadId;
                System.Diagnostics.Debug.WriteLine("X ManagedThreadId = " + managedThreadId + " randomid=" + item);
                // Add your code to process the item here.
                // Do not start another task or thread.
            }
        }
    }

    class Y
    {
        X x;

        public void fun(X x)
        {
            this.x = x;
            Thread t = new Thread(threadBody);
            t.Start();
        }

        void threadBody()
        {
            Random rand = new Random();   // created once, outside the loop, so successive values differ
            while (true)
            {
                int managedThreadId = Thread.CurrentThread.ManagedThreadId;
                int randInt = rand.Next(1, 90);
                System.Diagnostics.Debug.WriteLine("Y ManagedThreadId = " + managedThreadId + " random-int" + randInt);
                x.Produce("random-int" + randInt);
                Thread.Sleep(randInt * 10);
            }
        }
    }
}
The above solution works, here's the output:
Main ManagedThreadId = 1
Y ManagedThreadId = 3 random-int24
X ManagedThreadId = 1 randomid=random-int24
Y ManagedThreadId = 3 random-int46
X ManagedThreadId = 1 randomid=random-int46
Y ManagedThreadId = 3 random-int48
X ManagedThreadId = 1 randomid=random-int48
The Y thread produces a random int, X receives it from the queue, and doX() executes on the same thread as Main.
However, the problem is that doX() sits inside an infinite while loop, so it blocks its thread. If X has other work to do and cannot afford to be stuck looping inside one method, this approach doesn't work as written...
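One way around that (a sketch I'm adding here, not from the original post, with hypothetical member names) is to let X own a dedicated consumer thread internally, so no caller has to block; every item is still processed on a single thread, it is just X's worker thread rather than Main:

class X : IDisposable
{
    private readonly BlockingCollection<string> queue = new BlockingCollection<string>();
    private readonly Thread worker;

    public X()
    {
        // One dedicated thread drains the queue; every item is processed on it.
        worker = new Thread(ConsumeLoop) { IsBackground = true };
        worker.Start();
    }

    public void Produce(string item)
    {
        queue.Add(item);   // safe to call from any thread
    }

    private void ConsumeLoop()
    {
        // GetConsumingEnumerable blocks until an item arrives and
        // ends once CompleteAdding() has been called and the queue is drained.
        foreach (string item in queue.GetConsumingEnumerable())
        {
            System.Diagnostics.Debug.WriteLine(
                "X ManagedThreadId = " + Thread.CurrentThread.ManagedThreadId + " item=" + item);
        }
    }

    public void Dispose()
    {
        queue.CompleteAdding();
        worker.Join();
    }
}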

Here's an awesome approach. Use Microsoft's Reactive Framework (Rx).
Rx primarily provides an observable/observer model that is extremely powerful, but it also provides a set of schedulers that simplify working with threads. The EventLoopScheduler can be used to ensure that code always runs on a single, dedicated thread.
Try this example:
var els = new System.Reactive.Concurrency.EventLoopScheduler();

Console.WriteLine("A" + Thread.CurrentThread.ManagedThreadId);

els.Schedule(() =>
{
    Console.WriteLine("B" + Thread.CurrentThread.ManagedThreadId);
});

var thread = new Thread((ThreadStart)(() =>
{
    Console.WriteLine("C" + Thread.CurrentThread.ManagedThreadId);
    els.Schedule(() =>
    {
        Console.WriteLine("D" + Thread.CurrentThread.ManagedThreadId);
    });
}));

thread.Start();
It outputs:
A12
B14
C16
D14
Both "B" and "D" run on the same thread even though the call to schedule an action came from two different threads.
You can use an EventLoopScheduler to make sure you code on X runs on the same thread.
Just NuGet "System.Reactive" to get the bits.
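Applied to the question's X, a minimal sketch (my own, assuming the System.Reactive package is referenced) could look like this; every call scheduled this way lands on the scheduler's single dedicated thread:

using System;
using System.Reactive.Concurrency;
using System.Threading;

class X : IDisposable
{
    // All work scheduled here runs on one dedicated thread owned by the scheduler.
    private readonly EventLoopScheduler scheduler = new EventLoopScheduler();

    public void DoXAsync(string item)
    {
        scheduler.Schedule(() =>
        {
            Console.WriteLine("X ManagedThreadId = " + Thread.CurrentThread.ManagedThreadId
                              + " item=" + item);
        });
    }

    public void Dispose()
    {
        scheduler.Dispose();   // shuts down the event-loop thread
    }
}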


Measuring the time of a thread quantum [closed]

I'm trying to measure the time each thread quantum takes using C#.
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Collections.Concurrent;

namespace pr
{
    public static class Program
    {
        private static Stopwatch watch = new Stopwatch();
        private static long t = 0;
        private static volatile int id = 0;
        private static BlockingCollection<long> deltas = new();

        public static void Main(string[] args)
        {
            ProcessAffinity();
            PriorityClass();
            ThreadDemo();
        }

        private static void ProcessAffinity()
        {
            Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)(1 << 0);
        }

        private static void PriorityClass()
        {
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;
        }

        private static void ThreadDemo()
        {
            var thread1 = new Thread(() => InsideThread())
            {
                IsBackground = true,
                Priority = ThreadPriority.Highest
            };
            var thread2 = new Thread(() => InsideThread())
            {
                IsBackground = true,
                Priority = ThreadPriority.Highest
            };

            watch.Start();
            thread1.Start();
            thread2.Start();
            thread1.Join();
            thread2.Join();

            var avg = deltas.Average();
            Console.WriteLine("Average: " + avg);
            foreach (var e in deltas)
            {
                Console.WriteLine("delta: " + e);
            }
        }

        private static void InsideThread()
        {
            while (true)
            {
                var currentId = Thread.CurrentThread.ManagedThreadId;
                Console.WriteLine("id" + currentId);
                if (id == 0)
                {
                    id = currentId;
                }
                if (id != currentId)
                {
                    var newt = watch.ElapsedMilliseconds;
                    deltas.Add(newt - t);
                    t = newt;
                    id = currentId;
                }
                if (watch.ElapsedMilliseconds > 3000) break;
            }
        }
    }
}
If I print the id of the current thread to the console, everything kind of works and the deltas get recorded. But Console.WriteLine changes the scheduling (I guess? I'm new to this topic) and inflates the times, so the result is not correct (it's around 100 ms, while it's supposed to be around 32 ms). However, if I comment out the Console.WriteLine("id" + currentId); line, the threads don't interleave: thread2 seems to start only after thread1 completes its work, so the final delta is 3000 ms plus something. Increasing the run time doesn't help either. My questions: why don't the threads run simultaneously when Console.WriteLine() is commented out? How do I measure the length of a thread quantum correctly? Should I use any locks? (I've tried, but nothing changed.)
My guess is that Console.WriteLine yields the time slice, possibly because it blocks on an internal lock. This is not really unexpected.
Without yielding, thread1 (at Highest priority) has a higher priority than your "main" thread, so it is likely that thread2.Start() never runs until thread1 has completed.
To get around this you could use a ManualResetEvent. Let each worker thread wait on the same event, and set it from the main thread after both threads have been started. Waiting on the event blocks the workers and lets the main thread run; once the event has been set, both workers become eligible to run.
Note that I'm not at all sure that this will give accurate results, but it should fix the current behaviour you are seeing.
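A minimal sketch of that gating idea (my illustration, not the answerer's code), applied to the program above:

private static readonly ManualResetEvent gate = new ManualResetEvent(false);

private static void ThreadDemo()
{
    var thread1 = new Thread(InsideThread) { IsBackground = true, Priority = ThreadPriority.Highest };
    var thread2 = new Thread(InsideThread) { IsBackground = true, Priority = ThreadPriority.Highest };

    watch.Start();
    thread1.Start();
    thread2.Start();

    // Both workers block on the gate first, which lets the main thread get this far
    // even on a single core; setting it makes both runnable at (roughly) the same time.
    gate.Set();

    thread1.Join();
    thread2.Join();
    // ... report the deltas as before ...
}

private static void InsideThread()
{
    gate.WaitOne();   // wait until the main thread releases both workers
    // ... measurement loop as before ...
}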

I am trying to call a method in a loop. It should be called only 20 times in 10 seconds. I am using a semaphore as in the code below.

With the code below, firstly, some of the calls are not getting made: out of, say, 250 calls only 238 are made and the rest are not. Secondly, I am not sure whether the calls are actually being made at a rate of 20 calls per 10 seconds.
public List<ShowData> GetAllShowAndTheirCast()
{
    ShowResponse allShows = GetAllShows();

    ShowCasts showCast = new ShowCasts();
    showCast.showCastList = new List<ShowData>();

    using (Semaphore pool = new Semaphore(20, 20))
    {
        for (int i = 0; i < allShows.Shows.Length; i++)
        {
            pool.WaitOne();
            Thread t = new Thread(new ParameterizedThreadStart((taskId) =>
            {
                showCast.showCastList.Add(MapResponse(allShows.Shows[i]));
            }));
            pool.Release();
            t.Start(i);
        }
    }

    //for (int i = 0; i < allShows.Shows.Length; i++)
    //{
    //    showCast.showCastList.Add(MapResponse(allShows.Shows[i]));
    //}

    return showCast.showCastList;
}
public ShowData MapResponse(Show s)
{
    CastResponse castres = new CastResponse();
    castres.CastlistResponse = (GetShowCast(s.id)).CastlistResponse;

    ShowData sd = new ShowData();
    sd.id = s.id;
    sd.name = s.name;

    if (castres.CastlistResponse != null && castres.CastlistResponse.Any())
    {
        sd.cast = new List<CastData>();
        foreach (var item in castres.CastlistResponse)
        {
            CastData cd = new CastData();
            cd.birthday = item.person.birthday;
            cd.id = item.person.id;
            cd.name = item.person.name;
            sd.cast.Add(cd);
        }
    }

    return sd;
}

public ShowResponse GetAllShows()
{
    ShowResponse response = new ShowResponse();
    string showUrl = ClientAPIUtils.apiUrl + "shows";
    response.Shows = JsonConvert.DeserializeObject<Show[]>(ClientAPIUtils.GetDataFromUrl(showUrl));
    return response;
}

public CastResponse GetShowCast(int showid)
{
    CastResponse res = new CastResponse();
    string castUrl = ClientAPIUtils.apiUrl + "shows/" + showid + "/cast";
    res.CastlistResponse = JsonConvert.DeserializeObject<List<Cast>>(ClientAPIUtils.GetDataFromUrl(castUrl));
    return res;
}
All the calls should be made, but I am not sure where they are getting dropped. Please also let me know how to check the rate at which the calls are being made.
I'm assuming that your goal is to process all data about shows but no more than 20 at once.
For that kind of task you should probably use the ThreadPool and limit the maximum number of concurrent threads using SetMaxThreads.
https://learn.microsoft.com/en-us/dotnet/api/system.threading.threadpool?view=netframework-4.7.2
You have to make sure that the collection you are using to store your results is thread-safe.
showCast.showCastList = new List<ShowData>();
The standard List<T> is not thread-safe. ConcurrentBag<T> is a thread-safe collection (there are others as well). You can make a standard list thread-safe, but it requires more code. After you are done processing and need the results as a list or array, you can convert the collection to the desired type.
https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentbag-1?view=netframework-4.7.2
Now to the usage of the semaphore. What your semaphore is doing is ensuring that at most 20 threads can be created at once. Assuming that this loop runs on your app's main thread, your semaphore has no purpose as written. To make it work you need to release the semaphore once a thread has completed its work; but you are calling Release() before t.Start(), so the thread's work executes outside the "critical area".
using (Semaphore pool = new Semaphore(20, 20))
{
    for (int i = 0; i < allShows.Shows.Length; i++)
    {
        pool.WaitOne();
        Thread t = new Thread(new ParameterizedThreadStart((taskId) =>
        {
            showCast.showCastList.Add(MapResponse(allShows.Shows[i]));
            pool.Release();
        }));
        t.Start(i);
    }
}
I did not test this solution; additional problems might arise.
Another issue with this program is that it does not wait for all threads to complete. Once all threads have been started, the program ends. It is possible (and in your case I'm sure) that not all threads have completed their work by then; this is why only ~238 of the calls are done when the program finishes.
thread.Join();
But if called right after Start() it will stop the main thread until that thread has completed, so to keep the program concurrent you need to create a list of threads and Join() them all at the end of the program. It is not the best solution. See: How to wait on all threads that program can add to ThreadPool
Wait until all threads finished their work in ThreadPool
As a final note, you cannot access the loop counter like that. The loop counter is evaluated only when the thread body actually runs, and in the test I ran the code had a tendency to process odd records twice and skip even ones. This happens because the loop increments the counter before the previous thread has executed, and it can even access elements outside the bounds of the array.
A possible solution is to create a method that creates the thread. Passing allShows.Shows[i] into a separate method evaluates it to a concrete show before the next loop pass.
public void CreateAndStartThread(Show show, Semaphore pool, ShowCasts showCast)
{
    pool.WaitOne();
    Thread t = new Thread(new ParameterizedThreadStart((s) =>
    {
        showCast.showCastList.Add(MapResponse((Show)s));
        pool.Release();
    }));
    t.Start(show);
}
Concurrent programming is tricky and I would highly recommend to do some exercises with examples on common pitfalls. Books on C# programming are sure to have a chapter or two on the topic. There are plenty of online courses and tutorials on this topic to learn from.
Edit:
Working solution. Still might have some issues.
public ShowCasts GetAllShowAndTheirCast()
{
    ShowResponse allShows = GetAllShows();
    ConcurrentBag<ShowData> result = new ConcurrentBag<ShowData>();

    using (var countdownEvent = new CountdownEvent(allShows.Shows.Length))
    {
        using (Semaphore pool = new Semaphore(20, 20))
        {
            for (int i = 0; i < allShows.Shows.Length; i++)
            {
                CreateAndStartThread(allShows.Shows[i], pool, result, countdownEvent);
            }

            countdownEvent.Wait();
        }
    }

    return new ShowCasts() { showCastList = result.ToList() };
}

public void CreateAndStartThread(Show show, Semaphore pool, ConcurrentBag<ShowData> result, CountdownEvent countdownEvent)
{
    pool.WaitOne();
    Thread t = new Thread(new ParameterizedThreadStart((s) =>
    {
        result.Add(MapResponse((Show)s));
        pool.Release();
        countdownEvent.Signal();
    }));
    t.Start(show);
}
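One last note from me (not part of the original answer): the semaphore above only caps how many calls run at once; it does not by itself enforce the "20 calls per 10 seconds" rate the question asked about. A rough, untested sketch of a time-based throttle, using SemaphoreSlim and releasing each slot only after the window has elapsed:

using System;
using System.Threading;
using System.Threading.Tasks;

class RateLimiter
{
    private readonly SemaphoreSlim slots;
    private readonly TimeSpan window;

    public RateLimiter(int maxCalls, TimeSpan window)
    {
        slots = new SemaphoreSlim(maxCalls, maxCalls);
        this.window = window;
    }

    // Blocks until a slot is free; the slot is handed back only after the
    // window has passed, so at most maxCalls start within any window.
    public void WaitForSlot()
    {
        slots.Wait();
        Task.Delay(window).ContinueWith(_ => slots.Release());
    }
}

// Usage sketch: call limiter.WaitForSlot() before each MapResponse(...) call.
// var limiter = new RateLimiter(20, TimeSpan.FromSeconds(10));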

Parallel.Invoke and thread invocation - Clarification?

I have this code which creates a deadlock:
void Main()
{
    ClassTest test = new ClassTest();
    lock (test)
    {
        Task t1 = new Task(() => test.DoWorkUsingThisLock(1));
        t1.Start();
        t1.Wait();
    }
}

public class ClassTest
{
    public void DoWorkUsingThisLock(int i)
    {
        Console.WriteLine("Before " + i);
        Console.WriteLine("Current Thread ID is = " + Thread.CurrentThread.ManagedThreadId);
        lock (this)
        {
            Console.WriteLine("Work " + i);
            Thread.Sleep(1000);
        }
        Console.WriteLine("Done " + i);
    }
}
Result :
Before 1
(and deadlock....)
I know that it is bad practice to lock on instances beyond the code's control, or on this. But it's just for this question.
I can understand why a deadlock is created here.
The main thread acquires lock(test) in Main, and then a new thread starts and invokes DoWorkUsingThisLock; there it tries to acquire a lock on the same instance and gets stuck (and Main is stuck too, because of t1.Wait()).
OK.
But I've seen this answer here, which also creates a deadlock.
void Main()
{
    ClassTest test = new ClassTest();
    lock (test)
    {
        Parallel.Invoke(
            () => test.DoWorkUsingThisLock(1),
            () => test.DoWorkUsingThisLock(2)
        );
    }
}

public class ClassTest
{
    public void DoWorkUsingThisLock(int i)
    {
        Console.WriteLine("Before ClassTest.DoWorkUsingThisLock " + i);
        lock (this)
        {
            Console.WriteLine("ClassTest.DoWorkUsingThisLock " + i);
            Thread.Sleep(1000);
        }
        Console.WriteLine("ClassTest.DoWorkUsingThisLock Done " + i);
    }
}
The result is :
Before ClassTest.DoWorkUsingThisLock 1
Before ClassTest.DoWorkUsingThisLock 2
ClassTest.DoWorkUsingThisLock 1 // <---- how ?
ClassTest.DoWorkUsingThisLock Done 1
Question:
How come it DID acquire the lock for the first invocation (DoWorkUsingThisLock(1))? The lock in Main is still held, and Parallel.Invoke DOES block!
I don't understand how that thread managed to enter the lock(this) section.
The Parallel class uses the current thread to do part of the work. This is a nice performance optimization, but it is observable in the case of thread-specific state: because Monitor locks are reentrant, the delegate that Parallel.Invoke runs inline on the main thread can re-enter lock(this), since that same thread already holds the lock taken on test in Main. The delegate that went to another thread is the one that stays blocked.
The TPL has this kind of "inline execution" in many places and it causes a lot of trouble in different ways. Many programs are not written to deal with reentrancy.
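To see the inline execution directly, here is a small sketch (my own, not from the answer) that only prints thread ids:

using System;
using System.Threading;
using System.Threading.Tasks;

class InlineDemo
{
    static void Main()
    {
        Console.WriteLine("Main thread: " + Thread.CurrentThread.ManagedThreadId);

        Parallel.Invoke(
            () => Console.WriteLine("First delegate on thread " + Thread.CurrentThread.ManagedThreadId),
            () => Console.WriteLine("Second delegate on thread " + Thread.CurrentThread.ManagedThreadId));

        // Typically one delegate reports the same id as Main: Parallel.Invoke ran it
        // inline on the calling thread. Because Monitor locks are reentrant, that
        // delegate can re-enter lock(this) when the calling thread already holds
        // lock(test), which is exactly what happens in the question's second example.
    }
}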

Adding a multithreading scenario for an application in c#

I have developed an application in C#. The class structure is as follows.
Form1 => the UI form. Has a BackgroundWorker, a progress bar, and an "OK" button.
SourceReader, TimedWebClient, HttpWorker, ReportWriter // classes that do some work
Controller => has overall control. On the "OK" button click an instance of this class called cntrlr is created. This cntrlr is a global variable in Form1.cs.
(In the constructor of the Controller I create the SourceReader, TimedWebClient, HttpWorker and ReportWriter instances.)
Then I call RunWorkerAsync() on the background worker.
Its code is as follows.
private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    int iterator = 1;
    for (iterator = 1; iterator <= this.urlList.Count; iterator++)
    {
        cntrlr.Vmain(iterator - 1);
        backgroundWorker1.ReportProgress(iterator);
    }
}
At the moment ReportProgress updates the progress bar.
The urlList mentioned above holds about 1000 URLs. cntrlr.Vmain(int i) does the whole processing at the moment. I want to hand the work to several threads, each processing 100 URLs. Access to the other instances and their methods is not restricted, but access to ReportWriter must be limited to one thread at a time. I can't find a way to do this. If anyone has an idea or an answer, please explain.
If you do want to restrict multiple threads from running the same method concurrently, then I would use the Semaphore class to enforce the required thread limit; here's how...
A semaphore is like a mean night-club bouncer: it has been given a club capacity and is not allowed to exceed this limit. Once the club is full, no one else can enter, and a queue builds up outside. Then as one person leaves another can enter (analogy thanks to J. Albahari).
A Semaphore with a value of one is equivalent to a Mutex or Lock except that the Semaphore has no owner so that it is thread ignorant. Any thread can call Release on a Semaphore whereas with a Mutex/Lock only the thread that obtained the Mutex/Lock can release it.
Now, for your case we are able to use Semaphores to limit concurrency and prevent too many threads from executing a particular piece of code at once. In the following example five threads try to enter a night club that only allows entry to three...
using System;
using System.Threading;

class BadAssClub
{
    static SemaphoreSlim sem = new SemaphoreSlim(3);

    static void Main()
    {
        for (int i = 1; i <= 5; i++)
            new Thread(Enter).Start(i);
    }

    // Enforce only three threads running this method at once.
    static void Enter(object id)
    {
        try
        {
            Console.WriteLine(id + " wants to enter.");
            sem.Wait();
            Console.WriteLine(id + " is in!");
            Thread.Sleep(1000 * (int)id);
            Console.WriteLine(id + " is leaving...");
        }
        finally
        {
            sem.Release();
        }
    }
}
Note that SemaphoreSlim is a lighter-weight version of the Semaphore class and incurs about a quarter of the overhead. It is sufficient for what you require.
I hope this helps.
I think I would have used the ThreadPool instead of the BackgroundWorker, and given each thread 1, not 100, URLs to process. The thread pool limits the number of threads it starts at once, so you won't have to worry about issuing 1000 requests at once. Have a look here for a good example:
http://msdn.microsoft.com/en-us/library/3dasc8as.aspx
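Applied to the question, a rough self-contained sketch of that idea (my own; the URL list and the work inside the lambda are stand-ins for the poster's urlList, Vmain and ReportWriter):

using System;
using System.Collections.Generic;
using System.Threading;

class ReportDemo
{
    static readonly object reportLock = new object();

    static void Main()
    {
        // Placeholder URLs; in the real app this would be the poster's urlList.
        var urls = new List<string> { "http://a", "http://b", "http://c" };

        using (var done = new CountdownEvent(urls.Count))
        {
            foreach (string url in urls)
            {
                ThreadPool.QueueUserWorkItem(state =>
                {
                    string u = (string)state;
                    string result = "processed " + u;   // stand-in for the per-URL work (Vmain)

                    // Only one thread at a time may touch the shared report writer.
                    lock (reportLock)
                    {
                        Console.WriteLine(result);       // stand-in for ReportWriter
                    }

                    done.Signal();
                }, url);
            }

            done.Wait();   // wait until every queued URL has been handled
        }
    }
}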
Feeling a little more adventurous? Consider using TPL Dataflow to download a bunch of URLs:
var urls = new[]{
    "http://www.google.com",
    "http://www.microsoft.com",
    "http://www.apple.com",
    "http://www.stackoverflow.com"};

var tb = new TransformBlock<string, string>(async url =>
{
    using (var wc = new WebClient())
    {
        var data = await wc.DownloadStringTaskAsync(url);
        Console.WriteLine("Downloaded : {0}", url);
        return data;
    }
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

var ab = new ActionBlock<string>(data =>
{
    //process your data
    Console.WriteLine("data length = {0}", data.Length);
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 1 });

tb.LinkTo(ab); //join output of producer to consumer block

foreach (var u in urls)
{
    tb.Post(u);
}

tb.Complete();
Note how you can control the parallelism of each block explicitly, so you can gather in parallel but process without going concurrent (for example).
Just grab it with nuget. Easy.

How to use ThreadPool class

using System;
using System.Threading;

namespace ThPool
{
    class Program
    {
        private static long val = 0;
        private static string obj = string.Empty;

        static void Main(string[] args)
        {
            Thread objThread1 = new Thread(new ThreadStart(IncrementValue));
            objThread1.Start();

            Thread objThread2 = new Thread(new ThreadStart(IncrementValue));
            objThread2.Start();

            objThread1.Join();
            objThread2.Join();

            Console.WriteLine("val=" + val + " however it should be " + (10000000 + 10000000));
            Console.ReadLine();
        }

        private static void IncrementValue()
        {
            for (int i = 0; i < 10000000; i++)
            {
                Monitor.Enter(obj);
                val++;
                Monitor.Exit(obj);
            }
        }
    }
}
How do I use the ThreadPool class as a replacement for Thread and Monitor?
There are a couple of ways to use the thread pool. For your task, you should look at the following.
If you just need a task to run, the easiest way is to use QueueUserWorkItem, which simply takes a delegate to run. The disadvantage is that you have little control over the job: the delegate can't return a value, and you don't know when the run has completed.
If you want a little more control, you can use the BeginInvoke/EndInvoke interface of delegates. This schedules the code to run on a thread pool thread. You can query the status via the IAsyncResult handle returned by BeginInvoke, and you can get the result (as well as any exception on the worker thread) via EndInvoke.
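For illustration, a small sketch of that pattern (mine, not the answerer's; note that delegate BeginInvoke/EndInvoke is only supported on the .NET Framework, not on .NET Core / .NET 5+):

Func<int, int> square = x => x * x;

// Schedules the delegate to run on a thread-pool thread.
IAsyncResult handle = square.BeginInvoke(5, null, null);

// ... do other work here ...

// Blocks until the call finishes and returns its result
// (or rethrows any exception thrown on the worker thread).
int result = square.EndInvoke(handle);
Console.WriteLine(result);   // 25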
To use the Enter/Exit on Monitor properly, you have to make sure that Exit is always called. Therefore you should place your Exit call in a finally block.
However, if you don't need to supply a timeout value for Enter, you would be much better off just using the lock keyword, which compiles into a proper set of Enter and Exit calls.
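For example, a minimal illustration of the two equivalent forms (my own snippet, using the question's val and obj fields):

// Explicit Monitor calls, with Exit guaranteed by a finally block...
Monitor.Enter(obj);
try
{
    val++;
}
finally
{
    Monitor.Exit(obj);
}

// ...which is essentially what the lock statement compiles into:
lock (obj)
{
    val++;
}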
EventWaitHandle[] waitHandles = new EventWaitHandle[2];

for (int i = 0; i < 2; i++)
{
    waitHandles[i] = new AutoResetEvent(false);
    ThreadPool.QueueUserWorkItem(state =>
    {
        EventWaitHandle handle = state as EventWaitHandle;
        for (int j = 0; j < 10000000; j++)
        {
            Interlocked.Increment(ref val); //instead of Monitor
        }
        handle.Set();
    }, waitHandles[i]);
}

WaitHandle.WaitAll(waitHandles);
Console.WriteLine("val=" + val + " however it should be " + (10000000 + 10000000));
You should look into ThreadPool.QueueUserWorkItem(). It takes a delegate that is run on a thread-pool thread, passing in a state object.
i.e.
string fullname = "Neil Barnwell";

ThreadPool.QueueUserWorkItem(state =>
{
    Console.WriteLine("Hello, " + (string)state);
}, fullname);
Don't be confused by Control.BeginInvoke(). That marshals the call to the thread that created the control, to prevent cross-thread updates to Controls. If you want simple multi-threading on a Windows Forms app, then look into the BackgroundWorker.
Do not use the thread pool for anything but the most simple things. In fact, it is extremely dangerous to acquire a lock on a thread pool thread. However, you can safely use the Interlocked APIs.
You can use BeginInvoke/EndInvoke. They use the thread pool behind the scenes but are easier to program with. See here:
http://msdn.microsoft.com/en-us/library/2e08f6yc.aspx
