I know this question has been answered a thousand times, but I still can't get the functions to work at the same time. Here is my code:
static void Main(string[] args)
{
a();
a();
}
static void a()
{
string sampleString = "a";
Console.WriteLine(sampleString);
for (int i = 0; i < 10; i++)
{
System.Threading.Thread.Sleep(TimeSpan.FromSeconds(0.1));
Console.SetCursorPosition(0, Console.CursorTop -1);
sampleString = " " + sampleString;
Console.WriteLine(sampleString);
}
}
The function just writes the letter 'a' and every 0.1 seconds prepends a space so the letter shifts to the right; nothing complicated. After the function ends, it does the same thing again. I want both calls to run at the same time, the first writing on the first line and the second on the second line.
Thanks in advance.
You can use the Parallel class:
static void Main(string[] args)
{
Parallel.Invoke(a,a);
}
Your functions will run simultaneously.
An easy solution would be to create two threads, start them and wait for them to finish. For your specific question:
static void Main(string[] args)
{
var t1 = new Thread(a); t1.Start();
var t2 = new Thread(a); t2.Start();
t1.Join(); t2.Join();
}
But of course, this does not solve your initial problem of writing into different lines of the console, as the a function does not know which line to write on. You could solve this by adding a parameter with the line number.
Furthermore, you'll have to protect your console output with a lock, because it is a single target that multiple sources write to, which can lead to inconsistencies if not synchronized correctly. So I really suggest that you read some tutorials on multi-threading.
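For illustration, here is a minimal sketch of that idea; the line parameter, the consoleLock object, and the lambda-based thread start are my additions, not part of the original code:
using System;
using System.Threading;
class Program
{
    // One shared lock so only one thread moves the cursor and writes at a time.
    static readonly object consoleLock = new object();
    static void Main(string[] args)
    {
        var t1 = new Thread(() => a(0)); t1.Start();
        var t2 = new Thread(() => a(1)); t2.Start();
        t1.Join(); t2.Join();
    }
    // 'line' tells each thread which console row it owns.
    static void a(int line)
    {
        string sampleString = "a";
        for (int i = 0; i < 10; i++)
        {
            lock (consoleLock)
            {
                Console.SetCursorPosition(0, line);   // jump to this thread's own row
                Console.Write(sampleString);          // the growing string overwrites the previous one
            }
            Thread.Sleep(TimeSpan.FromSeconds(0.1));
            sampleString = " " + sampleString;
        }
    }
}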
This question already has answers here:
How and when to use ‘async’ and ‘await’
(25 answers)
Call asynchronous method in constructor?
(15 answers)
How Async and Await works
(4 answers)
Closed 4 years ago.
I have a question about both keywords (Async and Await)
As I understand it, their main goal is to let more than one method run at the same time; so, for example, if I'm going to run a long task that would prevent the main task from executing, I should use async and await.
But when I tried it in two different programs that do the same thing, I found that the execution is the same with or without async and await.
The first one:
class Async
{
public Async()
{
main();
}
public async void main()
{
Console.WriteLine("Main Thread....");
await SlowTask();
Console.WriteLine("Back To Main Thread");
}
public async Task SlowTask()
{
Console.WriteLine("Useless Thread....");
for (int i = 0; i < 1000000; i++)
{
// TODO
}
Console.WriteLine("Finished Useless Thread....");
}
}
And the second:
class NoAsync
{
public NoAsync()
{
main();
}
public void main()
{
Console.WriteLine("Main Thread....");
SlowTask();
Console.WriteLine("Back To Main Thread");
}
public void SlowTask()
{
Console.WriteLine("Useless Thread....");
for (int i = 0; i < 1000000; i++)
{
// TODO
}
Console.WriteLine("Finished Useless Thread....");
}
}
The execution of both is exactly the same. From what I understood, when using async and await, both main and SlowTask would execute at the same time, without one having to end before the other starts, but that doesn't happen.
What have I done wrong, or am I misunderstanding asynchronous programming in C#?
Kind regards.
I'd expect the execution of both to be the same. The issue is the latter: you have a misunderstanding of async in C#.
In short, when you await something in an async method, that thread is freed up to go do other stuff, until the awaited task finishes. But the program does not continue executing the next line until the awaited method completes. That would be bananas. You're thinking more of a background worker.
This can be handy, for example, when you are doing I/O and want a thread to be available to field HTTP requests in the meantime. There is an abundance of reading material on this already. Here is a more in-depth answer.
The execution should be exactly the same; they are logically equivalent programs. There's likely not even a single task switch, given that your whole program merely increments a counter variable.
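To make the difference observable, here is a sketch where the slow work is genuinely asynchronous (Task.Delay instead of a busy loop); the names and the async Main (which requires C# 7.1+) are mine, not from the question:
using System;
using System.Threading.Tasks;
class AsyncDemo
{
    static async Task Main()
    {
        Console.WriteLine("Main Thread....");
        Task slow = SlowTaskAsync();          // kicks off the work but does not wait yet
        Console.WriteLine("Back on the main thread while SlowTask is still running");
        await slow;                           // only now do we wait for it to finish
        Console.WriteLine("SlowTask completed");
    }
    static async Task SlowTaskAsync()
    {
        Console.WriteLine("Useless work....");
        await Task.Delay(1000);               // a genuinely asynchronous pause
        Console.WriteLine("Finished useless work....");
    }
}
Because the caller only awaits after doing its other work, the "Back on the main thread" line appears before the delay finishes; in the original code there is nothing asynchronous to overlap with, so async/await changes nothing.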
Good afternoon everybody.
I am new to Parallel.ForEach and even newer to threading and I wanted to ask your opinion on what is wrong with this code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace MultiThreading_Example
{
class theMeter {
int count1 = 0;
public void CounterMeter()
{
while (this.count1 < 10000)
{
this.count1++;
Console.WriteLine(this.count1);
}
}
}
class Program
{
static void Main(string[] args)
{
theMeter met = new theMeter();
ThreadStart countForMe = new ThreadStart(met.CounterMeter);
for (int i = 0; i < 1000; i++)
{
Thread t1 = new Thread(countForMe);
t1.Start();
t1.Join();
t1.Abort();
}
Console.ReadLine();
}
}
}
Any idea what is wrong with this? I mean, I have tried doing it without t1.Join() and also tried doing it on one thread; all run at the same speed. How can I get the program to process faster?
Thanks in advance!
Here is a sample of a working example. The difference is that each thread is stored in a list and started in parallel, and at the end we wait for every thread to finish.
class theMeter
{
private int count1 = 0;
public void CounterMeter()
{
while (this.count1 < 10000)
{
this.count1++;
Console.WriteLine(this.count1);
}
}
}
class Program
{
static void Main(string[] args)
{
theMeter met = new theMeter();
ThreadStart countForMe = new ThreadStart(met.CounterMeter);
List<Thread> threads = new List<Thread>();
for (int i = 0; i < 10; i++)
{
Thread t1 = new Thread(countForMe);
t1.Start();
threads.Add(t1);
}
// here we will wait for completion of every thread that was created and added to list
threads.ForEach(t => t.Join());
Console.ReadLine();
}
}
This code has a couple of issues worth mentioning:
Each thread uses the same theMeter instance, so all of them increment the same this.count1 variable, which is not thread safe. IMO, the better approach is to create a separate instance for each thread and divide the work between them, so they don't share the same resources and you avoid shared-resource access issues.
If any of the threads gets stuck, your program will freeze, because Join waits an unlimited amount of time. If a thread might take a long time (network, DB, etc.), you can use Join with a timeout and then call Abort. Even so, using Abort isn't recommended, as a thread should complete its logic gracefully rather than be cut off by a hard interruption.
In your original example you create 1000 threads, which is too many: they will fight for CPU resources and might make things much slower. It depends on what the threads actually do and where the bottleneck is. For a CPU-intensive task the thread count should be close to the number of CPUs; if they are doing network requests, it depends more on network bandwidth.
Another way is to use the real Parallel.ForEach or the Task classes, which can make things a bit easier.
A sample with an actual parallel loop looks like this:
class Program
{
static void Main(string[] args)
{
theMeter met = new theMeter();
Enumerable.Range(0, 1000).AsParallel().ForAll((i) => met.CounterMeter());
Console.ReadLine();
}
}
You won't have as much flexibility here, but it's easier.
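Tying this back to the first point about shared state, here is a sketch (my variation, not part of the answer) that uses Parallel.For and gives each iteration its own theMeter instance, so no counter is shared:
using System;
using System.Threading.Tasks;
class Program
{
    static void Main(string[] args)
    {
        // Each iteration gets its own meter, so there is no shared count1 to fight over.
        Parallel.For(0, 1000, i =>
        {
            var met = new theMeter();
            met.CounterMeter();
        });
        Console.ReadLine();
    }
}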
Here's a description of what the program should do. The program should create a file and five threads to write to that file...
The first thread should write from 1 to 5 into that file.
The second thread should write from 1 to 10.
The third thread should write from 1 to 15.
The fourth thread should write from 1 to 20.
The fifth thread should write from 1 to 25.
Moreover, an algorithm should be implemented to make each thread print two numbers and stop; then the next thread should print two numbers and stop, and so on until all the threads finish printing their numbers.
Here's the code I've developed so far...
using System;
using System.IO;
using System.Threading;
using System.Collections;
using System.Linq;
using System.Text;
namespace ConsoleApplication1
{
public static class OSAssignment
{
// First Thread Tasks...
static void FirstThreadTasks(StreamWriter WritingBuffer)
{
for (int i = 1; i <= 5; i++)
{
if (i % 2 == 0)
{
Console.WriteLine("[Thread1] " + i);
Thread.Sleep(i);
}
else
{
Console.WriteLine("[Thread1] " + i);
}
}
}
// Second Thread Tasks...
static void SecondThreadTasks(StreamWriter WritingBuffer)
{
for (int i = 1; i <= 10; i++)
{
if (i % 2 == 0)
{
if (i == 10)
Console.WriteLine("[Thread2] " + i);
else
{
Console.WriteLine("[Thread2] " + i);
Thread.Sleep(i);
}
}
else
{
Console.WriteLine("[Thread2] " + i);
}
}
}
// Third Thread Tasks..
static void ThirdThreadTasks(StreamWriter WritingBuffer)
{
for (int i = 1; i <= 15; i++)
{
if (i % 2 == 0)
{
Console.WriteLine("[Thread3] " + i);
Thread.Sleep(i);
}
else
{
Console.WriteLine("[Thread3] " + i);
}
}
}
// Fourth Thread Tasks...
static void FourthThreadTasks(StreamWriter WritingBuffer)
{
for (int i = 1; i <= 20; i++)
{
if (i % 2 == 0)
{
if (i == 20)
Console.WriteLine("[Thread4] " + i);
else
{
Console.WriteLine("[Thread4] " + i);
Thread.Sleep(i);
}
}
else
{
Console.WriteLine("[Thread4] " + i);
}
}
}
// Fifth Thread Tasks...
static void FifthThreadTasks(StreamWriter WritingBuffer)
{
for (int i = 1; i <= 25; i++)
{
if (i % 2 == 0)
{
Console.WriteLine("[Thread5] " + i);
Thread.Sleep(i);
}
else
{
Console.WriteLine("[Thread5] " + i);
}
}
}
// Main Function...
static void Main(string[] args)
{
FileStream File = new FileStream("output.txt", FileMode.Create, FileAccess.Write, FileShare.Write);
StreamWriter Writer = new StreamWriter(File);
Thread T1 = new Thread(() => FirstThreadTasks(Writer));
Thread T2 = new Thread(() => SecondThreadTasks(Writer));
Thread T3 = new Thread(() => ThirdThreadTasks(Writer));
Thread T4 = new Thread(() => FourthThreadTasks(Writer));
Thread T5 = new Thread(() => FifthThreadTasks(Writer));
Console.WriteLine("Initiating Jobs...");
T1.Start();
T2.Start();
T3.Start();
T4.Start();
T5.Start();
Writer.Flush();
Writer.Close();
File.Close();
}
}
}
Here are the problems I'm facing...
1. I cannot figure out how to make the 5 threads write into the same file at the same time, even with FileShare.Write. So I simply decided to write to the console for the time being, develop the algorithm, and see how it behaves in the console first.
2. Each time I run the program, the output is slightly different from the previous run. It always happens that a thread prints only one of its numbers in a given iteration and outputs the second number only after another thread finishes its current iteration.
3. I've got a question that might be somewhat off track. If I remove the Console.WriteLine("Initiating Jobs..."); from the Main method, the algorithm doesn't behave the way I described in point 2. I really can't figure out why.
Your Main function is finishing and closing the file before the threads have started writing to it, so use Thread.Join to wait for each thread to exit. Also, I'd advise a using statement for IDisposable objects.
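A sketch of what that could look like for your Main (the using blocks, the thread array, and the Join calls are the additions; the five task methods are the ones from your question):
static void Main(string[] args)
{
    using (FileStream file = new FileStream("output.txt", FileMode.Create, FileAccess.Write))
    using (StreamWriter writer = new StreamWriter(file))
    {
        Thread[] threads =
        {
            new Thread(() => FirstThreadTasks(writer)),
            new Thread(() => SecondThreadTasks(writer)),
            new Thread(() => ThirdThreadTasks(writer)),
            new Thread(() => FourthThreadTasks(writer)),
            new Thread(() => FifthThreadTasks(writer))
        };
        Console.WriteLine("Initiating Jobs...");
        foreach (Thread t in threads) t.Start();
        foreach (Thread t in threads) t.Join();   // wait for every writer before the using blocks dispose the file
    }
}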
When you have a limited resource you want to share among threads, you'll need a locking mechanism. Thread scheduling is not deterministic: you've started 5 threads, and at that point it's not guaranteed which one will run first. lock will force a thread to wait for a resource to become free. The order is still not determined, so T3 might run before T2 unless you add additional logic/locking to force the order as well.
I'm not seeing much difference in the behavior, but free-running threads will produce some very hard-to-find bugs, especially relating to timing issues.
As an extra note, I'd avoid using Sleep as a way of synchronizing threads.
To effectively get one thread to write at a time you need to block all the other threads; there are a few methods for doing that, such as lock, Mutex, Monitor, AutoResetEvent, etc. I'd use an AutoResetEvent for this situation. The problem you then face is that each thread needs to know which thread it's waiting for, so that it can wait on the correct event.
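As an alternative to the AutoResetEvent idea, here is a sketch of the turn-taking algorithm (my own construction, not from the answer) using a single lock, a shared turn index, and Monitor.Wait/PulseAll, which avoids each thread having to know whose event to signal. The limits array mirrors the 5/10/15/20/25 requirement; output goes to the console for simplicity:
using System;
using System.Threading;
class TurnTakingDemo
{
    static readonly object gate = new object();
    static int turn = 0;          // index of the thread whose turn it is
    static bool[] finished;       // which threads have printed all their numbers
    static int n;
    static void Main()
    {
        int[] limits = { 5, 10, 15, 20, 25 };
        n = limits.Length;
        finished = new bool[n];
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++)
        {
            int id = i;                                   // capture the loop variable
            threads[i] = new Thread(() => Worker(id, limits[id]));
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
    }
    static void Worker(int id, int limit)
    {
        int next = 1;                                     // next number this thread will print
        while (next <= limit)
        {
            lock (gate)
            {
                while (turn != id) Monitor.Wait(gate);    // block until it is my turn
                // print at most two numbers, then pass the turn on
                for (int k = 0; k < 2 && next <= limit; k++, next++)
                    Console.WriteLine("[Thread{0}] {1}", id + 1, next);
                if (next > limit) finished[id] = true;    // I am done; skip me from now on
                do { turn = (turn + 1) % n; }
                while (finished[turn] && !AllFinished()); // hand the turn to the next unfinished thread
                Monitor.PulseAll(gate);                   // wake the waiters so the right one can run
            }
        }
    }
    static bool AllFinished()
    {
        foreach (bool f in finished) if (!f) return false;
        return true;
    }
}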
Please see James' answer as well. He points out a critical bug that escaped my notice: you're closing the file before the writer threads have finished. Consider posting a new question to ask how to solve that problem, since this "question" is already three questions rolled into one.
FileShare.Write tells the operating system to allow other attempts to open the file for writing. Typically this is used for systems that have multiple processes writing to the same file. In your case, you have a single process and it only opens the file once, so this flag really makes no difference. It's the wrong tool for the job.
To coordinate writes between multiple threads, you should use locking. Add a new static field to the class:
private static object synchronizer = new object();
Then wrap each write operation on the file with a lock on that object:
lock(synchronizer)
{
Console.WriteLine("[Thread1] " + i);
}
This will make no difference while you're using the Console, but I think it will solve the problem you had with writing to the file.
Speaking of which, switching from file writes to console writes to sidestep the file problem was a clever idea, so kudos for that. However, an even better implementation of that idea would be to replace all of the write calls with a call to a single function, e.g. WriteOutput(string), so that you can switch everything from file to console just by changing one line in that function.
And then you could put the lock into that function as well.
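A sketch of that single output function (the class and member names here are illustrative, not from the answer):
using System;
using System.IO;
static class Output
{
    private static readonly object synchronizer = new object();
    private static StreamWriter writer;        // leave null to write to the console instead
    public static void UseFile(StreamWriter w) { writer = w; }
    public static void WriteOutput(string line)
    {
        lock (synchronizer)                    // one writer at a time, whatever the target is
        {
            if (writer != null) writer.WriteLine(line);
            else Console.WriteLine(line);
        }
    }
}
Each thread method would then call Output.WriteOutput("[Thread1] " + i) instead of Console.WriteLine, and switching between console and file output becomes a change in one place.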
Threaded stuff is not deterministic. It's guaranteed that each thread will run, but there are no guarantees about ordering, when threads will be interrupted, which thread will interrupt which, etc. It's a roll of the dice every time. You just have to get used to it, or go out of your way to force thing to happen in a certain sequence if that really matters for your application.
I dunno about this one. Seems like that shouldn't matter.
OK, I'm coming to this rather late, but from a theoretical point of view, I/O from multiple threads to a single end-point is inevitably fraught.
In the example above, it would almost certainly be faster and safer to queue the output into an in-memory structure, each thread taking an exclusive lock before doing so, and then have a separate thread write the output to the device.
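A sketch of that approach (class and member names are mine): worker threads enqueue their lines under a lock, and a single dedicated thread drains the queue to the StreamWriter, so only one thread ever touches the device.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
class QueuedWriter
{
    private readonly Queue<string> queue = new Queue<string>();
    private readonly object gate = new object();
    private bool completed;
    public void Enqueue(string line)
    {
        lock (gate)
        {
            queue.Enqueue(line);
            Monitor.Pulse(gate);                 // wake the drain thread
        }
    }
    public void Complete()
    {
        lock (gate) { completed = true; Monitor.Pulse(gate); }
    }
    // Run this on its own thread; it is the only code that writes to the StreamWriter.
    public void DrainTo(StreamWriter writer)
    {
        while (true)
        {
            string line;
            lock (gate)
            {
                while (queue.Count == 0 && !completed) Monitor.Wait(gate);
                if (queue.Count == 0) return;    // completed and fully drained
                line = queue.Dequeue();
            }
            writer.WriteLine(line);              // done outside the lock
        }
    }
}
Main would Join the worker threads, call Complete, and then Join the drain thread before disposing the writer.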
Is there a way I can wait for more than one Release() in a Semaphore?
Say I have something like this:
class GoRunners {
Semaphore runnersSemaphore;
List<Result> results = new List<Result>();
private void RunForestRun(object aRunner) {
runnersSemaphore.WaitOne();
Runner runner = (Runner)aRunner;
results.Add(runner.Run());
runnersSemaphore.Release();
}
private List<Result> Go() {
List<Runner> runners = CreateSomeRunners();
runnersSemaphore = new Semaphore(2, 2); // At most two threads at once
runners.ForEach(runner => new Thread(RunForestRun).Start(runner));
runnersSemaphore.WaitFor(runners.Count); //How do I do this?
return results;
}
}
I know I can use multiple WaitOne()s inside the loop, but that just doesn't look good. But if there is no other way, I'm fine with it. If there is another mechanism that achieves what I want that's OK as well (I used to do stuff like this in Java using Semaphores, so my mind went in that direction).
Note: I'm locked in .NET 3.5 :(
You need to move your rate-limiting code inside the foreach loop; that way the loop does not exit until all of the runners have started. Once you have done that, you only need to wait for the remaining two runners to finish before you return the result.
class GoRunners {
Semaphore runnersSemaphore;
List<Result> results = new List<Result>();
private void RunForestRun(object aRunner) {
try {
Runner runner = (Runner)aRunner;
var result = runner.Run();
lock(results)
{
results.Add(result); // List<T> is not thread safe; you need to lock on it or use a different thread-safe collection (I don't know if there are any in .NET 3.5)
}
}
finally { //A little safety in case a execption gets thrown inside "runner.Run()"
runnersSemaphore.Release();
}
}
const int MAX_RUNNERS = 2; // I hate magic numbers in code; if they get used in more than one place, move the value out to a const.
private List<Result> Go() {
List<Runner> runners = CreateSomeRunners();
runnersSemaphore = new Semaphore(MAX_RUNNERS, MAX_RUNNERS); // At most two threads at once
foreach(var runner in runners)
{
runnersSemaphore.WaitOne(); //This goes in here now. New threads will not be started unless there is less than 2 runners running.
new Thread(RunForestRun).Start(runner);
}
for(int i = 0; i < MAX_RUNNERS; i++) {
runnersSemaphore.WaitOne(); //Wait for the currently running runners to finish.
}
return results;
}
}
I asked a question about lock here and people responded that there is no problem with my lock implementation. But I have caught a problem. Here is the same lock implementation, and I am getting a weird result: I expect to see the numbers start from 1, but they start from 5. The example is below.
class Program
{
static object locker = new object();
static void Main(string[] args)
{
for (int j = 0; j < 100; j++)
{
(new Thread(new ParameterizedThreadStart(dostuff))).Start(j);
}
Console.ReadKey();
}
static void dostuff(dynamic input)
{
lock (locker)
{
Console.WriteLine(input);
}
}
}
The code is fine. But you cannot guarantee the order the threads are executed in. When I run the code I get:
0
1
3
5
2
4
6
10
9
11
7
12
8
etc
If you need to run the threads in a specified order, you could look into using ThreadPool.QueueUserWorkItem instead.
class Program
{
static object locker = new object();
static EventWaitHandle clearCount
=new EventWaitHandle(false, EventResetMode.ManualReset);
static void Main(string[] args)
{
for (int j = 0; j < 100; j++)
{
ThreadPool.QueueUserWorkItem(dostuff, j);
}
clearCount.WaitOne();
}
static void dostuff(dynamic input)
{
lock (locker)
{
Console.WriteLine(input);
if (input == 99) clearCount.Set();
}
}
}
It doesn't make sense to put a lock where you're putting it, as you're not locking code which changes a value shared by multiple threads. The section of code you're locking doesn't change any variables at all.
The reason the numbers are out of order is that the threads aren't guaranteed to start in any particular order, unless you do something like @Mikael Svenson suggests.
For an example of a shared variable, if you use this code:
class Program
{
static object locker = new object();
static int count=0;
static void Main(string[] args)
{
for (int j = 0; j < 100; j++)
{
(new Thread(new ParameterizedThreadStart(dostuff))).Start(j);
}
Console.ReadKey();
}
static void dostuff(object Id)
{
lock (locker)
{
count++;
Console.WriteLine("Thread {0}: Count is {1}", Id, count);
}
}
}
You'll probably see that the Thread numbers aren't in order, but the count is. If you remove the lock statement, the count won't be in order either.
You have a couple of problems and wrong assumptions here.
Creating 100 threads in this fashion is not recommended.
The threads are not going to execute in the order they are started.
Placing the lock where you have it effectively serializes the execution of the threads, immediately removing any advantage you were hoping to gain by using threading.
The best approach to use is to partition your problem into separate independent chunks which can be computed simultaneously using only the least amount of thread synchronization as possible. These partitions should be executed on small and fairly static number of threads. You can use the ThreadPool, Parallel, or Task classes for doing this.
I have included a sample pattern using the Parallel.For method. To make the sample easy to understand, let's say you have a list of objects that you want to clone into a separate list. Let's assume the clone operation is expensive and that you want to parallelize the cloning of many objects. Here is how you would do it. Notice the placement and limited use of the lock keyword.
public static void Main()
{
List<ICloneable> original = GetCloneableObjects();
List<ICloneable> copies = new List<ICloneable>();
Parallel.For(0, 100,
i =>
{
ICloneable cloneable = original[i];
ICloneable copy = cloneable.Clone();
lock (copies)
{
copies.Add(copy);
}
});
}