I am using static variables to share values between threads, but it is taking a long time to read them.
Context: I have a static class Results.cs, where I store the result variables of two running Process.cs instances.
public static int ResultsStation0 { get; set; }
public static int ResultsStation1 { get; set; }
Then a function of the two process instances is called at the same time, with an initial value of ResultsStation0/1 = -1.
Because the results will not be available at the same time, the function checks that both results are ready. The faster instance sets its result and waits for the result of the slower instance.
void StationResult()
{
    Stopwatch sw = new Stopwatch();
    sw.Restart();
    switch (stationIndex) // Set the result of the station thread
    {
        case 0: Results.ResultsStation0 = 1; break;
        case 1: Results.ResultsStation1 = 1; break;
    }
    // Waits to get the results of both threads
    while (true)
    {
        if (Results.ResultsStation0 != -1 && Results.ResultsStation1 != -1)
        {
            break;
        }
    }
    Trace_Info("GOT RESULTS " + stationIndex + " Time: " + sw.ElapsedMilliseconds.ToString() + "ms");
    if (Results.ResultsStation0 == 1 && Results.ResultsStation1 == 1)
    {
        // Set OK if both results are OK
        Device.profinet.WritePorts(new Enum[] { NOK, OK },
                                   new int[] { 0, 1 });
    }
}
It works, but the problem is that the value of sw in the waiting thread should be about 1 ms. I do get 1 ms sometimes, but most of the time I see values of up to 80 ms.
My question is: why does it take that long if the threads share the same memory (I assume)?
Is this the right way to access a variable from multiple threads?
Don't use this method. Global mutable state is bad enough; mixing in multiple threads is a recipe for unmaintainable code. Since there is no synchronization at all in sight, there is no real guarantee that your program will ever finish. On a single-CPU system your loop will prevent any real work from being done until the scheduler picks one of the worker threads to run, and even on a multi-core system you will waste a ton of CPU cycles.
If you really want global variables, they should be something that can signal the completion of the operation, i.e. a Task or a ManualResetEvent. That way you can get rid of your horrible spin-wait and actually wait for each task to complete.
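For instance, here is a minimal sketch of that signaling idea using TaskCompletionSource. The station names and the convention that 1 means OK are carried over from the question; the rest is invented for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // One completion source per station (hypothetical names, sketching the idea).
    static readonly TaskCompletionSource<int> station0 = new TaskCompletionSource<int>();
    static readonly TaskCompletionSource<int> station1 = new TaskCompletionSource<int>();

    static void Main()
    {
        // Each station thread publishes its result by completing its source.
        new Thread(() => station0.TrySetResult(1)).Start();
        new Thread(() => station1.TrySetResult(1)).Start();

        // WhenAll blocks (here via .Result) without spinning a CPU core.
        int[] results = Task.WhenAll(station0.Task, station1.Task).Result;
        Console.WriteLine(results[0] == 1 && results[1] == 1 ? "BOTH OK" : "NOK");
    }
}
```

The waiting thread wakes up as soon as the slower station completes its source, instead of burning cycles polling a flag.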
But I would highly recommend getting rid of the global variables and just using standard task-based programming:
var result1 = Task.Run(MyMethod1);
var result2 = Task.Run(MyMethod2);
await Task.WhenAll(result1, result2);
Such code is much easier to reason about and understand.
Multi threaded programming is difficult. There are a bunch of new ways your program can break, and the compiler will not help you. You are lucky if you even get an exception, in many cases you will just get an incorrect result. If you are unlucky you will only get incorrect results in production, not in development or testing. So you should read a fair amount about the topic so that you are at least familiar with the common dangers and the ways to mitigate them.
You are using flags for signaling; for this there is a class called AutoResetEvent.
There's a difference between safe access and synchronization.
For safe (atomic) access you can use the Interlocked class.
For synchronization you use mutex-based solutions: spinlocks, barriers, etc.
It looks like you need a synchronization mechanism, because you rely on an atomic behavior to signal a process that it is done.
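As a minimal sketch of that signaling mechanism (all names invented for illustration), an AutoResetEvent lets the waiting thread block until the worker signals, instead of polling a flag:

```csharp
using System;
using System.Threading;

class Program
{
    // The worker signals this event when its result is ready.
    static readonly AutoResetEvent resultReady = new AutoResetEvent(false);
    static int result = -1;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            result = 42;        // produce the result
            resultReady.Set();  // wake exactly one waiting thread
        });
        worker.Start();

        resultReady.WaitOne();  // blocks without spinning until Set() is called
        Console.WriteLine("Result: " + result);
    }
}
```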
Furthermore, C# has the async way of doing things, which is to call await. It is Task-based, so if you can redesign your flow to use Tasks instead of Threads it will suit you better.
Just to be clear: atomicity means you perform the operation in one go.
So, for example, this is not atomic:
int a = 0;
int b = a; //not atomic - read 'a' and then assign to 'b'.
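By contrast, the Interlocked class performs a read-modify-write in one go. A small demonstration (the counts are chosen arbitrarily); the plain increment can lose updates under contention, while the interlocked one cannot:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static int unsafeCounter = 0;
    static int safeCounter = 0;

    static void Main()
    {
        // Two parallel workers each increment both counters 100000 times.
        Parallel.For(0, 2, _ =>
        {
            for (int i = 0; i < 100000; i++)
            {
                unsafeCounter++;                        // read, modify, write: not atomic
                Interlocked.Increment(ref safeCounter); // atomic read-modify-write
            }
        });
        // safeCounter is always 200000; unsafeCounter may fall short due to lost updates.
        Console.WriteLine("safe: " + safeCounter);
    }
}
```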
I won't teach you everything there is to know about threading in C# in one answer, so my advice is to read the MSDN articles about threading and tasks.
Related
My professor gave me this semi-pseudo code. He said I should find a mistake somewhere in this code's logic. At the moment I cannot find anything that could be wrong. Could you please give me some hints about what could be wrong? I'm not asking for the answer, because I would like to find it myself, but some hints as to which direction I should look in would be awesome.
class Program
{
    int progressValue = 0;
    int totalFiles = 0;
    int i = 0;
    bool toContinue = true;

    void MasterThread()
    {
        Thread thread1 = new Thread(Worker1);
        Thread thread2 = new Thread(Worker2);
        Thread progressThread = new Thread(ProgressThread);

        thread1.Start();
        thread2.Start();
        progressThread.Start();
    }

    void Worker1()
    {
        string[] files = Directory.GetFiles(@"C:\test1");
        totalFiles += files.Length;
        foreach (string file in files)
        {
            Encryption.Encrypt(file);
            i++;
            progressValue = 100 * i / totalFiles;
        }
        toContinue = false;
    }

    void Worker2()
    {
        string[] files = Directory.GetFiles(@"C:\test2");
        totalFiles += files.Length;
        foreach (string file in files)
        {
            Encryption.Encrypt(file);
            i++;
            progressValue = 100 * i / totalFiles;
        }
        toContinue = false;
    }

    void ProgressThread()
    {
        while (toContinue == true)
        {
            Update(progressValue);
            Thread.Sleep(500);
        }
    }
}
toContinue = false;
This is set at the end of the first completing thread - this will cause the ProgressThread to cease as soon as the first thread completes, not when both threads complete. There should be two separate thread completion flags and both should be checked.
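One way to sketch that fix, instead of juggling two boolean flags, is a CountdownEvent with one count per worker, so the progress loop only exits when both workers have finished. The names here are illustrative, not from the original exercise:

```csharp
using System;
using System.Threading;

class Program
{
    // One count per worker; the progress loop stops only when both have signalled.
    static readonly CountdownEvent workersDone = new CountdownEvent(2);

    static void Worker(string name)
    {
        // ... encrypt this worker's files ...
        workersDone.Signal(); // this worker is finished
    }

    static void Main()
    {
        new Thread(() => Worker("w1")).Start();
        new Thread(() => Worker("w2")).Start();

        // Progress loop: exits only after *both* workers signal.
        while (!workersDone.IsSet)
        {
            // Update(progressValue);
            Thread.Sleep(50);
        }
        Console.WriteLine("both workers complete");
    }
}
```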
To add to the good answers already provided, I'm being a little thorough, but the idea is to learn.
Exception Handling
There could be issues with exception handling. Always check your program for places where there could be an unexpected result.
How will this code behave if the value of this variable is not what we are expecting?
What will happen if we divide by zero?
Things like that.
Have a look at where variables are initialized and ask yourself: is there a possibility that this might not initialize in the way that's expected?
Exception Handling (C# Programming Guide)
Method Calls
Also check out any libraries being used in the code. e.g. Encryption.
Ask yourself, are these statements going to give me an expected result?
e.g.
string[] files = Directory.GetFiles(@"C:\test1");
Will this return an array of strings?
Is this how I should initialise an array of strings?
Question the calls:
e.g.
Update(progressValue);
What does this really do?
Class Library
Threading
How will it work, calling three threads like that?
Do they need to be coordinated?
Should threads sleep, to allow other actions to complete?
Also, as for accessing variables from different threads:
Is there going to be a mess of trying to track the value of that variable?
Are they being overwritten?
Thread Class
How to: Create and Terminate Threads (C# Programming Guide)
Naming Conventions
On a smaller note, there are issues with naming conventions in C#. Also, the use of implicit typing with the var keyword is preferred to explicit type declarations in C#.
C# Coding Conventions (C# Programming Guide)
I am not saying there are issues with all these points, but if you investigate all of them, along with the points made in the other answers, you will find all the errors and gain a better understanding of the code you are reading.
Here are the items:
There is nothing holding on to the "MasterThread", so it's hard to tell whether the program will end instantly or not.
Access is made to totalFiles from two threads and if both do so at the same time then it is possible that one or the other may win (or both may partially update the value) so there's no telling if you have a valid value or not. Interlocked.Add(ref totalFiles, files.Length); should be used instead.
Both worker threads also update i which could also become corrupted. Interlocked.Increment(ref i); should be used instead.
There is no telling if Encryption.Encrypt is thread-safe. Possibly a lock should be used.
The loop in ProgressThread is bad - Thread.Sleep should always be avoided - it is better to have an explicit update call (or other mechanism) to update progress.
There is no telling if Update(progressValue); is thread-safe. Possibly a lock should be used.
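To illustrate items 2 and 3 above, here is a small sketch, with hypothetical class and method names, of the shared counters rewritten with Interlocked so that concurrent updates cannot be lost:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class FileProgress
{
    private int totalFiles;
    private int processed;

    public void AddFiles(int count) => Interlocked.Add(ref totalFiles, count); // safe concurrent add
    public void FileDone() => Interlocked.Increment(ref processed);            // atomic increment

    public int Percent()
    {
        // Volatile reads so we see the latest values written by other threads.
        int total = Volatile.Read(ref totalFiles);
        int done = Volatile.Read(ref processed);
        return total == 0 ? 0 : 100 * done / total;
    }
}

class Program
{
    static void Main()
    {
        var p = new FileProgress();
        // Two "workers" registering and processing their files concurrently.
        Parallel.Invoke(
            () => { p.AddFiles(2); p.FileDone(); p.FileDone(); },
            () => { p.AddFiles(2); p.FileDone(); p.FileDone(); });
        Console.WriteLine(p.Percent()); // no lost updates, so this is always 100
    }
}
```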
There are a few; I'll just enumerate the two obvious ones. I'm supposing this is not an exercise in how to write precise and correct multithreaded code.
You should ask yourself the following questions:
progressValue should measure progress of the total work from zero to one hundred (a progress value equal to 150 seems a little off, doesn't it?). Is it really doing that?
You should not stop updating progressValue (Update(progressValue)) until all the work is done. Are you really doing that?
I don't know too much about multithreading, but I'll try to give a couple of hints.
First look at the global variables: what happens when you access the same variable from different threads?
Besides the hints in the other answers I can't find anything else "wrong".
I have a program which uses two client threads and a server. There is a point in my program where I want a value in one thread to influence the path of another thread.
More specifically, I have this code in the server:
class Handler
{
    public void clientInteraction(Socket connection, bool isFirstThread, Barrier barrier)
    {
        string pAnswer = string.Empty;
        string endGameTrigger = string.Empty;

        //setup streamReaders and streamWriters

        while (true) //infinite game loop
        {
            //read in a question and send to both threads.
            pAnswer = sr.ReadLine();
            Console.WriteLine(pAnswer);
            awardPoints();

            writeToConsole("Press ENTER to ask another question or enter 0 to end the game", isFirstThread);

            if (isFirstThread == true)
            {
                endGameTrigger = Console.ReadLine(); //this is only assigning to one thread...
            }

            barrier.SignalAndWait();

            if (endGameTrigger == "0") //...meaning this is never satisfied in one thread
            {
                endGame();
            }
        }
    }
}
The boolean value isFirstThread is set in the constructor of the thread, which lets me detect which thread connected first.
Is there some way, or perhaps a threading method, that will allow the second connected thread to detect that endGameTrigger has been set in the first thread, so that both threads execute the endGame() method properly?
It's best to be concerned with multithreading:
If it's absolutely necessary to start a separate thread for performance/UI reasons
If your code may be running in a multithreaded environment (like a web site) and you need to know that it won't break when multiple threads operate on the same class or same values.
But exercise extreme caution. Incorrect use/handling of multiple threads can cause your code to behave unpredictably and inconsistently. Something will work most of the time and then not work for no apparent reason. Bugs will be difficult to reproduce and identify.
That being said, one of the essential concepts of handling multithreading is to ensure that two threads don't try to update the same value at the same time. They can corrupt or partially modify values in ways that would be impossible for a single thread.
One way to accomplish this is with locking.
private object _lockObject = new Object();
private string _myString;

void SetStringValue(string newValue)
{
    lock (_lockObject)
    {
        _myString = newValue;
    }
}
You generally have an object that exists only to serve as a lock. When one thread enters that lock block it acquires a lock on the object. If another thread already has a lock on that object then the next thread just waits for the previous thread to release the lock. That ensures that two threads can't update the value at the same time.
You want to keep the amount of code inside the lock as small as possible so that the lock is released as soon as possible. And be aware that if it gets complicated with multiple locks then two threads can permanently block each other.
For incrementing and updating numbers there are also interlocked operations that handle the locking for you, ensuring that those operations are executed by one thread at a time.
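A small sketch of such interlocked operations (the values are arbitrary); each call is a single atomic step, so concurrent updates cannot corrupt the total:

```csharp
using System;
using System.Threading;

class Program
{
    static long runningTotal = 0;

    static void Main()
    {
        // Two threads add to the same total; Interlocked.Add makes each add atomic.
        var t1 = new Thread(() => Interlocked.Add(ref runningTotal, 30));
        var t2 = new Thread(() => Interlocked.Add(ref runningTotal, 12));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        // Interlocked.Read gives a consistent 64-bit read even on 32-bit platforms.
        Console.WriteLine(Interlocked.Read(ref runningTotal));
    }
}
```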
Just for fun I wrote this console app. It takes a sentence, breaks it into words, and then adds each word back onto a new string using multiple threads and outputs the string.
using System;
using System.Threading.Tasks;

namespace FunWithThreading
{
    class Program
    {
        static void Main(string[] args)
        {
            var sentence =
                "I am going to add each of these words to a string "
                + "using multiple threads just to see what happens.";
            var words = sentence.Split(' ');
            var output = "";
            Parallel.ForEach(words, word => output = output + " " + word);
            Console.WriteLine(output);
            Console.ReadLine();
        }
    }
}
The first two times I ran it, the output string was exactly what I started with. Great, it works perfectly! Then I got this:
I am going to add of these words to a string using multiple threads just to see what happens. each
Then I ran it 20 more times and couldn't repeat the error. Just imagine the frustration if this was a real application and something unpredictable like this happened even though I tested over and over and over, and then I couldn't get it to happen again.
So the point isn't that multithreading is evil, but just to understand the risks, only introduce it if you need to, and then carefully consider how to prevent threads from interfering with each other.
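One possible way to prevent the interference in this toy program, offered as a sketch rather than a prescription, is to serialize the append with a lock. The word order still depends on scheduling, but no word is ever lost:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var sentence = "I am going to add each of these words to a string "
                     + "using multiple threads just to see what happens.";
        var words = sentence.Split(' ');

        var output = "";
        var gate = new object();
        // The lock makes each read-append-write atomic, so no word is lost,
        // though the resulting order still depends on scheduling.
        Parallel.ForEach(words, word =>
        {
            lock (gate) { output = output + " " + word; }
        });

        // Every word is present exactly once, whatever the order.
        Console.WriteLine(output.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length);
    }
}
```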
In response to Luaan's comment: I have made endGameTrigger a private static string in the Handler class. Making it a static field instead of a local method variable allows all instances of the Handler class (each thread) to access the variable's most recent assignment. Many thanks.
I was recently reminded of the UpgradeableReadLock construct C# provides and I'm trying to discern when it really makes sense to use it.
Say, for example, I have a cache of settings that are heavily read by many classes, but periodically need to be updated with a very low frequency based on a set of conditions that aren't necessarily deterministic...
would it make more sense to simply lock like so:
List<Setting> cachedSettings = this.GetCachedSettings( sessionId );
lock (cachedSettings)
{
    bool requiresRefresh = cachedSettings.RequiresUpdate();
    if (requiresRefresh)
    {
        // a potentially long operation
        UpdateSettings( cachedSettings, sessionId );
    }
    return cachedSettings;
}
or use an UpgradeableReadLock:
public class SomeRepository
{
    private ReaderWriterLockSlim _rw = new ReaderWriterLockSlim();

    public List<Setting> GetCachedSettings( string sessionId )
    {
        _rw.EnterUpgradeableReadLock();
        List<Setting> cachedSettings = this.GetCachedSettings( sessionId );
        bool requiresRefresh = cachedSettings.RequiresUpdate();
        if (requiresRefresh)
        {
            _rw.EnterWriteLock();
            UpdateSettings( cachedSettings, sessionId );
            _rw.ExitWriteLock();
        }
        _rw.ExitUpgradeableReadLock();
        return cachedSettings;
    }
}
Perhaps what confuses me the most is how we can get away with checking whether an update is required outside of the write lock. In my example above I am referring to where I check whether a refresh is required, but to simplify I'll use an example from "C# 5.0 in a Nutshell":
while (true)
{
    int newNumber = GetRandNum (100);
    _rw.EnterUpgradeableReadLock();
    if (!_items.Contains (newNumber))
    {
        _rw.EnterWriteLock();
        _items.Add (newNumber);
        _rw.ExitWriteLock();
        Console.WriteLine ("Thread " + threadID + " added " + newNumber);
    }
    _rw.ExitUpgradeableReadLock();
    Thread.Sleep (100);
}
My understanding is that this allows concurrent reads unless a thread needs to write. But what if two or more threads end up with the same random number and both determine !_items.Contains(newNumber)? Given my understanding that this should allow concurrent reads (and correct me if I have misunderstood, of course), it seems that, as soon as a write lock is obtained, any threads that were concurrently reading would need to be suspended and forced back to the start of _rw.EnterUpgradeableReadLock(); ?
Of course your second approach is better in the case of many simultaneous readers and relatively rare write operations. When a read lock is acquired by a thread (using _rw.EnterUpgradeableReadLock()), other threads can also acquire it and read the value simultaneously. When some thread then enters the write lock, it waits for all reads to complete and then acquires exclusive access to the lock object (all other threads trying to execute EnterXXX() operations wait) in order to update the value. When it releases the lock, the other threads can do their job.
The first example, lock(cachedSettings), blocks all other threads, so only one thread can read the value at a time.
I would additionally recommend using the following pattern:
_rw.EnterUpgradeableReadLock();
try
{
    //Do your job
}
finally
{
    _rw.ExitUpgradeableReadLock();
}
for all Enter/Exit lock operations. It ensures (with high probability) that if an exception happens inside your synchronized code, the lock won't remain taken forever.
EDIT:
Answering Martin's comment: if you don't want multiple threads updating the value simultaneously, you need to change your logic to achieve that, for example using a double-checked locking construct:
if (cachedSettings.RequiresUpdate())
{
    _rw.EnterWriteLock();
    try
    {
        if (cachedSettings.RequiresUpdate())
        {
            UpdateSettings( cachedSettings, sessionId );
        }
    }
    finally
    {
        _rw.ExitWriteLock();
    }
}
This checks whether, while we were waiting for the write lock, another thread has already refreshed the value. If the value doesn't require a refresh anymore, we just release the lock.
IMPORTANT: it's very bad to hold an exclusive lock for a long time. So if the UpdateSettings function is long-running, you had better execute it outside the lock and implement some additional logic that allows readers to read the expired value while some thread is refreshing it. I implemented a cache once, and it's really complex to make it fast and thread-safe. You are better off using one of the existing implementations (for example System.Runtime.Caching.MemoryCache).
I'm sorry, I know this topic has been done to death (I've read this and this and a few more) but there is one issue I am not sure how to handle 'correctly'.
Currently my code for a multithreaded Sudoku strategy is the following:
public class MultithreadedStrategy : ISudokuSolverStrategy
{
    private Sudoku Sudoku;
    private List<Thread> ThreadList = new List<Thread>();
    private Object solvedLocker = new Object();
    private bool _solved;

    public bool Solved // This is slow!
    {
        get
        {
            lock (solvedLocker)
            {
                return _solved;
            }
        }
        set
        {
            lock (solvedLocker)
            {
                _solved = value;
            }
        }
    }

    private int threads;
    private ConcurrentQueue<Node> queue = new ConcurrentQueue<Node>();

    public MultithreadedStrategy(int t)
    {
        threads = t;
        Solved = false;
    }

    public Sudoku Solve(Sudoku sudoku)
    {
        // It seems conceivable to me that there may not be
        // a starting point where there is only one option.
        // Therefore we may need to search multiple trees.
        Console.WriteLine("WARNING: This may require a large amount of memory.");
        Sudoku = sudoku;

        // Throw nodes on queue
        int firstPos = Sudoku.FindZero();
        foreach (int i in Sudoku.AvailableNumbers(firstPos))
        {
            Sudoku.Values[firstPos] = i;
            queue.Enqueue(new Node(firstPos, i, false, Sudoku));
        }

        // Setup threads
        for (int i = 0; i < threads; i++)
        {
            ThreadList.Add(new Thread(new ThreadStart(ProcessQueue)));
            ThreadList[i].Name = String.Format("Thread {0}", i + 1);
        }

        // Set them running
        foreach (Thread t in ThreadList)
            t.Start();

        // Wait until solution found (conditional timeout?)
        foreach (Thread t in ThreadList)
            t.Join();

        // Return Sudoku
        return Sudoku;
    }

    public void ProcessQueue()
    {
        Console.WriteLine("{0} running...", Thread.CurrentThread.Name);
        Node currentNode;

        while (!Solved) // ACCESSING Solved IS SLOW FIX ME!
        {
            if (queue.TryDequeue(out currentNode))
            {
                currentNode.GenerateChildrenAndRecordSudoku();
                foreach (Node child in currentNode.Children)
                {
                    queue.Enqueue(child);
                }

                // Only 1 thread will have the solution (no?)
                // so no need to be careful about locking
                if (currentNode.CurrentSudoku.Complete())
                {
                    Sudoku = currentNode.CurrentSudoku;
                    Solved = true;
                }
            }
        }
    }
}
(Yes I have done DFS with and without recursion and using a BFS which is what the above strategy modifies)
I was wondering whether I am allowed to change my private bool _solved; to a private volatile bool _solved; and get rid of the accessors. I think this might be a bad thing because my ProcessQueue() method changes the state of _solved. Am I correct? I know booleans are atomic, but I don't want compiler optimisations to mess up the order of my read/write statements (especially since the write only happens once).
Basically, the lock statement adds tens of seconds to the run time of this strategy. Without the lock it runs an awful lot faster (although it is relatively slow compared to a DFS, because of the memory allocation within currentNode.GenerateChildrenAndRecordSudoku()).
Before getting into alternatives: it is probably safe to go with a low-lock solution here by making access to the boolean volatile. This situation is ideal, as it is unlikely that you have complex observation-ordering requirements. ("volatile" does not guarantee that multiple volatile operations are observed to have consistent ordering from multiple threads, only that reads and writes have acquire and release semantics.)
However, low-lock solutions make me very nervous and I would not use one unless I was sure I had need to.
The first thing I would do is find out why there is so much contention on the lock. An uncontended lock should take 20-80 nanoseconds; you should only get a significant performance decrease if the lock is contended. Why is the lock so heavily contended? Fix that problem and your performance problems will go away.
The second thing I might do if contention cannot be reduced is to use a reader-writer lock. If I understand your scenario correctly, you will have many readers and only one writer, which is ideal for a reader-writer lock.
Leaving the question of volatility aside: as others have pointed out, there are basic mistakes in your threading logic like spinning on a boolean. This stuff is hard to get right. You might consider using the Task Parallel Library here as a higher-level abstraction than rolling your own threading logic. The TPL is ideally suited for problems where significant work must be done on multiple threads. (Note that the TPL does not magically make not-thread-safe code thread-safe. But it does provide a higher level of abstraction, so that you are dealing with Tasks rather than Threads. Let the TPL schedule the threads for you.)
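As a rough sketch of what the TPL version might look like, a CancellationTokenSource can replace the hand-rolled Solved flag: the first worker to find an answer cancels the rest. The "search" below is faked with a simple number scan purely for illustration; it stands in for dequeuing and expanding nodes:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // The token replaces the Solved flag: cancellation tells all workers to stop.
        var cts = new CancellationTokenSource();
        int answer = 0;

        Task[] workers = new Task[2];
        for (int w = 0; w < 2; w++)
        {
            int start = w; // each worker scans half the candidate space (hypothetical split)
            workers[w] = Task.Run(() =>
            {
                for (int candidate = start; candidate < 1000 && !cts.IsCancellationRequested; candidate += 2)
                {
                    if (candidate == 42) // pretend this is the "solution found" test
                    {
                        Interlocked.Exchange(ref answer, candidate);
                        cts.Cancel(); // tell the other worker to stop searching
                    }
                }
            });
        }
        Task.WaitAll(workers);
        Console.WriteLine(answer);
    }
}
```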
Finally: the idea that a sudoku solver should take tens of seconds indicates to me that the solver is, frankly, not very good. The sudoku problem is, in its theoretically worst possible case, a hard problem to solve quickly no matter how many threads you throw at it. But for "newspaper" quality sudokus you should be able to write a solver that runs in a fraction of a second. There's no need to farm the work out to multiple threads if you can do the whole thing in a few hundred milliseconds.
If you're interested, I have a C# program that quickly finds solutions to sudoku problems here:
http://blogs.msdn.com/b/ericlippert/archive/tags/graph+colouring/
So the first thing: fix your wait loop to just join the threads...
//Set them running
foreach (Thread t in ThreadList)
    t.Start();

//Wait until solution found (conditional timeout?)
foreach (Thread t in ThreadList)
    t.Join(/* timeout optional here */);
Then there is the issue of when to shut down the threads. My advice is to introduce a wait handle on the class and then, in the worker threads, just loop on that...
ManualResetEvent mreStop = new ManualResetEvent(false);
//...
while (!mreStop.WaitOne(0))
{
    //...
}
Now just modify the Solved property to signal all threads that they should quit...
public bool Solved
{
    get
    {
        return _solved;
    }
}

// As Eric suggests, this should be a private method, not a property setter.
private void SetCompleted()
{
    _solved = true;
    mreStop.Set();
}
The benefit to this approach is that if a thread fails to quit within a timeout period you can signal the mreStop to stop the workers without setting _solved to true.
volatile IS used to prevent optimizations such as caching and reordering of reads/writes for a single variable. Using it in this case is exactly what it's designed for. I don't see what your concern is.
lock is a slow yet working alternative, because it introduces a memory fence implicitly; but in your case you are using a lock just for the memory-fence side effect, which is not really a nice idea.
I'm just diving into learning about the Parallel class in the 4.0 Framework and am trying to understand when it would be useful. At first after reviewing some of the documentation I tried to execute two loops, one using Parallel.Invoke and one sequentially like so:
static void Main()
{
    DateTime start = DateTime.Now;
    Parallel.Invoke(BasicAction, BasicAction2);
    DateTime end = DateTime.Now;
    var parallel = end.Subtract(start).TotalSeconds;

    start = DateTime.Now;
    BasicAction();
    BasicAction2();
    end = DateTime.Now;
    var sequential = end.Subtract(start).TotalSeconds;

    Console.WriteLine("Parallel:{0}", parallel.ToString());
    Console.WriteLine("Sequential:{0}", sequential.ToString());
    Console.Read();
}

static void BasicAction()
{
    for (int i = 0; i < 10000; i++)
    {
        Console.WriteLine("Method=BasicAction, Thread={0}, i={1}", Thread.CurrentThread.ManagedThreadId, i.ToString());
    }
}

static void BasicAction2()
{
    for (int i = 0; i < 10000; i++)
    {
        Console.WriteLine("Method=BasicAction2, Thread={0}, i={1}", Thread.CurrentThread.ManagedThreadId, i.ToString());
    }
}
There is no noticeable difference in execution time here; am I missing the point? Is it more useful for asynchronous invocations of web services, or...?
EDIT: I replaced the DateTime timing with a Stopwatch and replaced the write to the console with a simple addition operation.
UPDATE - Big Time Difference Now: thanks for clearing up the problems I had when I involved the Console.
static void Main()
{
    Stopwatch s = new Stopwatch();
    s.Start();
    Parallel.Invoke(BasicAction, BasicAction2);
    s.Stop();
    var parallel = s.ElapsedMilliseconds;

    s.Reset();
    s.Start();
    BasicAction();
    BasicAction2();
    s.Stop();
    var sequential = s.ElapsedMilliseconds;

    Console.WriteLine("Parallel:{0}", parallel.ToString());
    Console.WriteLine("Sequential:{0}", sequential.ToString());
    Console.Read();
}

static void BasicAction()
{
    Thread.Sleep(100);
}

static void BasicAction2()
{
    Thread.Sleep(100);
}
The test you are doing is nonsensical: you are testing to see whether something that you cannot perform in parallel is faster when you perform it in parallel.
Console.WriteLine handles synchronization for you, so it will always act as though it is running on a single thread.
From here:
...call the SetIn, SetOut, or SetError method, respectively. I/O operations using these streams are synchronized, which means multiple threads can read from, or write to, the streams.
Any advantage that the parallel version gains from running on multiple threads is lost through the marshaling done by the console. In fact, I wouldn't be surprised to see that all the thread switching actually makes the parallel run slower.
Try doing something else in the actions (a simple Thread.Sleep would do) that can be processed by multiple threads concurrently and you should see a large difference in the run times. Large enough that the inaccuracy of using DateTime as your timing mechanism will not matter too much.
It's not a matter of execution time. The output to the console is determined by how the actions are scheduled to run. To get an accurate execution time you should be using Stopwatch. At any rate, you are using Console.WriteLine, so it will appear as though it is in one thread of execution. Anything you hoped to attain by using Parallel.Invoke is lost by the nature of Console.WriteLine.
On something simple like that the run times will be the same. What Parallel.Invoke is doing is running the two methods at the same time.
In the first case you'll have lines spat out to the console in a mixed up order.
Method=BasicAction2, Thread=6, i=9776
Method=BasicAction, Thread=10, i=9985
// <snip>
Method=BasicAction, Thread=10, i=9999
Method=BasicAction2, Thread=6, i=9777
In the second case you'll have all the BasicAction's before the BasicAction2's.
What this shows you is that the two methods are running at the same time.
In the ideal case (if the number of delegates equals the number of parallel threads and there are enough CPU cores), the duration of the operations becomes MAX(AllDurations) instead of SUM(AllDurations) (where AllDurations is a list of each delegate's execution time, e.g. {1 sec, 10 sec, 20 sec, 5 sec}). In less ideal cases it moves in that direction.
It's useful when you don't care about the order in which the delegates are invoked, but you do care that thread execution is blocked until every delegate has completed. So yes, it suits situations where you need to gather data from various sources before you can proceed (they can be web services or other kinds of sources).
Parallel.For can be used much more often, I think. For Invoke, it's pretty much required that you have different tasks, each taking a substantial time to execute; and I guess if you have no idea of the possible range of execution times (which is true for web services), Invoke will shine the most.
Maybe your static constructor needs to build two independent dictionaries for your type to use; you can run the methods that fill them in parallel using Invoke() and, for example, roughly halve the time if they both take about as long as each other.
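A minimal sketch of that static-constructor idea (the dictionary contents are invented for illustration); the two fills touch independent dictionaries, so running them in parallel is safe:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Lookups
{
    public static readonly Dictionary<int, string> Squares = new Dictionary<int, string>();
    public static readonly Dictionary<int, string> Cubes = new Dictionary<int, string>();

    static Lookups()
    {
        // Each delegate writes to its own dictionary, so there is no shared
        // mutable state between them and Parallel.Invoke can roughly halve
        // the initialization time.
        Parallel.Invoke(
            () => { for (int i = 1; i <= 100; i++) Squares[i] = (i * i).ToString(); },
            () => { for (int i = 1; i <= 100; i++) Cubes[i] = (i * i * i).ToString(); });
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Lookups.Squares[12] + " " + Lookups.Cubes[3]);
    }
}
```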