I am trying to implement a program which allows a maximum of ten people to simultaneously pick an object, say a cup or a car. When one of them is finished, a place frees up for another person to pick an object. The maximum time one could spend picking is 5 seconds. I have tried to use an array of tasks, but this does not work since the pickers are on different machines. I could update the database every time a person picks an object and then check the value from the database, but I think that is a bad idea. How could I control those threads or picks?
I need to control/keep track of the maximum number of threads running, irrespective of where the pick of the object is done.
Thank you
This doesn't sound like a threading question, but more of a server/client object manager. There are lots of directions you could go with this, but a simple solution would be to have a service that manages each object.
/* Common interface each object shares */
public interface IObject { ... }
/* Sharable Object implementing IObject */
public class Cup : IObject { ... }
/* This class would be exposed via WCF or Remoting */
public class ObjectSharer : IObjectSharer {
    public enum ObjectType { Cup, Car }
    public IObject GetObject(ObjectType objType) { ... }
    public void ReturnObj(IObject obj) { ... }
}
You'll have to fill in the implementation, but hopefully this gives you some ideas on how you could approach this type of problem.
In the GetObject method, jwde's suggestion of using a Semaphore would be a good way to handle resource management, limiting the object(s) to 10.
A Semaphore is the idiomatic data structure for limiting the number of concurrent accesses to a resource.
Example:
public static class Foo
{
    private const int _limit = 10;
    private static readonly Semaphore _resources = new Semaphore(_limit, _limit);

    public static void Pick()
    {
        _resources.WaitOne();        // block until one of the 10 slots is free
        try
        {
            doWork();                // at most _limit threads are in here at once
        }
        finally
        {
            _resources.Release();    // free the slot even if doWork() throws
        }
    }
}
Now only 10 threads can doWork() at once. Once one finishes, the next one will get to start.
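For illustration, a minimal usage sketch (the 15-task count and the use of Task.Run are assumptions, not part of the original answer; Foo is the class above):
using System.Linq;
using System.Threading.Tasks;

// Start 15 pickers; the semaphore lets at most 10 of them run doWork() at any one time.
var pickers = Enumerable.Range(0, 15)
                        .Select(_ => Task.Run(() => Foo.Pick()))
                        .ToArray();
Task.WaitAll(pickers);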
It is very hard to understand what is being asked..
But I think you want to control your threads?
In that case, you can suspend a thread, which can be used to synchronize threads. However, this can lead to deadlocks.
I have written a static class which is a repository of some functions that I am calling from different classes.
public static class CommonStructures
{
public struct SendMailParameters
{
public string To { get; set; }
public string From { get; set; }
public string Subject { get; set; }
public string Body { get; set; }
public string Attachment { get; set; }
}
}
public static class CommonFunctions
{
private static readonly object LockObj = new object();
public static bool SendMail(SendMailParameters sendMailParam)
{
lock (LockObj)
{
try
{
//send mail
return true;
}
catch (Exception ex)
{
//some exception handling
return false;
}
}
}
private static readonly object LockObjCommonFunction2 = new object();
public static int CommonFunction2(int i)
{
lock (LockObjCommonFunction2)
{
int returnValue = 0;
try
{
//send operation
return returnValue;
}
catch (Exception ex)
{
//some exception handling
return returnValue;
}
}
}
}
Question 1: For my second method, CommonFunction2, do I use a new static lock, i.e. LockObjCommonFunction2 in this example, or can I reuse the same lock object LockObj defined at the beginning of the class?
Question 2: Is there anything which might lead to threading-related issues, or can I improve the code to be thread safe?
Question 3: Can there be any issues in passing a common class instead of a struct, in this example SendMailParameters (which I use to wrap up all parameters, instead of having multiple parameters to the SendMail function)?
Regards,
MH
Question 1: For my second method, CommonFunction2, do I use a new static lock, i.e. LockObjCommonFunction2 in this example, or can I reuse the same lock object LockObj defined at the beginning of the class?
If you want to synchronize these two methods, then you need to use the same lock for both. For example, if thread1 is accessing your Method1 and thread2 is accessing your Method2, and you want them not to execute the bodies of those methods concurrently, use the same lock. But if you only want to restrict concurrent access to each method individually (Method1 or Method2 on its own), use different locks.
Question 2: Is there anything which might lead to threading-related issues, or can I improve the code to be thread safe?
Always remember that shared resources (e.g. static variables, files) are not inherently thread safe, since they can be accessed by all threads; you therefore need to apply some kind of synchronization (via locks, signals, mutexes, etc.).
Question 3: Can there be any issues in passing a common class instead of a struct, in this example SendMailParameters (which I use to wrap up all parameters, instead of having multiple parameters to the SendMail function)?
As long as you apply proper synchronization, it will be thread safe. For structs, look at this as a reference.
The bottom line is that you need to apply correct synchronization to anything that lives in shared memory. Also, always take note of the scope of the threads you are spawning and the state of the variables each method uses. Do they change that state, or do they only depend on the internal state of the variable? Does the thread always create its own object, even though the type is static/shared? If yes, then it should be thread safe. Otherwise, if it reuses a shared resource, you should apply proper synchronization. And most of all, even without a shared resource, deadlocks can still happen, so remember the basic rules in C# for avoiding deadlocks. P.S. Thanks to Euphoric for sharing Eric Lippert's article.
But be careful with your synchronization. As much as possible, limit its scope to only where the shared resource is actually modified, because otherwise it can create bottlenecks in your application and performance will suffer.
static readonly object _lock = new object();
static SomeClass sc = new SomeClass();
static void workerMethod()
{
//assuming this method is called by multiple threads
longProcessingMethod();
modifySharedResource(sc);
}
static void modifySharedResource(SomeClass sc)
{
//do something
lock (_lock)
{
//where sc is modified
}
}
static void longProcessingMethod()
{
//a long process
}
You can reuse the same lock object as many times as you like, but that means that none of the areas of code surrounded by that same lock can be accessed at the same time by various threads. So you need to plan accordingly, and carefully.
Sometimes it's better to use one lock object for multiple locations, for instance if there are multiple functions which edit the same array. Other times, more than one lock object is better, because even while one section of code is locked, the others can still run.
Multi-threaded coding is all about careful planning...
To be super safe, at the expense of potentially writing much slower code, you can surround access to your static class with a single lock. That way you can make sure that no two methods of that class will ever run on two threads at the same time. It's pretty brute force, and definitely a 'no-no' for professionals, but if you're just getting familiar with how these things work, it's not a bad place to start learning.
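A minimal sketch of that brute-force approach, assuming a hypothetical wrapper class (CommonFunctionsGate is not part of the original code):
public static class CommonFunctionsGate
{
    // Single gate object: only one thread can be inside any wrapped call at a time.
    private static readonly object Gate = new object();

    public static bool SendMail(CommonStructures.SendMailParameters sendMailParam)
    {
        lock (Gate) { return CommonFunctions.SendMail(sendMailParam); }
    }

    public static int CommonFunction2(int i)
    {
        lock (Gate) { return CommonFunctions.CommonFunction2(i); }
    }
}
Note that this serializes every call through the class, which is exactly the performance cost described above.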
1) As to the first, it depends on what you want:
As is (two separate lock objects) - no two threads will execute the same method at the same time, but they can execute different methods at the same time.
If you change it to a single lock object, then no two threads will execute either of the sections guarded by that shared lock object at the same time.
2) In your snippet there is nothing that strikes me as wrong, but there is not much code there. If your repository calls methods on itself, you can have a problem, and there is a world of issues you can run into :)
3) As to structs, I would not use them. Use classes; it is better/easier that way. There is a whole other bag of issues related to structs, and you just don't need those problems.
The number of lock objects to use depends on what kind of data you're trying to protect. If you have several variables that are read/updated on multiple threads, you should use a separate lock object for each independent variable. So if you have 10 variables that form 6 independent variable groups (as far as how you intend to read / write them), you should use 6 lock objects for best performance. (An independent variable is one that's read / written on multiple threads without affecting the value of other variables. If 2 variables must be read together for a given action, they're dependent on each other so they'd have to be locked together. I hope this is not too confusing.)
Locked regions should be as short as possible for maximum performance - every time you lock a region of code, no other thread can enter that region until the lock is released. If you have a number of independent variables but use too few lock objects, your performance will suffer because your locked regions will grow longer.
Having more lock objects allows for higher parallelism since each thread can read / write a different independent variable - threads will only have to wait on each other if they try to read / write variables that are dependent on each other (and thus are locked through the same lock object).
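For illustration, a small hypothetical sketch of two independent values, each guarded by its own lock, so that updating one never blocks threads updating the other (all names here are made up):
static class Counters
{
    private static readonly object _hitsLock = new object();
    private static readonly object _errorsLock = new object();
    private static int _hits;
    private static int _errors;

    public static void RecordHit()
    {
        lock (_hitsLock) { _hits++; }      // only blocks other RecordHit callers
    }

    public static void RecordError()
    {
        lock (_errorsLock) { _errors++; }  // independent of RecordHit
    }
}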
In your code you must be careful with your SendMailParameters input parameter - if this is a reference type (class, not struct) you must make sure that its properties are locked or that it isn't accessed on multiple threads. If it's a reference type, it's just a pointer and without locking inside its property getters / setters, multiple threads may attempt to read / write some properties of the same instance. If this happens, your SendMail() function may end up using a corrupted instance. It's not enough to simply have a lock inside SendMail() - properties and methods of SendMailParameters must be protected as well.
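A minimal sketch of what that could look like if SendMailParameters were a class shared across threads (this guarded version is hypothetical, not part of the question's code):
public class SharedSendMailParameters
{
    private readonly object _sync = new object();
    private string _to;

    public string To
    {
        get { lock (_sync) { return _to; } }
        set { lock (_sync) { _to = value; } }
    }

    // ...repeat the same pattern for From, Subject, Body and Attachment.
}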
I have written a program in C#. I have now finished all the functionality and it works, but it only runs on one thread. I'm doing a lot of calculation and sometimes loading about 300 MB or more of measurement files into the application.
I now want to make the program multithreaded, because the user experience is really bad during intense processing or I/O operations.
What is the best way to refactor the program so that it can be made multithreaded without too much effort? I know this is something I should have thought about earlier, but I haven't.
I used the singleton pattern for about 3 big and important modules which are involved in nearly every other part of the program.
I used a more or less clean MVC (Model View Controller) architecture, so I wonder whether it is possible to let the user interface run in one thread and the rest of the application in another.
If not, loading and parsing 300 MB and creating objects will keep taking about 3 minutes, during which the user gets no response from the GUI. :/
UPDATE:
My singletons are used as a kind of storage. One singleton saves the objects of the parsed measurement files, while the other singleton saves the results. I have different calculations which use the same measurement files and create results that they then save using the other singleton. This is one problem.
The second is keeping the GUI responsive to user actions, or at least avoiding the warning that the window is not responding.
Thank you all for the advice. I will try it. Sorry for the late answer.
Generally, I avoid the singleton pattern because it creates a lot of issues down the road, particularly in testing. However, there is a fairly simple solution to making this work for multiple threads, if what you want is a singleton per thread. Put your singleton reference in a field (not a property) and decorate it with the ThreadStaticAttribute:
public class MySingleton
{
    [ThreadStatic]
    private static MySingletonClass _instance;

    // A [ThreadStatic] field initializer only runs for the first thread, so create lazily.
    public static MySingletonClass Instance
    {
        get { return _instance ?? (_instance = new MySingletonClass()); }
    }
}
Now each thread will have its own instance of MySingletonClass.
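A quick way to see the per-thread behaviour (a hypothetical check, assuming System and System.Threading are imported):
// Each thread prints a different hash code, i.e. gets its own instance.
var t1 = new Thread(() => Console.WriteLine(MySingleton.Instance.GetHashCode()));
var t2 = new Thread(() => Console.WriteLine(MySingleton.Instance.GetHashCode()));
t1.Start(); t2.Start();
t1.Join(); t2.Join();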
The easiest way is to move all calculations to one separate thread and update the GUI using Invoke/InvokeRequired.
public partial class MyForm : Form
{
Thread _workerThread;
public MyForm()
{
_workerThread = new Thread(Calculate);
}
public void StartCalc()
{
_workerThread.Start();
}
public void Calculate()
{
//call singleton here
}
    // true if the user is allowed to change calc settings
    public bool CanUpdateSettings
    {
        get { return !_workerThread.IsAlive; }
    }
}
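The answer mentions Invoke/InvokeRequired but the snippet doesn't show it; here is a minimal sketch of how Calculate might push a result back to the UI thread (resultLabel and the result value are assumptions):
public void Calculate()
{
    // call singleton here and do the heavy work...
    string result = "done";   // placeholder for the real result

    // Marshal the UI update back onto the UI thread.
    if (InvokeRequired)
        Invoke(new Action(() => resultLabel.Text = result));   // resultLabel: a hypothetical Label on the form
    else
        resultLabel.Text = result;
}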
In this way you get a responsive GUI while the calculations are running.
The application will be thread safe as long as you don't allow the user to make changes during a running calculation.
Using several threads for the calculations themselves is a much more complex story, for which we would need more information to give you a proper answer.
You can use the TPL (Task Parallel Library).
You can parallelize your loops with the TPL; furthermore, it is built into .NET 4.0, so you don't have to change your program very much.
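For example, a loop over the parsed measurement files could be turned into a parallel loop roughly like this (files and ParseFile are stand-ins for the poster's own collection and parsing code):
using System.Threading.Tasks;

// Sequential version:
// foreach (var file in files)
//     ParseFile(file);

// Parallel version with the TPL:
Parallel.ForEach(files, file =>
{
    ParseFile(file);   // must be safe to run concurrently for different files
});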
I have a method that is getting called from multiple threads. Each of the threads have their own instance of the class. What's the most straightforward way to synchronize access to the code?
I can't just use lock(obj) where obj is an instance member, but would it be sufficient to just declare obj as static on the class? So all calls to the method would be locking on the same object? A simple illustration follows:
class Foo
{
static object locker = new object();
public void Method()
{
lock(locker)
{
//do work
}
}
}
EDIT: The //do work bit is writing to a database. Why I need to serialize the writes would take 3 pages to explain in this particular instance, and I really don't want to relive all the specifics that lead me to this point. All I'm trying to do is make sure that each record has finished writing before writing the next one.
Why do you need any synchronization when the threads each have their own instance? Protect the resource that is shared, don't bother with unshared state. That automatically helps you find the best place for the locking object. If it is a static member that the objects have in common then you indeed need a static locking object as well.
Your example would certainly work, though there must be some resource that is being shared across the different instances of the class to make that necessary.
You left out the most important part: what data is involved in // do work
If // do work uses static data then you have the right solution.
If // do work only uses instance data then you can leave out the lock() {} altogether (because 1 instance belongs to 1 Thread) or use a non-static locker (1 instance, multiple threads).
I have a producer/consumer process. The consumed objects have an ID property (of type integer), and I want only one object with a given ID to be consumed at a time. How can I do this?
Maybe I can do something like this, but I don't like it: too many lock objects get created even though only one or two objects with the same ID may be consumed per day, and the lock(_lockers) is a bit time consuming:
private readonly Dictionary<int,object> _lockers = new Dictionary<int,object>();
private object GetLocker(int id)
{
lock(_lockers)
{
if(!_lockers.ContainsKey(id))
_lockers.Add(id,new object());
return _lockers[id];
}
}
private void Consume(T notif)
{
lock(GetLocker(notif.ID))
{
...
}
}
NB: Same question with the ID property being of type string (in that case maybe I can lock on string.Intern(currentObject.ID)).
As indicated in a comment, one approach would be to have a fixed pool of locks (say 32) and take the ID modulo 32 to determine which lock to take. This would result in some false sharing of locks. 32 is a number picked out of the air - the right value would depend on your distribution of ID values, how many consumers you have, etc.
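A minimal sketch of that lock-striping idea (the pool size of 32 comes from the answer; everything else is an assumption):
private readonly object[] _lockPool = CreateLockPool(32);

private static object[] CreateLockPool(int size)
{
    var locks = new object[size];
    for (int i = 0; i < size; i++)
        locks[i] = new object();
    return locks;
}

private void Consume(T notif)
{
    // IDs that map to the same slot will (falsely) share a lock.
    var locker = _lockPool[Math.Abs(notif.ID % _lockPool.Length)];
    lock (locker)
    {
        // ...
    }
}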
Can you make your IDs unique for each object? If so, you could just lock on the object itself.
First off,
have you profiled to establish that lock(_lockers) is indeed a bottleneck? Because if it's not broken, don't fix it.
Edit: I didn't read carefully enough, this is about the (large) number of helper objects created.
I think Damien's got a good idea for that, I'll leave this bit about the strings:
Regarding
NB: Same question with the ID property being of type string (in that case maybe I can lock on string.Intern(currentObject.ID)).
No, bad idea. You can lock on a string, but then you have to worry about interning: two completely unrelated pieces of code may end up holding the very same string instance, so it's hard to be sure your lock objects are unique.
I would consider a synchronized FIFO queue as a separate class/singleton for all your produced objects - the producers enqueue the objects and the consumers dequeue them - thus the actual objects do not require any synchronization anymore. The synchronization is then done outside the actual objects.
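A minimal sketch of that idea using BlockingCollection&lt;T&gt; (the Notification type and the Process method are assumptions):
using System.Collections.Concurrent;
using System.Threading.Tasks;

var queue = new BlockingCollection<Notification>();

// Producer side: just add items; the collection does the locking.
Task.Run(() => queue.Add(new Notification { ID = 42 }));

// Consumer side: drain the queue; call queue.CompleteAdding() when producing is
// finished so this loop can end.
Task.Run(() =>
{
    foreach (var notif in queue.GetConsumingEnumerable())
        Process(notif);   // hypothetical processing method
});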
How about assigning IDs from a pool of ID objects and locking on these?
When you create your item:
var item = CreateItem();
// rawId is the integer identifier the item was created with (name assumed)
ID id = IDPool.Instance.Get(rawId);
//assign id to object
item.ID = id;
the ID pool creates and maintains shared ID instances:
class IDPool
{
    private Dictionary<int, ID> ids = new Dictionary<int, ID>();

    public ID Get(int id)
    {
        // Get the ID from the shared pool, or create a new instance in the pool.
        // Always returns the same ID instance for a given integer.
        // (If items are created on more than one thread, this lookup itself
        // needs synchronizing - see the note at the end of this answer.)
        ID shared;
        if (!ids.TryGetValue(id, out shared))
            ids.Add(id, shared = new ID(id));   // assumes ID has an int constructor
        return shared;
    }
}
You then lock on the ID, which is now a reference type, in your Consume method:
private void Consume(T notif)
{
lock(notif.ID)
{
...
}
}
This is not the optimal solution and only moves the problem to a different place, but if you believe you have contention on the lock you may get a performance improvement using this approach (given that, e.g., your objects are created on a single thread, the ID pool itself then needs no synchronization).
See How to: Synchronize a Producer and a Consumer Thread (C# Programming Guide)
In addition to simply preventing simultaneous access with the lock keyword, further synchronization is provided by two event objects. One is used to signal the worker threads to terminate, and the other is used by the producer thread to signal to the consumer thread when a new item has been added to the queue. These two event objects are encapsulated in a class called SyncEvents. This allows the events to be passed to the objects that represent the consumer and producer threads easily.
--Edit--
A simple code snippet that I wrote sometime back; see if this helps. I think this is what weismat is pointing towards?
--Edit--
How about the following:
Create a class, say CCustomer, that would hold:
an object of type object (to lock on), and
a bool - for instance, bool bInProgress.
Keep a Dictionary<int, CCustomer> that would hold these entries keyed by id.
Now, when you do the existing check:
if (!_lockers.ContainsKey(id))
    _lockers.Add(id, new CCustomer(/* bInProgress = true */));
return _lockers[id]; // here you can check the bInProgress value and respond accordingly
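A minimal sketch of that CCustomer idea (the class shape is an assumption based on the description above):
class CCustomer
{
    public readonly object SyncRoot = new object();   // the "object of type object" to lock on
    public bool bInProgress;                          // true while this id is being consumed

    public CCustomer(bool inProgress)
    {
        bInProgress = inProgress;
    }
}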
I'm writing a service that has five different methods that can take between 5 seconds and 5 minutes to run.
The service will schedule these different methods to run at different intervals.
I don't want any of the methods to run concurrently, so how do I have the methods check to see if another method is running and queue itself to run when it finishes?
Anthony
If you want something simple, and all the methods are in the same class, you can just use [MethodImpl]:
[MethodImpl(MethodImplOptions.Synchronized)]
public void Foo() {...}
[MethodImpl(MethodImplOptions.Synchronized)]
public void Bar() {...}
For instance methods, this locks on this; for static methods, this locks on typeof(TheClass).
As such, these lock objects are public - so there is a remote (but genuine) chance that another bit of code might be locking on them. It is generally considered better practice to create your own lock object:
private readonly object syncLock = new object(); // or static if needed
...
public void Foo() {
lock(syncLock) {
...
}
}
etc
Aside: a curious fact; the ECMA spec doesn't mandate a specific locking pattern for [MethodImpl], and even includes an example with a private lock as "valid". The MS spec, however, insists on this/typeof.
There's the MethodImplOptions.Synchronized attribute, as noted in the article Synchronized method access in C#, but that can lead to deadlocks as noted at MSDN. It sounds like, for your usage, this won't be a big concern.
Otherwise, the simplest approach would be to use the lock statement to make sure that only one method is executing at a time:
class ServiceClass
{
private object thisLock = new object();
public void Method1()
{
lock ( thisLock )
{
...
}
}
public void Method2()
{
lock ( thisLock )
{
...
}
}
...
}
If you are using Java, you can make the methods synchronized, which will prevent more than one thread from accessing them simultaneously.
In general, I strongly discourage using the MethodImpl(MethodImplOptions.Synchronized) attribute to do thread synchronization. If you are going to do multi-threaded programming you really should think very carefully about exactly where and how you should be locking.
I may be exaggerating a bit but I find too many similarities between the MethodImpl synchronization method and others such as the use of the End statement in VB. It often signals to me that you don't really know what you are doing and hope that this statement/method/attribute will magically solve your problem.