I have a class that is a "manager" sort of class. One of its functions is to signal that the class's long-running process should shut down. It does this by setting a boolean called "isStopping" on the class.
public class Foo
{
    bool isStopping;

    void DoWork() {
        while (!isStopping)
        {
            // do work...
        }
    }
}
Now, DoWork() was a gigantic function, and I decided to refactor it, breaking some of it out into other classes as part of the process. The problem is, some of these classes also have long-running functions that need to check whether isStopping is true.
public class Foo
{
    bool isStopping;

    void DoWork() {
        while (!isStopping)
        {
            MoreWork mw = new MoreWork();
            mw.DoMoreWork(); // possibly long running
            // do work...
        }
    }
}
What are my options here?
I have considered passing isStopping by reference, which I don't really like because it requires there to be an outside object. I would prefer to make the additional classes as standalone and dependency-free as possible.
I have also considered making isStopping a property and having it raise an event that the inner classes could subscribe to, but this seems overly complex.
Another option was to create a "Process Cancellation Token" class, similar to what .NET 4 Tasks use, and then pass that token to those classes.
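For illustration, such a token could be little more than a shared flag that the manager sets and the workers poll. The StopToken name below is made up (on .NET 4 the built-in CancellationTokenSource/CancellationToken pair already fills this role); this is only a sketch:

public class StopToken
{
    // volatile so a worker thread always sees the latest value
    private volatile bool stopRequested;

    public bool IsStopRequested { get { return stopRequested; } }

    public void RequestStop() { stopRequested = true; }
}

public class MoreWork
{
    private readonly StopToken token;

    public MoreWork(StopToken token) { this.token = token; }

    public void DoMoreWork()
    {
        while (!token.IsStopRequested)
        {
            // do a chunk of work, then loop back and check the token
        }
    }
}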
How have you handled this situation?
EDIT:
Also consider that MoreWork might have an EvenMoreWork object that it instantiates and calls a potentially long-running method on... and so on. What I'm looking for is a way to signal an arbitrary number of objects down a call tree to tell them to stop what they're doing, clean up, and return.
EDIT2:
Thanks for the responses so far. Seems like there's no real consensus on methods to use, and everyone has a different opinion. Seems like this should be a design pattern...
You can go two ways here:
1) The solution you've already outlined: pass a signaling mechanism to your subordinate objects: a bool (by ref), the parent object itself cloaked in an interface (Foo: IController in the example below), or something else. The child objects check the signal as needed.
// Either in the MoreWork constructor
public MoreWork(IController controller) {
    this.controller = controller;
}

// Or in DoMoreWork, depending on your preferences
public void DoMoreWork(IController controller) {
    do {
        // More work here
    } while (!controller.IsStopping);
}
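For completeness, the IController side might look something like this; it is only a sketch, and IsStopping is assumed to be the one member the children need:

public interface IController {
    bool IsStopping { get; }
}

public class Foo : IController {
    private volatile bool isStopping;

    public bool IsStopping { get { return isStopping; } }

    public void Stop() { isStopping = true; }
}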
2) Turn it around and use the observer pattern - which will let you decouple your subordinate objects from the parent. If I were doing it by hand (instead of using events), I'd modify my subordinate classes to implement an IStoppable interface, and make my manager class tell them when to stop:
public interface IStoppable {
    void Stop();
}

public class MoreWork: IStoppable {
    bool isStopping = false;

    public void Stop() { isStopping = true; }

    public void DoMoreWork() {
        do {
            // More work here
        } while (!isStopping);
    }
}
Foo maintains a list of its stoppables and in its own stop method, stops them all:
public void Stop() {
    this.isStopping = true;
    foreach (IStoppable stoppable in stoppables) {
        stoppable.Stop();
    }
}
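How the stoppables list gets filled is up to you; one plausible sketch (assuming Foo creates the workers itself) is to register each worker as it is created:

private readonly List<IStoppable> stoppables = new List<IStoppable>();

void DoWork() {
    while (!isStopping) {
        MoreWork mw = new MoreWork();
        stoppables.Add(mw);    // register it so Stop() can reach it
        mw.DoMoreWork();
        stoppables.Remove(mw);
        // note: if Stop() is called from another thread, guard this list with a lock
    }
}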
I think firing an event that your worker classes subscribe to makes sense.
You could create a Cancel() method on your manager class, and on each of your other worker classes. Base it on an interface.
The manager class, or classes that instantiate other worker classes, would have to propagate the Cancel() call to the objects they are composed of.
The most deeply nested classes would then just set an internal _isStopping bool to true, and your long-running tasks would check for that.
Alternatively, you could maybe create a context of some sort that all the classes know about and where they can check for a canceled flag.
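A rough sketch of that propagation, using a hypothetical ICancelable interface; each level forwards Cancel() down to whatever it is composed of, which also covers the MoreWork/EvenMoreWork nesting from the question:

public interface ICancelable {
    void Cancel();
}

public class Worker : ICancelable {
    private volatile bool _isStopping;
    private readonly List<ICancelable> _children = new List<ICancelable>();

    public void Cancel() {
        _isStopping = true;
        foreach (ICancelable child in _children) {
            child.Cancel();    // propagate down the composition tree
        }
    }

    public void DoLongRunningWork() {
        while (!_isStopping) {
            // work in small chunks so the flag is checked often
        }
    }
}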
Another option was to create a "Process Cancellation Token" class, similar to what .NET 4 Tasks use, and then pass that token to those classes.
I am not familiar with this, but if it is basically an object with a bool flag property that you pass into each class, then this seems like the cleanest way to me. You could make an abstract base class whose constructor takes the token and stores it in a private member variable. Then your process loops can just check that for cancellation.
Obviously you will have to keep a reference to the object you have passed into your workers so that its bool flag can be set from your UI.
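A sketch of that base-class idea; ProcessCancellationToken here is just a stand-in for whatever token object you settle on:

public class ProcessCancellationToken {
    private volatile bool cancelled;

    public bool IsCancelled { get { return cancelled; } }

    public void Cancel() { cancelled = true; }
}

public abstract class CancellableWorker {
    private readonly ProcessCancellationToken token;

    protected CancellableWorker(ProcessCancellationToken token) {
        this.token = token;
    }

    // Derived classes poll this inside their long-running loops
    protected bool CancellationRequested {
        get { return token.IsCancelled; }
    }
}

public class MoreWork : CancellableWorker {
    public MoreWork(ProcessCancellationToken token) : base(token) { }

    public void DoMoreWork() {
        while (!CancellationRequested) {
            // do a chunk of work, then loop back and check
        }
    }
}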
Your nested types could accept a delegate (or expose an event) to check for a cancel condition. Your manager then supplies a delegate to the nested types that checks its own "shouldStop" boolean. This way, the only dependency is of the ManagerType on the NestedType, which you already had anyway.
class NestedType
{
    // note: the bool argument of Predicate<bool> is not used;
    // you could instead define a delegate type that accepts no
    // arguments and returns bool (e.g. Func<bool>)
    public Predicate<bool> ShouldStop = delegate { return false; };

    public void DoWork()
    {
        while (!this.ShouldStop(false))
        {
            // do work here
        }
    }
}
class ManagerType
{
    private bool shouldStop = false;

    private bool checkShouldStop(bool ignored)
    {
        return shouldStop;
    }

    public void ManageStuff()
    {
        NestedType nestedType = new NestedType();
        nestedType.ShouldStop = checkShouldStop;
        nestedType.DoWork();
    }
}
You could abstract this behavior into an interface if you really wanted to.
interface IStoppable
{
    Predicate<bool> ShouldStop { get; set; }
}
Also, rather than just checking a boolean, you could have the "stop" mechanism throw an exception. The manager's checkShouldStop method could simply throw an OperationCanceledException:
class NestedType
{
    public MethodInvoker Stop = delegate() { };

    public void DoWork()
    {
        while (true)
        {
            Stop();
            // do work here
        }
    }
}
class ManagerType
{
    private bool shouldStop = false;

    private void checkShouldStop()
    {
        if (this.shouldStop) { throw new OperationCanceledException(); }
    }

    public void ManageStuff()
    {
        NestedType nestedType = new NestedType();
        nestedType.Stop = checkShouldStop;
        nestedType.DoWork();
    }
}
I've used this technique before and find it very effective.
Litter your code with statements like this wherever it is most sensible to check the stop flag:
if(isStopping) { throw new OperationCanceledException(); }
Catch OperationCanceledException right at the top level.
There is no real performance penalty for this because (a) it won't happen very often, and (b) when it does happen, it only happens once.
This method also works well in conjunction with a WinForms BackgroundWorker component. The worker will automatically catch a thrown exception in the worker thread and marshal it back to the UI thread. You just have to check the type of the e.Error property, e.g.:
private void worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) {
    if (e.Error == null) {
        // Finished
    } else if (e.Error is OperationCanceledException) {
        // Cancelled
    } else {
        // Genuine error - maybe display some UI?
    }
}
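On the worker side the checks go wherever they fit; for instance (workItems and ProcessItem are placeholders for your own work, and isStopping is whatever flag your UI sets):

private volatile bool isStopping;

private void worker_DoWork(object sender, DoWorkEventArgs e) {
    foreach (var item in workItems) {
        if (isStopping) { throw new OperationCanceledException(); }
        ProcessItem(item); // potentially long-running step
    }
}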
You can flatten your call stack by turning each DoWork() call into a command using the Command pattern. At the top level, you maintain a queue of commands to perform (or a stack, depending on how your commands interact with each other). "Calling" a function is translated to enqueuing a new command onto the queue. Then, between processing each command, you can check whether or not to cancel. Like:
void DoWork() {
    var commands = new Queue<ICommand>();
    commands.Enqueue(new MoreWorkCommand());

    while (!isStopping && commands.Count > 0)
    {
        commands.Dequeue().Perform(commands);
    }
}

public class MoreWorkCommand : ICommand {
    public void Perform(Queue<ICommand> commands) {
        commands.Enqueue(new DoMoreWorkCommand());
    }
}
Basically, by turning the low-level callstack into a data structure you control, you have the ability to check stuff between each "call", pause, resume, cancel, etc..
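The snippet above leaves ICommand and the terminal command implicit; a minimal version of both could look like this:

public interface ICommand {
    // Each command does one bounded chunk of work and may enqueue follow-up commands
    void Perform(Queue<ICommand> commands);
}

public class DoMoreWorkCommand : ICommand {
    public void Perform(Queue<ICommand> commands) {
        // one small chunk of the former DoMoreWork() body;
        // enqueue another DoMoreWorkCommand here if there is more to do
    }
}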
I'm probably totally misunderstanding what RX is all about, but I thought it would be a neat way of allowing various client applications in my code to subscribe to notifications of changes to certain Entity Framework Code First types.
So in my UOW Commit method I have
var changes = DbContext.ChangeTracker.Entries<EntEvent>().Where(ee => ee.State != EntityState.Unchanged);
Hub.Instance.NotifyBeforeSave(changes);
and my (rather basic) hub class looks like this...
public sealed class Hub
{
    private static readonly Hub instance = new Hub();
    static Hub() {}
    private Hub() {}

    public static Hub Instance
    {
        get { return instance; }
    }

    public IObservable<System.Data.Entity.Infrastructure.DbEntityEntry<EntEvent>> BeforeSave = new Subject<DbEntityEntry<EntEvent>>();

    public void NotifyBeforeSave<T>(IEnumerable<System.Data.Entity.Infrastructure.DbEntityEntry<T>> changes) where T : class
    {
        var x = changes.Where(c => typeof(T) == typeof(EntEvent)) as IEnumerable<System.Data.Entity.Infrastructure.DbEntityEntry<EntEvent>>;
        BeforeSave = x.ToObservable();
    }
}
and then I thought I could subscribe a client (observer) by creating an instance of the following and calling attach.
public class SampleConsumer : IObserver<DbEntityEntry<EntEvent>>
{
    public void attach()
    {
        Hub.Instance.BeforeSave.Subscribe(this);
    }

    public void OnNext(DbEntityEntry<EntEvent> value)
    {
        var x = value;
    }

    public void OnError(Exception error)
    {
        var y = error;
    }

    public void OnCompleted()
    {
    }
}
but breakpoints in OnNext and OnError are never hit.
I'm probably 180 degrees away from where I should be, but we have to start somewhere!
The problem is that you don't have an asynchronous source.
DbContext.ChangeTracker.Entries<EntEvent>()
is a collection. You can convert it to an observable using
IEnumerable<T>.ToObservable();
but that does not make it asynchronous. In fact, it will enumerate the collection right away upon subscription, and if the collection happens to be empty, it will do nothing at all. Google the difference between cold and hot observables to understand why.
You need an asynchronous source, something like an event.
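For example, one way to get a hot source inside your existing Hub is to keep a single long-lived Subject<T> and push entries into it, rather than replacing BeforeSave on every save. This is only a sketch (singleton plumbing omitted, not tested against your exact types):

public sealed class Hub
{
    // one long-lived subject; subscribers attach once and receive every pushed entry
    private readonly Subject<DbEntityEntry<EntEvent>> beforeSave = new Subject<DbEntityEntry<EntEvent>>();

    public IObservable<DbEntityEntry<EntEvent>> BeforeSave
    {
        get { return beforeSave; }
    }

    public void NotifyBeforeSave(IEnumerable<DbEntityEntry<EntEvent>> changes)
    {
        foreach (var entry in changes)
        {
            beforeSave.OnNext(entry);   // pushed to all current subscribers
        }
    }
}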
I don't know EF very well, my guess is that the
((IObjectContextAdapter)DbContext).ObjectContext.SavingChanges
event might be what you need.
Good luck!
Plug in Nick Strupat's libraries:
https://github.com/NickStrupat/EntityFramework.Triggers
https://github.com/NickStrupat/EntityFramework.Rx
He has patterns with and without deriving from his context, that permit:
DbObservable<Context>.FromInserted<Person>();
In Dustin Campbell's answer to the question Return a value from a Event — is there a Good Practice for this?, it is stated that instead of returning data from an event handler, we can have a writable property on a custom EventArgs that is passed to the event, similar to the Cancel property of the WinForms FormClosing event.
How do I provide feedback to the event caller using properties in EventArgs?
My specific scenario is that there is a Controller class that does Job A, and there are many classes requesting Job A to be done. Thus, the controller is subscribed to this event on all of those classes.
I want to give some feedback to the caller that the job is done. The tricky part is that those classes are module-like and controller doesn't know anything about them.
My thought is to add that writable property to the event's delegate so the controller can give feedback through it. This property could somehow be accessed using reflection, which is fine in my scenario.
You cannot define properties on delegates.
Also, you do not need reflection for such a mechanism.
What you want to do is define your "return" properties in an EventArgs-derived class.
A simple such class would be:
public class JobEventArgs : EventArgs {
    public bool Done { get; set; }
}
Now you can declare your event in the class as
public event EventHandler<JobEventArgs> Job;
Usage in the method which handles the event:
public void DoJob(object s, JobEventArgs args) {
    // do stuff
    args.Done = true;
}
and in the event invoking code:
public void FireJobEvent() {
    var args = new JobEventArgs();
    this.Job(this, args);

    if (!args.Done) {
        // the job was not handled
    }
}
But frankly it rather seems like you want to do a job asynchronously with a notification when it finishes.
That would result in syntax like this:
class Module {
    public void JobCompleted(IAsyncResult r) {
        if (!r.IsCompleted)
            return;
        Console.WriteLine("The job has finished.");
    }

    public void ExecuteJob() {
        var job = new EventHandler<JobEventArgs>((s, a) => { this.controller.JobA(); });
        job.BeginInvoke(null, null,
            r =>
            {
                this.JobCompleted(r);
                if (r.IsCompleted)
                    job.EndInvoke(r);
            }, null);
    }
}
I'm integrating with a PIN device whose API contains asynchronous methods. For example, one of them is called GetStatus, and it raises a DeviceStateChangedEvent with the state passed in as a parameter.
I'd like to have an interface that is not asynchronous over it though, so that when I call GetStatus on my interface it will actually return the status rather than raising an event to pass that data to me.
I'm thinking I could do something like this:
public class MSRDevice
{
    StatusInfo _status;
    bool _stateChangedEventCompleted = false;
    IPAD _ipad; // <-- the device

    public MSRDevice()
    {
        //Initialize device, wire up events, etc.
    }

    public StatusInfo GetStatus()
    {
        _ipad.GetStatus(); // <- raises StatusChangedEvent
        while (!_stateChangedEventCompleted);
        _stateChangedEventCompleted = false;
        return _status;
    }

    void StateChangedEvent(object sender, DeviceStateChangeEventArgs e)
    {
        _status = e.StatusInfo;
        _stateChangedEventCompleted = true;
    }
}
Is this a good way to address this, or is there a better solution?
What you're doing in your example is called "busy-waiting" (or "spinning"), which is not recommended in most scenarios since it wastes a lot of CPU power. Preferably, you should use a signalling mechanism, such as a WaitHandle, to synchronize on when the event of interest (in your case, StatusChangedEvent) has occurred:
public class MSRDevice
{
    StatusInfo _status;
    IPAD _ipad; // <-- the device
    private EventWaitHandle waitHandle = new AutoResetEvent(false);

    public MSRDevice()
    {
        //Initialize device, wire up events, etc.
    }

    public StatusInfo GetStatus()
    {
        _ipad.GetStatus(); // <- raises StatusChangedEvent asynchronously
        waitHandle.WaitOne(); // <- waits for signal
        return _status;
    }

    void StateChangedEvent(object sender, DeviceStateChangeEventArgs e)
    {
        _status = e.StatusInfo;
        waitHandle.Set(); // <- sets signal
    }
}
The best option: code it async.
No; that is a hot loop. It will hammer the CPU. It also isn't guaranteed to exit due to register caching (this is trivial to demonstrate on x86 in particular).
If you need it sync, you should use something like an AutoResetEvent.
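If you do keep it asynchronous, one option (assuming .NET 4 or later and the same hypothetical IPAD device as above) is to hand the caller a Task via a TaskCompletionSource instead of blocking:

public class MSRDevice
{
    IPAD _ipad; // <-- the device

    // TaskCompletionSource<T> lives in System.Threading.Tasks
    private TaskCompletionSource<StatusInfo> _pendingStatus;

    public Task<StatusInfo> GetStatusAsync()
    {
        _pendingStatus = new TaskCompletionSource<StatusInfo>();
        _ipad.GetStatus();           // raises StatusChangedEvent later
        return _pendingStatus.Task;  // caller continues; no thread is blocked
    }

    void StateChangedEvent(object sender, DeviceStateChangeEventArgs e)
    {
        var pending = _pendingStatus;
        if (pending != null)
        {
            pending.TrySetResult(e.StatusInfo);  // completes the task
        }
    }
}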
I'm working on a winforms application that is very complicated, and has massive callback chains being passed around all over the place.
As an example loosely based on this code, there could be a "Manager" class that spawns a "SetUpWorkerThreads" class, which creates a "HandleWorker" thread for, say, 10 workers.
The worker thread needs to call back to the manager class on occasion; to achieve this, the code looks like this:
public class Manager
{
    public delegate void SomethingHappenedHandler();

    private void Init()
    {
        var x = new SetUpWorkerThreads(SomethingHappened);
    }

    private void SomethingHappened()
    {
        // Handle something happened
    }
}

public class SetUpWorkerThreads
{
    private readonly Manager.SomethingHappenedHandler _somethingHappened;

    public SetUpWorkerThreads(Manager.SomethingHappenedHandler somethingHappened)
    {
        _somethingHappened = somethingHappened;
    }

    public void SetupTheThreads()
    {
        // Contrived!
        for (int x=0; x<10; x++)
        {
            var worker = new Worker(_somethingHappened);
            new Thread(worker.DoingSomething).Start();
        }
    }
}

public class Worker
{
    private readonly Manager.SomethingHappenedHandler _somethingHappened;

    public Worker(Manager.SomethingHappenedHandler somethingHappened)
    {
        _somethingHappened = somethingHappened;
    }

    public void DoingSomething()
    {
        // ... Do Something
        _somethingHappened();
    }
}
In reality, there can be many more classes involved, each passing around a mass of callbacks for various things. I realise that poor class/application design is playing a part in this, but are there better ways to go about handling these interactions between classes, specifically in winforms apps, and when a lot of threading is going on?
I can't see that the threading makes it more or less problematic.
One alternative is to use events instead of callbacks, but that won't break up the long chains and will give you an unsubscribing hell too.
One possible approach is to create an object responsible for handling all the events. Either as a singleton or as a single object that you pass to all your threads (instead of the callbacks). Then you can have a simple interface on the EventRouter object to raise events from the threads. You can then subscribe to events on the EventRouter where you need to handle "something happened".
Edit
Something like the GoF Mediator pattern, but with a publisher-subscriber twist.
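A minimal sketch of such a router (all names here are made up; a real one would likely have one event or message type per kind of notification):

public class EventRouter
{
    // Raised on whatever thread the worker publishes from; a WinForms subscriber
    // must marshal back to the UI thread itself (e.g. via Control.BeginInvoke).
    public event EventHandler SomethingHappened;

    public void PublishSomethingHappened(object source)
    {
        var handler = SomethingHappened;
        if (handler != null)
        {
            handler(source, EventArgs.Empty);
        }
    }
}

// Workers get the single router instead of a chain of callbacks:
public class Worker
{
    private readonly EventRouter _router;

    public Worker(EventRouter router) { _router = router; }

    public void DoingSomething()
    {
        // ... do something
        _router.PublishSomethingHappened(this);
    }
}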
I am just wondering about the problems involved in passing a parent to a child (should it be done, etc.) so that the child can access functionality from the parent; in this case it involves threads. My scenario follows:
public class A
{
    public A()
    {
        B b = new B(this);
        Thread thread = new Thread(new ThreadStart(b.GO));
    }

    public string DoSomething() { return "Something Done"; }
}

public class B
{
    A _a;

    public B(A a)
    {
        _a = a;
    }

    public void GO() { _a.DoSomething(); }
}
Based on the above, I'm wondering about any convention clashes that occur, or problems that come into effect, when you do something like this. Is it bad to do this? I know it definitely brings up some thread-safety issues. But my overall question is: is it OK to do this, and does it bring up other issues? How would I update values in the main thread?
The reason I want it separate is that class B has a timer in it (not in the shown code) that, when it runs out, does some things, tells class A that it has finished, resets the timer, and goes back to sleep until the timer runs out again. How would I do that otherwise?
~Regards,
Heinrich
Looking at the code, you don't appear to have any threading issues. You might introduce a race condition if you are working on the same variables, but that isn't special to the situation you propose.
You would treat this like any other multi-threaded situation and lock resources that might be accessed by multiple threads.
I would recommend the following online book: http://www.albahari.com/threading/
I don't think you really have to connect them as tightly as you have; what you are trying to do is simply pass messages or state between threads. The reason I would recommend not connecting them so tightly is to reduce coupling.
The website I referenced contains many different signaling techniques. Pick the simplest for your needs. I would need more details about your exact requirements to pick one for you.
Another way to handle what you are doing is for B to raise an event and for A to handle the event. That way you don't have to pass A into B. I don't know what your real structure is, but let's say that B's thread function does something more complicated and A implements IDisposable. What happens if A is disposed before B gets to the point of calling a method on A? To me the cleaner way to handle that situation is to have B raise an event and have A register for it.
public class A
{
    B _b;

    public A()
    {
        _b = new B();
        _b.DidSomething += HandleDidSomething;
    }

    private void HandleDidSomething(object source, EventArgs e)
    {
        // Handle the "B did something" case
    }

    public void WaitForBToFinish() { _b.DoneDoingThings.WaitOne(); }
}

public class B
{
    public event EventHandler DidSomething;

    public ManualResetEvent DoneDoingThings = new ManualResetEvent(false);

    public B() {}

    public void StartDoingThings()
    {
        new Thread(DoThings).Start();
    }

    private void DoThings()
    {
        for (int i = 0; i < 10; i++)
        {
            Thread.Sleep(1000);
            OnDidSomething(new EventArgs());
        }
        DoneDoingThings.Set();
    }

    private void OnDidSomething(EventArgs e)
    {
        if (DidSomething != null)
        {
            DidSomething(this, e);
        }
    }
}
Note - you should implement IDisposable in class B and dispose of the ManualResetEvent; I'm just too lazy to do all that for sample code, and wanted to give you an idea about using events to signal that work was done.