I'm working on an application that loads untrusted assemblies via an interface. Each of those assemblies should be able to add one or more GameAction objects to a thread-safe queue used by the server.
The first iteration of design was to just pass the queue--something like this:
public interface IGameClient
{
    void HandleStateChange(IGameState gameState,
                           ref Queue<IGameAction> actionQueue);
}
But the problem with this is that it gives an untrusted assembly access to the shared queue, allowing it to manipulate other items in the queue and discover information about other queued actions.
The second pass was:
public interface IGameClient
{
    void HandleStateChange(IGameState gameState);
    event GameActionDelegate HasNewEvent; // passes IGameAction as a parameter
}
The problem with this is that it doesn't necessarily allow for the ordering or grouping of actions.
What I'm really hoping for is to be able to pass a reference to an object that encapsulates the thread-safe queue, but only exposes Enqueue(). But, I'm afraid that an untrusted assembly could manipulate a private Queue object using reflection.
So, what's the best way to handle this?
Thoughts in no particular order:
1) Events do guarantee ordering (or at least, your implementation could guarantee whatever ordering you want, and the simplest implementations will preserve ordering).
2) I don't see why you'd want to pass the queue by reference in the first example of the interface. It may be worth checking that you understand parameter passing and "ref".
3) If you come up with an interface which only exposes Enqueue then the implementation presumably won't be a Queue<T>. It might contain a Queue<T>, but if you really don't trust assemblies not to mess around with your private members, you should load them in such a way that you don't grant them the relevant reflection permissions.
Another alternative might be to pass in an Action<IGameAction> which the client can call when it wants to add an item to the queue. The delegate would be created from whatever Enqueue method you've got.
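A minimal sketch of that idea, assuming the interface is amended to accept the delegate (GameServer and Dispatch are illustrative names, not part of the original design):
using System;
using System.Collections.Concurrent;

public interface IGameClient
{
    // The client receives only a delegate; the queue itself stays hidden.
    void HandleStateChange(IGameState gameState, Action<IGameAction> enqueue);
}

public class GameServer
{
    private readonly ConcurrentQueue<IGameAction> _actions =
        new ConcurrentQueue<IGameAction>();

    public void Dispatch(IGameClient client, IGameState state)
    {
        // Method group conversion wraps Enqueue in the delegate, so the
        // untrusted client never sees the queue object itself.
        client.HandleStateChange(state, _actions.Enqueue);
    }
}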
Don't expose the queue at all--simply expose a method on a facade that allows the GameClient to submit an entry that the GameServer will place on the internal queue.
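A sketch of such a facade (ActionSink, Submit and TryTake are illustrative names); the internal queue is never handed out:
using System.Collections.Concurrent;

public class ActionSink
{
    private readonly ConcurrentQueue<IGameAction> _queue =
        new ConcurrentQueue<IGameAction>();

    // The only operation a game client can reach.
    public void Submit(IGameAction action)
    {
        _queue.Enqueue(action);
    }

    // Only the server, which owns this instance, drains the queue.
    internal bool TryTake(out IGameAction action)
    {
        return _queue.TryDequeue(out action);
    }
}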
You could make the HandleStateChange method return an IEnumerable<IGameAction>, IList<IGameAction> or IGameAction[]:
public interface IGameClient
{
    IGameAction[] HandleStateChange(IGameState gameState);
}
Then use that return value to add actions to the queue.
Pass a newly created Queue instance, and when HandleStateChange returns, merge the contents of that dummy queue into the real queue.
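A rough sketch of that approach, assuming _realQueue is the server's real thread-safe queue and client implements the original interface:
// Give the untrusted client its own scratch queue.
var scratch = new Queue<IGameAction>();
client.HandleStateChange(gameState, ref scratch);

// Merge in order once the client returns; it never saw _realQueue.
while (scratch.Count > 0)
    _realQueue.Enqueue(scratch.Dequeue());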
I'm trying to specify an interface for a folder. That interface should allow one to:
- add or delete files of type IFile
- get a list of IFile
- broadcast events whenever a file is added/deleted/changed (e.g. for the GUI to subscribe to)
I'm trying to find the best way to do it. So far, I have come up with three ideas:
1
public interface IFolder_v1
{
    ObservableCollection<IFile> files { get; }
}
2
public interface IFolder_v2
{
    void add(IFile file);
    void remove(IFile file);
    IEnumerable<IFile> files { get; }
    event EventHandler OnFileAdded;
    event EventHandler OnFileRemoved;
    event EventHandler OnFileChanged;
}
3
public interface IFolder_v3
{
    void add(IFile file);
    void remove(IFile file);
    IEnumerable<IFile> files { get; }
    event EventHandler<CRUD_EventArgs> OnFilesChanged;
}
public class CRUD_EventArgs : EventArgs
{
    public enum Operations
    {
        added,
        removed,
        updated
    }

    private Operations _op;

    public CRUD_EventArgs(Operations operation)
    {
        this._op = operation;
    }

    public Operations operation
    {
        get { return this._op; }
    }
}
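For illustration, here is a sketch of how an implementation might raise that event (using the interface from idea #3; the Folder class and RaiseChanged helper are illustrative, not part of the question):
using System;
using System.Collections.Generic;

public class Folder : IFolder_v3
{
    private readonly List<IFile> _files = new List<IFile>();

    public event EventHandler<CRUD_EventArgs> OnFilesChanged;

    public IEnumerable<IFile> files
    {
        get { return _files; }
    }

    public void add(IFile file)
    {
        _files.Add(file);
        RaiseChanged(CRUD_EventArgs.Operations.added);
    }

    public void remove(IFile file)
    {
        _files.Remove(file);
        RaiseChanged(CRUD_EventArgs.Operations.removed);
    }

    private void RaiseChanged(CRUD_EventArgs.Operations op)
    {
        var handler = OnFilesChanged;
        if (handler != null)
            handler(this, new CRUD_EventArgs(op));
    }
}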
Idea #1 seems really nice to implement, as it doesn't require much code, but it has a problem: what if an implementation of IFolder only allows adding files of specific types (say, text files) and throws an exception whenever any other file is added? I don't think that would be feasible with a simple ObservableCollection.
Idea #2 seems OK, but requires more code. Also, defining three separate events seems a bit tedious: what if an object needs to subscribe to all of them? We'd have to hook up three different event handlers for that. Seems annoying.
It's also a little less easy to use than solution #1: one now calls .add to add files, but reads the list of files from .files, so the naming is a bit less clear than having everything bundled up in one simple sub-object (.files from idea #1).
Idea #3 circumvents all of those problems, but has the longest code. Also, I have to use a custom EventArgs class, which I can't imagine is particularly clean in an interface definition. (It also seems overkill to define a class like that for simple CRUD event notifications; shouldn't there be an existing class of some sort?)
Would appreciate some feedback on what you think is the best solution (possibly even something I haven't thought of at all). Is there any best practice?
Take a look at the Framework's FileSystemWatcher class. It does pretty much what you need, but if you still need to implement your own class anyway, you can get ideas by looking at how it is implemented (which is, by the way, similar to your #2 approach).
Having said that, I personally think that #3 is also a very valid approach. Don't be afraid of writing long code (within reasonable limits of course) if the result is more readable and maintainable than it would be with shorter code.
Personally I would go with #2.
In #1 you just expose an entire collection of objects, allowing everyone to do anything with it.
#3 seems less self-explanatory to me. Though I like to keep things simple when coding, so I may be biased.
If watchers are going to be shorter-lived than the thing being watched, I would avoid events. The pattern exemplified by ObservableCollection, where the collection gives a subscribed observer an IDisposable object which can be used to unsubscribe, is a much better approach. If you use such a pattern, you can have your class hold a weak reference (probably a "long" weak reference) to the subscription object, which would in turn hold a strong reference (probably a delegate) to the subscriber and to the weak reference which identifies it. Abandoned subscriptions will thus get cleaned up by the garbage collector; it is the duty of a subscriber to ensure that a strongly rooted reference exists to the subscription object.
Beyond the fact that abandoned subscriptions can get cleaned up, another advantage of the "disposable subscription-object" approach is that unsubscription can easily be made lock-free and thread-safe, and run in constant time: to dispose a subscription, simply null out the delegate contained therein. If each attempt to add a subscription causes the subscription manager to inspect a couple of subscriptions to ensure that they are still valid, the total number of subscriptions in existence will never grow to more than twice the number that were valid as of the last garbage collection.
I was wondering what the best practice is for accessing the owner instance when using composition (not aggregation).
public class Manager
{
    public List<ElementToManage> Listelmt;
    public List<Filter> ListeFilters;

    public void LoadState() { }
}

public class Filter
{
    public ElementToManage instance1;
    public ElementToManage instance2;
    public object value1;
    public object value2;

    public void LoadState()
    {
        // Needs to access the property Listelmt on the owner (Manager) instance:
        // instance1 = Listelmt.SingleOrDefault(...
    }
}
So far I'm thinking about two possibilities:
Keep a reference to the owner in the Filter instance.
Declare an event in the Filter class. The Manager instance subscribes to it, and the Filter raises it when needed.
I feel more like using the second possibility. It seems more OOP to me, and there are fewer dependencies between the classes (any later refactoring will be easier),
but debugging and tracing may be a bit harder in the long run.
Regarding business-layer classes, I don't remember seeing events used for this purpose.
Any insight would be greatly appreciated.
There is no concept of an "owner" of a class instance; there should not be any strong coupling between the Filter instance and the object that happens to hold it.
That being the case, an event seems appropriate: it allows for loose coupling while enabling the functionality you want. If you went with option #1, on the other hand, you would limit the overall usefulness of the Filter class: now it can only be contained in Manager classes, and I don't think that is what you would want.
Overall, looking at your code, you might want to pass in the relevant data that LoadState operates on, so it doesn't have to "reach out".
I recommend keeping a reference to the owner of the Filter instance. An event can be handled by multiple handlers, and one handler can change the result of a previous handler. And you probably don't want the owner to change during the lifetime of the Filter without the Filter instance being notified.
My short answer: neither.
The first option, keeping a reference to the owner, is problematic for several reasons: the Filter class no longer has a single responsibility, Filter and Manager are tightly coupled, etc.
The second option is only a little better. Yes, I've used events in similar scenarios; it rarely, if ever, ends well.
It's difficult to give definite advice without more specific details. Some thoughts:
1) Are you sure your classes are as they should be? Maybe there should be a class to compose a single ElementToManage and a single Filter ?
2) Who is responsible for creating a Filter? For example, if it is Manager, maybe the Manager can give the list as a construction parameter? Maybe you can create a FilterFactory class that does any needed initializations.
3) Who calls filter.LoadState()? Maybe the needed list could be passed as a parameter to the LoadState() method (see the sketch after this list).
4) I frequently use an "Initialization Design Pattern" (my terminology). For example, I'll have a BinaryTree where parent and child point to each other. The Factory constructs the nodes in a plain state, and then calls an initialize method with the other needed objects. The class becomes complicated because I probably need to ensure that an uninitialized object raises an error for every other usage, that an object is initialized only once, that it is initialized only through the Factory, etc. But when it works, it is usually the best solution, in my opinion.
5) I'm still trying to learn "Dependency Injection" and getting nowhere; I guess it may have something to do with your question. I wonder if someone will come up with an answer involving Dependency Injection.
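A minimal sketch of thought #3, assuming the caller (for example the Manager) hands LoadState the list it needs; SomeCondition is a placeholder predicate:
using System.Collections.Generic;
using System.Linq;

public class Filter
{
    public ElementToManage instance1;

    // The Filter no longer needs any knowledge of Manager.
    public void LoadState(IEnumerable<ElementToManage> elements)
    {
        instance1 = elements.SingleOrDefault(e => SomeCondition(e));
    }

    private bool SomeCondition(ElementToManage e)
    {
        return true; // placeholder predicate
    }
}

// Caller side, e.g. inside Manager:
//   filter.LoadState(this.Listelmt);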
When creating an interface with methods that are expected to be called in a specific order, is such a dependency good practice, or should more patterns and practices be applied to "fix" it or make the situation better?
It's important that users of some interfaces call methods in a specific order.
There are likely many examples; this is the one that came to mind first:
A data source interface whose author envisions the init method always being called first by any caller (e.g., to connect to the data source or look up preliminary meta info) before any of the other operation methods are called.
interface DataAccess {
    // Note to callers: this init must be called first and only once.
    void InitSelf();

    // operation: get the record having the given id
    T Op_GetDataValue<T>(int id);

    // operation: get a count
    int Op_GetCountOfData();

    // operation: persist something to the data store
    void Op_Persist(object o);

    // etc.
}
However, the caller may choose not to call the initialization method first.
In general, I'm wondering if there are better ways to handle this situation.
You could have the other methods throw an exception if the object is uninitialized, or you could go for a more strict API. It would be more complicated to implement, but for example, InitSelf() could return an interface containing the data operations:
interface DataAccess {
    DataOperations InitSelf();
}

interface DataOperations {
    T Op_GetDataValue<T>(int id);
    ...
}
This would sort of require the consumer to initialize before performing operations, though there would be ways to circumvent that.
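Usage would then read naturally, since the operations are unreachable until InitSelf() has run (GetDataAccess is a placeholder for however the instance is obtained):
DataAccess dataAccess = GetDataAccess();
DataOperations ops = dataAccess.InitSelf();  // must happen first
int count = ops.Op_GetCountOfData();         // operations require the returned object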
I'm a little confused. Implementors are not necessarily callers or users of the interface. In your example, implementors can assume that InitSelf is called before anything else, but aren't responsible for making that happen.
I think naming something InitXXX is a good indication that that is the case.
For more odd dependencies (not Init, which is very common), it would probably be better to not have the dependency.
Sometimes, it's not possible, and if you decide that it's not overkill to try to fix it, a common approach is to separate the API into multiple interfaces that you gain access to as you call the earlier ones.
A common example is a database interface. You call connect, it returns a connection. You call createStatement on the connection, it returns a statement. You call setParam on the statement, you call runStatement on the statement, then you get a result, etc.
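A sketch of that staged style in C# (all names illustrative): each stage is only reachable from the previous one, so the type system enforces the order.
interface IDatabase
{
    IConnection Connect(string connectionString);
}

interface IConnection
{
    IStatement CreateStatement(string sql);
}

interface IStatement
{
    void SetParam(int index, object value);
    IResult RunStatement();
}

interface IResult
{
    bool NextRow();
}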
Any initialization should be done in the constructor. Clients of the DataAccess interface should not have to worry about the construction details.
I have a class that retrieves some data and images, does some stuff to them, and then uploads them to a third-party app using web services.
The object needs to perform some specific steps in order.
My question is: should I be explicitly exposing each method publicly, like so:
MyObject obj = new MyObject();
obj.RetrieveImages();
obj.RetrieveAssociatedData();
obj.LogIntoThirdPartyWebService();
obj.UploadStuffToWebService();
or should all of these methods be private and encapsulated in a single public method, like so:
public class MyObject
{
    private void RetrieveImages() { }
    private void RetrieveAssociatedData() { }
    private void LogIntoThirdPartyWebService() { }
    private void UploadStuffToWebService() { }

    public void DoStuff()
    {
        this.RetrieveImages();
        this.RetrieveAssociatedData();
        this.LogIntoThirdPartyWebService();
        this.UploadStuffToWebService();
    }
}
which is called like so:
MyObject obj = new MyObject();
obj.DoStuff();
It depends on who knows that the methods should be called that way.
Consumer knows: For example, if the object is a Stream, usually the consumer of the Stream decides when to Open, Read, and Close the stream. Obviously, these methods need to be public or else the object can't be used properly. (*)
Object knows: If the object knows the order of the methods (e.g. it's a TaxForm and has to make calculations in a specific order), then those methods should be private and exposed through a single higher-level step (e.g. ComputeFederalTax will invoke CalculateDeductions, AdjustGrossIncome, and DeductStateIncome).
If the number of steps is more than a handful, you will want to consider a Strategy instead of having the steps coupled directly into the object. Then you can change things around without mucking too much with the object or its interface.
In your specific case, it does not appear that a consumer of your object cares about anything other than a processing operation taking place. Since it doesn't need to know about the order in which those steps happen, there should be just a single public method called Process (or something to that effect).
(*) However, usually the object knows at least the order in which the methods can be called to prevent an invalid state, even if it doesn't know when to actually do the steps. That is, the object should know enough to prevent itself from getting into a nonsensical state; throwing some sort of exception if you try to call Close before Open is a good example of this.
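For example, a minimal sketch of such a guard, assuming an _isOpen flag set by Open():
public void Read(byte[] buffer)
{
    if (!_isOpen)
        throw new InvalidOperationException("Call Open before Read.");
    // ... actual read ...
}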
If method B() truly cannot be called unless A() is called first, then proper design dictates that A should return some object that B requires as a parameter.
Whether this is always practical is another matter, but that's how it should be done.
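A small sketch of that idea (Token and Service are illustrative names); B cannot be called without the token that only A produces:
public sealed class Token
{
    internal Token() { } // only A can create one
}

public class Service
{
    public Token A()
    {
        // ... do A's work ...
        return new Token();
    }

    public void B(Token token)
    {
        if (token == null) throw new ArgumentNullException("token");
        // ... B's work, now guaranteed to follow a call to A ...
    }
}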
Yes, private; otherwise you are leaving the door open for users to do things wrong, which will only be a cause of pain for everyone.
Do you ever need to call any of these methods on its own? That is, does any of them do something useful that might be needed stand-alone? If so, you might want to keep those public. But even if you keep them all public, you should provide a method which calls them in the correct order (preferably with a useful name) to make things easier for your users.
It all depends on whether the operation is essentially atomic. In this case it looks like a single operation to us outsiders, but is it really? If LogIntoThirdPartyWebService fails, does the UI need to present a dialog box to ask the user if they want to retry? In the case where you have a single operation, retrying the LogIntoThirdPartyWebService operation also requires redoing potentially expensive operations like RetrieveImages, while making them separate enables more granular logic.
What I would do in this case is something like this:
Images images = RetrieveImages();
ImagesAndData data = RetrieveAssociatedData(images);
WebService webservice = LogIntoThirdPartyWebService();
UploadStuffToWebService(data, webservice);
or maybe more ideally something like this:
UploadStuffToWebService(RetrieveImages().RetrieveAssociatedData(),
                        LogIntoThirdPartyWebService());
Now you have granularity while enforcing the proper order of operations.
It sounds to me like from the consumer of your object's point of view, the object does one thing: it moves images from one place to another. As the consumer of the object, all of the individual steps you need to take to accomplish that are irrelevant to me; after all that's why I have you to do it for me.
So you should have a single DoStuff() method that takes all the necessary params, and make all the implementation details private.
Private -- take any required parameters in the constructor, and execute the steps in the correct order internally.
Do not assume the caller will, or knows how to, call them in order.
So, rather than the example you have listed, I would do it this way:
// Make the constructor take any parameters required to set up the object.
MyObject myObject = new MyObject();
myObject.UploadToWebService();
It really depends on whether you estimate that anyone would want to invoke only one of these methods, and whether they make sense individually or can be implemented independently. If not, then it is better to avoid exposing anything but the high-level operation.
Expose as little as possible, as much as necessary. If a call to FuncA() is always followed by a call to FuncB(), make one public and have it call the other, or else have a public FuncC() call them in sequence.
Yes, it should definitely be private, especially as all the methods seem to be parameterless and you're just concerned with the order.
The only time I would consider calling each method explicitly is if they each took several, non-overlapping parameters, and you wouldn't want to pass such a long string of parameters to one method and would want to modularize. And then you should make sure to document it clearly. But remember that comments are not executable... You'll still have to trust your user a bit more than you really should.
One of the biggest factors of information hiding and OOP... only give the user what is absolutely necessary. Allow as little room for mess-up as possible.
The question of public or private depends entirely on the contract you wish to expose for your object. Do you want users of your object to call the methods individually, or do you want them to call a single "DoStuff" method and be done with it?
It all depends on the intended usage of the class.
In the example you've given, I'd say DoStuff should be public and the rest private.
Which do you think would be easier for the consumers of your class?
Absolutely write one public method that performs the correct steps in the correct order. Otherwise, the caller is not going to do it right; they're going to forget a step or skip something.
Neither. I think you have at least three objects; otherwise you are breaking the Single Responsibility Principle. You need an object that "gets and holds images", one that "manipulates images", and one that "manages external vendor communication".
One reason they would be public is if you intend the user to be able to insert logic between steps. In that case, you should enforce that the functions are called in the correct order internally by keeping a really tiny state machine. If the state machine transitions in the wrong order, you have options besides just doing something wrong, such as throwing an exception.
However, an alternative design allows them all to remain private even when the need to act between steps exists. Instead of making the methods public, provide a public callback interface that lets users attach handlers which you invoke at each step of the process. In your now-private DoItAll() method, you can do something as granular as:
preRetrieveHandler?.Invoke();
RetrieveImages();
postRetrieveHandler?.Invoke();
// ... and so on for each remaining step
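One idiomatic way to expose such hooks in C# is with events (names illustrative):
public event Action PreRetrieve;
public event Action PostRetrieve;

public void DoItAll()
{
    PreRetrieve?.Invoke();
    RetrieveImages();
    PostRetrieve?.Invoke();
    // ... remaining steps, each with its own pre/post hook ...
}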
My software engineering rule of thumb is to always give the user/consumer/caller as little chance to screw things up as possible. Therefore, keep the methods private to ensure working order.
Fowler uses the term "Feature Envy" to describe a situation where one object calls a handful of methods (especially repeatedly) on another.
I don't know where he got it from. You don't see it much in the literature, and a lot of people over the years have had no idea what I was talking about. (I don't know why; I thought the name was perfectly obvious once I heard it, which is why I repeat it.)
I have what I think is a simple "problem" to which I have found a couple of solutions, but I am not sure which way to go and what the best practice in C# is.
I have a master object (say, a singleton) instantiated once during the lifespan of the application. This "MasterClass" creates a bunch of objects of a new type, say "SlaveClass", every time MasterClass.Instance.CreateSlaveObject is called.
This MasterClass also monitors some other object for status changes, and when one happens, notifies the SlaveClass objects it created of the change. Seems simple enough.
Since I come from the native C++ world, the way I did it first was to have an interface
interface IChangeEventListener
{
    void ChangeHappened();
}
from which I derived "SlaveClass". Then in my "MasterClass" i have:
...
IList<IChangeEventListener> slaveList;
...
CreateSlaveObject
{
    ...
    slaveList.Add(slave);
}
...
ChangeHappened()
{
    ...
    foreach (var slave in slaveList)
    {
        slave.ChangeHappened();
    }
}
And this works. But I kept wondering in the back of my mind if there is another (better) way of doing this. So I researched a bit more on the topic and saw the C# events.
So instead of maintaining a collection of slaves in the MasterClass, I would basically inject the MasterClass into the ctor of SlaveClass (or via a property) and let the SlaveClass object add its ChangeHappened as an event handler. This would be illustrated:
...Master...
public delegate void ChangeHappenedDelegate(object sender, NewsInfoArgs args);
public event ChangeHappenedDelegate ChangeHappenedEvent;

...Slave...
public SlaveClass(MasterClass publisher) // inject publisher service
{
    publisher.ChangeHappenedEvent += ChangeHappened;
}
But this seems to be an unnecessary coupling between the Slave and the Master, though I like the elegance of the built-in event notification mechanism.
So should I keep my current code, or move to the event-based approach (with publisher injection)? And why?
Or if you can propose an alternative solution I might have missed, I would appreciate that as well.
Well, in my mind, events and interfaces like you showed are two sides of the same coin, at least in the context you described.
The way I think about events is that "I need to subscribe to your event because I need you to tell me when something happens to you".
Whereas the interface way is "I need to call a method on you to inform you that something happened to me".
It can sound like the same thing, but it differs in who initiates the relationship: in both cases it is your MasterClass doing the notifying, but who sets up that conversation makes all the difference.
Note that if your slave classes have a method available that would be suitable for calling when something happens in your master class, you don't need the slave class to contain the code to hook this up; you can just as easily do it in your CreateSlaveObject method:
SlaveClass sc = new SlaveClass();
ChangeHappenedEvent += sc.ChangeHappened;
return sc;
This will basically use the event system, but let the MasterClass code do all the wiring of the events.
Do the SlaveClass objects live as long as the singleton class? If not, you need to handle the case where they become stale/no longer needed. In the above case (basically in both yours and mine), you're holding a reference to those objects in your MasterClass, so they will never be eligible for garbage collection unless you forcibly remove the event handlers or unregister the interfaces.
To handle the problem with the SlaveClass not living as long as the MasterClass, you're going to run into the same coupling problem, as you also noted in the comment.
One way to "handle" (note the quotes) this could be to not really link directly to the correct method on the SlaveClass object, but instead create a wrapper object that internally will call this method. The benefit from this would be that the wrapper object could use a WeakReference object internally, so that once your SlaveClass object is eligible for garbage collection, it might be collected, and then the next time you try to call the right method on it, you would notice this, and thus you would have to clean up.
For instance, like this (and here I'm typing without the benefit of a Visual Studio intellisense and a compiler, please take the meaning of this code, and not the syntax (errors).)
public class WrapperClass
{
    private WeakReference _Slave;

    public WrapperClass(SlaveClass slave)
    {
        _Slave = new WeakReference(slave);
    }

    public void ChangeHappened()
    {
        object o = _Slave.Target;
        if (o != null)
            ((SlaveClass)o).ChangeHappened();
        else
            MasterClass.Instance.ChangeHappenedEvent -= ChangeHappened;
    }
}
In your MasterClass, you would thus do something like this:
SlaveClass sc = new SlaveClass();
WrapperClass wc = new WrapperClass(sc);
ChangeHappenedEvent += wc.ChangeHappened;
return sc;
Once the SlaveClass object is collected, then on the next call (but not sooner) from your MasterClass to the event handlers to inform them of the change, all those wrappers that no longer have an object will be removed.
I think it's just a matter of personal preference... Personally, I like to use events, because it fits better with the .NET "philosophy".
In your case, if the MasterClass is a singleton, you don't need to pass it to the constructor of the SlaveClass, since it can be retrieved using the singleton property (or method) :
public SlaveClass()
{
    MasterClass.Instance.ChangeHappenedEvent += ChangeHappened;
}
As you appear to have a singleton instance of MasterClass, why not subscribe to MasterClass.Instance.ChangeHappenedEvent? It's still a tight-ish coupling, but relatively neat.
Although events would be the normal paradigm for exposing public subscribe/unsubscribe functionality, in many cases they're not the best paradigm when subscribe/unsubscribe functionality doesn't need to be exposed. In this scenario, the only things that are allowed to subscribe to or unsubscribe from the event are the ones that you yourself create, so there's no need for public subscribe/unsubscribe methods. Further, the master's knowledge of the slaves far exceeds the typical event publisher's knowledge of subscribers. Therefore, I favor having the master explicitly handle the connection and disconnection of subscriptions.
One feature I would probably add, though, which is made somewhat easier by the coupling between master and slave, would be a means of allowing slaves to be garbage-collected if all outside references are abandoned. The easiest way of doing this would probably be to have the master keep a list of WeakReferences to slaves. When it's necessary to notify slaves that something has happened, go through the list, dereference any WeakReference that's still alive, and notify the slave. If more than half the items in the list have been added since the last sweep and the list contains at least 250 or so items, copy all live references to a new list and replace the old one.
An alternative approach which would avoid having to dereference the WeakReference objects so often would be to have the public "slave" object be a wrapper for a private object, and have the public object override Finalize to let the private object know that no outside references to it exist. This would require adding an extra level of strong indirection for public accesses to the object, but it would avoid creating--even momentarily--strong references to abandoned objects.
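A rough sketch of that wrapper idea, under the assumptions above (SlaveCore and the Abandoned flag are illustrative names):
internal class SlaveCore
{
    public volatile bool Abandoned;

    public void ChangeHappened()
    {
        // ... react to the master's notification ...
    }
}

public class Slave
{
    private readonly SlaveCore _core;

    internal Slave(SlaveCore core) { _core = core; }

    // When no outside references to the public wrapper remain, the
    // finalizer marks the core abandoned; the master can then drop its
    // strong reference to the core on its next notification pass.
    ~Slave() { _core.Abandoned = true; }
}
The master would keep strong references only to the SlaveCore objects and prune any marked Abandoned the next time it sweeps the list.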
An alternative approach which would avoid having to dereference the WeakReference objects so often would be to have the public "slave" object be a wrapper to a private object, and have the public object override Finalize to let the private object know that no outside references to it exist. This would require adding an extra level of strong indirection for public accesses to the object, but it would avoid creating--even momentarily--strong references to abandoned objects.