Mimicking multiple inheritance in C# without interfaces

I know that multiple inheritance in C# is only possible via interfaces, and that there are very valid reasons why multiple inheritance can quickly become a real headache. (I'm working in .NET Framework, if that makes any difference to the answers.)
However.
In working on various projects across many classes I find myself returning to the same patterns to handle behaviour.
For example, I have an interface IXMLSavable which requires the functions GetXML() and SetFromXML(XElement e) to be implemented. The way I implement this in every class is that I write different functions for different versions of the XML (if I change something in GetXML() I want to maintain backwards compatibility...). According to a version attribute on the root element I then switch to the right ReadVersionX(XElement e) function, so all my data stays consistent.
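Concretely, every SetFromXML ends up looking something like this (a sketch; the attribute name varies):
public void SetFromXML(XElement e)
{
    switch ((int?)e.Attribute("version") ?? 1)
    {
        case 1: ReadVersion1(e); break;
        case 2: ReadVersion2(e); break;
        default: throw new NotSupportedException("Unknown XML version");
    }
}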
Another example would be centered around eventing. If, for example, I want to implement a "stop firing events for the time being" lock, I would go about it thus:
private bool suppressEvents;
public bool SuppressEvents
{
    get { return suppressEvents; }
    set
    {
        bool prevValue = SuppressEvents;
        suppressEvents = value;
        if (prevValue != SuppressEvents && !SuppressEvents) TheChangeEvent?.Invoke();
    }
}
This way I can run multiple operations on the object in question without it giving off a right old firework display of events. Again: this code will be almost unchanged across a lot of classes.
For the XML one I could refactor this into a class that has a Dictionary<int, Action<XElement>> ReadFunctions which I could then fill in every implementation (I concede that there needs to be a bit of customisation in the "implementing" class) and reduce the boilerplate in every class (the explicit switching on the version attribute) to just filling this dictionary.
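A minimal sketch of that refactoring (class and member names are only illustrative; it again assumes a version attribute on the root element):
public abstract class XmlSavableBase
{
    // each implementing class fills this once, e.g. in its constructor
    protected readonly Dictionary<int, Action<XElement>> ReadFunctions =
        new Dictionary<int, Action<XElement>>();

    public abstract XElement GetXML();

    public void SetFromXML(XElement e)
    {
        int version = (int?)e.Attribute("version") ?? 1;
        ReadFunctions[version](e); // the lookup replaces the hand-written switch
    }
}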
The eventing one could go into a class of its own quite readily; I would probably only need to hook up the right event to the invocation, but that could easily be remedied by an abstract function I would have to implement (again: customisation still necessary, but much less boilerplate).
Each "now-class-was-interface" on its own would make a splendid base class for any object. I could use functionality down an inheritance tree and customise it by overwriting functionality with new if I would need it.
The problem starts when I want to combine the two now-classes. Due to the limitation in C# (which, again, is there for a reason) I cannot inherit from both of the classes described above at the same time. That would only be possible if one of these classes inherited from the other. Which would be possible, but would lead to a whole different headache when I want one functionality but not the other, or vice versa. The way around that would be to create a plethora of permutation classes (one class for each combination of the functionalities). And while that would solve the problem, it would probably be a nightmare to maintain.
So the real question is: is there a way to correctly plug different already-implemented pieces of functionality into a class in an inheritance-like manner, one that allows adding multiple distinct functionality packages, as opposed to interfaces, which by their very nature cannot provide any concrete implementation?

In many cases you can avoid inheritance by using interfaces, default interface methods, extension methods, decorators, or some other pattern.
In your case with XML you could simply change your interface to have one read method per version, and use an extension method to select the correct one:
public interface IXMLReadable
{
    void ReadVersion1(XElement e);
    void ReadVersion2(XElement e);
}
public static class IXMLReadableExtensions
{
    public static void Read(this IXMLReadable self, XElement e)
    {
        // dispatch on the version attribute (name assumed; defaults to 1)
        int version = (int?)e.Attribute("version") ?? 1;
        if (version == 1) self.ReadVersion1(e);
        else self.ReadVersion2(e);
    }
}
Default interface methods would do more or less the same thing, with the added advantage of allowing the class to override the Read method if it wants some other behavior.
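For illustration, a sketch of the default-interface-method variant (note this requires C# 8 and a runtime newer than .NET Framework, e.g. .NET Core 3.0+):
public interface IXMLReadable
{
    void ReadVersion1(XElement e);
    void ReadVersion2(XElement e);

    // default implementation; an implementing class may override it
    void Read(XElement e)
    {
        if (((int?)e.Attribute("version") ?? 1) == 1) ReadVersion1(e);
        else ReadVersion2(e);
    }
}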
However, my preferred solution would be to instead convert your object to a Data Transfer Object (DTO), add any required serialization attributes to that object, and use a library to serialize it. Added fields etc. can usually be accommodated by just marking them as optional. Larger changes can usually be handled by creating a new DTO class.
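A sketch of that idea with the built-in XmlSerializer (the type and element names here are made up):
using System.IO;
using System.Xml.Serialization;

[XmlRoot("data")]
public class MyDataDto
{
    [XmlAttribute("version")]
    public int Version { get; set; } = 2;

    [XmlElement("name")]
    public string Name { get; set; }

    // added in a later version; older documents simply leave it null
    [XmlElement("comment")]
    public string Comment { get; set; }
}

// usage:
// var serializer = new XmlSerializer(typeof(MyDataDto));
// using (var writer = new StringWriter()) serializer.Serialize(writer, dto);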
One way to solve your event problem could be to move this logic into a separate class:
public class SuppressibleEvent
{
    private bool suppressEvents;
    private bool pendingEvent;

    public event EventHandler TheChangeEvent;

    public void Raise()
    {
        if (!suppressEvents)
        {
            TheChangeEvent?.Invoke(this, EventArgs.Empty);
        }
        else
        {
            pendingEvent = true;
        }
    }

    public bool SuppressEvents
    {
        get => suppressEvents;
        set
        {
            suppressEvents = value;
            if (!suppressEvents && pendingEvent)
            {
                TheChangeEvent?.Invoke(this, EventArgs.Empty);
                pendingEvent = false;
            }
        }
    }
}
Optionally you may add an interface, so that only the owner can raise the event while others can listen and register. You could also add methods/events to your class that just forward to the actual implementation.
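A sketch of that split (the interface and owner names are made up; it assumes SuppressibleEvent above is changed to declare ": IListenableEvent"):
public interface IListenableEvent
{
    event EventHandler TheChangeEvent;
    bool SuppressEvents { get; set; }
}

public class Widget
{
    private readonly SuppressibleEvent changed = new SuppressibleEvent();

    // subscribers only ever see the interface, so they cannot call Raise()
    public IListenableEvent Changed { get { return changed; } }

    public void Update()
    {
        // ...mutate state...
        changed.Raise(); // only the owner raises the event
    }
}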
The overall point is that there is usually a better pattern to use than implementation inheritance. Some might require a bit more code, but you usually gain a bit of flexibility as a result.

Related

Inheritance with functions?

I'm aware of inheritance with classes (obviously), but I want to know whether the behaviour can be replicated with functions.
I have a situation where all methods need to implement some logic before being called, and rather than duplicate the code across all functions, I want to know if there's some way to inherit a base function or something similar that executes before the rest of the function processes?
Is this possible with C#?
For example, I have the following methods:
public void MyFunction1() {
    if (debug)
        return;
    MyProductionApi.DoSomething();
}

public void MyFunction2() {
    if (debug)
        return;
    MyProductionApi.DoSomethingElse();
}
As you can see above, my scenario basically involves checking whether I'm in development so I can avoid expensive third-party API calls. I just want to skip over them if I'm testing, but I want to avoid writing a check for each method as there are a large number of functions.
Ideally I could create some base functionality that executes the check and that I can inherit from, but I don't know if this is possible.
I want to know if there's some way to inherit a base function or something similar that executes before the rest of the function processes?
You don't necessarily need inheritance to solve the problem of repeating the code. You could also pass the function as a parameter to another function that does some basic jobs before calling it. You can do it like
public void MyFunction(Action a)
{
    if (debug)
        return;
    a();
}
and call it like
MyFunction(MyProductionApi.DoSomething);
This solves your scenario of
I just want to skip over them if I'm testing
in a very simple way without complicated structures or inheritance.
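If some of the wrapped calls return values, the same trick works with Func<T>; a small sketch (passing a default for the skipped case is just one option, and the usage names are made up):
public T MyFunction<T>(Func<T> f, T defaultValue = default(T))
{
    if (debug)
        return defaultValue; // skip the expensive third-party call while testing
    return f();
}

// usage: var quote = MyFunction(MyProductionApi.GetQuote, fallbackQuote);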

Best practice for interface to allow adding, deleting etc. child objects w/ broadcasting events (similar to ObservableCollection)

I'm trying to specify an interface for a Folder. That interface should allow one to
- Add or delete files of type IFile
- Get a list of IFile
- Broadcast events whenever a file was added/deleted/changed (e.g. for the GUI to subscribe to)
and I'm trying to find the best way to do it. So far, I have come up with three ideas:
1
public interface IFolder_v1
{
    ObservableCollection<IFile> files;
}
2
public interface IFolder_v2
{
    void add(IFile file);
    void remove(IFile file);
    IEnumerable<IFile> files { get; }
    EventHandler OnFileAdded { get; }
    EventHandler OnFileRemoved { get; }
    EventHandler OnFileChanged { get; }
}
3
public interface IFolder_v3
{
    void add(IFile file);
    void remove(IFile file);
    IEnumerable<IFile> files { get; }
    EventHandler<CRUD_EventArgs> OnFilesChanged { get; }
}
public class CRUD_EventArgs : EventArgs
{
    public enum Operations
    {
        added,
        removed,
        updated
    }

    private Operations _op;

    public CRUD_EventArgs(Operations operation)
    {
        this._op = operation;
    }

    public Operations operation
    {
        get { return this._op; }
    }
}
Idea #1 seems really nice to implement, as it doesn't require much code, but it has some problems: what, for example, if an implementation of IFolder only allows adding files of specific types (say, text files), and throws an exception whenever another file is added? I don't think that would be feasible with a simple ObservableCollection.
Idea #2 seems OK, but requires more code. Also, defining three separate events seems a bit tedious - what if an object needs to subscribe to all events? We'd need to subscribe to three different event handlers for that. Seems annoying.
It's also a little less easy to use than solution #1: now one needs to call .add to add files, but the list of files is stored in .files, etc. - so the naming conventions are a bit less clear than having everything bundled up in one simple sub-object (.files from idea #1).
Idea #3 circumvents all of those problems, but has the longest code. Also, I have to use a custom EventArgs class, which I can't imagine is particularly clean in an interface definition? (It also seems overkill to define a class like that for simple CRUD event notifications; shouldn't there be an existing class of some sort?)
Would appreciate some feedback on what you think is the best solution (possibly even something I haven't thought of at all). Is there any best practice?
Take a look at the Framework's FileSystemWatcher class. It does pretty much what you need, but if you still need to implement your own class, you can take ideas from how it is implemented (which is, by the way, similar to your #2 approach).
Having said that, I personally think that #3 is also a very valid approach. Don't be afraid of writing long code (within reasonable limits of course) if the result is more readable and maintainable than it would be with shorter code.
Personally I would go with #2.
In #1 you just expose an entire collection of objects, allowing everyone to do anything with them.
#3 seems less self-explanatory to me. Though - I like to keep things simple when coding, so I may be biased.
If watchers are going to be shorter-lived than the thing being watched, I would avoid events. The pattern exemplified by ObservableCollection, where the collection gives a subscribed observer an IDisposable object which can be used to unsubscribe, is a much better approach. If you use such a pattern, you can have your class hold a weak reference (probably a "long" weak reference) to the subscription object, which would in turn hold a strong reference (probably a delegate) to the subscriber and to the weak reference which identifies it. Abandoned subscriptions will thus get cleaned up by the garbage collector; it will be the duty of a subscriber to ensure that a strongly-rooted reference exists to the subscription object.
Beyond the fact that abandoned subscriptions get cleaned up, another advantage of the "disposable subscription object" approach is that unsubscription can easily be made lock-free, thread-safe, and constant-time: to dispose a subscription, simply null out the delegate contained therein. If each attempt to add a subscription causes the subscription manager to inspect a couple of existing subscriptions to ensure that they are still valid, the total number of subscriptions in existence will never grow to more than twice the number that were valid as of the last garbage collection.

Thoughts on configuration through delegates

I'm working on a fork of the Divan CouchDB library, and ran into a need to set some configuration parameters on the HttpWebRequest that's used behind the scenes. At first I started threading the parameters through all the layers of constructors and method calls involved, but then decided: why not pass in a configuration delegate?
So, in a more generic scenario, given:
class Foo {
    private parm1, parm2, ... , parmN;
    public Foo(parm1, parm2, ... , parmN) {
        this.parm1 = parm1;
        this.parm2 = parm2;
        ...
        this.parmN = parmN;
    }
    public Bar DoWork() {
        var r = new externallyKnownResource();
        r.parm1 = parm1;
        r.parm2 = parm2;
        ...
        r.parmN = parmN;
        r.doStuff();
    }
}
do:
class Foo {
    private Action<externallyKnownResource> configurator;
    public Foo(Action<externallyKnownResource> configurator) {
        this.configurator = configurator;
    }
    public Bar DoWork() {
        var r = new externallyKnownResource();
        configurator(r);
        r.doStuff();
    }
}
The latter seems a lot cleaner to me, but it does expose to the outside world that class Foo uses externallyKnownResource. Thoughts?
This can lead to cleaner looking code, but has a huge disadvantage.
If you use a delegate for your configuration, you lose a lot of control over how the objects get configured. The problem is that the delegate can do anything - you can't control what happens here. You're letting a third party run arbitrary code inside of your constructors, and trusting them to do the "right thing." This usually means you end up having to write a lot of code to make sure that everything was setup properly by the delegate, or you can wind up with very brittle, easy to break classes.
It becomes much more difficult to verify that the delegate properly sets up each requirement, especially as you go deeper into the tree. Usually, the verification code ends up much messier than the original code would have been, passing parameters through the hierarchy.
I may be missing something here, but it seems like a big disadvantage to create the externallyKnownResource object down in DoWork(). This precludes easy substitution of an alternate implementation.
Why not:
public Bar DoWork( IExternallyKnownResource r ) { ... }
IMO, you're best off accepting a configuration object as a single parameter to your Foo constructor, rather than a dozen (or so) separate parameters.
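E.g. (a sketch; the property names are invented):
public class FooConfig
{
    public int TimeoutSeconds { get; set; } = 30; // sensible defaults
    public string UserAgent { get; set; }
}

class Foo
{
    private readonly FooConfig config;

    public Foo(FooConfig config)
    {
        // Foo itself applies these settings to the resource it creates,
        // so callers never touch externallyKnownResource directly
        this.config = config;
    }
}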
Edit:
There's no one-size-fits-all solution, no. But the question is fairly simple. I'm writing something that consumes an externally known entity (HttpWebRequest) that's already self-validating and has a ton of potentially necessary parameters. My options, really, are to re-create almost all of the configuration parameters this has and shuttle them in every time, or to put the onus on the consumer to configure it as they see fit. – kolosy
The problem with your request is that, in general, it is poor class design to make the user of a class configure an external resource, even if it's a well-known or commonly used one. It is better class design to have your class hide all of that from its user. That means more work in your class, yes, passing configuration information through to your external resource, but that's the point of having a separate class. Otherwise, why not just have the caller of your class do all the work on the external resource? Why bother with a separate class in the first place?
Now, if this is an internal class doing some simple utility work for another class that you will always control, then you're fine. But don't expose this type of paradigm publicly.

Help on implementing how creatures and items interact in a computer role-playing game

I am programming a simple role-playing game (to learn and for fun) and I'm at the point where I'm trying to come up with a way for game objects to interact with each other. There are two things I am trying to avoid:
Creating a gigantic game object that can be anything and do everything
Complexity - so I am staying away from a component based design like you see here
So with those parameters in mind I need advice on a good way for game objects to perform actions on each other.
For example
Creatures (Characters, Monsters, NPCs) can perform actions on Creatures or Items (weapons, potions, traps, doors)
Items can perform actions on Creatures or Items as well. An example would be a trap going off when a character tries to open a chest
What I've come up with is a PerformAction method that can take Creatures or Items as parameters, like this:
PerformAction(Creature sourceC, Item sourceI, Creature targetC, Item targetI)
// this will usually end up with 2 null params since
// only 1 source and 1 target will be valid
Or should I do this instead?
PerformAction(Object source, Object target)
// cast to correct types and continue
Or is there a completely different way I should be thinking about this?
This is a "double dispatch" problem. In regular OO programming, you "dispatch" the operation of a virtual method call to the concrete type of the class implementing the object instance you call against. A client doesn't need to know the actual implementation type, it is simply making a method call against an abstract type description. That's "single dispatch".
Most OO languages don't implement anything but single-dispatch. Double-dispatch is when the operation that needs to be called depends on two different objects. The standard mechanism for implementing double dispatch in OO languages without direct double-dispatch support is the "Visitor" design pattern. See the link for how to use this pattern.
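A minimal sketch of the Visitor approach applied to this question (the type names are only illustrative):
public interface ITargetVisitor
{
    void Visit(Creature target);
    void Visit(Item target);
}

public abstract class GameObject
{
    // first dispatch: the concrete override picks the right Visit overload
    public abstract void Accept(ITargetVisitor visitor);
}

public class Creature : GameObject
{
    public override void Accept(ITargetVisitor visitor) { visitor.Visit(this); }
}

public class Item : GameObject
{
    public override void Accept(ITargetVisitor visitor) { visitor.Visit(this); }
}

// second dispatch: the action knows what to do with each concrete target
public class AttackAction : ITargetVisitor
{
    public void Visit(Creature target) { /* apply damage to the creature */ }
    public void Visit(Item target) { /* damage or trigger the item */ }
}

// usage: target.Accept(new AttackAction());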
This sounds like a case for polymorphism. Instead of taking Item or Creature as an argument, make both of them derive from (or implement) ActionTarget or ActionSource. Let the implementation of Creature or Item determine which way to go from there.
You very rarely want to leave it so open by just taking Object. Even a little information is better than none.
You can try mixing the Command pattern with some clever use of interfaces to solve this:
// everything in the game (creature, item, hero, etc.) derives from this
public class Entity {}

// every action that can be performed derives from this
public abstract class Command
{
    public abstract void Perform(Entity source, Entity target);
}

// these are the capabilities an entity may have. these are how Commands
// interact with entities:
public interface IDamageable
{
    void TakeDamage(int amount);
}

public interface IOpenable
{
    void Open();
}

public interface IMoveable
{
    void Move(int x, int y);
}
Then a derived Command downcasts to see if it can do what it needs to do to the target:
public class FireBallCommand : Command
{
    public override void Perform(Entity source, Entity target)
    {
        // a fireball hurts the target and blows it back
        var damageTarget = target as IDamageable;
        if (damageTarget != null)
        {
            damageTarget.TakeDamage(234);
        }

        var moveTarget = target as IMoveable;
        if (moveTarget != null)
        {
            moveTarget.Move(1, 1);
        }
    }
}
Note that:
A derived Entity only has to implement the capabilities that are appropriate for it.
The base Entity class doesn't have code for any capability. It's nice and simple.
Commands can gracefully do nothing if an entity is unaffected by them.
I think you're examining too small a part of the problem; how do you even determine the arguments to the PerformAction function in the first place? Something outside of the PerformAction function already knows (or somehow must find out) whether the action it wants to invoke requires a target or not, and how many targets, and which item or character it's operating upon. Crucially, some part of the code must decide what operation is taking place. You've omitted that from the post but I think that is the absolute most important aspect, because it's the action that determines the required arguments. And once you know those arguments, you know the form of the function or method to invoke.
Say a character has opened a chest, and a trap goes off. You presumably already have code which is an event handler for the chest being opened, and you can easily pass in the character that did it. You also presumably already ascertained that the object was a trapped chest. So you have the information you need already:
// pseudocode
function on_opened(Character opener)
{
    this.triggerTrap(opener)
}
If you have a single Item class, the base implementation of triggerTrap will be empty, and you'll need to insert some sort of checks, e.g. is_chest and is_trapped. If you have a derived Chest class, you'll probably just need is_trapped. But really, it's only as difficult as you make it.
Same goes for opening the chest in the first place: your input code will know who is acting (e.g. the current player, or the current AI character), can determine what the target is (by finding an item under the mouse, or on the command line), and can determine the required action based on the input. It then simply becomes a case of looking up the right objects and calling the right method with those arguments.
item = get_object_under_cursor()
if item is not None:
    if currently_held_item is not None:
        player_use_item_on_other_item(currently_held_item, item)
    else:
        player.use_item(item)
    return

character = get_character_under_cursor()
if character is not None:
    if character.is_friendly_to(player):
        player.talk_to(character)
    else:
        player.attack(character)
    return
Keep it simple. :)
In the Zork model, each action one can do to an object is expressed as a method of that object, e.g.
door.Open()
monster.Attack()
Something generic like PerformAction will end up being a big ball of mud...
What about having a method on your Actors (creatures, items) that performs the action on a target? That way each item can act differently and you won't have one big massive method to deal with all the individual items/creatures.
example:
public abstract bool PerformAction(Object target); //returns if object is a valid target and action was performed
I've had a similar situation to this, although mine wasn't role-playing but devices: some had characteristics similar to other devices, and some characteristics were unique. The key is to use interfaces to define a class of actions, such as ICanAttack, and then implement the particular method on the objects. If you need common code to handle this across multiple objects and there's no clear way to derive one from the other, then you simply use a utility class with a static method for the implementation:
public interface ICanAttack { void Attack(Character attackee); }

public class Character { ... }

public class Warrior : Character, ICanAttack
{
    public void Attack(Character attackee) { CharacterUtils.Attack(this, attackee); }
}

public static class CharacterUtils
{
    public static void Attack(Character attacker, Character attackee) { ... }
}
Then if you have code that needs to determine whether a character can or can't do something:
public void Process(Character myCharacter)
{
    ...
    ICanAttack attacker = null;
    if ((attacker = (myCharacter as ICanAttack)) != null) attacker.Attack(anotherCharacter);
}
This way, you explicitly know what capabilities any particular type of character has, you get good code reuse, and the code is relatively self-documenting. The main drawback to this is that it is easy to end up with objects that implement a LOT of interfaces, depending on how complex your game is.
This might not be something that many would agree upon, but I'm not a team and it works for me (in most cases).
Instead of thinking of every Object as a collection of stuff, think of it as a collection of references to stuff. Basically, instead of one huge list of many
Object
- Position
- Legs
- [..n]
You would strip the values and keep only the relationships, so each object just holds references to the things it relates to.
Whenever your player (or creature, or [..n]) wants to open a box, simply call
Player.Open(Something Target); //or
Creature.Open(Something Target); //or
[..n].Open(Something Target);
Where "Something" can be a set of rules, or just an integer which identifies the target (or even better, the target itself), if the target exists and indeed can be opened, open it.
All this can (quite) easily be implemented through a series of, say, interfaces, like this:
interface IDraggable
{
    void DragTo(int X, int Y);
}

interface IDamageable
{
    void Damage(int A);
}
With clever usage of these interfaces you might even end up using things like delegates to make an abstraction between the top-level IDamageable and the sub-level IBurnable.
Hope it helped :)
EDIT: This is embarrassing, but it seems I hijacked @munificent's answer! I'm sorry, @munificent! Anyway, look at his example if you want an actual example instead of an explanation of how the concept works.
EDIT 2: Oh crap. I just saw that you clearly stated you didn't want any of the stuff contained in the article you linked, which is exactly what I have written about here! Disregard this answer if you like, and sorry for it!

Using delegates instead of interfaces for decoupling. Good idea?

When writing GUI apps I use a top-level class that "controls" or "coordinates" the application. The top-level class would be responsible for coordinating things like initialising network connections, handling application-wide UI actions, loading configuration files, etc.
At certain stages in the GUI app, control is handed off to a different class; for example, the main control swaps from the login screen to the data entry screen once the user authenticates. The different classes need to use functionality of objects owned by the top-level control. In the past I would simply pass the objects to the subordinate controls or create an interface. Lately I have changed to passing method delegates instead of whole objects, with the two main reasons being:
It's a lot easier to mock a method than a class when unit testing,
It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using.
Some simplified example code is below:
delegate bool LoginDelegate(string username, string password);
delegate void UpdateDataDelegate(BizData data);
delegate void PrintDataDelegate(BizData data);

class MainScreen {
    private MyNetwork m_network;
    private MyPrinter m_printer;
    private LoginScreen m_loginScreen;
    private DataEntryScreen m_dataEntryScreen;

    public MainScreen() {
        m_network = new MyNetwork();
        m_printer = new MyPrinter();
        m_loginScreen = new LoginScreen(m_network.Login);
        m_dataEntryScreen = new DataEntryScreen(m_network.Update, m_printer.Print);
    }
}
class LoginScreen {
    LoginDelegate Login_External;

    public LoginScreen(LoginDelegate login) {
        Login_External = login;
    }
}
class DataEntryScreen {
    UpdateDataDelegate UpdateData_External;
    PrintDataDelegate PrintData_External;

    public DataEntryScreen(UpdateDataDelegate updateData, PrintDataDelegate printData) {
        UpdateData_External = updateData;
        PrintData_External = printData;
    }
}
My question is: while I prefer this approach, and it makes good sense to me, how is the next developer who comes along going to find it? In sample and open-source C# code, interfaces are the preferred approach for decoupling, whereas this approach of using delegates leans more towards functional programming. Am I likely to get subsequent developers swearing under their breath at what is, to them, a counter-intuitive approach?
It's an interesting approach. You may want to pay attention to two things:
Like Philip mentioned, when you have a lot of methods to define, you will end up with a big constructor. This will cause deep coupling between classes. One more or one less delegate will require everyone to modify the signature. You should consider making them public properties and using some DI framework.
Breaking down the implementation to the method level can sometimes be too granular. With a class or interface, you can group methods by domain/functionality. If you replace them with delegates, they can get mixed up and become difficult to read/maintain.
It seems the number of delegates is an important factor here.
While I can certainly see the positive side of using delegates rather than an interface, I have to disagree with both of your bullet points:
"It's a lot easier to mock a method than a class when unit testing". Most mock frameworks for c# are built around the idea of mocking a type. While many can mock methods, the samples and documentation (and focus) are normally around types. Mocking an interface with one method is just as easy or easier to mock than a method.
"It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using." Also has it's cons - once a class needs multiple methods, the constructors get large; and once a subordinate class needs a new property or method, rather than just modifying the interface you must also add it to allthe class constructors up the chain.
I'm not saying this is a bad approach by any means - passing functions rather than types does clearly state what you are doing and can reduce the complexity of your object model. However, in C# your next developer will probably see this as odd or confusing (depending on skill level). Mixing bits of OO and functional approaches will probably get a raised eyebrow, at the very least, from most developers you will work with.
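For comparison, a sketch of the interface-based version of the same decoupling (ILoginService is an invented name):
public interface ILoginService
{
    bool Login(string username, string password);
}

class LoginScreen
{
    private readonly ILoginService loginService;

    public LoginScreen(ILoginService loginService)
    {
        this.loginService = loginService; // one dependency instead of N delegates
    }
}

// MyNetwork would declare ": ILoginService" and be passed in directly.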
