I have a generic Vector<T> class and a generic Matrix<T> class and I was wondering if it would be a good idea to have both classes implement an interface.
Basically, I'm implementing two algorithms, AlgorithmA and AlgorithmB, both of which perform very similar operations (reset, average, etc.) but with different algorithms, and they act on different structures:
AlgorithmA uses Vector<double> while AlgorithmB uses Matrix<Complex>.
The design I have so far:
abstract class AlgorithmArray
{
    // Operator overloading
}

class AlgorithmAArray : AlgorithmArray
{
    private Vector<double> _vector;
    // Overrides
}

class AlgorithmBArray : AlgorithmArray
{
    private Matrix<Complex> _matrix;
    // Overrides
}
I would prefer to have AlgorithmAArray derive from Vector<T> and also implement an interface 'IAlgorithmArray' (instead of the abstract class). Anyway, these algorithms are then used to simulate transmission/receiving between two locations:
public class CommunicationParameters
{
    private AlgorithmArray _transmission;
    private AlgorithmArray _receiving;

    public void Compute()
    {
        if (_transmission != null)
            _transmission.Compute();
        if (_receiving != null)
            _receiving.Compute();
    }
}
Are there better ways to approach my problem?
Note: the base class AlgorithmArray duplicates many of the operator/cloning/etc. methods, and I feel this could be avoided, perhaps using generics?
Thanks!
I would suggest making two Algorithm classes that can take any data structure as a parameter and do their thing. I don't see a need for all this OOP inheritance; it just adds complexity.
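A sketch of that suggestion, using the Vector<double> and Matrix<Complex> types from the question (the class and method names here are illustrative, not prescribed):

class AlgorithmA
{
    public void Reset() { /* ... */ }
    public void Compute(Vector<double> data) { /* vector-based computation */ }
}

class AlgorithmB
{
    public void Reset() { /* ... */ }
    public void Compute(Matrix<Complex> data) { /* matrix-based computation */ }
}

Each algorithm is then a plain class operating on the data it is handed; no shared base class is needed unless something really has to treat them polymorphically.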
Interfaces would allow routines which only read or only write vectors/matrices to accept vectors/matrices of a subtype or supertype of the expected vector/matrix type. I'm not sure that would generally be useful with matrices, but it could be handy with some applications of vectors.
Another advantage of interfaces over classes, which might be more applicable to your situation, is that they can allow smooth interoperation between mutable, immutable, and copy-on-write objects (the latter requiring an extra level of indirection). This could be useful if you have lots of vectors or matrices which are going to be copies of one another, but a few of them will end up getting modified. Methods AsImmutable, AsNewMutable, and AsPossiblyExistingMutable can be useful for that. The first method will either (if invoked on a mutable object) create a new immutable object whose contents match those of its subject at the time of the call, or (if invoked on an immutable object) simply return its subject. The second will create a new mutable object regardless of whether the existing object is mutable or immutable. The third method will return its subject if mutable, or else create a new mutable object; it should generally only be used in cases where the holder of an object would know that, if the object is mutable, it holds the only reference.
For example, if I have a private field _thing of type IReadableVector<Foo>, my setter could set it to value.AsImmutable() and my getter could return _thing.AsImmutable(). My mutating method would set _thing = _thing.AsPossiblyExistingMutable() before calling mutating methods on it. If I haven't tried to mutate _thing since I received it, it would be an immutable object (to which other objects might also hold references). The first time I mutate it, it would be copied to a new mutable object. Subsequent mutations, however, could keep using the same mutable object, since it would never get exposed to any outside code.
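To make that concrete, here is a minimal sketch of the shapes involved; IReadableVector<T> and its members are hypothetical names following the description above:

interface IReadableVector<T>
{
    int Length { get; }
    T this[int index] { get; }
    IReadableVector<T> AsImmutable();               // immutable snapshot, or self if already immutable
    IReadableVector<T> AsNewMutable();              // always a fresh mutable copy
    IReadableVector<T> AsPossiblyExistingMutable(); // self if mutable, otherwise a mutable copy
}

class VectorHolder<T>
{
    private IReadableVector<T> _thing;

    public IReadableVector<T> Thing
    {
        get { return _thing.AsImmutable(); }
        set { _thing = value.AsImmutable(); }
    }

    public void MutateSomehow()
    {
        // Copy-on-write: the first mutation after a get/set pays for one copy;
        // subsequent mutations reuse the same private mutable object.
        _thing = _thing.AsPossiblyExistingMutable();
        // ... call mutating members on _thing here ...
    }
}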
PS--There are arguments both for and against having IImmutableVector<T> and IImmutableMatrix<T> as interfaces, versus only having ImmutableVector<T> and ImmutableMatrix<T> classes. On the one hand, if they're interfaces, it's possible to have useful implementations which don't need to actually store all the elements. For example, one could have classes like AllMatchingVector<T>, which implements IImmutableVector<T> but just contains one T and a number indicating its length (its indexed getter would simply return that element regardless of the specified index), or DiagonalMatrix<T>, which simply holds an IImmutableVector<T> for the contents of its diagonal and a T which would be returned everywhere else; especially for large vectors/matrices, such classes could save memory. On the other hand, there would be no way of ensuring that nobody implemented one of those interfaces with a class which was not, in fact, immutable. My personal feeling is that it's fine to use interfaces for that. After all, few people complain that SortedDictionary<TKey, TValue> will fail if a class implements IComparable<T> in a fashion that doesn't yield an immutable sorting relation. Nonetheless, a lot of people disagree with such a concept.
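For instance, a space-saving implementation along the lines described might look like this (a sketch; the interface members are assumptions, since the original doesn't define them):

interface IImmutableVector<T>
{
    int Length { get; }
    T this[int index] { get; }
}

class AllMatchingVector<T> : IImmutableVector<T>
{
    private readonly T _element;

    public AllMatchingVector(T element, int length)
    {
        _element = element;
        Length = length;
    }

    public int Length { get; private set; }

    // Every index returns the single stored element, so a vector of any
    // length costs one T plus an int.
    public T this[int index] { get { return _element; } }
}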
Recently I was implementing a Trie data structure and decided the nodes could store different types of data or have their implementation varied, so I went for Node<T>. Then, as I got into the algorithm for constructing the Trie, I realised it required more intimate knowledge of the Node, so I constrained the generic class to use an INode interface. This allows for more flexibility but felt wrong in the context of a generic class.
Generic classes have a different use case from classes which implement an interface. For example, List<T>: the algorithm can work without being dependent on a related set of abstractions. A class which implements an interface may require polymorphism/DI, but the interfaces will be more specialized.
Under what circumstances do others use a generic class where T may implement a more specialized interface?
I thought that a generic class is used when T does not really need to expose operations/data, though I can see that a generic class may be used where T implements IDisposable or some other more general interface.
Any help in clarifying these points?
When faced with a choice to use a generic with an interface constraint vs. a non-generic with an interface type, I would go for generic+interface only in situations when some or all of types passed as generic arguments are value types. This would prevent my implementation from requiring costly boxing and unboxing when dealing with my structs.
For example, if the interface happens to be IComparable, I would definitely prefer a generic with a constraint, because it would let me avoid boxing when working with primitives.
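For instance (a minimal sketch; Max is an illustrative helper, not something from the question):

static T Max<T>(T a, T b) where T : IComparable<T>
{
    // The constrained call compiles to a direct call on the struct;
    // an IComparable-typed parameter would box each int passed in.
    return a.CompareTo(b) >= 0 ? a : b;
}

int biggest = Max(3, 7); // no boxing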
Note that an alternative way of providing functionality to your generic class is passing a delegate along with the value. For example, if you plan to do something like this
interface IScoreable
{
    decimal GetScore(object context);
}

class Node<T> where T : IScoreable
{
    ...
    void DoSomething(T data)
    {
        var score = data.GetScore(someContext);
        ...
    }
}
you can also do this:
class Node<T>
{
    private Func<T, object, decimal> scorer;

    public Node(Func<T, object, decimal> scorer)
    {
        this.scorer = scorer;
    }
    ...
    void DoSomething(T data)
    {
        var score = scorer(data, someContext);
        ...
    }
}
The second solution lets you "decouple" the scoring functionality from the type being scored, at the expense of having the caller write a little more code.
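For instance, the caller side might look like this (Player is a hypothetical type with a decimal Points property, used purely for illustration):

// Player needs no IScoreable implementation; the scoring logic
// travels with the delegate instead.
var node = new Node<Player>((player, context) => player.Points);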
I see nothing wrong with placing constraints on the generic argument. Having a generic argument does not imply "this will work for anything", it implies that there is more than one way that the code will make sense.
It might actually expose a completely generic concept, like List<T>, but it might expose a concept that makes sense only in some contexts (like Nullable<T> only making sense for non-nullable entities)
The constraints are just the mechanism you use to tell the world under what circumstances the class will make sense; they also enable you to actually use that (constrained) argument in a reasonable way, e.g. calling Dispose on things that implement IDisposable.
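A minimal sketch of that last point (UseAndDispose is an illustrative name):

static void UseAndDispose<T>(T resource) where T : IDisposable
{
    try
    {
        // ... work with resource ...
    }
    finally
    {
        resource.Dispose(); // legal only because of the IDisposable constraint
    }
}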
The extreme of this is when the context is very constrained, i.e. what if there are only two possible implementations? I actually have that case in my current codebase, and I use generics. I need some processing done on some data point, and currently (and in the foreseeable future) there are only two kinds of data points. This is, in principle, the code I use:
interface IDataPoint
{
    SomeResultType Process();
}

class FirstKindDataPoint : IDataPoint
{
    public SomeResultType Process() { ... }
}

class SecondKindDataPoint : IDataPoint
{
    public SomeResultType Process() { ... }
}

class DataPointProcessor<T> where T : IDataPoint
{
    void AcquireAndProcessDataPoints() { ... }
}
It makes sense, even in this constrained context, because I have only one processor, and so only one piece of logic to take care of, instead of two separate processors that I would have to try to keep in sync.
This way I can have a List<T> and an Action<T> within the processor instead of a List<IDataPoint> and an Action<IDataPoint>, which would be incorrect in my scenario, as I need a processor for a more specific data type that is still implementing IDataPoint.
If I needed a processor that would process anything, as long as it is an IDataPoint, it might make sense to remove its genericity and simply use IDataPoint within the code.
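A sketch of the List<T>/Action<T> point, reusing IDataPoint and SomeResultType from the code above and assuming the usual System and System.Collections.Generic usings (the member names are illustrative):

class DataPointProcessor<T> where T : IDataPoint
{
    private readonly List<T> points = new List<T>();

    // Callbacks receive the concrete T, not just IDataPoint.
    public Action<T> Processed;

    public void AcquireAndProcessDataPoints()
    {
        foreach (T point in points)
        {
            SomeResultType result = point.Process();
            if (Processed != null)
                Processed(point);
        }
    }
}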
Additionally, the point raised in dasblinkenlight's answer is very valid: if the generic parameters can be both structs and classes, then using generics will avoid any boxing.
Generics are usually used where an interface or a base class (and this includes object) is not enough - for example, where you care about the return values of your function being the original type rather than just the interface, or where the parameters you pass in may be expressions that operate on the specific type.
So approach the logic from the other end: the decision on type restrictions should be the same decision as when you are choosing the types of your function parameters.
I've been looking into empty interfaces and abstract classes, and from what I have read, they are generally bad practice. I intend to use them as the foundation for a small search application that I am writing. I would write the initial search provider, and others would be allowed to create their own providers as well. My code's intent is to enforce relationships between the classes for anyone who would like to implement them.
Can someone chime in and describe if and why this is still a bad practice, and what alternatives, if any, are available?
namespace Api.SearchProviders
{
    public abstract class ListingSearchResult
    {
        public abstract string GetResultsAsJSON();
    }

    public abstract class SearchParameters
    {
    }

    public interface IListingSearchProvider
    {
        ListingSearchResult SearchListings(SearchParameters p);
    }
}
Empty classes and interfaces are generally only "usably useful" as generic constraints; the types are not usable by themselves, and generic constraints are generally the only context in which one may use them in conjunction with something else useful. For example, if IMagicThing encapsulates some values, some implementations are mutable, and some aren't, a method which wants to record the values associated with an IMagicThing might be written something like:
void RecordValues<T>(T it) where T : IMagicThing, IIsImmutable { ... }
where IIsImmutable is an empty interface whose contract says that any class which implements it and reports some value for any property must forevermore report the same value for that property. A method written as indicated could know that its parameter was contractually obligated to behave as an immutable implementation of IMagicThing.
Conceptually, if various implementations of an interface will make different promises regarding their behaviors, being able to combine those promises with constraints would seem helpful. Unfortunately, there's a rather nasty limitation with this approach: it won't be possible to pass an object to the above method unless one knows a particular type which satisfies all of the constraints and from which the object derives. If there were only one constraint, one could cast the object to that type, but that won't work if there are two or more.
Because of the above difficulty when using constrained generics, it's better to express the concept of "an IMagicThing which promises to be immutable" by defining an interface IImmutableMagicThing which derives from IMagicThing but adds no new members. A method which expects an IImmutableMagicThing won't accept any IMagicThing that doesn't implement the immutable interface, even if it happens to be immutable, but if one has a reference to an IMagicThing that happens to implement IImmutableMagicThing, one can cast that reference to the latter type and pass it to a routine that requires it.
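A sketch of that arrangement (IMagicThing's member is made up here, since the original isn't defined):

interface IMagicThing
{
    int Value { get; }
}

interface IImmutableMagicThing : IMagicThing { } // adds no members, only a promise

static class Recorder
{
    public static void RecordValues(IImmutableMagicThing thing)
    {
        int snapshot = thing.Value; // safe to cache: contractually immutable
    }
}

A mutable IMagicThing simply never declares IImmutableMagicThing, so it can't be passed here by accident; and a reference known only as IMagicThing can be cast to IImmutableMagicThing when it does implement it.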
Incidentally, there's one other usage I can see for an empty class type: as an identity token. A class need not have any members to serve as a dictionary key, a monitor lock, or the target of a weak reference. Especially if one has extension methods associated with such usage, defining an empty class for such purpose may be much more convenient than using Object.
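For instance (a sketch; the names are illustrative, and the usual System.Collections.Generic using is assumed):

sealed class Token { } // no members needed

static class TokenDemo
{
    static readonly object SyncRoot = new Token();       // monitor target
    static readonly Dictionary<Token, string> Cache =
        new Dictionary<Token, string>();                 // keyed by reference identity
}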
C#, Java, and other languages that follow OOP concepts generally have 'Object' as the default superclass of every class. Why do we need to have Object as the base class for all the classes we create?
When multiple inheritance is not possible in a language such as C# or Java, how can we derive our class from another class when it is already derived from the Object class? This question may look silly, but I wanted to know some experts' opinions on it.
Having a single-rooted type hierarchy can be handy in various ways. In particular, before generics came along, it was the only way that something like ArrayList would work. With generics, there's significantly less advantage to it - although it could still be useful in some situations, I suspect. EDIT: As an example, LINQ to XML's construction model is very "loose" in terms of being specified via object... but it works really well.
As for deriving from different classes - you derive directly from one class, but that will in turn derive indirectly from another one, and so on up to Object.
Note that the things which "all objects have in common" such as hash code, equality and monitors count as another design decision which I would question the wisdom of. Without a single rooted hierarchy these design decisions possibly wouldn't have been made the same way ;)
The fact that every class inherits from Object is ensured by the compiler.
Meaning that if you write:
class A { }
It will compile as if you had written:
class A : Object { }
But if you state:
class B : A {}
Object will be in the hierarchy of B but not directly - so there is still no multiple inheritance.
In short
1) The Object class defines the basic state and behavior that all objects must have, such as the ability to compare oneself to another object, to convert to a string, to wait on a condition variable, to notify other objects that a condition variable has changed, and to return the object's class.
2) You can have B extend C, and A extend B. A is the child class of B, and B is the child class of C. Naturally, A is also a child class of C.
Well, the multiple inheritance of Object does not apply - you can think of it as:
"If a type doesn't have a base type, then implicitly inject Object".
Thus, applying the rule ad nauseam, all types inherit from Object once and once only - since at the bottom of the hierarchy there must be a type that has no base type, and that type will implicitly inherit from Object.
As for why these languages/frameworks have this as a feature, I have a few reasons:
1) The clue's in the name 'Object Oriented'. Everything is an object, therefore everything should have 'Object' (or equivalent) at its core; otherwise the design principle would be broken from the get-go.
2) It allows the framework to provide hooks for common operations that all types should/might need to support, such as hash-code generation and string output for debugging.
3) It means that you can avoid resorting to nasty type casts that can break stuff - like (((int *)(void*))value) - since you have a nice friendly supertype for everything
There's probably loads more than this - and in the time it's taken me to write this, 6 new answers have been posted; so I'll leave it there and hope that people better qualified than I am can explain in more detail :)
Regarding the first part of your question, it's how classes receive common properties and methods. It's also how we can have strongly-typed parameters to functions that can accept any object.
Regarding your second question, you simply derive your class from the other class; it will then be a descendant of that class, which is in turn a descendant of Object. There's no conflict.
You have the Object base class because, amongst other reasons, the Object class has methods (like, in .NET, GetHashCode()) which contain common functionality every object should have.
Multiple inheritance is indeed not possible, but it is still possible to derive class A from class B: A then doesn't derive directly from Object, but B does, so all classes ultimately derive from Object if you go far enough up the inheritance hierarchy.
Just to compare, let's take a look at a language that doesn't enforce a single root class - Objective-C. In most Objective-C environments there will be three root classes available (Object, NSObject and NSProxy), and you can write your own root class by just not declaring a superclass. In fact Object is deprecated and only exists for legacy reasons, but it's informative to include it in this discussion. The language is duck typed, so you can declare a variable's type as "any old object" (written as id), then it doesn't even matter what root class it has.
OK, so we've got all of these base classes. In fact, even for the compiler and runtime libraries to be able to get anywhere they need some common behaviour: the root classes must all have a pointer ivar called isa that references a class definition structure. Without that pointer, the compiler doesn't know how to create an object structure, and the runtime library won't know how to find out what class an object is, what its instance variables are, what messages it responds to and so forth.
So even though Objective-C claims to have multiple root classes, in fact there's some behaviour that all objects must implement. So in all but name, there's really a common primitive superclass, albeit one with less API than java.lang.Object.
N.B. as it happens both NSObject and NSProxy do provide a rich API similar to java.lang.Object, via a protocol (like a Java interface). Most API that claims to deal with the id type (remember, that's the "any old object" type) will actually assume it responds to messages in the protocol. By the time you actually need to use an object, rather than just create it with a compiler, it turns out to be useful to fold all of this common behaviour like equality, hashing, string descriptions etc. into the root class.
Well, multiple inheritance is a totally different ball game.
An example of multiple inheritance:
abstract class Root
{
    public abstract void Test();
}

class leftChild : Root
{
    public override void Test()
    {
    }
}

class rightChild : Root
{
    public override void Test()
    {
    }
}

class leafChild : rightChild, leftChild // illegal in C#: a class cannot have two base classes
{
}
The problem here is that leafChild would inherit Test from both rightChild and leftChild - a case of conflicting methods. This is called the diamond problem.
But when you use Object as the superclass, the hierarchy goes like this:
abstract class Object
{
    public abstract void hashcode();
    // other methods
}

class leftChild : Object
{
    public override void hashcode()
    {
    }
}

class rightChild : Object
{
    public override void hashcode()
    {
    }
}
So here we derive both classes from Object but that's the end of it.
It acts like a template for all the objects which derive from it, so that some common functionality required by every object is provided by default - for example cloning, hash codes, and object locking.
I have a situation where I would like objects of a certain type to be usable as two different types. If one of the "base" types were an interface this wouldn't be an issue, but in my case it is preferable that they both be concrete types.
I am considering adding copies of the methods and properties of one of the base types to the derived type, and adding an implicit conversion from the derived type to that base type. Then users will be able to treat the derived type as the base type by using the duplicated methods directly, by assigning it to a variable of the base type, or by passing it to a method that takes the base type.
It seems like this solution will fit my needs well, but am I missing anything? Is there a situation where this won't work, or where it is likely to add confusion instead of simplicity when using the API?
EDIT: More details about my specific scenario:
This is for a potential future redesign of the way indicators are written in RightEdge, which is an automated trading system development environment. Price data is represented as a series of bars, which have values for the open, low, high, and close prices for a given period (1 minute, 1 day, etc). Indicators perform calculations on series of data. An example of a simple indicator is the moving average indicator, which gives the moving average of the most recent n values of its input, where n is user-specified. The moving average might be applied to the bar close, or it could be applied to the output of another indicator to smooth it out.
Each time a new bar comes in, the indicators compute the new value for their output for that bar.
Most indicators have only one output series, but sometimes it is convenient to have more than one output (see MACD), and I want to support this.
So, indicators need to derive from a "Component" class which has the methods that are called when new data comes in. However, for indicators which have only one output series (and this is most of them), it would be good for them to act as a series themselves. That way, users can use SMA.Current for the current value of an SMA, instead of having to use SMA.Output.Current. Likewise, Indicator2.Input = Indicator1; is preferable to Indicator2.Input = Indicator1.Output;. This may not seem like much of a difference, but a lot of our target customers are not professional .NET developers so I want to make this as easy as possible.
My idea is to have an implicit conversion from the indicator to its output series for indicators that have only one output series.
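A sketch of that idea (Series and Indicator here are simplified stand-ins for the real RightEdge types):

class Series
{
    public double Current { get; set; }
}

class Indicator
{
    public Series Input { get; set; }
    public Series Output { get; protected set; }

    // The proposed conversion: a single-output indicator can appear
    // wherever a Series is expected, so Indicator2.Input = Indicator1 works.
    public static implicit operator Series(Indicator indicator)
    {
        return indicator.Output;
    }
}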
You don't provide too many details, so here is an attempt at answering from what you provide.
Take a look at the basic differences:
When you have a base type B and a derived type D, an assignment like this:
B my_B_object = my_D_object;
assigns a reference to the same object. On the other hand, when B and D are independent types with an implicit conversion between them, the above assignment would create a copy of my_D_object and store it (or a reference to it if B is a class) on my_B_object.
In summary, "real" inheritance works by reference (changes through one reference affect the object shared by many references), while custom type conversions generally work by value (that depends on how you implement it, but implementing something close to "by reference" behavior for converters would be nearly insane): each reference will point to its own object.
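A small demonstration of the difference (B, D, and D2 are illustrative types, not from the question):

class B { public int X; }
class D : B { }

class D2
{
    public int X;
    public static implicit operator B(D2 d) { return new B { X = d.X }; }
}

static class Demo
{
    static void Main()
    {
        // Inheritance: both variables reference the same object.
        D d = new D();
        B viaInheritance = d;
        viaInheritance.X = 42;   // d.X is now 42 as well

        // Conversion: a new object is created at the assignment.
        D2 d2 = new D2();
        B viaConversion = d2;    // copy made here
        d2.X = 42;               // viaConversion.X is still 0
    }
}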
You say you don't want to use interfaces, but why? Using the combo interface + helper class + extension methods (C# 3.0 and .Net 3.5 or newer required) can get quite close to real multiple inheritance. Look at this:
interface MyType { ... }

static class MyTypeHelper
{
    public static void MyMethod(this MyType value) { ... }
}
Doing that for each "base" type would allow you to provide default implementations for the methods you want to.
These won't behave as virtual methods out-of-the-box; but you may use reflection to achieve that; you would need to do the following from within the implementation on the Helper class:
1) Retrieve a System.Type with value.GetType().
2) Find out whether that type has a method matching the signature.
3) If you find a matching method, invoke it and return (so the rest of the Helper's method is not run).
4) If you found no specific implementation, let the rest of the method run and work as the "base class implementation".
There you go: multiple inheritance in C#, with the only caveat of requiring some ugly code in the base classes that will support this, and some overhead due to reflection; but unless your application is working under heavy pressure this should do the trick.
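A sketch of those four steps inside the helper (same MyType/MyTypeHelper as above; GetMethod and Invoke are standard System.Reflection APIs):

using System;
using System.Reflection;

static class MyTypeHelper
{
    public static void MyMethod(this MyType value)
    {
        // 1) get the runtime type
        Type runtimeType = value.GetType();

        // 2) look for a more specific instance method with a matching signature
        MethodInfo specific = runtimeType.GetMethod("MyMethod", Type.EmptyTypes);
        if (specific != null)
        {
            // 3) "override" found: invoke it and skip the base implementation
            specific.Invoke(value, null);
            return;
        }

        // 4) no specific implementation: the "base class implementation" runs
        // ...
    }
}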
So, once again: why don't you want to use interfaces? If the only reason is their inability to provide method implementations, the trick above solves that. If you have any other issue with interfaces, I might try to sort them out, but I'd have to know about them first ;)
Hope this helps.
[EDIT: Addition based on the comments]
I've added a bunch of details to the original question. I don't want to use interfaces because I want to prevent users from shooting themselves in the foot by implementing them incorrectly, or accidentally calling a method (ie NewBar) which they need to override if they want to implement an indicator, but which they should never need to call directly.
I've looked at your updated question, but the comment quite summarizes it. Maybe I'm missing something, but interfaces + extensions + reflection can solve everything multiple inheritance could, and fare far better than implicit conversions at the task:
Virtual method behavior (an implementation is provided, inheritors can override): include method on the helper (wrapped in the reflection "virtualization" described above), don't declare on the interface.
Abstract method behavior (no implementation provided, inheritors must implement): declare method on the interface, don't include it on the helper.
Non-virtual method behavior (an implementation is provided, inheritors may hide but can't override): Just implement it as normal on the helper.
Bonus: weird method (an implementation is provided, but inheritors must implement anyway; they may explicitly invoke the base implementation): that's not doable with normal or multiple inheritance, but I'm including it for completeness. It's what you'd get if you provide an implementation on the helper and also declare it on the interface. I'm not sure how that would work (on the aspect of virtual vs. non-virtual) or what use it'd have, but hey, my solution has already beaten multiple inheritance :P
Note: On the case of the non-virtual method, you'd need to have the interface type as the "declared" type to ensure that the base implementation is used. That's exactly the same as when an inheritor hides a method.
I want to prevent users from shooting themselves in the foot by implementing them incorrectly
Seems that non-virtual (implemented only on the helper) will work best here.
or accidentally calling a method (ie NewBar) which they need to override if they want to implement an indicator
That's where abstract methods (or interfaces, which are a kind of super-abstract thing) shine most. The inheritor must implement the method, or the code won't even compile. On some cases virtual methods may do (if you have a generic base implementation but more specific implementations are reasonable).
but which they should never need to call directly
If a method (or any other member) is exposed to client code but shouldn't be called from client code, there is no programmatic solution to enforce that (actually, there is; bear with me). The right place to address that is the documentation. Because you are documenting your API, aren't you? ;) Neither conversions nor multiple inheritance could help you here. However, reflection may help:
if (System.Reflection.Assembly.GetCallingAssembly() != System.Reflection.Assembly.GetExecutingAssembly())
    throw new Exception("Don't call me. Don't call me! DON'T CALL ME!!!");
Of course, you may shorten that if you have a using System.Reflection; statement on your file. And, BTW, feel free to change the Exception's type and message to something more descriptive ;).
I see two issues:
User-defined type conversion operators are generally not very discoverable -- they don't show up in IntelliSense.
With an implicit user-defined type conversion operator, it's often not obvious when the operator is applied.
This doesn't mean you shouldn't be defining type conversion operators at all, but you have to keep this in mind when designing your solution.
An easily discoverable, easily recognizable solution would be to define explicit conversion methods:
class Person { }

abstract class Student : Person
{
    public abstract decimal Wage { get; }
}

abstract class Musician : Person
{
    public abstract decimal Wage { get; }
}

class StudentMusician : Person
{
    public decimal MusicianWage { get { return 10; } }
    public decimal StudentWage { get { return 8; } }

    public Musician AsMusician() { return new MusicianFacade(this); }
    public Student AsStudent() { return new StudentFacade(this); }
}
Usage:
void PayMusician(Musician musician) { GiveMoney(musician, musician.Wage); }
void PayStudent(Student student) { GiveMoney(student, student.Wage); }

StudentMusician alice;
PayStudent(alice.AsStudent());
It doesn't sound as if your method would support a cross-cast. True multiple inheritance would.
An example from C++, which has multiple inheritance:
class A { public: virtual ~A() {} };  // polymorphic, as dynamic_cast requires
class B { public: virtual ~B() {} };
class C : public A, public B {};

C o;
B* pB = &o;
A* pA = dynamic_cast<A*>(pB); // with true MI, this cross-cast succeeds
Then users will be able to treat the derived type as the base type by using the duplicated methods directly, by assigning it to a variable of the base type, or by passing it to a method that takes the base type.
This will behave differently, however. In the case of inheritance, you're just passing your object. However, by implementing an implicit converter, you'll always be constructing a new object when the conversion takes place. This could be very unexpected, since it will behave quite differently in the two cases.
Personally, I'd make this a method that returns the new type, since it would make the actual implementation obvious to the end user.
Maybe I'm going too far off with this, but your use case sounds suspiciously as if it could heavily benefit from building on Rx (Rx in 15 Minutes).
Rx is a framework for working with objects that produce values. It allows such objects to be composed in a very expressive way and to transform, filter and aggregate such streams of produced values.
You say you have a bar:
class Bar
{
    public double Open { get; }
    public double Low { get; }
    public double High { get; }
    public double Close { get; }
}
A series is an object that produces bars:
class Series : IObservable<Bar>
{
    // ...
}
A moving average indicator is an object that produces the average of the last count bars whenever a new bar is produced:
static class IndicatorExtensions
{
    public static IObservable<double> MovingAverage(
        this IObservable<Bar> source,
        int count)
    {
        // ...
    }
}
The usage would be as follows:
Series series = GetSeries();

series.MovingAverage(20).Subscribe(average =>
{
    txtCurrentAverage.Text = average.ToString();
});
An indicator with multiple outputs is similar to GroupBy.
This might be a stupid idea, but: if your design requires multiple inheritance, then why don't you simply use a language with MI? There are several .NET languages which support multiple inheritance. Off the top of my head: Eiffel, Python, Ioke. There are probably more.
I'm making a game where each Actor is represented by a GameObjectController. Game Objects that can partake in combat implement ICombatant. How can I specify that arguments to a combat function must inherit from GameObjectController and implement ICombatant? Or does this indicate that my code is structured poorly?
public void ComputeAttackUpdate(ICombatant attacker, AttackType attackType, ICombatant victim)
In the above code, I want attacker and victim to inherit from GameObjectController and implement ICombatant. Is this syntactically possible?
I'd say it probably indicates you could restructure somehow - for example, have a base Combatant class that attacker and victim inherit from, which inherits from GameObjectController and implements ICombatant.
However, you could do something like:
public void ComputeAttackUpdate<T, U>(T attacker, AttackType attackType, U victim)
    where T : GameObjectController, ICombatant
    where U : GameObjectController, ICombatant
Although I probably wouldn't.
Presumably all ICombatants must also be GameObjectControllers? If so, you might want to make a new interface IGameObjectController and then declare:
interface IGameObjectController
{
    // Interface here.
}

interface ICombatant : IGameObjectController
{
    // Interface for combat stuff here.
}

class GameObjectController : IGameObjectController
{
    // Implementation here.
}

class FooActor : GameObjectController, ICombatant
{
    // Implementation for fighting here.
}
It is only syntactically possible if GameObjectController itself implements ICombatant; otherwise, I would say you have a design problem.
Interfaces are intended to define the operations available on some object; base classes identify what that object is. You can only pick one or the other. If accepting the ICombatant interface as an argument is not sufficient, it might indicate that ICombatant is defined too narrowly (i.e. doesn't support everything you need it to do).
I'd have to see the specifics of what you're trying to do with this object in order to go into much more depth.
What if you did this instead:
public class GameObjectControllerCombatant : GameObjectController, ICombatant
{
    // ...
}
Then derive your combatant classes from this instead of directly from GameObjectController. It still feels to me like it's breaking encapsulation, and the awkwardness of the name is a strong indication that your combatant classes are violating the Single Responsibility Principle... but it would work.
Well, sort of. You can write a generic method:
public void ComputeAttackUpdate<T>(T attacker, AttackType type, T victim)
    where T : GameObjectController, ICombatant
That means T has to satisfy both the constraints you need. It's pretty grim though - and if the attacker and victim could be different (somewhat unrelated) types, you'd have to make it generic in two type parameters instead.
However, I would personally try to go for a more natural solution. This isn't a situation I find myself in, certainly. If you need to regard an argument in two different ways, perhaps you actually want two different methods?
If you control all the classes in question, and if GameObjectController doesn't define any fields, the cleanest approach would be to define an IGameObjectController (whose properties and methods match those of GameObjectController) and an ICombatantGameObjectContoller (which derives from both IGameObjectController and ICombatant). Every class which is to be usable in situations that require both interfaces must be explicitly declared as implementing ICombatantGameObjectController, even though adding that declaration wouldn't require adding any extra code. If one does that, one can use parameters, fields, and variables of type ICombatantGameObjectController without difficulty.
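A sketch of that arrangement (FighterController is an illustrative concrete class, not from the question):

interface IGameObjectController
{
    // members mirroring GameObjectController
}

interface ICombatantGameObjectController : IGameObjectController, ICombatant
{
    // no new members; the type exists purely to combine the two contracts
}

class FighterController : GameObjectController, ICombatantGameObjectController
{
    // ... ICombatant implementation here ...
}

// Fields and parameters can now be declared as ICombatantGameObjectController.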
If you can't set up your classes and interfaces as described above, the approach offered by Jon Skeet is a generally good one, but with a nasty caveat: to call a generic function like Mr. Skeet's ComputeAttackUpdate, the compiler has to be able to determine a single type which it knows is compatible with the type of the object being passed in and with all of the constraints. If there are descendants of GameObjectController which implement ICombatant but do not derive from a common base type which also implements GameObjectController, it may be difficult to store such objects in a field and later pass them to generic routines. There is a way, and if you need to I can explain it, but it's a bit tricky.