Why do we need extension methods if inheritance is already there? [duplicate]

This question already has answers here:
Extension methods versus inheritance
(9 answers)
Closed 6 years ago.
I am facing this question regularly in interviews, but I cannot find its answer anywhere. Please help me.

Inheritance and extension methods are entirely orthogonal. An extension method is just a simple way of writing a function in C# using the familiar . syntax. The two only look similar if you use inheritance for code reuse, which tends to be frowned upon (but make sure you understand why - "best" practices are context-sensitive :)).
In any case, I'd expect the question is there to get you talking. Don't focus too hard on what the "right" answer is - just outline what inheritance is used for in your view, what extension methods are used for, and what benefits and drawbacks each has. Get the dialog running.
For me, for example, extension methods are all about writing common functions against an interface. That is, the functions add functionality on top of an interface (or class) while using only its public surface. This makes the "size" of the interface available to the function much smaller, which in turn makes it much easier to reason about.
Different people use extension methods differently, just like different people use inheritance differently. However, I find that as you shift from inheritance to composition, extension methods become more and more useful (and, really, natural). Inheritance is a very specific technique that came to be used for pretty much everything with little reason, but that's already a big topic on Programmers.SE, just like the composition vs. inheritance debate :))
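For example, here is a minimal sketch (IShape and the extension are hypothetical names) of what I mean by writing a common function against an interface - it sees only the public surface of IShape, so it works against every implementation:
public interface IShape
{
    double Area { get; }
}
public static class ShapeExtensions
{
    // Only IShape's public surface is visible here, which keeps the
    // function small and easy to reason about.
    public static bool IsLargerThan(this IShape shape, IShape other) => shape.Area > other.Area;
}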

Related

C# 8 - multiple inheritance "abstract class"? [duplicate]

This question already has answers here:
Default Interface Methods. What is deep meaningful difference now, between abstract class and interface?
(6 answers)
Closed 4 years ago.
It seems to me like the C# 8.0 feature, default interface member implementation, essentially allows one to create implementations at the interface level. Pairing that with the fact that a class can implement multiple interfaces, it seems eerily close to a multiple inheritance structure for classes. As far as I understand, this seems to be quite opposite to the core of the design of the language.
Where does this discrepancy stem from and what room does this leave for actual abstract classes to occupy?
This question has been suggested as an answer to mine, and while it is useful, it doesn't exactly answer my question. To be more precise:
I always assumed that single inheritance is one of the core principles of C#'s design, which is why the decision to implement this feature is surprising to me, and I would be interested to know where it stems from (C#-specifically).
The linked question does not answer what room it leaves for abstract classes.
I always assumed that single inheritance is one of the core principles of C#'s design
This is just not accurate. Single inheritance is a means to a design goal, not a goal in itself.
It's like saying the automatic transmission is a core design principle for car makers, when the actual goal is making the car easier and safer to drive. And looking at the car market, manual transmissions still thrive in both the low end (because they're cheaper) and the high end (performance sports cars) of the market, where they are a good fit for purpose. Many models in those areas can still be had with either type of transmission.
The actual design goal in C# leading to single inheritance is more about safety and correctness with regard to memory access and overload resolution. Multiple inheritance is much harder to verify mathematically for these properties than single inheritance. But as they find elegant solutions, the C# designers have added a number of features that stretch the bounds of single inheritance. Beyond interfaces, we have partial classes, generics (and later co/contravariance), and delegate members that all trend in this direction.
In this case, the default implementation is effective in safely providing a weak form of multiple inheritance, because the inherited functionality doesn't cascade down the inheritance tree from two directions. You can't create a conflict by inheriting from two different classes with differing interface implementations; you are limited to your own class's implementation, the default implementation, or the single implementation available via inheritance.
Note that default interface implementation does not allow for multiple inheritance, at least not in the sense that was a problem for C++. The reason multiple inheritance is a problem in C++ is that when a class inherits from multiple classes that have methods with equal signatures, it can become ambiguous as to which implementation is desired. With default interface implementation, that ambiguity is impossible because the class itself does not implement the method. An object must be cast to the interface in order to call the implemented methods. So multiple methods with the same signature may be called on the same instance, but you must explicitly tell the compiler which method you are executing.
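A minimal sketch (hypothetical names) of that last point:
public interface IGreeterA
{
    // C# 8 default interface implementation.
    string Greet() => "Hello from A";
}
public interface IGreeterB
{
    string Greet() => "Hello from B";
}
public class Greeter : IGreeterA, IGreeterB
{
    // No Greet() member here, so the class itself is never ambiguous.
}
// var g = new Greeter();
// g.Greet();                  // compile error: Greeter has no Greet()
// ((IGreeterA)g).Greet();     // "Hello from A" - the interface is chosen explicitly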
The linked post answers your first question to a good extent.
As for:
The linked question does not answer what room it leaves for abstract classes.
While it may read and sound similar, default interface method implementation certainly does not replace abstract classes, nor does it make them redundant. The biggest reason is this:
an interface cannot define class-level fields/variables, whereas an abstract class can have state.
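A quick sketch of that difference (hypothetical names):
public abstract class Counter
{
    private int _count;          // state: an abstract class can hold fields

    public void Increment() => _count++;
    public int Count => _count;
}
public interface ICounter
{
    // private int _count;       // illegal: an interface cannot declare instance fields
    void Increment();
    int Count { get; }
}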
There are some other differences, although none as big as the one above, which you can find in various blogs/posts:
https://dotnetcoretutorials.com/2018/03/25/proposed-default-interface-methods-in-c-8/
https://www.infoq.com/articles/default-interface-methods-cs8
etc.

Use of extension methods [duplicate]

This question already has answers here:
When do you use extension methods, ext. methods vs. inheritance?
(7 answers)
Closed 9 years ago.
I have been using C# extension methods for a while now and I find them really handy. However, I am not really sure what the ideal situation to use them is; I feel at times I abuse them. When would you recommend the use of extension methods?
IMO when either it's really an extension (and not a core/critical operation), or when it's a shortcut.
An example of an extension:
A common one in sandbox applications (though it could be used in real ones too, of course) is extending IEnumerable with a Print method.
The Print method shouldn't be there (that is, it shouldn't be part of the IEnumerable interface), but it helps and makes the syntax easier and cleaner. Also, you probably wouldn't want to ship a library with it as part of that interface.
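A minimal sketch of such a Print extension:
using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Handy while experimenting, but not something you'd bake into the type itself.
    public static void Print<T>(this IEnumerable<T> source)
    {
        foreach (var item in source)
            Console.WriteLine(item);
    }
}
// Usage: new[] { 1, 2, 3 }.Print();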
An example of a shortcut:
Another thing I often find myself creating is a helper extension for objects with containers. Instead of calling Items.Add and the like, I just make an AddItem extension method.
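Something like this sketch (Order and OrderItem are hypothetical types):
using System.Collections.Generic;

public class Order
{
    public List<OrderItem> Items { get; } = new List<OrderItem>();
}
public class OrderItem { }

public static class OrderExtensions
{
    // Shortcut: order.AddItem(item) instead of order.Items.Add(item).
    public static void AddItem(this Order order, OrderItem item) => order.Items.Add(item);
}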
Something to consider is that extension methods are just syntactic sugar - that is, they're for you, the developer. So for .NET types and the like, use them when you think it's a good idea and will make things cleaner.
When it comes to "Should this method be an extension or a member?" see the first sentence in this answer, and also look here & here for more information.

Interfaces - why are they used? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
How will I know when to create an interface?
Hi guys,
This will sound a bit thick, I guess, but I am battling to understand the reason to use interfaces. People keep saying that they are 'contracts' for classes. But why use them? If I were a single developer on an application that I knew no one would ever work on (I know - not a common example, but I am just trying to understand), would I use interfaces? They seem to just duplicate work. It seems I define what a class must implement, and then go and implement it. I'm doing it twice - why?
Please note: I am not in any way saying they're useless... I'm just trying to find out why, in projects I work on, they define an IClass and then, based on that, define the class which they use...
Sorry if it's very basic... Just hoping someone can help me out.
I use interfaces because it makes my code a lot more modular. Using interfaces in combination with an inversion of control container (http://code.google.com/p/autofac/) will allow you to swap in various implementations of an interface easily.
Also, code written against interfaces is easier to unit test.
Those are just a couple good reasons; really, there are more. But those are strong enough to make me want to use interfaces.
Use an interface if you want several classes that all support the same set of methods. For example, you might have classes that store data, where the calling code does not care about the details of which class it is working with; they must implement Store and Fetch methods (a sketch follows below). In this case you can either have an interface with those two methods or a common base class.
Why not have a common base class?
You can only have one base class, and you may need it for something else.
The classes may be so different that having a common base class feels forced.
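A minimal sketch of the interface version (hypothetical names):
using System.Collections.Generic;

public interface IDataStore
{
    void Store(string key, string value);
    string Fetch(string key);
}
// Callers depend only on the contract, not on the concrete class.
public class InMemoryStore : IDataStore
{
    private readonly Dictionary<string, string> _data = new Dictionary<string, string>();

    public void Store(string key, string value) => _data[key] = value;
    public string Fetch(string key) => _data[key];
}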

Why are methods virtual by default in Java, but non-virtual by default in C#? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
In Java, methods are virtual by default; C# is the opposite.
Which is better? What are the advantages and disadvantages in each approach?
Anders Hejlsberg (C# lead architect):
There are several reasons. One is performance. We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue.
A more important issue is versioning. There are two schools of thought about virtual methods. The academic school of thought says, "Everything should be virtual, because I might want to override it someday." The pragmatic school of thought, which comes from building real applications that run in the real world, says, "We've got to be real careful about what we make virtual."
When we make something virtual in a platform, we're making an awful lot of promises about how it evolves in the future. For a non-virtual method, we promise that when you call this method, x and y will happen. When we publish a virtual method in an API, we not only promise that when you call this method, x and y will happen. We also promise that when you override this method, we will call it in this particular sequence with regard to these other ones and the state will be in this and that invariant.
Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual.
Java's way is simpler; C#'s way is more granular, safer, and more efficient by default. Which is better is in the eye of the beer holder.
.NET forces the programmer to define which functions may be overridden, whereas Java functions, by default, can be overridden unless the final keyword is used.
If you're a strong advocate of the Open/Closed Principle, you may tend to support the Java way. It's best to allow classes to be extended and methods to be overridden such that the base functionality/code is untouched. For this reason, I support/prefer the Java way. If I were looking at the question from a different perspective, my opinion might be the opposite.
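For reference, a minimal C# sketch (Animal and Dog are hypothetical) of the explicit opt-in:
using System;

public class Animal
{
    // C# methods are non-virtual unless explicitly marked virtual.
    public virtual void Speak() => Console.WriteLine("...");

    // Non-virtual: derived classes cannot override this, only hide it with 'new'.
    public void Breathe() => Console.WriteLine("breathing");
}
public class Dog : Animal
{
    // 'override' is only legal because Speak is virtual in the base class.
    public override void Speak() => Console.WriteLine("Woof");
}
// Animal a = new Dog();
// a.Speak();    // "Woof" - dispatched virtually
// a.Breathe();  // always Animal's implementation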
There are two major reasons why virtual by default is so much better than non-virtual.
The main principles behind the usefulness of OOP are the Liskov substitution principle, polymorphism, and late binding. I use the strategy pattern all the time, and for that I want my methods to be virtual. If you are a fan of the Open/Closed Principle, you should like the Java philosophy more: you should be able to change behavior without changing your source code, and you can do that with dependency injection and virtual methods.
If you call a non-virtual method, you want to know from your code which class's method you are calling. The flaw of .NET is that you cannot tell that from the call site.
Another benefit of virtual-by-default methods is that it is much easier to test your code, because you can make mocks of your (or third-party) classes. Testing with Mockito is really easy in Java.
Example
In Java, if you define ClassB as
public class ClassB extends ClassA {
    @Override
    public void run() {
    }
}
and create an object
ClassA obj = new ClassB();
then when you call obj.run(), how will you know whether that call follows the rules of the polymorphic open/closed principle, or whether it will call the method bound to ClassA? In Java you know that there is always polymorphism. It is easier to make mocks, and it is easier to extend classes and follow the Liskov substitution principle.
On the other hand, static methods are bound to a class, so if you want to call a method that is tied to ClassA you can define that method like this:
public static void run(ClassA obj)
and you can call it with
ClassB obj = new ClassB();
ClassA.run(obj);
and from the code you will know that the method you are calling is defined in ClassA and not in ClassB. Also, in that case you will not have the overhead of virtual methods. (Note that the JIT will also reduce the overhead of virtual methods in many cases.)
For C#, if the reason to make the method non-virtual is to be able to define it in a subclass but not involve polymorphism, you're probably subclassing for no real reason.
If it's for design, I'd suggest making the class sealed (final in java) instead of individual methods if possible.
Jon Skeet has said that C# classes should be sealed by default, since methods are non-virtual by default as well. Joshua Bloch said that you should design for inheritance or forbid it (make classes final). The C# designers chose a hybrid approach, which is inconsistent.
It's a perturbation in time. C++ uses the virtual keyword, with non-virtual ("final") as the default.
Java follows C++ and attempts to take the best of it while improving on its shortcomings. The dangers of overusing inheritance had not yet come to light, so Java chose the final keyword with virtual as the default.
C# follows Java and has the benefit of hindsight. Anders chose to go back to the C++ convention after observing Java's experience.
As always, there are pros and cons. C# has made AOP more difficult to implement, but hindsight can be 20/20 as mentioned by others. It all boils down to whether or not you believe that classes must be designed for extension or simply left open for unforeseen behavior modifications. When designing a language, this is a tough question to answer. I think industry experience is leaning towards the more conservative approach that C# takes.

evaluating cost/benefits of using extension methods in C# >= 3.0 [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
In what circumstances (usage scenarios) would you choose to write an extension rather than sub-classing an object?
< full disclosure: I am not an MS employee; I do not know Mitsu Furota personally; I do know the author of the open-source Componax library mentioned here, but I have no business dealings with him whatsoever; I am not creating, or planning to create, any commercial product using extensions. In sum: this post is from pure intellectual curiosity related to my trying to (continually) become aware of "best practices" >
I find the idea of extension methods "cool," and obviously you can do "far-out" things with them, as in the many examples in Mitsu Furota's (MS) blog posts.
A personal friend wrote the open-source Componax library, and there are some remarkable facilities in there; but he is in complete command of his small company, with total control over code guidelines, and every line of code "passes through his hands."
While this is speculation on my part, I think/guess other issues might come into play in a medium-to-large software team regarding the use of extensions.
Looking at MS's guidelines, you find:
In general, you will probably be calling extension methods far more often than implementing your own. ... In general, we recommend that you implement extension methods sparingly and only when you have to. Whenever possible, client code that must extend an existing type should do so by creating a new type derived from the existing type. For more information, see Inheritance (C# Programming Guide). ... When the compiler encounters a method invocation, it first looks for a match in the type's instance methods. If no match is found, it will search for any extension methods that are defined for the type, and bind to the first extension method that it finds.
And from MS's documentation:
Extension methods present no specific security vulnerabilities. They can never be used to impersonate existing methods on a type, because all name collisions are resolved in favor of the instance or static method defined by the type itself. Extension methods cannot access any private data in the extended class.
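A small sketch (Widget is a hypothetical type) of the resolution rule just quoted:
using System;

public class Widget
{
    public void Describe() => Console.WriteLine("instance method");
}
public static class WidgetExtensions
{
    // Same signature as the instance method: this can never be reached via w.Describe().
    public static void Describe(this Widget w) => Console.WriteLine("extension method");
}
// var w = new Widget();
// w.Describe();                   // "instance method" - instance members always win
// WidgetExtensions.Describe(w);   // explicit static call reaches the extension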
Factors that seem obvious to me would include:
I assume you would not write an extension unless you expected it to be used very generally and very frequently. On the other hand, couldn't you say the same thing about sub-classing?
Knowing we can compile extensions into a separate dll, add the compiled dll, reference it, and then use the extensions is "cool," but does that "balance out" the cost inherent in the compiler first having to check whether instance methods are defined, as described above? Or the cost, in case of a "name clash," of using static invocation to make sure your extension is invoked rather than the instance definition?
How frequent use of extensions would affect run-time performance or memory use, I have no idea.
So, I'd appreciate your thoughts on how/when you do, or don't, use extensions compared to sub-classing.
thanks, Bill
My greatest usage for them is to extend closed-off 3rd party APIs.
Most of the time, when a software developer is offering an API on Windows these days, they are leaning more and more toward .NET for that extensibility. I like to do this because I prefer to depend on my own methods, which I can modify in the future and which serve as a global entry point to their API in case they change it.
Previously, when having to do this, and I couldn't inherit the API object because it was sealed or something, I would rely on the Adapter pattern to make my own classes that wrapped up their objects. This is a functional, but rather inelegant solution. Extension methods give you a beautiful way to add more functionality to something that you don't control.
Many other peoples' greatest usage for them is LINQ!
LINQ would not be possible without the extension methods provided to IEnumerable.
The reason why people love them is because they make code more readable.
I have noticed another MAJOR usage of extension methods (myself included): making code more readable, and making it appear as if the code to do something belongs where it is supposed to. It also gets rid of the dreaded "Util" static-god-class that I have seen many times over. What looks better: Util.DecimalToFraction(decimal value) or value.ToFraction()? If you're like me, the latter.
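A sketch of what that extension might look like (ToFraction is hypothetical; this naive version scales by powers of ten and reduces with a GCD):
using System;

public static class DecimalExtensions
{
    public static string ToFraction(this decimal value)
    {
        long denominator = 1;
        // Scale until the value is whole, e.g. 0.75 -> 75/100.
        while (value != Math.Truncate(value))
        {
            value *= 10;
            denominator *= 10;
        }
        long numerator = (long)value;
        long gcd = Gcd(Math.Abs(numerator), denominator);
        return $"{numerator / gcd}/{denominator / gcd}";   // 0.75m -> "3/4"
    }

    private static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);
}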
Finally, there are those who deem the "static method" as EVIL!
Many 'good programmers' will tell you that you should try to avoid static methods, especially those who use extensive unit testing. Static methods are difficult to test in some cases, but they are not evil if used properly. While extension methods ARE static... they don't look or act like it. This allows you to get those static methods out of your classes, and onto the objects that they really should be attached to.
Regarding performance...
Extension methods are no different than calling a static method, passing the object being extended as a parameter... because that is what the compiler turns them into. The great thing is that your code looks clean, it does what you want, and the compiler handles the dirty work for you.
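A minimal sketch of that equivalence (hypothetical names):
public static class IntExtensions
{
    public static bool IsEven(this int n) => n % 2 == 0;
}
// These two calls compile to the same thing:
// bool a = 4.IsEven();                // extension-method syntax
// bool b = IntExtensions.IsEven(4);   // the static call the compiler emits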
I use extension methods as a way to improve the functionality for classes without increasing the complexity of the class. You can keep your classes simple, and then add your repetitive work later on as an extension.
The Min() and Max() extension methods are great examples of this. You could just as easily declare a private method to calculate these, but an extension method provides better readability, makes the functionality available to your entire project, and doesn't require making the array type any more complex.
Taking the sub-classing approach vs. extension methods requires a couple of things to be true:
The type must be extendable (not sealed).
All places the type is created must support a factory pattern of sorts, or the other code will just create the base type.
Adding an extension method requires really nothing other than using a C# 3.0+ compiler.
But most importantly, an inheritance hierarchy should represent an is-a relationship. I don't feel that adding one or two new methods/behaviors to a class truly expresses this type of relationship; it is instead augmenting existing behavior. A wrapper class or extension method fits the scenario much better.
In some cases you can't use a subclass: string for instance is sealed. You can however still add extension methods.
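For example (a minimal sketch):
// public class MyString : string { }   // won't compile: System.String is sealed

public static class StringExtensions
{
    // An extension method attaches behavior without subclassing.
    public static string Shout(this string s) => s.ToUpperInvariant() + "!";
}
// "hello".Shout();   // "HELLO!"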
