Anthropomorphising interfaces - good or bad idea? - C#

I have for some time tried to anthropomorphise (make human readable) the names I give to interfaces; to me this is the same as giving an interface a role-based name – trying to capture the purpose of the interface in the name.
I was having a discussion with other developers who think this is a little strange and childish.
What do the folks of SO think?
Examples (C# syntax):
public interface IShowMessages
{
    void Show(string message);
    void Show(string title, string message);
}

public class TraceMessenger : IShowMessages
{
}

public interface IHaveMessageParameters
{
    IList<string> Parameters { get; }
}

public class SomeClass : IHaveMessageParameters
{
}

IThinkItsATerribleIdea

Of course you should always choose identifiers which are human readable. As in: transport the meaning which they convey even to somebody who is not as familiar with the problem to be solved by the code as you are.
However, using long identifiers does not automatically make your identifiers more 'readable'. To any reasonably experienced programmer, 'tmp' conveys as much information as 'temporaryVariable' does. The same goes for 'i' vs. 'dummyCounter', etc.
In your particular example, the interface names are actually quite annoying, since somebody who's used to developing object-oriented systems will read the inheritance as 'is a'. And 'SomeClass is a IHaveMessageParameters' sounds silly.
Try using IMessagePrinter and IMessageParameterProvider instead.
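For illustration, the suggested renames might look like this (a sketch; only the names change, not the members):

using System.Collections.Generic;

// "TraceMessenger is an IMessagePrinter" now reads as a natural 'is a' relationship.
public interface IMessagePrinter
{
    void Show(string message);
    void Show(string title, string message);
}

// "SomeClass is an IMessageParameterProvider" likewise.
public interface IMessageParameterProvider
{
    IList<string> Parameters { get; }
}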

Yes, that sounds like a good idea.
What's the alternative?
Code should be human-readable. Any fool can write code a computer can understand. The difficult part is writing code a human can understand.
Humans have to maintain the code, so it's pretty darn important that it is as easy to maintain as possible - that includes that the code should be as readable as possible.

Interfaces describe behavior, and so I name them to communicate the behavior they are mandating. This generally means that the name is a verb (or adjective) or some form of action-describing phrase. Combined with the "I" for interface, this looks like what you are doing...
ICanMove, IControllable, ICanPrint, ISendMessages, etc...
Using adjectives as in IControllable, IDisposable, IEnumerable, etc. communicates the same thought as a verb form and is terser, so I use this form as well...
Finally, more important (or at least equally important) than what you name the interface is to keep the interfaces you design as small and logically contained as possible. Strive to have each interface represent as small and logically connected a set of methods/properties as possible. When an interface has so much in it that there is no obvious name describing all the behavior it mandates, that's a sign it contains too much and needs to be refactored into two or more smaller interfaces. So naming interfaces in the way you are proposing helps to enforce this kind of organizational design, which is a good thing.
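For instance (hypothetical names), an interface that both prints and logs messages has no obvious honest name, which is the hint that it should be split:

// Hard to name: it mandates two unrelated behaviors.
public interface IPrintAndLogMessages
{
    void Print(string message);
    void Log(string message);
}

// Split into two small, role-named interfaces instead.
public interface IPrintMessages
{
    void Print(string message);
}

public interface ILogMessages
{
    void Log(string message);
}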

There's nothing strange about using simple human-readable names. But using the I for interface to also stand for the first-person "I", as though the type is talking about itself... that is a little unusual, yes.
But the bottom line is, whatever works for you and is understood by you and your team is fine. You gotta go with what works.

In my opinion this approach just puts a greater burden on developers to come up with such names, since it integrates the I into a sentence. I don't find IDisposable, for example, to be any more difficult to read than ICanBeDisposed.

In the OP's examples, the anthropomorphic way compares well against the alternatives – e.g. IShowMessages vs. something like IMessageShower. But this is not always the case. Interfaces I have used when programming game objects include IOpenClosable and ILockable. Alternatives like ICanBeOpenedAndClosed and ICanBeLocked would be more verbose. Or you could simply do IAmOpenClosable and IAmLockable – but then you'd be adding the "Am" just for the anthropomorphic effect, with no real information benefit. I am all for minimizing verbosity if the same amount of information is conveyed.

So long as the semantics of what you are trying to achieve aren't lost and terseness isn't irreparably compromised (IDoLotsOfThingsWhichIncludesTheFollowingColonSpace...), I wouldn't generally mind somebody other than myself doing it. Still, there are plenty of contexts in which terseness is paramount, and there this would be unacceptable.

Intentionally using the 'I for Interface' convention in the first person seems a bit silly to be honest. What starts out as a cute pun becomes impossible to follow consistently, and ends up clouding meaning later on. That said, your standalone example reads clearly enough and I wouldn't have a problem with it.


Getting my head around object oriented programming

I am an entry-level .NET developer using it to develop web sites. I started with classic ASP and last year jumped ship with a short C# book.
As I developed I learned more, and started to see that, coming from classic ASP, I had always been using C# like a scripting language.
For example, in my last project I needed to encode video on the web server and wrote code like
public class Encoder
{
    public static bool Encode(string videopath)
    {
        ...snip...
        return true;
    }
}
While searching for samples related to my project, I've seen people doing this:
public class Encoder
{
    public static Encode(string videopath)
    {
        EncodedVideo encoded = new EncodedVideo();
        ...snip...
        encoded.EncodedVideoPath = outputFile;
        encoded.Success = true;
        ...snip...
    }
}
public class EncodedVideo
{
    public string EncodedVideoPath { get; set; }
    public bool Success { get; set; }
}
As I understand it, the second example is more object oriented, but I don't see the point of using the EncodedVideo object.
Am I doing something wrong? Is it really necessary to use this sort of code in a web app?
Someone once explained OO to me as a soda can.
A soda can is an object, and an object has many properties and many methods. For example:
SodaCan.Drink();
SodaCan.Crush();
SodaCan.PourSomeForMyHomies();
etc...
The purpose of OO design is, theoretically, to write a line of code once and to have abstraction between objects.
This means that Coder.Consume(SodaCan.Contents); is relevant to your question.
An encoded video is not the same thing as an encoder. An encoder returns an encoded video, and an encoded video may be produced by an encoder, but they are two separate objects. Because they are two different entities serving different functions, they simply work together.
Much like me consuming a soda can does not mean that I am a soda can.
Neither example is really complete enough to evaluate. The second seems more complex than the first, but without knowing how it will be used it's difficult to tell.
Object-oriented design is at its best when it allows you to either:
1) Keep related information and/or functions together (instead of using parallel arrays or the like).
Or
2) Take advantage of inheritance and interface implementation.
Your second example MIGHT be keeping the data together better, if it returns the EncodedVideo object AND the success or failure of the method needs to be kept track of after the fact. In this case you would be replacing a combination of a boolean "success" variable and a path with a single object, clearly documenting the relation of the two pieces of data.
Another possibility not touched on by either example is using inheritance to better organize the encoding process. You could have a single base class that handles the "grunt work" of opening the file, copying the data, etc. and then inherit from that class for each different type of encoding you need to perform. In this case much of your code can be written directly against the base class, without needing to worry about what kind of encoding is actually being performed.
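A sketch of that idea (the names here are hypothetical, reusing the EncodedVideo class from the question):

// Hypothetical base class: handles the shared "grunt work" and
// delegates the format-specific step to subclasses.
public abstract class VideoEncoderBase
{
    public EncodedVideo Encode(string videoPath)
    {
        // ... open the file, copy the data, etc. (shared work) ...
        string outputFile = EncodeCore(videoPath);
        return new EncodedVideo { EncodedVideoPath = outputFile, Success = true };
    }

    // Each encoding type overrides only this step.
    protected abstract string EncodeCore(string videoPath);
}

public class Mp4Encoder : VideoEncoderBase
{
    protected override string EncodeCore(string videoPath)
    {
        // ... MP4-specific encoding would happen here ...
        return videoPath + ".mp4";
    }
}

Code written against VideoEncoderBase never needs to know which concrete encoder it is using.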
Actually the first looks better to me, but shouldn't return anything (or return an encoded video object).
Usually we assume methods complete successfully without exceptional errors - if exceptional errors are encountered, we throw an exception.
Object oriented programming is fundamentally about organization. You can program in an OO way even without an OO language like C#. By grouping related functions and data together, it is easier to deal with increasingly complex projects.
You aren't necessarily doing something wrong. The question of which paradigm works best is highly debatable and isn't likely to have a clear winner, as there are so many different ways to measure "good" code, e.g. maintainable, scalable, performant, re-usable, modular, etc.
It isn't necessary, but it can be useful in some cases. Take a look at various MVC examples to see OO code. Generally, OO code has the advantage of being re-usable, so that what was written for one application can be used over and over again in others. For example, look at log4net for an example of a logging framework that many people use.
The way you structure an OO program -- which objects you use and how you arrange them -- really depends on many factors: the age of the project, the overall size of the project, the complexity of the problem, and a bit of personal taste.
The best advice I can think of that wraps all the reasons for OO into one quick lesson is something I picked up learning design patterns: "Encapsulate the parts that change." The value of OO is to reuse elements that will be repeated without writing additional code. But obviously you only care to "wrap up" code into objects if it will actually be reused or modified in the future, so you should figure out what is likely to change and make objects out of it.
In your example, the reason to use the second setup may be that you can reuse the EncodedVideo object elsewhere in the program. Any time you need to deal with EncodedVideo, you don't concern yourself with "how do I encode and use video"; you just use the object you have and trust it to handle the logic. It may also be valuable to encapsulate the encoding logic if it's complex and likely to change. Then you isolate changes to just one place in the code, rather than the many places where you might have used the object.
(Brief aside: The particular example you posted isn't valid C# code. In the second example, the static method has no return type, though I assume you meant to have it return the EncodedVideo object.)
This is a design question, so the answer depends on what you need, meaning there's no right or wrong answer. The first method is simpler, but in the second case you encapsulate the encoding logic in the EncodedVideo class, and you can easily change that logic (based on the incoming video type, for instance) in your Encoder class.
I think the first example seems simpler, except I would avoid using statics whenever possible, to increase testability.
public class Encoder
{
    private string videoPath;

    public Encoder(string videoPath)
    {
        this.videoPath = videoPath;
    }

    public bool Encode()
    {
        ...snip...
        return true;
    }
}
Is OOP necessary? No.
Is OOP a good idea? Yes.
You're not necessarily doing something wrong. Maybe there's a better way, maybe not.
OOP, in general, promotes modularity, extensibility, and ease of maintenance. This goes for web applications, too.
In your specific Encoder/EncodedVideo example, I don't know if it makes sense to use two discrete objects to accomplish this task, because it depends on a lot of things.
For example, is the data stored in EncodedVideo only ever used within the Encode() method? Then it might not make sense to use a separate object.
However, if other parts of the application need to know some of the information that's in EncodedVideo, such as the path or whether the status is successful, then it's good to have an EncodedVideo object that can be passed around in the rest of the application. In this case, Encode() could return an object of type EncodedVideo rather than a bool, making that data available to the rest of your app.
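For illustration, a minimal sketch of that variant (reusing the EncodedVideo class from the question; the path handling here is made up):

public class Encoder
{
    public static EncodedVideo Encode(string videoPath)
    {
        var encoded = new EncodedVideo();
        // ... the actual encoding work would happen here ...
        encoded.EncodedVideoPath = videoPath + ".encoded";
        encoded.Success = true;
        return encoded; // callers get the path and the status together
    }
}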
Unless you want to reuse the EncodedVideo class for something else, then (from the code you've given) I think your method is perfectly acceptable for this task. Unless there's unrelated functionality in the EncodedVideo and Encoder classes, or one forms a massive lump of code that should be split down, you're not really lowering the cohesion of your classes, which is fine. Assuming you don't need to reuse EncodedVideo and the classes are cohesive, by splitting them you're probably creating unnecessary classes and increasing coupling.
Remember: 1. the OO philosophy can be quite subjective and there's no single right answer, 2. you can always refactor later :p

Why is C# Case Sensitive? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
What reasoning exists behind making C# case sensitive?
I'm considering switching from VB.NET to take advantage of some language features (CCR and yield), and understanding the reasoning behind this difference may make the transition easier.
[UPDATE]
Well, I took the plunge three days ago. Learning C# hasn't been particularly hard; I could barely remember my C++ days in the late 90's, though.
Is the case sensitivity annoying me? Not as much as I'd thought... plus I am finding that it actually is advantageous. I'm really happy with the CCR as an asynchronous coordination programming model. If only I had more time on the current project, I'd port the code base into C# to take full advantage. It wouldn't be fair to my client, though.
Assessing my current project now, I'm seeing blocking threads EVERYWHERE! AHhhh!!!
[UPDATE]
Well, I've been programming in C# for nearly a year now. I'm really enjoying the language, and I really REALLY hate crossing over to VB (especially when it is unavoidable!).
And the case sensitivity thing? Not even an issue.
C# is case sensitive because it takes after the C-style languages, which are all case sensitive. This is from memory; there was an MSDN link confirming it, but it's not working for me right now, so I can't verify.
I would also like to point out that this is a very valid use case:
public class Child
{
    private Person parent;

    public Person Parent
    {
        get { return parent; }
    }
}
Yes, you can get around this by using prefixes on your member variables, but some people don't like to do that.
They were probably thinking, "We don't want people using SoMeVaRiAbLe in one place and sOmEvArIaBlE in another."
Consider the variable names in the following pseudocode:
class Foo extends Object { ... }
...
foo = new Foo();
Having case sensitivity allows conventions which use case to separate class names and instances; such conventions are not at all uncommon in the development world.
I think the fact that case can convey information is a very good reason. For example, by convention, class names, public methods and properties start with an uppercase letter, while fields and local variables start with a lowercase letter.
After using the language for years, I've come to really enjoy this; code is much easier to read when you can pick up information simply from the casing of the words.
And that's why people do this sometimes, and it can make sense:
Foo foo = new Foo();
I do that all the time; it's very useful. Consider a more realistic situation like this:
Image image = Image.LoadFrom(path);
It just makes sense sometimes to call the instance the same thing as the class name, and the only way to tell them apart is the casing. In C++ the case-sensitivity becomes even more useful, but that's another story. I can elaborate if you're interested.
I think that having case-sensitive identifiers can make code more readable through the use of naming conventions. And even without naming conventions, the consistency enforced by case sensitivity ensures that the same entity is always written the same way.
C# inherits its case sensitivity from C and Java, which it tries to mimic to make it easier for developers to move to C#.
There might have been good reasons for making C case sensitive when it was created three decades ago, but there don't seem to be any records of why. Jeff Atwood wrote a good article arguing that case sensitivity might no longer make sense.
Probably copied from C, C++, Java, etc., or maybe kept the same on purpose so that it's similar to what other languages have.
It was just a matter of taste on the C# language design team. I am willing to bet it was for commonality with other C-family languages. It does however lead to some bad programming practices, such as a private field and its associated property differing only in the case of the first letter.
EDIT:
Why might this be considered bad?
class SomeClass
{
    private int someField;

    public int SomeField
    {
        // Now we have recursion where it's not wanted; it is difficult
        // for the eye to pick out and results in a StackOverflowException.
        get { return SomeField; }
    }
}
Prefixing private fields with an _ or an m can make this easier to spot. It's not a huge biggie, and personally I still do exactly what I have just said is bad (so sue me!).
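A sketch of the prefixed version (the same class as above, with the recursion mistake now easy to avoid):

class SomeClass
{
    private int _someField;

    public int SomeField
    {
        get { return _someField; } // the underscore makes the field visually distinct
    }
}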
Parsing is also a tiny bit easier for case-sensitive languages. As long as there's no good reason to choose the non-case-sensitive way, why bother?
You are looking at this in a very limited way – from your own point of view. Language designers must take into account a whole other set of considerations: cultural reasons, compatibility with other languages, common coding practices, etc.
All modern languages use case-sensitivity: which ones don't?
As someone who used BASIC for a number of years I got very tired of developers using different cases for the same variable. This sort of thing is very tiresome to look at and encourages sloppy programming. If you can't be bothered to get the case right - what else can't you be bothered to do?
From the .NET Framework Developer's Guide, Capitalization Conventions, Case-Sensitivity:
"The capitalization guidelines exist solely to make identifiers easier to read and recognize. Casing cannot be used as a means of avoiding name collisions between library elements. Do not assume that all programming languages are case-sensitive. They are not. Names cannot differ by case alone."
My best guess as to why it is case sensitive would be because Unix is case sensitive. Dennis Ritchie, the father of C, also co-wrote Unix, so it makes sense that the language he wrote would match the environment available at the time. C# just inherited this from its ancestor. I think this was a good decision on Microsoft's part, even though Windows paths are not case sensitive.
I assume you'd like to see code like:
SomeField=28;
someField++;
somefield++;
SOMEFIELD++;
compiled as if SomeField in any casing variation is the same variable.
Personally, I think it's a bad idea. It encourages laziness and/or carelessness. Why bother properly casing the thing when it doesn't matter anyway? Well, while we're at it, maybe the compiler should allow misspellings, like SoemField++?
Naming. Names are hard to come by, and you don't want to have to invent a new name when talking about something similar, or resort to some notation to set the two apart.
Such scenarios are properties for fields, or constructor arguments that you are about to assign to fields.
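For example (a hypothetical illustration; case alone distinguishes the field, the constructor argument, and the property):

public class Person
{
    private string name;

    public Person(string name)
    {
        this.name = name; // the argument is assigned to the field of the same name
    }

    public string Name
    {
        get { return name; }
    }
}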

Where should I put my first method

I need to add a method that will calculate a weighted sum of a worker's salary and his superior's salary. I would like something like this:
class CompanyFinanse
{
    public decimal WeightedSumOfWorkerSalaryAndSuperior(Worker WorkerA, Worker Superior)
    {
        return WorkerA.Salary + Superior.Salary * 2;
    }
}
Is this a good design, or should I put this method somewhere else? I'm just starting to design the project and am thinking about a good, object-oriented way of organizing methods into classes. So I would like to start off with OOP in mind. Best practices needed!
I would either put it in the Worker class or have a static function in a finance library. I don't think a Finance object really makes sense; it would be more a set of business rules than anything, so it would be static.
public class Worker {
    public Worker Superior { get; set; }

    // Read-only computed property: a getter with no setter.
    public decimal WeightedSalary {
        get { return (Superior.Salary * 2) + this.Salary; }
    }

    public decimal Salary { get; set; }
}
or
public static class Finance {
    public static decimal WeightedSumOfWorkerSalaryAndSuperior(Worker WorkerA, Worker Superior) {
        return WorkerA.Salary + Superior.Salary * 2;
    }
}
For your design to be Object Oriented, you should start by thinking of the purpose of the entire application. If there is only one method in your application (weighted sum), then there isn't too much design to go on.
If this is a finance application, maybe you could have a Salary class which contains a worker's salary and some utility functions.
For the method you pointed out, if the Worker class has a reference to his Superior, you could make this method part of the Worker class.
Without more information on the purpose of the application, it's difficult to give good guidance.
So it may be impossible to give you a complete answer about "best practices" without knowing more about your domain, but I can tell you that you may be setting yourself up for disaster by thinking about the implementation details this early.
If you're like me, then you were taught that good OOD/OOP is meticulously detailed and involves BDUF (Big Design Up Front). It wasn't until later in my career that I found out this is the reason so many projects become egregiously unmaintainable down the road. Assumptions are made about how the project might work, instead of allowing the design to emerge naturally from how the code is actually going to be used.
Simply stated: you need to be doing BDD/TDD (Behavior/Test-Driven Development).
Start with a rough domain model sketched out, but avoid too much detail.
Pick a functional area that you want to get started with. Preferably at the top of the model, or one the user will be interacting with.
Brainstorm on expected functionality that the unit should have and make a list.
Start the TDD cycle on that unit and then refactor aggressively as you go.
What you will end up with is exactly what you do need, and nothing you don't (most of the time). You gain the added benefit of having full test coverage so you can refactor later on down the road without worrying about breaking stuff :)
I know I haven't given you any code here, but that is because anything I give you will probably be wrong, and then you will be stuck with it. Only you know how the code is actually going to be used, and you should start by writing the code in that way. TDD focuses on how the code should look, and then you can fill in the implementation details as you go.
A full explanation of this is beyond the scope of this post, but there are a myriad of resources available online as well as a number of books that are excellent resources for beginning the practice of TDD. These two guys should get you off to a good start.
Martin Fowler
Kent Beck
Following up on the answer by brien, I suggest looking at the practice of CRC cards (Class-Responsibility-Collaboration). There are many sources of information, including:
this tutorial from Cal Poly,
this orientation on the Agile Modeling web site, and
The CRC Card Book, which discusses the practice and its use with multiple languages.
Understanding which class should "own" a particular behavior (and/or which classes should collaborate in implementing a given use case), is in general a top-down kind of discussion driven by the overall design of what your system is doing for its users.
It is easy to find out whether your code needs improvement: there is a code smell in your snippet, and you should address it.
It is good that you have a very declarative name for the method, but it is too long. It sounds like, if you keep that method in this Finanse class, it is inevitable that you have to use all those words in the method name to convey what the method is intended to do.
That basically means the method may not belong to this class.
One way to address this code smell is to see whether you could get a shorter method name if the method lived on another class. I see you have Worker and Salary classes.
Assuming those are the only classes available and you don't want to add more, I would put this on Salary. Salary knows how to calculate a weighted salary given another salary (the superior's salary in this case) as input. You don't need more than two words for the method name now.
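A hypothetical sketch of that (assuming a Salary class with an Amount property):

public class Salary
{
    public decimal Amount { get; set; }

    // Two words instead of seven: the class name carries the rest of the context.
    public decimal WeightedSum(Salary superior)
    {
        return this.Amount + superior.Amount * 2;
    }
}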
#Shawn's answer is one variation on addressing this code smell. (I think you could call it the 'long method name' code smell.)

Ab-using languages

Some time ago I had to address a certain C# design problem when I was implementing a JavaScript code-generation framework. One of the solutions I came up with was using the "using" keyword in a totally different (hackish, if you please) way: as syntactic sugar (well, originally it is that anyway) for building a hierarchical code structure. Something that looked like this:
CodeBuilder cb = new CodeBuilder();
using (cb.Function("foo"))
{
    // Generate some function code
    cb.Add(someStatement);
    cb.Add(someOtherStatement);

    using (cb.While(someCondition))
    {
        cb.Add(someLoopStatement);
        // Generate some more code
    }
}
It works because the Function and While methods return an IDisposable object that, upon disposal, tells the builder to close the current scope. Such a thing can be helpful for any tree-like structure that needs to be hard-coded.
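For reference, here is a simplified sketch of the mechanism (not my actual implementation, which builds statement trees rather than raw text):

using System;
using System.Text;

public class CodeBuilder
{
    private readonly StringBuilder _code = new StringBuilder();
    private int _indent;

    public IDisposable Function(string name)
    {
        return OpenScope("function " + name + "() {");
    }

    public IDisposable While(string condition)
    {
        return OpenScope("while (" + condition + ") {");
    }

    public void Add(string statement)
    {
        AppendLine(statement + ";");
    }

    public override string ToString()
    {
        return _code.ToString();
    }

    private IDisposable OpenScope(string header)
    {
        AppendLine(header);
        _indent++;
        return new Scope(this);
    }

    private void CloseScope()
    {
        _indent--;
        AppendLine("}");
    }

    private void AppendLine(string line)
    {
        _code.Append(new string(' ', _indent * 2)).AppendLine(line);
    }

    // Dispose() releases no resources; it only closes the current scope.
    private sealed class Scope : IDisposable
    {
        private readonly CodeBuilder _owner;
        public Scope(CodeBuilder owner) { _owner = owner; }
        public void Dispose() { _owner.CloseScope(); }
    }
}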
Do you think such "hacks" are justified? After all, you could say that in C++, for example, many features such as templates and operator overloading get over-abused, and this behavior is encouraged by many (look at Boost, for example). On the other hand, you could say that many modern languages discourage such abuse and give you specific, much more restricted features.
My example is, of course, somewhat esoteric, but real. So what do you think about the specific hack and of the whole issue? Have you encountered similar dilemmas? How much abuse can you tolerate?
I think this is something that has blown over from languages like Ruby, which have much more extensive mechanisms for creating languages within your language (google "DSL" or "domain-specific languages" if you want to know more). C# is less flexible in this respect.
I think creating DSLs in this way is a good thing; it makes for more readable code, and using blocks can be a useful part of a DSL in C#. In this case, though, I think there are better alternatives: the use of using here strays a bit too far from its original purpose, which can confuse the reader. I like Anton Gogolev's solution better, for example.
Off topic, but just take a look at how pretty this becomes with lambdas:
var codeBuilder = new CodeBuilder();
codeBuilder.DefineFunction("Foo", x =>
{
    codeBuilder.While(condition, y =>
    {
        // ...
    });
});
It would be better if the disposable object returned from cb.Function(name) were the object on which the statements are added. It is fine for that function builder to internally pass the calls through to private/internal methods on the CodeBuilder; what matters is that the sequence is clear to public consumers.
The Dispose implementation should also make the following code cause a runtime error:
CodeBuilder cb = new CodeBuilder();
var function = cb.Function("foo");
using (function)
{
    // Generate some function code
    function.Add(someStatement);
}
function.Add(something); // this should throw
Then the behaviour is intuitive and reasonably safe, and correct usage (below) is encouraged while the mistake above is prevented:
CodeBuilder cb = new CodeBuilder();
using (var function = cb.Function("foo"))
{
    // Generate some function code
    function.Add(someStatement);
}
I have to ask, though, why you are using your own classes rather than the provided CodeDomProvider implementations. (There are good reasons for this, notably that the current implementation lacks many of the C# 3.0 features, but since you don't mention it yourself...)
Edit: I would second Anton's suggestion to use lambdas. The readability is much improved (and you have the option of allowing expression trees).
If you go by the strictest definitions of IDisposable then this is an abuse. It's meant to be used as a method for releasing native resources in a deterministic fashion by a managed object.
The use of IDisposable has evolved to essentially cover "any object which should have a deterministic lifetime". I'm not saying this is right or wrong, but that's how many APIs and users are choosing to use IDisposable. Given that definition, it's not an abuse.
I wouldn't consider it terribly bad abuse, but I also wouldn't consider it good form because of the cognitive wall you're building for your maintenance developers. The using statement implies a certain class of lifetime management. This is fine in its usual uses and in slightly customized ones (like #heeen's reference to an RAII analogue), but those situations still keep the spirit of the using statement intact.
In your particular case, I might argue that a more functional approach like #Anton Gogolev's would be more in the spirit of the language as well as maintainable.
As to your primary question, I think each such hack must ultimately stand on its own merits as the "best" solution for a particular language in a particular situation. The definition of best is subjective, of course, but there are definitely times (especially when the external constraints of budgets and schedules are thrown into the mix) where a slightly more hackish approach is the only reasonable answer.
I often "abuse" using blocks. I think they provide a great way of defining scope. I have a whole series of objects that I use for capture and restoring state (e.g. of Combo boxes or the mouse pointer) during operations that may change the state. I also use them for creating and dropping database connections.
E.g.:
using (_cursorStack.ChangeCursor(System.Windows.Forms.Cursors.WaitCursor))
{
    ...
}
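A sketch of how such a state-capturing disposable might be implemented (the _cursorStack and ChangeCursor names are from the snippet above; this particular implementation is an assumption):

using System;
using System.Windows.Forms;

public class CursorStack
{
    public IDisposable ChangeCursor(Cursor newCursor)
    {
        return new CursorScope(newCursor);
    }

    private sealed class CursorScope : IDisposable
    {
        private readonly Cursor _previous;

        public CursorScope(Cursor newCursor)
        {
            _previous = Cursor.Current;  // capture the current state
            Cursor.Current = newCursor;  // apply the temporary state
        }

        public void Dispose()
        {
            Cursor.Current = _previous;  // restore on leaving the using block
        }
    }
}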
I wouldn't call it abuse. It looks more like a fancied-up RAII technique to me. People have been using these for things like monitors.

C# Extension Methods - How far is too far?

Rails introduced some core extensions to Ruby, like 3.days.from_now, which returns, as you'd expect, a date three days in the future. With extension methods in C# we can now do something similar:
using System;

static class Extensions
{
    public static TimeSpan Days(this int i)
    {
        return new TimeSpan(i, 0, 0, 0, 0);
    }

    public static DateTime FromNow(this TimeSpan ts)
    {
        return DateTime.Now.Add(ts);
    }
}

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(3.Days().FromNow());
    }
}
Or how about:
using System;
using System.Collections.Generic;
using System.Linq;

static class Extensions
{
    public static IEnumerable<int> To(this int from, int to)
    {
        return Enumerable.Range(from, to - from + 1);
    }
}

class Program
{
    static void Main(string[] args)
    {
        foreach (var i in 10.To(20))
        {
            Console.WriteLine(i);
        }
    }
}
Is this fundamentally wrong, or are there times when it is a good idea, like in a framework like Rails?
I like extension methods a lot, but I do feel that when they are used outside of LINQ they improve readability at the expense of maintainability.
Take 3.Days().FromNow() as an example. This is wonderfully expressive, and anyone could read this code and tell you exactly what it does. That is a truly beautiful thing. As coders it is our joy to write code that is self-describing and expressive, so that it requires almost no comments and is a pleasure to read. This code is exemplary in that respect.
However, as coders we are also responsible to posterity, and those who come after us will spend most of their time trying to comprehend how this code works. We must be careful not to be so expressive that debugging our code requires leaping around amongst a myriad of extension methods.
Extension methods veil the "how" to better express the "what". I guess that makes them a double-edged sword that is best used (like all things) in moderation.
First, my gut feeling: 3.Minutes.from_now looks totally cool, but does not demonstrate why extension methods are good. This also reflects my general view: cool, but I've never really missed them.
Question: Is 3.Minutes a timespan, or an angle?
Namespaces referenced through a using statement "normally" only affect types; now they suddenly decide what 3.Minutes means.
So the best is to "not let them escape".
All public extension methods in a likely-to-be-referenced namespace end up being "kind of global" - with all the potential problems associated with that. Keep them internal to your assembly, or put them into a separate namespace that is imported by each file separately.
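A sketch of that isolation (the namespace name is hypothetical):

// These extensions only light up in files that import this namespace explicitly.
namespace MyApp.FluentTime
{
    using System;

    public static class TimeExtensions
    {
        public static TimeSpan Days(this int i)
        {
            return TimeSpan.FromDays(i);
        }
    }
}

// Elsewhere, each file opts in deliberately:
//   using MyApp.FluentTime;  // now 3.Days() resolves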
Personally I like int.To, I am ambivalent about int.Days, and I dislike TimeSpan.FromNow.
I dislike what I see as a bit of a fad for "fluent" interfaces that let you write pseudo-English code but do it by implementing methods with names that can be baffling in isolation.
For example, this doesn't read well to me:
TimeSpan.FromSeconds(4).FromNow()
Clearly, it's a subjective thing.
I agree with siz and lean conservative on this issue. Rails has that sort of stuff baked in, so it's never really that confusing. When you write your own "Days" and "FromNow" methods, there is no guarantee that your code is bug-free. Also, you are adding a dependency to your code: if you put your extension methods in their own file, you need that file in every project, and if you put them in their own project, you need to reference that project wherever you use them.
All that said, for really simple extension methods (like Jeff's usage of "left" or thatismatt's usage of days.fromnow above) that exist in other frameworks/worlds, I think it's ok. Anyone who is familiar with dates should understand what "3.Days().FromNow()" means.
I'm on the conservative side of the spectrum, at least for the time being, and am against extension methods. It is just syntactic sugar that, to me, is not that important. I think it can also be a nightmare for junior developers if they are new to C#. I'd rather encapsulate the extensions in my own objects or static methods.
If you are going to use them, just please don't overuse them to a point that you are making it convenient for yourself but messing with anyone else who touches your code. :-)
Each language has its own perspective on what a language should be. Rails and Ruby are designed with their own, very distinct opinions. PHP has clearly different opinions, as does C(++/#)...as does Visual Basic (though apparently we don't like their style).
The balance is having many, easily-read, built-in functions vs. the nitty-gritty control over everything. I wouldn't want SO many functions that you have to go to a lookup every time you want to do anything (and there's got to be a performance overhead to a bloated framework), but I personally love Rails, because what it has saves me a lot of time developing.
I guess what I'm saying here is that if you were designing a language, take a stance, go from there, and build in the functions you (or your target developer would) use most often.
My personal preference would be to use them sparingly for now and to wait and see how Microsoft and other big organizations use them. If we start seeing a lot of code, tutorials, and books using code like 3.Days().FromNow(), then it makes sense to use it a lot. If only a small number of people use it, then you run the risk of your code being overly difficult to maintain because not enough people are familiar with how extensions work.
On a related note, I wonder how the performance compares between a normal for loop and the foreach one? It would seem like the second method would involve a lot of extra work for the computer, but I'm not familiar enough with the concept to know for sure.
