Why are singletons considered to be a bad practice? [duplicate] - c#

Duplicate:
What is so bad about Singletons?
I was reading this question, and was surprised to see that (s)he considered a singleton to be a "bad practice," and in fact thought that this was common knowledge.
I've used singletons quite a bit in any project that uses iBatis to load the queries from XML. It greatly improves speed in these instances. I'm not sure why you wouldn't use them in a case like this.
So... why are they bad?

They are not necessarily bad, just misused and overused. People seem inexplicably attracted to the pattern and look for new and creative ways to shoehorn it into their application whether or not it really is applicable.

They're a tool, and like any tool there are times you should use them and times you should use something else. In this case, it's very often true that something else (Factory, static class) would be better in situations that at first glance may seem appropriate for a Singleton.
When Design Patterns came out it seemed like everyone jumped on the Singleton bandwagon — they were everywhere, even places they shouldn't be. What you see now is a (perhaps well-deserved) backlash. Not that you shouldn't use them at all, but it might be a good idea to take a step back and look at all the options available.

Singletons are not bad practice at all. In fact they are extremely useful for many situations. But they do have two major areas ripe for abuse and/or failure:
Unit testability
Multi-threading
Both can be handled, but beginners often neglect to do so (usually through ignorance) and it ends up causing far more trouble than they know how to deal with.
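For reference, here is a minimal sketch of the thread-safe variant in C# (the QueryCache name is invented for illustration, and it assumes .NET 4+ for Lazy<T>), where the instance is created exactly once even under concurrent access:

    using System;

    // Minimal thread-safe singleton sketch; Lazy<T> guarantees the factory
    // delegate runs only once, even if several threads race to Instance.
    public sealed class QueryCache
    {
        private static readonly Lazy<QueryCache> instance =
            new Lazy<QueryCache>(() => new QueryCache());

        public static QueryCache Instance
        {
            get { return instance.Value; }
        }

        // Private constructor prevents outside instantiation.
        private QueryCache() { }
    }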

They are usually considered bad in conjunction with unit testing.
If you have a singleton, that means one or more of your classes are using it somewhere in some methods. That's a dependency you cannot spoof when unit-testing your class, because the class uses it directly without requesting it through either its constructor or a property.
That is usually why people say they are "bad".
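To make that concrete, here is a rough sketch (the interface and class names are invented) of the usual workaround: have the class depend on an interface, so the real singleton can be passed in production and a fake in tests:

    // The class under test asks for its dependency instead of reaching for
    // Singleton.Instance directly, so a test can substitute a fake.
    public interface IQueryStore
    {
        string GetQuery(string name);
    }

    public class OrderService
    {
        private readonly IQueryStore queries;

        public OrderService(IQueryStore queries)
        {
            this.queries = queries;
        }

        public string LoadOrderSql()
        {
            return queries.GetQuery("SelectOrders");
        }
    }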
Also, in a multi-threaded app, lots of people implement them badly enough that there is a chance for the instantiation to be called more than once.

They aren't necessarily bad; it's just that they tend to be overused, and used in plenty of places where they aren't needed.

Related

object to object mapping and unit-testing / TDD

I am trying to follow TDD principles in all my code base. The frontend (MVC) and backend parts are split: the frontend uses its own model objects, while the backend uses database objects which are then saved to a document database (RavenDb).
This requires a lot of conversion, from say CustomerModel to CustomerData. These are created independently of each other, so the structure might not match. For example, the CustomerModel might be flat while CustomerData has a nested object ContactDetails.
Currently, we implement two methods, say ConvertCustomerModelToCustomerData and ConvertCustomerDataToCustomerModel. These are very similar, but inverses of each other. On top of that, these methods are also unit-tested. Hence, similar code appears in four places: once for each conversion method and once for each of their unit tests.
This is a big headache to maintain, and does not seem right to me. I've tried to use AutoMapper, but I found it quite rigid. Also, I could not find a way to unit-test the mappings.
Any ideas would be greatly appreciated.
I think that having well-defined boundaries and anti-corruption layers (see this and this) like the one you built is a great way to avoid headaches; bug hunting in a highly coupled application is far worse.
Granted, these layers are certainly boring to maintain, but I bet that dealing with them is simple, a no-brainer activity.
If you find yourself modifying your entities often (and so having many tests to update), maybe they are not well defined yet, or they have too wide a scope. Why do you need to update them? What's the trigger?
Also, AutoMapper can help you a lot; I agree with the other comments.
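For what it's worth, a minimal sketch of what that could look like; this assumes a reasonably recent AutoMapper (the MapperConfiguration API) and uses the type names from the question:

    using AutoMapper;

    // One configuration holds both directions of the mapping.
    var config = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<CustomerModel, CustomerData>();
        cfg.CreateMap<CustomerData, CustomerModel>();
        // Members that don't match by convention (e.g. the nested
        // ContactDetails) would get explicit ForMember() rules here.
    });

    // A single call that fails if any destination member is left unmapped,
    // which covers much of what the hand-written mapping tests were checking.
    config.AssertConfigurationIsValid();

    var mapper = config.CreateMapper();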
Anyway, without seeing the code it's difficult for me to judge and maybe offer any kind of advice, so feel free to consider this just my humble opinion :)

How to decide on creating classes in an application..?

I am having a serious problem here. When exactly do we need a class?
Specifically, I thought of designing a desktop application that will be able to generate a profiling test or a unit test for any number of methods I specify. I had a simple list for storing the methods, and did not think of having a class. But now I am thinking of creating a class to store all the classes and get the set of methods in each class. If this idea is correct, my last 4 days of effort are nullified. So I am putting up a new question to see if I can get some information.
Also, I could not make head or tail of my own approach, so I wanted to discuss it with anyone who is interested in helping me with the design.
In general, the rule for defining the boundary of a set of data and functionality that should be moved into a class of its own is the single responsibility principle.
In Martin Fowler's excellent refactoring bliki you will find lots of patterns to move responsibilities, data and functionalities between classes (the obvious Extract Class, of course, but with the powerful aid of Extract Method and, in your case, Encapsulate Collection, maybe).
TDD is a good way to outline the design very early. Usually "easy to test" leads to "decoupled" and thus to separation of concerns.
Using both of these approaches together (TDD + refactoring) may help you with the transition from one design to another: things should go a tad more smoothly.
And another excellent guideline is DIYDI (do it yourself dependency injection).
Also: are you going for code generation or runtime analysis here?
In the first case you might be interested in template engines which might save you a lot of work in the post-processing phase.
In the second case you might use Aspect Oriented Programming and/or Reflection to inspect the classes and find out what methods they have.
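For the runtime-analysis route, a small C# sketch (the assembly path and filtering are just examples) that uses Reflection to list the public methods declared on each class:

    using System;
    using System.Reflection;

    // Load an assembly and dump every public instance method declared on
    // each of its types; a test generator could start from such a list.
    Assembly assembly = Assembly.LoadFrom("MyApp.dll");
    foreach (Type type in assembly.GetTypes())
    {
        foreach (MethodInfo method in type.GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            Console.WriteLine("{0}.{1}", type.FullName, method.Name);
        }
    }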
Please read this text by Grady Booch et al to get started with Object-Oriented Design.
Design can be quite difficult, and until you get some experience you are going to make bad choices, so write tests to make it easier to refactor your code. I would recommend reading Code Complete. However, since you probably want to get started right away and your question is directly asking about OO and classes, I also recommend reading Uncle Bob's blog post
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Hope this helps
To put it simply: if you have any data on which operations have to be performed, then you need a class. Good examples of this are data containers like linked lists, vectors, and so on.
This is known as Object Based programming and is the first step of class design.
The next step is Object Oriented programming (inheritance, polymorphism); proficiency in this comes with experience and with reading well-designed code.
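As a tiny illustration of "data plus the operations on it" living in one class (the names are invented, C# used for the example):

    using System;

    // A bounded stack: the array and count are the data, Push and Pop are
    // the operations that belong with them.
    public class BoundedStack
    {
        private readonly int[] items;
        private int count;

        public BoundedStack(int capacity)
        {
            items = new int[capacity];
        }

        public void Push(int value)
        {
            if (count == items.Length)
                throw new InvalidOperationException("Stack is full.");
            items[count++] = value;
        }

        public int Pop()
        {
            if (count == 0)
                throw new InvalidOperationException("Stack is empty.");
            return items[--count];
        }
    }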
If your application is not reusable (which is implied by the "desktop application") it is pretty much up to you to decide the granularity of your objects.
As long as you are fine with having (or not having) additional classes, there is no reason to change that.
If you are looking for principles for OO (object oriented design) there is plenty of literature and weblinks available.

Assembly wide multicast attributes. Are they evil?

I am working on a project where we have several attributes in AssemblyInfo.cs, that are being multicast to methods of a particular class.
[assembly: Repeatable(
    AspectPriority = 2,
    AttributeTargetAssemblies = "MyNamespace",
    AttributeTargetTypes = "MyNamespace.MyClass",
    AttributeTargetMemberAttributes = MulticastAttributes.Public,
    AttributeTargetMembers = "*Impl", Prefix = "Cls")]
What I don't like about this is that it puts a piece of logic into AssemblyInfo (Info, mind you!), which for starters should not contain any logic at all. The worst part of it is that the actual MyClass.cs does not have the attribute anywhere in the file, and it is completely unclear that methods of this class might have them. From my perspective it greatly hurts readability of the code (not to mention that overuse of PostSharp can make debugging a nightmare), especially when you have multiple multicast attributes.
What is the best practice here? Is anyone out there is using PostSharp attributes like this?
Let me first answer Max: indeed, aspects are not an alternative to good OOP patterns. They are a complement. Any good AOP design starts with a good OOP design. But OOP patterns sometimes force you to write a lot of plumbing code manually. For these cases, aspects can be used to automate the implementation of OOP patterns, not to replace them.
When you use AOP intelligently, your solution can become easier to understand (business code is not mixed with maintenance code), to test (you can test the aspect independently of the business code, i.e. you don't have to test that every business method traces properly), and to change (you just have to change the aspect when you want to change the pattern, instead of changing every implementation of the pattern). Now, if you abuse AOP, if you use it as a hacking tool, if you do not think in terms of OOP patterns first, then you're going to get more costs than benefits from AOP. Like any sharp tool, AOP should be used intelligently.
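As a rough illustration of the kind of plumbing an aspect can automate, here is a sketch of a tracing aspect using PostSharp's OnMethodBoundaryAspect (exact API details may vary by PostSharp version):

    using System;
    using PostSharp.Aspects;

    // Written once and then multicast onto many methods; the business code
    // itself contains no tracing calls at all.
    [Serializable]
    public class TraceAspect : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            Console.WriteLine("Entering " + args.Method.Name);
        }

        public override void OnExit(MethodExecutionArgs args)
        {
            Console.WriteLine("Leaving " + args.Method.Name);
        }
    }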
Back to the original question.
Who says you should put aspects in AssemblyInfo.cs? You could create a new file called GlobalAspects.cs and put all assembly-level aspects there. You're right that AssemblyInfo.cs should just be for assembly-level metadata.
But like you, I don't like assembly-level aspects. I think they should be avoided. The principal problem with assembly-level aspects is that they rely on naming conventions, and this is evil. (This evil is called pointcut fragility in the academic AOSD community.) Indeed, when you rename a class or namespace, you change the set of methods to which the aspect applies, and this can quickly become a nightmare. That's why I never use aspects based on naming conventions myself.
What about code readability? To a great extent, I think readable code is short code. If I have a business method called CreateProduct, I probably want to see just the code creating the product. Most of the time, I am not interested in code that handles transactions, exceptions, or tracing. It's enough if I know that some aspects handle that for me.
And how do I know? With PostSharp, you have the Visual Studio Extension. With AspectJ, you have the AspectJ plug-in for Eclipse (AJDT). They show you, inside the IDE, which aspects are applied to the code you currently see. And if you really want to see details (but you seldom really want), you can use the debugger to step into aspects, or use Reflector to see produced code.
Summary:
Good AOP design always starts with a good OOP design.
Avoid relying on naming conventions to apply aspects.
Use PostSharp extension for Visual Studio or AJDT to visualize aspects in your code.
I'm sure this will be an unpopular answer but maybe I can get my peer pressure badge...
Your instincts are correct. Putting logic in metadata of any kind is a horrible, horrible sin for which one burns eternally in the hellfire of unmaintainability.
I mean no disrespect by this although I'm certain it will be interpreted otherwise.
The best practice would be to not use "aspect-oriented programming" tools, which are crutches that enable the lameness of poor design and testing practices. Instead, look at your design and ask yourself "why."
Why did I feel the need to use this tool? What design problem was I trying to solve?
Once you have a firm grasp of the problem, go pick up Design Patterns Explained (Shalloway & Trott) or Head First Design Patterns (Freeman, Robson, Bates, & Sierra).
In the end, a pattern-oriented solution will be easier to understand, easier to test, and easier to change. The only additional cost will be the one-time fee of mastering design patterns in place of the recurring charge of trying to figure out where all these aspects are, how they fit together, and how they influence one another every time you make a change.

SRP and a lot of classes

I'm refactoring some code I wrote a few months ago and now I find myself creating a lot of smallish classes (few properties, 2-4 methods, 1-2 events).
Is this how it's supposed to be? Or is this also a bit of a code smell?
I mean, if a class does need a lot of methods to carry out its responsibility, I guess that's how it's gotta be, but I'm not so sure that a lot of small classes is particularly good practice either?
Lots of small classes sounds just fine :)
Particularly if you let each class implement an interface and have the different collaborators communicate through those interfaces instead of directly with each other, you should be able to achieve a so-called Supple Design (a term from Domain-Driven Design) with lots of loose coupling.
If you can boil it down so that important operations have the same type of output as input, you will achieve what Evans calls Closure of Operations, which I've found to be a particularly strong design technique.
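A small sketch of what closure of operations looks like in code (the Money type and names are just an example):

    // Money in, Money out: the operation is closed under its own type,
    // so calls compose without ever leaving the abstraction.
    public sealed class Money
    {
        private readonly decimal amount;

        public Money(decimal amount)
        {
            this.amount = amount;
        }

        public Money Add(Money other)
        {
            return new Money(amount + other.amount);
        }
    }

    // Usage composes naturally: var total = price.Add(shipping).Add(tax);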
What tends to happen when you apply the SRP is that although all classes start out small, you constantly refactor, and from time to time a rush of insight clarifies that a few particular classes could be a lot richer than previously assumed.
Do it, but keep refactoring forever :)
Lots of small classes with focused responsibilities are what SRP is all about. So, yes, this is the way things are "supposed to be" as far as SRP advocates are concerned. But you're seeing an explosion of the number of classes in your system and it's beginning to become very difficult to remember or to intuitively know where things are actually done, isn't it? You are, indeed, exposing a new code smell, which is the (usually unnecessary) increase in complexity that comes along with SRP. I wrote an entry about it here. See if you might agree.
I think you have to find a middle way; too many classes can be overkill. For my part, I try to separate concerns at a smaller level first and, as things grow, refactor toward something more coarse-grained:
First, separate concerns by extracting methods. If you can see a group of methods on data (instance + static fields) forming a dedicated responsibility, do 'extract class'. After a while, if you can see different groupings of classes inside a package, do 'extract package'.
I found this approach (letting the explosion happen gradually) more natural than creating lots of classes and packages from the start. But this also depends: if I can already see bigger components at the beginning, I create dedicated package structures right away.
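A minimal sketch of the 'extract class' step described above (names invented): a group of fields and methods with its own responsibility moves into a small class of its own.

    using System;

    // Before: Order carried street/city fields plus a FormatShippingLabel()
    // method. After extraction, that responsibility lives in its own class.
    public class ShippingAddress
    {
        public string Street { get; set; }
        public string City { get; set; }

        public string FormatLabel()
        {
            return Street + Environment.NewLine + City;
        }
    }

    public class Order
    {
        public ShippingAddress Address { get; set; }
        // Order keeps only the order-specific behaviour.
    }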
Maybe share some more details about your code so I can offer some more concrete help :)

Do you have any good advice/links to a set of coding standards or best practices to follow?

For those of us who have programmed enough, I'm sure we have come across many different flavours of coding standards that can be used when it comes to programming.
e.g. http://msdn.microsoft.com/en-us/library/ms229042.aspx
You might derive your coding standards from the company you currently work for or from the original author of the code you're working on. Coding styles are often specific to particular programming languages, and some styles in one language might not be considered appropriate for others. Of course, some coding standards can be applied across many different programming languages.
Thank you for your time.
EDIT: As we know, there are many related articles on this subject, but the C# Coding standard / Best practices question on SO has some very useful links in it and is worth a visit. (Check out the 2 links on .NET/C# guidelines by ESV in the accepted answer.)
Google has a posted style guide for C++ here which I consult sometimes. Just reading through the explanations and reasoning, despite whether you end up agreeing with some of the styles or not, may teach you some things you might not have thought about.
My best advice regarding coding standards: don't let them get in the way when trying to get work done.
A big bureaucracy might actually hinder progress in projects instead of helping to achieve better team work. When people complain about not following coding standards instead of the actual quality of the code, then it is too much regulation.
Other than that, pick one from the many suggestions and try to stick with it for as long as possible to build a code base following a single standard that you are used to.
Coding standards are good, but coding standards written from scratch in which the company reinvents the wheel, or coding standards imposed by a single "prophet", can be worse than having no coding standards at all.
This means:
Coding standards should be discussed and agreed upon.
The coding standards document should include the reasons behind each rule.
Coding standards should be at least partially based on reliable sources.
The sources I know of for the languages in your tags are:
For C++: The book C++ Coding Standards by Sutter/Alexandrescu.
For C#: 4 or 5 PDF's I found googling for C# Coding Standards :)
Adam Cogan has a great set of rules on his web site. There are coding guidelines, but there is much more there also.
Adam Cogan's Rules to Better...
Coding standards are great. We've been using Lance Hunt's C# Coding Standards for .NET almost without modifications
If you are maintaining code, then continue to use the same standard the original code was developed with (there is nothing worse than trying to debug a problem when the code looks all higgledy-piggledy).
Some comments on the post suggesting the Google C++ guidelines: detailed discussions about some aspects of these guidelines are posted on comp.lang.c++.moderated.
Some weird or controversial points include:
We don't believe that the available alternatives to exceptions, such as error codes and assertions, introduce a significant burden.
As if assertions were a viable alternative... Assertions are usually for programming errors and situations that should never happen, while exceptions can happen (are somewhat anticipated) in the execution flow.
Reference Arguments: All parameters passed by reference must be labeled const. ... In fact it is a very strong convention that input arguments are values or const references while output arguments are pointers.
No comment, except about the weasel phrase "a very strong convention".
Doing Work in Constructors: Do only trivial initialization in a constructor. If at all possible, use an Init() method for non-trivial initialization. ... If your object requires non-trivial initialization, consider having an explicit Init() method and/or adding a member flag that indicates whether the object was successfully initialized.
Yes... 2-phase init to make things simpler... What if I have const fields? This rule is probably the effect of attitude towards exceptions.
Use streams only for logging
Which streams? IOStreams, standard C streams, other?
On one hand they advise using macros only in exceptional situations, yet they recommend using DISALLOW_COPY_AND_ASSIGN to prohibit copy/assign. They could have advised the approach with a special class (like in Boost).
Do not overload operators except in rare, special circumstances.
What about assignment, or arithmetic operators for numeric calculations, etc?
Default parameters are more difficult to maintain because copy-and-paste from previous code may not reveal all the parameters. Copy-and-pasting of code segments can cause major problems when the default arguments are not appropriate for the new code.
The what? Copy/paste from previous code?
Remember that reading any of the guidelines can introduce a bias to your way of thinking. And sometimes it won't be beneficial for you or your code. I agree with some other posts advising reading good books by good authors beforehand. When you have sufficient amount of knowledge, then you are able to look at the guidelines and find good and weak points easily, without creating a mess in your brain ;)
If you plan to introduce a code-formatting standard to an existing programming team, get input from each member of the team so they'll have "buy in" and be more likely to write code to that standard.
Programming styles are as difficult to change as habits, and you'll have to accept that some people won't make their code 100% compliant 100% of the time. It would be worth your time to find (or write your own) pretty-printer program and periodically run all your code through it to enforce consistency. (I always felt uneasy when manually checking in source code changes that only consisted of formatting corrections for other peoples' code; I worried that others would label me a nitpicker.)
Sun Java Code Conventions
Python Style Guide
Zend Coding Standard for PHP
Having asked this question, I found that the accepted answer proved to be sufficient for my needs.
However, I realise that this is not a 'one-size-fits-all' scenario, so there is a large quantity of information within the thread that you may find more or less useful. Well worth a read!
For Java and other C-family languages I recommend Software Monkey's coding standards (of course, since they're mine).
In general, keep them simple, and provide examples and justification for every requirement.
What's in the standard doesn't really matter all that much. What matters is that you have one, and that your developers follow it.
It doesn't quite answer the question, but it's worth a mention...
I read Steve McConnell's Code Complete. Whilst it doesn't give you a pre-baked set of coding standards it does set out a lot of good arguments for the various approaches. It'll make you think about things you'd not thought of before.
It changed my little world for the better.
Coding standards themselves are great and all, but what I think is much, much, MUCH more important is keeping with the style of whatever code you're maintaining. I've seen people add a function to some class written one way and forcing their coding standard on just that function. It's inconsistent, it sticks out, and, in my opinion, it makes it harder to enjoy the class "as a whole".
Whenever you're maintaining code, look at the code around it. See what the style is. K&R braces? Capital Camel Case methods? Hungarian? Double-line comment blocks between every function? Whatever it is, you should do it too in that specific area.
Before I leave, one thing I'd like to note that's related: naming files. I'm mainly a C++ guy, so this may not apply elsewhere, but basically a file is named Namespace_Class.h or Namespace_Class.cpp. So, Foo::Bar would be in Foo_Bar.h. Common things (e.g. a precompiled header) for the Foo namespace would be in Foo_common.h (note the lowercase common). Of course, that's a taste thing, but everybody who has worked with this has come out in favor of it.
I think Code Craft - The Practice of Writing Excellent Code pretty much sums it all up.
The Ellemtel rules for C++ are very popular.
For C# I recommend Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition) (Microsoft .NET Development Series).
Mono Coding Guidelines
The answers here are pretty complete, so I am not pointing to another coding standard document. However, once you have decided to stick to one style, you should use an automated coding style enforcer throughout your team.
For Java there is Checkstyle, and for .NET there is Microsoft's StyleCop.
Here is a similar discussion on Stackoverflow: C# Coding standard / Best practices
Camel and Pascal casing alone solve a lot of coding standard problems.
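For reference, a small sketch of that convention as used in the common .NET guidelines (PascalCase for types and members, camelCase for parameters and locals; the names here are invented):

    public class InvoiceGenerator                   // PascalCase for the type
    {
        public decimal TotalAmount { get; set; }    // PascalCase for properties

        public decimal ApplyDiscount(decimal discountRate)   // camelCase parameter
        {
            decimal discountedTotal = TotalAmount * (1 - discountRate);  // camelCase local
            return discountedTotal;
        }
    }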
