I'm refactoring some code I wrote a few months ago and now I find myself creating a lot of smallish classes (few properties, 2-4 methods, 1-2 events).
Is this how it's supposed to be? Or is this also a bit of a code smell?
I mean if a class does need a lot of methods to carry out its responsibility, I guess that's how it's gotta be, but I'm not so sure that a lot of small classes is particularly good practice either?
Lots of small classes sounds just fine :)
Particularly if you let each class implement an interface and have the different collaborators communicate through those interfaces instead of directly with each other, you should be able to achieve a so-called Supple Design (a term from Domain-Driven Design) with lots of loose coupling.
If you can boil it down so that important operations have the same type of output as input, you will achieve what Evans calls Closure of Operations, which I've found to be a particularly strong design technique.
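For illustration, here is a minimal sketch of what closure of operations can look like; the Money type and its members are invented for the example:

public sealed class Money
{
    public decimal Amount { get; }

    public Money(decimal amount)
    {
        Amount = amount;
    }

    // Each operation takes a Money and returns a Money, so the type is
    // closed under its own operations and calls compose naturally.
    public Money Add(Money other) => new Money(Amount + other.Amount);
    public Money Scale(decimal factor) => new Money(Amount * factor);
}

Because every result is again a Money, expressions like price.Add(shipping).Scale(1.1m) never leave the abstraction.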
What tends to happen when you apply the SRP is that although all classes start out small, you constantly refactor, and from time to time a rush of insight reveals that a few particular classes could be a lot richer than previously assumed.
Do it, but keep refactoring forever :)
Lots of small classes with focused responsibilities are what SRP is all about. So, yes, this is the way things are "supposed to be" as far as SRP advocates are concerned. But you're seeing an explosion in the number of classes in your system and it's beginning to become very difficult to remember or to intuitively know where things are actually done, isn't it? You are, indeed, exposing a new code smell, which is the (usually unnecessary) increase in complexity that comes along with SRP. I wrote an entry about it here. See if you might agree.
I think you have to find the middle way. Too many classes are sometimes overkill. For my part, I try to separate concerns at a smaller level, and if things are getting bigger, I then refactor out more coarse-grained structures:
First separate concerns by extracting methods. If you can see a group of methods over some data (instance and static fields) that forms a dedicated responsibility, 'extract class', as in the sketch below. After a while, if you can see different groupings of classes inside a package, do 'extract package'.
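As a hypothetical illustration of 'extract class' (Order, OrderLine, and PriceCalculator are invented names):

using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

// Before: Order carries both the data and the pricing concern.
public class Order
{
    public List<OrderLine> Lines { get; } = new List<OrderLine>();

    public decimal Total() => Lines.Sum(l => l.Price * l.Quantity);
}

// After 'extract class': the pricing concern has a home of its own
// and can grow (discounts, taxes) without touching Order.
public class PriceCalculator
{
    public decimal Total(IEnumerable<OrderLine> lines) =>
        lines.Sum(l => l.Price * l.Quantity);
}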
I found this (explosion) approach more natural than creating lots of classes and packages from the start. But it also depends: if I can already see bigger components at the beginning, I create dedicated package structures right away.
Maybe share some more details about your code so we can offer more concrete help :)
I am trying to follow TDD principles in all my code base. The frontend (MVC) and backend parts are split: the frontend uses its own model objects, while the backend uses database objects which are then saved to a document database (RavenDb).
This requires a lot of conversion from, say, CustomerModel to CustomerData. These are created independently of each other, so the structure might not match. For example, CustomerModel might be flat while CustomerData has a nested object ContactDetails.
Currently, we implement two methods, say ConvertCustomerModelToCustomerData and ConvertCustomerDataToCustomerModel. These are very similar, but inverses of each other. Apart from this, these methods are also unit-tested. Hence, similar code is written in four places: once for each conversion method, and once for each unit test.
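Roughly the shape being described, with invented property names; note how each mapping has to be mirrored by hand in its inverse:

public class ContactDetails { public string Email { get; set; } }

public class CustomerData
{
    public string Name { get; set; }
    public ContactDetails ContactDetails { get; set; }
}

public class CustomerModel
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class CustomerConverter
{
    public static CustomerData ConvertCustomerModelToCustomerData(CustomerModel m) =>
        new CustomerData
        {
            Name = m.Name,
            ContactDetails = new ContactDetails { Email = m.Email }
        };

    // The inverse duplicates every decision made above, and the two
    // drift apart silently unless tests pin them together.
    public static CustomerModel ConvertCustomerDataToCustomerModel(CustomerData d) =>
        new CustomerModel
        {
            Name = d.Name,
            Email = d.ContactDetails.Email
        };
}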
This is a big headache to maintain, and does not seem right to me. I've tried AutoMapper, but I found it quite rigid, and I could not find a way to unit-test the mappings.
Any ideas would be greatly appreciated.
I think that having well-defined boundaries and anti-corruption layers (see this and this) like the one you built is a great way to avoid headaches; bug hunting in a highly coupled application is far worse.
Granted, these layers are boring to maintain, but I bet that dealing with them is simple, a no-brainer activity.
If you find yourself modifying your entities often (and so having many tests to update), maybe they are not well defined yet, or they have too wide a scope. Why do you need to update them? What's the trigger?
Then, AutoMapper can help you a lot; I agree with the other comments.
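For what it's worth, here is one sketch of how an AutoMapper configuration can itself be covered by a unit test, assuming a recent AutoMapper version and the invented class shapes from the sketch above:

using AutoMapper;

// CustomerModel, CustomerData and ContactDetails as sketched earlier.
public static class CustomerMapping
{
    public static IMapper Build()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<CustomerModel, CustomerData>()
               .ForMember(d => d.ContactDetails,
                          o => o.MapFrom(s => new ContactDetails { Email = s.Email }));
            cfg.CreateMap<CustomerData, CustomerModel>()
               .ForMember(d => d.Email,
                          o => o.MapFrom(s => s.ContactDetails.Email));
        });

        // A single unit test can call this to fail fast whenever a
        // property is added on one side but not mapped on the other.
        config.AssertConfigurationIsValid();

        return config.CreateMapper();
    }
}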
Anyway, without seeing the code it's difficult for me to judge and maybe offer any kind of advice, so feel free to consider this just my humble opinion :)
I am having a serious problem here. When exactly do we need a class?
Specifically, I thought of designing a desktop application that will be able to generate a profiling test or a unit test for any number of methods I specify. I was using a simple list for storing the methods and did not think of having a class. But now I am considering creating a class to store all the classes and get the set of methods in each class. If this idea is correct, my last 4 days of effort are nullified. So I'm putting up a new question to see if I can get some information.
Also, I could not make head or tail of my approach, so I wanted to discuss it with anyone who is interested in helping me with the design.
In general, the rule for defining the boundaries of a set of data and functionality to be moved into a class of their own is the single responsibility principle.
In Martin Fowler's excellent refactoring bliki you will find lots of patterns for moving responsibilities, data, and functionality between classes (the obvious Extract Class, of course, but with the powerful aid of Extract Method and, in your case, maybe Encapsulate Collection).
TDD is a good way to outline the design very early. Usually "easy to test" leads to "decoupled" and thus to separation of concerns.
Using both of these approaches together (TDD + refactoring) may help you with the transition from one design to another: things should go a tad more smoothly.
And another excellent guideline is DIYDI (do it yourself dependency injection).
Also: are you going for code generation or runtime analysis here?
In the first case you might be interested in template engines which might save you a lot of work in the post-processing phase.
In the second case you might use Aspect Oriented Programming and/or Reflection to inspect the classes and find out what methods they have.
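For the runtime-analysis route, here is a minimal sketch of using Reflection to enumerate the methods of a type (StringBuilder is just an example target):

using System;
using System.Reflection;

class MethodLister
{
    static void Main()
    {
        Type type = typeof(System.Text.StringBuilder);

        // List the public instance methods declared directly on the type,
        // ignoring anything inherited further up the hierarchy.
        foreach (MethodInfo method in type.GetMethods(
                     BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            Console.WriteLine(method.Name);
        }
    }
}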
Please read this text by Grady Booch et al. to get started with Object Oriented Design.
Design can be quite difficult, and until you get some experience you are going to make bad choices, so write tests to make it easier to refactor your code. I would recommend reading Code Complete. However, since you probably want to get started right away, and your question asks directly about OO and classes, I also recommend reading Uncle Bob's blog post:
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Hope this helps
In a simple statement: if you have any data on which operations have to be performed, then you need a class. Good examples of these are data containers like linked lists, vectors, ....
This is known as Object Based programming and is the first step of class designs.
The next step is Object Oriented programming (inheritance, polymorphism); proficiency in this comes with experience and with looking at well-designed code.
If your application is not reusable (which is implied by the "desktop application") it is pretty much up to you to decide the granularity of your objects.
As long as you are fine with having (or not having) additional classes, there is no reason to change that.
If you are looking for principles of OO (object-oriented) design, there is plenty of literature and there are plenty of weblinks available.
I am working on a project where we have several attributes in AssemblyInfo.cs, that are being multicast to methods of a particular class.
[assembly: Repeatable(
AspectPriority = 2,
AttributeTargetAssemblies = "MyNamespace",
AttributeTargetTypes = "MyNamespace.MyClass",
AttributeTargetMemberAttributes = MulticastAttributes.Public,
AttributeTargetMembers = "*Impl", Prefix = "Cls")]
What I don't like about this is that it puts a piece of logic into AssemblyInfo (Info, mind you!), which for starters should not contain any logic at all. The worst part is that the actual MyClass.cs does not have the attribute anywhere in the file, and it is completely unclear that methods of this class might carry them. From my perspective it greatly hurts readability of the code (not to mention that overuse of PostSharp can make debugging a nightmare), especially when you have multiple multicast attributes.
What is the best practice here? Is anyone out there is using PostSharp attributes like this?
Let me first answer Max: indeed, aspects are not an alternative to good OOP patterns; they are a complement. Any good AOP design starts with a good OOP design. But OOP patterns sometimes force you to write a lot of plumbing code manually. For these cases, aspects can be used to automate the implementation of OOP patterns, not to replace them.
When you use AOP intelligently, your solution becomes easier to understand (business code is not mixed with maintenance code), to test (you can test the aspect independently of the business code, i.e. you don't have to test that every business method traces properly), and to change (you just have to change the aspect when you want to change the pattern, instead of changing every implementation of the pattern). Now, if you abuse AOP, if you use it as a hacking tool, if you do not think in terms of OOP patterns first, then you're going to get more costs than benefits from AOP. As with any sharp tool, AOP should be used intelligently.
Back to the original question.
Who says you should put aspects in AssemblyInfo.cs? You could create a new file called GlobalAspects.cs and put all assembly-level aspects there. You're right that AssemblyInfo.cs should just be for assembly-level metadata.
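For instance, the multicast from the question could live in a file of its own; the exact using directives depend on where your Repeatable aspect is defined:

// GlobalAspects.cs -- assembly-level aspects only, so that
// AssemblyInfo.cs stays pure metadata.
using PostSharp.Extensibility;

[assembly: Repeatable(
    AspectPriority = 2,
    AttributeTargetAssemblies = "MyNamespace",
    AttributeTargetTypes = "MyNamespace.MyClass",
    AttributeTargetMemberAttributes = MulticastAttributes.Public,
    AttributeTargetMembers = "*Impl", Prefix = "Cls")]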
But like you, I don't like assembly-level aspects. I think they should be avoided. The principal problem with assembly-level aspects is that they rely on naming conventions, and this is evil. (This evil is called pointcut fragility in the academic AOSD community.) Indeed, when you rename a class or namespace, you change the set of methods to which the aspect applies, and this can quickly become a nightmare. That's why I never use aspects based on naming conventions myself.
What about code readability? To a great extent, I think readable code is short code. If I have a business method called CreateProduct, I probably want to see just the code creating the product. Most of the time, I am not interested in code that handles transactions, exceptions, or tracing. It's enough to know that some aspects handle that for me.
And how do I know? With PostSharp, you have the Visual Studio Extension. With AspectJ, you have the AspectJ plug-in for Eclipse (AJDT). They show you, inside the IDE, which aspects are applied to the code you currently see. And if you really want to see the details (though you seldom really do), you can use the debugger to step into aspects, or use Reflector to see the produced code.
Summary:
Good AOP design always starts with a good OOP design.
Avoid relying on naming conventions to apply aspects.
Use PostSharp extension for Visual Studio or AJDT to visualize aspects in your code.
I'm sure this will be an unpopular answer but maybe I can get my peer pressure badge...
Your instincts are correct. Putting logic in metadata of any kind is a horrible, horrible sin for which one burns eternally in the hellfire of unmaintainability.
I mean no disrespect by this although I'm certain it will be interpreted otherwise.
The best practice would be to not use "aspect-oriented programming" tools, which are crutches that enable the lameness of poor design and testing practices. Instead, look at your design and ask yourself "why."
Why did I feel the need to use this tool? What design problem was I trying to solve?
Once you have a firm grasp of the problem, go pick up Design Patterns Explained (Shalloway & Trott) or Head First Design Patterns (Freeman, Robson, Bates, & Sierra).
In the end, a pattern-oriented solution will be easier to understand, easier to test, and easier to change. The only additional cost will be the one-time fee of mastering design patterns in place of the recurring charge of trying to figure out where all these aspects are, how they fit together, and how they influence one another every time you make a change.
Say you want to write a Tetris clone, and you just started planning.
How do you decide what should be a class? Do you make individual blocks a class or just the different block-types?
I'm asking this because I often find myself writing either too many classes, or writing too few classes.
Take a step back.
I suspect that you're putting the cart before the horse here. OOP isn't a Good Thing in its own right, it's a technique for effectively solving problems. Problems like: "I have a large multiple-team organization of programmers with diverse skill sets and expertise. We are building large-scale complex software where many subsystems interact with each other. We have a limited budget."
OOP is good for this problem space because it emphasizes abstraction, encapsulation, polymorphism and inheritance. Each of those works well in the many-teams-writing-large-software space. Abstraction allows one team to use the work of another without having to understand the implementation details, thereby lowering the communication cost. Encapsulation allows one team to know that they can make changes to their internal structures to make them better without worrying about the costs of impacting another team. Polymorphism lowers the cost of using many different implementations of a given abstraction, depending on the current need. Inheritance allows one team to build upon the work of another, cleanly re-using existing code rather than spending time and money re-inventing it.
All of these things are good not in and of themselves, but because they lower costs in large-team-complex-software scenarios.
Do they lower costs in one-guy-trivial-software scenarios? I don't think they do; I think they raise costs. The point of inheritance is to save time through code re-use; if you spend more time messing around with getting the perfect inheritance hierarchy than the time you save through code re-use, it's not a net win, it's a net loss. Similarly with all the others: if you don't have many different implementations of the same thing then spending time on polymorphism is a loss. If you don't have anyone who is going to consume your abstraction, or anyone from whom you need to protect your internal state, then abstraction and encapsulation are costs with no associated benefits.
If what you want to do is write Tetris in an OO style for practice writing in that style, by all means go right ahead and don't let me stop you. I'm just saying: don't feel that you have a moral requirement to use OOP to solve a problem that OOP is not well-suited to solve; OOP is not the be-all-and-end-all of software development styles.
You might want to check out How do you design object oriented projects?. The accepted solution is a good start. I would also pick up a design patterns book as well.
For a Tetris clone, I'd say you're better off creating a block class and using an enum or similar to record what shape of piece it is. The reason is that all blocks act in the same way: they fall, they react to user input by rotating or falling faster, and they use collision detection to determine when to stop falling and trigger the next piece.
If you have a class per block-type then there'd be so little difference between each class that it would be a waste of time.
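A minimal sketch of that single-class design (all names are invented):

public enum PieceShape { I, O, T, S, Z, L, J }

public class Block
{
    public PieceShape Shape { get; }
    public int X { get; private set; }
    public int Y { get; private set; }

    public Block(PieceShape shape, int x, int y)
    {
        Shape = shape;
        X = x;
        Y = y;
    }

    // Every shape falls and shifts the same way, so one class
    // covers all of them; only Shape distinguishes the pieces.
    public void MoveLeft() => X--;
    public void MoveRight() => X++;
    public void Fall() => Y++;
}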
In another situation where you have a lot of similar concepts (like many different types of animals), it might make more sense to have a class per sub-type, all inheriting from a parent class, if the sub-types are more different from each other.
Depends on your development methodology.
Assuming you do agile, then you can start with writing the classes you think you'll need. And then as you start filling in the implementation, you'll discover that some classes are obsolete or others need to be split out.
Assuming a more design-first-then-build approach (dsdm/rup/waterfall...), then you'd want to go for a design based on the "user story", see SwDevMan81's link for an example.
I would make a base class Piece, because the pieces all have similar functionality: move right, move left, move down, rotate CW, rotate CCW, color, position, and the list goes on. Then each piece should be a subclass, like ZPiece, LPiece, SquarePiece, IPiece, BackwardsLPiece, etc. You probably do end up with many classes, but there are many different types of pieces.
The point of OOP you are asking about is inheritance. You don't want to reinvent the wheel when it comes to functions like move left/right/down, nor do you want to repeat the exact same code in multiple locations. Those functions shouldn't change depending on the piece, so put them in the base class. Each piece rotates differently, but the rotate operation is still declared in the base class, because each subclass should implement its own version of it.
Basically, anything all pieces have in common should be in a base class, and everything that makes a piece unique should be in the subclass itself, as in the sketch below. Yes, I think making a block class where each piece holds 4 of them is a bit much, but there are those who would disagree with me.
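A rough sketch of that layout (names and rotation details are invented placeholders):

public abstract class Piece
{
    public int X { get; protected set; }
    public int Y { get; protected set; }

    // Shared behaviour lives in the base class, written once.
    public void MoveLeft() => X--;
    public void MoveRight() => X++;
    public void MoveDown() => Y++;

    // Declared here, implemented per piece: each shape rotates differently.
    public abstract void RotateCW();
}

public class SquarePiece : Piece
{
    // A square looks identical after rotation, so this is a no-op.
    public override void RotateCW() { }
}

public class IPiece : Piece
{
    private bool _vertical;

    public override void RotateCW() => _vertical = !_vertical;
}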
Duplicate:
What is so bad about Singletons?
I was reading this question, and was surprised to see that the asker considered singletons a "bad practice," and in fact thought that this was common knowledge.
I've used singletons quite a bit in any project that uses iBatis to load queries from XML. It greatly improves speed in these instances. I'm not sure why you wouldn't use them in a case like this.
So... why are they bad?
They are not necessarily bad, just misused and overused. People seem inexplicably attracted to the pattern and look for new and creative ways to shoehorn it into their application whether or not it really is applicable.
They're a tool, and like any tool there are times you should use them and times you should use something else. In this case, it's very often true that something else (Factory, static class) would be better in situations that at first glance may seem appropriate for a Singleton.
When Design Patterns came out it seemed like everyone jumped on the Singleton bandwagon — they were everywhere, even places they shouldn't be. What you see now is a (perhaps well-deserved) backlash. Not that you shouldn't use them at all, but it might be a good idea to take a step back and look at all the options available.
Singletons are not bad practice at all. In fact they are extremely useful for many situations. But they do have two major areas ripe for abuse and/or failure:
Unit testability
Multi-threading
Both can be handled, but beginners often neglect to do so (usually through ignorance) and it ends up causing far more trouble than they know how to deal with.
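As a sketch of the multi-threading point, here is one lazy, thread-safe implementation in C# (the Configuration name is just a placeholder); Lazy<T> guarantees the factory runs exactly once even under contention:

using System;

public sealed class Configuration
{
    private static readonly Lazy<Configuration> _instance =
        new Lazy<Configuration>(() => new Configuration());

    public static Configuration Instance => _instance.Value;

    // A private constructor prevents outside instantiation,
    // and Lazy<T> removes the classic double-instantiation race.
    private Configuration() { }
}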
They are usually considered bad in conjunction with unit testing.
If you have a singleton, that means one or more of your classes are using it somewhere in some methods. That's a dependency you cannot fake when unit-testing your class, because the class uses it directly without requesting it through either its constructor or a property.
That is usually why people say they are "bad".
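To make that concrete, here is a hypothetical before/after (all names invented):

public interface ILogger { void Log(string message); }

public sealed class Logger : ILogger
{
    public static readonly Logger Instance = new Logger();
    private Logger() { }
    public void Log(string message) { /* write somewhere */ }
}

// Hard to test: the singleton is reached for directly, so a unit
// test cannot substitute a fake logger.
public class OrderService
{
    public void Place() => Logger.Instance.Log("placing order");
}

// Test-friendly: the dependency is requested through the constructor,
// so a test can pass in a stub implementation of ILogger.
public class TestableOrderService
{
    private readonly ILogger _logger;

    public TestableOrderService(ILogger logger) => _logger = logger;

    public void Place() => _logger.Log("placing order");
}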
Also, in a multi-threaded app, lots of people implement them badly enough that there is a chance for the instantiation to run more than once.
They aren't necessarily bad; it's just that they tend to be overused, and used a lot when they aren't needed.