Template pattern - not useful for small projects - C#

I'm sorry to ask such a localized question but until I get confirmation I don't feel confident moving on with my project.
I have read lots about the template pattern, Wikipedia has a good example.
It shows that you create the basic virtual methods and then inherit the base class and override where you want. The example on the site is for Monopoly and Chess which both inherit the base class.
So, my question is, if you had an application which was only ever going to be Chess and never anything else, would there be any benefit in using the template pattern (other than as an educational exercise)?

No, I think that falls under the category of "You Ain't Gonna Need It."
To be more specific, design patterns exist to solve a particular problem, and if your code doesn't need to solve that problem, all they do is add lines of code without having any benefit.

No. To put it in a very simplified and superficial way, the template pattern only becomes worthwhile once the templated code makes up a certain proportion of the total code. In your example, the chess game is going to be the entire program, so there's no need to use the template pattern here.

The template pattern is used in specific situations. It is used when you want to sketch out an algorithm but let the specific steps differ.
This could be useful in a Chess application. However, you should not start developing an application with the idea "I'm going to use this pattern and that one and...". Instead, you develop the code and discover that you need certain patterns.
This is where a Test Driven Development approach is really handy. It allows you to refactor your code each step of the way.
A nice book that explains this is Refactoring To Patterns.
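As a rough sketch (the class and method names Game, Chess, Initialize, MakeMove, HasEnded and AnnounceWinner are hypothetical, loosely following the Wikipedia example), the template method pattern in C# boils down to this:

public abstract class Game
{
    // The template method: the overall flow of a game is fixed here...
    public void Play()
    {
        Initialize();
        while (!HasEnded())
        {
            MakeMove();
        }
        AnnounceWinner();
    }

    // ...while the individual steps are supplied by each concrete game.
    protected abstract void Initialize();
    protected abstract void MakeMove();
    protected abstract bool HasEnded();
    protected abstract void AnnounceWinner();
}

public class Chess : Game
{
    protected override void Initialize()     { /* set up the board */ }
    protected override void MakeMove()       { /* ask the current player for a move */ }
    protected override bool HasEnded()       { return false; /* placeholder: checkmate, stalemate or draw? */ }
    protected override void AnnounceWinner() { /* ... */ }
}

If Chess is the only game you will ever write, the abstract Game class buys you nothing; the split only starts to pay off once a second game such as Monopoly actually needs the same skeleton.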

I would suggest writing your chess game now and, if the need ever arises, coming back and changing things to fit Monopoly too. It's a different matter if you want to use the pattern in order to learn the pattern; in that case it's good to start simple so the complex is easier to understand.

It really depends on the parts of the program. The whole idea of Template is to have an algorithm that never changes and to be able to add or edit certain steps of that algorithm.
It may well be that it never changes; however, this is the issue with design principles: it IS good practice, and you may later wish you'd implemented them. I would say, though, that if you are 100% sure, then you can leave it out, as that usually saves time and lines of code. It depends on whether you want to learn Template usage or not.
Also, the GoF design patterns website is quite good.

Related

Is the concept of classes worth bothering with if I have a single-user application, I work on it alone, and only I will use it?

I'm making an app in C# with Windows Forms and all my code lives in the form's code-behind, so I now have 2500+ lines of code. Some of my colleagues say to use classes to divide the code by functionality, but I don't see the purpose, because everything will have to be made public and so on.
None of them can explain to me why it is the best approach; they just give me vague hints like "if you make a modification somewhere, your other functions will crash", and I don't see how that is possible...
I searched and found the keyword "partial". So what should I do? Should I start learning classes and so on?
You need to go right back to the basics, and not be put in charge of writing an application by yourself. Ask your employer to excuse you from your responsibilities whilst you learn how to do them, you're just digging yourself a deeper hole with every line of code you write at the moment.
A class is designed to be a reusable block of code. A class is for relating functionality together; it is for classifying things into a particular set of instructions.
This is the idea and reasoning behind OOP.
Consider a Road: a Road can have 0 to x cars on it. All of the cars can "drive", they can "turn", and they can even leave the road and join another road. They work together but are not one and the same thing. A car can drive but a road cannot. You don't want a "Road" class with 900,000 methods for different cars, each with its own drive method and "leave road" method... You have one Car class which you can instantiate multiple times into different instances, which may or may not be on that road.
Not to labour the analogy, but it's a very popular one. Your code is not all doing the same thing, even if you think it is; you have to scope your view correctly. You may have file-access code next to UI code next to business-logic code next to network-communication code. They are all pieces in the puzzle of "My Application", but within "My Application" they are not doing the same thing. It is with this kind of thinking that you need to move forward, not with "It's just me, writing this same application, it all goes here."
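To make the analogy concrete, here is a minimal C# sketch (the Road and Car names come from the analogy above, not from your application):

using System.Collections.Generic;

public class Car
{
    // Every car knows how to drive and turn; a road does not.
    public void Drive() { /* ... */ }
    public void Turn()  { /* ... */ }
}

public class Road
{
    // A road just holds zero or more cars; it does not need a method per car.
    private readonly List<Car> cars = new List<Car>();

    public void Join(Car car)  { cars.Add(car); }
    public void Leave(Car car) { cars.Remove(car); }
}

The same separation applies to your form: file access, business logic and UI code each get their own class, and the form only coordinates them.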
The main reason you should divide your code into classes is code reusability. You could greatly benefit from techniques of object-oriented programming such as inheritance, polymorphism and so on.
Moreover, the time invested in learning object-oriented programming is time invested in yourself and in a better understanding of the programming languages and frameworks you may be using in the future.
Although you could do this project all by yourself without using any classes, I strongly recommend that you try to learn OOP.
PS: C# is an object-oriented programming language, which means that all the object and variable types you may be using are classes!

How to decide on creating classes in an application..?

I'm having a serious problem here. When exactly do we need a class?
Specifically, I thought of designing a desktop application that will be able to generate a profiling test or a unit test for any number of methods I specify. I was using a simple list for storing the methods; I did not think of having a class. But now I'm thinking of creating a class that stores all the classes and gets the set of methods in each class. If this idea is correct, my last 4 days of effort are nullified. So I'm putting up a new question to see if I can get some information.
Also, I can't make head or tail of my own approach, so I wanted to discuss it with anyone who is interested in helping me with the design.
In general, the rule for defining the boundaries of a set of data and functionality to be moved into a class of its own is the single responsibility principle.
In Martin Fowler's excellent refactoring bliki you will find lots of patterns to move responsibilities, data and functionalities between classes (the obvious Extract Class, of course, but with the powerful aid of Extract Method and, in your case, Encapsulate Collection, maybe).
TDD is a good way to outline the design very early. Usually "easy to test" leads to "decoupled" and thus to separation of concerns.
Using both these approaches together (TDD + refactoring) may help you with the transition from one design to another: things should go a tad more smoothly.
And another excellent guideline is DIYDI (do it yourself dependency injection).
Also: are you going for code generation or runtime analysis here?
In the first case you might be interested in template engines which might save you a lot of work in the post-processing phase.
In the second case you might use Aspect Oriented Programming and/or Reflection to inspect the classes and find out what methods they have.
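For the runtime-analysis route, a minimal sketch using plain System.Reflection (the target type here is just a placeholder) could look like this:

using System;
using System.Reflection;

class MethodLister
{
    static void Main()
    {
        // Inspect any loaded type and list its public instance methods.
        Type type = typeof(System.Text.StringBuilder);   // placeholder target type
        foreach (MethodInfo method in type.GetMethods(BindingFlags.Public | BindingFlags.Instance))
        {
            Console.WriteLine(method.Name);
        }
    }
}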
Please read this text by Grady Booch et al. to get started with object-oriented design.
Design can be quite difficult, and until you get some experience you are going to make bad choices, so write tests to make it easier to refactor your code. I would recommend reading Code Complete. However, since you probably want to get started right away and your question is directly asking about OO and classes, I also recommend reading Uncle Bob's blog post:
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Hope this helps
To put it simply: if you have any data on which operations have to be performed, then you need a class. Good examples of this are data containers like linked lists, vectors and so on.
This is known as object-based programming and is the first step of class design.
The next step is object-oriented programming (inheritance, polymorphism); proficiency at this comes with experience and with reading well-designed code.
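As a tiny, hypothetical example of "data plus the operations on it" living together in one class (a fixed-size integer stack; no bounds checking in this sketch):

public class IntStack
{
    private readonly int[] items;   // the data...
    private int count;

    public IntStack(int capacity)
    {
        items = new int[capacity];
    }

    // ...and the operations that are allowed to touch it.
    public void Push(int value) { items[count++] = value; }
    public int  Pop()           { return items[--count]; }
    public bool IsEmpty         { get { return count == 0; } }
}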
If your application is not reusable (which is implied by the "desktop application") it is pretty much up to you to decide the granularity of your objects.
As long as you are fine with having (or not having) additional classes, there is no reason to change that.
If you are looking for principles of OO (object-oriented design), there is plenty of literature and many web links available.

Assembly-wide multicast attributes. Are they evil?

I am working on a project where we have several attributes in AssemblyInfo.cs that are being multicast to methods of a particular class.
[assembly: Repeatable(
AspectPriority = 2,
AttributeTargetAssemblies = "MyNamespace",
AttributeTargetTypes = "MyNamespace.MyClass",
AttributeTargetMemberAttributes = MulticastAttributes.Public,
AttributeTargetMembers = "*Impl", Prefix = "Cls")]
What I don't like about this is that it puts a piece of logic into AssemblyInfo (Info, mind you!), which for starters should not contain any logic at all. The worst part is that the actual MyClass.cs does not have the attribute anywhere in the file, so it is completely unclear that methods of this class might have them. From my perspective it greatly hurts the readability of the code (not to mention that overuse of PostSharp can make debugging a nightmare), especially when you have multiple multicast attributes.
What is the best practice here? Is anyone out there using PostSharp attributes like this?
Let me first answer Max: indeed, aspects are not an alternative to good OOP patterns; they are a complement. Any good AOP design starts with a good OOP design. But OOP patterns sometimes force you to write a lot of plumbing code manually. For these cases, aspects can be used to automate the implementation of OOP patterns, not to replace them.
When you use AOP intelligently, your solution can become easier to understand (business code is not mixed with maintenance code), to test (you can test the aspect independently of business code, i.e. you don't have to test that every business method traces properly), and to change (you just have to change the aspect when you want to change the pattern, instead of changing every implementation of the pattern). Now, if you abuse AOP, if you use it as a hacking tool, if you do not think in terms of OOP patterns first, then you're going to get more costs than benefits from AOP. Like any sharp tool, AOP should be used intelligently.
Back to the original question.
Who says you should put aspects in AssemblyInfo.cs? You could create a new file called GlobalAspects.cs and put all assembly-level aspects there. You're right that AssemblyInfo.cs should just be for assembly-level metadata.
But like you, I don't like assembly-level aspects. I think they should be avoided. The principal problem with assembly-level aspects is that they rely on naming conventions, and this is evil. (This evil is called pointcut fragility in the academic AOSD community.) Indeed, when you rename a class or namespace, you change the set of methods to which the aspect applies, and this can quickly become a nightmare. That's why I never use aspects based on naming conventions myself.
What about code readability? To a great extent, I think readable code is short code. If I have a business method called CreateProduct, I probably want to see just the code creating the product. Most of the time, I am not interested in code that handles transactions, exceptions, or tracing. It's enough if I know that some aspects handle that for me.
And how do I know? With PostSharp, you have the Visual Studio Extension. With AspectJ, you have the AspectJ plug-in for Eclipse (AJDT). They show you, inside the IDE, which aspects are applied to the code you currently see. And if you really want to see the details (but you seldom really do), you can use the debugger to step into aspects, or use Reflector to see the produced code.
Summary:
Good AOP design always starts with a good OOP design.
Avoid relying on naming conventions to apply aspects.
Use PostSharp extension for Visual Studio or AJDT to visualize aspects in your code.
I'm sure this will be an unpopular answer but maybe I can get my peer pressure badge...
Your instincts are correct. Putting logic in metadata of any kind is a horrible, horrible sin for which one burns eternally in the hellfire of unmaintainability.
I mean no disrespect by this although I'm certain it will be interpreted otherwise.
The best practice would be to not use "aspect-oriented programming" tools, which are crutches that enable the lameness of poor design and testing practices. Instead, look at your design and ask yourself "why."
Why did I feel the need to use this tool? What design problem was I trying to solve?
Once you have a firm grasp of the problem, go pick up Design Patterns Explained (Shalloway & Trott) or Head First Design Patterns (Freeman, Robson, Bates, & Sierra).
In the end, a pattern-oriented solution will be easier to understand, easier to test, and easier to change. The only additional cost will be the one-time fee of mastering design patterns in place of the recurring charge of trying to figure out where all these aspects are, how they fit together, and how they influence one another every time you make a change.

When do you define a class?

Say you want to write a Tetris clone, and you just started planning.
How do you decide what should be a class? Do you make individual blocks a class or just the different block-types?
I'm asking this because I often find myself writing either too many classes, or writing too few classes.
Take a step back.
I suspect that you're putting the cart before the horse here. OOP isn't a Good Thing in its own right, it's a technique for effectively solving problems. Problems like: "I have a large multiple-team organization of programmers with diverse skill sets and expertise. We are building large-scale complex software where many subsystems interact with each other. We have a limited budget."
OOP is good for this problem space because it emphasizes abstraction, encapsulation, polymorphism and inheritance. Each of those works well in the many-teams-writing-large-software space. Abstraction allows one team to use the work of another without having to understand the implementation details, thereby lowering the communication cost. Encapsulation allows one team to know that they can make changes to their internal structures to make them better without worrying about the costs of impacting another team. Polymorphism lowers the cost of using many different implementations of a given abstraction, depending on the current need. Inheritance allows one team to build upon the work of another, cleanly re-using existing code rather than spending time and money re-inventing it.
All of these things are good not in and of themselves, but because they lower costs in large-team-complex-software scenarios.
Do they lower costs in one-guy-trivial-software scenarios? I don't think they do; I think they raise costs. The point of inheritance is to save time through code re-use; if you spend more time messing around with getting the perfect inheritance hierarchy than the time you save through code re-use, it's not a net win, it's a net loss. Similarly with all the others: if you don't have many different implementations of the same thing then spending time on polymorphism is a loss. If you don't have anyone who is going to consume your abstraction, or anyone from whom you need to protect your internal state, then abstraction and encapsulation are costs with no associated benefits.
If what you want to do is write Tetris in an OO style for practice writing in that style, by all means go right ahead and don't let me stop you. I'm just saying: don't feel that you have a moral requirement to use OOP to solve a problem that OOP is not well-suited to solve; OOP is not the be-all-and-end-all of software development styles.
You might want to check out How do you design object oriented projects?. The accepted solution is a good start. I would also pick up a design patterns book as well.
For a Tetris clone, I'd say you're going to be better off creating a single block class and using an enum or similar to record which shape of piece it is. The reason is that all blocks act in the same way - they fall, they react to user input by rotating or falling faster, and they use collision detection to determine when to stop falling and trigger the next piece.
If you have a class per block-type then there'd be so little difference between each class that it would be a waste of time.
In another situation where you have a lot of similar concepts (like many different types of animals), it might make more sense to have a class per sub-type, all inheriting from a parent class, if the sub-types were more different from each other.
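A minimal sketch of that single-class approach in C# (the enum and member names are hypothetical):

public enum PieceShape { I, O, T, S, Z, J, L }

public class Piece
{
    public PieceShape Shape { get; private set; }
    public int Row    { get; private set; }
    public int Column { get; private set; }

    public Piece(PieceShape shape) { Shape = shape; }

    // Every shape falls and moves the same way; only the occupied
    // cells, looked up per Shape, differ.
    public void MoveDown()  { Row++; }
    public void MoveLeft()  { Column--; }
    public void MoveRight() { Column++; }
}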
Depends on your development methodology.
Assuming you do agile, you can start by writing the classes you think you'll need, and then, as you start filling in the implementation, you'll discover that some classes are obsolete or others need to be split out.
Assuming a more design-first-then-build approach (DSDM/RUP/waterfall...), you'd want to go for a design based on the "user story"; see SwDevMan81's link for an example.
I would make a base class Piece, because the pieces all have similar functionality: move right, move left, move down, rotate CW, rotate CCW, color, position, and the list goes on. Then each piece should be a subclass such as ZPiece, LPiece, SquarePiece, IPiece, BackwardsLPiece, etc. You do end up with many classes, but then there are many different types of pieces.
The point of OOP you are asking about is inheritance. You don't want to reinvent the wheel for functions like move left/right/down, nor do you want to repeat the exact same code in multiple locations. Those functions shouldn't change depending on the piece, so put them in the base class. Each piece rotates differently, but rotation is still declared in the base class, because each subclass should implement its own version of it.
Basically, anything all pieces have in common should be in the base class, and everything that makes a piece unique should be in the subclass itself. Yes, I think making a block class so that each piece is made of 4 of them is a bit much, but there are those who would disagree with me.
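By contrast, a minimal sketch of this inheritance-based layout (names are hypothetical; rotation is the step each piece overrides):

public abstract class Piece
{
    public int Row    { get; protected set; }
    public int Column { get; protected set; }

    // Behaviour common to all pieces lives in the base class...
    public void MoveDown()  { Row++; }
    public void MoveLeft()  { Column--; }
    public void MoveRight() { Column++; }

    // ...while each piece supplies its own rotation.
    public abstract void RotateClockwise();
    public abstract void RotateCounterClockwise();
}

public class LPiece : Piece
{
    public override void RotateClockwise()        { /* L-specific rotation */ }
    public override void RotateCounterClockwise() { /* ... */ }
}

public class SquarePiece : Piece
{
    // A square looks the same after rotation, so these are no-ops.
    public override void RotateClockwise()        { }
    public override void RotateCounterClockwise() { }
}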

NDepend and other automatic code analysers - relevance?

Since yesterday I have been analyzing one of our projects with NDepend (free for most of its features), and the more I use it, the more I doubt the real value of this type of software (code-analysis software).
Let me explain. The tool builds a report about the health of the system and ranks classes by every metric. I thought it would be a good starting point for modifications, but most of the top results are there only because the class has over 100 lines (we have big headers and we do use VS comment styles), so it's not a big deal... Then the Afferent Coupling (CA) level is always reported as too high, and this is almost always the case for interfaces, which we use a lot... So at the moment I don't see anything wrong, but NDepend does not seem to like it (if you have suggestions on how to improve that, tell me, because I don't see the need). It's the same thing for the NOC (Number of Children) metric, which is too high for most of my interfaces...
For the moment, the only really useful metric is Cyclomatic Complexity...
My question is: do you find it worthwhile to analyse code with an automatic code analyser like NDepend? If yes, how do you filter out all the information I have mentioned that doesn't really reflect the real health of the system?
Actually, metrics are just one feature of NDepend. Did you try VisualNDepend, which lets you analyze your project much more in depth than the report does? Reading your comment, I am almost sure you didn't play with the NDepend UI (standalone or integrated into Visual Studio), which is the best way to filter data about your code base.
I am one of the developers of NDepend and we use it a lot to analyze our own code. Basically we write our own quality rules with Code Rules over LINQ Queries (CQLinq). These rules automatically make sure that we don't have regressions in our design. Here you'll find the list of around 200 default code rules.
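For instance, a typical CQLinq rule flagging overly complex methods looks roughly like this (a sketch only; the exact properties and threshold should be checked against the CQLinq documentation):

// Warn when any method in our own code is too complex.
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }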
Here are some unique features of NDepend that are not related to code metrics:
Write CQLinq rules to make sure we don't have architectural flaws, such as dependency cycles between our components, the UI using the DB directly, or the DB entangled with the business objects.
Make sure we don't have problem with code coverage by tests (like we make sure with a CQLinq rule that if a class is supposed to be 100% covered, it will remain 100% covered in the future)
Enforce side-effects-free code (immutable class/pure methods)
Use the ability to compare two analyses to review code changes since the last release, before doing a new release. More specifically, I enjoy using NDepend to know which methods have been added or refactored since the last release and are not 100% covered by tests.
Have optimal encapsulation for all our members and types (like knowing which internal methods can be declared private). This is also related to dead-code detection, which NDepend also supports.
For a complete list of NDepend features, see here.
I don't necessarily see NDepend results as "good" or "bad" in software engineering; there's always a reason why an application is designed the way it is. I see the report as something that can help me point out issues with my design, but I have the final word when it comes to deciding whether a method needs to be refactored or is good the way I designed it. In general, don't get too caught up trying to decide whether it's worth it or not. It definitely is; instead, I would suggest you carefully review the results. This will help you view your design from another perspective, and there may be occasions where you decide that the way you designed it is the best way to achieve your application's goals.
