Best Practices on using C# IntelliSense Comments

We have a Visual Studio 2010 solution that contains several C# projects structured according to Jeffrey Palermo's Onion Architecture pattern (http://jeffreypalermo.com/blog/the-onion-architecture-part-1/). We want to add the Visual Studio IntelliSense comments using the triple slashes, but we want to see if anyone knows of best practices on how far to take this. Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Or is it better to use these comments in a more limited fashion, and if so, what are the important objects to apply the IntelliSense comments to?

Add them to any methods exposed in public APIs; that way you can give the caller all the information they need when working with a foreign interface, for example which exceptions the method may throw, along with other remarks.
It's still beneficial to add these kinds of comments to private methods; I do it anyway to be consistent. It also helps if you plan on generating documentation from the comments.
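For illustration, a minimal sketch of what this looks like on a hypothetical repository method (the Customer type and all names here are invented for the example):

```csharp
using System;

public class Customer { }

public class CustomerRepository
{
    /// <summary>
    /// Retrieves the customer with the given identifier.
    /// </summary>
    /// <param name="customerId">The unique identifier of the customer.</param>
    /// <returns>The matching <see cref="Customer"/> instance.</returns>
    /// <exception cref="ArgumentOutOfRangeException">
    /// Thrown when <paramref name="customerId"/> is not positive.
    /// </exception>
    /// <remarks>
    /// Callers see this text in IntelliSense and in generated documentation.
    /// </remarks>
    public Customer GetCustomer(int customerId)
    {
        if (customerId <= 0)
            throw new ArgumentOutOfRangeException("customerId");

        return new Customer(); // lookup logic elided for brevity
    }
}
```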

While, technically, there is such a thing as too much documentation, 99.99999% of the time this exception doesn't apply.
Document everything as much as you can. Formal, informal, stream of thought... every scrap of commentary will help some poor soul who inherits your code or has to interface with it.
(It's like the old rule about blaming the compiler: yes, compilers have bugs too, but this is almost never one of those times. Likewise, too much documentation is possible, but yours is almost never that case.)
"Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface?" Yes.
"Or is it better to use these comments in a more limited fashion, and if so what are the important objects to apply the Intellisense Comments to?" If you want to. Apply them to any function you write, not to what VS autogenerates.
I've seen limited "intellisense" comments... but extensive in-code comments that follow. So long as the "content" is there, life will be good. I generally include a brief blurb about each function in the IntelliSense comments, but put the majority of the "here's why I did this" in the function and in dead-tree documents.

I agree with fletcher. Start with public facing classes and methods and then work your way down into private code. If you were starting from scratch I would highly recommend adding the XML comments to all code for your own convenience, but in this case starting with public methods and then updating other classes whenever you go in to update them is a good solution.

Related

How to decide on creating classes in an application?

I am having a serious problem here. When exactly do we need a class?
Specifically, I thought of designing a desktop application that will be able to generate a profiling test or a unit test for any number of methods I specify. I was using a simple list for storing the methods; I did not think of having a class. But now I am thinking of creating a class that stores all the classes and gets the set of methods in each class. If this idea is correct, my last 4 days of effort are nullified, so I am putting up a new question to see if I can get some information.
Also, I could not make head or tail of my approach, so I wanted to discuss the design with anyone interested in helping me.
In general, the rule for defining the boundaries of a set of data and functionality to be moved into a class of its own is the single responsibility principle.
In Martin Fowler's excellent refactoring bliki you will find lots of patterns to move responsibilities, data and functionalities between classes (the obvious Extract Class, of course, but with the powerful aid of Extract Method and, in your case, Encapsulate Collection, maybe).
TDD is a good way to outline the design very early. Usually "easy to test" leads to "decoupled" and thus to separation of concerns.
Using both of these approaches together (TDD + refactoring) may help you with the transition from one design to another; things should go a tad more smoothly.
And another excellent guideline is DIYDI (do it yourself dependency injection).
Also: are you going for code generation or runtime analysis here?
In the first case you might be interested in template engines which might save you a lot of work in the post-processing phase.
In the second case you might use Aspect Oriented Programming and/or Reflection to inspect the classes and find out what methods they have.
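If you go the runtime-analysis route, reflection makes enumerating the methods your generator would work from straightforward. A minimal sketch (the type inspected here is just an example):

```csharp
using System;
using System.Reflection;

class MethodLister
{
    // Prints the public instance methods declared directly on a type,
    // which is the raw material a test or profiling generator would need.
    static void ListMethods(Type type)
    {
        MethodInfo[] methods = type.GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

        foreach (MethodInfo method in methods)
        {
            Console.WriteLine("{0}.{1}", type.Name, method.Name);
        }
    }

    static void Main()
    {
        ListMethods(typeof(string)); // example: inspect System.String
    }
}
```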
Please read this text by Grady Booch et al. to get started with Object Oriented Design.
Design can be quite difficult, and until you get some experience you are going to make bad choices, so write tests to make it easier to refactor your code. I would recommend reading Code Complete. However, since you probably want to get started right away and your question is directly asking about OO and classes, I also recommend reading Uncle Bob's blog post:
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Hope this helps
In a simple statement: if you have any data on which operations have to be performed, then you need a class. Good examples of these are data containers like linked lists, vectors, and so on.
This is known as object-based programming and is the first step of class design.
The next step is object orientation (inheritance, polymorphism); proficiency with this comes with experience and from looking at well-designed code.
If your application is not reusable (which is implied by the "desktop application") it is pretty much up to you to decide the granularity of your objects.
As long as you are fine with having (or not having) additional classes, there is no reason to change that.
If you are looking for principles for OO (object oriented design) there is plenty of literature and weblinks available.

Commenting code: C# Visual Studio best practices

I'm looking for a rather arbitrary answer, so this might be more of a discussion. I'm wondering what the best practice for commenting my C# code in Visual Studio is. Right now I'm using the triple slashes /// to generate the XML, and Sandcastle to build a .chm or HTML file. But my problem is that I use code comments for two reasons:
When other developers are using my code, they can read the documentation, both in IntelliSense and in the .chm or HTML file.
But I also use comments as reminders to myself, so I can remember my thoughts when I come back half a year later to some complex method.
How can both goals be accomplished without interfering with each other, and at the same time be a quick task that doesn't take a lot of coding time?
The best advice I can give you is:
Don't comment bad code; rewrite it!
If methods are very complex, most of the time you are doing something wrong (not always, but very close to always). Writing readable code is hard, but it pays off, because writing good comments that you (or your colleagues) will understand a year later is hard as well (and perhaps even harder). Ways to make things clear are breaking methods up into smaller, well-named methods and using very clear variable names.
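A tiny invented illustration of the point: instead of commenting a condition, extract it into a method whose name says what the comment would have said.

```csharp
using System;

class Customer { public int YearsActive; }
class Order { public decimal Total; }

class DiscountPolicy
{
    // After refactoring: the method name carries the intent that a
    // comment would otherwise have to explain.
    static bool QualifiesForLoyaltyDiscount(Order order, Customer customer)
    {
        return order.Total > 1000m && customer.YearsActive > 5;
    }

    static void Main()
    {
        var order = new Order { Total = 1500m };
        var customer = new Customer { YearsActive = 7 };

        // Before: if (order.Total > 1000m && customer.YearsActive > 5) ...
        // would have needed a comment like "check loyalty discount eligibility".
        Console.WriteLine(QualifiesForLoyaltyDiscount(order, customer));
    }
}
```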
A book that has helped me a lot in creating better code is Robert Martin's Clean Code. If you haven't read it, please do. And let all the developers in your company read it.
Good luck.
Use /// comments to document your public and protected API. Use <remarks> to describe how your API should be used. The audience for these comments is other developers using your code.
Use // comments to comment your code whenever the code alone isn't adequate to fully understand what is going on. The audience for these comments is yourself, perhaps three months in the future, or another developer who will maintain your code. You can use special comments like TODO or BUGBUG to flag code you have to revisit.
I combine both commenting styles: /// for "public" documentation on classes, methods, etc., and // for "private" comments for myself or the coders who follow me to read.
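A small invented sketch showing the two styles side by side, /// for callers and // (plus TODO) for maintainers:

```csharp
using System.Collections.Generic;
using System.IO;

public class ConfigReader
{
    /// <summary>Parses a configuration file into key/value pairs.</summary>
    /// <remarks>Lines starting with '#' are treated as comments.</remarks>
    public Dictionary<string, string> ParseConfig(string path)
    {
        var result = new Dictionary<string, string>();

        // Read everything up front; config files are small, so this is fine.
        // TODO: switch to streaming if they ever grow large.
        foreach (string line in File.ReadAllLines(path))
        {
            if (line.StartsWith("#"))
                continue; // a '//' note aimed at the maintainer, not the caller

            int eq = line.IndexOf('=');
            if (eq > 0)
                result[line.Substring(0, eq).Trim()] = line.Substring(eq + 1).Trim();
        }
        return result;
    }
}
```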

Dynamic code generation

I am currently developing an application with which you can create "programs" without writing source code, just click & play if you like.
Now the question is how I generate an executable program from my data model. There are many possibilities, but I am not sure which one is best for me. I need to generate assemblies with classes, namespaces and everything else that can be part of the application.
CodeDOM classes: I have heard of lots of limitations and bugs in these classes. I need to create attributes on method parameters and return values. Is this supported?
Create C# source code programmatically and then call CompileAssemblyFromFile on it: this would work, since I can generate any code I want and C# supports most CLR features. But wouldn't this be slow?
Use the Reflection.Emit ILGenerator class: I think with this I can generate every possible kind of .NET code. But isn't this much more complicated and error-prone than the other approaches?
Are there other possible solutions?
EDIT:
The tool is a general one for developing applications; it is not restricted to a specific domain. I don't know if it can be considered a visual programming language. The user can create classes, methods, method calls, and all kinds of expressions. It won't be very limiting, because you should be able to do most things that are allowed in real programming languages.
At the moment lots of things must still be written by the user as text, but the goal in the end is that nearly everything can be clicked together.
You may find it rewarding to look at the Dynamic Language Runtime, which is more or less designed for creating high-level languages based on .NET.
It's perhaps also worth looking at some of the previous Stack Overflow threads on Domain Specific Languages, which contain some useful links to tools for working with DSLs. That sounds a little like what you are planning, although I'm still not absolutely clear from the question what exactly your aim is.
Most things "click and play" should be simple enough just to stick some pre-defined building-block objects together (probably using interfaces on the boundaries). Meaning: you might not need to do dynamic code generation - just "fake it". For example, using property-bag objects (like DataTable etc, although that isn't my first choice) for values, etc.
Another option for dynamic evaluation is the Expression class; especially in .NET 4.0, this is hugely versatile, and allows compilation to a delegate.
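A minimal sketch of that idea (the formula itself is invented for the example): build an expression tree at runtime and compile it into an ordinary delegate.

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Build the expression (x, y) => x * y + 1 at runtime.
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        ParameterExpression y = Expression.Parameter(typeof(int), "y");
        Expression body = Expression.Add(
            Expression.Multiply(x, y),
            Expression.Constant(1));

        // Compile to a delegate; calling it is then as fast as normal code.
        Func<int, int, int> f =
            Expression.Lambda<Func<int, int, int>>(body, x, y).Compile();

        Console.WriteLine(f(3, 4)); // prints 13
    }
}
```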
Do the C# source generation and don't care about speed until it matters. The C# compiler is quite quick.
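As a rough sketch of this route (the generated class here is invented for the example), CSharpCodeProvider can compile a source string in memory:

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class CompileDemo
{
    static void Main()
    {
        string source = @"
            public class Greeter
            {
                public string Greet(string name) { return ""Hello, "" + name; }
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(options, source);

        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Compilation failed.");

        // Instantiate the generated type via reflection and invoke it.
        object greeter = results.CompiledAssembly.CreateInstance("Greeter");
        object greeting = greeter.GetType().GetMethod("Greet")
                                 .Invoke(greeter, new object[] { "world" });
        Console.WriteLine(greeting); // Hello, world
    }
}
```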
When I wrote a dynamic code generator, I relied heavily on System.Reflection.Emit.
Basically, you programmatically create dynamic assemblies and add new types to them. These types are constructed using the Emit constructs (properties, events, fields, and so on). When it comes to implementing methods, you'll have to use an ILGenerator to pump out MSIL op-codes into your method. That sounds super scary, but you can use a couple of tools to help:
A pre-built sample implementation
ILDasm to inspect the op-codes of the sample implementation.
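To give a feel for the Emit route, here is a minimal sketch using DynamicMethod to keep the example short (a full dynamic assembly would use AssemblyBuilder and TypeBuilder in the same spirit; the method built here is invented):

```csharp
using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // A dynamic method equivalent to: int Add(int a, int b) { return a + b; }
        var add = new DynamicMethod(
            "Add", typeof(int), new[] { typeof(int), typeof(int) });

        ILGenerator il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // push first argument onto the stack
        il.Emit(OpCodes.Ldarg_1); // push second argument
        il.Emit(OpCodes.Add);     // add the two values on the stack
        il.Emit(OpCodes.Ret);     // return the result

        var f = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(f(2, 3)); // prints 5
    }
}
```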
It depends on your requirements. CodeDOM would certainly be the best fit for a "program" stored in a "data model".
However, it's unlikely that using option 2 will be in any way measurably slower in comparison with any other approach.
I would echo others in that 1) the compiler is quick, and 2) "Click and Play" things should be simple enough so that no single widget added to a pile of widgets can make it an illegal pile.
Good luck. I'm skeptical that you can achieve point (2) for anything but really toy-level programs.

C# - what are the benefits of "partial" classes?

I'm asking this because I find it quite a dangerous feature: the class definition is distributed, so you can't really be sure you know all about it. Even if I find three partial definitions, how do I know there's not a fourth somewhere?
I'm new to C# but have spent 10 years with C++, maybe that's why I'm shaken up?
Anyway, the "partial" concept must have some great benefit, which I'm obviously missing. I would love to learn more about the philosophy behind it.
EDIT: Sorry, missed this duplicate when searching for existing posts.
Partial classes are handy when using code generation. If you want to modify a generated class (rather than inheriting from it) then you run the risk of losing your changes when the code is regenerated. If you are able to define your extra methods etc in a separate file, the generated parts of the class can be re-created without nuking your hand-crafted code.
The great benefit is hiding computer-generated code (produced by the designer).
Eric Lippert has a recent blog post about the partial keyword in general.
Another usage could be to give nested classes their own file.
Another point is that when a class implements multiple interfaces, you can split the interface implementations across different files, so every code file contains only the code that belongs to one interface implementation. This is in keeping with the separation of concerns concept.
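A sketch of that layout (the file names, interfaces and members are invented for the example); the compiler merges the parts into one Widget class implementing both interfaces:

```csharp
// Shared interfaces (could live anywhere in the project).
public interface IRenderable  { void Render(); }
public interface IPersistable { void Save(); }

// Widget.Rendering.cs -- only the rendering concern lives here.
public partial class Widget : IRenderable
{
    public void Render() { /* drawing code */ }
}

// Widget.Persistence.cs -- only the persistence concern lives here.
public partial class Widget : IPersistable
{
    public void Save() { /* persistence code */ }
}
```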
Two people editing the same class, and autogenerated designer code, are the two immediate problems I can see that partial classes and methods solved.
Having designer-generated code in a separate file is a lot easier to work with compared to 1.1, where your code could often be mangled by Visual Studio (in Windows Forms).
Visual Studio still makes a mess of syncing the designer file, code behind and design file with ASP.NET.
If you have some kind of absurdly large class that for some reason you are unable or not allowed to logically break apart into smaller classes, then you can at least physically break it into multiple files in order to work with it more effectively. Essentially, you can view small chunks at a time, avoiding scrolling up and down.
This might apply to legacy code where, perhaps due to some arcane policy, you are not allowed to mess with the existing API because of numerous and entrenched dependencies.
Not necessarily the best use of partial classes, but it certainly gives you an alternative way to organize code you might not otherwise be able to modify.
Maybe it's too late, but please let me add my 2 cents too:
- When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously.
- You can easily write your own code (for extended functionality) for a VS.NET-generated class. This allows you to write the code you need without messing with the system-generated code.

NDepend and other automatic code analysers: relevance?

Since yesterday, I have been analyzing one of our projects with NDepend (free for most of its features), and the more I use it, the more I doubt the real value of this type of software (code-analysis software).
Let me explain. The system builds a report about the health of the system, ranking classes by every metric. I thought this would be a good starting point for modifications, but most of the top results are there only because the class has over 100 lines (we have big headers and we use VS comment styles), so it's not a big deal... Then the Afferent Coupling (CA) level is always too high, and this is almost always true for interfaces, which we use a lot... so at this moment I do not see anything wrong, but NDepend does not seem to like it (if you have suggestions to improve on that, tell me, because I do not see the need). It's the same thing for the metric called NOC (Number Of Children), which is too high for most of my interfaces...
For the moment, the only really useful metric is cyclomatic complexity...
My question is: do you find it worth it to analyse code with an automatic code analyser like NDepend? If yes, how do you filter out all the information I have mentioned that doesn't really reflect the real health of the system?
Actually, metrics are just one feature of NDepend. Did you try VisualNDepend, which lets you analyze your project much more in depth than the report? Reading your comment, I am almost sure you didn't play with the NDepend UI (standalone or integrated in Visual Studio), which is the best way to filter data about your code base.
I am one of the developers of NDepend and we use it a lot to analyze our own code. Basically we write our own quality rules with Code Rules over LINQ Queries (CQLinq). These rules automatically make sure that we don't have regression on our design. Here you'll find the list of around 200 default code rules.
Here are some unique features of NDepend and not related to code metrics:
Write CQLinq rules to make sure we don't have architectural flaws, such as dependency cycles between our components, the UI using the DB directly, or the DB entangled with the business objects.
Make sure we don't have problems with code coverage by tests (for example, we make sure with a CQLinq rule that if a class is supposed to be 100% covered, it will remain 100% covered in the future).
Enforce side-effect-free code (immutable classes/pure methods).
Use the ability to compare two analyses to review code changes since the last release, before doing a new release. More specifically, I enjoy using NDepend to know which methods have been added or refactored since the last release and are not 100% covered by tests.
Achieve optimal encapsulation for all our members and types (like knowing which internal methods can be declared private). This is also related to dead-code detection, which NDepend also supports.
For a complete list of NDepend features, see here.
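To give a flavour of what such a rule looks like, here is a small CQLinq-style sketch modeled on the kind of default rules NDepend ships with; treat the exact names and threshold as approximate rather than authoritative:

```
// <Name>Avoid too complex methods</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }
```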
I don't necessarily see NDepend results as "good" or "bad" in software engineering; there's always a reason why an application is designed the way it is. I see it as a report that can help me point out issues with my design, but I have the final word when it comes to deciding whether a method needs to be refactored or is good the way I designed it. In general, don't get too caught up trying to answer whether it's worth it or not; it definitely is. Instead, I would suggest you carefully review the results. This will help you view your design from another perspective, and there may be occasions where you decide that the way you designed it is the best way to achieve your application's goals.
