I have inherited an API which doesn't have proper comments, and I am working on changing that.
Does anyone know if there is some sort of mechanism to add default XML comments to all the members of a class or an assembly?
(I remember seeing something like that on a webcast and I think he might have used PowerShell script to achieve that.)
This way I can avoid lots of repetitive steps, and have everything in place to go and start writing just the comments.
Does anyone have any better suggestions?
GhostDoc is pretty fantastic for XML documentation, although you'll need to purchase a copy to generate automatic documentation for all classes/members. The free version allows you to right-click (or use a hotkey) on a class or member and it will generate the documentation.
I've found GhostDoc to be pretty good.
Once you've run it over your code you then simply add details where required.
http://submain.com/products/ghostdoc.aspx
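For reference, the kind of triple-slash stub these tools generate, and that you then flesh out by hand, looks roughly like this (the member name and wording below are just placeholders):

/// <summary>
/// Determines whether the specified user name is valid.
/// </summary>
/// <param name="userName">The user name to validate.</param>
/// <returns>true if the user name is valid; otherwise, false.</returns>
public bool IsValidUserName(string userName)
{
    // ...
}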
We have a Visual Studio 2010 solution that contains several C# projects in accordance with Jeffrey Palermo's Onion Architecture pattern (http://jeffreypalermo.com/blog/the-onion-architecture-part-1/). We want to add the Visual Studio IntelliSense comments using the triple slashes, but we want to see if anyone knows of best practices on how far to take this. Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Or is it better to use these comments in a more limited fashion, and if so, what are the important objects to apply the IntelliSense comments to?
Add them to any methods exposed in public APIs; that way you can give the caller all the information they need when working with a foreign interface, for example which exceptions the method may throw, along with any other remarks.
It's still beneficial to add these kinds of comments to private methods; I do it anyway to be consistent. It also helps if you plan on generating documentation from the comments.
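As a hedged illustration (the method, types, and exception below are invented for the example), a public API member documented this way might look like:

/// <summary>
/// Opens a connection to the configured data store.
/// </summary>
/// <remarks>
/// The caller is responsible for disposing the returned connection.
/// </remarks>
/// <exception cref="System.InvalidOperationException">
/// Thrown when no connection string has been configured.
/// </exception>
public System.Data.IDbConnection OpenConnection()
{
    // ...
}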
While, technically, there is such a thing as too much documentation, 99.99999% of the time this exception doesn't apply.
Document everything as much as you can. Formal, informal, stream of thought... every scrap of commentary will help some poor soul who inherits your code or has to interface with it.
(It's like the old rule "The error may be in the Compiler and not your code. Compilers have errors too. This is not one of those times.")
"Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface?"
Yes.
"Or is it better to use these comments in a more limited fashion, and if so, what are the important objects to apply the IntelliSense comments to?"
If you want to. Apply them to any function you write, not to what VS autogenerates.
I've seen limited IntelliSense comments... but extensive in-code comments that follow. So long as the "content" is there, life will be good. I generally include a brief blurb about each function in the IntelliSense comments, but put the majority of the "here's why I did this" in the function and in dead-tree documents.
I agree with fletcher. Start with public facing classes and methods and then work your way down into private code. If you were starting from scratch I would highly recommend adding the XML comments to all code for your own convenience, but in this case starting with public methods and then updating other classes whenever you go in to update them is a good solution.
UPDATE: When I tried to replicate this problem one more time to answer your questions, I could not! I can only conclude that my initial setup of Mercurial was problematic, and/or that I was trying to check in a build that had failed compilation before the checkin. Sigh! Thank you so very much for your help. I gave credit for the help on how to do a script. I need to try that for general purposes.
Hi all, I hope you can help me :). I am trying to see if Mercurial would be a good DVCS for my project at work, and I'm surely a newbie to many things.
We have a fairly large codebase in C# (.NET 3.0, not 3.5, on Windows XP) and it utilizes the GUID feature. I confess I know little about how or why we use the GUID, but I do know that I cannot touch it.
So, when I try hg clone, it fails unless I change the GUID in the cloned directory (i.e. create a new GUID in Visual Studio and then paste that new GUID to replace the old one). To me, this completely defeats the purpose and utility of quick, easy clones. It also makes all the many workflows that require multiple clones difficult.
Is there a workaround, or is there something I'm doing wrong? How can I simplify and/or remove this problem?
Would Bazaar make this easier?
Thank you!
You can probably do it in an update hook. I'm no Windows scripter, but if you can write a PowerShell script that calls [System.Guid]::NewGuid() and replaces the GUID in that file, you can use a hook like:
[hooks]
update=c:\scripts\replace-guid-in-file.PS1 path\to\file\with\guid
Whatever file you're changing these GUIDs in should probably be untracked (put it in your .hgignore), or you're going to end up with a lot of accidental GUID changes committed to the repo.
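I can't vouch for the exact PowerShell syntax, but the logic of such a script is simple; here is a rough sketch of the same idea as a small C# console program (the file-path handling and the GUID pattern are assumptions) that the hook could invoke instead of a .ps1:

using System;
using System.IO;
using System.Text.RegularExpressions;

class ReplaceGuidInFile
{
    static void Main(string[] args)
    {
        // The hook passes the path of the file containing the GUID.
        string path = args[0];
        string text = File.ReadAllText(path);

        // Replace the first GUID-shaped token with a freshly generated one.
        var guidPattern = new Regex(@"[0-9a-fA-F]{8}-([0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}");
        string replaced = guidPattern.Replace(text, Guid.NewGuid().ToString(), 1);

        File.WriteAllText(path, replaced);
    }
}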
Is this GUID something the editor creates in the project files? In that case, do not check in the project files, or check in only those that do not contain the GUID.
Does anybody know of a class that writes the structure (public static and instance members) and data of an object/class to a string (for debugging purposes), or that can even generate fancy HTML divs or something like that?
Well, the obvious answer is to use the debugger built into Visual Studio; it has some wonderful tools (Watches, Quick Watch, the Immediate window, etc.). If for some reason you don't have access to the debugger, I suggest you fix whatever is keeping you from it, but otherwise you can write yourself a fairly robust object dumper using Reflection. Or you can take Eric White's advice and use the ObjectDumper.
Check out: http://blogs.msdn.com/ericwhite/archive/2008/08/14/object-dumper-an-invaluable-tool-for-writing-code-in-the-functional-programming-style.aspx
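If you do end up rolling your own, a minimal reflection-based dumper (the names here are just an example, and it only goes one level deep) can be quite short:

using System;
using System.Reflection;
using System.Text;

static class SimpleDumper
{
    // Writes the public static and instance fields and properties of an object to a string.
    public static string Dump(object obj)
    {
        if (obj == null) return "null";

        Type type = obj.GetType();
        var sb = new StringBuilder();
        sb.AppendLine(type.FullName);

        const BindingFlags flags = BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static;

        foreach (FieldInfo field in type.GetFields(flags))
            sb.AppendFormat("  {0} = {1}", field.Name, field.GetValue(obj)).AppendLine();

        foreach (PropertyInfo property in type.GetProperties(flags))
            if (property.CanRead && property.GetIndexParameters().Length == 0)
                sb.AppendFormat("  {0} = {1}", property.Name, property.GetValue(obj, null)).AppendLine();

        return sb.ToString();
    }
}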
I don't know of any vendor packages off the top of my head, but the XmlSerializer would do that for you pretty easily.
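A hedged sketch of the XmlSerializer approach (the Person type is invented for the example); it only covers public read/write properties, but that is often enough for a quick dump:

using System;
using System.IO;
using System.Xml.Serialization;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Program
{
    static void Main()
    {
        var person = new Person { Name = "Ada", Age = 36 };

        // Serialize the object's public properties to XML purely for inspection.
        var serializer = new XmlSerializer(typeof(Person));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, person);
            Console.WriteLine(writer.ToString());
        }
    }
}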
Actually, maybe not full-blown Lex/Yacc. I'm implementing a command-interpreter front-end to administer a webapp. I'm looking for something that'll take a grammar definition and turn it into a parser that directly invokes methods on my object. Similar to how ASP.NET MVC can figure out which controller method to invoke, and how to pony up the arguments.
So, if the user types "create foo" at my command-prompt, it should transparently call a method:
private void Create(string id) { /* ... */ }
Oh, and if it could generate help text from (e.g.) attributes on those controller methods, that'd be awesome, too.
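Just to make the ask concrete, the sort of hand-rolled dispatch I'm hoping a tool would generate for me looks roughly like this (the attribute and class names are purely illustrative):

using System;
using System.Linq;
using System.Reflection;

// Hypothetical attribute for help text; not part of any real framework.
[AttributeUsage(AttributeTargets.Method)]
class HelpAttribute : Attribute
{
    public string Text { get; private set; }
    public HelpAttribute(string text) { Text = text; }
}

class AdminCommands
{
    [Help("create <id>: creates a new foo with the given id")]
    private void Create(string id) { Console.WriteLine("Created " + id); }
}

class Interpreter
{
    static void Main()
    {
        var target = new AdminCommands();
        string input = "create foo";

        // Split the command line into a verb and its arguments.
        string[] parts = input.Split(' ');
        object[] args = parts.Skip(1).Cast<object>().ToArray();

        // Find a method whose name matches the verb (case-insensitively) and invoke it.
        MethodInfo method = typeof(AdminCommands).GetMethod(
            parts[0],
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.IgnoreCase);
        method.Invoke(target, args);

        // Help text could be pulled from the attribute for a "help" command.
        var help = (HelpAttribute)Attribute.GetCustomAttribute(method, typeof(HelpAttribute));
        Console.WriteLine(help.Text);
    }
}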
I've done a couple of small projects with GPLEX/GPPG, which are pretty straightforward reimplementations of LEX/YACC in C#. I've not used any of the other tools above, so I can't really compare them, but these worked fine.
GPPG (Gardens Point Parser Generator) and GPLEX (Gardens Point LEX) are both available online.
That being said, I agree that a full LEX/YACC solution is probably overkill for your problem. I would suggest generating a set of bindings using IronPython: it interfaces easily with .NET code, non-programmers seem to find the basic syntax fairly usable, and it gives you a lot of flexibility/power if you choose to use it.
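If I remember the IronPython 2.x hosting API correctly (treat this as a sketch, and note the Admin class is made up), exposing your admin object to scripts only takes a few lines once you reference IronPython.dll and Microsoft.Scripting.dll:

using System;
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class Admin
{
    public void Create(string id) { Console.WriteLine("Created " + id); }
}

class PythonBindingExample
{
    static void Main()
    {
        // Create an IronPython engine and a scope to share objects with scripts.
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        scope.SetVariable("admin", new Admin());

        // A console loop (or your users) can now drive the object with Python syntax.
        engine.Execute("admin.Create('foo')", scope);
    }
}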
I'm not sure Lex/Yacc will be of any help. You'll just need a basic tokenizer and an interpreter, which are faster to write by hand. If you still want to go the parsing route, see Irony.
As a side note: have you considered PowerShell and its cmdlets?
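For what it's worth, a rough sketch of the cmdlet route (the verb/noun and parameter are made up, and it needs a reference to System.Management.Automation):

using System.Management.Automation;

// Invoked from PowerShell as: New-Foo foo
[Cmdlet(VerbsCommon.New, "Foo")]
public class NewFooCommand : Cmdlet
{
    [Parameter(Position = 0, Mandatory = true, HelpMessage = "The id of the foo to create.")]
    public string Id { get; set; }

    protected override void ProcessRecord()
    {
        // Call into your application layer here; WriteObject sends output down the pipeline.
        WriteObject("Created " + Id);
    }
}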
Also look at ANTLR, which has C# support.
It's still an early CTP, so it can't be used in production apps, but you may be interested in Oslo/MGrammar:
http://msdn.microsoft.com/en-us/oslo/
Jison is getting a lot of traction recently. It is a Bison port to JavaScript. Because of its extremely simple nature, I've ported the Jison parsing/lexing template to PHP, and now to C#. It is still very new, but if you get a chance, take a look at it here: https://github.com/robertleeplummerjr/jison/tree/master/ports/csharp/Jison
If you don't fear alpha software and want an alternative to Lex/Yacc for creating your own languages, you might look into Oslo. I would recommend sitting through the recordings of sessions TL27 and TL31 from last year's PDC. TL31 directly addresses the creation of Domain Specific Languages using Oslo.
Coco/R is a compiler generator with a .NET implementation. You could try that out, but I'm not sure if getting such a library to work would be faster than writing your own tokenizer.
http://www.ssw.uni-linz.ac.at/Research/Projects/Coco/
I would suggest csflex, a C# port of flex, the most famous Unix scanner generator.
I believe that lex/yacc are already in one of the SDKs (i.e. RTM), either the Windows SDK or the .NET Framework SDK.
The Gardens Point Parser Generator provides Yacc/Bison functionality for C#. It can be downloaded online, and a useful example of using GPPG is available as well.
As Anton said, PowerShell is probably the way to go. If you do want a lex/yacc implementation, then Malcolm Crowe has a good set.
Edit: Direct Link to the Compiler Tools
Just for the record, here is an implementation of a lexer and LALR parser in C#, for C#:
http://code.google.com/p/naive-language-tools/
It should be similar in use to Lex/Yacc; however, those tools (NLT) are not generators, so forget about speed.
Over the years, as I have gone through school and worked in the industry, I have often asked people for advice on commenting. Sadly, as we all know, commenting is something many developers treat as a side note and not much else. With that said, I usually get a fairly general answer, which really doesn't help much in seeing what will actually be useful as time goes on.
So, what do you think is the best way to structure commenting in C#, with Visual Studio?
At the very least, I would comment all parts of your public API, using a triple-slash XML comment block. This will make it easy to auto-generate documentation if and when the time comes.
Beyond that, I would comment any particular algorithms or pieces of code which are going to be hard to decipher in six months time. This 'selfish' approach to commenting (i.e. assume you'll have to maintain this code later) often leads to the best balance of ample documentation without overkill.
I try to follow some basic guidelines when writing comments (a short example follows the list):
Comments should be simple
Comments should provide clarity
Write documentation before you write implementation
Document why you're doing something, not what you're doing.
Use built-in (XML-style) comments for interfaces, methods, properties, and classes.
Provide a summary at the top of every file (e.g., Something.cs) with the file name, description, development history, and copyright information
Add comments for bug fixes (with bug number, if appropriate)
Make use of helpful annotations like //TODO, //BUG, and //BUGFIX
Don't comment out code unless you plan to use it
Add comments above the line(s) of code they apply to, not to the end of the line
Try to limit comments to a single line
Use // instead of /* */ for multi-line comments
Be clear: do not use "foo," "bar," etc.
Follow casing rules where appropriate (i.e., camelCasing and PascalCasing)
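A small, entirely made-up fragment pulling a few of these together:

// -----------------------------------------------------------------------------
// File:        OrderValidator.cs
// Description: Validates incoming orders before they reach the payment service.
// History:     2009-04-02  Created.
// Copyright:   (c) Example Corp.
// -----------------------------------------------------------------------------
using System;

public class OrderValidator
{
    /// <summary>
    /// Determines whether the supplied order total is acceptable.
    /// </summary>
    public bool IsValidTotal(decimal total)
    {
        // Zero totals are allowed because free promotional orders still need
        // to flow through the payment pipeline (the "why", not the "what").
        // TODO: reject negative totals once refunds move to their own path.
        return total >= 0m;
    }
}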
"Plenty and often"
--Bilbo, The Hobbit.
More seriously, comment things that are not obvious, and tell the reader what the goal of the code is, and perhaps why you chose it.
That's not going to change based on language.
Personally I use a combination of triple-slash (Sandcastle) XML comments and inline comments for more complicated sections. Comment often but keep it concise; nobody needs to read reams of fluff before they can figure out what something does :-)