I'm asking this because I find it quite a dangerous feature to distribute the class definition so that you can't really be sure if you know all about it. Even if I find three partial definitions, how do I know that there's not a fourth somewhere?
I'm new to C# but have spent 10 years with C++, maybe that's why I'm shaken up?
Anyway, the "partial" concept must have some great benefit, which I'm obviously missing. I would love to learn more about the philosophy behind it.
EDIT: Sorry, missed this duplicate when searching for existing posts.
Partial classes are handy when using code generation. If you want to modify a generated class (rather than inheriting from it) then you run the risk of losing your changes when the code is regenerated. If you are able to define your extra methods etc in a separate file, the generated parts of the class can be re-created without nuking your hand-crafted code.
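As a rough illustration (the Customer class and the generator are hypothetical, not taken from this answer), the two halves might look like this:

```csharp
// Customer.generated.cs -- produced by a (hypothetical) code generator; may be overwritten at any time
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Customer.cs -- hand-written half; survives regeneration of the file above
public partial class Customer
{
    public string DisplayName()
    {
        return string.Format("{0} (#{1})", Name, Id);
    }
}
```

Both files declare `partial class Customer`, so the compiler merges them into a single type; regenerating the first file never touches the second.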
The great benefit is hiding computer-generated code (produced by the designer).
Eric Lippert has a recent blog post about the partial keyword in general.
Another usage could be to give nested classes their own file.
Another point is that when a class implements multiple interfaces, you can split the interface implementations across different files.
That way each code file contains only the code that belongs to one interface implementation, which is in keeping with the separation of concerns principle.
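A minimal sketch of that idea, with invented interface and class names (IValidatable, IPersistable, Order):

```csharp
using System;

public interface IValidatable { bool Validate(); }
public interface IPersistable { void Save(); }

// Order.Validation.cs -- only the IValidatable implementation lives here
public partial class Order : IValidatable
{
    public bool Validate() { return Quantity > 0; }
}

// Order.Persistence.cs -- only the IPersistable implementation lives here
public partial class Order : IPersistable
{
    public void Save() { Console.WriteLine("saving order..."); }
}

// Order.cs -- the core data of the class
public partial class Order
{
    public int Quantity { get; set; }
}
```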
Two people editing the same class, and autogenerated designer code are the two immediate features I can see that were solved by partial classes and methods.
Having designer-generated code in a separate file is a lot easier to work with compared to .NET 1.1, where your code could often be mangled by Visual Studio (in Windows Forms).
Visual Studio still makes a mess of syncing the designer file, code behind and design file with ASP.NET.
If you have some kind of absurdly large class that for some reason you are unable or not allowed to break apart logically into smaller classes, then you can at least break it physically into multiple files in order to work with it more effectively. Essentially, you can view small chunks at a time and avoid scrolling up and down.
This might apply to legacy code that, perhaps due to some arcane policy, you are not allowed to mess with because of numerous and entrenched dependencies on the existing API.
Not necessarily the best use of partial classes, but certainly gives you an alternate option to organize code you might not be able to otherwise modify.
Maybe it's too late, but please let me add my 2 cents too:
When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously.
You can easily write your own code (for extended functionality) against a VS.NET-generated class. This allows you to write the code you need without messing with the system-generated code.
We have a Visual Studio 2010 solution that contains several C# projects in accordance with Jeffrey Palermo's Onion Architecture pattern (http://jeffreypalermo.com/blog/the-onion-architecture-part-1/). We want to add the Visual Studio Intellisense Comments using the triple slashes, but we want to see if anyone knows of best practices on how far to take this. Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Or is it better to use these comments in a more limited fashion, and if so what are the important objects to apply the Intellisense Comments to?
Add them to any methods exposed in public APIs, that way you can give the caller all the information they need when working with a foreign interface. For example, which exceptions the method may throw and other remarks.
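For instance, a sketch of such a comment block on a hypothetical public method (the AccountService type, its method and its parameters are invented for illustration):

```csharp
using System;

public class AccountService
{
    /// <summary>Transfers an amount between two accounts.</summary>
    /// <param name="fromAccountId">The account to debit.</param>
    /// <param name="toAccountId">The account to credit.</param>
    /// <param name="amount">The amount to transfer; must be positive.</param>
    /// <returns>The identifier of the resulting transaction.</returns>
    /// <exception cref="ArgumentOutOfRangeException">
    /// Thrown when <paramref name="amount"/> is zero or negative.
    /// </exception>
    /// <remarks>This call is not idempotent; retrying may transfer twice.</remarks>
    public int Transfer(int fromAccountId, int toAccountId, decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        // ... implementation elided ...
        return 0;
    }
}
```

Callers then see the summary, parameter notes and exception list in IntelliSense without opening the source.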
It's still beneficial to add these kinds of comments to private methods, I do it anyway to be consistent. It also helps if you plan on generating documentation from the comments.
While, technically, there is such a thing as too much documentation, 99.99999% of the time this exception doesn't apply.
Document everything as much as you can. Formal, informal, stream of thought..every scrap of comments will help some poor soul who inherits your code or has to interface with it.
(It's like the old rule "The error may be in the Compiler and not your code. Compilers have errors too. This is not one of those times.")
Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Yes
Or is it better to use these comments in a more limited fashion, and if so what are the important objects to apply the Intellisense Comments to? If you want to. Apply them to any function you write, not to what VS autogenerates.
I've seen limited "intellisense" comments... but extensive in-code comments that follow. So long as the "content" is there, life will be good. I generally include a brief blurb about each function in the intellisense comments, but put the majority of the "here's why I did this" in the function body and in dead-tree documents.
I agree with fletcher. Start with public facing classes and methods and then work your way down into private code. If you were starting from scratch I would highly recommend adding the XML comments to all code for your own convenience, but in this case starting with public methods and then updating other classes whenever you go in to update them is a good solution.
Coming from Java, I'm used to the package structure (com.domain.appname.tier).
Now I've started working on a C# project where all the projects have a depth of 1, i.e.:
ProjectA
- Utilities.cs
- Validation.cs
- ....
- Extraction.cs
and all the .cs files are around 2,500 lines long...
How do you order your classes and namespaces in C# so it makes sense, and keep the source files at a logical size?
The same way as I'd imagine you do in Java:
A few (< 10?) classes in each namespace, with namespaces arranged in a hierarchy
One class per source file
One or two screenfuls of text per source file
The project you've joined doesn't sound very structured and isn't a good example of good source code organisation.
In a similar way as in Java; you just need to make some effort :) Some C# developers, especially those with a VB background, tend to write looooong classes and put them at the top level.
I would suggest reading Microsoft guidelines on the subject:
Design Guidelines for Developing Class Libraries
In particular you should look at the following section:
Guidelines for Names
Even if you are not writing a class library you may still benefit a lot from these guidelines. FxCop (or Code Analysis as it is named now) will flag many constructs that are not in accordance with these guidelines.
I would first start grouping the classes together into areas of functionality; classes around authorisation, for example, would go under a folder within the project.
Then update the namespaces of the classes in the folder to reflect the change; Resharper does this for you, and newer versions of VS will probably do it too.
Lastly (if you are able) I would start to break the classes into smaller, more manageable sizes.
Here's an example of how I organize my solutions, which mirrors the namespace structure.
The project has a default namespace which, in this case, is CompanyName.ProjectName
Source files are organized logically into a directory structure. In the example, my WF4 activity designers are organized under Activities in a folder called Designers.
The way VS works is that, as you create directories in a project, you are also creating namespaces. So, if I were to add a new activity designer called "Foo" in the shown directory, its namespace would be
"CompanyName.ProjectName.Activities.Designers"
Visual Studio takes the default namespace, then uses the folder structure to determine the namespace for a particular file. Of course, once a file is created, moving it doesn't automatically refactor its namespace. But the system works very well not only for controlling namespaces for classes, but also for keeping files organized.
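A small sketch of how the default namespace and folder path combine (FooDesigner stands in for the hypothetical "Foo" designer mentioned above):

```csharp
// File: CompanyName.ProjectName\Activities\Designers\FooDesigner.cs
// Visual Studio pre-fills the namespace from the project's default namespace
// plus the folder path the file was created in.
namespace CompanyName.ProjectName.Activities.Designers
{
    public class FooDesigner
    {
        // designer implementation goes here
    }
}
```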
The same way as you would in Java.
In Java, packages organize classes into physical directories. I'm not sure about this, but IIRC the compiler even encourages this convention. In C# you're not obliged to organize your classes into separate directories that match your namespaces, but it's a very common convention.
Speaking of namespaces in C#, they do not follow the com.domain.appname.tier convention, but use the Company.Product.Tier format.
How to reorganize large classes depends on the application. This is an exercise in applying OOP guidelines and applies to both Java and C#.
If you are deeply engaged in the project, I recommend investing some time in redesigning the structure the way you are used to in Java, considering that packages are equivalent to namespaces in C#.
A C# programmer rewrote a Delphi 6 program (no GUI, just files-in-files-out grinding, about 50 procedures and functions totaling less than 1200 lines == 57kb keystrokes) that lives as a single .DPR file.
He delivered a project containing 58 files (52 of them .CS files) in 13 folders nested to various degrees, totaling over 330kb.
Is that typical of C# projects? What strategy do C# programmers typically use to decide how to chop up and organize their project?
Code-file size is a horrible metric to determine the worth of a project, especially in line-of-business projects. Three reasons for that:
1) Small code files are easier to understand than large ones, but this can lead to some repetition of certain constructs (using declarations, namespace declaration, etc.) and certainly adds to the number of files in the project.
2) Small classes are easier to understand than large ones. This is a major benefit for newcomers to the code. If they can wrap their head around any one class, they can expand their understanding outward from there.
3) Good code is larger than small code. When you add decent error-checking, documentation and descriptive method/variable names, your code is more resilient and maintainable, but also much larger. That's perfectly okay.
Now with that all said, of course there are plenty of cases where the code is big simply because the programmer doesn't know what they're doing. You'll be able to identify that by looking at the largest files; if you see a lot of repetition of precisely the same code... or if you see lots and lots of string concatenation.... or you don't see any comments at all (or the comments don't tell you anything useful) then you probably have some good old-fashioned code bloat on your hands.
It's more an artifact of the developer using the Visual Studio IDE (VS) than an issue with C#/.NET itself. The tendency, when using VS tools, is to put each class in its own .cs file because the Solution Explorer window shows files/folders in a tree-like structure, allowing the programmer to visually target their classes quickly.
Also the Visual Studio Add New Item dialog encourages a one-class-per-file approach by generating a new file each time you add a Class to your project.
The namespace hierarchy of a program is usually mimicked using directory folders in Solution Explorer (although it's not required to match) but this is just another visual quickie.
Example: (screenshot of a solution folder/namespace structure omitted; source: spaanjaars.com)
If the programmer were to work outside of the Visual Studio environment you'd likely have much less diarrhea on your hands. Ewww...
Without seeing the actual code of the original or the new code, I can't tell you whether the new organization is properly designed just from the line count, method count, and file size. In C# I usually:
Separate each class and interface into its own files.
Static helper methods are grouped by function
I usually separate files into folders by layers. Ex. GUI layer, Business Logic Layer, etc...
Extension methods are separated by the class or interface they relate to, or sometimes by function (see the sketch after this list).
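As an illustration of that last point, a small sketch of extension methods grouped by the type they extend (StringExtensions and Truncate are invented names, not part of the original answer):

```csharp
using System;

// StringExtensions.cs -- all string-related extension methods live in one file
public static class StringExtensions
{
    // Truncate a string to a maximum length, appending an ellipsis when cut.
    public static string Truncate(this string value, int maxLength)
    {
        if (value == null) throw new ArgumentNullException("value");
        return value.Length <= maxLength ? value : value.Substring(0, maxLength) + "...";
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine("A fairly long sentence".Truncate(8)); // "A fairly..."
    }
}
```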
Now, the new code could be broken up to follow a more object oriented design, but I can't tell without seeing the code.
Delphi is a great language, but it's not a magical language. So no, what you are seeing is not a typical scenario.
Without knowing anything about what your program does, it's hard to impossible to make any meaningful comment about why your programmer decided to a) rewrite it, and b) why there is such a disparity when he did.
I will say this though: it's common that when developers do not understand someone else's source, and especially when they do understand the requirements, they will choose to rewrite rather than refactor. It's something we see in this industry time and time again.
I am currently developing an application with which you can create "programs" without writing source code - just click & play, if you like.
Now the question is how to generate an executable program from my data model. There are many possibilities, but I am not sure which one is best for me. I need to generate assemblies with classes, namespaces and everything else that can be part of the application.
CodeDOM class: I've heard of lots of limitations and bugs with it. I need to create attributes on method parameters and return values. Is this supported?
Create C# source code programmatically and then call CompileAssemblyFromFile on it: This would work since I can generate any code I want and C# supports most CLR features. But wouldn't this be slow?
Use the reflection ILGenerator class: I think with this I can generate any possible .NET code. But I suspect this is much more complicated and error-prone than the other approaches?
Are there other possible solutions?
EDIT:
The tool is general-purpose for developing applications; it is not restricted to a specific domain. I don't know if it can be considered a visual programming language. The user can create classes, methods, method calls, and all kinds of expressions. It won't be very limiting, because you should be able to do most things which are allowed in real programming languages.
At the moment lots of things must still be written by the user as text, but the goal at the end is, that nearly everything can be clicked together.
You may find it rewarding to look at the Dynamic Language Runtime (DLR), which is more or less designed for creating high-level languages based on .NET.
It's perhaps also worth looking at some of the previous Stack Overflow threads on Domain Specific Languages, which contain some useful links to tools for working with DSLs; that sounds a little like what you are planning, although I'm still not absolutely clear from the question what exactly your aim is.
Most things "click and play" should be simple enough just to stick some pre-defined building-block objects together (probably using interfaces on the boundaries). Meaning: you might not need to do dynamic code generation - just "fake it". For example, using property-bag objects (like DataTable etc, although that isn't my first choice) for values, etc.
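As a rough sketch of that building-blocks-behind-interfaces idea (every name here is hypothetical, and a real property bag would need type checking and error handling):

```csharp
using System.Collections.Generic;

// Each "block" the user clicks together consumes a property bag and returns an updated one.
public interface IBlock
{
    IDictionary<string, object> Execute(IDictionary<string, object> input);
}

// One pre-defined building block: add 20% tax to a subtotal.
public class AddTaxBlock : IBlock
{
    public IDictionary<string, object> Execute(IDictionary<string, object> input)
    {
        var output = new Dictionary<string, object>(input);
        output["total"] = (decimal)input["subtotal"] * 1.2m;
        return output;
    }
}

// The "program" is just an ordered list of blocks; no code generation needed.
public class Pipeline
{
    private readonly List<IBlock> _blocks = new List<IBlock>();
    public void Add(IBlock block) { _blocks.Add(block); }

    public IDictionary<string, object> Run(IDictionary<string, object> input)
    {
        foreach (var block in _blocks)
            input = block.Execute(input);
        return input;
    }
}
```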
Another option for dynamic evaluation is the Expression class; especially in .NET 4.0, this is hugely versatile, and allows compilation to a delegate.
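For example, a minimal sketch of building an expression tree at runtime and compiling it to a delegate (the arithmetic is arbitrary):

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Build the expression x => x * 2 + 1 at runtime.
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression<Func<int, int>> lambda = Expression.Lambda<Func<int, int>>(
            Expression.Add(
                Expression.Multiply(x, Expression.Constant(2)),
                Expression.Constant(1)),
            x);

        Func<int, int> compiled = lambda.Compile(); // compiles to a delegate
        Console.WriteLine(compiled(20));            // prints 41
    }
}
```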
Do the C# source generation and don't care about speed until it matters. The C# compiler is quite quick.
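A minimal sketch of that route using the CodeDOM compiler services (the generated source string here is just a placeholder):

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class CompileDemo
{
    static void Main()
    {
        // Pretend this string was produced by your "click and play" model.
        string source = @"
            public static class Generated
            {
                public static int Answer() { return 42; }
            }";

        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            CompilerResults results = provider.CompileAssemblyFromSource(options, source);

            if (results.Errors.HasErrors)
                throw new InvalidOperationException("Compilation failed");

            var type = results.CompiledAssembly.GetType("Generated");
            Console.WriteLine(type.GetMethod("Answer").Invoke(null, null)); // 42
        }
    }
}
```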
When I wrote a dynamic code generator, I relied heavily on System.Reflection.Emit.
Basically, you programmatically create dynamic assemblies and add new types to them. These types are constructed using the Emit constructs (properties, events, fields, etc.). When it comes to implementing methods, you'll have to use an ILGenerator to pump out MSIL op-codes into your method. That sounds super scary, but you can use a couple of tools to help (see also the sketch after this list):
A pre-built sample implementation
ILDasm to inspect the op-codes of the sample implementation.
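To give a feel for it, here is a minimal Reflection.Emit sketch using the .NET Framework-era AppDomain API (the type and method names are invented):

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Define a dynamic assembly, module and type at runtime.
        var asmName = new AssemblyName("DynamicDemo");
        var asmBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
            asmName, AssemblyBuilderAccess.Run);
        var moduleBuilder = asmBuilder.DefineDynamicModule("MainModule");
        var typeBuilder = moduleBuilder.DefineType("Greeter", TypeAttributes.Public);

        // Emit a static method whose body just returns a constant string.
        var method = typeBuilder.DefineMethod(
            "Hello",
            MethodAttributes.Public | MethodAttributes.Static,
            typeof(string),
            Type.EmptyTypes);

        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldstr, "Hello from emitted IL");
        il.Emit(OpCodes.Ret);

        Type created = typeBuilder.CreateType();
        Console.WriteLine(created.GetMethod("Hello").Invoke(null, null));
    }
}
```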
It depends on your requirements; CodeDOM would certainly be the best fit for a "program" stored in a "data model".
However, it's unlikely that option 2 will be in any way measurably slower in comparison with any other approach.
I would echo others in that 1) the compiler is quick, and 2) "Click and Play" things should be simple enough so that no single widget added to a pile of widgets can make it an illegal pile.
Good luck. I'm skeptical that you can achieve point (2) for anything but really toy-level programs.
I'm writing a console tool to generate some C# code for objects in a class library. The best/easiest way I can actually generate the code is to use reflection after the library has been built. It works great, but this seems like a haphazard approach at best. Since the generated code will be compiled with the library, after making a change I'll need to build the solution twice to get the final result, etc. Some of these issues could be mitigated with a build script, but it still feels like a bit too much of a hack to me.
My question is, are there any high-level best practices for this sort of thing?
It's pretty unclear what you are doing, but what does seem clear is that you have some baseline code and, based on some of its properties, you want to generate more code.
So the key issues here are: given the baseline code, how do you extract interesting properties, and how do you generate code from those properties?
Reflection is a way to extract properties of code running (well, at least loaded) in the same execution environment as the reflection-using code. The problem with reflection is that it only provides a very limited set of properties, typically lists of classes, methods, or perhaps names of arguments. If all the code generation you want to do can be done with just that, well, then reflection seems just fine. But if you want more detailed properties about the code, reflection won't cut it.
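For example, a small sketch of the surface-level view reflection gives you over a built assembly (the path argument and the binding-flag filtering are illustrative):

```csharp
using System;
using System.Reflection;

class ReflectionDump
{
    static void Main(string[] args)
    {
        // Load the baseline library and list what reflection can see:
        // types, methods and parameter names -- but not method bodies.
        Assembly assembly = Assembly.LoadFrom(args[0]); // path to the built dll

        foreach (Type type in assembly.GetExportedTypes())
        {
            Console.WriteLine(type.FullName);
            foreach (MethodInfo method in type.GetMethods(
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
            {
                Console.WriteLine("  {0} {1}({2})",
                    method.ReturnType.Name,
                    method.Name,
                    string.Join(", ", Array.ConvertAll(
                        method.GetParameters(),
                        p => p.ParameterType.Name + " " + p.Name)));
            }
        }
    }
}
```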
In fact, the only artifact from which truly arbitrary code properties can be extracted is the source code as a character string (how else could you answer: is the number of characters between the add operator and the 'T' in the middle of the variable name a prime number?). As a practical matter, properties you can get from character strings are generally not very helpful (see the example I just gave :).
The compiler guys have spent the last 60 years figuring out how to extract interesting program properties and you'd be a complete idiot to ignore what they've learned in that half century.
They have settled on a number of relatively standard "compiler data structures": abstract syntax trees (ASTs), symbol tables (STs), control flow graphs (CFGs), data flow facts (DFFs), program triples, pointer analyses, etc.
If you want to analyze or generate code, your best bet is to process it first into such standard compiler data structures and then do the job. If you have ASTs, you can answer all kinds of question about what operators and operands are used. If you have STs, you can answer questions about where-defined, where-visible and what-type. If you have CFGs, you can answer questions about "this-before-that", "what conditions does statement X depend upon". If you have DFFs, you can determine which assignments affect the actions at a point in the code. Reflection will never provide this IMHO, because it will always be limited to what the runtime system developers are willing to keep around when running a program. (Maybe someday they'll keep all the compiler data structures around, but then it won't be reflection; it will just finally be compiler support).
Now, after you have determined the properties of interest, what do you do for code generation? Here the compiler guys have been so focused on generation of machine code that they don't offer standard answers. The guys that do are the program transformation community (http://en.wikipedia.org/wiki/Program_transformation). Here the idea is to keep at least one representation of your program as ASTs, and to provide special support for matching source code syntax (by constructing pattern-match ASTs from the code fragments of interest), and provide "rewrite" rules that say in effect, "when you see this pattern, then replace it by that pattern under this condition".
By connecting the condition to the various property-extracting mechanisms from the compiler guys, you get a relatively easy way to say what you want, backed up by that 50 years of experience. Such program transformation systems have the ability to read in source code, carry out analyses and transformations, and generally regenerate code after transformation.
For your code generation task, you'd read the baseline code into ASTs, apply analyses to determine properties of interest, use transformations to generate new ASTs, and then spit out the answer.
For such a system to be useful, it also has to be able to parse and prettyprint a wide variety of source code languages, so that folks other than C# lovers can also have the benefits of code analysis and generation.
These ideas are all reified in the DMS Software Reengineering Toolkit. DMS handles C, C++, C#, Java, COBOL, JavaScript, PHP, Verilog, ... and a lot of other languages.
(I'm the architect of DMS, so I have a rather biased view. YMMV).
Have you considered using T4 templates for performing the code generation? It looks like it's getting much more publicity and attention now, and has more support in VS2010.
This tutorial seems database centric but it may give you some pointers: http://www.olegsych.com/2008/09/t4-tutorial-creatating-your-first-code-generator/ in addition there was a recent Hanselminutes on T4 here: http://www.hanselminutes.com/default.aspx?showID=170.
Edit: Another great place is the T4 tag here on StackOverflow: https://stackoverflow.com/questions/tagged/t4
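For a feel of the syntax, here is a minimal T4 template sketch (the generated class names are placeholders; a real template would usually read them from your model or assembly):

```
<#@ template language="C#" #>
<#@ output extension=".cs" #>
namespace Generated
{
<# foreach (string name in new[] { "Customer", "Order" }) { #>
    public partial class <#= name #>Repository
    {
    }
<# } #>
}
```

The blocks between `<# ... #>` are ordinary C# that runs at generation time, while everything else is emitted verbatim into the output .cs file.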
EDIT: (By asker, new developments)
As of VS2012, T4 now supports reflection over an active project in a single step. This means you can make a change to your code, and the compiled output of the T4 template will reflect the newest version, without requiring you to perform a second reflect/build step. With this capability, I'm marking this as the accepted answer.
You may wish to use CodeDom, so that you only have to build once.
First, I would read this CodeProject article to make sure there are not language-specific features you'd be unable to support without using Reflection.
From what I understand, you could use something like the Common Compiler Infrastructure (http://ccimetadata.codeplex.com/) to programmatically analyze your existing C# source.
This looks pretty involved to me though, and CCI apparently only has full support for C# language spec 2. A better strategy may be to streamline your existing method instead.
I'm not sure of the best way to do this, but you could do the following:
As a post-build step on your base dll, run the code generator
As another post-build step, run csc or msbuild to build the generated dll
Other things which depend on the generated dll will also need to depend on the base dll, so the build order remains correct