Coming from Java, I'm used to the package structure (com.domain.appname.tier).
Now I've started working on a C# project where all the projects have a depth of 1:
i.e.
ProjectA
- Utilities.cs
- Validation.cs
- ....
- Extraction.cs
and all the .cs files are around 2,500 lines long...
How do you organize your classes and namespaces in C# so that the structure makes sense and the source files stay a reasonable size?
The same way as I'd imagine you do in Java:
A few (< 10?) classes in each namespace, with namespaces arranged in a hierarchy
One class per source file
One or two screenfuls of text per source file
The project you've joined doesn't sound very structured and isn't a good example of good source code organisation.
The same way as in Java; you just need to make some effort :) Some C# developers, especially those with a VB background, tend to write very long classes and put everything at the top level.
I would suggest reading Microsoft's guidelines on the subject:
Design Guidelines for Developing Class Libraries
In particular you should look at the following section:
Guidelines for Names
Even if you are not writing a class library you may still benefit a lot from these guidelines. FxCop (or Code Analysis as it is named now) will flag many constructs that are not in accordance with these guidelines.
I would first start grouping the classes together into areas of functionality; classes around authorisation, for example, would go in a folder within the project.
Then update the namespaces of the classes in each folder to reflect the change; ReSharper does this for you, and newer versions of VS probably will too.
Lastly (if you are able) I would start to break the classes down to a smaller, more manageable size.
Here's an example of how I organize my solutions, which mirrors the namespace structure.
The project has a default namespace which, in this case, is CompanyName.ProjectName
Source files are organized logically into a directory structure. In the example, my WF4 activity designers are organized under Activities in a folder called Designers.
The way VS works is that, as you create directories in a project, you are also creating namespaces. So, if I were to add a new activity designer called "Foo" in the shown directory, its namespace would be
"CompanyName.ProjectName.Activities.Designers"
Visual Studio takes the default namespace, then uses the folder structure to determine the namespace for a particular file. Of course, once the file is created, moving it won't automatically refactor its namespace. But the system works very well not only for controlling namespaces for classes, but also for keeping files organized.
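For example, a new designer class added in that folder would pick up a namespace like this (the class name and its contents are just placeholders):
// Stored at <project root>\Activities\Designers\FooDesigner.cs
namespace CompanyName.ProjectName.Activities.Designers
{
    public class FooDesigner
    {
        // designer implementation goes here
    }
}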
The same way as you would in Java.
In Java, packages organize classes into physical directories. I'm not sure about this, but IIRC the compiler even encourages this convention. In C# you're not obliged to organize your classes into separate directories that match your namespaces, but it's a very common convention.
Speaking of namespaces in C#, they do not follow the com.domain.appname.tier convention, but use the Company.Product.Tier format.
How to reorganize large classes depends on the application. This is an exercise in applying OOP guidelines and applies to both Java and C#.
If you are deeply engaged in the project, I recommend investing some time in redesigning the structure the way you are used to in Java, considering that packages are roughly equivalent to namespaces in C#.
This question is a followup to an earlier question and subsequent questions raised by research on MSDN per the links provided in the answer.
Here's an image of the Solution Explorer layout I've set up so far, and I want to make sure I'm on the right track organizationally.
First, because I'm using VS2010 for a C# class I'm taking, I'm organizing a ProgrammingClass solution (ITDEV110) and Assignment projects (ASSN3a, ASSN3b) within that solution. I read somewhere that a solution is like a house and a project like a room...so this makes good organizational sense to me.
Given that organizational strategy, I can't find the best way to save a copy of Assignment1 as the basis of Assignment2. Sometimes I get proj2's *.csproj file in the proj2 path, but the *.cs files in the proj1 path. Other times, my *.cs files show up in Solution Explorer with a dotted outline icon (not the *._cs in the pic). I can still click, edit, and save, but they still look like ghost classes in the explorer, and I'm not sure what that dotted ghost outline means for compiling and running.
So how can I move *.cs files between projects in a single solution while:
avoiding confusing VS2010 into thinking it has two Mains, and
ensuring the right versions of classes and methods are called?
Is it just a matter of "Save As..." a new project name? Or should I create a new project from existing code? Is this a job for namespaces? How does this differ if I want SOME of the code from #1 to BE accessible from #2?
I've been doing a lot of creating new classes for my programs by cutting-pasting from notepad...but I know there's got to be a better way.
Any resources or tips would be awesome.
Create a separate C# project for each programming assignment you get. Feel free to use the same solution, but it's best to keep projects separate. It is possible to share files between projects (using linked files), or to reference types and classes from other projects using project references, but putting schoolwork code into the same C# project is a lesson in pain.
I am about to start a localization project for my employer. It concerns a pre-existing project with many windows forms and an established code base, programmed in C# and ASP.NET. I have done research into how to localize an application in visual studio and found resources.
While these are an adequate solution to the problem, I am not entirely happy with the downsides of using resources. That is to say, they have a rather large footprint, requiring changes in each of the form files. Furthermore, the resource files are only editable from within Visual Studio. I would prefer enabling external translators without programming knowledge to do the translation.
So I came up with an alternative solution:
Build a static localization utility class with an extension method on String:
public static String Localize(this String s)
The utility class loads the localization strings from a file on startup. When the program needs a string somewhere, it is called as
"foo".Localize();
And the program would use the string itself as the key in the table to find the translation.
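Roughly, the sketch I have in mind looks like this (the file name and the key=value format are just placeholders):
using System.Collections.Generic;
using System.IO;

public static class Localizer
{
    // Loaded once at startup; "strings.txt" and the key=value format are placeholders.
    private static readonly Dictionary<string, string> Translations = Load("strings.txt");

    private static Dictionary<string, string> Load(string path)
    {
        var table = new Dictionary<string, string>();
        if (!File.Exists(path))
            return table;

        foreach (string line in File.ReadAllLines(path))
        {
            int separator = line.IndexOf('=');
            if (separator > 0)
                table[line.Substring(0, separator)] = line.Substring(separator + 1);
        }
        return table;
    }

    // Falls back to the original string when no translation is present.
    public static string Localize(this string s)
    {
        string translated;
        return Translations.TryGetValue(s, out translated) ? translated : s;
    }
}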
It seems a safe and effective solution, and I'm happy with the small footprint that it leaves on the existing codebase.
Basically I want to ask:
Are there downsides to my solution that I've missed?
Which file formats for the localization data should I look into (I've already encountered the .po file format)?
Is it a good enough reason to deviate from the resource files solution?
Any advice and/or considerations you may have will be appreciated.
You are trying to reinvent the wheel that MS invented long ago. You can use plenty of tools available for resources or even write your own Resources provider.
Some tools available: What tools are available for adding Localization to an ASP.NET project?
If you want to use a database for translators: Data Driven Resource provider from Rick Strahl
Are there downsides to my solution that I've missed?
I can point out one: what about the text in the .aspx files? Are you going to make your extension method available there as well? That would be tough, I guess.
e.g. <asp:Label Text="Title"> - how are you going to translate that?
Further, some of your claims are not entirely true.
the resource files are only editable from within Visual Studio
They are XML files, so you can use any editor to edit them, or write a custom utility to do that.
Are there downsides to my solution that I've missed?
The standard resource files go beyond changing the text.
You might need to resize certain elements to fit the new text (if you don't use the existing layout management mechanism). And for some languages you will need to change the fonts/fonts sizes (think Chinese, Japanese, Korean) or alignment (think right-to-left languages like Arabic and Hebrew).
Also, translating the standard files means that, using an editor aware of the format, one can see the dialog "as is"; this gives more context than stand-alone strings, which results in better translation quality.
A C# programmer rewrote a Delphi 6 program (no GUI, just files-in-files-out grinding, about 50 procedures and functions totaling less than 1200 lines == 57kb keystrokes) that lives as a single .DPR file.
He delivered a project containing 58 files (52 of them .CS files) in 13 folders nested to various degrees, totaling over 330kb.
Is that typical of C# projects? What strategy do C# programmers typically use to decide how to chop up and organize their project?
Code-file size is a horrible metric to determine the worth of a project, especially in line-of-business projects. Three reasons for that:
1) Small code files are easier to understand than large ones, but this can lead to some repetition of certain constructs (using declarations, namespace declaration, etc.) and certainly adds to the number of files in the project.
2) Small classes are easier to understand than large ones. This is a major benefit for newcomers to the code. If they can wrap their head around any one class, they can expand their understanding outward from there.
3) Good code is larger than minimal code. When you add decent error checking, documentation, and descriptive method/variable names, your code is more resilient and maintainable, but also much larger. That's perfectly okay.
Now with that all said, of course there are plenty of cases where the code is big simply because the programmer doesn't know what they're doing. You'll be able to identify that by looking at the largest files; if you see a lot of repetition of precisely the same code... or if you see lots and lots of string concatenation.... or you don't see any comments at all (or the comments don't tell you anything useful) then you probably have some good old-fashioned code bloat on your hands.
It's more an artifact of the developer using the Visual Studio IDE (VS) than an issue of C#/.NET itself. The tendency, when using VS tools, is to put each class in its own .cs file, because the Solution Explorer window shows files/folders in a tree-like structure, allowing the programmer to visually target their classes quickly.
Also the Visual Studio Add New Item dialog encourages a one-class-per-file approach by generating a new file each time you add a Class to your project.
The namespace hierarchy of a program is usually mimicked using directory folders in Solution Explorer (although it's not required to match) but this is just another visual quickie.
Example: [screenshot of a Solution Explorer tree whose folders mirror the namespace hierarchy] (source: spaanjaars.com)
If the programmer were to work outside of the Visual Studio environment you'd likely have much less diarrhea on your hands. Ewww...
Without seeing the original code or the new code, I can't tell you whether the new organization is properly designed just from the line count, method count, and file size. In C#, I usually:
Separate each class and interface into its own file.
Group static helper methods by function.
Separate files into folders by layer, e.g. GUI layer, business logic layer, etc.
Separate extension methods by the class or interface they relate to, or sometimes by function.
Now, the new code could be broken up to follow a more object oriented design, but I can't tell without seeing the code.
Delphi is a great language, but it's not a magical one. So no, what you are seeing is not a typical scenario.
Without knowing anything about what your program does, it's hard to impossible to make any meaningful comment about a) why your programmer decided to rewrite it, and b) why the disparity when he did.
I will say this though: it's common that when developers do not understand someone else's source, and especially when they do understand the requirements, they will choose to rewrite rather than refactor. It's something we see in this industry time and time again.
I'm asking this because I find it quite a dangerous feature to distribute the class definition so that you can't really be sure if you know all about it. Even if I find three partial definitions, how do I know that there's not a fourth somewhere?
I'm new to C# but have spent 10 years with C++, maybe that's why I'm shaken up?
Anyway, the "partial" concept must have some great benefit, which I'm obviously missing. I would love to learn more about the philosophy behind it.
EDIT: Sorry, missed this duplicate when searching for existing posts.
Partial classes are handy when using code generation. If you want to modify a generated class (rather than inheriting from it) then you run the risk of losing your changes when the code is regenerated. If you are able to define your extra methods etc in a separate file, the generated parts of the class can be re-created without nuking your hand-crafted code.
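For example (the file names and members here are hypothetical, just to illustrate the split):
// Customer.generated.cs -- output of a code generator; safe to regenerate at any time.
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Customer.cs -- hand-written members that survive regeneration.
public partial class Customer
{
    public override string ToString()
    {
        return Id + ": " + Name;
    }
}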
The great benefit is hiding the code generated by the designer.
Eric Lippert has a recent blog post about the partial keyword in general.
Another usage could be to give nested classes their own file.
Another point is that when a class implements multiple interfaces, you can split the interface implementations across different files.
That way every code file contains only the code that belongs to one interface implementation, which follows the separation of concerns principle.
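A rough sketch of what that can look like (the type and file names are made up):
// Widget.Comparison.cs -- only the comparison concern.
public partial class Widget : System.IComparable<Widget>
{
    public int CompareTo(Widget other)
    {
        return Size.CompareTo(other.Size);
    }
}

// Widget.Core.cs -- the rest of the class.
public partial class Widget : System.IDisposable
{
    public int Size { get; set; }

    public void Dispose()
    {
        // release resources here
    }
}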
Two people editing the same class, and auto-generated designer code, are the two immediate problems I can see that were solved by partial classes and methods.
Having designer-generated code in a separate file is a lot easier to work with compared to 1.1, where your code could often be mangled by Visual Studio (in Windows Forms).
Visual Studio still makes a mess of syncing the designer file, code behind and design file with ASP.NET.
If you have some kind of absurdly large class that for some reason you are unable or not allowed to logically break apart into smaller classes, then you can at least physically break it into multiple files in order to work with it more effectively. Essentially, you can view small chunks at a time, avoiding scrolling up and down.
This might apply to legacy code where, perhaps due to some arcane policy, you are not allowed to mess with the existing API because of numerous and entrenched dependencies.
Not necessarily the best use of partial classes, but certainly gives you an alternate option to organize code you might not be able to otherwise modify.
Maybe it's too late, but please let me add my 2 cents too:
- When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously.
- You can easily write your own code (for extended functionality) for a VS.NET-generated class. This allows you to write the code you need without messing with the system-generated code.
I'm writing a console tool to generate some C# code for objects in a class library. The best/easiest way I can actually generate the code is to use reflection after the library has been built. It works great, but this seems like a haphazard approach at best. Since the generated code will be compiled with the library, after making a change I'll need to build the solution twice to get the final result, etc. Some of these issues could be mitigated with a build script, but it still feels like a bit too much of a hack to me.
My question is, are there any high-level best practices for this sort of thing?
It's pretty unclear what you are doing, but what does seem clear is that you have some baseline code and, based on some of its properties, you want to generate more code.
So the key issues here are: given the baseline code, how do you extract interesting properties, and how do you generate code from those properties?
Reflection is a way to extract properties of code running (well, at least loaded) in the same execution environment as the reflection user code. The problem with reflection is that it only provides a very limited set of properties, typically lists of classes, methods, or perhaps names of arguments. If all the code generation you want to do can be done with just that, well, then reflection seems just fine. But if you want more detailed properties about the code, reflection won't cut it.
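As a rough illustration of the limited but sometimes sufficient information reflection gives you (the assembly file name here is hypothetical):
using System;
using System.Reflection;

class ReflectionSurvey
{
    static void Main()
    {
        // Reflection yields type names, method signatures and parameter names,
        // but nothing about what the method bodies actually do.
        Assembly assembly = Assembly.LoadFrom("BaselineLibrary.dll");
        foreach (Type type in assembly.GetTypes())
        {
            Console.WriteLine(type.FullName);
            foreach (MethodInfo method in type.GetMethods())
            {
                string[] parts = Array.ConvertAll(method.GetParameters(),
                    p => p.ParameterType.Name + " " + p.Name);
                Console.WriteLine("  " + method.Name + "(" + string.Join(", ", parts) + ")");
            }
        }
    }
}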
In fact, the only artifact from which truly arbitrary code properties can be extracted is the source code as a character string (how else could you answer: is the number of characters between the add operator and the 'T' in the middle of the variable name a prime number?). As a practical matter, properties you can get from character strings are generally not very helpful (see the example I just gave :).
The compiler guys have spent the last 60 years figuring out how to extract interesting program properties and you'd be a complete idiot to ignore what they've learned in that half century.
They have settled on a number of relatively standard "compiler data structures": abstract syntax trees (ASTs), symbol tables (STs), control flow graphs (CFGs), data flow facts (DFFs), program triples, pointer analyses, etc.
If you want to analyze or generate code, your best bet is to process it first into such standard compiler data structures and then do the job. If you have ASTs, you can answer all kinds of question about what operators and operands are used. If you have STs, you can answer questions about where-defined, where-visible and what-type. If you have CFGs, you can answer questions about "this-before-that", "what conditions does statement X depend upon". If you have DFFs, you can determine which assignments affect the actions at a point in the code. Reflection will never provide this IMHO, because it will always be limited to what the runtime system developers are willing to keep around when running a program. (Maybe someday they'll keep all the compiler data structures around, but then it won't be reflection; it will just finally be compiler support).
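(For what it's worth, in the C# world the Roslyn compiler APIs, the Microsoft.CodeAnalysis.CSharp package, now expose ASTs directly; this is not part of the DMS approach described in this answer, but a small sketch shows the kind of question an AST can answer that reflection cannot.)
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class AstSketch
{
    static void Main()
    {
        // Parse source text into an AST and ask which binary operators
        // each method body uses -- a property reflection cannot see.
        var tree = CSharpSyntaxTree.ParseText(
            "class C { int Add(int a, int b) { return a + b; } }");

        foreach (var method in tree.GetRoot().DescendantNodes()
                                   .OfType<MethodDeclarationSyntax>())
        {
            var operators = method.DescendantNodes()
                                  .OfType<BinaryExpressionSyntax>()
                                  .Select(e => e.OperatorToken.Text);
            Console.WriteLine(method.Identifier.Text + ": " + string.Join(", ", operators));
        }
    }
}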
Now, after you have determined the properties of interest, what do you do for code generation? Here the compiler guys have been so focused on generation of machine code that they don't offer standard answers. The guys that do are the program transformation community (http://en.wikipedia.org/wiki/Program_transformation). Here the idea is to keep at least one representation of your program as ASTs, and to provide special support for matching source code syntax (by constructing pattern-match ASTs from the code fragments of interest), and provide "rewrite" rules that say in effect, "when you see this pattern, then replace it by that pattern under this condition".
By connecting the condition to various property-extracting mechanisms from the compiler guys, you get a relatively easy way to say what you want, backed up by that 50 years of experience. Such program transformation systems have the ability to read in source code, carry out analyses and transformations, and generally to regenerate code after transformation.
For your code generation task, you'd read the baseline code into ASTs, apply analyses to determine properties of interest, use transformations to generate new ASTs, and then spit out the answer.
For such a system to be useful, it also has to be able to parse and pretty-print a wide variety of source code languages, so that folks other than C# lovers can also have the benefits of code analysis and generation.
These ideas are all reified in the
DMS Software Reengineering Toolkit. DMS handles C, C++, C#, Java, COBOL, JavaScript, PHP, Verilog, ... and a lot of other languages.
(I'm the architect of DMS, so I have a rather biased view. YMMV).
Have you considered using T4 templates for performing the code generation? It looks like it's getting much more publicity and attention now and more support in VS2010.
This tutorial seems database-centric but it may give you some pointers: http://www.olegsych.com/2008/09/t4-tutorial-creatating-your-first-code-generator/ In addition, there was a recent Hanselminutes on T4 here: http://www.hanselminutes.com/default.aspx?showID=170.
Edit: Another great place is the T4 tag here on StackOverflow: https://stackoverflow.com/questions/tagged/t4
EDIT: (By asker, new developments)
As of VS2012, T4 now supports reflection over an active project in a single step. This means you can make a change to your code, and the compiled output of the T4 template will reflect the newest version, without requiring you to perform a second reflect/build step. With this capability, I'm marking this as the accepted answer.
You may wish to use CodeDom, so that you only have to build once.
First, I would read this CodeProject article to make sure there are not language-specific features you'd be unable to support without using Reflection.
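A minimal sketch of what the CodeDom route looks like (the type names and output path here are arbitrary):
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class Generator
{
    static void Main()
    {
        // Build a tiny CodeDom graph: namespace Generated { public class Customer { } }
        var unit = new CodeCompileUnit();
        var ns = new CodeNamespace("Generated");
        var type = new CodeTypeDeclaration("Customer") { IsClass = true };
        ns.Types.Add(type);
        unit.Namespaces.Add(ns);

        // Emit C# source text from the graph.
        using (var writer = new StreamWriter("Customer.generated.cs"))
        {
            new CSharpCodeProvider().GenerateCodeFromCompileUnit(
                unit, writer, new CodeGeneratorOptions());
        }
    }
}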
From what I understand, you could use something like Common Compiler Infrastructure (http://ccimetadata.codeplex.com/) to programmatically analyze your existing C# source.
This looks pretty involved to me though, and CCI apparently only has full support for C# language spec 2. A better strategy may be to streamline your existing method instead.
I'm not sure of the best way to do this, but you could do the following:
As a post-build step on your base DLL, run the code generator.
As another post-build step, run csc or msbuild to build the generated DLL.
Other things which depend on the generated DLL will also need to depend on the base DLL, so the build order remains correct.