On a regular WinForms solution, how do you decide whether to break classes into different directories/namespaces or into separate projects? Aside from binary dependencies, should views, controllers, and models all be in different projects?
I tend to believe that you can happily work with a simpler system and separate your dependencies using folders. Adding extra projects makes the system slightly harder to work with, deploy and maintain as you now have several smaller things you have to coordinate.
Using folders, you will still have to ensure that hasty developers do not bypass your layering, which can be a real concern with junior developers. You can watch for violations using static checking (for example NDepend), but no checker is perfect. If there is specific functionality at a given level that you feel needs another protection level (internal), then by all means split it out into a separate project.
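For instance, if the data access lives in its own project, the implementation classes can be internal so that callers in other assemblies can only go through the public surface. A minimal sketch, with made-up project, namespace, and class names:

// MyApp.Data project (hypothetical) - compiled into its own assembly
namespace MyApp.Data
{
    // The only type other projects can see.
    public interface ICustomerRepository
    {
        string GetCustomerName(int customerId);
    }

    // internal: code in other assemblies cannot construct or call this
    // class directly, so the UI cannot bypass the interface.
    internal sealed class SqlCustomerRepository : ICustomerRepository
    {
        public string GetCustomerName(int customerId)
        {
            // real data access would go here
            return "customer " + customerId;
        }
    }

    // Public entry point that hands out the interface.
    public static class DataModule
    {
        public static ICustomerRepository CreateCustomerRepository()
        {
            return new SqlCustomerRepository();
        }
    }
}

With folders inside a single project, internal buys you nothing, because everything compiles into the same assembly.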
As for what folders to break them into, I would likely follow the conventions found in web MVP/MVC frameworks, such as:
Controllers\
Views\ (broken down by controller)
Model\
You might want to read this blog post on the topic. Good luck.
I am new to SpecFlow and I want to reuse steps/tests (.feature files, essentially) between solutions. I know there is a way to reuse steps between projects in the same solution by adding a reference to the project, but I'm not sure how to do essentially the same thing with a different solution. Thanks for any help on this one.
You can't reuse .feature files, but you can reuse step definitions and hooks.
You will have to add a reference to the project.
Here is a link on how to reference a project in Visual Studio: Link
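As a rough illustration, a shared step-definition class could live in a project that both test projects reference (the project and step names below are made up for the example):

using TechTalk.SpecFlow;

namespace Shared.Specs.Steps // hypothetical shared project
{
    [Binding]
    public class LoginSteps
    {
        [Given(@"I am registered")]
        public void GivenIAmRegistered()
        {
            // create or look up a registered test user here
        }

        [When(@"I login")]
        public void WhenILogin()
        {
            // drive the login through your app or UI automation here
        }

        [Then(@"I should be logged in")]
        public void ThenIShouldBeLoggedIn()
        {
            // assert on the logged-in state here
        }
    }
}

Depending on your SpecFlow version, you may also need to register the external binding assembly in the SpecFlow configuration.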
I do not think it is possible to use steps from a different solution. You will need to include them somewhere in your working solution to use them. I don't think Visual Studio lets you use inter-solution code unless you have compiled it and referenced the binary within your working solution.
Doing this is a bit of an anti-pattern. The reason for having feature files is to talk about WHAT the application does and WHY it's important. So feature files should contain things that are unique to your application domain, and there won't be much overlap between projects.
When you write features this way, even common functionality isn't really worth sharing, because the complexity of sharing outweighs the simplicity of just doing it again.
For example, logging in seems ripe for sharing between applications, but all you need in a feature is:
Given I am registered
When I login
Then I should be logged in
This is so simple that it's easier to just write another one for your second application.
Most steps that people have shared over the years are all about HOW things are done, e.g. clicking on things, filling in fields, etc. These generally lead to bloated scenarios, and again the cost outweighs the benefits.
If you still feel there is a lot of shared behaviour between your applications, you may have an architectural problem where you need to extract the shared behaviour into its own application and have your other applications delegate responsibility to it.
I work on a .NET C# application which contains two solutions, one for the client and one for the server. On the server side there are 80+ projects that have been used to separate the following architectural layers:
Infrastructure Layer
Integration Layer (External Systems)
Domain Layer
Repository Layer
Manager Layer
Service Layer
In addition, almost every layer has a test project. Now, a build of the solution takes 2 to 3 minutes, and many developers (including me :)) feel we need to tackle this problem.
Therefore, the proposed solution was to reduce the number of projects by merging them. In my view, it is probably a good way to minimize the build time and achieve what we want.
The proposal is that we merge our projects into 3 areas: one library for production code, one library for test code, and one for deployment projects (WCF host, etc.), with the layers kept logically separate inside the same project by namespaces.
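Roughly, the idea would look something like this (the namespace and class names are just placeholders for illustration):

// Single production assembly, e.g. Company.Server.dll
namespace Company.Server.Domain
{
    public class Order
    {
        public int Id { get; set; }
    }
}

namespace Company.Server.Repository
{
    using Company.Server.Domain;

    public interface IOrderRepository
    {
        Order GetById(int id);
    }
}

namespace Company.Server.Service
{
    using Company.Server.Domain;
    using Company.Server.Repository;

    // The service layer still depends "downwards" only by convention;
    // nothing in the compiler stops it from reaching across layers now.
    public class OrderService
    {
        private readonly IOrderRepository _orders;

        public OrderService(IOrderRepository orders)
        {
            _orders = orders;
        }

        public Order Load(int id)
        {
            return _orders.GetById(id);
        }
    }
}

Each layer keeps its own namespace, but the compiler no longer enforces the boundaries between them.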
However, my concerns are:
Would this separation be good for maintainability, given that each namespace would contain roughly a hundred or more classes?
If we have common functionality such as helpers, where do we put it?
Is there any other way to layer the solution?
I think you should split your solution into logical layers.
As for where to put the helpers: make a solution for them, at one of the lowest levels.
EXAMPLE
Software for a farm. You'll need to keep track of your animals and vegetables. You need a module for feeding the animals and one for selling the animals and vegetables on the consumer market.
This could be split into the following solutions:
Back-end
Sell Module: Everything for selling your products
Buy Module: Buying seeds, food for your animals, other products, ...
Scheduler Module: Trigger events for sowing seeds, harvesting, ...
Prediction Module: Predicting harvest quantities based on the weather, market prices, ...
...
Each of these back-end modules can have its own Data Access Layer, DTOs, WCF services, ...
These back-end solutions will only contain business logic, data access, etc., and there can be multiple front-end solutions connecting to them.
Front-end
ASP.NET MVC Application: Webshop for selling to a consumer
WPF Application: Approving sales
Other WPF Application: Buying things.
Mobile application: Getting the events to your phone or something.
(Another option is to connect 2 or more backend solutions into 1 front-end solution)
...
This is a BIG change for your project and it will have an impact. Make sure you think it through if you want to change it.
Multiple solutions will INCREASE your overall build time, so it's important to have a nightly build so every developer can always work against the latest binaries without having to build all the solutions on their local machine.
Note you can still use your layers in the different solutions:
Infrastructure Layer
Integration Layer (External Systems)
Domain Layer
Repository Layer
Manager Layer
Service Layer
To make this all work together without the binaries getting mixed up, you can map a drive (e.g. X:) with a Binaries folder that contains a subfolder for each solution, and have each solution copy its assemblies there in a post-build event. (Script this, so it works on every machine.)
If you have a good network infrastructure, you can also copy them to a server. So when you build all solutions, for example in TFS, it can copy them to a location all developers can access.
If you build in TFS, make sure your build order is correct: first the lowest layer, last the highest layer.
But as you split your solution into multiple solutions, you probably won't need every layer in every solution.
I recently read an article about Onion Architecture; maybe you can have a look at that too. (The article is specific to ASP.NET MVC.)
You can also have a look into CQRS.
Why 80+ projects when you only have 6 layers in your application?
You might answer that they cover a large number of functional areas, but do you need all of those functional areas in one solution in the first place?
I'd recommend reflecting architectural divisions with projects and functional divisions with solutions. Different solutions can reuse the same projects. This way you'll have one project for each reusable architectural layer and as many Domain projects as there are functional areas.
I definitely wouldn't merge the projects... I think you'll quickly end up with spaghetti code in each layer as the developers take shortcuts (whether they mean to or not) that they shouldn't be taking.
I'd be more inclined to separate the layers out into separate solutions... and use binary references instead of project references across the tiers. This can play havoc with branching though, be careful.
I've seen build times drop by making the projects build to a common place - apparently this can prevent VS rebuilding projects when it doesn't need to - but I don't know if this is true or not.
Some ideas here: http://blogs.microsoft.co.il/blogs/arik/archive/2011/05/17/speed-up-visual-studio-builds.aspx
Finally.... is the three minutes for a full build or just to unit test one project? Focus on whichever is the biggest issue. If unit testing is taking a long time, you've got a problem with dependencies. If the full solution is taking a long time, get a build server and focus on bringing your unit test development time down.
Hope that helps
A low impact way I've dealt with a problem like that in the past is to create a series of solution files that include just one of the projects and its test project (and perhaps the project's dependencies). Then, get yourself a tool like NCrunch and do most of your coding in these solutions, probably using TDD. This will give you lightning fast feedback loops and is decidedly in the spirit of the layered, decoupled approach. When I've done this in the past, I find that I only actually run the entire application a few times a day, max, and I rely heavily on red-green-refactor, which is nice anyway.
If you want, you don't even have to source control these little solution files -- developers can create their own and they can be borderline throw-away.
Of course, this is by no means a panacea and won't address the problem of long compile times when you want to run the application, but it can definitely help cut down on feedback time while promoting good design/development practice, and it has the advantage of being extremely low risk and fast to set up.
When should I use multiple class libraries in .NET? I have a situation where I need to use the Microsoft Office object model to check certain attributes of Microsoft Office files. Should I use different class libraries to process the different file types?
e.g. one library for Word files,
one library for PowerPoint,
and so on.
Or should I put everything into a single class library?
What are the questions I should ask myself before deciding to build multiple class libraries?
Think about your consumers: if somebody might want to use the library for Word files without the extra overhead of having all the other libraries, then separate them. If not, don't.
That said, keep in mind that separate assemblies is not necessarily the same as separate projects. You may want to use separate projects for each of these, even if you end up combining them into one big assembly in the end (see Single assembly from multiple projects). I've found it to be easier to manage version control on smaller projects.
It depends a bit on how and where you plan to use this functionality.
If you're going to be using portions of the functionality from multiple applications, and each application will only need to handle one of the file types (or at least not all of them), then it makes sense to separate out libraries by file type.
However, if all of your applications will typically handle every type of file, keeping them together will reduce the maintenance overhead of your solution.
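One way to keep that decision flexible is to put a small shared contract in a core library and hide each file type behind it, whether the implementations end up in one assembly or several. A rough sketch, with invented type and project names:

using System;
using System.Collections.Generic;

// Shared contract, e.g. in an OfficeInspection.Core project (hypothetical)
public interface IOfficeFileInspector
{
    bool CanHandle(string path);
    IDictionary<string, string> ReadAttributes(string path);
}

// Could live in OfficeInspection.Word or in the same assembly -
// the consuming code does not change either way.
public class WordFileInspector : IOfficeFileInspector
{
    public bool CanHandle(string path)
    {
        return path.EndsWith(".doc", StringComparison.OrdinalIgnoreCase)
            || path.EndsWith(".docx", StringComparison.OrdinalIgnoreCase);
    }

    public IDictionary<string, string> ReadAttributes(string path)
    {
        // open the document through the Word object model here
        return new Dictionary<string, string>();
    }
}

The consuming application codes against IOfficeFileInspector either way, so you can start with one library and split it later if a consumer only ever needs one file type.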
Keep It Simple. If you have no technical reason to separate, don't do it.
The answer to this is mostly personal opinion. There are many times when technical reasons or "best practices" patterns dictate how you should separate code. Some questions to ask yourself:
1) What (possible) other programs will reuse the same classes, and will they be deployed to the same location?
2) Do I have a small group of classes with little or no dependency on the rest of the system, and would it make logical sense to group them together in a class library?
I currently have a single solution that contains both the one application developed so far and projects for all of the homegrown libraries. This entire solution is also kept in a single Git repo. I am now going to be developing a second application that will make use of those same libraries. That application will have different release cycles than the first and different versions. The question (or questions) I have is how to split up the code, both in terms of the solution set up and in terms of Git.
A few other useful details before talking about answers:
The applications are deployed to a shared network drive, not to individual computers, so I have complete control over when they are deployed and what gets deployed with them
The library DLLs are not shared by the applications once built. Each application has in its folder a full copy of all the DLLs, PDBs and config files.
Currently, I'm the only one doing releases, but another person or two may end up doing releases, so I'd like to keep that in mind.
I've been rolling around a couple of ideas in my head, but none of them seem satisfactory. I've considered just keeping everything in one solution/one Git repo. I've also thought about splitting the solution across several Git repos using submodules, but submodules are cumbersome. I've also thought about making each application its own solution and putting all of the libraries in yet another; the question then is whether I can have multiple solutions open in Visual Studio. The libraries frequently need to change with the applications, so separating them too much into separate solutions or Git repos is going to make it hard to keep the libraries and apps in sync. Another concern I have is branching. If I split the solution into several Git repos, I can have branches for each application, but if I keep one Git repo, I can only have one set of branches for everything.
I may not even be asking the right questions to myself, and it's also possible that I just have a mental block keeping me from solving a simple solution. Either way, I defer to the SO community to give me some ideas. I hope everything is clear, but if not, I'll be glad to clarify.
While they might be cumbersome, I think submodules are the way to go on this one. I'm just going to guess your directory structure is something like:
mainapp
  \mainappdir
    \somefiles
    ...
  \library1
  \library2
In that case you'd want library1 and library2 to be submodules (that's probably obvious). They're really not that bad, just something to get used to in Git IMHO.
Another route to consider would be to symbolically link library1 and library2 on your filesystem for both apps to use. In that case, each library could be its own repo but not managed with submodules (I think you'd have to add them to your .gitignore file, though). By using symbolic links in each application, repo/source management would just be on the two library directories. Pulling/branching in one place would affect both apps and not require admin'ing the library files of each app.
I would split everything up into separate solutions, especially the libraries that will be used in multiple applications. As you mentioned, different applications and libraries have different release cycles and might end up being developed separately. It's up to you to split them up into logical units and ensure that the libraries are independent of the applications they will be used in.
As for what to do in Git, it would make sense to have separate repositories for each logical unit of work (application or library), or at the very least, separate branches within the same repository.
Good luck and don't be discouraged. This will be beneficial to you in the long run.
When/where do you decide to split a large Visual Studio project into multiple smaller projects? When it can be reusable? When the project is too big? (But how big is too big?)
And when you do split the project, do you:
group by database tables
group by similar functionality
other..
Pros of many projects:
Easier to isolate code for unit testing. I like to isolate code that has a dependency on a big external server, for example: code that talks to the SMTP server gets its own assembly, code that talks to the database gets its own assembly, code that talks to the web server, and code that is pure business logic like validations (see the sketch after these lists).
Pros of few projects:
Visual Studio goes faster
Some developers just don't get your vision about dividing up responsibilities and will start putting classes everywhere, so you end up with the pain of extra projects and the benefits of putting everything into one project.
Each project has a configuration, and when you make a decision about project configuration you often have to make the same change everywhere, such as setting or changing the strong name key.
Pros of many Solutions
You hit the maximum number of projects per solution later.
Only the stuff in your current solution gets compiled every time you hit F5.
If the project isn't expected to change in the life of your application, why re-compile it over and over? Call it done and move it to its own solution.
Cons of many Solutions
It's up to you to work out the dependencies between solutions and manually compile the dependencies first. This leads to complicated build scripts.
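For the isolation point under "Pros of many projects" above, the usual shape is a small interface in its own assembly with the SMTP-talking class behind it, so unit tests can substitute a fake. A minimal sketch, with made-up project and type names:

using System.Net.Mail;

// Lives in its own small project, e.g. MyApp.Email (hypothetical)
public interface IMailSender
{
    void Send(string to, string subject, string body);
}

public class SmtpMailSender : IMailSender
{
    private readonly string _host;

    public SmtpMailSender(string host)
    {
        _host = host;
    }

    public void Send(string to, string subject, string body)
    {
        using (var client = new SmtpClient(_host))
        {
            client.Send("noreply@example.com", to, subject, body);
        }
    }
}

// Business logic elsewhere takes an IMailSender, so its unit tests
// can pass in a fake instead of touching a real SMTP server.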
Projects should be cohesive: the logic should be related and accomplish a similar goal.
The answer will depend on the size of the product you are supporting. In general we organize our projects along domain and logic, and we will divide those even further. The more you divide, the more organized you must be, or you are going to hit the dreaded circular dependency issue.
When I do choose to break up a project, it is when it grows to be too large or when two areas are becoming too similar.
When complexity is rising, I do not split by tables; I generally split by functionality.
Reusability is another excellent reason to reduce lines of code, as well as to introduce a new project. However, be careful how many "utility" libraries you introduce, because they do have an impact on readability/understandability.
I do not think there is a line in the sand that says if you hit 3k SLOC, you have too much. It is all contextual.
I always have several projects (and therefore a solution), instead of one project with all of my source in it.
In some cases, it is unavoidable because you are using an open source library and want to be able to debug it. But more pragmatically, I typically have my applications provide functionality via plugins. This allows me to change the behavior or offer user-selectable behavior at runtime. In the non-plugin case, it allows you to update one portion of your program without updating everything. There are also cases where you can ship just the main application and only download the modules/assemblies when you need them.
One other reason is that you can create smaller test apps to exercise an assembly, rather than building a very large solution and potentially requiring a user to execute several (and irrelevant) GUI operations before even reaching the part you want to test. And this isn't just a testing concern -- maybe you have less-savvy users in your organization that only want to be presented with the bits that concern them.
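A bare-bones sketch of the plugin approach (the IPlugin interface, loader, and folder convention below are invented for illustration; the contract itself would live in a small shared assembly referenced by both the host and the plugins):

using System;
using System.IO;
using System.Reflection;

// Contract shared between the host application and its plugin assemblies
public interface IPlugin
{
    string Name { get; }
    void Run();
}

public static class PluginLoader
{
    // Load every assembly in a plugins folder and run each IPlugin it contains.
    public static void RunAll(string pluginDirectory)
    {
        foreach (var file in Directory.GetFiles(pluginDirectory, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(file);
            foreach (var type in assembly.GetTypes())
            {
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    var plugin = (IPlugin)Activator.CreateInstance(type);
                    plugin.Run();
                }
            }
        }
    }
}

Because the host only knows about the shared interface, dropping a new assembly into the plugins folder is enough to add or swap behavior without rebuilding the main application.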
When the overall purpose of the project remains the same, but the number of classes is becoming large, I tend to create folders and namespaces to better group functionality within the project. Classes that are coupled to each other tend to go in the same folder/namespace, so that if I need to understand a given class, the related classes are nearby in Solution Explorer. I usually only create new projects if I realize that a particular piece of functionality is very different in purpose or if there is a common dependency between existing projects.
I usually wind up with a few relatively small Framework projects that define interfaces for loose coupling between other projects, with larger projects for the different types of concrete functionality. That's always at least one project for the UI and one project for logic and data (often split into two projects if the data layer becomes very large in its own right.)
I move code to a new project if it has general functionality that is (theoretically) usable by other projects too. If the project is large because it represents a complex problem, then namespaces provide a great way to bring order to the code. Here you can, for example, introduce a (sub-)namespace for each SQL table, etc.