Splitting up a .NET solution/Git repo for multiple apps - C#

I currently have a single solution that contains both the one application developed so far and projects for all of the homegrown libraries. This entire solution is also kept in a single Git repo. I am now going to be developing a second application that will make use of those same libraries. That application will have different release cycles and versions than the first. The question (or questions) I have is how to split up the code, both in terms of the solution setup and in terms of Git.
A few other useful details before talking about answers:
The applications are deployed to a shared network drive, not to individual computers, so I have complete control over when they are deployed and what gets deployed with them.
The library DLLs are not shared by the applications once built. Each application has in its folder a full copy of all the DLLs, PDBs and config files.
Currently, I'm the only one doing releases, but another one or two may end up doing releases, so I'd like to keep that in mind.
I've been rolling around a couple of ideas in my head, but none of them seem satisfactory. I've considered just keeping everything in one solution/one Git repo. I've also thought about splitting up the solution across several Git repos using submodules, but submodules are cumbersome. I've also thought about making each application its own solution and putting all of the libraries in yet another; the question then is whether I can have multiple solutions open in Visual Studio. The libraries frequently need to change with the applications, so separating them too much into separate solutions or Git repos is going to make it hard to keep the libraries and apps in sync. Another concern I have is branching: if I split the solution into several Git repos, I can have branches for each application, but if I keep one Git repo, I can only have one set of branches for everything.
I may not even be asking the right questions, and it's also possible that I just have a mental block keeping me from seeing a simple solution. Either way, I defer to the SO community to give me some ideas. I hope everything is clear, but if not, I'll be glad to clarify.

While they might be cumbersome, I think submodules are the way to go on this one. I'm just going to guess your directory structure is something like:
mainapp
|-- mainappdir
|   |-- somefiles
|   ...
|-- library1
\-- library2
In that case you'd want library1 and library2 to be submodules (that's probably obvious). They're really not that bad, just something to get used to in Git IMHO.
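Roughly, the setup would look like this, assuming each library has already been extracted into its own repository (the URLs are placeholders):

# From the main app's repo, add each library as a submodule:
git submodule add https://example.com/git/library1.git library1
git submodule add https://example.com/git/library2.git library2
git commit -m "Add shared libraries as submodules"

# Anyone cloning the app later needs to initialize them:
git clone https://example.com/git/mainapp.git
cd mainapp
git submodule update --init

The app repo then records exactly which commit of each library it builds against, which also gives you per-application branching of library versions for free.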
Another route to consider would be to symbolically link library1 and library2 on your filesystem for both apps to use. In that case, each library could be its own repo but not managed with submodules (I think you'd have to add them to your .gitignore file though). By using symbolic links in each application, repo/source management would just be on the two library directories. Pulling/branching in one place would affect both apps and not require administering the library files of each app.
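A sketch of that on Windows, where all the paths are hypothetical (mklink may need an elevated prompt):

REM In each application's working directory, link to the shared library checkouts:
mklink /D library1 C:\src\library1
mklink /D library2 C:\src\library2

And in each application's .gitignore, so the links aren't tracked:

library1/
library2/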

I would split everything up into separate solutions, especially the libraries that will be used in multiple applications. As you mentioned, different applications and libraries have different release cycles and might end up being developed separately. It's up to you to split them up into logical units and ensure that the libraries are independent of the applications they will be used in.
As for what to do in Git, it would make sense to have separate repositories for each logical unit of work (application or library), or at the very least, separate branches within the same repository.
Good luck and don't be discouraged. This will be beneficial to you in the long run.


Best practice for referencing a project or assembly in a solution

I started at a new company which manages multiple projects (around 30). However, all their projects are in one Git repository. I now want to split all our projects into one Git repository per project. To achieve that, I went ahead and extracted every folder into a new folder containing its own Git repository.
However, some references were broken. While investigating, I found that project referencing was done in multiple ways, depending on the project:
Including the entire solution/project in the current solution.
Referencing the .csproj-file of another solution.
Referencing the built .dll (bin/debug).
In my opinion, the first way should not be the way to go, right? It seems like far too much overhead. So I'm split between 2 and 3, and I would like to hear how you people are doing it?
Looking forward to your input!
It's normal to have code you want to share between multiple solutions.
For this, we use projects like 'Infrastructure' or 'Logging' with their own CI builds. When a change is done, we create a release build which uploads the DLLs to a private NuGet server.
These projects are then included as DLLs in the other projects through NuGet and updated when needed. You also don't break other solutions when you change something in your logging; the other solutions have to update to the new logging version first.
What I do is host a NuGet server in the company, or you can use Azure DevOps to do that: https://learn.microsoft.com/en-us/azure/devops/artifacts/get-started-nuget?view=azure-devops.
After you set up the NuGet server, you can update/import the packages for each project. So, when you update the code of any project, publish it to the NuGet server and you can then update all other projects.
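A minimal sketch of that flow with the dotnet CLI (package ID, version, and feed URL are placeholders):

# Pack the library and push it to the private feed:
dotnet pack src/MyCompany.Logging -c Release -o ./artifacts
dotnet nuget push ./artifacts/MyCompany.Logging.1.2.0.nupkg --source https://pkgs.example.com/nuget/v3/index.json --api-key <key>

Consuming projects then take a normal package reference (e.g. <PackageReference Include="MyCompany.Logging" Version="1.2.0" />) and upgrade deliberately, one solution at a time.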
Your question sounds like "How to split a solution into smaller solutions".
30 projects is not that much; at 30 C# projects you're just at the point where splitting your solution starts to make sense. The solution is also the natural base unit for a repository.
If you analyze the dependencies of your C# projects, you can certainly form clusters: there are basics, referenced by everything, and front-end parts, referenced by nothing.
Basic projects (depending on nothing, but referenced by many) tend to be more stable and change less frequently; they are also more dangerous to change, with a higher risk of breaking changes. It's good to make access to them more complicated (= put them in a different solution), so that you don't change them frivolously just because you can see the source code and edit it.
The code and architecture become cleaner, since programmers tend to use a wrapper, derived classes, or interfaces to do what they want inside their active solution.
They no longer change the solutions they depend on as quickly and easily, so those stay more stable.
You can consider a solution as a product of its own, as a library or a final product.
So split projects by what is potentially going to be used in upcoming projects over the next few years versus what is used as a product for one client only.
Suppose you start a new product next week: which projects would you most likely include there? Those belong in a library.
It also simplifies life for new programmers if you can tell them "just use it, you don't need to dig into the source code", or "get familiar with this solution only", once you've grouped your C# projects into such clusters. They are not so overwhelmed by quantity.
Branching is also done per solution: you create a branch on one solution for a client request, and a branch of another solution to stay up to date with technology. This is much easier to handle with smaller bundles of projects.
A NuGet server, as proposed by others, is a good way to distribute updates. Without a server, you link to DLLs directly. If you do not have many updates, you either invest time in setting up the server or in copying a few DLLs around twice a year; one is not more complicated or time-consuming than the other. Manual jobs done by different people carry the risk of human error, but the task "copy all DLLs from one directory to another directory" might still work. Do not reference the output directory of one solution directly from the other solution. Put the "productive DLLs" in a separate directory and make your update an explicit decision: "yes, I want to update - use it now". An "automated update" that happens just because someone decides to build the other solution might cause trouble.
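A sketch of such a file reference in the consuming project's .csproj (the assembly name and the "productive" directory are hypothetical):

<!-- Reference the explicitly published copy, never the other solution's bin\ output -->
<Reference Include="MyCompany.Core">
  <HintPath>C:\SharedLibs\Release\MyCompany.Core.dll</HintPath>
</Reference>

Updating then means consciously copying a new DLL into C:\SharedLibs\Release, not picking up whatever someone's last build happened to produce.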

How to reuse SpecFlow steps from a different solution

I am new to SpecFlow and I want to reuse steps/tests (.feature files, essentially) between solutions. I know there is a way to reuse steps between projects in the same solution by adding a reference to the project, but I'm not sure exactly how to do essentially the same thing with a different solution. Thanks for any help on this one.
You can't reuse .feature files, but you can reuse step definitions and hooks.
You will have to add a reference to the project.
Here is the link on how to reference a project in Visual Studio: Link
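Besides the reference, SpecFlow also has to be told to scan the external assembly for bindings. A sketch of the App.config entry for older SpecFlow versions (the assembly name is hypothetical; SpecFlow 3+ uses a "stepAssemblies" section in specflow.json instead):

<specFlow>
  <stepAssemblies>
    <stepAssembly assembly="MyShared.SpecSteps" />
  </stepAssemblies>
</specFlow>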
I do not think it is possible to use steps from a different solution. You will need to include them in your working solution somewhere to use them. I don't think Visual Studio has the option to let you use inter-solution code unless you have compiled it and reference it within your working solution.
Doing this is a bit of an anti-pattern. The reason for having feature files is to talk about WHAT the application does and WHY it's important. So feature files should contain things that are unique to your application domain, and there won't be much overlap between projects.
When you write features this way even common functionality isn't really worth sharing, because the complexity outweighs the simplicity of doing things again.
For example, logging in is ripe for sharing between applications, but all you need in a feature is:
Given I am registered
When I login
Then I should be logged in
This is so simple that it's easier to just write another one for your second application.
Most steps that people have shared over the years are all about HOW things are done, e.g. clicking on things, filling in fields, etc. These generally lead to bloated scenarios, and again the cost outweighs the benefits.
If you still feel there is a lot of shared behaviour between your applications, you may have an architectural problem where you need to extract the shared behaviour into its own application and have your applications delegate responsibility to it.

Organizing code in separate Projects vs separate Namespaces

I work on a .NET C# application which contains 2 solutions, for client and server. On the server side there are 80+ projects that have been used to separate the following architectural layers:
Infrastructure Layer
Integration Layer (External Systems)
Domain Layer
Repository Layer
Manager Layer
Service Layer
In addition, almost every layer has a test project. Now, the build time of the solution is 2 to 3 minutes, and many developers (including me :)) feel we need to tackle this problem.
Therefore, the proposed solution was to reduce the number of projects by merging them. In my view, it is probably a good way to minimize the build time and achieve what we want.
The proposal is to merge our projects into 3 areas: one library for production code, one library for test code, and one for deployment projects (WCF host, etc.), with the layers divided logically within the same project by separating the namespaces.
However, my concerns are:
Is this separation good for maintainability, given that there would be more than a hundred classes per namespace, approximately?
If we have common functionality such as helpers, where do we put those?
Is there any other way to layer the solution?
I suggest you split your solution into logical layers.
As for where to put the helpers: make a solution for them at one of the lowest levels.
EXAMPLE
Software for a farm. You'll need to keep track of your animals and vegetables. You need a module for feeding the animals and one for selling the animals and vegetables on the consumer market.
This could be split into the following solutions:
Back-end
Sell Module: Everything for selling your products
Buy Module: Buying seeds, food for your animals, other products, ...
Scheduler Module: Trigger events for sowing seeds, harvesting, ...
Prediction Module: Predicting harvest quantities from the weather, market prices, ...
...
Each of these back-end modules can have its own Data Access Layer, DTOs, WCF services, ...
These solutions will only contain business logic, data access, and so on, and there can be multiple front-end solutions connecting to these back-end solutions.
Front-end
ASP.NET MVC Application: Webshop for selling to a consumer
WPF Application: Approving sales
Other WPF Application: Buying things.
Mobile application: Getting the events to your phone or something.
(Another option is to connect 2 or more back-end solutions to 1 front-end solution.)
...
This is a BIG change for your project and it will have an impact. Make sure you think it through if you want to change it.
Multiple solutions will INCREASE your overall Build Time and it's important to have a nightly build so every developer can always work on the latest binaries, without having to build all the solutions on his local machine.
Note you can still use your layers in the different solutions:
Infrastructure Layer
Integration Layer (External Systems)
Domain Layer
Repository Layer
Manager Layer
Service Layer
To make this all work together without getting in a mess with binaries, you can map a drive, e.g. X:, with a binaries folder that contains a folder for each solution, and have each solution copy its assemblies there in a post-build event. (Script this so it works on every machine.)
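A sketch of such a post-build event, using the standard Visual Studio macros (the X:\Binaries layout is the hypothetical one from above):

REM Copy this project's output to the shared binaries folder for its solution
xcopy "$(TargetDir)*.dll" "X:\Binaries\$(SolutionName)\" /Y /I
xcopy "$(TargetDir)*.pdb" "X:\Binaries\$(SolutionName)\" /Y /I

Higher-layer solutions then reference the DLLs from X:\Binaries\<solution> rather than from each other's bin folders.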
If you have good network infrastructure, you can also copy them to a server. So when you build all solutions, for example in TFS, it can copy them to a location all developers can access.
If you build in TFS, make sure your build order is correct: first the lowest layer, last the highest layer.
And as you split your solution up into multiple solutions, you probably won't need every layer in every solution.
I recently read an article about Onion Architecture; maybe you can have a look at that too. (It's written specifically for ASP.NET MVC.)
You can also have a look into CQRS.
Why 80+ projects when you only have 6 layers in your application?
You might answer that they cover a large number of functional areas, but do you need all these functional areas in one solution in the first place?
I'd recommend reflecting architectural divisions with projects and functional divisions with solutions. Different solutions can reuse the same projects. This way you'll have one project for each reusable architectural layer and as many Domain projects as there are functional areas.
I definitely wouldn't merge the projects... I think you'll quickly end up with spaghetti code in each layer as the developers take shortcuts (whether they mean to or not) that they shouldn't be taking.
I'd be more inclined to separate the layers out into separate solutions... and use binary references instead of project references across the tiers. This can play havoc with branching though, be careful.
I've seen build times drop by making the projects build to a common place - apparently this can prevent VS rebuilding projects when it doesn't need to - but I don't know if this is true or not.
Some ideas here: http://blogs.microsoft.co.il/blogs/arik/archive/2011/05/17/speed-up-visual-studio-builds.aspx
Finally.... is the three minutes for a full build or just to unit test one project? Focus on whichever is the biggest issue. If unit testing is taking a long time, you've got a problem with dependencies. If the full solution is taking a long time, get a build server and focus on bringing your unit test development time down.
Hope that helps
A low impact way I've dealt with a problem like that in the past is to create a series of solution files that include just one of the projects and its test project (and perhaps the project's dependencies). Then, get yourself a tool like NCrunch and do most of your coding in these solutions, probably using TDD. This will give you lightning fast feedback loops and is decidedly in the spirit of the layered, decoupled approach. When I've done this in the past, I find that I only actually run the entire application a few times a day, max, and I rely heavily on red-green-refactor, which is nice anyway.
If you want, you don't even have to source control these little solution files -- developers can create their own and they can be borderline throw-away.
Of course, this is by no means a panacea and won't address the problem of long compile times when you want to run the application, but it can definitely help cut down on feedback time while promoting good design/development practice, and it has the advantage of being extremely low risk and fast to set up.
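On newer SDK-style toolchains, spinning up one of these throwaway working-set solutions is a couple of commands (project names and paths are hypothetical):

# Create a small solution holding just one project and its tests
dotnet new sln -n BillingWorkingSet
dotnet sln BillingWorkingSet.sln add src/Billing/Billing.csproj test/Billing.Tests/Billing.Tests.csproj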

Best practices for large solutions in Visual Studio (2008) [closed]

We have a solution with around 100+ projects, most of them C#. Naturally, it takes a long time to both open and build, so I am looking for best practices for such beasts. The kinds of questions I am hoping to get answers to are:
how do you best handle references between projects?
should "copy local" be on or off?
should every project build to its own folder, or should they all build to the same output folder (they are all part of the same application)?
are solution folders a good way of organizing stuff?
I know that splitting the solution up into multiple smaller solutions is an option, but that comes with its own set of refactoring and building headaches, so perhaps we can save that for a separate thread :-)
You might be interested in these two MSBuild articles that I have written.
MSBuild: Best Practices For Creating Reliable Builds, Part 1
MSBuild: Best Practices For Creating Reliable Builds, Part 2
Specifically, in Part 2 there is a section, Building large source trees, that you might want to take a look at.
To briefly answer your questions here though:
CopyLocal? For sure turn this off
Build to one or many output folders? Build to one output folder
Solution folders? This is a matter of taste.
Sayed Ibrahim Hashimi
My Book: Inside the Microsoft Build Engine : Using MSBuild and Team Foundation Build
+1 for sparing use of solution folders to help organise stuff.
+1 for each project building to its own folder. We initially tried a common output folder and this can lead to subtle and painful-to-find out-of-date references.
FWIW, we use project references for solutions, and although nuget is probably a better choice these days, have found svn:externals to work well for both 3rd party and (framework type) in-house assemblies. Just get into the habit of using a specific revision number instead of HEAD when referencing svn:externals (guilty as charged:)
Unload projects you don't use often, and buy an SSD. An SSD doesn't improve compile time, but Visual Studio becomes twice as fast to open/close/build.
We have a similar problem as we have 109 separate projects to deal with. To answer the original questions based on our experiences:
1. How do you best handle references between projects
We use the 'add reference' context menu option. If 'project' is selected, then the dependency is added to our single, global solution file by default.
2. Should "copy local" be on or off?
Off in our experience. The extra copying just adds to the build times.
3. Should every project build to its own folder, or should they all build to the same output folder(they are all part of the same application)
All of our output is put in a single folder called 'bin'. The idea is that this folder is the same as when the software is deployed. This helps prevent issues that occur when the developer setup is different from the deployment setup.
4. Are solutions folders a good way of organizing stuff?
No in our experience. One person's folder structure is another's nightmare. Deeply nested folders just increase the time it takes to find anything. We have a completely flat structure but name our project files, assemblies and namespaces the same.
Our way of structuring projects relies on a single solution file. Building this takes a long time, even if the projects themselves have not changed. To help with this, we usually create another 'current working set' solution file. Any projects that we are working on get added to this. Build times are vastly improved, although one problem we have seen is that Intellisense fails for types defined in projects that are not in the current set.
A partial example of our solution layout:
\bin
OurStuff.SLN
OurStuff.App.Administrator
OurStuff.App.Common
OurStuff.App.Installer.Database
OurStuff.App.MediaPlayer
OurStuff.App.Operator
OurStuff.App.Service.Gateway
OurStuff.App.Service.CollectionStation
OurStuff.App.ServiceLocalLauncher
OurStuff.App.StackTester
OurStuff.Auditing
OurStuff.Data
OurStuff.Database
OurStuff.Database.Constants
OurStuff.Database.ObjectModel
OurStuff.Device
OurStuff.Device.Messaging
OurStuff.Diagnostics
...
[etc]
We work on a similar large project here. Solution folders have proved to be a good way of organising things, and we tend to just leave copy local set to true. Each project builds to its own folder, and then we know that for each deployable project we have the correct subset of the binaries in place.
As for the time opening and time building, that's going to be hard to fix without breaking into smaller solutions. You could investigate parallelising the build (google "Parallel MS Build" for a way of doing this and integrating into the UI) to improve speed here. Also, look at the design and see if refactoring some of your projects to result in fewer overall might help.
In terms of easing the building pain, you can use the "Configuration Manager..." option for builds to enable or disable building of specific projects. You can have a "Project [n] Build" that could exclude certain projects and use that when you're targeting specific projects.
As far as the 100+ projects go, I know you don't want to get hammered in this question about the benefits of cutting down your solution size, but I think you have no other option when it comes to speeding up the load time (and memory usage) of devenv.
What I typically do here depends a bit on how the "debug" process actually happens. Typically, though, I do NOT set copy local to true. I set up the build directory for each project to output everything to the desired endpoint.
Therefore, after each build I have a populated folder with all DLLs and any windows/web application, with all items in the proper location. Copy local isn't needed since the DLLs end up in the right place in the end.
Note
The above works for my solutions, which typically are web applications and I have not experienced any issues with references, but it might be possible!
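A sketch of what that looks like in a project file (the relative deployment path is hypothetical):

<!-- Send build output straight to the deployment layout instead of bin\Debug -->
<PropertyGroup>
  <OutputPath>..\..\Deploy\bin\</OutputPath>
</PropertyGroup>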
We have a similar issue. We solve it using smaller solutions. We have a master solution that opens everything, but performance on that is bad. So we segment smaller solutions by developer type: DB developers have a solution that loads the projects they care about, service developers and UI developers the same thing. It's rare that somebody has to open up the whole solution to get their day-to-day work done. It's not a panacea -- it has its upsides and downsides. See "multi-solution model" in this article (ignore the part about using VSS :)
I think with solutions this large the best practice is to break them up. You can think of the "solution" as a place to bring together the necessary projects, and perhaps other pieces, to work on a solution to a problem. By breaking the 100+ projects into multiple solutions specialized for developing solutions to only a part of the overall problem, you deal with less at a given time, thereby speeding up your interactions with the required projects and simplifying the problem domain.
Each solution would produce the output which it is responsible for. This output should have version information, which can be set in an automated process. When the output is stable, you can update the references in dependent projects and solutions with the latest internal distribution. If you still want to step into the code and access the source, you can actually do this with the Microsoft symbol server, which Visual Studio can use to let you step into referenced assemblies and even fetch the source code.
Simultaneous development can be done by specifying interfaces upfront and mocking out the assemblies under development while you are waiting for dependencies that are not complete but you wish to develop against.
I find this to be a best practice because there is no limit to how complex the overall effort can get when you break it down physically in this manner. Putting all the projects into a single solution will eventually hit an upper limit.
Hope this information helps.
We have about 60+ projects and we don't use solution files. We have a mix of C# and VB.Net projects. Performance was always an issue. We don't work on all the projects at the same time, so each developer creates their own solution files based on the projects they're working on. The solution files don't get checked into our source control.
All Class library projects would build to a CommonBin folder at the root of the source directory. Executable / Web Projects build to their individual folder.
We don't use project references; instead, we use file-based references from the CommonBin folder. I wrote a custom MSBuild task that inspects the projects and determines the build order.
We have been using this for few years now and have no complaints.
It all has to do with your definition and view of what a solution and a project are. In my mind a solution is just that: a logical grouping of projects that solve a very specific requirement. We develop a large Intranet application. Each application within that Intranet has its own solution, which may also contain projects for exes or Windows services. And then we have a centralized framework with things like base classes, helpers and httphandlers/httpmodules. The base framework is fairly large and is used by all applications. By splitting up the many solutions in this way you reduce the number of projects required by a solution, as most of them have nothing to do with one another.
Having that many projects in a solution is just bad design. There should be no reason to have that many projects under a solution. The other problem I see is with project references, they can really screw you up eventually, especially if you ever want to split up your solution into smaller ones.
My advice is to do this and develop a centralized framework (your own implementation of Enterprise Library if you will). You can either GAC it to share or you can directly reference the file location so that you have a central store. You could use the same tactic for centralized business objects as well.
If you want to directly reference the DLL, you will want to reference it in your project with copy local false (somewhere like c:\mycompany\bin\mycompany.dll). At runtime you will need to add some settings to your app.config or web.config to make it reference a file that is not in the GAC or the runtime bin. In actuality it doesn't matter if it's copy local or not, or if the DLL ends up in the bin or is even in the GAC, because the config will override both of those. I think it is bad practice to copy local and have a messy system. You will most likely have to copy local temporarily if you need to debug into one of those assemblies, though.
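A sketch of the kind of runtime binding being described, using a codeBase element (the assembly name, public key token, and path are hypothetical; pointing codeBase outside the application base requires a strong-named assembly):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="mycompany" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
        <codeBase version="1.0.0.0" href="file:///c:/mycompany/bin/mycompany.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>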
You can read my article on how to use a DLL globally without the GAC. I really dislike the GAC mostly because it prevents xcopy deployment and does not trigger an autorestart on applications.
http://nbaked.wordpress.com/2010/03/28/gac-alternative/
Setting CopyLocal=false will reduce build time, but can cause various issues at deployment time.
There are many scenarios when you need to leave Copy Local set to True, e.g.:
Top-level projects,
Second-level dependencies,
DLLs called by reflection.
My experience with setting CopyLocal=false wasn't successful. See the summary of pros and cons in my blog post "Do NOT Change 'Copy Local' project references to false, unless understand subsequences".

Directories or projects.

On a regular WinForms solution, how do you decide whether to break classes into different directories/namespaces or separate projects? Besides binary dependencies, should views, controllers and models all be in different projects?
I tend to believe that you can happily work with a simpler system and separate your dependencies using folders. Adding extra projects makes the system slightly harder to work with, deploy and maintain as you now have several smaller things you have to coordinate.
Using folders you will still have to ensure that hasty developers do not bypass your layering, which can be a big concern with junior developers. You can watch out for violations using static checking (like NDepend) but no checker is perfect. If you have specific functionality at each level that you feel you need another protection level (internal) then by all means split it up into separate projects.
As for what folders to break them into, I would likely follow the conventions found in web MVP/MVC frameworks, such as:
Controllers\
Views\
  (broken down by controller)
Model\
You might want to read this blog post on the topic. Good luck.
