My company is having trouble figuring out the best way to manage our builds, releases, and branches... Our basic setup is that we maintain four applications: two WPF applications and two ASP.NET applications. All four share common libraries, so currently they all live in one folder: /trunk/{app1, app2, app3, app4}.
This makes it very hard to branch/tag a single application, because you end up branching all four at the same time. We would like to separate them out into something like {app1,app2,app3,app4}/{trunk,tags,branches}, but then we run into the question of where to put the shared libraries.
We can't pull the shared libraries in as SVN externals, because then when you branch/tag, the branch still references the trunk's shared libs instead of having them branched as well.
Any tips? Ideas?
We are currently using SVN and CruiseControl.NET.
EDIT: The shared libraries are changing often right now, which is why we can't reference them as SVN externals pointing at trunk: we might need to change them in the branch. For the same reason we can't consume them as binary references.
It's also very hard to test and debug when the libraries are statically built instead of including the source.
I guess it all depends on how stable the shared libraries are. My preference would be for the shared libraries to be treated as their own project, built in CruiseControl like the others. Then the four main applications would have binary references to the shared libraries.
The primary advantage with this approach is the stability of the applications now that the shared libraries are static. A change to the libraries wouldn't affect the applications until they explicitly updated the binaries to the newer version. Branching brings the binary references with it. You won't have the situation where a seemingly innocuous change breaks the other three applications.
Can you clarify why you don't like branching all four applications at the same time?
"This makes it very hard to branch/tag a single application because you are branching all 4 at the same time"
I usually put all my projects directly under trunk, as you are currently doing. Then when I create a release branch or a feature branch, I just ignore the other projects that get carried along. Remember, SVN copies are cheap; they're not taking up extra space on your server.
To be specific, here's how I would lay out the source tree you've described:
trunk
    WPF1
    WPF2
    ASP.NET 1
    ASP.NET 2
    lib1
    lib2
branches
    WPF1 v 1.0
        WPF1
        WPF2
        ASP.NET 1
        ASP.NET 2
        lib1
        lib2
    WPF1 v 1.1
        WPF1
        WPF2
        ASP.NET 1
        ASP.NET 2
        lib1
        lib2
    lib1 payment plan
        WPF1
        WPF2
        ASP.NET 1
        ASP.NET 2
        lib1
        lib2
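Creating one of those branches is then a single cheap copy. A minimal sketch (run from inside a working copy so the ^/ repository-root syntax resolves; the branch name matches the layout above):

rem branch the whole trunk for the WPF1 1.1 release; the other projects come along for free
svn copy "^/trunk" "^/branches/WPF1 v 1.1" -m "Create release branch for WPF1 v 1.1"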
We are kicking off an open source project to try to deal with this issue. If anyone is interested in commenting on it or contributing to it, it's at:
http://refix.codeplex.com
I agree with @Brian Frantz. There's no reason not to treat the shared libraries as their own project that is built daily, with your projects taking a binary dependency on the daily builds.
But even if you want to keep them as a source dependency and build them with the app, why wouldn't the SVN externals approach work for you? When you branch a particular app, there's no need to branch the shared library as well, unless you need a separate copy of it for that branch. But that would mean it's not a shared library anymore, right?
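For reference, a hedged sketch of how the external could be wired up (the repository layout here is hypothetical). Pinning the external to a revision also keeps a branch or tag reproducible even though the library itself is never branched:

rem run from inside a working copy; pin lib1 at r1234 for the app1 branch (paths hypothetical)
svn propset svn:externals "-r1234 ^/libs/lib1/trunk lib1" branches/app1-v1.0
svn commit -m "Reference lib1 at r1234 via svn:externals" branches/app1-v1.0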
I've tried solving this problem several ways over the years, and I can honestly say there is no best solution.
My team is currently in a huge development phase and everyone basically needs to be working off of the latest and greatest of the shared libs at any given time. This being the case we have a folder on everyone's C: drive called SharedLibs\Latest that is automatically synced up with the latest development release of each of our shared libraries. Every project that should be drinking from the firehose has absolute file references to this folder. As people push out new versions of the shared libs, the individual projects end up picking them up transparently.
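The sync step itself can be something as simple as a scheduled mirror from a team share; the share path and mechanism below are assumptions, not part of the original setup:

rem mirror the latest shared libs from the build share down to each developer's local folder
robocopy \\buildserver\SharedLibs\Latest C:\SharedLibs\Latest /MIR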
In addition to the latest folder, we have a SharedLibs\Releases folder which has a hierarchy of folders named for each version of each shared lib. As projects mature and get towards release candidate phase, the shared lib references are pointed to these stable folders.
The biggest downside to this is that this structure needs to be in place for any project to build. If someone wants to build an app 10 years from now, they will need this structure. It is important to note that these folders need to exist on the build/CI server as well.
Before we did this, each solution had a lib folder under source control containing the binaries. Each project owner was tasked with propagating new shared dlls. Since most people owned several projects, things often fell through the cracks for projects still in the non-stable phase. Additionally, TFS didn't seem to track changes to binary files very well. If TFS were better at tracking dlls, we probably would have used a shared-libs solution/project instead of the file-system approach we are taking now.
Apache NPanday + Apache Maven Release
... might solve your problems
It gives you dependency management (transitive resolving), strong versioning support, and automatic tagging/branching on 14+ version control systems, including SVN.
Give me a hint, if I should elaborate more.
I think there is no way you can avoid versioning and distributing your shared libs as separate artifacts, but Maven helps you a lot with that!
And you can always do tricks to get it all opened in one solution :-)
A sample workflow:
Dev 1 builds A locally using Maven
Checks in sources
Build server builds A and deploys so-called SNAPSHOT versions to a repository manager (e.g. Nexus)
Dev 2 loads B; NPanday will automatically resolve the A libs from the repository manager (no need to get the source and build)
Dev 1 wants to release A: Maven Release creates a branch or a tag with your source, finalizes the version (removing SNAPSHOT), and deploys the artifacts to the repository manager (sketched below)
Dev 2 can now upgrade B to use the final release of A (change an entry in the XML, or use the VS addin to do so)
Now Dev 2 can release B, again with automatic creation of tag or branch and deployment of built artifacts.
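The release steps above boil down to two standard Maven commands (assuming the maven-release-plugin is configured with your SVN connection details):

rem tags/branches the source in SVN and bumps the version past -SNAPSHOT
mvn release:prepare
rem checks out the tag, builds it, and deploys the final artifacts to the repository manager
mvn release:perform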
If you want to provide zipped packages as output from your build, Maven Assembly Plugin will help you do that.
You can use Apache Ivy in standalone mode.
http://ant.apache.org/ivy/history/latest-milestone/standalone.html
I need to emphasize "standalone" mode. If you google for examples, you will find a lot of (non-standalone) ones.
Basically, IVY works on this premise.
You publish binaries (or any kind of file, but I'll say binaries from this point forward) as little binary packages.
Below is PSEUDO code, do not rely on my memory.
java -jar ivy.jar -publish MyBinaryPackageOne.xml -revision 1.2.3.4 (where the .xml lists the N files that make up the one package)
"Package" simply means a group of files. You can include .dll and .xml and .pdb files in a package (what I do with a DotNet build of assemblies). Or whatever. IVY is file-type agnostic. If you want to put WordDocs up there you could, but sharepoint is better for documents.
As you make bug fixes to your code, you increment the revision.
java -jar ivy.jar -publish MyBinaryPackageOne.xml -revision 1.2.3.5
Then later you can retrieve from IVY what you want:
java -jar ivy.jar -retrieve PackagesINeed.xml
PackagesINeed.xml would contain information about the packages you want.
something like:
"I want version 1.2+ of MyBinaryPackageOne"
(defined in xml)
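In actual Ivy syntax that request is a dependency with a dynamic revision. A minimal sketch (organisation and module names are hypothetical):

<!-- hypothetical fragment of PackagesINeed.xml (an ivy.xml file) -->
<dependencies>
    <!-- "1.2.+" resolves to the latest published revision starting with 1.2. -->
    <dependency org="mycompany" name="MyBinaryPackageOne" rev="1.2.+"/>
</dependencies>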
As you build your framework binaries...you PUBLISH to IVY.
Then, as you develop and build your code...you RETRIEVE from IVY.
In a NUTSHELL, IVY is a repository for FILES (not source code).
Ivy then becomes the definitive source of your binaries.
None of the "Hey, Developer-Joe has the binaries we need" kind of bull-mess.
.......
Advantages:
1. You do NOT keep your binaries in source control. (and thus do not BLOAT your source control).
2. You have ONE definitive source for binaries.
3. Through xml configuration, you say which versions you need for a library.
(In the example above, if version 2 (2.0.0.0) of MyBinaryPackageOne is published to IVY (let's assume with breaking changes from 1.2.x.y), then you are OK, because you defined in your retrieve configuration (the xml file) that you only want "1.2+". Thus your project will ignore anything 2+, unless you change the configuration.)
Advanced:
If you have a build machine (CruiseControl.NET for example)....you can write logic to publish your (newly built) binaries to IVY after each build.
(Which is what I do).
I use the SVN revision as the last number in the build number.
If my SVN revision was "3333", then I would run something like this:
java -jar ivy.jar -publish MyBinaryPackageOne.xml -revision 1.2.3.3333
Thus whenever I retrieve the package for revision "1.2.3+", I'll get the latest build.
In this case, I would get version 1.2.3.3333 of the package.
It's sad that IVY was started in 2005 (well, that's the good news), but that NuGet didn't come out until 2010 (2011?).
Microsoft was 5-6 years behind on this one, IMHO.
I would never go back to putting binaries in source control.
IVY is very good. It is time proven. It solves the problem of DEPENDENCY management.
Does it take a little bit of time to get comfortable with it?
Yep.
But it is worth it in the end.
My 2 cents.
.................
But idea #2 is:
Learn how to use NuGet with a local (as in, local to your company) repository.
That is about the same thing as IVY.
But having looked at NuGet, I still like IVY.
Related
I'm afraid I may be asking a really dumb question, but I can't seem to find anything that makes this clear. I usually work on smaller applications but am now working on a larger one with several assemblies in a baseline framework and several assemblies for a product line domain (with more to come). I would like to manage the build by configuring MSBuild. I've done a lot of online research (specifically with several MSDN articles I found) and now feel knowledgeable enough to be dangerous.
I understand that in csharp the *.csproj file can be unloaded and modified with properties, items, and targets to control the build process. I also understand that I can import my own targets file to help separate and organize. In this link though (https://msdn.microsoft.com/en-us/magazine/dd483291.aspx) a multilevel project build is organized with node-level dirs.proj files. This is confusing to me and has raised several questions I can't seem to find an answer to:
What is the difference in a *.proj and *.csproj file?
Can a *.proj be set up in VS to build with F6, or does using this require the command prompt only (i.e. "msbuild dirs.proj /t:Build")?
Does dirs.proj load automatically? If so, my study-by is not working correctly, yet it does with command prompt.
Or am I overlooking something all the way around with "dirs.proj"? Maybe it's just a substitute name for one of the project *.csproj files? If that were the case, though, there wouldn't have been a need for the root node's dirs.proj, which from what I can tell doesn't have an actual project associated with it.
Anyway, I've seen dirs.proj mentioned in several forums regarding issues, but nowhere can I find how it's loaded or used in VS (outside of manual command-prompt building, which seems unreasonable if this is used to organize the build but the build won't really take a huge amount of time). I'm hoping someone can help me achieve that a-ha moment with this.
Thanks in advance.
Dirs.proj is an MSBuild convention typically used when dealing with very large source trees (more than 20 projects). I've worked with Microsoft engineers at a previous company, and the dirs.proj convention appears to be one that Microsoft developed and uses internally to manage very large source trees.
A very good implementation reference for this is the Python Tools for Visual Studio project on GitHub.
The link you shared, by Sayed Ibrahim Hashimi, is a very good explanation of the reasoning behind the MSBuild paradigm, but it doesn't do a very good job of showing a practical example of how it works. The Python Tools project is an outstanding reference for this.
The idea behind using this paradigm is simple. I'd wager a guess that most .NET software engineers work on somewhat limited-scale projects that don't deal with more than 5-10 projects at a time, and they manage these projects in Visual Studio via Solution (.sln) files. They may even instruct their build system to run builds on the .sln. This works fine until you start thinking about scaling your product into or combining it with something larger, such as a platform with many, many projects. Solution files are not MSBuild files and as such they are not extensible like MSBuild is and they suffer massive performance penalties when dealing with large numbers of projects.
From an MSBuild perspective, dirs.proj stands in for Visual Studio .sln files. The difference, however, is that a dirs.proj doesn't just include .csproj files (and the like) the way a .sln does; rather, it can include source subtrees (e.g. other nested dirs.proj files). So building the root dirs.proj can result in the entire source tree being built, while building a nested dirs.proj results in just that subtree being built.
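A minimal dirs.proj might look like the following sketch (folder and project names are hypothetical; the ProjectFiles item name is just part of the convention described here):

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <!-- nested subtrees, each with its own dirs.proj -->
    <ProjectFiles Include="Framework\dirs.proj" />
    <ProjectFiles Include="Product\dirs.proj" />
    <!-- a leaf project built directly -->
    <ProjectFiles Include="Tools\Tools.csproj" />
  </ItemGroup>
  <Target Name="Build">
    <MSBuild Projects="@(ProjectFiles)" Targets="Build" BuildInParallel="true" />
  </Target>
</Project>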
Therefore, the paradigm encourages you to look at your source as a series of interdependent nodes organized into features or product areas. That way, engineers can work on different source subtrees in very large projects without having to deal with the entire source tree, as you would have to with a VS solution.
Using this paradigm also carries certain benefits that don't come with .sln files. For example, if one project references a project from another, separate subtree, msbuild will build that reference first, automatically. Additionally, your source nodes can carry their own build settings, allowing them to be built dynamically using different build settings based on build scenario. For example, under one scenario a SharePoint source subtree needs WSP packaging, a C# subtree needs to be built without .pdb, a DB subtree needs to generate dacpacs, and the entire source tree needs to sign their assemblies using myCorp.snk and set build output to the $(buildRoot)\Output directory.
dirs.proj files aren't opened via Visual Studio; they're built on the command line using msbuild. The only pain point is that the files have to be hand-curated.
So, long answer short take a look at the Python Tools project and see how they're using dirs.proj. Note how the entire source tree has common settings managed by Common.Build.settings, and how msbuild properties in this .settings file are used in the various .csproj files.
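As a sketch of that pattern (the settings file name comes from the project above; the BuildRoot property is a hypothetical example), each project imports the shared settings file and consumes its properties:

<!-- hypothetical fragment near the top of a .csproj -->
<Import Project="$(MSBuildThisFileDirectory)..\Common.Build.settings" />
<PropertyGroup>
  <!-- OutputPath built from a property defined once in Common.Build.settings -->
  <OutputPath>$(BuildRoot)\Output\$(MSBuildProjectName)\</OutputPath>
</PropertyGroup>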
I am fairly new to Python, and come from a C# background. In C#, third-party libraries are commonly stored inside the project folder.
This means that libraries are totally internal to the project. The project then is not dependent on anything outside the project folder (other than the .NET Framework, of course).
I really like this structure and have successfully mirrored it in Python by copying the libraries into a lib directory in the project root and adding the lib folder to the Python path on startup of the application.
I am worried that there may be something I am overlooking by doing this, as I have looked around a bit and have not really seen anyone else in the Python community doing it.
My question is simply: is this OK? Is there something I may miss by simply dumping the necessary .py libraries in, rather than using easy_install and thus storing the libraries in site-packages at a system level?
Please feel free to let me know of any drawbacks you can see, no matter how simple.
Thanks!
I'll espouse the usage of virtualenv and pip for development purposes. This will give you exactly the sandbox that you are used to. As for distribution, use setup.py, and reuse the requirements.txt file that you would pass to pip install -r to generate the install_requires argument to setuptools.setup. I've been meaning to set up an example that shows this off a little; check out https://github.com/dave-shawley/setup-example for a nice example with some description too. I plan on adding a little more to this as time allows.
If you want to closely manage your code's dependencies on a per-project basis, you might want to take a look at virtualenv.
Virtualenv will let you keep your dependencies close to your source while removing the error-prone manual copying of .py files.
On top of that, remember that some packages are not pure Python and sometimes contain compiled C code; if you use virtualenv, you do not have to worry about that.
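A minimal sketch of that workflow on Windows (assuming virtualenv and pip are installed; the folder name venv is arbitrary):

rem create an isolated environment inside the project and install pinned dependencies
virtualenv venv
venv\Scripts\activate
pip install -r requirements.txt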
Background: My team is made up of 3 fairly inexperienced developers. We are developing in-house software for our company. Currently we have a number of smaller, separate solutions, many of which are interdependent. Currently these dependencies are made by referencing the output dlls in the respective release folder. Updates are pushed around by manually rebuilding dependent solutions.
Example:
Solution A uses features of solution B. The connection is made by having Solution A reference ...\Release\B.dll. Changes to B propagate by building solution B, then building solution A, and so forth.
This has worked okay before, but now we are moving from a manual (mind numbing) "version control system" (folder1, folder2, folder2New...) to using a proper one (git).
It seems that versioning the .dlls is not recommended. This means that every time someone wants to build a new version of A, he also needs to build B (and maybe 5 other solutions) in order to have the latest version of B.
I'm thinking that there must be a better way to do this.
I've been looking at combining the relevant solutions into one master solution, but I can't figure out how to do this in Visual C# Express (which we are using).
So at long last the questions:
Is having a master solution that builds everything the way to go? It seems so from MSDN, but I can't figure out how to do this in Visual C# Express 2008, which leads me to:
Is this even possible in Visual C# Express? If not, what is a good way of managing the problem?
Edit Thanks to all for the great suggestions below. Here's a summary of what I ended up doing.
In short the answers to the questions are: "Yes" and "Sort of, but mostly yes". I implemented it as follows: in order to get an idea of the dependencies, I did as suggested below and drew a map of the binary products, with an arrow pointing from each dll's or exe's name to all of its dependencies.
For each project, I opened its corresponding solution (since at first there was one solution per project). I then added the project file of each dependency in the tree structure revealed by the graph (by right-clicking the solution in Solution Explorer), so that dependencies' dependencies and so forth were also included. Then I removed the old references (pointing directly to the .dlls) and added references to the projects instead.
The important result is:
When a solution of a project is built, all its dependencies are built with it, so that when deploying, you know that all the build products are automatically of the latest version.
I would create a new solution and add all of the projects that relate to each other to it. You can group the projects from each of the original solutions by putting them in different solution folders within the new solution. This way, when you build a project, all of the projects it depends upon will also get built. It also means that all of your projects will be built using the same configuration (i.e. Release or Debug). This means that all of your projects can be built in Debug, not just the top one in the dependency tree while everything below it is a Release assembly. Makes debugging much easier.
I have Visual C# Express 2010 and when I create a new project, it automatically creates a default solution. If it's visible, then you can right-click on the solution and choose Add>Existing Project.
If the solution is not visible, (I seem to remember this problem in C# Express 2005/8), then you can add an existing project via File>Add>Existing Project. The solution should be visible now.
In terms of separation, what I usually do is this:
Everything that must be built together should be in one solution, and these should be projects and not DLLs. I try to live by the Joel Test, where you should be able to build your project in one step. If it is one deployable unit, then there should be one solution. All of my projects are built on a build server before they can be deployed, so everything that needs to be built should be in the solution.
Guys sometimes put the WCF services project and the clients in the same solution for easy debugging, but it depends on whether you want to deploy client and server independently. Usually for bigger projects I separate them.
Lastly there's one exception. We have a central common library that is used by different teams. If it's included in different solutions, and one team changes something, we end up breaking the other team's builds. In this case, we create a single solution that has all of the library projects. These get built to DLL's that we store the versions of. We treat these as a framework that the other solutions can use. E.g. Team A is using CommonLibrary 1.1 and Team B is using CommonLibrary 1.2.
You need to think of solutions as just "groupings of projects": the projects are what actually get built, not the "solution" (well, that's not entirely true; the solution is turned into a "metaproject" that references the contained projects, but it's close enough to the truth).
If you have interdependencies between solutions, I would suggest drawing all the projects on a big whiteboard, then draw arrows representing the dependencies from project to project. Once you've done this, you'll be able to see at a glance what the appropriate "groupings of projects" make sense. Those become your solution files.
For example, if you have projects A, B, ..., F, where:
A depends on B
B depends on C
D depends on C
E depends on F
One possible split here would be solution 1 with projects A, B, C, D and solution 2 with projects E, F.
I would come up with a common area to push all dlls to. My company uses the "R" drive, which is just a LOCAL (not network, so no one can touch another person's folder) mapped folder everyone has. Each solution builds to this. Right-click a project, Properties > Build, and change the output path. Or you can add a post-build command to push the dll there. After that, have all of your projects reference this location.
Once this is done and everything is pointing at the same place, you can even add different combinations of projects to different solutions. If a developer only wants the ui projects, they can open a special "ui" solution that is a subset of the whole.
Here is a post-build event that I use in my project's Properties > Build Events:
rem when building on local workstation copy dll to local R:\
if '$(BuildingInsideVisualStudio)' == 'true' (
xcopy $(TargetDir)$(TargetName).* R:\Extranet\$(TargetName)\1.0\ /Y
)
rem if "Enterprise" build then copy dll to Corp R:\ drive and to Build Machine R:\
if '$(Reason)' == 'Manual' (
xcopy $(TargetDir)$(TargetName).* \\folder\$(TargetName)\1.0\ /Y
xcopy $(TargetDir)$(TargetName).* R:\Extranet\$(TargetName)\1.0\ /Y
)
We have a C# desktop application which we run for clients on various servers on a software-as-a-service model. We are still on .NET Framework 2.0.
The software's architecture includes an independent application to catch external data thrown by some server, then an application to make calculations based on it, and one more application on which the client sees the output. The link between the 3 applications is another application which communicates with the DB.
The 4 solutions are in SVN for source control, but the release management is still manual: patches are made by hand by checking the log and including the dlls, pdbs, xml, etc. for the projects whose code has changed.
There is no assembly versioning implemented, and the patch or release management is just done in the dark.
I want to know the industry practice for generating automatic patches from the code. I also want a patch for each revision in SVN. Also, is assembly versioning helpful for this?
I have read much about continuous integration, but it fails for us because we do not have unit tests or other fancy code to monitor the correctness of the code.
The only thing I would be interested in at this time is a way to make patches which can be applied and removed easily. I would also like an automated way to determine which release is at which level (or which patches have been applied), rather than maintaining a log manually.
We use a build script which creates a SvnVersion.cs file containing the last committed revision. This file is placed in the root of the solution and then added to all projects in the solution (added as a link, not copied).
The template for the file (SvnVersion.Template.cs) looks like this:
using System.Reflection;
[assembly: AssemblyVersion("1.0.0.$WCREV$")]
[assembly: AssemblyFileVersion("1.0.0.$WCREV$")]
And we simply use TortoiseSVN's SubWCRev tool to fill these placeholders in a batch script:
type "%TRUNKPATH%SvnVersion.Template.cs" > "%TRUNKPATH%\SvnVersion.tmp"
SubWcRev "%TRUNKPATH%\" "%TRUNKPATH%SvnVersion.tmp" "%TRUNKPATH%SvnVersion.cs" -f
IF ERRORLEVEL 1 GOTO ERROR
DEL "%TRUNKPATH%SvnVersion.tmp"
If you don't use TortoiseSVN, there are other ways to get this info in the file.
You will also need to remove this same information from your AssemblyInfo.cs files or you'll get a compile error. Also, to speed up Debug builds, this is only executed in Release builds (and in Debug builds only if the file doesn't initially exist, such as after a fresh checkout).
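One hedged way to wire that up is a pre-build event along these lines (UpdateSvnVersion.bat stands in for the batch script above; the exact conditions are an assumption):

rem regenerate SvnVersion.cs for Release builds, or whenever it is missing (fresh checkout)
if /I "$(ConfigurationName)" == "Release" call "$(SolutionDir)UpdateSvnVersion.bat"
if not exist "$(SolutionDir)SvnVersion.cs" call "$(SolutionDir)UpdateSvnVersion.bat"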
I currently have a program that I wrote that is divided into 3 separate solutions:
Front end (all display related stuff)
Parsers (multiple (39) projects that each create a dll to parse specific data)
Globals (multiple (5) projects that each create a dll that is used by projects in the parsers solution, and by the front end).
Requirements -
Both the Front End and Parsers solutions require the globals dlls to exist at compile time, and the dlls are used at run time.
The Parsers dlls are loaded at run time using Assembly.LoadFrom.
Development is: C:\projects\myProg
deployed location is: C:\myProg
My problem is that I have been going back and forth with issues dealing with project dependencies and where to point for my globals dlls. Do I point to the deployed location or the development location, and if so, release or debug?
So I started looking up the different solution types, and I'm wondering if I should set up a partitioned solution, or a multi-solution for my particular situation.
Add all the projects to a single solution.
Change any references between projects into "project references" rather than direct references to dll files. This will fix a lot of dependency issues.
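For VS 2008-era project files, that swap looks roughly like this inside the .csproj (the path, name, and GUID are placeholders):

<!-- hypothetical: replaces a direct <Reference> + HintPath to Globals.dll -->
<ItemGroup>
  <ProjectReference Include="..\Globals\Globals.csproj">
    <Project>{00000000-0000-0000-0000-000000000000}</Project>
    <Name>Globals</Name>
  </ProjectReference>
</ItemGroup>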
If you have any "library" files that are not changed often, then you can optionally move them into a separate solution. The output of this should be "prebuilt" release dlls that you can then reference from a standard location in your main solution (the best way to do this is to add a post build step that copies the output to your development "library binaries" folder. That way, the build process is not changed, you simply add an extra step to get the files where you need them, and you remain in full control of the build process). This works well, but is a pain if you need to change these prebuilt dlls often, so it's best only used for fairly static parts of your codebase.
Finally, consider merging many of your projects into a single project/assembly. The killer on build times is not the amount of code, it's the number of assemblies - on my PC every project adds a pretty constant 3 seconds to the build time, so by merging small projects I've saved quite a bit of build time.
Since those 3 are all part of the same system, it will probably be easier to have a single Solution with each Project added to it.
NOTE: You do not need to move anything from their current locations.
Just create a new empty solution and do a right-click Add > Existing Project... for each project you want to be a included, they will remain where they are on disk, but will be opened together.
The current ("old") solutions will be available as well, just as they are.
Also keep in mind that if you are editing the same project in two instances of VS at the same time, it will bug you about reloading the source code when a change is made and saved.
Most importantly, having the projects in the same solution will allow you to add references between them, rather than the DLL files.
Why are they scattered into separate projects? Combine the parsers and globals into a single assembly. Keep the UI assembly separate and as simple/small as possible.
Let's say you have a good reason for having so many projects (example: a different set of parsers available for different licenses of a product).
Managing dependencies in Visual Studio is easy:
Right click your solution node
Select "Project Build Order..."
Make sure that no project depends on a project that appears beneath it in that dialog.
About "where to deploy": visual studio does it well by default. If you're in debug, it will output to the debug folder of your solution, likewise for release.
HTH.