I'm afraid I may be asking a really dumb question, but I can't seem to find anything that makes this clear. I usually work on smaller applications but am now working on a larger one with several assemblies in a baseline framework and several assemblies for a product line domain (with more to come). I would like to manage the build by configuring MSBuild. I've done a lot of online research (specifically with several MSDN articles I found) and now feel knowledgeable enough to be dangerous.
I understand that in C# the *.csproj file can be unloaded and edited with properties, items, and targets to control the build process. I also understand that I can import my own targets file to help separate and organize things. In this link, though (https://msdn.microsoft.com/en-us/magazine/dd483291.aspx), a multi-level project build is organized with node-level dirs.proj files. This is confusing to me and has raised several questions I can't seem to find answers to:
What is the difference between a *.proj and a *.csproj file?
Can a *.proj file be set up in VS to build with F6, or does using it require the command prompt only (i.e. "msbuild dirs.proj /t:Build")?
Does dirs.proj load automatically? If so, my trial setup isn't working correctly, yet it does from the command prompt.
Or am I overlooking something all the way around with "dirs.proj"? Maybe it's just a substitute name for one of the project's *.csproj files? If that were the case, though, there wouldn't have been a need for the root node's dirs.proj, which from what I can tell doesn't have an actual project associated with it.
Anyway, I've seen dirs.proj mentioned in several forums regarding issues, but nowhere can I find how it's loaded or used in VS (outside of manual command-prompt builds, which seems unreasonable if it's meant to organize a build that won't really take a huge amount of time anyway). I'm hoping someone can help me reach that a-ha moment with this.
Thanks in advance.
Dirs.proj is an MSBuild convention typically used when dealing with very large source trees (more than 20 projects). I've worked with Microsoft engineers at a previous company, and the dirs.proj convention appears to be one that Microsoft developed and uses internally to manage very large source trees.
A very good implementation reference for this is the Python Tools for Visual Studio project on GitHub.
The link you shared, by Sayed Ibrahim Hashimi, is a very good explanation of the reasoning behind the MSBuild paradigm, but it doesn't do a very good job of showing a practical example of how it works. The Python Tools project is an outstanding reference for this.
The idea behind this paradigm is simple. I'd wager that most .NET software engineers work on somewhat limited-scale projects that don't deal with more than 5-10 projects at a time, and they manage these projects in Visual Studio via solution (.sln) files. They may even instruct their build system to run builds on the .sln. This works fine until you start thinking about scaling your product into, or combining it with, something larger, such as a platform with many, many projects. Solution files are not MSBuild files, so they are not extensible the way MSBuild is, and they suffer massive performance penalties when dealing with large numbers of projects.
From an MSBuild perspective, a dirs.proj stands in for a Visual Studio .sln file. The difference, however, is that a dirs.proj doesn't just include .csproj files (and the like) the way an .sln does; rather, it can include source subtrees (e.g. other, nested dirs.proj files). So building the root dirs.proj can result in the entire source tree being built, while building a nested dirs.proj builds just that subtree.
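To give a flavour of what one looks like - this is a hand-written sketch, not lifted from any particular repo, and the folder names are invented:

<?xml version="1.0" encoding="utf-8"?>
<!-- dirs.proj: a hand-curated traversal project. Each entry is either a
     leaf project (.csproj) or another dirs.proj for a nested subtree. -->
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <ProjectFiles Include="Common\Common.csproj" />
    <ProjectFiles Include="FeatureA\dirs.proj" />
    <ProjectFiles Include="FeatureB\dirs.proj" />
  </ItemGroup>

  <!-- Forward the standard targets to every child node. -->
  <Target Name="Build">
    <MSBuild Projects="@(ProjectFiles)" Targets="Build" BuildInParallel="true" />
  </Target>
  <Target Name="Clean">
    <MSBuild Projects="@(ProjectFiles)" Targets="Clean" />
  </Target>
  <Target Name="Rebuild" DependsOnTargets="Clean;Build" />
</Project>

Running "msbuild dirs.proj" at the root then walks the whole tree, while running the same command inside FeatureA builds only that subtree.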
Therefore, the paradigm encourages you to look at your source as a series of interdependent nodes organized into features or product areas. That way, engineers can work on different source subtrees in very large projects without having to deal with the entire source tree, as you would have to with a VS solution.
Using this paradigm also carries certain benefits that don't come with .sln files. For example, if one project references a project from another, separate subtree, MSBuild will build that reference first, automatically. Additionally, your source nodes can carry their own build settings, allowing them to be built dynamically with different settings depending on the build scenario. For example, under one scenario a SharePoint source subtree needs WSP packaging, a C# subtree needs to be built without .pdb files, a DB subtree needs to generate dacpacs, and the entire source tree needs to sign its assemblies using myCorp.snk and set build output to the $(buildRoot)\Output directory.
dirs.proj files aren't opened via Visual Studio; they're built on the command line using MSBuild. The only pain point is that the files have to be hand-curated.
So, long answer short: take a look at the Python Tools project and see how they're using dirs.proj. Note how the entire source tree has common settings managed by Common.Build.settings, and how MSBuild properties in this .settings file are used in the various .csproj files.
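To sketch that idea too (property names invented here; the real file in that repo is the authoritative example), the shared settings file is just another MSBuild file:

<!-- Common.Build.settings: properties shared by every node in the tree -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <BuildRoot>$(MSBuildThisFileDirectory)</BuildRoot>
    <OutputPath>$(BuildRoot)Output\</OutputPath>
    <SignAssembly>true</SignAssembly>
    <AssemblyOriginatorKeyFile>$(BuildRoot)myCorp.snk</AssemblyOriginatorKeyFile>
  </PropertyGroup>
</Project>

and each .csproj or dirs.proj pulls it in with a relative import, e.g.:

<Import Project="$(MSBuildThisFileDirectory)..\..\Common.Build.settings" />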
Related
Ever since I've been using the (relatively) new .NET Standard Library project type in Visual Studio, I've been having some problems getting a complete set of DLL files that are required by my project.
The problem is usually limited to 3rd-party libraries which I reference as NuGet packages. I've noticed that these don't get copied to the output folder of my project when I build it. This didn't use to be the case in classic project types.
While I can appreciate the de-cluttering effect that this change has brought for .NET Standard projects, I'm now faced with a problem. I sometimes absolutely need to be able to get the entire list of all files that my project depends on!
I have several different cases where I might require this list for one reason or another, but the one I believe is most crucial is when I want to gather these files from the csproj itself, right after it's built. In there, I have a custom MSBuild <Target> which should take all the files from the output dir and zip them together for distribution. The problem is, I'm missing all the files that come from NuGet dependencies, because they're not there!
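For concreteness, the target is roughly this kind of thing (simplified; the ZipDirectory task needs MSBuild 15.8+):

<Target Name="ZipOutput" AfterTargets="Build">
  <!-- Zips whatever landed in the output dir - which is exactly why the missing NuGet DLLs hurt -->
  <ZipDirectory SourceDirectory="$(TargetDir)"
                DestinationFile="$(MSBuildProjectDirectory)\$(MSBuildProjectName).zip"
                Overwrite="true" />
</Target>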
How can I solve this in a general (i.e. not project-specific) way?
UPDATE
There's this deps.json file that contains basically everything I'm after and then some. It's just a matter of extracting the relevant information and finding the files in the local NuGet cache. But that would involve writing a specialized app and calling it from my target. Before I start writing one myself... is there something like this already out there somewhere?
I followed this answer and it sort of works.
The suggested fix was to include the following in my csproj:
<CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
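For anyone else trying this: it just goes in a <PropertyGroup> of the .csproj, e.g. (target framework shown only for context):

<PropertyGroup>
  <TargetFramework>netstandard2.0</TargetFramework>
  <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
</PropertyGroup>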
My main concern is that it also outputs some other DLLs from the framework (such as System.Memory.dll and System.Buffers.dll, among others), which I didn't expect. But maybe that's a good thing. They do seem to be dependencies, just not direct ones. I'll see how it plays out.
If it turns out OK, my only wish would be that this setting were more prominently displayed in the project settings (as a simple checkbox, maybe?) so I wouldn't have to hunt the web to find it.
We have a product with a number of different solution files, and each solution has many projects. The problem is that certain projects need to be built for certain solutions, and each developer has to go through the pain of opening a large solution (a solution that contains many projects). These solutions don't always build, because a certain build order has to be followed.
My question is: is there a way to identify the dependencies of each project in a given directory and then build those projects accordingly? Something like: find all the project files that don't have any dependencies on other projects that we own, build those first, then build the ones whose dependencies have already been built.
I was thinking of using F# or FAKE to do this, but I am not sure where to start or if it is even possible.
I would really appreciate an answer with an example or links to where I can get help.
Regards,
Nasir
If you want to go with something off the shelf, ReSharper from JetBrains has a very nice tool for viewing project build dependencies. This will help you create a build script with the correct build order.
http://www.jetbrains.com/resharper/webhelp/Architecture__Project_Dependencies_Exploration.html
Implementing the analysis in F# yourself wouldn't be too complicated.
For example, you can do it in 2 phases:
1. Go through your solution folder structure and build a map of (project file name -> full path of file).
2. Go through all the files you found, and for each file add its references to other solution projects to a graph structure.
Then you progressively build the projects that don't have any (yet-unbuilt) references.
Project files are easy to parse, being XML. Projects from your own solutions can be recognised by the relative path in the reference:
<ProjectReference Include="..\MyProjFolder\MyProjFile.csproj">
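A rough F# sketch of those two phases plus the ordering step (untested, and the root path is a placeholder):

open System.IO
open System.Xml.Linq

let root = @"C:\src"  // placeholder: your checkout root

// Phase 1: project file name -> full path
let projects =
    Directory.EnumerateFiles(root, "*.csproj", SearchOption.AllDirectories)
    |> Seq.map (fun p -> Path.GetFileName p, p)
    |> Map.ofSeq

// Phase 2: the ProjectReference targets of one project, filtered to projects we own
let refsOf (path: string) =
    XDocument.Load(path).Descendants()
    |> Seq.filter (fun e -> e.Name.LocalName = "ProjectReference")
    |> Seq.choose (fun e -> e.Attribute(XName.Get "Include") |> Option.ofObj)
    |> Seq.map (fun a -> Path.GetFileName(a.Value))
    |> Seq.filter (fun name -> projects.ContainsKey name)
    |> Set.ofSeq

// Repeatedly peel off the projects whose references have all been built already
let buildOrder =
    let rec loop built pending acc =
        if List.isEmpty pending then List.rev acc
        else
            let ready, rest =
                pending |> List.partition (fun (_, path) -> Set.isSubset (refsOf path) built)
            if List.isEmpty ready then failwith "circular project references"
            let built = (built, ready) ||> List.fold (fun s (name, _) -> Set.add name s)
            loop built rest ((ready |> List.map snd) @ acc)
    loop Set.empty (Map.toList projects) []

Each element of buildOrder is a full .csproj path, ready to be handed to MSBuild (or FAKE's MSBuild helper) in that order.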
We have a moderately sized solution with about 20 projects. One of them holds my business entities. On compiling any project, Visual Studio waits and hangs for about a minute and a half on this BusinessEntities project.
I tried our solution in SharpDevelop, and it compiles our complete solution in 18 seconds. Similar timing with MSBuild.
My guess is that VS is trying to find out whether the project needs a compile, but this check is about 15 times slower than actually performing the compile!!
I can't switch to the great SharpDevelop; it lacks some small but essential requirements for our debugging scenarios.
Can I prevent VS from checking this project, and have it compile the projects without such a check, just like SharpDevelop does?
I already know about unchecking projects in Configuration Manager to prevent building some projects, but my developers will forget they need to compile this project after updating to the latest sources, and they'll face problems that seem strange to them.
Edit: Interesting results of an investigation: the delay happens with one of the projects only. In Configuration Manager I unchecked all projects, then compiled each of them individually. All projects compile in a few seconds!! The point is this: if that particular project is built directly, it compiles in a few seconds; if it is built (or skipped, because it is up to date) as a result of building another project that depends on it, VS hangs for about a minute and a half and then decides to compile it (or skip it). My conclusion: Visual Studio is checking whether any files have changed, but for some reason, for this particular project, the check is extremely inefficient!!
I'd go to Tools -> Options -> Projects and Solutions -> Build and Run and then change the "MSBuild project build [output|build log] verbosity" to Diagnostic. At that level it will include timings, which should help you track down the issue.
We had the same problem with an ASP.NET MVC web project in Visual Studio 2013. We'd build the project, nothing would happen for about a minute or so, and then the output window would show that we were compiling.
Here's what fixed it: open the .csproj file in a text editor and set MvcBuildViews to false:
<MvcBuildViews>false</MvcBuildViews>
I had to use Sysinternals Process Monitor to figure this out, but it's clearly the cause in my situation. The site compiles in less than 5 seconds now, where it previously took over a minute. During that minute the ASP.NET compilation process was putting files and directories into the Temporary ASP.NET Files folder.
Warning: if you set this, you'll no longer precompile your views, so you will lose the ability to see syntax errors in your views at build time.
There is the possibility that you are suffering from VS inspecting other freshly built assemblies for the benefit of the currently compiling project.
When an assembly is built, VS will inspect the references of the target assembly, which, if they are freshly built or new versions, may include actually loading them into a .NET domain, which bears all the burdens of loading an assembly as though you were going to run it. The build can get progressively slower as it rebuilds more and more projects. When one assembly becomes newer, the others do a lot more work. This is one possible explanation for why building by itself, versus already built, versus building clean, all give seemingly relevantly different results. It's really that the others changed, not anything about the one being compiled.
VS will 'mark down' the last 'internal' build number of the referenced assembly and look to see whether the referenced assembly actually changed as it rolls through its build process. If it's not different, a ton of work gets skipped. And yes, there are internal assembly build numbers that you don't control. This is probably not in any way due to the actual C# compiler or its work, or anything post-compile, but rather the pre-compile steps necessary for the most general cases.
There are several reference-oriented settings you can play with, and depending on your dev, test, or deployment needs, the functional differences may be irrelevant, yet they may profoundly impact how VS behaves and how long the build takes.
Go to the references of one of the projects in Solution Explorer:
1) Click on a reference.
2) Open the Properties pane if it's not already open (not the Property Pages or the Property Manager).
3) Look at 'Copy Local', 'Embed Interop Types', and 'Reference Output Assembly'; those may be very applicable, and are probably good to know about regardless. I strongly suggest looking up what they do on MSDN. 'Reference Output Assembly' may or may not show in the list.
4) Unload the project and edit the .csproj file in VS as text. Look for the assembly reference in the XML, and look for 'Private'. This controls whether the referenced assembly is to be treated as a private assembly from the referencing assembly's perspective, versus a shared one. Which is a wordy way of saying: will that assembly be deployed as a unit together with the other assemblies? This is very important toward unburdening things. Background: http://msdn.microsoft.com/en-us/magazine/cc164080.aspx
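For illustration only (assembly name and path invented), a reference with Private turned off looks something like this in the project XML:

<Reference Include="MyCorp.BusinessEntities">
  <HintPath>..\..\lib\MyCorp.BusinessEntities.dll</HintPath>
  <!-- 'Copy Local' in the IDE; False = don't copy this dll into the referencing project's output -->
  <Private>False</Private>
</Reference>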
So the basic idea here is that you want to configure all of these to be the least expensive, both during build and after deployment. If you are building the projects together, then, for example, you probably don't really need 'Copy Local'. I'd hate to say more about how you should configure them without knowing more about your needs, but it's a very fine thing to go read a few good paragraphs about each. This gets very tricky, however, because you also influence whether VS will use a stale old copy when resolving before the referenced assembly is rebuilt. As a further example of why it's good to read up on these: Copy Local can use the local copy even though it's stale, so having it set can be doubly bad. Just remember the goal at the moment is to lower the burden of VS loading newly built assemblies just to compile the others.
Lastly, for now, I can easily say that hanging for only 1.5 minutes is getting off very lucky. There are people with much, much worse build times due to things like this ;)
Some troubleshooting ideas that have not been mentioned:
Clean solution?
Delete the obj and bin folders plus the .suo file? FYI, neither Clean nor Rebuild will delete non-build files, e.g. files copied during a pre-build command.
Turn off VS scanning of outside files: Tools > Options > Environment > Documents > "Detect when file is changed outside the environment"?
Roll back the SVN history to confirm when it started to occur? What changed? If the project file from day 1 takes the same time, recreate the project, add all the files, and build.
Otherwise, could you please run Process Monitor and let us know what Visual Studio is doing in the pre-build stage?
Sounds silly, but remove all breakpoints first. It sped up my pre-build checks massively - still don't know why, though.
Based on the (limited) information provided, one possibility is that there could be a pre-build action specified in the project file that is slow to run.
Try disabling platform verification task as described here.
If your individual projects are compiling correctly, then all you can do is change the order of compilation by setting dependent projects explicitly in the configuration.
Try to visualize your project dependency hierarchy and set the dependent projects accordingly. For example, if your business entities project is referenced by every other project, then in the configuration of each project it must be selected as a dependency.
When an explicit build order is not set, Visual Studio analyzes the projects to work out a build order itself. Setting explicit project dependencies will make Visual Studio skip this step and use the order provided by you.
With such an extreme delay on a single project, and no other avenue seeming to provide a reason, I would attempt to build that specific project while running Process Monitor from Sysinternals, filtering out all the success messages. You could probably also narrow it down to just the file system actions. From your description, I might guess that the files are being locked by an external source like an event collection or workflow management process service.
Other things to consider: is this a totally clean build machine, or has it perhaps been used to test the builds as well? If so, is there a chance that someone mapped an IIS application path to the project directly or registered it as a service location?
If you run Process Monitor and see no obvious locks or conflicts, I would create a totally new solution and project and copy the files over to see if that project also has the same delay. If it does, I would create a sample project of the same type but with generic data (essentially empty) and see if that too is slow. If the new project with the same files builds fine, you can then diff the directories to see what variance causes the problem (perhaps a config or project setting).
For me, thoroughly disabling code analyzers helped, per the instructions here:
https://learn.microsoft.com/en-us/visualstudio/code-quality/disable-code-analysis?view=vs-2019#net-framework-projects.
I thought my code analyzers were already off, but adding the extra XML helped.
Thanks to Kaleb for the suggestion to set "MSBuild project build [output|build log] verbosity" to Diagnostic. The first message took more than 10 seconds to display:
Property reassignment: $(Features)=";flow-analysis;flow-analysis" (previous value: ";flow-analysis") at C:\myProjectDirectory\packages\Microsoft.NetFramework.Analyzers.2.9.3\build\Microsoft.NetFramework.Analyzers.props (32,5)
Which led me to the code analyzers.
Just in case someone else trips into this issue:
In my case the delay was being caused by an invalid path entry in "Additional Include Directories" that referred to an inaccessible UNC location.
Once this was corrected, the delay disappeared.
I currently have a program that I wrote that is divided into 3 separate solutions:
Front end (all display related stuff)
Parsers (multiple (39) projects that each create a DLL to parse specific data)
Globals (multiple (5) projects that each create a DLL that is used by projects in the Parsers solution, and by the front end)
Requirements -
Both the front end and the parsers require the Globals DLLs to exist at compile time, and they are used at run time.
The parser DLLs are loaded at run time using Assembly.LoadFrom.
Development is: C:\projects\myProg
deployed location is: C:\myProg
My problem is that I have been going back and forth with issues around project dependencies and where to point for my Globals DLLs. Do I point to the deployed location or the development location, and if so, Release or Debug?
So I started looking up the different solution types, and I'm wondering if I should set up a partitioned solution or a multi-solution for my particular situation.
Add all the projects to a single solution.
Change any references between projects into "project references" rather than direct references to DLL files. This will fix a lot of dependency issues.
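As a sketch of the difference (paths invented), a direct file reference versus a project reference in the .csproj:

<!-- Direct file reference: points at a prebuilt dll, easy to get stale -->
<Reference Include="Globals">
  <HintPath>..\..\Globals\bin\Release\Globals.dll</HintPath>
</Reference>

<!-- Project reference: MSBuild knows the dependency and orders the build -->
<ProjectReference Include="..\Globals\Globals.csproj" />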
If you have any "library" projects that are not changed often, then you can optionally move them into a separate solution. The output of this should be "prebuilt" release DLLs that you can then reference from a standard location in your main solution. (The best way to do this is to add a post-build step that copies the output to your development "library binaries" folder, as sketched below. That way the build process is not changed; you simply add an extra step to get the files where you need them, and you remain in full control of the build process.) This works well, but it's a pain if you need to change these prebuilt DLLs often, so it's best used only for fairly static parts of your codebase.
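The post-build step can be as simple as this (a sketch; the folder name is invented):

<PropertyGroup>
  <!-- Copy the freshly built dlls to the shared "library binaries" folder -->
  <PostBuildEvent>xcopy /Y /I "$(TargetDir)*.dll" "$(SolutionDir)LibraryBinaries\"</PostBuildEvent>
</PropertyGroup>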
Finally, consider merging many of your projects into a single project/assembly. The killer for build times is not the amount of code, it's the number of assemblies - on my PC every project adds a pretty constant 3 seconds to the build time, so by merging small projects I've saved quite a bit of build time.
Since those 3 are all part of the same system, it will probably be easier to have a single solution with each project added to it.
NOTE: You do not need to move anything from their current locations.
Just create a new empty solution and do a right-click Add > Existing Project... for each project you want included; they will remain where they are on disk but will be opened together.
The current ("old") solutions will be available as well, just as they are.
Also keep in mind that if you are editing the same project in two instances of VS at the same time, it will bug you about reloading the source code when a change is made and saved.
Most importantly, having the projects in the same solution will allow you to add references between them, rather than to the DLL files.
Why are they scattered across separate projects? Combine the parsers and globals into a single assembly. Keep the UI assembly separate and as simple/small as possible.
Let's say you have a good reason for having so many projects (for example: different numbers of parsers available for different licenses of a product).
Managing dependencies in Visual Studio is easy:
Right-click your solution node.
Select "Project Build Order..."
Make sure that no project depends on a project listed beneath it in that dialog.
About "where to deploy": visual studio does it well by default. If you're in debug, it will output to the debug folder of your solution, likewise for release.
HTH.
My company is having trouble figuring out the best way to manage our builds, releases, and branches. Our basic setup: we maintain 4 applications, 2 WPF applications and 2 ASP.NET applications, and all 4 of them share common libraries, so currently they all live in one folder: /trunk/{app1, app2, app3, app4}.
This makes it very hard to branch/tag a single application, because you are branching all 4 at the same time. We would like to separate them out into something like {app1,app2,app3,app4}/{trunk,tags,branches}, but then we run into the issue of where to put the shared libraries.
We can't pull the shared libraries in as svn:externals, because then when you branch/tag, the branch is still referencing the trunk shared libs instead of having them branched as well.
Any tips? Ideas?
We are currently using SVN and CruiseControl.NET.
EDIT: The shared libraries are changing often right now, which is why we can't reference them as svn:externals pointing at trunk - we might be changing them in the branch. So we can't use them as binary references either.
It's also very hard to test and debug when the libraries are statically built instead of included as source.
I guess it all depends on how stable the shared libraries are. My preference would be for the shared libraries to be treated as their own project, built in CruiseControl like the others. The four main applications would then have binary references to the shared libraries.
The primary advantage of this approach is the stability of the applications now that the shared libraries are static. A change to the libraries wouldn't affect the applications until they explicitly updated the binaries to the newer version. Branching brings the binary references with it. You won't have a situation where a seemingly innocuous change breaks the other three applications.
Can you clarify why you don't like branching all four applications at the same time?
This makes it very hard to branch/tag a single application because you are branching all 4 at the same time
I usually put all my projects directly under trunk, as you are currently doing. Then, when I create a release branch or a feature branch, I just ignore the other projects that get carried along. Remember, the copies are cheap, so they're not taking up space on your server.
To be specific, here's how I would lay out the source tree you've described:
trunk
    WPF1
    WPF2
    ASP.NET 1
    ASP.NET 2
    lib1
    lib2
branches
    WPF1 v 1.0
        WPF1
        WPF2
        ASP.NET 1
        ASP.NET 2
        lib1
        lib2
    WPF1 v 1.1
        WPF1
        WPF2
        ASP.NET 1
        ASP.NET 2
        lib1
        lib2
    lib1 payment plan
        WPF1
        WPF2
        ASP.NET 1
        ASP.NET 2
        lib1
        lib2
We are kicking off an open-source project to try to deal with this issue. If anyone is interested in commenting on it or contributing to it, it's at:
http://refix.codeplex.com
I agree with Brian Frantz. There's no reason not to treat the shared libraries as their own project that is built daily, with your projects taking a binary dependency on the daily builds.
But even if you want to keep them as a source dependency and build them with the app, why wouldn't the SVN externals approach work for you? When you branch a particular app, there's no need to branch the shared library as well, unless you need a separate copy of it for that branch. But that means it's not a shared library anymore, right?
I've tried solving this problem several ways over the years, and I can honestly say there is no best solution.
My team is currently in a huge development phase, and everyone basically needs to be working off the latest and greatest of the shared libs at any given time. That being the case, we have a folder on everyone's C: drive called SharedLibs\Latest that is automatically synced with the latest development release of each of our shared libraries. Every project that should be drinking from the firehose has absolute file references to this folder. As people push out new versions of the shared libs, the individual projects pick them up transparently.
In addition to the Latest folder, we have a SharedLibs\Releases folder, which contains a hierarchy of folders named for each version of each shared lib. As projects mature and approach the release-candidate phase, their shared lib references are pointed at these stable folders.
The biggest downside to this is that this structure needs to be in place for any project to build. If someone wants to build an app 10 years from now, they will need this structure. It is important to note that these folders need to exist on the build/CI server as well.
Before doing this, each solution had a lib folder under source control containing the binaries. Each project owner was tasked with propagating new shared DLLs. Since most people owned several projects, things often fell through the cracks for the projects that were still in the non-stable phase. Additionally, TFS didn't seem to track changes to binary files all that well. If TFS were better at tracking DLLs, we probably would have used a shared-libs solution/project instead of the file system approach we are taking now.
Apache NPanday + Apache Maven Release
... might solve your problems
It gives you dependency management (transitive resolution), strong versioning support, and automatic tagging/branching on 14+ version control systems, including SVN.
Give me a hint if I should elaborate more.
I think there is no way you can avoid versioning and distributing your shared libs as separate artifacts, but Maven helps you a lot with that!
And you can always do tricks to get it all opened in one solution :-)
A sample workflow:
Dev 1 builds A locally using Maven
Checks in sources
Build server builds A and deploys so-called SNAPSHOT versions to a repository manager (e.g. Nexus)
Dev 2 loads B; NPanday will automatically resolve the A libs from the repository manager (no need to get the source and build them)
Dev 1 wants to release A: Maven Release creates a branch or a tag with your source, finalizes the version (removing SNAPSHOT), and deploys the artifacts to the repository manager.
Dev 2 can now upgrade B to use the final release of A (change the entry in the XML, or use the VS add-in to do so)
Now Dev 2 can release B, again with automatic creation of a tag or branch and deployment of the built artifacts.
If you want to provide zipped packages as output from your build, the Maven Assembly Plugin will help you do that.
You can use Apache IVY in standalone mode.
http://ant.apache.org/ivy/history/latest-milestone/standalone.html
I need to emphasize "standalone" mode. If you google for examples, you will find a lot of (non-standalone) ones.
Basically, IVY works on this premise:
You publish binaries (or any kind of file, but I'll say binaries from this point forward) as little binary packages.
Below is PSEUDO code, do not rely on my memory.
java.exe ivy.jar -publish MyBinaryPackageOne.xml --revision 1.2.3.4 (where the .xml refers to the N files that make up the one package)
"Package" simply means a group of files. You can include .dll, .xml, and .pdb files in a package (which is what I do with a .NET build of assemblies). Or whatever - IVY is file-type agnostic. If you want to put Word docs up there you could, but SharePoint is better for documents.
As you make bug fixes to your code, you increment the revision.
java.exe ivy.jar -publish MyBinaryPackageOne.xml --revision 1.2.3.5
Then later you can retrieve from IVY what you want:
java.exe ivy.jar -retrieve PackagesINeed.xml
PackagesINeed.xml would contain information about the packages you want - something like "I want version 1.2+ of MyBinaryPackageOne" (defined in XML).
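If memory serves (pseudo-ish again - check the IVY docs for the exact syntax), that retrieve configuration is an ivy.xml along these lines:

<ivy-module version="2.0">
  <info organisation="myCorp" module="MyApp" />
  <dependencies>
    <!-- any 1.2.x revision of the package -->
    <dependency org="myCorp" name="MyBinaryPackageOne" rev="1.2+" />
  </dependencies>
</ivy-module>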
As you build your framework binaries, you PUBLISH to IVY.
Then, as you develop and build your code, you RETRIEVE from IVY.
In a NUTSHELL, IVY is a repository for FILES (not source code).
Ivy then becomes the definitive source of your binaries.
None of the "Hey, Developer-Joe has the binaries we need" kind of bull-mess.
.......
Advantages:
1. You do NOT keep your binaries in source control (and thus do not BLOAT your source control).
2. You have ONE definitive source for binaries.
3. Through XML configuration, you say which versions of a library you need.
(In the example above, if version 2 (2.0.0.0) of MyBinaryPackageOne is published to IVY - let's assume with breaking changes from 1.2.x.y - then you are OK, because you defined in your retrieve configuration (the XML file) that you only want "1.2+". Thus your project will ignore anything 2+, unless you change that configuration.)
Advanced:
If you have a build machine (CruiseControl.NET, for example), you can write logic to publish your (newly built) binaries to IVY after each build.
(Which is what I do).
I use the SVN revision as the last number in the build number.
If my SVN revision was "3333", then I would run something like this:
java.exe ivy.jar -publish MyBinaryPackageOne.xml --revision 1.2.3.3333
Thus whenever I retrieve the package for revision "1.2.3+", I'll get the latest build.
In this case, I would get version 1.2.3.3333 of the package.
It's sad that IVY was started in 2005 (well, that's the good news)... but NUGET didn't come out until 2010 (2011?).
Microsoft was 5-6 years behind on this one, IMHO.
I would never go back to putting binaries in source control.
IVY is very good. It is time-proven. It solves the problem of DEPENDENCY management.
Does it take a little bit of time to get comfortable with it?
Yep.
But it is worth it in the end.
My 2 cents.
.................
But idea #2 is:
Learn how to use NUGET with a local (as in, local to your company) repository.
That is about the same thing as IVY.
But having looked at NUGET, I still like IVY.