I work at a company where we have a monolithic application, and I want to split it into smaller web services / Windows services / websites, etc.
I want to be 'smart' and reuse libraries contained in the monolith by putting them into their own class libraries and creating a NuGet package for them on build, using Azure DevOps (although I'm not entirely bound to this tool).
The main one I am attempting to isolate is the Data-Access Layer, as it is the most commonly used.
So far, the CI side will build one of my class libraries and publish it, with an incremented version number, to a NuGet feed I can connect to through VS.
The problem is, I cannot for the life of me figure out how to debug these NuGet packages as if we were still debugging in the monolith - and we are all very used to being able to debug through the entire end-to-end of request -> response.
I think I essentially want to:
Host a NuGet package
Have it able to be built in DEBUG or RELEASE variants (RELEASE on publish, DEBUG during development)
Step into the library easily, with full source and variable-watch abilities, as if we were still in the monolith design
I refuse to believe this isn't possible, as I assume most companies that craft good code surely do this. Or am I thinking about it all wrong?
The only thing I've managed to find online is someone who actively copies NuGet packages locally to his machine, builds them in DEBUG, and drags .pdb files across - which would be more hassle than it's worth and makes me want to just 'stick to the monolith design'.
It depends on your expectations, or what you consider the definition of debugging. Say you're debugging and the bug is indeed in the package. Do you expect to be able to change a few lines of code of an assembly in the package, build a new version of your app using the dependency and run again? In that case no, it's not going to be as easy as that and you're best off keeping everything in a single solution, even if they're deployed as multiple services.
However, if you just mean using Visual Studio's debugger and being able to step into methods, then you can use the Azure DevOps Pipelines task to publish symbols. Note this is different from using a symbols package (snupkg), as you would when publishing symbols to nuget.org. The Azure DevOps task just copies the *.pdb files directly to its symbol server. Each developer will also need to configure their Visual Studio once to use the symbol server.
Related
We are in the process of implementing a DevOps strategy for our client-deployed desktop app (WinForms). Until now, we used SlowCheetah to do our config transforms (e.g. select QA from the configuration manager, app.QA.config is automatically swapped in, do the build, deploy the MSI to QA machines with SCCM).
We are trying to leverage Azure DevOps to automate this process, and I have run into a roadblock. I want to do one build and a release pipeline of Dev --> QA --> UA --> Prod, but since the config transform only runs at build time, I'm not sure how to do this.
The MSI would only be generated for the currently selected config, so the drop in the release step would only have one MSI (with the config already packaged and no way to change it).
I know having the build step build the solution four times (once for each config) would work - the drop would contain all four MSIs - but that seems silly.
I can't just build the setup project in the release pipeline either, as only the DLLs are available in the drop, not the project files. How can I accomplish this?
Thanks!
We had exactly the same problem building MSIs from a Visual Studio solution that contained a WiX Installer project, using config transforms on the app.config to replace the configuration.
As you suggested, we originally went down the route of running an Azure DevOps build pipeline with multiple builds, one for every configuration in the solution, but this quickly became inelegant and wasteful: not only did we require builds for dev/stage/qa/live, we also had configurations that applied to multiple customers, which ended up with 12+ configurations in the solution and really long build times.
Replace config within the MSI
The solution we ended up with, as alluded to in a previous answer, was to build the MSI only once in a build pipeline, copy the MSI along with all our replacement app.config files to the drop folder, and then run a custom application within the release pipelines to forcibly replace the Application.exe.config inside the MSI. Unfortunately, this isn't as simple as 'unzipping the MSI', replacing the config and 're-zipping' within a release task, because the MSI uses a custom file format and maintains an internal database that needs to be modified properly.
We ended up creating a custom C# .NET console application using the method posted in this Stack Overflow answer, which we then hosted on our on-premises build agent so that we could run a simple PowerShell task within our release pipeline that called our custom console application with some relevant parameters:
"C:\BuildTools\msi_replace_file.exe" -workingfolder "$(System.DefaultWorkingDirectory)/_BuildOutput/drop/Application.Installer/bin/Release/" -msi "Application.Installer.msi" -config "Application.exe.config"
We then had a release pipeline stage for each 'configuration' that performed the config replacement along these lines and then deployed the resulting MSI.
There are various other methods for replacing a file in an MSI, as described in this question, but we chose to create a C# application using the utilities within the Microsoft.Deployment.* namespace that are provided as part of the WiX Toolset. This guaranteed compatibility with the version of WiX we were using to build our installer in the first place and gave us full control of the process. However, I appreciate that this approach is quite brittle (which I'm not happy about) and not especially scalable, as it relies on a custom tool hosted on our on-premises build agent. I intend to improve this in the future.
You should also be aware that hacking the MSI in this way could cause problems in the future, especially if you change your toolchain or upgrade to a later version of WiX.
Building the MSI from the release pipeline
I do not personally like the idea of copying the required DLLs/assets to the drop location and then 'building' the MSIs within the release pipeline. For us, the act of building the WiX project was very much part of our 'build process' and was integrated into our Visual Studio solution, so moving the creation of the MSI to the release pipelines felt counterintuitive. It would also potentially require us to create custom tasks on the build agents to run the WiX CLI tools (heat.exe, light.exe, candle.exe) against a version of our WXS file, or to have build steps that just built the wixproj file instead of the whole solution. However, I can see how this alternative approach may be suitable for others, and I think it is equally valid depending on your circumstances.
What we did a few years back was to maintain a subfolder containing all the environment config files. Using a custom action at install time, with the particular environment supplied on the command line, the custom action extracts the config file from the matching environment folder in ConfigFiles.zip.
The folder structure in the ConfigFiles.zip file is similar to this:
/Dev1/app.config
/Dev2/app.config
/Prod/app.config
MsiExec.exe /i YourMSI.msi /TargetDir=C:\Yourfolder /Config=Prod
The custom action then extracts the app.config from the Prod folder and puts it in place.
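If you want to implement something like this yourself, a managed (DTF) custom action can do the extraction. This is a rough sketch, not the original code: it assumes an immediate custom action, that ConfigFiles.zip has already been laid down in the target folder, and illustrative property names (CONFIG, TARGETDIR):

using System.IO;
using System.IO.Compression;   // requires a reference to System.IO.Compression.FileSystem (.NET 4.5+)
using Microsoft.Deployment.WindowsInstaller;

public class ConfigActions
{
    [CustomAction]
    public static ActionResult ExtractConfig(Session session)
    {
        // Supplied on the MsiExec command line, e.g. CONFIG=Prod.
        string environment = session["CONFIG"];
        string targetDir = session["TARGETDIR"];

        using (ZipArchive zip = ZipFile.OpenRead(Path.Combine(targetDir, "ConfigFiles.zip")))
        {
            // Pull app.config out of the folder matching the requested environment.
            ZipArchiveEntry entry = zip.GetEntry(environment + "/app.config");
            if (entry == null)
            {
                session.Log("No app.config found for environment '{0}'.", environment);
                return ActionResult.Failure;
            }
            entry.ExtractToFile(Path.Combine(targetDir, "app.config"), true);
        }

        return ActionResult.Success;
    }
}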
To do this in the release pipeline you've really only got a couple of choices:
Break the MSI apart, re-import the right config, and repackage. (I don't know how easy this would be, as I don't know MSI internals, but I have taken this approach with other package formats that are effectively .zip files.)
Build the package in the release pipeline. You say the files aren't available in the drop, but you are in control of this from your build pipeline (assuming this was done with Azure Pipelines also). You can change your pipeline to copy the needed files (with a copy task) into the place you create your drop from, which is usually $(build.artifactstagingdirectory). Alternatively, if you don't want to mix these files into your drop, you can create a second artifact drop (just add another publish artifact task for this). If I took this route, I would copy the files that are in $(build.artifactstagingdirectory) today into $(build.artifactstagingdirectory)/packagefiles, copy the project files needed to package up the MSI into $(build.artifactstagingdirectory)/projectfiles, and point the two publish artifacts tasks at these two directories.
Once you have the drops including the files to build your MSI, you'll need tasks to swap in the right config and then an MSI packaging task, and you should be done.
Another way of doing this:
Instead of placing environment-dependent settings in the App.config or any of its transforms, you could configure your app dynamically at runtime.
This would, however, require your app to get some clue from the target system so that it knows in which environment, or on which host, it is running.
Example:
ASP.NET Core applications assume the existence of an "ASPNETCORE_ENVIRONMENT" environment variable. If it is not present, the application assumes it is running in production.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/environments?view=aspnetcore-5.0
Providing such an environment variable shouldn't pose a problem when you have access to the hosts anyway.
Simply add a step at the beginning of your release pipeline to set up the environment variable, setting its value to the name of the stage you're trying to deploy to (development/testing/etc.).
That way you can build your MSI once in the build pipeline and deploy to as many environments as you like.
Of course, this requires you to prepare your app for that target environment ahead of time.
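The same convention is easy to follow in a desktop app as well. A minimal sketch; APP_ENVIRONMENT is a made-up variable name, so use whatever your pipeline actually sets:

using System;

static class AppEnvironment
{
    // Mirror the ASP.NET Core convention: if the variable is absent,
    // assume we are running in production.
    public static string Current
    {
        get { return Environment.GetEnvironmentVariable("APP_ENVIRONMENT") ?? "Production"; }
    }
}

Your startup code can then choose connection strings and endpoints based on AppEnvironment.Current instead of baking them into the MSI.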
I am currently working on a personal project, but I want to use the Azure & Visual Studio Online build facilities for self-teaching purposes. I am having a hard time resolving this problem:
I have a WPF app connected to an Azure web API.
The WPF app is in its own Git repo; the web API is also in its own Git repo.
Since both apps share a common model, I put the common model in its own repo as well to avoid code duplication.
I must be missing something...
What I want to do
When I build on Visual Studio Online, I want to build "common" and feed its output DLLs to the web API and WPF apps so that they can reference the model.
Solutions considered so far
NuGet package
Making a NuGet package of "model" - but where do I push it? It's definitely not going to be of any value to nuget.org, so that's a no-go.
I would need some private NuGet repo in Visual Studio Online; I'm not sure one exists.
Post-build event
I also considered adding a post-build event to the "common" build to copy its bin\*.dll output to some "dependencies" folder in the WPF and web API apps, but I find this dirty. Moreover, I am not sure a build can push its output to the input of another build (I know Jenkins can, but I am unsure about Visual Studio Online). And how can I reference DLLs which do not exist yet in my csproj?
commit bins in repo (ugh)
Of course, I could build the model locally and push the resulting DLL into the Git repos, but, well, I am against putting binaries in version control :)
Change my design
Consider that the WPF app only needs DTOs and not the real entities (which is true), but the web API will need to deserialize DTOs anyway, so back to square one, but with DTOs this time :)
Thanks for your input!
Thanks a lot to CrowCoder!
That's exactly what I needed: using the "Package Management" extension in Visual Studio Online, which is free for up to 5 users.
Steps required:
configure NuGet on my local machine,
create the nuspec,
create the feed,
package the model library,
configure the build to push the library to the feed,
use NuGet packages to reference the model
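For reference, the pack-and-push part of those steps boils down to two NuGet CLI calls; the package name and feed URL below are placeholders for whatever your Package Management feed gives you:

nuget pack Model.nuspec -Properties Configuration=Release

nuget push Model.1.0.0.nupkg -Source "https://youraccount.pkgs.visualstudio.com/_packaging/YourFeed/nuget/v3/index.json" -ApiKey VSTS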
I keep running into an issue with our TFS build server. I've got two projects (both in the same solution): one is a WebForms project running .NET 4.0, the second is an ASP.NET MVC 5 project running .NET 4.5. There is also a Silverlight project, but the problem is reproducible with just the first two.
Both of these projects use NuGet packages for various libraries. Sometimes a package contains different assemblies for the respective target frameworks: a .NET 4.0, .NET 4.5, or Silverlight assembly, etc.
The build server seems to dump all of the libraries required into a single folder, then pulls from that to build the solution. This causes problems in many cases, with the wrong project getting the wrong assembly version. This does not occur locally, only on the build server. I can't figure out what I need to do to keep this from happening. Any ideas?
Yes, I hate this standard behavior, but TFS will output everything to the same folder by default, and then you will get various errors depending on the order in which MSBuild compiles your projects if you have references with the same name or even project outputs with the same name.
The easiest workaround is to use the AsConfigured option on the Process tab, '2. Build' -> 'Output location' of the build definition window. This keeps your normal source structure intact, but I think you will lose support for automatically dropped outputs (i.e. you will have to provide a script to do that yourself). If you are only using TFS Build for validation, this is the cleanest approach.
You can also use the PerProject setting and split up your projects into two distinct solutions, perhaps suffixed by platform (we've done that numerous times in our company). Then, you specify both solutions to the build process and it will create two separate folders in the output, one for each solution.
This is all assuming you are using TFS 2013. In TFS2012, there is a similar option but it is in '3. Advanced' -> 'Solution Specific Build Outputs'. You will probably have to go this route if you are using TFS2012 or you will need to modify the default workflow yourself to add your own logic.
EDIT:
From your comment to the other poster, I see you are using TFS 2010. Well... I think this was simply not supported at that time; I remember having similar problems, but we upgraded to TFS 2012 and all was well.
I think your only option is to either create two separate build definitions and build each solution that way, or to check out the XAML workflow and edit it with your own logic. Perhaps downloading the TFS 2012 template and "porting" it to TFS 2010 would be a better approach, since at least you would not be reinventing the wheel that way.
Situation - We have a .NET MVC solution with a WCF layer. The solution has about 20-odd projects that get compiled into DLLs. The site runs on SQL Server 2008. We maintain the SQL scripts in the solution folder as versions, so we have SQL scripts from version 1.0.0.0 up to, let's say, the latest, which is 3.0.0.1.
The solution is source-controlled in TFS; we also use TFS to manage the work items, bugs, etc. The SQL script files are in TFS as well.
Question - Do we need version numbers on the assemblies, i.e. the DLLs, as well? Our DLLs are not exposed to the outside world in any way; they are just part of the runtime of the MVC app. We do not expose the WCF layer to outside clients either; again, it's just used by the MVC app.
The deploy process is simply the latest code against the latest DB, so when we deploy, we check what version the DB is at and run a tool to upgrade it to the latest version that is in the DB project in the solution.
One of our senior architects is saying that we should maintain version numbers in the assemblies as well. I am saying that we don't need any version numbers in the code, because TFS manages that: when we release, we just deploy the latest code with the latest assemblies/deploy package.
I have not come across assembly versions mattering unless the assemblies were released to the outside world (if you know what I mean).
Please can you suggest... Also note we don't do feature development; the version numbers are just so that we know what version a particular DB is at.
I would prefer the security of knowing, and being able to double-check, versions. If there were a problem with the publishing process, or a bug that appeared to be a publishing problem, I would want to rule things out as quickly as possible. I also think it's so easy to implement that you've spent more time discussing and thinking about it than you would have actually spent doing it, and there is no downside that I can think of.
In a similar project at my job, we use version numbers.
Every commit against the version control system (VCS) causes our CI server (TeamCity) to build a new artifact, with the version set to "LATEST". Every successful build of "LATEST" gets deployed automatically to our test environment. We could, in theory, also deploy this "LATEST" version to production, but we don't.
When we want to deploy a new version to production, we run a different, manual build job which creates a versioned release (e.g. 1.4.7). The build job also creates an SVN tag of the current codebase. To give our DLLs the appropriate version, we use TeamCity's AssemblyInfo Patcher feature. This way, we don't have to constantly update our projects' AssemblyInfo.cs files by hand. Instead, they always contain placeholder version info like this...
[assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyFileVersion("1.0.0.0")]
These numbers get updated automatically during the build by TeamCity. The versioned artifacts (which include any corresponding SQL scripts) are saved to our "Releases" directory, where we keep all our versions of the codebase.
Now this all seems like overkill, right? Not really.
This gives us the following benefits...
Our deploy process does a wget to our monitoring page, which lists the version number, and asserts that the versions match up (the version expected to have been deployed vs. the version currently running on the server). This gives us confidence that our deploy process worked properly (see the sketch below).
If bugs are found in the versioned release (the production release candidate), we can check out the SVN tag, apply a fix, and create a new release without having to worry about other changes on trunk which could compromise the release. It is hard to stay "releasable" all the time; this approach means you don't have to be. Although, don't get me wrong, staying releasable has its advantages.
If problems are found with a versioned release but they can't be resolved quickly, you can always just re-deploy the older artifact which is known to work. Being able to revert a deployed release to an older version has definitely saved us on a couple of occasions.
If bugs are found on production that need to be investigated, we are free to deploy the same versioned artifact to any of our test environments so that we can try to reproduce the problems outside of our production environment.
There are probably more advantages I am forgetting at the moment, but the above list should give a general idea of the power that proper version management can bring to the table.
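As an aside, the version assertion in the first point requires the app to report its own version somewhere. The minimal piece looks like this; how you surface it (a monitoring page or otherwise) is up to you:

using System.Diagnostics;
using System.Reflection;

public static class VersionInfo
{
    // Returns the file version patched in by the build (e.g. "1.4.7.0"), which the
    // deploy script compares against the version it expected to ship.
    public static string Current()
    {
        string path = Assembly.GetExecutingAssembly().Location;
        return FileVersionInfo.GetVersionInfo(path).FileVersion;
    }
}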
What I would advise against is continuously, manually updating 20+ projects' version files. That is a lot of busywork, and it is prone to human error. Whatever you decide to do, automate it and verify the results.
We found a bug in the Web Service Software Factory; a description can be found here. There have been no updates on it, so we decided to download the code and fix it ourselves. It is a very simple bug, and we patched it with maybe three lines of code. However, we have now tried to repackage and use it, and are finding that this is seemingly an impossible process.
Can someone please explain to me the process of PLKs? I have read all about them but still don't understand what is really required to distribute a VS package.
I was able to get it to load and run using a PLK obtained from here, but I am assuming that you have to be a partner to get a functional PLK that will be recognized on other people's systems?
Every time I try to install this on a different computer, I get a "Package Load Failure". Is the reason I am getting errors that I am not using a partner key? Is there any other way around this? For instance, is there any way we can have an "internal" VS package that we can distribute?
Edit
Files I had to change to get it to work:
First run devenv PostInstall.proj
Generate your PLKs and replace ##Package PLK## (in the .resx files)
--Just note that the package name is not the class name; it is "Web Service Software Factory: Modeling Edition"
--And you need to remove the newlines from the key
ProductDefinitionRegistryFragment.wxi, line 1252 (update the version to whatever version you used in the PLK)
Uncomment all the // [VSShell::ProvideLoadKey("Standard", ... constants in the .tt files.
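For context, the attribute being uncommented is what registers the load key with the shell. In C# it looks roughly like this; the edition, version, names, and resource ID are placeholders and must match exactly what you entered when generating the PLK, with the key string itself stored in the package's .resx under that resource ID:

using Microsoft.VisualStudio.Shell;

// The minimum edition, product version, product name, and company must match the
// values used on the PLK request form; 104 is the .resx resource ID of the key.
[ProvideLoadKey("Standard", "1.3", "Web Service Software Factory: Modeling Edition", "Microsoft", 104)]
public sealed class ModelingPackage : Package
{
}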
The short answer is no, you don't need to be a VSIP partner registered with Microsoft to obtain and use a PLK. The PLK you obtained from the site should work on any VS install. (On a related note, Microsoft has eliminated the requirement for PLKs altogether in VS 2010.)
The following pages should help with debugging what the issue is:
http://msdn.microsoft.com/en-us/library/bb164677.aspx
http://blogs.msdn.com/dr._ex/archive/2006/12/14/debugging-package-load-failures.aspx
There is also a tool in the Visual Studio 2008 SDK called the Package Load Analyzer that should help you debug the load failure (and confirm that it's actually a PLK issue and not something else). Copy VSSDK_PLA.exe (found under VisualStudioIntegration\Tools\Bin in the VS SDK install location) to your test machine and run it to install the Package Load Analyzer tool.
You don't have to worry about package load keys when rebuilding the Web Service Software Factory because it is a guidance package that depends on GAX, which has the only PLK needed. To build guidance packages, like the Service Factory, you also need to have GAT installed.
The Service Factory source should contain the setup projects you need to build and redeploy it. If you have an issue, the discussion forums on its community site (http://servicefactory.codeplex.com) are monitored by team members. Response is pretty good.
Aaron is right that this whole story gets a LOT easier in VS2010. VSIX is pretty sweet. We are updating the Service Factory to VS2010. It should be ready for release within a month.
Don
MS p&p