There is a great deal of documentation and many samples on how to create build definitions in VSTS and TFS 2015+ for Service Fabric continuous integration and deployment.
What is available in terms of integration with TFS 2013 for deployment of Service Fabric applications?
How do we integrate the build and deployment of on-premises Service Fabric clusters / applications / services with TFS 2013?
Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable distributed applications. It was developed as a Microsoft-internal-only platform for over five years before shipping publicly as a product in 2015.
The vNext build system was also released in 2015 and offers many benefits over XAML build, such as simpler authoring and deeper customization, which is why most documentation relates to vNext builds.
Looking at the build and release steps in the documentation you mentioned, they are mostly standard tasks such as build, test, copy files, and publish artifacts, so they are not hard to convert to a XAML build; you just need to do some build activity customization. One specific task, the Azure Resource Group Deployment task, has no counterpart in XAML build. However, that task is used to create or update a resource group in Azure using Azure Resource Manager templates, and you could use PowerShell to achieve that part. Most importantly, use a PowerShell script to publish to Service Fabric.
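For reference, a minimal sketch of such a publish script, assuming the Service Fabric SDK's PowerShell module is available on the build agent (the endpoint, package path, and application names below are placeholders):
# Connect to the cluster
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster:19000"
# Copy the application package to the cluster's image store
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\pkg\MyApp" -ImageStoreConnectionString "fabric:ImageStore" -ApplicationPackagePathInImageStore "MyApp"
# Register the application type and create an application instance
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyApp"
New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"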
Actually, when working with TFS 2013 XAML builds, we usually integrate with Azure Cloud Services, not Service Fabric. There are also related blog posts with detailed steps showing how to do it. You could take a look at: Continuous Delivery for Cloud Services in Azure
Moreover, since you are still using XAML build and are staying on TFS 2013, we encourage you to update your TFS version to get the latest technology and move to the new web-based vNext build system.
In TFS 2018, we even removed support for XAML builds. For the benefits of vNext builds, you could refer to this article: Why You Should Switch to Build VNext
We currently have an internal WPF application that serves the business in different ways for different departments. We have a staged rollout process that takes changes through the following steps:
Development (local)
Alpha testing
Beta testing
Live
Developers need to be able to run all of these versions of the application, and some users access the Beta version to sign off new features.
Currently, this is done through a Launcher application deployed via ClickOnce, which downloads and runs the client binaries for the selected version. Each version of the application is hosted by a corresponding web service on the appropriate server (alpha, beta, live).
Does anyone know how this could be done with UWP? We want to future-proof the application and think about support for Surface, Windows Phone, etc. In all cases, though, developers and users should be able to access the different versions of the application, sometimes even running them at the same time.
Is there support for this kind of concurrent deployment of multiple versions of the same UWP application?
For development, these applications may be installed via PowerShell. From the App Store you would only get the latest released version, but locally you can do what you want.
The required PowerShell scripts are generated when you deploy the files to the local file system with Visual Studio. They will even prompt you to create a local developer license if your machine requires one.
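For example, a minimal sketch of installing a sideloaded package by hand, assuming developer mode/sideloading is enabled (the package path and name are placeholders):
# Install (or update) a locally built package
Add-AppxPackage -Path ".\AppPackages\MyApp_1.0.0.0_x64.appx"
# List what versions of the app are currently installed
Get-AppxPackage -Name "*MyApp*"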
I am new to MongoDB. I would like to ask for help with deploying a C#/.NET app that uses MongoDB. I tried to publish it, but when I run it, it does not work. I know the problem is that I need to manually run mongod.exe from C:/mongodb/bin/mongod. How can I set things up without manually running mongod.exe? Your help is highly appreciated. Thank you :)
You should understand that your .NET application and the Mongo database are different parts of the system; they can even be placed on different machines. So, publishing your application shouldn't affect the availability of the database.
However, you can combine these two actions in one simple batch file:
rem build/publish the application
msbuild.exe [your app with necessary options]
rem start the MongoDB server
C:/mongodb/bin/mongod.exe [options]
For how to build and deploy web apps via msbuild, see here:
How to Publish Web with msbuild?
Invoke a publish from msbuild for visual studio 2012
You probably want to set up MongoDB to run as a Windows service, rather than manually starting the server on demand.
Instructions can be found here:
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/
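As a rough sketch, assuming MongoDB lives under C:\mongodb and the config file specifies dbpath and logpath (paths below are placeholders):
# Register mongod as a Windows service using a config file
C:\mongodb\bin\mongod.exe --config C:\mongodb\mongod.cfg --install
# Start the service (it will also start automatically on boot from now on)
net start MongoDB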
I am developing a C#, MVC4, EF5 Code First application on .NET in Visual Studio 2012 and have used the VS publish mechanism to deploy it to an Azure Website with an Azure SQL Database.
I now want to use Git and GitHub for version control and involve others in the project.
However, although I am familiar with using Git in a LAMP environment, I have no experience of using Git with Windows, Azure Websites and a compiled environment.
I would like to use the Azure Website as the production server, another Azure Website as a Staging server, developer Windows machines using Visual Studio for development and GitHub as the central repository.
There is a helpful article here: http://www.windowsazure.com/en-us/develop/net/common-tasks/publishing-with-git/. I can get my head around what would be needed here for, say, a PHP application on Azure, but I am unsure of the best approach with a compiled application and what I can achieve using Azure Websites and Visual Studio.
A nudge or two in the right direction would be greatly appreciated!
Don't publish from VS to Azure; instead, set up your Azure website to pull from the GitHub repo. The deployment process compiles your solution.
Watch http://www.youtube.com/watch?v=5NGieL0tinw&feature=youtu.be&hd=1 or read http://vishaljoshi.blogspot.com/2012/09/continuous-deployment-from-github-to.html
Also, ScottGu announced this on his blog at http://weblogs.asp.net/scottgu/archive/2012/09/17/announcing-great-improvements-to-windows-azure-web-sites.aspx. He also talks about a cool feature, publishing branches, which will nail your requirement for a staging server and a production server: have a stage branch and a production branch and merge to them as desired. See the section "Support for multiple branches".
Looks like they finally added support for private repos.
AppHarbor is a competitor to Azure that does something similar.
You are basically introducing a new step with the requirement that the source code must be compiled before it can be deployed to the server. Where you implement this step is up to you. You could:
Ensure that your target server has the capabilities to compile the source code (some Continuous Integration tools, such as CruiseControl.NET, could help with this). This has the caveat that the target server must be able to compile source code (possibly even requiring Visual Studio to be installed), so it may not be an option.
Check the compiled binaries into source control. You could keep these compiled binaries separate from the main source branch, to keep things clean. Deploy the binaries to the target server.
Some hybrid of the previous two options is also possible; you could set up a Continuous Integration server with CruiseControl.NET, which can check out the current source, build it, and check the resulting binary back into a special branch, then deploy that branch to your target server. A sketch of this flow follows below.
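A minimal sketch of that hybrid flow as it might run on the CI server (repository URLs, paths, and branch names are placeholders; in practice these steps would live in CruiseControl.NET tasks):
# Check out the current source and build it
svn checkout http://svnserver/repo/trunk src
& msbuild.exe .\src\MyApp.sln /t:Rebuild /p:Configuration=Release
# Copy the binaries into a working copy of the special binaries branch and commit
svn checkout http://svnserver/repo/branches/binaries bin-branch
robocopy .\src\MyApp\bin\Release .\bin-branch /MIR /XD .svn
svn commit .\bin-branch -m "CI build output"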
I have been testing Jenkins CI, and now it is time to build a server. What is the best way to go? There are plenty of options, and I don't know which one to choose:
a shared machine, with other servers running on it,
a virtual machine, inside a machine used for other servers,
a stand-alone machine, or
multiple machines with different OSes, one for each platform to be tested?
(I have some web UI tests, based on selenium)
Also, I want a suggestion for the OS to use. I use msbuild, which is probably only available on Windows... but maybe a Linux server with some sort of build tool from Mono would be the best way to go.
I am not tied to Jenkins, but it seems to be the best. If you know of better options, let me know.
I need opinions, I need to know what possibilities exist, and if possible, to know what others are doing, and what experiences you have with various setups, so that I can make a solid decision.
Thanks!
First things first: my CI server is a VM running CruiseControl.NET. I don't use Jenkins, so I can't really comment on it. From the looks of things, Jenkins is more well-developed than CC.NET.
Per the virtual vs. physical question: ultimately, it doesn't really matter as far as CI is concerned. As long as the server is visible on the network and has enough resources to perform its function, the rest is just administration. Personally, I find the benefits of virtualization to be worth the extra effort: you can easily add resources, move its physical location, or stand up additional VMs to run a cluster. The benefits of virtualization are well known, and everybody is doing it these days.
My CI server is on a VMware ESX server that has a ton of CPU and RAM to dish out and runs many other VMs. I have about 35 sites running through CI, probably 20 of which are hosted on the machine itself, and another 70 sites that are set to build by manually triggering them through the CI dashboard. I have never had any relevant performance issues with it.
Your build server should ideally have the same setup as whatever machine(s) you are planning on deploying your code to. For websites, that would be the same OS as your production servers (probably Windows 2003 or 2008). For desktop applications, I would probably just pick the latest and greatest OS that you are targeting for support and can afford.
Using multiple machines with multiple OSes would only be relevant when you are building desktop applications that you are trying to support on multiple OSes. In that case, having multiple servers would be ideal, but I see that as a lot of work to set up. Personally, I would start simple, get everything running, and add pieces as they become truly necessary.
As I mentioned, I use CruiseControl.NET. It's been great so far and I am happy with it. Since it is written in .NET and you are using .NET, there are fewer moving parts that your server needs to run (I see Jenkins is built on Java). Writing plugins/extensions should theoretically be easier since you already have .NET people in house. I've never written an extension for CC.NET, so I can't say that with certainty, though I know it is possible. The downside is that the community is small and active development is slow.
Finally, I'll add that it will be A LOT of work to get started. It took me over six months to get my CI server ready for production, a few more to migrate all of our projects over to run through it, and many more to train the rest of the developers on how to use it and work with it.
So, in summation:
Virtualization is good! (But it doesn't really matter.)
You should match your CI environment to whatever environment you are deploying to, if possible.
You'd better be ready to commit for the long haul.
Continuous integration is great and you won't regret setting up a CI server. Whatever you choose, it will be better than the "cowboy coding" that used to go on :)
EDIT: Other answers are posting their processes, so I guess I should have done that too! :)
My shop builds LAMP and .NET websites, so we needed something that could work effectively with both. We have CC.NET running as the core framework, but nearly all of the functionality is performed by custom NAnt scripts. We use NAnt because it is (1) .NET based with built-in .NET tasks and (2) makes it easy to perform command-line operations, which form the core of all of our build steps.
CC.NET listens to the SVN server and grabs updates as they are made. CC.NET checks them out and fires off the NAnt task that performs all the actual work. For .NET, that means mstest for unit tests and msbuild to build and publish; PHP usually just moves the files straight to the destination environment. Then, if all steps were successful, Robocopy copies the files to the destination server, which was mapped as a network drive during a Group Policy startup script (Windows servers are mapped with net use and LAMP servers are mapped with WebDrive).
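The core of those NAnt steps boils down to a few command-line calls; a minimal sketch (solution, test container, and server names are placeholders):
# Build and publish the solution
& msbuild.exe .\MySite.sln /t:Rebuild /p:Configuration=Release
# Run the unit tests
& mstest.exe /testcontainer:.\MySite.Tests\bin\Release\MySite.Tests.dll
# Mirror the output to the destination server
robocopy .\MySite\bin\Release \\webserver\MySite /MIR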
We have development servers, staging/QA servers, and production servers. Since we work in both .NET and LAMP, we have one server per platform for each of these stages: six in total, all virtual. Our development servers are the only ones set to a continuous integration build; staging and production are force-build only, along with some other SVN wizardry to prevent accidental deployments. We also build and unit test ActionScript using MXMLC, but that is rare for us.
Here's our setup. We have two virtual servers (a build server and a test server), and then two production servers.
The build server is running TeamCity (for CI) and FinalBuilder (for some of the more complex build jobs that involve editing XML files, changing config settings, and installing and registering Windows services).
Most of our applications are ASP, ASP.NET, or MVC web apps. TeamCity checks the code out of Subversion automatically (triggered by a check-in), compiles anything that needs compiling, and deploys the latest pages and DLLs to the IIS web server that's running on the build box.
All our sites have multiple host headers set up in IIS, so the same site is listening as www.mysite.com.build, www.mysite.com.test, and www.mysite.com. We've set up DNS wildcard aliases on our domain controller, so that *.build points to the build server, *.test points to the test server, and so on.
This means that as soon as code has been committed and built by TeamCity, everyone in the company can see it on www.whatever.com.build.
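On IIS 7+ those extra host-header bindings can be scripted with the WebAdministration module; a minimal sketch (site and host names are placeholders):
Import-Module WebAdministration
# Add the .build host header alongside the site's existing bindings
New-WebBinding -Name "MySite" -Protocol http -Port 80 -HostHeader "www.mysite.com.build"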
There's then another TeamCity job that uses msdeploy.exe to push individual websites - including their virtual apps and subfolders - from the build server to the test server.
At each stage, TeamCity runs any unit tests that are part of the project, and also runs a separate project that does HTTP requests to various key URLs on our site and makes sure everything is up, running and responding.
Finally, there's a "go-live" task that msdeploys the ENTIRE server from test to live; this means the complete server configuration is controlled by TeamCity, which discourages making config changes on live servers, since your changes will get overwritten during the next deployment.
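The two msdeploy calls described above look roughly like this; a minimal sketch (server and site names are placeholders):
# Push one site, including its virtual apps and subfolders, from build to test
msdeploy.exe -verb:sync -source:iisApp="Default Web Site/MySite",computerName=BUILD01 -dest:iisApp="Default Web Site/MySite",computerName=TEST01
# "Go-live": sync the entire web server configuration from test to live
msdeploy.exe -verb:sync -source:webServer,computerName=TEST01 -dest:webServer,computerName=LIVE01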
TeamCity is fantastic - we've now licensed it because we needed > 20 projects (and LDAP authentication), but the free version served us well for years, and it's an absolutely awesome piece of software. FinalBuilder is expensive but very, very easy to use - if you're cash-rich and time-poor, go for it; if you've got more time than money, stick to NAnt or msbuild and write your own steps for editing web.config files, etc.
EDIT: Another detail I missed: we have a test and a live database server. Coders' workstations and the .build servers are all set up to use the test database; the *.test and live servers talk to live data. We use SQL Compare to (manually) push schema changes from the test SQL server to the live SQL server, but normally TeamCity just tweaks the config files between build and test to toggle the database connection string.
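That connection-string tweak is easy to script; a minimal sketch in PowerShell (the file path, connection name, and server name are placeholders):
# Point web.config at the test database
$cfg = [xml](Get-Content .\web.config)
$cs = $cfg.configuration.connectionStrings.add | Where-Object { $_.name -eq "Main" }
$cs.connectionString = "Server=TESTSQL01;Database=MyDb;Integrated Security=True"
$cfg.Save((Resolve-Path .\web.config).Path)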
I would consider best practice to be:
A separate build server (it doesn't matter whether it is virtual or not)
The build server builds the code on check-in
A separate deployment server for testing (again, virtual or not doesn't matter)
Have your build deploy to the test server (you can have a separate build for this, i.e. a CI build plus a Build-and-Deploy build for testing)
Any unit or integration tests I would run on the build server; manual testing is done on the test server
I hope this helps.
My current setup and best practice:
Development projects and environment:
C++ and C# applications, including some web-based C# applications.
Windows applications.
Subversion.
~30 developers worldwide accessing centralized build servers.
Developers commit to the trunk of repositories.
Build scripts:
We employ Visual Build Professional (VBP, www.kinook.com) as our corporate build tool.
Build scripts are hierarchically designed into layers, which perform different functions and can be reused.
Build scripts design:
1. Build machine layer: checks for required build tools and checks out source code from the SVN trunk.
2. SVN layer: performs branching, versioning, committing, and switching back to trunk.
3. Build product layer: a build script that runs N sub build scripts, where 1 sub build script = 1 project (not a VS project). (Developer friendly)
4. Sub build script layer: defines a collection of C#/C++ solutions to be built and their build order dependencies. Uses MSBuild /t:Rebuild to build solutions and devenv to build special projects. (Developer friendly; see the sketch below.)
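A minimal sketch of what a sub build script's core amounts to (solution names and build order are placeholders):
# Build order dependency is expressed by the order of the calls
& msbuild.exe .\Common.sln /t:Rebuild /p:Configuration=Release /m
& msbuild.exe .\ProductA.sln /t:Rebuild /p:Configuration=Release /m
# Special projects that MSBuild cannot handle are built with devenv
& devenv.exe .\Special.sln /Rebuild Release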
Daily builds:
Runs layers 1 to 4 of the build scripts design.
Continuous integration (ci) builds:
Runs layers 3 and 4 of the build scripts design.
Basic build environment (our more complex projects are built upon these principles):
A daily build server separated from the continuous integration build server, plus separate test servers for testing after each successful continuous integration build. (1 daily build server, 1 ci build server, N test servers)
VMs running Windows Server with multiple CPUs as build machines. (For MSBuild /m)
Other Windows OSes as test machines.
CruiseControl.NET (CCNet) installed on all build/test machines.
Daily builds controlled by CCNet and run at a scheduled time each day.
Continuous integration builds triggered by CCNet upon commits.
Build behavior:
Daily build starts at midnight and publishes build output to a network shared drive, e.g. \\share\daily_build. (Yes, we still use shared drives.) :)
Upon a successful daily build, a ci build is automatically triggered to clean up the working copy, check out source code, and build from scratch. (MSBuild /t:Build)
The ci build then copies the built binary output to a network shared drive, e.g. \\share\ci_build. (Notice: two different folders, one for the daily build and one for the ci build; a sketch follows below.)
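A minimal sketch of that ci build-and-publish step (solution and share names are placeholders):
# Build from the freshly checked-out working copy, then mirror the output to the share
& msbuild.exe .\Product.sln /t:Build /p:Configuration=Release /m
robocopy .\Product\bin\Release \\share\ci_build\Product /MIR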
Development environment:
Developers execute a batch file that pulls the up-to-date ci build output onto their development machines.
Developers and project managers rely on the ci build status and have CCNet Tray installed to see build outcomes immediately.
Developers sometimes hold lotteries to see who broke the build; the loser's punishment is to bottoms-up a beer on Friday. :D
Hope this helps.
I would suggest a separate physical build server for one simple reason... it gets buy-in from management.
Once they have actually had to fork out money, they become a lot more interested in how the continuous integration is going.