I have a Selenium/C#/SpecFlow test project that was developed locally and is now in TFS.
The TFS version builds and picks up the tests fine, but I am currently using a hardcoded URL as the starting point for the tests.
I want to be able to create two different release definitions, one for the dev env and one for the test env.
I have found that there is a run settings file that can be added and then overridden in the release definition (within the VSTest task), but it seems this is geared towards unit tests.
My question is whether this is the correct place to specify a URL as the starting point for my tests, so that I can then create another release definition for dev and change this variable to my dev URL. Is there a standard approach to doing this?
Ultimately I want to have a variable in my code for a URL, and to be able to override it from the release definition in TFS!
This is the first test project I've set up and built in TFS, so I'm looking for a bit of guidance on where this is best put.
Thanks in advance.
Not sure if I totally got your point, but instead of hardcoding the URL in the tests, you could try this solution, which only requires you to:
· Create a .runsettings file and add your parameters.
· Modify your code to read the parameters from the .runsettings file (see the sketch after this list).
· Update the “Test Steps” in your VSTS release or build definition to point at the .runsettings file.
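To make the second step concrete, here is a minimal sketch of an MSTest/Selenium test reading such a parameter; the parameter name "siteUrl" and the fallback URL are assumptions for illustration, not from the original post:

// Hypothetical .runsettings file checked in next to the test project:
//   <RunSettings>
//     <TestRunParameters>
//       <Parameter name="siteUrl" value="http://test.example.com" />
//     </TestRunParameters>
//   </RunSettings>
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class SmokeTests
{
    // MSTest injects this; TestRunParameters surface via TestContext.Properties.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void HomePage_Opens()
    {
        // "siteUrl" is the assumed parameter name; fall back for local runs.
        string url = TestContext.Properties["siteUrl"]?.ToString()
                     ?? "http://localhost:5000";

        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl(url);
            Assert.IsFalse(string.IsNullOrEmpty(driver.Title));
        }
    }
}

The release definition can then override the parameter per environment; depending on the VSTest task version, the override syntax is along the lines of -siteUrl $(DevUrl) in the "Override test run parameters" field, with $(DevUrl) defined as a release variable.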
For more details, take a look at this blog: Enabling Targeted Environment Testing during Continuous Delivery (Release Management) in VSTS
Another blog that may help you set up your Selenium test CD through TFS: Continuous Delivery using VSO Release Manager with Selenium Automated Tests on Azure Web Apps (PaaS)
We are in the process of implementing a DevOps strategy for our client-deployed desktop app (WinForms). Until now, we used SlowCheetah to do our config transforms (e.g., select QA in Configuration Manager, app.QA.config is automatically swapped in, do the build, deploy the MSI to QA machines with SCCM).
We are trying to leverage Azure DevOps to automate this process, and I have run into a roadblock. I want to do one build and a release pipeline of Dev --> QA --> UA --> Prod, but since the config transform is only run on build, I'm not sure how to do this.
The MSI would only be generated for the currently selected config, so the drop in the release step would only have one MSI (with the config already packaged and no way to change it).
I know that having the build step build the solution four times (once for each config) would work, since the drop would contain all four MSIs, but that seems silly.
I can't just build the setup project in the release pipeline either, as only the DLLs are available in the drop, not the project files. How can I accomplish this?
Thanks!
We had exactly the same problem building MSIs from a Visual Studio solution that contained a WiX Installer project, using config transforms on the app.config to replace the configuration.
As you suggested, we originally went down the route of running an Azure DevOps build pipeline with one build per configuration in the solution, but this quickly became inelegant and wasteful: not only did we require builds for dev/stage/qa/live, we also had configurations that applied to multiple customers, which left us with 12+ configurations in the solution and really long build times.
Replace config within the MSI
The solution we ended up with, as alluded to in a previous answer, was to build the MSI only once in a build pipeline, copy the MSI along with all our replacement app.config files to the drop folder, and then run a custom application within the release pipelines to forcibly replace the Application.exe.config inside the MSI. Unfortunately, this isn't as simple as 'unzipping the MSI', replacing the config, and 're-zipping' within a release task, because the MSI uses a custom file format and maintains an internal database that must be modified properly.
We ended up creating a custom C# .NET console application using the method posted in this Stack Overflow answer, which we hosted on our on-premises build agent so that a simple PowerShell task in our release pipeline could call it with the relevant parameters:
"C:\BuildTools\msi_replace_file.exe" -workingfolder "$(System.DefaultWorkingDirectory)/_BuildOutput/drop/Application.Installer/bin/Release/" -msi "Application.Installer.msi" -config "Application.exe.config"
We then had a release pipeline stage for each 'configuration' that performed two basic steps: copy the matching app.config into the working folder, then run the tool against the MSI.
There are various other methods for replacing a file in an MSI, as described in this question, but we chose to create a C# application using the utilities in the Microsoft.Deployment.* namespace that ship with the WiX Toolset. This guaranteed compatibility with the version of WiX we were using to build the installer in the first place and gave us full control of the process. However, I appreciate that this approach is quite brittle (which I'm not happy about) and not especially scalable, as it relies on a custom tool hosted on our on-premises build agent. I intend to improve this in the future.
You should also be aware that hacking the MSI in this way could cause problems in the future, especially if you change your tool-chain or upgrade to a later version of WiX.
Building the MSI from the release pipeline
I do not personally like the idea of copying the required DLLs/assets to the drop location and then 'building' the MSIs within the release pipeline. For us, building the WiX project was very much part of our build process and was integrated into our Visual Studio solution, so moving the creation of the MSI to the release pipelines felt counter-intuitive. It would also potentially require custom tasks on the build agents to run the WiX CLI tools (heat.exe, light.exe, candle.exe) against a version of our WXS file, or build steps that build just the wixproj file instead of the whole solution. However, I can see how this alternative approach may suit others, and I think it is equally valid depending on your circumstances.
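(For context, driving the WiX CLI yourself outside the solution build looks roughly like this, where Product.wxs is a placeholder for your own source file:)

candle.exe Product.wxs -out obj\Product.wixobj
light.exe obj\Product.wixobj -out bin\Application.Installer.msi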
What we did a few years back was maintain a sub-folder containing all the environment config files. Using a custom action at install time, and supplying the particular environment on the command line, the custom action extracts the config file from the matching environment folder in ConfigFiles.zip.
The folder structure in the ConfigFiles.zip file looks similar to this:
/Dev1/app.config
/Dev2/app.config
/Prod/app.config
MsiExec.exe /i YourMSI.msi TARGETDIR=C:\YourFolder CONFIG=Prod
The custom action then extracts and places the app.config from the Prod folder.
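A sketch of what such a custom action could look like using WiX DTF; the property names CONFIG and TARGETDIR mirror the command line above, and the rest is illustrative (a real deferred action would read its inputs from CustomActionData instead of session properties):

using System.IO;
using System.IO.Compression;
using Microsoft.Deployment.WindowsInstaller;

public class CustomActions
{
    [CustomAction]
    public static ActionResult ExtractConfig(Session session)
    {
        // Public properties passed on the msiexec command line as NAME=VALUE.
        string config = session["CONFIG"];        // e.g. "Prod"
        string targetDir = session["TARGETDIR"];  // install folder

        // ConfigFiles.zip is assumed to have been installed to the target folder.
        string zipPath = Path.Combine(targetDir, "ConfigFiles.zip");
        using (ZipArchive zip = ZipFile.OpenRead(zipPath))
        {
            // Pull app.config out of the folder matching the chosen environment.
            ZipArchiveEntry entry = zip.GetEntry(config + "/app.config");
            entry.ExtractToFile(Path.Combine(targetDir, "app.config"), overwrite: true);
        }
        return ActionResult.Success;
    }
}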
To do this in the release pipeline you've really only got a couple of choices:
Break the MSI apart, re-import the right config, and repackage it (I don't know how easy this would be, as I don't know MSI internals, but I have taken this approach with other package formats that are effectively .zip files).
Build the package in the release pipeline. You say the files aren't available in the drop, but you are in control of this from your build pipeline (assuming that was done with Azure Pipelines too). You can change your pipeline to copy the needed files (with a copy task) into the place you create your drop from, which is usually $(build.artifactstagingdirectory). Alternatively, if you don't want to mix these files into your drop, you can create a second artifact drop (just add another publish-artifact task). If I took this route, I would copy the files that are in $(build.artifactstagingdirectory) today into $(build.artifactstagingdirectory)/packagefiles, copy the project files needed to package the MSI into $(build.artifactstagingdirectory)/projectfiles, and point the two publish-artifact tasks at those two directories.
Once your drop includes the files needed to build the MSI, you'll need tasks to swap in the right config and then an MSI packaging task, and you should be done.
Another way of doing this:
Instead of placing environment-dependent settings in the App.config or any of its transforms, you could configure your app dynamically at runtime.
This would, however, require your app to get some clue from the target system so that it knows which environment or host it is running on.
Example:
ASP.NET Core applications assume the existence of an "ASPNETCORE_ENVIRONMENT" environment variable. If it is not present, the application assumes it is running in production.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/environments?view=aspnetcore-5.0
Providing such an environment variable shouldn't pose a problem when you have access to the hosts anyway.
Simply add a step at the beginning of your release pipeline that sets the environment variable to the name of the stage you're deploying to (development/testing/etc.).
That way you can build your MSI once in the build pipeline and deploy it to as many environments as you like.
Of course, this requires you to prepare your app for that target environment ahead of time.
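Applied to a desktop app, the runtime lookup is only a few lines; the variable name APP_ENVIRONMENT and the URLs below are assumptions for illustration:

using System;

static class AppEnvironment
{
    // Mirrors the ASPNETCORE_ENVIRONMENT convention: if the variable is
    // missing, assume we are running in production.
    public static string Name =>
        Environment.GetEnvironmentVariable("APP_ENVIRONMENT") ?? "Production";

    public static string ServiceUrl
    {
        get
        {
            switch (Name)
            {
                case "Development": return "https://dev.example.com";
                case "Testing":     return "https://test.example.com";
                default:            return "https://www.example.com";
            }
        }
    }
}

The release pipeline's only job is then to set APP_ENVIRONMENT on the target host; the MSI itself stays identical across stages.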
We have 10K+ unit tests in a C# solution, all of which pass when run locally and in TFS.
Now we are setting up Jenkins for our solution stack, and around 250 unit tests fail consistently.
The same unit tests pass when I run them on the Jenkins server using Visual Studio and the command prompt (MSTest).
What do you think is the issue? Any leads to look at this issue will be helpful.
Edit 1:
I did some research and wasn't able to find anything, as the problem itself is a strange one. If something is unclear, please ask questions instead of down-voting.
Edit 2:
I was able to find the issue. It is with the unit test DLL's config file. When I executed MSTest on the server with the config file removed, I saw the same set of tests failing that fail with the Jenkins setup.
I guess we need to modify the steps configured in the Jenkins portal to load the unit test DLL's config file.
My guess would be that you have tests that are not actually unit tests but are integration tests (or worse), and they fail non-deterministically. Other than that, you're asking people to do the impossible without posted source code.
Either post source, or hire a consultant who knows about Jenkins IMO.
There might be issues with conditional compilation symbols (e.g. DEBUG vs. RELEASE code): in VS you normally run the tests on a DEBUG build, while the CI server runs a release build.
Also look for global state that is not cleaned up correctly. Threads that are still running after a test has seemingly finished can corrupt later tests, even when those later tests live in a different test DLL. You can sometimes detect this when test failures depend on the order in which the tests run.
Another issue often faced is a dependency on test data in files: the files may be missing in the virtualized environment where the test is actually run. Use the DeploymentItem attribute, as in the example below.
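For example, with MSTest (the path TestData\input.xml is illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ParserTests
{
    // DeploymentItem copies the data file into the test's deployment
    // directory, so the test also finds it on the build server and not
    // just on a dev machine where relative paths happen to line up.
    [TestMethod]
    [DeploymentItem(@"TestData\input.xml")]
    public void Parse_Input_Succeeds()
    {
        Assert.IsTrue(System.IO.File.Exists("input.xml"));
    }
}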
I had a similar problem with NCrunch. Check the "Platform target" on the Build tab in VS; it should match what Jenkins is configured for. For example, if "Platform target" is x64, Jenkins should build for x64 as well.
I have a solution that has 4 projects in it: 3 are dependencies for my tests, and the other contains just my tests.
DL
BI
MySite (web site)
MyTests
Some unit tests in the MyTests project reference namespaces in the MySite web site for some MVC controllers.
The question is how to get just the MyTests project to build and deploy with a TFS build. No matter what I try, the _PublishedWebsites folder on the TFS build machine always contains the web site and not the MyTests output; for some reason it thinks it is building the web site and not the tests. Any help would be appreciated, whether from the build definition or the solution perspective.
The purpose is to build the tests and distribute them to a server where they can be run (selectively) using the command line tool in the task manager. I cannot distribute them if I cannot get the solution to build properly.
Alright, so there are a few things. First, you need to make sure that the solution recognizes MyTests as the startup project and has the other projects as build dependencies. However, this likely won't solve your problem on its own; you'll probably have to create a custom build script or edit your solution/project files by hand. The problem with the latter approach is that if other people build MySite from this solution, editing the project file to exclude its output from the drop is going to cause problems for them.
My personal approach would be to write an MSBuild script that specifies the order in which to build the projects and which files you want in the drop. It's fairly straightforward, and it will probably be easy to specify the output you want (though this can be tedious if the build is messy to begin with or has excessive and convoluted dependencies).
Here's the outermost resource for MSBuild. I'd look it over and think about what the simplest solution is, but I wouldn't be surprised if you can just build every project using its project file and then add a single build step to clean up your output (see the sketch below).
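A minimal sketch of such a script, using the project names from the question (paths and the drop location are assumptions):

<!-- build.proj: build MyTests (which pulls in DL, BI and MySite via
     project references), then copy only the test output to the drop.
     Invoke with: msbuild build.proj -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Drop">
  <PropertyGroup>
    <Configuration Condition="'$(Configuration)' == ''">Release</Configuration>
    <DropDir>$(MSBuildProjectDirectory)\Drop</DropDir>
  </PropertyGroup>

  <Target Name="Build">
    <MSBuild Projects="MyTests\MyTests.csproj"
             Properties="Configuration=$(Configuration)" />
  </Target>

  <Target Name="Drop" DependsOnTargets="Build">
    <ItemGroup>
      <TestOutput Include="MyTests\bin\$(Configuration)\**\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(TestOutput)"
          DestinationFiles="@(TestOutput->'$(DropDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
  </Target>
</Project>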
Using Visual Studio and TFS, preferably with SpecFlow or standard unit tests.
I want devs to run ALL unit tests as a policy before check-in. If a unit test breaks, VS should stop them from checking in, just like when they run into a merge conflict.
I know there are post-build scripts that will do this, but really, if a unit test breaks I'd rather it didn't get into source control at all. Plus, the turnaround is rather slow when waiting for the full build, and then there's the bickering over who broke whose stuff.
So no, I want the unit tests to pass locally before a check-in. How would I do that? Yes, they can just hit the button, but I'd like to give them a bit more "incentive" than that.
It sounds like what you're after is a TFS gated check-in. This can ensure that the code builds, merges, and passes its tests before the check-in is committed. You can read more about it here:
An introduction to gated check-in
It's worth noting that it's a much slower process than CI builds, so depending on how many check-ins your developers are doing, you may be better off with a CI build that has 'Create Work Item on Failure' enabled and a project alert set up to notify the developer that they broke the build.
The TeamCity Visual Studio plugin supports pre-tested commits. I can't speak for TFS, however.
What approach would you take while developing a custom MSBuild Task in a test driven way?
Are there any available test harnesses suitable for test-driven development of a Microsoft.Build.Utilities.ToolTask extension?
I was considering using NUnit or MSTest and checking which files are generated and where they are placed, though I foresee this being a little clunky.
It's not really the TDD way, but have a look at the tool MSBuild Sidekicks.
This tool really helps us develop our nightly/daily builds (with database creation, structure compare, code analysis, test execution, ClickOnce deployment, ...).
You can analyse and debug the build types on the build machine and on the local development machine.
Build scripts are not really designed to be tested, but:
You can create some smoke tests for your build to see if everything went OK. If you are deploying a website, you can have smoke tests that check that:
The login page can be opened
The login page works (you can make a correct login and a failed one)
Core functionality works (once you have accessed the site, you can perform some basic action like opening a product page)
Those smoke tests should be callable from the command line, so you can run them from the AfterDropBuild task and see the result just after the build is created.
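A minimal shape for such a smoke test, here as an MSTest/Selenium test (the URL and element ids are placeholders, not from the original answer):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class LoginSmokeTests
{
    [TestMethod]
    public void LoginPage_Opens_And_Accepts_ValidCredentials()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://staging.example.com/login");

            // Element ids are placeholders for whatever the real page uses.
            driver.FindElement(By.Id("username")).SendKeys("smoke.user");
            driver.FindElement(By.Id("password")).SendKeys("not-a-real-password");
            driver.FindElement(By.Id("login")).Click();

            Assert.IsTrue(driver.Url.Contains("/home"),
                "Expected to land on the home page after a valid login.");
        }
    }
}

Because the suite is a normal test container, AfterDropBuild can run it with mstest /testcontainer:SmokeTests.dll (or vstest.console.exe SmokeTests.dll) and fail the build on a non-zero exit code.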