I'm seeking some advice about improving our release strategy for an in-house WPF application.
Currently, we are using ClickOnce to release new versions to our users.
The process is still manual, but we are looking into using DevOps pipelines to streamline it.
We have noticed that, as our application grows, the risk of introducing breaking changes that go unnoticed during the testing phase keeps increasing. We have a small team and tight deadlines, so the testing phase is limited.
So, to improve our way of working, we have been investigating canary releases. The idea is that we first release the new version to a set of key users and, once they report no issues, we launch it for everybody.
From an application perspective, I could make it work, but I'm not sure how we can make this work with our database.
Has anyone tried this approach with desktop applications already? Or are there better ways to do this kind of thing?
Any help is appreciated!
Kind Regards
Tim
We have implemented a two-step release pipeline for this same purpose. We have an internal UAT stage that pushes out the app internally, via Microsoft's AppCenter, and then a second full release stage which publishes to "production".
We're not using ClickOnce, but the principle is the same: you could have separate UAT and Production publish locations, one for each stage.
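On the database side, one pattern that pairs well with a canary/UAT stage is to keep schema changes backward compatible while old and new clients share the same database, and to have the client verify a minimum schema version at startup. A rough sketch of such a check, assuming a hypothetical SchemaVersion table and version numbering:

// Rough sketch: the client checks a hypothetical SchemaVersion table at startup
// so a canary build and the current build can share one database, as long as
// schema changes stay backward compatible (add columns/tables first, drop later).
using System;
using System.Data.SqlClient;

public static class SchemaGuard
{
    // The schema version this build was written against (hypothetical numbering).
    private const int RequiredSchemaVersion = 42;

    public static bool DatabaseIsCompatible(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT MAX(Version) FROM dbo.SchemaVersion", connection))
        {
            connection.Open();
            var current = (int)command.ExecuteScalar();

            // The database may be ahead of this build (expand phase of a change),
            // but must never be older than what the build expects.
            return current >= RequiredSchemaVersion;
        }
    }
}

The key point is that during the canary window the database has to serve both versions, so destructive changes are deferred until the canary has been promoted to everybody.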
I am new to the Winium world. I tried playing with Winium.Cruciatus, which seems to work fine, but not perfectly.
Hence, I am thinking of trying Winium.Desktop.Driver.exe, which I understand:
is a Selenium-compatible wrapper for Cruciatus, and
is required to be running separately during development.
However, I wanted to understand: once development is done and the solution is deployed in production, will Winium.Desktop.Driver.exe still be required to be running in advance for the solution to work?
My requirement:
To automate the installation of a piece of software on multiple domain-joined VMs from one single VM.
If (yes)
{
I think it may block the installation if it requires explicit Admin permission to run, as we cannot go to each machine to click 'Yes', which would defeat the purpose of automation.
My environments will be Windows Server 2012 R2, and most of the time these are more restricted than a normal Windows client such as Windows 10.
}
If (no)
{
Is there any specific advantage of using Winium.Desktop.Driver over developing only with the Cruciatus library?
}
Note: Could someone with high reputation please create a new tag, 'Winium'? It seems to be needed now, as we already have a few more questions on Winium.
Winium.Desktop is a testing tool; it is usually used to automate end-to-end or other functional testing scenarios. When it is used as a testing tool, it is only required during the development/testing phase, not in production.
But if you use Winium.Desktop not for testing but as an automation tool, for example to automate the installation of software (i.e. Winium.Desktop is a core part of the solution that runs the setup program and clicks Next), then you will need Winium.Desktop during the deployment phase.
The key advantage of Winium.Desktop over Cruciatus is that it provides a Selenium interface and works as a client-server system, which is useful for test automation: a client-server setup can be scaled, and the Selenium interface is well known, with a lot of tutorials on how to use it.
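For illustration, talking to Winium.Desktop from code looks much like ordinary Selenium. A minimal sketch, assuming Winium.Desktop.Driver.exe is already running locally on its default port (9999); the executable path and element name are placeholders:

// Minimal sketch: driving an application through Winium.Desktop.Driver.exe.
// Assumes the driver is already running on http://localhost:9999 (its default);
// the application path and the "Next" element name are placeholders.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

class WiniumSketch
{
    static void Main()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("app", @"C:\apps\MySetup.exe"); // placeholder path

        using (IWebDriver driver =
            new RemoteWebDriver(new Uri("http://localhost:9999"), capabilities))
        {
            // Locators depend on the actual UI under automation.
            driver.FindElement(By.Name("Next")).Click();
        }
    }
}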
If you just need to automate the installation of some software (i.e. you do not need to do actual testing using Winium), I would suggest looking at one of the IT infrastructure automation tools such as Ansible, Chef, etc.
Regarding admin rights, I suggest opening an issue at https://github.com/2gis/Winium.Desktop/issues describing your use case; there may be a way to run it without admin rights, or to grant access only once.
I am using TFS 2013 and MS Release Management vNext to provide some continuous deployment capability to the development team, but am struggling to find a feature (or a way of achieving the capability) related to downstream (or chained) builds.
What I would like to achieve is the ability to start one build upon the successful completion of another.
The basic premise is that we sometimes have services that need to be deployed before a web application is subsequently deployed, but not in all cases (otherwise the build itself would simply deploy all of these components every single time); in 80% of cases the web application will be deployed in isolation.
Has anyone achieved this in any other way than custom TFS build templates? Is there actually an undocumented feature somewhere in MS RM?
Thanks for your time in advance
If I understand correctly, you want to deploy to an environment based on some pre-condition being met. There is a nice story for that with the new release management pipeline; see the video below:
https://www.youtube.com/watch?v=OPuWRL4jORQ
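If the release pipeline route does not fit and you do end up chaining XAML builds yourself, one option is to queue the downstream build from code via the TFS 2013 client object model. A rough sketch; the collection URL, team project, and definition name are placeholders:

// Rough sketch (placeholders throughout): queue a second XAML build definition,
// e.g. from a small console step that runs only when the services changed.
using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

class DownstreamBuildTrigger
{
    static void Main()
    {
        var collection = new TfsTeamProjectCollection(
            new Uri("http://your-tfs:8080/tfs/DefaultCollection")); // placeholder URL

        var buildServer = collection.GetService<IBuildServer>();

        // "MyProject" and "Deploy-Services" are hypothetical names.
        var definition = buildServer.GetBuildDefinition("MyProject", "Deploy-Services");

        // Guard this call with whatever condition decides that the services
        // actually need deploying, so the web-only case skips it.
        buildServer.QueueBuild(definition.CreateBuildRequest());
    }
}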
I have a C# Windows Forms application which is slowing down, or even freezing, after a day or so when deployed on a particular customer's site. I would love to rewrite this inherited project from scratch, but for now the potential sources of the problem are rather widespread.
Can anyone suggest a way to perform some basic profiling (even "poor man's" profiling/break-and-sample) on a .NET application in a live environment without completely crippling its performance?
Given the severity of the slow-down in the application, I guess that just a few data points should be enough to find the cause.
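Not profiling as such, but one cheap thing that can run in the live environment alongside break-and-sample, with negligible overhead, is a periodic resource log, so a day-long slowdown can be correlated with memory or handle growth. A rough sketch; the log path and the one-minute interval are arbitrary choices:

// Rough sketch: sample a few process counters once a minute so a gradual
// slowdown can be lined up against memory, handle or thread growth.
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

public static class ResourceLog
{
    private static Timer _timer; // kept in a field so it lives as long as the process

    public static void Start(string logPath)
    {
        _timer = new Timer(_ =>
        {
            using (var process = Process.GetCurrentProcess())
            {
                File.AppendAllText(logPath, string.Format(
                    "{0:O}\tWorkingSet={1}\tHandles={2}\tThreads={3}\tGen2={4}{5}",
                    DateTime.Now,
                    process.WorkingSet64,
                    process.HandleCount,
                    process.Threads.Count,
                    GC.CollectionCount(2),
                    Environment.NewLine));
            }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }
}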
Please recommend a good continuous integration server that can build and integrate with the .NET stack as well as Visual C++.
Some recommendations I have got are
Jenkins
CruiseControl
TeamCity
Because of the polyglot nature of the project, which continuous integration solution would you recommend?
I have used all three over several years. Some of the answers below state that most of the work will be producing your own build scripts. This has been true in my experience as well. We use a combination of MSBuild and Powershell scripts for our build process, which can be run under just about any CI tool, so picking one comes down to what you're looking for in terms of customization, integration with other systems, performance, and ease of use.
Short answer:
I recommend Jenkins. So far it seems to be the best combination of the above qualities. It has a ton of plugins, some localization and is actively developed by the OSS community.
Long answer:
I started with CruiseControl.NET. It was easily configurable with a text file and I found it highly reliable. However, we moved away from it because ThoughtWorks was moving toward a paid product (Cruise, now Go) and future development was in question. A new team has since forked the project, but there has been little word about further development.
We moved to TeamCity, which is free and has a great Ajax-style UI. It is easy to set up and get going, and has a lot of features for distributed builds. We quit using TeamCity for several reasons. The server does a ton of stuff, and it was a bit of overkill for our basic needs. Even so, it was not very customizable (see time zones and notification contents) and we often found the administration UI confusing. That was all still okay, but we also had steadily worsening performance problems. We started with the standard HSQLDB out of the box, moved our installation to SQL Server when we started experiencing degraded performance, then had to quit using the server at all as performance continued to degrade over time. I'm not sure what the culprit was, but I couldn't find any cleanup that would explain the constantly worsening performance as the Tomcat web server fought with SQL Server for resources, even when there were no active builds running. I am sure it's my fault and I was missing some crucial setting or needed to feed the server more memory, but this is a shared utility box, we did not have these issues with CC.Net, and most of all, I am not a Java/Tomcat guy and don't have a lot of extra time to keep fighting these issues.
We've moved to Jenkins now. It seems to be working fine so far, but we've only been with it a short while. It was easy to set up, does not seem to take nearly as many resources as TeamCity, and has a ridiculous number of plugins. The only downside so far is that, like many OSS products, it does not seem to have the best documentation, and it does so much that I may be tweaking knobs for a while to get it set up the way we want.
Between CruiseControl and TeamCity, TeamCity is faster and easier to set up, but you may need to check on licensing for it. I can't speak to Jenkins, never having used it.
Jenkins has the big advantage of being very extensible (currently over 400 plugins), which allows you to combine it with a huge number of other tools, so it gives you complete freedom in your other tool choices. I recently read that this is one problem with TeamCity: you get locked into using its whole stack of tools (e.g. using SVN or Git as the version control system would not be possible).
I am using Jenkins myself for our projects, which contain both Java and C++ code, and I am very happy with the tool. We had CruiseControl before, and have not once regretted the switch.
I have tried both CruiseControl and Jenkins, and Jenkins impressed me with its very fast and user-friendly setup.
The three you list are all sensible choices, and the main problem will be producing the build script(s) needed to produce the build artifact(s). If you manage to make them do everything needed, changing CI systems shouldn't be a big issue.
After implementing all three in different shops, I'd choose all of the above. Pick one.
What advice or guidelines should developers follow (or avoid) during the early stages of developing a database-driven ASP.NET website, so that deployment is easy and efficient? In particular, how should the database be created during development so that it can easily be deployed later to my shared hosting server?
Edit 1
I'm sorry, but I still haven't received any detailed advice, especially about the database.
I mean, I'm creating my website database using SQL Server Express (I'm not sure which version; this is from the connection string: "AttachDbFilename="C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\DB.mdf""). I use the Database Diagrams option in the studio to create the foreign keys and the relations between tables.
So how can I copy this database structure and data in the future so it can be used on a server? I was thinking maybe I should do it all in SQL, save the script, and run it later on a database that I would create on the deployment server.
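Something along these lines is what I have in mind, using SQL Server Management Objects (SMO) to script both schema and data to one file; the server name, database name, and output path are just my local example values, and I have not verified this end to end:

// Rough sketch: script schema and data of a local Express database to a .sql
// file with SMO, so the script can be replayed on the hosting server.
using System.IO;
using Microsoft.SqlServer.Management.Sdk.Sfc;
using Microsoft.SqlServer.Management.Smo;

class ScriptDatabase
{
    static void Main()
    {
        var server = new Server(@".\SQLEXPRESS");   // example instance name
        var database = server.Databases["DB"];      // example database name

        var scripter = new Scripter(server);
        scripter.Options.ScriptSchema = true;
        scripter.Options.ScriptData = true;          // emit INSERT statements too
        scripter.Options.WithDependencies = true;    // order tables by FK dependencies

        var urns = new UrnCollection();
        foreach (Table table in database.Tables)
        {
            if (!table.IsSystemObject)
            {
                urns.Add(table.Urn);
            }
        }

        // EnumScript (rather than Script) must be used when ScriptData is true.
        File.WriteAllLines(@"C:\deploy\DB.sql", scripter.EnumScript(urns));
    }
}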
Just some thoughts! I hope to find some great ways to do it from developers who have deployed websites before!
If possible, you should practice an iterative development approach including continuous deployment. Even if you only deploy iteratively to a staging area, you will be exercising many of your processes. This gives you a chance to fail early and often, thereby making your final deployment smoother.
From a prioritisation perspective: the end goal of software development is delivered functionality, and if you can't deploy, you can't deliver any functionality.
For most projects - web or otherwise - the first story should be something like "As a user, I want to be able to install the product, so I can run it." This usually means the deployment mechanism is developed very early and then maintained as the codebase changes while additional stories are completed.
The deployment mechanism should be your way of delivering functionality to the customer for approval and testing.
It is very important to avoid getting to the end of a project and having to ask "okay, now how do we deploy it?"
Edited to add: Also make 100% certain you're aware of the licensing and distribution restrictions on any third-party components you're using. Pay particular attention to any Free code that may be covered by licences like the GPL. Check whether any commercial components you're using require royalties per deployment, or require special 'server' licences.