How do I Install Multiple Versions of MyProgram (same PC, VS2008)?

I have a big MDI Suite application that I've written for our company. It is installed on two Citrix servers, and is then accessed by hundreds of Windows terminals across the plant.
Before installing updates, I (the sole developer) test my new routines until I am satisfied they will perform as I expect. This generally works, but it has the group manager worried.
He wants me to install a BETA version that others can test in our workplace environment.
To do this, I would need to install my application twice on the Citrix servers so our employees can test it (right?); however, when I try to install the application again on the same PC (i.e. the Citrix server), Windows says it is already installed.
The VS2008 Setup and Deployment Installer has a Product Code. Should I just change this? What problems do I need to be aware of? Do I need to keep track of two Product Codes (one for testing, and one for release)?
How do I install 2 working versions of the same application on 1 PC?

The safest way is to create a new ProductCode, a new UpgradeCode and a new TARGETDIR, and to make sure there are NO shared directories (besides the system dirs). Make Windows Installer think it's a completely different product.
Changing the ProductCode but NOT the UpgradeCode may cause existing components to upgrade. When the install runs the FindRelatedProducts action, it will locate anything with a matching UpgradeCode and attempt to upgrade the matching components.
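To illustrate the mechanism (shown in WiX notation for readability - a VS2008 setup project exposes this only through its property grid, and the GUID here is a placeholder): FindRelatedProducts matches installed products against rows authored in the MSI Upgrade table, keyed by UpgradeCode. That is why two editions that must coexist need distinct UpgradeCodes.

    <Upgrade Id="PUT-UPGRADE-CODE-GUID-HERE">
      <!-- FindRelatedProducts detects any installed product whose
           UpgradeCode matches this Id and whose version falls in range. -->
      <UpgradeVersion Minimum="0.0.0.0" Maximum="99.0.0.0"
                      Property="PREVIOUSVERSIONSINSTALLED" />
    </Upgrade>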
I strongly agree with cutrisk - stage the whole thing somewhere else first (install the released version, upgrade to BETA, open it up for internal testing, etc.). Then carefully roll it out to customers.
Trust me, you do NOT want your installer to bring down someone else's production service(s) because your BETA "upgrades" some component and the whole system falls over.

Since you want to allow dual installs, that should work. I'd make a new "beta" installer that targets a different install directory to keep bleed-over to a minimum.
Here's the MSDN write-up on changing the Product Code for some more info.
Another possible angle that may be a lot simpler: is there a test/staging Citrix server you can host it on? That's how we roll out changes here. But that ultimately depends on your Citrix environment/servers in the farm...
Oh, and in relation to tracking the Product Code: in my experience (right or wrong), that isn't as important as making sure you keep your UpgradeCode in sync.

I'd give the beta .exe a different name: app_beta.exe. You can use the same setup to deploy it. You can also keep bumping the version with each beta or prod release - it's OK for the two streams to overlap:
prod: 2.0
beta: 2.1
beta: 2.2
prod update: 2.3
next beta: 2.4

InstallShield 2009 and later support "multiple instances"; see e.g.:
http://blog.deploymentengineering.com/2008/03/installshield-2009-beta-part-i.html

Related

Deploy C# application with Access Database, without Access installation

Context
For a bit more context: the company I work for made an Access VB.NET application that runs inside MS Access. They wanted to upgrade and create a C# WPF front end for it. This works great, except that when I installed the application and downloaded the [dummyDB].accdb (32-bit, 2016), the application immediately broke with the error:
The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine.
I then installed the 2010 AccessDatabaseEngine from Microsoft and a new error appeared:
This Database file requires a newer version of Access
Naturally I went back to Microsoft, downloaded and installed the 2016 x64 version (my system is 64-bit) and tried again.
Sadly, we were back to the first error. So I tried to install the 2016 32-bit AccessDatabaseEngine, but ran into the problem that I had already installed 64-bit Office products. (I had also uninstalled all the previous 32-bit AccessDatabaseEngines.)
We of course don't want users to reinstall all their Office products just to use our application.
Question
Is there a NuGet package of some sort that we could use so the customer can interact with the Access database out of the box?
If not, is there any way to make it so they wouldn't have to reinstall all of their Office products?
P.S. Separate 32-bit and 64-bit database files for users are also not an option, because different users, most likely on different systems, will have to access the same database file.
I don't know if I'm being completely oblivious here, but any help would be greatly appreciated.
Application info:
WPF application (.Net Framework 4.7.2)
Build target: AnyCPU
Is there a NuGet package of some sort that we could use so the customer can interact with the Access database out of the box?
No, not as far as I know.
If not, is there any way to make it so they wouldn't have to reinstall all of their Office products?
You could write a wrapper that handles both the 32-bit and 64-bit versions of the Access driver. I.e. if the installed driver has the same platform as your program, you can just continue as usual. If it is the other platform, you need to start a new process of the correct platform and delegate all the database work to it. If your process is AnyCPU and is run on 64-bit Windows, the actual platform will depend on the "Prefer 32-bit" flag. A sketch of this approach is below.
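To make the delegation idea concrete, here is a minimal sketch. It assumes the common case of a process running 64-bit with only the 32-bit ACE driver installed, and a hypothetical helper project, AccessWorker32.exe, compiled as x86, that runs the query and writes its result to stdout; the names and the string-based protocol are illustrative only.

    using System;
    using System.Data;
    using System.Data.OleDb;
    using System.Diagnostics;
    using System.Linq;

    static class AccessGateway
    {
        const string AceProvider = "Microsoft.ACE.OLEDB.12.0";

        // True if an ACE driver matching THIS process's bitness is registered.
        static bool AceAvailableInProcess()
        {
            DataTable providers = new OleDbEnumerator().GetElements();
            return providers.Rows.Cast<DataRow>()
                            .Any(r => Equals(r["SOURCES_NAME"], AceProvider));
        }

        public static string RunScalarQuery(string dbPath, string sql)
        {
            if (AceAvailableInProcess())
            {
                // Right bitness: talk to the database directly.
                using (var con = new OleDbConnection(
                    "Provider=" + AceProvider + ";Data Source=" + dbPath))
                using (var cmd = new OleDbCommand(sql, con))
                {
                    con.Open();
                    return Convert.ToString(cmd.ExecuteScalar());
                }
            }

            // Wrong bitness: delegate the work to the x86 helper process.
            var psi = new ProcessStartInfo("AccessWorker32.exe",
                "\"" + dbPath + "\" \"" + sql + "\"")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (var worker = Process.Start(psi))
            {
                string result = worker.StandardOutput.ReadToEnd();
                worker.WaitForExit();
                return result.Trim();
            }
        }
    }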
Another alternative is to build both 32-bit and 64-bit versions of your entire application and run the one matching the Office installation. This might be easier, but it may not work if you have other platform-specific dependencies.
This all assumes that Office is already installed; if it is not, you can just deploy the stand-alone ACE driver (i.e. the AccessDatabaseEngine) with the correct bitness. The database file itself is platform agnostic and can be accessed by either 32-bit or 64-bit processes. It is the driver that needs to be of the correct bitness.
Note that Access is kind of difficult to work with and rather error prone. If this product is intended to be long-lived, I would suggest migrating to something better sooner rather than later. SQLite is a popular embedded database engine that could be an alternative.

WiX - Suppress / Ignore version

I am developing an installer for our company's application using WiX.
One of the things I've noticed when testing is that running the same installer twice (after a successful install) causes the install to be aborted because the same version of the software already exists. I need behaviour that allows the same installer to run multiple times, and install the same application multiple times.
This is because when we deploy to our clients (which has been manual) we always deploy in both a Test Environment and a Production Environment. The code bases for the two environments are identical. Additionally some clients wish to have multiple production / test environments on the same machine.
Is there a way to suppress the version information for the installer, so that it will ignore any previous installations and install again? So far I've tried suppressing PublishProduct, but it does not give this behaviour. It appears that the Version attribute is also required (I cannot remove it).
In order to run both installations on the same machine you will need to:
Change the ProductCode to "*" so a new GUID is generated on every build.
Remove the UpgradeCode, or change it for each installer.
Change the installation path for each install.
A WiX sketch of these changes follows this list.
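Here is that sketch in WiX 3.x markup (the GUID, names and paths are placeholders):

    <Product Id="*"
             UpgradeCode="PUT-A-DIFFERENT-GUID-HERE-PER-EDITION"
             Name="MyApp (Test)"
             Version="1.0.0.0"
             Language="1033"
             Manufacturer="MyCompany">
      <Package InstallerVersion="200" Compressed="yes" />
      <!-- A different default directory per edition keeps the two
           installs from sharing files. -->
      <Directory Id="TARGETDIR" Name="SourceDir">
        <Directory Id="ProgramFilesFolder">
          <Directory Id="INSTALLFOLDER" Name="MyApp Test" />
        </Directory>
      </Directory>
    </Product>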
The easiest thing to do is to have a MajorUpgrade element in your install, sequence it afterInstallInitialize (so it uninstalls the older version and then installs the new one) and also set AllowSameVersionUpgrades to true. You will need to set the ProductCode and PackageCode to "*" so each build creates new GUIDs. Basically it's the ProductCode that says a product is installed, and you can't install the same product twice - it needs an upgrade.
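This is roughly what that looks like in WiX 3.x (the GUID and messages are placeholders; omitting Package/@Id makes WiX regenerate the PackageCode on every build):

    <Product Id="*" Name="MyApp" Version="1.2.0" Language="1033"
             Manufacturer="MyCompany" UpgradeCode="PUT-YOUR-STABLE-GUID-HERE">
      <Package InstallerVersion="200" Compressed="yes" />
      <MajorUpgrade Schedule="afterInstallInitialize"
                    AllowSameVersionUpgrades="yes"
                    DowngradeErrorMessage="A newer version of this product is already installed." />
    </Product>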

Setup of a Continuous-Integration server for C# web-applications and libraries

I have been testing Jenkins CI, and now it is time to build a server. What is the best way to go? There are plenty of options, and I don't know which one to choose:
a shared machine, with other servers running on it,
a virtual machine, inside a machine used for other servers,
a stand-alone machine,
multiple machines with a different OS on each, to test each platform?
(I have some web UI tests, based on Selenium.)
I would also like a suggestion for the OS to use. I use MSBuild, and that is probably only available on Windows... but maybe a Linux server with some sort of Mono-based build tool would be the best way to go.
I am not tied to Jenkins, but it seems to be the best. If you know of better options, let me know.
I need opinions, I need to know what possibilities exist, and if possible, to know what others are doing, and what experiences you have with various setups, so that I can make a solid decision.
Thanks!
First things first: my CI server is a VM running CruiseControl.NET. I don't use Jenkins so I can't really comment on it. From the looks of things, Jenkins is more well-developed than CC.NET.
On the virtual vs. physical question: ultimately, it doesn't really matter as far as CI is concerned. As long as the server is visible on the network and has enough resources to perform its function, the rest is just administration. Personally, I find the benefits of virtualization to be worth the extra effort: you can easily add resources, move its physical location, or stand up additional VMs to run a cluster. The benefits of virtualization are well known and everybody is doing it these days.
My CI server is on a VMware ESX server that has a ton of CPU and RAM to dish out, and it runs many other VMs as well. I have about 35 sites running through CI; probably 20 are hosted on the machine itself, and another 70 sites are set to build by manually triggering them through the CI dashboard. I have never had any relevant performance issues with it.
Your build server should ideally have the same setup as whatever machine(s) you are planning on deploying your code to. For websites, that would be the same OS as your production servers (probably Windows 2003 or 2008). For desktop applications, I would probably just pick the latest and greatest OS that you are targeting for support and can afford.
Using multiple machines with multiple OSes would only be relevant when you are building desktop applications that you are trying to support on multiple OSes. In this case, having multiple servers would be ideal, but I see that as being a lot of work to get set up. Personally, I would start simple, get everything running and start adding pieces on when they become truly necessary.
As I mentioned, I use CruiseControl.NET. It's been great so far and I am happy with it. Since it is written in .NET and you are using .NET, there are fewer moving parts that your server needs to get running (I see Jenkins is built on Java). Writing plugins/extensions should theoretically be easier since you already have .NET people in house. I've never written an extension for CC.NET so I can't say that with certainty, though I know it is possible. The downside is that the community is small and active development is slow.
Finally, I'll add that it will be A LOT of work to get started. It took me over 6 months to get my CI server ready for production, a few more to migrate all of our projects over to run through it and many more to train the rest of the developers on how to use it or work with it.
So, in summation:
Virtualization is good! (But it doesn't really matter.)
You should match your CI environment to whatever environment you are deploying to, if possible.
You had better be ready to commit for the long haul.
Continuous integration is great and you won't regret setting up a CI server. Whatever you choose, it will be better than the "cowboy coding" that used to go on :)
EDIT: Other answers are posting their process, so I guess I should have done that too! :)
My shop builds LAMP and .NET websites, so we needed something that could work effectively with both. We have CC.NET running as the core framework, but nearly all of the functionality is performed by custom NAnt scripts. We use NAnt because it 1) is .NET-based and has built-in .NET commands, and 2) makes it easy to perform the command-line operations that form the core of all of our build steps.
CC.NET listens to the SVN server and grabs updates as they are made. CC.NET checks them out and fires off the NAnt task that performs all the actual work. For .NET, that means MSTest to unit test and MSBuild to build and publish. PHP usually just moves the files straight to the destination environment. Then, if all steps were successful, Robocopy copies the files to the destination server, which was mapped as a network drive during a Group Policy startup script (Windows servers are mapped with net use and LAMP servers are mapped with WebDrive).
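For a flavor of what those NAnt targets look like, here is a trimmed sketch; the solution, test container and share names are made up:

    <?xml version="1.0"?>
    <project name="site" default="deploy">
      <target name="build">
        <exec program="msbuild.exe">
          <arg value="MySite.sln" />
          <arg value="/t:Rebuild" />
          <arg value="/p:Configuration=Release" />
        </exec>
      </target>
      <target name="test" depends="build">
        <exec program="mstest.exe">
          <arg value="/testcontainer:MySite.Tests.dll" />
        </exec>
      </target>
      <target name="deploy" depends="test">
        <!-- robocopy exit codes below 8 mean success, so don't fail on nonzero -->
        <exec program="robocopy.exe" failonerror="false">
          <arg value="build\output" />
          <arg value="\\webserver\wwwroot\mysite" />
          <arg value="/MIR" />
        </exec>
      </target>
    </project>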
We have development servers, staging/QA servers and production servers. Since we work in .NET and LAMP, we have one server per platform for each of these stages - 6 in total, all virtual. Our development servers are the only ones set to a continuous integration build. Staging and production are force-build only, along with some other SVN wizardry to prevent accidental deployments. We also build and unit test ActionScript using MXMLC, but that is rare for us.
Here's our setup. We have two virtual servers (a build server and a test server), and then two production servers.
The build server is running TeamCity (for CI) and FinalBuilder (for some of the more complex build jobs that involve editing XML files, changing config settings, installing and registering Windows services).
Most of our applications are ASP, ASP.NET or MVC web apps. TeamCity checks the code out of subversion automatically (triggered by a checkin), compiles anything that needs compiling, deploys the latest pages and DLLs to the IIS web server that's running on the build box.
All our sites have multiple host headers set up in IIS so the same site is listening as www.mysite.com.build, www.mysite.com.test, www.mysite.com. We've set up a DNS wildcard alias on our domain controller, so that *.build points to the build server, *.test points to the test server, and so on.
This means that as soon as code has been committed and built by TeamCity, everyone in the company can see it on www.whatever.com.build. (The wildcard records are sketched below.)
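In zone-file terms, those wildcard aliases amount to something like this (the addresses are hypothetical; on an Active Directory domain controller you would add the wildcard records through the DNS snap-in):

    *.build.  IN  A  10.0.0.21
    *.test.   IN  A  10.0.0.22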
There's then another TeamCity job that uses msdeploy.exe to push individual websites - including their virtual apps and subfolders - from the build server to the test server.
At each stage, TeamCity runs any unit tests that are part of the project, and also runs a separate project that does HTTP requests to various key URLs on our site and makes sure everything is up, running and responding.
Finally, there's a "go-live" task that msdeploys the ENTIRE server from test to live; this means the complete server configuration is completely controlled by TeamCity, which discourages making config changes on live servers since your changes will get overwritten during the next deployment.
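As a rough illustration, that whole-server sync is what Web Deploy's webServer provider does (the server names are hypothetical):

    msdeploy.exe -verb:sync ^
        -source:webServer,computerName=TESTSERVER ^
        -dest:webServer,computerName=LIVESERVER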
TeamCity is fantastic - we've now licensed it because we needed > 20 projects (and LDAP authentication) but the free version served us well for years, and it's an absolutely awesome piece of software. FinalBuilder is expensive but very, very easy to use - if you're cash-rich and time-poor, go for it; if you've got more time than money, stick to Nant or msbuild and write your own steps for editing web.config files, etc.
EDIT: Another detail I missed - we have a test and a live database server. Coders' workstations and the *.build servers are all set up to use the test database; the *.test and live servers talk to live data. We use SQL Compare to (manually) push schema changes from the test SQL server to the live SQL server, but normally TeamCity just tweaks the config files between build and test to toggle the database connection string.
I would consider best practice to be:
A separate build server (it doesn't matter if it is virtual or not)
The build server builds the code on check-in
A separate deployment server for testing (again, virtual or not doesn't matter)
Have your build deploy to the test server (you can have a separate build for this, i.e. a CI build and a build-and-deploy build for testing)
Any unit or integration tests I would run on the build server; manual testing is done on the test server
I hope this helps.
My current setup and best practice:
Development projects and environment:
C++ and C# applications, including some web-based C# applications.
Windows applications.
Subversion.
~30 developers worldwide accessing centralized build servers.
Developers commit to the trunk of the repositories.
Build scripts:
We employ Visual Build Professional (VBP, www.kinook.com) as our corporate build tool.
Build scripts are designed hierarchically into layers, which perform different functions and can be reused.
Build script design:
Build machine layer - checks for the required build tools and checks out source code from the SVN trunk.
SVN layer - performs branching, versioning, committing and switching back to trunk.
Build product layer - a build script that builds N sub-build scripts, where 1 sub-build script = 1 project (not a VS project). (Developer friendly)
Sub-build script layer - defines a collection of C#/C++ solutions to be built, along with their build-order dependencies. Uses MSBuild /t:Rebuild to build solutions and devenv to build special projects. (Developer friendly)
Daily builds:
Run layers 1 to 4 of the build script design.
Continuous integration (CI) builds:
Run layers 3 and 4 of the build script design.
Basic build environment: (our more complex projects are built upon these principles)
A daily build server separate from the continuous integration build server, plus separate test servers for testing after each successful CI build. (1 x daily build server, 1 x CI build server, N x test servers)
VMs running Windows Server with multiple CPUs as build machines. (For MSBuild /m)
Other Windows OSes as test machines.
CruiseControl.NET (CCNet) installed on all build/test machines.
Daily builds are controlled by CCNet and run at a scheduled time each day.
CI builds are triggered by CCNet upon commits.
Build behavior:
The daily build starts at midnight and publishes the build output to a network share, e.g. \\share\daily_build. (Yes, we still use shared drives.) :)
Upon a successful daily build, a CI build is automatically triggered to clean the working copy, check out the source code and build from scratch. (MSBuild /t:Build)
The CI build then copies the built binary output to a network share, e.g. \\share\ci_build. (Notice: 2 different folders, 1 for the daily build, 1 for the CI build.)
Development environment:
Developers execute a batch file that pulls the up-to-date CI build output onto their development machines.
Developers and project managers rely on the CI build status and have the CCNet Tray application installed to see the outcome of builds immediately.
Developers sometimes hold lotteries to see who breaks the build, and punish the culprit by making him/her bottoms-up a beer on Fridays. :D
Hope this helps.
I would suggest a separate physical build server for one simple reason... it gets buy-in from management.
Once they have actually had to fork out money, they become a lot more interested in how the continuous integration is going.

Strategy for developing and testing an SDK that will reside in the GAC

I am developing an SDK for internal use at our company. It will not be deployed (in SDK form, it will deploy as a runtime included with our products) outside of the company. Other development groups will use this SDK to develop products and will get the SDK via setup (they will not pull source or binaries from source control). As part of the setup, the SDK assemblies will be put on the target machine and they will also be installed in the GAC. When a product is deployed, the SDK's "runtime" msm will be used to install the SDK's assemblies in the GAC.
So, each developer will install the SDK on their machine. When they want to add a reference, they will browse to the install location of the SDK (or get it via the .NET tab on the Add References dialog if we decide to register the assemblies). When they run the product they are developing, the assemblies will be resolved from the GAC.
That all seems pretty reasonable.
My question is about the best way for me, as the SDK developer, to work. I will primarily be working on the SDK, so in addition to writing code for the SDK I will be writing test code, test applications, samples, etc. Is it better to write the tests against the "installed" SDK, i.e. reference the assemblies from their installed location and make sure they are in the GAC, so that when the tests run they resolve from the GAC just like they will in real life? If I work this way, then whenever I change the SDK I will need to make sure that the modified assembly is placed in the GAC.
In addition to working on the SDK, I may also contribute to actual product features which might, in turn, utilize functionality in the SDK. Again, it seems that I should do my work against the "installed" SDK so that I am using the same version as everyone else.
Maybe I am overcomplicating this, but I feel a little confused over the whole issue of managing the work being done (by me) locally on the SDK, running/testing against the "as deployed" assemblies (GAC), and how/if to transition between the two. Part of my problem is that I have a good amount of experience in application development working on "big" projects where I have not had to deal with these kinds of issues (deployment, build process, etc.). That is, I have always been a consumer of any internally developed SDKs, not a producer (or producer/consumer). I have also only recently transitioned from C++/COM/VB6 to .NET development. For what it's worth, I will be developing primarily in C# and will be developing (or contributing to) class libraries and WCF services.
I did find this link from here at SO about testing issues when working with GAC deployed assemblies:
Testing code in GAC deployed assemblies
But I'm not sure that it helped me that much.
Anyway, thanks for any tips or ideas that anyone is able to share.
You're overcomplicating matters. There is no functional difference between loading assemblies from the local app bin directory vs loading from the GAC. For unit testing, go with the simplest and fastest solution: just run the tests referencing the SDK assemblies that were copied into the test app's local bin directory by the build process.
You should have a different testing step that exercises loading an application that references your SDKs that reside in the GAC to make sure you don't have any signature issues, but this is more of a system-wide integration test that should be run before release and after any install configuration changes. Since the chances of screwing up your GAC installation are relatively small, it doesn't need to be monitored all the time, IMO.
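If you want a cheap way to verify in that integration pass that resolution really happened from the GAC, something like the following works on .NET Framework (MyCompany.Sdk.Client is a hypothetical SDK type):

    using System;
    using System.Reflection;

    class GacSmokeTest
    {
        static void Main()
        {
            // Force the SDK assembly to load, then report where it came from.
            Assembly sdk = typeof(MyCompany.Sdk.Client).Assembly;
            Console.WriteLine("Loaded from: " + sdk.Location);
            Console.WriteLine("In GAC:      " + sdk.GlobalAssemblyCache);
        }
    }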
The fewer install prerequisites you place on the dev environment, the less time it will take for each dev to get set up on a new machine. Keeping a clean and simple dev environment is good for general dev sanity, but it's particularly important when you have multiple devs who each work with multiple VMs for development and testing.

.NET (WinForms, not ASP) multi-server deployment

I have a small .NET WinForms application and a couple of Linux servers: DEV and CL1, CL2, ..., CLN (DEV is the development server and the CL* servers belong to our clients; they sit in private networks and act as production servers).
I want an update mechanism such that:
(1) I develop a new version and publish it to DEV
(2) users of the DEV server install the latest version from DEV
(3) users of CL2 (employees of client 2) install the stable version directly from CL2
(4) the application checks for updates using the server it was installed from (so, if it was installed from CL2, it should check CL2 for updates)
(5) I should be able to propagate an update to a selected CL server (using just file copy and maybe sed; not republishing) if I want to (and if I don't, that CL server will keep the old version until I update it manually)
I tried to use ClickOnce, but it looks like it meets only the first two requirements.
What should I do?
ClickOnce should handle 1-4, to be honest. All that's needed is for each site you want to deploy/update from to have its own publish, which, looking at your requirements, is a reasonable thing to do.
What you could then do to make 5 workable is create an automated process to re-publish the files. This could perform a publish and then upload the output to the correct server.
Remember that ClickOnce needs a new manifest per version, and a new version requires a publish, so I'm not sure that you'll get around 5 with a simple file replacement.
Kyle is right. But for the 5th point, you just need to copy the deployment, use mage to modify the installation URL so it points at the new server, and then re-sign the manifests.
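For example, something along these lines (the paths, URL and certificate are hypothetical; mage.exe ships with the Windows SDK):

    mage -Update MyApp.application -ProviderUrl "http://cl2-server/deploy/MyApp.application"
    mage -Sign MyApp.application -CertFile mycert.pfx -Password secret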
I support an app that we deploy to DEV, QA and PROD servers. The way I handled this is:
I created a cmd file that makes command-line calls to MSBuild. It builds the app once for each server with the appropriate URLs and switches. I give my DEV and QA builds a different AssemblyName; that way I can run all 3 environments side by side. This way my build process is automated and I don't have to publish at all.
Here's an article that describes the parameters you can use.
http://msdn2.microsoft.com/en-us/library/ms165431(VS.80).aspx
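A hedged sketch of one such call (the property values are hypothetical; the article above lists the real publishing parameters):

    msbuild MyApp.csproj /target:Publish ^
        /property:Configuration=Release ^
        /property:AssemblyName=MyAppDev ^
        /property:InstallUrl=http://devserver/deploy/ ^
        /property:ApplicationVersion=1.0.0.7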
@Kyle,
For the above solution, can the different versions run side by side, or do you get errors indicating the app is already installed?
