One of my tasks in a stage of my Release pipeline in Azure DevOps downloads some artifacts that are placed in $(System.ArtifactsDirectory) folder. Those artifacts are input files for my tests to be executed.
I am using the new Configuration approach from Microsoft and already have a .runsettings file where I specify some environment variables that override values from my appsettings.json file.
How can I access that particular variable from the tests?
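For reference, such a variable is typically declared under RunConfiguration in the .runsettings file and then read in test code with Environment.GetEnvironmentVariable. The variable name below is illustrative, and note that the pipeline macro is only expanded if the task substitutes it (it is not expanded by vstest itself):

```xml
<RunSettings>
  <RunConfiguration>
    <EnvironmentVariables>
      <!-- Illustrative name: points the tests at the downloaded artifacts -->
      <ARTIFACTS_DIR>$(System.ArtifactsDirectory)</ARTIFACTS_DIR>
    </EnvironmentVariables>
  </RunConfiguration>
</RunSettings>
```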
I'm working on a project that has these two files:
Web.env1.config
Web.env2.config
Those files represent the two different environments to which my application is published.
We're using TFS Build 2015 to publish our applications, and whenever I have to publish the application, I have to manually copy the content of one of those files and paste it into the Web.config file.
So, I wanted to know if it's possible to do this process automatically without having to do it manually.
Here's what I do.
If all of your settings are located in config sections supported by transforms, you can create transforms and apply them during a Web Deploy task in the release pipeline with this setting:
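A transform file sits next to Web.config and rewrites matching elements at deploy time. A minimal sketch (the connection string name and value are illustrative):

```xml
<!-- Web.Release.config: replaces the connection string for that environment -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="Default"
         connectionString="Server=prod-sql;Database=App;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```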
If I have environment-specific settings in sections not supported by transforms, I maintain a Web.config.template file side-by-side with Web.config. The environment-specific values are tokens that will be replaced in the release pipeline. For instance, instead of a connection string value it might be "~connstr~". Web.config is just used during debugging.
In the release pipeline I have a replace tokens task that pulls pipeline variables and replaces the ~tokens~ with values. The environment-specific values are scoped pipeline variables:
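The template looks like an ordinary config file with the tokens in place of real values (names here are illustrative):

```xml
<!-- web.config.template: ~tokens~ are replaced per environment by the pipeline -->
<connectionStrings>
  <add name="Default" connectionString="~connstr~" />
</connectionStrings>
```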
After the Web.config.template file has had its tokens replaced with environment-specific values, I have a command line task that renames it to Web.config. From there the pipeline deploys the files.
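The rename step amounts to something like this (a sketch of the equivalent command line, shown with POSIX mv; on a Windows agent the task would run ren or move instead):

```shell
# Stand-in for the template after token replacement (contents illustrative)
echo '<configuration></configuration>' > web.config.template
# Rename it so the deploy step picks up a normal web.config
mv web.config.template web.config
```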
Another option is to create a Secure File for each environment and bring it into the build or release pipeline.
Pipelines are very powerful and you can get creative with solutions.
I am using Jenkins to build my application, which I then need to publish to S3. One of the outputs is an installer .exe file that I then provide as a link to users of the application. Because of this, I need the installer file to always be in the same place for every build. However, no matter how I set up my S3 publishing post-build step, the artifacts are uploaded to a separate folder for every build, like so:
Is there a way for me to set up the publish to go to the root of the directory/bucket every time, overwriting the old file if necessary? This would eliminate the jobs/TestTrayApp/{buildnumber} directories. This is my S3 publish post-build step setup:
I'm not sure I fully understand exactly what you want, but from what I can gather, you have an .exe file that needs to be in a particular location before publishing to S3?
Why not add another post-build action, before your actual post-build step that publishes to S3, to copy the .exe to the destination location, and then initiate the publish?
Won't that be easier? :)
You simply need another build step that copies the published .exe artifact to the permanent location for users to download.
I think you are confusing "publishing an app to your production environment" with publishing build artifacts. I believe the intent of the Jenkins S3 publish plugin is not to publish your final production release, but to serve as a build step that archives build artifacts ("archive" may mean the same as "publish" in this context regarding your build artifacts). See this article for why I think the Jenkins S3 publish plugin is not meant to be used to publish the final release version of your application.
Use a Jenkins Pipeline, or add another build step to your Freestyle job, to copy the .exe from the S3 build artifact archive to your final permanent storage container for users to download from your website/app.
Unselect both the Manage artifacts and Flatten directories checkboxes, and for the Source, give the name of the directory containing your executable, followed by a /. For example:
This way your latest executable will always be placed in bucket/blahblahblah/executable/executable.exe location no matter how many times your job runs.
When coding parts of an application that require access to a directory, I often write the code so that if the directory is specified using a relative path, the system assumes that the intent is to use a directory subordinate to the one that the application (executable) is running in (or was loaded from).
But when I run unit tests which exercise this code, it turns out that the application path is a completely different directory, because the "application" is the test runner harness, and it is running in a separate directory.
For writes this is no problem, but if the code is attempting to read a file configured in the solution as [Content] from a directory that the compiler copied as part of the build process, this file is of course not in the test runner's version of the folder at all.
This can be resolved for files that are physically in the test project's solution space (by adding additional instructions to the test runner configuration), but as far as I know, if the file is specified in the solution as a link to a physical file located outside the project space, I have yet to find a simple solution to the problem.
What solutions to this issue exist? and what are the pros/cons of each one?
I'm trying to configure Visual Studio's built-in test runner (MSTest) to copy some static files to the output directory, but am so far unsuccessful.
The static files reside in a common test project which all the other test projects reference. The static files have
Build Action: Content
Copy to Output Directory: Copy always
When each test project is built, the static files are copied to the \bin\Debug\ directory, but when the tests run and MSTest copies everything to a \TestResults\ directory, these static files are not copied.
Copying the files to the output directory
According to this MSDN page,
To make additional files available during a test, such as test data or
configuration files, incorporate the files into your project and set
the Copy to Output property. If that is not practical, deploy
additional files or directories by using the DeploymentItemAttribute
on test classes or methods.
Taking these two suggestions in turn, the first I've tried without success... or must the static files be in each of the test projects? Is it not enough to have them in a referenced project?
And as for the DeploymentItemAttribute, I really don't want to have to put this above every test class in every test project.
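For anyone unfamiliar with the attribute, the per-class usage being avoided looks roughly like this (the file name is illustrative):

```csharp
[TestClass]
public class ImportTests
{
    // Copies the file from the build output into the per-run
    // deployment directory before the tests in this class execute.
    [DeploymentItem(@"TestData\static-input.xml")]
    [TestMethod]
    public void ReadsStaticFile()
    {
        Assert.IsTrue(System.IO.File.Exists("static-input.xml"));
    }
}
```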
Running the tests in their original location
Another option would be to run the tests in their original location rather than deployed to the \TestResults\ directory, which I'd be happy to do if I could figure out how to configure such.
This MSDN page explains how to configure how the tests are run, though despite being the only page where the *.runsettings file is described, it's woefully short on detail. For example, it provides a sample wherein the <TestAdaptersPaths> element contains a dummy value, but doesn't explain what it can be replaced with.
And the *.runsettings documentation also doesn't explain whether it's even possible to run the tests in the original output directory. This is possible with the older *.testsettings file, but the documentation also says that a *.testsettings file may only be used in legacy mode, which is slow and not as robust; and although I may specify a *.testsettings file in the *.runsettings file, I would then have to force the tests to run in legacy mode.
So it appears to be the case that the older version permitted a configuration setting which indicated that the tests should not be deployed to another directory, but that the new version doesn't support this (or doesn't document that it supports it).
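For what it's worth, the newer format does expose a deployment switch under the MSTest element, which is worth trying (behavior may vary by MSTest version):

```xml
<RunSettings>
  <MSTest>
    <!-- When False, tests run from the build output directory
         instead of being copied to a TestResults deployment folder. -->
    <DeploymentEnabled>False</DeploymentEnabled>
  </MSTest>
</RunSettings>
```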
Using xcopy
Here is another MSDN page where it is suggested to define a post-build task to xcopy the files into place. I've tried this as well, and the files never appear in the deployment directory.
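The suggested post-build event has this shape (paths are illustrative; $(SolutionDir) and $(TargetDir) are standard Visual Studio macros):

```
xcopy /Y /I "$(SolutionDir)Common.Tests\TestData\*.*" "$(TargetDir)TestData\"
```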
So in summary, everything I've tried has proven unsatisfactory. Could someone please explain how I can tell my test projects to copy static files from a referenced project into the deployment output directory (preferable), or how I can configure the tests to run in their original location?
Update
I'll also add that if an individual test method is run, the test runs in the standard output directory (the one specified in the project's properties; the default is "bin\Debug\"). In that case the test runs as expected, because the static files are present in the standard output directory. If multiple tests are run (by choosing "Run all tests" at the project or class level), then the tests are run from a uniquely-named deployment directory under "TestResults\". It is in this latter case that the static files are not copied.
Furthermore, if I decorate a test class with [DeploymentItem("...", "...")], then the specified file is copied; but if I decorate a common class (used for DB setup, and also decorated with [TestClass]) with the same attribute, then the specified file is not copied.
I have an interesting dilemma to which, I hope, someone may have an answer. My company has a very complex web application which we deliver to our clients via an InstallShield multi-instance installer package. The application is written in ASP.NET MVC3 (using the Web Forms View Engine), C# 4.0, and Silverlight. It is the Silverlight component of the app with which we are having problems during installation.
Many of our clients wish to install our web app in what we refer to as “mixed-binding mode”. This may not be the correct terminology. What I mean to say is that the client will install the web app on a web server that is internal to the client’s firewalls and expose it to the web via a proxy server in the DMZ. In this way, everything internal to the firewall will resolve as HTTP, while every external request will resolve as HTTPS.
This does not pose a problem to most of the web application during installation because it will be running inside the firewall and will ALWAYS be HTTP when the app is installed in “mixed-binding mode”. This is not the case with the Silverlight component. Because it runs in the end-user’s browser process space, it is external to the proxy and firewall and must resolve via HTTPS.
The Silverlight files are in the XAP file. Within the XAP file, we have a configuration file (in XML format) that must be modified based on the binding mode of the web application (HTTP, HTTPS, or MIXED). As we all know, XAP files are simply Zip files, so theoretically all that is needed to edit a file contained in the XAP is to rename it from “.xap” to “.zip” and use any zip compatible utility or library component to extract the configuration file, edit it by some manual or automatic means, and then use the same zip component to re-archive the modified file back into the XAP file.
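If the custom action can target .NET 4.5 or later, the built-in System.IO.Compression types can make this edit in place, with no rename and no external library. A sketch under that assumption (the path, entry name, and replacement shown are illustrative):

```csharp
using System.IO;
using System.IO.Compression;

// Open the XAP (a standard zip) for update and rewrite the config entry in place.
using (ZipArchive xap = ZipFile.Open(@"C:\inetpub\App\ClientBin\App.xap",
                                     ZipArchiveMode.Update))
{
    ZipArchiveEntry entry = xap.GetEntry("ServiceReferences.ClientConfig");
    string config;
    using (var reader = new StreamReader(entry.Open()))
        config = reader.ReadToEnd();

    // Illustrative edit: switch the endpoints to HTTPS for mixed-binding mode.
    config = config.Replace("http://", "https://");

    // Replace the old entry with the edited content.
    entry.Delete();
    ZipArchiveEntry updated = xap.CreateEntry("ServiceReferences.ClientConfig");
    using (var writer = new StreamWriter(updated.Open()))
        writer.Write(config);
}
```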
“Therein lies the rub!” This must take place automatically within the InstallShield Basic MSI process. At first we tried using an InstallShield managed-code custom action with the DotNetZip library; however, DotNetZip seems to have an incompatibility with InstallShield. Every time InstallShield launched the custom action, the installer would throw an InstallShield 1603 error the moment the custom action tried to execute the first DotNetZip command. (Yes, we did rename the XAP file from “.xap” to “.zip” before trying to unzip it.) We encountered the same problem with SharpZipLib.
Next, we dropped down to a lower level algorithm using the Shell32 library. We knew there would be timing considerations to take into account using this method since Shell32 processes run in a separate thread so we built wait states into the process, as shown below:
//Extract the config file from the .zip archive
Shell32.Shell shell = new Shell32.Shell();
string extractedFolderPath = tempZipFileName.Substring(0, tempZipFileName.Length - 4) + "\\";
if (Directory.Exists(extractedFolderPath))
Directory.Delete(extractedFolderPath, true);
Directory.CreateDirectory(extractedFolderPath);
//Folder name will have the file extension removed.
Shell32.Folder output = shell.NameSpace(extractedFolderPath);
Shell32.Folder input = shell.NameSpace(tempZipFileName);
foreach (FolderItem F in input.Items())
{
string name = F.Name;
//We only extract the config file
if (name.Equals(CFG_FILE))
{
output.MoveHere(F, 4);
System.Threading.Thread.Sleep(LONG_TIMEOUT);
}
}
//You have to sleep some time here,
// because the shell32 runs in a separate thread.
System.Threading.Thread.Sleep(MED_TIMEOUT);
This seemed to work the “majority” of the time.
NOTE the use of the word “majority” above. The process works inconsistently on a fast server, whereas it works consistently on a slower machine. Don’t ask me why; my math professors always taught me that a second is a second (in this Universe, at least).
We have even tried this with a PowerShell script. In this case, we are able to extract the configuration file and rename it in the target folder to which it was extracted; however, everything we have tried in order to copy the renamed file back into the ZIP archive has failed. This is very unusual, since we added a second call to push the target folder and all its children into the ZIP archive, and that works just fine (see the code below)!
#Extract_XAP.ps1
#===============
$shell=new-object -com shell.application
#Establish Current Location as Current Location
$CurrentLocation=get-location
#Determine the path of the $CurrentLocation
$CurrentPath=$CurrentLocation.path
#Create a namespace using the $CurrentPath
$Location=$shell.namespace($CurrentPath)
#Rename the XAP file to a ZIP file so that extraction will work.
Get-ChildItem *.xap|Rename-Item -NewName {
$_.Name -replace "xap","zip"
}
#Create a ChildItem object using the ZIP file.
$ZipFile = get-childitem SunGard.OmniWA.WebAdmin.zip
#Ensure the ZIP file is not read-only
(dir $ZipFile).IsReadOnly = $false
#Create a shell namespace for the ZIP Folder within the ZIP File.
$ZipFolder = $shell.namespace($ZipFile.fullname)
#Iterate through the ZIP items in the ZIP folder
foreach ($CurFile in $ZipFolder.items())
{
if ($CurFile.name -eq "ServiceReferences.ClientConfig")
{
#Current item's name matches the file we want to extract,
# so copy it from the ZIP Folder to the Current $Location.
$Location.Copyhere($CurFile)
#Wait briefly to ensure the process stays in sync
Start-sleep -milliseconds 1000
#Change the extension of the extracted file, so we can tell it from the original when we
# insert it back to the ZIP Folder.
Get-ChildItem *.ClientConfig|Rename-Item -NewName {
$_.Name -replace "ClientConfig","ClientConfig_Test"
}
#Create an object containing the extracted file.
$ClientConfigFile = get-childitem ServiceReferences.ClientConfig_Test
#Wait briefly to ensure the process stays in sync
Start-sleep -milliseconds 2000
#No need to search further
break
}
}
#For some reason this WILL not copy the renamed file back
# into the ZipFolder, even though it should (see below)!
$ZipFolder.CopyHere($ClientConfigFile)
#For some reason copying the entire current directory
# works just fine! Go figure!
$ZipFolder.CopyHere($Location)
#I hoped this would work, but it doesn't????
$ZipFolder.Close
write-output "renaming to the original extension"
#Rename ZIP file back to a XAP file, so Silverlight can use it.
Get-ChildItem *.zip|Rename-Item -NewName {
$_.Name -replace "zip","xap"
}
This shouldn’t be this hard to do. Somebody has to have done something like this before! I mean, any complex web installation is going to need an automated install process and will eventually have to be installed using proxies and DMZs.
Any assistance would be appreciated. Here are the specifications we must follow:
Use InstallShield Basic MSI as the installer.
Any InstallShield Custom Action must be written in C#.
The XAP file must be modified after the file has been copied to the server by the installer.
Management wants me to stay away from BSD and GNU "free-ware" like
Info-Zip, 7Zip, et al.
Ken Jinks
Senior Web Developer
Sungard Omni Wealth Services
Birmingham AL
Option 2
Address the 1603 error.
A note I found on this page says:
Verify the correct prototype syntax for entry-point function
declarations of InstallScript custom actions, if any have been
included in the setup.
An example of a correct prototype is: export prototype MyFunction(HWND);
Note: HWND must be specifically referenced
in the prototype. All InstallScript functions called via custom
actions must accept a HWND parameter. This parameter is a handle to
the Microsoft Windows Installer (MSI) database.
Option 1 (not an option)
Putting on my pragmatic hat here:
If you only have the two scenarios, e.g. mixed-binding and "normal", I would suggest simply generating two XAP files in your project and installing only one, or (my preference) installing both and dynamically deciding which XAP filename to inject into the hosting page at runtime (if you can identify the running mode in your MVC web app).
This keeps the installer simple, and it does not add much work to keep a few shared main files in each of two Silverlight projects connected to the same web project.
You can use a shared T4 (.tt) include file to generate the configs from a mostly shared config template.
OK, I have finally figured it out using PowerShell. As it turns out, I could not get any of the CopyHere Flags parameter settings to work, and CopyHere will not overwrite a file in a zip folder object unless it has been told to do so by the Flags parameter.
The simplest solution was to change all of the CopyHere methods to MoveHere methods. That way I never had to overwrite a file, and I also never had to delete my working file, because it was moved back into the Zip folder.
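A minimal sketch of the working pattern, using the same variable names as the script above:

```powershell
# Move (not copy) the config out of the zip folder, edit it, then move it back.
$Location.MoveHere($CurFile)              # extract: removes it from the archive
# ... edit ServiceReferences.ClientConfig here ...
$ZipFolder.MoveHere($ClientConfigFile)    # re-insert: no overwrite needed
Start-Sleep -Milliseconds 2000            # Shell32 runs asynchronously
```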
Thanks for all of your help.
Ken Jinks