I have created a C# web service and a web client, and successfully debugged the service with the ASP.NET Development Server (the thing that gets activated when you just press F5). All fine. Now I need a web service that is almost the same as the previous one but differs in a few lines of code. For this purpose I created two new configurations, DebugNew and ReleaseNew, and set the output directory to binNew (instead of the default "bin"). The problem is that the original web service is executed in the debugger instead of the new one; the debugger is unaware of the binNew folder. How do I set up the environment so that the new web service starts when the DebugNew configuration is active?
As far as I know, web applications will only run out of the bin folder. If somebody knows how to change that, I would be happy to call myself wrong in order to learn that trick myself.
Assuming that I am actually correct for once there, you could write a post-build script that checks which build configuration is active. If it's either DebugNew or ReleaseNew, copy the contents of binNew to bin.
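A sketch of such a post-build event (Project Properties → Build Events), using Visual Studio's built-in macros; it assumes the new configurations write their output to binNew, so $(TargetDir) points there:

if "$(ConfigurationName)"=="DebugNew" xcopy /Y /E "$(TargetDir)*" "$(ProjectDir)bin\"
if "$(ConfigurationName)"=="ReleaseNew" xcopy /Y /E "$(TargetDir)*" "$(ProjectDir)bin\"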
If there are really only a few lines of code that differ, though, I question whether putting in a configuration setting and adjusting the code accordingly isn't a better way to go. But I certainly don't know all the facts. Just a thought I had.
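Something along these lines (a sketch; the key name UseNewServiceBehavior is made up for illustration):

using System;
using System.Configuration;

// web.config: <appSettings><add key="UseNewServiceBehavior" value="true" /></appSettings>
bool useNew = string.Equals(
    ConfigurationManager.AppSettings["UseNewServiceBehavior"],
    "true", StringComparison.OrdinalIgnoreCase);

if (useNew)
{
    // ...the few lines that differ in the new service...
}
else
{
    // ...the original behavior...
}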
I established a truce with Visual Studio (or unconditionally surrendered, depending on the point of view) and reverted the output directory back to \bin. At least the setup project, which is in the same solution, maintains separate folders for each configuration.
Related
I have my web application running, and every time I change some little piece of logic code I need to stop the app and wait for IIS to restart entirely.
Somewhere on the web I saw someone saying that one of the cool features of MVC5 (or maybe MVC6 on ASP.NET Core) is that you can make changes "on the fly".
So can I avoid stopping and restarting IIS every time, or did I just misunderstand something?
It depends on how the ASP.NET Core app is deployed. Essentially, its ability to pick up changes on the fly comes from the fact that it can be deployed as plain code rather than as a compiled application; the web server then compiles it on the fly. For that to happen, though, you need a web server that actually can compile on the fly. IIS cannot. However, IIS can act as a reverse proxy for Kestrel, and Kestrel can compile on the fly. If you deploy the app in the traditional "compile and publish directly to the IIS application directory" way, you will not benefit from this.
Actually, you don't need to restart IIS after each deploy. Whenever a change is detected in the DLLs, the app (not IIS) will recycle and load the new DLLs. This only impacts that particular application: its app domain is reloaded.
Editing the web.config of a web application will also recycle the app.
You can read more in this article.
I've recently inherited a number of WCF webservices that are configured to use an ASHX handler within a web project to render the .SVC files in the form of http://example.com/Services/V1/MyService.svc. The services are running in the dev and production environments, WSDL comes up, and adding a service reference in a new project allows me to call MyMethod and get a response exactly as expected.
The error handling and logging story isn't great, so I'm trying to run the site locally and add a service reference to http://localhost:1234/Services/V1/MyService.svc. I can load the service at that URL and see the same WSDL that appears for the production environment, but when I try to use code similar to what's below to call my method, neither the client nor the response objects are recognized the way they are when I connect to production.
using MyServiceTestProject.LOCAL_MyService;
//...
MyServiceClient test = new MyServiceClient();
MyServiceMethodResponse r = test.MyServiceMethod("arguments!");
I can see exactly one of the MyCustomObject classes that is declared only within my service (it stops being available when I stop using the service reference), so I know that something is coming across, even if it's not everything one would expect.
The relevant parts of the Web.Config files are the same when I compare my local and dev/prod environments, and the project that's running locally is the one that was deployed to those other boxes.
Has anyone encountered this sort of behavior running a services project locally using IIS Express?
Edit: The endpoints are different between the prod, dev, and local environments, using the same code in each one. Thanks for pointing out that detail I'd omitted originally.
After spending several hours on and off trying to solve this before posting the question, deleting and re-adding the service reference and restarting both my IDE and my computer to no avail, I discovered that right-clicking on the name of the service and selecting "Update Service Reference" (circled in red in the screenshot below) would ultimately fix the problems I was seeing, although VS sometimes required me to perform multiple updates before it would work.
Without performing the update, neither my attempts to delete and add the service back nor restarting Visual Studio/rebooting my PC would help address the issue. I'm not sure if it's something about IIS Express that causes the service reference to not be created with all of the necessary data the first time around or if it's one of a hundred other variables in my local environment, but at least there's a reliable way to get it working when it does fail.
I would like to expose a number of WCF web services in AppHarbor. However, it is unclear to me, how to actually start the services, once the code has landed at AppHarbor. My questions are very fundamental:
Given a bunch of compiled code, how does AppHarbor know which dll/exe to execute? And which method on which class?
Should I start the service hosts myself, or should I just provide an .svc file?
So, basically, I'm missing a clear picture of how AppHarbor figures out what code to execute and, in the case of WCF web services, how these should be started.
Here you can find information:
https://appharbor.com/page/how-it-works
I have deployed multiple WCF service projects on AppHarbor. First, you have to know that when you push your code to AppHarbor it will look for exactly one .sln file; if there are more, it'll throw an error.
Once you have deployed your service, it will look something like this:
Now, AppHarbor keeps a list of every commit you have pushed to the server and lets you choose which one of them is the active build.
Since AppHarbor compiles and builds the entire solution, you have to push the entire project folder, not just the .svc file.
How does it know how to start it? That depends on the .sln file: since AppHarbor compiles the whole project, it behaves the same as when you debug it in your local browser. You don't have to start anything; once you have chosen a build to deploy, AppHarbor does all the hard work.
I highly recommend it for .NET solutions ;)
Hope it helps.
More links:
http://support.appharbor.com/kb/getting-started/deploying-your-first-application-using-git
I have a C# web application using MVC5. It currently runs on Azure, and I have dev, test, and production instances. What do I need to do to ensure that the database connection strings automatically change as the application is pushed to each environment? I know this is possible with web.config, as you can define Web.Debug.config and so on, but how would I go about this for different worker roles on Azure? I have been all over the internet looking for a solution. In a nutshell, I would like the same approach used for multiple web.config files, but for Azure.
As some additional background, in my solution I have my repository base broken out into a separate project, and there I am trying to grab the connection string from the configuration files (let's say domain.dll is the name of the library that contains it). At first this worked, when I was only using web.config, but when I had to run my domain DLL files from another worker role the configuration began to return null, because that code was no longer running in the web worker process. This introduces an interesting problem: what if I need to use the domain.dll code outside the web and outside of Azure? How do I still keep the connection string benefits that Azure and web.config provide?
Assumption: you are using Web-Services, not Web-Sites. There is a difference.
There are 2 ways to get what you need:
For a worker role you can do app.config transformations in almost the same way you do them in web.config; you'll just need SlowCheetah. There is a NuGet package for it, and also a VS extension for creating the transform files.
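The transform files use the same XDT syntax as web.config transforms; a minimal sketch (the connection string name and value are illustrative):

<!-- App.Release.config -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="Default"
         connectionString="Server=prod-sql;Database=MyDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>

There is too much faffing-about with this method, though. I never liked it, so on to the second option.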
If you run Web-Services, you can specify connection strings as part of the worker-role configuration. Go to your Azure project and open the properties of your worker role:
There you can add a database connection string. Then create a configuration for every environment you run (dev, test, prod) and place a different connection string in each one.
To get your connection string, you execute:
// requires the Microsoft.WindowsAzure.ConfigurationManager NuGet package
string connectionString = CloudConfigurationManager.GetSetting("DatabaseConnectionString");
Once your site is deployed, you'll see these configuration values on the Configure tab in the Azure Portal.
You should make the distinction between 'building in release mode' and 'deploying to environment X'.
Building in Release mode should just transform your configuration files to be 'production-ready'. With MsDeploy you can parameterize your configuration files so that upon deployment they are filled in with the parameters you supply to your MsDeploy script.
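As a sketch, a parameter declaration for MsDeploy looks roughly like this (the parameter name, default value, and XPath are illustrative); at deploy time you then supply a SetParameters.xml file per environment with the real values:

<parameters>
  <parameter name="Default-ConnectionString"
             description="Connection string for the application database"
             defaultValue="Server=(local);Database=MyDb;Integrated Security=True">
    <parameterEntry kind="XmlFile"
                    scope="\\web\.config$"
                    match="/configuration/connectionStrings/add[@name='Default']/@connectionString" />
  </parameter>
</parameters>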
There is no magic bullet that will automatically change your connection strings and the like per environment, but this way you can standardize your process, which will greatly help the stability of your product.
One thing to note is that parameterizing your deployments will break the easy 'publish' workflow from within Visual Studio, because you are not given an option to fill in your parameters during the publish wizard... :'(
You should manage the connection strings through the Azure portal rather than through config file transformations. With the MVC app this will be easy: go to the Configure tab and set your connection strings there.
For items like web jobs, use Microsoft.WindowsAzure.ConfigurationManager, which "provides a unified API to load configuration settings regardless of where the application is hosted - whether on-premises or in a Cloud Service".
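A minimal sketch of that unified lookup (the setting name is illustrative; depending on the package version the class lives in the Microsoft.Azure or Microsoft.WindowsAzure namespace):

using Microsoft.Azure;

// GetSetting checks the cloud service configuration (.cscfg) first and
// falls back to appSettings in app.config/web.config, so the same call
// works in a web role, a worker role, or on-premises.
string connectionString = CloudConfigurationManager.GetSetting("DatabaseConnectionString");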
What are some best practices for being able to deploy a Windows service that will have to be updated?
I have a Windows service that I will be deploying but might require some debugging and new versions during the beta process. What is the best way to handle that? Ideally, I'd like to find a ClickOnce-style deployment solution for Windows services but my understanding is that this does not exist. What is the closest I can get to ClickOnce for a Windows service?
A simple solution that I use is to just stop the service and xcopy the files from my bin folder into the service folder.
A batch file to stop the service and then copy the files should be easy to throw together.
net stop myService
xcopy \\myServerWithFiles\*.* c:\WhereverTheServiceFilesAre
net start myService
I have a system we use at work that seems to function pretty well for services; our deployment has around 20-30 services running at any given time. At work we use a product called TopShelf, which you can find at http://topshelf-project.com/
Basically, TopShelf handles a lot of the service-related plumbing: installing, uninstalling, and so on, all from the service's command line. One very useful feature is the ability to run as a console for debugging: you build one service, and with a different command line you can run it as a console to see its output. We added one custom feature that lets us configure profiles in advance. Our profiles configure a few things like logging and resource locations so that we can control all of that without having to republish any code. All we do is run a command like
D:\Services\ServiceName.exe Core.Profiles.Debug or
D:\Services\ServiceName.exe Core.Profiles.Production
to get different logging configurations.
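For reference, a minimal TopShelf host looks something like this (a sketch using the standard HostFactory API; MyWorker and the service names are placeholders):

using System;
using Topshelf;

public class MyWorker
{
    public void Start() { Console.WriteLine("service logic starting"); }
    public void Stop() { Console.WriteLine("service logic stopping"); }
}

public class Program
{
    public static void Main()
    {
        // The same exe installs, uninstalls, runs as a console, or runs as a
        // Windows service depending on the command line it is given.
        HostFactory.Run(x =>
        {
            x.Service<MyWorker>(s =>
            {
                s.ConstructUsing(name => new MyWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("MyWorker");
            x.SetDisplayName("My Worker Service");
        });
    }
}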
Our build script creates install.cmd and uninstall.cmd scripts for each of our services; all we do is copy the files to the server and run the script. If we want to see debug output, we stop the service and double-click the exe, and we get a console showing all the output.
One more thing TopShelf has, which we don't use because it isn't necessary for us, is the concept of shelving (there is documentation on the website for this). It allows you to update the service without having to "restart" it, but you still need to copy the files manually unless you build an automated system for that.
However, my suggestion if you need 100% service availability is to have a redundant system. No matter how you configure your service for updates, you cannot avoid hardware failure causing downtime without an automated failover system. If such a system were in place, my recommended update strategy would be: turn off one node, update, test, and turn it back on; then turn off the other node, update, test, and turn it back on. You can do all of this with a simple script. This may be a more complicated system than you need, but if you can't take a service offline for a simple restart that takes 5 seconds, then you really need a system in place to deal with hardware issues, because I can guarantee they will happen eventually.
Since a service is long-running anyway, using ClickOnce style deployment might not be viable - because ClickOnce only updates when you launch the app. A service will typically only be launched when the machine is rebooted.
If you need automatic updates of a service then your best bet might be to hand-code something into the service itself, but I'd foresee problems with almost any solution: most install processes require some level of user interaction (if only to get past UAC), so I can't imagine this leading to an answer that doesn't involve getting a logged-on user in front of the screen at some point.
One idea that might just work is Active Directory deployment (or some similar equivalent). If your service is deployed via a standard MSI-type installer, AD allows you to update the application silently as part of the computer policy. I suspect you'd have to force the server to refresh the AD policy (by rebooting or by running gpupdate from the console), but other than that it should be a hands-off deployment.
I would suggest using the "plugin" approach here, that is, the Proxy design pattern.
With this pattern, an independent thread can watch a folder for updates. You will need to enable shadow copying for your assembly deployment. When your service's update thread encounters a new version of the assembly, it unloads the current production assembly and loads the new version, without stopping the service itself. Even better: your service should never notice the difference, as long as there is no breaking change in your assembly.
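A rough sketch of the idea (IMyPlugin and the assembly/type names are placeholders, and real code needs error handling plus the watcher loop itself):

using System;

// Load the plugin into a shadow-copied AppDomain so the original files stay unlocked.
var setup = new AppDomainSetup
{
    ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
    ShadowCopyFiles = "true"
};
AppDomain domain = AppDomain.CreateDomain("WorkerDomain", null, setup);
var worker = (IMyPlugin)domain.CreateInstanceAndUnwrap("MyService.Plugin", "MyService.Plugin.Worker");

// When the update thread detects a new assembly version:
AppDomain.Unload(domain);
// ...then recreate the domain and load the new version the same way.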
I would suggest creating a normal setup project and adding the Windows service project's output to that setup project.
For more information please refer to http://support.microsoft.com/kb/816169.