Ways to leverage a single code base between a Windows Service and an Azure WebJob - C#

I'm working on a timed recurring process that in some cases will be deployed OnPrem and in other cases deployed in the cloud (Azure). I'm researching a Windows Service and an Azure WebJob. Given that I only need the recurring process to be the timed piece, I'm thinking about having the bulk of the logic in a library with just the entry points being different between the Windows Service for the local deployment or a WebJob when deploying to Azure. Each csproj (service and WebJob) would handle only the timed loop and configuration settings then call into the library for the bulk of the work.
My question is: Is there another design combination available to me that would fulfill these requirements potentially in a better way? I've read about wrapping an existing windows service in a WebJob, but I don't think that would be necessary in this case given I'm starting from scratch.

When it comes to keeping your common code up to date, and knowing which versions are used by which applications, the best solution is to create a class library project with a suitable design pattern and turn it into a NuGet package.
You can host your own private NuGet repository, create your own packages, and host them internally within your own network.
There is a very nice article on "How to create a NuGet package out of your class library project". You can follow it and share the library across all your code.
And finally, you can just call it from your Windows service/WebJob.
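To make the split concrete, here is a rough sketch (all names are hypothetical, and the three parts would live in three separate projects): the class library owns the work, and each host project owns only its trigger.

using System;
using System.ServiceProcess;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;                   // Microsoft.Azure.WebJobs.Extensions package;
using Microsoft.Azure.WebJobs.Extensions.Timers; // remember to enable timers on the JobHost

// Shared class library (distributed as a NuGet package): all business logic lives here.
public static class RecurringWork
{
    public static async Task RunOnceAsync(CancellationToken token)
    {
        // ...load settings, process the batch, write results...
        await Task.CompletedTask;
    }
}

// Windows Service host: only the timing loop lives here.
public class WorkerService : ServiceBase
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();

    protected override void OnStart(string[] args)
    {
        _ = Task.Run(async () =>
        {
            while (!_cts.IsCancellationRequested)
            {
                await RecurringWork.RunOnceAsync(_cts.Token);
                // Cancellation surfaces as a TaskCanceledException; handle/log in real code.
                await Task.Delay(TimeSpan.FromMinutes(5), _cts.Token);
            }
        });
    }

    protected override void OnStop() => _cts.Cancel();
}

// Azure WebJob host: a TimerTrigger replaces the hand-rolled loop.
public class Functions
{
    public static Task ProcessTimer([TimerTrigger("0 */5 * * * *")] TimerInfo timer) =>
        RecurringWork.RunOnceAsync(CancellationToken.None);
}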
Let me know if you need any help related to designing the solution.
Hope it helps.

Related

Gracefully restarting Service Fabric Application while in Production to reflect an update to the Data Package

I have a working C#/.NET application designed for Azure Service Fabric. The application uses Data Package and Config Package files, which are read in the constructor of the Stateless Service Fabric application during initialization. Together the files provide information that enables the application to create multiple infinitely long running Tasks in the RunAsync method of the app.
Now say, there is an update to the Data Package file after the Application has already been in production for a long time, say hours/days or more. I want the application to read the Data Package and Config Package from scratch and restart from the initialization steps performed in the constructor of the Stateless application.
After looking around I found the "RestartDeployedCodePackageAsync" method, which needs to be invoked as follows:
await client.FaultManager.RestartDeployedCodePackageAsync(applicationName, selector, CompletionMode.Verify);
This is described in detail on this page: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-testability-workload-tests
Should I use this method to achieve what I am looking to do? Is it the best method to do so?
To do this, you should perform a service upgrade deployment with just the configuration changes.
Use the same structure as a regular package, but without code, with just the new data & config files.
More info here and here.
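As a rough sketch of what a config-only upgrade looks like from code (this assumes the repackaged version, with only the config/data package versions and the application type version bumped, has already been copied to the image store and registered; the app name and versions are hypothetical):

using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class ConfigOnlyUpgrade
{
    static async Task Main()
    {
        var client = new FabricClient();

        var upgrade = new ApplicationUpgradeDescription
        {
            ApplicationName = new Uri("fabric:/MyApp"),  // hypothetical app name
            TargetApplicationTypeVersion = "1.0.1",      // version bumped in the manifests
            UpgradePolicyDescription = new MonitoredRollingApplicationUpgradePolicyDescription
            {
                UpgradeMode = RollingUpgradeMode.Monitored
            }
        };

        // Service Fabric rolls the change through the upgrade domains; services whose
        // config/data packages changed are restarted and re-run their initialization.
        await client.ApplicationManager.UpgradeApplicationAsync(upgrade);
    }
}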

Difference between a console application and a web application in ASP.NET Core

I am trying to run a background service which just writes to a file on a specified interval.
There are two methods that I tried
1) Created the project with the Console application template
2) Created the project with Web Application as template
When I run the app from Visual Studio, both of them run fine. But when I deploy them to IIS, only the web application version works. It must be noted that there is absolutely no difference between the code of the two projects. I have used WebHost as the hosting strategy in both projects and have installed all the same dependencies in the console application as in the web application version.
I must also mention that I have set the preloadEnabled="true" option in IIS, since IIS needs a web request to start the application.
I am wondering: what is the difference between the two project types if the code is the same? I don't want to use the Web Application template.
Edit 1: I forgot to mention that the service will also need to expose an API endpoint for health-check purposes. Will the Windows service approach listen for HTTP requests?
I used the following article for implementing my background service.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice
After years of building background services, I learned that Windows services are the best tools to implement these applications. While there are different techniques to keep an IIS application up and running in the background and prevent it from getting recycled, in practice, the applications on IIS are not meant to be executed forever.
If you had an intention to build your app in the cloud, I would have suggested using something like Azure WebJobs or Azure Functions Timer-Triggered functions, but for on-premise, even using something like Hangfire in the web is not sustainable. The worst happens when you need backward compatibility on Windows servers that don't have the "Application Initialization" module.
My suggestion is to move your application to a simple Windows Service if you control your environment. Windows services consume less memory, are easier to manage, and can run forever without getting recycled.
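To address the health-check edit as well, here is a minimal sketch, assuming .NET Core 3.x+ and the Microsoft.Extensions.Hosting.WindowsServices package: the same IHostedService runs as a Windows service, from the console, or in Visual Studio, and Kestrel stays available for an HTTP health endpoint, so no IIS recycling is involved.

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// The timed file writer from the question, as a BackgroundService.
public class FileWriterService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            await File.AppendAllTextAsync("heartbeat.log",
                DateTime.UtcNow.ToString("O") + Environment.NewLine, stoppingToken);
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService() // no-op when launched from the console or Visual Studio
            .ConfigureServices(services => services.AddHostedService<FileWriterService>())
            .ConfigureWebHostDefaults(web => web.Configure(app =>
                app.Map("/health", b => b.Run(ctx => ctx.Response.WriteAsync("OK")))))
            .Build()
            .Run();
}

Registered with sc create (or any installer), the service itself listens for the HTTP health requests.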
Web applications are plainly the wrong tool for this.
Being always on and always reachable, web servers are primary targets for hacking. To compensate for that, they are usually run under the most restrictive user rights you can imagine: read rights to their program and this instance's content directory. While I do not know why it worked at all, it will probably stop working in production.
What you want to write is either a service or something executed by the Windows Task Scheduler. Personally, I advise using the Task Scheduler, as services have their own set of restrictions. Unless, of course, there is some detail of the requirements that you did not tell us.
This article could be helpful. It's a step by step tutorial on how to convert a console application to a web application.

Two-part Windows application: "windows service" + GUI to configure it

I’m working on a windows app composed of two parts:
An agent, running in the background.
A main application with a window to start/stop the agent and configure it.
What I understand is that I should use a “windows service” for the agent.
But I’m not sure how this is supposed to be packaged? (Can I have these two parts in the same .exe?)
And how the agent and the main application can communicate (should I use a shared file? Can my agent have a private folder to work in?)
I’m looking for some architecture advice, basically.
Running the agent as a service is probably the best way to go. It'll run without anyone needing to be logged in to run it, and Windows provides extensive monitoring support for services. You can use the sc command to install, start and stop a service, and it even supports controlling services on other machines if you've got the appropriate permissions.
In order to have your gui communicate with it you could look at using WCF. It will allow you to define your interactions with the service as C# classes and will save you having to worry about checking shared directories or looking into a shared file etc. This approach will also make it easy to support multiple clients at the same time, whilst something like a shared folder approach will make this difficult.
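A minimal sketch of that idea, with hypothetical names, using named pipes (a common WCF choice for same-machine IPC):

using System;
using System.ServiceModel;

// Contract shared by both executables (e.g. in a common assembly).
[ServiceContract]
public interface IAgentControl
{
    [OperationContract] bool IsRunning();
    [OperationContract] void UpdateSetting(string key, string value);
}

// Implementation hosted inside the Windows service.
public class AgentControl : IAgentControl
{
    public bool IsRunning() => true;                              // replace with real agent state
    public void UpdateSetting(string key, string value) { /* apply the setting */ }
}

public static class Example
{
    // Called from the service's OnStart: self-host the endpoint.
    public static ServiceHost StartHost()
    {
        var host = new ServiceHost(typeof(AgentControl), new Uri("net.pipe://localhost/agent"));
        host.AddServiceEndpoint(typeof(IAgentControl), new NetNamedPipeBinding(), "control");
        host.Open();
        return host;
    }

    // Called from the GUI: talk to the service through a proxy.
    public static void SetIntervalFromGui()
    {
        var factory = new ChannelFactory<IAgentControl>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/agent/control"));
        factory.CreateChannel().UpdateSetting("interval", "30");
    }
}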
You will need two separate .exe files, one for the service and one for the Windows application. You can package these as two separate MSIs within Visual Studio; the benefit here is that if you need to move the service (for whatever reason), you are not also packaging up the Windows app and leaving it wherever you install the service.
There are different ways you can have them communicate without getting massively complex. You could read from a text file, as you've suggested, but this could cause locking problems. When I've had to do something similar, I created a simple database in SQL (or any brand of database you wish) and had the Windows app insert/update configuration options in a table; the service then reads the table to get its settings.

C# ClickOnce deployment for Windows Services?

What are some best practices for being able to deploy a Windows service that will have to be updated?
I have a Windows service that I will be deploying but might require some debugging and new versions during the beta process. What is the best way to handle that? Ideally, I'd like to find a ClickOnce-style deployment solution for Windows services but my understanding is that this does not exist. What is the closest I can get to ClickOnce for a Windows service?
A simple solution that I use is to merely stop the service and x-copy the files from my bin folder into the service folder.
A batch file to stop the service then copy the files should be easy to throw together.
net stop myService
xcopy \\myServerWithFiles\*.* c:\WhereverTheServiceFilesAre
net start myService
I have a system at work that seems to function pretty well for services; our deployment has around 20-30 services at any given time. At work we use a product called Topshelf; you can find it here: http://topshelf-project.com/
Basically, Topshelf handles a lot of the service-related plumbing: installing, uninstalling, etc., all from the service's command line. One of the very useful features is the ability to run as a console for debugging: you build one service, and with a different command-line switch you can run it as a console application to see the service's output. We added one custom feature to this software that lets us configure profiles in advance. Our profiles configure a few things like logging and resource locations so that we can control all of that without having to republish any code. All we do is run a command like
D:\Services\ServiceName.exe Core.Profiles.Debug or
D:\Services\ServiceName.exe Core.Profiles.Production
to get different logging configurations.
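For reference, the basic Topshelf wiring looks roughly like this (hypothetical names; a sketch of the pattern, not our exact setup):

using System;
using Topshelf;

public class ServiceWorker
{
    public void Start() { /* start the timer / work loop */ }
    public void Stop()  { /* stop and clean up */ }
}

public static class Program
{
    public static void Main()
    {
        HostFactory.Run(cfg =>
        {
            cfg.Service<ServiceWorker>(s =>
            {
                s.ConstructUsing(name => new ServiceWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            cfg.RunAsLocalSystem();
            cfg.SetServiceName("MyService");
            cfg.SetDescription("Example background worker");
        });
    }
}

"MyService.exe install" and "MyService.exe start" install and start the service; running MyService.exe directly gives you a console showing the service's output.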
Our build script creates install.cmd and uninstall.cmd scripts for each of our services; all we do is copy the files to the server and run the script. If we want to see debug output, we stop the service and double-click the exe, and we get a console to read all the output.
One more thing Topshelf has, which we don't use because it's not necessary for us, is the concept of shelving (there is documentation on the website above). This allows you to update the service without having to restart it, but you still need to copy the files manually unless you build an automated system for that.
However, my suggestion if you need 100% service availability is to have a redundant system. No matter how you configure your service for updates, you cannot avoid hardware failure causing downtime without an automated failover system. If such a system were in place, my recommended update strategy would be to turn off one node, update, test, and turn it back on; then turn off the other node, update, test, and turn it back on. You can do all of this with a simple script. This may be a more complicated system than you need, but if you can't take a service offline for a simple restart that takes 5 seconds, then you really need some system in place to deal with hardware issues, because I can guarantee they will happen eventually.
Since a service is long-running anyway, using ClickOnce style deployment might not be viable - because ClickOnce only updates when you launch the app. A service will typically only be launched when the machine is rebooted.
If you need automatic updates for a service, your best bet might be to hand-code something into the service itself, but I'd foresee problems with almost any solution: most install processes will require some level of user interaction (if only to get around UAC), so I can't imagine this leading to an answer that doesn't involve getting a logged-on user in front of the screen at some point.
One idea that might just work is Active Directory deployment (or some similar equivalent). If your service is deployed via a standard MSI-type installer, AD allows you to update the application silently as part of the computer policy. I suspect you'd have to force the server to refresh the AD policy (by rebooting or using gpupdate from the console), but other than that it should be a hands-off deployment.
I would suggest using the "plugin" approach on this, that is, using the Proxy Design Pattern.
While using this pattern, an independent thread can watch a folder for updates. You will need to enable shadow copying for your deployed assemblies. When your service's update thread encounters a new version of your service, it unloads the current production assembly and loads the new version, without stopping the service itself. Even better: your service should never notice the difference, provided there is no breaking change within your assembly.
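A rough sketch of the shadow-copy mechanics behind that approach (type and assembly names are hypothetical, and this assumes the .NET Framework AppDomain model):

using System;

// Shared contract: lives in an assembly loaded by both domains.
public interface IWorkRunner { void Start(); void Stop(); }

public static class HotSwapHost
{
    public static void Main()
    {
        // Load the work assembly into a child AppDomain with shadow copying on,
        // so the file on disk can be replaced while the copy keeps running.
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ShadowCopyFiles = "true" // the API takes the string "true"
        };
        AppDomain workerDomain = AppDomain.CreateDomain("WorkerDomain", null, setup);

        // "MyService.Work"/WorkRunner are hypothetical; the class must inherit
        // MarshalByRefObject and implement IWorkRunner.
        var runner = (IWorkRunner)workerDomain.CreateInstanceAndUnwrap(
            "MyService.Work", "MyService.Work.WorkRunner");
        runner.Start();

        // Later, when the update thread sees a new assembly version:
        runner.Stop();
        AppDomain.Unload(workerDomain); // releases the shadow-copied assembly
        // ...recreate the domain the same way to load the new version...
    }
}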
I would suggest creating a normal setup project and adding the Windows service project output to that setup project.
For more information please refer to http://support.microsoft.com/kb/816169.

Please give reference/guidance to make a web application for managing all IT assets/devices

The application consists of two components: a web application and a Windows .NET application.
The client Windows .NET application scans the whole active network, finds all IT assets such as printers and scanners, and uploads the data to the web application.
Our team is using ASP.NET and C# for this project.
Please give suggestions regarding:
1. the interaction between the client application and the web application;
2. any library/reference required for the project;
3. whether the Microsoft Sync Framework is a good fit for the client application;
4. whether WCF would be a good option for client/server interaction (for building the API);
5. how to scan the active network for devices from the client application.
I would suggest going for a simple solution like:
1. A Windows application that takes care of scanning the entire network of computers from the machine where it is installed and then sends the information to a web service.
2. A web service that accepts the asset list and saves it to the database.
3. An ASP.NET application to display and catalog the indexed assets.
The part that will become somewhat complicated is asset discovery, since it has to be handled by the Windows application. Your best bets are:
1. to depend on the Windows registry;
2. to use Windows Management Instrumentation (WMI), as in the sketch below;
3. if you dare, to program directly against Netapi32.dll: NetServerEnum or similar low-level Win32 APIs.
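For option 2, a minimal WMI sketch (queries the local machine here; for remote hosts, point the scope at \\HOSTNAME\root\cimv2 and supply credentials, then loop over the discovered hosts):

using System;
using System.Management; // add a reference to System.Management.dll

class WmiInventory
{
    static void Main()
    {
        var scope = new ManagementScope(@"root\cimv2");
        scope.Connect();

        var query = new ObjectQuery(
            "SELECT Name, Manufacturer, Model FROM Win32_ComputerSystem");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject mo in searcher.Get())
            {
                Console.WriteLine("{0}: {1} {2}",
                    mo["Name"], mo["Manufacturer"], mo["Model"]);
            }
        }
    }
}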
Hope this will help you to get started.
Consider this approach:
Asset Catalog Service (WCF app). Its responsibility is to act as a repository for found assets. The service encapsulates the actual storage (database, etc.).
Asset Finder (WinForms app). Its responsibility is to scan the active network, find all IT assets, and call the Asset Catalog Service to register each device. The Asset Catalog Service reconciles each report: a device that has not been registered yet is stored, a device that has changed is updated, and an unchanged device is ignored. You may have multiple instances of the Asset Finder running at the same time to speed up discovery; the Asset Catalog Service or another service can keep track of the finders' work pool.
The website (ASP.NET web app). Its responsibility is to visualize the asset catalog. It queries the Asset Catalog Service and displays the results to end users.
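To illustrate, the Asset Catalog Service contract could look something like this (a sketch with hypothetical members; the reconcile decision lives behind RegisterAsset):

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class AssetReport
{
    [DataMember] public string MacAddress { get; set; }  // natural key for reconciliation
    [DataMember] public string IpAddress { get; set; }
    [DataMember] public string DeviceType { get; set; }  // printer, scanner, ...
    [DataMember] public DateTime SeenAtUtc { get; set; }
}

public enum RegistrationResult { Inserted, Updated, Unchanged }

[ServiceContract]
public interface IAssetCatalog
{
    // The service decides insert/update/ignore and tells the finder what it did.
    [OperationContract]
    RegistrationResult RegisterAsset(AssetReport report);
}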
I can't see any obvious use case for the Microsoft Sync Framework here.
Unfortunately, I don't have any experience in writing any asset discovery algorithm. Others might be able to help on that point.
Hope that helps.
This is a bit off topic, but I would suggest looking at options from existing vendors that will meet your overall business requirements. Asset detection and management is not a simple task and creating an in-house application to do it is often a waste of time and money that could be better spent on core business needs or other IT support/resources. Purchasing software from an existing vendor will give you a much better solution than whatever you can code up in a week, a month, or even a year. If you are trying to catalog even a medium sized network with over 100 nodes then using an established system could end up being much cheaper than building your own. Here are a few examples of existing products:
http://www.vector-networks.com/components/network-discovery-and-mapping.php
http://www.manageengine.com/products/service-desk/track-it-assets.html
I haven't used either of them, but I have been down a similar route trying to create an in-house server monitoring and management system. We spent two weeks working on a prototype that was eventually scrapped for a 3rd-party system that cost $1,000 a year. It would have cost us at least $10,000 to build something with 1/10th the features, let alone support and maintain it. Even just searching for a FOSS solution and then using that as the basis for your project (something like nmap) would be better than starting from scratch.
Best of luck!
A Windows-based application should be used to scan the network and collect information about the devices/assets available on it, saving that information in a database.
Look at the following project to get an idea of how you might scan the network: http://sourceforge.net/projects/nexb/
The same database should be used by the ASP.NET app for reporting purposes; you may also use it to group/tag/categorize the various assets.
Also store scanned devices in separate "departments" depending on their IP scheme.
