Background worker in a self-hosted WCF service - C#

I am completely new to WCF and multithreading. So, I followed this tutorial to set up a self-hosted WCF service. After I right-clicked on the interface "INews_Service", I clicked on "Implement Interface". VS then created a method named DoWork().
In the tutorial above, the DoWork() method is not needed (-> it is deleted). However, in my project, I would like to use this method to run a background worker thread/task.
In my project, the background worker is supposed to permanently load data from an external device and store it in the DataContract-class. The WCF Service, in turn, is supposed to simultaneously provide the instance of that DataContract-class to external WCF consumers.
In reference to the tutorial above, what is the best way to add a background worker method, which constantly changes the variables within an instance of the DataContract-class?
EDIT:
@Brian: Thank you very much! The following example shows the self-hosted service program from the tutorial above. After I start the host, I would like to run an endless loop that constantly updates my DataContract-class. Can you give an example of how this can be done? I do not need any synchronization, such as SpinLock or Monitor, do I?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace WCF_NewsService
{
    class Program
    {
        static void Main(string[] args)
        {
            ServiceHost host = new ServiceHost(typeof(News_Service));
            host.Open();
            Console.WriteLine("Host Open Successfully ...");
            while (true)
            {
                // here I want to constantly update my DataContract-class TOInews
            }
        }
    }
}
EDIT2:
Actually, my problem is that I don't know how to access my DataContract-object objtoinews, which is defined in another file (i.e. in NewsService, as in the tutorial). When I run something like objtoinews.ID = 1;, VS complains that objtoinews is not defined in the current context.
while (true)
{
    if (host.State != CommunicationState.Opened)
    {
        throw new Exception("SynchronizationWS Service Host failed.");
        break;
    }
    else
    {
        objtoinews.ID = 1;
        objtoinews.Header = "blabla";
        objtoinews.Body = "huhu";
    }
}

You don't need to use DoWork in a WCF solution. Basically, the console program described in that tutorial will perform DoWork() when host.Open() is called. In other words, host.Open() will do what you expect DoWork() to do.
The console acts as the executable, but all the work is done asynchronously/multi-threaded in the background by the WCF service.
Booz, I'm not sure why you think you need to continuously update your DataContract. I don't think you can, actually, while the program's running. If you're worried about people sending different data constructs to your WS host, maybe you need to abstract the structure so that (basically) your clients can send virtually anything.
In any event, here's the code I use to loop and check the status of my web service:
while (true)
{
    // broken connection case
    if (wshost.State != CommunicationState.Opened)
    {
        // dump from loop and throw error
        throw new Exception("Service Host failed.");
    }
    System.Threading.Thread.Sleep(1000); // sleep 1 second
}
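Regarding EDIT2: the host loop can only touch objtoinews if it holds a reference to the same object the service uses. One option is to make the service a singleton, construct the instance yourself, and hand it to ServiceHost; Main then shares the object with WCF. The following is only a sketch built on the tutorial's type names (News_Service, INews_Service, TOInews); the GetNews operation, the Update helper, and the settable properties are assumptions, not code from the tutorial.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class News_Service : INews_Service
{
    private readonly object _sync = new object();

    // the shared DataContract instance that the host loop keeps updating
    public TOInews objtoinews = new TOInews();

    public TOInews GetNews() // operation name assumed from the tutorial
    {
        lock (_sync)
        {
            // return a snapshot so serialization does not race with the host loop
            return new TOInews { ID = objtoinews.ID, Header = objtoinews.Header, Body = objtoinews.Body };
        }
    }

    public void Update(int id, string header, string body) // hypothetical helper for the host loop
    {
        lock (_sync)
        {
            objtoinews.ID = id;
            objtoinews.Header = header;
            objtoinews.Body = body;
        }
    }
}

// in Program:
static void Main(string[] args)
{
    var service = new News_Service();    // create the singleton instance yourself
    var host = new ServiceHost(service); // pass the instance instead of typeof(News_Service)
    host.Open();
    Console.WriteLine("Host Open Successfully ...");

    while (host.State == CommunicationState.Opened)
    {
        service.Update(1, "blabla", "huhu"); // the host loop writes to the same object the service reads
        System.Threading.Thread.Sleep(1000); // avoid a tight busy loop
    }
}
Even though the edit suggests no synchronization is needed, the host loop and the WCF worker threads now share one instance, so the lock (or handing out snapshots, as above) is cheap insurance.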

Related

Right architecture for using HangFire

I'm about to start using Hangfire in C# in an ASP.NET MVC web application, and I wonder how to create the right architecture.
As we are going to use Hangfire, we are using it as a message queue, so we can process (store in the database) the user data directly and then, for instance, notify other systems and send email later in a separate process.
So our code now looks like this
void Xy(Client newClient)
{
    _repository.save(newClient);
    _crmConnector.notify(newClient);
    mailer.Send(repository.GetMailInfo(), newClient);
}
And now we want to put the last two lines 'on the queue'
So following the example on the hangfire site we could do this
var client = new BackgroundJobClient();
client.Enqueue(() => _crmConnector.notify(newClient));
client.Enqueue(() => mailer.Send(repository.GetMailInfo(), newClient));
but I was wondering whether that is the right solution.
I once read about putting items on a queue and those were called 'commands', and they were classes especially created to wrap a task/command/thing-to-do and put it on a queue.
So for the notify the crm connector this would then be
client.Enqueue(() => new CrmNotifyCommand(newClient).Execute());
The CrmNotifyCommand would then receive the new client and have the knowledge to execute _crmConnector.notify(newClient).
In this case all items that are put on the queue (executed by HangFire) would be wrapped in a 'command'.
Such a command would then be a self-contained class which knows how to execute a piece of business functionality. When the command itself uses more than one other class, it could also be considered a facade, I guess.
What do you think about such an architecture?
I once read about putting items on a queue and those were called 'commands', and they were classes especially created to wrap a task/command/thing-to-do and put it on a queue.
Yes, your intuition is correct.
You should encapsulate all dependencies and explicit functionality in a separate class, and tell Hangfire to simply execute a single method (or command).
Here is my example, that I derived from Blake Connally's Hangfire demo.
using System;
using System.ComponentModel;
using Hangfire;
using Hangfire.Server;

namespace HangfireDemo.Core.Demo
{
    public interface IDemoService
    {
        void RunDemoTask(PerformContext context);
    }

    public class DemoService : IDemoService
    {
        [DisplayName("Data Gathering Task Confluence Page")]
        public void RunDemoTask(PerformContext context)
        {
            Console.WriteLine("This is a task that ran from the demo service.");
            BackgroundJob.ContinueJobWith(context.BackgroundJob.Id, () => NextJob());
        }

        public void NextJob()
        {
            Console.WriteLine("This is my next task.");
        }
    }
}
And then separately, to schedule that command, you'd write something like the following:
BackgroundJob.Enqueue("demo-job", () => this._demoService.RunDemoTask(null));
If you need further clarification, I encourage you to watch Blake Connally's Hangfire demo.
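For the command shape described in the question, a possible sketch is below. CrmNotifyCommand, ICrmConnector and Client are names taken from the question rather than from Hangfire, and letting Hangfire's job activator build the command (instead of new-ing it inside the lambda) is my assumption:
public class CrmNotifyCommand
{
    private readonly ICrmConnector _crmConnector;

    // the dependency is supplied when Hangfire's job activator (or your DI container) creates the command
    public CrmNotifyCommand(ICrmConnector crmConnector)
    {
        _crmConnector = crmConnector;
    }

    public void Execute(Client newClient)
    {
        _crmConnector.notify(newClient);
    }
}

// enqueue the command; only the method call and its serialized arguments go on the queue
var client = new BackgroundJobClient();
client.Enqueue<CrmNotifyCommand>(cmd => cmd.Execute(newClient));
The newClient argument is serialized with the job, so it should stay a simple DTO; heavier collaborators belong in the constructor.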

IIS hosted WCF service: Integration tests and code coverage

For a project I have programmed a WCF service library. It can be hosted in IIS and in a self-hosted service.
For all external systems that are connected, I have provided mock implementations which supply some generic data, so that the service (library) keeps running and doing work. It is a classic automaton / finite-state machine.
While bootstrapping, all data sources are connected. In testing mode, the mock implementations are connected. So when I run tests, the service library is "started" from a self-hosted service, not IIS, and the state machine keeps running and processing data packages.
Is there any way to get some kind of "test coverage" from such a run?
I would really appreciate it if I could tell which code paths are hit by the example data I provide from the mock objects, and then provide more test data to get higher coverage.
If I could do this without having to provide "lots of extra" testing code, it would be great. I think a lot of cases are already covered by the data provided from the mock objects. But right now I have no starting point for that.
Here are some code examples to give a clearer picture of what is meant. The code is heavily simplified, of course.
A very simple console application starts the service (self-hosted version):
static void Main(string[] args)
{
    using (var host = new ServiceHost(typeof(MyServiceLib.Service.MyServiceLib)))
    {
        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
In the service library, a constructor is called from that code
public MyServiceLib()
{
    Task.Factory.StartNew(this.Scaffold);
}
Which does nothing more than starting the state machine
private void Scaffold()
{
    // lots of code deleted for simplicity reasons
    var dataSource = new MockDataSource();

    // inject the mocked datasource
    this.dataManager = new DataManager(dataSource);

    // this runs in its own thread. There are parts that are started on a timer event.
    this.dataManager.Start();
}
public class DataManager : IDataManager
{
    public void Start()
    {
        while (this.IsRunning)
        {
            var data = this.dataSource.getNext();
            if (data != null)
            {
                // do some work with the data retrieved
                // lots of code paths will be hit from that
                this.Process(data);
            }
            else
            {
                Thread.Sleep(1000);
            }
        }
    }

    public void Process(IData data)
    {
        switch (data.PackageType)
        {
            case EnumPackageType.Single:
            {
                ProcessSingle(data);
                break;
            }
            case EnumPackageType.Multiple:
            {
                ProcessMultiple(data);
                break;
            }
            // here are lots of cases
            default:
            {
                Logger.Error("unknown package type");
                break;
            }
        }
    }
}
What I have tried so far:
OpenCover
with a special test DLL that would create the host as shown above, but the host cannot be created properly, so the testing does not really start. I get a "Host is in fault state" error message. I followed this mini-tutorial. Despite that, I get a coverage report with a calculated coverage of about 20%. But the service is only starting; it is not doing any work so far.
Visual Studio Performance Tools
The steps are essentially described in this article. I get a myproject.coverage file, but I cannot view it, because I only have VS Professional; the coverage files seem to be usable only in the Test Premium or Ultimate editions.
Besides having tried those two, I will accept any answer showing how to get it up and running with either of them (OpenCover preferred).
I will accept an answer that shows how to test this setup and get code coverage while leveraging tools to generate most of the code (as Pex would, but after a trial I see it does not generate very good code).
It would help to see the operations of the service.
I never tried running such "console kind" application under a coverage tool.
I would suggest writing a test with let's say NUnit (or any other unit testing framework; it's not a unit test, obviously, but the technique fits quite well).
In the test, you open the service host, create a client of the service, let the client execute some operations on your service, and close the service host.
Run this test under a coverage tool, and you should be done.
I did that with NUnit and NCover about 7 years ago, using their then-current versions (NCover was free software, if I remember right).
It looks like with OpenCover you are actually getting the coverage, but the service is entering the Faulted state, so you need to catch the faults from your ServiceHost and address that.
Basically you need some kind of error log, and the first thing I would try is looking in the system event logs (Win+R, eventvwr.msc, Enter).
You can also try to listen to the Faulted events on your ServiceHost:
host.Faulted += new EventHandler(host_faulted);
Here is the link to another SO answer addressing this issue:
How to find out the reason of ServiceHost Faulted event
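A minimal handler sketch follows; note that the Faulted event itself does not carry the exception, so it only tells you when the host faulted, and the underlying cause still has to come from the event log or WCF tracing:
host.Faulted += (sender, e) =>
{
    Console.WriteLine("ServiceHost faulted; check the event log / WCF trace for the underlying exception.");
    ((ServiceHost)sender).Abort(); // a faulted host cannot be reopened, so abort it
};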
I would suggest testing your business logic and not the bootstrap code, i.e. testing the DataManager class rather than the hosting and initialization code. You can write a unit test using one of the unit testing frameworks, for example NUnit. Then you can run your tests either in Visual Studio with ReSharper Ultimate or in your continuous integration with a code coverage tool, like OpenCover or dotCover, to get your code coverage.
[TestFixture]
public class DataManagerTests
{
    [Test]
    public void Process_Single_Processed()
    {
        // Arrange
        IData data = new SingleData();
        DataManager dataManager = new DataManager(new MockDataSource()); // inject a mocked data source, as in Scaffold()

        // Act
        dataManager.Process(data);

        // Assert
        // check data processed correctly
    }
}
In order to allow your unit test framework to determine the coverage, you have to host the service within the "runner" of the framework (i.e. the process that is executing the tests).
The coverage is calculated by and within the "runner", which means that you cannot get coverage if the service is hosted anywhere else.
Below I'll add an example of how to do this.
Greetings
Juy Juka
namespace ConsoleApplication4
{
using System.ServiceModel; // Don't forget to add System.ServiceModel as a reference to the project.
public class Program
{
static void Main(string[] args)
{
string arg = ((args != null && args.Length > decimal.Zero ? args[(int)decimal.Zero] : null) ?? string.Empty).ToLower(); // This is only reading the input for the example application, see also end of Main method.
string randomUrl = "net.tcp://localhost:60" + new System.Random().Next(1, 100) + "/rnd" + new System.Random().Next(); // random URL to allow multiple instances in parallel (for example in unit tests). // Better way?
if (arg.StartsWith("t"))
{
// this part could be written as a UnitTest and should be
string result = null;
using (ServiceHost host = new ServiceHost(typeof(MyService)))
{
host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(), randomUrl);
host.Open();
IMyService instance = ChannelFactory<IMyService>.CreateChannel(new NetTcpBinding(), new EndpointAddress(randomUrl), null);
result = instance.GetIdentity();
host.Close();
}
// Assert.Equals(result,"Juy Juka");
}
else if (arg.StartsWith("s"))
{
// This part runs the service and provides it to the outside. Just to show that it is a real and working host. (and not only working in a Unit-Test)
using (ServiceHost host = new ServiceHost(typeof(MyService)))
{
host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(), randomUrl);
host.Open();
System.Console.Out.WriteLine("Service hosted under following URL. Terminate with ENTER.");
System.Console.Out.WriteLine(randomUrl);
System.Console.In.ReadLine();
host.Close();
}
}
else if (arg.StartsWith("c"))
{
// This part consumes a service that is run/hosted outside of the application. Just to show that it is a real and working host. (and not only working in a Unit-Test)
System.Console.Out.WriteLine("Please enter URL of the Service. Execute GetIdentity with ENTER. Terminate with ENTER.");
IMyService instance = ChannelFactory<IMyService>.CreateChannel(new NetTcpBinding(), new EndpointAddress(System.Console.In.ReadLine()), null);
System.Console.Out.WriteLine(instance.GetIdentity());
System.Console.In.ReadLine();
}
else
{
// This is only to explain the example application here.
System.Console.Out.WriteLine("I don't understand? Please use one of the following (Terminate this instance with ENTER):");
System.Console.Out.WriteLine("t: To host and call the service at once, like in a UnitTest.");
System.Console.Out.WriteLine("s: To host the servic, waiting for clients.");
System.Console.Out.WriteLine("c: To contact a hosted service and display it's GetIdenttity result.");
System.Console.In.ReadLine();
}
}
}
// Declaration and Implementation of the Service
[ServiceContract]
public interface IMyService
{
[OperationContract]
string GetIdentity();
}
public class MyService : IMyService
{
public string GetIdentity()
{
return "Juy Juka";
}
}
}

How to pass a parameter to a Windows service from a client machine

I want to write an error log from a Windows service. I know how to call a custom command on the Windows service, but it does not allow any parameters. I want to pass my error message to the service and have the custom command write the log. How can I do it? Here is what I tried.
I have created a static string variable in my library class.
Whenever an error occurs I call it like this:
ATELib.AteBAC.getErrorMessage = "error message from client";
ServiceController Controller = new ServiceController("ATELogsService");
if (Controller.Status == ServiceControllerStatus.Running)
{
    Controller.ExecuteCommand(128);
}
and the code in my service is
protected override void OnCustomCommand(int command)
{
    if (command == 128)
    {
        using (System.IO.StreamWriter file = new System.IO.StreamWriter(Application.StartupPath + @"\ATELogCheck.txt", true))
        {
            file.WriteLine(ATELib.AteBAC.getErrorMessage);
            ATELib.AteBAC.getErrorMessage = null;
        }
    }
}
It creates the error log file (ATELogCheck.txt), but the error message (string value) is not in the file; the txt file is empty. It is a static variable, so why does it write nothing? I am using the TCP protocol and calling the service object as:
baCls = (ATELib.AteBAC)Activator.GetObject(typeof(ATELib.AteBAC), "tcp://localhost:9090/ATE");
How can I pass the string value to the service?
Because your service runs in a different process (and application domain) from your application, the static ATELib.AteBAC.getErrorMessage in your application is a different variable from ATELib.AteBAC.getErrorMessage in your service.
I honestly think that if you have to do something like this, you have issues in the design of your application. A service is really there to do something continuously. Or it can "sleep" and listen on a TCP port; once a signal is received, it can find work and execute it. Definitely look into the design first before going into unneeded complexities.
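Since the question already exposes AteBAC over a TCP remoting channel, one alternative is to pass the message as a method argument instead of a static field, so the value travels to the service process. This is only a sketch, assuming AteBAC is registered in the service as a server-activated remoting object (which the tcp://localhost:9090/ATE URL suggests); the LogError method and the file location are hypothetical:
// lives in ATELib and is hosted by the service, so this code runs in the service process
public class AteBAC : MarshalByRefObject
{
    public void LogError(string message)
    {
        // AppDomain.CurrentDomain.BaseDirectory is used instead of Application.StartupPath,
        // which belongs to Windows Forms and is not ideal inside a service
        string path = System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "ATELogCheck.txt");
        using (var file = new System.IO.StreamWriter(path, true))
        {
            file.WriteLine(message);
        }
    }
}

// client side: the call is marshalled over the existing tcp://localhost:9090/ATE channel
var baCls = (ATELib.AteBAC)Activator.GetObject(typeof(ATELib.AteBAC), "tcp://localhost:9090/ATE");
baCls.LogError("error message from client");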

How do I know if my Windows service is working?

I have built a Windows service to populate a database with my email inbox every 5 minutes.
I use a class inside my Windows service; the class gets my emails and writes them to my database. The class has been tested and works.
All I need the Windows service to do is use a timer and call the class every 5 minutes, but I have no idea what's going on, as I can't even test my Windows service.
Please, someone tell me what to do to test it, if there is a way to test at all, or is it just blind luck and praying it works?
Also, do you have to uninstall and re-install every time you want to test the service, or is there an option to update the service? Please answer this; I'm really interested even though it's not my main question.
This is my Windows service; if you can point out any errors that would be amazing, since I can't test for them. I think my timer might be wrong, if someone could look at it.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.ServiceProcess;
using System.Text;
using System.Timers;
namespace EmailWindowsService
{
public partial class MyEmailService : ServiceBase
{
private Timer scheduleTimer1 = null;
private DateTime lastRun;
private bool flag;
public MyEmailService()
{
InitializeComponent();
if (!System.Diagnostics.EventLog.SourceExists("MySource"))
{
System.Diagnostics.EventLog.CreateEventSource(
"MySource", "MyNewLog");
}
eventLogEmail.Source = "MySource";
eventLogEmail.Log = "MyNewLog";
scheduleTimer1 = new Timer();
scheduleTimer1.Interval = 5 * 60 * 1000;
scheduleTimer1.Elapsed += new ElapsedEventHandler(scheduleTimer_Elapsed);
}
protected override void OnStart(string[] args)
{
flag = true;
lastRun = DateTime.Now;
scheduleTimer1.Start();
eventLogEmail.WriteEntry("Started");
}
protected override void OnStop()
{
scheduleTimer1.Stop();
eventLogEmail.WriteEntry("Stopped");
}
protected override void OnPause()
{
scheduleTimer1.Stop();
eventLogEmail.WriteEntry("Paused");
}
protected override void OnContinue()
{
scheduleTimer1.Start();
eventLogEmail.WriteEntry("Continuing");
}
protected override void OnShutdown()
{
scheduleTimer1.Stop();
eventLogEmail.WriteEntry("ShutDowned");
}
protected void scheduleTimer_Elapsed(object sender, ElapsedEventArgs e)
{
RetriveEmailClass Emails = new RetriveEmailClass();
if (flag == true)
{
eventLogEmail.WriteEntry("In getting Email Method");
Emails.ServiceEmailMethod();
lastRun = DateTime.Now;
flag = false;
}
else if (flag == false)
{
if (lastRun.Date < DateTime.Now.Date)
{
Emails.ServiceEmailMethod();
eventLogEmail.WriteEntry("In getting Email Method");
}
}
}
}
}
Surely you can test it. All you need is
start up the service
observe that it triggers the expected call after 5 minutes
(observe that it triggers the expected call every 5 minutes for a couple more times)
You can test this manually, or (preferably) create/use an automated test harness which allows you to test repeatedly and reliably, as many times as you want. This is possible even using a simple batch file.
To detect that the timer works correctly, you can inspect its log file. It also helps of course if you make the called class method configurable instead of hardcoding it. So you can run your automated tests using a dummy worker class which does not flood your inbox :-)
To make it even more testable, you can extract the timing logic from your service class too, so that it becomes runnable from a regular application. Then you can test it even easier, even using a unit test framework such as NUnit. This allows you to do more thorough testing, using different timing intervals etc. And the service class itself becomes an almost empty shell whose only job is to launch and call the other classes. If you have verified that all the classes containing real program logic (i.e. all the code which can fail) is unit tested and works fine, you can have much greater confidence in that your whole app, when integrated from its smaller parts, works correctly too.
Update
Looking through your code, it seems that you don't initialize flag anywhere, so its default value will be false. You should initialize it to true in the constructor, otherwise your email retriever won't ever get called even if the timer fires properly.
To set the interval to 1 minute, my first guess would be
scheduleTimer1.Interval = 1 * 60 * 1000;
James Michael Hare has written on his blog about a really nice template/framework he has made, which makes it a lot easier to develop (and debug) Windows services: C# Toolbox: A Debuggable, Self-Installing Windows Service Template (1 of 2)
It provides you with all the basics you need to quickly get started. Best of all, it gives you a really nice way to debug your service as if it were a regular console application. I could also mention that it provides out-of-the-box functionality to install (and uninstall) your service. Part two of the post can be found at this link.
I've used this myself a couple of times, and can really recommend it.
Refactor your logic into another class.
Write a simple console application invoking this class (see the sketch below).
Test it like a normal application.
Once it runs standalone, it should run as a service.
Beware of permissions and service registration; there are a couple of issues there (like having a system user, or a desktop session).
A good practice is to use the system logs (e.g. the ones you can inspect with eventvwr).
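A minimal sketch of such a console harness, reusing the RetriveEmailClass and ServiceEmailMethod names from the question:
static void Main(string[] args)
{
    // call the worker class directly, outside the service, so it is easy to run and debug
    var emails = new RetriveEmailClass();
    emails.ServiceEmailMethod();

    Console.WriteLine("Done. Press Enter to exit.");
    Console.ReadLine();
}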
1. Add this line at the place where you want to break; then you can debug your service:
System.Diagnostics.Debugger.Break();
or
2. Try to attach to your service process from Process Explorer; then you can also debug your service.
or
3. Use a log file to log what your service is doing.
You can attach a debugger to your running service instance from Visual Studio. Click "Debug" in the main menu, "Attach to Process...", select your service process from the list and click "Attach".
If you need to debug the startup of your service, you need to use System.Diagnostics.Debugger.Break().

Windows Service Design Help

I'm in the process of designing the architecture of an application I'm planning on building and need some advice on the best way to implement a specific Windows service component described below. I'll be building the service using .NET 4.0 so I can take advantage of the new parallel and task APIs. I've also looked at using MSMQ; however, I'm not sure it is appropriate for what I hope to achieve.
The simplest way of explaining my use case is that users can create a number of reminders of different types for a task that they need to complete, which they create using a web-based application built in ASP.NET MVC 2. These reminders can be of various types, for example email and SMS, which of course must be sent at the specified due time. The reminders can be changed up until the point they have been sent to the user, paused, or cancelled altogether, which I guess makes a queue-based service such as MSMQ inappropriate?
I plan to host a Windows service that will periodically (unless there is a more appropriate way?) check to see if there are any reminders due, determine their type and pass them to the specific component that deals with them and sends them off. If an exception occurs, the reminder will be queued again at a set interval and retried; this will continue, with the interval increasing, until it reaches a set threshold, at which point the reminder is discarded and logged as a failure. To add a final layer of complexity to the service, I hope to specify in a configuration file the concrete implementation of each type (this means I can, say, change the SMS service due to cost or whatever), which is loaded dynamically at service start-up. Any reminders of an unknown or unavailable type will of course automatically fail and be logged as before.
Once a reminder has been successfully sent, the service simply discards it. However, the SMS gateway I'm planning to use requires me to call its API to find out whether the message was successfully delivered or not, which means an additional timer is required at a set interval to check for this. It would also be nice to be able to add additional reminder-type services that conform to a unified interface at service start-up, without the need to change the service's code.
Finally, I don't know whether this should be posted as a separate question or not, but I wondered: would it be possible to, say, build a console application that could be started/stopped at any time and, when running, can see what the Windows service is currently doing?
This is my first ever question on Stack Overflow, even though I've been using the community for a while, so I apologise if I've done something incorrectly.
Thanks in advance,
Wayne
For the second part of your question, I have been thinking about this, and here is a class I put together that helps to create a service which can be run both as a console application and as a Windows service. This is fresh off the press, so there might be some issues to resolve and some refactoring required, especially around the reflection code.
NB: You should set the service project's output type to Console Application; it will still work fine as a normal service.
using System;
using System.Collections.Generic;
using System.Reflection;
using System.ServiceProcess;
using System.Threading;
namespace DotNetWarrior.ServiceProcess
{
public class ServiceManager
{
private List<ServiceBase> _services = new List<ServiceBase>();
public void RegisterService(ServiceBase service)
{
if (service == null) throw new ArgumentNullException("service");
_services.Add(service);
}
public void Start(string[] args)
{
if (Environment.UserInteractive)
{
foreach (ServiceBase service in _services)
{
Start(service, args);
}
Console.CancelKeyPress += new ConsoleCancelEventHandler(Console_CancelKeyPress);
Thread.Sleep(Timeout.Infinite);
}
else
{
ServiceBase.Run(_services.ToArray());
}
}
public void Stop()
{
foreach (ServiceBase service in _services)
{
Stop(service);
}
}
private void Console_CancelKeyPress(object sender, ConsoleCancelEventArgs e)
{
Stop();
Environment.Exit(0);
}
private void Start(ServiceBase service, string[] args)
{
try
{
Type serviceType = typeof(ServiceBase);
MethodInfo onStartMethod = serviceType.GetMethod(
"OnStart",
BindingFlags.NonPublic | BindingFlags.Instance,
null,
new Type[] { typeof(string[]) },
null);
if (onStartMethod == null)
{
throw new Exception("Could not locate OnStart");
}
Console.WriteLine("Starting Service: {0}", service.ServiceName);
onStartMethod.Invoke(service, new object[] { args });
Console.WriteLine("Started Service: {0}", service.ServiceName);
}
catch (Exception ex)
{
Console.WriteLine("Start Service Failed: {0} - {1}", service.ServiceName, ex.Message);
}
}
private void Stop(ServiceBase service)
{
try
{
Type serviceType = typeof(ServiceBase);
MethodInfo onStopMethod = serviceType.GetMethod(
"OnStop",
BindingFlags.NonPublic | BindingFlags.Instance);
if (onStopMethod == null)
{
throw new Exception("Could not locate OnStart");
}
Console.WriteLine("Stoping Service: {0}", service.ServiceName);
onStopMethod.Invoke(service, null);
Console.WriteLine("Stopped Service: {0}", service.ServiceName);
}
catch (Exception ex)
{
Console.WriteLine("Stop Service Failed: {0} - {1}", service.ServiceName, ex.Message);
}
}
}
}
To use this, you can rip the standard code out of the Main entry point of the service and replace it with the following.
static void Main(string[] args)
{
    ServiceManager services = new ServiceManager();
    services.RegisterService(new Service1());
    services.Start(args);
}
The services.Start() method will detect that the service is being run as an interactive application and manually invoke the OnStart method of all the registered services; once started, the main thread goes to sleep. To stop the services, just press Ctrl+C, which will result in the services being stopped by calling the OnStop method of each service.
Of course, if the application is run as a service by the SCM then everything works as a normal service. The only caveat is that the service should not need to run with 'Allow service to interact with desktop', since this will make the service run interactively even though it is run as a service. This can be worked around if required, but hey, I only just wrote the code.
Monitoring and Starting/Stopping a Service
From the command line you can use the NET.EXE to Start/Stop a service
Start a service
net start <service name>
Stop a service
net stop <service name>
For managing a service from .NET you can use System.ServiceProcess.ServiceController
// Stop a service
System.ServiceProcess.ServiceController sc = new
System.ServiceProcess.ServiceController("<service name>");
sc.Stop();
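A monitoring console can query the current state the same way (a sketch; "<service name>" is a placeholder as above):
// Query the status of a service
var sc = new System.ServiceProcess.ServiceController("<service name>");
Console.WriteLine("Current status: {0}", sc.Status); // Running, Stopped, Paused, ...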
For general communication with the service other than what is provided through ServiceController I would suggest that you host a WCF service as part of your service, which you can then use to communicate with the service to query internal details specific to your service.
Handling the Scheduling
To be honest, I was hesitant to answer this aspect of the question since there are so many approaches, each with their pros and cons. So I will just give some high-level options for you to consider. You have probably thought this through already, but here are a few things off the top of my head.
If you are using SQL Server to store the notifications.
Have an SP that you can call to retrieve the reminders that are due, then process the result to raise the reminders appropriately (a sketch of such a polling dispatcher follows at the end of this answer).
With this SP you have some options
Call the SP periodically from your service and process the reminders
Have a SQL job that periodically runs the SP and adds a reminder to a Service Broker queue. Your service can then subscribe to the queue and process reminders as they appear on it. The advantage of this approach is that you can scale out with multiple servers processing the reminder notifications without any special coding; just add another server that runs your service, and the messages will automatically be distributed between the servers.
If you are not using SQL Server to store the reminders
You can still use a similar approach to the one for SQL Server. Of course, the Windows service can query the data store using whatever is appropriate and process the reminders. Getting this to scale is a little harder, since you will need to ensure that multiple servers do not process the same reminder, etc., but it's not a train smash.
I think that covers the gist of it; everything else is some variation on the above. Ultimately your decision will depend on the target volumes, reliability requirements, etc.
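To make the polling option more concrete, here is a rough sketch of the dispatch loop described above; every name (Reminder, IReminderSender, the delegate wrapping the SP) is hypothetical, and the retry/back-off logic from the question is left out for brevity:
using System;
using System.Collections.Generic;

public class Reminder
{
    public string Type { get; set; }      // e.g. "Email" or "SMS"
    public string Recipient { get; set; }
    public string Message { get; set; }
}

public interface IReminderSender
{
    void Send(Reminder reminder);
}

public class ReminderDispatcher
{
    private readonly IDictionary<string, IReminderSender> _senders;    // loaded from configuration at start-up
    private readonly Func<IEnumerable<Reminder>> _getDueReminders;     // e.g. wraps the stored procedure call

    public ReminderDispatcher(IDictionary<string, IReminderSender> senders,
                              Func<IEnumerable<Reminder>> getDueReminders)
    {
        _senders = senders;
        _getDueReminders = getDueReminders;
    }

    // called periodically by the service's timer
    public void PollOnce()
    {
        foreach (var reminder in _getDueReminders())
        {
            IReminderSender sender;
            if (_senders.TryGetValue(reminder.Type, out sender))
                sender.Send(reminder);                                              // hand off to the type-specific component
            else
                Console.Error.WriteLine("Unknown reminder type: " + reminder.Type); // unknown types fail and are logged
        }
    }
}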
