I have a C# Windows Service installed on a customer's server that does the following tasks:
Listens to SQL Server Service Broker for any insert/update on 3 tables and then POSTs the data to an API method so the remote server gets the latest data (using SqlTableDependency).
Runs a polling method every 5 minutes to verify that these 3 tables hold the same data on the remote server (in case Service Broker is not working).
Starts a self-hosted Web API server (this doesn't work because the customer doesn't allow the server to be exposed to the Internet).
This last self-hosted task was implemented so that an application could ask the customer's server to perform some updates on a table.
I would like to know if there is a way to subscribe the Windows Service to a realtime broadcast engine/service such as Pusher or AWS SQS, etc. The idea is that I can trigger tasks in the remote customer's Windows Service from an outside application.
Is this a doable thing? If so, I could even get rid of the polling task in the Windows Service, because I could then get the service to push information to the API based on an event that I trigger from an external source.
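For reference, the table-change listener in the first task might look roughly like this with SqlTableDependency (the model, table name, connection string and API call are placeholders, and exact namespaces vary by package version):

```csharp
// Sketch of a SqlTableDependency listener; Customer, the table name and
// PostToApi are hypothetical. Requires the SqlTableDependency NuGet package.
using System;
using TableDependency.SqlClient;
using TableDependency.SqlClient.Base.EventArgs;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerListener
{
    private SqlTableDependency<Customer> _dependency;

    public void Start(string connectionString)
    {
        _dependency = new SqlTableDependency<Customer>(connectionString, "Customers");
        _dependency.OnChanged += (sender, e) =>
        {
            // Fires on insert/update/delete; push the changed row to the remote API
            PostToApi(e.Entity);
        };
        _dependency.Start();
    }

    public void Stop() => _dependency.Stop();

    private void PostToApi(Customer entity) { /* HTTP POST to the remote server */ }
}
```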
This might not be the best workaround, but it seems to be working pretty well.
What I did was implement an infinite loop in the Windows Service that makes a long-polling call to an AWS SQS queue, with the queue's receive message wait time parameter set to 20 seconds. This allowed me to reduce the number of empty responses and also cut the cost down to one SQS request every 20 seconds.
If a message arrives while the long poll is being handled, the long poll returns immediately with the message.
Because messages are not sent that frequently (say I receive 1 every 20 seconds), that means:
3 request messages every minute
180 per hour
4,320 per day
103,680 per month
And AWS pricing is $0.40 per 1 million requests, so that will be practically free.
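The loop described above might be sketched like this with the AWS SDK for .NET (the queue URL and command handling are placeholders):

```csharp
// Sketch of the SQS long-polling loop; assumes the AWSSDK.SQS NuGet package.
// The queue URL and HandleCommand are hypothetical.
using System.Threading;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public class SqsListener
{
    private readonly IAmazonSQS _sqs = new AmazonSQSClient();
    private const string QueueUrl =
        "https://sqs.us-east-1.amazonaws.com/123456789012/service-commands"; // hypothetical

    public async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Long poll: the request is held open for up to 20 seconds,
            // so an idle service makes only ~3 requests per minute.
            var response = await _sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = QueueUrl,
                WaitTimeSeconds = 20,
                MaxNumberOfMessages = 1
            }, token);

            foreach (var message in response.Messages)
            {
                HandleCommand(message.Body); // run the task requested from outside
                await _sqs.DeleteMessageAsync(QueueUrl, message.ReceiptHandle, token);
            }
        }
    }

    private void HandleCommand(string body) { /* e.g. push table data to the API */ }
}
```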
Related
We have an ASP.NET MVC app running behind a gateway policy that terminates any web request that runs over 5 minutes. One of the features is exporting some data, and it's been running just above 5 minutes. Would SignalR help? Would having a persistent connection between the client and server be enough for the gateway to consider the request active and not terminate it?
We faced the same issue in our project, where the API has to process some data and the UI can't wait that long for the response.
We use SignalR to notify the requesting UI/client when the data has been successfully processed.
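A minimal sketch of that pattern, assuming ASP.NET SignalR 2.x (the hub, client method and parameter names are made up): the UI starts the export with a quick request that returns immediately, and the worker pushes a notification to that connection when it finishes.

```csharp
// Sketch: notify the original requester when a long-running export completes.
// ProgressHub, exportFinished and downloadUrl are hypothetical names.
using Microsoft.AspNet.SignalR;

public class ProgressHub : Hub
{
    // Clients connect here; the connection id identifies the requester.
}

public class ExportWorker
{
    // Called by the background job once processing finishes.
    public void NotifyDone(string connectionId, string downloadUrl)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
        // Invokes the client-side "exportFinished" handler on that one client only
        hub.Clients.Client(connectionId).exportFinished(downloadUrl);
    }
}
```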
We have a Web API JSON REST service written in C# for .NET 4.0 running in AWS. The service has a /log endpoint which receives logs and forwards them to logstash via TCP for storage.
The /log endpoint uses Task.Factory.StartNew to send the logs to logstash asynchronously and returns StatusCode.OK immediately. This is because we don't want the client to wait for the log to be sent to logstash.
All exceptions are observed and handled, also we don't care if logs are lost because the service is shutdown or recycled from time to time as they are not critical.
At first the flow of logs was very low, probably 20 or 30 per hour during peak time. However, we have recently started sending larger amounts of logs through, sometimes well over a thousand per hour. So the question is: by using Task.Factory.StartNew, are we generating a large number of threads, i.e. 1 per request to the /log endpoint, or is this managed somehow by a thread pool?
We use NLog for internal logging, but are wondering if we can pass the logs from the /log endpoint to NLog to take advantage of its async batching features and have it send the logs to logstash. We already have a custom target that sends logs to a TCP port.
Thanks in advance.
A Task in .NET does not equal one thread. It's safe to create as many as you need (within reason); the runtime decides how many thread pool threads are created, and it will not run more tasks concurrently than the hardware can handle.
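A quick way to see this for yourself: start many tasks and count how many distinct thread pool threads actually ran them. The count of distinct threads is typically far below the task count.

```csharp
// Illustration that many tasks do not mean many threads:
// the thread pool reuses a small set of worker threads.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var threadIds = new ConcurrentDictionary<int, bool>();
        var tasks = new Task[1000];

        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
                threadIds.TryAdd(Thread.CurrentThread.ManagedThreadId, true));
        }
        Task.WaitAll(tasks);

        // Typically a handful of threads, not 1000
        Console.WriteLine($"1000 tasks ran on {threadIds.Count} thread(s)");
    }
}
```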
OK, I have been given the task of creating a ticket system in the factory I'm working in, where operators generate a ticket which is stored in a database and also sent to an engineer to act upon. The database is a MS SQL database running on a virtual server.
Each user will have a client desktop app which is developed in WPF.
I'm stuck on how to implement the alerting for the engineers. When a ticket is generated it will be stored in the database, and then a message or notification needs to be sent to the engineers. So far I have come up with the following options:
Web service - Clients connect to this, which is used to communicate with the database and relay ticket messages to the engineers.
Windows Service - Same as above, but as a Windows service. Is there a benefit?
Database polling - The client software for the engineers continuously polls the database, say every 2 minutes, and checks for newly generated tickets. If any are found, the user is notified.
Database polling is probably the easiest to implement, but it's not really live due to the delay. I mentioned the 2 minute interval because there will be around 30 people connected at once, about 12 of whom would be engineers, and I didn't know if 12 client programs continuously polling would affect the server's performance.
Any advice would be great or if anyone knows of a better way.
If you don't want to use polling (which would not be a problem with 30 users), the way to go would be a WCF service with a callback contract, which allows the server to send events back to the client.
example: WCF Callbacks
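A rough sketch of what such a callback contract could look like for the ticket scenario (all names are illustrative, and a real service would also need to handle faulted channels and a duplex binding such as NetTcpBinding):

```csharp
// Sketch of a WCF duplex (callback) contract for ticket alerts.
// ITicketCallback, ITicketService and Broadcast are hypothetical names.
using System.Collections.Generic;
using System.ServiceModel;

public interface ITicketCallback
{
    [OperationContract(IsOneWay = true)]
    void OnTicketCreated(int ticketId, string description);
}

[ServiceContract(CallbackContract = typeof(ITicketCallback))]
public interface ITicketService
{
    [OperationContract]
    void Subscribe(); // engineer clients call this once to register for alerts
}

public class TicketService : ITicketService
{
    private static readonly List<ITicketCallback> Engineers = new List<ITicketCallback>();

    public void Subscribe()
    {
        // Capture the caller's channel so the server can call back later
        Engineers.Add(OperationContext.Current.GetCallbackChannel<ITicketCallback>());
    }

    // Called by whatever code stores the new ticket in the database
    public static void Broadcast(int ticketId, string description)
    {
        foreach (var engineer in Engineers)
            engineer.OnTicketCreated(ticketId, description);
    }
}
```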
How would one use SignalR to implement notifications in an .NET 4.0 system that consists of an ASP.NET MVC 3 application (which uses forms authentication), SQL Server 2008 database and an MSMQ WCF service (hosted in WAS) to process data? The runtime environment consists of IIS 7.5 running on Windows Server 2008 R2 Standard Edition.
I have only played with the samples and do not have extensive knowledge of SignalR.
Here is some background
The web application accepts data from the user and adds it to a table. It then calls a one-way operation (with the database key) of the WCF service to process the data (a task). The web application returns to a page telling the user the data was submitted and that they will be notified when processing is done. The user can look at an "index" page and see which tasks are completed, failed or in progress. They can continue to submit more tasks (each independent of previous data). They can close their browser and come back later.
The MSMQ-based WCF service reads the record from the database and processes the data. This may take anything from milliseconds to several minutes. When it's done processing the data, the record is updated with the corresponding status (success or failure) and the results.
Most of the time the WCF service is not performing any processing; however, when it does, users generally want to know as soon as possible when it's done. Users will still use other parts of the web application even when they have no data being processed by the WCF service.
This is what I have done
In the primary navigation bar, I have an indicator (similar to Facebook or Google+) that notifies the user when the status of their tasks changes. When they click on it, they get a summary of what was done and can then view the results if they wish.
Using jQuery, I poll the server for changes. The controller action checks whether any processes were modified (completed or failed) and returns them; otherwise it waits a couple of seconds and checks again without returning to the client. To avoid a client-side timeout, it returns after 30 seconds if there were no changes. The jQuery script waits a while and then tries again.
The problems
Performance degrades with every user that has a page open, even if they are not doing anything in particular. We've also noticed that the memory usage of Firefox 7+ and Safari increases over time.
Using SignalR
I'm hoping that switching to SignalR can reduce polling and thus reduce resource requirements, especially when nothing task-related has changed in the database. I'm having trouble getting the WCF service to notify clients that it's done processing a task, given that the application uses forms-based authentication.
By asking this question, I hope someone will give me better insight into how they would redesign my notification scheme using SignalR, if at all.
If I understand correctly, you need a way of associating a task to a given user/client so that you can tell the client when their task has completed.
SignalR API documentation tells me you can call JS methods for specific clients based on the client id (https://github.com/SignalR/SignalR/wiki/SignalR-Client). In theory you could do something like:
Store the client id used by SignalR as part of the task metadata:
Queue the task as normal.
When the task is processed and de-queued:
Update your database with the status.
Using the client id stored as part of that task, use SignalR to send that client a notification:
You should be able to retrieve the connection that your client is using and send them a message:
string clientId = processedMessage.ClientId; // Stored when you originally queued it.
IConnection connection = Connection.GetConnection<ProcessNotificationsConnection>();
connection.Send(clientId, "Your data was processed");
This assumes you mapped this connection and the client used that connection to start the data processing request in the first place. Your "primary navigation bar" has the JS that started the connection to the ProcessNotificationsConnection endpoint you mapped earlier.
EDIT: From https://github.com/SignalR/SignalR/wiki/Hubs
public class MyHub : Hub
{
    public void Send(string data)
    {
        // Invoke a method on the calling client
        Caller.addMessage(data);

        // Similar to above, the more verbose way
        Clients[Context.ClientId].addMessage(data);

        // Invoke addMessage on all clients in group foo
        Clients["foo"].addMessage(data);
    }
}
From an ASP.NET web site I need to send out many SMS messages at once, and also poll a POP3 account for incoming mail and then SMS that out to many recipients, one at a time.
The way I'm thinking of doing this is with a Windows service that connects to my SQL back-end every 10-20 seconds or so to see if there are SMS messages to be sent out. If so, it gets all the messages into a list, deletes them from the table, and then proceeds to send them.
Same way with the pop account.
Any ideas on how best to provide this service without blocking the ASP.NET web page when it is kicked off (e.g. when messages are added to SQL Server)?
Platform is Windows Server 2003 R2, SQL Server 2008 Standard, ASP.NET 3.5 SP1.
Thanks for your advice.
We have implemented similar scenarios using SQL Server Service Broker's queueing mechanism. The idea is that every inserted SMS record is caught by a trigger, which inserts a message containing the SmsID into the Service Broker queue.
You then need a stored procedure which receives messages from the queue. If there are no messages, the procedure blocks until the next entry is inserted. That's OK, since waiting on the queue does not take up resources.
Next you'll need a Windows service that continuously (in a loop) calls the stored procedure, assembles the SMS and sends it.
The advantage of a Service Broker queue over a flag in a table is thread safety. This way you could have as many instances of your service as you want without having to worry too much about concurrency issues.
You can find a nice Service Broker tutorial here: http://www.developer.com/db/article.php/3640771
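The Windows service side of this could be sketched as follows, assuming a stored procedure (here called dbo.ReceiveSmsMessage, a made-up name) that wraps a blocking WAITFOR (RECEIVE ...) and returns the next SmsID; the connection string and SMS-sending code are also placeholders:

```csharp
// Sketch of the service loop that drains the Service Broker queue.
// ConnStr, dbo.ReceiveSmsMessage and SendSms are hypothetical.
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public class SmsQueueWorker
{
    private const string ConnStr =
        "Server=.;Database=Sms;Integrated Security=true"; // hypothetical

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand("dbo.ReceiveSmsMessage", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 0; // the proc blocks until a message arrives
                conn.Open();

                var smsId = cmd.ExecuteScalar();
                if (smsId != null)
                    SendSms((int)smsId); // look up the record, assemble, send
            }
        }
    }

    private void SendSms(int smsId) { /* call the SMS gateway */ }
}
```

Because RECEIVE removes the message atomically, several worker instances can run this loop at once without sending the same SMS twice.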
Instead of using SQL Server for the queueing, you could use MSMQ (Microsoft Message Queuing).
MSMQ is quite easy to set up, and once it is up and running it scales better than SQL Server for this purpose.
So what you could do is set up a new queue in MSMQ that receives the messages you want to send. The message would normally be some sort of Message object describing the content, the sender and the recipient.
Then you would either set up a service that polls the queue at a regular interval, or configure MSMQ activation to invoke a class of your choice each time a new message arrives on the queue.
If you need a log of the messages, you could have the service/sender object write to a log table in SQL Server when each message is sent.
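A minimal sketch of the MSMQ approach using the classic System.Messaging API (the queue path and SmsMessage type are assumptions; the receive side blocks until a message arrives):

```csharp
// Sketch of sending/receiving SMS jobs via a local private MSMQ queue.
// The queue path and SmsMessage shape are hypothetical.
using System;
using System.Messaging;

[Serializable]
public class SmsMessage
{
    public string Sender { get; set; }
    public string Recipient { get; set; }
    public string Body { get; set; }
}

public static class SmsQueue
{
    private const string Path = @".\private$\sms-outbound"; // hypothetical queue

    // Called from the ASP.NET page: returns immediately, no blocking
    public static void Enqueue(SmsMessage msg)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);
        using (var queue = new MessageQueue(Path))
            queue.Send(msg); // default XmlMessageFormatter serializes the object
    }

    // Called from the Windows service loop: blocks until a message arrives
    public static SmsMessage Dequeue()
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(SmsMessage) });
            return (SmsMessage)queue.Receive().Body;
        }
    }
}
```

The web page only ever calls Enqueue, so it never waits on the POP3 polling or the SMS gateway.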