We have developed a C# web service in ServiceStack. Whenever we get a request to check the availability of some data, we need to check the database and return the result. If the data is not there, we need to wait until it arrives and then return the value. If no data arrives within a certain time period, the request should time out.
We are using SQL Server for our application.
Can anybody tell us how to implement long polling in ServiceStack? The request has to wait on the server side and then return the output.
Regards
Priya
There is a discussion on the ServiceStack Google Group regarding ways to implement long polling in ServiceStack.
Basically, you implement a service that loops and waits for server-side data to become available, and only returns either when data is available or after a timeout (say 30s).
The client, on the other hand, continuously re-issues requests to the service; each request either returns data or times out as well.
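A minimal sketch of what such a looping service might look like, assuming hypothetical request/response DTOs and a placeholder data-access helper (adjust the names, the namespace and the lookup to your own schema and ServiceStack version):
using System;
using System.Threading;
using ServiceStack; // older versions may use ServiceStack.ServiceInterface

// Hypothetical DTOs for the availability check.
public class CheckData
{
    public int RecordId { get; set; }
}

public class CheckDataResponse
{
    public bool Found { get; set; }
    public string Value { get; set; }
}

public class CheckDataService : Service
{
    public object Any(CheckData request)
    {
        var timeout = TimeSpan.FromSeconds(30); // give up after 30 seconds
        var started = DateTime.UtcNow;

        while (DateTime.UtcNow - started < timeout)
        {
            // Placeholder for your SQL Server lookup (OrmLite, ADO.NET, etc.)
            string value = DataRepository.TryGetValue(request.RecordId);
            if (value != null)
                return new CheckDataResponse { Found = true, Value = value };

            Thread.Sleep(1000); // back off briefly before checking the database again
        }

        return new CheckDataResponse { Found = false }; // timed out, no data yet
    }
}

// Hypothetical data-access helper standing in for the real database query.
public static class DataRepository
{
    public static string TryGetValue(int recordId)
    {
        // query the database; return null if the row is not there yet
        return null;
    }
}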
I have a website where I need to take a bit of data from the user, make an ajax call to a .net webservice, and then the webservice does some work for about 5-10 minutes.
I naturally don't want the user to have to sit there that whole time, so I have made it an asynchronous ajax call to the webservice, and after the call has been sent, I redirect the user to a "you are done!" page.
What I want to happen is for the webservice to keep running to finish--and not abort--after it receives the information from the user.
From my testing, this is more or less what happens, but now I'm finding that this might be limited by time? I.e. if the webservice runs past a certain amount of time, it will abort if the user isn't still connected.
I might be off here in this assessment, but this is what I THINK is going on from my testing.
So my question is: with .NET web services, is this indeed what happens? Does the call get aborted after some time if the user isn't still on the other end? Is there any way to disable this abort?
Thanks in advance!
When you invoke a web service, it will always finish its work, even if the user leaves the page that invoked it.
Of course, web services have their own configuration, and one of the settings is a timeout.
If you're creating a WCF service (SOAP service), you can set it in its contract (by changing the binding properties). If you're creating a service with Web API or MVC (a REST/HTTP service), you can either add it to the config file or set it programmatically in the controller as follows:
HttpContext.Server.ScriptTimeout = 3600; //Number of seconds
That timeout can be a reason for the web service to interrupt its work, but it is not related to what happens on the client side.
Have a nice day,
Alberto
Whilst I agree that the answer here is technically correct, I just wanted to post a more robust alternative approach that avoids some of the pitfalls possible with your current approach, such as:
Web Server being bounced during the long-running processing of request
Web Server App pool being recycled during processing
Web server running out of threads due to too many long-running requests and not being able to process any more requests
I would recommend you take a thoroughly asynchronous approach and use Message Queues (MSMQ, for example) with a trigger on the queue that will execute the work.
The process would be:
Your page makes Ajax call to the Webservice
Webservice writes a message into the Queue and returns right away. The message contains details of what work needs to be carried out.
User continues on your site as usual, or goes home, etc.
A trigger on the queue watches for messages; when a message arrives in the queue, it activates a process which:
Reads the message
Performs the necessary work
Updates any back-end storage, etc, with the results of the work
This is much more robust because it totally decouples the web service from any long-running work, and it means that if the user makes a request and the web server goes down a moment later (for whatever reason), the work will still be queued up when the server comes back online, etc.
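As a rough illustration of the "write a message and return right away" step, the enqueue side could look something like the sketch below using System.Messaging; the queue path and the WorkItem type are made-up examples, not part of any existing API:
using System.Messaging; // requires a reference to the System.Messaging assembly

// Example payload describing the work to be carried out later.
public class WorkItem
{
    public int JobId { get; set; }
    public string Payload { get; set; }
}

public class WorkQueueWriter
{
    private const string QueuePath = @".\private$\LongRunningWork"; // example local queue

    public void Enqueue(WorkItem item)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkItem) });
            // Send returns immediately; the trigger/listener picks the message up later.
            queue.Send(item, "Job " + item.JobId);
        }
    }
}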
You can read more about it here (MSMQ is the MS Message Queue tech; there are many others!)
Just my 2c
I've had a fairly good search on Google and nothing has popped up to answer my question. As I know very little about web services (I've only started using them, not building them, in the last couple of months), I was wondering whether it's OK to call a particular web service as frequently as I wish (within reason), or whether I should build up the requests and send them in one go.
To give you an example, my app is designed to make job updates, and certain types of updates will call the web service. It seems like my options are that I could build a datatable in my app of the updates that require the web service, pass the whole datatable to the web service, and then write a method in the web service to process the datatable's updates. Alternatively, I could iterate through my entire table of updates (which includes updates other than those requiring the web service) and call the web service as and when an update requires it.
At the moment it seems like it would be simpler for me to pass each update rather than a datatable to the web service.
In terms of data being passed to the web service each update would contain a small amount of data (3 strings, max 120 characters in length). In terms of numbers of updates there would probably be no more than 200.
I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go.
Web services or not, any calls routed over the network would benefit from building up multiple requests, so that they could be processed in a single round-trip. In your case, building an object representing all the updates is going to be a clear winner, especially in setups with slower connections.
When you make a call over the network, these things need to happen as the client communicates with the server (again, web services or not):
The data associated with your call gets serialized on the client
Serialized data is sent to the server
Server deserializes the data
Server processes the data, producing a response
Server serializes the response
Server sends serialized response back to the client
The response is deserialized on the client
Steps 2 and 6 usually cause a delay due to network latency. For simple operations, latency often dominates the timing of the call.
The latency on the fastest networks used for high-frequency trading is measured in microseconds; on regular ones, it is in milliseconds. If you are sending 100 requests one by one on a network with 1ms lag (2ms per round trip), you are wasting 200ms just on network latency! That is one fifth of a second, a lot of time by the standards of today's CPUs. If you can eliminate it simply by restructuring your requests, that's a great reason to do it.
You should usually favor coarse-grained remote interfaces over fine-grained ones.
Consider adding 10ms of network latency to each call: what would the delay be for 100 updates?
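To make the contrast concrete, here is a sketch under made-up names (JobUpdate and IJobService are illustrative, not an existing API): with 10ms of latency per call, 100 or 200 separate calls spend on the order of a second or more just waiting on the network, whereas the batched version pays that cost once.
using System.Collections.Generic;
using System.Linq;

// Illustrative types only; JobUpdate and IJobService are not from an existing library.
public class JobUpdate
{
    public string JobId { get; set; }
    public string Field { get; set; }
    public string Value { get; set; }
}

public interface IJobService
{
    void ApplyUpdate(JobUpdate update);     // fine-grained: one network call per update
    void ApplyUpdates(JobUpdate[] updates); // coarse-grained: one call for the whole batch
}

public static class UpdateSender
{
    // Fine-grained: up to ~200 round trips, each paying the full network latency.
    public static void SendOneByOne(IJobService service, List<JobUpdate> updates)
    {
        foreach (var update in updates)
            service.ApplyUpdate(update);
    }

    // Coarse-grained: a single round trip carrying all the updates,
    // so the per-call latency is paid only once.
    public static void SendAsBatch(IJobService service, List<JobUpdate> updates)
    {
        service.ApplyUpdates(updates.ToArray());
    }
}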
I have a C# WCF web service acting as the server, and I have two clients: one is a Java client and the other is a C++ client. I want both clients to run at the same time. The scenario I have, and am unable to figure out, is this:
My Java client makes a call to the WCF web service, and the processing time might take around 10 minutes. Meanwhile, I want my C++ client to make a call to the web service and get the response back. Right now I am only able to send the call from the C++ client while the Java client's request is being processed; I am not getting the response back for the C++ client's request until the Java client's request has completed.
Can anyone please suggest how to make this work in parallel? Thanks in advance.
Any "normal" WCF service can most definitely handle more than one client request at any given time.
It all depends on your settings for InstanceContextMode:
PerSession means each session gets its own copy of the service class to handle a number of requests (from that same client)
PerCall means each request gets a fresh copy of the service class to handle the request (and it is disposed again after handling the call)
Single means you have a singleton: just one copy of your service class.
If you have a singleton, you need to ask yourself: why? PerCall is the generally recommended setting, and that should easily support quite a few requests at once.
See Understanding Instance Context Mode for a more thorough explanation.
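For illustration, a per-call service could be declared roughly like this (MyService and IMyService are placeholder names, not your actual contract):
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoLongRunningWork(string input);
}

// PerCall: every request gets its own instance of the service class,
// so the Java and C++ clients can be served at the same time.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class MyService : IMyService
{
    public string DoLongRunningWork(string input)
    {
        // ... processing that may take several minutes ...
        return "done";
    }
}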
Use the
[ServiceBehavior( ConcurrencyMode = ConcurrencyMode.Multiple )]
attribute on your service class. More on this can be found, for example, here:
http://www.codeproject.com/Articles/89858/WCF-Concurrency-Single-Multiple-and-Reentrant-and
This is peripheral to your question, but have you considered asynchronous callbacks from the method that takes 10+ minutes to return, with the processing running in a separate thread? It's not really good practice to have a service call wait 10 minutes synchronously, and this might solve your problem, although the service should allow for multiple callers at once anyway (our WCF service takes thousands of simultaneous requests).
When you call a WCF service, you have a choice of calling it either synchronously or asynchronously. A synchronous call waits for the response to come back to the caller in the same operation; in the caller it would look like "myResult = svc.DoSomething()". With an asynchronous call, the caller gives the service a function to call when it completes but does not wait for the response; the caller doesn't block while waiting and goes about its business.
Your callback will take DoSomethingCompletedEventArgs:
void myCallback(object sender, DoSomethingCompletedEventArgs e)
{
    var myResult = e.Result;
    // then use the result however you would have before
}
You register the callback function like an event handler:
svc.DoSomethingCompleted += myCallback;
and then call
svc.DoSomethingAsync();
Note that there is no return value in that statement; the service executes myCallback when it completes and passes it the result. (All WCF calls from Silverlight have to be asynchronous, but for other clients this restriction isn't there.)
Here's a codeproject article that demonstrates a slightly different way in detail.
http://www.codeproject.com/Articles/91528/How-to-Call-WCF-Services-Synchronously-and-Asynchr
This keeps the client from blocking during the 10+ minute process but doesn't really change the way the service itself functions.
Now, the second part of what I was mentioning was firing off the 10+ minute process in a separate thread from inside the service. The service methods themselves should be very thin and just call functionality in other libraries. Functions that are going to take a long time should ideally run on their own threads (say via a BackgroundWorker, for which you register a callback on the service side for when it completes) and have some sort of persistent system to keep track of their progress and of any results that need to go back to the client.
If it were me, I would register the request for the process in a database and then update that record when the process completes. The client would then periodically issue a simple poll to see if the process has completed and fetch any results. You might be able to set up duplex binding to get notified automatically when the process completes, but to be honest it's been a few years since I've done any duplex binding, so I don't remember exactly how it works.
These topics are really too big for me to go into depth here. I would suggest researching multithreaded operations with the BackgroundWorker.
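A rough sketch of that idea, where JobRepository and its methods are hypothetical stand-ins for whatever persistence you use to track progress:
using System;
using System.ComponentModel;

public class LongRunningJobStarter
{
    private readonly JobRepository _repository = new JobRepository();

    // Called from the thin WCF service method; returns immediately with a job id.
    public Guid StartJob(string input)
    {
        var jobId = Guid.NewGuid();
        _repository.MarkStarted(jobId);

        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => e.Result = DoTenMinuteWork(input);
        worker.RunWorkerCompleted += (s, e) =>
        {
            if (e.Error != null)
                _repository.MarkFailed(jobId, e.Error.Message);
            else
                _repository.MarkCompleted(jobId, (string)e.Result);
        };
        worker.RunWorkerAsync();

        return jobId; // the client polls another (thin) service method with this id
    }

    private static string DoTenMinuteWork(string input)
    {
        // ... the actual long-running processing ...
        return "result";
    }
}

// Hypothetical persistence layer that the client's polling call would also read from.
public class JobRepository
{
    public void MarkStarted(Guid jobId) { /* insert row with status 'InProgress' */ }
    public void MarkCompleted(Guid jobId, string result) { /* update row with result */ }
    public void MarkFailed(Guid jobId, string error) { /* update row with error */ }
}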
How would one use SignalR to implement notifications in a .NET 4.0 system that consists of an ASP.NET MVC 3 application (which uses forms authentication), a SQL Server 2008 database and an MSMQ WCF service (hosted in WAS) to process data? The runtime environment consists of IIS 7.5 running on Windows Server 2008 R2 Standard Edition.
I have only played with the samples and do not have extensive knowledge of SignalR.
Here is some background
The web application accepts data from the user and adds it to a table. It then calls a one-way operation (with the database key) on the WCF service to process the data (a task). The web application returns a page telling the user the data was submitted and that they will be notified when processing is done. The user can look at an "index" page and see which tasks are completed, failed or still in progress. They can continue to submit more tasks (each independent of previous data). They can close their browser and come back later.
The MSMQ-based WCF service reads the record from the database and processes the data. This may take anything from milliseconds to several minutes. When it's done processing the data, the record is updated with the corresponding status (completed or failed) and the results.
Most of the time, the WCF service is not performing any processing; however, when it is, users generally want to know as soon as possible when it's done. Users will still use other parts of the web application even if they don't have data to be processed by the WCF service.
This is what I have done
In the primary navigation bar, I have an indicator (similar to Facebook or Google+) that notifies the user when the status of their tasks has changed. When they click on it, they get a summary of what was done and can then view the results if they wish.
Using jQuery, I poll the server for changes. The controller action checks whether any processes have changed state (completed or failed) and returns them; otherwise it waits a couple of seconds and checks again without returning to the client. To avoid a timeout on the client, it returns after 30 seconds if there were no changes. The jQuery script then waits a while and tries again.
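For reference, the polling action described above looks roughly like the sketch below (NotificationsController, TaskStatusRepository and ChangedTask are hypothetical names for the existing code, not the actual implementation):
using System;
using System.Threading;
using System.Web.Mvc;

public class ChangedTask
{
    public int TaskId { get; set; }
    public string Status { get; set; }
}

// Hypothetical data access for task status changes.
public class TaskStatusRepository
{
    public ChangedTask[] GetChangedTasks(string userName)
    {
        // ... query SQL Server for tasks of this user that completed or failed ...
        return new ChangedTask[0];
    }
}

public class NotificationsController : Controller
{
    private readonly TaskStatusRepository _repository = new TaskStatusRepository();

    [HttpPost]
    public ActionResult Poll()
    {
        var deadline = DateTime.UtcNow.AddSeconds(30); // return before the client times out

        while (DateTime.UtcNow < deadline)
        {
            var changes = _repository.GetChangedTasks(User.Identity.Name);
            if (changes.Length > 0)
                return Json(changes);

            Thread.Sleep(2000); // wait a couple of seconds and check again
        }

        return Json(new object[0]); // nothing changed within 30 seconds
    }
}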
The problems
Performance degrades with every user that has a page open, without them needing to do anything in particular. We've noticed that the memory usage of Firefox 7+ and Safari increases over time.
Using SignalR
I'm hoping that switching to SignalR can reduce polling and thus reduce resource requirements, especially when nothing has changed task-wise in the database. What I'm struggling with is how the WCF service can notify clients that it has finished processing a task, given that the application uses forms-based authentication.
By asking this question, I hope someone will give me better insight into how they would redesign my notification scheme using SignalR, if at all.
If I understand correctly, you need a way of associating a task to a given user/client so that you can tell the client when their task has completed.
The SignalR API documentation tells me you can call JS methods on specific clients based on the client id (https://github.com/SignalR/SignalR/wiki/SignalR-Client). In theory you could do something like this:
Store the client id used by SignalR as part of the task metadata:
Queue the task as normal.
When the task is processed and de-queued:
Update your database with the status.
Using the client id stored as part of that task, use SignalR to send that client a notification:
You should be able to retrieve the connection that your client is using and send them a message:
string clientId = processedMessage.ClientId; // Stored when you originally queued it.
IConnection connection = Connection.GetConnection<ProcessNotificationsConnection>();
connection.Send(clientId, "Your data was processed");
This assumes you mapped this connection and the client used that connection to start the data processing request in the first place. Your "primary navigation bar" has the JS that started the connection to the ProcessNotificationsConnection endpoint you mapped earlier.
EDIT: From https://github.com/SignalR/SignalR/wiki/Hubs
public class MyHub : Hub
{
    public void Send(string data)
    {
        // Invoke a method on the calling client
        Caller.addMessage(data);

        // Similar to above, the more verbose way
        Clients[Context.ClientId].addMessage(data);

        // Invoke addMessage on all clients in group "foo"
        Clients["foo"].addMessage(data);
    }
}
Do calls to a web service from multiple clients execute in parallel or one by one (i.e. will the second call be handled only after the first call is complete)?
Thanks in advance.
Calls to web services are essentially calls to web pages on a server. The server typically maintains a thread pool from which it retrieves threads to serve incoming calls. So if a number of computers call the same web service method at the same time, they will be executed in parallel as long as there are threads available in the thread pool. If all threads are already busy, method calls will start to be put on hold (and the server may even report that it is too busy to handle the request). Five computers should not pose a problem, though.
A web service can only respond to a request. So what you'll need to do is have a function that all 5 computers call to submit the data you need from each machine. Then create a function that each computer calls to check whether the response is ready. Once the data from each computer has been collected, the web service responds with the correct data.
Web service responses must be initiated by the client, not the server.
For example,
SubmitData(data) returns bool -> each computer submits its data; the return value indicates whether the submission was successful. The server stores the submissions in a DB.
GetResponse() returns data or FALSE -> the server checks whether all 5 computers have responded. If not, return FALSE. If they have, process and return the data.
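A hedged sketch of that two-call pattern (the ASMX-style service, its method signatures and the DataStore helper are illustrative only; the "returns FALSE" case is approximated here with null for the string result):
using System.Web.Services;

public class AggregationService : WebService
{
    // Each of the 5 computers calls this to submit its piece of data.
    [WebMethod]
    public bool SubmitData(string machineName, string data)
    {
        return DataStore.Save(machineName, data); // store the submission in the database
    }

    // Clients poll this until all 5 submissions have arrived.
    [WebMethod]
    public string GetResponse()
    {
        if (!DataStore.AllFiveSubmitted())
            return null; // not ready yet; the caller tries again later

        return DataStore.BuildCombinedResult();
    }
}

// Hypothetical persistence helper backing the two methods.
public static class DataStore
{
    public static bool Save(string machineName, string data) { /* insert row */ return true; }
    public static bool AllFiveSubmitted() { /* count distinct machines */ return false; }
    public static string BuildCombinedResult() { /* combine the stored rows */ return ""; }
}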
Almost all web service frameworks support asynchronous calls.
If you are using C#, you might benefit from the following article:
http://www.codeguru.com/csharp/csharp/cs_webservices/security/article.php/c9179