I have an ASP.NET application which transforms HTTP GET requests into web service calls.
Now each request results in one web method call. I would like to collect or group the data from several requests so that fewer service calls are executed. Each request should check whether a processor object already exists and still accepts more data; if so, it passes its data to that object, otherwise it creates a new processor object. The maximum time a request would wait before being passed to the service would normally be in the range of seconds, or maybe a couple of minutes.
Is there a nice pattern for this?
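One common name for this is the collect-and-flush (batching) pattern. A minimal sketch, assuming a flush callback that stands in for the actual web service call, and hypothetical limits (`maxItems`, `maxWait`) you would tune:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical batching collector: requests add items, and the batch is
// flushed to the service (via the supplied callback) either when it is
// full or when the oldest item has waited longer than maxWait.
public class RequestBatcher<T>
{
    private readonly object _lock = new object();
    private readonly List<T> _pending = new List<T>();
    private readonly int _maxItems;
    private readonly TimeSpan _maxWait;
    private readonly Action<IReadOnlyList<T>> _flush; // e.g. the web service call
    private Timer _timer;

    public RequestBatcher(int maxItems, TimeSpan maxWait, Action<IReadOnlyList<T>> flush)
    {
        _maxItems = maxItems;
        _maxWait = maxWait;
        _flush = flush;
    }

    public void Add(T item)
    {
        List<T> toFlush = null;
        lock (_lock)
        {
            _pending.Add(item);
            if (_pending.Count == 1)
            {
                // First item of a new batch: start the maximum-wait timer.
                _timer = new Timer(_ => FlushNow(), null, _maxWait, Timeout.InfiniteTimeSpan);
            }
            if (_pending.Count >= _maxItems)
            {
                toFlush = TakeBatch();
            }
        }
        if (toFlush != null) _flush(toFlush);
    }

    private void FlushNow()
    {
        List<T> toFlush;
        lock (_lock) { toFlush = TakeBatch(); }
        if (toFlush != null && toFlush.Count > 0) _flush(toFlush);
    }

    private List<T> TakeBatch()
    {
        var batch = new List<T>(_pending);
        _pending.Clear();
        _timer?.Dispose();
        _timer = null;
        return batch;
    }
}
```

Each incoming GET handler would simply call `Add(...)`; the batcher takes care of creating the "processor object" (the pending list) and flushing it on size or time.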
Related
I have an application that sends concurrent web service requests. These requests are then processed by a .NET proxy (which has several methods, depending on the request). When a request is processed, a SoapLogger extension class logs the request and response calls.
The issue I have is that some of the web service requests never receive responses and get stuck; one possible reason is that their threads timed out. Is there a way to queue web service method requests and add a "sleep" parameter to a synchronous request? Or is a reverse proxy the last resort?
I am thinking of processing one synchronous web service request at a time and adding a sleep parameter to the interface, to give the asynchronous part time to progress while any new incoming requests are queued.
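One way to sketch the "one request at a time, with the rest queued" idea is a semaphore-based gate. This is an illustrative sketch, not ServiceStack- or proxy-specific code; `SerialGate` and its parameters are hypothetical names:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical gate: serializes calls to a slow operation so that only one
// executes at a time. Later callers wait inside WaitAsync (an implicit
// queue) instead of timing out mid-flight, and give up after `timeout`.
public class SerialGate
{
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    public async Task<T> RunAsync<T>(Func<Task<T>> operation, TimeSpan timeout)
    {
        if (!await _gate.WaitAsync(timeout))
            throw new TimeoutException("Request waited too long in the queue.");
        try
        {
            return await operation(); // only one caller is ever in here
        }
        finally
        {
            _gate.Release();
        }
    }
}
```

Wrapping each proxy call in `gate.RunAsync(...)` gives the queuing behavior without a literal `Thread.Sleep` in the request path.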
I've had a fairly good search on Google and nothing has popped up to answer my question. As I know very little about web services (I've only been using them, not building them, for the last couple of months), I was wondering whether it is OK to call a particular web service as frequently as I wish (within reason), or whether I should batch up requests and send them in one go.
To give you an example, my app makes job updates, and certain types of updates call the web service. It seems like my options are these: I could build a DataTable in my app of the updates that require the web service, pass the whole DataTable to the web service, and write a method in the web service to process the DataTable's updates. Alternatively, I could iterate through my entire table of updates (which also includes updates that do not require the web service) and call the web service whenever an update requires it.
At the moment it seems simpler for me to pass each update individually rather than a DataTable to the web service.
In terms of data passed to the web service, each update would contain a small amount of data (3 strings, each at most 120 characters long). In terms of numbers, there would probably be no more than 200 updates.
I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go.
Web services or not, any calls routed over the network benefit from batching multiple requests so that they can be processed in a single round trip. In your case, building one object representing all the updates is going to be a clear winner, especially in setups with slower connections.
When you make a call over the network, these things need to happen when a client communicates with a server (again, web services or not):
The data associated with your call gets serialized on the client
Serialized data is sent to the server
Server deserializes the data
Server processes the data, producing a response
Server serializes the response
Server sends serialized response back to the client
The response is deserialized on the client
Steps 2 and 6 usually cause a delay due to network latency. For simple operations, latency often dominates the timing of the call.
The latency on the fastest networks, used for high-frequency trading, is measured in microseconds; on regular networks it is measured in milliseconds. If you send 100 requests one by one on a network with 1 ms lag (2 ms per round trip), you waste 200 ms on network latency alone. That is one fifth of a second, a lot of time by the standards of today's CPUs. If you can eliminate it simply by restructuring your requests, that is a great reason to do so.
You should usually favor coarse-grained remote interfaces over fine-grained ones.
Consider adding a 10ms network latency to each call - what would be the delay for 100 updates?
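The arithmetic behind that question, assuming the 10 ms figure is the per-call network overhead:

```csharp
using System;

// Back-of-the-envelope latency arithmetic for the question above,
// assuming 10 ms of network overhead per call.
class LatencyMath
{
    static void Main()
    {
        const int updates = 100;
        const int latencyMsPerCall = 10;

        int perCallTotal = updates * latencyMsPerCall; // one call per update
        int batchedTotal = 1 * latencyMsPerCall;       // all updates in one call

        Console.WriteLine($"Per-call: {perCallTotal} ms of pure latency");  // 1000 ms
        Console.WriteLine($"Batched:  {batchedTotal} ms of pure latency");  // 10 ms
    }
}
```

A full second of dead time for 200-odd small updates, versus 10 ms for one coarse-grained call, is the difference the answer is pointing at.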
We have developed a C# web service in ServiceStack. Whenever we get a request to check the availability of some data, we need to check the database and return the result. If the data is not there, we need to wait until it arrives and then return it. If no data arrives within a certain time period, the request needs to time out.
We are using SQL Server for our application.
Can anybody tell us how to implement long polling in ServiceStack? Our request has to wait on the server side and then return the output.
There is a discussion on the ServiceStack Google Group regarding ways to implement long polling in ServiceStack.
Basically, you implement a service that just loops, waiting for server-side data to become available, and returns only when data is available or after a timeout (say 30 s).
The client, on the other hand, continuously loops requests to the service, waiting for each one to return or time out as well.
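The server-side loop described above can be sketched like this. The data-source check is a placeholder delegate standing in for the actual SQL Server query; all names are illustrative, not ServiceStack APIs:

```csharp
using System;
using System.Threading;

// Sketch of a server-side long-polling loop: keep checking the data source
// until data appears or the timeout elapses, keeping the request open.
public static class LongPoll
{
    public static string WaitForData(Func<string> tryGetData,
                                     TimeSpan timeout,
                                     TimeSpan pollInterval)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            var data = tryGetData();       // e.g. a SELECT against SQL Server
            if (data != null) return data; // data arrived: respond immediately
            Thread.Sleep(pollInterval);    // otherwise keep the request waiting
        }
        return null;                       // timed out: client re-polls
    }
}
```

In a real service you would use an async wait or a notification mechanism rather than `Thread.Sleep`, so that waiting requests do not each pin a thread.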
I have a quick question about requests in WCF. Does WCF automatically queue requests to a service (the service being a singleton) when multiple users request the same operation? For example, say I have a function that takes a while to complete, and two users call it. Does WCF automatically queue the requests so that when the first request is finished, it starts processing the next?
~Just Wondering
The service behavior attribute on the contract defines how sessions, instances and concurrency are handled. See http://msdn.microsoft.com/en-us/library/ms731193.aspx for more details.
Basically, you can configure it to (1) handle one request at a time or (2) handle multiple requests at the same time.
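For a singleton service, the two configurations look like this as WCF attributes (`ISomeContract` is a placeholder for your own service contract):

```csharp
using System.ServiceModel;

// (1) Singleton that handles one request at a time: WCF queues the rest
// and dispatches them one by one as each call completes.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class QueuedService : ISomeContract { /* ... */ }

// (2) Singleton that handles multiple requests concurrently: the service
// code itself must then be thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ConcurrentService : ISomeContract { /* ... */ }
```

With `ConcurrencyMode.Single` (the default) you get exactly the queuing behavior the question asks about.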
Do calls to a web service from multiple clients execute in parallel, or one by one (i.e. will the second call be handled only after the first call is complete)?
thanks in advance.
Calls to web services are essentially calls to web pages on a server. The server typically maintains a thread pool from which it retrieves threads to serve incoming calls. So if a number of computers call the same web service method at the same time, the calls will be executed in parallel as long as there are threads available in the thread pool. If all threads are busy, method calls will be put on hold (and the server may even report that it is too busy to handle the request). Five computers should not pose a problem, though.
A web service can respond to a request. So, what you'll need to do is have a function that all 5 computers call to submit the data you need from each machine. Then, create a function that each computer calls to check if the response is ready. Once the data from each computer is collected, the web service would respond with the correct data.
Web service responses must be initiated by the client, not the server.
For example,
SubmitData(data) returns bool -> each computer submits its data; the return value indicates whether the submission succeeded. The server stores the submissions in a DB.
GetResponse() returns data or FALSE -> the server checks whether all 5 computers have responded. If not, it returns FALSE. If so, it processes and returns the data.
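The two methods above can be sketched as follows, with an in-memory store standing in for the database. `CollectingService`, `SubmitData`, and `GetResponse` are the hypothetical names from the answer, and `null` plays the role of FALSE:

```csharp
using System;
using System.Collections.Concurrent;

// Sketch of the submit-then-poll pattern: the aggregate result is returned
// only once all expected clients have reported in.
public class CollectingService
{
    private readonly ConcurrentDictionary<string, string> _submissions =
        new ConcurrentDictionary<string, string>();
    private readonly int _expectedClients;

    public CollectingService(int expectedClients)
    {
        _expectedClients = expectedClients;
    }

    // Each computer calls this once with its data.
    public bool SubmitData(string clientId, string data)
    {
        return _submissions.TryAdd(clientId, data);
    }

    // Each computer polls this; null means "not ready yet, try again".
    public string GetResponse()
    {
        if (_submissions.Count < _expectedClients) return null;
        return string.Join(";", _submissions.Values); // combined result
    }
}
```

In a real deployment the submissions would go to the database, as the answer says, so that the collected state survives across worker processes.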
Almost all web service frameworks support asynchronous calls.
If you are using C#, you might benefit from the following article:
http://www.codeguru.com/csharp/csharp/cs_webservices/security/article.php/c9179