I have a WCF service that is being called from a web application (Web App) that is used by multiple users.
It takes some time for the WCF service to process a single request (e.g. 30 seconds).
I have noticed that if I open the Web App in two tabs or in two browser windows and make the Web App send two requests to the WCF service, they are processed sequentially: WCF starts processing the second request only after the first request has been processed to the end.
Is there a way to make the WCF service process incoming requests in parallel?
Just because technology allows you to do something doesn't mean it's a good idea.
That said, here are the instructions for the dangerous tool you're asking for help developing:
https://learn.microsoft.com/en-us/dotnet/framework/wcf/how-to-implement-an-asynchronous-service-operation
Every programmer should understand the complexity that an accessible, distributed process makes possible. In short, they centralize user requirements. Harvard Business Review argues that DRY is a bad practice.
http://www.powersemantics.com/p.html
Proceed at your own discretion. I've given you an alternative.
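For reference, a minimal sketch of the asynchronous operation pattern described in the linked Microsoft article might look like the following; the contract and operation names are purely illustrative, not taken from the question.

using System;
using System.ServiceModel;

// Sketch of the Begin/End asynchronous operation pattern from the linked article.
[ServiceContract]
public interface ISampleService
{
    // AsyncPattern = true tells WCF to treat BeginDoWork/EndDoWork as one logical operation.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginDoWork(string input, AsyncCallback callback, object state);

    string EndDoWork(IAsyncResult result);
}

Note that whether two requests actually run in parallel is also governed by the service's InstanceContextMode and ConcurrencyMode settings.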
I have a website where I need to take a bit of data from the user, make an AJAX call to a .NET web service, and then the web service does some work for about 5-10 minutes.
I naturally don't want the user to have to sit there that whole time, so I have made it an asynchronous AJAX call to the web service, and after the call has been sent, I redirect the user to a "you are done!" page.
What I want to happen is for the web service to keep running to finish, and not abort, after it receives the information from the user.
From my testing, this is more or less what happens, but now I'm finding that this might be limited by time, i.e. if the web service runs past a certain amount of time, it will abort if the user isn't still connected.
I might be off here in this assessment, but this is what I THINK is going on from my testing.
So my question is: with .NET web services, is this indeed what happens? Does it get aborted after some time if the user isn't still on the other end? Is there any way to disable this abort?
Thanks in advance!
When you invoke a web service, it will always finish its work, even if the user leaves the page that invoked it.
Of course, web services have their own configuration settings, and one of them sets the execution timeout.
If you're creating a WCF service (SOAP service), you can set it on the binding (by changing the binding's timeout properties); if you're creating a service with Web API or MVC (REST/HTTP service), you can either add it to the config file or set it programmatically in the controller, as follows.
HttpContext.Server.ScriptTimeout = 3600; //Number of seconds
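For the WCF (SOAP) case mentioned above, the equivalent is to raise the binding's timeout properties. A minimal sketch, with purely illustrative values:

using System;
using System.ServiceModel;

// Allow long-running calls on this binding; 10 minutes is just an example value.
var binding = new BasicHttpBinding();
binding.SendTimeout = TimeSpan.FromMinutes(10);    // how long the sender waits for the operation to complete
binding.ReceiveTimeout = TimeSpan.FromMinutes(10); // how long the connection may stay idle before being dropped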
A timeout like this can cause the web service to interrupt its work, but it is not related to what happens on the client side.
Have a nice day,
Alberto
Whilst I agree that the answer here is technically correct, I just wanted to post a more robust alternative approach that avoids some of the pitfalls possible with your current approach, such as:
The web server being bounced during the long-running processing of a request
The web server's app pool being recycled during processing
The web server running out of threads due to too many long-running requests, and not being able to process any more requests
I would recommend you take a thoroughly asynchronous approach and use message queues (MSMQ, for example) with a trigger on the queue that will execute the work.
The process would be:
Your page makes an Ajax call to the web service
Webservice writes a message into the Queue and returns right away. The message contains details of what work needs to be carried out.
User continues on your site as usual, or goes home, etc.
A trigger on the queue is watching for messages, and when a message arrives in the queue, it activates a process which:
Reads the message
Performs the necessary work
Updates any back-end storage, etc, with the results of the work
This is much more robust because it totally decouples the web service from any long-running work. It also means that if the user makes a request and the web server goes down a moment later (for whatever reason), the work will still be queued up when the server comes back online.
You can read more about it here (MSMQ is the MS Message Queue tech; there are many others!)
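As a rough illustration of the "write a message into the queue and return right away" step, a sketch using the classic System.Messaging API might look like this; WorkRequest, the queue path and the field names are purely illustrative:

using System.Messaging;

// Hypothetical DTO describing the work to be carried out later by the trigger.
public class WorkRequest
{
    public string UserId { get; set; }
    public string Payload { get; set; }
}

public static void EnqueueWork(string userId, string payload)
{
    const string queuePath = @".\private$\LongRunningWork"; // illustrative private queue name

    if (!MessageQueue.Exists(queuePath))
        MessageQueue.Create(queuePath);

    // Enqueue the request and return immediately; the queue trigger does the real work later.
    using (var queue = new MessageQueue(queuePath))
    {
        queue.Send(new WorkRequest { UserId = userId, Payload = payload }, "work-request");
    }
}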
Just my 2c
If this has been asked before, my apologies; and this is .NET 2.0 ASMX web services, so again, my apologies =D
A .NET application that only exposes web services. Roughly 10 million messages per day, load balanced between multiple IIS servers. Each incoming message is XML, and each outgoing message is XML (XMLElement). (We have beefy servers that run on steroids.)
I have an SLA that all messages are processed in under X seconds.
One function in the process, Linking Methods, is now taking 10-20 seconds. It is required for every transaction, but it is not critical that it happen before the web service returns the results. Because of this I made a suggestion to throw it onto another thread, but now I realize that my words, and the eager developers behind them, might not have fully thought this through.
The example below shows the current flow on the left, and what is being attempted on the right.
Effectively what I'm looking for is to have a web service spawn a long-running (10-20 second) thread that will keep executing even after the web service call has completed.
This is what, effectively, is going on:
// Spawn a separate thread so the linking work continues after the web method returns.
Thread linkThread = new Thread(delegate()
{
    Linkmembers(GetContext(), ID1, ID2, SomeOtherThing, XMLOrSomething);
});
linkThread.Start();
Using this we've reduced the time from 19 seconds to 2.1 seconds on our dev boxes, which is quite substantial.
I am worried that, with the amount of traffic we get, and if a vendor/outside party decides to throttle us, IIS might decide to recycle/kill those threads before they're done processing. I agree our solution might not be the "best"; however, we don't have the time to build a queue system or another Windows service to handle this.
Is there a better way to do this? Any caveats that should be considered?
Thanks.
Apart from the issues you've described, I cannot think of any. That being said, there are ways to fix the problem that do not involve building your own solution from scratch.
Use MSMQ with WCF: create a WCF service with an MSMQ endpoint that is IIS-hosted (no need to use a Windows service as long as WAS is enabled) and make calls to the service from within your ASMX service. You reap all the benefits of reliable queueing without having to build your own.
Plus, if your MSMQ service fails or throws an exception, it will reprocess automatically. If you use DTC and are hitting a database, you can even have the MSMQ transaction flow to the DB.
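To give a rough idea, the queued contract could look like the sketch below; the names are illustrative, the endpoint itself would use netMsmqBinding, and queued operations have to be one-way because no client is waiting for a reply:

using System.ServiceModel;

// Contract for an MSMQ-backed (netMsmqBinding) endpoint; all names are illustrative.
[ServiceContract]
public interface ILinkingService
{
    // One-way is required for queued endpoints: the caller fires and forgets.
    [OperationContract(IsOneWay = true)]
    void LinkMembers(string id1, string id2);
}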
We have a number of Windows services running in our system (built in C#). We use WCF to communicate with them and control them, since WCF offers very convenient communication with these processes.
Right now, in our Windows GUI for managing, monitoring and troubleshooting the services, we simply register callbacks and receive notifications when a message is available from the service. Obviously this application is stateful, and WCF provides the ability for the local delegate to be called when the maintained connection to the service signals that a message is available.
In our web application which users actually use, we'd like to use long-polling to have a status area on the web page (iframe, AJAX, whatever) which shows any issues which the services are reporting. We'd like to use a long-polling or other technique which minimizes actual polling on the network.
The problem we are running up against is that we need something to make the long-polling HTTP request against: something that is somehow always running in IIS, that can itself be WCF-connected to our services, and that can convert the event/delegate-based WCF response into a blocking-style long-poll response. It feels like a chicken-and-egg situation: some component in our system is always going to be in a loop, polling, and that's exactly what we are trying to avoid.
Does anyone have an example of doing this?
Well, if your services present with WCF, why not simply consume the WCF services with JavaScript? Then you remove your IIS servers from the equation completely. If a user wants to see what the services are doing, they can retrieve the information directly from the service.
Here's a blog post where someone shows how to do this: Call wcf service from Json
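To sketch what the service side of that might look like, assuming a webHttpBinding endpoint so the browser can call it over plain HTTP/JSON (the contract and operation names are illustrative):

using System.ServiceModel;
using System.ServiceModel.Web;

// A browser-callable status contract; pair it with a webHttpBinding endpoint.
[ServiceContract]
public interface IServiceStatus
{
    // GET requests return the current status as JSON, suitable for polling from JavaScript.
    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json)]
    string GetCurrentStatus();
}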
I have a quick, simple question about requests in WCF. Does WCF automatically queue requests to a service (the service being a singleton) when multiple users request the same process? I.e., let's say I have a function that takes a while to complete, and two users make a call to this function; does WCF automatically queue the requests so that when the first request is finished it then starts processing the next?
~Just Wondering
The ServiceBehavior attribute on the service class defines how sessions, instances and concurrency are handled. See http://msdn.microsoft.com/en-us/library/ms731193.aspx for more details.
Basically, you can configure it to (1) handle one request at a time or (2) handle multiple requests at the same time.
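For example, the singleton case from the question could be configured to accept concurrent calls roughly like this (the type names are illustrative):

using System.ServiceModel;

[ServiceContract]
public interface ILongRunningService
{
    [OperationContract]
    void DoLengthyWork();
}

// One shared service instance that is allowed to process requests in parallel.
// With ConcurrencyMode.Multiple the implementation must be thread-safe;
// with the default ConcurrencyMode.Single, WCF serializes calls to the singleton instead.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class LongRunningService : ILongRunningService
{
    public void DoLengthyWork()
    {
        // ... long-running work goes here ...
    }
}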
We are trying to write an internal WCF service between two servers.
One of the applications is a server application for our clients.
The clients send us files, and we then process and convert them.
This whole process takes some time, and meanwhile the client session is open; I don't think using async is possible here, is it? How can we make this approach faster?
Keep in mind that we have approximately 1,000 files an hour; each client also sends up to 200 files an hour.
G
You could send an address to be called back when the file processing is done, which will notify the consumer server. Or you could use a message queue on both ends.
This article (link) by Juval Lowy is all about one-way services, WCF callback methods, etc. It should show you how to set your services up to handle what you're looking for.
One-way services make the call asynchronous (fire and forget). Setting up a callback does what it sounds like: you can specify a service/method to be called back after a method executes.
Better yet, check out chapter 5 in Lowy's Programming WCF Services (link). It goes into MUCH greater detail than the article above.
I think the first link is enough to get started though.
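To give a rough idea of the one-way/callback arrangement described above, a sketch could look like this; all names are illustrative, and a duplex-capable binding such as netTcpBinding or wsDualHttpBinding is assumed:

using System.ServiceModel;

// The server-side contract: clients submit files and do not wait for the conversion to finish.
[ServiceContract(CallbackContract = typeof(IFileProcessingCallback))]
public interface IFileProcessingService
{
    [OperationContract(IsOneWay = true)]
    void SubmitFile(string fileName, byte[] content);
}

// Implemented by the client; the server calls back here once the file has been processed.
public interface IFileProcessingCallback
{
    [OperationContract(IsOneWay = true)]
    void ProcessingCompleted(string fileName, bool succeeded);
}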