I have a WCF RESTful service that is hosted in IIS that is hit by several of our applications. The WCF services appear to operate fine for the most part, but sometimes it takes a long time to get a response from the service.
I was wondering whether there are good tutorials or resources on how best to configure WCF RESTful services to be web scale, whether through the web.config, through IIS, or through our dedicated application pool.
We have gone through our services and used NHibernate Profiler to find and optimize any problematic queries, and we also have memcached set up to help with performance. The problem seems to occur when many applications consume the service in a short period of time, or when the service has sat idle for a long period of time.
Thanks for any assistance.
Not sure if it's applicable to your scenario, but I read the blog post below on MSDN a couple of days ago. It's about a problem in the .NET IOCP thread pool which causes long response times for WCF when many requests are issued in a short time. Maybe that could help you?
WCF scales up slowly with bursts of work
KB2538826
There is no general advice for heavy-load issues, but one possible optimization would be to use asynchronous operations on the server side: Scale WCF Application Better with Asynchronous Programming. It's about conserving thread pool resources while making database calls.
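For illustration, a minimal sketch of such an asynchronous operation using the APM pattern that article describes; the contract, operation, and type names here are invented:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // WCF pairs the Begin/End methods by naming convention when
    // AsyncPattern = true; the request thread returns to the pool
    // while the database I/O is in flight.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetCustomer(int customerId, AsyncCallback callback, object state);

    Customer EndGetCustomer(IAsyncResult result);
}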
As for the idle period issue, check out Configuring Recycling Settings for an Application Pool (IIS 7)
I have developed a WCF web service that is called from several SharePoint Online workflows. At certain points there could be around 4 users starting up to 10 workflows within a very short time frame, and one workflow can make as many as 3 requests to the web service. Needless to say, at certain points the WCF service becomes overloaded. When a SharePoint workflow makes an HTTP web service call and the service is unavailable, the workflow errors out and attempts to restart after a short period of time, which only makes things worse.
These are some of the exceptions logged by the web service today during an approximately 40-minute period of "overloading":
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
The underlying connection was closed: An unexpected error occurred on a receive.
The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.
I have tried to look into ways to keep the WCF web service from malfunctioning when several requests are being made, and besides the obvious action of finding ways to decrease the number of calls made to the web service (which is not always an option), I came across the terms WCF concurrency modes and throttling limits.
Given the scenario described above, could anyone point me in the right direction as to which concurrency mode and throttling limits would be most appropriate? Presently, my WCF service has the default configuration.
Concurrency mode can be one of:
Single
Multiple
Reentrant
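For context, the concurrency mode is set on the service implementation class via the ServiceBehavior attribute; a minimal illustration with invented names:

using System.ServiceModel;

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class WorkflowHelperService : IWorkflowHelperService
{
    // service operations ...
}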
Throttling Limit options are shown below:
<serviceThrottling maxConcurrentCalls="Integer"
                   maxConcurrentInstances="Integer"
                   maxConcurrentSessions="Integer" />
I am still quite new to this area of programming and am finding it a tad complicated, so any help would be greatly appreciated!
Update: The SharePoint system is highly customised and covers a business process that is quite complicated. The web service methods are varied, and it would take a long time to explain what every method does, but I will mention some examples. The web service is used for operations that cannot be done (easily, or at all) using out-of-the-box SharePoint Designer actions. For example: moving documents and copying metadata from one folder to another (in the same or a different list), syncing information between lists/libraries, calculating values based on the metadata of several documents living within a given folder, scheduling data into an external database to be used by other components such as a console application running as a scheduled task, etc.
The web service calls take an average of 2 minutes to execute and return a value. The fastest methods take around 30 seconds, and the slowest around 4 minutes. Both the slow and fast methods are frequently utilised.
Your problem could be caused by a number of things, and you need to gather more information in order for anyone to be helpful to you.
With that said, the best I can do here is give you some pointers on how to gather such information, such as:
Turn on WCF tracing and try to understand when the error occurs on the SharePoint side. Does the error occur while the web service is processing the request, after it has finished, or does the service never receive the request in the first place?
If this tracing doesn't give you enough answers, write code in your web services to trace specific messages, giving you more information on what the web service is doing and what it is receiving from / returning to SharePoint, or use your preferred logging library.
In some cases, the Event Viewer might have information on what is happening. Check for any messages that show up around the time the error occurs on the client.
Finally, relaxing your serviceThrottling settings might mitigate some of your issues, but it won't solve them.
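If you do experiment with throttling, the element goes in your service behavior; the values below are placeholders to show the shape, not recommendations (tune them against measured load):

<behaviors>
  <serviceBehaviors>
    <behavior>
      <serviceThrottling maxConcurrentCalls="64"
                         maxConcurrentInstances="128"
                         maxConcurrentSessions="64" />
    </behavior>
  </serviceBehaviors>
</behaviors>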
If you have a lot of I/O operations in your web services (access to databases, the file system, or other web services), you might improve performance by using asynchronous I/O via the TPL, as sketched below.
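A minimal sketch, assuming a hypothetical Orders table and a connection string that includes "Asynchronous Processing=true" (required for ADO.NET asynchronous commands on .NET 4.0):

using System.Data.SqlClient;
using System.Threading.Tasks;

public static class OrderQueries
{
    public static Task<int> CountOrdersAsync(string connectionString)
    {
        var conn = new SqlConnection(connectionString);
        conn.Open();
        var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn); // hypothetical table

        // FromAsync wraps the APM Begin/End pair, so no thread is blocked
        // while the query executes on the database server.
        return Task.Factory.FromAsync<SqlDataReader>(
                cmd.BeginExecuteReader, cmd.EndExecuteReader, null)
            .ContinueWith(t =>
            {
                using (conn)
                using (SqlDataReader reader = t.Result)
                {
                    reader.Read();
                    return reader.GetInt32(0);
                }
            });
    }
}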
If you are returning a lot of data from your web service (like a big object, an object with cyclic references, or a big file), this might also be the reason why the server is forcing connections to be closed.
Hope this helps you in solving your issue.
I am running into a problem with my company's application.
I am going to summarize the system key elements:
My company's system has been running for a few years on Windows XP and Windows 7 (Home, Pro, Basic) machines.
It is written in .NET 4.0 and based upon WCF.
It uses the default throttling values (MaxConcurrentSessions = 100 * CPU count (4): enough for our workload).
The main service is hosted by a standalone daemon process (not IIS).
The main service is configured as multithreaded (ConcurrencyMode.Multiple) with PerSession instancing.
The protocol is reliable net.tcp.
No more than 10 clients access the service concurrently.
The problem is that, only on Windows 7 and intermittently, I get a "Server too busy" exception (I discovered this via the full WCF trace log) due to an exhausted MaxConcurrentSessions limit (impossible!!!).
Do you have any idea about this strange behaviour?
Thank you and have a Happy New Year!
Antonio
Do all your clients properly close/dispose their connections to the service after use? It's worth checking; "ghost" connections could maybe explain this.
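For reference, a minimal sketch of the usual close-or-abort pattern on the client side (the proxy type name is invented); note that a plain using block is risky here, because Dispose calls Close, which throws on a faulted channel:

var client = new MainServiceClient(); // hypothetical generated proxy
try
{
    client.DoWork();
    client.Close();   // releases the session slot on the server
}
catch (CommunicationException)
{
    client.Abort();   // faulted channel: Abort instead of Close
}
catch (TimeoutException)
{
    client.Abort();
}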
We experienced a similar issue with a self-hosted WCF interface which provided a synchronous request/response web service on top of an asynchronous backend request (two one-way service calls). Early in our testing, we noticed that after a somewhat variable amount of time our service became unresponsive to new requests. After some research, we discovered that whenever the backend service (out of our control) did not send a response, we continued to wait indefinitely, keeping our client connection open.
We fixed the issue by providing a “time-to-wait” configuration value so we were sure to respond to the client and close the connection. We used something like the following …
// Run the backend processing on a worker thread and wait at most the
// configured time before answering the client.
Task processTask = Task.Factory.StartNew(() => Process(message));
bool isProcessSuccess = processTask.Wait(shared.ConfigReader.SyncWebServiceWaitTime);
if (!isProcessSuccess)
{
    // handle error …
}
The following link, which provides information regarding WCF Service performance counters, may help further determine if the calls are being closed as expected. http://blogs.microsoft.co.il/blogs/idof/archive/2011/08/11/wcf-scaling-check-your-counters.aspx
Hope this helps.
Regards,
I have thoroughly searched the internet (most of the links sent me to Stack Overflow ;)) trying to come up with a solution for keeping a WCF service alive under IIS (7.5).
Many of the responses here were suggesting to write an application that will periodically send dummy requests to the WCF service in order to keep it alive.
My question is:
what if I create a thread in the WCF service, started when the service is first called (in a static constructor), that will periodically consume the service itself?
I mean for example in c#:
while (true)
{
    WebClient client = new WebClient();
    string returnString = client.DownloadString("http://...");
    Thread.Sleep(1000 * 5);
}
assuming that "http://..." is a URI for a provided WebMethod which, for example, returns some integer.
Would that work?
Basically I need some kind of web service (not necessarily WCF, but not a Windows service) running on a server that performs some operations and updates something in a SQL Server database. So if the described approach will not work, what might be the best way to achieve this?
Go to your IIS -> Application Pools (or create a new one) -> Advanced Settings and set Regular Time Interval to 0, which disables the scheduled recycle.
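The same setting can be scripted; something like the following appcmd command should work (the pool name is a placeholder; verify the exact property path with appcmd set apppool /? on your IIS version):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" -recycling.periodicRestart.time:00:00:00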
See related thread here.
AppFabric allows you to create WCF services which can auto-start and be long-living - this might be worth checking out as a hosting option (it's just a plugin for IIS).
Auto Start
What you are doing is basically wrong from the outset.
The problem is this:
IIS is basically a stateless request broker for HTTP requests (basic IIS) and a request broker for service requests (IIS with AppFabric).
What you are asking for is how to turn the inherently stateless IIS into a stateful server with eternal threads running.
That is not what IIS does. IIS handles requests, and its AppDomain is subject at all times to being torn down (destroying all threads).
Which makes the most upvoted answer dangerous, as it teaches you how to affect the recycle process without controlling the tear-downs (of app domains and threads) that IIS itself will intermittently perform.
The requester is "foreign" to IIS itself.
The internal lifetime of the service, though, is entirely managed by IIS (and the configuration of its applications).
So if by "keep alive" you mean to constantly request some service, then do as Andreas suggests further up (create a scheduled job).
If by "keep alive" you mean to make sure the same instance of the class handles requests, then you need to look into WCF lifetimes.
If by "keep alive" you mean to make the code you have created "stateful" and keep e.g. static variables alive and so on, well, then you are not accepting that IIS is basically a stateless per-request broker with internal lifetime management.
I suggest you create a small program (console app) that calls the web service, taking the URL of the web service as an argument. Then create a Windows scheduled task that runs the program. This gives you a lot of flexibility compared to the embedded approach you are asking about, since the program is just another client of the web service.
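A minimal sketch of such a pinger, assuming the service exposes an endpoint that answers a plain HTTP GET:

using System;
using System.Net;

class KeepAlivePinger
{
    static void Main(string[] args)
    {
        // args[0] is the service URL supplied by the scheduled task.
        using (var client = new WebClient())
        {
            string response = client.DownloadString(args[0]);
            Console.WriteLine("Pinged {0}, got: {1}", args[0], response);
        }
    }
}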
Try to avoid the while loop. Maybe Quartz.NET (http://quartznet.sourceforge.net/) is what you are looking for: on WCF start, create a job that runs every 10 minutes and calls the WCF service itself, as sketched below.
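A rough sketch against the Quartz.NET 2.x API (job name and URL are placeholders; note the scheduler lives in the same AppDomain, so the recycling caveats raised in the other answers still apply):

using Quartz;
using Quartz.Impl;

public class SelfPingJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        using (var client = new System.Net.WebClient())
        {
            client.DownloadString("http://..."); // the service's own endpoint
        }
    }
}

public static class SelfPingScheduler
{
    // Call once at service start-up.
    public static void Start()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();
        scheduler.ScheduleJob(
            JobBuilder.Create<SelfPingJob>().Build(),
            TriggerBuilder.Create()
                .StartNow()
                .WithSimpleSchedule(s => s.WithIntervalInMinutes(10).RepeatForever())
                .Build());
    }
}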
If this has been asked before my apologies, and this is .NET 2.0 ASMX Web services, again my apologies =D
A .NET application that only exposes web services. Roughly 10 million messages per day, load balanced between multiple IIS servers. Each incoming message is XML, and each outgoing message is XML (XmlElement). (We have beefy servers that run on steroids.)
I have an SLA that all messages are processed in under X seconds.
One function in the process, Linking Methods, is now taking 10-20 seconds. It is required for every transaction, but it is not critical that it happen before the web service returns the results. Because of this I suggested throwing it onto another thread, but now I realize that my words and the eager developers behind them might not have fully thought this through.
[Diagram: the current flow on the left; what is being attempted on the right.]
Effectively what I'm looking for is to have a web service spawn a long running (10-20 second) thread that will execute even after the web service is completed.
This is what, effectively, is going on:
Thread linkThread = new Thread(delegate()
{
    Linkmembers(GetContext(), ID1, ID2, SomeOtherThing, XMLOrSomething);
});
linkThread.Start();
Using this we've reduced the time from 19 seconds to 2.1 seconds on our dev boxes, which is quite substantial.
I am worried that, with the amount of traffic we get, and if a vendor/outside party decides to throttle us, IIS might decide to recycle/kill those threads before they're done processing. I agree our solution might not be the "best", but we don't have the time to build a queue system or another Windows service to handle this.
Is there a better way to do this? Any caveats that should be considered?
Thanks.
Apart from the issues you've described, I cannot think of any. That being said, there are ways to fix the problem that do not involve building your own solution from scratch.
Use MSMQ with WCF: create a WCF service with an MSMQ endpoint that is IIS-hosted (no need for a Windows service as long as WAS is enabled) and make calls to the service from within your ASMX service. You reap all the benefits of reliable queueing without having to build your own.
Plus, if your MSMQ service fails or throws an exception, it will reprocess automatically. If you use DTC and are hitting a database, you can even have the MSMQ transaction flow to the DB.
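A minimal sketch of what the queued contract could look like (names are invented); MSMQ endpoints require one-way operations:

using System.ServiceModel;

[ServiceContract]
public interface ILinkService
{
    // One-way: the message is durably queued, survives an IIS recycle,
    // and is processed even after the original web method has returned.
    [OperationContract(IsOneWay = true)]
    void LinkMembers(string id1, string id2);
}

The ASMX method would then just send the message through a client proxy bound with netMsmqBinding and return immediately.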
I have been building a client / server app with Silverlight, web services, and polling. Apparently I missed the whole Duplex Communication thing when I was first researching this subject. At any rate, the MSDN article I saw on the subject was promising.
When researching the scalability, it appears as if there's conflicting opinions on the subject.
silverlight.net/forums/t/89970.aspx - This thread seems to indicate that duplex polling only supports a finite number of concurrent clients on the server end.
dotnetaddict.dotnetdevelopersjournal.com/sl_polling_duplex.htm - This blog entry shows up in multiple places, which muddies the waters.
silverlight.net/forums/t/108396.aspx - This thread shows that I'm not the only one with this concern, but there are no answers in it.
silverlight.net/forums/t/32858.aspx - Despite all the bad press, this thread seems to have an official response saying the 10 concurrent connections are per machine.
In short, does anyone have facts / benchmarks?
Thanks :)
This is my understanding of this, but I haven't done tests.
There is an inbuilt 10 connection limit on non-server operating systems (XP/Vista/Windows 7).
On IIS 5.1 (XP) it will reject new connections once there are 10 in progress.
On IIS 7 (Vista/Windows 7) it will queue connections once there are 10 in progress. I think this means that more than 10 truly simultaneous connections are out of the question.
On the server OS side (2003/2008) there is no connection limit. However, on IIS 6 (2003) each long-running connection takes a thread from the thread pool, so you will run into a practical connection limit pretty quickly. On IIS 7 (2008), asynchronous requests are suspended in a way that does not tie up a thread, so thousands of connections should be possible.
Scalability of the WCF backend using the protocol in a web farm scenario is discussed at http://tomasz.janczuk.org/2009/09/scale-out-of-silverlight-http-polling.html.
There are built-in WCF limits, but these can very easily be raised through configuration. (http://weblogs.asp.net/alexeyzakharov/archive/2009/04/17/how-to-increase-amount-of-silverlight-duplex-clients.aspx)
I'm running into a few issues with the duplex binding. From time to time the channel gets faulted for no apparent reason and has a hard time reconnecting. I'm not aware of any alternatives for implementing a push model, short of doing everything yourself (and maybe getting even worse results).
Performance of the Silverlight HTTP polling duplex protocol and tuning of a WCF service in IIS is discussed at http://tomasz.janczuk.org/2009/08/performance-of-http-polling-duplex.html.