Log number of queued requests OWIN self-hosted - c#

I have my OWIN application hosted as a Windows Service, and I am getting a lot of timeout issues from different clients. I have some metrics in place around request/response times, but the numbers are very different on each side: for example, a request that takes the client around one minute appears to take only 3-4 seconds on the server. My assumption is that the number of requests that can be accepted has reached its limit, and subsequent incoming requests get queued up. Am I right? If so, is there any way to monitor the number of incoming requests at a given time and the size of the queue (the number of requests pending to be served)?
I am playing around with https://msdn.microsoft.com/en-us/library/microsoft.owin.host.httplistener.owinhttplistener.setrequestprocessinglimits(v=vs.113).aspx but it doesn't seem to have any effect.
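For reference, a rough sketch of how that call gets wired up in an OWIN Startup class, assuming the standard Microsoft.Owin.Host.HttpListener self-host (the properties key is the one documented for Katana; the limit values are purely illustrative):

```csharp
using Microsoft.Owin.Host.HttpListener;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // The self-host exposes its OwinHttpListener via the app properties.
        var listener = (OwinHttpListener)app.Properties[
            "Microsoft.Owin.Host.HttpListener.OwinHttpListener"];

        // First argument: max concurrent pending accepts; second argument:
        // max concurrent requests being processed. Values are illustrative.
        listener.SetRequestProcessingLimits(10, 500);

        // ... register the rest of the middleware pipeline here ...
    }
}
```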
Any feedback is much appreciated.
Thanks!

HttpListener is built on top of Http.Sys, so you need to use its performance counters and ETW traces to get this level of information.
https://msdn.microsoft.com/en-us/library/windows/desktop/cc307239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
http://blogs.msdn.com/b/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx
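As a sketch of the performance-counter route: Http.Sys publishes a "Http Service Request Queues" category with a CurrentQueueSize counter per request queue. The category and counter names below match the documentation, but it's worth confirming them in PerfMon on your machine first:

```csharp
using System;
using System.Diagnostics;

class QueueMonitor
{
    static void Main()
    {
        // Http.Sys exposes one instance per request queue; enumerate them
        // to find the one belonging to your self-hosted listener.
        var category = new PerformanceCounterCategory("Http Service Request Queues");
        foreach (var instance in category.GetInstanceNames())
        {
            using (var queueSize = new PerformanceCounter(
                "Http Service Request Queues", "CurrentQueueSize", instance, readOnly: true))
            {
                Console.WriteLine("{0}: {1} queued requests", instance, queueSize.NextValue());
            }
        }
    }
}
```

Polling this on a timer during a timeout spike would show whether requests really are piling up in the kernel queue before your OWIN pipeline ever sees them.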

Related

How to increase the number of processing HTTP requests?

I have an ASP.NET Web API app (a backend for a mobile app) published on Azure. Most requests are lightweight and processed quickly, but every client makes a lot of requests, and makes them rapidly, on every interaction with the mobile application.
The problem is that the web application can't process even a small number of requests (10/sec): the HTTP queue grows, but CPU usage doesn't.
I ran a load test with 250 requests/second, and the average response time grew from ~200ms to 5s.
Is the problem in my code? Or is it a hardware restriction? Can I increase the number of requests processed at one time?
First, it really matters which instances you use (especially if you use Small or Extra Small instances) and how many of them: don't expect too much from 1 core and 2 GB of RAM on a server.
Use caching (WebApi.OutputCache.V2 to reduce the server's processing effort, Azure Redis Cache as fast cache storage). The database can also be a bottleneck.
If you still get the same results after both adding more instances and caching, then you should take a look at your code and find the bottlenecks there.
These are only general recommendations; there is no code in the question.
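To illustrate the caching suggestion, a minimal sketch using the WebApi.OutputCache.V2 attribute (from the Strathweb.CacheOutput.WebApi2 NuGet package); the controller and data-access method here are hypothetical:

```csharp
using System.Web.Http;
using WebApi.OutputCache.V2;   // Strathweb.CacheOutput.WebApi2 NuGet package

public class ProductsController : ApiController
{
    // Cache the response for 60s on the server and let clients cache it too,
    // so rapid repeated calls from the mobile app never re-run the body.
    [CacheOutput(ServerTimeSpan = 60, ClientTimeSpan = 60)]
    public IHttpActionResult Get(int id)
    {
        return Ok(LoadProductFromDatabase(id)); // hypothetical data access
    }

    private object LoadProductFromDatabase(int id)
    {
        return new { Id = id };
    }
}
```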

Is there a WCF service request queue performance counter?

There is a nice ASP.NET perf counter category with a set of counters that can be used to track the request queue during perf test runs. However, I can't find a similar set for a WCF service not hosted through IIS. Our WCF services run as Windows Services using net.tcp bindings. I've learned that there are a couple of binding parameters that control queuing (Binding.MaxConnections and Binding.ListenBacklog), which wasn't an easy find. So, going forward, is there a way to track these two values in PerfMon?
Under the ServiceModelService performance counter category you can find the following set of queue performance counters:
Queue Dropped Messages
Queue Dropped Messages Per Second
Queued Poison Messages
Queued Poison Messages Per Second
Queued Rejected Messages
Queued Rejected Messages Per Second
None of these provides the information you're looking for. The performance counter I could find that is most closely related to what you want is:
Percent of Max Concurrent Calls
Which provides the number of concurrent calls as a percent of maximum concurrent calls.
A complete list of available WCF performance counters is in the MSDN documentation.
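For illustration, reading that counter from code might look like the sketch below. The category name is versioned (e.g. "ServiceModelService 4.0.0.0") and the instance name, which encodes the service name and base address, is deployment-specific, so both strings here are placeholders. The counters also only exist if they are enabled via performanceCounters="ServiceOnly" (or "All") in the service's &lt;diagnostics&gt; config section:

```csharp
using System;
using System.Diagnostics;

class WcfThrottleMonitor
{
    static void Main()
    {
        // Category and instance names below are illustrative; check PerfMon
        // for the exact values on your machine. WCF replaces '/' with '|'
        // in the instance name.
        using (var counter = new PerformanceCounter(
            "ServiceModelService 4.0.0.0",
            "Percent of Max Concurrent Calls",
            "MyService@net.tcp:||localhost:8080|MyService",
            readOnly: true))
        {
            Console.WriteLine("Concurrent calls at {0}% of the configured maximum",
                counter.NextValue());
        }
    }
}
```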

Separate threads in a web service after it's completed

If this has been asked before, my apologies; and this is a .NET 2.0 ASMX web service, so again, my apologies =D
It's a .NET application that only exposes web services: roughly 10 million messages per day, load balanced between multiple IIS servers. Each incoming message is XML, and each outgoing message is XML (XmlElement). (We have beefy servers that run on steroids.)
I have an SLA that all messages are processed in under X seconds.
One function in the process, Linking Methods, is now taking 10-20 seconds. It is required for every transaction, but it is not critical that it happen before the web service returns its results. Because of this I suggested throwing it onto another thread, but I now realize that my words, and the eager developers behind them, might not have fully thought this through.
The example below shows the current flow on the left and what is being attempted on the right.
Effectively what I'm looking for is to have a web service spawn a long running (10-20 second) thread that will execute even after the web service is completed.
This is what, effectively, is going on:
Thread linkThread = new Thread(delegate()
{
    Linkmembers(GetContext(), ID1, ID2, SomeOtherThing, XMLOrSomething);
});
linkThread.Start();
Using this we've reduced the time from 19 seconds to 2.1 seconds on our dev boxes, which is quite substantial.
I am worried that with the amount of traffic we get, and if a vendor/outside party decides to throttle us, IIS might decide to recycle/kill those threads before they're done processing. I agree our solution might not be the "best" however we don't have the time to build in a Queue system or another Windows Service to handle this.
Is there a better way to do this? Any caveats that should be considered?
Thanks.
Apart from the issues you've described, I cannot think of any. That being said, there are ways to fix the problem that do not involve building your own solution from scratch.
Use MSMQ with WCF: create a WCF service with an MSMQ endpoint that is IIS-hosted (no need for a Windows Service as long as WAS is enabled) and make calls to the service from within your ASMX service. You reap all the benefits of reliable queueing without having to build your own.
Plus, if your MSMQ service fails or throws an exception, it will reprocess automatically. If you use DTC and are hitting a database, you can even have the MSMQ transaction flow to the DB.
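A minimal sketch of what such an MSMQ-backed service could look like; the contract, operation, and parameter names are illustrative, and the endpoint would use netMsmqBinding in config:

```csharp
using System.ServiceModel;

// One-way contract for the queued linking work (names are illustrative).
[ServiceContract]
public interface ILinkingService
{
    [OperationContract(IsOneWay = true)]   // MSMQ operations must be one-way
    void LinkMembers(string id1, string id2);
}

public class LinkingService : ILinkingService
{
    // The message is durably queued; if this throws, MSMQ redelivers it.
    // With DTC, the MSMQ transaction can flow to the database work as well.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void LinkMembers(string id1, string id2)
    {
        // ... the long-running (10-20s) linking logic goes here ...
    }
}
```

The ASMX service then just sends the one-way message and returns immediately, which preserves the 2-second response time without homemade threads.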

Logging Via WCF Without Slowing Things Down

We have a large process in our application that runs once a month. This process typically runs in about 30 minutes and generates 342000 or so log events. Recently we updated our logging to a centralized model using WCF and are now having difficulty with performance. Whereas the previous solution would complete in about 30 minutes, with the new logging it now takes 3 or 4 hours. The problem, it seems, is that the application is actually waiting for the WCF request to complete before execution continues. The WCF method is already configured as IsOneWay, and I wrapped the client-side call to that WCF method in a different thread to try to prevent this kind of problem, but it doesn't seem to have worked. I have thought about using async WCF calls, but before I try something else I thought I would ask here to see if there is a better way to handle this.
342000 log events in 30 minutes, if I did my math correctly, comes out to 190 log events per second. I think your problem may have to do with the default throttling settings in WCF.
Even if your method is set to one-way, and depending on whether you're creating a new proxy for each logged event, calling the method will still block while the proxy is created and the channel is opened. If you're using an HTTP-based binding, it will also block until the message has been received by the service (an HTTP-based binding sends back a null response for a one-way method call when the message is received).
The default WCF throttling limits concurrent instances to 10 on the service side, which means only 10 requests will be handled at a time and any further requests get queued. Pair that with an HTTP binding, and anything after the first 10 requests is going to block at the client until it becomes one of the 10 requests being handled.
Without knowing how your services are configured (instance mode, etc.) it's hard to say more than that, but if you're using per-call instancing, I'd recommend setting MaxConcurrentCalls and MaxConcurrentInstances on your ServiceBehavior to something much higher (the defaults are 16 and 10, respectively).
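For reference, raising those throttles is typically done in the service's config file; the values below are illustrative, not a recommendation:

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Raise the defaults so more one-way log calls are handled concurrently -->
        <serviceThrottling maxConcurrentCalls="200"
                           maxConcurrentInstances="200"
                           maxConcurrentSessions="200" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```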
Also, to build on what others have mentioned about aggregating multiple events and submitting them all at once, I've found it helpful to set up a static Logger.LogEvent(eventData) method. That way it's simple to use throughout your code, and you can control in one place how logging behaves across your application, such as configuring how many events get submitted at a time.
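A minimal sketch of that static facade, with the batching policy in one place; the Sink delegate stands in for the real WCF client call and the batch size is an illustrative threshold:

```csharp
using System;
using System.Collections.Generic;

public static class Logger
{
    private static readonly List<string> Buffer = new List<string>();
    private static readonly object Gate = new object();
    private const int BatchSize = 1000; // illustrative threshold

    // Where a full batch goes; in real code this would be the WCF client call,
    // e.g. loggingClient.LogBatch(batch).
    public static Action<string[]> Sink = batch => { };

    public static void LogEvent(string eventData)
    {
        string[] full = null;
        lock (Gate)
        {
            Buffer.Add(eventData);
            if (Buffer.Count >= BatchSize)
            {
                full = Buffer.ToArray();
                Buffer.Clear();
            }
        }
        // One remote call per BatchSize events instead of one per event.
        if (full != null)
            Sink(full);
    }
}
```

Callers just write Logger.LogEvent("...") everywhere; whether events are sent singly, in batches of 1000, or on a background timer becomes an implementation detail of this one class.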
Making a call to another process or remote service (i.e. calling a WCF service) is about the most expensive thing you can do in an application. Doing it 342,000 times is just sheer insanity!
If you must log to a centralized service, you need to accumulate batches of log entries and then, only when you have say 1000 or so in memory, send them all to the service in one hit. This will give you a reasonable performance improvement.
log4net has a buffering system that exists outside the context of the calling thread, so it won't hold up your call while it logs. Its usage should be clear from the many appender config examples; search for the term bufferSize. It's used on many of the slower appenders (e.g. remoting, email) to keep the source thread moving without waiting on the slower logging medium, and there is also a generic buffering meta-appender that can be used in front of any other appender.
We use it with an AdoNetAppender in a system of similar volume and it works wonderfully.
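A typical config fragment for that pattern, with the buffering meta-appender sitting in front of the slow appender; the appender names are illustrative, and the referenced AdoNetAppender would be defined elsewhere in the same config:

```xml
<appender name="BufferingForwarder"
          type="log4net.Appender.BufferingForwardingAppender">
  <!-- Hold up to 512 events in memory, then forward them in one block -->
  <bufferSize value="512" />
  <lossy value="false" />
  <appender-ref ref="AdoNetAppender" />
</appender>
<root>
  <level value="INFO" />
  <appender-ref ref="BufferingForwarder" />
</root>
```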
There's always traditional syslog; there are plenty of syslog daemons that run on Windows. It's designed to be a more efficient way of doing centralised logging than WCF, which is meant for less intensive operations, especially if you're not using the TCP WCF configuration.
In other words, have a go with it: the correct tool for the job.

web service slowdown

I have a web service slowdown.
My (web) service is written in gSOAP and managed C++. It isn't IIS/Apache hosted, but it speaks XML.
My client is in .NET.
The service computation time is light (&lt;0.1s to prepare a reply). I expect the service to be smooth and fast, and to have good availability.
I have about 100 clients; a 1s response time is mandatory.
Clients make about 1 request per minute.
Clients check for the web service's presence with a TCP open-port test.
So, to avoid possible congestion, I turned gSOAP's KeepAlive off.
Up to there, everything runs fine: I barely see any connections in TCPView (Sysinternals).
A new synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in less than 30 seconds.
With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply.
Could it be that I need to reset the SoapHttpClientProtocol connection?
Has anyone else seen TIME_WAIT ghosts when calling a web service in a loop?
Sounds like you aren't closing the connection after each call and are opening a new connection for every request. Either close the connection when you're done or reuse the open connections.
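In sketch form, reusing means hoisting the proxy out of the loop; MyWebService and DoWork stand in for the wsdl.exe-generated proxy class and its method:

```csharp
using System;

class SyncLoop
{
    static void Main()
    {
        // Create ONE proxy and reuse it for the whole loop instead of
        // constructing one per call; with HTTP keep-alive the underlying
        // connection can be reused, so sockets don't pile up in TIME_WAIT.
        using (var proxy = new MyWebService())   // hypothetical generated proxy
        {
            for (int i = 0; i < 1000; i++)
            {
                proxy.DoWork(i);   // hypothetical service method
            }
        } // Dispose closes the connection once, at the end
    }
}
```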
Be very careful with the implementations mentioned above; there are serious problems with them.
The implementation described at yakkowarner.blogspot.com/2008/11/calling-web-service-in-loop.html (comment above):
PROBLEM: All your work will be wiped out the next time you regenerate the web service proxy using wsdl.exe, and you are going to forget what you did. Not to mention that the fix is rather hacky, relying on a message string to take action.
The implementation described at forums.asp.net/t/1003135.aspx (comment above):
PROBLEM: You are selecting a port between 5000 and 65535, so on the surface this looks like a good idea. But if you think about it, there is no way (at least none I can think of) to reserve ports to be used later. How can you guarantee that the next port on your list is not currently in use? You are picking ports sequentially, and if some other application takes a port that is next on your list, you are hosed. Or what if some other application running on your client machine starts using random ports for its connections? You would be hosed at unpredictable points in time, randomly getting an error like "remote host can't be reached or is unavailable" -- even harder to troubleshoot.
Although I can't give you the one right solution to this problem, some things you can do are:
Try to minimize the number of web service requests, or spread them out over a longer period of time.
For your type of app, maybe web services weren't the correct architecture: for something with 1ms response times you should be using a messaging system, not a web service.
Set your OS's allowed number of ports to 65K using the registry (on Windows).
Lower the time that sockets remain in TIME_WAIT (this presents its own list of problems).
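For reference, the last two items correspond to the MaxUserPort and TcpTimedWaitDelay registry values on Windows. The values below are illustrative, both changes are machine-wide, and a reboot is required for them to take effect:

```bat
:: Raise the ephemeral port ceiling and shorten TIME_WAIT (illustrative values)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```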
