Large Volume Async Calls Being Blocked on Server - c#

I have an application that must send hundreds to thousands of HTTP requests at once. It's a .NET Windows service that uses async calls. When my main server sends out small batches (around 1,000 or fewer at a time) everything works fine: I get a response from the HTTP calls and all is good.
When it starts hitting 1,500 or more at a time, though, all of a sudden I get very few to no responses from my HTTP requests. When I run these large batch tests on my local machine, though, I have no issues. Has anyone had experience with this and might know what the culprit could be that is holding back my .NET app?

Async calls use the ThreadPool behind the scenes, and creating a new thread for the pool can be time-consuming. Try checking ThreadPool.GetMaxThreads() to see how many threads can be created.
Another possibility is simply that your local machine is faster than the server.
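As a starting point, here is a small sketch of the check suggested above. The numbers passed to SetMinThreads are illustrative, not a recommendation, and ServicePointManager.DefaultConnectionLimit is a related limit worth checking, since it commonly caps concurrent outbound HTTP calls in client apps:

```csharp
using System;
using System.Net;
using System.Threading;

class ThreadPoolCheck
{
    static void Main()
    {
        // Upper bound on pool threads that can exist at once.
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        // Threads created eagerly; beyond this the pool throttles
        // new-thread creation, which can delay queued callbacks.
        ThreadPool.GetMinThreads(out int minWorker, out int minIo);

        Console.WriteLine($"Max worker: {maxWorker}, max I/O: {maxIo}");
        Console.WriteLine($"Min worker: {minWorker}, min I/O: {minIo}");

        // If the service fires thousands of requests at once, raising the
        // minimum can avoid the slow ramp-up (tune to your workload).
        ThreadPool.SetMinThreads(Math.Max(minWorker, 200), Math.Max(minIo, 200));

        // Also check how many concurrent connections per host are allowed.
        Console.WriteLine($"Connection limit: {ServicePointManager.DefaultConnectionLimit}");
    }
}
```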

Related

OData Azure API erratic response time

We have several OData APIs using Entity Framework and AutoMapper. These connect to an on-premises SQL database through a VNet. The GET requests of this API are not asynchronous, per the example found here. The scaling is set to S2. We have enabled Always On.
Sometimes the requests complete in 500 ms. Sometimes the very same requests take 40 seconds. We have tried scaling out, but this offers no tangible benefit. We have tried making the GET functions on the controllers async. We have tried disabling authentication. We have tried looking at the Application Insights call stack in the profiler, but sometimes the code hangs on one call and other times on another. We even found a 39-second call to String.Replace(). We've tried Kudu but can't seem to glean anything from it.
On top of this I alone succeed in bringing the server to its knees simply by spamming F5 on a relatively simple request, locking the CPU at 100%. S2 seems pretty high already, and we are stunned that the server apparently cannot handle it. And it's also not always the case that low CPU usage on the server equals fast requests. Sometimes these requests also take an extraordinary amount of time.
We have tried looking at the application insights data but grow even more confused as some data suggests one thing is at fault while other data suggests it is not.
- CPU usage on the app service plan is high.
- CPU usage in the live metrics usually remains low.
This suggests that SQL is at fault. But we have almost ruled that out, since if we spam an API on one app service plan and send the same single request to another app service plan, we get the result immediately.
This suggests that the code or the server is at fault.
How can we diagnose this issue and find the bottleneck?

How to increase the number of processing HTTP requests?

I have an ASP.NET Web API app (a backend for mobile) published on Azure. Most requests are lightweight and processed quickly, but every client makes a lot of requests, and does so rapidly, on every interaction with the mobile application.
The problem is that the web application can't process even a small number (10/sec) of requests. The HTTP queue grows but CPU usage doesn't.
I ran load testing with 250 requests/second, and the average response time grew from ~200 ms to 5 s.
Maybe the problem is in my code? Or is it a hardware restriction? Can I increase the number of requests processed at one time?
First, it really matters which instances you use (especially if you use small and extra-small instances) and how many you use - don't expect too much from 1 core and 2 GB of RAM on a server.
Use caching (WebApi.OutputCache.V2 to reduce the server's processing effort, Azure Redis Cache as fast cache storage). The database can also be a bottleneck.
If you get the same results after both adding more instances and adding caching, then you should take a look at your code and find the bottlenecks there.
These are only general recommendations; there is no code in the question.
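For the caching suggestion, server-side output caching with that package looks roughly like this. This is a sketch: the attribute and its properties come from the WebApi.OutputCache.V2 namespace of the Strathweb CacheOutput package, while the controller and data-access method are made up for illustration:

```csharp
using System.Web.Http;
using WebApi.OutputCache.V2; // Strathweb CacheOutput package

public class ProductsController : ApiController
{
    // Cache the response for 60 s on the server and tell clients to
    // cache it for 60 s as well, so repeated identical requests never
    // reach the action (or the database) at all.
    [CacheOutput(ServerTimeSpan = 60, ClientTimeSpan = 60)]
    public IHttpActionResult GetAll()
    {
        return Ok(LoadProductsFromDatabase()); // hypothetical data access
    }

    private object LoadProductsFromDatabase() => new object();
}
```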

Does Keep-Alive in WCF Use More Resources than Polling?

I could probably set up a couple of test-bed applications and find out, but I'm hoping someone has already experienced this or simply has a more intuitive understanding. I have three executables: two different clients (call them Client1.exe and Client2.exe) and a WCF service host (call it Host.exe) that hosts what's more or less a message-bus-type service for the two clients. I won't get into the "why's," as that's a long story and not productive to this question.
The point is this: Client1 sends a request through this service to Client2. Client2 performs operations, then responds with results to Client1. Client1 will always be the initiator of requests, so this order of operations will always be consistent. This also means that Client1 can open its channels to communicate with this service as needed, whereas, due to the need for callback services, Client2 has to keep its channels open. I began by attempting a keep-alive approach. However, these are all three on the desktop, and PC sleep events or other issues (I'm not sure) seem to interfere with it. And once it times out, everything has to be restarted, which makes it a real pain. I have some ideas I may try to help the keep-alive approach, but this brought up a question that I don't have an answer to: is this the best use of my resources?
The way I figure it, there are two main approaches for Client2:
1. Keep-alive, with a lot of monitoring (timers and checking of connection states) and connection-resetting code. This would be faster since it could respond to requests immediately. The downside is that this has to be kept alive throughout the time the user keeps Client2 open on their desktop, which could be anywhere from short and sweet to crazy-long.
2. Poll periodically for a request, which would allow the resources to be used only when checking for or processing a request from Client1. This would be slower, since poll requests would not be real-time, but it would eliminate any concern about external issues disconnecting the service. This would also cause me to add more state to the service. It's already a PerSession service with a list of available instances of Client2 IDs, so that Client1 knows which instance it's talking to, but it would add more.
Client2 performs many other functions and so still has to be very performant with this process, which makes me wonder: which approach is more likely to cost in resources? Is the polling approach more costly, or attempting to keep-alive?
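To make the polling option concrete, a minimal sketch of what Client2's poll loop could look like; the timer interval, method names, and service calls are all hypothetical, since the real WCF contract isn't shown:

```csharp
using System.Timers;

class PollingClient
{
    private readonly Timer _pollTimer = new Timer(5000); // poll every 5 s

    public void Start()
    {
        _pollTimer.Elapsed += (sender, args) =>
        {
            // Open the channel, check for pending work, close it again.
            // Resources are only held for the duration of each poll,
            // at the cost of up to one interval of added response delay.
            var pending = CheckForRequest(); // hypothetical service call
            if (pending != null)
                ProcessAndRespond(pending);  // hypothetical
        };
        _pollTimer.Start();
    }

    private object CheckForRequest() => null;
    private void ProcessAndRespond(object request) { }
}
```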

How expensive is it to call a web service?

I've had a fairly good search on Google and nothing has popped up to answer my question. As I know very little about web services (I've only started using them, not building them, in the last couple of months), I was wondering whether I should be OK to call a particular web service as frequently as I wish (within reason), or whether I should build up requests to send in one go.
To give you an example, my app is designed to make job updates, and for certain types of updates it will call the web service. It seems my options are these: I could create a DataTable in my app of updates that require the web service, pass the whole DataTable to the web service, and write a method in the web service to process its updates. Alternatively, I could iterate through my entire table of updates (which includes updates other than those requiring the web service) and call the web service as and when an update requires it.
At the moment it seems like it would be simpler for me to pass each update rather than a datatable to the web service.
In terms of data passed to the web service, each update would contain a small amount of data (3 strings, at most 120 characters in length). In terms of the number of updates, there would probably be no more than 200.
I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go.
Web services or not, any calls routed over the network would benefit from building up multiple requests, so that they could be processed in a single round-trip. In your case, building an object representing all the updates is going to be a clear winner, especially in setups with slower connections.
When you make a call over the network, these things need to happen when a client communicates with a server (again, web services or not):
1. The data associated with your call gets serialized on the client
2. The serialized data is sent to the server
3. The server deserializes the data
4. The server processes the data, producing a response
5. The server serializes the response
6. The server sends the serialized response back to the client
7. The response is deserialized on the client
Steps 2 and 6 usually cause a delay due to network latency. For simple operations, latency often dominates the timing of the call.
The latency on the fastest networks, used for high-frequency trading, is measured in microseconds; on regular ones it is in milliseconds. If you are sending 100 packets one by one on a network with 1 ms lag (2 ms per round-trip), you are wasting 200 ms just on network latency! That is one fifth of a second, a lot of time by the standards of today's CPUs. If you can eliminate it simply by restructuring your requests, that's a great reason to do it.
You should usually favor coarse-grained remote interfaces over fine-grained ones.
Consider adding a 10 ms network latency to each call: what would be the delay for 100 updates? A full second of pure waiting, versus 10 ms for a single batched call.
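To make the coarse-grained vs. fine-grained distinction concrete, here is a sketch of the two interface shapes for the updates described in the question. The type and method names are hypothetical:

```csharp
using System.Collections.Generic;

// One job update, per the question: three short strings.
public class JobUpdate
{
    public string JobId;
    public string Field;
    public string Value;
}

public interface IJobUpdateService
{
    // Fine-grained: one network round-trip per update.
    void ApplyUpdate(JobUpdate update);

    // Coarse-grained: one round-trip for the whole batch.
    // With 200 updates and 10 ms of latency per call, batching saves
    // roughly 199 * 10 ms, about 2 seconds of pure waiting.
    void ApplyUpdates(IList<JobUpdate> updates);
}
```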

Separate threads in a web service after it's completed

If this has been asked before my apologies, and this is .NET 2.0 ASMX Web services, again my apologies =D
A .NET application that only exposes web services. Roughly 10 million messages per day, load balanced between multiple IIS servers. Each incoming message is XML, and each outgoing message is XML (XMLElement). (We have beefy servers that run on steroids.)
I have an SLA that all messages are processed in under X seconds.
One function in the process, Linking Methods, is now taking 10-20 seconds. It is required for every transaction, but it is not critical that it happen before the web service returns its results. Because of this I suggested throwing it on another thread, but I now realize that my words, and the eager developers behind them, might not have fully thought this through.
The example below shows the current flow on the left, and what is being attempted on the right.
Effectively, what I'm looking for is to have a web service spawn a long-running (10-20 second) thread that will execute even after the web service call has completed.
This is, effectively, what is going on:
Thread linkThread = new Thread(delegate()
{
    Linkmembers(GetContext(), ID1, ID2, SomeOtherThing, XMLOrSomething);
});
linkThread.Start();
Using this we've reduced the time from 19 seconds to 2.1 seconds on our dev boxes, which is quite substantial.
I am worried that, with the amount of traffic we get, and if a vendor/outside party decides to throttle us, IIS might decide to recycle/kill those threads before they're done processing. I agree our solution might not be the "best"; however, we don't have the time to build a queue system or another Windows service to handle this.
Is there a better way to do this? Any caveats that should be considered?
Thanks.
Apart from the issues you've described, I cannot think of any. That being said, there are ways to fix the problem that do not involve building your own solution from scratch.
Use MSMQ with WCF: create a WCF service with an MSMQ endpoint that is IIS-hosted (no need to use a Windows service as long as WAS is enabled) and make calls to the service from within your ASMX service. You reap all the benefits of reliable queueing without having to build your own.
Plus, if your MSMQ service fails or throws an exception, it will reprocess automatically. If you use DTC and are hitting a database, you can even have the MSMQ transaction flow to the DB.
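A rough sketch of what such a queued contract could look like; the interface and operation names are hypothetical, and the endpoint would use netMsmqBinding in configuration. Note that MSMQ endpoints require all operations to be one-way, since there is no channel to reply on:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ILinkService
{
    [OperationContract(IsOneWay = true)]
    void LinkMembers(string id1, string id2);
}

public class LinkService : ILinkService
{
    public void LinkMembers(string id1, string id2)
    {
        // The 10-20 second linking work runs here, pulled from the
        // queue by WAS/IIS after the original ASMX call has already
        // returned. If this throws, MSMQ redelivers the message.
    }
}
```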
