Our web application (hosted in a Web App in Azure) experiences spikes in HTTP Queue Length. Each time there is a spike in HTTP Queue Length, the web application crashes and we either have to wait for Azure to restart the web app itself, or we restart the web app ourselves. This happens very often.
The web application does use SignalR, and a Web Job is running that calls a method on the Hub which then broadcasts data to connected clients. There is only ever a handful of users at this stage, so we have not implemented a SignalR backplane.
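For context, the Web Job's broadcast is essentially the following sketch (the hub and method names, DataHub and BroadcastData, and the site URL are illustrative, using the SignalR 2.x .NET client rather than our actual code):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public static class Broadcaster
{
    // Connects from the Web Job to the web app's hub and pushes one payload.
    public static async Task BroadcastAsync(string payload)
    {
        using (var connection = new HubConnection("https://ourapp.azurewebsites.net/"))
        {
            IHubProxy hub = connection.CreateHubProxy("DataHub");
            await connection.Start();

            // DataHub.BroadcastData then fans the payload out to connected
            // browsers, e.g. Clients.All.updateData(payload).
            await hub.Invoke("BroadcastData", payload);
        }
    }
}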
Here is an example of the spikes in HTTP Queue Length
Note, we tried giving the web application its very own App Service Plan (P3) and it still exhibited the same behaviour. The memory percentage was much lower than shown here, around 20-40 percent, but the app still crashed with regular spikes in HTTP Queue Length. So I don't believe memory is the cause.
After a while trying to diagnose this issue, we decided to host the application (same code) on a VM (still in Azure) and change the URL to point to the VM instead of the Web App. The new VM is very basic, with only 3.5 GB of memory.
Since moving to the VM, the application has been performing great: no crashes, and much better performance than in a Web App with a large dedicated service plan.
So it is difficult to blame the code: when we run perfmon and check other indicators on the VM, memory and queue lengths quickly drop back down after serving requests, whereas in a Web App they seemed to grow continually until the app crashed.
Just wondering if anyone else has experienced this behaviour with Web Apps? We are going to continue hosting in a VM, but originally preferred hosting within a Web App as PaaS is more appealing.
In case it helps, more information on the tech stack is:
HTML5, C#, Web API 2, Kendo MVVM, SignalR, Azure SQL Server, Web Jobs processing Service Bus Topics.
Kind regards,
Stefan
Related
I have an ASP.NET MVC application hosted in Azure.
This application is complemented by a desktop application that also has WCF services for communicating with third-party interfaces. The WCF services are hosted locally.
There are thousands of clients using the desktop application at different geographical locations.
Until now, every desktop application has talked to the web app through the Web API, via WCF.
This was limited to on-demand calls from the desktop application.
Whenever the desktop application needed to talk to the web app, it called the Web API from WCF.
Now, what I want is:
To access the different desktop applications (typically called sites) from Azure, depending on the need.
This is required on account of an online ordering system that works through the web app/mobile app.
I do not want to keep polling from the desktop application to find out whether there is a new order for its site.
I feel it would be better if I could push from the other side.
Also, keep in mind that the IP addresses of the sites will not be fixed, there may be firewall issues, and NAT may translate the resource identifier differently.
Could Service Bus in Azure be of any help? What confuses me is that every desktop application has its own WCF service, and an order should reach the respective site only.
Any ideas on this would be appreciated.
According to your description, Service Bus messaging is a good way to achieve this.
For more information about Service Bus messaging, see: Service Bus queues, topics, and subscriptions
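To make the "order should reach the respective site only" part concrete, here is a rough sketch (using the classic Microsoft.ServiceBus.Messaging SDK; the topic name "orders", the SiteId property and the site IDs are illustrative): publish each order to a topic with a SiteId property and give every site its own filtered subscription.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

string connectionString = "...";  // your Service Bus connection string

// One-time setup: a subscription per site that only receives that site's orders.
var ns = NamespaceManager.CreateFromConnectionString(connectionString);
if (!ns.SubscriptionExists("orders", "site42"))
{
    ns.CreateSubscription("orders", "site42", new SqlFilter("SiteId = 'site42'"));
}

// Web app side: publish an order tagged with the target site.
var topic = TopicClient.CreateFromConnectionString(connectionString, "orders");
var message = new BrokeredMessage("order payload");
message.Properties["SiteId"] = "site42";
topic.Send(message);

// Desktop/WCF side: each site listens only on its own subscription, so the
// desktop application never polls the web app, and NAT/firewalls are not a
// problem because the site makes the outbound connection to Service Bus.
var subscription = SubscriptionClient.CreateFromConnectionString(connectionString, "orders", "site42");
subscription.OnMessage(msg =>
{
    var payload = msg.GetBody<string>();
    // hand the order to the local WCF service / desktop application here
});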
In addition, we could also use RabbitMQ or ZeroMQ, which are similar to Service Bus messaging, and both of them are free. You can choose whichever best meets your requirements.
About differences between ZeroMQ and RabbitMQ:
ZeroMQ has better performance, but it is designed on the assumption that some message loss is acceptable, which suits high-throughput / low-latency applications. Unlike ZeroMQ, RabbitMQ fully implements the AMQP protocol; it works rather like a mailbox service, supporting message persistence, transactions, congestion control, load balancing and so on, which gives RabbitMQ a wider range of application scenarios.
Function | RabbitMQ | ZeroMQ
Message persistence | Supported | Not supported
Transactions | Supported | Not supported
Performance | Low | High
Stability | High | Low
AMQP protocol support | Supported | Not supported
Application scenario | Data loss is not allowed | High throughput
For more information about RabbitMQ and ZeroMQ, see:
RabbitMQ
ZeroMQ
If you are able to modify the desktop applications, implementing a WebSocket connection with SignalR might be worth a look. The desktop applications sign up with a SignalR hub you provide.
You can then push data to the clients from, for example, an ASP.NET MVC app. It works very reliably and handles lots of connections well. It is typically used for real-time web communication but might be useful in your case, too.
The downside is probably that the desktop app needs to initially sign up to a hub to receive push messages.
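As a rough sketch of that idea (the hub, group and method names, SiteHub, RegisterSite and newOrder, are made up, assuming SignalR 2.x), each desktop app could join a group named after its site so an order can be pushed to exactly one site:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Server side (in the ASP.NET MVC app): each site registers itself in a group.
public class SiteHub : Hub
{
    public Task RegisterSite(string siteId)
    {
        return Groups.Add(Context.ConnectionId, siteId);
    }
}

// Pushing a new order to one specific site, e.g. from a controller:
//   GlobalHost.ConnectionManager.GetHubContext<SiteHub>()
//       .Clients.Group("site42").newOrder(orderJson);

// Desktop side (Microsoft.AspNet.SignalR.Client): sign up and wait for pushes.
//   var connection = new HubConnection("https://yourapp.azurewebsites.net/");
//   var hub = connection.CreateHubProxy("SiteHub");
//   hub.On<string>("newOrder", orderJson => HandleOrder(orderJson));
//   await connection.Start();
//   await hub.Invoke("RegisterSite", "site42");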
I am using web services - not WCF - hosted in an IIS web application written in C#/ASP.NET. I also have a C# WinForms desktop application that originally polled a web method to check for any messages on the server. I found that the memory on the client shot up. So, instead of polling that web method, I now invoke it once and the web method goes into a loop checking for messages. As soon as it finds a message (or messages) for this client, it breaks out of the loop and returns the message(s) to the client. The client in turn processes the message(s) and then re-invokes the same web method, waiting for the next message(s).
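In outline, the server-side web method looks something like this simplified sketch (GetPendingMessages and the timings are placeholders, not the actual code):

[WebMethod]
public List<string> WaitForMessages(string clientId)
{
    // Long-poll: hold the request open until messages arrive or we time out,
    // so the client does not have to poll the server frequently.
    var deadline = DateTime.UtcNow.AddSeconds(90);
    while (DateTime.UtcNow < deadline)
    {
        List<string> messages = GetPendingMessages(clientId); // placeholder data access
        if (messages.Count > 0)
        {
            return messages;
        }
        Thread.Sleep(500); // check twice a second; note this holds one worker thread per waiting client
    }
    return new List<string>(); // nothing arrived; the client simply re-invokes
}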
I run this and the memory on the client desktop and the memory on the web server remain low. I really have 2 questions here.
1). Will the memory escalate on the server when more clients invoke the same web method?
2). Should I avoid this way of doing things?
I know there are callbacks available using WCF and I know I can create a hub using SignalR. What I would like to know is whether there is anything wrong with (or different about) how I am doing it, and/or whether there is a better way of doing it.
Many Thanks.
I have an Azure web role that accesses an external WCF based SOAP web service (port 80) for various bits of data. The response from this service is highly erratic. I routinely get the following error.
There was no endpoint listening at http://www.myexternalservice.com/service.svc that could accept the message. This is often caused by an incorrect address or SOAP action.
To isolate the problem I created a simple console app to repetitively call this service in 1 second intervals and log all responses.
// Called repeatedly at 1-second intervals; Stopwatch is System.Diagnostics,
// Log is part of the test harness. A new Stopwatch per call keeps each timing independent.
var stopwatch = new Stopwatch();
using (var svc = new MyExternalService())
{
    stopwatch.Start();
    var response = svc.CallService();   // single SOAP call to the external service
    stopwatch.Stop();
    Log(response, stopwatch.ElapsedMilliseconds);
}
If I RDP to one of my Azure web instances and run this app it takes 10 to 20 attempts before it gets a valid response from the external service. These first attempts are always accompanied by the above error. After this "warm up period" it runs fine. If I stop the app and then immediately restart, it has to go back through the same "warm up" period.
However, if I run this same app from any other machine I receive valid responses immediately. I have run this logger app on servers running in multiple data centers (non Azure), desktops on different networks, etc... These test runs are always very stable.
I am not sure why this service would react this way in the Azure environment. Unfortunately, for the short term I am forced to call this service but my users cannot tolerate this inconsistency.
A capture of network traffic on the Azure server shows a large number of SYN retransmits at 10-second intervals during the same period I experience the connection errors. Once the "warm up" is complete, the SYN retransmits no longer occur.
The Windows Azure data center region where the Windows Azure application is deployed might not be near the external web service, while the local machines you have tried (which work fine) might be close to it. That could mean much higher latency from Azure, which would likely cause the failures.
Success accessing the WSDL from a browser in the Azure VM might be due to browser caching. Making an actual function call from the browser would tell you whether it is really making a connection.
We found a solution for this problem, although I am not completely happy with it. After exhausting all other courses of action we changed the load balancer from Layer-4 load balancing to Layer-7 load balancing. While this fixed the problem of lost requests, I am not sure why it made a difference.
We have a number of Windows services running in our system (built in C#). We use WCF to communicate with them and control them, since WCF offers very convenient communication with these processes.
Right now, in our Windows GUI for managing, monitoring and troubleshooting the services, we simply register callbacks and receive notifications when a message is available from the service. Obviously this application is stateful, and WCF lets the local delegate be called whenever the maintained connection to the service signals that a message is available.
In our web application which users actually use, we'd like to use long-polling to have a status area on the web page (iframe, AJAX, whatever) which shows any issues which the services are reporting. We'd like to use a long-polling or other technique which minimizes actual polling on the network.
The problem we are running up against is that we need something to make the long-polling HTTP request against: something that is always running in IIS, that can itself be WCF-connected to our services, and that can convert the event/delegate-based WCF notification into a blocking-style long-poll response. It feels like a chicken-and-egg situation: some component in our system is always going to be sitting in a loop, polling, and that is exactly what we are trying to avoid.
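Conceptually, what we are after is something like the following sketch, where a TaskCompletionSource bridges the WCF callback to an awaitable long-poll response (all names here, such as StatusRelay and WaitForStatusAsync, are made up for illustration):

using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

// Singleton living in the web app: the WCF callback completes the waiters,
// and each long-poll request simply awaits the next notification.
public static class StatusRelay
{
    private static readonly ConcurrentQueue<TaskCompletionSource<string>> Waiters =
        new ConcurrentQueue<TaskCompletionSource<string>>();

    // Called from the WCF callback/delegate when a service reports an issue.
    public static void Publish(string statusJson)
    {
        TaskCompletionSource<string> waiter;
        while (Waiters.TryDequeue(out waiter))
        {
            waiter.TrySetResult(statusJson);
        }
    }

    // Called by the long-poll endpoint; completes when Publish fires or the timeout elapses.
    public static async Task<string> WaitForStatusAsync(int timeoutMs = 30000)
    {
        var tcs = new TaskCompletionSource<string>();
        Waiters.Enqueue(tcs);
        var finished = await Task.WhenAny(tcs.Task, Task.Delay(timeoutMs));
        return finished == tcs.Task ? tcs.Task.Result : null; // null means the page re-polls
    }
}

// Long-poll endpoint the page's AJAX/iframe calls: no loop, no polling of the services.
public class StatusController : Controller
{
    public async Task<ActionResult> Poll()
    {
        string status = await StatusRelay.WaitForStatusAsync();
        return Json(new { status }, JsonRequestBehavior.AllowGet);
    }
}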
Does anyone have an example of doing this?
Well, if your services present with WCF, why not simply consume the WCF services with JavaScript? Then you remove your IIS servers from the equation completely. If a user wants to see what the services are doing, they can retrieve the information directly from the service.
Here's a blog with someone showing how to do this: Call wcf service from Json
I have a WCF RESTful service that is hosted in IIS that is hit by several of our applications. The WCF services appear to operate fine for the most part, but sometimes it takes a long time to get a response from the service.
I was wondering whether there are good tutorials or resources on how best to configure WCF RESTful services to be web scale, whether through the web.config, from IIS, or from our dedicated application pool.
We have gone through our services and used NHibernate Profiler to find and optimize any problematic queries, and we also have memcached set up to help with performance. The problem seems to occur when many applications are consuming the service in a short period of time, or when the service has sat idle for a long period of time.
Thanks for any assistance.
Not sure if it's applicable to your scenario, but I read the blog post below on MSDN a couple of days ago. It's about a problem in the .NET IOCP thread pool which causes long response times for WCF when many requests are issued in a short time. Maybe that could help you?
WCF scales up slowly with bursts of work
KB2538826
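As a hedged illustration of the kind of mitigation those links discuss (raising the minimum thread counts so the pool does not ramp up slowly after a burst; the numbers below are placeholders to tune, not recommendations):

using System;
using System.Threading;

// Run once at startup, e.g. in Application_Start or before opening the ServiceHost.
int workerThreads, completionPortThreads;
ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);

// Raise the floor so a sudden burst of requests does not wait on the
// thread pool's gradual thread-injection rate.
ThreadPool.SetMinThreads(Math.Max(workerThreads, 50),
                         Math.Max(completionPortThreads, 50));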
There is no general advice for heavy-load issues, but one possible optimization is to use asynchronous operations on the server side: Scale WCF Application Better with Asynchronous Programming. It's about conserving thread pool resources while making database calls.
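For illustration, a task-based asynchronous WCF operation looks roughly like this (the contract, URI template and LoadOrderFromDatabaseAsync are made up; the point is that no thread is blocked while the database call is in flight):

using System.ServiceModel;
using System.ServiceModel.Web;
using System.Threading.Tasks;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [WebGet(UriTemplate = "orders/{id}")]
    Task<string> GetOrderAsync(string id);
}

public class OrderService : IOrderService
{
    public async Task<string> GetOrderAsync(string id)
    {
        // While the awaited call is pending, the thread returns to the pool,
        // so the service copes better with bursts of concurrent requests.
        string order = await LoadOrderFromDatabaseAsync(id); // placeholder for async NHibernate/ADO.NET access
        return order;
    }

    private Task<string> LoadOrderFromDatabaseAsync(string id)
    {
        // Placeholder standing in for a real asynchronous data-access call.
        return Task.FromResult("{ \"id\": \"" + id + "\" }");
    }
}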
As for the idle period issue, check out Configuring Recycling Settings for an Application Pool (IIS 7)
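If it helps, the same settings can also be adjusted from code via Microsoft.Web.Administration (a sketch assuming you administer the IIS box yourself; the pool name is made up):

using System;
using Microsoft.Web.Administration;

using (var serverManager = new ServerManager())
{
    ApplicationPool pool = serverManager.ApplicationPools["MyWcfAppPool"];

    // Stop IIS shutting the pool down after the default 20 idle minutes,
    // which is what makes the first request after a quiet period so slow.
    pool.ProcessModel.IdleTimeout = TimeSpan.Zero;

    // Disable the default 29-hour periodic recycle (or reschedule it off-hours in IIS Manager).
    pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;

    serverManager.CommitChanges();
}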