I am playing with the Windows Azure emulator, running an MVC website with a single controller method that calls Thread.Sleep(5000) before it returns.
On the client I run a loop that sends a POST request to the controller every 1000 ms, receives a reply from the server with the RoleEnvironment.CurrentRoleInstance.Id, and prints it on the screen.
I have 4 instances of my MVC worker role running.
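For reference, the setup looks roughly like this (a sketch; the controller and action names are mine, not the original code):

using System.Threading;
using System.Web.Mvc;
using Microsoft.WindowsAzure.ServiceRuntime;

public class ProbeController : Controller
{
    // Simulates a slow request, then reports which role instance served it.
    [HttpPost]
    public ActionResult Ping()
    {
        Thread.Sleep(5000);
        return Content(RoleEnvironment.CurrentRoleInstance.Id);
    }
}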
I understand that the Connection: keep-alive HTTP header can keep the browser from sending a request to a different instance, because an existing connection is open.
But still, even when I load the site in multiple browser windows, each one hangs while waiting for the Thread.Sleep(), and then (most of the time) keeps getting replies from the same instance.
Why doesn't Azure's load balancer send subsequent requests to a non-busy worker role instance? Do I need to manually mark it as busy?
You mentioned using the emulator, which doesn't handle load balancing the same way as Azure's real load balancers. See this post for details about the differences. I don't know exactly what is going on in your case, but... I'd suggest trying this out in Azure to see if you get the behavior you're expecting.
We have a website which loads and analyses Excel data and reports back to the user. The analysis takes, on average, over 5 minutes (depending on the data), during which time the client-server communication appears idle.
This website is hosted on Azure as a web app, and it seems that Azure's load balancer has an idle timeout, according to the following link:
https://azure.microsoft.com/en-us/blog/new-configurable-idle-timeout-for-azure-load-balancer/
The linked article mentions that:
In its default configuration, Azure Load Balancer has an ‘idle timeout’ setting of 4 minutes.
This means that if you have a period of inactivity on your tcp or http sessions for more than the timeout value, there is no guarantee to have the connection maintained between the client and your service.
Because of this issue, the end user constantly gets an HTTP status of 500 with a substatus of 121.
Currently we can't re-architect the system, nor can we change from deploying as a web app.
We have tried sending a jQuery AJAX request to the server on a set interval, but this doesn't seem to work.
The above article talks about keeping the TCP session alive using ServicePoint.SetTcpKeepAlive(), but we have no idea how to implement this in an MVC web application (we did not find any samples on the net either).
We really need to resolve this issue because it could make or break our project, so any help is appreciated; specifically, any working sample code using ServicePoint.SetTcpKeepAlive() in an MVC application would be greatly appreciated.
Thanks in advance.
UPDATE
I tried out what Irb mentioned, but still no luck. As you can see in the given image, I call KeepSessionAlive repeatedly. On every call to KeepSessionAlive I access a session variable, making sure the session does not time out. But the call to Save still returns 500. Again, this only happens on Azure.
A Web API method that returns JSON will not always keep the session alive. If you request something that invokes session state, you can call it on a timed interval from the client. Something as simple as requesting an image from a JPEG web handler could work. Here is a link to something similar.
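For what it's worth, a minimal sketch of that approach in MVC (the controller, action, and interval are illustrative, not taken from the question): the client calls an action on a timer, and the action touches session state so the session sees regular traffic.

using System;
using System.Web.Mvc;

public class KeepAliveController : Controller
{
    // Pinged by the client (e.g. via setInterval) every minute or so.
    public ActionResult Ping()
    {
        Session["LastPing"] = DateTime.UtcNow; // touch session state
        return Json(new { ok = true }, JsonRequestBehavior.AllowGet);
    }
}

Note that ServicePoint.SetTcpKeepAlive() (and the static ServicePointManager.SetTcpKeepAlive()) configures keep-alives on outbound HTTP connections that your application makes, not on the inbound browser connection, which may be why MVC samples for this scenario are hard to find.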
I have a website where I need to take a bit of data from the user, make an AJAX call to a .NET web service, and then the web service does some work for about 5-10 minutes.
I naturally don't want the user to have to sit there that whole time, so I have made it an asynchronous AJAX call to the web service, and after the call has been sent, I redirect the user to a "you are done!" page.
What I want to happen is for the web service to keep running to completion, and not abort, after it receives the information from the user.
From my testing, this is more or less what happens, but now I'm finding that this might be time-limited; i.e., if the web service runs past a certain amount of time, it will abort if the user isn't still connected.
I might be off in this assessment, but this is what I THINK is going on, based on my testing.
So my question is: with .NET web services, is this indeed what happens? Does the call get aborted after some time if the user isn't still on the other end? Is there any way to disable this abort?
Thanks in advance!
When you invoke a web service, it will always finish its work, even if the user leaves the page that invoked it.
Of course, web services have their own configuration, and one of the settings is a timeout.
If you're creating a WCF service (SOAP service), you can set it on the contract (by changing the binding properties); if you're creating a service with Web API or MVC (REST/HTTP service), you can either add it to the config file or set it programmatically in the controller, as follows:
HttpContext.Server.ScriptTimeout = 3600; //Number of seconds
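For the WCF case mentioned above, a minimal sketch of the binding properties involved (the binding type and values here are illustrative):

using System;
using System.ServiceModel;

// Raise the WCF timeouts on the binding (applies on either side of the contract).
var binding = new BasicHttpBinding();
binding.SendTimeout = TimeSpan.FromMinutes(15);    // time allowed to transmit a message
binding.ReceiveTimeout = TimeSpan.FromMinutes(15); // idle time before the channel is dropped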
Such a timeout can cause the web service to interrupt its work, but it is not related to what happens on the client side.
Have a nice day,
Alberto
Whilst I agree that the answer here is technically correct, I just wanted to post a more robust alternative approach that avoids some of the pitfalls possible with your current approach, such as:
Web Server being bounced during the long-running processing of request
Web Server App pool being recycled during processing
Web server running out of threads due to too many long-running requests and not being able to process any more requests
I would recommend you take a thoroughly asynchronous approach and use message queues (MSMQ, for example) with a trigger on the queue that will execute the work; a sketch follows the steps below.
The process would be:
Your page makes an AJAX call to the web service
Webservice writes a message into the Queue and returns right away. The message contains details of what work needs to be carried out.
User continues on your site as usual, or goes home, etc.
A trigger on the Queue is watching for messages, and when a message arrives in the queue, it activates a process which:
Reads the message
Performs the necessary work
Updates any back-end storage, etc, with the results of the work
This is much more robust because it totally decouples the web service from any long-running work, and it means that if the user makes a request and the web server goes down a moment later (for whatever reason), the work will still be queued up when the server comes back online.
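A minimal sketch of those steps with System.Messaging (the queue path and message shape are hypothetical; MSMQ must be installed, and a real system would add error handling and transactions):

using System;
using System.Messaging; // reference System.Messaging.dll

public static class WorkQueue
{
    const string QueuePath = @".\private$\AnalysisWork"; // hypothetical local private queue

    // Called by the web service: enqueue the work and return immediately.
    public static void Enqueue(string workDetails)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(workDetails, "analysis-request");
        }
    }

    // Run by the triggered/background process: block until a message arrives, then do the work.
    public static void ProcessNext()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive(); // blocks until a message is available
            string workDetails = (string)message.Body;
            // ... perform the long-running work and persist the results ...
        }
    }
}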
You can read more about it here (MSMQ is the MS Message Queue tech; there are many others!)
Just my 2c
I have a Windows Forms application that I've recently been handed to upgrade. It makes calls to two web services (using the .NET Web References functionality). One is over SSL, the other is not.
The first web service request after you open the client takes about 12 seconds; any other request takes about 0.5 seconds. This holds regardless of which web service you request first, and every subsequent request is fast, whichever service it targets, until you close the client.
After you open the client again, the first hit takes 12 seconds again.
I'm having a hard time searching for this because of the huge number of forum posts about the server-side first-load delay that occurs with IIS metadata. I'm familiar with that issue, and it is not what is occurring here.
Also, the database calls that the application performs have no such delay; because of that, I'm not leaning towards a network issue.
Any thoughts?
Thanks.
A delay that long is probably I/O related, either disk (generating XML serializers) or network (DNS resolution, certificates, strong name validation, etc.). Check the resource monitor: is the CPU, disk, or network loaded? If not, it's probably a network call stuck on a timeout.
Try capturing data with Process Monitor, which will include all disk and network traffic.
If the problem looks to be network-related, then Wireshark or Fiddler might give a clearer picture.
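If the disk side points at one-time XmlSerializer generation (common with Web References), one mitigation is to pay that cost in the background at startup. A sketch, with placeholder proxy and method names:

using System.Threading;

// Fire a throwaway call on a background thread when the form loads, so the
// one-time serializer/DNS/certificate cost is paid before the user's first
// real request. Warm-up is best-effort, so failures are swallowed.
ThreadPool.QueueUserWorkItem(_ =>
{
    try
    {
        using (var svc = new MyWebService())   // placeholder: the generated Web Reference proxy
        {
            svc.CheapNoOpCall();               // placeholder: any lightweight service method
        }
    }
    catch
    {
        // ignore warm-up failures
    }
});

Alternatively, the sgen.exe tool can pre-generate the XML serialization assembly at build time, so the delay never occurs.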
I have an Azure web role that accesses an external WCF based SOAP web service (port 80) for various bits of data. The response from this service is highly erratic. I routinely get the following error.
There was no endpoint listening at http://www.myexternalservice.com/service.svc that could accept the message. This is often caused by an incorrect address or SOAP action.
To isolate the problem, I created a simple console app that repeatedly calls the service at 1-second intervals and logs all responses.
while (true)
{
    using (var svc = new MyExternalService())
    {
        var stopwatch = Stopwatch.StartNew();
        var response = svc.CallService();
        stopwatch.Stop();
        Log(response, stopwatch.ElapsedMilliseconds);
    }
    Thread.Sleep(1000); // call the service at 1-second intervals
}
If I RDP to one of my Azure web instances and run this app, it takes 10 to 20 attempts before it gets a valid response from the external service. These first attempts are always accompanied by the above error. After this "warm up" period it runs fine. If I stop the app and then immediately restart it, it has to go through the same "warm up" period again.
However, if I run this same app from any other machine, I receive valid responses immediately. I have run this logger app on servers in multiple (non-Azure) data centers, on desktops on different networks, etc. These test runs are always very stable.
I am not sure why this service would react this way in the Azure environment. Unfortunately, for the short term I am forced to call this service but my users cannot tolerate this inconsistency.
A capture of network traffic on the Azure server shows a large number of SYN retransmits at 10-second intervals during the same period in which I experience the connection errors. Once the "warm up" is complete, the SYN retransmits no longer occur.
The Windows Azure data center region where the Windows Azure application is deployed might not be near the external web service, while the local machines you tested from (which work fine) might be close to it. That would mean much higher latency from Azure, which could be causing the failures.
Successfully accessing the WSDL from a browser in the Azure VM might be due to browser caching; making an actual function call from the browser would tell you whether it is really making a connection.
We found a solution for this problem, although I am not completely happy with it. After exhausting all other courses of action, we changed the load balancer from Layer-4 to Layer-7 load balancing. While this fixed the problem of lost requests, I am not sure why it made a difference.
I am trying to build a COMET solution using ASP.NET.
The trouble is that I want to implement both the sending and the notification parts in the same page.
On IE7, whenever I try to send a request, it just gets queued up.
After reading pages on the internet and Stack Overflow, I found that I can only make two simultaneous async AJAX requests per page.
So until I close my comet AJAX request, my second request doesn't complete; it doesn't even leave the browser. And when I checked with Firefox, I saw just one comet AJAX request running the whole time, so doesn't that leave me one more AJAX request?
Also, the solution used IRequiresSessionState on the asynchronous HTTP handler, which I had removed. But it still causes problems with multiple instances of IE7.
I had one workaround, which is described here: http://support.microsoft.com/kb/282402
It means we can increase the request limit (which defaults to 2) through the registry: by changing the "MaxConnectionsPer1_0Server" value under the key "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings", we can increase the number of simultaneous requests.
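For illustration, the same change can be made from C# (this edits the current user's registry; per the KB article, "MaxConnectionsPerServer" is the companion value for HTTP 1.1 servers, and the value 8 here is arbitrary):

using Microsoft.Win32;

const string KeyPath =
    @"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings";

// Per KB 282402: MaxConnectionsPerServer governs HTTP 1.1 servers,
// MaxConnectionsPer1_0Server governs HTTP 1.0 servers.
Registry.SetValue(KeyPath, "MaxConnectionsPerServer", 8, RegistryValueKind.DWord);
Registry.SetValue(KeyPath, "MaxConnectionsPer1_0Server", 8, RegistryValueKind.DWord);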
Basically, I want to broadcast information to multiple clients connected to a server using Comet, and the clients can also send messages to the server.
Broadcasting works, but the send request back to the server doesn't.
I'm using IIS 6 and ASP.NET.
Are there any more workarounds or ways to send more requests?
References:
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
AJAX, PHP Sessions and simultaneous requests
jquery .ajax request blocked by long running .ajax request
jQuery: Making simultaneous ajax requests, is it possible?
You are limited to 2 connections, but typically that's all you need - 1 to send, 1 to receive, even in IE.
That said, you can totally do this; we do it all the time in WebSync. The solution lies in subdomains.
The thing to note is that IE (and other browsers, although they typically limit to 6 requests, not 2) limits requests per domain, and each subdomain counts as a separate domain for this limit. So for example, you can have 2 requests open to "www.stackoverflow.com" and 2 more requests open to "static.stackoverflow.com", all at the same time.
Now, you've got to be somewhat careful with this approach, because if you make a request from the www subdomain to the static subdomain, that's considered a cross-domain request, so you're immediately limited to not using direct XHR calls, but at that point you have nevertheless bypassed the 2 connection limit; JSONP, HTML5, etc, are all your friend for bypassing the cross-domain limitations.
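As an illustration of the JSONP half of that, here is a sketch of a server endpoint that wraps its JSON payload in the caller-supplied callback (the controller name and payload are hypothetical; the client loads this URL via a script tag instead of XHR):

using System.Web.Mvc;

public class CometController : Controller
{
    // A page on www.example.com can load this from comet.example.com via a
    // <script src="..."> tag, sidestepping the XHR same-origin restriction.
    public ActionResult Poll(string callback)
    {
        string json = "{\"message\":\"hello\"}"; // real code would serialize pending messages
        return Content(callback + "(" + json + ");", "application/javascript");
    }
}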
Edit
Managing more than one instance of IE comes back to the same problem; the limitation applies across all instances. So, if you have two browser windows open, and they're both using comet, you're stuck with 2 long-polling connections open. If you've maximized your options, you're going to be connecting those long-polling requests to something like "comet.mysite.com", and your non-long-polling requests will go to "mysite.com". That's the best you'll get without going into wildcard DNS.
Check out some of our WebSync Demos; they work in 2 instances of IE without a problem. If you check out the source, you'll see that the DNS for the streaming connection is different from the main page; we use JSONP to bypass the cross-domain limitation.
The main idea in COMET is to keep one client-to-server request open, until a response is necessary.
If you design your code properly, then you don't need more than 2 requests to be open simultaneously. Here's how it works:
client uses a central message send-receive loop to send out a request to the server
server receives the request and keeps it open.
at some point, the server responds to the client.
the client (browser) receives the response, handles it in its central message loop.
immediately the client sends out another request.
repeat
The key is to centralize and asynchronize all communications in the client. So you will never need to have 2 open requests.
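To make the loop concrete, here is a deliberately simplified server-side sketch (MessageBus is a hypothetical in-memory hub, and a blocking handler like this ties up a worker thread; production comet code would use an asynchronous handler instead):

using System;
using System.Web;

public class PollHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Hold the request until a broadcast arrives or 30 seconds elapse,
        // then respond; the client immediately re-requests, closing the loop.
        string message = MessageBus.WaitForMessage(TimeSpan.FromSeconds(30));

        context.Response.ContentType = "application/json";
        context.Response.Write(message ?? "{}");
    }
}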
But to answer your question directly, no, there are no additional workarounds.
Raise the connection limit or reduce the number of connections you use.