HttpWebRequest fails after third call - c#

here's my method:
private static void UpdatePref(List<EmailPref> prefList)
{
    if (prefList.Count > 0)
    {
        foreach (EmailPref pref in prefList)
        {
            UpdateEmailRequest updateRequest = new UpdateEmailRequest(pref.ID.ToString(), pref.Email, pref.ListID.ToString());
            UpdateEmailResponse updateResponse = (UpdateEmailResponse)updateRequest.SendRequest();
            if (updateResponse.Success)
            {
                Console.WriteLine(String.Format("Update Successful. ListID:{0} Email:{1} ID:{2}", pref.ListID, pref.Email, pref.ID));
                continue;
            }
            Console.WriteLine(String.Format("Update Unsuccessful. ListID:{0} Email:{1} ID:{2}\n", pref.ListID, pref.Email, pref.ID));
            Console.WriteLine(String.Format("Error:{0}", updateResponse.ErrorMessage));
        }
        Console.WriteLine("Updates Complete.");
        return;
    }
    Console.WriteLine("Process ended. No records found to update.");
}
The list has around 84 valid records that it loops through, sending an API request for each. But it stops on the 3rd API call, processing only 2 of the 84 records. When I debug to see what's happening, I only see that it stops in my SendRequest method without producing any error. It stops at GetRequestStream, and when I step to that line and try to keep stepping, it just stops and my application stops running without any error!
HttpWebRequest request = CreateWebRequest(requestURI, data.Length);
request.ContentLength = data.Length;
request.KeepAlive = false;
request.Timeout = 30000;
// Send the Request
requestStream = request.GetRequestStream();
Eventually, if I let it keep running, I do get the error "The operation has timed out". But then why did the first 2 calls go through while this one timed out? I don't get it.
Also, a second question: is it inefficient to create a new object inside my foreach for sending and receiving? That's how I stubbed out those classes, requiring an email, ListID and so forth in order to send that type of API call. I just didn't know whether creating a new instance on each iteration of the foreach is fine or inefficient. It might be common, but it just felt weird and inefficient to me.

EDIT: It seems you answered your own question already in the comments.
I don't have personal experience with this, but it seems you need to close the HTTP web response after you've fetched it. By default there's a limit of two open connections per host, and a connection isn't freed until you call Close(). See http://blogs.msdn.com/feroze_daud/archive/2004/01/21/61400.aspx, which gives the following code to demonstrate the symptoms you're seeing.
for (int i = 0; i < 3; i++)
{
    HttpWebRequest r = WebRequest.Create("http://www.microsoft.com") as HttpWebRequest;
    HttpWebResponse w = r.GetResponse() as HttpWebResponse;
}

One possibility for it timing out is that the server you're talking to is throttling you. You might try inserting a delay (a second, maybe?) after each update.
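A minimal way to test that theory against the loop in the question (the one-second pause is just a guess to see whether pacing helps, not a recommended production value):

```csharp
foreach (EmailPref pref in prefList)
{
    // ... send the request and handle the response as before ...

    // Hypothetical pacing delay: wait one second between updates to rule out
    // server-side throttling. Tune or remove once the real cause is known.
    System.Threading.Thread.Sleep(1000);
}
```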
Assuming that UpdateEmailRequest and UpdateEmailResponse are somehow derived from WebRequest and WebResponse respectively, it's not particularly inefficient to create the requests the way you're doing it; that's pretty standard. However, note that WebResponse is IDisposable, meaning that it probably holds unmanaged resources, and you should dispose of it, either by calling the Dispose method directly or by wrapping it in a using block. Something like this:
UpdateEmailResponse updateResponse = (UpdateEmailResponse)updateRequest.SendRequest();
try
{
    if (updateResponse.Success)
    {
        Console.WriteLine(String.Format("Update Successful. ListID:{0} Email:{1} ID:{2}", pref.ListID, pref.Email, pref.ID));
        continue;
    }
    Console.WriteLine(String.Format("Update Unsuccessful. ListID:{0} Email:{1} ID:{2}\n", pref.ListID, pref.Email, pref.ID));
    Console.WriteLine(String.Format("Error:{0}", updateResponse.ErrorMessage));
}
finally
{
    updateResponse.Dispose();
}
I guess it's possible that not disposing of the response objects keeps an open connection to the server, and the server is timing out because you have too many open connections.
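Equivalently, a using block keeps the cleanup implicit (this assumes UpdateEmailResponse implements IDisposable, as a WebResponse-derived type would):

```csharp
using (UpdateEmailResponse updateResponse = (UpdateEmailResponse)updateRequest.SendRequest())
{
    if (updateResponse.Success)
    {
        Console.WriteLine("Update Successful. ListID:{0} Email:{1} ID:{2}", pref.ListID, pref.Email, pref.ID);
    }
    else
    {
        Console.WriteLine("Update Unsuccessful. ListID:{0} Email:{1} ID:{2}", pref.ListID, pref.Email, pref.ID);
        Console.WriteLine("Error:{0}", updateResponse.ErrorMessage);
    }
} // Dispose() is called here even if an exception is thrown.
```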

Related

Websocket 'unable to connect to the remote server' after random number of connections

I've written a small Winforms application in C# to load test an AWS websockets API that triggers a Lambda function. The application makes n calls to the API, with a given period, each submitting a randomised payload in the request. Different payloads result in different runtimes for the Lambda function (between a fraction of a second and several minutes).
Calling the API involves the following steps:
Connect
Send a message containing credentials, the route action and
the request payload (containing a small amount of data needed to
fulfil the request)
Receive the result
Disconnect
These steps are carried out in a Task which is added to a List<Task>. These tasks are then run using Task.WhenAll(taskList). Simplified (redacted) code is below. I'm completely prepared for people who know more than me to tell me it's terrible.
async Task RunTest() // Triggered by a button.
{
    List<Task> taskList = new List<Task>();
    for (int i = 0; i < numberOfRequests; i++)
    {
        // Generate inputPayload string.
        taskList.Add(CallAPI(inputPayload, i, i * period));
    }
    await Task.WhenAll(taskList);
}

public async Task CallAPI(Dictionary<string, double> requestBody, int requestNumber, int delay)
{
    if (requestNumber > 0) await Task.Delay(delay); // No need to delay the first one (although 'delay' is 0 on the first one anyway).
    using (ClientWebSocket websocketClient = new ClientWebSocket())
    {
        CancellationToken cancellationToken = new CancellationToken();
        await websocketClient.ConnectAsync(new Uri("wss://..."), cancellationToken); // Exception is thrown at this line after a random number of tasks.
        InputStructure requestPayload = new InputStructure
        {
            Action = "RouteThatCallsLambda",
            Name = nameTextBox.Text,
            ApiKey = apiKeyTextBox.Text,
            ApiRequestBody = requestBody
        };
        while (websocketClient.State == System.Net.WebSockets.WebSocketState.Open)
        {
            byte[] messageBuffer = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(requestPayload));
            await websocketClient.SendAsync(new ArraySegment<byte>(messageBuffer), System.Net.WebSockets.WebSocketMessageType.Text, true, cancellationToken).ConfigureAwait(false);
            break;
        }
        // All the 'record' bits do here is write stuff to a text box on the UI, and to a List<LogEntry> that I use to write out to a CSV file at the very end.
        ArraySegment<byte> buffer;
        System.Net.WebSockets.WebSocketReceiveResult receiveResult;
        MemoryStream memoryStream;
        while (websocketClient.State == System.Net.WebSockets.WebSocketState.Open)
        {
            buffer = new ArraySegment<byte>(new byte[8192]);
            receiveResult = null;
            memoryStream = new MemoryStream();
            do
            {
                receiveResult = await websocketClient.ReceiveAsync(buffer, CancellationToken.None);
                memoryStream.Write(buffer.Array, buffer.Offset, receiveResult.Count);
            }
            while (!receiveResult.EndOfMessage);
            memoryStream.Seek(0, SeekOrigin.Begin);
            if (receiveResult.MessageType == System.Net.WebSockets.WebSocketMessageType.Text)
            {
                StreamReader streamReader = new StreamReader(memoryStream, Encoding.UTF8);
                string resultPayload = await streamReader.ReadToEndAsync();
                // If successful, the payload will contain "validData".
                if (resultPayload.Contains("validData"))
                {
                    try
                    {
                        // Record the success.
                    }
                    catch
                    {
                        // Record the error (which in most cases would be a deserialisation exception).
                    }
                    await websocketClient.CloseAsync(System.Net.WebSockets.WebSocketCloseStatus.NormalClosure, null, CancellationToken.None);
                }
                else if (resultPayload.Contains("ping"))
                {
                    // Ignore - the Lambda function sends a message for long-running requests to keep the connection alive.
                }
                else // Failed.
                {
                    // Record the error message sent by the Lambda function.
                    await websocketClient.CloseAsync(System.Net.WebSockets.WebSocketCloseStatus.NormalClosure, null, CancellationToken.None);
                }
            }
            break;
        }
        if (websocketClient.State == System.Net.WebSockets.WebSocketState.Closed)
        {
            // Record the connection closure.
        }
    }
    if (requestNumber == numberOfRequests - 1)
    {
        // Record process complete.
    }
}
The most I've ever set numberOfRequests to is 100 but it never gets that far before websocketClient.ConnectAsync() throws an 'unable to connect to the remote server' exception. In the CloudWatch API log stream, it reports 'Method completed with status: 410' which does suggest a client-side issue, but why it would strike at random I don't know.
Usually it gets to between 60 and 80 but sometimes after only a handful. Because it seems to be random, sometimes if I set numberOfRequests to much fewer it runs successfully all the way through. I've never seen any problems when I've set it to 1.
Does anyone have any idea what's going on?
Update:
[I originally posted the following as an answer to my own question, but it appears that all it's done is make the exception rarer. I have no idea why that would be the case.]
It appears I've solved it. I saw on a couple of websites the following way of doing things but I didn't think it would make any difference. However, on the basis that I already had an inkling that the problem was due to some strange threading issue, I gave it a go anyway.
I moved the two while (websocketClient.State == System.Net.WebSockets.WebSocketState.Open) blocks into their own separate async Tasks, one for sending the message and one for receiving the result. Then immediately after websocketClient.ConnectAsync() I await a call to each in turn, passing the necessary parameters:
await websocketClient.ConnectAsync(new Uri("wss://..."), CancellationToken.None);
await SendMessage(websocketClient, requestBody);
await ReceiveMessage(websocketClient);
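A sketch of what those two helper tasks might look like, reconstructed from the loops in the original CallAPI (the names SendMessage/ReceiveMessage come from the description above; the bodies are my reconstruction, not the poster's actual code):

```csharp
// Hypothetical reconstruction: the send loop from CallAPI, as its own task.
private async Task SendMessage(ClientWebSocket websocketClient, InputStructure requestPayload)
{
    if (websocketClient.State != WebSocketState.Open) return;
    byte[] messageBuffer = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(requestPayload));
    await websocketClient.SendAsync(new ArraySegment<byte>(messageBuffer),
        WebSocketMessageType.Text, true, CancellationToken.None);
}

// Hypothetical reconstruction: the receive loop from CallAPI, as its own task.
private async Task ReceiveMessage(ClientWebSocket websocketClient)
{
    var buffer = new ArraySegment<byte>(new byte[8192]);
    using (var memoryStream = new MemoryStream())
    {
        WebSocketReceiveResult receiveResult;
        do
        {
            receiveResult = await websocketClient.ReceiveAsync(buffer, CancellationToken.None);
            memoryStream.Write(buffer.Array, buffer.Offset, receiveResult.Count);
        }
        while (!receiveResult.EndOfMessage);
        memoryStream.Seek(0, SeekOrigin.Begin);
        // ... decode the message and record/close as in the original loop ...
    }
}
```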
TLDR: When setting up a websocket API on AWS, use mock endpoints for the connect and disconnect routes, not Lambda functions, to return the 200 response.
Solved it. It seems the 'unable to connect' errors were a secondary consequence of how I'd set up the websocket API routes.
Having followed an article on this, I'd set up two Lambda functions that return a 200 response, serving as the endpoints for the $connect and $disconnect websocket routes. Each contains one line of JavaScript that returns {"statusCode":200}.
What seems to have been happening is that when the 'connect' Lambda function was invoked it was returning a 'rate exceeded' error instead of a 200 response, which appears to the client as 'unable to connect'.
The solution: dispense with the Lambda functions altogether and use mock endpoints that pass through a template containing the 200 response instead.
So now I'm always able to connect because I don't need to invoke a Lambda function just to create a websocket connection. Instead, the 'rate exceeded' error occurs when the message is sent to invoke the function that does the actual data processing, and it's much more obvious what's going on.
Fundamentally, the problem seems to be that Lambda concurrency is only set to 10, despite the documentation stating that the default is 1,000. I'm now in a better position to evidence a request for an increase.

GetRequestStream method and hanging thread

Assume I have the following code:
private string PostData(string functionName, string parsedContent)
{
    string url = // some url;
    var http = (HttpWebRequest)WebRequest.Create(new Uri(url));
    http.Accept = "application/json";
    http.ContentType = "application/json";
    http.Method = "POST";
    http.Timeout = 15000; // 15 seconds
    Byte[] bytes = Encoding.UTF8.GetBytes(parsedContent);
    using (Stream newStream = http.GetRequestStream())
    {
        newStream.Write(bytes, 0, bytes.Length);
    }
    using (WebResponse response = http.GetResponse())
    {
        using (var stream = response.GetResponseStream())
        {
            var sr = new StreamReader(stream);
            var content = sr.ReadToEnd();
            return content;
        }
    }
}
I set up a breakpoint over this line of code:
using (Stream newStream = http.GetRequestStream())
before http.GetRequestStream() gets executed. Here is a screenshot of my active threads:
This whole method runs in a background thread with ThreadId = 3, as you can see.
After pressing F10, the http.GetRequestStream() method executes. Here is an updated screenshot of the active threads:
As you can see, we now have one extra active thread in a waiting state. Presumably the http.GetRequestStream() method spawned it. Everything is fine, but... this thread keeps hanging around like that for the whole app lifecycle, which doesn't seem to be the intended behaviour.
Am I misusing GetRequestStream somehow?
If I use ILSpy, it looks like the request is sent asynchronously. That would explain the extra thread.
Looking a little deeper, HttpWebRequest creates a static TimerQueue with one thread running a never-ending loop that has a Monitor.WaitAny in it. Every web request in the appdomain registers a timer callback for timeout handling, and all those callbacks are handled by that thread. Because the queue is static, that instance will never get garbage collected, and therefore it will keep hold of the thread.
It does register for the AppDomain.Unload event, so if that fires it will clean up its resources, including any threads.
Do note that these are all internal classes, and those implementation details might change at any time.

Multithread HttpWebRequest hangs randomly on responseStream

I'm coding a multithreaded web crawler that performs a lot of concurrent HttpWebRequests every second using hundreds of threads. The application works great, but sometimes (randomly) one of the web requests hangs on GetResponseStream(), completely ignoring the timeout (this happens when I perform hundreds of requests concurrently), making the crawling process never end. The strange thing is that with Fiddler this never happens and the application never hangs. It is really hard to debug because it happens randomly.
I've tried to set
Keep-Alive = false
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
but I still get the strange behavior. Any ideas?
Thanks
HttpWebRequest code:
public static string RequestHttp(string url, string referer, ref CookieContainer cookieContainer_0, IWebProxy proxy)
{
    string str = string.Empty;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
    request.UserAgent = randomuseragent();
    request.ContentType = "application/x-www-form-urlencoded";
    request.Accept = "*/*";
    request.CookieContainer = cookieContainer_0;
    request.Proxy = proxy;
    request.Timeout = 15000;
    request.Referer = referer;
    //request.ServicePoint.MaxIdleTime = 15000;
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        using (Stream responseStream = response.GetResponseStream())
        {
            List<byte> list = new List<byte>();
            byte[] buffer = new byte[0x400];
            int count = responseStream.Read(buffer, 0, buffer.Length);
            while (count != 0)
            {
                list.AddRange(buffer.ToList<byte>().GetRange(0, count));
                if (list.Count >= 0x100000)
                {
                    break;
                }
                count = 0;
                try
                {
                    // HERE IT HANGS SOMETIMES --->
                    count = responseStream.Read(buffer, 0, buffer.Length);
                    continue;
                }
                catch
                {
                    continue;
                }
            }
            //responseStream.Close();
            int num2 = 0x200 * 0x400;
            if (list.Count >= num2)
            {
                list.RemoveRange((num2 * 3) / 10, list.Count - num2);
            }
            byte[] bytes = list.ToArray();
            str = Encoding.Default.GetString(bytes);
            Encoding encoding = Encoding.Default;
            if (str.ToLower().IndexOf("charset=") > 0)
            {
                encoding = GetEncoding(str);
            }
            else
            {
                try
                {
                    encoding = Encoding.GetEncoding(response.CharacterSet);
                }
                catch
                {
                }
            }
            str = encoding.GetString(bytes);
            // response.Close();
        }
    }
    return str.Trim();
}
The Timeout property "Gets or sets the time-out value in milliseconds for the GetResponse and GetRequestStream methods." The default value is 100,000 milliseconds (100 seconds).
The ReadWriteTimeout property, "Gets or sets a time-out in milliseconds when writing to or reading from a stream." The default is 300,000 milliseconds (5 minutes).
You're setting Timeout, but leaving ReadWriteTimeout at the default, so your reads can take up to five minutes before timing out. You probably want to set ReadWriteTimeout to a lower value. You might also consider limiting the size of data that you download. With my crawler, I'd sometimes stumble upon an unending stream that would eventually result in an out of memory exception.
Something else I noticed when crawling is that sometimes closing the response stream will hang. I found that I had to call request.Abort to reliably terminate a request if I wanted to quit before reading the entire stream.
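A sketch of those suggestions applied to the question's setup (the specific timeout values are illustrative, not recommendations):

```csharp
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Timeout = 15000;          // applies to GetResponse()/GetRequestStream()
request.ReadWriteTimeout = 15000; // applies to stream reads/writes (default is 300,000 ms)

// Elsewhere, e.g. from a watchdog, if you need to give up before the whole
// stream has been read, Abort() tears the request down reliably; merely
// closing the response stream can itself hang:
// request.Abort();
```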
There is nothing apparent in the code you provided.
Why did you comment response.Close() out?
The documentation hints that connections may run out if not explicitly closed. Disposing the response may close the connection, but just releasing all the resources is not optimal, I think. Closing the response also closes the stream, so that is covered.
The system hanging without a timeout could simply be a network issue making the response object a dead duck, or the problem could be due to the high number of threads resulting in memory fragmentation.
Looking at anything that may produce a pattern may help find the source:
How many threads are typically running (can you bundle request sets in less threads)
How is the network performance at the time the thread stopped
Is there a specific count or range when it happens
What data was processed last when it happened (are there any specific control characters or sequences of data that can upset the stream)
Want to ask more questions but not enough reputation so can only reply.
Good luck!
Below is some code that does something similar; it's also used to access multiple web sites, with each call in a different task. The difference is that I only read the stream once and then parse the results. That might be a way to get around the stream reader locking up randomly, or at least make it easier to debug.
try
{
    _webResponse = (HttpWebResponse)_request.GetResponse();
    if (_request.HaveResponse)
    {
        if (_webResponse.StatusCode == HttpStatusCode.OK)
        {
            var _stream = _webResponse.GetResponseStream();
            using (var _streamReader = new StreamReader(_stream))
            {
                string str = _streamReader.ReadToEnd();
                // ... parse str ...
            }
        }
    }
}
catch (WebException)
{
    // ... handle/log the error ...
}

C# HttpWebResponse Timeout doesn't work

I have a function that checks whether a website is available.
public bool ConnectionAvailable(string strServer)
{
    try
    {
        HttpWebRequest reqFP = (HttpWebRequest)HttpWebRequest.Create(strServer);
        reqFP.Timeout = 10000;
        HttpWebResponse rspFP = (HttpWebResponse)reqFP.GetResponse();
        if (HttpStatusCode.OK == rspFP.StatusCode)
        {
            // HTTP = 200 - Internet connection available, server online
            rspFP.Close();
            return true;
        }
        else
        {
            // Other status - Server or connection not available
            rspFP.Close();
            return false;
        }
    }
    catch (WebException)
    {
        // Exception - connection not available
        return false;
    }
}
It's not my code; I found it on the net.
The problem is when some website isn't available.
I want to wait x milliseconds (set in reqFP.Timeout), and then the function should return false.
But every time I have to wait ~20 seconds (even if I set 10 seconds as the timeout).
Do you have any idea what is wrong?
PS: Sorry for language mistakes.
From the MSDN article:
A Domain Name System (DNS) query may take up to 15 seconds to return or time out. If your request contains a host name that requires resolution and you set Timeout to a value less than 15 seconds, it may take 15 seconds or more before a WebException is thrown to indicate a timeout on your request.
Could that be the case here? Try the same code but using an IP address instead of a hostname.
Also, when you get false after waiting 20 seconds, are you sure it's because of timeout and not because the server returned something other than "200"?
Try this property: ReadWriteTimeout

Timer in C# windows service not restarting

I have a windows service that runs four timers for a monitoring application. The timer in question opens a web request, polls a rest web service, and saves the results in a database.
Please see the elapsed method below:
void iSMSPollTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    iSMSPollTimer.Stop();
    try
    {
        Logger.Log("iSMSPollTimer elapsed - polling iSMS modem for new messages");
        string url = "http://...:../recvmsg?user=" + iSMSUser + "&passwd=" + iSMSPassword;
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream resStream = response.GetResponseStream();
        XmlSerializer serializer = new XmlSerializer(typeof(Response));
        using (XmlReader reader = XmlReader.Create(resStream))
        {
            Response responseXml = (Response)serializer.Deserialize(reader);
            if (responseXml.MessageNotification != null)
            {
                foreach (var messageWrapper in responseXml.MessageNotification)
                {
                    DataContext dc = new DataContext();
                    DateTime monitorTimestamp = DateTime.Now;
                    if (messageWrapper.Message.ToUpper().EndsWith("..."))
                    {
                        // Saved to DB
                    }
                    else if (messageWrapper.Message.ToUpper().EndsWith("..."))
                    {
                        // Saved to DB
                    }
                    dc.SubmitChanges();
                }
            }
            else
            {
                Logger.Log("No messages waiting in the iSMS Modem");
            }
        }
        Logger.Log("iSMSPollTimer processing completed");
    }
    catch (Exception ex)
    {
        Logger.Log(GetExceptionLogMessage("iSMSPollTimer_Elapsed", ex));
        Logger.Debug(GetExceptionLogMessage("iSMSPollTimer_Elapsed", ex));
    }
    finally
    {
        iSMSPollTimer.Start();
    }
}
When I look at the log messages, I do get "iSMSPollTimer processing completed" and randomly afterwards the timer does not restart.
Any thoughts?
I'm thinking there's a potential reentrancy problem here, but I can't put my finger on it exactly.
I would suggest that, rather than calling Timer.Stop and then Timer.Start, set the timer's AutoReset property to false when you create it. That will prevent any reentrancy problems because the timer is automatically stopped the first time the interval elapses.
Your handler code remains the same except that you remove the code that calls iSMSPollTimer.Stop.
I'm not saying that this will solve your problem for sure, but it will remove the lingering doubt about a reentrancy problem.
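A minimal sketch of that setup (the timer name comes from the question; the 60-second interval is an assumed value):

```csharp
// Create the timer once, e.g. in service startup. AutoReset = false means the
// timer fires exactly once per Start(), so the handler can never re-enter.
iSMSPollTimer = new System.Timers.Timer(60000); // assumed 60-second interval
iSMSPollTimer.AutoReset = false;
iSMSPollTimer.Elapsed += iSMSPollTimer_Elapsed;
iSMSPollTimer.Start();

// In the handler, drop the iSMSPollTimer.Stop() call and keep
// iSMSPollTimer.Start() in the finally block, so the next tick is scheduled
// only after the current one finishes.
```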
This is a pretty well-known issue with using timers in a .NET service. People will tell you to use a different type of timer (Threading vs. System), but in the end those will fail you too. How long before they stop triggering? The shorter your interval, the sooner it fails. If you set it to 1 second, you'll see it happen every couple of hours.
The only workaround I found that works for me is not depending on timers at all, and instead using a while loop with a Sleep call inside.
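A sketch of that loop-based approach on a dedicated worker thread. The names here are illustrative, and I've used ManualResetEvent.WaitOne as an interruptible sleep in place of a bare Thread.Sleep so the service can shut down promptly:

```csharp
// Hypothetical worker loop replacing the timer: poll, then wait, until the
// service signals shutdown by setting _stopSignal.
private readonly ManualResetEvent _stopSignal = new ManualResetEvent(false);

private void PollLoop()
{
    // WaitOne doubles as the sleep: it returns true only when shutdown is signalled.
    while (!_stopSignal.WaitOne(TimeSpan.FromSeconds(60)))
    {
        try
        {
            PollModem(); // the existing handler body, minus the Stop()/Start() calls
        }
        catch (Exception ex)
        {
            Logger.Log(GetExceptionLogMessage("PollLoop", ex));
        }
    }
}
```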
