I have an Azure Function that sends a request to a URL and returns the response. This function kept failing with a timeout error for URLs from a particular domain (confidential).
To debug this, I created a very minimal Azure function:
var content = string.Empty;
using (var response = await _httpClient.GetAsync(url))
{
    response.EnsureSuccessStatusCode();
    // ReadAsStringAsync (not ReadAsByteArrayAsync) so the result can be assigned to the string above
    content = await response.Content.ReadAsStringAsync();
}
return new OkObjectResult(content);
This code works fine locally. When I try the deployed Azure Function, it works for all the other domains I tried (e.g. https://google.com), but it hits a request timeout error for the particular domain after trying for about 90 seconds. The error happens at this line: _httpClient.GetAsync(url). Again, it works fine for this (confidential) domain locally.
I have tried deploying the Azure Function to two completely different Azure service plans and regions, with the same result: it doesn't work for URLs from the required domain but works for URLs of other domains.
Error:
System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request..
Update (solution):
I tried sending the request from Postman, copied the C# code it generated, deployed that to the Azure Function, and it now works for the problematic domain. Something like this:
var client = new RestClient(url);
client.Timeout = -1; // -1 means an infinite timeout in RestSharp
var request = new RestRequest(Method.GET);
IRestResponse response = client.Execute(request);
The key here is client.Timeout = -1, which seems to have fixed the problem.
Now, in my original code, I tried setting HttpClient's timeout to Timeout.InfiniteTimeSpan, both in the Startup configuration and at the individual request level, but it did not work:
services.AddHttpClient("AzureTestClient", options =>
{
    options.Timeout = Timeout.InfiniteTimeSpan;
});
Am I setting the timeout wrong in the HttpClient solution?
If you are using a Consumption plan, then maybe the confidential URL's host needs to whitelist the whole Azure data center's IP range. You can follow the guide here, or consider upgrading from the Consumption plan to a Premium one, which can have a dedicated linked VNet.
Maybe your local machine is already whitelisted by the domain, while the Azure Function operates from a different IP range.
Another reason may be that the URL returns an HTTP status code that isn't in the successful range (200-299), so it fails at EnsureSuccessStatusCode in the old code?
Normally, for the HttpClient initialization, I do something like this:
public void Configure(IWebJobsBuilder builder)
{
    builder.Services.AddHttpClient("AzureTestClient",
        options => { options.Timeout = Timeout.InfiniteTimeSpan; });
}
Then, when I want to use it in any other function, I resolve the named client like this, and it worked:
var client = clientFactory.CreateClient("AzureTestClient");
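For reference, here is a minimal sketch of how the named client gets into a function via constructor injection; the class name, trigger, and query parameter are illustrative assumptions, not from the original post:

public class TestFunction
{
    private readonly IHttpClientFactory clientFactory;

    // The Functions host injects IHttpClientFactory once AddHttpClient is registered in Startup.
    public TestFunction(IHttpClientFactory clientFactory)
    {
        this.clientFactory = clientFactory;
    }

    [FunctionName("TestFunction")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        // Resolves the named client configured above, with its infinite timeout.
        var client = clientFactory.CreateClient("AzureTestClient");
        var content = await client.GetStringAsync(req.Query["url"]);
        return new OkObjectResult(content);
    }
}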
When I use Postman to try uploading a large file to my server (written in .NET Core 2.2), Postman immediately shows the HTTP Error 404.13 - Not Found error: The request filtering module is configured to deny a request that exceeds the request content length
But when I use my code to upload that large file, it gets stuck at the line that sends the file.
My client code:
public async Task TestUpload() { // async Task (not async void) so exceptions propagate to the caller
    StreamContent streamContent = new StreamContent(File.OpenRead("D:/Desktop/large.zip"));
    streamContent.Headers.Add("Content-Disposition", "form-data; name=\"file\"; filename=\"large.zip\"");
    MultipartFormDataContent multipartFormDataContent = new MultipartFormDataContent();
    multipartFormDataContent.Add(streamContent);
    HttpClient httpClient = new HttpClient();
    Uri uri = new Uri("https://localhost:44334/api/user/testupload");
    try {
        HttpResponseMessage httpResponseMessage = await httpClient.PostAsync(uri, multipartFormDataContent);
        bool success = httpResponseMessage.IsSuccessStatusCode;
    }
    catch (Exception ex) {
        // Swallowing the exception here hides upload failures; at least log it.
    }
}
My server code:
[HttpPost, Route("testupload")]
public async Task UploadFile(IFormFileCollection formFileCollection) {
    // Read the uploaded files directly from the request's form.
    IFormFileCollection formFiles = Request.Form.Files;
    foreach (var item in formFiles) {
        // Save each uploaded file to disk.
        using (var stream = new FileStream(Path.Combine("D:/Desktop/a", item.FileName), FileMode.Create)) {
            await item.CopyToAsync(stream);
        }
    }
}
My client code gets stuck at the line HttpResponseMessage httpResponseMessage = await httpClient.PostAsync(uri, multipartFormDataContent), while the server doesn't receive any request (I used a breakpoint to confirm that).
It gets stuck longer if the file is bigger. Looking at Task Manager, I can see my client program using high CPU and disk as it actually uploads the file to the server. After a while, the code moves to the next line, which is
bool success = httpResponseMessage.IsSuccessStatusCode
Then, by reading the response content, I get exactly the same result as Postman.
Now I want to know how to get the error immediately, so I can notify the user in time; I don't want to wait that long.
Note that when I use Postman to upload large files, my server doesn't receive any request either. I think I am missing something; maybe there is a problem with my client code.
EDIT: Actually, I think it is a client-side error. But even if it is a server-side error, that still doesn't change much for me. Let me clarify my thinking: I want to create a little helper class that I can use across projects, and maybe share with my friends too. So I think it should be able, like Postman, to determine the error as soon as possible. If Postman can do it, so can I.
EDIT 2: It's weird that today I found out Postman does NOT know beforehand whether the server accepts big requests. I uploaded a big file and saw that Postman actually sent the whole file to the server before it got the response. I'm no longer sure why I thought Postman knew the error ahead of time. But it does mean I have found a way to do the job even better than Postman, so this question might be useful for someone.
Your issue has nothing to do with your server-side C# code. Your request gets stuck because of what is happening between the client and the server (by "server" I mean IIS, Apache, Nginx..., not your server-side code).
In HTTP, most clients don't read the response until they have sent all the request data. So even if your server discovers that the request is too large and returns an error response, the client will not read that response until it has finished sending the whole request.
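As an aside, HTTP does have a protocol-level mechanism for exactly this situation: an Expect: 100-continue header asks the server to vet the request headers before the client sends the body. Whether it helps depends on the server honoring the header, and HttpClient only waits briefly for the interim response, so treat this as a sketch of the idea rather than a guaranteed fix (it reuses the uri, multipartFormDataContent, and httpClient names from the question's client code):

// Sketch: ask the server to approve the headers before the body is sent.
// If the server replies with a final error status instead of "100 Continue",
// HttpClient can abort without uploading the file.
var request = new HttpRequestMessage(HttpMethod.Post, uri)
{
    Content = multipartFormDataContent
};
request.Headers.ExpectContinue = true;
HttpResponseMessage response = await httpClient.SendAsync(request);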
When it comes to the server side, you can check this question, but I think it would be more convenient to handle it on the client side, by checking the file size before sending it to the server (this is basically what Postman is doing in your case).
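A minimal sketch of that client-side check; the 30 MB limit is an assumption here (it happens to be IIS's default maxAllowedContentLength), not a value from the original post:

// Sketch: reject the upload client-side before any bytes go over the wire.
private static bool IsSmallEnoughToUpload(string path)
{
    const long maxAllowedBytes = 30_000_000; // assumed server limit; IIS's default maxAllowedContentLength
    return new FileInfo(path).Length <= maxAllowedBytes;
}

// Usage, before building the multipart content:
if (!IsSmallEnoughToUpload("D:/Desktop/large.zip"))
{
    Console.WriteLine("File is too large; notify the user instead of uploading.");
    return;
}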
Now I am able to do what I wanted. But first, I want to thank you, @Marko Papic; your information helped me think about a way to do what I want.
What I am doing is:
First, create a request with an empty ByteArrayContent body whose Content-Length header is set to the length of the file I want to upload to the server.
Second, surround HttpResponseMessage = await HttpClient.SendAsync(HttpRequestMessage) with a try-catch block. The catch block catches HttpRequestException: because I am sending a request that declares the file's length while the actual content is 0 bytes, the client throws an HttpRequestException with the message Cannot close stream until all bytes are written.
If the code reaches the catch block, it means the server ALLOWS requests of the file's size or bigger. If there is no exception and the code moves on to the next line, then if HttpResponseMessage.StatusCode is 404, it means the server DENIES requests bigger than the file size. The case where HttpResponseMessage.StatusCode is NOT 404 should never happen (I'm not sure about this one, though).
My final code up to this point:
private async Task<bool> IsBigRequestAllowed() {
    // Only the file's length is needed; the file content itself is never sent.
    long fileLength;
    using (FileStream fileStream = File.Open("D:/Desktop/big.zip", FileMode.Open, FileAccess.Read, FileShare.Read)) {
        fileLength = fileStream.Length;
    }
    if (fileLength == 0) {
        return true;
    }
    var request = new HttpRequestMessage(HttpMethod.Post, new Uri("https://localhost:55555/api/user/testupload"));
    // Declare the file's length in Content-Length, but attach an empty body.
    request.Content = new ByteArrayContent(new byte[] { });
    request.Content.Headers.ContentLength = fileLength;
    try {
        HttpResponseMessage response = await HttpClient.SendAsync(request);
        if (response.StatusCode == HttpStatusCode.NotFound) {
            return false; // The server denies requests of this size (404.13).
        }
        return true; // The code should never reach this line, though.
    }
    catch (HttpRequestException) {
        // The declared length was never written, so sending aborts with
        // "Cannot close stream until all bytes are written": the server allows this size.
        return true;
    }
}
NOTE: My approach still has a problem: the ContentLength property shouldn't be exactly the length of the file; it should be bigger. For example, if my file is exactly 1000 bytes long and it is successfully uploaded, the request the server receives has a greater ContentLength value, because HttpClient doesn't send only the content of the file: it also sends the multipart boundaries, content types, hyphens, line breaks, and so on. Generally speaking, you would need to find out beforehand exactly how many bytes HttpClient will send along with your file to make this approach work perfectly (I still don't know how; I'm running out of time. I will find out and update my answer later).
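One plausible way to get that exact number, sketched under the assumption that the probe builds the same MultipartFormDataContent as the real upload: HttpContent computes its own Content-Length when every part's length is known, boundaries and per-part headers included, so you can read the total without sending anything.

// Sketch: compute the exact on-the-wire length of the multipart body without sending it.
using (var fileStream = File.OpenRead("D:/Desktop/big.zip"))
{
    var streamContent = new StreamContent(fileStream);
    streamContent.Headers.Add("Content-Disposition", "form-data; name=\"file\"; filename=\"big.zip\"");
    var multipartContent = new MultipartFormDataContent();
    multipartContent.Add(streamContent);
    // The computed value includes boundaries, per-part headers, and line breaks,
    // not just the file bytes, so it is the number the probe request should declare.
    long? exactLength = multipartContent.Headers.ContentLength;
}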
Now I can determine ahead of time whether the server will accept requests as big as the file my user wants to upload.
I am downloading an .mp4 file from Azure Blob Storage and pushing it to the UI. The download works fine; the issue is that the Content-Length header doesn't seem to be set correctly. As a result, you cannot track the download progress, because the browser only shows how much has been downloaded, not how much is left or the estimated time. Is my code for the response wrong, or should I change my request? My code is as follows:
[HttpGet("VideoFileDownload")]
public IActionResult VideoFileDownloadAsync([FromQuery]int VideoId)
{
    // ...code to get blob file
    return new FileStreamResult(blob.OpenRead(), new MediaTypeHeaderValue("application/octet-stream"));
}
I have played around with various request and response headers, but it makes no difference.
The files are big, and I know the old ASP.NET way of checking for Range headers and then doing a chunked stream, but I want to use the new features in .NET Core, which don't work as expected, or maybe I just don't understand them thoroughly. Can somebody give me a working sample of a file download with ASP.NET Core code?
If you have the file size, you can set the response's content length on the Response object just before returning the FileStreamResult, like this:
[HttpGet("VideoFileDownload")]
public IActionResult VideoFileDownloadAsync([FromQuery]int VideoId)
{
    // ...code to get blob file
    long myFileSize = blob.Length; // Or wherever it is you can get your file size from.
    this.Response.ContentLength = myFileSize;
    return new FileStreamResult(blob.OpenRead(), new MediaTypeHeaderValue("application/octet-stream"));
}
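Since the question also mentions Range headers: in ASP.NET Core 2.1+, FileStreamResult (and the File() helper) can handle ranged requests for you via EnableRangeProcessing. A sketch under the same assumptions as above (blob is whatever your blob-fetching code returns):

[HttpGet("VideoFileDownload")]
public IActionResult VideoFileDownloadAsync([FromQuery] int VideoId)
{
    // ...code to get blob file
    this.Response.ContentLength = blob.Length;
    return new FileStreamResult(blob.OpenRead(), "application/octet-stream")
    {
        // Opt in to partial-content (206) responses so browsers can seek and resume.
        EnableRangeProcessing = true
    };
}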
I have a controller that takes an uploaded file, processes it, and then sends a response back. Everything works perfectly locally. Everything works perfectly when deployed to the server if the test file is small. However, once I try a larger test file (27 MB), I get some strange results. After processing and what appears to be sending the OK back (which takes about 90 seconds), the whole request starts processing again. My client is NOT sending a new request, and the IIS logs show ONLY one request coming in. My code is simplified to the following:
public async Task<IHttpActionResult> UploadNfsExcelFile()
{
    // Using this STRICTLY to make sure the request is new each time, for logging and debugging
    Guid trackMe = Guid.NewGuid();
    Logger.DebugFormat("Request coming in: {0}", trackMe);
    string tempPath = Path.GetTempPath();
    // This handles the persistence of the file name for processing
    CustomMultipartFormDataStreamProvider streamProvider = new CustomMultipartFormDataStreamProvider(tempPath);
    await Request.Content.ReadAsMultipartAsync(streamProvider);
    foreach (string result in streamProvider.FileData.Select(entry => entry.LocalFileName))
    {
        /* File processing here */
    }
    Logger.DebugFormat("File processing done: {0}", trackMe);
    return Ok(new
    {
        token = "This would be the response token" // anonymous type members use '=', not ':'
    });
}
My log file shows the "Request coming in: GUID" log statement, followed by the "File processing done: GUID" statement, then immediately the logging I have in place for some filters (authentication validation), followed by a new set of "Request coming in: NEWGUID", "File processing done: NEWGUID". At this point, Chrome shows an empty response. My IIS logs show ONLY one request, with a response status of 200.
EDIT
Additional information: I am still getting this issue. It seems that when the file processing takes more than 60 seconds, the server just reprocesses the request.
EDIT 2
Checking with Wireshark locally, it seems my TCP stack is resending the request (I'm getting "[TCP segment of a reassembled PDU]" every 60 seconds, which corresponds to the re-execution of the Web API method).
Using the Instagram API, I need to get a list of the users who have liked a specific media item.
The following call should return the list of all users according to the documentation: https://api.instagram.com/v1/media/555/likes?access_token=ACCESS-TOKEN
However, I get only 120 users, and there are no pagination parameters.
Is there any way to keep requesting the rest?
If you need the code:
string requestLikes = "https://api.instagram.com/v1/media/" + mediaID + "/likes?access_token=" + access_token + "&count=0";
// Create a request for the URL.
WebRequest request = WebRequest.Create(requestLikes);
// Get the response.
using (WebResponse response = request.GetResponse())
{
    // Track the remaining calls allowed by Instagram's rate limit.
    AddOrUpdateAppSettings("remainingCalls", response.Headers["X-Ratelimit-Remaining"]);
    // Display the status.
    Console.WriteLine(((HttpWebResponse)response).StatusDescription);
    // Get the stream containing content returned by the server and read it.
    using (Stream dataStream = response.GetResponseStream())
    using (StreamReader reader = new StreamReader(dataStream))
    {
        string responseFromServer = reader.ReadToEnd();
    }
}
Unfortunately, they only provide the latest 120 likes, in newest-to-oldest order, with no pagination. You can test this by requesting a photo and then liking it: you'll see that your account is at the top of the list.
The only workaround is to set up a job that periodically caches likes, beginning shortly after the photo is first posted. Since you're always getting the newest 120, you can collect them all that way. You can create a subscription to a user using the real-time API and get a ping when your user posts a new photo, then start caching likes. A decaying rate would be advised: maybe cache a couple of times in the first hour after the photo is posted, then less and less frequently the longer it's been, as in the sketch below.
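A minimal sketch of such a decaying schedule; the starting interval, doubling factor, and cap are arbitrary assumptions, not anything Instagram specifies:

// Sketch: a poll interval that grows with the photo's age.
static TimeSpan NextPollInterval(DateTime postedAtUtc)
{
    double ageHours = (DateTime.UtcNow - postedAtUtc).TotalHours;
    // Start at 30 minutes, double for each full hour of age, cap at 24 hours.
    double minutes = 30 * Math.Pow(2, Math.Floor(ageHours));
    return TimeSpan.FromMinutes(Math.Min(minutes, 24 * 60));
}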