I have a controller that takes an uploaded file, processes it, and then responds to the client. Everything works perfectly locally. Everything works perfectly when deployed to the server if the test file is small. However, once I try a larger test file (27 MB), I get some strange results. After processing and what appears to be sending the OK back (which takes about 90 seconds), the whole request starts processing again. My client is NOT sending a new request, and the IIS logs show ONLY one request coming in. My code is simplified to the following:
public async Task<IHttpActionResult> UploadNfsExcelFile()
{
    // Using this STRICTLY to make sure the request is new each time for logging and debugging
    Guid trackMe = Guid.NewGuid();
    Logger.DebugFormat("Request coming in: {0}", trackMe);

    string tempPath = Path.GetTempPath();

    // This handles the persistence of the file name for processing
    CustomMultipartFormDataStreamProvider streamProvider = new CustomMultipartFormDataStreamProvider(tempPath);
    await Request.Content.ReadAsMultipartAsync(streamProvider);

    foreach (string result in streamProvider.FileData.Select(entry => entry.LocalFileName))
    {
        /* File processing here */
    }

    Logger.DebugFormat("File processing done: {0}", trackMe);

    return Ok(new
    {
        token = "This would be the response token"
    });
}
My log file shows the "Request coming in: GUID" log statement, followed by the "File processing done: GUID" statement, then immediately shows the logging I have in place for some filters (authentication validation), followed by the set of "Request coming in: NEWGUID" and "File processing done: NEWGUID". At this point, Chrome shows an empty response. My IIS logs show ONLY one request, with a response status of 200.
EDIT
Additional information: I'm still getting this issue. It seems that when the file processing takes more than 60 seconds, the server just reprocesses the request.
EDIT 2
Checking with Wireshark locally, it seems my TCP stack is resending the request (I'm getting "[TCP segment of a reassembled PDU]" every 60 seconds, which corresponds to the re-execution of the Web API method).
Related
When I use Postman to try uploading a large file to my server (written in .NET Core 2.2), Postman immediately shows the HTTP Error 404.13 - Not Found error: The request filtering module is configured to deny a request that exceeds the request content length
But when I use my code to upload that large file, it gets stuck at the line to send the file.
My client code:
public async void TestUpload() {
    StreamContent streamContent = new StreamContent(File.OpenRead("D:/Desktop/large.zip"));
    streamContent.Headers.Add("Content-Disposition", "form-data; name=\"file\"; filename=\"large.zip\"");

    MultipartFormDataContent multipartFormDataContent = new MultipartFormDataContent();
    multipartFormDataContent.Add(streamContent);

    HttpClient httpClient = new HttpClient();
    Uri uri = new Uri("https://localhost:44334/api/user/testupload");

    try {
        HttpResponseMessage httpResponseMessage = await httpClient.PostAsync(uri, multipartFormDataContent);
        bool success = httpResponseMessage.IsSuccessStatusCode;
    }
    catch (Exception ex) {
    }
}
My server code:
[HttpPost, Route("testupload")]
public async Task UploadFile(IFormFileCollection formFileCollection) {
    IFormFileCollection formFiles = Request.Form.Files;
    foreach (var item in formFiles) {
        using (var stream = new FileStream(Path.Combine("D:/Desktop/a", item.FileName), FileMode.Create)) {
            await item.CopyToAsync(stream);
        }
    }
}
My client code gets stuck at the line HttpResponseMessage httpResponseMessage = await httpClient.PostAsync(uri, multipartFormDataContent), while the server doesn't receive any request (I use a breakpoint to ensure that).
It gets stuck longer if the file is bigger. Looking at Task Manager, I can see my client program using a lot of CPU and disk as it actually uploads the file to the server. After a while, the code moves on to the next line, which is
bool success = httpResponseMessage.IsSuccessStatusCode
Then, by reading the response content, I get exactly the same result as in Postman.
Now I want to know how to get the error immediately so I can notify the user in time; I don't want them to wait that long.
Note that when I use Postman to upload large files, my server doesn't receive any request as well. I think I am missing something, maybe there is problem with my client code.
EDIT: Actually, I think it is a client-side error. But even if it is a server-side error, that doesn't change much for me. Let me clarify my thinking: I want to create a little helper class that I can use across projects, and maybe share with my friends too. So, like Postman, it should be able to detect the error as soon as possible. If Postman can do it, so can I.
EDIT 2: It's weird, but today I found out that Postman does NOT know beforehand whether the server accepts big requests. I uploaded a big file and saw that Postman actually sent the whole file to the server before it got the response. I have no idea why I thought Postman knew the error ahead of time. But it does mean I have found a way to do this job even better than Postman, so I think this question might be useful for someone.
Your issue has nothing to do with your server-side C# code. Your request gets stuck because of what is happening between the client and the server (by "server" I mean IIS, Apache, Nginx..., not your server-side code).
In HTTP, most clients don't read the response until they have sent all the request data. So even if your server discovers that the request is too large and returns an error response, the client will not read that response until it has finished sending the whole request.
When it comes to the server side, you can check this question, but I think it would be more convenient to handle it on the client side, by checking the file size before sending it to the server (this is basically what Postman is doing in your case).
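For instance, a minimal client-side pre-check might look like the sketch below. The 30,000,000-byte ceiling is only an example (it happens to be IIS's default maxAllowedContentLength); the real limit has to be kept in sync with whatever the server is actually configured to accept:

```csharp
using System;
using System.IO;

// Hypothetical client-side ceiling; must match the server's configured
// request size limit (IIS's default maxAllowedContentLength is 30,000,000 bytes).
const long MaxRequestBytes = 30_000_000;

// Check the file size locally before starting an upload.
bool CanUpload(string path) => new FileInfo(path).Length <= MaxRequestBytes;

// Demo: a small temp file is well within the limit.
string tmp = Path.GetTempFileName();
File.WriteAllBytes(tmp, new byte[1024]);
Console.WriteLine(CanUpload(tmp)); // True
File.Delete(tmp);
```

With a check like this, the user can be told immediately that the file is too big, without any bytes ever leaving the machine.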
Now I am able to do what I wanted. But first I want to thank @Marko Papic; your information did help me think of a way to do what I want.
What I am doing is:
First, create a request with an empty ByteArrayContent, but with its ContentLength header set to the length of the file I want to upload.
Second, surround HttpResponseMessage = await HttpClient.SendAsync(HttpRequestMessage) with a try-catch block. The catch block catches HttpRequestException: because I am sending a request that declares the length of the file while the actual content length is 0, it throws an HttpRequestException with the message "Cannot close stream until all bytes are written".
If the code reaches the catch block, it means the server ALLOWS requests of the file's size or bigger. If there is no exception and the code moves on to the next line, then if HttpResponseMessage.StatusCode is 404, the server DENIES requests bigger than the file size. The case where HttpResponseMessage.StatusCode is NOT 404 should never happen (I'm not sure about that one, though).
My final code up to this point:
private async Task<bool> IsBigRequestAllowed() {
    FileStream fileStream = File.Open("D:/Desktop/big.zip", FileMode.Open, FileAccess.Read, FileShare.Read);
    if (fileStream.Length == 0) {
        fileStream.Close();
        return true;
    }

    HttpRequestMessage = new HttpRequestMessage();
    HttpMethod = HttpMethod.Post;
    HttpRequestMessage.Method = HttpMethod;
    HttpRequestMessage.RequestUri = new Uri("https://localhost:55555/api/user/testupload");
    HttpRequestMessage.Content = new ByteArrayContent(new byte[] { });
    HttpRequestMessage.Content.Headers.ContentLength = fileStream.Length;
    fileStream.Close();

    try {
        HttpResponseMessage = await HttpClient.SendAsync(HttpRequestMessage);
        if (HttpResponseMessage.StatusCode == HttpStatusCode.NotFound) {
            return false;
        }
        return true; // The code will never reach this line though
    }
    catch (HttpRequestException) {
        return true;
    }
}
NOTE: my approach still has a problem, and it is the ContentLength property: it shouldn't be exactly the length of the file, it should be bigger. For example, if my file is exactly 1000 bytes long, then when the file is successfully uploaded, the request the server receives has a greater ContentLength value, because HttpClient doesn't send only the content of the file: it also has to send the multipart framing (boundaries, content types, hyphens, line breaks, etc.). Generally speaking, you would need to find out beforehand the exact number of bytes HttpClient will send along with your file to make this approach work perfectly (I still don't know how; I'm running out of time, so I will find out and update my answer later).
Now I am able to determine immediately, ahead of time, whether the server can accept a request as big as the file my user wants to upload.
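One way to attack the overhead problem from the note above is to let HttpClient itself compute the framing cost: build the same MultipartFormDataContent with a zero-byte payload and read its computed Content-Length. This is a sketch, not a guaranteed solution; in particular, if the serializer writes a per-part Content-Length header, its digit count still varies with the file size, so the measured overhead may be off by a couple of bytes:

```csharp
using System;
using System.Net.Http;

// Build the multipart envelope with an empty payload but the same part
// headers as the real upload; the computed Content-Length of the whole
// form is then (approximately) the framing overhead: boundary lines,
// part headers, and CRLFs.
var probe = new ByteArrayContent(Array.Empty<byte>());
probe.Headers.Add("Content-Disposition", "form-data; name=\"file\"; filename=\"large.zip\"");

var form = new MultipartFormDataContent();
form.Add(probe);

long overhead = form.Headers.ContentLength ?? 0;
Console.WriteLine(overhead > 0); // True: the envelope alone costs some bytes
```

The declared ContentLength for the probe request would then be the file length plus this measured overhead, rather than the file length alone.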
I have been developing a OneDrive desktop client app, because the one built into Windows has been failing me for reasons I cannot figure out. I'm using the REST API in C# via an HttpClient.
All requests to the onedrive endpoint work fine (downloading, uploading small files, etc.) and uploading large files worked fine up until recently (about two days ago). I get the upload session URL and start uploading data to it, but after uploading two chunks to it successfully (202 response), the third request and beyond times out (via the HttpClient), whether it be a GET to get the status, or a PUT to upload data. The POST to create the session still works.
I have tried: getting a new ClientId, logging into a new Microsoft account, reverting code to a known working state, and recloning git repository.
In PostMan, I can go through the whole process of creating a session and uploading chunks and not experience this issue, but if I take an upload URL that my application retrieves from the OneDrive API and try to PUT data to it in PostMan, the server doesn't respond (unless the request is invalid, then it sometimes tells me). Subsequent GET requests to this URL also don't respond.
Here is a log of all requests going to the OneDrive API after authentication: https://pastebin.com/qRrw2Sb5
and here is the relevant code:
// first, create an upload session
var httpResponse = await _httpClient.StartAuthenticatedRequest(url, HttpMethod.Post).SendAsync(ct);
if (httpResponse.StatusCode != HttpStatusCode.OK)
{
    return new HttpResult<IRemoteItemHandle>(httpResponse, null);
}

// get the upload URL
var uploadSessionRequestObject = await HttpClientHelper.ReadResponseAsJObjectAsync(httpResponse);
var uploadUrl = (string)uploadSessionRequestObject["uploadUrl"];
if (uploadUrl == null)
{
    Debug.WriteLine("Successful OneDrive CreateSession request had invalid body!");
    //TODO: what to do here?
}

// the total length of the file
var length = data.Length;

// set up the headers
var headers = new List<KeyValuePair<string, string>>()
{
    new KeyValuePair<string, string>("Content-Length", ""),
    new KeyValuePair<string, string>("Content-Range", "")
};

JObject responseJObject;
// the response that will be returned
HttpResponseMessage response = null;

// get the chunks
List<Tuple<long, long>> chunks;
do
{
    HttpResult<List<Tuple<long, long>>> chunksResult;
    // get the chunks
    do
    {
        chunksResult = await RetrieveLargeUploadChunksAsync(uploadUrl, _10MB, length, ct);
        //TODO: should we delay on failure?
    } while (chunksResult.Value == null); // keep trying to get the results until we're successful
    chunks = chunksResult.Value;

    // upload each fragment
    var chunkStream = new ChunkedReadStreamWrapper(data);
    foreach (var fragment in chunks)
    {
        // set up the chunked stream with the next fragment
        chunkStream.ChunkStart = fragment.Item1;

        // the size is one more than the difference (because the range is inclusive)
        chunkStream.ChunkSize = fragment.Item2 - fragment.Item1 + 1;

        // set up the headers for this request
        headers[0] = new KeyValuePair<string, string>("Content-Length", chunkStream.ChunkSize.ToString());
        headers[1] = new KeyValuePair<string, string>("Content-Range", $"bytes {fragment.Item1}-{fragment.Item2}/{length}");

        // submit the request until it is successful
        do
        {
            // this should not be authenticated
            response = await _httpClient.StartRequest(uploadUrl, HttpMethod.Put)
                .SetContent(chunkStream)
                .SetContentHeaders(headers)
                .SendAsync(ct);
        } while (!response.IsSuccessStatusCode); // keep retrying until success
    }

    // parse the response to see if there are more chunks or the final metadata
    responseJObject = await HttpClientHelper.ReadResponseAsJObjectAsync(response);

    // try to get chunks from the response to see if we need to retry anything
    chunks = ParseLargeUploadChunks(responseJObject, _10MB, length);
}
while (chunks.Count > 0); // keep going until no chunks left
Everything does what the comments say or what the names suggest, but a lot of the methods/classes are my own, so I'd be happy to explain anything that might not be obvious.
I have absolutely no idea what's going on and would appreciate any help. I'm trying to get this done before I go back to school on Saturday and no longer have time to work on it.
EDIT: After waiting a while, requests can be made to the upload URL again via PostMan.
EDIT 2: I can no longer replicate this timeout phenomenon in Postman. Whether I get the upload URL from my application or from another Postman request, and whether or not the upload has stalled in my application, I seem to be able to upload all the fragments I want through Postman.
EDIT 3: This not-responding behavior starts before the content stream is read from.
EDIT 4: Looking at the packet info in Wireshark, the first two chunks are almost identical, but only "resend" packets show up for the third.
So after 3 weeks of varying levels of testing, I have finally figured out the issue, and it has almost nothing to do with the OneDrive Graph API. The issue was that when making the HTTP requests, I was using HttpCompletionOption.ResponseHeadersRead but not reading the responses before sending the next one. This meant the HttpClient was preventing me from sending more requests until I had read the responses to the old ones. It was strange that it allowed me to send 2 requests before locking up.
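The fix, in miniature: with HttpCompletionOption.ResponseHeadersRead, each response must be disposed (or its body fully read) before the next request, otherwise the underlying connection stays reserved and later requests stall. In this sketch a throwaway local HttpListener (port 18472, arbitrary) stands in for the real endpoint:

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Throwaway local server that answers three requests with "ok".
var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:18472/");
listener.Start();
var serverTask = Task.Run(async () =>
{
    for (int i = 0; i < 3; i++)
    {
        HttpListenerContext ctx = await listener.GetContextAsync();
        byte[] body = Encoding.UTF8.GetBytes("ok");
        ctx.Response.ContentLength64 = body.Length;
        await ctx.Response.OutputStream.WriteAsync(body, 0, body.Length);
        ctx.Response.Close();
    }
});

var client = new HttpClient();
var results = new List<string>();
for (int i = 0; i < 3; i++)
{
    // The using block is the crucial part: Dispose releases the connection
    // even though only the headers were awaited when the call returned.
    using (HttpResponseMessage response = await client.GetAsync(
        "http://localhost:18472/", HttpCompletionOption.ResponseHeadersRead))
    {
        results.Add(await response.Content.ReadAsStringAsync());
    }
}

await serverTask;
listener.Stop();
Console.WriteLine(string.Join(",", results)); // ok,ok,ok
```

Without the using (or an equivalent read of each body), a loop like this can hang exactly the way the upload loop above did.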
In ASP.NET MVC, we can respond with a 304 status code to the browser, which means the content on the server has not changed and the browser can use its local cache for this URL.
public ActionResult Image(int id) {
    var image = _imageRepository.Get(id);
    if (image == null)
        throw new HttpException(404, "Image not found");

    if (!String.IsNullOrEmpty(Request.Headers["If-Modified-Since"])) {
        CultureInfo provider = CultureInfo.InvariantCulture;
        var lastMod = DateTime.ParseExact(Request.Headers["If-Modified-Since"], "r", provider).ToLocalTime();
        if (lastMod == image.TimeStamp.AddMilliseconds(-image.TimeStamp.Millisecond)) {
            Response.StatusCode = 304;
            Response.StatusDescription = "Not Modified";
            return Content(String.Empty);
        }
    }

    var stream = new MemoryStream(image.GetImage());
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetLastModified(image.TimeStamp);
    return File(stream, image.MimeType);
}
But I am a little confused about the logic in the browser. For example, when we first ask for a page http://www.test.com/index.html, it will load a JavaScript file aaa.js. When the browser then asks for another page, http://www.test.com/index2.html, that page also contains aaa.js.
Here comes the question. We know that the browser has logic for HTTP caching. I assume that when the browser asks for index2.html, it will see that it already has aaa.js locally, so it will not contact the server about this file at all. In that case no 304 is returned, because the browser never requested anything about this file. Is that the right logic?
Or does it contact the server every time to check the version of the file? In that situation, if we don't write any C# code to return a 304 status, the whole file would be returned every time, so I guess that is not the logic.
What is the relationship between the browser cache and 304 status?
Depending on the server's response to the first request for aaa.js, the browser may or may not request the file again on the second page load.
If the server sends no specific caching headers with the file, then on the second page load the browser will request aaa.js again. If the browser doesn't have the JS file in its cache, it sends the request the same as it did the first time. If aaa.js is in the browser cache, it sends a request to the server containing an If-Modified-Since header with the date the file was previously downloaded. The server then checks whether the file has been modified: if so, it sends the new file; otherwise it sends a 304 response.
Now let's spool back to the beginning. In the initial response for aaa.js, the server could include a Cache-Control header telling the browser how long to cache the file. Let's say Cache-Control: max-age=3600, which instructs the browser to cache the file for one hour (3600 seconds).
If the user visits the second page within that hour, the browser won't even send a request to the server for aaa.js; it will just use the cached file without question.
Once the hour is up and a new page is loaded, the browser requests aaa.js again.
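Incidentally, the If-Modified-Since comparison in the C# action above only works because HTTP dates (the "r"/RFC 1123 format) resolve to whole seconds, which is exactly why that code strips the milliseconds before comparing. A small round-trip illustration:

```csharp
using System;
using System.Globalization;

// HTTP dates carry one-second resolution, so the server must truncate
// sub-second precision before comparing against If-Modified-Since,
// just as the MVC action does with AddMilliseconds.
DateTime lastWrite = new DateTime(2024, 5, 1, 12, 30, 45, 678, DateTimeKind.Utc);
string httpDate = lastWrite.ToString("r"); // "Wed, 01 May 2024 12:30:45 GMT"
DateTime parsed = DateTime.ParseExact(httpDate, "r", CultureInfo.InvariantCulture);
DateTime truncated = lastWrite.AddMilliseconds(-lastWrite.Millisecond);
Console.WriteLine(parsed == truncated); // True: equal once milliseconds are dropped
```

Skipping the truncation would make the equality check fail for any timestamp with a non-zero millisecond component, and the 304 branch would never be taken.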
I send a file via POST to my ApiController.
If the file is below 2 MB, everything works like a charm.
If the file is bigger, I get a Error 404.
This is the (old) function declaration in my Controller:
[HttpPost]
public HttpResponseMessage FileUpload(HttpRequestMessage req, string entryId = "", string owner = "", int debug = 0)
{
which returns, if the entity is too large, this:
Remote Address:172.17.41.12:443
Request URL:https://webdevserver/myapp/api/Debug/FileUpload
Request Method:POST
Status Code:404 Not Found
or if it is inside the size limits, this:
Remote Address:172.17.41.12:443
Request URL:https://webdevserver/myapp/api/Debug/FileUpload
Request Method:POST
Status Code:200 OK
So I want to send a useful error message (which Error 404 definitely is NOT!), and I stumbled upon HTTP status code 413, which IIS doesn't send automatically :( so I changed my code to:
[HttpPost]
public HttpResponseMessage FileUpload(HttpRequestMessage req = null, string entryId = "", string owner = "", int debug = 0)
{
    if (req == null)
    {
        // POST was not handed over to my function by IIS.
        // Now, is it possible to check whether the POST was empty or too large? Because
        return new HttpResponseMessage(HttpStatusCode.RequestEntityTooLarge);
        // should only be sent if the POST content was really too large!
So, how can I check whether the size of the POST data was too big or whether the POST was empty?
According to this blog post, status code 404.13 was introduced in IIS 7 to replace HTTP status code 413.
Since this was done by design, I would suggest that you keep the response as it is, and in your code try to determine whether the 404 error was actually a 404.13 error.
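On the client, one hedged way to follow that advice is to look for the sub-status marker in the HTML error body, since IIS does not expose the ".13" sub-status as a header. This assumes IIS's detailed error page actually contains the "404.13" text, which is the case for the default page but not guaranteed for custom error pages:

```csharp
using System;

// Heuristic: a plain 404 whose body mentions IIS's request-filtering
// sub-status 404.13 is really a "request entity too large" rejection.
bool LooksLikeRequestTooLarge(int statusCode, string responseBody) =>
    statusCode == 404 && responseBody != null && responseBody.Contains("404.13");

Console.WriteLine(LooksLikeRequestTooLarge(404, "<h3>HTTP Error 404.13 - Not Found</h3>")); // True
Console.WriteLine(LooksLikeRequestTooLarge(404, "<h3>HTTP Error 404.0 - Not Found</h3>"));  // False
```

When the heuristic matches, the controller-visible 404 can be translated into a 413 (or a friendly message) for the user.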
TL;DR version
When a transfer error occurs while writing to the request stream, I can't access the response, even though the server sends it.
Full version
I have a .NET application that uploads files to a Tomcat server, using HttpWebRequest. In some cases, the server closes the request stream prematurely (because it refuses the file for one reason or another, e.g. an invalid filename), and sends a 400 response with a custom header to indicate the cause of the error.
The problem is that if the uploaded file is large, the request stream is closed before I finish writing the request body, and I get an IOException:
Message: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
InnerException: SocketException: An existing connection was forcibly closed by the remote host
I can catch this exception, but then, when I call GetResponse, I get a WebException with the previous IOException as its inner exception, and a null Response property. So I can never get the response, even though the server sends it (checked with WireShark).
Since I can't get the response, I don't know what the actual problem is. From my application point of view, it looks like the connection was interrupted, so I treat it as a network-related error and retry the upload... which, of course, fails again.
How can I work around this issue and retrieve the actual response from the server? Is it even possible? To me, the current behavior looks like a bug in HttpWebRequest, or at least a severe design issue...
Here's the code I used to reproduce the problem:
var request = HttpWebRequest.CreateHttp(uri);
request.Method = "POST";
string filename = "foo\u00A0bar.dat"; // Invalid characters in filename, the server will refuse it
request.Headers["Content-Disposition"] = string.Format("attachment; filename*=utf-8''{0}", Uri.EscapeDataString(filename));
request.AllowWriteStreamBuffering = false;
request.ContentType = "application/octet-stream";
request.ContentLength = 100 * 1024 * 1024;

// Upload the "file" (just random data in this case)
try
{
    using (var stream = request.GetRequestStream())
    {
        byte[] buffer = new byte[1024 * 1024];
        new Random().NextBytes(buffer);
        for (int i = 0; i < 100; i++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}
catch (Exception ex)
{
    // here I get an IOException; InnerException is a SocketException
    Console.WriteLine("Error writing to stream: {0}", ex);
}

// Now try to read the response
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
    }
}
catch (Exception ex)
{
    // here I get a WebException; InnerException is the IOException from the previous catch
    Console.WriteLine("Error getting the response: {0}", ex);
    var webEx = ex as WebException;
    if (webEx != null)
    {
        Console.WriteLine(webEx.Status); // SendFailure
        var response = (HttpWebResponse)webEx.Response;
        if (response != null)
        {
            Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
        }
        else
        {
            Console.WriteLine("No response");
        }
    }
}
Additional notes:
If I correctly understand the role of the 100 Continue status, the server shouldn't send it to me if it's going to refuse the file. However, it seems that this status is controlled directly by Tomcat, and can't be controlled by the application. Ideally, I'd like the server not to send me 100 Continue in this case, but according to my colleagues in charge of the back-end, there is no easy way to do it. So I'm looking for a client-side solution for now; but if you happen to know how to solve the problem on the server side, it would also be appreciated.
The app in which I encounter the issue targets .NET 4.0, but I also reproduced it with 4.5.
I'm not timing out. The exception is thrown long before the timeout.
I tried an async request. It doesn't change anything.
I tried setting the request protocol version to HTTP 1.0, with the same result.
Someone else has already filed a bug on Connect for this issue: https://connect.microsoft.com/VisualStudio/feedback/details/779622/unable-to-get-servers-error-response-when-uploading-file-with-httpwebrequest
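For what it's worth, the client-side knobs around the 100 (Continue) handshake are limited. HttpWebRequest sends Expect: 100-continue by default but only waits a short interval (350 ms by default) before it starts writing the body anyway; on .NET 4.5+ that wait can be stretched via ContinueTimeout, which at least gives the server's early final status a chance to arrive before any body bytes are written. A sketch (the URL is a placeholder; nothing is sent here):

```csharp
using System;
using System.Net;

// Constructing the request does not touch the network; this only shows
// the relevant knobs for the 100-continue handshake.
var request = (HttpWebRequest)WebRequest.Create("https://example.com/upload"); // placeholder URL
request.Method = "POST";
ServicePointManager.Expect100Continue = true; // the default; shown for clarity
request.ContinueTimeout = 5000;               // .NET 4.5+: wait up to 5 s for the interim 100 response
Console.WriteLine(request.ContinueTimeout);   // 5000
```

Whether this actually helps depends on the server sending its final status before the continue timeout expires, so it is a mitigation rather than a fix for the null-Response problem described above.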
I am out of ideas as to what a client-side solution to your problem could be, but I still think the server-side solution of using a custom Tomcat valve can help here. I don't currently have a Tomcat setup where I can test this, but I think a server-side solution would be along the following lines:
RFC 2616, section 8.2.3, clearly states:
Requirements for HTTP/1.1 origin servers:
- Upon receiving a request which includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request. It MUST NOT
perform the requested method if it returns a final status code.
So, assuming Tomcat conforms to the RFC, by the time control is in the custom valve you will have received the HTTP request headers, but the request body will not have been sent yet, since control has not yet reached the servlet that reads the body.
So you can probably implement a custom valve, something similar to :
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class CustomUploadHandlerValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String fileName = httpRequest.getHeader("Filename"); // get the filename or whatever other parameters your code requires
        boolean validationSuccess = validate(fileName);      // perform the filename check or any other validation here
        if (!validationSuccess) {
            // send your custom 400 response here and stop the pipeline
            response.sendError(400, "Invalid filename");
        } else {
            getNext().invoke(request, response); // pass on to the next valve/servlet in the chain
        }
    }

    // ...
}
DISCLAIMER: Again, I haven't tried this successfully; I'd need some time and a Tomcat setup to try it out ;).
Thought it might be a starting point for you.
I had the same problem: when I make an async request, the server sends its response before the client has finished transmitting the request body. After a series of experiments, I found a workaround.
After the request stream has been written, I use reflection to check the private field _CoreResponse of the HttpWebRequest. If it is an object of class CoreResponseData, I read its private fields (again using reflection): m_StatusCode, m_StatusDescription, m_ResponseHeaders, m_ContentLength. They contain the information about the server's response!
In most cases, this hack works!
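For reference, the hack described above might look like the sketch below. The field names come from .NET Framework's internals and are version-specific; on runtimes where the field doesn't exist (e.g. .NET Core), the lookup simply returns null:

```csharp
using System;
using System.Net;
using System.Reflection;

// Try to pull the server's response out of HttpWebRequest's private state.
// _CoreResponse holds a CoreResponseData object whose own private fields
// (m_StatusCode, m_StatusDescription, m_ResponseHeaders, m_ContentLength)
// describe the response, as the answer above explains.
static object TryGetCoreResponse(HttpWebRequest request) =>
    typeof(HttpWebRequest)
        .GetField("_CoreResponse", BindingFlags.Instance | BindingFlags.NonPublic)
        ?.GetValue(request);

var request = (HttpWebRequest)WebRequest.Create("http://example.com/"); // placeholder URL
Console.WriteLine(TryGetCoreResponse(request) == null
    ? "no response captured (field absent or not yet populated)"
    : "got CoreResponseData");
```

Being reflection over undocumented internals, this can break with any framework update, which is presumably why the author hedges with "in most cases".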
What do you get in the status code and response of the second exception (not the inner exception)?
If a WebException is thrown, use the Response and Status properties of the exception to determine the response from the server.
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.getresponse(v=vs.110).aspx
You are not saying which exact version of Tomcat 7 you are using...
checked with WireShark
What do you actually see with WireShark?
Do you see the status line of response?
Do you see the complete status line, up to CR-LF characters at its end?
Is Tomcat asking for authentication credentials (401), or it is refusing file upload for some other reason (first acknowledging it with 100 but then aborting it mid-flight)?
The problem is that if the uploaded file is large, the request stream
is closed before I finish writing the request body, and I get an IOException:
If you do not want the connection to be closed, but instead want all the data to be transferred over the wire and swallowed on the server side, then on Tomcat 7.0.55 and later it is possible to configure the maxSwallowSize attribute on the HTTP connector, e.g. maxSwallowSize="-1".
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html
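In server.xml, that would look something like this (a sketch; the other connector attributes shown are just Tomcat's usual defaults):

```xml
<!-- Tomcat 7.0.55+: maxSwallowSize="-1" lets the connector swallow an
     aborted upload of any size instead of resetting the connection,
     so the client gets a chance to read the error response. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxSwallowSize="-1" />
```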
If you want to discuss the Tomcat side of connection handling, you would do better to ask on the Tomcat users' mailing list:
http://tomcat.apache.org/lists.html#tomcat-users
On the .NET side:
Is it possible to perform stream.Write() and request.GetResponse() simultaneously, from different threads?
Is it possible to performs some checks at the client side before actually uploading the file?
Hmmm... I don't get it: that is EXACTLY why, in many real-life scenarios, large files are uploaded in chunks (and not as a single large request).
By the way, many internet servers have size limitations; for instance, in Tomcat that is represented by maxPostSize (as seen in this link: http://tomcat.apache.org/tomcat-5.5-doc/config/http.html).
So tweaking the server configuration seems like the easy way, but I do think that the right way is to split the file across several requests.
EDIT: replace Uri.EscapeDataString with HttpServerUtility.UrlEncode
Uri.EscapeDataString(filename) // a problematic .net implementation
HttpServerUtility.UrlEncode(filename) // the proper way to do it
I am experiencing a pretty similar problem, also with Tomcat and a Java client. The Tomcat REST service sends an HTTP return code with a response body before reading the whole request body; the client, however, fails with an IOException. I inserted an HTTP proxy on the client to sniff the protocol, and the HTTP response actually is sent to the client eventually. Most likely Tomcat closed the request input stream before sending the response.
One solution is to use a different HTTP server, such as Jetty, which does not have this problem. The other is to put an Apache HTTP Server with AJP in front of Tomcat; Apache HTTP Server handles streams differently, and with that the problem goes away.