From an ASP.NET Web API 2.x controller I am serving files using an instance of the StreamContent type. When a file is requested, its blob is located in the database and a blob stream is opened. The blob stream is then used as input to a StreamContent instance.
Boiled down, my controller action looks similar to this:
[HttpGet]
[Route("{blobId}")]
public HttpResponseMessage DownloadBlob(int blobId)
{
    // ... find the blob in DB and open the 'myBlobStream' based on the given id

    var result = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(myBlobStream)
    };

    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    result.Content.Headers.ContentLength = myBlobStream.Length;
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = "foo.txt",
        Size = myBlobStream.Length
    };

    return result;
}
When I hit the endpoint in Chrome (v. 35) it says that it is resolving the host (localhost) and when the file has downloaded it then appears in the download bar. However, I am wondering what is needed to enable Chrome (or any other browser) to show the download progress?
I thought this would be fixed by including header information like Content-Type, Content-Length, and Content-Disposition, but from what I have tried, that does not make any difference.
It turned out that my implementation was correct. I closed Fiddler and everything worked as expected. I don't know whether Fiddler waits for the entire response to complete before it passes it through its proxy - at least, that would explain why the browser stays in the "resolving host" state until the entire file has been downloaded.
The Web API doesn't "push" information, so the only way to get progress out of it would be a background thread on your client polling the server for the download status every few seconds or so - and that is a bad idea, for a number of reasons:
Increased load on the server to serve multiple requests (imagine if many clients did that at the same time)
Increased data communication from your client (would be important if you were doing this on a mobile phone contract)
etc. (I'm sure I can think of more but it's late)
You might want to consider SignalR for this, although I'm no expert on it. According to the summary on the page I linked:
ASP.NET SignalR is a new library for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available. SignalR supports Web Sockets, and falls back to other compatible techniques for older browsers. SignalR includes APIs for connection management (for instance, connect and disconnect events), grouping connections, and authorization.
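To give a sense of scale, a server-side hub in SignalR 2.x is only a few lines. This is just a sketch, not anything from the question - the hub name, the method, and the updateProgress client callback are all made up:

using Microsoft.AspNet.SignalR;

public class DownloadProgressHub : Hub
{
    // Pushes a progress update to every connected client. "updateProgress" is whatever
    // function the JavaScript clients register on the hub proxy.
    public void ReportProgress(string downloadId, int percentComplete)
    {
        Clients.All.updateProgress(downloadId, percentComplete);
    }
}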
If your Web API can allow it, I suppose a potential alternative would be to first send a quick GET request to receive the size of the file you're about to download and store it in your client. In fact, you could utilise the Content-Length header here to avoid the extra GET. Then do your file download and, while it's happening, your client can report the download progress by comparing how much of the file it has received against the full size of the file it got from the server.
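To illustrate that last idea, here is a rough sketch of a client that reports progress by comparing the bytes received against the Content-Length header as it downloads. The URL and file name are placeholders:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class DownloadWithProgress
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        using (var response = await http.GetAsync(
            "http://localhost:12345/api/blobs/42", HttpCompletionOption.ResponseHeadersRead))
        {
            response.EnsureSuccessStatusCode();

            // The total size comes from the Content-Length header, so no extra GET is needed.
            long? total = response.Content.Headers.ContentLength;

            using (var body = await response.Content.ReadAsStreamAsync())
            using (var file = File.Create("foo.txt"))
            {
                var buffer = new byte[81920];
                long received = 0;
                int read;
                while ((read = await body.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    await file.WriteAsync(buffer, 0, read);
                    received += read;
                    if (total.HasValue)
                        Console.WriteLine("Downloaded {0}%", received * 100 / total.Value);
                }
            }
        }
    }
}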
Related
We have a web API that produces large files (up to 10 GB).
I am building an endpoint that will provide a file to the client.
There is a CloudFront server between the API and the client.
My current implementation has several issues I need to solve.
We are using .NET Core 3.1.
The service is hosted in IIS.
The code in the controller is:
return File(
    new FileStream(path, FileMode.Open, FileAccess.Read),
    ContentType.ApplicationOctetStream,
    filename);
I am getting a 504 response from the CloudFront server; the configured timeout is 60 seconds.
I am getting an out-of-memory exception on the server.
Questions:
Is there anything I need to add to the headers to make it come through the CloudFront server?
Should I use a different result type? I tried PhysicalFile() with the same results.
Are there any settings I should check on the CloudFront side?
Could the problem be on the client side? I have tested via Swagger and Postman with the same results.
Is there a way I can limit the amount of memory the endpoint can use? The host machine is very limited in resources.
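For reference, a streaming variant of the controller would look roughly like this. It is a sketch only: the route and _basePath are placeholders, and whether enableRangeProcessing makes any difference behind CloudFront is an assumption, not something verified:

[HttpGet("download/{name}")]
public IActionResult Download(string name)
{
    // Path handling here is illustrative only; a real endpoint must validate 'name'.
    var path = Path.Combine(_basePath, name);

    // FileStreamResult copies the stream to the response in chunks instead of
    // buffering the whole file; enableRangeProcessing adds Accept-Ranges support.
    var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
    return File(stream, "application/octet-stream", name, enableRangeProcessing: true);
}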
I have a problem when uploading large files on Azure. I am working on an ASP.NET Core 5.0 API project.
I have implemented the upload functionality following the Microsoft recommendation. I also added a polling mechanism so the frontend application has another endpoint to check the upload status.
Everything works fine when I run it locally, but I have a problem with large files on Azure. My API is using an Azure App Service Premium P1v3 plan. It returns a 502 Bad Gateway for large files (above 1 GB).
I ran some tests and 98% of the time is spent reading the stream. The code, taken from the Microsoft docs, is:
if (MultipartRequestHelper.HasFileContentDisposition(contentDisposition))
{
    untrustedFileNameForStorage = contentDisposition.FileName.Value;
    // Don't trust the file name sent by the client. To display
    // the file name, HTML-encode the value.
    trustedFileNameForDisplay = WebUtility.HtmlEncode(
        contentDisposition.FileName.Value);

    streamedFileContent =
        await FileHelpers.ProcessStreamedFile(section, contentDisposition,
            ModelState, _permittedExtensions, _fileSizeLimit);

    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
}
I know there is a load balancer timeout of 230 seconds on Azure App Service, but when I test it using Postman, in most cases the 502 is returned after about 30 seconds.
Maybe I need to set some configuration option on Azure App Service? Always On is enabled.
I would like to stay with Azure App Service, but I have been thinking about either migrating away from it or letting the frontend application upload files directly to Azure Blob Storage.
Do you have any idea how to solve it?
Uploading and Downloading large files in ASP.NET Core 3.1?
The previous answers are based on using only App Service, but storing large files in App Service is not recommended: first, future deployments will become slower and slower, and second, the disk space will quickly be used up.
So Azure Storage is recommended instead. If you use Azure Storage, Suggestion 2 below is recommended for larger files, so that they are uploaded in chunks.
First, please confirm whether the large file actually finishes transferring even though a 500 error is returned.
I have looked into this behavior before; it varies by browser, and the 500 error typically appears after roughly 230-300 seconds. However, looking through the logs, the program continues to run.
Related Post:
The request timed out. The web server failed to respond within the specified time
So here are two suggestions you can refer to:
Suggestion 1:
Create an HTTP endpoint (say, getStatus) in your program that reports the file upload progress, similar to a progress bar. When the file transfer starts, have the upload endpoint accept the request and return immediately (HTTP 201/202) while the upload progress is tracked server-side; the client then polls getStatus for the status value, and when it reaches 100% the upload is reported as successful.
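A rough sketch of that getStatus endpoint; the in-memory dictionary, the route, and the names are made up for illustration, and a real implementation would need to survive restarts and scale-out:

using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

// Sketch only: progress is tracked in a static dictionary keyed by an upload id.
public static class UploadProgressStore
{
    public static readonly ConcurrentDictionary<string, int> Percent =
        new ConcurrentDictionary<string, int>();
}

[ApiController]
[Route("api/uploads")]
public class UploadStatusController : ControllerBase
{
    // The upload endpoint returns an id immediately and updates
    // UploadProgressStore.Percent[id] as it reads the request stream;
    // the frontend polls this endpoint until the value reaches 100.
    [HttpGet("{id}/status")]
    public IActionResult GetStatus(string id)
    {
        if (UploadProgressStore.Percent.TryGetValue(id, out var percent))
        {
            return Ok(new { id, percent });
        }
        return NotFound();
    }
}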
Suggestion 2:
Use MultipartRequestHelper to upload the large file in chunks/slices. Your current usage may be wrong; please refer to the post below.
Dealing with large file uploads on ASP.NET Core 1.0
The .NET Core version is different, but the idea is the same.
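Along the lines of Suggestion 2, if the files are going to Azure Blob Storage anyway, the Azure.Storage.Blobs SDK can stage the chunks itself as block blobs. A sketch; the block size, the names, and how you obtain the BlockBlobClient are all up to you:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs.Specialized;

static class ChunkedBlobUpload
{
    public static async Task UploadInBlocksAsync(Stream source, BlockBlobClient blob)
    {
        const int blockSize = 4 * 1024 * 1024; // 4 MB per staged block
        var blockIds = new List<string>();
        var buffer = new byte[blockSize];
        int read, index = 0;

        while ((read = await source.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // Block ids must be base64 strings of equal length.
            var blockId = Convert.ToBase64String(BitConverter.GetBytes(index++));
            using (var block = new MemoryStream(buffer, 0, read))
            {
                await blob.StageBlockAsync(blockId, block);
            }
            blockIds.Add(blockId);
        }

        // Committing the block list assembles the staged blocks into the final blob.
        await blob.CommitBlockListAsync(blockIds);
    }
}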
I'm facing a similar issue when uploading larger documents (up to 100 MB) through an ASP.NET Core API hosted behind an Azure Application Gateway. I have set the timeout to 10 minutes and applied these attributes to the action:
[RequestFormLimits(MultipartBodyLengthLimit = 209715200)]
[RequestSizeLimit(209715200)]
Kestrel has also been configured to accept 200 MB:
UseKestrel(options =>
{
    options.Limits.MaxRequestBodySize = 209715200;
    options.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(10);
});
The file content is sent base64-encoded in the request object.
I would appreciate any help with this problem.
I have a minimal ASP.NET handler (.ashx) that returns a PDF file:
public class Handler1 : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/pdf";
        context.Response.BinaryWrite(File.ReadAllBytes(context.Server.MapPath("~/files/GettingStarted.pdf")));
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}
When I run my web application on IIS Express, the app is hosted at localhost:45050. If I browse to localhost:45050/handler1.ashx on my main development machine, the PDF is downloaded as expected. If I use the DHC Chrome extension (an HTTP client) to perform an HTTP GET on localhost:45050/handler1.ashx, an HTTP 200 OK response code is returned along with the binary data.
HOWEVER, if I run the exact same ASP.NET project on a different machine, I run into bizarre issues. With the project running locally on localhost:45050, I'm still able to browse to localhost:45050/handler1.ashx in Chrome/Firefox/IE to download the file. But, when I use the DHC extension to perform an HTTP GET on localhost:45050/handler1.ashx, there is no response!
I'm able to resolve localhost:45050 (the home page of the site) via DHC on this alternate machine. The server responds with 200 OK and yields the landing page.
But when dealing with the handler that returns binary content, I cannot get any response back from the server with any HTTP client aside from the browser's URL bar. How are browsers able to resolve the HTTP response when standalone HTTP clients cannot? Does anyone have any idea what may be happening here? What would cause behavior to change across machines? I'm trying to handle the response in a JavaScript client, but I'm not getting any data back.
Any help would be greatly appreciated. Thanks!
The top answer here...
Best way to stream files in ASP.NET
...resolved the problem. It seems that writing large files in a single call is a no-no on certain servers. You have to chunk the response manually.
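For reference, the manual chunking looks roughly like this in a handler; the buffer size is arbitrary and BufferOutput/Flush behavior can vary by host, so treat it as a sketch rather than the exact code from the linked answer:

public void ProcessRequest(HttpContext context)
{
    context.Response.ContentType = "application/pdf";
    context.Response.BufferOutput = false; // don't buffer the whole file server-side

    string path = context.Server.MapPath("~/files/GettingStarted.pdf");
    using (FileStream fs = File.OpenRead(path))
    {
        byte[] buffer = new byte[64 * 1024];
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            if (!context.Response.IsClientConnected)
                break; // stop writing if the client went away

            context.Response.OutputStream.Write(buffer, 0, read);
            context.Response.Flush(); // push this chunk to the client
        }
    }
}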
I want to run my personal web sites via an HttpHandler (I have a web server and a static IP at home).
Eventually, I will incorporate a data access layer and domain router into the handler, but for now, I am just trying to use it to return static web content.
I have the handler mapped to all verbs and paths with no access restrictions in IIS 7 on Windows 7.
I have added a little file logging at the beginning of ProcessRequest. As it is the first thing in the handler, I use the logging to tell me when the handler is hit.
At the moment, the handler just returns a single web page that I have already written.
The handler itself is mostly just this:
public void ProcessRequest(HttpContext context)
{
    using (FileStream fs = new FileStream(context.Request.PhysicalApplicationPath + "index.htm",
        FileMode.Open))
    {
        fs.CopyTo(context.Response.OutputStream);
    }
}
I understand that this won't work for anything but the one file.
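(For illustration, a path-aware version of the handler would look roughly like the sketch below; MimeMapping.GetMimeMapping is part of System.Web, the rest is purely illustrative.)

public void ProcessRequest(HttpContext context)
{
    // Map the requested URL onto the corresponding physical file instead of always index.htm.
    string path = context.Server.MapPath(context.Request.AppRelativeCurrentExecutionFilePath);

    context.Response.ContentType = MimeMapping.GetMimeMapping(path);
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    {
        fs.CopyTo(context.Response.OutputStream);
    }
}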
So my issue is this: the HTML file has links to some images in it. I would expect that the browser would come back to the server to get those images as new requests. I would expect those requests to fail (because they'd be mapped to index.htm). But I would expect to see the logging hit at least twice (and potentially hit recursively). However, I only see a single request. The web page comes up and the images are 'X's.
When I refresh the browser, I see another request come through, but only for the root page again. The page is basic HTML; I do not have an ASP.NET application (nor do I want one, I like HTML/CSS/JS).
What do I have to do to get more than just the first request sent from the browser? I assume I'm just totally off the mark because I wrote an HTTP Module first, but strangely got the same exact behavior. I'm thinking I need to specify some response headers, but don't see that in any example.
As part of learning node.js, I just created a very basic chat server with node.js and socket.io. The server basically adds everyone who visits the chat.html web page into a real-time chat, and everything seems to be working!
Now, I'd like to have a C# desktop application take part in the chat (without using a web browser control :)).
What's the best way to go about this?
I created a socket server in Node.js and connected to it using TcpClient.
using (var client = new TcpClient())
{
    client.Connect(serverIp, port);

    using (var w = new StreamWriter(client.GetStream()))
        w.Write("Here comes the message");
}
Try using the HttpWebRequest class. It is pretty easy to use and doesn't have any dependencies on things like System.Web or any specific web browser. I use it to simulate browser requests and analyze responses in testing applications. It is flexible enough to allow you to set your own per-request headers (in case you are working with a RESTful service, or some other service that expects specific headers). Additionally, it will follow redirects for you by default, but this behavior is easy to turn off.
Creating a new request is simple:
HttpWebRequest my_request = (HttpWebRequest)WebRequest.Create("http://some.url/and/resource");
To submit the request:
HttpWebResponse my_response = (HttpWebResponse)my_request.GetResponse();
Now you can make sure you got the right status code, look at response headers, and you have access to the response body through a stream object. In order to do things like add post data (like HTML form data) to the request, you just write a UTF8 encoded string to the request object's stream.
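To make that concrete, here is a sketch of adding form data to the request; the URL and field names are placeholders, and it needs System.Net, System.Text and System.IO:

HttpWebRequest my_request = (HttpWebRequest)WebRequest.Create("http://some.url/and/resource");
my_request.Method = "POST";
my_request.ContentType = "application/x-www-form-urlencoded";

// Write the form data to the request stream as UTF-8.
byte[] body = Encoding.UTF8.GetBytes("name=foo&value=bar");
my_request.ContentLength = body.Length;
using (Stream request_stream = my_request.GetRequestStream())
{
    request_stream.Write(body, 0, body.Length);
}

HttpWebResponse my_response = (HttpWebResponse)my_request.GetResponse();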
This library should be pretty easy to include into any WinForms or WPF application. The docs on MSDN are pretty good.
One gotcha, though: if the server returns an error status code (4xx or 5xx), HttpWebRequest throws an exception that you have to catch. Fortunately you can still access the response object, but it is kind of annoying that you have to handle it as an exception (especially when the error originates on the server side and not in your client code).
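Handling that while still getting at the response looks something like this:

try
{
    HttpWebResponse my_response = (HttpWebResponse)my_request.GetResponse();
    // ... use my_response as usual ...
}
catch (WebException ex)
{
    // The server did respond, just with a non-success status code.
    HttpWebResponse error_response = ex.Response as HttpWebResponse;
    if (error_response != null)
    {
        Console.WriteLine(error_response.StatusCode);
    }
}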