I have a Silverlight 4 out-of-browser application that needs to be able to resume the download of an external file if the download is interrupted for any reason. I would like to be able resume instead of restart from the beginning because the file will be rather large and we have the potential to have users on slower connections.
I found some code at
http://www.codeproject.com/Tips/157532/Silverlight-4-OOB-HTTP-Download-Component-with-Pau.aspx
but there seem to be numerous errors in it, so I'm not exactly confident that I'll be able to get it to work.
So, if anyone has any other original suggestions or alternatives, I'd like to hear them.
Thanks,
One approach you might consider is managing the download using the HTTP/1.1 Accept-Ranges response header and the Range request header.
Make sure the resource you are downloading will include the header:-
Accept-Ranges: bytes
when it is requested (a static file sent by IIS will do this by default).
Now, using the client HTTP stack, you make an initial "HEAD" request to determine whether the server will accept a Range: bytes= header in the request, and to find the total size of the content to be sent.
You then make a "GET" request for the resource including the header:-
Range: bytes=0-65535
This limits the downloaded content to just the first 64K chunk. You then repeat the same request with:-
Range: bytes=65536-131071
Each time you can save the content of the response stream to your destination file, keeping track of how many bytes you have received. When you reach the final chunk, which is likely to be less than full, just use a header like:-
Range: bytes=131072-
That will read to the end of the file.
If a request to the server fails, you can resume at the appropriate point in this sequence.
You need to degrade gracefully: if the server does not include the Accept-Ranges header in the response to the initial "HEAD" request, then you'll just have to download the whole file.
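A rough sketch of that flow, using the synchronous full-framework HttpWebRequest API for brevity (Silverlight's client HTTP stack is async-only, so there you would translate this to BeginGetResponse/EndGetResponse; the URL and file name are placeholders):

using System;
using System.IO;
using System.Net;

class RangeDownloader
{
    const int ChunkSize = 65536;

    static void Download()
    {
        string url = "http://example.com/bigfile.bin";   // placeholder
        string path = "bigfile.bin";                     // placeholder

        // 1. "HEAD" request: check Accept-Ranges and learn the total length.
        var head = (HttpWebRequest)WebRequest.Create(url);
        head.Method = "HEAD";
        bool acceptsRanges;
        long totalLength;
        using (var resp = (HttpWebResponse)head.GetResponse())
        {
            acceptsRanges = resp.Headers["Accept-Ranges"] == "bytes";
            totalLength = resp.ContentLength;
        }

        using (var file = new FileStream(path, FileMode.OpenOrCreate))
        {
            long received = file.Length;          // resume point after an interruption
            file.Seek(received, SeekOrigin.Begin);

            while (received < totalLength)
            {
                // 2. "GET" one chunk: Range: bytes=received-(received+ChunkSize-1),
                //    clipped to the end of the content for the final chunk.
                var req = (HttpWebRequest)WebRequest.Create(url);
                if (acceptsRanges)
                    req.AddRange(received, Math.Min(received + ChunkSize, totalLength) - 1);

                using (var resp = (HttpWebResponse)req.GetResponse())
                using (var body = resp.GetResponseStream())
                    body.CopyTo(file);            // append the chunk to the file

                received = file.Length;
                if (!acceptsRanges) break;        // no range support: whole file came in one go
            }
        }
    }
}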
I've been going through some code and have come across
private readonly HttpClient _client;
_client = new HttpClient(clientHandler);
_client.DefaultRequestHeaders.ExpectContinue = false;
MSDN (https://goo.gl/IoZlB1) doesn't contain much information about ExpectContinue. Also, the HttpRequestHeader enumeration on MSDN (https://goo.gl/IoZlB1) describes Expect as
The Expect header, which specifies particular server behaviors that
are required by the client.
I'm hoping if someone can shed some light on ExpectContinue. What is the purpose of it and what happens if it is true or false?
The 100 (Continue) status is used mostly for sending the request headers first, to see if the server will allow (accept) the request. If the server says OK, it sends back 100 Continue and the client proceeds with the request body. Otherwise, the server responds with 417 (Expectation Failed).
Suppose you are going to upload a 1 GB file to a specific folder on a server. If you start the transfer directly and the server does not accept files bigger than 512 MB, or the folder does not exist, the server will not accept the file and the transfer will have been a waste of resources for both sides.
Check the W3C documentation here
See section 8.2.3 Use of the 100 (Continue) Status
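In code it is just a flag on the default request headers; roughly:

using System.Net.Http;

var client = new HttpClient();

// When true and the request has a body, the client sends the headers plus
// "Expect: 100-continue", waits for the server's interim "100 Continue"
// (or a 417 rejection), and only then uploads the body.
// When false, the Expect header is omitted and the body is sent straight
// away, saving a round trip at the risk of a wasted upload if the server
// would have rejected the request anyway.
client.DefaultRequestHeaders.ExpectContinue = false;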
The last few days I've been building a web server application in C# that uses HttpListener. I've learned quite a lot along the way, and still am. Currently I have it all working, setting headers here and there depending on the situation.
In most cases things work fine, but at times an exception is thrown. This happens on a few occasions. Most if not all of them involve the connection being closed before all the data is sent; that's when the error occurs. But some of them really seem to be caused by browsers, as far as I can tell.
Take Chrome, for example. Whenever I go to an MP3 file directly, it sends 2 GET requests. One of them causes the error, while the other works and receives part of the content. After this, I can listen to the MP3 with no issues. Streaming works.
But back to the request that gives me the error: there is nothing in its headers that I could use in my code to not output data, like I already do with HEAD requests. So I'm quite puzzled here.
IE also has this problem, both when opening MP3 files directly and when streaming via the HTML5 audio tag. It also varies from time to time. Sometimes I open the page and only 2 requests are made: the HTML page and the MP3. No error there. Sometimes, though, there are 3 requests and it connects to the MP3 twice. Sometimes one of those connections is aborted straight after I open the page, and sometimes the 2 requests to the MP3 file don't even accept data. In both request headers they want the end of the file, so bytes: 123-123/124.
I've also tested it on w3schools' audio element page. IE makes two connections there as well, one aborted, the other loading the MP3 file.
So my question is: is it possible to make the web server exception-proof? Or, maybe a better question: is it bad that these exceptions are thrown? Or do you perhaps know how to fix these errors?
The error I'm getting is: I/O Operation has been aborted by either a thread exit or an application request.
The way I write to the client is:
using (Stream Output = _CResponse.OutputStream)
{
    Output.Write(_FileOutput, rangeBegin, rangeLength);
}
I am not sure if there's another (better) way. This is what I came across in many topics, tutorials and pages while researching.
About headers: the default headers are Content-Length, Content-Type and the status code. In some cases, like MP3 files and videos, I add an Accept-Ranges: bytes header. In case the request has a Range header, I add a Content-Range header and the PartialContent status code.
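For completeness, the Range handling before that write is roughly this (simplified; _CRequest is a stand-in name for my HttpListenerRequest field, and suffix ranges like bytes=-500 are left out):

// (needs: using System.Net;)
string rangeHeader = _CRequest.Headers["Range"];        // e.g. "bytes=0-" or "bytes=123-123"
int rangeBegin = 0;
int rangeLength = _FileOutput.Length;

if (rangeHeader != null && rangeHeader.StartsWith("bytes="))
{
    string[] parts = rangeHeader.Substring(6).Split('-');
    rangeBegin = int.Parse(parts[0]);
    int rangeEnd = parts[1].Length > 0 ? int.Parse(parts[1])
                                       : _FileOutput.Length - 1;
    rangeLength = rangeEnd - rangeBegin + 1;

    _CResponse.StatusCode = (int)HttpStatusCode.PartialContent;
    _CResponse.AddHeader("Content-Range",
        "bytes " + rangeBegin + "-" + rangeEnd + "/" + _FileOutput.Length);
}
_CResponse.ContentLength64 = rangeLength;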
From the server's point of view any client can disconnect at any time. This is part of the normal operation of a server. Detect this specific case, log it and swallow the exception (because it has been handled). It's not a server bug.
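Concretely, with the write snippet from the question, that comes down to something like this:

// (needs: using System.IO; using System.Net;)
try
{
    using (Stream output = _CResponse.OutputStream)
    {
        output.Write(_FileOutput, rangeBegin, rangeLength);
    }
}
catch (HttpListenerException)
{
    // The client disconnected mid-response (aborted probe request, closed
    // tab, etc.). Log it if you like and carry on serving other requests.
}
catch (IOException)
{
    // Same story: the underlying connection was torn down while writing.
}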
I need to implement a file downloader in C#. This downloader will be running on the client computer, and will download several files according to several conditions.
The main restriction I have is that the client will probably go offline during downloading (sometimes more than once), so I need the following things to happen:
1) Downloader should notice there isn’t any network communication anymore and pause downloading.
2) Downloader should resume downloading once communication is back, and continue collecting the packages, adding them to those that were already downloaded to the local disk.
I have checked StackOverflow previous posts and saw that there are two options – WebClient and WebRequest (using one of the inheritance classes). I was wondering if someone can advise which one to use based on the requirements I have specified. How can I detect communication breakage?
You will need System.Net.HttpWebRequest to send HTTP requests and System.IO.FileStream to access files. The two methods needed are HttpWebRequest.AddRange and FileStream.Seek.
The HttpWebRequest.AddRange method adds a byte-range header to the request; the range parameter specifies the starting point of the range, and the server will send data from that point to the end of the data in the HTTP entity. The FileStream.Seek method sets the current position within a stream.
Source and example
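A minimal sketch of how the two fit together, assuming url and path hold the download address and the local file (the long overload of AddRange needs .NET 4; earlier versions only take int):

// (needs: using System.IO; using System.Net;)
long existing = File.Exists(path) ? new FileInfo(path).Length : 0;

var request = (HttpWebRequest)WebRequest.Create(url);
request.AddRange(existing);                      // sends "Range: bytes=<existing>-"

using (var response = (HttpWebResponse)request.GetResponse())
using (var remote = response.GetResponseStream())
using (var local = new FileStream(path, FileMode.OpenOrCreate))
{
    local.Seek(existing, SeekOrigin.Begin);      // continue right after what we have
    remote.CopyTo(local);                        // CopyTo needs .NET 4 as well
}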
You need download resume (if supported by the server you are downloading from), which means you should go with WebRequest, since WebClient cannot do that (probably future versions will support Range requests).
As soon as the connection drops, the code reading the network stream throws an exception. That tells you there is a problem with the download (i.e. a network problem). You can then periodically try to make a new connection and, once it succeeds, resume from the last successful byte (using the Range HTTP header).
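In outline it might look like this (DownloadFromCurrentOffset is a placeholder for your range-resuming routine, not a real API, and the 30-second delay is arbitrary):

// (needs: using System; using System.IO; using System.Net; using System.Threading;)
bool finished = false;
while (!finished)
{
    try
    {
        DownloadFromCurrentOffset();
        finished = true;                          // completed without interruption
    }
    catch (WebException)                          // connection dropped mid-request
    {
        Thread.Sleep(TimeSpan.FromSeconds(30));   // back off, then resume
    }
    catch (IOException)                           // reading the stream can fail this way too
    {
        Thread.Sleep(TimeSpan.FromSeconds(30));
    }
}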
I'm currently developing a service in which a client communicates with the server by sending XML files with messages. To improve the reliability of messaging (the client will be using low-quality, limited-bandwidth mobile internet), I chunk these messages into smaller portions of 64 or 128 KB and send them with transfer="streamed" in a BasicHttp binding.
Now, I have a problem:
the server should report to the client whether it successfully received a chunk, so that after, e.g., 5 chunks fail to transfer, the transfer process is cancelled and postponed to be tried later; it also needs to keep track of which chunks were received and which were not.
I'm thinking about using the callback mechanism to communicate with the client, so the server would invoke a callback method ChunkReceived in its [OperationContract] when it saves a chunk to a file on the server side. But, correct me if I'm wrong, callbacks only work with the wsDualHttp binding and aren't supported in the basicHttp binding, while streamed transfer isn't supported in the wsDualHttp binding.
So is it OK for me to switch to the wsDualHttp binding and use transfer="buffered" (considering the chunk size is relatively small), or won't that hurt the reliability of the transfer? Or maybe I can somehow communicate with the client in the basicHttp binding, perhaps by returning some kind of response message, i.e.
[OperationContract]
ServerResponse SendChunk(Chunk chunk);
where ServerResponse will hold some enum or bool flag telling the client whether the SendChunk operation succeeded. But then I will have to keep some kind of array on both the client and the server side to track the status of all the chunks. I'm just not sure what the best pattern to use here is. Any advice would be highly appreciated.
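Spelled out, the shape I have in mind is something like this (all names are just illustrative):

// (needs: using System; using System.Runtime.Serialization; using System.ServiceModel;)
[DataContract]
public class Chunk
{
    [DataMember] public Guid MessageId { get; set; }    // which message this belongs to
    [DataMember] public int Index { get; set; }         // position of this chunk
    [DataMember] public int TotalChunks { get; set; }
    [DataMember] public byte[] Payload { get; set; }    // 64 or 128 KB of data
}

[DataContract]
public class ServerResponse
{
    [DataMember] public bool Ok { get; set; }
}

[ServiceContract]
public interface IChunkService
{
    [OperationContract]
    ServerResponse SendChunk(Chunk chunk);
}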
We had a similar problem in our application: low bandwidth and many disconnects/timeouts. Our messages are smaller, so we didn't split them, but the solution should work for chunks too. We created a Repeater on the client. This proved to be a reliable solution: it works well on clients with slow, poor connections (like GPRS on the move, which disconnects often), and the client won't get timeout errors if the server slows down under high load. Here is a modified version, with chunks:
Client:
1. Send chunk #1, with a pretty short timeout.
2. Was there an OK response?
   2A. Yes: go to step 3.
   2B. No: repeat the current chunk.
3. Was that the last chunk?
   3A. Yes: process the response.
   3B. No: send the next chunk and go back to step 2.
Server:
1. Accept the request.
2. Is this chunk a repeat?
   2A. Yes: is it the final chunk?
      3A. Yes: send the response if it is ready, otherwise wait (this will probably make the client repeat).
      3B. No: send an OK response.
   2B. No:
      4. Save the request somewhere (a list, dictionary, etc.).
      5. Is this the last chunk?
         5A. Yes: process the message, save the response, and send it to the client.
         5B. No: send an OK response.
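In C# the client side of that comes out roughly as follows (proxy, Chunk and ServerResponse follow the illustrative shapes from the question; the attempt limit is arbitrary):

// (needs: using System; using System.ServiceModel;)
const int MaxAttemptsPerChunk = 5;

foreach (Chunk chunk in chunks)
{
    bool delivered = false;
    for (int attempt = 1; attempt <= MaxAttemptsPerChunk && !delivered; attempt++)
    {
        try
        {
            // Short timeout is configured on the binding, per step 1.
            ServerResponse response = proxy.SendChunk(chunk);
            delivered = response.Ok;
        }
        catch (TimeoutException)
        {
            // Short timeout expired: fall through and repeat this chunk.
        }
        catch (CommunicationException)
        {
            // Connection dropped: repeat this chunk as well.
        }
    }

    if (!delivered)
        throw new ApplicationException("Transfer failed; postpone and retry later.");
}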
I am using the WebClient.UploadFile() method to post files to a service for processing. The file contains an XML document with a compressed, b64-encoded content element in it. For some files (currently 1), the UploadFile throws an exception, indicating that the underlying connection was closed. The innermost exception on socket level gives the message 'An existing connection was forcibly closed by the remote host'.
Questions:
Has anyone encountered the same problem?
Why does it throw an exception for some files, and not for all?
Should I set some additional parameter for files with binary content?
Is there a workaround?
This functionality does work fine in a VPN situation, but obviously we want it to work in standard Internet situations.
Thanks, Rine
Sounds like a firewall or other security software sitting in between you and the server may be rejecting the request as a potential attack. I've run into this before where firewalls were rejecting requests that contained a specific file extension-- even if that file extension was encoded in a query string parameter!
If I were you, I'd take the problematic file and start trimming XML out of it. You may be able to find a specific chunk of XML which triggers the issue. Once you've identified the culprit, you can figure out how to get around the issue (e.g. by encoding those characters using their Unicode values instead of as text before sending the files). If, however, any change to the file causes the problem to go away (it's not caused by a specific piece of worrisome text), then I'm stumped.
Any chance it's a size issue and the problematic file is above a certain size and all the working files are below it? The server closing the connection when it hits a max accepted request size matches your symptom. You mentioned it worked in VPN so it's admittedly a stretch, but maybe the VPN case was a different server that's configured differently (or the max request is different for some other reason).
Are there non-WebClient methods for uploading the file to the same service from the same machine and if so, do they work?