I am using the WebClient.UploadFile() method to post files to a service for processing. Each file contains an XML document with a compressed, base64-encoded content element in it. For some files (currently one), UploadFile throws an exception indicating that the underlying connection was closed. The innermost, socket-level exception carries the message 'An existing connection was forcibly closed by the remote host'.
Questions:
Has anyone encountered the same problem?
Why does it throw an exception for some files, and not for all?
Should I set some additional parameter for files with binary content?
Is there a workaround?
This functionality works fine over a VPN, but obviously we want it to work in standard Internet situations as well.
Thanks, Rine
Sounds like a firewall or other security software sitting between you and the server may be rejecting the request as a potential attack. I've run into this before, where firewalls were rejecting requests that contained a specific file extension, even if that file extension was encoded in a query string parameter!
If I were you, I'd take the problematic file and start trimming XML out of it. You may be able to find a specific chunk of XML which triggers the issue. Once you've identified the culprit, you can figure out how to get around the issue (e.g. by encoding those characters using their Unicode values instead of as text before sending the files). If, however, any change to the file causes the problem to go away (it's not caused by a specific piece of worrisome text), then I'm stumped.
Any chance it's a size issue, where the problematic file is above a certain size and all the working files are below it? The server closing the connection when it hits a maximum accepted request size matches your symptom. You mentioned it worked over VPN, so it's admittedly a stretch, but maybe the VPN case hit a different server that's configured differently (or the maximum request size differs for some other reason).
Are there non-WebClient methods for uploading the file to the same service from the same machine and if so, do they work?
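For example, here is a rough sketch of posting the same file with a raw HttpWebRequest; the URL, content type and file path are placeholders, and the service may expect different headers or encoding:

using System;
using System.IO;
using System.Net;

class RawUploadTest
{
    static void Main()
    {
        // Placeholders: point these at the real service and the problematic file.
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/process");
        request.Method = "POST";
        request.ContentType = "text/xml";

        byte[] payload = File.ReadAllBytes(@"C:\temp\problem-file.xml");
        request.ContentLength = payload.Length;

        using (Stream body = request.GetRequestStream())
        {
            body.Write(payload, 0, payload.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine("HTTP {0}: {1}", (int)response.StatusCode, reader.ReadToEnd());
        }
    }
}

If the raw request fails the same way, that points at something between you and the server rather than at WebClient itself.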
Related
I've got a WebApi method that will receive multipart data. Current implementation uses a MultipartMemoryStreamProvider to receive the contents.
I'd like to ensure that certain parts are received before others (e.g. a file's hash before that file's contents). Unfortunately, from what I can tell of the framework, the HttpContentMultipartExtensions instantiate a MimeMultipartParser over the HttpContent's stream, and the MimeMultipartParser uses the provided StreamProvider to instantiate new streams as the data arrives, but there appears to be no notification/eventing as the parser switches from the previous stream to the next.
By using events, I could queue up the file's hash (before the file contents arrive), stream the file contents out to disk, and verify the hash while the next file's hash/contents are still arriving.
Unfortunately, every example and bit of code I see suggests that I can only access the content streams after they are complete. (I do see that MimeMultipartBodyPartParser's ParseBuffer yields its returned MimeBodyParts; I just didn't see any way to access them, since it only appears to be called from the private MultipartReadAsyncComplete.)
Am I missing something? Is there a better way?
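To make the idea concrete, here is the rough shape of the per-part hook I'm hoping for. The only candidate I can see is a custom MultipartStreamProvider, since its GetStream override is called as each part's headers arrive; the event and the MemoryStream choice below are purely illustrative:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

// Illustrative sketch only: GetStream is invoked once per body part, before that
// part's content has been read, so it can double as a "part started" notification.
public class NotifyingStreamProvider : MultipartStreamProvider
{
    public event Action<HttpContentHeaders> PartStarted;

    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        var handler = PartStarted;
        if (handler != null)
            handler(headers);

        // A real implementation could hand back a file or pipe stream here instead.
        return new MemoryStream();
    }
}

It would be consumed with Request.Content.ReadAsMultipartAsync(new NotifyingStreamProvider()); whether that is enough to process parts as they stream in, rather than after the whole request is buffered, is exactly what I'm unsure about.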
I would not try to solve MITM attacks in the MIME parsing code itself, as a MITM is an attack inserted into part of the architecture.
There are topologies and proven solutions, such as multifactor authentication or SSL, that will allow your client to determine the validity of the session and the sender of the content.
Like Evk states, if your architecture is susceptible to MITM, then you need to look at the security of the connection end to end, not a validation of the MIME data on receipt.
If you are concerned about the security of the data, then you need to consider encryption of the data, and using non-opaque encryption techniques (not signing), which would prevent manipulation in transit, and allow the client to decrypt the message on receipt.
If securing the connection between client and server to avoid MITM does not work for you, maybe you could explain the constraints on your architecture so that we can provide better answers.
This kind of question has been asked several times. I understand why it happens, and there is probably nothing we can do about it except retry.
I do have one question on name resolution though.
I am using the AWS .NET SDK for .NET 3.5. I am uploading big files (>500MB, up to 1.5GB; medical images) by calling the TransferUtility.Upload() method.
For the most part the program works great.
Occasionally we get this error in the middle of the upload; it usually happens when the internet connection is slow.
I can catch the exception and retry, which means retrying from the beginning, since the exception happens inside the AWS code.
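For reference, the retry I do today looks roughly like the sketch below; the region, bucket name, file path and delay are placeholders, and each attempt starts the upload over from byte zero:

using System;
using System.Threading;
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Transfer;

class UploadWithRetry
{
    static void Main()
    {
        // Placeholder region, bucket and file path.
        var s3Client = new AmazonS3Client(RegionEndpoint.USEast1);
        var transfer = new TransferUtility(s3Client);

        const int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                transfer.Upload(@"C:\images\study.dcm", "my-bucket");
                return;  // upload succeeded
            }
            catch (AmazonServiceException ex)
            {
                Console.WriteLine("Attempt {0} failed: {1}", attempt, ex.Message);
                if (attempt == maxAttempts) throw;
                Thread.Sleep(TimeSpan.FromSeconds(30));  // crude wait before restarting from scratch
            }
        }
    }
}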
My question is: if the program has resolved the S3 bucket name and has been uploading for a while, why would it give me a name resolution error instead of just using the cached resolved name?
Does each thread resolve the name independently, with one of the threads failing because the network is saturated? Is this a computer setting? We were able to reproduce this error pretty consistently on a Windows 10 machine with Charter as the ISP, uploading an 800MB file.
The error occurred after about 250MB of the upload was done.
This is the actual exception
Exception during upload :Amazon.Runtime.AmazonServiceException:
A WebException with status NameResolutionFailure was thrown. --->
System.Net.WebException: The remote name could not be resolved: 'my-bucket.s3.amazonaws.com'
This web exception is telling you that there was an issue with the "Name Resolution". What it doesn't tell you is that the "name" it's referring to is the "EndpointRegion", for example: USEast1, USEast2 etc.
When using Amazon.S3.Transfer.TransferUtility, it's crucial that the EndpointRegion you use in the Upload call MATCHES that of the bucket you're uploading into.
In my case using RegionEndpoint.GetBySystemName("USEast1") vs RegionEndpoint.GetBySystemName("US-East-1") was the difference maker.
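In other words, something along these lines, assuming the bucket is in US East (N. Virginia); whatever region you pass to the client has to match the bucket's actual region:

using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

// Sketch: take the region from the SDK's built-in constants and hand it to the client.
RegionEndpoint region = RegionEndpoint.USEast1;
Console.WriteLine(region.SystemName);           // prints "us-east-1"

var s3Client = new AmazonS3Client(region);
var transferUtility = new TransferUtility(s3Client);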
Another cause of this issue could be DNS resolution. If your system is not able to perform DNS lookups, you will get this same error.
For the last few days I've been building a web server application in C# that uses HttpListener. I've learned quite a lot along the way, and still am. Currently I've got it all working, setting headers here and there depending on certain situations.
In most cases things work fine, but at times an exception is thrown. This happens on a few occasions. Most if not all of them involve a connection being closed before all the data has been sent; then the error occurs. But some of them really seem to be caused by the browsers, as far as I can tell.
Take Chrome, for example. Whenever I go to an MP3 file directly, it sends two GET requests, and one of them causes the error while the other works and receives part of the content. After this, I can listen to the MP3 and there are no issues. Streaming works.
But back to the request that gives me the error: there is nothing in its headers that I could use in my code to suppress the output, like I already do with HEAD requests. So I'm quite puzzled here.
IE also has this problem, both when opening MP3 files directly and when streaming via an HTML5 audio tag. It also varies from time to time. Sometimes I open the page and only two requests are made, the HTML page and the MP3, with no error. Sometimes, though, there are three requests, and it connects to the MP3 twice. Sometimes one of those connections is aborted straight after I open the page, and sometimes two requests to the MP3 file don't even accept data. In both cases the request headers ask for the end of the file, i.e. bytes 123-123/124.
I've also tested it on w3schools' audio element page. IE makes two connections there as well, one aborted and the other loading the MP3 file.
So my question is: is it possible to make the web server exception/error-proof, or, maybe a better question, is it bad that these exceptions are thrown? Or do you perhaps know how to fix these errors?
The error I'm getting is: I/O Operation has been aborted by either a thread exit or an application request.
The way I write to the client is:
using (Stream Output = _CResponse.OutputStream)
{
    Output.Write(_FileOutput, rangeBegin, rangeLength);
}
I am not sure if there's another (better) way. This is what I came across in many topics, tutorials and pages while researching.
About headers: the default headers are Content-Length, Content-Type and the status code. In some cases, like MP3 and video files, I add an Accept-Ranges: bytes header. If the request has a Range header, I add a Content-Range header and set the PartialContent status code.
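Concretely, the header handling looks roughly like this (a sketch of my current code, reusing _CResponse, _FileOutput, rangeBegin and rangeLength from the snippet above; isRangeRequest is just a placeholder flag set when the request carried a Range header):

// Sketch of the header handling before the body is written.
_CResponse.ContentType = "audio/mpeg";                       // or whatever matches the file
_CResponse.AddHeader("Accept-Ranges", "bytes");

if (isRangeRequest)
{
    _CResponse.StatusCode = (int)HttpStatusCode.PartialContent;   // 206
    _CResponse.AddHeader("Content-Range",
        string.Format("bytes {0}-{1}/{2}",
            rangeBegin, rangeBegin + rangeLength - 1, _FileOutput.Length));
    _CResponse.ContentLength64 = rangeLength;
}
else
{
    _CResponse.StatusCode = (int)HttpStatusCode.OK;
    _CResponse.ContentLength64 = _FileOutput.Length;
}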
From the server's point of view any client can disconnect at any time. This is part of the normal operation of a server. Detect this specific case, log it and swallow the exception (because it has been handled). It's not a server bug.
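A minimal sketch of that, wrapped around the write code from the question; when the client goes away mid-response, HttpListener surfaces it as an HttpListenerException (occasionally as an IOException), which you can log and move on from:

try
{
    using (Stream output = _CResponse.OutputStream)
    {
        output.Write(_FileOutput, rangeBegin, rangeLength);
    }
}
catch (HttpListenerException ex)
{
    // The client aborted the connection (e.g. the browser's extra MP3 request).
    // Not a server bug: log it and keep serving other requests.
    Console.WriteLine("Client disconnected: {0}", ex.Message);
}
catch (IOException ex)
{
    // Some disconnects surface as an IOException wrapping the aborted operation.
    Console.WriteLine("Connection aborted: {0}", ex.Message);
}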
I need to implement a file downloader in C#. This downloader will be running on the client computer, and will download several files according to several conditions.
The main restriction I have is that the client will probably go offline during downloading (sometimes more than once), so I need the following things to happen:
1) Downloader should notice there isn’t any network communication anymore and pause downloading.
2) Downloader should resume downloading once communication is back, and continue collecting the packages, adding them to those that were already downloaded to the local disk.
I have checked previous StackOverflow posts and saw that there are two options, WebClient and WebRequest (using one of the inheriting classes). I was wondering if someone could advise which one to use based on the requirements I have specified. How can I detect communication breakage?
You will need System.Net.HttpWebRequest to send HTTP requests and System.IO.FileStream to access files. The two methods you need are HttpWebRequest.AddRange and FileStream.Seek.
The HttpWebRequest.AddRange method adds a byte-range header to the request; the range parameter specifies the starting point of the range, and the server will send data from that point to the end of the data in the HTTP entity. The FileStream.Seek method sets the current position within a stream.
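Roughly, the two calls fit together like this (URL and file path are placeholders; the long overload of AddRange needs .NET 4, older frameworks have an int overload):

using System.IO;
using System.Net;

// Resume from wherever the partial local file currently ends.
var localFile = new FileInfo(@"C:\downloads\bigfile.zip");
long alreadyDownloaded = localFile.Exists ? localFile.Length : 0;

var request = (HttpWebRequest)WebRequest.Create("http://example.com/bigfile.zip");
request.AddRange(alreadyDownloaded);                      // sends "Range: bytes=<offset>-"

using (var fileStream = new FileStream(localFile.FullName, FileMode.OpenOrCreate, FileAccess.Write))
{
    fileStream.Seek(alreadyDownloaded, SeekOrigin.Begin);  // continue after the existing bytes
    using (var response = request.GetResponse())
    using (var responseStream = response.GetResponseStream())
    {
        responseStream.CopyTo(fileStream);                 // .NET 4; copy in a loop on older versions
    }
}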
You need download resume (if supported by the server you are downloading from), which means you should go with WebRequest, since you cannot do that with WebClient (perhaps future versions will support Range requests).
As soon as the connection is dropped, the code that is reading the network stream throws an exception. That tells you there is a problem with the download (i.e. a network problem). You can then periodically try to make a new connection and, if it succeeds, resume from the last successful byte (using a Range header in the HTTP request).
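Put together, the pause/resume behaviour can be a simple outer loop like the sketch below. DownloadFromOffset is a hypothetical helper doing the AddRange/Seek work described above, and url, localPath and the delay are placeholders:

// Outer retry loop: each failed attempt waits, then resumes from the bytes already on disk.
bool finished = false;
while (!finished)
{
    try
    {
        long offset = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;
        DownloadFromOffset(url, localPath, offset);   // hypothetical helper (AddRange + Seek)
        finished = true;
    }
    catch (WebException)
    {
        // Connection dropped (or could not be established): wait, then try again.
        Thread.Sleep(TimeSpan.FromSeconds(10));
    }
    catch (IOException)
    {
        Thread.Sleep(TimeSpan.FromSeconds(10));
    }
}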
I had an XmlDocument loading a document from a server with no problems until, almost randomly, I started getting a connection refused error.
It also doesn't matter what host I put in, whether it's a legit one or unresolvable. It always gives the same result.
Here's the code:
XmlDocument doc = new XmlDocument();
doc.Load("http://doesnotmatterifIresolveornot.com");
And here is the error:
{"No connection could be made because the target machine actively refused it 127.0.0.1:8888"}
I've turned off any applicable firewalls I can find in Windows 7, but it's weird because it happened while I was testing it.
Find out why it's trying to go to 127.0.0.1:8888.
My guess is that for some reason, it thinks that's your HTTP proxy. Did you run something like Fiddler recently? Fiddler runs on 8888 and changes your default proxy settings - maybe they got stuck incorrectly?
Look in Control Panel, or in the Internet Explorer settings.
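As a quick diagnostic, you can also bypass whatever proxy is configured and fetch the document yourself; if this works while XmlDocument.Load(url) fails, the stale proxy setting is almost certainly the culprit:

using System.Net;
using System.Xml;

// Fetch the XML with no proxy at all, then load it into the document.
var client = new WebClient { Proxy = null };
string xml = client.DownloadString("http://doesnotmatterifIresolveornot.com");

var doc = new XmlDocument();
doc.LoadXml(xml);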
Are you serving your XML document using IIS? If so, you may need to add a MIME type definition to IIS to serve XML files. This article should help with that (if it is indeed the problem).
You may also try the HTTP loader to get a more detailed picture of what the server is responding with (the HTTP headers, in particular, could be useful for troubleshooting).
I suspect the primary issue is that you're trying to connect to a socket (host + port) that the server isn't configured to listen on. That means you'll get this error regardless of whether or not the URL resolves, since the server isn't configured to deal with a socket connection of the sort you're sending it.