Error: (502) Command not implemented. Using FtpWebResponse .net - c#

Alright, so here's how it goes: I'm trying to set up a polling system to pull log files from several laser systems, each with its own FTP server. However, I'm running into difficulty when calling FtpWebRequest.GetResponse() to download the log file. The following is the code I'm using:
// Get the object used to communicate with the server.
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://192.168.10.140/param.dat");
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = new NetworkCredential("user", "pass");
request.UsePassive = false;
request.Proxy = null;
request.UseBinary = true;
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
So I freeze up on that last line with: "The remote server returned an error: (502) Command not implemented."
I've tried a few different ways to grab files from the system, just to see if it's some kind of setting I'm missing. These are my results:
Microsoft CMD.exe: Connects up fine and can download files and perform standard ftp commands
Internet Explorer: Entering in address to file it downloads the file just fine
Firefox: "The remote server returned an error: (502) Command not implemented."
Chrome: "Error 606 (net::ERR_FTP_COMMAND_NOT_SUPPORTED): Unknown error."
Now, there's not a lot of information I can get on the actual FTP set-up on the laser systems, due to a long story I won't get into here, but from what I'm seeing, perhaps it uses some kind of legacy protocol that IE and CMD support, or I'm missing something obvious. I've attempted flipping around the FtpWebRequest settings, but nothing seems to work. I would really love to use this solution rather than have the program auto-build FTP batch files; it would really just make me sad, as having everything run in-program would be so much more elegant and easier to work with. Any ideas, folks?

One of the things that could be causing your 502 error is attempting to use active mode when it is disabled on the server. Try using passive mode:
request.UsePassive = true;
Also, from the documentation:
The URI may be relative or absolute. If the URI is of the form "ftp://contoso.com/%2fpath" (%2f is an escaped '/'), then the URI is absolute, and the current directory is /path. If, however, the URI is of the form "ftp://contoso.com/path", first the .NET Framework logs into the FTP server (using the user name and password set by the Credentials property), then the current directory is set to /path.
Try changing your URI to the absolute form - it may help you avoid the PWD command you're seeing.
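Putting both suggestions together - a minimal sketch, reusing the IP and credentials from the question; the %2f escape and passive mode are the only two changes, and note that Stream.CopyTo assumes .NET 4 or later:
// Passive mode plus an absolute (%2f-rooted) URI
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://192.168.10.140/%2fparam.dat");
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = new NetworkCredential("user", "pass");
request.UsePassive = true;   // the server opens the data port; avoids the PORT command
request.UseBinary = true;
using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (FileStream file = File.Create("param.dat"))
{
    stream.CopyTo(file);     // Stream.CopyTo requires .NET 4+
}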

Related

HttpWebRequest.GetResponse() returning 404 Error

I have some code that calls HttpWebRequest's GetResponse() method to retrieve HTML from a URL and return it to the calling method.
This has been working perfectly fine within my Development and QA environments but now that I have uploaded it to my UAT server, I keep getting the following error:
The remote server returned an error: (404) Not Found.
The main difference between Dev/QA and UAT is that UAT uses SSL/HTTPS-based URLs whereas Dev/QA uses HTTP. I introduced the following line of code to help progress me a little further:
ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications);
where AcceptAllCertifications always returns true but I still get my 404 error.
I gather that people who previously had this error have been able to resolve the issue by merely ensuring the URI used for the HttpWebRequest doesn't have a slash at the end (see: Simple HttpWebRequest over SSL (https) gives 404 Not Found under C#), but this does not make a difference for me.
I have now tried what was suggested in this post (see: HttpWebResponse returns 404 error), where I render the exception on the page. This bypassed the yellow-warning screen and gave me a bit more information, including the URL it is trying to get a response from. However, when I copy and paste the URL into my browser, it works perfectly fine and renders the HTML on the page. I'm quite satisfied, therefore, that the correct URL is being used in the GetResponse call.
Has anyone got any ideas as to what may be causing me this grief? As said, it only seems to be a problem on my UAT server where I am using SSL.
Here is my code to assist:
public static string GetHtmlValues()
{
    var webConfigParentUrlValue = new Uri(ConfigurationManager.AppSettings["ParentUrl"]);
    var destinationUrl = HttpContext.Current.Request.Url.AbsoluteUri;
    var path = "DestinationController" + "/" + "DestinationAction" + "?destinationUrl=" + destinationUrl;
    var redirect = new Uri(webConfigParentUrlValue, path).AbsoluteUri;
    ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications);
    var request = (HttpWebRequest)WebRequest.Create(redirect);
    // Ensures that if the user has already signed in to the application,
    // their authorisation is carried on through to this new request
    AttachAuthorisedCookieIfExists(request);
    HttpWebResponse result;
    try
    {
        result = (HttpWebResponse)request.GetResponse();
    }
    catch (WebException ex)
    {
        result = ex.Response as HttpWebResponse;
    }
    String responseString;
    using (Stream stream = result.GetResponseStream())
    {
        StreamReader reader = new StreamReader(stream, Encoding.UTF8);
        responseString = reader.ReadToEnd();
    }
    return responseString;
}
(A screenshot with more details of the error as rendered on the page was attached here.)
I ran into a similar situation, but with a different error message. My problem turned out to be that my UAT environment was Windows 2008 with .NET 4.5. In this environment, the SSL handshake/detection is performed differently than in most web browsers. So I was seeing the URL render without error in a web browser, but my application would generate an error. My error message included "The underlying connection was closed: An unexpected error occurred on a send". This might be your issue.
My solution was to force the protocol change. I detect the specific error, then I force a change in the security protocol of my application and try again.
This is the code I use:
catch (Exception ex)
{
    if (ex.Message.Contains("The underlying connection was closed: An unexpected error occurred on a send."))
    {
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
        // retry the retrieval
    }
}
I finally found the solution to my problem...
The first clue to get me on the right track was the wrong physical path being displayed in the 404 error from IIS. It turns out that this incorrect physical path was mapped to another site in my IIS setup. This particular site naturally had a binding as well: port 443. As you may know, port 443 is the default port for https.
Now looking at my URL that I was trying to pass into the HTTPWebRequest.GetResponse() method, it looked something like this:
https://www.my-web-site.com
Taking this into account, when this application was hosted in IIS within the bounds of SSL, the error was occurring as follows:
1. Code enters the aforementioned method GetHtmlValues().
2. The code gets https://www.my-web-site.com from the web.config file.
3. A response is requested from https://www.my-web-site.com.
4. At this point, as no port has been specified and the application is now out on the open internet, it tries to get a response from https://www.my-web-site.com:443.
5. The problem is, my application isn't hosted by IIS on port 443; a different application lives there. Consequently, as the page can't be found on port 443, a 404 error is produced.
Now for the solution...
Looking in IIS, I found the port that my application sits on. Let's say port 16523.
Whereas previously in my web.config I had my ParentUrl key declared with a value of https://www.my-web-site.com, this had to be changed to http://www.my-web-site.com:16523.
Note how the https has become http and the port number is specified at the end. Now when the application tries to get the response, it no longer uses the default SSL port, as the correct one was specified.
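For illustration, the scheme-default behaviour is easy to see with System.Uri (the host name and port here are the placeholder values from this answer):
int implicitPort = new Uri("https://www.my-web-site.com/").Port;       // 443 - the https default
int explicitPort = new Uri("http://www.my-web-site.com:16523/").Port;  // 16523 - as specified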

C# FTP Size command doesn't work on all FTP servers

I am having some trouble with a very simple snippet of code:
private long getFTPLogLength()
{
    long size;
    FtpWebRequest ftpRequest = (FtpWebRequest)FtpWebRequest.Create(new Uri(ftpURL));
    ftpRequest.Credentials = new NetworkCredential(ftpUsername, ftpPassword);
    try
    {
        ftpRequest.ReadWriteTimeout = 6000;
        ftpRequest.Method = WebRequestMethods.Ftp.GetFileSize;
        FtpWebResponse respSize = (FtpWebResponse)ftpRequest.GetResponse();
        size = respSize.ContentLength;
        respSize.Close();
    }
    catch (Exception ex)
    {
        logLog.writeEntry(4, "Error getting logsize from FTP: " + ex.Message);
        size = 0;
    }
    return size;
}
The point of this method is obviously to grab the length of a particular file. The issue is that on some servers (namely gameservers.com servers), this code does not work, although it works on every other server type I can test.
I did find this while looking for some help on it: C# FTP 550 error. I have tried with one slash and two, and I still get the same result. To take this a step further, I only get the 550 error when I do GetFileSize (SIZE). If I do GetDateTimestamp (MDTM), it does not throw this error. This would lead you to believe that the SIZE command has been disabled, or that I don't have access to use it; however, if I connect to the server with FileZilla (or any client for that matter) and run SIZE games_mp.log, it works just fine.
Here is a screenshot of the debugger after the exception (note, I bought this game server for the very purpose of fixing this bug, so the credentials are left in place intentionally for your testing pleasure).
Any information that could help me figure out what I need to do to make this work with their servers would be helpful. Hopefully I am missing something simple. :)
After some more testing: if I let FileZilla time out, then run the command manually, I get an error about SIZE not being supported in ASCII mode, but if I start a new connection to the server, it works just fine.
(Screenshots were attached here: one showing a fresh connection to the server, and one showing the command issued after a timeout.)
Hopefully this means something to someone....
(Note: I wrote the FTP server software here and have access to the raw server logs of what happened)
The issue here is rather simple: when you attempted the SIZE command in FileZilla, you were in binary transfer mode. When you attempted to run this command via your client (or after FileZilla reconnected), you were in ASCII transfer mode.
Looking at the actual access logs, it seems that on the initial connection, FileZilla executes 'TYPE I' followed by 'MLSD'. When it reconnects afterwards, it doesn't attempt the MLSD command again, so it doesn't bother to execute 'TYPE I'.
Computing the actual file size in ASCII mode is fairly resource-intensive (especially considering that your code, by nature, is going to attempt this command quite frequently). It requires that we parse the entire file and replace all the line-ending characters. This is not something that we'd want to do all the time on a busy FTP server.
This behavior is actually accounted for by the RFC:
The presence of the 550 error response to a SIZE command MUST NOT be taken by the client as an indication that the file cannot be transferred in the current MODE and TYPE. A server may generate this error for other reasons -- for instance if the processing overhead is considered too great.
In ASCII mode:
ncftp / > quote size games_mp.log
> quote size games_mp.log
Cmd: size games_mp.log
550: SIZE not allowed in ASCII mode.
SIZE not allowed in ASCII mode.
In BINARY mode:
ncftp / > quote size games_mp.log
> quote size games_mp.log
Cmd: size games_mp.log
213: 134901
134901
So, your actual fix here would be to switch to binary mode before you try and get the file size.
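In FtpWebRequest terms, that means making sure the request is binary before issuing GetFileSize - a minimal sketch, with the caveat that whether the client actually sends TYPE I ahead of a SIZE request is an assumption (UseBinary defaults to true, but it is set explicitly here):
FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create(new Uri(ftpURL));
ftpRequest.Credentials = new NetworkCredential(ftpUsername, ftpPassword);
ftpRequest.UseBinary = true;  // request TYPE I before SIZE (assumption: honored for this method)
ftpRequest.Method = WebRequestMethods.Ftp.GetFileSize;
using (FtpWebResponse respSize = (FtpWebResponse)ftpRequest.GetResponse())
{
    long size = respSize.ContentLength;  // the value from the 213 reply
}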
We'd also greatly prefer that you change the password on your account. We've seen significant abuse from open FTP accounts in the past (some people feel the need to fill up any FTP account they find with various types of pirated content).
The FTP SIZE command does not exist in the basic FTP RFC; it was added only in RFC 3659.
And error 550 in FTP means 'requested action not taken', i.e. the server is telling you that it just doesn't support such a command.
If the server supports the FEAT command, issue it and see whether 'SIZE' appears in the result list. This should tell you definitively whether the server supports the SIZE command. This of course assumes the server supports the FEAT command itself.
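FtpWebRequest has no way to send FEAT, so one option is a raw control-channel probe - a minimal sketch over TcpClient; the host and credentials are placeholders:
using System;
using System.IO;
using System.Net.Sockets;

class FeatProbe
{
    static void Main()
    {
        using (TcpClient client = new TcpClient("ftp.example.com", 21))
        using (StreamReader reader = new StreamReader(client.GetStream()))
        using (StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true, NewLine = "\r\n" })
        {
            Console.WriteLine(reader.ReadLine());   // 220 greeting
            writer.WriteLine("USER user");
            Console.WriteLine(reader.ReadLine());   // 331 need password
            writer.WriteLine("PASS pass");
            Console.WriteLine(reader.ReadLine());   // 230 logged in
            writer.WriteLine("FEAT");
            // FEAT replies are multi-line: "211-Features:", one feature per
            // line (look for SIZE), then a terminating "211 End" line.
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                Console.WriteLine(line);
                if (line.StartsWith("211 ") || line.StartsWith("5"))
                    break;                          // end of reply, or FEAT unsupported
            }
            writer.WriteLine("QUIT");
        }
    }
}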

WinForm Security Context

I have written a WinForms app that uploads addresses from a spreadsheet, and geocodes them using an external geocoding service. This all works fine on my local machine, but the time has come for it to be installed on other peoples computers for testing. The app no longer works now though, generating the below error:
System.Net.WebException: The remote server returned an error: (407) Proxy Authentication Required.
Having read a lot and chatted briefly to our network guys, it seems I need to establish the security context for the user's account and work with this to correct the error.
Has anyone got any pointers about how I should be going about this?
Thanks in advance!
C
It depends on how you're uploading the data. If you're using an HTTP request (as it looks like you are), it will look something like this:
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create("https://test.example.com/");
req.Method = "POST";
req.ContentType = "text/xml";
req.Credentials = new NetworkCredential("TESTACCOUNT", "P#ssword");
StreamWriter writer = new StreamWriter(req.GetRequestStream());
writer.Write(input);
writer.Close();
var rsp = req.GetResponse().GetResponseStream();
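For the 407 itself, the credentials usually need to go to the proxy rather than (or in addition to) the target server - a minimal sketch, assuming the corporate proxy accepts the user's integrated Windows credentials:
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create("https://test.example.com/");
req.Proxy = WebRequest.GetSystemWebProxy();                 // pick up the IE/GPO proxy settings
req.Proxy.Credentials = CredentialCache.DefaultCredentials; // current user's Windows credentials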

System.Net.WebRequest not respecting hosts file

Is there a way to get a System.Net.WebRequest or System.Net.WebClient to respect the hosts or lmhosts file?
For example: in my hosts file I have:
10.0.0.1 www.bing.com
When I try to load Bing in a browser (both IE and FF) it fails to load as expected.
Dns.GetHostAddresses("www.bing.com")[0]; // 10.0.0.1
WebRequest.Create("http://10.0.0.1").GetResponse(); // throws exception (expected)
WebRequest.Create("http://www.bing.com/").GetResponse(); // unexpectedly succeeds
Similarly:
WebClient wc = new WebClient();
wc.DownloadString("http://www.bing.com"); //succeeds
Why would System.Net.Dns respect the hosts file but System.Net.WebRequest ignore it? What do I need to change to make the WebRequest respect the hosts file?
Additional Info:
If I disable IPv6 and set my IPv4 DNS Server to 127.0.0.1, the above code works (fails) as expected. However if I add my normal DNS servers back as alternates, the unexpected behavior resumes.
I've reproduced this on 3 Win7 and 2 Vista boxes. The only constant is my company's network.
I'm using .NET 3.5 SP1 and VS2008
Edit
Per @Richard Beier's suggestion, I tried out System.Net tracing. With tracing on, the WebRequest fails as it should. However, as soon as I turn tracing off, the behavior reverts to the unexpected success. I have reproduced this on the same machines as before, in both debug and release mode.
Edit 2
This turned out to be the company proxy giving us issues. Our solution was a custom proxy config script for our test machines that had "bing.com" point to DIRECT instead of the default proxy.
I think that @Hans Passant has spotted the issue here. It looks like you have a proxy set up in IE.
Dns.GetHostAddresses("www.bing.com")[0]; // 10.0.0.1
This works because you are asking the OS to get the IP addresses for www.bing.com
WebRequest.Create("http://www.bing.com/").GetResponse(); // unexpectedly succeeds
This works because you are asking the framework to fetch a path from a server name. The framework uses the same engine and settings that the IE front end uses, so if your company has specified via a GPO that you use a company proxy server, it is that proxy server that resolves the IP address for www.bing.com rather than your machine.
WebRequest.Create("http://10.0.0.1").GetResponse(); // throws exception (expected)
This fails because you have asked the framework to fetch a web page from a specific server (by IP). Even if you do have a proxy set, the proxy will still not be able to connect to this IP address.
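A quick way to see which proxy the framework will actually use for a given URL - WebRequest.DefaultWebProxy reflects the same IE/GPO settings:
Uri target = new Uri("http://www.bing.com/");
Uri via = WebRequest.DefaultWebProxy.GetProxy(target);
// Prints the proxy address, or the target URI itself if the connection is DIRECT
Console.WriteLine(via);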
I hope that this helps.
Jonathan
I'm using VS 2010 on Windows 7, and I can't reproduce this. I made the same hosts-file change and ran the following code:
Console.WriteLine(Dns.GetHostAddresses("www.bing.com")[0]); // 10.0.0.1
var response = WebRequest.Create("http://www.bing.com/").GetResponse(); // * * *
Console.WriteLine(new StreamReader(response.GetResponseStream()).ReadToEnd());
I got an exception on the line marked "* * *". Here's the exception detail:
System.Net.WebException was unhandled
  Message=Unable to connect to the remote server
  Source=System
  StackTrace:
       at System.Net.HttpWebRequest.GetResponse()
       at ConsoleApplication2.Program.Main(String[] args) in c:\Data\Projects\ConsoleApplication2\ConsoleApplication2\Program.cs:line 17
  InnerException: System.Net.Sockets.SocketException
       Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.0.0.1:80
       Source=System
       ErrorCode=10060
Maybe it's an issue with an earlier .NET version, that's now fixed in .NET 4 / VS 2010? Which version of .NET are you using?
I also found this thread from 2007, where someone else ran into the same problem. There are some good suggestions there, including the following:
Turn on system.net tracing
Work around the problem by using Dns.GetHostAddresses() to resolve it to an IP, then put the IP in the URL - e.g. "http://10.0.0.1/" (sketched below). That may not be an option for you, though.
In the above thread, mariyaatanasova_msft also says: "HttpWebRequest uses Dns.GetHostEntry to resolve the host, so you may get a different result from Dns.GetHostAddresses".
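That IP-in-the-URL workaround can keep virtual hosting intact by restoring the original host name - a sketch, with the caveat that the HttpWebRequest.Host setter only exists from .NET 4 onwards:
IPAddress ip = Dns.GetHostAddresses("www.bing.com")[0];   // honors the hosts file
var request = (HttpWebRequest)WebRequest.Create("http://" + ip + "/");
request.Host = "www.bing.com";  // .NET 4+: keeps the Host header correct for virtual hosting
var response = request.GetResponse();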
You should override the default proxy.
HttpWebRequest and WebRequest will pick up a default proxy if one is configured in Internet Explorer, and your hosts file will be bypassed.
request.Proxy = new WebProxy();
The following is just an example:
try
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.bing.com");
    request.Proxy = new WebProxy();
    request.Method = "POST";
    request.AllowAutoRedirect = false;
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    if (response.StatusCode == HttpStatusCode.OK)
    {
        //some code here
    }
}
catch (Exception e)
{
    //Some other code here
}

Using .NET's HttpWebRequest to download a multitude of files in a row

I have an application that needs to download several files in a row in succession (sometimes a few thousand). However, what ends up happening when several files need to be downloaded is I get an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging and basically it's because the server has run out of sockets (and they are all waiting for 240s or so before they become available again) - not coincidentally it starts happening around the 1024 file range. I would expect that the HttpWebRequest/ServicePointManager would be reusing my connection, but apparently it is not (and the files are https, so that may be part of it). I never saw this problem in the C++ code that this was ported from (but that doesn't mean it didn't ever happen - I'd be surprised if it was, though).
I am properly closing the WebRequest object and the HttpWebRequest object has KeepAlive set to true by default. Next my intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem.
Has anyone else run into the problem, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do.
Here's some basic code to verify what I'm doing (just in case I'm missing closing something):
WebRequest webRequest = WebRequest.Create(uri);
webRequest.Method = "GET";
webRequest.Credentials = new NetworkCredential(username, password);
WebResponse webResponse = webRequest.GetResponse();
try
{
    using (Stream stream = webResponse.GetResponseStream())
    {
        // read the stream
    }
}
finally
{
    webResponse.Close();
}
What kind of application is this? You mentioned that the server is running out of ports, but then you mentioned HttpWebRequest. Are you running this code in a webservice or ASP.NET page, which is trying to then download multiple files for the same incoming request from the client?
What kind of authentication is the page using? If it is using NTLM authentication, then the connections cannot be shared if the credentials being used are different for each request.
What I would suggest is to group your requests per credential. So, for example, all requests using username "John" would be grouped. You can specify the ConnectionGroupName property on the request, so the system will try to reuse connections for the same credential and server.
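A minimal sketch of that grouping, reusing the uri/username/password names from the question's code:
var request = (HttpWebRequest)WebRequest.Create(uri);
request.Credentials = new NetworkCredential(username, password);
// Partition the connection pool so authenticated connections are only
// reused by requests carrying the same credentials.
request.ConnectionGroupName = username;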
If that also doesn't work, you will need to do one or more of the following:
1) Throttle your requests.
2) Increase the wildcard port range.
3) Use ServicePoint.BindIPEndPointDelegate to make the request bind to a non-wildcard port (i.e. a port in the range 1024-16384), as sketched below.
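A sketch of option 3 - the port base here is an arbitrary assumption, and the retryCount parameter lets the framework ask for a different port if a bind fails:
ServicePoint sp = ServicePointManager.FindServicePoint(uri);
sp.BindIPEndPointDelegate = delegate(ServicePoint servicePoint, IPEndPoint remoteEndPoint, int retryCount)
{
    // Bind to an explicit local port instead of a wildcard one (assumes this range is free)
    return new IPEndPoint(IPAddress.Any, 10000 + retryCount);
};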
More digging seems to point to it possibly being due to authentication, and the UnsafeAuthenticatedConnectionSharing property might alleviate this. However, I'm not sure that's the best thing either.
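For reference, a sketch of that property in use - only safe when everything in the connection group shares one set of credentials (uri/username/password as in the question's code):
var request = (HttpWebRequest)WebRequest.Create(uri);
request.Credentials = new NetworkCredential(username, password);
request.UnsafeAuthenticatedConnectionSharing = true; // reuse NTLM-authenticated connections
request.ConnectionGroupName = username;              // scope the sharing to one credential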
