I'm trying to write a simple method to download a file from an FTP server using FtpWebRequest with the method WebRequestMethods.Ftp.DownloadFile. The problem is that I want to display the download progress and thus need to know the file size in advance to be able to calculate the percentage transferred. But when I call GetResponse on the FtpWebRequest, the ContentLength member is -1.
OK - so I get the size of the file in advance using the method WebRequestMethods.Ftp.GetFileSize. No problem. Then after getting the size I download the file.
This is where the problem in question appears...
After getting the size I try to reuse the FtpWebRequest and reset the Method property to WebRequestMethods.Ftp.DownloadFile. This causes a System.InvalidOperationException saying something like "Can't perform this action after sending the request." (may not be the exact wording - translated from the Swedish message I get).
I've read elsewhere that as long as I set the KeepAlive property to true, the connection is kept alive. This is what I don't understand... The only object I've created is my FtpWebRequest object. And if I create another one, how can it know which connection to use? And which credentials?
Pseudo code:
Create FtpWebRequest
Set Method property to GetFileSize
Set KeepAlive property to true
Set Credentials property to new NetworkCredential(...)
Get FtpWebResponse from the request
Read and store ContentLength
Now I've got the file size, so it's time to download the file. But setting Method now causes the exception mentioned above. So do I create a new FtpWebRequest? Or is there any way to reset the request for reuse? (Closing the response made no difference.)
I don't understand how to move forward without re-creating the object. I could do that, but it just doesn't feel right. So I'm posting here in the hope of finding the correct way of doing this.
Here's the (non-working) code (inputs are sURI, sDiskName, sUser and sPwd):
FtpWebRequest request = (FtpWebRequest)FtpWebRequest.Create(sURI);
request.Method = WebRequestMethods.Ftp.GetFileSize;
request.Credentials = new NetworkCredential(sUser, sPwd);
request.UseBinary = true;
request.UsePassive = true;
request.KeepAlive = true;
FtpWebResponse resp = (FtpWebResponse)request.GetResponse();
int contLen = (int)resp.ContentLength;
resp.Close();
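// The next line throws InvalidOperationException: the request has already been submitted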
request.Method = WebRequestMethods.Ftp.DownloadFile;
resp = (FtpWebResponse)request.GetResponse();
Stream inStr = resp.GetResponseStream();
byte[] buff = new byte[16384];
sDiskName = Environment.ExpandEnvironmentVariables(sDiskName);
FileStream file = File.Create(sDiskName);
int readBytesCount;
int readTotal=0;
while ((readBytesCount = inStr.Read(buff, 0, buff.Length)) > 0)
{
readTotal += readBytesCount;
toolStripProgressBar1.Value = 100*readTotal/contLen;
Application.DoEvents();
file.Write(buff, 0, readBytesCount);
}
file.Close();
I hope someone can explain how this is supposed to work. Thanks in advance.
I don't think this will be answered so I'm "closing it" by telling you how I solved it.
Well, I didn't really solve it. I did, however, test the download by recreating the FtpWebRequest, and I noticed that the FTP server behaved as I wanted, i.e. only one logon and then my requests executing sequentially.
This is how the code getting the file size and starting the download ended up:
// Start by fetching the file size
FtpWebRequest request = (FtpWebRequest)FtpWebRequest.Create(sURI);
request.Method = WebRequestMethods.Ftp.GetFileSize;
NetworkCredential nc = new NetworkCredential(sUser, sPwd);
request.Credentials = nc;
request.UseBinary = true;
request.UsePassive = true;
request.KeepAlive = true;
// Get the result (size)
FtpWebResponse resp = (FtpWebResponse)request.GetResponse();
Int64 contLen = resp.ContentLength;
// and now download the file
request = (FtpWebRequest)FtpWebRequest.Create(sURI);
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = nc;
request.UseBinary = true;
request.UsePassive = true;
request.KeepAlive = true;
resp = (FtpWebResponse)request.GetResponse();
So no answer on whether it's possible to reset the FtpWebRequest for reuse. But at least I know there's no redundant information being transferred.
Thanks to everybody who took an interest and spent time thinking of an answer.
FtpWebRequest can be used for only one request, like getting the file size or downloading the file, but not both. You have to create two FtpWebRequests. Behind the scenes, FtpWebRequest notices that it is the same URL and credentials and will reuse the same FTP connection without closing it, as long as KeepAlive is true, which is the default setting.
This is a sad example of bad design by Microsoft. Instead of letting us explicitly open and close a connection, they want to do it automatically for us and confuse everyone.
You're probably going to want to use the async method. Here's the link to the MSDN doc:
http://msdn.microsoft.com/en-us/library/system.net.ftpwebrequest.aspx
GetResponseAsync()
That will keep your application from locking up too, so you won't have to use
Application.DoEvents();
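For illustration, here's a rough sketch of the question's download loop rewritten around GetResponseAsync (assuming .NET 4.5+, that this lives in the same Form as toolStripProgressBar1, and that nc and contLen come from the earlier GetFileSize step):
// Needs using System; System.IO; System.Net; System.Threading.Tasks;
private async Task DownloadAsync(string sURI, string sDiskName, NetworkCredential nc, long contLen)
{
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create(sURI);
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.Credentials = nc;
    request.UseBinary = true;
    // Awaiting keeps the UI thread free, so no Application.DoEvents() is needed
    using (FtpWebResponse resp = (FtpWebResponse)(await request.GetResponseAsync()))
    using (Stream inStr = resp.GetResponseStream())
    using (FileStream file = File.Create(Environment.ExpandEnvironmentVariables(sDiskName)))
    {
        byte[] buff = new byte[16384];
        int read, readTotal = 0;
        while ((read = await inStr.ReadAsync(buff, 0, buff.Length)) > 0)
        {
            await file.WriteAsync(buff, 0, read);
            readTotal += read;
            // When called from the UI thread, continuations resume there,
            // so updating the progress bar here is safe
            toolStripProgressBar1.Value = (int)(100L * readTotal / contLen);
        }
    }
}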
You could also look at possibly using an alternative FTP library. IMO, FtpWebRequest is not exactly the best FTP class. A quick search turned up this library. FTP isn't stateless like HTTP; I prefer libraries that let you create a client, open a connection, and keep the connection alive.
http://sanity-free.org/dist/NullFX.Net-binary.zip
Here's the code example I found:
FtpClient client =
    new FtpClient(
        new IPEndPoint( IPAddress.Loopback, 21 ),
        new NetworkCredential( "test", "testing@localdomain" )
    );
client.Connect();
client.Download("testfile.zip", @"C:\downloads\testfile.zip");
The source is there too, so you would be able to possibly attach some events to the read process for your download progress tracking.
Related
I have tried just about everything I can think of. I am trying to get a directory listing from an FTP server. I am able to log in and list/download using FileZilla.
My password looks like this (letters changed):
c0dlWTRBOZc=
I have tried using Normalize() and not using it.
It errors on the GetResponse() line.
Here is the code:
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(thisConnection.remoteFTP_URI);
request.KeepAlive = true;
request.UsePassive = true;
request.UseBinary = true;
request.Method = WebRequestMethods.Ftp.ListDirectory;
request.Credentials = new NetworkCredential(thisConnection.userName.Normalize(),thisConnection.passWord.Normalize());
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
I am using this exact same code for other FTP servers with no issues. I don't have direct control over the server, so changing the password or other server settings would be problematic.
Thank you for any help!
Your password string looks base64-encoded, which is the form FileZilla uses in its configuration file (sitemanager.xml).
So my guess is that you have copied the encoded password from sitemanager.xml and are trying to use it as a literal password in the FtpWebRequest.
Make sure you use the actual literal password. If you do not remember it, use a base64 decoder; you will find plenty of them online.
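If you'd rather decode it in code, something like this works (the encoded string below is a placeholder, not the value from the question):
// Decodes a FileZilla-style base64 password back to its literal form
string encoded = "cGFzc3dvcmQ=";  // placeholder value
string literal = System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(encoded));
Console.WriteLine(literal);  // prints "password"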
I'm trying to get the file size from a remote FTP file through anonymous FTP.
public static long GetSize(string ftpPath)
{
try
{
FtpWebRequest request = (FtpWebRequest)FtpWebRequest.Create(new Uri(ftpPath));
request.Proxy = null;
request.Credentials = new NetworkCredential("anonymous", "´");
request.UseBinary = true;
request.Method = WebRequestMethods.Ftp.GetFileSize;
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
long size = response.ContentLength;
response.Close();
return size;
}
catch (WebException e)
{
string status = ((FtpWebResponse)e.Response).StatusDescription;
MessageBox.Show(status);
return 0;
}
}
This currently returns the error "550 Size not allowed in ASCII mode." I'm aware that I have to use binary mode, but setting UseBinary to true (see above) doesn't fix the issue.
Unfortunately, I think you may be stuck. Per this post, the FtpWebRequest class does not support sending FTP commands other than the supported ones -- and for your use case, you would need your client to send "TYPE I" (for "image" or binary mode) before sending the SIZE command.
Alternatively, as a hacky workaround, you might try downloading a file -- any file -- before sending your SIZE command. With request.UseBinary = true for that request, your client should send the "TYPE I" command to the FTP server. (And it won't matter if that download request fails; the TYPE command will still have been sent.) Most FTP servers, upon receiving a TYPE command, will assume that TYPE for subsequent commands. Then, when you try the GetFileSize request again, the FTP server might be in binary rather than ASCII mode, and your SIZE command might succeed.
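A sketch of that workaround (untested; it assumes the server keeps the TYPE setting across the kept-alive control connection, and uses anonymous credentials as in the question):
// Warm-up request: UseBinary = true makes the client send TYPE I
FtpWebRequest warmup = (FtpWebRequest)WebRequest.Create(ftpPath);
warmup.Credentials = new NetworkCredential("anonymous", "anonymous");
warmup.UseBinary = true;
warmup.KeepAlive = true;  // keep the control connection open for the next request
warmup.Method = WebRequestMethods.Ftp.DownloadFile;
try
{
    using (FtpWebResponse r = (FtpWebResponse)warmup.GetResponse()) { }
}
catch (WebException)
{
    // Even if the download fails, the TYPE I command has already been sent
}
// Now retry SIZE; with luck the server is still in binary mode
FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create(ftpPath);
sizeReq.Credentials = new NetworkCredential("anonymous", "anonymous");
sizeReq.KeepAlive = true;
sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
using (FtpWebResponse resp = (FtpWebResponse)sizeReq.GetResponse())
    Console.WriteLine(resp.ContentLength);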
I'm trying to get my head around how to download a file from a secure FTP server from my AngularJS application using REST.
Thing is, for security reasons I can't just append an iframe or set window.location to ftp://myip:81/myfolder/myfile.pdf, so I have to find a way to trigger the download without this. My initial thought was to create a generic handler which takes the file name and the folder name as parameters and then serves the file to the user through the context.Response somehow.
What I have so far is this:
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(path);
request.UsePassive = false;
request.Credentials = new NetworkCredential(ftpHelper.Username, ftpHelper.Password);
request.Method = WebRequestMethods.Ftp.DownloadFile; // Ftp, not File: FtpWebRequest only accepts FTP methods
using (var response = (FtpWebResponse)request.GetResponse())
{
using (Stream responseStream = response.GetResponseStream())
{
// stuck here ...
}
}
request.Abort();
I've got a feeling that this isn't possible, though ;-) Can anyone confirm/disprove? And if it can be done, I'd love a small example/hint on this :-)
Thanks!
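For what it's worth, a minimal sketch of the generic-handler idea described above (assumptions: .NET 4+ for Stream.CopyTo; the path, content type and file name are hard-coded here but would come from the handler's query-string parameters):
public void ProcessRequest(HttpContext context)
{
    // Hypothetical example value; build this from context.Request.QueryString
    string path = "ftp://myip:81/myfolder/myfile.pdf";
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create(path);
    request.UsePassive = false;
    request.Credentials = new NetworkCredential(ftpHelper.Username, ftpHelper.Password);
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    {
        // Serve the bytes to the browser as a file download
        context.Response.ContentType = "application/pdf";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=\"myfile.pdf\"");
        responseStream.CopyTo(context.Response.OutputStream);
    }
}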
I use C#.
The first time I use WebRequest's GetRequestStream() in my code, it takes up to 20 seconds. After that, it takes under 1 second.
Below is my code. The line "this.requestStream = httpRequest.GetRequestStream()" is what causes the delay.
StringBuilder postData = new StringBuilder(100);
postData.Append("param=");
postData.Append("test");
byte[] dataArray = Encoding.UTF8.GetBytes(postData.ToString());
this.httpRequest = (HttpWebRequest)WebRequest.Create("http://myurl.com");
httpRequest.Method = "POST";
httpRequest.ContentType = "application/x-www-form-urlencoded";
httpRequest.ContentLength = dataArray.Length;
this.requestStream = httpRequest.GetRequestStream();
using (requestStream)
requestStream.Write(dataArray, 0, dataArray.Length);
this.webResponse = (HttpWebResponse)httpRequest.GetResponse();
Stream responseStream = webResponse.GetResponseStream();
StreamReader responseReader = new System.IO.StreamReader(responseStream, Encoding.UTF8);
String responseString = responseReader.ReadToEnd();
How can I see what causes this? (for instance: DNS lookup? Server not responding?)
Thanks and regards, Koen
You could also try setting .Proxy = null. Sometimes it tries to auto-detect a proxy, which takes up time.
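For instance:
// Skip automatic proxy detection for this request
httpRequest.Proxy = null;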
That sounds like your application pre-compiling when you first hit it. This is how .NET works.
Here is a way to speed up your web app. link text
It's actually the framework doing its startup network proxy check to set up the HttpWebRequest.DefaultWebProxy property.
In my application, as part of the startup actions, I create a fully formed request as a background task to get this overhead out of the way.
HttpWebRequest web = (HttpWebRequest)WebRequest.Create(m_ServletURL);
web.UserAgent = "Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26";
Setting the UserAgent field is what, in my case, triggers the startup overhead.
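For instance, the warm-up can be pushed onto a worker thread at startup (a sketch mirroring the claim above; m_ServletURL is the URL from the snippet):
System.Threading.ThreadPool.QueueUserWorkItem(delegate
{
    // Building the request in the background pays the one-time
    // proxy-detection cost before any UI-path request needs it
    HttpWebRequest web = (HttpWebRequest)WebRequest.Create(m_ServletURL);
    web.UserAgent = "Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26";
});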
I had the same issue, but .Proxy = null didn't solve it for me. Depending on the network structure, the problem might be connected to IPv6. The first request took almost exactly 21 seconds each time the application ran, so I figure it must be a timeout value; when it is reached, IPv4 is used as a fallback (for subsequent calls as well). Forcing the use of IPv4 in the first place solved the issue for me!
HttpWebRequest request = WebRequest.Create("http://myurl.com") as HttpWebRequest;
request.ServicePoint.BindIPEndPointDelegate = (servicePoint, remoteEndPoint, retryCount) =>
{
if (remoteEndPoint.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork)
{
return new IPEndPoint(IPAddress.Any, 0);
}
throw new System.InvalidOperationException("No IPv4 address found.");
};
One problem may be the fact that .NET, by default, only allows 2 connections at a time.
You can increase the number of simultaneous connections with:
ServicePointManager.DefaultConnectionLimit = newConnectionLimit;
The optimal value depends on your situation, so you will have to experiment.
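For example, somewhere in application startup (the value here is only a guess; tune it for your workload):
// Raise the simultaneous connection limit above the default of 2
System.Net.ServicePointManager.DefaultConnectionLimit = 10;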
We've got a .NET 2.0 WinForms app that needs to upload files to an IIS6 Server via WebDav. From time to time we get complaints from a remote office that they get one of the following error messages
The underlying connection was closed: an unexpected error occurred on send.
The underlying connection was closed: an unexpected error occurred on receive.
This only seems to occur with large files (~20 MB plus). I've tested it with a 40 MB file from my home computer and tried putting Sleep calls in the loop to simulate a slow connection, so I suspect that it's down to network issues at their end... but:
The IT staff at the remote office are no help.
I'd like to rule out the possibility that my code is at fault.
So - can anybody spot any mistakes or suggest any workarounds that might 'bulletproof' the code against this problem? Thanks for any help. A chopped-down version of the code follows:
public bool UploadFile(string localFile, string uploadUrl)
{
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uploadUrl);
try
{
req.Method = "PUT";
req.AllowWriteStreamBuffering = true;
req.UseDefaultCredentials = Program.WebService.UseDefaultCredentials;
req.Credentials = Program.WebService.Credentials;
req.SendChunked = false;
req.KeepAlive = true;
Stream reqStream = req.GetRequestStream();
FileStream rdr = new FileStream(localFile, FileMode.Open, FileAccess.Read);
byte[] inData = new byte[4096];
int bytesRead = rdr.Read(inData, 0, inData.Length);
while (bytesRead > 0)
{
reqStream.Write(inData, 0, bytesRead);
bytesRead = rdr.Read(inData, 0, inData.Length);
}
reqStream.Close();
rdr.Close();
System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
if (response.StatusCode != HttpStatusCode.OK && response.StatusCode!=HttpStatusCode.Created)
{
MessageBox.Show("Couldn't upload file");
return false;
}
}
catch (Exception ex)
{
MessageBox.Show(ex.ToString());
return false;
}
return true;
}
Try setting KeepAlive to false:
req.KeepAlive = false;
This closes the connection after each request instead of keeping a persistent connection open. I found a lot of references on the web suggesting this in order to solve errors similar to yours. This is a relevant link.
Anyway, it is not a good idea to use HTTP PUT (or HTTP POST) to upload large files. It would be better to use FTP or a download/upload manager, which will handle retries, connection problems and timeouts automatically for you. The upload will be faster too, and you could also resume a stopped upload. If you decide to stay with HTTP, you should at least add a retry mechanism (see the sketch below). If an upload takes too long, there is a high probability that it will fail due to a proxy, server timeout, firewall or whatever other reason unrelated to your code.
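For instance, a minimal retry wrapper around the UploadFile method from the question (a sketch; the attempt count and backoff are arbitrary choices, and it stays .NET 2.0 compatible):
public bool UploadFileWithRetry(string localFile, string uploadUrl, int maxAttempts)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        if (UploadFile(localFile, uploadUrl))
            return true;
        // Crude backoff: wait a bit longer after each failed attempt
        System.Threading.Thread.Sleep(2000 * attempt);
    }
    return false;
}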
To remove the risk of a bug in your code, try using WebClient:
using (WebClient client = new WebClient())
{
client.UseDefaultCredentials = Program.WebService.UseDefaultCredentials;
client.Credentials = Program.WebService.Credentials;
client.UploadFile(uploadUrl, "PUT", localFile);
}
Maybe try using POST, but the real culprit is probably the content type.
Try setting
req.ContentType = "application/octet-stream";
req.ContentLength = inData.Length;
or look at the code in the accepted answer here: Upload files with HTTPWebrequest (multipart/form-data)
Both my example and the link I provided involve modifying the ContentType. My example is simpler but might not work, as most applications receiving files expect multipart content.
Check whether [Enable HTTP Keep-Alives] is set to [on] on the [Web Site] tab in IIS Manager.
The size of the uploads might be limited.
See here for one discussion:
http://www.codeproject.com/KB/aspnet/uploadlargefilesaspnet.aspx
Start by checking some basic configuration. The default values of either of the following may cause problems with file upload - including termination of the connection. (executionTimeout is in seconds, maxRequestLength in kilobytes.) I believe IIS 6 would never allow a file upload > 2GB (even if it could complete, regardless of config). MSDN describes these nicely.
<httpRuntime executionTimeout="30" maxRequestLength="200"/>
EDIT: This is ASP.NET config, of course, which assumes you are running your own WebDAV server or a 3rd-party server within ASP.NET. If it's a different WebDAV server, you'll want to look for the equivalent settings.