Downloading multiple files from an FTP server - C#

I have multiple files on an FTP server. I do not know the names of these files, only that they are all .xml files.
How do I programmatically download these files using .NET's FtpWebRequest?
Thanks.

Most likely you'll have to issue a directory-listing command that lists out all the files, then go through each one, downloading it.
Here is some info on getting a directory listing.
http://msdn.microsoft.com/en-us/library/ms229716.aspx

Take a look at the ListDirectory function. It's the equivalent of the NLST command in FTP.
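Putting the listing and the download together, here is a minimal sketch. It assumes anonymous access and a hypothetical server URL; adjust both for your environment. Note that NLST output can include a path prefix on some servers, hence the Path.GetFileName call.

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

class FtpXmlDownloader
{
    static void Main()
    {
        string server = "ftp://example.com/";   // hypothetical server
        string localFolder = @"C:\data";        // hypothetical target folder

        // 1. List the directory (NLST) and keep only the .xml entries.
        var names = new List<string>();
        var listRequest = (FtpWebRequest)WebRequest.Create(server);
        listRequest.Method = WebRequestMethods.Ftp.ListDirectory;
        using (var response = (FtpWebResponse)listRequest.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                if (line.EndsWith(".xml", StringComparison.OrdinalIgnoreCase))
                    names.Add(line);
        }

        // 2. Download each .xml file to the local folder.
        foreach (string name in names)
        {
            var downloadRequest = (FtpWebRequest)WebRequest.Create(server + name);
            downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;
            using (var response = (FtpWebResponse)downloadRequest.GetResponse())
            using (var ftpStream = response.GetResponseStream())
            using (var fileStream = File.Create(Path.Combine(localFolder, Path.GetFileName(name))))
            {
                ftpStream.CopyTo(fileStream); // Stream.CopyTo requires .NET 4.0+
            }
        }
    }
}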

You'll probably want to use an existing library like this one rather than write your own.

// __requestLocation is the FTP directory URI (e.g. "ftp://host/path/") and
// __output is a List<string> collecting the file names the server returns.
var __output = new List<string>();
FtpWebRequest __request = (FtpWebRequest)FtpWebRequest.Create(__requestLocation);
__request.Method = WebRequestMethods.Ftp.ListDirectory;
var __response = (FtpWebResponse)__request.GetResponse();
using (StreamReader __directoryList = new StreamReader(__response.GetResponseStream())) {
    string ___line = __directoryList.ReadLine();
    while (___line != null) {
        if (!String.IsNullOrEmpty(___line)) { __output.Add(___line); }
        ___line = __directoryList.ReadLine();
    }
}
Getting the target file...
FtpWebRequest __request = null;
FtpWebResponse __response = null;
byte[] __fileBuffer = null;
byte[] __outputBuffer = null;
const int BLOCKSIZE = 4096; // read-block size; tune to taste

__request = (FtpWebRequest)FtpWebRequest.Create(__requestLocation);
__request.Method = WebRequestMethods.Ftp.DownloadFile;
__response = (FtpWebResponse)__request.GetResponse();
using (MemoryStream __outputStream = new MemoryStream()) {
    using (Stream __responseStream = __response.GetResponseStream()) {
        using (BufferedStream __bufferedResponse = new BufferedStream(__responseStream)) {
            __fileBuffer = new byte[BLOCKSIZE];
            int ___readCount = __bufferedResponse.Read(__fileBuffer, 0, BLOCKSIZE);
            while (___readCount > 0) {
                __outputStream.Write(__fileBuffer, 0, ___readCount);
                ___readCount = __bufferedResponse.Read(__fileBuffer, 0, BLOCKSIZE);
            }
            __outputStream.Position = 0;
            __outputBuffer = new byte[__outputStream.Length];
            // Truncate the buffer to only the bytes actually read; store into the output buffer.
            Array.Copy(__outputStream.GetBuffer(), __outputBuffer, __outputStream.Length);
        }
    }
}
try { __response.Close(); } catch { }
__request = null;
__response = null;
return __outputBuffer;
Ripped out of some other code I have, so it probably won't compile and run directly.

I don't know if FtpWebRequest is a strict requirement. If you can use a third-party component, the following code would accomplish your task:
// create client, connect and log in
Ftp client = new Ftp();
client.Connect("ftp.example.org");
client.Login("username", "password");
// download all files in the current directory which matches the "*.xml" mask
// at the server to the 'c:\data' directory
client.GetFiles("*.xml", @"c:\data", FtpBatchTransferOptions.Default);
client.Disconnect();
The code uses Rebex FTP which can be downloaded here.
Disclaimer: I'm involved in the development of this product.

Related

SharePoint Client Object Model download files C#

I am trying to download files from a SharePoint library using the client object model. I seem to be able to access the files by using OpenBinaryStream() and then executing the query, but when I try to access the stream, it is a stream with Length == 0. I've seen many examples and I've tried several, but I can't get the files to download. I've uploaded successfully, and credentials and permissions aren't the problem. Anyone have any thoughts?
public SharepointFileContainer DownloadFolder(bool includeSubfolders, params object[] path)
{
    try
    {
        List<string> pathStrings = new List<string>();
        foreach (object o in path)
            pathStrings.Add(o.ToString());

        var docs = _context.Web.Lists.GetByTitle(Library);
        _context.Load(docs);
        _context.ExecuteQuery();

        var rootFolder = docs.RootFolder;
        _context.Load(rootFolder);
        _context.ExecuteQuery();

        var folder = GetFolder(rootFolder, pathStrings);
        var files = folder.Files;
        _context.Load(files);
        _context.ExecuteQuery();

        SharepointFileContainer remoteFiles = new SharepointFileContainer();
        foreach (Sharepoint.File f in files)
        {
            _context.Load(f);
            var file = f.OpenBinaryStream();
            _context.ExecuteQuery();

            var memoryStream = new MemoryStream();
            file.Value.CopyTo(memoryStream);
            remoteFiles.Files.Add(f.Name, memoryStream);
        }
        ...
}
SharepointFileContainer is just a custom class that lets my calling application dispose of the streams when it has finished processing them. GetFolder is a recursive method that drills down the given folder path. I've had problems with providing the direct URL, and I've had the most success with this approach.
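For context, GetFolder is roughly of this shape (a hypothetical reconstruction; the original implementation isn't shown):

// Hypothetical reconstruction of the recursive GetFolder helper described above;
// the real implementation may differ.
private Folder GetFolder(Folder current, List<string> pathParts)
{
    if (pathParts.Count == 0)
        return current;

    _context.Load(current.Folders);
    _context.ExecuteQuery();

    foreach (Folder sub in current.Folders)
    {
        if (sub.Name.Equals(pathParts[0], StringComparison.OrdinalIgnoreCase))
            return GetFolder(sub, pathParts.GetRange(1, pathParts.Count - 1));
    }
    throw new System.IO.DirectoryNotFoundException("Folder not found: " + pathParts[0]);
}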
My big question is why "file.Value" is a Stream with a Length == 0?
Thanks in advance!
EDIT:
Thanks for your input so far...unfortunately I'm experiencing the same problem. Both solutions pitched make use of OpenBinaryDirect. The resulting FileInformation class has this for the stream...
I'm still getting a file with 0 bytes downloaded.
You need to get the list item of the file (as a ListItem object) and then use its File property. Something like:
//...
// Previous code
//...
// listItemId, Library and filePath are assumed to be defined by the previous code.
var docs = clientContext.Web.Lists.GetByTitle(Library);
var listItem = docs.GetItemById(listItemId);
clientContext.Load(docs);
clientContext.Load(listItem, i => i.File);
clientContext.ExecuteQuery();

var fileRef = listItem.File.ServerRelativeUrl;
var fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, fileRef);
var fileName = Path.Combine(filePath, (string)listItem.File.Name);
using (var fileStream = System.IO.File.Create(fileName))
{
    fileInfo.Stream.CopyTo(fileStream);
}
After that you do whatever you need to do with the stream. The current one just saves it to the specified path, but you can also download it in the browser, etc..
We can use the following code to get the memory stream.
var fileInformation = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, file.ServerRelativeUrl);
if (fileInformation != null && fileInformation.Stream != null)
{
    using (MemoryStream memoryStream = new MemoryStream())
    {
        byte[] buffer = new byte[32768];
        int bytesRead;
        do
        {
            bytesRead = fileInformation.Stream.Read(buffer, 0, buffer.Length);
            memoryStream.Write(buffer, 0, bytesRead);
        } while (bytesRead != 0);
        // Rewind the stream before handing it to a consumer.
        memoryStream.Seek(0, SeekOrigin.Begin);
    }
}
Reference: https://praveenkasireddy.wordpress.com/2012/11/11/download-document-from-document-set-using-client-object-model-om/

Upload > 5 MB files to SharePoint 2013 programmatically

I am having trouble uploading large files to my SharePoint 2013/Office 365 site. I am using Visual Studio 2010 and .NET 4.0.
I have tried code from these questions:
SP2010 Client Object Model 3 MB limit - updating maxReceivedMessageSize doesnt get applied
maximum file upload size in sharepoint
Upload large files 100mb+ to Sharepoint 2010 via c# Web Service
How to download/upload files from/to SharePoint 2013 using CSOM?
But nothing is working, so I need a little help. Here is code that I have tried:
1: (I have also tried to use SharePointOnlineCredentials instead of NetworkCredential for this one)
#region 403 forbidden
byte[] content = System.IO.File.ReadAllBytes(fileInfo.FullName);
System.Net.WebClient webclient = new System.Net.WebClient();
System.Uri uri = new Uri(sharePointSite + directory + fileInfo.Name);
webclient.Credentials = new NetworkCredential(user, password.ToString(), sharePointSite + "Documents");
webclient.UploadData(uri, "PUT", content);
#endregion
2:
#region 500 Internal Server Error
using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        web.ServerRelativeUrl + "/" + directory,
        fs,
        true);
}
#endregion
I have gotten smaller file uploads to work with:
#region File upload for smaller files
Folder folder = context.Web.GetFolderByServerRelativeUrl(web.ServerRelativeUrl + directory);
web.Context.Load(folder);
context.ExecuteQuery();
FileCreationInformation fci = new FileCreationInformation();
fci.Content = System.IO.File.ReadAllBytes(fileInfo.FullName);
fciURL = sharePointSite + directory;
fciURL += (fciURL[fciURL.Length - 1] == '/') ? fileInfo.Name : "/" + fileInfo.Name;
fci.Url = fciURL;
fci.Overwrite = true;
Microsoft.SharePoint.Client.FileCollection documentfiles = folder.Files;
context.Load(documentfiles);
context.ExecuteQuery();
Microsoft.SharePoint.Client.File file = documentfiles.Add(fci);
context.Load(file);
context.ExecuteQuery();
#endregion
My Using Statement:
using (Microsoft.SharePoint.Client.ClientContext context = new Microsoft.SharePoint.Client.ClientContext(sharePointSite))
{
    //string fciURL = "";
    exception = "";
    context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(user, password);
    Web web = context.Web;
    web.Context.Credentials = context.Credentials;
    if (!web.IsPropertyAvailable("ServerRelativeUrl"))
    {
        web.Context.Load(web, w => w.ServerRelativeUrl);
        web.Context.ExecuteQuery();
    }
    //upload large file
}
The solution I went with:
MemoryStream destStream;
using (System.IO.FileStream fInfo = new FileStream(fileInfo.FullName, FileMode.Open))
{
    byte[] buffer = new byte[16 * 1024];
    byte[] byteArr;
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = fInfo.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        byteArr = ms.ToArray();
    }
    destStream = new MemoryStream(byteArr);

    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        serverRelativeURL + directory + fileInfo.Name,
        destStream,
        true);
    context.ExecuteQuery();
    results = "File Uploaded";
    return true;
}
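Incidentally, the intermediate MemoryStream copy should not be required in principle: SaveBinaryDirect reads from any readable Stream, so (with the same variables as above) the FileStream can be passed directly and the file is streamed from disk rather than buffered in memory:

using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
    // Same call as above, but fed straight from the file on disk.
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        serverRelativeURL + directory + fileInfo.Name,
        fs,
        true);
}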
The problem with your code snippet number 2 is that you missed the file name:
using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        serverRelativeURL + directory + fs.Name,
                                        ^^^^^^^
        fs,
        true);
}
My research on the subject showed that using FrontPage Remote Procedure Calls (RPC) was the most advantageous way to reliably upload large files.
This is because FrontPage RPC supports file fragmentation, which helps avoid OutOfMemory exceptions caused by Windows needing to allocate the entire file in contiguous memory.
It also supports sending metadata, which is useful in pretty much any file upload application. One major advantage of this is that you can actually specify the correct content type up front, without a user having to log in and change it later (with all the other methods I tried, it would just be set to the default type).
See my answer on the SharePoint Stack Exchange for further detail on implementing FrontPage RPC.
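For a rough idea of what such a call looks like, here is a heavily hedged sketch of a FrontPage RPC "put document" upload. The author.dll endpoint path is standard, but the method string, parameter encoding, and metadata fields below are placeholders that must be verified against the FrontPage RPC documentation and your server before use:

using System.IO;
using System.Net;
using System.Text;

static void UploadViaFrontPageRpc(string siteUrl, string docLibPath, string fileName,
                                  string localFile, ICredentials credentials)
{
    // author.dll is the FrontPage RPC authoring endpoint.
    var request = (HttpWebRequest)WebRequest.Create(siteUrl + "/_vti_bin/_vti_aut/author.dll");
    request.Method = "POST";
    request.Credentials = credentials;
    request.ContentType = "application/x-vermeer-urlencoded";
    request.Headers.Add("X-Vermeer-Content-Type", "application/x-vermeer-urlencoded");
    // Stream the body instead of buffering it; this is what keeps the whole
    // file from being held in memory at once.
    request.SendChunked = true;
    request.AllowWriteStreamBuffering = false;

    // Placeholder RPC header: real code must URL-encode these values and use
    // the method version that the server reports.
    string rpcHead = "method=put document"
                   + "&service_name="
                   + "&document=[document_name=" + docLibPath + "/" + fileName + ";meta_info=[]]"
                   + "&put_option=overwrite&comment=&keep_checked_out=false\n";
    byte[] headBytes = Encoding.UTF8.GetBytes(rpcHead);

    using (Stream body = request.GetRequestStream())
    using (FileStream file = File.OpenRead(localFile))
    {
        body.Write(headBytes, 0, headBytes.Length);
        file.CopyTo(body, 16 * 1024); // send the file in 16 KB fragments
    }
    using (request.GetResponse()) { } // dispose the response; inspect its body for success in real code
}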

Merging files on S3 Amazon

I have an application where I want to merge two files present on S3 into a third file. I thought of using Copy Object with the multipart upload API. Below is the code.
AmazonS3Config config = new AmazonS3Config();
AmazonS3 s3Client = new AmazonS3Client(accessKeyID, secretAccessKey, config);

// Lists to store upload and copy part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
List<CopyPartResponse> copyResponses = new List<CopyPartResponse>();

InitiateMultipartUploadRequest initiateRequest =
    new InitiateMultipartUploadRequest()
        .WithBucketName(targetBucket)
        .WithKey(targetObjectKey);
InitiateMultipartUploadResponse initResponse = s3Client.InitiateMultipartUpload(initiateRequest);
String uploadId = initResponse.UploadId;

try
{
    // Get the first object's size.
    GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest();
    metadataRequest.BucketName = sourceBucket;
    metadataRequest.Key = sourceObjectKey1;
    GetObjectMetadataResponse metadataResponse = s3Client.GetObjectMetadata(metadataRequest);
    long objectSize1 = metadataResponse.ContentLength; // in bytes

    // Get the second object's size.
    GetObjectMetadataRequest metadataRequest2 = new GetObjectMetadataRequest();
    metadataRequest2.BucketName = sourceBucket;
    metadataRequest2.Key = sourceObjectKey2;
    GetObjectMetadataResponse metadataResponse2 = s3Client.GetObjectMetadata(metadataRequest2);
    long objectSize2 = metadataResponse2.ContentLength; // in bytes

    long bytePosition = 0;
    CopyPartRequest copyRequest1 = new CopyPartRequest()
        .WithDestinationBucket(targetBucket)
        .WithDestinationKey(targetObjectKey)
        .WithSourceBucket(sourceBucket)
        .WithSourceKey(sourceObjectKey1)
        .WithUploadID(uploadId)
        .WithFirstByte(bytePosition)
        .WithLastByte(objectSize1 - 1)
        .WithPartNumber(1);
    copyResponses.Add(s3Client.CopyPart(copyRequest1));

    CopyPartRequest copyRequest2 = new CopyPartRequest()
        .WithDestinationBucket(targetBucket)
        .WithDestinationKey(targetObjectKey)
        .WithSourceBucket(sourceBucket)
        .WithSourceKey(sourceObjectKey2)
        .WithUploadID(uploadId)
        .WithFirstByte(bytePosition)
        .WithLastByte(objectSize2 - 1)
        .WithPartNumber(2);
    copyResponses.Add(s3Client.CopyPart(copyRequest2));

    CompleteMultipartUploadRequest completeRequest =
        new CompleteMultipartUploadRequest()
            .WithBucketName(targetBucket)
            .WithKey(targetObjectKey)
            .WithUploadId(initResponse.UploadId)
            .WithPartETags(GetETags(copyResponses));
    CompleteMultipartUploadResponse completeUploadResponse =
        s3Client.CompleteMultipartUpload(completeRequest);
}
catch (Exception e)
{
    Console.WriteLine(e.Message);
}
But it throws an exception at the last line, CompleteMultipartUpload. This is the S3 exception: "Your proposed upload is smaller than the minimum allowed size".
Whereas if I only upload copyRequest1, it works fine.
Any help is appreciated!!
Regards,
Haseena
Did you manage to solve the problem? It seems that it can't be done using the S3 API: in a multipart upload, every part except the last must be at least 5 MB, which is why completing the upload fails when the first copied file is smaller than that.
It is not possible to merge small uploaded files using the S3 API, so I am using FTP to download and merge.
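Since every part except the last must be at least 5 MB, small files have to be merged client-side. A rough sketch of the download-and-concatenate fallback, in the same (older) AWS SDK style as the question's code; s3Client and the bucket/key variables are assumed from above:

using (MemoryStream merged = new MemoryStream())
{
    // Download both source objects and concatenate them in memory.
    // Suitable for small objects, which is exactly the case where
    // server-side CopyPart is disallowed.
    foreach (string key in new[] { sourceObjectKey1, sourceObjectKey2 })
    {
        GetObjectRequest getRequest = new GetObjectRequest();
        getRequest.BucketName = sourceBucket;
        getRequest.Key = key;
        using (GetObjectResponse getResponse = s3Client.GetObject(getRequest))
        {
            getResponse.ResponseStream.CopyTo(merged);
        }
    }
    merged.Position = 0;

    // Upload the concatenated result as the target object.
    PutObjectRequest putRequest = new PutObjectRequest();
    putRequest.BucketName = targetBucket;
    putRequest.Key = targetObjectKey;
    putRequest.InputStream = merged;
    s3Client.PutObject(putRequest);
}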

How to move files from one FTP to another

I need to move files from one FTP server to another (currently using FtpWebRequest). Both require authentication and have different settings (timeout, ASCII, active, etc.). Is downloading the files from one to a local server and then uploading them to the other significantly slower than just copying the files (if that even exists — how would you do it, RenameTo?). It feels like it should be faster, but I'm not sure; I have no real understanding of file copying or downloading.
They are all .txt or .csv files, mostly around 3-10 MB each, so quite a bit of data.
You can copy a file from FTP server A to FTP server B using FXP, where the client instructs server A to open a data connection directly to server B, so the file data never passes through the client. Both servers and the client have to support that feature.
Sometimes we need to download or upload files from an FTP server. Here is a good example of FTP operations in C#.
You can use it to build a C# program that fulfills your requirements.
File download from FTP server:
public void DownloadFile(string HostURL, string UserName, string Password, string SourceDirectory, string FileName, string LocalDirectory)
{
    if (!File.Exists(LocalDirectory + FileName))
    {
        try
        {
            FtpWebRequest requestFileDownload = (FtpWebRequest)WebRequest.Create(HostURL + "/" + SourceDirectory + "/" + FileName);
            requestFileDownload.Credentials = new NetworkCredential(UserName, Password);
            requestFileDownload.Method = WebRequestMethods.Ftp.DownloadFile;

            FtpWebResponse responseFileDownload = (FtpWebResponse)requestFileDownload.GetResponse();
            Stream responseStream = responseFileDownload.GetResponseStream();
            FileStream writeStream = new FileStream(LocalDirectory + FileName, FileMode.Create);

            int Length = 2048;
            Byte[] buffer = new Byte[Length];
            int bytesRead = responseStream.Read(buffer, 0, Length);
            while (bytesRead > 0)
            {
                writeStream.Write(buffer, 0, bytesRead);
                bytesRead = responseStream.Read(buffer, 0, Length);
            }
            responseStream.Close();
            writeStream.Close();

            requestFileDownload = null;
            responseFileDownload = null;
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}
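Moving a file between servers also needs the upload half. Here is a minimal matching sketch in the same style as the download method above (the method and parameter names are illustrative):

public void UploadFile(string HostURL, string UserName, string Password, string TargetDirectory, string FileName, string LocalDirectory)
{
    FtpWebRequest requestFileUpload = (FtpWebRequest)WebRequest.Create(HostURL + "/" + TargetDirectory + "/" + FileName);
    requestFileUpload.Credentials = new NetworkCredential(UserName, Password);
    requestFileUpload.Method = WebRequestMethods.Ftp.UploadFile;

    using (FileStream readStream = File.OpenRead(LocalDirectory + FileName))
    using (Stream requestStream = requestFileUpload.GetRequestStream())
    {
        readStream.CopyTo(requestStream); // Stream.CopyTo requires .NET 4.0+
    }
    using (FtpWebResponse responseFileUpload = (FtpWebResponse)requestFileUpload.GetResponse())
    {
        // responseFileUpload.StatusDescription contains the server reply,
        // e.g. "226 Transfer complete."
    }
}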
Some Good Examples
Hope it will help you.

C# FtpWebRequest performance

I have an application that downloads files from a Unix FTP server. It works fine; it just has this performance problem: files whose size is <= 1 KB take on average between 2084 and 2400 milliseconds to download, while applications like FileZilla download the same files in less than 1 second each.
Maybe this time is OK for some average users, but it is not acceptable for my application, since I need to download THOUSANDS of files.
I optimized the code as much as I could:
- The cache and the buffer to read the content are created once, in the constructor of the class.
- I create the network credentials once and re-use them on every file download. I know this is working, since the first file takes about 7 s to download and all subsequent downloads are in the range of 2 s.
- I changed the size of the buffer from 2 KB up to 32 KB. I don't know if this will help or not, since the files I'm downloading are less than 1 KB, so in theory the buffer will be filled with all the information in one round trip over the network.
Maybe it is not related to the network, but to the way I'm writing the file and/or how Windows handles the write?
Can someone please give me some tips on how to reduce the time to something similar to FileZilla?
I need to reduce the time; otherwise my FTP job will be running 24 hours a day for 3 days to finish its task :(
Many thanks in advance.
The code is here. It's not complete; it just shows the download part.
// Created once in the constructor of my class
downloadCache = new MemoryStream(2097152);
downloadBuffer = new byte[32768];

public bool downloadFile(string pRemote, string pLocal, out long downloadTime)
{
    FtpWebResponse response = null;
    Stream responseStream = null;
    try
    {
        Stopwatch fileDownloadTime = new Stopwatch();
        downloadTime = 0;
        fileDownloadTime.Start();

        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(pRemote);
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.UseBinary = false;
        request.AuthenticationLevel = AuthenticationLevel.None;
        request.EnableSsl = false;
        request.Proxy = null;
        // The credentials are created once and re-used for every file I need to download
        request.Credentials = this.manager.ftpCredentials;

        response = (FtpWebResponse)request.GetResponse();
        responseStream = response.GetResponseStream();
        downloadCache.Seek(0, SeekOrigin.Begin);
        int bytesSize = 0;
        int cachedSize = 0;

        // Always create an empty file first, because WriteCacheToFile only appends
        using (FileStream fileStream = new FileStream(pLocal, FileMode.Create)) { }

        // Download until the transfer is completed
        while (true)
        {
            bytesSize = responseStream.Read(downloadBuffer, 0, downloadBuffer.Length);
            if (bytesSize == 0 || 2097152 < cachedSize + bytesSize)
            {
                WriteCacheToFile(pLocal, cachedSize);
                if (bytesSize == 0)
                {
                    break;
                }
                downloadCache.Seek(0, SeekOrigin.Begin);
                cachedSize = 0;
            }
            downloadCache.Write(downloadBuffer, 0, bytesSize);
            cachedSize += bytesSize;
        }
        fileDownloadTime.Stop();
        downloadTime = fileDownloadTime.ElapsedMilliseconds;

        // File downloaded OK
        return true;
    }
    catch (Exception)
    {
        downloadTime = 0; // out parameter must be assigned on every path
        return false;
    }
    finally
    {
        if (response != null)
        {
            response.Close();
        }
        if (responseStream != null)
        {
            responseStream.Close();
        }
    }
}

private void WriteCacheToFile(string downloadPath, int cachedSize)
{
    using (FileStream fileStream = new FileStream(downloadPath, FileMode.Append))
    {
        byte[] cacheContent = new byte[cachedSize];
        downloadCache.Seek(0, SeekOrigin.Begin);
        downloadCache.Read(cacheContent, 0, cachedSize);
        fileStream.Write(cacheContent, 0, cachedSize);
    }
}
It sounds to me like your problem is related to Nagle's algorithm being used in the TCP client.
You can try turning Nagle's algorithm off and also setting SendChunked to false.
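A minimal sketch of the Nagle change (note: SendChunked is a property of HttpWebRequest, not FtpWebRequest, so only the Nagle setting is shown; pRemote is the URI variable from the question's code). For many tiny transfers, the Nagle/delayed-ACK interaction can add a fixed delay per file, which matches the symptom of every small file taking about 2 s:

// Disable Nagle's algorithm for connections created from now on.
ServicePointManager.UseNagleAlgorithm = false;

FtpWebRequest request = (FtpWebRequest)WebRequest.Create(pRemote);
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.KeepAlive = true; // re-use the FTP control connection between files

// The same switch can also be applied per service point:
request.ServicePoint.UseNagleAlgorithm = false;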
