Getting file info (accessed/modified dates) from an FTP server (C#)

I'm creating a program which downloads files off various types of servers, such as network paths or HTTP servers, based upon criteria. So far I have it working based upon a regex, but I'd also like it to find files newer (last accessed, modified or created) than a given date. This is easy for the network path type because I can access the FileInfo for that file, but all I have for my FTP server is a 'line' string which obviously just holds the file name.
Is it easy/possible to access the last modified/accessed/created dates for a file on an FTP server in C#?

Unfortunately FTP provides only limited information about a remote file. With the default LIST command you get an OS-specific response in which one date is usually present (usually the last modification time). With the MLST/MLSD extension commands you get a machine-parsable response string, but again with just one time.
The exact way to get the date depends on what component or class you use to access the FTP server.
If you need more than one date (e.g. date of creation and last access) and SFTP is an option, I'd recommend using it instead.
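If the single modification time is enough, here is a minimal sketch using the built-in FtpWebRequest, which issues an MDTM command under the hood (not guaranteed on every server); the host, path, and credentials are placeholders:
using System;
using System.Net;

class FtpTimestampDemo
{
    static void Main()
    {
        // Placeholder address and credentials.
        var request = (FtpWebRequest)WebRequest.Create(
            "ftp://ftp.example.com/remote/path/file.txt");
        request.Method = WebRequestMethods.Ftp.GetDateTimestamp; // MDTM
        request.Credentials = new NetworkCredential("user", "password");

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            // LastModified is parsed from the server's MDTM reply.
            Console.WriteLine("Last modified: {0}", response.LastModified);
        }
    }
}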

You could use a third-party library such as edtFTP to connect to the FTP server and inspect the last modified/created timestamps (I'm not sure if you can get the last accessed timestamp). It's quite an easy library to use.

Related

Efficiently pass files from web server to file server

I have multiple web servers and one central file server inside my data center, and all my web servers store the user-uploaded files on the central internal file server.
I would like to know the best way to pass the files from the web server to the file server in this case.
As suggested, I'll try to add more details to the question.
The solution I came up with was: after receiving files from a user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
Is your file server just another Windows/Linux server, or is it a NAS device? I can suggest a number of approaches based on your requirements. The question is why you want to use the HTTP protocol when you have much better ways to transfer files between servers.
HTTP is best when you send text data, as HTTP itself is text-based. From the client side to the server side, HTTP is used because it is the only option our browsers give us. But between your servers, I feel you should use the SMB protocol (I'm assuming you are using Windows, as the question is tagged IIS) to move data. It will be orders of magnitude faster, as it is much more efficient to transfer the same data over SMB than over HTTP.
And with the SMB protocol, you do not have to write any code or complex scripts. As provided by one of the answers above, you can just issue a simple copy command and it will happen for you.
So, just summarizing the options for you (in order of my preference):
1. Let the files get uploaded to some location on each IIS web server, e.g. C:\temp\UploadedFiles. You can write a simple 2-3 line PowerShell script which copies the files from C:\temp\UploadedFiles to \\FileServer\Files\UserID\<FILEGUID>\uploaded.file. The same PowerShell script can delete a file once it has been moved to the other server successfully.
E.g. the script can be this simple, and easy to set up as a Windows scheduled task:
$Source = "C:\temp\UploadedFiles"
$Destination = "\\FileServer\Files\UserID\<FILEGUID>\"
# Create the target directory if needed, then copy the uploaded files over
New-Item -ItemType directory -Path $Destination -Force
Copy-Item -Path $Source\*.* -Destination $Destination -Force
This script can be modified to suit your needs, e.g. to delete the files once the copy has succeeded :)
2. In the ASP.NET application, you can save the file directly to the network location, i.e. in the SaveAs call you can give the network path itself. You have to make sure this network share is accessible to the IIS worker process and has write permission. Also, to my understanding, ASP.NET saves the file to a temporary location first (you have no control over this if you are using the ASP.NET HttpPostedFileBase or FormCollection). More details here.
You can even run this asynchronously so that your requests will not be blocked:
if (FileUpload1.HasFile)
{
    // Save the uploaded file directly to the network share.
    FileUpload1.SaveAs(@"\\networkshare\filename");
}
https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas(v=vs.110).aspx
3. Save the file the current way to a local directory and then use an HTTP POST. This is the worst design possible, as you first read the contents and then transfer them, chunked, to the other server, where you have to set up another web service which receives the file. Then you have to read the file from the request stream and save it to your location again. I am not sure you need to do this.
Let me know if you need more details on any of the listed methods.
Or you just write it to a folder on the web servers, and create a scheduled task that moves the files to the file server every x minutes (e.g. via robocopy). This also makes sure your web servers are not reliant on your file server.
Assuming that you have an HttpPostedFileBase then the best way is just to call the .SaveAs() method.
You need the UNC path to the file server and that is it. The simplest version would look something like this:
public void SaveFile(HttpPostedFileBase inputFile)
{
    var saveDirectory = @"\\fileshare\application\directory";
    var savePath = Path.Combine(saveDirectory, inputFile.FileName);
    inputFile.SaveAs(savePath);
}
However, this is simplistic in the extreme. Take a look at the OWASP Guidance on Unrestricted File Uploads. File uploads can be the source of many vulnerabilities in your application.
You also need to make sure that the web application has access to the file share. Take a look at this answer
Creating a file on network location in asp.net
for more info. Generally the best solution is to run the application pool with a special identity which is only used to access the folder.
The solution I came up with was: after receiving files from a user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
I would suggest not posting the file all at once; then it is held fully in memory, which is not needed.
You could post the file in chunks, using Ajax. When a chunk arrives at your server, just append it to the file.
With the File API you can slice the file into chunks in JavaScript.
Something like this:
/** Upload the file in sequential chunks, so the server can append them in order. */
function upload(file) {
    var chunkSize = 8000;
    var start = 0;

    function sendNextChunk() {
        if (start >= file.size) {
            // All chunks sent: tell the server the upload is complete,
            // e.g. by posting the file name in a final request.
            return;
        }
        var end = Math.min(start + chunkSize, file.size);
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            start = end;
            sendNextChunk(); // only send the next chunk once this one is stored
        };
        xhr.open("POST", "/FileUpload", true);
        xhr.send(file.slice(start, end));
    }

    sendNextChunk();
}
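On the server side, each POST body then just needs to be appended to the target file. A minimal sketch, assuming ASP.NET MVC; the controller name, the fileName parameter, and the share path are hypothetical:
using System.IO;
using System.Web.Mvc;

public class FileUploadController : Controller
{
    [HttpPost]
    public ActionResult Index(string fileName)
    {
        // Hypothetical target path; fileName would be sent by the client.
        var path = Path.Combine(@"\\FileServer\Files", fileName);

        // FileMode.Append creates the file if needed and appends each chunk.
        using (var target = new FileStream(path, FileMode.Append, FileAccess.Write))
        {
            Request.InputStream.CopyTo(target);
        }
        return new HttpStatusCodeResult(200);
    }
}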
It can be implemented in different ways. If you are storing files on the file server as files in the file system, and all of your servers are inside the same virtual network,
then it is better to create a shared folder on your file server; once you receive files at the web server, just save them into this shared folder, directly on the file server.
Here are instructions on how to create shared folders: https://technet.microsoft.com/en-us/library/cc770880(v=ws.11).aspx
Just map a drive
I take it you have a means of saving the uploaded file on the web server's local filesystem. The question pertains to moving the file from the web server (which is probably one of many load-balanced nodes) to a central file system all web servers can access.
The solution to this is remarkably simple.
Let's say you are currently saving the files in some folder, say c:\uploadedfiles. The path to uploadedfiles is stored in your web.config.
Take the following steps:
Sign on as the service account under which your web site executes
Map a persistent network drive to the desired location, e.g. from command line:
NET USE f: \\MyFileServer\MyFileShare password /user:SomeUserName /persistent:yes
Modify your web.config and change c:\uploadedfiles to f:\
Ta da, all done.
Just make sure the drive mapping is persistent, and make sure you use a user with adequate permissions, and voila.

SQL Server 2005 CLR C# checking file status

I need to check the status (existence, or last modified date) of a file on multiple remote Windows servers (on the LAN). The remote servers need a user name and password for access.
I was trying to do it using T-SQL (SQL Server 2005), but I'm thinking it may be best done with a CLR procedure/function. The reason for using a stored procedure is that it will be used by an SSRS report to show, for each server, whether it has the file (and its last modified date) or not. The parameter for the procedure/function should be the UNC path of the file on the server.
I know about CLR, but need the C# code to do this. Thanks.
If the files in question are locatable via UNC path names, then this is quite straightforward in C#:
var fInfo = new FileInfo("uncPathGoesHere");
if (fInfo.LastWriteTime > DateTime.Now.AddHours(-1))
{
    // file modified within the last hour
}
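To call this from SQL Server, the FileInfo check can be wrapped in a CLR scalar function along these lines (a sketch only: the assembly needs at least EXTERNAL_ACCESS permission, and the SQL Server service account, or an impersonated identity, must have rights on the remote shares):
using System.Data.SqlTypes;
using System.IO;
using Microsoft.SqlServer.Server;

public class FileStatus
{
    // Returns the last-write time of the file at the given UNC path,
    // or NULL if the path is NULL or the file does not exist.
    [SqlFunction]
    public static SqlDateTime GetFileLastModified(SqlString uncPath)
    {
        if (uncPath.IsNull)
            return SqlDateTime.Null;

        var fInfo = new FileInfo(uncPath.Value);
        return fInfo.Exists
            ? new SqlDateTime(fInfo.LastWriteTime)
            : SqlDateTime.Null;
    }
}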

How To Do a Server To Server File Transfer without any user interaction?

In my scenario, users are able to upload zip files to a.example.com
I would love to create a "daemon" which, at specified time intervals, will move (transfer) any zip files uploaded by the users from a.example.com to b.example.com.
From the info I gathered so far:
The daemon will be an .ashx generic handler.
The daemon will be triggered at the specified time intervals via a plesk cron job
The daemon (thanks to SLaks) will consist of two FtpWebRequests (one for reading and one for writing).
So the question is, how could I implement step 3?
Do I have to read the whole file into an in-memory array and try to write that to b.example.com?
How could I write the info I read to b.example.com?
Could I perform reading and writing of the file at the same time?
No, I am not asking for the full code; I just can't figure out how to perform reading and writing on the fly, without user interaction.
I mean, I could download the file locally from a.example.com and upload it to b.example.com, but that is not the point.
Here is another solution:
Let ASP.Net in server A receive the file as a regular file upload and store it in directory XXX
Have a Windows service in server A that checks directory XXX for new files.
Let the Windows service upload the file to server B using HttpWebRequest.
Let server B receive the file using a regular ASP.Net file upload page.
Links:
File upload example (ASP.Net): http://msdn.microsoft.com/en-us/library/aa479405.aspx
Building a windows service: http://www.codeproject.com/KB/system/WindowsService.aspx
Uploading files using HttpWebRequest: Upload files with HTTPWebrequest (multipart/form-data)
Problems you have got to solve:
How to determine which files to upload to server B. I would use Directory.GetFiles in a Timer to find new files, instead of using a FileSystemWatcher (see the sketch after this list). You need to be able to check whether a file has been uploaded previously (delete it, rename it, check a DB, or whatever suits your needs).
Authentication on server B, so that only you can upload files to it.
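A rough sketch of that polling service (using WebClient for brevity instead of raw HttpWebRequest; the directory, URL, and ".done" rename convention are assumptions):
using System;
using System.IO;
using System.Net;
using System.Timers;

class UploadWatcher
{
    static void Main()
    {
        var timer = new Timer(60000); // scan the upload folder every minute
        timer.Elapsed += (s, e) =>
        {
            foreach (var path in Directory.GetFiles(@"C:\temp\UploadedFiles"))
            {
                if (path.EndsWith(".done"))
                    continue; // already uploaded on a previous pass

                using (var client = new WebClient())
                {
                    // Posts the file as multipart/form-data to server B.
                    client.UploadFile("http://serverB/upload.aspx", path);
                }

                // Rename so the file is not uploaded again.
                File.Move(path, path + ".done");
            }
        };
        timer.Start();
        Console.ReadLine(); // in production this would run as a Windows service
    }
}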
To answer your questions - yes, you can read and write the files at the same time.
You can open an FtpWebRequest to server A and an FtpWebRequest to server B. On the request to server A you would request the file and get the ResponseStream. Once you have the ResponseStream, you would read a chunk of bytes at a time and write that chunk of bytes to the server B RequestStream.
The only memory you would be using would be the byte[] buffer in your read/write loop. Just keep in mind though that the underlying implementation of FTPWebRequest will download the complete FTP file before returning the response stream.
Similarly, you cannot send your FTPWebRequest to upload the new file until all bytes have been written. In effect, the operations will happen synchronously. You will call GetResponse which won't return until the full file is available, and only then can you 'upload' the new file.
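A minimal sketch of that read/write loop (host names, paths, and credentials are placeholders):
using System;
using System.Net;

class FtpRelay
{
    static void Relay()
    {
        var download = (FtpWebRequest)WebRequest.Create("ftp://a.example.com/uploads/file.zip");
        download.Method = WebRequestMethods.Ftp.DownloadFile;
        download.Credentials = new NetworkCredential("userA", "passwordA");

        var upload = (FtpWebRequest)WebRequest.Create("ftp://b.example.com/incoming/file.zip");
        upload.Method = WebRequestMethods.Ftp.UploadFile;
        upload.Credentials = new NetworkCredential("userB", "passwordB");

        using (var response = (FtpWebResponse)download.GetResponse())
        using (var source = response.GetResponseStream())
        using (var target = upload.GetRequestStream())
        {
            // Only this buffer is held in memory during the copy.
            var buffer = new byte[8192];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                target.Write(buffer, 0, read);
        }

        // Closing the request stream completes the upload.
        using (var uploadResponse = (FtpWebResponse)upload.GetResponse())
            Console.WriteLine(uploadResponse.StatusDescription);
    }
}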
References:
FTPWebRequest
Something you have to take into consideration is that a long-running web request (your .ashx generic handler) may be killed when the AppDomain refreshes. Therefore you have to implement some sort of atomic transaction logic in your code, and you should handle sudden disconnects and incomplete FTP transfers if you go that way.
Did you have a look at Windows Azure before? This cloud platform supports distributed file system, and has built-in atomic transactions. Plus it scales nicely, should your service grow fast.
I would make it pretty simple. The client program uploads the file to server A. This can be done very easily in C# with an FtpWebRequest.
http://msdn.microsoft.com/en-us/library/ms229715.aspx
I would then have a service on server A that monitors the directory where files are uploaded. When a file is uploaded to that directory, or on certain intervals, it simply copies files over to server B. Again, this can be done via FTP or other means if they're on the same network.
You need some listener on the target domain, with an FTP server running there; on the client side you would use System.Net.WebClient and UploadFile or UploadFileAsync to send the file. Is that what you are asking?
It sounds like you don't really need a web service or handler. All you need is a program that will, at regular intervals, open up an FTP connection to the other server and move the files. This can be done by any .NET program with the System.Net.WebClient class; it doesn't have to be a "web app". This other program could be a service, which could handle its own timing, or a simple app run by your cron job. If you need this to go two ways, for instance if the two servers are mirrors, you simply have the same app on the second box doing the same thing to upload files over to the first.
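For example, a hedged sketch of the WebClient approach mentioned above (the FTP address, local path, and credentials are placeholders):
using System.Net;

class FtpPush
{
    static void Push()
    {
        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("user", "password");
            // Issues an FTP STOR for the given file.
            client.UploadFile("ftp://b.example.com/incoming/file.zip",
                              @"C:\uploads\file.zip");
        }
    }
}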
If both machines are in the same domain, couldn't you just do file replication at the OS level?
DFS (Distributed File System)
Set up SSH keys if you are using Linux-based systems:
http://compdottech.blogspot.com/2007/10/unix-login-without-password-setting.html
Once you have the keys working, you can copy the file from system A to system B with regular shell scripts that need no user interaction.

FtpWebRequest Connecting to an AS/400

I need to download some files via ftp from an old AS/400 server. My code looks more or less like:
FtpWebRequest _request = (FtpWebRequest)WebRequest.Create("ftp://ftpaddress/FOO.CSV");
_request.Credentials = new NetworkCredential(_ftpUsername, _ftpPassword);
_request.Method = WebRequestMethods.Ftp.DownloadFile;
FtpWebResponse response = (FtpWebResponse)_request.GetResponse();
However, an exception is being thrown with the message:
501 Character (/) not allowed in object name.
I'm guessing the AS400 uses a different path separator than /, but I can't figure out how to phrase the URI in a way that (1) FtpWebRequest accepts and (2) the AS400 understands.
Anyone else bumped into this?
According to this page, the forward slash is the path separator character:
The forward slash is the separator character for paths sent to the FTP server.
A similar conversation over at Microsoft's forums (2005 era) indicates it's a bug in FtpWebRequest:
"Currently FtpWebRequest does not support QUOTE and I cannot think of a way you'll be able to override the method without exposing our code" - Mariya Atanasova [NCL] MSFT, Moderator, Nov 2005
Try updating to the most recent version, or try a different library; the MS forum thread lists several.
I've had this message often in the past, and it meant that I forgot to change the name format.
There are two name formats possible when doing FTP with an AS400, and the format can be changed with the FTP command NAMEFMT:
0 is for library file system objects (library/filename.member)
1 is for files in the IFS, where a CSV file would be
By default it is set to 0.
Change it to 1 and it should work. However, I'm not sure how it can be changed with an FtpWebRequest.
To make life a little bit easier, the FTP server decides what name format you want to use based on your first command. If you start with "cd /home", the FTP server automatically sets NAMEFMT to 1 for you.
Indeed, you can change this manually during your session with the remote FTP command NAMEFMT. Note that you don't need the (old) iSeries way: you can address EVERY object on the iSeries with NAMEFMT 1. For example, "get /QSYS.LIB/MYLIBRARY.LIB/MYFILE.FILE/MYMEMBER.MBR" will do the trick for any iSeries database table, even for multi-member files!
This is an aggregate answer from the ones previously provided, but I was able to get this working by using the following structure:
ftp://[HostName]/%2F/[directory]/[subdirectory]/[filename].csv
The '%2F' was required and serves as a separator between the host name and the path.
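In code, that URI can be used directly with FtpWebRequest; a sketch (host, path, and credentials are placeholders):
using System.Net;

class As400Download
{
    static void Download()
    {
        // %2F after the host name switches the AS/400 server to IFS-style
        // (NAMEFMT 1) paths, as described above.
        var request = (FtpWebRequest)WebRequest.Create(
            "ftp://as400host/%2F/home/mydir/FOO.CSV");
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential("user", "password");

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            // ... read response.GetResponseStream() and save the file locally ...
        }
    }
}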

How to get the modified time attribute of a certain file on FTP

I need to monitor a certain file on FTP; once it has been updated, I need to fetch it from the FTP server. But how to identify whether it has been updated or not is a problem.
Does anybody have any experience with this?
You need to send a LIST command. You'll need to parse the results manually using a regex, since there is no standard format for the returned result.
File modification date and time can also be obtained using the MLST or MDTM command. Both are extensions to the FTP protocol (not guaranteed on all servers), but at least one of them is supported by most servers. These commands return a standardized format, which does not have to be parsed like the results of a LIST command.
See more details in this article.
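One way to apply this in C# is to poll the MDTM timestamp with the built-in FtpWebRequest and download the file only when it changes; a sketch (host, path, credentials, and the poll interval are assumptions):
using System;
using System.Net;
using System.Threading;

class FtpChangeMonitor
{
    static void Main()
    {
        var lastSeen = DateTime.MinValue;
        while (true)
        {
            var request = (FtpWebRequest)WebRequest.Create(
                "ftp://ftp.example.com/watched/file.dat");
            request.Method = WebRequestMethods.Ftp.GetDateTimestamp; // MDTM
            request.Credentials = new NetworkCredential("user", "password");

            using (var response = (FtpWebResponse)request.GetResponse())
            {
                if (response.LastModified > lastSeen)
                {
                    lastSeen = response.LastModified;
                    // The file changed: fetch it here with a DownloadFile request.
                }
            }
            Thread.Sleep(TimeSpan.FromMinutes(1)); // poll interval
        }
    }
}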
