My C# application downloads a .zip that contains at least a .dcm file.
After decompression I get something like:
download/XXXX/YYYYYY/ZZZZZZ/file.dcm
I don't know the name of these intermediary X,Y,Z folders, and I don't know how many of them exist, but I'm certain that at least a single .dcm file exists at the end of the path.
How can I get the full path of folders between download and the folder with .dcm files? (assume Windows filesystem and .Net Framework 4.0).
This will give you a list of all the files within the download folder that match your filename:
Directory.GetFiles("C:\\path_to_download_folder", "file.dcm", SearchOption.AllDirectories);
You could then parse the returned filepaths for whatever parts you need. The System.IO.Path methods will probably give you what you need instead of rolling your own.
Additionally, if your application might download multiple files throughout the day and you always need the path of the very latest matching file, you can wrap each filepath in a System.IO.FileInfo, which exposes the file's creation time, and use that to determine which file is newest.
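Putting the pieces together, a minimal sketch might look like this (the root path is a placeholder, and picking the newest file by creation time is one reasonable policy, not the only one):

```csharp
using System;
using System.IO;
using System.Linq;

class Example
{
    static void Main()
    {
        string root = @"C:\path_to_download_folder"; // placeholder root

        // All matching files, anywhere below the root
        string[] matches = Directory.GetFiles(root, "file.dcm", SearchOption.AllDirectories);

        // Pick the most recently created one
        string newest = matches
            .Select(p => new FileInfo(p))
            .OrderByDescending(f => f.CreationTime)
            .First()
            .FullName;

        // The full path of the folders between the root and the .dcm file
        string folder = Path.GetDirectoryName(newest);
        Console.WriteLine(folder);
    }
}
```

This works on .NET Framework 4.0; if you only ever expect one match, `matches[0]` is enough and the LINQ step can be dropped.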
I'm trying to develop a simple SFTP file transferring project with following operations
Upload
Download
Move files within the remote server
Remove files
While uploading in session.PutFiles() we have a property called transferOptions.FileMask to filter the files.
But I didn't see anything of that kind in session.MoveFile() and in session.RemoveFiles()
My question is what should I do if I need to move/remove only selected files?
The Session.RemoveFiles accepts a file mask.
So you can do:
session.RemoveFiles("/home/user/*.txt");
It's the same as with the Session.PutFiles. The TransferOptions.FileMask is actually intended for advanced selections, like when you want to select files recursively, or when you want to exclude certain types of files.
session.PutFiles(@"c:\toupload\*.txt", "/home/user/");
With TransferOptions.FileMask, WinSCP would upload all matching files recursively, while a simple file mask passed as the argument to .PutFiles is not recursive.
The Session.MoveFile actually supports the file mask in its first argument too, though it's an undocumented feature.
A proper way would be to list remote directory, select desired files and call the Session.MoveFile for each.
See Listing files matching wildcard. It's a PowerShell example, but mapping it to C# should be easy.
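A rough C# translation of that approach might look like the following: list the remote directory, select the files you want, and move each one. The connection details, directory paths, and the `.txt` filter are all placeholders for illustration.

```csharp
using System;
using WinSCP;

class MoveSelectedFiles
{
    static void Main()
    {
        var sessionOptions = new SessionOptions
        {
            Protocol = Protocol.Sftp,
            HostName = "example.com",          // placeholder
            UserName = "user",                 // placeholder
            Password = "password",             // placeholder
            SshHostKeyFingerprint = "...",     // fill in your server's fingerprint
        };

        using (var session = new Session())
        {
            session.Open(sessionOptions);

            RemoteDirectoryInfo directory = session.ListDirectory("/home/user");

            foreach (RemoteFileInfo fileInfo in directory.Files)
            {
                // Select only ordinary *.txt files
                if (!fileInfo.IsDirectory &&
                    fileInfo.Name.EndsWith(".txt", StringComparison.OrdinalIgnoreCase))
                {
                    session.MoveFile(
                        fileInfo.FullName,
                        "/home/user/archive/" + fileInfo.Name);
                }
            }
        }
    }
}
```

The same loop works for `Session.RemoveFiles` if you need finer-grained selection than a mask allows.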
As we all know, we cannot get the full path of a file from the FileUpload control, so the usual approach is to save the file into the application by creating a folder and resolving its path with Server.MapPath.
But I have a scenario where I need to process 1200 Excel files, though not all at once. I select each Excel file, read the required content from it, and save the information to the database. While doing this I save the files to an "Excel" folder I created in the application, so after every run all 1200 files end up in that folder.
I don't know whether this is the correct method to follow. I am looking for an alternative to saving the file to a folder: I would like to keep the file only temporarily, until the process has finished executing. So can anyone tell me the best way to meet this requirement?
Grrbrr404 is correct. You can certainly take the byte[] from the FileUpload.PostedFile and save it to the database directly, without using the intermediate folder. You could store the file name with its extension in a separate column so you know how to stream it back later, in case you need to.
The debate over whether it's good or bad to store these things in the database itself or on the filesystem is very heated. I don't think either approach is strictly better; you'll have to look at your resources and your particular situation and make the appropriate decision. Search for "store images in database or filesystem" here or on Google and you'll see what I mean.
See this one, for example.
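A minimal sketch of the database-only approach, assuming a simple `UploadedFiles` table (table, column names, and connection string are all made up for the example):

```csharp
using System;
using System.Data.SqlClient;
using System.Web.UI;

public partial class UploadPage : Page
{
    protected void btnUpload_Click(object sender, EventArgs e)
    {
        if (!FileUpload1.HasFile)
            return;

        byte[] content = FileUpload1.FileBytes;  // raw file content
        string fileName = FileUpload1.FileName;  // name + extension, kept for streaming back later

        using (var conn = new SqlConnection("your connection string here"))
        using (var cmd = new SqlCommand(
            "INSERT INTO UploadedFiles (FileName, Content) VALUES (@name, @content)", conn))
        {
            cmd.Parameters.AddWithValue("@name", fileName);
            cmd.Parameters.AddWithValue("@content", content);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

Nothing ever touches the filesystem, so there is no "Excel" folder to clean up after each run.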
There is a virus that my brother got on his computer, and what it did was rename almost all the files on the machine, changing the file extensions as well. So a file that might have been named picture.jpg was renamed to kjfks.doc, for example.
So what I have done to try to solve this problem is:

remove all file extensions from the files (I use a recursive method to search all files in a directory and remove each extension as I go).

Now the files have no extension at all.
I think these file names are stored in a local database created by the virus, and if I purchase the "antivirus" they will be renamed back to their original names.
Since my brother had created a backup, I selected the files with a creation date later than when he performed the backup, and placed those files in a directory.
I am not interested in getting exactly the right extension, as long as I can see the content of the file. For example, I will scan each file, and if it has text inside I will give it a .txt extension; maybe it was really .html or .css, but I won't be able to tell, and I accept that.
I believe all PDF files should have something in common, and .doc files likewise. How can I find out what files of the most common types (pdf, doc, docx, png, jpg, etc.) have in common?
Edit:
I know it would probably take less time to go over these 200 files and test each one by hand instead of writing this program; I am just curious to see whether it is possible to recover the file extension.
On Unix, you can use file to determine the type of a file. There is also a Windows port, and you can obviously write a script (batch, PowerShell, etc.) or a C# program to automate this.
First, congratulate your brother on doing a backup. Many people don't, and are absolutely wiped out by these problems.
You're going to have to do a lot of research, I'm afraid, but you're on the right track.
Open each file with a TextReader or a BinaryReader and examine the headers. Most of them are detectable.
For instance: Every PDF starts with "%PDF-" and then its version number. Just look at those first 5 characters. If it's "%PDF-", then put a PDF on the filename and move on.
Similarly: "ÿØÿà..JFIF" for JPEGs, "[InternetShortcut]" for URL shortcuts, "L...........À......Fƒ" for regular shortcuts (the "." is a zero/null byte, BTW).
ZIPs / compressed directories start with {0x50}{0x4B}{0x03}{0x04} (the next byte is usually {0x14}), and you should be aware that Office 2007/2010 documents are really ZIPs with XML files inside them.
You'll have to do some digging as you find each type, but you should be able to write something to establish most of the file types.
You'll have to write some recursion to work through directories, but you can eliminate any file with no extension.
BTW, a great tool to help with this is HxD: http://www.mh-nexus.de/ It's what I used to pull this answer together!
Good luck!
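The header checks above can be sketched roughly like this. The signature table only covers the formats mentioned so far (extend it as you identify more), and the folder path is a placeholder; files whose header isn't recognized are left alone for manual inspection in a hex editor.

```csharp
using System;
using System.IO;

class ExtensionGuesser
{
    // Guess an extension from well-known magic bytes; null means "unknown".
    static string GuessExtension(string path)
    {
        byte[] h = new byte[8];
        using (FileStream fs = File.OpenRead(path))
        {
            if (fs.Read(h, 0, h.Length) < 4)
                return null; // too short to identify
        }

        // "%PDF" (PDFs start with "%PDF-" plus a version number)
        if (h[0] == 0x25 && h[1] == 0x50 && h[2] == 0x44 && h[3] == 0x46)
            return ".pdf";
        // JPEG: FF D8 FF
        if (h[0] == 0xFF && h[1] == 0xD8 && h[2] == 0xFF)
            return ".jpg";
        // PNG: 89 'P' 'N' 'G'
        if (h[0] == 0x89 && h[1] == 0x50 && h[2] == 0x4E && h[3] == 0x47)
            return ".png";
        // ZIP ("PK\x03\x04") -- also docx/xlsx/pptx, JAR, ODF
        if (h[0] == 0x50 && h[1] == 0x4B && h[2] == 0x03 && h[3] == 0x04)
            return ".zip";

        return null;
    }

    static void Main()
    {
        // Placeholder folder holding the extension-less recovered files
        foreach (string file in Directory.GetFiles(@"C:\recovered", "*", SearchOption.AllDirectories))
        {
            string ext = GuessExtension(file);
            if (ext != null)
                File.Move(file, file + ext);
        }
    }
}
```

Remember that a file renamed ".zip" this way may actually be a .docx or .jar; distinguishing those means looking inside the archive.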
"most common types" each have it's own format and most of them have some magic bytes at the fixed position near beginning of the file. You can detect most of formats quite easily. Even HTML, XML, .CSS and similar text files can be detected by analyzing their beginning. But it will take some time to write an application that will guess the format. For some types (such as ODF format or JAR format, which are built on top of regular ZIPs) you will be also able to detect this format.
But might such an application already exist on the market? I guess you can find something if you search, because the task is not as tricky as it initially seems.
I am creating an application that uses Quartz.NET to automatically download and upload files to various sources (HTTP, FTP and network paths) based on a regular expression. Users can select multiple paths for each download and upload operation, so a typical job may be to download files from an HTTP server, also download from an FTP server, and upload all files to a network path.
Currently, I download all files from all the download sources and store them in a folder (with the name of the folder being a GUID specific to that job). Then, for the upload stage, it simply reads all files from that directory and uploads them to the path, which works great.
The problem is that for specific paths the user may request the files be deleted after the upload has completed, which is an issue: how can I find out where a file in the folder came from? I've been trying to think of ways around this, such as creating subfolders for each download path, but then I'd need to check for duplicate names at download time rather than upload time, plus I'd need to merge the subfolders... etc.!
Can anyone offer any ideas? Many thanks
Think about this in an object-oriented manner.

Create a class like this:
public class File
{
    public string source;
    public string destination;
    public bool deleteSource; // if true, delete the source after the copy
}
Now create a list of File objects, like List&lt;File&gt; files, and keep it as a variable in your app.
Add objects to the list at the start, then traverse the list and copy/upload the files. Check the deleteSource property, and if it is true delete the file after the copy operation.
This is a basic idea; expand the class as required. What I want to stress is that you should think of the problem in an object-oriented way and design from there.
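As a sketch of how that list might be used (the class is renamed TransferFile here to avoid clashing with System.IO.File, the paths are made up, and File.Copy stands in for the real download/upload step):

```csharp
using System.Collections.Generic;
using System.IO;

public class TransferFile
{
    public string source;
    public string destination;
    public bool deleteSource; // if true, delete the source after the copy
}

class TransferJob
{
    static void Main()
    {
        var files = new List<TransferFile>
        {
            new TransferFile { source = @"C:\downloads\a.txt", destination = @"\\server\share\a.txt", deleteSource = true },
            new TransferFile { source = @"C:\downloads\b.txt", destination = @"\\server\share\b.txt", deleteSource = false },
        };

        foreach (TransferFile f in files)
        {
            File.Copy(f.source, f.destination, true);
            if (f.deleteSource)
                File.Delete(f.source);
        }
    }
}
```

Because each entry remembers its own source, you no longer need to reconstruct where a file in the GUID folder came from; the deletion decision travels with the file.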
When you download a file, could you create a separate text file that records the source and destination paths? That way you can read the mapping back later and process each file based on its source.
I want to select multiple files from a particular folder based on their specific contents, and edit those contents. I am using WinForms in C#.

Any idea which classes can be used? It would be helpful if example code were given. Thanks.
Have a look at Directory.GetFiles to get the names of the files in a directory. If they're text files you can read them via File.ReadAllLines (returns an array of strings, one per line of the file) or File.ReadAllText (returns a single string containing the entire content of the file).
To save the edited files have a look at File.WriteAllLines or File.WriteAllText.
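Combining those, a minimal sketch: select the .txt files in a folder whose content contains some marker string, then rewrite them. The folder path and the OLD_VALUE/NEW_VALUE marker are invented for the example; substitute your own content test and edit.

```csharp
using System.IO;

class EditMatchingFiles
{
    static void Main()
    {
        foreach (string path in Directory.GetFiles(@"C:\data", "*.txt"))
        {
            string text = File.ReadAllText(path);

            // "Select by content": only touch files containing the marker
            if (text.Contains("OLD_VALUE"))
            {
                File.WriteAllText(path, text.Replace("OLD_VALUE", "NEW_VALUE"));
            }
        }
    }
}
```

For very large files you'd want StreamReader/StreamWriter instead of reading everything into memory, but for typical WinForms-scale editing ReadAllText/WriteAllText is the simplest route.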