Is it possible to grab a file version over FTP? - C#

My question is pretty straightforward. I have an exe file on an ftp server with a version of 1.0.0.0. I'd like to download it, but only if the version is greater than a certain pre-set value. (All of this inside a C# desktop application).
I read online that it isn't possible to tell the version of a file through FTP without downloading it first. Is this correct? (I would rather not do this as the file is fairly large and will not need to be downloaded most of the time).
If it is, the solution I saw recommended was to create a text file in the FTP directory that contained the version of the target exe file. Obviously it would not be large so it could be downloaded quickly. Is this the best solution if I can't grab the exe version directly?
Thanks for the help!

You can't, because the FTP protocol has no concept of a file version. Depending on the FTP server you are using, the DIR/LIST command may give you the file's date and time, but nothing more. So yes, in my opinion the metadata file is the best and simplest solution. Another, more challenging, solution is to modify the FTP server source code so it returns version information along with the DIR listing; as far as I know the FTP protocol is not restrictive at all about the details of a directory listing, but it is not easy, so it is up to you to evaluate the benefit, and keep in mind that such a solution would only work with your specific server/client pair. If you are not tied to the protocol, serving the file from a Mercurial (or other version control) repository would probably be the smartest option.
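For example, a minimal sketch of the version-file approach (the FTP URL, credentials and file names are placeholders, not anything from the question):

    using System;
    using System.Net;

    // Sketch of the metadata-file approach: download a small version.txt that
    // sits next to the exe, compare it to the local version, and only then
    // pull the large file. URL, credentials and paths are placeholders.
    class FtpVersionCheck
    {
        static void Main()
        {
            var currentVersion = new Version("1.0.0.0");
            var credentials = new NetworkCredential("user", "password");

            using (var client = new WebClient { Credentials = credentials })
            {
                // Only the small text file is transferred at this point.
                string remoteText = client.DownloadString("ftp://example.com/app/version.txt");
                var remoteVersion = new Version(remoteText.Trim());

                if (remoteVersion > currentVersion)
                {
                    // Only now transfer the large exe.
                    client.DownloadFile("ftp://example.com/app/app.exe", @"C:\temp\app.exe");
                }
            }
        }
    }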

No, the FTP protocol does not support this without downloading the exe.
I would recommend the version file.

The FTP protocol doesn't support any version check. Add an HTTP service that can check the version of your file.
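A sketch of that variant, assuming you expose a plain-text version endpoint over HTTP (the URL is a placeholder):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Query a tiny HTTP endpoint that returns the version string, and only
    // start the FTP download when the remote version is newer.
    class HttpVersionCheck
    {
        static async Task<bool> UpdateAvailableAsync(Version current)
        {
            using (var http = new HttpClient())
            {
                string text = await http.GetStringAsync("https://example.com/app/version");
                return new Version(text.Trim()) > current;
            }
        }
    }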


Create the file before downloading and complete it after downloading by double tapping on it like OneDrive in C#

OneDrive has a feature where you can see a file that lives on the OneDrive site in your local file system without actually having its contents on your machine. When you double-click that file, it starts to download and you can see its contents.
I want to implement such a possibility with C#.
I have a site where files are uploaded.
I download the files from there and put them in a folder on my C drive.
But I want that file not to be downloaded until it is double-clicked, something similar to OneDrive.
What should I do?
I compared the FileInfo of these two files, but I didn't see any difference and I couldn't find a solution for this problem.
This is a virtual filesystem implemented using a file system driver.
There are multiple ways to implement this feature using C/C++.
But in your case, using C# means you should use a third-party library to create a virtual file system.
There is a library called Dokan, which lets you implement a full-featured virtual file system, and you have complete control over its behaviour in your C# project.
This is called the "Windows Shell namespace":
https://learn.microsoft.com/en-us/windows/win32/shell/namespace-intro
I used the EZNameSpace Wrapper to handle this.
There is another library called "CBFS Shell" (formerly ShellBoost) that you can use.
You could create a dummy file that appears to be correct but is really just a pointer to some code that downloads the correct file. Then use File.Move or File.Copy to replace the dummy file with the actual file.
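As a rough sketch of that idea (the URL and paths are assumptions, and hooking the double-click itself still needs a shell extension or a stub launcher of your own):

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    // The "dummy" file on disk is replaced by the real content the first time
    // the user asks for it. The URL and paths here are placeholders.
    static class PlaceholderFile
    {
        public static async Task EnsureDownloadedAsync(string placeholderPath, string downloadUrl)
        {
            string tempPath = placeholderPath + ".downloading";

            using (var http = new HttpClient())
            using (var remote = await http.GetStreamAsync(downloadUrl))
            using (var local = File.Create(tempPath))
            {
                await remote.CopyToAsync(local);
            }

            // Swap the dummy file for the real one.
            File.Delete(placeholderPath);
            File.Move(tempPath, placeholderPath);
        }
    }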

Bulk Downloads in Azure Blob Storage

I need to figure out a way to let my users download several PDF files (sometimes thousands) from Azure Blob Storage. I know that I can download the files in parallel, and that would make things quicker, but the issue here is that the user could have thousands of PDF files to download, and that isn't at all reasonable.
Also, I can't download the files to another server, zip them, and let the user download them from there, as that would be incredibly inefficient for me.
Is there a way to create a zip of the files and let the user download that (other than the way above)? I saw other questions on this topic but none gave an answer/solution that suits my needs.
What would be the absolute best way I can do this? Or isn't there another way to perform this task?
Thank you in advance.
Since no one gave an answer, and I keep seeing posts about this on Stack Overflow and other sites, I decided to share my solution here (can't share code, because reasons...).
Firstly, as of today (04-09-2020) there's still no support for bulk download from Azure Blob Storage as a zip (or other format) that goes directly from Azure to the client, without routing the download flow through a server that does the organizing and zipping.
The problem I had...
I needed to download (several) files from Azure Blob Storage, zip them (maybe organize them by folders), and prompt the client to download them in bulk, without any download data passing through the server and without filling the client's downloads folder with scattered files.
During my research I thought about doing everything on the client's side in JavaScript, in memory, and letting the client download the result, but that could be quite memory-expensive since my downloads could be in the GB size range.
The solution...
Then I came across a JavaScript library called StreamSaver. This library writes the files with streams, directly on the client's machine, so the memory cost is much lower.
Luckily this library also lets you organize the files inside the 'download directory' that will be offered to the user, and even lets me zip that directory before prompting the user to download it, meaning that this one library solved almost all my problems.
Now I only have a webmethod, called from JavaScript, that returns all the Azure SAS URLs to download from, and the rest is all done in JavaScript on the client.
TL;DR:
Used the StreamSaver JavaScript library to download, organize and zip all the files on the client side and then prompt the user to download the result, using a webmethod only to get all the URLs which are to be downloaded.
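For illustration only (the original code wasn't shared), the server side might look roughly like this with the Azure.Storage.Blobs v12 SDK; the connection string, container name, blob names and expiry are placeholders:

    using System;
    using System.Collections.Generic;
    using Azure.Storage.Blobs;
    using Azure.Storage.Sas;

    // Generate short-lived, read-only SAS URLs for the requested blobs and hand
    // them to the JavaScript client, which streams and zips them with StreamSaver.
    static class SasUrlProvider
    {
        public static List<Uri> GetDownloadUrls(IEnumerable<string> blobNames)
        {
            var container = new BlobContainerClient("<storage-connection-string>", "documents");

            var urls = new List<Uri>();
            foreach (string name in blobNames)
            {
                BlobClient blob = container.GetBlobClient(name);

                // Read-only SAS valid for one hour.
                var sas = new BlobSasBuilder(BlobSasPermissions.Read,
                                             DateTimeOffset.UtcNow.AddHours(1))
                {
                    BlobContainerName = container.Name,
                    BlobName = blob.Name
                };

                urls.Add(blob.GenerateSasUri(sas));
            }
            return urls;
        }
    }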
This solution works (from what I've tested) in at least these browsers:
Chrome;
FireFox;
Opera;
Edge (Chromium)
Problems I came across using the StreamSaver Library...
There are a few drawbacks/problems with the library:
1st: Safari doesn't support it! More info about this here.
2nd: StreamSaver only allows zipping files smaller than 4 GB; this could be worked around using yet another library for the zipping...

What is the standard way for dealing with PowerPoint (.PPTX) files on the server?

I've been tasked with a feature that can generate PowerPoint files on the server using C#. I'd basically start with a template, and programmatically replace some text with live data from the database. I've been doing some research into this area for the past day and here's what I've found:
PowerPoint has this sort of thing built in, meaning it can connect to external data sources and pull in data. Most examples of this I've found either use PowerPoint automation done on the server (I've been advised against this) or assume a SQL Server backend. Our company uses Oracle for our RDBMS needs. Oracle has a solution for this called Oracle BI, but it requires a whole new web server setup to run various Java EE components and whatnot. I didn't look at the price, but knowing Oracle it's not cheap. It also requires new software to be installed on the end user's machine, which we really want to avoid.
Generating PowerPoint files on the fly is possible. The company that is basically the go-to guys for this problem (every help forum points to them, and they get all the rave reviews) is Aspose. They have .NET components for dealing with just about any Office format you can think of. The problem is, they are astronomically expensive. Just the PowerPoint component (a site license for up to 10 developers) would cost $3,995.
The third possibility is generating a solution in-house. After all, a PPTX file is just XML, right? Well, looking closer, a PPTX is actually a zip archive. It contains many folders, each containing many XML files. Modifying a PPTX file would, correct me if I'm wrong, entail unzipping the file to a temporary directory, reading the XML files and modifying their contents, then packaging everything up again and writing the file out to the response stream. Perhaps there are libraries that can work with zip streams on the fly without extracting everything.
My Question: Are there easier ways to work with a PPTX file using .NET that don't require working with compressed XML files or buying very expensive software? Basically, we need to modify a PowerPoint file, change some text, and allow the user to download that generated file from a web server.
OpenXML is Microsoft's .Net library that lets you manipulate Office documents. It lets you open a PPTX file and provides an object model that wraps the XML contents.
Here's the link to the OpenXML SDK and the MSDN documentation.
I've used OpenXML to let an ASP.NET page dynamically generate Word documents from a database.
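For PowerPoint, a minimal sketch with the OpenXML SDK (DocumentFormat.OpenXml package) might look like this; the template path and the placeholder token are illustrative only:

    using DocumentFormat.OpenXml.Packaging;
    using A = DocumentFormat.OpenXml.Drawing;

    // Replace a placeholder token in every slide of a template presentation.
    static class PptxTemplating
    {
        public static void ReplaceText(string pptxPath, string token, string value)
        {
            using (PresentationDocument doc = PresentationDocument.Open(pptxPath, isEditable: true))
            {
                foreach (SlidePart slide in doc.PresentationPart.SlideParts)
                {
                    foreach (A.Text text in slide.Slide.Descendants<A.Text>())
                    {
                        if (text.Text.Contains(token))
                            text.Text = text.Text.Replace(token, value);
                    }
                }
                // Changes are saved when the document is disposed.
            }
        }
    }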
Don't use Office Interop on a web server. It's an all-around bad idea.
If you are only replacing text placeholders in files that will not change, the home-grown solution that finds the placeholders in the XML files inside the zip archive should be doable. .NET has had zip support for some time, and it is greatly improved if you are able to use .NET 4.5, so you shouldn't need to extract the archive to a temporary location at all.
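A rough sketch of that home-grown approach with System.IO.Compression (.NET 4.5+); the slide entry name and the token are assumptions, and note that a plain string replace won't find text PowerPoint has split across multiple runs:

    using System.IO;
    using System.IO.Compression;

    // Open the .pptx as a zip archive, rewrite a slide's XML in memory, and
    // save it back without extracting anything to disk.
    static class PptxRawEdit
    {
        public static void ReplaceInSlide(string pptxPath, string token, string value)
        {
            using (ZipArchive archive = ZipFile.Open(pptxPath, ZipArchiveMode.Update))
            {
                ZipArchiveEntry entry = archive.GetEntry("ppt/slides/slide1.xml");

                string xml;
                using (var reader = new StreamReader(entry.Open()))
                    xml = reader.ReadToEnd();

                xml = xml.Replace(token, value);

                // Truncate and rewrite the entry with the modified XML.
                using (var stream = entry.Open())
                {
                    stream.SetLength(0);
                    using (var writer = new StreamWriter(stream))
                        writer.Write(xml);
                }
            }
        }
    }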
PowerPoint should also support connecting directly to Oracle in the same way it supports connecting to SQL Server (just play around with the connection options), without needing the special Oracle BI stuff. However, I'd still prefer the home-grown solution, as the data connection will only work while the PowerPoint file is able to reach your database directly, which is typically only possible inside your LAN or with an active VPN.
If you want anything fancier than simple text replacement, perhaps look for an Aspose competitor.

Alternatives to ZIP for combining many files into one on Windows using .NET

I'm looking for a way to combine files, including their names and relative paths, into one single file: a folder disguised as a file. I don't need any compression or encryption, just the file data plus some binary metadata attached to each file.
It would be great if this file was possible to open/inspect/unpack with a standard file browser in Windows such as with regular zip-files.
Yes I could use zip. But I'm researching alternatives and I would prefer a simple method I could implement myself in C#/.NET.
UPDATE
I've researched this some more and came across Microsoft's Structured Storage format. It looked promising at first, but it seems to be an obsolete format, replaced by the Open Package Format. Then I found out about the TAR format. It seems to be the most basic format, but I'm not sure yet whether I can add any custom metadata to TAR entries.
UPDATE
I went with DotNetZip at the end anyway...
Why not use zip? You can use a third-party library, like DotNetZip, to make the code easy to write. And, as you mentioned, Windows handles zip files well.
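For example, a small sketch of the DotNetZip route (paths, entry names and the manifest format are just placeholders):

    using Ionic.Zip;  // DotNetZip

    // Pack a folder while keeping relative paths, and stash custom metadata
    // in a manifest entry of your own (per-entry Comment fields also work).
    static class Packer
    {
        public static void Pack(string sourceFolder, string outputFile)
        {
            using (var zip = new ZipFile())
            {
                // Preserves each file's path relative to sourceFolder.
                zip.AddDirectory(sourceFolder, "");

                // A simple place for custom text/binary metadata.
                zip.AddEntry("_metadata/manifest.txt",
                             "created=" + System.DateTime.UtcNow.ToString("o"));

                zip.Save(outputFile);
            }
        }
    }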
If you have a specific reason to look for an alternative to ZIP, take a look at virtual file systems, e.g. CodeBase File System or our Solid File System. Solid File System lets you add alternate data streams (like in NTFS) or tags (small chunks of binary or text data) to each file or directory. And with the OS edition of SolFS you can make the file system visible to Windows (including Explorer and third-party applications).
I must admit that while virtual file systems are easy to use (easier than ZIP), they are commercial products (I haven't seen any free virtual file system implementations yet).

Best way to store multiple revisions of a text file to a single file

I'm working on a C# application that needs to store all the successive revisions of a given report file to a single project file: each time the (plain text) report file changes, the contents of the new version shall be appended to the project file, along with some metadata. Other requirements:
each version of the report file is 100 kB to 1 MB. Theoretically, the maximum number of revisions is unlimited, but in practice it should be less than 1000.
to keep things simple, I'd like to avoid computing differences between the revisions of the report - just store the whole report to the project file every time it has changed.
the project file should be compressed - it doesn't need to be a text file
it should be easy to retrieve a given version of the report from the application
How can I implement this in an efficient way? Should I create a custom binary file, consider using a database, other ideas?
Many thanks, Guy.
What's wrong with the simple workflow?
Un-gzip file
Append header and new report
Gzip project file
Gzip is a standard format, so it's easily accessible. Subsequent reports probably won't change that much, so you'll get a great compression ratio. To retrieve a given report, just open the file and scan the headers. (If scanning doesn't work well enough, also mirror the metadata in an SQLite database, and make sure to include offsets into the project file so you can seek to the right place quickly.)
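A minimal sketch of that workflow, assuming a simple one-line header of my own invention (timestamp plus length):

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    // Un-gzip the project file, append a header plus the new report, re-gzip.
    static class ProjectFile
    {
        public static void AppendReport(string projectPath, string reportText)
        {
            // 1. Un-gzip the existing project file (empty if it doesn't exist yet).
            string existing = "";
            if (File.Exists(projectPath))
            {
                using (var fs = File.OpenRead(projectPath))
                using (var gz = new GZipStream(fs, CompressionMode.Decompress))
                using (var reader = new StreamReader(gz))
                    existing = reader.ReadToEnd();
            }

            // 2. Append a header and the new report.
            string header = string.Format("=== {0:o} length={1} ==={2}",
                                          DateTime.UtcNow, reportText.Length, Environment.NewLine);
            string updated = existing + header + reportText + Environment.NewLine;

            // 3. Gzip the whole project file again.
            using (var fs = File.Create(projectPath))
            using (var gz = new GZipStream(fs, CompressionMode.Compress))
            using (var writer = new StreamWriter(gz, Encoding.UTF8))
                writer.Write(updated);
        }
    }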
If your requirements are flexible (e.g. that "shall append" part) and you just want something to keep track of past versions of the file, a revision control system will do all of what you need quite easily.
No need to implement that yourself. I would suggest you use source control. Personally I use Subversion with the TortoiseSVN client. There is also a plug-in that integrates Subversion with Visual Studio, VisualSVN. Have a look at them.
If using SVN is not an option, you can just store each revision in an individual file (with a filename that represents the date, for example). You can use separate files for the metadata as well. Then all the aforementioned files are zipped into one file (look at http://DotNetZip.codeplex.com/ for example).
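For illustration, a sketch of that idea using the built-in System.IO.Compression instead of DotNetZip (the entry naming scheme is just an assumption; DotNetZip would look much the same):

    using System;
    using System.IO;
    using System.IO.Compression;

    // One timestamped entry per revision, with a sibling entry for its metadata.
    static class RevisionStore
    {
        public static void AddRevision(string projectZip, string reportPath, string metadata)
        {
            using (ZipArchive archive = ZipFile.Open(projectZip, ZipArchiveMode.Update))
            {
                string stamp = DateTime.UtcNow.ToString("yyyyMMdd-HHmmss");

                // The report itself, compressed.
                archive.CreateEntryFromFile(reportPath, "revisions/" + stamp + ".txt");

                // Its metadata as a small side entry.
                ZipArchiveEntry meta = archive.CreateEntry("revisions/" + stamp + ".meta");
                using (var writer = new StreamWriter(meta.Open()))
                    writer.Write(metadata);
            }
        }
    }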
I don't think there is much point building this yourself when there are already tens, if not hundreds, of systems that are basically designed to do exactly this - source control systems.
I'd recommend choosing some source control solution that has bindings to C# and storing your document in there. Then you can easily check out any revision of the document. You will also be able to diff, branch, etc. if necessary.
To give just one example to get you started you can use Subversion with C# bindings.
You could use alternate data streams to store the old revisions of your file. There is no built-in support in the .NET Framework, but there are some helper classes and articles, like here and here.
I have never used this myself, so I can't really tell if this is a good option. But it seems it would make an elegant solution, since you could store each file version in a separate data stream and only the current version in the "main file". In any case, it will probably only work on NTFS drives.
I think the SVN (or another source control system) suggestion already given is a very good idea, because source control has exactly the features you require. But if that's not an option, you could use a file database like SQL Server Compact Edition or SQLite.
