Transferring files with metadata - C#

I am writing a client Windows app which will allow files and their respective metadata to be uploaded to a server, for example gear.stl (original file) and gear.stl.xml (metadata). I am trying to figure out the correct protocol to use to transfer the files.
I was thinking about using FTP, since it is a widely used and proven method of transferring files, except that I would have to transfer two files for every actual file (.stl and .stl.xml). However, another thought has also crossed my mind: what if I create an object that wraps the file, the metadata, and the directory I need to transfer it to, serialize the object, and then submit a request to a web service to transfer the file?
The original file size would range from 100 KB to 10 MB; the metadata would probably be less than 200 KB.
The web service call seems like the easier approach to me: deserialize the object on the server and distribute the file and its respective metadata accordingly. However, I'm not sure whether this is a sound idea or whether there is a better way to transfer this data than the two methods I have mentioned.
If someone can point me in the right direction it would be much appreciated.
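For concreteness, a minimal sketch of the wrapper object described above (all type and member names are illustrative, not an existing API):

```csharp
using System;

// Hypothetical wrapper for one file plus its metadata, to be serialized
// and posted to a web service in a single call.
[Serializable]
public class FileTransferPackage
{
    public string TargetDirectory;  // where the server should place the files
    public string FileName;         // e.g. "gear.stl"
    public byte[] FileContents;     // 100 KB to 10 MB per the question
    public string MetadataXml;      // contents of "gear.stl.xml", under 200 KB
}

// An ASMX-style service method accepting it might look like:
// [WebMethod]
// public void Upload(FileTransferPackage package) { ... }
```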

You could wrap it in a zip file, like the "new" Office document format does. You might even be able to use their classes to package it all up.
Edit:
Take a look at the System.IO.Packaging.Package class. It seems to be what you need. This class resides in the WindowsBase.dll assembly and became available in .NET 3.0.
PS: Remember that even though it is a zip file, it doesn't need to be compressed. If you have very large files, it may be better to keep them uncompressed. It all depends on how they're going to be used and whether transport size is an issue.
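For illustration, a minimal sketch of packaging the two files with that class (file names come from the question; the package name, content types, and leaving the large part uncompressed are my assumptions):

```csharp
using System;
using System.IO;
using System.IO.Packaging; // WindowsBase.dll, .NET 3.0+

class PackageBuilder
{
    static void Main()
    {
        using (Package package = Package.Open("gear.pkg", FileMode.Create))
        {
            // Keep the large binary uncompressed, per the note above.
            AddPart(package, "gear.stl", "/gear.stl",
                    "application/octet-stream", CompressionOption.NotCompressed);
            AddPart(package, "gear.stl.xml", "/gear.stl.xml",
                    "text/xml", CompressionOption.Normal);
        }
    }

    static void AddPart(Package package, string sourceFile, string partName,
                        string contentType, CompressionOption compression)
    {
        Uri uri = PackUriHelper.CreatePartUri(new Uri(partName, UriKind.Relative));
        PackagePart part = package.CreatePart(uri, contentType, compression);
        using (FileStream source = File.OpenRead(sourceFile))
        using (Stream target = part.GetStream())
        {
            // Manual copy loop keeps this compatible with .NET 3.0
            // (Stream.CopyTo only arrived in .NET 4.0).
            byte[] buffer = new byte[81920];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                target.Write(buffer, 0, read);
        }
    }
}
```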

Related

C#: a proper and simple way to bundle multiple small files and change/delete/create them in the bundle, without recreating the entire bundle each time

I am trying to figure out how to store data that can be easily/heavily edited.
Reading data from a big single file isn't really a problem. The problem starts when I need to make changes to that file.
Let's say I have a big log file to which a string is always appended. The file system needs to recreate the whole file since it has changed, and the bigger the file, the heavier the performance cost.
What I could do is simply create a new file for each log entry. Creating, removing, and editing would then be more efficient, until I want to copy all these files, say onto a new SSD.
Reading directories and copying thousands of files, even small ones, hits performance hard.
So maybe bundle all files into a single file/archive?
But then, AFAIK, an archive like .zip also needs to be recreated when something changes.
Is there a good or maybe even simple solution to this?
How does a single-file database like SQLite handle this?
Note: I am using C#.
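On the SQLite part of the question: a single-file database updates pages in place inside the file rather than rewriting the whole container, which is exactly what avoids the recreate-the-archive problem. A rough sketch of using SQLite itself as the bundle (assuming the Microsoft.Data.Sqlite package; table and file names are illustrative):

```csharp
using System.IO;
using Microsoft.Data.Sqlite; // assumed NuGet package

class SqliteBundle
{
    // Add or replace one small file in the bundle; the other entries are
    // untouched, so there is no full rewrite of the container file.
    static void Put(string bundlePath, string name, byte[] data)
    {
        using (var conn = new SqliteConnection("Data Source=" + bundlePath))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "CREATE TABLE IF NOT EXISTS files (name TEXT PRIMARY KEY, data BLOB);" +
                    "INSERT OR REPLACE INTO files (name, data) VALUES ($name, $data);";
                cmd.Parameters.AddWithValue("$name", name);
                cmd.Parameters.AddWithValue("$data", data);
                cmd.ExecuteNonQuery();
            }
        }
    }

    static void Main()
    {
        Put("bundle.db", "log-0001.txt", File.ReadAllBytes("log-0001.txt"));
    }
}
```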

Loading a pcap file

I am trying to write a TCP reconstruction program in C#, using SharpPcap. So far I am doing a pretty good job, and the reconstruction is working fine. My only problem is that, in order to reconstruct big pcap files, I need to load them into memory in parts/chunks, because SharpPcap only lets me load the whole file (I think). Any suggestions?
Thanks
The pcap file format is really simple; see here: http://wiki.wireshark.org/FileFormatReference/libpcap
Why not load the file yourself, possibly a packet at a time? Then you can do what you want as you go along, rather than having a library dictate your memory usage patterns.
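A rough sketch of that approach, reading one record at a time (this assumes the classic little-endian libpcap magic and skips the nanosecond variant; the file name is illustrative):

```csharp
using System;
using System.IO;

class PcapChunkReader
{
    static void Main()
    {
        using (var reader = new BinaryReader(File.OpenRead("capture.pcap")))
        {
            if (reader.ReadUInt32() != 0xA1B2C3D4)
                throw new InvalidDataException("Unsupported byte order or variant.");
            reader.BaseStream.Seek(24, SeekOrigin.Begin); // skip 24-byte global header

            while (reader.BaseStream.Position + 16 <= reader.BaseStream.Length)
            {
                uint tsSec   = reader.ReadUInt32(); // timestamp, seconds
                uint tsUsec  = reader.ReadUInt32(); // timestamp, microseconds
                uint inclLen = reader.ReadUInt32(); // bytes stored in the file
                uint origLen = reader.ReadUInt32(); // original length on the wire

                byte[] packet = reader.ReadBytes((int)inclLen);
                // Hand 'packet' to the reconstruction logic here, then let it
                // go out of scope so memory use stays bounded per packet.
            }
        }
    }
}
```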
Given that SharpPcap's PcapDevice class has a GetNextPacket method, which both LibPcapLiveDevice and CaptureFileReaderDevice inherit, I don't see anything that would require you to load the whole file. You might have to read the entire file, but you can just ignore the packets you don't want.
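For completeness, a sketch of that per-packet loop (SharpPcap's API has shifted between versions; this assumes the variant where GetNextPacket() returns a RawCapture, and the file name is illustrative):

```csharp
using SharpPcap;
using SharpPcap.LibPcap;

class PcapReplay
{
    static void Main()
    {
        var device = new CaptureFileReaderDevice("capture.pcap");
        device.Open();

        RawCapture raw;
        while ((raw = device.GetNextPacket()) != null)
        {
            // Only the current packet is in memory; ignore what you don't
            // need and keep only the TCP segments of interest.
            byte[] data = raw.Data;
        }

        device.Close();
    }
}
```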

Best way to store multiple revisions of a text file to a single file

I'm working on a C# application that needs to store all the successive revisions of a given report file in a single project file: each time the (plain text) report file changes, the contents of the new version shall be appended to the project file, along with some metadata. Other requirements:
each version of the report file is 100 kB to 1 MB. Theoretically the number of revisions is unlimited, but in practice it should be fewer than 1000.
to keep things simple, I'd like to avoid computing differences between revisions of the report: just store the whole report in the project file every time it changes.
the project file should be compressed - it doesn't need to be a text file
it should be easy to retrieve a given version of the report from the application
How can I implement this efficiently? Should I create a custom binary file format, consider using a database, or something else?
Many thanks, Guy.
What's wrong with the simple workflow?
Un-gzip file
Append header and new report
Gzip project file
Gzip is a standard format, so it's easily accessible. Subsequent reports probably won't change much, so you'll get a great compression ratio. To find a given report, just open the file and scan the headers. (If scanning doesn't work out, also mirror the metadata in an SQLite database, and make sure to include offsets into the project file so you can seek to the right place quickly.)
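A sketch of that un-gzip / append / re-gzip workflow (the header layout and file names are made up for illustration):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class ProjectFile
{
    // Decompress the project file, append a header plus the new report,
    // and compress everything back.
    static void AppendRevision(string projectPath, string reportPath)
    {
        string existing = "";
        if (File.Exists(projectPath))
        {
            using (var gz = new GZipStream(File.OpenRead(projectPath),
                                           CompressionMode.Decompress))
            using (var reader = new StreamReader(gz, Encoding.UTF8))
                existing = reader.ReadToEnd();
        }

        string report = File.ReadAllText(reportPath);
        string header = string.Format("--- revision {0:o} length {1} ---{2}",
                                      DateTime.UtcNow, report.Length,
                                      Environment.NewLine);

        using (var gz = new GZipStream(File.Create(projectPath),
                                       CompressionMode.Compress))
        using (var writer = new StreamWriter(gz, Encoding.UTF8))
        {
            writer.Write(existing);
            writer.Write(header);
            writer.Write(report);
        }
    }
}
```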
If your requirements are flexible (e.g. the "shall be appended" part) and you just want something to keep track of past versions of the file, a revision control system will do all of what you need quite easily.
There is no need to implement that yourself; I would suggest using source control. Personally I use Subversion with the TortoiseSVN client. There is also a plug-in that integrates Subversion with Visual Studio, VisualSVN. Have a look at them.
If using SVN is not an option, you can just store each revision in an individual file (with a filename that encodes the date, for example). You can use separate files for the metadata as well. Then all the aforementioned files are zipped into one file (look at http://DotNetZip.codeplex.com/ for example).
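A sketch of that zip-per-revision idea with DotNetZip (entry names and the timestamp scheme are illustrative):

```csharp
using System;
using System.IO;
using Ionic.Zip; // DotNetZip, from the link above

class RevisionArchive
{
    static void AddRevision(string archivePath, string reportPath, string metadataXml)
    {
        string stamp = DateTime.UtcNow.ToString("yyyyMMdd-HHmmss");
        // Open the existing archive if there is one, else start a new one.
        using (ZipFile zip = File.Exists(archivePath)
                                 ? ZipFile.Read(archivePath)
                                 : new ZipFile())
        {
            zip.AddFile(reportPath).FileName = "report-" + stamp + ".txt";
            zip.AddEntry("report-" + stamp + ".meta.xml", metadataXml);
            zip.Save(archivePath);
        }
    }
}
```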
I don't think there is much point building this yourself when there are already tens, if not hundreds, of systems that are basically designed to do exactly this - source control systems.
I'd recommend choosing some source control solution that has bindings to C# and store your document in there. Then you can easily check out any revision of the document. You will also be able to diff, branch, etc. if necessary.
To give just one example to get you started, you could use Subversion with its C# bindings.
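If you go that route, here is a sketch with SharpSvn, one of the available C# bindings (the paths, log message, and the assumption that the report already lives in a working copy are all illustrative):

```csharp
using System.IO;
using SharpSvn; // SharpSvn NuGet package

class ReportVersioning
{
    // Commit the current state of the working copy as a new revision.
    static void CommitNewRevision(string workingCopyPath)
    {
        using (var client = new SvnClient())
        {
            client.Commit(workingCopyPath,
                          new SvnCommitArgs { LogMessage = "New report version" });
        }
    }

    // Fetch the report as it was at a given revision number.
    static void GetRevision(string filePath, long revision, string outputPath)
    {
        using (var client = new SvnClient())
        using (var output = File.Create(outputPath))
        {
            client.Write(SvnTarget.FromString(filePath), output,
                         new SvnWriteArgs { Revision = new SvnRevision(revision) });
        }
    }
}
```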
You could use alternate data streams to store the old revisions of your file. There is no built-in support in the .NET Framework, but there exist some helper classes and articles, like here and here.
I have never used this myself, so I can't really tell whether it is a good option. But it seems it would make for an elegant solution, since you could store each file version in a separate data stream and only the current version in the "main file". In any case, it will probably only work on NTFS drives.
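A sketch of the idea (NTFS only; note that the .NET Framework rejects ':' in paths, so this direct form assumes .NET Core / modern .NET, while on .NET Framework you would need the P/Invoke helpers from articles like those mentioned):

```csharp
using System;
using System.IO;

class AdsRevisions
{
    static void Main()
    {
        const string file = "report.txt"; // illustrative name

        // The current version lives in the main (unnamed) stream. Beware
        // that recreating the main file can drop its alternate streams.
        File.WriteAllText(file, "current version");

        // An old revision goes into a named stream, "report.txt:v1".
        File.WriteAllText(file + ":v1", "old version");

        // Read the old revision back from its stream.
        Console.WriteLine(File.ReadAllText(file + ":v1"));
    }
}
```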
I think the already-suggested SVN (or another source control system) is a very good idea, because source control seems to have exactly the features you require. But if that's not an option, you could use a file database like SQL Server Compact Edition or SQLite.

How to efficiently send large files from the database to the browser?

In my web application I am working with files. Some files are very large, and I use Response.Write() to write the file to the browser. This works well for smaller files, but for large files it can take a while and the bandwidth is fully used.
Is it possible to split large documents and send them piece by piece to the browser? Are there other ways to send the document to the browser more quickly?
I hold the document as a property of an object.
Why don't you compress the file and store it in the DB, then decompress it while extracting it?
You can do a lot of things depending on the answers to these questions:
How often does the file change?
Do I really need the files in the DB?
Why not store the file path in the DB and the file itself on disk?
Anyhow, since your files are large and consume a lot of bandwidth, and you want your app to remain responsive, you might want to use AJAX to load the files asynchronously. You can have a web handler (.ashx) for this, as in the skeleton after the links below.
Here are a few examples:
http://www.dotnetcurry.com/ShowArticle.aspx?ID=193&AspxAutoDetectCookieSupport=1
http://www.viawindowslive.com/Articles/VirtualEarth/InvokingserversidecodeusingAJAX.aspx
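A skeleton of such a handler (the handler and file names are placeholders; TransmitFile streams from disk rather than buffering the whole file in memory, which fits the store-the-path-in-the-DB option above):

```csharp
using System.Web;

// FileHandler.ashx -- placeholder name
public class FileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
                                   "attachment; filename=document.mht");
        // Streams the file from disk without loading it fully into memory.
        context.Response.TransmitFile(context.Server.MapPath("~/files/document.mht"));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```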
"My question is, is it possible to split large documents and send it piece by piece to the browser?"
It depends on the file type, but in general no. If you are sending something like an Excel file or a Word doc, the receiving application will need all of the information (bytes) to fully form the document. You could physically separate the document into multiple ones, and that would allow you to do so.
If the bandwidth is fully used, then there is nothing you can do to "speed it up" short of compressing the document prior to sending. In other words, zip it up.
Depending on the document (I know you said .mht, but we're talking about content here) you will see the size go down by some amount. Maybe it's enough, maybe not.
Either way, this is entirely a function of the amount of content you want to send versus the size of the pipe available to send it. One of those is more difficult to change than the other.
Try setting IIS's dynamic compression. By default, it's set fairly low, but you can try setting it for a higher compression level and see how much that helps.
I'm not up to speed with ASP.NET but you might be able to buffer from a FileStream to some sort of output stream.
You can use the Flush method to send the currently buffered data to the client (the browser).
Note that this has some implications, as is described aptly here.
I've considered using it myself; a project of mine sent documents that became fairly large, and I was cautious about holding the whole thing in memory. In the end I decided the data was not large enough to be a problem, though.
Sadly the MSDN documentation is very vague on what Flush implies, and you will probably have to use Google to troubleshoot.
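A sketch of that chunked approach (the 64 KB buffer size and the stream source are assumptions; the point is turning off buffering and flushing each chunk):

```csharp
using System.IO;
using System.Web;

static class DocumentStreamer
{
    // Write the document to the response in chunks, flushing as we go,
    // so the full byte[] never sits in server memory at once.
    public static void StreamToBrowser(HttpResponse response, Stream document)
    {
        response.BufferOutput = false; // don't accumulate the whole response
        response.ContentType = "application/octet-stream";

        byte[] buffer = new byte[64 * 1024];
        int read;
        while ((read = document.Read(buffer, 0, buffer.Length)) > 0)
        {
            if (!response.IsClientConnected)
                break; // stop early if the user cancelled the download
            response.OutputStream.Write(buffer, 0, read);
            response.Flush(); // push this chunk to the client now
        }
    }
}
```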

Creating File On Full Disk

Is it possible to create a file on a disk which is full?
Does creation of the file take any space?
Basically, I am seeing a case where C# has created a file but failed to write anything to it, which I think points to a full disk.
Does anyone know whether creating a file on a full disk will fail or not?
This was done using C# on Windows Server; the log file was also written to the same drive.
Creating (empty) files should still be possible in most cases. The MFT is a separate part of the volume which won't get used for file data.
It should even be possible to store small amounts of data without needing more than the file entry in the MFT. NTFS can store streams as "resident data" in the stream descriptor which doesn't need any additional space, but only works for very small files.
I think your issue is a different problem, though. It may be that you have permission to create a file but not to write anything to it. You might want to check the ACLs of the location where you're trying to write.
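A sketch that separates the two failure modes when diagnosing this (the path is illustrative):

```csharp
using System;
using System.IO;

class FullDiskProbe
{
    static void Main()
    {
        const string path = @"D:\logs\app.log"; // illustrative path
        try
        {
            var drive = new DriveInfo(Path.GetPathRoot(path));
            Console.WriteLine("Free space: {0} bytes", drive.AvailableFreeSpace);

            // Creating the file may succeed even on a full volume, since the
            // MFT entry lives outside normal data space (see above)...
            using (FileStream stream = File.Create(path))
            {
                // ...but writing the data can still fail once real clusters
                // are needed.
                byte[] data = new byte[64 * 1024];
                stream.Write(data, 0, data.Length);
            }
        }
        catch (UnauthorizedAccessException ex)
        {
            Console.WriteLine("Permission problem - check the ACLs: " + ex.Message);
        }
        catch (IOException ex)
        {
            Console.WriteLine("I/O failure, possibly disk full: " + ex.Message);
        }
    }
}
```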
