I'm developing an application in C# that creates an ISO image from a CD/DVD and then lets the user delete files contained in the ISO file, but so far I haven't found a way to do it.
Please share any ideas you have.
Thanks in advance.
You should just change the order in which your program operates. Read in the file hierarchy first, then allow the user to select which files to delete, and then write the remaining files out as an ISO. You should be able to keep the files and directories in a tree data structure; deleting a folder or file would just delete the corresponding node or leaf.
As to the question of directly deleting a file or directory in an ISO image, the same rules apply, as the ISO 9660 (ECMA-119) format is essentially a serialized tree structure. Simply zero out the corresponding records for the subtrees and leaves you want to delete. Note, however, that this approach leaves garbage space in the image, and that to actually make the image smaller you would need to compact it (re-serialize the hierarchy out to a new file).
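For illustration, a minimal sketch of the tree approach; the class and member names below are placeholders, not part of any existing ISO library:

```csharp
using System.Collections.Generic;

// One node per file or directory read from the source disc.
public class IsoNode
{
    public string Name { get; set; }
    public bool IsDirectory { get; set; }
    public List<IsoNode> Children { get; } = new List<IsoNode>();

    // Deleting a file or folder is just dropping the corresponding subtree;
    // nothing is touched on disk until the tree is serialized back out.
    public bool Remove(string childName)
    {
        return Children.RemoveAll(c => c.Name == childName) > 0;
    }
}
```

Once the user has finished pruning, walk the remaining tree and write it out as a fresh ISO 9660 image.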
My goal is to efficiently and simply save tags belonging to image files in a directory. The tags should be stored inside a file that is created.
Let's say the directory contains a file 'duck.jpg'. Then, for example, I would like to assign the tags 'animal' and 'bird' to this file. The tags are assigned with an image slider where you tick checkboxes, and the tags should then be associated with the files.
My question is: what data structure / file format would be optimal for this problem? I thought about JSON, XML, etc.
The resulting file should not be too big, as tags for many images will be stored, and it must load quickly and be extendable (e.g. adding a new file to the structure should be possible).
What approach would be best suited for the problem?
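Since you mention JSON: one straightforward option is a dictionary keyed by file name, serialized with System.Text.Json (Json.NET would work just as well). The class below is only an illustrative sketch:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public class TagStore
{
    // Maps a file name (e.g. "duck.jpg") to its tags (e.g. "animal", "bird").
    private Dictionary<string, List<string>> _tags = new Dictionary<string, List<string>>();

    public void AddTag(string fileName, string tag)
    {
        if (!_tags.TryGetValue(fileName, out var list))
            _tags[fileName] = list = new List<string>();
        if (!list.Contains(tag))
            list.Add(tag);
    }

    public void Save(string path) =>
        File.WriteAllText(path, JsonSerializer.Serialize(_tags));

    public static TagStore Load(string path)
    {
        var store = new TagStore();
        store._tags = JsonSerializer.Deserialize<Dictionary<string, List<string>>>(
            File.ReadAllText(path)) ?? store._tags;
        return store;
    }
}
```

A file like this stays small (one short entry per image), loads in a single call, and adding a newly discovered image is just adding a key.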
I have a task to programmatically scan a folder for georeferenced images. There might be a lot of images, some quite large, and some not georeferenced. The spatial information can also be either embedded or in a world file.
How can I tell programmatically (C#/WPF/ESRI Runtime) if "C:\someFolder\file.x" is georeferenced?
Thanks
First check the file type to see if it's a format that supports built-in georeferencing (GeoTIFF, JP2, and MrSID). Other static image files would need some sort of companion file with the georeferencing information, so for each image file you'd want to look for a matching companion file.
If you add some info on what formats the images/world files are in, it'll be easier to show you some sample code. In the meantime, a rough sketch of the check is below.
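This is a hedged sketch of that two-step check, assuming the usual world-file naming convention (first and last letters of the extension plus 'w', e.g. .tif -> .tfw, .jpg -> .jgw, or simply the full extension plus 'w'):

```csharp
using System.IO;
using System.Linq;

public static class GeoCheck
{
    // Formats that can carry georeferencing internally (GeoTIFF, JPEG 2000, MrSID).
    private static readonly string[] EmbeddedFormats = { ".tif", ".tiff", ".jp2", ".sid" };

    public static bool MightBeGeoreferenced(string path)
    {
        string ext = Path.GetExtension(path).ToLowerInvariant();

        // Note: a .tif is not necessarily a GeoTIFF; this only says it *could* be.
        if (EmbeddedFormats.Contains(ext))
            return true;

        string bare = ext.TrimStart('.');
        if (bare.Length < 2)
            return false;

        // World file next to the image: image.jpg -> image.jgw, or image.jpg -> image.jpgw.
        string shortWorld = Path.ChangeExtension(path, bare[0].ToString() + bare[bare.Length - 1] + "w");
        string longWorld = path + "w";
        return File.Exists(shortWorld) || File.Exists(longWorld);
    }
}
```

An extension check only tells you the file could be georeferenced; for a definitive answer on, say, a TIFF you would still open it and look for the geo tags (for example through the ESRI runtime or a TIFF reader).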
As we all know, we cannot get the full path of a file using the FileUpload control, so we follow the process of saving the file into our application by creating a folder and getting that folder's path with
Server.MapPath
But I have a scenario where I need to handle 1200 Excel files, though not all at once. I select each Excel file, read the required content from it, and save the information to the database. While doing this I save the files into the application folder by creating a folder named Excel, so after every run all 1200 files end up saved in this folder.
I don't know whether this is the correct method to follow.
I am looking for an alternative to saving the file to a folder. I would like to keep the full path of the file only temporarily, until the process has finished executing.
So can anyone tell me the best way to meet this requirement?
Grrbrr404 is correct. You can perfectly well take the byte[] from FileUpload.PostedFile and save it to the database directly, without using an intermediate folder. You could store the file name and extension in a separate column so you know how to stream it back later, in case you need to.
The debate over whether it's good or bad to store these things in the database itself versus on the filesystem is very heated. I don't think either approach is better than the other; you'll have to look at your resources and your particular situation and make the appropriate decision. Search for "Store images on database or filesystem" here or on Google and you'll see what I mean.
See this one, for example.
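For the direct-to-database route described above, a rough sketch; the Files table, its columns, and the connection string are assumptions for illustration only:

```csharp
using System.Data.SqlClient;
using System.IO;
using System.Web.UI.WebControls;

public static class UploadHelper
{
    // Reads the posted file's bytes and inserts them into the database,
    // skipping the intermediate "Excel" folder entirely.
    public static void SaveUploadToDatabase(FileUpload upload, string connectionString)
    {
        if (!upload.HasFile) return;

        byte[] content = upload.FileBytes;                    // no temp file involved
        string fileName = Path.GetFileName(upload.FileName);  // keep name + extension

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Files (FileName, Content) VALUES (@name, @content)", conn))
        {
            cmd.Parameters.AddWithValue("@name", fileName);
            cmd.Parameters.AddWithValue("@content", content);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

If you only need the data extracted from each Excel file and not the file itself, you can skip the insert altogether and parse upload.FileContent (a Stream) in memory.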
Is it possible to read the contents of a .ZIP file without fully downloading it?
I'm building a crawler and I'd rather not have to download every zip file just to index their contents.
Thanks!
The tricky part is identifying the start of the central directory, which occurs at the end of the file. Since each entry is the same fixed size, you can do a kind of binary search starting from the end of the file. The binary search is trying to guess how many entries are in the central directory. Start with some reasonable value, N, and retrieve the portion of the file at end - (N * sizeof(DirectoryEntry)). If that file position does not start with the central directory entry signature, then N is too large: halve it and repeat. Otherwise, N is too small: double it and repeat. Like binary search, the process maintains the current upper and lower bounds; when the two become equal, you've found N, the number of entries.
The number of times you hit the webserver is at most 16, since there can be no more than 64K entries.
Whether this is more efficient than downloading the whole file depends on the file size. You could request the size of the resource before downloading, and if it's smaller than a given threshold, download the entire resource. For large resources, requesting multiple offsets will be quicker, and overall less taxing on the webserver, if the threshold is set high.
HTTP/1.1 allows ranges of a resource to be downloaded. For HTTP/1.0 you have no choice but to download the whole file.
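A rough sketch of the range-request side of this in C#, assuming the server honors Range headers (i.e. replies with 206 Partial Content):

```csharp
using System.IO;
using System.Net;

public static class RangeFetch
{
    // Downloads only the last tailLength bytes of the resource at url.
    public static byte[] FetchTail(string url, int tailLength)
    {
        // Ask for the total size first without downloading the body.
        var head = (HttpWebRequest)WebRequest.Create(url);
        head.Method = "HEAD";
        long totalLength;
        using (var headResponse = (HttpWebResponse)head.GetResponse())
            totalLength = headResponse.ContentLength;

        // Then request only the byte range we care about.
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.AddRange(totalLength - tailLength, totalLength - 1);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var buffer = new MemoryStream())
        {
            stream.CopyTo(buffer);
            return buffer.ToArray();
        }
    }
}
```

From the fetched tail you can locate the central directory and index the entry names without ever downloading the compressed data itself.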
The format suggests that the key piece of information about what's in the file resides at the end of it. Entries are then specified as an offset from that particular entry, so you'll need access to the whole thing, I believe.
GZip formats can be read as a stream, I believe.
I don't know if this helps, as I'm not a programmer, but in Outlook you can preview zip files and see the actual content, not just the file directory (if they are previewable documents, like a PDF).
There is a solution implemented in ArchView
"ArchView can open archive file online without downloading the whole archive."
https://addons.mozilla.org/en-US/firefox/addon/5028/
Inside archview-0.7.1.xpi, in the file "archview.js", you can look at their JavaScript approach.
It's possible. All you need is a server that allows reading bytes in ranges: fetch the end record (to learn the size of the central directory), fetch the central directory (to learn where each file starts and ends), and then fetch the proper bytes and handle them.
Here is an implementation in Python: onlinezip
[full disclosure: I'm the author of the library]
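For reference, a hedged C# sketch of the "fetch the end record" step this answer describes: scan the last bytes of the archive for the End of Central Directory signature and read where the central directory lives (ZIP fields are little-endian, which matches BitConverter on typical platforms):

```csharp
using System;
using System.IO;

public static class Eocd
{
    private const uint Signature = 0x06054b50; // "PK\x05\x06"

    // tail: the last bytes of the ZIP (at least 22, up to 22 + 65535 if there is a comment).
    public static (long cdOffset, long cdSize, int entryCount) Parse(byte[] tail)
    {
        // The record is at least 22 bytes, so scan backwards for its signature.
        for (int i = tail.Length - 22; i >= 0; i--)
        {
            if (BitConverter.ToUInt32(tail, i) != Signature) continue;
            int entryCount = BitConverter.ToUInt16(tail, i + 10); // total entries
            long cdSize    = BitConverter.ToUInt32(tail, i + 12); // central directory size
            long cdOffset  = BitConverter.ToUInt32(tail, i + 16); // central directory offset
            return (cdOffset, cdSize, entryCount);
        }
        throw new InvalidDataException("End of Central Directory record not found.");
    }
}
```

With cdOffset and cdSize you can issue one more range request for the central directory and read the file names, sizes, and local header offsets from it.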
I have a large raw data file (up to 1GB) which contains raw samples from a USB data logger.
I need to store extra information relating to the file (sample rate, description, trigger point, last seek position, etc.) and was looking into adding this as some sort of header.
The header should ideally be human readable and flexible, so I've so far ruled out binary serialization of a header.
I also want to avoid two separate files, as they could end up separated when copied or backed up. I remember somebody telling me that the newer *.*x Microsoft Office documents are actually a number of files in a zip. Is there a simple way to achieve this? Could I still keep quick seek times into the raw file?
Update
I started using the binary serializer and found it to be a pain, so I ended up using the XML serializer, which I'm more comfortable with.
I reserve some space at the start of the file for the XML. Simple.
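For what it's worth, a sketch of that reserved-space layout; the 4 KB block size and the LoggerHeader fields are placeholders of mine, not from the original setup:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class LoggerHeader
{
    public int SampleRate { get; set; }
    public string Description { get; set; }
    public long TriggerPoint { get; set; }
    public long LastSeekPosition { get; set; }
}

public static class HeaderIo
{
    public const int HeaderSize = 4096; // fixed block reserved before the raw samples

    public static void WriteHeader(string path, LoggerHeader header)
    {
        var serializer = new XmlSerializer(typeof(LoggerHeader));
        using (var buffer = new MemoryStream())
        {
            serializer.Serialize(buffer, header);
            if (buffer.Length > HeaderSize)
                throw new InvalidOperationException("Header too large for the reserved block.");

            using (var fs = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
            {
                fs.Write(buffer.ToArray(), 0, (int)buffer.Length);
                // Pad the rest of the block so the raw data always starts at HeaderSize.
                int padding = HeaderSize - (int)buffer.Length;
                fs.Write(new byte[padding], 0, padding);
            }
        }
    }
}
```

The raw samples then always start at byte HeaderSize, so seeking into them is still just fs.Seek(HeaderSize + sampleOffset, SeekOrigin.Begin).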
When you say you want to make the header human readable, this suggests opening the file in a text editor. Do you really want to do that, considering the file size and (I'm assuming) that the remainder of the file is non-human-readable binary data? If so, just write the text header data to the start of the binary file; it will be visible when the file is opened but, of course, the remainder of the file will look like garbage.
You could create an uncompressed ZIP archive, which may allow you to seek directly to the binary data. See this for information on creating a ZIP archive: http://weblogs.asp.net/jgalloway/archive/2007/10/25/creating-zip-archives-in-net-without-an-external-library-like-sharpziplib.aspx
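The linked article uses System.IO.Packaging; if you're on .NET 4.5 or later, System.IO.Compression can do the same job, and storing the entries uncompressed may still let you get at the raw data cheaply. A minimal sketch (file names are illustrative):

```csharp
using System.IO.Compression;

public static class Container
{
    // Packs the XML header and the raw sample file into one uncompressed ZIP.
    public static void Create(string zipPath, string headerXmlPath, string rawDataPath)
    {
        using (var zip = ZipFile.Open(zipPath, ZipArchiveMode.Create))
        {
            zip.CreateEntryFromFile(headerXmlPath, "header.xml", CompressionLevel.NoCompression);
            zip.CreateEntryFromFile(rawDataPath, "samples.raw", CompressionLevel.NoCompression);
        }
    }
}
```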