Since images and icons are stored in a resx file, I am guessing that it should be relatively easy to store a byte array (or similar stream) in an embedded resource file.
How might this be done? Should I pretend the binary stream is a Bitmap? Or, if the resource file is the wrong place to embed binary data, what other techniques should I investigate?
Mitch has pointed to the right answer, but one trick you can keep up your sleeve is storing the data compressed and decompressing it on first access. It helps keep your DLLs small. I use this trick to embed x64 and x86 versions of a native DLL:
See for example the code here: http://code.google.com/p/videobrowser/source/browse/trunk/MediaInfoProvider/LibraryLoader.cs
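For illustration, here is a minimal sketch of the trick; the resource names and target path are hypothetical placeholders (the LibraryLoader.cs linked above is the real implementation). The idea is just: pick the embedded copy that matches the process, gunzip it to disk once, then LoadLibrary it.

    // Sketch: extract a GZip-compressed native DLL embedded as a resource,
    // then load it on first access. Resource names and paths are
    // hypothetical; adapt them to your project.
    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Reflection;
    using System.Runtime.InteropServices;

    static class NativeLoader
    {
        [DllImport("kernel32", SetLastError = true)]
        static extern IntPtr LoadLibrary(string path);

        public static void EnsureLoaded()
        {
            // Choose the embedded copy that matches the current process.
            string resourceName = Environment.Is64BitProcess
                ? "MyAssembly.Native.MediaInfo.x64.dll.gz"
                : "MyAssembly.Native.MediaInfo.x86.dll.gz";

            string targetPath = Path.Combine(Path.GetTempPath(), "MediaInfo.dll");

            if (!File.Exists(targetPath))
            {
                using (Stream gz = Assembly.GetExecutingAssembly()
                                           .GetManifestResourceStream(resourceName))
                using (var decompress = new GZipStream(gz, CompressionMode.Decompress))
                using (FileStream file = File.Create(targetPath))
                {
                    decompress.CopyTo(file); // pay the decompression cost only once
                }
            }

            if (LoadLibrary(targetPath) == IntPtr.Zero)
                throw new InvalidOperationException("LoadLibrary failed: " + targetPath);
        }
    }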
I'd like to add a few dictionaries as embedded resources in my C# solution (*.dic and *.aff files).
Internally, each dictionary is a simple text file, so it compresses very well.
Is it efficient to store these dictionaries in a *.zip archive, include archive as an embedded resource into my solution and then extract my dictionaries from archive in runtime? Or are embedded resources compressed by default in an assembly file?
By efficient I meant that the install size would be smaller, and runtime slowdown would not be critical.
Yes.
We can see this by adding a 1 MB XML file as an embedded resource: the resulting DLL grows by approximately 1 MB. If, on the other hand, we zip it beforehand, the DLL may grow by as little as a few KB, depending on how compressible the content is.
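A minimal sketch of the runtime extraction, assuming .NET 4.5+ for System.IO.Compression's ZipArchive (on older frameworks, SharpZipLib or similar fills the same role). The resource and entry names are hypothetical:

    // Sketch: open a zip archive embedded as a resource and read one
    // dictionary entry out of it. Requires a reference to the
    // System.IO.Compression assembly (.NET 4.5+).
    using System.IO;
    using System.IO.Compression;
    using System.Reflection;

    static class DictionaryResources
    {
        public static string ReadEntry(string entryName) // e.g. "en_US.dic"
        {
            Assembly assembly = Assembly.GetExecutingAssembly();
            using (Stream zip = assembly.GetManifestResourceStream(
                       "MyApp.Resources.Dictionaries.zip"))
            using (var archive = new ZipArchive(zip, ZipArchiveMode.Read))
            using (var reader = new StreamReader(archive.GetEntry(entryName).Open()))
            {
                return reader.ReadToEnd();
            }
        }
    }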
In C#, I have a ZIP file that I want to corrupt by XORing or Nulling its bytes.
(by Nulling I mean make all the bytes in the file zeros)
XORing its bytes requires me to first read the bytes into a byte array, XOR them with some value, and then write them back to the file.
Now, if I XOR/null all (or half) of the file's bytes, it gets corrupted, but if I just XOR/null some of the bytes, say the first few bytes (or any few bytes at any position in the file), it doesn't get corrupted; by that I mean I can still access the file as if nothing really happened.
The same thing happened with MP3 files.
Why isn't the file getting corrupted?
And is there a fast way to corrupt a file?
The problem is that the ZIP file I'm dealing with is big,
so XORing/nulling even half of its bytes takes a couple of seconds.
Thank you so much in advance. :)
Just read all the files completely and you will probably get read errors.
But of course, if you want to keep something 'secret', use encryption.
A zip contains a small header, a directory structure (at the end), and, in between, the individual files. See Wikipedia for details.
Corrupting the first bytes is sure to corrupt the file, but that is also very easily repaired.
Damaging the last block has a similar effect: the reader won't be able to find the directory structure at the end and will give up immediately, but again it is repairable.
Changing a byte in the middle will corrupt one file inside the archive: its CRC check will fail.
It depends on the file format you are trying to "corrupt", on what portion of the file you modify, and on how you verify the corruption. Most file formats have some type of error detection.
The other thing working against you is that the zip file format uses a CRC algorithm to detect corruption. In addition, the archive's metadata exists in two places (the local file headers and the central directory), so you need to corrupt both.
I would suggest you corrupt the directory structure at the end and then modify some of the bytes in the front.
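As a sketch of how fast this can be: you only need to rewrite a few KB at each end of the file, not half of its bytes. (As the other answers note, this does not make the data unrecoverable; it just breaks conventional readers.)

    // Sketch: XOR a small block at the start (local file header) and at
    // the end (central directory / end-of-central-directory record) of a
    // large file, in place, without rewriting the rest of it.
    using System;
    using System.IO;

    static class Corrupter
    {
        public static void CorruptEnds(string path, int blockSize = 4096, byte key = 0xFF)
        {
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite))
            {
                XorBlock(fs, 0, (int)Math.Min(blockSize, fs.Length), key);
                long tailStart = Math.Max(0, fs.Length - blockSize);
                XorBlock(fs, tailStart, (int)(fs.Length - tailStart), key);
            }
        }

        static void XorBlock(FileStream fs, long offset, int count, byte key)
        {
            var buffer = new byte[count];
            fs.Position = offset;
            fs.Read(buffer, 0, count);           // read the block
            for (int i = 0; i < count; i++)
                buffer[i] ^= key;                // flip its bits
            fs.Position = offset;
            fs.Write(buffer, 0, count);          // write it back in place
        }
    }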
I could just lock the zip entries with a password, but I don't want anybody to even open it up and see what's in it
That makes it sound as if you're looking for a method of secure deletion. If you simply didn't want someone to read the file, delete it. Otherwise, unless you do something extreme like go over it a dozen times with different values or apply some complex algorithm over it a hundred times, there are still going to be ways to read the data, even if the format is 'corrupt'.
On the other hand, breaking a file simply to stop someone else accessing it conventionally just seems overkill. If it's a zip, you can read it in (there are plenty of questions here for handling archive files), encrypt it with a password and then write it back out. If it's a different type of file, there are literally a million different questions and solutions for encrypting, hiding or otherwise preventing access to data. Breaking a file isn't something you should be going out of your way to do unless this is to help test some sort of un-zip-corrupting program or something similar, but your comments imply this is to prevent access. Perhaps a bit more background on why you want to do this would help us provide a better answer?
Hello, I am trying to compress a file using GZipStream.
I have created my own extension; let's call it .myextension.
I am trying to compress a .myextension file while keeping its extension; that is, I compress myfile.myextension and the output is again myfile.myextension.
That part works: I can compress my file really well.
The problem is that when I try to decompress it using GZipStream, it says that the magic number is incorrect.
How can I fix that? When decompressing, should I just change the extension to .gz? Should I convert it somehow? Please help; I have no idea how to continue.
This is a common question; here are similar threads with solutions:
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=427166&SiteID=1
A 'magic number' is usually a fixed value that often appears somewhat arbitrary, possibly indecipherable. For example, a line of code may contain:
    If X = 31 Then
        ' Do Something
    End If
In this case, 31 is a 'magic number': it has no obvious meaning (and, as far as coding is concerned, the term is one of derision).
Files (of different types) often have their first few bytes set to certain values. For example, a file whose first two bytes are the hexadecimal values 42 4D is a bitmap file. These numbers are 'magic numbers' (in this case, 42 4D corresponds to the characters BM). Other file types have similar 'magic numbers'.
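For example, a quick sketch-level check for that bitmap signature:

    // Sketch: test a file's magic number by reading its first two bytes.
    using System.IO;

    static class Magic
    {
        // 0x42 0x4D is "BM", the Windows bitmap signature.
        public static bool LooksLikeBitmap(string path)
        {
            using (FileStream fs = File.OpenRead(path))
            {
                return fs.ReadByte() == 0x42 && fs.ReadByte() == 0x4D;
            }
        }
    }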
http://forums.microsoft.com/msdn/showpost.aspx?postid=1154042&siteid=1
Of course, the minute someone (or some team) develops a no-fuss compression/decompression custom task that supports zip, bzip2, gzip, rar, cab, jar, data and iso files, I'll use that; until then, I'll stick with the open-source command-line utilities.
Of course, you can code up a solution, but this one is such low-hanging fruit. For handling zip files, there is no native .NET library (at least not yet). There is support for handling the compressed streams INSIDE the zip file, but not for navigating the archive itself.
As I mentioned previously, there are plenty of open-source zip utilities, like those on SourceForge. These work fine on Win2003 Server x64; I can attest to that.
However, if you insist on a .NET solution for zip decompression, use http://www.icsharpcode.net/OpenSource/SharpZipLib/, which is open source and has a clean and reliable 100% .NET implementation.
First off, judging by reports from other users who have had various issues, GZipStream should not be used, since it has bugs: it does not compress short strings correctly, and it does not detect corrupted compressed data. It is a very poor implementation.
As for your problem: others using GZipStream have seen a four-byte prefix before the gzip data containing the number of uncompressed bytes. If that prefix is written to the file, it would cause exactly the problem you are seeing. A gzip file should start with the hex bytes 1f 8b.
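A sketch of that diagnosis: peek at the first two bytes, and if they are not 1f 8b, assume (for this particular bug only, not as a general rule) that a four-byte length prefix was written ahead of the gzip data and skip it:

    // Sketch: open a file that should contain gzip data. If the 1f 8b
    // magic bytes are not at offset 0, assume a 4-byte uncompressed-length
    // prefix was written first (the bug described above) and skip it.
    using System.IO;
    using System.IO.Compression;

    static class GZipOpener
    {
        public static Stream Open(string path)
        {
            FileStream fs = File.OpenRead(path);
            int b0 = fs.ReadByte();
            int b1 = fs.ReadByte();
            fs.Position = (b0 == 0x1F && b1 == 0x8B)
                ? 0   // proper gzip: rewind and decompress from the start
                : 4;  // assumed length prefix: skip the first four bytes
            return new GZipStream(fs, CompressionMode.Decompress);
        }
    }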
I have an implementation of a custom DataObject (virtual file); see here. I have drag-and-drop functionality in a control view (dragging a file OUT of a control view without a temporary local file).
This works fine with smaller files, but as soon as the file is larger than, say, 12-15 MB, it says there is not enough memory available. It seems the MemoryStream is out of memory.
What can I do about this? Can I somehow split a larger byte[] into several MemoryStreams and reassemble those into a single file?
Any help would be highly appreciated.
can I somehow split a larger byte[] into several MemoryStreams and reassemble those into a single file?
Yes.
When I had to deal with a similar situation, I built my own stream that internally used byte arrays of 4 MB each. This "paging" means it never has to allocate ONE LARGE BYTE ARRAY, which is what MemoryStream does. So: dump MemoryStream and build your own stream based on another internal storage mechanism.
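Here is a minimal sketch of that paging idea (class name and chunk size are arbitrary): a Stream backed by a list of fixed-size byte arrays, so no single large allocation, and no doubling-and-copying, ever happens. Only Write/Read/Seek are fleshed out; a production version needs proper argument checking.

    // Sketch: a stream backed by 4 MB pages instead of one contiguous buffer.
    using System;
    using System.Collections.Generic;
    using System.IO;

    class ChunkedMemoryStream : Stream
    {
        const int ChunkSize = 4 * 1024 * 1024;              // 4 MB pages
        readonly List<byte[]> _chunks = new List<byte[]>();
        long _length, _position;

        public override bool CanRead  { get { return true; } }
        public override bool CanSeek  { get { return true; } }
        public override bool CanWrite { get { return true; } }
        public override long Length   { get { return _length; } }
        public override long Position { get { return _position; } set { _position = value; } }

        public override void Write(byte[] buffer, int offset, int count)
        {
            while (count > 0)
            {
                int chunk  = (int)(_position / ChunkSize);
                int within = (int)(_position % ChunkSize);
                while (_chunks.Count <= chunk)
                    _chunks.Add(new byte[ChunkSize]);       // grow one page at a time
                int n = Math.Min(count, ChunkSize - within);
                Array.Copy(buffer, offset, _chunks[chunk], within, n);
                _position += n; offset += n; count -= n;
                if (_position > _length) _length = _position;
            }
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            int total = 0;
            while (count > 0 && _position < _length)
            {
                int chunk  = (int)(_position / ChunkSize);
                int within = (int)(_position % ChunkSize);
                int n = (int)Math.Min(Math.Min(count, ChunkSize - within),
                                      _length - _position);
                Array.Copy(_chunks[chunk], within, buffer, offset, n);
                _position += n; offset += n; count -= n; total += n;
            }
            return total;
        }

        public override void Flush() { }
        public override long Seek(long offset, SeekOrigin origin)
        {
            if (origin == SeekOrigin.Begin) _position = offset;
            else if (origin == SeekOrigin.Current) _position += offset;
            else _position = _length + offset;
            return _position;
        }
        public override void SetLength(long value) { _length = value; }
    }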
I'm writing a method that needs to save a System.Drawing.Image to a file. Without knowing the original file the Image was created from, is there any way to determine what file extension it should have?
The best solution I've come up with is to use a Switch/Case statement with the value of Image.RawFormat.
Does it even matter that I save the Image in its original format? Is an Image generated from a PNG any different from, say, one generated from a JPEG? Or is the data stored in an Image object completely generic?
While Steve Danner is correct in that an image created from a JPG will look different to an image created from a PNG, once it's loaded into memory it's an uncompressed data stream.
This means that you can save it out to any file format you want.
However, if you load a JPG image and then save it as another JPG you are throwing away more information due to the compression algorithm. If you do this repeatedly you will eventually lose the image.
If you can I'd recommend always saving as PNG.
Image.RawFormat has cooties; stay away from it. I've seen several reports of it having no legal value for no apparent reason, undiagnosed as yet.
You are quite right, it doesn't matter what format you save it to. After you load the file, the internal format is the same for any bitmap (not vector) image with the same pixel format. Generally, avoid recompressing JPEG files; they tend to get bigger and acquire more artifacts. Steve mentions multi-frame files; they need to be saved a different way.
Yes, it definitely matters because different file formats support different features such as compression, multiple frames, etc.
I've always used a switch statement like you have, perhaps baked into an extension method or something.
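Something like this sketch, for instance (the mapping is abbreviated, and the RawFormat quirks mentioned in another answer above are a real caveat):

    // Sketch: derive a file extension from Image.RawFormat. ImageFormat
    // can't be used directly in a switch statement, so compare the Guids.
    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    static class ImageExtensions
    {
        public static string GetFileExtension(this Image image)
        {
            Guid id = image.RawFormat.Guid;
            if (id == ImageFormat.Jpeg.Guid) return ".jpg";
            if (id == ImageFormat.Png.Guid)  return ".png";
            if (id == ImageFormat.Gif.Guid)  return ".gif";
            if (id == ImageFormat.Bmp.Guid)  return ".bmp";
            if (id == ImageFormat.Tiff.Guid) return ".tif";
            return ".png"; // unknown format: fall back to a lossless default
        }
    }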
To answer your question "Does it even matter that I save the Image in its original format?" explicitly: yes, it does, but in a negative way.
When you load the image, it is internally uncompressed to a bitmap (or, as ChrisF calls it, an uncompressed data stream). So if the original image used lossy compression (for example, JPEG), saving it in the same format will again result in loss of information (i.e. more artifacts, less detail, lower quality). Especially if you have repeated read-modify-save cycles, this is something to avoid.
(Note that it is also something to avoid if you are not modifying the picture; the repeated decompress-compress cycles alone will degrade the image quality.)
So if disk space is not an issue (and it usually isn't in the age of hard disks big enough for HD video), always store any intermediate pictures in lossless compression formats, or uncompressed. You may consider saving the final output in a compressed format, depending on what you use it for. (If you want to present the final pictures on the web, JPEG or PNG would be good choices.)