I have a Bitmap array that contains more than 500 Bitmap objects. I need to convert each Bitmap object in the array into a byte array. I'm using the MemoryStream class to achieve this:
using (MemoryStream ms = new MemoryStream())
{
images[0].Save(ms, System.Drawing.Imaging.ImageFormat.Gif);
byte[] byteData = ms.ToArray();
}
I would like to know if there is another way to achieve this. I'm not sure how expensive this process is.
Thanks
I've done some speed tests, and converting to ImageFormat.Bmp is the fastest, since it doesn't need to do any compression. The best format will also depend on what you plan to do with the data afterwards.
It's also worth considering where the Bitmaps came from in the first place. If you're loading them from files, it may be worth switching things around: read the file bytes first, then create your Bitmap objects from them.
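For reference, a rough sketch of that kind of timing test (assuming the images array from the question; this is just a harness, not a rigorous benchmark):
var sw = System.Diagnostics.Stopwatch.StartNew();
foreach (var bmp in images)
{
    using (var ms = new MemoryStream())
        bmp.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); // swap in Gif or Jpeg to compare
}
Console.WriteLine(sw.Elapsed);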
By choosing GIF you are making a CPU/memory trade-off which you most likely don't want. Specifically, the GIF will be smaller, but it will take some time to compress (unless the images are already in that format) relative to using a BMP.
If you are copying these around enough that you have memory-bandwidth issues (and can't fix that), this is a good idea, but otherwise you should stick with BMP. Really though, for 500 images I would expect this to take 1-2 seconds at most, so you probably don't need to worry about this sort of micro-optimization. If it's taking too long, you can move to unmanaged code, which will likely perform better because you will have finer control over memory allocations and copies.
First, this question is NOT about "how to save a Bitmap as a jpeg on your disk?"
I can't find (or think of) a way of applying JPEG compression to a Bitmap while keeping it as a Bitmap object. MSDN clearly shows how to save a Bitmap as a JPEG, but what I'm looking for is how to apply the encoding/compression to the Bitmap object itself, so I can still pass it around in my code without referencing a file.
One of the reasons behind that would be a helper class that handles bitmaps but shouldn't be aware of the persistence method used.
All images are bitmaps when loaded into program memory. Specific compressions are typically applied when writing to disk, and undone when reading from disk.
If you're worried about the in-memory footprint of an image, you could zip-compress the bytes and pass the byte array around internally. Zipping is good for lossless compression of an image. Don't forget that many image compression schemes have different levels of lossiness; in other words, the compression throws away data to store the image in the smallest number of bytes possible.
Compression and decompression are also a performance trade-off: you're trading memory footprint for processing time. And in any case, unless you get really fancy, the image does need to be a bitmap if you want to manipulate it in any way.
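A minimal sketch of that zip idea, with GZip standing in for zip (the method name is illustrative):
using System.IO;
using System.IO.Compression;

// Hold a losslessly deflated copy of the bitmap's bytes in memory.
static byte[] CompressBitmap(Bitmap bmp)
{
    using (var raw = new MemoryStream())
    {
        bmp.Save(raw, System.Drawing.Imaging.ImageFormat.Bmp); // uncompressed pixels
        using (var packed = new MemoryStream())
        {
            using (var gz = new GZipStream(packed, CompressionMode.Compress, true))
                raw.WriteTo(gz); // WriteTo copies the whole buffer regardless of Position
            return packed.ToArray();
        }
    }
}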
Here is an answer for a somewhat similar question which you might find interesting.
Bitmap does not support encoded in-memory storage; it is always unencoded (see the PixelFormat enum). Probably you need to write your own wrapper class/abstraction, or give up on that idea.
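One way such a wrapper could look (a hypothetical sketch; EncodedImage is not a framework type - it holds the encoded bytes and decodes on demand):
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

class EncodedImage
{
    private readonly byte[] _jpegBytes;

    public EncodedImage(Bitmap source)
    {
        using (var ms = new MemoryStream())
        {
            source.Save(ms, ImageFormat.Jpeg); // the lossy step happens here
            _jpegBytes = ms.ToArray();
        }
    }

    public Bitmap Decode()
    {
        using (var ms = new MemoryStream(_jpegBytes))
        using (var decoded = new Bitmap(ms))
            return new Bitmap(decoded); // copy so the result outlives the stream
    }
}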
var stream = new MemoryStream();
bitmap.Save(stream, ImageFormat.Jpeg);
Does that do what you need?
Not sure if what I'm trying to do will work out, or is even possible. Basically, I'm creating a remote-desktop-type app which captures the screen as a JPEG image and sends it to the client app for display.
I want to reduce the amount of data sent each time by comparing the image to the older one and only sending the differences. For example:
var bitmap = new Bitmap(1024, 720);
string oldBase = "";
using (var stream = new MemoryStream())
using (var graphics = Graphics.FromImage(bitmap))
{
graphics.CopyFromScreen(bounds.X, bounds.Y, 0, 0, bounds.Size);
bitmap.Save(stream, ImageFormat.Jpeg);
string newBase = Convert.ToBase64String(stream.ToArray());
// ! Do compare/replace stuff here with newBase and oldBase !
// Store the old image as a base64 string.
oldBase = newBase;
}
Using something like this I could compare both base64 strings and replace any matches. The matched text could be replaced with something like:
[number of characters replaced]
That way, on the client side I know where to replace the old data and add the new. Again, I'm not sure if this would even work, so anyone's thoughts on this would be very appreciated. :) If it is possible, could you point me in the right direction? Thanks.
You can do this by comparing the bitmap bits directly. Look into Bitmap.LockBits, which will give you a BitmapData pointer from which you can get the pixel data. You can then compare the pixels for each scan line and encode them into whatever format you want to use for transport.
Note that a scan line's length in bytes is always a multiple of 4. So unless you're using 32-bit color, you have to take into account the padding that might be at the end of the scan line. That's what the Stride property is for in the BitmapData structure.
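Here's a minimal sketch of that comparison (assuming both frames are the same size and Format32bppArgb; each tuple is (scan line, start x, run length) and would feed into the transport format below):
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static List<Tuple<int, int, int>> DiffFrames(Bitmap oldFrame, Bitmap newFrame)
{
    var runs = new List<Tuple<int, int, int>>();
    var rect = new Rectangle(0, 0, newFrame.Width, newFrame.Height);
    BitmapData oldData = oldFrame.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData newData = newFrame.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        for (int y = 0; y < rect.Height; y++)
        {
            // Stride, not Width * 4, accounts for any padding at the end of a scan line.
            IntPtr oldRow = oldData.Scan0 + y * oldData.Stride;
            IntPtr newRow = newData.Scan0 + y * newData.Stride;
            int runStart = -1;
            for (int x = 0; x < rect.Width; x++)
            {
                bool changed = Marshal.ReadInt32(oldRow, x * 4) != Marshal.ReadInt32(newRow, x * 4);
                if (changed && runStart < 0)
                    runStart = x;
                else if (!changed && runStart >= 0)
                {
                    runs.Add(Tuple.Create(y, runStart, x - runStart));
                    runStart = -1;
                }
            }
            if (runStart >= 0)
                runs.Add(Tuple.Create(y, runStart, rect.Width - runStart));
        }
    }
    finally
    {
        oldFrame.UnlockBits(oldData);
        newFrame.UnlockBits(newData);
    }
    return runs;
}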
Doing things on a per-scanline basis is easier, but potentially not as efficient (in terms of reducing the amount of data sent) as treating the bitmap as one contiguous block of data. Your transport format should look something like:
<start marker>
// for each scan line
<scan line marker><scan line number>
<pixel position><number of pixels><pixel data>
<pixel position><number of pixels><pixel data>
...
// next scan line
<scan line marker><scan line number>
...
<end marker>
Each <pixel position><number of pixels><pixel data> entry is a run of changed pixels. If a scan line has no changed pixels, you can choose not to send it. Or you can just send the scan line marker and number, followed immediately by the next scan line.
Two bytes will be enough for the <pixel position> field and for the <number of pixels> field. So you have an overhead of four bytes for each block. An optimization you might be interested in, after you have the simplest version working, would be to combine blocks of changed/unchanged pixels if there are small runs. For example, if you have uucucuc, where u is an unchanged pixel and c is a changed pixel, you'll probably want to encode the cucuc as one run of five changed pixels. That will reduce the amount of data you have to transmit.
Note that this isn't the best way to do things, but it's simple, effective, and relatively easy to implement.
In any case, once you've encoded things, you can run the data through the built-in GZip compressor (although doing so might not help much) and then push it down the pipe to the client, which would decompress it and interpret the result.
It would be easiest to build this on a single machine, using two windows to verify the results. Once that's working, you can hook up the network transport piece. Debugging the initial cut by having that transport step in the middle could prove very frustrating.
We're currently working on something very similar. Basically, what you're trying to implement is a video codec (a very simple Motion JPEG). There are some simple approaches and some very complicated ones.
The simplest approach is to compare consecutive frames and send only the differences. You may try to compare color differences between the frames in RGB space or YCbCr space and send only the pixels that changed with some metadata.
The more complicated solution is to compare the pictures after DCT transformation but before entropy coding. That would give you better comparisons and remove some ugly artifacts.
Check out more info on JPEG, Motion JPEG, and H.264 - you may use some of the methods these codecs use, or simply use an existing codec if possible.
This won't work for a JPEG. You need to use BMP, or possibly uncompressed TIFF.
I think if it were me I'd use BMP, scan the pixels for changes and construct a PNG where everything except the changes were transparent.
First, this would reduce your transmission size because PNG compression is quite good, especially for repeating pixels.
Second, it makes display on the receiving end very easy, since you can simply paint the new image over the top of the old image.
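A naive sketch of that idea (GetPixel/SetPixel is slow - the LockBits approach shown in the other answer is the faster route - but it shows the principle; oldFrame, newFrame and stream are assumed from context):
// Unchanged pixels become transparent; only the changes survive in the PNG.
var diff = new Bitmap(newFrame.Width, newFrame.Height, PixelFormat.Format32bppArgb);
for (int y = 0; y < newFrame.Height; y++)
    for (int x = 0; x < newFrame.Width; x++)
    {
        Color oldPx = oldFrame.GetPixel(x, y);
        Color newPx = newFrame.GetPixel(x, y);
        diff.SetPixel(x, y, oldPx.ToArgb() == newPx.ToArgb() ? Color.Transparent : newPx);
    }
diff.Save(stream, ImageFormat.Png); // client paints this over the previous frame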
I'm working on a university project and got stuck on a memory issue.
I load a bitmap which takes about 1.5GB on the HDD with the code below:
Bitmap bmp = new Bitmap(pathToFile);
The issue is that the newly created Bitmap object uses about 3.5GB of RAM, which is something I can't understand (that's a really BIG wrapper :E). I need to get at the pixel array, and the Bitmap class is really helpful for that (I use the LockBits() method later and process the array byte by byte), but in this case it's a total blocker. So here is my question:
Is there any easy way to extract the pixel array without spending an additional 2GB?
I'm using C# just to extract the needed array, which is later processed in C++ - maybe I could extract all the needed data in C++ instead (though a conversion issue appears there - I'm concentrating on the 24-bit BGR format)?
PS: I need to keep the whole bitmap in memory, so splitting it into parts is no solution.
PS2: Just to clarify: I know the difference between a file extension and a file format. The loaded file is an uncompressed bitmap, 3 bytes per pixel, of size ~1.42GB (16k x 32k pixels), so why is the Bitmap object more than twice as big? No decompression or conversion into another format is taking place.
Consider using Memory Mapped Files to access your HUGE data :).
An example focused on what you need can be found here: http://visualstudiomagazine.com/articles/2010/06/23/memory-mapped-files.aspx
It's in managed code but you might as well use it from equivalent native code.
Let me know if you need more details.
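A minimal sketch of reading pixel rows through a memory-mapped view (the offset and names are assumptions - a real reader would parse the BMP header for the pixel-data offset, and width/rowIndex come from your own code):
using System.IO;
using System.IO.MemoryMappedFiles;

using (var mmf = MemoryMappedFile.CreateFromFile(pathToFile, FileMode.Open))
using (var accessor = mmf.CreateViewAccessor())
{
    long pixelDataOffset = 54; // typical BITMAPFILEHEADER + BITMAPINFOHEADER size; an assumption
    int rowSize = ((width * 3 + 3) / 4) * 4; // 24bpp rows are padded to a multiple of 4 bytes
    var row = new byte[rowSize];
    accessor.ReadArray(pixelDataOffset + (long)rowIndex * rowSize, row, 0, row.Length);
    // Process row here; only the pages you touch get paged into RAM.
}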
You can use this solution, "Work with bitmaps faster in C#":
http://www.codeproject.com/Tips/240428/Work-with-bitmap-faster-with-Csharp
Or you can use memory-mapped files:
http://visualstudiomagazine.com/articles/2010/06/23/memory-mapped-files.aspx
You can stop memory caching.
Instead of
Bitmap bmp = new Bitmap(pathToFile);
Use
var bmp = (Bitmap)Image.FromStream(sourceFileStream, false, false);
see https://stackoverflow.com/a/47424918/887092
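One caveat worth adding (a sketch; pathToFile is from the question above): the stream must stay open for the Bitmap's lifetime, because GDI+ reads from it lazily.
using (var sourceFileStream = File.OpenRead(pathToFile))
{
    // false, false = no embedded color management, no up-front validation pass
    var bmp = (Bitmap)Image.FromStream(sourceFileStream, false, false);
    // Use bmp (e.g., LockBits) only while the stream is still open.
}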
As far as I know, JPEG has the best compression ratio among the common image formats, and if that's correct we shouldn't be able to compress a JPEG file much further, since it's already well compressed. So please help me understand this. I create some JPEGs as follows:
ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders();
ImageCodecInfo ici = null;
foreach(ImageCodecInfo codec in codecs) {
if(codec.MimeType == "image/jpeg")
ici = codec;
}
EncoderParameters ep = new EncoderParameters();
ep.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, _quality);
using(MemoryStream ms = new MemoryStream()) {
Bitmap capture = GetImage();
capture.Save(ms, ici, ep);
}
And I zipped them with SharpZipLib. On average each JPEG is 130KB, and after zipping each file compresses to about 70KB. How is that possible? There are only two answers I can imagine:
1- We can compress a JPEG file further with zip libraries.
2- My JPEG files are not created correctly, and it's possible to create better JPEGs (with a compression ratio high enough that zip libraries can't compress them further).
Does anyone know about this? If we can create better JPEGs, please help me with it.
Edit:
This is my zip code for compressing the JPEGs:
void AddNewEntry(MemoryStream stream, string pass,
    string zipFilePath, string entryName)
{
    var zf = new ICSharpCode.SharpZipLib.Zip.ZipFile(zipFilePath);
    if (!String.IsNullOrEmpty(pass))
        zf.Password = pass;
    StaticDataSource sds = new StaticDataSource(stream);
    zf.BeginUpdate();
    zf.Add(sds, entryName);
    zf.CommitUpdate();
    zf.IsStreamOwner = true;
    zf.Close();
}
public class StaticDataSource : IStaticDataSource
{
    private readonly Stream stream;

    public StaticDataSource(Stream stream)
    {
        this.stream = stream;
        this.stream.Position = 0;
    }

    public Stream GetSource()
    {
        stream.Position = 0;
        return stream;
    }
}
As most people have already stated, you can't easily compress such already-compressed files any further. Some people work hard on JPEG recompression (recompression = partially decoding the already-compressed file, then compressing that data with a custom, stronger model and entropy coder; recompression usually guarantees bit-identical results). Even with those advanced recompression techniques, I've only seen at most a 25% improvement. PackJPG is one of them. You can have a look at the other compressors here. As you can see, even the top-ranked compressor couldn't quite reach 25% (even though it's very complex).
Taking these facts into consideration, ZIP (actually deflate) cannot improve compression that much (it's very old and inefficient compared with the top 10 compressors). I believe there are two possible explanations for what you're seeing:
You're accidentally adding some extra data to the JPEG stream (possibly appending it after the JPEG stream ends).
.NET writes lots of redundant data to the JFIF file - maybe some big EXIF data and such.
To solve the problem, you can use a JFIF dump tool to observe what's inside the JFIF container. Also, you may want to try your JPEG files with PackJPG.
No one has mentioned the fact that JPEG is merely a container. There are many compression methods that can be used with that file format (JFIF, JPEG-2000, JPEG-LS, etc.). Further compressing that file can yield varying results depending on the content.
Also, some cameras store huge amounts of EXIF data (sometimes resulting in about 20K of data) and that might account for the difference you're seeing.
The JPEG compression algorithm has two stages: a "lossy" stage where visual elements that should be imperceptible to the human eye are removed, and a "lossless" stage where the remaining data is compressed using a technique called Huffman coding. After Huffman coding, further lossless compression techniques (like ZIP) will not reduce the size of the image file by a significant amount.
However, if you were to zip multiple copies of the same small image together, the ZIP ("DEFLATE") algorithm will recognise the repetition of data and exploit it to reduce the total file size to less than the sum of the individual files' sizes. This may be what you're seeing in your experiment.
Stated very simply, lossless compression techniques like Huffman coding (part of JPEG) and DEFLATE (used in ZIP) try to discover repeated patterns in your original data, and then represent those repeated patterns using shorter codes.
In short, you won't be able to really improve JPEG by adding on another lossless compression stage.
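A quick way to check this on your own files (a sketch; the path is illustrative):
using System;
using System.IO;
using System.IO.Compression;

byte[] jpeg = File.ReadAllBytes("photo.jpg"); // one of your generated JPEGs
using (var packed = new MemoryStream())
{
    using (var gz = new GZipStream(packed, CompressionMode.Compress, true))
        gz.Write(jpeg, 0, jpeg.Length);
    // Expect only a few percent difference on a clean JPEG.
    Console.WriteLine("{0} -> {1} bytes", jpeg.Length, packed.Length);
}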
You can attempt to compress anything with zlib. You just don't always get a reduction in size.
Usually, compressing a whole JPEG file will yield a handful of bytes in savings, as it will compress the JPEG header (including any plain-text comments or EXIF data).
This may not fully account for the 60KB of compression you see, unless you have a huge amount of header data or your JPEG data somehow winds up with a lot of repeating values inside.
Zipping JPEGs reduces size for a few reasons:
- EXIF data isn't compressed.
- JPEG is optimized for photos, not GIF-like data.
- Compressing the files creates a single data stream, which lets the compressor find patterns across multiple files and removes the requirement that each file be aligned to a specific block on disk. The latter alone can save around 4KB per compressed file.
The main problem with zipping pre-compressed images is that it requires extra work (human and CPU) for prep and viewing, which may not be worth the effort (unless you have millions of images that are infrequently accessed, or some kind of automated image service you're developing).
A better approach is to minimize the native file size, forgetting zip. There are many free libraries and apps out there to help with this. For example, ImageOptim combines several libs into one (OptiPNG, PNGCrush, Zopfli, AdvPNG, Gifsicle, PNGOUT), for a barrage of aggressive tricks to minimize size. Works great for PNGs; haven't tried it much with JPEGs.
Though remember that with any compression, there's always a point of diminishing returns. It's up to you to decide whether or not a few extra bytes really matter in the long run.
How can I compress an image file (*.bmp, *.jpeg) in C#?
I have to display some images as backgrounds on my controls. I'm using the following code to scale my image:
Bitmap orgBitmap = new Bitmap(_filePath);
Bitmap reqBitmap = new Bitmap(reqSize.Width, reqSize.Height);
using (Graphics gr = Graphics.FromImage(reqBitmap))
{
    gr.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    gr.DrawImage(orgBitmap, new RectangleF(0, 0, reqSize.Width, reqSize.Height));
}
This gives me the required bitmap.
My problem is that if the original bitmap is heavy (2MB), then when I load 50 images it eats all my memory. I want to compress the images as much as I can without losing too much quality. How can I do that in .NET?
Do you definitely need the large images to be present at execution time? Could you resize them (or save them at a slightly lower quality) using an image editing program (Photoshop, Paintshop Pro etc) beforehand? There seems to be little point in doing the work every time you run - especially as editing tools are likely to do a better job anyway.
Of course, this won't work if it's something like the user picking arbitrary images from their hard disk.
Another point: are you disposing of the bitmaps when you're finished with them? You aren't showing that in your code... if you're not disposing of the original (large) bitmaps then you'll be at the mercy of finalizers to release the unmanaged resources. The finalizers will also delay garbage collection of those objects.
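For example, a sketch of the same scaling code with deterministic disposal (names taken from the question):
using (Bitmap orgBitmap = new Bitmap(_filePath))
{
    Bitmap reqBitmap = new Bitmap(reqSize.Width, reqSize.Height);
    using (Graphics gr = Graphics.FromImage(reqBitmap))
    {
        gr.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
        gr.DrawImage(orgBitmap, new RectangleF(0, 0, reqSize.Width, reqSize.Height));
    }
    // orgBitmap's unmanaged memory is released here, not at some later GC.
}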
JPEG always loses something; PNG doesn't.
This is how you encode and decode PNG with C#:
http://msdn.microsoft.com/en-us/library/aa970062.aspx
Perhaps I'm misunderstanding things, but why not convert the bitmaps to JPEGs before you import them into your project as control backgrounds?
Good luck compressing JPEG. :) It's compressed already. As for your BMPs, make them JPEGs.