This is a follow-on to my previous question, Speeding Up Image Handling.
Apologies if I should have amended that question in some way rather than starting a new one.
I have tried all sorts of different things to speed up the drawing of an image on screen.
I thought that compressing the image until it was smaller would have an effect. However, while this might save memory for the object, I don't think it has any effect on how long the image takes to draw. I tried converting the image to JPEG with 100% compression; while this creates a blocky image, it does not affect the drawing time. I now think this is because the number of pixels that get rendered is not changed by the compression.
I tried reducing the color palette to 256 colors. This makes the size smaller, since it uses fewer bytes per pixel, but it does not seem to affect drawing on screen. I had thought that reducing the bytes per pixel GDI+ has to handle might save some time, but I haven't seen any difference so far.
So am I wasting my time looking at compression and palette?
I assume that the time taken will be affected by the number of pixels to be drawn (width x height), so I have resized the image to match the pixel size as displayed on screen. I think this is the one thing that does have an effect.
I have looked at how to stop autoscaling of the image - does my resize prevent that, or can the image still be autoscaled when it is rendered?
I am wondering if I can replace the DrawImage call with something using P/Invoke or another API call (which I admit I don't really understand).
You could use BitBlt via P/Invoke, which copies image data directly. It does require some initialisation to convert the image data to something the display device understands, but the copy operation itself is lightning fast. By the way, if you would like to know exactly what is happening inside DrawImage, use a tool like ILSpy or Reflector.NET to inspect the method.
See BitBlt code not working for an example. For some background on BitBlt, see http://www.codeproject.com/KB/GDI-plus/flicker_free.aspx.
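For reference, a rough sketch of the interop involved (a plain SRCCOPY blit, assuming you have already created and cached an HBITMAP for your image, e.g. via Bitmap.GetHbitmap(); the names follow the Win32 API):

using System;
using System.Drawing;
using System.Runtime.InteropServices;

static class Gdi
{
    public const int SRCCOPY = 0x00CC0020; // straight source copy, no blending

    [DllImport("gdi32.dll")]
    public static extern bool BitBlt(IntPtr hdcDest, int xDest, int yDest, int width, int height,
                                     IntPtr hdcSrc, int xSrc, int ySrc, int rop);

    [DllImport("gdi32.dll")]
    public static extern IntPtr CreateCompatibleDC(IntPtr hdc);

    [DllImport("gdi32.dll")]
    public static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj);

    [DllImport("gdi32.dll")]
    public static extern bool DeleteDC(IntPtr hdc);
}

// Usage inside OnPaint: blit a cached HBITMAP straight to the control's DC.
void Blit(Graphics g, IntPtr hBitmap, int width, int height)
{
    IntPtr hdcDest = g.GetHdc();
    IntPtr hdcSrc = Gdi.CreateCompatibleDC(hdcDest); // memory DC to hold the source bitmap
    IntPtr previous = Gdi.SelectObject(hdcSrc, hBitmap);
    Gdi.BitBlt(hdcDest, 0, 0, width, height, hdcSrc, 0, 0, Gdi.SRCCOPY);
    Gdi.SelectObject(hdcSrc, previous); // restore before deleting the DC
    Gdi.DeleteDC(hdcSrc);
    g.ReleaseHdc(hdcDest);
}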
Related
I receive Bitmap images from a camera at 30 fps, and I need to display every image in a PictureBox.
The problem is that the PictureBox is very slow!
I have tried to implement a custom PictureBox with double buffering enabled, but the problem is not resolved.
Do you have a custom PictureBox, a user control, or another solution that can display the images faster?
Additional information:
The image resolution is 2048x1088 with 256 graylevel (8bit image).
I use AForge.NET to process the images.
Thank you
That image gets expensive to draw when it has to be resized to fit the PictureBox's client area, which is very likely in your case because your images are pretty large. It uses a high-quality bicubic filter to make the resized image look good. That's pretty expensive, though the result does look good.
To avoid that expense, resize the image yourself before assigning it to the Image property. Make it just as large as the PB's ClientSize.
That's going to make a big difference in itself. The next thing you can do is create the scaled bitmap with the Format32bppPArgb pixel format. It's the format that's about ten times faster than any other, because it matches the video adapter on most machines so no pixel format conversions are necessary.
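A rough sketch of that pre-scaling (pictureBox1 and frame are placeholder names for your control and your captured image):

// Scale the frame to the PictureBox's client size, in the fast PArgb format.
Bitmap CreateDisplayBitmap(Image frame, Size clientSize)
{
    var scaled = new Bitmap(clientSize.Width, clientSize.Height,
                            System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
    using (var g = Graphics.FromImage(scaled))
        g.DrawImage(frame, new Rectangle(Point.Empty, clientSize));
    return scaled;
}

// Per frame: swap the image and dispose the previous one to keep memory flat.
Image previous = pictureBox1.Image;
pictureBox1.Image = CreateDisplayBitmap(frame, pictureBox1.ClientSize);
if (previous != null) previous.Dispose();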
I have a bitmap image and I want to put it in excel. I used this code I found here:
xlWorkSheet.Shapes.AddPicture("C:\\filesystem\\mb2.bmp",
    MsoTriState.msoFalse, MsoTriState.msoCTrue, 0, 0, 518, 390);
But the resulting image is 1.333 times wider and higher. OK, so I can just multiply the dimensions by 0.75 and I get an image in excel with the desired dimensions.
xlWorkSheet.Shapes.AddPicture("C:\\filesystem\\mb2.bmp",
    MsoTriState.msoFalse, MsoTriState.msoCTrue, 0, 0, (float)(518 * 0.75), (float)(390 * 0.75));
But that number 0.75 sitting there hard-coded really bothers me, especially since I've seen this question in which the OP's ratio is 0.76. Knowing that this code needs to run on any number of systems with different displays, I want to know how to get the ratio programmatically.
Somewhat also related to this question which has to do with copy-paste without code.
If you are talking about printing, the display is irrelevant.
The dimensions of the image need to be relative to the paper size. The size values in the AddPicture method are in points and are only loosely related to pixels. Points are units of measure (1/72 of an inch) that make sense to your printer. The application translates points to pixels for you, so you don't need to worry about that.
You can use the InchesToPoints or CentimetersToPoints methods of the Application object to size your image to paper size.
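A sketch of that conversion (xlApp is the interop Application object; the 96 DPI figure is an assumption about how the image was authored, and it is also where the hard-coded 0.75 comes from: 72 points per inch divided by 96 pixels per inch):

// Convert the desired size from pixels (at an assumed 96 DPI) to points.
double widthPoints = xlApp.InchesToPoints(518 / 96.0);   // = 518 * 0.75
double heightPoints = xlApp.InchesToPoints(390 / 96.0);  // = 390 * 0.75
xlWorkSheet.Shapes.AddPicture("C:\\filesystem\\mb2.bmp",
    MsoTriState.msoFalse, MsoTriState.msoCTrue, 0, 0, (float)widthPoints, (float)heightPoints);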
I found an article on image processing here: http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-saving-cropping-and-resizing Everything works fine.
I want to keep the high quality when resizing the image. I think if I can increase the DPI value I can achieve this. Does anyone know if this is possible? And if so, how can I implement it in C#?
For starters, it's worth pointing out that there are two general categories of images: vector [e.g. SVG, WMF, Adobe Illustrator and Corel Draw graphics] and bitmap (also called raster) images [e.g. BMP, JPEG and PNG images].
Vector images are formed from a series of mathematical equations and/or calculations. Bitmap images, on the other hand, are made up of individual dots (pixels) each corresponding to a particular feature on the object the image is taken of.
If you want to resize an image, the first thing to consider is whether it is a bitmap or a vector image. By virtue of the fact that vector images are obtained from calculations, they can be perfectly resized without losing any detail. The case is different for bitmap images: since each pixel is independent of the others, when you resize the image you are simply increasing or decreasing the source-to-target pixel ratio.
So in order to double the size of a vector image, you simply multiply the target dimensions by two and everything comes out all right. If you apply the same scaling to a bitmap, you are actually stretching each source pixel to cover four target pixels (two rows of two horizontal pixels).
Of course, by applying interpolation and filtering, the computer can "smooth" out the edges of the target pixels so they seem to blend into each other and give the appearance of a reasonably resized image but this output is never the same as resizing a vector image; vector images resize perfectly.
You also mentioned DPI in your question. DPI is essentially the number of pixels that correspond to an inch when the image is printed, not when it is viewed on a screen. Therefore, increasing the DPI of an image does not increase its size on the screen; it only increases the print quality (which, needless to say, depends on the maximum resolution of the printer).
If you really do need to resize a bitmap image, as a rule of thumb, do not increase the size beyond 200% of the original, or you'll lose quality.
You can see this answer for code to resize bitmap images.
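As a rough sketch of what such a resize typically looks like in GDI+ (this is not the linked answer's exact code):

// Draw the source into a new bitmap at the target size, with smoothing enabled.
Bitmap ResizeBitmap(Image source, int width, int height)
{
    var result = new Bitmap(width, height);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, 0, 0, width, height);
    }
    return result;
}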
To see a sample vector image, go to this link.
Note: Try zooming in and out of the image to see how well it resizes.
A typical example of bitmaps are the StackOverflow sprites. They do not keep their quality when resized.
Further Reading
Vector Graphics: http://en.wikipedia.org/wiki/Vector_image
Bitmap Graphics: http://en.wikipedia.org/wiki/Bitmap_image
Simply put: if the original image is smaller than the resized image, there is hardly anything you can do. The rest is a research problem.
This would only be possible if it's a vector graphic. Look into SVG. Otherwise, I think you might need Silverlight or Flex for that part.
What you're asking isn't really possible. You can't enlarge an image while maintaining the same quality.
If you think about an image as a mapped array of pixels (literally, a "bit-map"), this makes sense. The image is saved with a fixed amount of data, and that's all you have to work with when you resize it. Any examples to the contrary (like TV shows, as suggested by one of the comments) are purely fictional.
The best that you can do is set the InterpolationMode property of the Graphics object you're using to do the resizing to "HighQualityBicubic", which is the highest quality smoothing algorithm supported by GDI+ and in fact by every major graphics package on the market. It's the best that even Adobe Photoshop has to offer. Essentially, interpolation means that the computer is calculating the approximate value of the new pixels you're adding to make the image larger from the relative values of neighboring pixels. It's a "best guess" method, but it's the best compromise we've come up with yet.
At the very least, the resulting images won't have "jaggies" or rough, pixelated lines.
Sample code:
Bitmap resized = new Bitmap(newWidth, newHeight); // newWidth/newHeight: your target size
using (Graphics g = Graphics.FromImage(resized))
{
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    g.DrawImage(original, 0, 0, newWidth, newHeight); // original: the source Image
}
Beyond that, it's worth noting that GDI+ (which the .NET Framework uses internally for graphics routines) works best with image sizes that are multiples of 16. So if at all possible, you should try to make the width and height of your resized images a multiple of 16. This will preserve as much of the original image quality as possible.
The ideal solution is to switch to vector graphics that can be resized at will. Instead of pixel information, they contain mathematical information used to draw or "render" the image. Read more on Wikipedia.
You could also try the image metadata support in GDI+; maybe it can suit your needs.
I'm working on a silverlight project where users get to create their own Collages.
The problem
When loading a bunch of images using the BitmapImage class, Silverlight hogs up a huge, unreasonable amount of RAM. 150 pictures, each of which is at most 4.5 MB, take up about 1.6 GB of RAM, and the app ends up throwing memory exceptions.
I'm loading them through streams, since the user selects their own photos.
What I'm looking for
A class, method or some process to eliminate the huge amount of RAM being sucked up. Speed is an issue, so I don't want to be converting between images formats or anything like that. A fast resizing solution might work.
I've tried using a WriteableBitmap to render the images into, but I find this method forces me to reinvent the wheel when it comes to drag/drop and other things I want users to be able to do with the images.
What I would try is to load each stream and resize it to a thumbnail (say, 640x480) before loading the next one. Then let the user work with the smaller images. Once you're ready to generate the PDF, reload the JPEGs from the original streams one at a time, disposing of each bitmap before loading the next.
I'm guessing you're doing something like this:
Bitmap bitmap = new Bitmap(jpegFileName); // jpegFileName: path to the JPEG
and then doing:
protected override void OnPaint(PaintEventArgs e)
{
    Graphics g = e.Graphics;
    g.DrawImage(bitmap, destRect); // destRect: the on-screen rectangle; forces a resize every paint
}
This resizes the huge JPEG image down to the size shown on screen every time you draw it. I'm guessing your JPEG is about 2500x2000 pixels in size. When you load a JPEG into a Bitmap, the bitmap loading code decompresses the data and stores it either as RGB data in a format that will be easy to render (i.e. in the same pixel format as the display) or as a thing known as a Device Independent Bitmap (aka DIB). These bitmaps require far more RAM to store than the compressed JPEG.
Your current implementation is already doing format conversion and resizing, but doing it in an inefficient way, i.e. resizing a huge image down to screen size every time it's rendered.
Ideally, you want to load the image and scale it down. .Net has a system to do this:
Bitmap bitmap = new Bitmap(jpegFileName);
Image thumbnail = bitmap.GetThumbnailImage(width, height, null, IntPtr.Zero);
bitmap.Dispose(); // this releases all the unmanaged resources and makes the bitmap unusable - you may have been missing this step
bitmap = null;    // let the GC know the object is no longer needed
where width and height are the size of the required thumbnail. Now, this might produce images that don't look as good as you might want them to (but it will use any embedded thumbnail data if it's present so it'll be faster), in which case, do a bitmap->bitmap resize.
When you create the PDF file, you'll need to reload the JPEG data, but from a user's point of view, that's OK. I'm sure the user won't mind waiting a short while to export the data to a PDF so long as you have some feedback to let the user know it's being worked on. You can also do this in a background thread and let the user work on another collage.
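A minimal sketch of that background export; ExportToPdf and OnExportFinished are hypothetical placeholders for your own code:

// BackgroundWorker keeps the UI responsive; RunWorkerCompleted fires back on the UI thread.
var worker = new System.ComponentModel.BackgroundWorker();
worker.DoWork += (s, e) => ExportToPdf(originalJpegStreams); // reload each JPEG one at a time here
worker.RunWorkerCompleted += (s, e) => OnExportFinished();
worker.RunWorkerAsync();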
What might be happening to you is a little-known fact about garbage collection that got me as well. If an object is big enough (the threshold is 85,000 bytes, at which point it goes on the large object heap), the garbage collector will, even though nothing currently in scope is linked to the object (in both your case and mine the objects are the images), keep the image in memory, because it has decided that it is cheaper to keep it around than to delete it and reload it later should you ever want it again.
This isn't a complete solution, but if you're going to be converting between bitmaps and JPEGs (and vice versa), you'll need to look into the FJCore image library. It's reasonably simple to use, and it allows you to do things like resize JPEG images or re-encode them at a different quality. If you're using Silverlight for client-side image processing, this library probably won't be sufficient on its own, but it's certainly necessary.
You should also look into how you're presenting the images to the user. If you're doing collages with Silverlight, presumably you won't be able to use virtualizing controls, since the users will be manipulating all 150 images at once. But as other folks have said, you should also make sure you're not presenting bitmaps based on full-sized JPEG files either. A 1MB compressed JPEG is probably going to expand to a 10MB Bitmap, which is likely where a lot of your trouble is coming from. Make sure that you're basing the images you present to the user on much smaller (lower quality and resized) JPEG files.
The solution that finally worked for me was using WriteableBitmapEx to do the following (see the code below):
Of course I only use thumbnails if the image isn't already small enough to store in memory.
The gotcha I had was the fact that WriteableBitmap doesn't have a parameterless constructor, but initializing it with 0,0 as the size and then loading the source sets the dimensions automatically. That didn't come naturally to me.
Thanks for the help everybody!
private WriteableBitmap getThumbnailFromBitmapStream(Stream bitmapStream, PhotoFrame photoFrame)
{
    // 0,0 is a dummy size; SetSource replaces it with the stream's real dimensions.
    WriteableBitmap inputBitmap = new WriteableBitmap(0, 0);
    inputBitmap.SetSource(bitmapStream);
    Size thumbnailSize = getThumbnailSizeFromWriteableBitmap(inputBitmap, photoFrame.size);
    WriteableBitmap thumbnail = inputBitmap.Resize((int)thumbnailSize.Width, (int)thumbnailSize.Height,
        WriteableBitmapExtensions.Interpolation.NearestNeighbor);
    return thumbnail;
}
One additional way to reduce RAM usage:
Don't load images that are invisible at the moment; load them while the user scrolls the page. Web developers use this method to improve page load speed. For you, it's a way to avoid storing the whole set of images in RAM.
Also, I think it's better not to create thumbnails on the fly, but to store them next to the full-size pictures and keep only links to them. When needed, you can always follow the link to the full-size picture and load it.
I've been fussing with this for the better part of the night, so maybe one of you can give me a hand.
I have found GDI+ DrawImage in C# to be far too slow for what I'm trying to render, and from the looks of forums, it's the same for other people as well.
I decided I would try using AlphaBlend or BitBlt from the Win32 API to get better performance. Anyway, I've gotten my images to display just fine except for one small detail: no matter what image format I use, I can't get the white background to disappear from my (transparent) graphics.
I've tried BMP and PNG formats so far, and verified that they get loaded as 32bppArgb images in C#.
Here's the call I'm making:
// Draw the tile to the screen.
Win32GraphicsInterop.AlphaBlend(graphicsCanvas, destination.X, destination.Y, this.TileSize, this.TileSize,
this.imageGraphicsPointer, tile.UpperLeftCorner.X, tile.UpperLeftCorner.Y,
this.TileSize, this.TileSize,
new Win32GraphicsInterop.BLENDFUNCTION(Win32GraphicsInterop.AC_SRC_OVER, 0,
Convert.ToByte(opacity * 255),
Win32GraphicsInterop.AC_SRC_ALPHA));
For the record, AC_SRC_OVER is 0x00 and AC_SRC_ALPHA is 0x01 which is consistent with what MSDN says they ought to be.
Do any of you guys have a good solution to this problem or know a better (but still fast) way I can do this?
Graphics.DrawImage() speed is critically dependent on the pixel format. Format32bppPArgb is 10 times faster than any other one on any recent machine I've tried.
Also make sure the image doesn't get resized: use a DrawImage() overload that sets the destination size equal to the bitmap size. This is very important if the video adapter's DPI setting doesn't match the resolution of the bitmap.
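For example, a minimal sketch (bitmap and e are the usual OnPaint names):

// Destination rectangle matches the bitmap exactly, so no scaling filter runs.
e.Graphics.DrawImage(bitmap, new Rectangle(0, 0, bitmap.Width, bitmap.Height));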
Have you tried an opacity of just 255 rather than a calculated one?
This blog post describes what you're trying to do:
http://blogs.msdn.com/andreww/archive/2007/10/10/preserving-the-alpha-channel-when-converting-images.aspx
The critical thing is that he carries out a conversion of the image to make the alpha channel compatible.
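I haven't reproduced the post's exact code, but one common way to get a premultiplied-alpha bitmap on the managed side is to redraw the image into the Format32bppPArgb pixel format, something like:

// Redraw into Format32bppPArgb so each pixel's RGB is premultiplied by its alpha.
Bitmap MakePremultiplied(Bitmap source)
{
    var result = new Bitmap(source.Width, source.Height,
                            System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
    using (var g = Graphics.FromImage(result))
        g.DrawImage(source, 0, 0, source.Width, source.Height);
    return result;
}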
Ok. From a pure Win32 perspective:
In order for AlphaBlend to actually alpha blend, it needs the source device context to contain a selected HBITMAP representing a 32bpp bitmap with a pre-multiplied alpha channel.
To get a 32bpp device bitmap you can either call one of the many functions that create a screen-compatible device bitmap and hope like hell the user has selected 32bpp as the desktop bit depth, OR ensure that the source bitmap is a DIB section - or rather, that the library or framework creating it from the loaded image for you makes it one.
So, C# is loading your images as 32bppArgb, BUT how are you converting that C# representation of the bitmap into an HBITMAP? You need to ensure that a DIB section is being created, not a DDB (device-dependent bitmap), and that the DIB section is 32bpp.
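A rough sketch of what that might look like. The struct mirrors BITMAPINFOHEADER (sufficient for a 32bpp BI_RGB DIB with no color table), and the row copy assumes the usual 32bpp layout where both GDI+ and the DIB section store BGRA rows of width * 4 bytes:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class DibHelper
{
    // Just the BITMAPINFOHEADER fields; enough for a 32bpp BI_RGB DIB.
    [StructLayout(LayoutKind.Sequential)]
    struct BITMAPINFO
    {
        public int biSize;
        public int biWidth;
        public int biHeight;
        public short biPlanes;
        public short biBitCount;
        public int biCompression;
        public int biSizeImage;
        public int biXPelsPerMeter;
        public int biYPelsPerMeter;
        public int biClrUsed;
        public int biClrImportant;
    }

    [DllImport("gdi32.dll")]
    static extern IntPtr CreateDIBSection(IntPtr hdc, ref BITMAPINFO pbmi,
        uint usage, out IntPtr ppvBits, IntPtr hSection, uint offset);

    // Builds a 32bpp premultiplied DIB section from a GDI+ bitmap.
    public static IntPtr CreatePremultipliedDib(Bitmap source)
    {
        var bmi = new BITMAPINFO
        {
            biSize = Marshal.SizeOf(typeof(BITMAPINFO)),
            biWidth = source.Width,
            biHeight = -source.Height, // negative height = top-down rows
            biPlanes = 1,
            biBitCount = 32
        };
        IntPtr bits;
        IntPtr hDib = CreateDIBSection(IntPtr.Zero, ref bmi, 0 /* DIB_RGB_COLORS */,
                                       out bits, IntPtr.Zero, 0);

        // Redraw into a PArgb bitmap so GDI+ premultiplies the alpha for us.
        using (var pargb = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppPArgb))
        {
            using (var g = Graphics.FromImage(pargb))
                g.DrawImage(source, 0, 0, source.Width, source.Height);

            var data = pargb.LockBits(new Rectangle(0, 0, pargb.Width, pargb.Height),
                                      ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
            // 32bpp rows are already DWORD-aligned, so a straight copy works.
            byte[] buffer = new byte[data.Stride * data.Height];
            Marshal.Copy(data.Scan0, buffer, 0, buffer.Length);
            Marshal.Copy(buffer, 0, bits, buffer.Length);
            pargb.UnlockBits(data);
        }
        return hDib; // select into a memory DC, pass that DC to AlphaBlend; DeleteObject when done
    }
}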