Is there any predefined control in WPF or VS2010 to implement image zooming (like Google Maps) for a bitmap displayed over a panel using C#? My bitmap will be a minimum of 8 GB in size.
Thanks in advance, Murali
There is DeepZoom for Silverlight. There is no such thing in WPF; it was planned for WPF 4 but removed before RTM.
Update:
Loading images of this size is pretty uncommon. You should consider tiling, as others suggested. Also consider whether you really need to load all the data at once: if the image is, say, 30000x30000 pixels, the user doesn't need to (and can't) see all of that data at once anyway. Use tiling and an appropriate image format (JPEG etc.) for each zoom level.
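For illustration, here is a minimal sketch of the tile-pyramid bookkeeping (the 256-pixel tile size and the "{level}/{col}_{row}.jpg" naming scheme are assumptions; the tiles themselves would be generated offline):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

class TilePyramid
{
    const int TileSize = 256;

    // Picks the pyramid level closest to the current zoom.
    // Level 0 is full resolution; each level above it halves both dimensions.
    public static int LevelForZoom(double zoom) // zoom 1.0 = 100%
    {
        return Math.Max(0, (int)Math.Floor(Math.Log(1.0 / zoom, 2)));
    }

    // Yields the tile files that intersect the visible region at that level,
    // so only those ever need to be decoded.
    public static IEnumerable<string> VisibleTiles(Rectangle viewport, int level)
    {
        for (int row = viewport.Top / TileSize; row <= (viewport.Bottom - 1) / TileSize; row++)
            for (int col = viewport.Left / TileSize; col <= (viewport.Right - 1) / TileSize; col++)
                yield return string.Format("{0}/{1}_{2}.jpg", level, col, row);
    }
}
```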
Relevant links:
Single objects still limited to 2 GB in size in CLR 4.0?
Pushing the Limits of Windows: Physical Memory
I need to resize images of varying dimensions to the same size. These are profile images, so I can crop the width or height if needed to make the aspect ratios match. I also need to add a watermark to each image. Can you suggest a library that is good for this kind of image manipulation?
Thanks
ImageMagick is very good, has bindings for most languages, and is available for most operating systems. It is also free.
Here is a short list from memory:
http://www.graphicsmill.com/
http://imageresizing.net/
http://www.atalasoft.com/products/dotimage/
http://www.coreoptical.com/
http://www.leadtools.com/
Actually, your task is pretty simple, so I guess most of these SDKs will handle it easily. And if you don't need anything special in terms of .NET integration, ImageMagick should be good enough.
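For that matter, plain System.Drawing can already do this without any third-party SDK. A minimal sketch (the target size, font, and watermark placement are placeholders) that scales the image to cover the target box, center-crops the overflow, and stamps a translucent text watermark:

```csharp
using System;
using System.Drawing;
using System.Drawing.Drawing2D;

static Bitmap ResizeAndWatermark(Image source, int width, int height, string watermark)
{
    // Scale so the image covers the target box; whatever overflows is cropped.
    float scale = Math.Max((float)width / source.Width, (float)height / source.Height);
    int scaledW = (int)(source.Width * scale);
    int scaledH = (int)(source.Height * scale);

    var result = new Bitmap(width, height);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        // Centering with (possibly negative) offsets performs the crop.
        g.DrawImage(source, (width - scaledW) / 2, (height - scaledH) / 2, scaledW, scaledH);

        using (var font = new Font("Arial", 12))
        using (var brush = new SolidBrush(Color.FromArgb(128, Color.White)))
            g.DrawString(watermark, font, brush, 8, height - 24);
    }
    return result;
}
```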
I was experiencing performance issues when displaying a large number of images; I discovered that the full-resolution image was being used when I really only needed an image less than a quarter of that size. So I added a line between BeginInit and EndInit to set DecodePixelWidth to 200, which is the maximum width I need in my layout. Performance was no longer an issue, but some of the images now display really small, nowhere close to 200 pixels wide. Most of the images display correctly, and there doesn't seem to be any rhyme or reason to which are too small and which work. I thought it might be due to differences in the original dimensions of the images, but there was no pattern in the results. I tried bumping the width up to 600, which allowed the offending images to display at the correct width of 200, but then performance suffered again.
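For reference, this is roughly the loading code in question (`path` stands in for the real image path; the CacheOption and Freeze lines are additions that usually help in list scenarios):

```csharp
var bitmap = new BitmapImage();
bitmap.BeginInit();
bitmap.UriSource = new Uri(path);
bitmap.DecodePixelWidth = 200;                 // decode at 200px wide instead of full size
bitmap.CacheOption = BitmapCacheOption.OnLoad; // load eagerly, then release the file
bitmap.EndInit();
bitmap.Freeze();
```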
At this point I am not even sure where to start looking, and would be really grateful for a kick in the right direction.
EDIT: Some more information below about the images and how I am using them.
Most of the images are around 1000x1500, although some have odd dimensions like 1000x1513. All of the images are JPEGs. Currently each image is placed in a custom user control I designed; nothing too fancy, just a background with some text. Each user control is then placed in a grid, in its own row/column, and the grid is inside a ScrollViewer so the user can scroll through the list. This might not be the best way to accomplish what I am looking for, but it's what I came up with quickly and it works for the most part. I'd be happy to switch to another method of display if it would accomplish what I want in an easier or more concise way.
The intended result is a movie browsing app. There will be a scrollable list of movies, each represented as its own tile, complete with title, movie poster, genre info, rating, and description. The list will be sortable on various fields. The information about the movies is stored in a SQL database on another machine. The images are originally stored on another machine but are copied locally to improve performance.
EDIT: I was able to solve the issue by not using DecodePixelWidth and instead saving a copy of each image at the desired size, which improved performance. Youngjae's recommendation against DecodePixelWidth, along with his mention of a virtualized list, led me to the following set of articles on creating a virtualized wrap panel, which should solve any remaining performance issues. The articles are for Silverlight, but from what I understand Silverlight is basically a trimmed-down version of WPF, so if it works in Silverlight it should work in WPF, and it shouldn't be too difficult to convert for my use. (A sketch of the resize-and-save approach follows the article list.)
Part 1 - MeasureOverride
Part 2 - ArrangeOverride
Part 3 - Animation
Part 4 - Virtualization
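For completeness, a minimal sketch of the resize-and-save fix mentioned above, using the standard WPF imaging classes (the paths and target width are placeholders):

```csharp
using System;
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static void SaveResizedCopy(string sourcePath, string destPath, int width)
{
    var source = new BitmapImage(new Uri(sourcePath));
    double scale = (double)width / source.PixelWidth;

    // Scale once, encode to JPEG, and reuse the small file from then on.
    var resized = new TransformedBitmap(source, new ScaleTransform(scale, scale));
    var encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(resized));
    using (var stream = File.OpenWrite(destPath))
        encoder.Save(stream);
}
```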
In the MSDN link you can find the following note:
The JPEG and Portable Network Graphics (PNG) codecs natively decode the image to the specified size; other codecs decode the image at its original size and scale the image to the desired size.
For that reason, I recommend NOT using DecodePixelWidth for resizing purposes.
I don't know your original image sizes and formats, but wouldn't it be enough to use a virtualized list together with <Image Width="200" Stretch="Uniform">?
I need to speed up my image viewer, and wondering if I should be looking into creating my own DirectX control to do so.
My image viewer displays medical images, which can be pretty large; we're talking 55 MB when it comes to mammography. The pixel data is 16-bit greyscale stored in a ushort array. Without getting into the gory details, my current approach is to load the pixel data into an ImageSource and use the WPF Image control.
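For context, a rough sketch of how that pixel data becomes an ImageSource (the DPI values are placeholders):

```csharp
using System.Windows.Media;
using System.Windows.Media.Imaging;

static BitmapSource FromGray16(ushort[] pixels, int width, int height)
{
    int stride = width * 2; // 2 bytes per Gray16 pixel
    return BitmapSource.Create(width, height, 96, 96,
        PixelFormats.Gray16, null, pixels, stride);
}
```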
I've never done anything with DirectX. Is it worth diving into? Would it be any faster than the native WPF stuff, and if so, how much faster? Or should I just forget about DirectX and look for areas where I can improve my current approach?
Before somebody points it out: I know WPF uses DirectX under the hood. I'm wondering whether removing the WPF layer and writing the DirectX code myself will improve performance.
I have some experience drawing multi-gigabyte satellite and chart imagery. Working with imagery around 55 MB should probably be okay even without much optimization. You haven't really given enough detail to recommend one alternative over the other, so I will give my opinion on the pros and cons.
Using the 2D Windows APIs will be the simplest to implement, and it should always be fast enough if you don't need rotation and simply want to display an image and zoom and pan around. If you treat it as one large image, performance will suffer when you zoom out and draw with halftoning to get a nice smooth result, because it effectively has to read all 55 MB of the image every time it draws.
To get around this performance issue you can create multiple bitmaps, effectively mip-mapping your image. As you zoom out, you pick the reduced-resolution image closest to the resolution you are trying to draw. If you are not familiar with mip-mapping, here is a Wikipedia link:
http://en.wikipedia.org/wiki/Mipmap
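A minimal sketch of that idea in System.Drawing (the 256-pixel cutoff is arbitrary): build the halved copies once, then draw from the level nearest the current zoom so a zoomed-out view never has to read all 55 MB.

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

static List<Bitmap> BuildMipChain(Bitmap full)
{
    var levels = new List<Bitmap> { full };
    var current = full;
    while (current.Width > 256 && current.Height > 256)
    {
        // Each level is half the previous one in both dimensions.
        current = new Bitmap(current, current.Width / 2, current.Height / 2);
        levels.Add(current);
    }
    return levels;
}

// zoom 1.0 = one image pixel per screen pixel; pick the nearest smaller level.
static Bitmap LevelFor(List<Bitmap> levels, double zoom)
{
    int index = (int)Math.Floor(Math.Log(1.0 / zoom, 2));
    return levels[Math.Max(0, Math.Min(index, levels.Count - 1))];
}
```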
Implementing it with DirectX will be ten times as difficult. Different graphics hardware has different maximum texture sizes, so you will most likely need to break your image up into multiple textures to draw it, and you will also have to keep track of render states, viewing matrices, and so on.
However, if you do use DirectX, you can implement lots of real-time photo adjustments. You can do real-time rotation simply by adjusting the view matrices, and real-time contrast, brightness, gamma, and sharpness are easy to do in a pixel shader.
There are two other APIs I might suggest. If you are willing to limit yourself to Vista or later, Direct2D would be a little simpler than Direct3D. And if you will ever need to implement it on a non-Windows platform, I would suggest OpenGL instead. My current project is in Direct3D because, a few years ago when we started it, OpenGL was falling behind and I didn't foresee the popularity of Android devices. I now wish we had used OpenGL.
Try profiling to see where WPF is spending its time. Are you displaying the images at their native resolution? If not, it might be worthwhile to do some preprocessing and create half-resolution versions.
I'm creating a CAD viewer which deals with very large image files, and I am trying to optimise it for as high a frame rate and as low a memory footprint as possible.
It uses GDI+ for rendering onto a panel.
Its current flaw is image rendering. Some of the files I'm using reference particularly big images (8000x8000 pixels). I've optimised memory usage by only loading images when they become visible and disposing of them when they're not, which reduces the chance of running out of memory while keeping images from being loaded and unloaded too often; however, rendering the images themselves (context.DrawImage) still carries a very large overhead.
I'm now exploring ways of blitting the images into a smaller buffer of some sort, rendering this (generally much smaller) buffer, and then refreshing/rebuilding it when the zoom level changes significantly.
The problem is, I can't find any provision for this in GDI+ whatsoever. Can anyone suggest how I could achieve it?
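One way to sketch this with plain System.Drawing: render the visible portion of the large image once into a viewport-sized offscreen bitmap at the current zoom, blit that cheap buffer on every paint, and rebuild it only when the zoom changes significantly (the names and interpolation choice here are placeholders):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

Bitmap RebuildBuffer(Image large, Size viewport, float zoom, PointF origin)
{
    var buffer = new Bitmap(viewport.Width, viewport.Height);
    using (var g = Graphics.FromImage(buffer))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBilinear;
        // Map the visible source region onto the whole buffer in one call.
        var dest = new RectangleF(0, 0, viewport.Width, viewport.Height);
        var src = new RectangleF(origin.X, origin.Y,
                                 viewport.Width / zoom, viewport.Height / zoom);
        g.DrawImage(large, dest, src, GraphicsUnit.Pixel);
    }
    return buffer; // in OnPaint: e.Graphics.DrawImageUnscaled(buffer, 0, 0);
}
```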
I don't think GDI is designed for such high-speed image updates. If you are scrolling the image and tracking the mouse with each move, try shifting sections of the image and filling in only the space opened up by the shift. Essentially, reuse the tricks programmers relied on to scroll and pan graphics smoothly back when CPUs were slow and RAM was small.
If you're creating a new graphics application that needs a high frame rate and are looking for suggestions, then I suggest abandoning GDI+ in favour of WPF. WPF uses hardware acceleration and supports retained-mode graphics, which gives much better performance for less work than GDI+.
If there is some limitation that rules out WPF, please explain it in your question; such limitations can also affect GDI+ drawing.
GDI was binned in favour of Direct3D, as 3D elements came into the equation anyway. The images were turned into single thumbnails plus larger tiles that are loaded in and out as required.
I faced a similar problem when developing my own GIS application. The best solution I found (even when using WPF) is to tile big images and display only the portions that are visible. That being said, I would switch to WPF, not only for the reasons given in the other answers but also for its good imaging support. See this link for more information.
I have a front-end program for PNDs running Windows CE (both 5.0 and 6.0). It uses a large number of images (currently in PNG format) as buttons or for decoration. The images are loaded from the SD card via new Bitmap(path).
I'm currently using v3.5 of the framework.
After loading, the OS plus my application have consumed 75 to 80% of the device's memory.
What are good ways to optimize all those files?
The only way that comes to mind to optimise these pictures is to resize them to the actual size at which they are needed (most icons will be used at 16x16, for example). If you store them as bitmaps on your card, you can also shrink the colour palette to match the exact needs of each picture: a 16x16 picture has 256 pixels, so you need a palette of at most 256 colors, and a self-defined 16-color palette may even be enough if the picture only contains 16 different colors.
As a second approach, check whether you have loaded the same picture multiple times. If so, load it once and reuse it.
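A minimal sketch of such a cache, keyed by file path so each image is decoded from the SD card only once:

```csharp
using System.Collections.Generic;
using System.Drawing;

static class ImageCache
{
    static readonly Dictionary<string, Bitmap> cache = new Dictionary<string, Bitmap>();

    public static Bitmap Get(string path)
    {
        Bitmap bmp;
        if (!cache.TryGetValue(path, out bmp))
        {
            bmp = new Bitmap(path); // hits the SD card only on first use
            cache[path] = bmp;
        }
        return bmp;
    }
}
```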
A last one, regarding background pictures: if you have a solid background, you don't need a full-size image of it; just take a 1x1 bitmap and stretch it to the needed size. The same goes for gradient backgrounds, except you stretch a 1x2 bitmap. And if you have a regular pattern, take the smallest unique brick out of it and use a tiling mechanism.
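A sketch of the stretched-background trick; DrawImage with a destination rectangle scales the tiny source bitmap over the whole client area:

```csharp
using System.Drawing;

// 'tiny' is a 1x1 bitmap for a solid colour, or 1x2 for a vertical gradient.
static void PaintBackground(Graphics g, Bitmap tiny, Rectangle clientArea)
{
    g.DrawImage(tiny, clientArea,
        new Rectangle(0, 0, tiny.Width, tiny.Height), GraphicsUnit.Pixel);
}
```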
If you create the OS, is it possible to store the images as part of the OS?
If that is possible, and the OS image is loaded fully into RAM, you can load the images only when they need to be displayed and unload them when they are not needed. This also eliminates some loading time, since accessing RAM is faster than accessing the SD card.
Another trick along the same lines would be to copy all the images to a RAM-based file system and load them only as needed; the downside is that this must be redone after every reboot.
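A sketch of that copy-to-RAM idea (the folder paths are placeholders and depend on the device/OEM):

```csharp
using System.IO;

static void CacheToRam(string sdFolder, string ramFolder)
{
    foreach (string src in Directory.GetFiles(sdFolder, "*.png"))
    {
        string dest = Path.Combine(ramFolder, Path.GetFileName(src));
        if (!File.Exists(dest))
            File.Copy(src, dest); // must be repeated after every reboot
    }
}
```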