I had a project called Time Lapse Viewer which opens a folder containing thousands of images and animates them using a Timer.
The application is written in XAML, by the way, and the code below is what I use to load the image.
bi = new BitmapImage();
bi.BeginInit();
bi.DecodePixelWidth = 720;                 // decode at a reduced width to save memory
bi.CacheOption = BitmapCacheOption.OnLoad; // load fully at EndInit and release the file
bi.UriSource = new Uri(this.ImageFilenames[this._image_index]);
bi.EndInit();
this.imgImage.Source = bi;
The images being loaded come from a DSLR; they have a resolution of 1536x2034 and are about 1.3 MB each.
The code above is already fast enough to load the image at a 720-pixel decode width, but there is still some noticeable lag. I understand that loading an image for the first time takes a bit of time.
Is there any faster way to load an image, other than loading thumbnails?
Thanks.
You can load in a separate thread (a BitmapImage CAN be accessed from another thread once it is frozen; call Freeze() after loading).
You can even do that in multiple threads.
But that is it. This is why most systems prepare thumbnails and store them in a separate file for repeated use.
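A minimal sketch of that pattern, assuming WPF with the Task API available (the 720-pixel decode width and the imgImage control are taken from the question above; LoadImageInBackground is a hypothetical name):

```csharp
using System;
using System.Threading.Tasks;
using System.Windows.Media.Imaging;

// Decode on a worker thread, freeze the bitmap, then hand it to the UI thread.
private void LoadImageInBackground(string path)
{
    Task.Run(() =>
    {
        var bi = new BitmapImage();
        bi.BeginInit();
        bi.DecodePixelWidth = 720;                 // decode small, as in the question
        bi.CacheOption = BitmapCacheOption.OnLoad; // read the file fully now
        bi.UriSource = new Uri(path);
        bi.EndInit();
        bi.Freeze(); // a frozen BitmapImage may be used from any thread

        // Marshal the frozen bitmap back to the dispatcher (UI) thread.
        Dispatcher.BeginInvoke(new Action(() => imgImage.Source = bi));
    });
}
```

Note that Freeze() must be called before the bitmap crosses threads; touching an unfrozen BitmapImage from a different thread throws an InvalidOperationException.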
Related
I am trying to develop a photo slideshow on a panel PC running Windows CE. Each image I want to show in a PictureBox on my form is a *.jpg file. The files are roughly 1 MB with a resolution of 2304x1728.
When I use the following code, I get an OutOfMemory exception.
Bitmap bitmap = new Bitmap(file_name);
pictureBox.Image = bitmap;
After researching I found out that the *.jpg files might be "too big" to fit into a Bitmap. With the Compact Framework on VS2008 it is not possible for me to use something like
Image image1 = Image.From(file_name)
How can I get an image from a jpg file into my PictureBox?
EDIT[1]:
Thanks for your comments. I figured out that my pictures could not be loaded because my device does not have enough memory to hold such "huge" images temporarily. I solved the issue by writing code that resizes my images before I copy them to my device.
Image bitmapNew = null;
using (Bitmap bitmap = (Bitmap)Image.FromFile(filename))
{
    // Preserve the aspect ratio while fitting within the device's maximum size
    double proportion = (double)bitmap.Width / bitmap.Height;
    proportion = Math.Round(proportion, 2);
    if (proportion > 1)
    {
        // Landscape: constrain the width
        iWidth = iWidthMax;
        iHeight = (int)(iWidthMax / proportion);
    }
    else
    {
        // Portrait (or square): constrain the height
        iHeight = iHeightMax;
        iWidth = (int)(iHeightMax * proportion);
    }
    bitmapNew = new Bitmap(bitmap, new Size(iWidth, iHeight));
}
The device's resolution defines the parameters iWidthMax and iHeightMax.
The file size of a JPG or any other compressed image file does not tell you how many pixels it stores (5, 10 or more megapixels). To show an image on screen, all pixels have to be loaded into memory. For larger images this is impossible, especially on Windows Mobile with its 32 MB process slot limitation.
And even if the whole image could be loaded, it would not make sense, as the screen does not have that many pixels and would only show every 10th or 20th pixel.
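To put numbers on that, a decoded frame needs roughly width × height × bytes-per-pixel, regardless of the JPEG's file size. A quick sketch (assuming a 24-bit, 3-bytes-per-pixel buffer; 32-bit buffers would be a third larger):

```csharp
using System;

static class DecodeMath
{
    // Memory needed to hold one fully decoded frame, assuming 3 bytes per pixel.
    public static long DecodedBytes(int width, int height, int bytesPerPixel = 3)
        => (long)width * height * bytesPerPixel;
}
```

For the 2304x1728 photos above that is about 11.9 MB per frame, while a 640x480 screen-sized copy needs under 1 MB, a big difference inside a 32 MB process slot.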
The best approach is to load only what the screen and the memory can handle. The standard classes are all memory based, so you will reach the memory limit very early. But there is OpenNETCF, which is able to 'scale' images on the fly using only the file stream.
I used that in https://github.com/hjgode/eMdiMail/blob/master/ImagePanel/ImageHelper.cs to get scaled images that make sense to load and show. Look at the code for getScaledBitmap() and CreateThumbnail(). These functions are part of a custom control in that code but can easily be reused for other purposes.
The project was used to scan documents or business cards, show them as thumbnails and zoomed, and then e-mail the scanned images to a document server.
I'm creating a visual which can contain some images. If I use normal (non-transparent) PNG images it works fine (in terms of performance and printing), but as soon as I replace even one of them with a transparent PNG, it becomes very slow and printing the visual takes more than three times as long.
I'm using the following code.
var source = new Uri(filePath, UriKind.RelativeOrAbsolute);
BitmapImage imageBitMap = new BitmapImage(source);
var pictureImage = new Image();
pictureImage.Source = imageBitMap;
grid.Children.Add(pictureImage);
I used the ANTS Performance Profiler, and here are the statistics:
Using 4 non-transparent graphics, each approximately 50-100 KB, the average time to render each graphic was 10 ms.
As soon as I replace one of them with an equivalent transparent graphic, the average time shoots up to 34 ms per graphic.
Any ideas why it takes that long for transparent graphics, and how can I reduce it?
I also tried converting the transparent PNGs into XAML using Adobe Illustrator and Inkscape, but without success.
The Adobe Illustrator plug-in converts the PNG to a 1 KB XAML file containing only an empty Canvas with a Viewbox in it.
Inkscape converts the whole image into a base64 string and sets it as the Source of the Image tag, but that does not display in the visual at all.
I have a list of high-resolution images obtained from a web server. I need to populate them into Surface SDK ScatterView items. To show the images I am using an Image control for each one.
Code Logic:
The user has identity tags which, when placed on the Surface table, fetch a list of high-resolution images associated with that tag. The fetching of the images runs in the background to avoid jamming the UI. The code to obtain the JPEG images in the background is:
public BitmapSource FetchImage(string URLlink)
{
    JpegBitmapDecoder decoder = null;
    BitmapSource bitmapSource = null;
    try
    {
        decoder = new JpegBitmapDecoder(new Uri(URLlink, UriKind.Absolute),
            BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);
    }
    catch (Exception)
    {
        // Fall back to a placeholder image bundled with the application
        decoder = new JpegBitmapDecoder(new Uri("pack://application:,,,/Resources/ImageNotFound.jpg", UriKind.RelativeOrAbsolute),
            BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnDemand);
    }
    finally
    {
        bitmapSource = decoder.Frames[0];
        bitmapSource.Freeze(); // allow the bitmap to cross back to the UI thread
    }
    return bitmapSource;
}
The images are downloaded from the server and displayed in the Image controls. However, there is a severe performance hit and the application just hangs when more than 10 images are loaded. With low-resolution images I can load even 20-30 without the application hanging or slowing down.
Since I read that the default scaling algorithm for the Image control is Fant, I tried changing the rendering properties to HighQuality. The application still hangs, and anything lower than that kills the whole idea of displaying a high-resolution image.
RenderOptions.SetBitmapScalingMode(mic.ItemImage, BitmapScalingMode.HighQuality);
Is there a better way of loading images? I was thinking of first saving the images to disk and then loading them into the image source. Would that improve performance? My thinking is that when I load images directly from a URL they are kept in memory, which eventually runs out; by saving the images first I might avoid this, but there is a chance .NET is actually doing the same thing, storing them in a temp file first and then loading.
I also tried changing the BitmapCacheOption to each of the available options, but it didn't improve anything.
The ScatterView cannot handle a large number of images.
For example, try populating it with 1000 rectangles of a single solid color: it will slow the application down.
The problem cannot really be resolved; Microsoft would have to rewrite the ScatterView.
However, you can deactivate some of the effects on the ScatterView.
I'm working on a silverlight project where users get to create their own Collages.
The problem
When loading a bunch of images using the BitmapImage class, Silverlight hogs a huge, unreasonable amount of RAM. 150 pictures, where a single one is at most 4.5 MB, take up about 1.6 GB of RAM, eventually throwing memory exceptions.
I'm loading them through streams, since the user selects their own photos.
What I'm looking for
A class, method, or some process to eliminate the huge amount of RAM being used. Speed is an issue, so I don't want to convert between image formats or anything like that. A fast resizing solution might work.
I've tried using a WriteableBitmap to render the images into, but I find this method forces me to reinvent the wheel when it comes to drag/drop and other things I want users to be able to do with the images.
What I would try is to load each stream and resize it to a thumbnail (say, 640x480) before loading the next one. Then let the user work with the smaller images. Once you're ready to generate the PDF, reload the JPEGs from the original streams one at a time, disposing of each bitmap before loading the next.
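For the resize step, the thumbnail size should preserve the source aspect ratio rather than being forced to exactly 640x480. A minimal sketch of that calculation (FitWithin is a hypothetical helper, not a Silverlight API):

```csharp
using System;

static class ThumbnailMath
{
    // Largest size that fits within (maxW, maxH) while keeping the aspect ratio.
    public static (int Width, int Height) FitWithin(int srcW, int srcH, int maxW, int maxH)
    {
        double scale = Math.Min((double)maxW / srcW, (double)maxH / srcH);
        if (scale >= 1.0)
            return (srcW, srcH); // already small enough; never upscale
        return ((int)Math.Round(srcW * scale), (int)Math.Round(srcH * scale));
    }
}
```

For example, a 3000x2000 source fits a 640x480 box as 640x427, and an image already smaller than the box is left alone.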
I'm guessing you're doing something like this:
Bitmap bitmap = new Bitmap(jpegFilename);
and then doing:
OnPaint (...)
{
    Graphics g = ...;
    g.DrawImage(bitmap, ...);
}
This will resize the huge JPEG image down to the size shown on screen every time you draw it. I'm guessing your JPEG is about 2500x2000 pixels. When you load a JPEG into a Bitmap, the loading code decompresses the data and stores it either as RGB data in a format that is easy to render (i.e. the same pixel format as the display) or as a Device Independent Bitmap (aka DIB). These bitmaps require far more RAM than the compressed JPEG.
Your current implementation is already doing format conversion and resizing, but in an inefficient way, i.e. resizing a huge image down to screen size every time it is rendered.
Ideally, you want to load the image and scale it down once. .NET has a system to do this:
Bitmap bitmap = new Bitmap(jpegFilename);
Bitmap thumbnail = (Bitmap)bitmap.GetThumbnailImage(width, height, null, IntPtr.Zero);
bitmap.Dispose(); // releases the unmanaged resources and makes the bitmap unusable - you may have been missing this step
bitmap = null;    // let the GC know the object is no longer needed
where width and height are the size of the required thumbnail. Now, this might produce images that don't look as good as you might want (though it will use any embedded thumbnail data if present, so it will be faster), in which case do a bitmap-to-bitmap resize.
When you create the PDF file, you'll need to reload the JPEG data, but from a user's point of view, that's OK. I'm sure the user won't mind waiting a short while to export the data to a PDF so long as you have some feedback to let the user know it's being worked on. You can also do this in a background thread and let the user work on another collage.
What might be happening to you is a little-known fact about garbage collection that got me as well: if an object is big enough (I don't remember exactly where the line is), the garbage collector may keep it in memory even though nothing in scope references it, on the theory that it is cheaper to keep it around in case you ever want it again than to delete it and reload it later.
This isn't a complete solution, but if you're going to be converting between bitmaps and JPEG's (and vice versa), you'll need to look into the FJCore image library. It's reasonably simple to use, and allows you to do things like resize JPEG images or move them to a different quality. If you're using Silverlight for client-side image processing, this library probably won't be sufficient, but it's certainly necessary.
You should also look into how you're presenting the images to the user. If you're doing collages with Silverlight, presumably you won't be able to use virtualizing controls, since the users will be manipulating all 150 images at once. But as other folks have said, you should also make sure you're not presenting bitmaps based on full-sized JPEG files either. A 1MB compressed JPEG is probably going to expand to a 10MB Bitmap, which is likely where a lot of your trouble is coming from. Make sure that you're basing the images you present to the user on much smaller (lower quality and resized) JPEG files.
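As a sanity check on that 1 MB to 10 MB expansion estimate: the decoded size depends only on pixel dimensions, not on the compressed file size. A rough sketch, assuming 4-byte BGRA pixels (the 1600x1200 example size is an assumption, a plausible size for a ~1 MB JPEG):

```csharp
using System;

static class BitmapBudget
{
    // Decoded size in bytes at 4 bytes per pixel (BGRA32).
    public static long DecodedBytes(int width, int height)
        => (long)width * height * 4;
}
```

A 1600x1200 photo decodes to about 7.3 MB, so 150 of them is over 1 GB, which lines up with the 1.6 GB figure in the question.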
The solution that finally worked for me was using WriteableBitmapEx to do the following.
Of course, I only create thumbnails if the image isn't already small enough to keep in memory.
The gotcha was that WriteableBitmap doesn't have a parameterless constructor, but initializing it with 0,0 as the size and then loading the source sets the dimensions automatically. That didn't come naturally to me.
Thanks for the help everybody!
private WriteableBitmap getThumbnailFromBitmapStream(Stream bitmapStream, PhotoFrame photoFrame)
{
    // WriteableBitmap has no parameterless constructor; SetSource fixes up the size.
    WriteableBitmap inputBitmap = new WriteableBitmap(0, 0);
    inputBitmap.SetSource(bitmapStream);
    Size thumbnailSize = getThumbnailSizeFromWriteableBitmap(inputBitmap, photoFrame.size);
    WriteableBitmap thumbnail = inputBitmap.Resize(
        (int)thumbnailSize.Width, (int)thumbnailSize.Height,
        WriteableBitmapExtensions.Interpolation.NearestNeighbor);
    return thumbnail;
}
One additional way to reduce RAM usage:
Don't load images that are currently invisible; load them as the user scrolls the page. Web developers use this method to improve page load speed. For you, it is a way to avoid keeping the whole set of images in RAM.
And I think it is better not to make thumbnails at runtime, but to store them next to the full-size pictures and fetch only links to them. When needed, you can always take the link to the full-size picture and load it.
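Storing thumbnails next to the originals can be as simple as a naming convention. A minimal sketch (the _thumb suffix is an assumption, not a standard):

```csharp
using System;
using System.IO;

static class ThumbPath
{
    // Hypothetical convention: photo.jpg -> photo_thumb.jpg in the same folder.
    public static string ForOriginal(string originalPath)
    {
        string dir  = Path.GetDirectoryName(originalPath) ?? "";
        string name = Path.GetFileNameWithoutExtension(originalPath);
        string ext  = Path.GetExtension(originalPath);
        return Path.Combine(dir, name + "_thumb" + ext);
    }
}
```

The page then links the _thumb file by default and swaps in the original only when the user asks for the full-size view.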
I'm trying to use 300 DPI TIF images for display on the web. At the moment, when the user uploads an image, I dynamically create a thumbnail. If a page is created referencing the high-res image at 500x500 px, can I use the same functionality to convert to a GIF/JPG on the fly? What would the resolution of the resulting JPG be?
EDIT:
To further explain the usage: the user uploads 300 DPI images which are approximately 3000x3000 pixels. The user is using these images to create a catalog page which will be used for PDF printing. While creating the page, we only need 72 DPI images for the screen, but for print we need the 300 DPI versions. Obviously they do not want to place a 3000x3000 px image on the page, so it needs to be resized to the correct viewing area, e.g. 500x500 px.
This boils down to a simple image resize. The discussion of DPIs is just ancillary data to calculate the scale factor.
As @Guffa said, you should do this at upload time so that you can serve static images in your viewer.
This will be a load on the server:
1. Load the full image. This will be about 27 MB of memory for your 3000x3000 images.
2. Resize. Lots of math, done lazily (still CPU intensive).
3. Compress. More CPU, plus the cost of writing to disk.
Since you are already taking the time to generate a thumbnail, you can amortize that cost and this one by not repeating step 1 (see the code).
After an image is uploaded, I would recommend spinning off a thread to do this work. It is a load on the web server for sure, but your only other option is to devote a second machine to the task, and the work has to be done eventually.
Here is some code to do the job. The important lines are these:
OutputAsJpeg(Resize(big, 300.0, 72.0), new FileStream("ScreenView.jpg", FileMode.Create));
OutputAsJpeg(Resize(big, bigSize, 64.0), new FileStream("Thumbnail.jpg", FileMode.Create));
We can resize the big image however we need. The first line just scales it down by a fixed factor (72.0 / 300.0). The second forces the image to a maximum final dimension of 64 pixels (scale factor = 64.0 / 3000.0).
using System;
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

BitmapSource Resize(BitmapSource original, double originalScale, double newScale)
{
    double s = newScale / originalScale;
    return new TransformedBitmap(original, new ScaleTransform(s, s));
}

// "out" is a reserved word in C#, so the stream parameter is named "output".
void OutputAsJpeg(BitmapSource src, Stream output)
{
    var encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(src));
    encoder.Save(output);
}

// Load up your bitmap from the file system or whatever,
// then dump it out to a smaller version and a thumbnail.
// Assumes thumbnails have a max dimension of 64.
BitmapSource big = new BitmapImage(new Uri("BigPage0.png", UriKind.RelativeOrAbsolute));
double bigSize = Math.Max(big.PixelWidth, big.PixelHeight);
OutputAsJpeg(Resize(big, 300.0, 72.0), new FileStream("ScreenView.jpg", FileMode.Create));
OutputAsJpeg(Resize(big, bigSize, 64.0), new FileStream("Thumbnail.jpg", FileMode.Create));
If I understand what you want, you're trying to make a GIF or JPG thumbnail of a very high-resolution TIF for web display. If not, I apologize in advance...
If you want the thumbnail to be 500x500 px, that is the resolution of the JPG/GIF you'll want to create: 500x500, or at most 500 on the longer side (to fit the box without distorting the image).
For display on the web, the DPI does not matter. Just use the pixel resolution you want directly.
Technically the JPG/GIF is created at 72 DPI by the .NET Image classes, but the DPI has no meaning to the browser; it just uses the 500x500 pixel dimensions.
When displaying an image on the web, the DPI (or, more correctly, PPI) setting is irrelevant; only the size in pixels matters.
You can convert an image on the fly, but it is very work-intensive for the server to do that every time the image is displayed. You should instead create the sizes you need when the user uploads the image.