I have a C# application that needs to load about 50 TIFF images from a network drive. Each image is about 10-15 MB. I have to load these images, resize them, and export them to a PDF file.
Currently, I am using the following method to load the images from the network drive:
Image image = Bitmap.FromFile(path.LocalPath);
The problem is that loading the 50 images takes quite a lot of time, which is not tolerable for my application scenario. Is there a way to speed up the image loading process?
I suggest you copy them to a local drive first. I suspect that Bitmap.FromFile may seek around the file (possibly reading redundantly) in a way which isn't a good fit for network drives - whereas just copying the files locally and then using Bitmap.FromFile does the expensive part (the network transfer) once.
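A minimal sketch of that approach (the temp-folder location, the sequential loop, and the imagePaths collection are all just illustrative):

using System;
using System.Drawing;
using System.IO;

// Copy each TIFF to a local temp folder first, then load it from there.
// The network transfer happens exactly once per file as a straight
// sequential read, which network shares handle much better than the
// seek-heavy access pattern of decoding in place.
string tempDir = Path.Combine(Path.GetTempPath(), "tiff-cache");
Directory.CreateDirectory(tempDir);

foreach (Uri path in imagePaths) // imagePaths: hypothetical list of the 50 network URIs
{
    string localCopy = Path.Combine(tempDir, Path.GetFileName(path.LocalPath));
    File.Copy(path.LocalPath, localCopy, overwrite: true);

    using (Image image = Bitmap.FromFile(localCopy))
    {
        // resize and add to the PDF here
    }
}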
I am building a mobile app similar to Instagram in terms of working with images. I am using Amazon S3 to store the images and MySQL for the file paths. I'm trying to ensure users upload great-quality pictures while keeping the file size reasonable. Should I compress the images? Does anyone have an idea what an acceptable size for an image is?
The definition of "acceptable" is totally up to you!
You should certainly store a high-quality image (probably at the original resolution), but you would also want to have smaller images for thumbnails and web/app viewing. This will make it faster to serve images and will reduce bandwidth costs.
A common technique is to have Amazon S3 trigger an AWS Lambda function when a new image is uploaded. The Lambda function can then resize the image into multiple sizes. Later, when your app wishes to retrieve an image, it can point to a resized image rather than the original.
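A rough sketch of such a Lambda function in C#, assuming the AWSSDK.S3, Amazon.Lambda.S3Events, and SixLabors.ImageSharp packages (the target widths and the resized/ key prefix are illustrative choices, not part of any standard):

using System.IO;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.S3;
using Amazon.S3.Model;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

public class ResizeFunction
{
    private readonly IAmazonS3 _s3 = new AmazonS3Client();
    private static readonly int[] Widths = { 200, 800 }; // thumbnail and viewing sizes

    public async Task Handler(S3Event s3Event, ILambdaContext context)
    {
        foreach (var record in s3Event.Records)
        {
            string bucket = record.S3.Bucket.Name;
            string key = record.S3.Object.Key;

            // Buffer the original into memory before decoding.
            using var response = await _s3.GetObjectAsync(bucket, key);
            using var source = new MemoryStream();
            await response.ResponseStream.CopyToAsync(source);
            source.Position = 0;

            using var original = await Image.LoadAsync(source);

            foreach (int width in Widths)
            {
                // Width only; a height of 0 preserves the aspect ratio.
                using var resized = original.Clone(x => x.Resize(width, 0));
                using var buffer = new MemoryStream();
                await resized.SaveAsJpegAsync(buffer);
                buffer.Position = 0;

                await _s3.PutObjectAsync(new PutObjectRequest
                {
                    BucketName = bucket,
                    Key = $"resized/{width}/{key}",
                    InputStream = buffer
                });
            }
        }
    }
}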
When resizing images, you can also consider lowering the image quality. This lets JPG files shrink in size without reducing their resolution.
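For example, with System.Drawing you can re-encode a JPG at a lower quality setting (the 75 here is just an illustrative value; pick one by inspecting results):

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

// Re-save an image as JPEG at ~75% quality: same pixel dimensions,
// noticeably smaller file.
ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.MimeType == "image/jpeg");

using (var image = Image.FromFile("input.jpg"))
{
    var encoderParams = new EncoderParameters(1);
    encoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 75L);
    image.Save("output.jpg", jpegCodec, encoderParams);
}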
I am loading some JPEG and PNG images into a GridView. I am using SoftwareBitmap to represent the images.
However, SoftwareBitmap stores an uncompressed form of the image. So the problem is that when loading many images, my app eats lots of RAM, and I am concerned about the high memory usage.
I know that the GridView handles virtualization by itself.
While loading only about 150 images (90 MB as compressed image files on disk), the app's memory usage rises close to 500 MB!
How can I optimize? Do I need to use some SoftwareBitmap feature or an alternative I am unaware of? Or will I have to do some kind of image processing to store a compressed version in RAM (I don't even know if that's possible)?
I am creating a Windows Phone 8.1 (RT) app where I have a few images both in LocalStorage and in the Pictures Library, and I am loading the images using GetThumbnailAsync().
For a 6 MB+ PNG image, GetThumbnailAsync() in the Pictures Library takes a few milliseconds, while the same image, when copied to LocalStorage in the app, takes around 10 seconds to return the thumbnail.
I also used
GetThumbnailAsync(ThumbnailMode.ListView, 100, ThumbnailOptions.ResizeThumbnail)
It still takes a long time, though it returns the thumbnail at the desired pixel size. Can anyone point out why it takes so much time in the case of LocalStorage, and whether there are any alternatives to make it fast?
The system pre-caches thumbnails for images in the pictures library, whereas it can't do that for images in an app's isolated storage.
There are two workarounds here:
Move the picture to a public location where the system can pre-generate a thumbnail
Embed a thumbnail in the EXIF data for the image in your local storage. Then the system can do a fast extract and return a thumbnail much more quickly. Currently it has to decode the entire 6+ MB file to generate a thumbnail, whereas a fast extract only needs to pop open the much smaller embedded thumbnail (see the sketch below).
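A rough sketch of that second workaround using the Windows.Graphics.Imaging APIs (the helper name and the overwrite-in-place approach are just illustrative):

using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Storage;
using Windows.Storage.Streams;

// Hypothetical helper: re-encodes an image file with an embedded thumbnail,
// so later GetThumbnailAsync calls can fast-extract it instead of decoding
// the full-size image.
public static async Task EmbedThumbnailAsync(StorageFile file)
{
    using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.ReadWrite))
    {
        BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);

        var buffer = new InMemoryRandomAccessStream();
        BitmapEncoder encoder = await BitmapEncoder.CreateForTranscodingAsync(buffer, decoder);
        encoder.IsThumbnailGenerated = true; // ask the codec to embed a thumbnail
        await encoder.FlushAsync();

        // Overwrite the original file with the transcoded version.
        stream.Size = 0;
        await RandomAccessStream.CopyAsync(buffer.GetInputStreamAt(0),
                                           stream.GetOutputStreamAt(0));
    }
}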
I'm working on a web site which will host thousands of user-uploaded images in the formats .png, .jpeg, and .gif.
Since there will be such a huge number of images, saving just a few KB of space per file will in the end mean quite a lot for the total storage requirements.
My first thought was to enable Windows folder compression on the folder the files are stored in (using a Windows / IIS server). On a total of 1 GB of data, the total space saved was ~200 KB.
This seems like a poor result to me. I therefore went to check whether the Windows folder compression could be tweaked, but according to this post it can't be: NTFS compressed folders
My next thought was that I could use libraries such as SevenZipSharp to compress the files individually as I save them. But before doing that, I tested a few different compression programs on a few images.
The results on a 7 MB .gif were:
7z, compress to .7z: 1 KB space saved
7z, compress to .zip: 2 KB space INCREASE
Windows native zip: 4 KB space saved
So this leaves me with two thoughts: either the zipping programs I'm using aren't very good, or images are pretty much already compressed as far as they can be (and I'm surprised that Windows' built-in compression beats 7z).
So my question is, is there any way to decrease the file size of an image archive consisting of the image formats listed above?
the zipping programs I'm using aren't very good, or images are pretty much already compressed as far as they can be
Most common image formats (PNG, JPEG, etc.) are already compressed. Compressing a file twice will almost never yield any positive result; most likely it will only increase the file size.
So my question is, is there any way to decrease the file size of an image archive consisting of the image formats listed above?
No, not likely. Compressed files might have at most a little more to give, but you have to specialize on the images themselves, not the compression algorithm. Some good options are available in Robert Levy's post. A tool I have used to strip out metadata is PNGOUT.
Most users will likely be uploading files that already have a basic level of compression applied, so that's why you aren't seeing a ton of benefit. Some users may be uploading uncompressed files, though, in which case your attempts would make a difference.
That said, image compression should be thought of as a field distinct from normal file compression. Normal file compression techniques are "lossless", ensuring that every bit of the file is restored when the file is uncompressed - images (and other media) can be compressed in "lossy" ways without degrading the file to an unacceptable level.
There are specialized tools which you can use to do things like strip out metadata, apply a slight blur, perform sampling, reduce quality, reduce dimensions, etc. Have a look at the answer here for a good example: Recommendation for compressing JPG files with ImageMagick. The top answer took the example file from 264 KB to 170 KB.
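As a rough C# illustration of those techniques, the Magick.NET binding for ImageMagick exposes them directly (the file names and the quality value are just examples):

using ImageMagick;

// Strip metadata and re-encode at a lower JPEG quality: typically a
// large size reduction with little visible change.
using (var image = new MagickImage("input.jpg"))
{
    image.Strip();      // remove EXIF and other metadata
    image.Quality = 85; // lossy re-encode at quality 85
    image.Write("output.jpg");
}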
I am working on a system that stores many images in a database as byte[]. Each byte[] is already a multi-page TIFF, but I need to retrieve the images and convert them all into one multi-page TIFF. The system previously used the System.Drawing.Image classes, with Save and SaveAdd. This was nice in that it saves the file progressively, so memory usage was minimal; however, GDI+ concurrency issues were encountered (this is running on a back end in COM+).
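For reference, the progressive GDI+ pattern being described looks roughly like this (firstPage, remainingPages, and outputPath are hypothetical):

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.MimeType == "image/tiff");
var ep = new EncoderParameters(1);

// First frame: open a multi-frame file on disk.
ep.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
firstPage.Save(outputPath, tiffCodec, ep);

// Each further frame is flushed to disk as it is added, so only one
// frame needs to be held in memory at a time.
ep.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
foreach (Image page in remainingPages)
    firstPage.SaveAdd(page, ep);

// Close the multi-frame file.
ep.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush);
firstPage.SaveAdd(ep);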
The methods were converted to use the System.Windows.Media.Imaging classes, TiffBitmapDecoder and TiffBitmapEncoder, with a bit of massaging in between. This resolved the concurrency issue, but I am struggling to find a way to save the image progressively (i.e. frame by frame) to limit memory usage, so the size of images that can be manipulated is much lower (e.g. I created a test 1.2 GB image using the GDI+ classes, and could have gone on, but could only create a ~600 MB file using the other method).
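For comparison, the WPF encoder collects every frame before its single Save call, which is why memory grows with the page count (a minimal sketch; pages and outputPath are hypothetical):

using System.IO;
using System.Windows.Media.Imaging;

var encoder = new TiffBitmapEncoder();

// Every frame must be added before Save; the encoder holds them all in
// memory, and Save can only be called once.
foreach (BitmapSource page in pages)
    encoder.Frames.Add(BitmapFrame.Create(page));

using (var output = File.Create(outputPath))
    encoder.Save(output);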
Is there any way to progressively save a multi-page TIFF image to avoid memory issues? If Save is called on the TiffBitmapEncoder more than once, an error is thrown.
I think I would use the standard .NET way to decode the TIFF images and write my own TIFF encoder that can write progressively to disk. The TIFF format specifications are public.
Decoding a TIFF is not that easy; that's why I would use the TiffBitmapDecoder for this. Encoding is easier, so I think it is doable to write an encoder that you can feed separate frames and that writes the necessary data progressively to disk. You'll probably have to go back and update the header of the resulting TIFF once you are done, to fix up the IFD (Image File Directory) entries.
Good luck!
I've done this via LibTIFF.NET; I can handle multi-gigabyte images this way with no pain. See my question at:
Using LibTIFF from c# to access tiled tiff images
Although I use it for tiled access, the memory issues are similar. LibTIFF allows full access to all TIFF functions, so files can be read and stored in a directory-like manner.
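A minimal sketch of frame-by-frame writing with LibTiff.Net (BitMiracle's managed port; the tag values and the Page type are hypothetical placeholders for your decoded frames):

using BitMiracle.LibTiff.Classic;

// Append pages one at a time; WriteDirectory flushes each page's IFD to
// disk, so memory usage stays flat no matter how many pages follow.
using (Tiff output = Tiff.Open("combined.tif", "w"))
{
    foreach (Page page in pages) // hypothetical decoded-page source
    {
        output.SetField(TiffTag.IMAGEWIDTH, page.Width);
        output.SetField(TiffTag.IMAGELENGTH, page.Height);
        output.SetField(TiffTag.SAMPLESPERPIXEL, 1);
        output.SetField(TiffTag.BITSPERSAMPLE, 8);
        output.SetField(TiffTag.ROWSPERSTRIP, 1);
        output.SetField(TiffTag.PHOTOMETRIC, Photometric.MINISBLACK);
        output.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);

        for (int row = 0; row < page.Height; row++)
            output.WriteScanline(page.GetRow(row), row); // one byte[] per row

        output.WriteDirectory(); // finish this page, start the next
    }
}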
The other thing worth noting is the differences between GDI on different Windows versions. See GDI .NET exceptions.