I need to edit the image (increase its height) on the fly.
The files are mostly around 5000×4000 pixels. I see memory shoot up to its peak when I create a bitmap of large dimensions and call Graphics.DrawImage on the bmp instance.
How do I get rid of the OutOfMemoryException? Is there a way to work with large bitmaps in C#?
The problem is the huge amount of memory required for the operation. Yours is taking several gigabytes, so one solution could be to use a stream and process the file in chunks.
Alternatively, the best option may be to use a third-party library. Below are some for .NET:
AForge
Image Resizer
Also have a look at this SO question.
https://stackoverflow.com/questions/158756/what-is-the-best-image-manipulation-library
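If you stay with System.Drawing, the biggest lever is keeping only one large bitmap alive at a time and disposing the source before saving. A minimal sketch (the ExtendHeight helper and its parameters are illustrative, not from the question's code):

```csharp
using System;
using System.Drawing;
using System.Drawing.Drawing2D;

class ResizeSketch
{
    // Increase an image's height by drawing it onto a taller canvas.
    // The source is disposed before the save, so only one large
    // bitmap is alive at any given moment.
    public static void ExtendHeight(string inPath, string outPath, int extraHeight)
    {
        Bitmap target;
        using (var src = Image.FromFile(inPath))
        {
            target = new Bitmap(src.Width, src.Height + extraHeight);
            using (var g = Graphics.FromImage(target))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.DrawImage(src, 0, 0, src.Width, src.Height);
            }
        } // src released here, before the save allocates more memory
        using (target)
        {
            target.Save(outPath); // defaults to PNG when no encoder is given
        }
    }
}
```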
It depends on your application's specific requirements; it's not very clear from your post, but generally, when working with big media files (images, sound, video), I think a really good solution is
Memory Mapped Files
Save your image on disk in a memory-mapped file and resize it there, freeing your RAM as much as possible of the data you probably don't need fast access to (at that moment, at least).
Hope this helps.
Regards.
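As a concrete illustration of the idea, here is a minimal sketch with System.IO.MemoryMappedFiles, assuming (purely for illustration) the image is stored on disk as raw 8-bit grayscale pixels; only the touched row is mapped, so the rest of the file never has to sit in RAM:

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class MmfSketch
{
    // Treat a raw 8-bit grayscale image on disk as memory-mapped pixels
    // and invert one row in place, without loading the whole file.
    public static void InvertRow(string path, int width, int row)
    {
        using var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open);
        // Map only the bytes of the requested row.
        using var view = mmf.CreateViewAccessor(row * (long)width, width);
        for (int x = 0; x < width; x++)
            view.Write(x, (byte)(255 - view.ReadByte(x)));
    }
}
```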
Related
I'm currently working on a machine vision project. The issue is saving all of the images fast enough so that the queue of images doesn't build up in RAM and drain the user's memory. Is there any other method available for fast image saving?
This method helps with the CPU load, but it's still not fast enough: the queue of images builds up and overloads the RAM, so I don't know what else I can do to solve both issues.
The fastest way to write images in Halcon is to use their proprietary format, .hobj. It is much faster than any other lossless compression.
You can see a benchmark in the example write_image_benchmark.hdev.
The only disadvantage is that you cannot open this format without the Halcon license.
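A hedged sketch of what the save looks like from HALCON's .NET binding (assuming the standard HOperatorSet API; not runnable without a HALCON license):

```csharp
using HalconDotNet;

class HobjSaver
{
    // Write a HALCON image in the proprietary .hobj format.
    // "image" is an already-acquired HObject; the 0 is the fill
    // color parameter of write_image.
    public static void Save(HObject image, string path)
    {
        HOperatorSet.WriteImage(image, "hobj", 0, path);
    }
}
```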
I am building a Windows project in .NET 4.0 C#.
I am currently saving images to the hard drive, and that takes almost no memory because I only load an image when I want to view it. But now I need to stop creating image files on the hard drive and store them some other way, like writing each one to a memory stream, keeping it in an object, and serializing that down to disk. The important part is that I can't have the images visible on the hard drive; they must be encrypted or wrapped in an object or something.
So... when I tried to put each image in a memory stream, save it to a list, and then serialize the list down to disk, I got a huge memory allocation: for every image I create and save as a memory stream in my list, I allocate that memory, and my program grows to over 200-300 MB.
I really don't have any idea how to do this. Can I somehow save an image to a memory stream without allocating that memory in the program? Or can I save it some other way without having the pictures plainly visible as images on the hard drive?
The main thing is, as I said, I can't have the images as regular image files on the hard drive; they must not be viewable by the user outside the application. And I need an approach that doesn't consume all the computer's memory.
Thank you in advance!
save it to memory stream and not allocate that memory in the program
No. If it's in a memory stream, it is obviously in RAM.
So you need to either store the image entirely in RAM, or save it to disk.
If you don't want it to be viewable on the disk, then you need to encrypt the file so that the user can't view it outside of your application. How heavy the encryption is depends on how hard you want it to be to crack. If you don't care that much, then very basic XOR encryption will be very fast and not increase the size of the file. If you do care, then you want to use something like 3DES.
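As an illustration of the "very basic XOR encryption" option: it is a single pass over the bytes with a repeating key, symmetric (applying it twice restores the original) and size-preserving, but trivially crackable:

```csharp
using System;

class XorCipher
{
    // Very basic XOR "encryption": fast and size-preserving, but only
    // a deterrent against casual viewing, not real security.
    // Applying it twice with the same key restores the original bytes.
    public static void Apply(byte[] data, byte[] key)
    {
        for (int i = 0; i < data.Length; i++)
            data[i] ^= key[i % key.Length];
    }
}
```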
File access is built on the principle of streams, which can be plugged together in a chain. Instead of reading/writing the images from/to disk directly through a FileStream, you can plug a CryptoStream in between.
You can combine a GZipStream and a CryptoStream to make the pictures both smaller and encrypted.
This article shows you exactly how;
http://www.liensberger.it/web/blog/?p=33
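A sketch of that chain; this uses AES from the BCL instead of the 3DES mentioned above, and the key/IV handling is deliberately simplified for illustration:

```csharp
using System.IO;
using System.IO.Compression;
using System.Security.Cryptography;

class SecureImageStore
{
    // Write path: image bytes -> GZip (smaller) -> AES (unreadable) -> disk.
    public static void Save(byte[] imageBytes, string path, byte[] key, byte[] iv)
    {
        using var aes = Aes.Create();
        using var file = File.Create(path);
        using var crypto = new CryptoStream(file, aes.CreateEncryptor(key, iv),
                                            CryptoStreamMode.Write);
        using var gzip = new GZipStream(crypto, CompressionMode.Compress);
        gzip.Write(imageBytes, 0, imageBytes.Length);
        // Disposal order (gzip, then crypto, then file) flushes each layer.
    }

    // Read path: disk -> AES decrypt -> GZip decompress -> image bytes.
    public static byte[] Load(string path, byte[] key, byte[] iv)
    {
        using var aes = Aes.Create();
        using var file = File.OpenRead(path);
        using var crypto = new CryptoStream(file, aes.CreateDecryptor(key, iv),
                                            CryptoStreamMode.Read);
        using var gzip = new GZipStream(crypto, CompressionMode.Decompress);
        using var ms = new MemoryStream();
        gzip.CopyTo(ms);
        return ms.ToArray();
    }
}
```

In a real application the key must be stored or derived securely (e.g. from a password via a key-derivation function); hardcoding it only moves the problem.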
At the end of my process, I need to upload several multi-page .tiff images to a website. The files need to be very small, 500 KB or less, when I upload them.
The problem is, even after resizing them a lot while still keeping a few lines of text in some of them readable, they are around 1 MB each.
I already resize all images going into the TIFF files, but it's not enough. I also need a way to lower their quality to decrease their size.
Can C# do this, or would I need third-party software?
The files being uploaded MUST be .tiff.
You don't provide much detail about your data, so can only make some guesses as to what you might need to look at.
First, can you lose some resolution? Can you make the images smaller?
Second, can you lose some color depth? Are you saving the files in a color format when bilevel or greyscale images would suffice?
Third, how clean are these images? Are they photos, scanned documents, what? If they are scanned documents of text or drawings, then some pre-processing to remove noise can make a significant difference in size.
Lastly, what compression method are you saving the file with? Only a lossy format is going to give you the highest degree of compression in most circumstances.
Based on your follow-up:
1) If you can make the images smaller, that of course saves significant storage space. Determine the minimum acceptable resolution they need and standardize on that.
2) If you need to preserve color, then this step might not be as effective, since you would have to algorithmically reduce the dynamic range of colors used in the image to an acceptable level before compressing. If you are not sure what this means, you would probably be best to skip this entirely, unless you can spend time learning more about image processing and/or use an image processing library that will simplify it for you.
3) I don't think you addressed this in your comments. If you want more precise help, you should update your original question and add much more detail about what you are trying to accomplish. Provide some explanations of what/why you need to do in order to help determine what tradeoffs make sense.
4) Yes, JPG is a lossy format, but I think you may be confusing a few different things (or I may not be understanding your intent from your description). Suppose you first resize your original images down into new JPG files (intermediate image files), then build a TIFF, insert each resized JPG as a source image into a multi-page TIFF, and save that. In that case, the compression used in the intermediate files does not necessarily have any correlation with the compression format used in the TIFF file. Depending on what you use to build and create the TIFF, its compression format is chosen separately, and you probably need to specify those parameters when you save that file. If this is what you are doing, the intermediary step of saving JPG files may even be increasing the size a bit.
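To illustrate point 4 with GDI+: the TIFF's own compression is chosen via EncoderParameters at save time, independently of how any intermediate JPGs were compressed. A sketch using CCITT Group 4, which assumes a 1-bpp bilevel bitmap and is where the biggest savings for scanned text come from:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

class TiffSaver
{
    // Save a bitmap as TIFF with CCITT Group 4 compression.
    // CCITT4 is only valid for 1-bit-per-pixel (bilevel) images;
    // GDI+ will reject it for color or greyscale pixel formats.
    public static void SaveCcitt4(Bitmap bmp, string path)
    {
        ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.MimeType == "image/tiff");
        using var p = new EncoderParameters(1);
        p.Param[0] = new EncoderParameter(Encoder.Compression,
                                          (long)EncoderValue.CompressionCCITT4);
        bmp.Save(path, tiffCodec, p);
    }
}
```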
I have to combine, let's say, about 100 images (PNG files).
The problem is not combining them, that runs quick enough.
But loading them from storage takes up to 4 seconds.
That is too much time.
So I can use TPL or multiple threads, but it is still too slow.
How can I speed it up? Holding all the images in memory is not an option, unfortunately.
The images are quite small: from 4 KB to 10 KB.
I'm loading the images like this:
Image img = Image.FromFile(file);
Creating multiple threads does not improve I/O speed in your case; that is bound by your hard disk's read/write speed.
Loading 100 high quality images in 4 seconds seems normal.
Two ideas:
If the bulk of the lag really is from I/O, compress the files. Depending on their contents, even simple ZIP compression could reduce their size, meaning fewer bytes to read. The work is then decompressing them in memory; I don't know if that's applicable in your case.
Lazy load them. Do you need all 100 images loaded all the time? Perhaps you can just load the first ones, or the most important ones first, let the software do the other stuff while it finishes loading the remaining images in the background.
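The lazy-loading idea can be as small as wrapping each file in a Lazy&lt;Image&gt;, so the disk read happens only on first access (the folder scan and PNG filter are illustrative):

```csharp
using System;
using System.Drawing;
using System.IO;
using System.Linq;

class LazyGallery
{
    // Defer each disk read until the image is actually accessed;
    // nothing is loaded here, only deferred loaders are created.
    public static Lazy<Image>[] FromFolder(string folder) =>
        Directory.GetFiles(folder, "*.png")
                 .Select(f => new Lazy<Image>(() => Image.FromFile(f)))
                 .ToArray();
}
```

The first access to any element's `.Value` triggers its load; the rest can be forced in a background task while the UI works with what is already available.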
How do you load your image files? Please share a piece of your code. My guess is that you are not reading the whole file at once but something like byte by byte until EOF... One way to optimize file loading is to read the whole file into a preallocated memory buffer.
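A sketch of the whole-file-at-once approach using only the BCL: one sequential read into a buffer, then decoding from memory. Note that GDI+ requires the stream to remain alive for the lifetime of the Image, so it is deliberately not disposed here:

```csharp
using System.Drawing;
using System.IO;

class FastLoad
{
    // Read the whole file in a single I/O call, then decode from the
    // in-memory buffer instead of streaming from disk.
    public static Image LoadBuffered(string file)
    {
        byte[] buffer = File.ReadAllBytes(file); // one sequential read
        var ms = new MemoryStream(buffer);
        // GDI+ keeps a reference to the stream for the Image's lifetime,
        // so do not dispose ms while the Image is in use.
        return Image.FromStream(ms);
    }
}
```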
I am working on a system that stores many images in a database as byte[]. Each byte[] is a multi page tiff already, but I have a need to retrieve the images, converting them all to one multi page tiff. The system was previously using the System.Drawing.Image classes, with Save and SaveAdd - this was nice in that it saves the file progressively, and therefore memory usage was minimal, however GDI+ concurrency issues were encountered - this is running on a back end in COM+.
The methods were converted to use the System.Windows.Media.Imaging classes, TiffBitmapDecoder and TiffBitmapEncoder, with a bit of massaging in between. This resolved the concurrency issue, but I am struggling to find a way to save the image progressively (i.e. frame by frame) to limit memory usage, and therefore the size of images that can be manipulated is much lower (i.e. I created a test 1.2GB image using the GDI+ classes, and could have gone on, but could only create a ~600MB file using the other method).
Is there any way to progressively save a multi page tiff image to avoid memory issues? If Save is called on the TiffBitmapEncoder more than once an error is thrown.
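For reference, the GDI+ Save/SaveAdd pattern described above looks roughly like this (simplified to take only the first frame of each source file; paths are illustrative):

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

class ProgressiveTiff
{
    // GDI+ progressive multi-page save: the first frame is written with
    // the MultiFrame flag, later frames are appended with
    // FrameDimensionPage, and the file is closed with the Flush flag.
    public static void Combine(string outPath, string[] sourcePaths)
    {
        ImageCodecInfo codec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.MimeType == "image/tiff");
        using var ep = new EncoderParameters(1);

        using var first = Image.FromFile(sourcePaths[0]);
        ep.Param[0] = new EncoderParameter(Encoder.SaveFlag,
                                           (long)EncoderValue.MultiFrame);
        first.Save(outPath, codec, ep);

        foreach (string path in sourcePaths.Skip(1))
        {
            using var img = Image.FromFile(path);
            ep.Param[0] = new EncoderParameter(Encoder.SaveFlag,
                                               (long)EncoderValue.FrameDimensionPage);
            first.SaveAdd(img, ep);
        }

        ep.Param[0] = new EncoderParameter(Encoder.SaveFlag,
                                           (long)EncoderValue.Flush);
        first.SaveAdd(ep);
    }
}
```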
I think I would use the standard .NET way to decode the tiff images and write my own tiff encoder that can write progressively to disk. The tiff format specifications are public.
Decoding a TIFF is not that easy, which is why I would use the TiffBitmapDecoder for that part. Encoding is easier, so I think it is doable to write an encoder that you can feed with separate frames and that writes the necessary data progressively to disk. You'll probably have to update the header of the resulting TIFF once you are done, to fix up the IFD (Image File Directory) entries.
Good luck!
I've done this via LibTiff.Net; I can handle multi-gigabyte images this way with no pain. See my question at
Using LibTIFF from c# to access tiled tiff images
Although I use it for tiled access, the memory issues are similar. LibTIFF allows full access to all TIFF functions, so files can be read and stored in a directory-like manner.
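A sketch of the page-by-page write with LibTiff.Net (the BitMiracle port): each page's directory is flushed to disk as soon as it is written, so memory stays bounded at roughly one page. The tag choices here assume 8-bit grayscale pages and are illustrative:

```csharp
using BitMiracle.LibTiff.Classic;

class LibTiffAppend
{
    // Write one grayscale page and commit its directory to disk
    // immediately, keeping memory usage bounded at one page.
    public static void AppendPage(Tiff tif, byte[] pixels, int width, int height)
    {
        tif.SetField(TiffTag.IMAGEWIDTH, width);
        tif.SetField(TiffTag.IMAGELENGTH, height);
        tif.SetField(TiffTag.BITSPERSAMPLE, 8);
        tif.SetField(TiffTag.SAMPLESPERPIXEL, 1);
        tif.SetField(TiffTag.PHOTOMETRIC, Photometric.MINISBLACK);
        tif.SetField(TiffTag.ROWSPERSTRIP, height);
        tif.WriteEncodedStrip(0, pixels, pixels.Length);
        tif.WriteDirectory(); // commits this page's IFD to disk
    }
}
```

Typical usage would be `var tif = Tiff.Open("out.tif", "w");`, then AppendPage per frame as it is decoded, then disposing the Tiff handle.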
The other thing worth noting is the differences between GDI+ on different Windows versions. See GDI .NET exceptions.