Hey there!
Here is my setting:
I've got a C# application that extracts features from a series of images. Due to the size of the dataset (several thousand images) it is heavily parallelized, which is why we have a high-end machine with an SSD running Windows 7 x64 (.NET 4 runtime) to do the heavy lifting. I'm developing it on a Windows XP SP3 x86 machine under Visual Studio 2008 (.NET 3.5) with Windows Forms - no chance to move to WPF, by the way.
Edit3:
It's weird, but I think I finally found out what's going on. It seems to be the codec for the image format that yields different results on the two machines! I don't know exactly what is going on there, but the decoder on the XP machine produces saner results than the Win7 one. Sadly, the better version is still on the x86 XP system :(. I guess the only solution is changing the input image format to something lossless like PNG or BMP (stupid me for not thinking about the file format in the first place :)).
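For the record, a one-off re-encode of the dataset to PNG could look something like the sketch below, run on the machine whose decoder produces the correct results. inputDir is a placeholder; this is just an illustration, not production code.
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
// ...
foreach (string src in Directory.GetFiles(inputDir, "*.jpg"))
{
    using (Image img = Image.FromFile(src))
    {
        // decode once with the "good" codec, then store losslessly
        img.Save(Path.ChangeExtension(src, ".png"), ImageFormat.Png);
    }
}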
Edit2:
Thank you for your efforts. I think I will stick to implementing a converter on my own; it's not exactly what I wanted, but I have to solve it somehow :). If anybody reading this has some ideas for me, please let me know.
Edit:
In the comments I was recommended to use a third party lib for this. I think I didn't make myself clear enough: I don't really want to use the DrawImage approach anyway - it's just a flawed quick hack to get an actually working new Bitmap(tmp, ... myPixelFormat) that would hopefully use some interpolation. The thing I want to achieve is solely to convert the incoming image to a common PixelFormat with some standard interpolation.
My problem is as follows. Some of the source images are 8bpp indexed JPEGs that don't get along very well with the WinForms imaging stuff. Therefore, in my image loading logic there is a check for indexed images that converts the image to my application's default format (e.g. Format16bppRgb555) like this:
Image GetImageByPath(string path)
{
    Image result = null;
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        Image tmp = Image.FromStream(fs); // Here goes the same image ...
        if (tmp.PixelFormat == PixelFormat.Format1bppIndexed ||
            tmp.PixelFormat == PixelFormat.Format4bppIndexed ||
            tmp.PixelFormat == PixelFormat.Format8bppIndexed ||
            tmp.PixelFormat == PixelFormat.Indexed)
        {
            // Creating a Bitmap container in the application's default format
            result = new Bitmap(tmp.Width, tmp.Height, DefConf.DefaultPixelFormat);
            using (Graphics g = Graphics.FromImage(result))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                // We don't need to scale anything here
                Rectangle drawRect = new Rectangle(0, 0, tmp.Width, tmp.Height);
                // (*) Here is where the strange thing happens - I know I could use
                // DrawImageUnscaled - that isn't working either
                g.DrawImage(tmp, drawRect, drawRect, GraphicsUnit.Pixel);
            }
        }
        else
        {
            result = new Bitmap(tmp); // Just copying the input stream
        }
        tmp.Dispose();
    }
    // (**) At this stage the x86 XP in-memory image differs from the
    // x64 Win7 one despite having the same settings
    // on the very same image o.O
    // result.GetPixel(0, 0).B -> x86: 102, x64: 102
    // result.GetPixel(1, 0).B -> x86: 104, x64: 102
    // result.GetPixel(2, 0).B -> x86: 83,  x64: 85
    // result.GetPixel(3, 0).B -> x86: 117, x64: 121
    // ...
    return result;
}
I tracked the problem down to (*). I think the InterpolationMode has something to do with it, but no matter which mode I choose, the results at (**) still differ between the two systems. I've been inspecting the test image data with some crude copy & paste lines to be sure it's not an issue of accessing the data in the wrong way.
The images all look like this Electron Backscatter Diffraction Pattern. The actual color values differ subtly, but they carry a lot of information - the interpolation even enhances it. It looks like the composition algorithm on the x86 machine uses the InterpolationMode property, whereas the x64 one just spreads the palette values out without taking any interpolation into account.
I never noticed any difference between the output of the two machines until the day I implemented a histogram view feature in my application. On the x86 machine it is balanced, as one would expect from looking at the images. The x64 machine, on the other hand, gives a sparse bar diagram instead - an indication of indexed image data. It even affects the overall output of the whole application - the output differs between the machines for the same input, and that's not a good thing.
To me it looks like a bug in the x64 implementation, but that's just me :-). I just want the images on the x64 machine to have the same values as the x86 ones.
If anybody has an idea I'd be very pleased. I've been searching for similar behavior on the net for ages but resistance seems futile :)
Oh look out ... a whale!
If you want to make sure that this is always done the same way, you'll have to write your own code to handle it. Fortunately, it's not too difficult.
Your 8bpp image has a palette that contains the actual color values. You need to read that palette and convert the color values (which, if I remember correctly, are 24 bits) to 16-bit color values. You're going to lose information in the conversion, but you're already losing information in your current conversion. At least this way, you'll lose the information in a predictable way.
Put the converted color values (there won't be more than 256 of them) into an array that you can use for lookup. Then ...
Create your destination bitmap and call LockBits to get a pointer to the actual bitmap data. Call LockBits to get a pointer to the bitmap data of the source bitmap. Then, for each pixel:
- read the source bitmap pixel (8 bits)
- look up the color value (16 bits) in your converted color array
- store the color value in the destination bitmap
You could do this with GetPixel and SetPixel, but it would be very very slow.
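A minimal sketch of that recipe, assuming a Format8bppIndexed source and a Format16bppRgb555 destination (the target format is an assumption; adjust it to your application's default). It needs the /unsafe compiler switch; a Marshal.Copy variant would avoid that:
using System.Drawing;
using System.Drawing.Imaging;

static Bitmap ConvertIndexed8To16bpp(Bitmap source)
{
    // Convert the palette's 24-bit colors to 16-bit RGB555 values once
    Color[] palette = source.Palette.Entries;
    ushort[] lookup = new ushort[palette.Length];
    for (int i = 0; i < palette.Length; i++)
    {
        Color c = palette[i];
        lookup[i] = (ushort)(((c.R >> 3) << 10) | ((c.G >> 3) << 5) | (c.B >> 3));
    }
    Bitmap dest = new Bitmap(source.Width, source.Height, PixelFormat.Format16bppRgb555);
    Rectangle rect = new Rectangle(0, 0, source.Width, source.Height);
    BitmapData srcData = source.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format8bppIndexed);
    BitmapData dstData = dest.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format16bppRgb555);
    try
    {
        unsafe
        {
            for (int y = 0; y < source.Height; y++)
            {
                byte* srcRow = (byte*)srcData.Scan0 + y * srcData.Stride;
                ushort* dstRow = (ushort*)((byte*)dstData.Scan0 + y * dstData.Stride);
                for (int x = 0; x < source.Width; x++)
                    dstRow[x] = lookup[srcRow[x]]; // palette index -> 16-bit color
            }
        }
    }
    finally
    {
        source.UnlockBits(srcData);
        dest.UnlockBits(dstData);
    }
    return dest;
}
Since this copies palette entries verbatim with no interpolation, it produces the same result on every machine, which is exactly what you want here.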
I vaguely recall that the .NET graphics classes rely on GDI+. If that's still the case today, then there's no point in trying your app on different 64-bit systems with different video drivers. Your best bet would be either to do the interpolation using raw GDI operations (P/Invoke) or to write your own pixel interpolation routine in software. Neither option is particularly attractive.
You really should use OpenCV for image handling like that, it's available in C# here: OpenCVSharp.
I use a standard method for creating the Graphics object, and with these settings the x64 build outperforms x86. Measure performance on release runs, not debug, and also check "Optimize code" on the Build tab of the project properties. Visual Studio 2017, .NET Framework 4.7.1.
public static Graphics CreateGraphics(Image i)
{
    Graphics g = Graphics.FromImage(i);
    g.CompositingMode = CompositingMode.SourceOver;
    g.CompositingQuality = CompositingQuality.HighSpeed;
    g.InterpolationMode = InterpolationMode.NearestNeighbor;
    g.SmoothingMode = SmoothingMode.HighSpeed;
    return g;
}
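A hypothetical call site (myBitmap and source are placeholders; the caller is responsible for disposing the returned Graphics):
using (Graphics g = CreateGraphics(myBitmap))
{
    g.DrawImage(source, new Rectangle(0, 0, myBitmap.Width, myBitmap.Height));
}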
Related
I've been working on image recognition that grabs a 727x115 area of the screen as a bitmap in WinForms every 700 milliseconds. The GetPixel/SetPixel method is way too slow, and I don't really know how to use any other method I have found.
Bitmap bitmap = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
Graphics g = Graphics.FromImage(bitmap);
g.CopyFromScreen(896, 1250, 0, 0, bitmap.Size);
Bitmap myPic = Resources.SARCUT;
This creates an image of the area of the screen, and myPic is the image that needs to be found within the 727x115 area, as stated before. I've tried using AForge, Emgu, and LockPixel, but I couldn't convert the bitmaps to the right format and never got it to work.
Any suggestions?
Bitmap and every image operation, together with rendering, is handled by GDI+ in .NET. GDI+, albeit faster than its predecessor GDI, is still notably slow. Also, you are performing a copy operation, and that will always be a performance hit. If you really need to improve performance you should not use the GDI+ framework; this means operating on bitmaps directly, at a lower level. However, this last statement is very broad because it depends on exactly what you want to accomplish and how. Finally, if you want to compare two images, avoid doing it pixel by pixel and instead do it byte by byte; it's faster since no indexing format and no value encoding has to be taken into account.
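A minimal sketch of the byte-by-byte idea using LockBits, assuming both bitmaps share the same size and pixel format (the helper name is made up here):
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static bool BitmapBytesEqual(Bitmap a, Bitmap b)
{
    if (a.Size != b.Size || a.PixelFormat != b.PixelFormat)
        return false;
    Rectangle rect = new Rectangle(Point.Empty, a.Size);
    BitmapData da = a.LockBits(rect, ImageLockMode.ReadOnly, a.PixelFormat);
    BitmapData db = b.LockBits(rect, ImageLockMode.ReadOnly, b.PixelFormat);
    try
    {
        // copy the raw raster bytes out once, then compare managed arrays
        int byteCount = da.Stride * da.Height;
        byte[] bufA = new byte[byteCount];
        byte[] bufB = new byte[byteCount];
        Marshal.Copy(da.Scan0, bufA, 0, byteCount);
        Marshal.Copy(db.Scan0, bufB, 0, byteCount);
        for (int i = 0; i < byteCount; i++)
            if (bufA[i] != bufB[i])
                return false;
        return true;
    }
    finally
    {
        a.UnlockBits(da);
        b.UnlockBits(db);
    }
}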
I'm currently using the Kinect SDK with C# (WPF application). I need to get the RGB stream and process the images with the EMGU library.
The problem is that when I try to process the image with EMGU (like converting the image's format and changing the colour of some pixels), the application slows down and takes too long to respond.
I'm using 8 GB RAM / Intel HD Graphics 4000 / Intel Core i7.
Here's my simple code :
http://pastebin.com/5frLRwMN
Please help me :'(
I have run considerably heavier code (blob analysis) with the Kinect on a per-frame basis and got great performance on a machine of similar configuration to yours, so I believe we can rule out your machine as the problem. I don't see any EMGU code in your sample, however. In your example, you loop through 307k pixels with a pair of for loops. That is naturally a costly procedure, depending on the code in your loops. As you might expect, GetPixel and SetPixel are very slow methods to execute.
To speed up your code, first turn your image into an Emgu Image. Then access the image data as bytes:
Byte workImageRed = image.Data[x, y, 0];
Byte workImageGreen = image.Data[x, y, 1];
...
The third index refers to the BGR channel. To set a pixel to another colour, try something like this:
byte[,,] workIm = image.Data;
workIm[x, y, 0] = 255;
workIm[x, y, 1] = 20;
...
Alternatively, you can set the pixel to a colour directly:
image[x, y] = new Bgr(Color.Blue);
This might be slower however.
Image processing is always slow. And if you do it at 30 fps, it's normal that your app hangs: real-time image processing is always a challenge. You may need to drop some frames in order to increase performance (...or perhaps switch to native C++ and look for a faster library).
If I try to create a bitmap bigger than 19000 px I get the error: Parameter is not valid.
How can I work around this?
System.Drawing.Bitmap myimage= new System.Drawing.Bitmap(20000, 20000);
Keep in mind, that is a LOT of memory you are trying to allocate with that Bitmap.
Refer to http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/37684999-62c7-4c41-8167-745a2b486583/
.NET is likely refusing to create an image that uses up that much contiguous memory all at once.
Slightly harder to read, but this reference helps as well:
Each image in the system has the amount of memory defined by this formula:
bit-depth * width * height / 8
This means that an image 40800 pixels by 4050 will require over 660 megabytes of memory.
19000 pixels square, at 32bpp, would require 11,552,000,000 bits (about 1.35 GB) to store the raster in memory. That's just the raw pixel data; any additional overhead inherent in the System.Drawing.Bitmap would add to that. Going up to 20k pixels square at the same color depth would require about 1.5 GB just for the raw pixel memory. In a single object, you are using 3/4 of the space available to the entire application in a 32-bit environment. A 64-bit environment has looser limits (usually), but you're still using 3/4 of the maximum size of a single object.
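A quick way to sanity-check those numbers, as a throwaway sketch of the formula quoted above:
// raw raster size in bytes = width * height * bitsPerPixel / 8
static long RawBitmapBytes(int width, int height, int bitsPerPixel)
{
    return (long)width * height * bitsPerPixel / 8;
}
// RawBitmapBytes(19000, 19000, 32) == 1,444,000,000 bytes (~1.35 GB)
// RawBitmapBytes(20000, 20000, 32) == 1,600,000,000 bytes (~1.49 GB)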
Why do you need such a colossal image size? Viewed at 1280x1024 res on a computer monitor, an image 19000 pixels on a side would be 14 screens wide by 18 screens tall. I can only imagine you're doing high-quality print graphics, in which case a 720dpi image would be a 26" square poster.
Set the PixelFormat when you create the bitmap, like:
new Bitmap(2000, 40000, PixelFormat.Format16bppRgb555)
With the exact numbers above, it works for me. This may partly solve the problem.
I suspect you're hitting memory cap issues. However, there are many reasons a bitmap constructor can fail. The main reasons are GDI+ limits in CreateBitmap. System.Drawing.Bitmap, internally, uses the GDI native API when the bitmap is constructed.
That being said, a bitmap of that size is well over a GB of RAM, and it's likely that you're either hitting the scan line size limitation (64KB) or running out of memory.
Got this error when opening a TIF file. The problem was being unable to open CMYK; after changing the colorspace, the error went away.
So I used the taglib library to get the image dimensions instead.
Code sample:
try
{
    var image = new System.Drawing.Bitmap(filePath);
    return string.Format("{0}px by {1}px", image.Width, image.Height);
}
catch (Exception)
{
    try
    {
        TagLib.File file = TagLib.File.Create(filePath);
        return string.Format("{0}px by {1}px", file.Properties.PhotoWidth, file.Properties.PhotoHeight);
    }
    catch (Exception)
    {
        return "";
    }
}
How can I compress an image file (*.bmp, *.jpeg) in C#?
I have to display some images as backgrounds on my controls. I'm using the following code to scale my image:
Bitmap orgBitmap = new Bitmap(_filePath);
Bitmap reqBitmap = new Bitmap(reqSize.Width, reqSize.Height);
using (Graphics gr = Graphics.FromImage(reqBitmap))
{
    gr.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    gr.DrawImage(orgBitmap, new RectangleF(0, 0, reqSize.Width, reqSize.Height));
}
It gives me the required bitmap.
My problem is that if the original bitmap is heavy (2 MB), then loading 50 images eats all my memory. I want to compress the images as much as I can without losing too much quality. How can I do that in .NET?
Do you definitely need the large images to be present at execution time? Could you resize them (or save them at a slightly lower quality) using an image editing program (Photoshop, Paintshop Pro etc) beforehand? There seems to be little point in doing the work every time you run - especially as editing tools are likely to do a better job anyway.
Of course, this won't work if it's something like the user picking arbitrary images from their hard disk.
Another point: are you disposing of the bitmaps when you're finished with them? You aren't showing that in your code... if you're not disposing of the original (large) bitmaps then you'll be at the mercy of finalizers to release the unmanaged resources. The finalizers will also delay garbage collection of those objects.
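To illustrate the disposal point, here is a sketch based on the question's own code (_filePath and reqSize come from there):
using (Bitmap orgBitmap = new Bitmap(_filePath))
{
    Bitmap reqBitmap = new Bitmap(reqSize.Width, reqSize.Height);
    using (Graphics gr = Graphics.FromImage(reqBitmap))
    {
        gr.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
        gr.DrawImage(orgBitmap, new RectangleF(0, 0, reqSize.Width, reqSize.Height));
    }
    // keep only the small reqBitmap; the large orgBitmap is released
    // deterministically when this block ends
}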
JPEG always loses something; PNG doesn't.
This is how you encode and decode PNG with C#:
http://msdn.microsoft.com/en-us/library/aa970062.aspx
Perhaps I'm misunderstanding things, but why not convert the bitmaps to jpg's before you import them into your project as control backgrounds?
Good luck compressing JPEG. :) It's compressed already. As for your BMPs, make them JPEGs.
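That said, when re-encoding BMPs as JPEGs you can control the quality/size trade-off through the encoder's quality parameter. A minimal sketch (the quality value 75 is an arbitrary example):
using System.Drawing;
using System.Drawing.Imaging;

static void SaveAsJpeg(Bitmap bitmap, string path, long quality)
{
    // locate the built-in JPEG encoder
    ImageCodecInfo jpegCodec = null;
    foreach (ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders())
    {
        if (codec.FormatID == ImageFormat.Jpeg.Guid)
        {
            jpegCodec = codec;
            break;
        }
    }
    using (EncoderParameters ep = new EncoderParameters(1))
    {
        // quality runs from 0 (smallest file) to 100 (best quality)
        ep.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, quality);
        bitmap.Save(path, jpegCodec, ep);
    }
}
// usage: SaveAsJpeg(reqBitmap, @"C:\temp\background.jpg", 75L);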
We have an application that shows a large image file (satellite image) from a local network resource.
To speed up the image rendering, we divide the image into smaller patches (e.g. 6x6 cm) and the app tiles them appropriately.
But each time the satellite image is updated, the dividing pre-process has to be run again, which is time-consuming.
I wonder how we can load the patches directly from the original file?
PS 1: I found the LeadTools library, but we need an open source solution.
PS 2: The app is in .NET C#
Edit 1:
The format is not fixed for us, but currently it's JPG.
Changing to another format could be considered, but the BMP format is hardly acceptable because of its large volume.
I wrote a beautiful attempt at an answer to your question, but my browser ate it... :(
Basically what I tried to say was:
1.- Since JPEG (and most compression formats) uses sequential compression, you'll always need to decode all the bits that come before the ones you need.
2.- The solution I propose has to be done for each format you need to support.
3.- There are a lot of open source JPEG decoders you could modify. JPEG decoders decode blocks of bits (of variable size) that turn into 8x8 pixel blocks. What you could do is modify the code to keep in memory only the blocks you need and discard all the others as soon as they aren't needed any more (basically as soon as they are decoded). From those retained blocks, create the image you need.
4.- Since JPEG works with 8x8 blocks, your work will be easier if your patches have sizes that are multiples of 8 pixels.
5.- This modification to the JPEG decoder could replace the preprocessing you are doing now, if you save each patch and discard its blocks as soon as it is complete. It would be really fast and consume less memory.
I know it's a lot of work and there are a lot of details to take into consideration (especially if you work with color images), but if you need performance I believe you will always end up fighting, or playing (however you want to see it), with the bytes.
Hope it helps.
I'm not 100% sure what you're after, but if you're looking for a way to go from string imagePath, Rectangle desiredPortion to a System.Drawing.Image object, then perhaps something like this:
public System.Drawing.Image LoadImagePiece(string imagePath, Rectangle desiredPortion)
{
    using (Image img = Image.FromFile(imagePath))
    {
        Bitmap result = new Bitmap(desiredPortion.Width, desiredPortion.Height, PixelFormat.Format24bppRgb);
        using (Graphics g = Graphics.FromImage((Image)result))
        {
            g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
            g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
            g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
            g.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
            g.DrawImage(img, 0, 0, desiredPortion, GraphicsUnit.Pixel);
        }
        return result;
    }
}
Note that for performance reasons you may want to consider building multiple output images at once rather than calling this multiple times - perhaps passing it an array of rectangles and getting back an array of images or similar.
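For instance, a hedged sketch of that batch idea, decoding the source file only once (the name LoadImagePieces is made up here):
public System.Drawing.Image[] LoadImagePieces(string imagePath, Rectangle[] portions)
{
    var results = new System.Drawing.Image[portions.Length];
    using (Image img = Image.FromFile(imagePath)) // decode the source once
    {
        for (int i = 0; i < portions.Length; i++)
        {
            Bitmap piece = new Bitmap(portions[i].Width, portions[i].Height, PixelFormat.Format24bppRgb);
            using (Graphics g = Graphics.FromImage(piece))
            {
                g.DrawImage(img, 0, 0, portions[i], GraphicsUnit.Pixel);
            }
            results[i] = piece;
        }
    }
    return results;
}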
If that's not what you're after can you clarify what you're actually looking for?