I'm making a program which captures a small area of the screen and runs something if any color in the image matches the target colors. My program runs in the following sequence:
Get an image from a specific area of the screen
Save it to a folder
Use CountPixels to detect any target_color
However, after I click button5 twice (not a double click), it throws an exception at the line below:
b.Save(@"C:\Applications\CaptureImage000.jpg", ImageFormat.Jpeg);
Exception :
An unhandled exception of type
'System.Runtime.InteropServices.ExternalException' occurred in
System.Drawing.dll
Additional information: A generic error occurred in GDI+
My questions are:
How can I fix this exception?
I want to use another method instead of CountPixels() to improve performance, because I only need to detect a single target color to raise an event.
Step 2 is troublesome. I wonder if I can skip it and refer to the capture some other way instead of passing (@"C:\Applications\CaptureImage000.jpg", ImageFormat.Jpeg) around, because this long string is awkward, causes errors when I try to use it with GetPixel, and is hard to drop into sample code from the internet.
private int CountPixels(Bitmap bm, Color target_color)
{
// Loop through the pixels.
int matches = 0;
for (int y = 0; y < bm.Height; y++)
{
for (int x = 0; x < bm.Width; x++)
{
if (bm.GetPixel(x, y) == target_color) matches++;
}
}
return matches;
}
private Bitmap CapturedImage(int x, int y)
{
Bitmap b = new Bitmap(XX, YY);
Graphics g = Graphics.FromImage(b);
g.CopyFromScreen(x, y, 0, 0, new Size(XX, YY));
b.Save(@"C:\Applications\CaptureImage000.jpg", ImageFormat.Jpeg);
/* Running the 3 lines below leads to question 1 - it throws the exception
Bitmap bm = new Bitmap(@"C:\Applications\CaptureImage000.jpg");
int black_pixels = CountPixels(b, Color.FromArgb(255, 0, 0, 0));
textBox3.Text = black_pixels + " black pixels";
*/
return b;
}
private void button5_Click(object sender, EventArgs e)// Do screen cap
{
Bitmap bmp = null;
bmp = CapturedImage(X0, Y0);
}
[EDIT] Worked on this tonight with the OP and made some improvements.
The code now accounts for the endianness of the machine and correctly compares colors by converting them to integers with the Color.ToArgb() function.
The code below will work; I have added comments for clarity and given you some options. I wrote it without an IDE, but I am confident it is fine.
In both cases below, just keep the handle to the bitmap; there is no need to save and reopen it, regardless of whether you need a copy.
Exception issue and improvements to CapturedImage function
Option A (recommended)
Don't save the bitmap; you already have a handle, and the Graphics object has already modified the BMP. Just leave the code below as is for this function and it will work fine without un-commenting either of the other options.
Code and other options:
private Bitmap CapturedImage(int x, int y)
{
Bitmap b = new Bitmap(XX, YY);
Graphics g = Graphics.FromImage(b);
g.CopyFromScreen(x, y, 0, 0, new Size(XX, YY));
//option B - If you DO need to keep a copy of the image use PNG and delete the old image
/*
try
{
if(System.IO.File.Exists(@"C:\Applications\CaptureImage.png"))
{
System.IO.File.Delete(@"C:\Applications\CaptureImage.png");
}
b.Save(@"C:\Applications\CaptureImage.png", ImageFormat.Png);
}
catch (System.Exception ex)
{
MessageBox.Show("There was a problem trying to save the image, is the file in open in another program?\r\nError:\r\n\r\n" + ex.Message);
}
*/
//option C - If you DO need to keep a copy of the image AND keep all copies of all images when you click the button use PNG and generate unique filename
/*
int id = 0;
while(System.IO.File.Exists(@"C:\Applications\CaptureImage" + id.ToString().PadLeft(4, '0') + ".png"))
{
//increment the id until a unique file name is found
id++;
}
b.Save(@"C:\Applications\CaptureImage" + id.ToString().PadLeft(4, '0') + ".png", ImageFormat.Png);
*/
int black_pixels = CountPixels(b, Color.FromArgb(255, 0, 0, 0));
textBox3.Text = black_pixels + " black pixels";
return b;
}
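For reference, a minimal sketch of how button5_Click might use this version (X0, Y0 and textBox3 are taken from the question; wrapping the bitmap in a using block is only a suggestion so the GDI+ handle is released between clicks):
private void button5_Click(object sender, EventArgs e) // Do screen cap
{
    using (Bitmap bmp = CapturedImage(X0, Y0))
    {
        // CapturedImage already counts the target pixels and updates textBox3;
        // disposing the bitmap here releases its GDI+ handle before the next click
    }
}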
Now for the CountPixels function: you have 3 options, but really only one solid option, so I am omitting the others.
This locks the bits in the BMP, uses marshalling to copy the data into an array, and scans the array for matches. It is very, very fast, and you will likely not even need to remove the count. If you DO still want to stop at the first match, just add "return 1;" right underneath where the matches variable is incremented (see the sketch after the code below).
Speed issue and improvements to CountPixels function
private int CountPixels(Bitmap bm, Color target_color)
{
int matches = 0;
Bitmap bmp = (Bitmap)bm.Clone();
BitmapData bmpDat = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, bmp.PixelFormat);
int size = bmpDat.Stride * bmpDat.Height;
byte[] subPx = new byte[size];
System.Runtime.InteropServices.Marshal.Copy(bmpDat.Scan0, subPx, 0, size);
//change the 4 (ARGB) to a 3 (RGB) if you don't have an alpha channel, this is for 32bpp images
for (int i = 0; i < size; i += 4) //4 bytes per pixel A, R, G, B
{
//ternary operator to check the endianness of the machine and organise pixel colors as A,R,G,B or B,G,R,A (little endian is reversed)
Color temp = BitConverter.IsLittleEndian ? Color.FromArgb(subPx[i + 2], subPx[i + 1], subPx[i]) : Color.FromArgb(subPx[i + 1], subPx[i + 2], subPx[i + 3]);
if (temp.ToArgb() == target_color.ToArgb())
{
matches++;
}
}
System.Runtime.InteropServices.Marshal.Copy(subPx, 0, bmpDat.Scan0, subPx.Length);
bmp.UnlockBits(bmpDat);
return matches;
}
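If all you need is to know whether the target color appears at all (the "raise an event" case from the question), here is a minimal early-exit sketch of the same LockBits idea; ContainsColor is a hypothetical name and, as above, a 32bpp capture is assumed:
private bool ContainsColor(Bitmap bm, Color target_color)
{
    Bitmap bmp = (Bitmap)bm.Clone();
    BitmapData bmpDat = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, bmp.PixelFormat);
    int size = bmpDat.Stride * bmpDat.Height;
    byte[] subPx = new byte[size];
    System.Runtime.InteropServices.Marshal.Copy(bmpDat.Scan0, subPx, 0, size);
    bmp.UnlockBits(bmpDat);
    int target = target_color.ToArgb();
    for (int i = 0; i < size; i += 4) // 4 bytes per pixel
    {
        Color temp = BitConverter.IsLittleEndian ? Color.FromArgb(subPx[i + 2], subPx[i + 1], subPx[i]) : Color.FromArgb(subPx[i + 1], subPx[i + 2], subPx[i + 3]);
        if (temp.ToArgb() == target) return true; // stop at the first match
    }
    return false;
}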
Finally, the same function, but allowing for a tolerance percentage:
private int CountPixels(Bitmap bm, Color target_color, float tolerancePercent)
{
int matches = 0;
Bitmap bmp = (Bitmap)bm.Clone();
BitmapData bmpDat = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, bmp.PixelFormat);
int size = bmpDat.Stride * bmpDat.Height;
byte[] subPx = new byte[size];
System.Runtime.InteropServices.Marshal.Copy(bmpDat.Scan0, subPx, 0, size);
for (int i = 0; i < size; i += 4 )
{
byte r = BitConverter.IsLittleEndian ? subPx[i+2] : subPx[i+1];
byte g = BitConverter.IsLittleEndian ? subPx[i+1] : subPx[i+2];
byte b = BitConverter.IsLittleEndian ? subPx[i] : subPx[i+3];
//sum of absolute channel differences, scaled so 0 = exact match and 100 = maximally different (765 / 7.65)
float distancePercent = (
Math.Abs(target_color.R-r) +
Math.Abs(target_color.G-g) +
Math.Abs(target_color.B-b)
) / 7.65f;
if(distancePercent < tolerancePercent)
{
matches++;
}
}
System.Runtime.InteropServices.Marshal.Copy(subPx, 0, bmpDat.Scan0, subPx.Length);
bmp.UnlockBits(bmpDat);
return matches;
}
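A brief usage example (the 10% tolerance and the pure-red target are arbitrary values for illustration, and bmp is assumed to be a 32bpp capture):
// count pixels within 10% of pure red in the captured area
int nearRed = CountPixels(bmp, Color.FromArgb(255, 255, 0, 0), 10f);
if (nearRed > 0)
{
    // at least one pixel is close enough to the target color - raise your event here
}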
Related
I'm saving a bitmap to a file on my hard drive inside a loop (all the JPEG files within a directory are being saved to a database). The save works fine the first pass through the loop, but then gives the subject error on the second pass. I thought perhaps the file was getting locked, so I tried generating a unique file name for each pass, and I'm also calling Dispose() on the bitmap after the file gets saved. Any idea what is causing this error?
Here is my code:
private string fileReducedDimName = @"c:\temp\Photos\test\filePhotoRedDim";
...
foreach (string file in files)
{
int i = 0;
//if the file dimensions are big, scale the file down
Stream photoStream = File.OpenRead(file);
byte[] photoByte = new byte[photoStream.Length];
photoStream.Read(photoByte, 0, System.Convert.ToInt32(photoByte.Length));
Image image = Image.FromStream(new MemoryStream(photoByte));
Bitmap bm = ScaleImage(image);
bm.Save(fileReducedDimName + i.ToString() + ".jpg", ImageFormat.Jpeg);//error occurs here
Array.Clear(photoByte,0, photoByte.Length);
bm.Dispose();
i ++;
}
...
Thanks
Here's the scale image code: (this seems to be working ok)
protected Bitmap ScaleImage(System.Drawing.Image Image)
{
//reduce dimensions of image if appropriate
int destWidth;
int destHeight;
int sourceRes;//resolution of image
int maxDimPix;//largest dimension of image pixels
int maxDimInch;//largest dimension of image inches
Double redFactor;//factor to reduce dimensions by
if (Image.Width > Image.Height)
{
maxDimPix = Image.Width;
}
else
{
maxDimPix = Image.Height;
}
sourceRes = Convert.ToInt32(Image.HorizontalResolution);
maxDimInch = Convert.ToInt32(maxDimPix / sourceRes);
//Assign size red factor based on max dimension of image (inches)
if (maxDimInch >= 17)
{
redFactor = 0.45;
}
else if (maxDimInch < 17 && maxDimInch >= 11)
{
redFactor = 0.65;
}
else if (maxDimInch < 11 && maxDimInch >= 8)
{
redFactor = 0.85;
}
else//smaller than 8" dont reduce dimensions
{
redFactor = 1;
}
destWidth = Convert.ToInt32(Image.Width * redFactor);
destHeight = Convert.ToInt32(Image.Height * redFactor);
Bitmap bm = new Bitmap(destWidth, destHeight,
PixelFormat.Format24bppRgb);
bm.SetResolution(Image.HorizontalResolution, Image.VerticalResolution);
Graphics grPhoto = Graphics.FromImage(bm);
grPhoto.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
grPhoto.DrawImage(Image,
new Rectangle(0, 0, destWidth, destHeight),
new Rectangle(0, 0, Image.Width, Image.Height),
GraphicsUnit.Pixel);
grPhoto.Dispose();
return bm;
}
If I'm reading the code right, your i variable is zero every time through the loop.
It is hard to diagnose exactly what is wrong. I would recommend that you use using statements to ensure that your instances are getting disposed of properly, but it looks like they are.
I originally thought it might be an issue with ScaleImage, so I tried a different resize function (C# GDI+ Image Resize Function) and it worked, but i is always set to zero at the beginning of each loop. Once you move i's initialization outside of the loop, your scale method works as well.
private void MethodName()
{
string fileReducedDimName = @"c:\pics";
int i = 0;
foreach (string file in Directory.GetFiles(fileReducedDimName, "*.jpg"))
{
//if the file dimensions are big, scale the file down
using (Image image = Image.FromFile(file))
{
using (Bitmap bm = ScaleImage(image))
{
bm.Save(fileReducedDimName + @"\" + i.ToString() + ".jpg", ImageFormat.Jpeg);//error occurs here
//this is all redundant code - do not need
//Array.Clear(photoByte, 0, photoByte.Length);
//bm.Dispose();
}
}
//ResizeImage(file, 50, 50, fileReducedDimName + @"\" + i.ToString() + ".jpg");
i++;
}
}
I know how to do this in WPF, but I am having a problem capturing depth in a WinForms application.
I found some code like this:
private void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
if (depthFrame != null)
{
Bitmap DepthBitmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
if (_depthPixels.Length != depthFrame.PixelDataLength)
{
_depthPixels = new DepthImagePixel[depthFrame.PixelDataLength];
_mappedDepthLocations = new ColorImagePoint[depthFrame.PixelDataLength];
}
//Copy the depth frame data onto the bitmap
var _pixelData = new short[depthFrame.PixelDataLength];
depthFrame.CopyPixelDataTo(_pixelData);
BitmapData bmapdata = DepthBitmap.LockBits(new Rectangle(0, 0, depthFrame.Width,
depthFrame.Height), ImageLockMode.WriteOnly, DepthBitmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(_pixelData, 0, ptr, depthFrame.Width * depthFrame.Height);
DepthBitmap.UnlockBits(bmapdata);
pictureBox2.Image = DepthBitmap;
}
}
}
but this does not give me the grayscale depth; the image comes out purple. Any improvement or help?
I found the solution myself, using a function to convert the depth frame:
void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
if (depthFrame != null)
{
this.depthFrame32 = new byte[depthFrame.Width * depthFrame.Height * 4];
//Update the image to the new format
this.depthPixelData = new short[depthFrame.PixelDataLength];
depthFrame.CopyPixelDataTo(this.depthPixelData);
byte[] convertedDepthBits = this.ConvertDepthFrame(this.depthPixelData, ((KinectSensor)sender).DepthStream);
Bitmap bmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(convertedDepthBits, 0, ptr, 4 * depthFrame.PixelDataLength);
bmap.UnlockBits(bmapdata);
pictureBox2.Image = bmap;
}
}
}
private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
{
//Run through the depth frame making the correlation between the two arrays
for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < this.depthFrame32.Length; i16++, i32 += 4)
{
// Console.WriteLine(i16 + "," + i32);
//We don’t care about player’s information here, so we are just going to rule it out by shifting the value.
int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
//We are left with 13 bits of depth information that we need to convert into an 8 bit number for each pixel.
//There are hundreds of ways to do this. This is just the simplest one.
//Lets create a byte variable called Distance.
//We will assign this variable a number that will come from the conversion of those 13 bits.
byte Distance = 0;
//XBox Kinects (default) are limited between 800mm and 4096mm.
int MinimumDistance = 800;
int MaximumDistance = 4096;
//XBox Kinects (default) are not reliable closer to 800mm, so let’s take those useless measurements out.
//If the distance on this pixel is bigger than 800mm, we will paint it in its equivalent gray
if (realDepth > MinimumDistance)
{
//Convert the realDepth into the 0 to 255 range for our actual distance.
//Use only one of the following Distance assignments
//White = Far
//Black = Close
//Distance = (byte)(((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
//White = Close
//Black = Far
Distance = (byte)(255 - ((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
//Use the distance to paint each layer (R, G & B) of the current pixel.
//Painting R, G and B with the same color will make it go from black to gray
this.depthFrame32[i32 + RedIndex] = (byte)(Distance);
this.depthFrame32[i32 + GreenIndex] = (byte)(Distance);
this.depthFrame32[i32 + BlueIndex] = (byte)(Distance);
}
//If we are closer than 800mm, just paint it black so we know this pixel is not giving a good value
else
{
this.depthFrame32[i32 + RedIndex] = 0;
this.depthFrame32[i32 + GreenIndex] = 0;
this.depthFrame32[i32 + BlueIndex] = 0;
}
}
return this.depthFrame32;
}
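The snippet above references RedIndex, GreenIndex and BlueIndex without defining them; a minimal sketch of plausible values, assuming the Format32bppRgb layout used here (bytes stored as B, G, R, unused on little-endian Windows):
// assumed channel offsets within each 4-byte pixel of the 32bpp bitmap
private const int BlueIndex = 0;
private const int GreenIndex = 1;
private const int RedIndex = 2;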
So I presume the RGB frame is working for you; in that case:
First, to enable the depth camera, you need to call:
sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH|all stuff you use also);
Second, to start streaming, you need to call:
if (int(streams&_Kinect_zed)) ret=sensor->NuiImageStreamOpen(
NUI_IMAGE_TYPE_DEPTH, // Depth camera or rgb camera?
NUI_IMAGE_RESOLUTION_640x480, // Image resolution
NUI_IMAGE_STREAM_FLAG_DISTINCT_OVERFLOW_DEPTH_VALUES, // Image stream flags // NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE does not work !!!
2, // Number of frames to buffer
NULL, // Event handle
&stream_hzed); else stream_hzed=NULL;
Beware: not all resolution/flag combinations work on all models of Kinect!
The one above is safe even for older models like mine.
This is how I capture a frame (called repeatedly from a timer or thread loop):
ret=sensor->NuiImageStreamGetNextFrame(stream_hzed,0,&imageFrame); if (ret>=0)
{
// copy data from frame
imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0);
if (LockedRect.Pitch!=0)
{
const BYTE* curr = (const BYTE*) LockedRect.pBits;
union _col { BYTE u8[2]; WORD u16; } col;
col.u16=0;
pnt3d p;
long ax,ay;
float mxs=float(xs)/(62.0*deg),mys=float(ys)/(48.6*deg);
for(int x=0,y=0;;)
{
col.u8[0]=*curr; curr++;
col.u8[1]=*curr; curr++;
p.raw=col.u16;
p.rgb=&rgb_default;
if (p.raw==0x0000) p.z=0.0; // p.z is the perpendicular distance from the sensor (the Kinect corrects for this itself)
else if (p.raw>=0x8000) p.z=4.0;
else p.z=0.8+(float(p.raw-6576)*0.00012115165336374002280501710376283);
// depth FOV correction
p.x=zx[x]*p.z;
p.y=zy[y]*p.z;
// color FOV correction zed 58.5° x 45.6° | rgb 62.0° x 48.6° | 25mm distance
if (p.z>0.0)
{
ax=(((x+10-xs2)*241)>>8)+xs2; // cameras x-offset and different FOV
ay=(((y+30-ys2)*240)>>8)+ys2; // cameras y-offset??? and different FOV
if ((ax>=0)&&(ax<xs))
if ((ay>=0)&&(ay<ys)) p.rgb=&rgb[ay][ax];
}
xyz[y][x]=p;
x++; if (x>=xs) { x=0; y++; if (y>=ys) break; }
}
}
// release frame
imageFrame.pFrameTexture->UnlockRect(0);
ret=sensor->NuiImageStreamReleaseFrame(stream_hzed, &imageFrame);
stream_changed|=_Kinect_zed;
}
Sorry for the incomplete source code ...
- everything is copy-pasted from my Kinect class (BDS2006 Turbo C++)
- so you need to check your code in case you have forgotten something
- and if so, transform my code to C# (I am not a C# user)
- most likely you forgot to NuiInitialize with the depth flag
- or set an invalid resolution/flags/precision or framerate for your HW
If nothing works at all, then you need to initialize the sensor in the first place:
int sensors;
INuiSensor *sensor;
if ((NUIGetSensorCount(&sensors)<0)||(sensors<1)) return false;
if (NUICreateSensorByIndex(0,&sensor)<0) return false;
If you link to the DLL on your own, then link only these functions:
typedef HRESULT(__stdcall *_NuiGetSensorCount )(int * pCount); _NuiGetSensorCount NUIGetSensorCount =NULL;
typedef HRESULT(__stdcall *_NuiCreateSensorByIndex)(int index,INuiSensor **ppNuiSensor); _NuiCreateSensorByIndex NUICreateSensorByIndex=NULL;
Every other function must be obtained via COM from inside the SDK headers!
If you link and use them on your own, then you will not be connected to your physical Kinect!
Basically, the Kinect SDK is developed for WPF applications. In a Windows Forms application you have to convert the short array of depth data to a Bitmap to display it in a PictureBox. Based on my experiments, WPF is better for programming with the Kinect.
Below is the function that I used to convert a depth frame to a Bitmap for showing in a picture box:
private Bitmap ImageToBitmap(DepthImageFrame Image)
{
short[] pixeldata = new short[Image.PixelDataLength];
int stride = Image.Width * 2;
Image.CopyPixelDataTo(pixeldata);
Bitmap bmap = new Bitmap(Image.Width, Image.Height, PixelFormat.Format16bppRgb555);
BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, Image.Width, Image.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
bmap.UnlockBits(bmapdata);
return bmap;
}
You may call it like this:
DepthImageFrame VFrame = e.OpenDepthImageFrame();
if (VFrame == null) return;
short[] pixelS = new short[VFrame.PixelDataLength];
Bitmap bmap = ImageToBitmap(VFrame);
I am trying to speed up my image detection class using LockBits, yet this causes problems with the code and it does not run. How can I go about using LockBits and GetPixel at the same time in order to speed up image detection, or can someone show me an alternative which is just as fast?
code:
static IntPtr Iptr = IntPtr.Zero;
static BitmapData bitmapData = null;
static public byte[] Pixels { get; set; }
static public int Depth { get; private set; }
static public int Width { get; private set; }
static public int Height { get; private set; }
static public void LockBits(Bitmap source)
{
// Get width and height of bitmap
Width = source.Width;
Height = source.Height;
// get total locked pixels count
int PixelCount = Width * Height;
// Create rectangle to lock
Rectangle rect = new Rectangle(0, 0, Width, Height);
// get source bitmap pixel format size
Depth = System.Drawing.Bitmap.GetPixelFormatSize(source.PixelFormat);
// Lock bitmap and return bitmap data
bitmapData = source.LockBits(rect, ImageLockMode.ReadWrite,
source.PixelFormat);
// create byte array to copy pixel values
int step = Depth / 8;
Pixels = new byte[PixelCount * step];
Iptr = bitmapData.Scan0;
// Copy data from pointer to array
Marshal.Copy(Iptr, Pixels, 0, Pixels.Length);
}
static public bool SimilarColors(int R1, int G1, int B1, int R2, int G2, int B2, int Tolerance)
{
bool returnValue = true;
if (Math.Abs(R1 - R2) > Tolerance || Math.Abs(G1 - G2) > Tolerance || Math.Abs(B1 - B2) > Tolerance)
{
returnValue = false;
}
return returnValue;
}
public bool findImage(Bitmap small, Bitmap large, out Point location)
{
unsafe
{
LockBits(small);
LockBits(large);
//Loop through large images width
for (int largeX = 0; largeX < large.Width; largeX++)
{
//And height
for (int largeY = 0; largeY < large.Height; largeY++)
{
//Loop through the small width
for (int smallX = 0; smallX < small.Width; smallX++)
{
//And height
for (int smallY = 0; smallY < small.Height; smallY++)
{
//Get current pixels for both image
Color currentSmall = small.GetPixel(smallX, smallY);
Color currentLarge = large.GetPixel(largeX + smallX, largeY + smallY);
//If they dont match (i.e. the image is not there)
if (!colorsMatch(currentSmall, currentLarge))
//Goto the next pixel in the large image
goto nextLoop;
}
}
//If all the pixels match up, then return true and change Point location to the top left co-ordinates where it was found
location = new Point(largeX, largeY);
return true;
//Go to next pixel on large image
nextLoop:
continue;
}
}
//Return false if image is not found, and set an empty point
location = Point.Empty;
return false;
}
}
You wouldn't want to rely on GetPixel() for image processing; it's okay to make an occasional call to get a point value (e.g. on mouseover), but in general it's preferable to do image processing in image memory or in some 2D array that you can convert to a Bitmap when necessary.
To start, you might try writing a method that uses LockBits/UnlockBits to extract an array that is convenient to manipulate. Once you're done manipulating the array, you can write it back to a bitmap using a different LockBits/UnlockBits function.
Here's some sample code I've used in the past. The first function returns a 1D array of values from a Bitmap. Since you know the bitmap's width, you can convert this 1D array to a 2D array for further processing. Once you're done processing, you can call the second function to convert the (modified) 1D array into a bitmap again.
public static byte[] Array1DFromBitmap(Bitmap bmp){
if (bmp == null) throw new NullReferenceException("Bitmap is null");
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite, bmp.PixelFormat);
IntPtr ptr = data.Scan0;
//declare an array to hold the bytes of the bitmap
int numBytes = data.Stride * bmp.Height;
byte[] bytes = new byte[numBytes];
//copy the RGB values into the array
System.Runtime.InteropServices.Marshal.Copy(ptr, bytes, 0, numBytes);
bmp.UnlockBits(data);
return bytes;
}
public static Bitmap BitmapFromArray1D(byte[] bytes, int width, int height)
{
Bitmap grayBmp = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
Rectangle grayRect = new Rectangle(0, 0, grayBmp.Width, grayBmp.Height);
BitmapData grayData = grayBmp.LockBits(grayRect, ImageLockMode.ReadWrite, grayBmp.PixelFormat);
IntPtr grayPtr = grayData.Scan0;
int grayBytes = grayData.Stride * grayBmp.Height;
ColorPalette pal = grayBmp.Palette;
for (int g = 0; g < 256; g++){
pal.Entries[g] = Color.FromArgb(g, g, g);
}
grayBmp.Palette = pal;
System.Runtime.InteropServices.Marshal.Copy(bytes, 0, grayPtr, grayBytes);
grayBmp.UnlockBits(grayData);
return grayBmp;
}
These methods make assumptions about the Bitmap pixel format that may not work for you, but I hope the general idea is clear: use LockBits/UnlockBits to extract an array of bytes from a Bitmap so that you can write and debug algorithms more easily, and then use LockBits/UnlockBits again to write the array back to a Bitmap.
For portability, I would recommend that your methods return the desired data types rather than manipulating global variables within the methods themselves.
If you've been using GetPixel(), then converting to/from arrays as described above could speed up your code considerably for a small investment of coding effort.
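A hedged usage sketch of the round trip, assuming the source file is an 8bpp grayscale image so that a plain per-byte operation makes sense (file names are illustrative):
// pull the bytes out of a grayscale bitmap, invert them, and rebuild a bitmap
Bitmap src = new Bitmap(@"input.png");
byte[] pixels = Array1DFromBitmap(src);
for (int i = 0; i < pixels.Length; i++)
{
    pixels[i] = (byte)(255 - pixels[i]); // simple per-byte operation
}
Bitmap inverted = BitmapFromArray1D(pixels, src.Width, src.Height);
inverted.Save(@"inverted.png", ImageFormat.Png);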
OK, where to start? It is better if you understand what you are doing with LockBits.
First of all, make sure that you don't overwrite your byte array with
LockBits(small);
LockBits(large);
because due to the second call, all the first call does is lock your image, and that is not good since you never unlock it again.
So add another byte array that represents the image.
You can do something like this
LockBits(small, true);
LockBits(large, false);
and change your Lockbits method
static public void LockBits(Bitmap source, bool flag)
{
...
Marshal.Copy(Iptr, Pixels, 0, Pixels.Length);
if(flag)
PixelsSmall=Pixels;
else
PixelsLarge=Pixels;
}
where PixelsLarge and PixelsSmall are globals and Pixels isn't
Those 2 contain your images. Now you have to compare them.
You have to compare each "set of bytes", and for that you have to know the pixel format.
Is it 32 bits per pixel, 24, or only 8 (ARGB, RGB, grayscale)?
Let's take ARGB images. In this case a set consists of 4 bytes (= 32/8).
I am not sure about the order, but I think the order of one set is ABGR or BGRA.
Hope this helps. If you can't figure out how to compare the right pixels, ask again. Ah, and don't forget to call UnlockBits.
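For illustration, a minimal sketch of comparing one pixel of the small image against one pixel of the large image using the two byte arrays. It assumes 32bpp data and that the stride of each locked bitmap was kept alongside its array; the method and parameter names are assumptions, not part of the original code:
// compare the pixel at (xS, yS) in the small image with (xL, yL) in the large image
// assumes both arrays were copied from 32bpp bitmaps, so each pixel occupies 4 bytes
static bool PixelsMatch(byte[] pixelsSmall, int strideSmall, int xS, int yS,
                        byte[] pixelsLarge, int strideLarge, int xL, int yL,
                        int tolerance)
{
    int iS = yS * strideSmall + xS * 4;
    int iL = yL * strideLarge + xL * 4;
    // byte order in memory for 32bpp GDI+ bitmaps on little-endian Windows is B, G, R, A
    return Math.Abs(pixelsSmall[iS] - pixelsLarge[iL]) <= tolerance          // B
        && Math.Abs(pixelsSmall[iS + 1] - pixelsLarge[iL + 1]) <= tolerance  // G
        && Math.Abs(pixelsSmall[iS + 2] - pixelsLarge[iL + 2]) <= tolerance; // R
}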
I want to use this code to access the image pixels for use with my Kinect code, so I can replace the depth bits with the image bits. I created a WPF application, and as soon as I run my code I get the exception below (it doesn't happen in a console application), but I need this to run as a WPF application since I intend to use it with the Kinect.
XamlParseException
'The invocation of the constructor on type
'pixelManipulation.MainWindow' that matches the specified binding
constraints threw an exception.' Line number '3' and line position
'9'.
Here is the code:
public partial class MainWindow : Window
{
System.Drawing.Bitmap b = new
System.Drawing.Bitmap(@"autumn_scene.jpg");
public MainWindow()
{
InitializeComponent();
doSomethingWithBitmapFast(b);
}
public static void doSomethingWithBitmapFast(System.Drawing.Bitmap bmp)
{
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect,
System.Drawing.Imaging.ImageLockMode.ReadOnly,
bmp.PixelFormat);
IntPtr ptr = bmpData.Scan0;
int bytes = bmpData.Stride * bmp.Height;
byte[] rgbValues = new byte[bytes];
System.Runtime.InteropServices.Marshal.Copy(ptr,
rgbValues, 0, bytes);
byte red = 0;
byte green = 0;
byte blue = 0;
for (int x = 0; x < bmp.Width; x++)
{
for (int y = 0; y < bmp.Height; y++)
{
//See the link above for an explanation
//of this calculation (assumes 24bppRgb format)
int position = (y * bmpData.Stride) + (x * 3);
blue = rgbValues[position];
green = rgbValues[position + 1];
red = rgbValues[position + 2];
Console.WriteLine("Fast: " + red + " "
+ green + " " + blue);
}
}
bmp.UnlockBits(bmpData);
}
}
The issue is in your XAML file, not in your code; as the exception states, it is a XAML parse exception. My guess is that you had some event handler or property declared in XAML bound to something that no longer exists. Post the XAML file content for more help.
Edit
So it is not what it seems to be. The XAML file is OK, but the code is not. There is an exception thrown in the constructor on this line:
System.Drawing.Bitmap b = new
System.Drawing.Bitmap(@"autumn_scene.jpg");
I'm not sure why this call to the Bitmap constructor is invalid, but changing it to:
System.Drawing.Bitmap b = new Bitmap(
System.Drawing.Image.FromFile(@"autumn_scene.jpg"));
should work fine.
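If the underlying problem is simply that the relative path does not resolve against the WPF app's working directory (an assumption; the file name is the one from the question), a sketch that resolves the image against the executable's folder:
// build an absolute path next to the executable instead of relying on the working directory
string path = System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "autumn_scene.jpg");
System.Drawing.Bitmap b = new System.Drawing.Bitmap(path);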
Could someone rewrite the following function to use an optimized mechanism? I'm pretty sure that copying pixel by pixel is not the way to proceed.
I have read about AlphaBlend and BitBlt, but I'm not used to native code.
public static Bitmap GetAlphaBitmap(Bitmap srcBitmap)
{
Bitmap result = new Bitmap(srcBitmap.Width, srcBitmap.Height, PixelFormat.Format32bppArgb);
Rectangle bmpBounds = new Rectangle(0, 0, srcBitmap.Width, srcBitmap.Height);
BitmapData srcData = srcBitmap.LockBits(bmpBounds, ImageLockMode.ReadOnly, srcBitmap.PixelFormat);
try
{
for (int y = 0; y <= srcData.Height - 1; y++)
{
for (int x = 0; x <= srcData.Width - 1; x++)
{
Color pixelColor = Color.FromArgb(
Marshal.ReadInt32(srcData.Scan0, (srcData.Stride * y) + (4 * x)));
result.SetPixel(x, y, pixelColor);
}
}
}
finally
{
srcBitmap.UnlockBits(srcData);
}
return result;
}
IMPORTANT NOTE: The source image has the wrong pixel format (Format32bppRgb), so I need to adjust the alpha channel. This is the only mechanism that works for me.
The reason why the src image has a wrong pixel format is explained here.
I tried the following options without luck:
Creating a new image and drawing the src image onto it with Graphics.DrawImage. This did not preserve the alpha.
Creating a new image using the Scan0 from src. This works fine, but has a problem when the GC disposes the src image (explained in this other post).
This solution is the only one that really works, but I know that it is not optimal. I need to know how to do it using the WinAPI or another optimal mechanism.
Thank you very much!
Assuming the source image does in fact have 32 bits per pixel, this should be a fast enough implementation using unsafe code and pointers. The same can be achieved using marshalling, though at a performance loss of around 10%-20%, if I remember correctly.
Using native methods will most likely be faster but this should already be orders of magnitude faster than SetPixel.
public unsafe static Bitmap Clone32BPPBitmap(Bitmap srcBitmap)
{
Bitmap result = new Bitmap(srcBitmap.Width, srcBitmap.Height, PixelFormat.Format32bppArgb);
Rectangle bmpBounds = new Rectangle(0, 0, srcBitmap.Width, srcBitmap.Height);
BitmapData srcData = srcBitmap.LockBits(bmpBounds, ImageLockMode.ReadOnly, srcBitmap.PixelFormat);
BitmapData resData = result.LockBits(bmpBounds, ImageLockMode.WriteOnly, result.PixelFormat);
int* srcScan0 = (int*)srcData.Scan0;
int* resScan0 = (int*)resData.Scan0;
int numPixels = srcData.Stride / 4 * srcData.Height;
try
{
for (int p = 0; p < numPixels; p++)
{
resScan0[p] = srcScan0[p];
}
}
finally
{
srcBitmap.UnlockBits(srcData);
result.UnlockBits(resData);
}
return result;
}
Here is the safe version of this method using marshalling:
public static Bitmap Copy32BPPBitmapSafe(Bitmap srcBitmap)
{
Bitmap result = new Bitmap(srcBitmap.Width, srcBitmap.Height, PixelFormat.Format32bppArgb);
Rectangle bmpBounds = new Rectangle(0, 0, srcBitmap.Width, srcBitmap.Height);
BitmapData srcData = srcBitmap.LockBits(bmpBounds, ImageLockMode.ReadOnly, srcBitmap.PixelFormat);
BitmapData resData = result.LockBits(bmpBounds, ImageLockMode.WriteOnly, result.PixelFormat);
Int64 srcScan0 = srcData.Scan0.ToInt64();
Int64 resScan0 = resData.Scan0.ToInt64();
int srcStride = srcData.Stride;
int resStride = resData.Stride;
int rowLength = Math.Abs(srcData.Stride);
try
{
byte[] buffer = new byte[rowLength];
for (int y = 0; y < srcData.Height; y++)
{
Marshal.Copy(new IntPtr(srcScan0 + y * srcStride), buffer, 0, rowLength);
Marshal.Copy(buffer, 0, new IntPtr(resScan0 + y * resStride), rowLength);
}
}
finally
{
srcBitmap.UnlockBits(srcData);
result.UnlockBits(resData);
}
return result;
}
Edit: Your source image has a negative stride, which means the scanlines are stored upside down in memory (only on the y axis; rows still go from left to right). This effectively means that .Scan0 returns the first pixel of the last row of the bitmap.
As such, I modified the code to copy one row at a time.
Notice: I've only modified the safe code. The unsafe code still assumes positive strides for both images! (A row-by-row variant is sketched below.)
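For completeness, a minimal sketch of a stride-agnostic inner copy for the unsafe version (this is an illustration replacing only the pointer arithmetic and loop, not part of the original answer):
// copy row by row; Scan0 + y * Stride is valid for both positive and negative strides
int rowBytes = srcData.Width * 4; // 4 bytes per pixel for 32bpp formats
for (int y = 0; y < srcData.Height; y++)
{
    byte* srcRow = (byte*)srcData.Scan0.ToPointer() + (long)y * srcData.Stride;
    byte* resRow = (byte*)resData.Scan0.ToPointer() + (long)y * resData.Stride;
    for (int x = 0; x < rowBytes; x++)
    {
        resRow[x] = srcRow[x];
    }
}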
Try the Bitmap.Clone method.
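For example (a minimal sketch; Bitmap.Clone has an overload that takes a rectangle and a target PixelFormat, which converts the question's Format32bppRgb source to Format32bppArgb, though note that a plain format conversion typically sets the alpha channel fully opaque rather than reinterpreting the unused byte the way the raw copy above does):
// clone the whole bitmap while converting it to a format with an alpha channel
Rectangle bounds = new Rectangle(0, 0, srcBitmap.Width, srcBitmap.Height);
Bitmap result = srcBitmap.Clone(bounds, PixelFormat.Format32bppArgb);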
A utility class in my Codeblocks library http://codeblocks.codeplex.com allows you to transform a source image to any other image using LINQ.
See this sample here: http://codeblocks.codeplex.com/wikipage?title=Linq%20Image%20Processing%20sample&referringTitle=Home
While the sample transforms the same image format between source and destination, you could change things around, as well.
Note that I have clocked this code and it is much faster than even unsafe code for large images because it uses cached full-row read ahead.