I can access each pixel of a Gray image using Data[,,], but I cannot do so for a Bgr image.
I have written the following code:
Image<Bgr, byte> currentFrame = capture.QueryFrame();
Image<Gray, byte> grayFrame = currentFrame.Convert<Gray, byte>();
Byte gray = grayFrame.Data[0, 10, 0];
Byte blue = currentFrame.Data[0, 10, 0];
which throws an exception: Object reference not set to an instance of an object.
I checked by adding a breakpoint and the result was this:
currentFrame.Data is null
grayFrame.Data is a 3D array
gray has the value 71
and the next line caused the error.
Why is currentFrame.Data, which should have been a 3D array, null? How can I access the Image.Data property for a Bgr image?
I am using EmguCV 2.2.1. The same problem occurred with version 2.1.
Thanks for any help.
What I have found is quite surprising.
Image<Bgr, byte> image = capture.QueryFrame();
byte b;
try
{
    b = image.Data[0, 0, 0]; // Line (A)
}
catch (Exception ex)
{
    MessageBox.Show("Before Convert: " + ex.Message);
}
image = image.Convert<Bgr, byte>();
try
{
    b = image.Data[0, 0, 0]; // Line (B)
}
catch (Exception ex)
{
    MessageBox.Show("After Convert: " + ex.Message);
}
In the above code, Line (A) throws the exception "Object reference not set to an instance of an object". But after adding the line
image = image.Convert<Bgr, byte>();
Line (B) runs smoothly without any exception.
Does anyone know why this is happening?
[Though it is very late to answer, I am posting this for future reference.]
Recently I faced the same problem. I could not access the pixel values of an image returned by QueryFrame through the Data property, but after applying any operation (e.g. resize, convert) the resulting image is accessible. The reason behind this, and the solution, is quite simple.
The Data property of an image returned by QueryFrame is always null. Its image data is stored in unmanaged memory and is therefore not accessible through the Data property. To access the pixel values of a frame, you just have to clone it.
Image<Bgr, byte> currentFrame = capture.QueryFrame();
Image<Bgr, byte> frame = currentFrame.Clone();
// now access using Data property
byte b = frame.Data[0,0,0];
byte g = frame.Data[0,0,1];
byte r = frame.Data[0,0,2];
capture.QueryFrame() returns null, which you try to access on the last line. I'd look inside that method. I'm guessing .Convert is an extension method, and that that method checks for null and returns something that is not null.
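If you want to check which of those two cases you are actually hitting, a small diagnostic like this (my own sketch, not from the original posts) makes the distinction explicit:
Image<Bgr, byte> frame = capture.QueryFrame();
if (frame == null)
    MessageBox.Show("QueryFrame returned null");           // no camera / end of stream
else if (frame.Data == null)
    MessageBox.Show("Frame exists but Data is null");      // pixels live in unmanaged memory, so clone first
else
    MessageBox.Show("Data is accessible: " + frame.Data[0, 0, 0]);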
Hey there, try adding the .xml file to your .../bin/Debug folder. Then, in your ProcessFrame method, create the cascade: HaarCascade haar = new HaarCascade("haarcascade_frontalface_default.xml");
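For context, here is a rough sketch of how that line is typically used inside a ProcessFrame method with the Emgu 2.x API (my own illustration, not part of the original answer; the scale factor, neighbour count and minimum size are arbitrary values to tune):
HaarCascade haar = new HaarCascade("haarcascade_frontalface_default.xml"); // file must sit in bin/Debug
Image<Bgr, byte> frame = capture.QueryFrame().Clone();
Image<Gray, byte> gray = frame.Convert<Gray, byte>();
// DetectHaarCascade returns a jagged array; the detections are in element [0]
MCvAvgComp[] faces = gray.DetectHaarCascade(haar, 1.2, 4,
    HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new System.Drawing.Size(25, 25))[0];
foreach (MCvAvgComp face in faces)
    frame.Draw(face.rect, new Bgr(0, 0, 255), 2); // outline each detection in red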
I'm using Emgu.CV to template match and to save images.
Unfortunately, I have run into an issue that I have not been able to solve for weeks.
The problem is that I serialize the byte array and size from the original Image to a JSON file, and whenever I try to convert it back, sometimes the image is distorted.
I have already tried skipping the serialization process and it still became distorted.
Here is the code of the conversion process:
Image<Bgr565, byte> screenCrop = SnipMaker.takeSnip(); // method creates a screenshot; at this point, when I display the images, they are 100% correct
byte[] data = screenCrop.Bytes; // I would normally get all this from the JSON file (in this case I'm skipping it)
Mat mat = new Mat(screenCrop.Rows, screenCrop.Cols, screenCrop.Mat.Depth, screenCrop.NumberOfChannels);
Marshal.Copy(data, 0, mat.DataPointer, screenCrop.Cols * screenCrop.Rows * screenCrop.NumberOfChannels);
Image<Bgr565, byte> img = mat.ToImage<Bgr565, byte>(); // This image is suddenly distorted
The problem is that, depending on something I have not been able to pin down, the result is either a perfectly good image or a skewed one:
normal result
same code different result
It's almost like it's sometimes 1 pixel behind, but the only thing that changes is the size and dimensions of the screenshots.
I have tried direct approaches like
Image<Bgr, byte> img = new Image<Bgr, byte>(width, height);
img.Bytes = data; // data is the byte array I got from the file
This also sometimes gives the correct picture, but other times it throws an exception (an out-of-range exception in Marshal.cs when trying to copy the bytes from data to img).
The only thing I suspect at this point is that I'm doing something wrong when taking the screenshot, but I'm not sure what:
public static Image<Bgr565, byte> Snip()
{
    int screenWidth = (int)System.Windows.SystemParameters.PrimaryScreenWidth;
    int screenHeight = (int)System.Windows.SystemParameters.PrimaryScreenHeight;
    using (Bitmap bmp = new Bitmap(screenWidth, screenHeight))
    {
        using (Graphics gr = Graphics.FromImage(bmp))
            gr.CopyFromScreen(0, 0, 0, 0, bmp.Size);
        using (var snipper = new SnippingTool(bmp))
        {
            if (snipper.ShowDialog() == true)
            {
                Bitmap bitmapImage = new Bitmap(snipper.Image);
                Rectangle rectangle = new Rectangle(0, 0, bitmapImage.Width, bitmapImage.Height); // System.Drawing
                BitmapData bmpData = bitmapImage.LockBits(rectangle, ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb); // System.Drawing.Imaging
                Image<Bgr565, byte> outputImage = new Image<Bgr565, byte>(bitmapImage.Width, bitmapImage.Height, bmpData.Stride, bmpData.Scan0);
                bitmapImage.Dispose();
                snipper.Close();
                return outputImage;
            }
        }
        return null;
    }
}
So far I have not been able to solve this and, knowing my luck, probably no one will answer me here. But please, could someone help me with this?
Thank you in advance
So, thank you everyone for the help.
The issue was indeed in the screenshot script. I had used an incorrect combination of pixel formats, which resulted in an inconsistent bit transfer.
But because the step property of Image<Bgr, byte>.Mat is calculated based on the width of the image (from the EmguCV source code):
step = sizeof(byte) * s.Width * channels;
this made some of the images look normal and others not (speculation based on observation).
Fix:
change all Image<Bgr, byte> to Image<Bgra, byte>
to make it 32-bit, and then change:
BitmapData bmpData = bitmapImage.LockBits(rectangle, ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
to:
BitmapData bmpData = bitmapImage.LockBits(rectangle, ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
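For reference, here is a small numeric sketch (my own addition, not part of the original answer) of why the 24-bit path can skew while the 32-bit path cannot; the width is just an example value:
// GDI+ pads each Bitmap row (Stride) to a multiple of 4 bytes, while Emgu's
// Image<Bgr, byte> assumes tightly packed rows of width * 3 bytes (the step formula above).
int width = 1366;                              // hypothetical width whose 24 bpp row is not 4-aligned
int packedRow = width * 3;                     // what Image<Bgr, byte> expects: 4098 bytes
int paddedStride = ((width * 3 + 3) / 4) * 4;  // what LockBits returns for 24 bpp: 4100 bytes
bool willSkew = packedRow != paddedStride;     // true: every copied row drifts by 2 bytes
// With a 32 bpp format the row is width * 4 bytes, which is always a multiple of 4,
// so the padded stride equals the packed row length and no skew occurs.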
Hope this will help someone in the future. : )
I'm very new to EmguCV, so I need a little help.
The code below is mainly taken from various places found via Google. It takes a jpg file (which has a green background) and allows the values of the h1 and h2 settings to be changed from a separate form, so as to create (reveal) a mask.
Now, what I want to be able to do with this mask is turn it transparent.
At the moment it just displays a black background around a person (for example), and then saves to file.
I need to know how to turn the black background transparent, if this is the correct way to approach it.
Thanks in advance.
What I have so far is in C#:
imgInput = new Image<Bgr, byte>(FileName);
Image<Hsv, Byte> hsvimg = imgInput.Convert<Hsv, Byte>();
//extract the hue and value channels
Image<Gray, Byte>[] channels = hsvimg.Split(); // split into components
Image<Gray, Byte> imghue = channels[0]; // hsv, so channels[0] is hue.
Image<Gray, Byte> imgval = channels[2]; // hsv, so channels[2] is value.
//filter out all but "the color you want"...seems to be 0 to 128 (64, 72) ?
Image<Gray, Byte> huefilter = imghue.InRange(new Gray(h1), new Gray(h2));
// TURN IT TRANSPARENT somewhere around here?
pictureBox2.Image = imgInput.Copy(huefilter).Bitmap;
imgInput.Copy(huefilter).Save("changedImage.png");
I am not sure I really understand what you are trying to do, but a mask is a binary image: usually black for what you do not want and white for what you do. As far as I know, there is no such thing as a transparent mask; to me that makes no sense. Masks are used to extract parts of an image by masking out the rest.
Maybe you could elaborate on what it is you want to do?
Doug
I think I may have the solution I was looking for. I found some code on Stack Overflow, which I've tweaked a little:
public Image<Bgra, Byte> MakeTransparent(Image<Bgr, Byte> image, double r1, double r2)
{
    Mat imageMat = image.Mat;
    Mat finalMat = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 4);
    Mat tmp = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
    Mat alpha = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
    CvInvoke.CvtColor(imageMat, tmp, ColorConversion.Bgr2Gray);
    CvInvoke.Threshold(tmp, alpha, (int)r1, (int)r2, ThresholdType.Binary);
    VectorOfMat rgb = new VectorOfMat(3);
    CvInvoke.Split(imageMat, rgb);
    Mat[] rgba = { rgb[0], rgb[1], rgb[2], alpha };
    VectorOfMat vector = new VectorOfMat(rgba);
    CvInvoke.Merge(vector, finalMat);
    return finalMat.ToImage<Bgra, Byte>();
}
I'm now looking at adding a Gaussian smooth to the mask to create a kind of blend, where the two images are layered, rather than a sharp cut-out.
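A minimal sketch of that idea (my own addition, building on the MakeTransparent method above): blur the alpha Mat before it is merged, so the cut-out edge fades instead of switching abruptly. The kernel size is an arbitrary value to experiment with:
// Inside MakeTransparent, just before CvInvoke.Merge(vector, finalMat):
CvInvoke.GaussianBlur(alpha, alpha, new System.Drawing.Size(15, 15), 0); // feather the mask edges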
I'm looking for example code to understand how to implement the EmguCV tracker. I tried a few things, as in this poorly written code:
class ObjectTracker
{
    Emgu.CV.Tracking.TrackerCSRT tracker = new Emgu.CV.Tracking.TrackerCSRT();
    bool isActive = false;
    public bool trackerActive = false;

    public void Track(Bitmap pic, Rectangle selection, out Rectangle bound)
    {
        Rectangle result = new Rectangle();
        Bitmap bitmap = pic; // This is your bitmap
        Emgu.CV.Image<Bgr, Byte> imageCV = new Emgu.CV.Image<Bgr, byte>(bitmap); // Image class from Emgu.CV
        Emgu.CV.Mat mat = imageCV.Mat; // This is your image converted to Mat
        if (tracker.Init(mat, selection))
        {
            while (tracker.Update(mat, out bound))
            {
                result = bound;
            }
        }
        bound = result;
    }
}
I'm aware there are a few logic flaws, but I still couldn't manage to get any result in my different attempts.
Thanks!
Turns out, it is really simple.
First, define a tracker of the type you desire.
Example:
Emgu.CV.Tracking.TrackerCSRT Tracker= new Emgu.CV.Tracking.TrackerCSRT();
Then, before scanning the selected area (ROI), you need to initialize the tracker.
Example:
Tracker.Init(image, roi);
Important note: image must be of type Emgu.CV.Mat, and roi of type System.Drawing.Rectangle.
To convert your Bitmap to a Mat you can use the following method:
Example:
public Emgu.CV.Mat toMat(Bitmap pic)
{
    Bitmap bitmap = pic;
    Emgu.CV.Image<Bgr, Byte> imageCV = new Emgu.CV.Image<Bgr, byte>(bitmap);
    Emgu.CV.Mat mat = imageCV.Mat;
    return mat;
}
Note: this code belongs to someone else on Stack Overflow whose post I can no longer find. Thanks to them.
Finally, the following statement returns the Rectangle that envelops the tracked object. The trick is that it is not returned as the method's return value; it comes back via the out parameter.
tracker.Update(image, out roi);
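Putting the pieces together, a minimal per-frame loop might look roughly like this (my own sketch; the frame source, the first frame and the initial selection rectangle are assumed to come from your own capture and UI code):
// Hypothetical frame loop: 'firstFrame', 'frames' and 'initialSelection' are assumed inputs.
Emgu.CV.Tracking.TrackerCSRT tracker = new Emgu.CV.Tracking.TrackerCSRT();
System.Drawing.Rectangle roi = initialSelection;
tracker.Init(toMat(firstFrame), roi);        // initialize once on the first frame
foreach (Bitmap frameBitmap in frames)
{
    Emgu.CV.Mat mat = toMat(frameBitmap);    // reuse the toMat helper above
    if (tracker.Update(mat, out roi))        // roi now envelops the tracked object
    {
        // draw or otherwise use roi here
    }
}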
Last Note: If anyone knows how to improve performance or multithread this method, please leave a comment.
Have a nice day!
I am getting the following exception at ProcessImage(bitmap1, bitmap2):
Unsupported Pixel Format of source or template image
and this is my code:
public static double FindComparisonRatioBetweenImages(
    System.Drawing.Image one, System.Drawing.Image two)
{
    Bitmap bitmap1 = new Bitmap(one);
    Bitmap bitmap2 = new Bitmap(two);
    ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0);
    TemplateMatch[] matchings = null;
    matchings = tm.ProcessImage(bitmap1, bitmap2); // Exception occurs here!
    return matchings[0].Similarity;
}
I have also passed the managedImage from the code below into the method, but it still gives the error:
UnmanagedImage unmanagedImageA = UnmanagedImage.FromManagedImage(bitmap1);
Bitmap managedImageA = unmanagedImageA.ToManagedImage();
UnmanagedImage unmanagedImageB = UnmanagedImage.FromManagedImage(bitmap2);
Bitmap managedImageB = unmanagedImageB.ToManagedImage();
I have passed random images from my computer; they all give the exception.
I have passed a blank image edited in Paint into the method; it still gives the exception.
I have also checked jpeg, png and bmp formats; nothing works.
From the documentation of ExhaustiveTemplateMatching:
The class implements exhaustive template matching algorithm, which performs complete scan of source image, comparing each pixel with corresponding pixel of template.
The class processes only grayscale 8 bpp and color 24 bpp images.
So, those are the image formats you must use.
As requested, to convert to a specific pixel format, you can do this:
public static Bitmap ConvertToFormat(this Image image, PixelFormat format)
{
    Bitmap copy = new Bitmap(image.Width, image.Height, format);
    using (Graphics gr = Graphics.FromImage(copy))
    {
        gr.DrawImage(image, new Rectangle(0, 0, copy.Width, copy.Height));
    }
    return copy;
}
The one you would use is System.Drawing.Imaging.PixelFormat.Format24bppRgb.
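With that extension method in place, the original call can be wrapped roughly like this (a sketch reusing the poster's variable names; both the source and the template are converted before matching):
Bitmap bitmap1 = new Bitmap(one).ConvertToFormat(System.Drawing.Imaging.PixelFormat.Format24bppRgb);
Bitmap bitmap2 = new Bitmap(two).ConvertToFormat(System.Drawing.Imaging.PixelFormat.Format24bppRgb);
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0);
TemplateMatch[] matchings = tm.ProcessImage(bitmap1, bitmap2); // both images are now 24 bpp
return matchings[0].Similarity;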
I am having a problem with EmguCV. I used a demo application, and edited it to my needs.
It involves the following function:
public override Image<Gray, byte> DetectSkin(Image<Bgr, byte> Img, IColor min, IColor max)
{
    Image<Hsv, Byte> currentHsvFrame = Img.Convert<Hsv, Byte>();
    Image<Gray, byte> skin = new Image<Gray, byte>(Img.Width, Img.Height);
    skin = currentHsvFrame.InRange((Hsv)min, (Hsv)max);
    return skin;
}
In the demo application, the image comes from a video. The frame is captured from the video like this:
Image<Bgr, Byte> currentFrame;
grabber = new Emgu.CV.Capture(@".\..\..\..\M2U00253.MPG");
grabber.QueryFrame();
currentFrame = grabber.QueryFrame();
In my application, the image comes from a Microsoft Kinect stream.
I use the following function:
private void SensorColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            // Copy the pixel data from the image to a temporary array
            colorFrame.CopyPixelDataTo(this.colorPixels);
            // Write the pixel data into our bitmap
            this.colorBitmap.WritePixels(
                new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
                this.colorPixels,
                this.colorBitmap.PixelWidth * sizeof(int),
                0);
            Bitmap b = BitmapFromWriteableBitmap(this.colorBitmap);
            currentFrame = new Image<Bgr, byte>(b);
            currentFrameCopy = currentFrame.Copy();
            skinDetector = new YCrCbSkinDetector();
            Image<Gray, Byte> skin = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
        }
    }
}
private static System.Drawing.Bitmap BitmapFromWriteableBitmap(WriteableBitmap writeBmp)
{
    System.Drawing.Bitmap bmp;
    using (System.IO.MemoryStream outStream = new System.IO.MemoryStream())
    {
        BitmapEncoder enc = new BmpBitmapEncoder();
        enc.Frames.Add(BitmapFrame.Create((BitmapSource)writeBmp));
        enc.Save(outStream);
        bmp = new System.Drawing.Bitmap(outStream);
    }
    return bmp;
}
Now, the demo application works, and mine doesn't. Mine gives the following exception:
And the image here contains the following:
I really don't understand this exception. And when I run the demo (working) application, the image contains:
Which is, in my eyes, exactly the same. I really don't understand this. Help is very welcome!
To make things easier I've uploaded a working WPF solution for you to the code reference sourceforge page I've been building:
http://sourceforge.net/projects/emguexample/files/Capture/Kinect_SkinDetector_WPF.zip/download
https://sourceforge.net/projects/emguexample/files/Capture/
This was designed and tested using EMGU x64 2.4.2, so in the Lib folder of the project you will find the referenced DLLs. If you are using a different version, you will need to delete the current references and replace them with the version you're using.
Secondly, the project is designed, like all projects from the code reference library, to be built from the Emgu.CV.Example folder into the ..\EMGU 2.X.X.X\bin.. global bin directory, where the compiled OpenCV libraries sit in either an x86 or x64 folder.
If you struggle to get the code working I can provide all the components, but I hate redistributing all the OpenCV files that you already have, so let me know if you want this.
You will need to resize the MainWindow manually to display both images, as I didn't spend too much time playing with the layout.
So the code...
In the form initialisation method I check for the Kinect sensor and set up the event handlers for the frames-ready events. I have left the original threshold values and skinDetector type, although I don't use the EMGU version; I just forgot to remove it. You will need to play with the threshold values and so on.
//// Look through all sensors and start the first connected one.
//// This requires that a Kinect is connected at the time of app startup.
//// To make your app robust against plug/unplug,
//// it is recommended to use KinectSensorChooser provided in Microsoft.Kinect.Toolkit (see components in Toolkit Browser).
foreach (var potentialSensor in KinectSensor.KinectSensors)
{
    if (potentialSensor.Status == KinectStatus.Connected)
    {
        this.KS = potentialSensor;
        break;
    }
}
// If we have a Kinect sensor we will set it up
if (null != KS)
{
    // Turn on the color stream to receive color frames
    KS.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    // Turn on the depth stream to receive depth frames
    KS.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
    // Start the streaming process
    KS.Start();
    // Create a link to a callback to deal with the frames
    KS.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(KS_AllFramesReady);
    // We set up a thread to process the image/disparity map from the Kinect.
    // Why? The Kinect AllFramesReady has a timeout; if it has not finished, the streams will simply stop.
    KinectBuffer = new Thread(ProcessBuffer);
    hsv_min = new Hsv(0, 45, 0);
    hsv_max = new Hsv(20, 255, 255);
    YCrCb_min = new Ycc(0, 131, 80);
    YCrCb_max = new Ycc(255, 185, 135);
    detector = new AdaptiveSkinDetector(1, AdaptiveSkinDetector.MorphingMethod.NONE);
    skinDetector = new YCrCbSkinDetector();
}
I always process the Kinect data in a new thread for speed, but you may want to move this to a BackgroundWorker if you plan to do any more heavy processing, so it is better managed.
The thread calls the ProcessBuffer() method; you can ignore all the commented-out code, as it is a remnant of the code used to display the depth image. Again, I'm using the Marshal.Copy method to keep things fast, but the thing to look for is the Dispatcher.BeginInvoke in WPF that allows the images to be displayed from the Kinect thread. This is required as I'm not processing on the main thread.
//This takes the byte[] array from the kinect and makes a bitmap from the colour data for us
byte[] pixeldata = new byte[CF.PixelDataLength];
CF.CopyPixelDataTo(pixeldata);
System.Drawing.Bitmap bmap = new System.Drawing.Bitmap(CF.Width, CF.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(new System.Drawing.Rectangle(0, 0, CF.Width, CF.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, CF.PixelDataLength);
bmap.UnlockBits(bmapdata);
//display our colour frame
currentFrame = new Image<Bgr, Byte>(bmap);
Image<Gray, Byte> skin2 = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
ExtractContourAndHull(skin2);
DrawAndComputeFingersNum();
//Display our images using WPF Dispatcher Invoke as this is a sub thread.
Dispatcher.BeginInvoke((Action)(() =>
{
    ColorImage.Source = BitmapSourceConvert.ToBitmapSource(currentFrame);
}), System.Windows.Threading.DispatcherPriority.Render, null);
Dispatcher.BeginInvoke((Action)(() =>
{
    SkinImage.Source = BitmapSourceConvert.ToBitmapSource(skin2);
}), System.Windows.Threading.DispatcherPriority.Render, null);
I hope this helps. I will at some point neaten up the code I uploaded.
Cheers