My task is to recognize eye borders (not the pupil, but the whole eye).
I can't extract the white part of the image, even using HSV filtering.
I filter by hue and saturation and get no result:
Image<Gray, byte> huefilter = imghue.InRange(new Gray(Color.White.GetHue()-10), new Gray(Color.White.GetHue() + 10));
Image<Gray, byte> saturationfilter = imgsaturation.InRange(new Gray(Color.White.GetSaturation() - 10), new Gray(Color.White.GetSaturation() + 10));
Image<Gray, byte> resultImg = huefilter.And(saturationfilter);
Sample images: One, Two, Three
Please advise how to get the light zone around the pupil with C# and EmguCV.
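Since pure white has no meaningful hue (Color.White.GetHue() and Color.White.GetSaturation() both return 0, and GetSaturation() is on a 0-1 scale while the split channels are bytes in 0-255), the ranges in the code above collapse to almost nothing. Filtering on low saturation plus high value is a possible alternative. A sketch, where the threshold numbers are only guesses to tune against the sample images:

```csharp
// Assumption: "imgInput" is the loaded BGR eye image (hypothetical name).
Image<Hsv, byte> hsv = imgInput.Convert<Hsv, byte>();
Image<Gray, byte>[] ch = hsv.Split(); // ch[0]=hue, ch[1]=saturation, ch[2]=value

// The sclera is nearly white: saturation low, value (brightness) high.
Image<Gray, byte> lowSat  = ch[1].InRange(new Gray(0), new Gray(60));    // tune
Image<Gray, byte> highVal = ch[2].InRange(new Gray(150), new Gray(255)); // tune
Image<Gray, byte> scleraMask = lowSat.And(highVal);
```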
I'm very new to EmguCV, so I need a little help.
The code below is mainly pieced together from various Google results. It takes a jpg file (which has a green background) and lets me change the values of the h1 and h2 settings from a separate form so as to create (reveal) a mask.
Now what I want to do with this mask is turn it transparent.
At the moment it just displays a black background around a person (for example), and then saves to file.
I need to know how to turn the black background transparent, if this is the correct way to approach this.
Thanks in advance.
What I have so far in C#:
imgInput = new Image<Bgr, byte>(FileName);
Image<Hsv, Byte> hsvimg = imgInput.Convert<Hsv, Byte>();
//extract the hue and value channels
Image<Gray, Byte>[] channels = hsvimg.Split(); // split into components
Image<Gray, Byte> imghue = channels[0]; // hsv, so channels[0] is hue.
Image<Gray, Byte> imgval = channels[2]; // hsv, so channels[2] is value.
//filter out all but "the color you want"...seems to be 0 to 128 (64, 72) ?
Image<Gray, Byte> huefilter = imghue.InRange(new Gray(h1), new Gray(h2));
Image<Gray, Byte> mask = huefilter; // use the hue filter as the mask below
// TURN IT TRANSPARENT somewhere around here?
pictureBox2.Image = imgInput.Copy(mask).Bitmap;
imgInput.Copy(mask).Save("changedImage.png");
I am not sure I really understand what you are trying to do. But a mask is a binary object: usually black for what you do not want and white for what you do. As far as I know, there is no such thing as a transparent mask; to me that makes no sense. Masks are used to extract parts of an image by masking out the rest.
Maybe you could elaborate on what it is you want to do?
Doug
I think I may have the solution I was looking for. I found some code on Stack Overflow which I've tweaked a little:
public Image<Bgra, Byte> MakeTransparent(Image<Bgr, Byte> image, double r1, double r2)
{
    Mat imageMat = image.Mat;
    Mat finalMat = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 4);
    Mat tmp = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
    Mat alpha = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);

    // Build the alpha channel from a grayscale threshold:
    // r1 is the threshold, r2 the value written for pixels above it (normally 255).
    CvInvoke.CvtColor(imageMat, tmp, ColorConversion.Bgr2Gray);
    CvInvoke.Threshold(tmp, alpha, (int)r1, (int)r2, ThresholdType.Binary);

    // Split the B, G, R channels and append alpha as the fourth channel.
    VectorOfMat bgr = new VectorOfMat(3);
    CvInvoke.Split(imageMat, bgr);
    Mat[] bgra = { bgr[0], bgr[1], bgr[2], alpha };
    VectorOfMat vector = new VectorOfMat(bgra);
    CvInvoke.Merge(vector, finalMat);

    return finalMat.ToImage<Bgra, Byte>();
}
I'm now looking at adding SmoothGaussian to the mask to create a kind of blend, where the two images are layered, rather than a sharp cut-out.
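One possible way to do that blend, assuming the MakeTransparent method above: blur the alpha channel before merging, so the edge of the cut-out fades instead of switching from 0 to 255 in a single pixel. The kernel size of 15 is an arbitrary value to tune:

```csharp
// Inside MakeTransparent, after CvInvoke.Threshold(...) and before CvInvoke.Split(...):
// feather the hard binary alpha edge (kernel dimensions must be odd).
CvInvoke.GaussianBlur(alpha, alpha, new System.Drawing.Size(15, 15), 0);
```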
I'm programming in C# and I use EmguCV (3.1). I use the Canny edge detector from the CvInvoke class. My problem is that the algorithm does not find some edges. My OpenCL = true. Here is my problem:
The input image:
And the result:
As you can see, the rectangles that are not rotated are missing their top edges. My questions are:
1- Is this normal?
2- If not, how can I fix it?
Here is my code:
CvInvoke.UseOpenCL = true;
Bitmap bm = new Bitmap(pictureBox1.Image);
Image<Gray, byte> im = new Image<Gray, byte>(bm);
UMat u = im.ToUMat();
CvInvoke.Canny(u, u, 150, 50); // thresholds given high-low; OpenCV uses the smaller of the two as the lower threshold
pictureBox1.Image = u.Bitmap;
Hello, I'm trying to apply point tracking to a scene.
Now I want to get only the points that move horizontally. Does anyone have any thoughts on this?
The arrays "Actual" and "nextFeature" contain the relevant x,y coordinates. I tried taking the difference of the two arrays, but it did not work. I also tried computing the optical flow with Farneback, but it didn't give me a satisfying result. I would really appreciate any thoughts on how to keep only the points moving along a horizontal line.
Thanks.
Here is the code.
private void ProcessFrame(object sender, EventArgs arg)
{
    PointF[][] Actual = new PointF[0][];
    if (Frame == null)
    {
        Frame = _capture.RetrieveBgrFrame();
        Previous_Frame = Frame.Copy();
    }
    else
    {
        Image<Gray, byte> grayf = Frame.Convert<Gray, Byte>();
        Actual = grayf.GoodFeaturesToTrack(300, 0.01d, 0.01d, 5);
        Image<Gray, byte> frame1 = Frame.Convert<Gray, Byte>();
        Image<Gray, byte> prev = Previous_Frame.Convert<Gray, Byte>();
        Image<Gray, float> velx = new Image<Gray, float>(Frame.Size);
        Image<Gray, float> vely = new Image<Gray, float>(Previous_Frame.Size);
        Frame = _capture.RetrieveBgrFrame().Resize(300, 300, Emgu.CV.CvEnum.INTER.CV_INTER_AREA);
        Byte[] status;
        Single[] trer;
        PointF[][] feature = Actual;
        PointF[] nextFeature = new PointF[300];
        Image<Gray, Byte> buf1 = new Image<Gray, Byte>(Frame.Size);
        Image<Gray, Byte> buf2 = new Image<Gray, Byte>(Frame.Size);
        opticalFlowFrame = new Image<Bgr, Byte>(prev.Size);
        Image<Bgr, Byte> FlowFrame = new Image<Bgr, Byte>(prev.Size);
        OpticalFlow.PyrLK(prev, frame1, Actual[0], new System.Drawing.Size(10, 10), 0,
                          new MCvTermCriteria(20, 0.03d), out nextFeature, out status, out trer);
        for (int x = 0; x < Actual[0].Length; x++)
        {
            opticalFlowFrame.Draw(new CircleF(new PointF(nextFeature[x].X, nextFeature[x].Y), 1f),
                                  new Bgr(Color.Blue), 2);
        }
        new1 = old;
        old = nextFeature;
        Actual[0] = nextFeature;
        Previous_Frame = Frame.Copy();
        captureImageBox.Image = Frame;
        grayscaleImageBox.Image = opticalFlowFrame;
        //cannyImageBox.Image = velx;
        //smoothedGrayscaleImageBox.Image = vely;
    }
}
First... I can only give you a general idea about this, not a code snippet.
Here's how you might do it (one of many possible approaches to this problem):
1. Take the zero-th frame and pass it through goodFeaturesToTrack. Collect the points in an array ...say, initialPoints.
2. Grab the (zero + one)-th frame. With respect to the points grabbed in step 1, run it through calcOpticalFlowPyrLK. Store the next points in another array ...say, nextPoints. Also keep track of the status and error vectors.
3. Now, with initialPoints and nextPoints in tow, we leave the comfort of OpenCV and do things our way. For every feature in initialPoints and nextPoints (with status set to 1 and error below an acceptable threshold), we calculate the gradient between the points.
4. Accept for horizontal motion only those points whose angle of slope is around either 0 degrees or 180 degrees. Vector directions won't lie perfectly at 0 or 180, so allow a bit of +/- tolerance.
Repeat steps 1 to 4 for all frames.
Going through the code you posted, it seems like you've almost nailed steps 1 and 2.
However, once you get the vector nextFeature, you're drawing circles around its points. Interesting... but not what we need.
See if you can implement the gradient calculation and filtering.
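The gradient-and-angle filter in steps 3 and 4 can be sketched without any OpenCV calls. KeepHorizontal and the 15-degree tolerance are made-up names/values; the arrays map onto Actual[0] and nextFeature, plus the status vector from OpticalFlow.PyrLK:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

static class MotionFilter
{
    // Return the indices of point pairs whose motion vector is roughly horizontal.
    // toleranceDeg: how far from 0/180 degrees a vector may deviate and still count.
    public static List<int> KeepHorizontal(PointF[] initialPoints, PointF[] nextPoints,
                                           byte[] status, double toleranceDeg)
    {
        var kept = new List<int>();
        for (int i = 0; i < initialPoints.Length; i++)
        {
            if (status != null && status[i] == 0) continue; // tracking failed for this point

            double dx = nextPoints[i].X - initialPoints[i].X;
            double dy = nextPoints[i].Y - initialPoints[i].Y;
            if (dx == 0 && dy == 0) continue;               // no motion at all

            // Angle of the motion vector in degrees, range (-180, 180].
            double angle = Math.Atan2(dy, dx) * 180.0 / Math.PI;

            // Horizontal means close to 0 or close to +/-180.
            double distToHorizontal = Math.Min(Math.Abs(angle), 180.0 - Math.Abs(angle));
            if (distToHorizontal <= toleranceDeg)
                kept.Add(i);                                // index into both point arrays
        }
        return kept;
    }
}
```

Then draw (or otherwise process) only the indices this returns, instead of every entry of nextFeature.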
I need to compare two images and quantify the difference between them as a percentage. The AbsDiff function in EmguCV doesn't give me that directly. I have already done the compare example on the EmguCV wiki. What I want exactly is the difference between two images in numerical form:
//emgucv wiki compare example
//acquire the frame
Frame = capture.RetrieveBgrFrame(); //aquire a frame
Difference = Previous_Frame.AbsDiff(Frame);
//what i want is
double differenceValue=Previous_Frame."SOMETHING";
If you need more detail, please ask.
Thanks in advance.
EmguCV MatchTemplate based comparison
Bitmap inputMap = //bitmap source image
Image<Gray, Byte> sourceImage = new Image<Gray, Byte>(inputMap);
Bitmap tempBitmap = //Bitmap template image
Image<Gray, Byte> templateImage = new Image<Gray, Byte>(tempBitmap);
Image<Gray, float> resultImage = sourceImage.MatchTemplate(templateImage, Emgu.CV.CvEnum.TemplateMatchingType.CcoeffNormed);
double[] minValues, maxValues;
Point[] minLocations, maxLocations;
resultImage.MinMax(out minValues, out maxValues, out minLocations, out maxLocations);
double percentage = maxValues[0] * 100; // with CcoeffNormed this is a similarity score: 100 means the images match exactly
For an exact match, the two images need to have the same width and height, or MatchTemplate will throw an exception.
Alternatively, the template image can be smaller than the source image, in which case you get the occurrences of the template within the source.
EmguCV AbsDiff based comparison
Bitmap inputMap = //bitmap source image
Image<Gray, Byte> sourceImage = new Image<Gray, Byte>(inputMap);
Bitmap tempBitmap = //Bitmap template image
Image<Gray, Byte> templateImage = new Image<Gray, Byte>(tempBitmap);
Image<Gray, byte> resultImage = new Image<Gray, byte>(templateImage.Width, templateImage.Height);
CvInvoke.AbsDiff(sourceImage, templateImage, resultImage);
double diff = CvInvoke.CountNonZero(resultImage);
diff = (diff / (templateImage.Width * templateImage.Height)) * 100; // this will give you the difference in percentage
In my experience, this is the better of the two methods: MatchTemplate fails to capture very minimal changes between two images, while AbsDiff picks up even very small differences.
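One caveat with the AbsDiff approach: CountNonZero counts every pixel that differs by even one intensity level, so sensor noise inflates the percentage. A possible refinement, assuming the resultImage from the code above, is to threshold the diff first (the tolerance of 20 is an arbitrary value to tune):

```csharp
// Ignore per-pixel differences of fewer than `tolerance` intensity levels,
// then compute the percentage of genuinely changed pixels.
int tolerance = 20; // tune for your noise level
CvInvoke.Threshold(resultImage, resultImage, tolerance, 255, ThresholdType.Binary);
double changed = CvInvoke.CountNonZero(resultImage);
double percentage = changed / (resultImage.Width * resultImage.Height) * 100;
```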
The task is to recognize red areas on an image, and it requires maximum accuracy. But the quality of the source image is quite bad. I'm trying to minimize the noise on a mask of detected red areas using cvThreshold. Unfortunately, it has no effect: the gray artifacts stay.
//Converting from Bgr to Hsv
Image<Hsv, Byte> hsvimg = markedOrigRecImg.Convert<Hsv, Byte>();
Image<Gray, Byte>[] channels = hsvimg.Split();
Image<Gray, Byte> hue = channels[0];
Image<Gray, Byte> saturation = channels[1];
Image<Gray, Byte> value = channels[2];
Image<Gray, Byte> hueFilter = hue.InRange(new Gray(0), new Gray(30)); // note: red hue also wraps around near 180 on OpenCV's 0-179 hue scale
Image<Gray, Byte> satFilter = saturation.InRange(new Gray(100), new Gray(255));
Image<Gray, Byte> valFilter = value.InRange(new Gray(50), new Gray(255));
//Mask contains gray artifacts
Image<Gray,Byte> mask = (hueFilter.And(satFilter)).And(valFilter);
//Gray artifacts stay even if the threshold (the third param.) value is 0...
CvInvoke.cvThreshold(mask, mask, 100, 255, THRESH.CV_THRESH_BINARY);
mask.Save("D:/img.jpg");
At the same time, the following works fine: the saved image is purely white.
#region test
Image<Gray,Byte> some = new Image<Gray, byte>(mask.Size);
some.SetValue(120);
CvInvoke.cvThreshold(some, some, 100, 255, THRESH.CV_THRESH_BINARY);
some.Save("D:/some.jpg");
#endregion
Mask before threshold example:
http://dl.dropbox.com/u/52502108/input.jpg
Mask after threshold example:
http://dl.dropbox.com/u/52502108/output.jpg
Thank you in advance.
Constantine B.
The reason those gray artifacts appear on saved images after applying any kind of threshold is... the default *.jpg save format of Image<TColor, TDepth>! More precisely, the JPEG compression it applies. So there was no problem with the thresholding itself, just a spoiled output image. It was strongly confusing, though. The right way to save such images (without artifacts) is, for example: Image<TColor, TDepth>.Bitmap.Save(your_path, ImageFormat.Bmp)
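Since Image<TColor, TDepth>.Save picks the codec from the file extension, saving with a lossless extension is another way around it; both of these should avoid the artifacts:

```csharp
// Lossless alternatives to the default JPEG output:
mask.Save("D:/img.png");                                                // PNG via Emgu's Save
mask.Bitmap.Save("D:/img.bmp", System.Drawing.Imaging.ImageFormat.Bmp); // BMP via GDI+
```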