How can I separate the detected objects into individual contours?
Below is my source code:
Image<Gray, byte> imgOutput = imgInput.Convert<Gray, byte>().ThresholdBinary(new Gray(100), new Gray(255));
Emgu.CV.Util.VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat m = new Mat();
//Image<Gray, byte> imgOut = new Image<Gray, byte>(imgInput.Width, imgInput.Height, new Gray(0));
CvInvoke.FindContours(imgOutput, contours, m, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
if (contours.Size > 0)
{
    // Iterate over every detected contour instead of hard-coding index 1,
    // which throws when fewer than two contours are found.
    for (int i = 0; i < contours.Size; i++)
    {
        double perimeter = CvInvoke.ArcLength(contours[i], true);
        VectorOfPoint approx = new VectorOfPoint();
        CvInvoke.ApproxPolyDP(contours[i], approx, 0.04 * perimeter, true);
        CvInvoke.DrawContours(imgInput, contours, i, new MCvScalar(0, 0, 255), 2);
    }
    pictureBox2.Image = imgInput.Bitmap;
}
Separated objects result.
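FindContours with RetrType.External already returns one entry per object, so indexing the contour list separates them. Stripped of EmguCV, the underlying idea is connected-component labeling. Here's a minimal NumPy sketch of that idea for illustration only (4-connectivity, flood fill; `label_components` is a hypothetical helper, not an EmguCV API):

```python
import numpy as np
from collections import deque

def label_components(binary):
    """Label 4-connected foreground regions of a boolean array.

    Returns (labels, count): labels[y, x] is 0 for background or the
    1-based id of the component covering that pixel.
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue  # pixel already assigned to a component
        count += 1
        labels[y, x] = count
        queue = deque([(y, x)])
        while queue:  # breadth-first flood fill from the seed pixel
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count

# Two separate blobs -> two labels; mask each label to get each object alone.
img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True
img[5:7, 4:7] = True
labels, n = label_components(img)
objects = [(labels == i) for i in range(1, n + 1)]  # one boolean mask per object
```

In the EmguCV code above the same separation falls out for free: each `contours[i]` is one object, and you can crop it with `CvInvoke.BoundingRectangle` if you need it in its own image.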
I'm using EmguCV's HoughLines method and get a Mat object back. How can I get a LineSegment2D[] from this Mat?
Sample code:
Image<Bgr, Byte> img1 = new Image<Bgr, Byte>(fileName);
UMat uimage = new UMat();
CvInvoke.CvtColor(img1, uimage, ColorConversion.Bgr2Gray);
UMat mat = new UMat();
CvInvoke.HoughLines(uimage, mat, 1, Math.PI / 180, 20);
Here I can see the size is not 0. How can I extract these lines, or is there another function to display them?
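HoughLines (unlike the probabilistic HoughLinesP) returns (rho, theta) pairs rather than segments, so there is no LineSegment2D[] hiding in that Mat; each pair describes an infinite line, which you convert to two far-apart endpoints before drawing. The conversion arithmetic, sketched in NumPy-free Python with hypothetical values (the same math applies to the floats read out of the Emgu Mat):

```python
import math

def polar_line_to_segment(rho, theta, length=1000):
    """Convert a Hough (rho, theta) line into two drawable endpoints.

    (x0, y0) is the point on the line closest to the origin; the two
    endpoints extend `length` pixels along the line in both directions.
    """
    a, b = math.cos(theta), math.sin(theta)
    x0, y0 = a * rho, b * rho
    # The line's direction vector is (-sin(theta), cos(theta)).
    p1 = (x0 - length * b, y0 + length * a)
    p2 = (x0 + length * b, y0 - length * a)
    return p1, p2

# theta = 0 describes the vertical line x = rho.
p1, p2 = polar_line_to_segment(5.0, 0.0)
```

If you actually want finite segments, EmguCV's probabilistic variant (HoughLinesP, exposed as `Image<,>.HoughLinesBinary` in older Emgu versions, if I recall the API correctly) returns LineSegment2D structures directly.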
I have the following code:
LinearGradientBrush linGrBrush = new LinearGradientBrush();
linGrBrush.StartPoint = new Point(0,0);
linGrBrush.EndPoint = new Point(1, 0);
linGrBrush.GradientStops.Add(new GradientStop(Colors.Red, 0.0));
linGrBrush.GradientStops.Add(new GradientStop(Colors.Yellow, 0.5));
linGrBrush.GradientStops.Add(new GradientStop(Colors.White, 1.0));
Rectangle rect = new Rectangle();
rect.Width = 1000;
rect.Height = 1;
rect.Fill = linGrBrush;
rect.Arrange(new Rect(0, 0, 1, 1000));
rect.Measure(new Size(1000, 1));
If I do
myGrid.Children.Add(rect);
Then the gradient is drawn fine on the window.
I want to use this gradient for an intensity map somewhere else, so I need to get the pixels out of it. To do this, I understand I can convert it to a bitmap, using RenderTargetBitmap. Here's the next part of the code:
RenderTargetBitmap bmp = new RenderTargetBitmap(
1000,1,72,72,
PixelFormats.Pbgra32);
bmp.Render(rect);
Image myImage = new Image();
myImage.Source = bmp;
To test this, I do:
myGrid.Children.Add(myImage);
But nothing appears on the window. What am I doing wrong?
Arrange has to be called after Measure, and the Rect values should be passed correctly.
Instead of
rect.Arrange(new Rect(0, 0, 1, 1000)); // wrong width and height
rect.Measure(new Size(1000, 1));
you should do
var rect = new Rectangle { Fill = linGrBrush };
var size = new Size(1000, 1);
rect.Measure(size);
rect.Arrange(new Rect(size));
var bmp = new RenderTargetBitmap(1000, 1, 96, 96, PixelFormats.Pbgra32);
bmp.Render(rect);
I need to implement skeletonization in EmguCV but have not had success.
I have the code mentioned on the website below, but it does not work:
Skeletonization using EmguCV
This code below does NOT work:
Image<Gray, Byte> eroded = new Image<Gray, byte>(img2.Size);
Image<Gray, Byte> temp = new Image<Gray, byte>(img2.Size);
Image<Gray, Byte> skel = new Image<Gray, byte>(img2.Size);
skel.SetValue(0);
CvInvoke.cvThreshold(img2, img2, 127, 256, 0);
StructuringElementEx element = new StructuringElementEx(3, 3, 1, 1, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_CROSS);
bool done = false;
while (!done)
{
CvInvoke.cvErode(img2, eroded, element,1);
CvInvoke.cvDilate(eroded, temp, element,1);
temp = img2.Sub(temp);
skel = skel | temp;
img2 = eroded;
if (CvInvoke.cvCountNonZero(img2) == 0) done = true;
}
This code WORKS, but it is very slow on video (sequential frames):
Image<Gray, byte> Skeleton(Image<Gray, byte> orgImg)
{
Image<Gray, byte> skel = new Image<Gray, byte>(orgImg.Size);
for (int y = 0; y < skel.Height; y++)
for (int x = 0; x < skel.Width; x++)
skel.Data[y, x, 0] = 0;
imageBoxOutputROI.Image = skel;
Image<Gray, byte> img = skel.Copy();
for (int y = 0; y < skel.Height; y++)
for (int x = 0; x < skel.Width; x++)
img.Data[y, x, 0] = orgImg.Data[y, x, 0];
StructuringElementEx element;
element = new StructuringElementEx(3, 3, 1, 1, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_CROSS);
Image<Gray, byte> temp;
bool done = false;
do
{
temp = img.MorphologyEx(element, Emgu.CV.CvEnum.CV_MORPH_OP.CV_MOP_OPEN, 1);
temp = temp.Not();
temp = temp.And(img);
skel = skel.Or(temp);
img = img.Erode(1);
double[] min, max;
Point[] pmin, pmax;
img.MinMax(out min, out max, out pmin, out pmax);
done = (max[0] == 0);
} while (!done);
return skel;
}
Input image:
I need help implementing the skeletonization code.
I researched the sites below but did not have success:
http://felix.abecassis.me/2011/09/opencv-morphological-skeleton/
https://stackoverflow.com/questions/26850944/skeleton-of-an-image-in-emgucv
I am grateful for any help.
Richard J. Algarve
Try the code below.
It works for me (my input image has a white background). I think the problem with your code is the Sub and Or operators.
public static Bitmap Skelatanize(Bitmap image)
{
Image<Gray, byte> imgOld = new Image<Gray, byte>(image);
Image<Gray, byte> img2 = (new Image<Gray, byte>(imgOld.Width, imgOld.Height, new Gray(255))).Sub(imgOld);
Image<Gray, byte> eroded = new Image<Gray, byte>(img2.Size);
Image<Gray, byte> temp = new Image<Gray, byte>(img2.Size);
Image<Gray, byte> skel = new Image<Gray, byte>(img2.Size);
skel.SetValue(0);
CvInvoke.Threshold(img2, img2, 127, 256, 0);
var element = CvInvoke.GetStructuringElement(ElementShape.Cross, new Size(3, 3), new Point(-1, -1));
bool done = false;
while (!done)
{
CvInvoke.Erode(img2, eroded, element, new Point(-1, -1), 1, BorderType.Reflect, default(MCvScalar));
CvInvoke.Dilate(eroded, temp, element, new Point(-1, -1), 1, BorderType.Reflect, default(MCvScalar));
CvInvoke.Subtract(img2, temp, temp);
CvInvoke.BitwiseOr(skel, temp, skel);
eroded.CopyTo(img2);
if (CvInvoke.CountNonZero(img2) == 0) done = true;
}
return skel.Bitmap;
}
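For reference, the loop above is the standard morphological skeleton: repeatedly accumulate img − open(img), then erode, until the image is empty. The same algorithm as a minimal sketch in NumPy (boolean arrays, cross structuring element, no EmguCV; for illustrating the algorithm, not a drop-in replacement):

```python
import numpy as np

def _neighbours(img):
    """Center plus 4-neighbours of every pixel (zero-padded borders)."""
    p = np.pad(img, 1)
    return p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]

def erode(img):
    c, up, down, left, right = _neighbours(img)
    return c & up & down & left & right

def dilate(img):
    c, up, down, left, right = _neighbours(img)
    return c | up | down | left | right

def skeletonize(img):
    """Morphological skeleton: union of img - open(img) at each erosion level."""
    img = img.copy()
    skel = np.zeros_like(img)
    while img.any():
        eroded = erode(img)
        opened = dilate(eroded)   # open(img) = dilate(erode(img))
        skel |= img & ~opened     # img minus its opening
        img = eroded              # shrink and repeat until empty
    return skel

square = np.zeros((9, 9), dtype=bool)
square[2:7, 2:7] = True
skel = skeletonize(square)
```

This is why the EmguCV version must subtract the opening from the *current* image each pass (the Sub/Or operator issue mentioned above): mixing up the operands or operating on the wrong copy breaks the accumulation.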
I am trying to use the snake active contour from EmguCV, but I don't get anything out. Here is my code:
Image<Gray, Byte> img = new Image<Gray, Byte>(300, 300, new Gray());
Point center = new Point(100, 100);
double width = 20;
double height = 40;
Rectangle rect = new Rectangle(center, new Size(20, 20));
img.Draw(rect, new Gray(255.0), -1);
using (MemStorage stor = new MemStorage())
{
Seq<Point> pts = new Seq<Point>((int)SEQ_TYPE.CV_SEQ_POLYGON, stor);
pts.Push(new Point(20, 20));
pts.Push(new Point(20, 280));
pts.Push(new Point(280, 280));
pts.Push(new Point(280, 20));
//Image<Gray, Byte> canny = img.Canny(100.0, 40.0);
Seq<Point> snake = img.Snake(pts, 0.1f, 0.5f, 0.4f, new Size(21, 21), new MCvTermCriteria(500, 0.1), stor);
img.Draw(pts, new Gray(120), 1);
img.Draw(snake, new Gray(80), 2);
}
What am I doing wrong? Any ideas?
You missed drawing your initialization points.
I have set up some sample code for you and for the whole community, as there are no EmguCV snake samples out there.
private void TestSnake()
{
Image<Gray, Byte> grayImg = new Image<Gray, Byte>(400, 400, new Gray());
Image<Bgr, Byte> img = new Image<Bgr, Byte>(400, 400, new Bgr(255,255,255));
// draw an outer circle on gray image
grayImg.Draw(new Ellipse(new PointF(200,200),new SizeF(100,100),0), new Gray(255.0), -1);
// inner circle on gray image to create a donut shape :-)
grayImg.Draw(new Ellipse(new PointF(200, 200), new SizeF(50, 50), 0), new Gray(0), -1);
// this is the center point we'll use to initialize our contour points
Point center = new Point(200, 200);
// radius of polar points
double radius = 70;
using (MemStorage stor = new MemStorage())
{
Seq<Point> pts = new Seq<Point>((int)Emgu.CV.CvEnum.SEQ_TYPE.CV_SEQ_POLYGON, stor);
int numPoint = 100;
for (int i = 0; i < numPoint; i++)
{ // let's have some fun with polar coordinates
Point pt = new Point((int)(center.X + (radius * Math.Cos(2 * Math.PI * i / numPoint))), (int)(center.Y + (radius * Math.Sin(2 * Math.PI * i / numPoint))) );
pts.Push(pt);
}
// draw contour points on result image
img.Draw(pts, new Bgr(Color.Green), 2);
// compute snakes
Seq<Point> snake = grayImg.Snake(pts, 1.0f, 1.0f, 1.0f, new Size(21, 21), new MCvTermCriteria(100, 0.0002), stor);
// draw snake result
img.Draw(snake, new Bgr(Color.Yellow), 2);
// use for display in a winform sample
imageBox1.Image = grayImg;
imageBox2.Image = img;
}
}
Hope this helps; just change some parameters and you will be surprised by the result.
I'm trying to visualize the FFT of an image with EMGU. Here's the image I'm processing:
Here's the expected result:
Here's what I get:
Here's my code:
Image<Gray, float> image = new Image<Gray, float>(@"C:\Users\me\Desktop\sample3.jpg");
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage);
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
Matrix<float> outReal = new Matrix<float>(image.Size);
Matrix<float> outIm = new Matrix<float>(image.Size);
CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
Image<Gray, float> fftImage = new Image<Gray, float>(outReal.Size);
CvInvoke.cvCopy(outReal, fftImage, IntPtr.Zero);
pictureBox1.Image = image.ToBitmap();
pictureBox2.Image = fftImage.Log().ToBitmap();
What mistake am I making here?
Update: per Roger Rowland's suggestion, here's my updated code. The result looks better, but I'm not 100% sure it's correct. Here's the result:
Image<Gray, float> image = new Image<Gray, float>(@"C:\Users\yytov\Desktop\sample3.jpg");
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage); // Initialize all elements to Zero
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
//The Real part of the Fourier Transform
Matrix<float> outReal = new Matrix<float>(image.Size);
//The imaginary part of the Fourier Transform
Matrix<float> outIm = new Matrix<float>(image.Size);
CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
CvInvoke.cvPow(outReal, outReal, 2.0);
CvInvoke.cvPow(outIm, outIm, 2.0);
CvInvoke.cvAdd(outReal, outIm, outReal, IntPtr.Zero);
CvInvoke.cvPow(outReal, outReal, 0.5);
CvInvoke.cvAddS(outReal, new MCvScalar(1.0), outReal, IntPtr.Zero); // 1 + Mag
CvInvoke.cvLog(outReal, outReal); // log(1 + Mag)
// Swap quadrants
int cx = outReal.Cols / 2;
int cy = outReal.Rows / 2;
Matrix<float> q0 = outReal.GetSubRect(new Rectangle(0, 0, cx, cy));
Matrix<float> q1 = outReal.GetSubRect(new Rectangle(cx, 0, cx, cy));
Matrix<float> q2 = outReal.GetSubRect(new Rectangle(0, cy, cx, cy));
Matrix<float> q3 = outReal.GetSubRect(new Rectangle(cx, cy, cx, cy));
Matrix<float> tmp = new Matrix<float>(q0.Size);
q0.CopyTo(tmp);
q3.CopyTo(q0);
tmp.CopyTo(q3);
q1.CopyTo(tmp);
q2.CopyTo(q1);
tmp.CopyTo(q2);
CvInvoke.cvNormalize(outReal, outReal, 0.0, 255.0, Emgu.CV.CvEnum.NORM_TYPE.CV_MINMAX, IntPtr.Zero);
Image<Gray, float> fftImage = new Image<Gray, float>(outReal.Size);
CvInvoke.cvCopy(outReal, fftImage, IntPtr.Zero);
pictureBox1.Image = image.ToBitmap();
pictureBox2.Image = fftImage.ToBitmap();
I cannot comment on the magnitude/intensity of the resulting image, but I can give you a tip about the spatial distribution of points in your image.
OpenCV doesn't rearrange the quadrants to put the origin [0,0] at the center of the image; you have to rearrange the quadrants manually.
Look at step 6 on the following page:
http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
It's the official OpenCV documentation, so it's in C++, but the principle holds.
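The four GetSubRect/CopyTo swaps in the updated code above are exactly what NumPy calls `fftshift`: moving the zero-frequency bin from the corner to the center. A quick NumPy sketch of the equivalence (diagonal quadrant swap, valid for even-sized arrays), for illustration only:

```python
import numpy as np

def swap_quadrants(m):
    """Manual quadrant swap mirroring the GetSubRect/CopyTo code: q0<->q3, q1<->q2."""
    cy, cx = m.shape[0] // 2, m.shape[1] // 2
    out = np.empty_like(m)
    out[:cy, :cx] = m[cy:, cx:]  # bottom-right -> top-left
    out[cy:, cx:] = m[:cy, :cx]  # top-left -> bottom-right
    out[:cy, cx:] = m[cy:, :cx]  # bottom-left -> top-right
    out[cy:, :cx] = m[:cy, cx:]  # top-right -> bottom-left
    return out

# The swap centers the spectrum, same as np.fft.fftshift for even dimensions.
spectrum = np.abs(np.fft.fft2(np.random.rand(8, 8)))
centered = swap_quadrants(spectrum)
```

Applying the swap twice is the identity for even dimensions, which is a handy sanity check on the manual C# version too.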