After the InRange and GaussianBlur functions, I get the following image:
I find the edges and I get:
From this image I need to extract three lines, which are shown in the following image (the lines that are not red are bulging nodes):
How can I do that?
You should use an erosion operation with a suitable structuring element.
I suggest one whose length matches the longest horizontal line you want to remove.
I've made a little example:
Let's take this toErode.png
double longestHorLine = 201;
Image<Gray, byte> toErode = new Image<Gray, byte>(path + "toErode.png");
for (int i = 1; i < 4; i++)
{
    // Shrink the element width at each step while raising the iteration
    // count, so each pass erodes with a shorter line applied more times.
    Mat element = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size((int)(longestHorLine / i), 3), new Point(-1, -1));
    CvInvoke.Erode(toErode, toErode, element, new Point(-1, -1), i, BorderType.Default, new MCvScalar(0));
    toErode.Save(path + "res" + i + ".png");
}
This produces the following outputs: res1.png, res2.png and res3.png.
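Note that erosion also shortens the structures that survive it. If that matters, dilating with the same element afterwards turns the erode/dilate pair into a morphological opening, which restores their extent. A minimal sketch (my own addition, reusing the names from the loop above, for a single pass rather than the shrinking loop):

Mat element = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size((int)longestHorLine, 3), new Point(-1, -1));
// Erode to keep only pixels covered by a long horizontal run of foreground...
CvInvoke.Erode(toErode, toErode, element, new Point(-1, -1), 1, BorderType.Default, new MCvScalar(0));
// ...then dilate with the same element to restore the surviving lines.
CvInvoke.Dilate(toErode, toErode, element, new Point(-1, -1), 1, BorderType.Default, new MCvScalar(0));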
How to complete a square using 4 lines?
Hi, I work with EmguCV and I am supposed to identify 4 lines from which to complete an exact square. I tried a least-squares fit, but it is not exact and its results are difficult to work with. Does anyone have an idea?
Thanks.
Image<Bgr, Byte> Clean_Image = new Image<Bgr, Byte>(Original).CopyBlank();
Method.ERectangle ERRectangle = null; // <-- my Rectangle object
if (Pointlist.Count > 0)
{
    ERRectangle = new Method.ERectangle(Emgu.CV.CvInvoke.MinAreaRect(Pointlist.ToArray()));
    Size s = new Size(
        (int)Method.LineSize(ERRectangle.MainRectangle.Points[0], ERRectangle.MainRectangle.Points[2]),
        (int)Method.LineSize(ERRectangle.MainRectangle.Points[1], ERRectangle.MainRectangle.Points[3]));
    Method.ERectangle rRect = new Method.ERectangle(new RotatedRect(ERRectangle.Center, s, (float)ERRectangle.ContourAngle - 45f));
    // I tried to find a square and align it with the lines; a possible idea, but not accurate.
    Emgu.CV.CvInvoke.Polylines(Clean_Image, rRect.MainRectangle.Points.ToArray(), true, new MCvScalar(255, 0, 0), 7);
}
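For what it's worth, one way to get an exact square out of MinAreaRect is to replace the fitted rectangle's size with the average of its two sides before drawing. A minimal sketch, assuming the standard Emgu CV RotatedRect API and a hypothetical points array (PointF[]) holding the endpoints of the four detected lines:

// Sketch: fit a rotated rectangle to the line endpoints, then force it into
// an exact square by averaging its two sides ("points" is an assumed
// PointF[] with the endpoints of the four detected lines).
RotatedRect fitted = CvInvoke.MinAreaRect(points);
float side = (fitted.Size.Width + fitted.Size.Height) / 2f;
RotatedRect square = new RotatedRect(fitted.Center, new SizeF(side, side), fitted.Angle);
Point[] corners = Array.ConvertAll(square.GetVertices(), System.Drawing.Point.Round);
CvInvoke.Polylines(Clean_Image, corners, true, new MCvScalar(255, 0, 0), 7);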
I'm new to EmguCV, OpenCV and machine vision in general. I translated the code from this Stack Overflow question from C++ to C#, and I copied their sample image to help myself understand whether the code is working as expected.
Mat map = CvInvoke.Imread("C:/Users/Cindy/Desktop/coffee_mug.png", Emgu.CV.CvEnum.LoadImageType.AnyColor | Emgu.CV.CvEnum.LoadImageType.AnyDepth);
CvInvoke.Imshow("window", map);
Image<Gray, Byte> imageGray = map.ToImage<Gray, Byte>();
double min = 0, max = 0;
int[] minIndex = new int[5], maxIndex = new int[5];
CvInvoke.MinMaxIdx(imageGray, out min, out max, minIndex, maxIndex, null);
// Shift the range down to zero, then stretch it to the full 0-255 range.
imageGray -= min;
Mat adjMap = new Mat();
CvInvoke.ConvertScaleAbs(imageGray, adjMap, 255 / (max - min), 0);
CvInvoke.Imshow("Out", adjMap);
Original Image:
After Processing:
This doesn't look like a depth map to me; it just looks like a slightly modified grayscale image, so I'm curious where I went wrong in my code. Unlike the code I linked above, MinMaxIdx() doesn't work without converting the image to grayscale first. Ultimately, I'd like to be able to generate relative depth maps from a single web camera.
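Worth noting: the MinMaxIdx + ConvertScaleAbs pair above amounts to a linear contrast stretch, which is why the output looks like a brightened grayscale image rather than a depth map; the operation only rescales intensities. Assuming a reasonably recent Emgu CV, the same stretch is available in one call:

// Equivalent contrast stretch in a single call: map [min, max] to [0, 255].
Mat stretched = new Mat();
CvInvoke.Normalize(imageGray, stretched, 0, 255, NormType.MinMax, DepthType.Cv8U);
CvInvoke.Imshow("Out", stretched);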
I want code to match two pictures on the basis of SIFT keypoints.
I have the following code for SIFT:
public static Image<Bgr, Byte> siftFunction(Bitmap sourceBitmap)
{
    Image<Gray, Byte> modelImage = new Image<Gray, byte>(sourceBitmap);
    SIFTDetector siftCPU = new SIFTDetector();
    VectorOfKeyPoint modelKeyPoints = new VectorOfKeyPoint();
    MKeyPoint[] mKeyPoints = siftCPU.DetectKeyPoints(modelImage, null);
    modelKeyPoints.Push(mKeyPoints);
    ImageFeature<float>[] descriptors = siftCPU.ComputeDescriptors(modelImage, null, mKeyPoints);
    Image<Bgr, Byte> result = Features2DToolbox.DrawKeypoints(modelImage, modelKeyPoints, new Bgr(Color.Red), Features2DToolbox.KeypointDrawType.DEFAULT);
    return result;
}
One solution is to use the provided object detection example and then compare the detected area: if the whole observed image corresponds to the model image, your images match.
Another solution is to not use the descriptors at all and just select the key points. Then compare the key point arrays of the two pictures and, if they are equal, consider the images to match.
The first solution is somewhat more reliable, while the second is faster and easier.
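For reference, here is a minimal sketch of a descriptor-matching pipeline in the spirit of the first solution. It assumes a newer Emgu CV (3.x), where SIFT lives in Emgu.CV.XFeatures2D and replaces the SIFTDetector used above, so it is not a drop-in addition to that code:

// Sketch only: match SIFT descriptors of two images with a brute-force
// matcher and keep the unique matches (assumes Emgu CV 3.x APIs and that
// modelImage and observedImage are already loaded as grayscale images).
using (SIFT sift = new SIFT())
using (VectorOfKeyPoint modelKp = new VectorOfKeyPoint())
using (VectorOfKeyPoint observedKp = new VectorOfKeyPoint())
using (Mat modelDesc = new Mat())
using (Mat observedDesc = new Mat())
using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
using (BFMatcher matcher = new BFMatcher(DistanceType.L2))
{
    sift.DetectAndCompute(modelImage, null, modelKp, modelDesc, false);
    sift.DetectAndCompute(observedImage, null, observedKp, observedDesc, false);

    matcher.Add(modelDesc);
    matcher.KnnMatch(observedDesc, matches, 2, null); // 2 nearest neighbours each

    // Keep only matches whose best distance is clearly below the second best.
    Mat mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
    mask.SetTo(new MCvScalar(255));
    Features2DToolbox.VoteForUniqueness(matches, 0.8, mask);
    int goodMatches = CvInvoke.CountNonZero(mask); // crude match score
}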
Hi, I am using the HoughLines method to detect lines from a camera. I've filtered my image "imgProcessed" using an ROI, meaning I keep just the black objects to make the tracking simple. But when I call the HoughLines method, it gives me an error that my "cannyEdges" has some invalid arguments. Here's my code:
Image<Gray, Byte> gray = imgProcessed.Convert<Gray, Byte>().PyrDown().PyrUp();
Gray cannyThreshold = new Gray(180);
Gray cannyThresholdLinking = new Gray(120);
Gray circleAccumulatorThreshold = new Gray(120);
Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking);
LineSegment2D[] lines = cannyEdges.HoughLines(
    cannyThreshold,
    cannyThresholdLinking,
    1,              // distance resolution in pixel-related units
    Math.PI / 45.0, // angle resolution measured in radians
    50,             // threshold
    100,            // min line width
    1               // gap between lines
    )[0];           // get the lines from the first channel
I've edited my code and it works. I divided it into two parts: in the first I detect the Canny edges myself rather than letting the HoughLines method do it automatically, and in the second I use the HoughLinesBinary method, which takes fewer arguments than HoughLines. Here is the code:
Image<Gray, Byte> gray1 = imgProcessed.Convert<Gray, Byte>().PyrDown().PyrUp();
Image<Gray, Byte> cannyGray = gray1.Canny(120, 180);
imgProcessed = cannyGray;
LineSegment2D[] lines = imgProcessed.HoughLinesBinary(
    1,              // distance resolution in pixel-related units
    Math.PI / 45.0, // angle resolution measured in radians
    50,             // threshold
    100,            // min line width
    1               // gap between lines
    )[0];           // get the lines from the first channel
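To verify the result, the detected segments can be drawn back onto the image; a small sketch using the standard Image.Draw overload for LineSegment2D:

// Draw each detected segment onto the edge image to check the result.
foreach (LineSegment2D line in lines)
    imgProcessed.Draw(line, new Gray(255), 2);
CvInvoke.Imshow("lines", imgProcessed);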
I need to detect a spiral-shaped spring and count its coil turns.
I have tried as follows:
Image<Bgr, Byte> ProcessImage(Image<Bgr, Byte> img)
{
    Image<Bgr, Byte> imgClone = img.Clone();
    Bgr bgrRed = new Bgr(System.Drawing.Color.Red);
    #region Algorithm 1
    // Note: PyrUp/PyrDown return new images, so the results must be
    // assigned back; calling them without assignment does nothing.
    for (int i = 0; i < 3; i++)
        imgClone = imgClone.PyrUp().PyrDown();
    imgClone._EqualizeHist();
    imgClone._Dilate(20);
    imgClone._EqualizeHist();
    imgClone._Erode(10);
    for (int i = 0; i < 3; i++)
        imgClone = imgClone.PyrUp().PyrDown();
    imgClone._EqualizeHist();
    imgClone._Dilate(20);
    imgClone._EqualizeHist();
    imgClone._Erode(10);
    Image<Gray, Byte> imgCloneGray = new Image<Gray, byte>(imgClone.Width, imgClone.Height);
    Image<Bgr, Byte> imgCloneYcc = new Image<Bgr, byte>(imgClone.Width, imgClone.Height);
    CvInvoke.cvCvtColor(imgClone, imgCloneGray, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_BGR2GRAY);
    imgCloneGray = imgCloneGray.Canny(c_thresh, c_threshLink);
    Contour<System.Drawing.Point> pts = imgCloneGray.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
    CvInvoke.cvCvtColor(imgCloneGray, imgCloneYcc, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_GRAY2BGR);
    if (null != pts)
    {
        imgClone.Draw(pts, bgrRed, 2);
        imgClone.Draw(pts.BoundingRectangle, bgrRed, 2);
    }
    #endregion
    return imgClone;
}
I am somehow able to get the spring, but how do I get the count? I am looking for algorithms; I am currently not looking for speed optimization.
This is similar to counting fingers. The spring's spiral is too thin to capture with a contour. What else can be done? http://www.luna-arts.de/others/misc/HandsNew.zip
You have a good final binarization there, but it looks too restricted to this single case. I would do a relatively simple, but probably more robust, preprocessing step to allow a reasonably good binarization. From mathematical morphology, there is a transform called h-dome, which removes irrelevant minima/maxima by suppressing minima/maxima of height < h. This operation might not be readily available in your image processing library, but it is not hard to implement (a sketch follows below). To binarize this preprocessed image I opted for Otsu's method, since it is automatic and statistically optimal.
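Here is a minimal sketch of the underlying operation, grayscale reconstruction by dilation, in EmguCV terms, since that is the asker's library. Assumptions: Emgu CV 3.x API, a single-channel 8-bit input, and HMaxima as a hypothetical helper name:

// Sketch: suppress maxima of height < h (the h-maxima transform) via
// iterated geodesic dilation. "image" is a single-channel 8-bit Mat.
static Mat HMaxima(Mat image, double h)
{
    // Marker = image - h (saturating); the original image is the mask.
    Mat marker = new Mat();
    CvInvoke.Subtract(image, new ScalarArray(new MCvScalar(h)), marker);

    Mat kernel = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1));
    Mat prev = new Mat();
    Mat diff = new Mat();
    do
    {
        marker.CopyTo(prev);
        // Geodesic dilation: dilate the marker, then clamp it under the mask.
        CvInvoke.Dilate(marker, marker, kernel, new Point(-1, -1), 1, BorderType.Constant, new MCvScalar(0));
        CvInvoke.Min(marker, image, marker);
        CvInvoke.AbsDiff(marker, prev, diff);
    } while (CvInvoke.CountNonZero(diff) > 0); // iterate until stable

    // The reconstruction is the h-maxima result; image - marker is the h-dome.
    return marker;
}

The h-minima analogue is symmetric: add h instead of subtracting, erode instead of dilating, and clamp with Max instead of Min.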
Here is the input image after h-dome transformations, and the binary image:
Now, to count the number of "spiral turns" I did something very simple: I split the spirals so I can count them as connected components. To split them I did a single morphological opening with a vertical line, followed by a single dilation by an elementary square. This produces the following image:
Counting the components gives 15. The 13 turns that are not too close together were each counted correctly; the tight groups at the left and at the right were each counted as a single component.
The full Matlab code used to do these steps:
f = rgb2gray(imread('http://i.stack.imgur.com/i7x7L.jpg'));
% For this image, the next two lines are optional, as they lead to
% basically the same binary image.
f1 = imhmax(f, 30);
f2 = imhmin(f1, 30);
bin1 = ~im2bw(f2, graythresh(f2));
bin2 = bwmorph(imopen(bin1, strel('line', 15, 90)), 'dilate');
% Count the resulting connected components (gives 15 on this image).
[~, num] = bwlabel(bin2);
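If the same counting step is needed back in EmguCV, a rough equivalent sketch, assuming Emgu CV 3.x (where CvInvoke.ConnectedComponents wraps cv::connectedComponents) and a binary Mat bin corresponding to bin2 above:

// Count connected components in the binary image; the background is
// label 0, hence the -1.
Mat labels = new Mat();
int nLabels = CvInvoke.ConnectedComponents(bin, labels);
int turns = nLabels - 1;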