I want to match two pictures on the basis of SIFT keypoints.
I have the following code for SIFT:
public static Image<Bgr, Byte> siftFunction(Bitmap sourceBitmap)
{
Image<Gray, Byte> modelImage = new Image<Gray, byte>(sourceBitmap);
SIFTDetector siftCPU = new SIFTDetector();
VectorOfKeyPoint modelKeyPoints = new VectorOfKeyPoint();
MKeyPoint[] mKeyPoints = siftCPU.DetectKeyPoints(modelImage, null);
modelKeyPoints.Push(mKeyPoints);
ImageFeature<float>[] modelDescriptors = siftCPU.ComputeDescriptors(modelImage, null, mKeyPoints);
Image<Bgr, Byte> result = Features2DToolbox.DrawKeypoints(modelImage, modelKeyPoints, new Bgr(Color.Red), Features2DToolbox.KeypointDrawType.DEFAULT);
return result;
}
One solution is to use the provided object-detection example and then compare the detected area: if the whole observed image corresponds to the model image, your images match.
Another solution is to skip the descriptors entirely and only use the keypoints: compare the keypoint arrays of the two pictures and, if they are equal, consider the images to match.
The first solution is somewhat more reliable, while the second is faster and easier.
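For the first approach, here is a rough sketch of the descriptor matching, loosely based on EmguCV 2.x's feature-matching sample; the 0.8 uniqueness ratio and the final match-count threshold are just example values that you would need to tune for your images.
// Sketch only: decide whether observedImage matches modelImage by counting
// SIFT descriptor matches that survive a uniqueness (ratio) test.
public static bool ImagesMatch(Image<Gray, Byte> modelImage, Image<Gray, Byte> observedImage)
{
    int k = 2;                        // nearest neighbours per descriptor
    double uniquenessThreshold = 0.8; // example ratio-test threshold

    SIFTDetector sift = new SIFTDetector();

    VectorOfKeyPoint modelKeyPoints = sift.DetectKeyPointsRaw(modelImage, null);
    Matrix<float> modelDescriptors = sift.ComputeDescriptorsRaw(modelImage, null, modelKeyPoints);

    VectorOfKeyPoint observedKeyPoints = sift.DetectKeyPointsRaw(observedImage, null);
    Matrix<float> observedDescriptors = sift.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);

    // Match each observed descriptor against the model descriptors.
    BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
    matcher.Add(modelDescriptors);

    Matrix<int> indices = new Matrix<int>(observedDescriptors.Rows, k);
    Matrix<byte> mask;
    using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
    {
        matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
        mask = new Matrix<byte>(dist.Rows, 1);
        mask.SetValue(255);
        Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
    }

    // Crude decision: call it a match if enough keypoints survive the test.
    int goodMatches = CvInvoke.cvCountNonZero(mask);
    return goodMatches >= 10; // example threshold
}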
After the InRange and GaussianBlur function, I get the following image:
I find the edges and I get:
I need to extract three lines from this image; they are shown in the following image (the lines that are not red are bulging nodes):
How can I do that?
You should use an erosion operation with a cleverly chosen structuring element.
I suggest using one whose length matches the longest horizontal line you want to remove.
I've made a little example:
Let's take this toErode.png
double longestHorLine = 201; // length (in pixels) of the longest horizontal line to remove
Image<Gray, byte> toErode = new Image<Gray, byte>(path + "toErode.png");
for (int i = 1; i < 4; i++)
{
    // Rectangular structuring element: wide and 3 px tall, getting narrower on each pass
    Mat element = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size((int)(longestHorLine / i), 3), new Point(-1, -1));
    CvInvoke.Erode(toErode, toErode, element, new Point(-1, -1), i, BorderType.Default, new MCvScalar(0));
    toErode.Save(path + "res" + i + ".png");
}
This produces the following outputs: res1.png, res2.png and res3.png.
I would like to get the histogram of an image using Emgu.
I have a grayscale double image:
Image<Gray, double> Crop;
I can get a histogram using
Image<Gray, byte> CropByte = Crop.Convert<Gray, byte>();
DenseHistogram hist = new DenseHistogram(BinCount, new RangeF(0.0f, 255.0f));
hist.Calculate<Byte>(new Image<Gray, byte>[] { CropByte }, true, null);
The problem is that this approach requires converting to a byte image, which skews my results: the histogram is slightly different from the one I would get if I could use the double image directly.
I have tried using CvInvoke to call the underlying OpenCV function for computing a histogram:
IntPtr[] x = { Crop };
DenseHistogram cropHist = new DenseHistogram(BinCount, new RangeF(MinCrop, MaxCrop));
CvInvoke.cvCalcArrHist(x, cropHist, false, IntPtr.Zero);
The trouble is that I'm finding it hard to work out how to use this function correctly.
Does Emgu/OpenCV allow me to do this, or do I need to write the function myself?
This is not an EmguCV/OpenCV limitation; the idea itself does not really work, because a histogram over the full double range would need far more memory than is available. That is true for a histogram allocated with a fixed size; the only way around it would be a histogram that grows dynamically as the image is processed, but that would be dangerous with big images, since it could end up using as much memory as the image itself.
I guess that your double image contains many identical values, otherwise a histogram would not be very useful. So one way around this is to remap your values to a short (16 bits) instead of a byte (8 bits); that way your histogram will be much closer to what you would expect from the double values.
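A minimal sketch of that remapping, assuming your double image is the Crop from the question (the scaling is illustrative; depending on the OpenCV build you may still have to convert the rescaled image to float before computing the histogram, as the next answer does):
// Find the actual value range of the double image.
double[] minVals, maxVals;
System.Drawing.Point[] minLocs, maxLocs;
Crop.MinMax(out minVals, out maxVals, out minLocs, out maxLocs);
double min = minVals[0], max = maxVals[0];

// Linearly remap [min, max] onto the 16-bit range and convert to ushort.
// This keeps far more resolution than squeezing the values into 8 bits.
Image<Gray, ushort> crop16 =
    ((Crop - min) * (65535.0 / (max - min))).Convert<Gray, ushort>();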
I had a look at histogram.cpp in the OpenCV source code.
Inside the function
void cv::calcHist( const Mat* images, int nimages, const int* channels,
InputArray _mask, OutputArray _hist, int dims, const int* histSize,
const float** ranges, bool uniform, bool accumulate )
there is a section that handles the different image depths:
if( depth == CV_8U )
calcHist_8u(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else if( depth == CV_16U )
calcHist_<ushort>(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else if( depth == CV_32F )
calcHist_<float>(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else
CV_Error(CV_StsUnsupportedFormat, "");
While double images are not handled here yet, float is. Floating point loses a little precision compared to double, but that is not a significant problem here.
The following code snippet worked well for me:
Image<Gray, float> cropFloat = Crop.Convert<Gray, float>();
DenseHistogram hist = new DenseHistogram(BinCount, new RangeF(0.0f, 255.0f));
hist.Calculate<float>(new Image<Gray, float>[] { cropFloat }, true, null);
I have an image and I want to find the most dominant lowest-hue colour and the most dominant highest-hue colour in it.
It is possible that several colours/hues are close enough to each other in population to be dominant; in that case I would need to take an average of the most popular ones.
I am using emgu framework here.
I load an image into the HSV colour space.
I split the hue channel away from the main image.
I then use the DenseHistogram to return my ranges of 'buckets'.
Now, I could enumerate through the bin collection to get what I want, but I am mindful of conserving memory wherever I can.
So, is there a way of getting what I need directly from the DenseHistogram object?
I have tried MinMax (as shown below) and I have considered using LINQ, but I am not sure whether that is expensive and/or how to use it with just a float array.
This is my code so far:
float[] GrayHist;
Image<Hsv, Byte> hsvSample = new Image<Hsv, byte>("An image file somewhere");
DenseHistogram Histo = new DenseHistogram(255, new RangeF(0, 255));
Histo.Calculate(new Image<Gray, Byte>[] { hsvSample[0] }, true, null);
GrayHist = new float[256];
Histo.MatND.ManagedArray.CopyTo(GrayHist, 0);
float mins;
float maxs;
int[] minVals;
int[] maxVals;
Histo.MinMax(out mins, out maxs, out minVals, out maxVals); // only gets the lowest and highest bin values, not the most popular
List<float> ranges = GrayHist.ToList().OrderBy( // not sure what to put here..
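Something like the following LINQ sketch is roughly what I have in mind, where GrayHist is the float array filled above and the 10% "closeness" cutoff is just a made-up example value:
// Pair each bin index (hue value) with its count, then sort by count descending.
var binsByCount = GrayHist
    .Select((count, bin) => new { Bin = bin, Count = count })
    .OrderByDescending(b => b.Count)
    .ToList();

// Bins whose counts are within, say, 10% of the top count are treated as dominant.
float cutoff = binsByCount[0].Count * 0.9f; // hypothetical closeness threshold
var dominantBins = binsByCount.TakeWhile(b => b.Count >= cutoff).ToList();

int lowestDominantHue = dominantBins.Min(b => b.Bin);
int highestDominantHue = dominantBins.Max(b => b.Bin);
double averageDominantHue = dominantBins.Average(b => b.Bin);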
I need to detect a spiral-shaped spring and count its coil turns.
I have tried the following:
Image<Bgr, Byte> ProcessImage(Image<Bgr, Byte> img)
{
Image<Bgr, Byte> imgClone = img.Clone();
Bgr bgrRed = new Bgr(System.Drawing.Color.Red);
#region Algorithm 1
// PyrUp/PyrDown return new images rather than working in place, so assign the results back
imgClone = imgClone.PyrUp().PyrDown();
imgClone = imgClone.PyrUp().PyrDown();
imgClone = imgClone.PyrUp().PyrDown();
imgClone._EqualizeHist();
imgClone._Dilate(20);
imgClone._EqualizeHist();
imgClone._Erode(10);
// again, assign the pyramid results back
imgClone = imgClone.PyrUp().PyrDown();
imgClone = imgClone.PyrUp().PyrDown();
imgClone = imgClone.PyrUp().PyrDown();
imgClone._EqualizeHist();
imgClone._Dilate(20);
imgClone._EqualizeHist();
imgClone._Erode(10);
Image<Gray, Byte> imgCloneGray = new Image<Gray, byte>(imgClone.Width, imgClone.Height);
CvInvoke.cvCvtColor(imgClone, imgCloneGray, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_BGR2GRAY);
imgCloneGray = imgCloneGray.Canny(c_thresh, c_threshLink);//, (int)c_threshSize);
Contour<System.Drawing.Point> pts = imgCloneGray.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
Image<Bgr, Byte> imgCloneYcc = new Image<Bgr, byte>(imgClone.Width, imgClone.Height);
CvInvoke.cvCvtColor(imgCloneGray, imgCloneYcc, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_GRAY2BGR);
if (null != pts)
{
imgClone.Draw(pts, bgrRed, 2);
imgClone.Draw(pts.BoundingRectangle, bgrRed, 2);
}
#endregion
return imgClone;
}
I am somehow able to get the spring, but how do I get the count? I am looking for algorithms; I am currently not looking for speed optimization.
This is similar to counting fingers. The spring's spiral is too thin to pick up with contours. What else can be done? http://www.luna-arts.de/others/misc/HandsNew.zip
You have a good final binarization there, but it looks too restricted to this single case. I would do a relatively simple, but probably more robust, preprocessing step to allow a reasonably good binarization.
From mathematical morphology there is a transform called h-dome, which removes irrelevant minima/maxima by suppressing those of height less than h. This operation might not be readily available in your image processing library, but it is not hard to implement. To binarize the preprocessed image I opted for Otsu's method, since it is automatic and statistically optimal.
Here is the input image after h-dome transformations, and the binary image:
Now, to count the number of "spiral turns" I did something very simple: I split the spiral so the turns can be counted as connected components. To split them I did a single morphological opening with a vertical line, followed by a single dilation by an elementary square. This produces the following image:
Counting the components gives 15. The 13 turns that are not too close together were all counted correctly; the crowded groups at the left and at the right were each counted as a single component.
The full Matlab code used to do these steps:
f = rgb2gray(imread('http://i.stack.imgur.com/i7x7L.jpg'));
% For this image, the next two lines are optional, as they lead to
% basically the same binary image.
f1 = imhmax(f, 30);
f2 = imhmin(f1, 30);
bin1 = ~im2bw(f2, graythresh(f2));
bin2 = bwmorph(imopen(bin1, strel('line', 15, 90)), 'dilate');
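% Sketch of the counting step described above, using bwconncomp from the
% Image Processing Toolbox; for this image it reports 15 components.
cc = bwconncomp(bin2);
nTurns = cc.NumObjects;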
I'm trying to subtract one image from another, somewhat like this:
Image<Gray, float> result, secondImage;
Image<Gray, byte> firstImage;
result = firstImage - secondImage;
But it gives an error:
Operator '-' cannot be applied to operands of type 'Emgu.CV.Image<Emgu.CV.Structure.Gray,byte>' and 'Emgu.CV.Image<Emgu.CV.Structure.Gray,float>'
Maybe I need to convert firstImage into an Image<Gray, float>, but I don't know how to do that.
To quote from the documentation:
Color and Depth Conversion
Converting an Image between different colors and depths is simple. For example, if you have an Image img1 and you want to convert it to a grayscale image of Single, all you need to do is
Image<Gray, Single> img2 = img1.Convert<Gray, Single>();
So, in your case, you could use
result = firstImage.Convert<Gray, float>() - secondImage;