I am new to EmguCV and OpenCV. I want to detect the text regions in an image using EmguCV.
There are already some solutions posted on Stack Overflow that use OpenCV, such as:
Extracting text OpenCV
But I am unable to convert that OpenCV code to EmguCV.
Here is a direct conversion of the accepted answer in the link you provided into C# with EmguCV. You might have to make some alterations since it's a slightly different implementation, but it should get you started. I also doubt it is very robust, so depending on your specific use it might not be suitable. Best of luck.
public List<Rectangle> detectLetters(Image<Bgr, Byte> img)
{
    List<Rectangle> rects = new List<Rectangle>();
    Image<Gray, Single> img_sobel;
    Image<Gray, Byte> img_gray, img_threshold;

    img_gray = img.Convert<Gray, Byte>();
    img_sobel = img_gray.Sobel(1, 0, 3);
    img_threshold = new Image<Gray, byte>(img_sobel.Size);
    CvInvoke.cvThreshold(img_sobel.Convert<Gray, Byte>(), img_threshold, 0, 255, Emgu.CV.CvEnum.THRESH.CV_THRESH_OTSU);

    StructuringElementEx element = new StructuringElementEx(3, 17, 1, 6, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_RECT);
    CvInvoke.cvMorphologyEx(img_threshold, img_threshold, IntPtr.Zero, element, Emgu.CV.CvEnum.CV_MORPH_OP.CV_MOP_CLOSE, 1);

    for (Contour<System.Drawing.Point> contours = img_threshold.FindContours(); contours != null; contours = contours.HNext)
    {
        if (contours.Area > 100)
        {
            Contour<System.Drawing.Point> contours_poly = contours.ApproxPoly(3);
            rects.Add(contours_poly.BoundingRectangle);
        }
    }
    return rects;
}
Usage:
Image<Bgr, Byte> img = new Image<Bgr, Byte>("VfDfJ.png");
List<Rectangle> rects = detectLetters(img);
for (int i = 0; i < rects.Count; i++)
    img.Draw(rects[i], new Bgr(0, 255, 0), 3);
CvInvoke.cvShowImage("Display", img.Ptr);
CvInvoke.cvWaitKey(0);
CvInvoke.cvDestroyWindow("Display");
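For reference, the snippet above uses the older Emgu CV 2.x API (cvThreshold, StructuringElementEx, Contour<Point>), which was removed in later releases. Below is a rough sketch of the same pipeline against the newer Mat/CvInvoke API, assuming Emgu CV 3.x or later; the method name, kernel size and area threshold are only illustrative and may need tuning for your images.
using System;
using System.Collections.Generic;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

public static List<Rectangle> DetectLetters(Mat img)
{
    var rects = new List<Rectangle>();
    using (var gray = new Mat())
    using (var sobel = new Mat())
    using (var threshold = new Mat())
    using (var contours = new VectorOfVectorOfPoint())
    {
        // Gradient in x highlights the vertical strokes typical of text
        CvInvoke.CvtColor(img, gray, ColorConversion.Bgr2Gray);
        CvInvoke.Sobel(gray, sobel, DepthType.Cv8U, 1, 0, 3);
        CvInvoke.Threshold(sobel, threshold, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);

        // Wide rectangular kernel joins neighbouring characters into line-shaped blobs
        using (Mat element = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size(17, 3), new Point(-1, -1)))
        {
            CvInvoke.MorphologyEx(threshold, threshold, MorphOp.Close, element, new Point(-1, -1), 1, BorderType.Default, new MCvScalar());
        }

        CvInvoke.FindContours(threshold, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
        for (int i = 0; i < contours.Size; i++)
        {
            // Keep only reasonably large blobs, mirroring the Area > 100 filter above
            if (CvInvoke.ContourArea(contours[i]) > 100)
                rects.Add(CvInvoke.BoundingRectangle(contours[i]));
        }
    }
    return rects;
}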
I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote this code for contour detection:
public List<int> GetDiameters()
{
    string inputFile = @"path.jpg";
    Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);
    Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();

    Image<Gray, byte> bluredImage = grayImage;
    CvInvoke.MedianBlur(grayImage, bluredImage, 9);

    Image<Gray, byte> edgedImage = bluredImage;
    CvInvoke.Canny(bluredImage, edgedImage, 50, 5);

    Image<Gray, byte> closedImage = edgedImage;
    Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size { Height = 100, Width = 250 }, new System.Drawing.Point(-1, -1));
    CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 0, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());

    Image<Gray, byte> contoursImage = closedImage;
    Image<Bgr, byte> imageOut = imageInput;
    VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
            Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
        MCvScalar color = new MCvScalar(0, 0, 255);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour,
                    0.01 * CvInvoke.ArcLength(contour, true), true);
                var area = CvInvoke.ContourArea(contour);
                if (area > 0 && approxContour.Size > 10)
                {
                    rescontours1.Push(approxContour);
                }
                CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
            }
        }
    }
}
result so far:
I think there is a problem with the approximation. How can I get rid of the internal lines and close the external contour?
I might need some more information to exactly pinpoint your issue, but it might be something to do with your median blur. I would check whether you are blurring enough for EmguCV to run Canny edge detection cleanly. Another method you could use is dilation: try dilating your Canny edge output and see if you get better results.
EDIT
Here is the code below
public List<int> GetDiameters()
{
    // List to hold output diameters
    List<int> diametors = new List<int>();

    // File path to where the image is located
    string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";

    // Read in the image and store it as a Mat object
    Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);

    // Mat object that will hold the output of the Gaussian blur
    Mat gaussianBlur = new Mat();

    // Blur the image
    CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);

    // Mat object that will hold the output of the Canny
    Mat canny = new Mat();

    // Canny the image
    CvInvoke.Canny(gaussianBlur, canny, 40, 40);

    // Mat object that will hold the output of the dilate
    Mat dilate = new Mat();

    // Dilate the Canny image
    CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));

    // Vector that will hold all found contours
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();

    // Find the contours and draw them on the image
    CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);

    // Variables to hold relevant info on what is the biggest contour
    int biggest = 0;
    int index = 0;

    // Find the biggest contour (by number of points)
    for (int i = 0; i < contours.Size; i++)
    {
        if (contours[i].Size > biggest)
        {
            biggest = contours[i].Size;
            index = i;
        }
    }

    // Once all contours have been looped over, add the biggest contour's index to the list
    diametors.Add(index);

    // Return the list
    return diametors;
}
The first thing you do is blur the image.
Then you canny the image.
Then you dilate the image, so as to make the final output contours more uniform.
Then you just find contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle around with some of the settings and the code above to make the result a little cleaner.
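The method above returns the index of the biggest contour rather than a diameter. If an actual diameter in pixels is needed, one possible follow-up (a sketch only, reusing the contours vector built above and assuming the droplet is the largest contour; the EstimateDiameter helper is hypothetical) is to fit a minimum enclosing circle to the largest contour:
// Hypothetical helper: estimate a diameter (in pixels) from the largest contour.
static int EstimateDiameter(VectorOfVectorOfPoint contours)
{
    double maxArea = 0;
    int maxIndex = -1;
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > maxArea)
        {
            maxArea = area;
            maxIndex = i;
        }
    }
    if (maxIndex < 0)
        return 0;

    // The minimum enclosing circle's diameter approximates the droplet width.
    CircleF circle = CvInvoke.MinEnclosingCircle(contours[maxIndex]);
    return (int)Math.Round(circle.Radius * 2);
}
It could then be called as diametors.Add(EstimateDiameter(contours)); just before the return, instead of adding the contour's index.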
I have an image of a Windows control, let's say a text box, and I want to get the background color of the text written within the text box by finding the most frequently occurring color in that picture via pixel color comparison.
I searched on Google and found that everyone talks about histograms, and some code is given for computing the histogram of an image, but no one describes the procedure after the histogram has been computed.
The code I found on some sites looks like this:
// Create a grayscale image
Image<Gray, Byte> img = new Image<Gray, byte>(bmp);
// Fill image with random values
img.SetRandUniform(new MCvScalar(), new MCvScalar(255));
// Create and initialize histogram
DenseHistogram hist = new DenseHistogram(256, new RangeF(0.0f, 255.0f));
// Histogram Computing
hist.Calculate<Byte>(new Image<Gray, byte>[] { img }, true, null);
Currently I use code that takes a single row of pixels from the image and finds the most common color there, but that is not the right way to do it.
The code I currently use is as follows:
Image<Bgr, byte> image = new Image<Bgr, byte>(temp);
int height = temp.Height / 2;
Dictionary<Bgr, int> colors = new Dictionary<Bgr, int>();
for (int i = 0; i < image.Width; i++)
{
    Bgr pixel = image[height, i];
    if (colors.ContainsKey(pixel))
        colors[pixel] += 1;
    else
        colors.Add(pixel, 1);
}
Bgr result = colors.FirstOrDefault(x => x.Value == colors.Values.Max()).Key;
Please help me if anyone knows how to do this. Take this image as input:
Emgu.CV's DenseHistogram exposes the method MinMax() which finds the maximum and minimum bin of the histogram.
So after computing your histogram like in your first code snippet:
// Create a grayscale image
Image<Gray, Byte> img = new Image<Gray, byte>(bmp);
// Fill image with random values
img.SetRandUniform(new MCvScalar(), new MCvScalar(255));
// Create and initialize histogram
DenseHistogram hist = new DenseHistogram(256, new RangeF(0.0f, 255.0f));
// Histogram Computing
hist.Calculate<Byte>(new Image<Gray, byte>[] { img }, true, null);
...find the peak of the histogram with this method:
float minValue, maxValue;
Point[] minLocation;
Point[] maxLocation;
hist.MinMax(out minValue, out maxValue, out minLocation, out maxLocation);
// This is the value you are looking for (the bin representing the highest peak in your
// histogram is also the main color of your image).
var mainColor = maxLocation[0].Y;
I found a code snippet on Stack Overflow which does what I need.
The code goes like this:
int BlueHist;
int GreenHist;
int RedHist;
Image<Bgr, Byte> img = new Image<Bgr, byte>(bmp);
DenseHistogram Histo = new DenseHistogram(255, new RangeF(0, 255));
Image<Gray, Byte> img2Blue = img[0];
Image<Gray, Byte> img2Green = img[1];
Image<Gray, Byte> img2Red = img[2];
Histo.Calculate(new Image<Gray, Byte>[] { img2Blue }, true, null);
double[] minV, maxV;
Point[] minL, maxL;
Histo.MinMax(out minV, out maxV, out minL, out maxL);
BlueHist = maxL[0].Y;
Histo.Clear();
Histo.Calculate(new Image<Gray, Byte>[] { img2Green }, true, null);
Histo.MinMax(out minV, out maxV, out minL, out maxL);
GreenHist = maxL[0].Y;
Histo.Clear();
Histo.Calculate(new Image<Gray, Byte>[] { img2Red }, true, null);
Histo.MinMax(out minV, out maxV, out minL, out maxL);
RedHist = maxL[0].Y;
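Since each MinMax call above yields the peak bin of one channel, the three peaks can be put back together into a single BGR value, for example:
// Combine the per-channel histogram peaks into one (approximate) background color.
Bgr backgroundColor = new Bgr(BlueHist, GreenHist, RedHist);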
I have a Kinect mounted on the ceiling pointing down at a table (which is roughly 2 to 3 meters from the Kinect), and my objective is to use the Kinect depth stream to locate the objects and get their positions to later send to Unity. For that I am using C# and OpenCV (the EmguCV wrapper): I first use Canny to get the edges and then use BoundingRect to create a box around each object.
The result is as follows:
Original:
As you can see, the stationary objects are no problem, but the box around the hands (the first one had a flyer on it) is too big, and sometimes it is even worse. Is there any other way (using OpenCV) of getting the positions of the objects?
The objective is then to send the position and dimensions of the object to Unity (through TCP/IP), so preferably the shapes have to be squares or rectangles for easy manipulation in Unity.
The code I have so far is:
Image<Bgr, Byte> grayImage = new Image<Bgr, Byte>("C:\\Users\\Pedro\\Desktop\\imgRva\\KinectSnapshot-03-48-01.png");
Image<Gray, Byte> gray = grayImage.Convert<Gray, Byte>().PyrDown().PyrUp();
CvInvoke.cvShowImage("texto", gray);
CvInvoke.cvWaitKey(0);

Image<Gray, Byte> bin = gray.ThresholdBinary(new Gray(40), new Gray(255));
Image<Gray, Byte> cannyEdges = bin.Canny(300, 300);
CvInvoke.cvShowImage("texto", cannyEdges);
CvInvoke.cvWaitKey(0);

Image<Gray, Byte> canny = new Image<Gray, byte>(cannyEdges.Size);
using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
    }

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
        Debug.WriteLine(r.Location + " x " + r.Width + " y " + r.Height);
    }

CvInvoke.cvShowImage("texto", canny);
CvInvoke.cvWaitKey(0);
Ty
I have another question. I don't know what is happening, but I tried to make a Canny edge detector. The problem is that when I want to detect edges on a simple shape like a square, the program is able to detect it. But when I want to detect shapes in a less simple image, the program just gives me an image filled with only black. Do you have any idea what is going on?
I use this code below:
public Bitmap CannyEdge(Bitmap bmp)
{
    Image<Gray, Byte> Cannybmp;
    Image<Gray, Byte> GrayBmp;
    Image<Bgr, Byte> orig = new Image<Bgr, Byte>(bmp);
    Image<Bgr, Byte> imgSmooth;
    Bitmap output;

    imgSmooth = orig.PyrDown().PyrUp();
    imgSmooth._SmoothGaussian(3);
    GrayBmp = imgSmooth.Convert<Gray, byte>();

    Gray grayCannyThreshold = new Gray(160.0);
    Gray grayThreshLinking = new Gray(80.0);
    Cannybmp = GrayBmp.Canny(grayCannyThreshold.Intensity, grayThreshLinking.Intensity);

    output = Cannybmp.ToBitmap();
    return output;
}

private void button1_Click(object sender, EventArgs e)
{
    Bitmap bmp = new Bitmap(pictureBox1.Image);
    pictureBox2.Image = CannyEdge(bmp);
}
Did you try setting your grayCannyThreshold to a value lower than your grayThreshLinking?
Gray grayCannyThreshold = new Gray(80.0);
Gray grayThreshLinking = new Gray(160.0);
I don't need the thresholded image; I want the threshold value. I found this in OpenCV:
cv::threshold( orig_img, thres_img, 0, 255, CV_THRESH_BINARY+CV_THRESH_OTSU );
Is there an equivalent in EmguCV? Thanks in advance.
P.S. I need to use this threshold for a Canny edge detector.
You can refer to this code for an auto Canny edge detector:
Image<Gray, byte> Img_Source_Gray = Img_Org_Gray.Copy();
Image<Gray, byte> Img_Egde_Gray = Img_Source_Gray.CopyBlank();
Image<Gray, byte> Img_SourceSmoothed_Gray = Img_Source_Gray.CopyBlank();
Image<Gray, byte> Img_Otsu_Gray = Img_Org_Gray.CopyBlank();
Img_SourceSmoothed_Gray = Img_Source_Gray.SmoothGaussian(3);
double CannyAccThresh = CvInvoke.cvThreshold(Img_SourceSmoothed_Gray.Ptr, Img_Otsu_Gray.Ptr, 0, 255, Emgu.CV.CvEnum.THRESH.CV_THRESH_OTSU | Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY);
double CannyThresh = 0.1 * CannyAccThresh;
Img_Otsu_Gray.Dispose();
Img_Egde_Gray = Img_SourceSmoothed_Gray.Canny(CannyThresh, CannyAccThresh);
imageBox2.Image = Img_Egde_Gray;
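In newer Emgu CV versions (3.x and later), where the old cvThreshold wrapper no longer exists, the return value of CvInvoke.Threshold gives the same automatically computed Otsu threshold. A minimal sketch, assuming img is a grayscale Mat or Image<Gray, byte>:
// With the Otsu flag, Threshold ignores the passed-in value (0 here) and
// returns the threshold it computed automatically.
Mat binary = new Mat();
double otsuThreshold = CvInvoke.Threshold(img, binary, 0, 255,
    Emgu.CV.CvEnum.ThresholdType.Binary | Emgu.CV.CvEnum.ThresholdType.Otsu);

// Feed the computed threshold into Canny, mirroring the 0.1 ratio used above.
Mat edges = new Mat();
CvInvoke.Canny(img, edges, 0.1 * otsuThreshold, otsuThreshold);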