I have an Emgu image:
Image<Bgr,byte> image = new Image<Bgr,byte>("image.jpg");
Here is what the file (image.jpg) looks like:
I want to copy all pixels inside the red-yellow triangle to a new image:
Image<Bgr,byte> copiedSegment;
Any idea how to implement this if I have all the coordinates of the triangle contour?
Thank you in advance.
In the OpenCV C++ API you can just use the matrix copy function with a mask built from your triangle points.
Mat image = imread("image.jpg", CV_LOAD_IMAGE_COLOR);
// Fill this with the coordinates of your triangle contour.
vector<Point> triangleRoi;
// Single-channel mask, initially all zeros, same size as the image.
Mat mask = Mat::zeros(image.size(), CV_8UC1);
// Draw your triangle on the mask.
cv::fillConvexPoly(mask, triangleRoi, 255);
Mat copiedSegment;
image.copyTo(copiedSegment, mask);
You should be able to write similar code in Emgu based on this.
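A rough Emgu CV equivalent might look like this (a sketch, assuming the Emgu 3.x CvInvoke-style API; the triangle vertices below are placeholders for your real contour coordinates):
Image<Bgr, byte> image = new Image<Bgr, byte>("image.jpg");
// Single-channel mask, initially all black, same size as the image.
Image<Gray, byte> mask = new Image<Gray, byte>(image.Size);
// Placeholder vertices - replace with your triangle contour coordinates.
Point[] trianglePoints = { new Point(10, 10), new Point(200, 50), new Point(100, 180) };
using (VectorOfPoint triangle = new VectorOfPoint(trianglePoints))
{
    // Fill the triangle area with white on the mask.
    CvInvoke.FillConvexPoly(mask, triangle, new MCvScalar(255));
}
// Copy only the masked pixels into the new image, like copyTo with a mask in C++.
Image<Bgr, byte> copiedSegment = image.Copy(mask);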
// Now we apply a filter to get rid of equalization noise.
ImageBilateral = new Image<Gray, Byte>(grayImg.Size);
CvInvoke.BilateralFilter(grayImg, ImageBilateral, 0, 20.0, 2.0);
//ImageBilateral.Save(String.Format("C:\\Temp\\BilateralFilter{0}.jpg", counter));
retImage = AlignFace(ImageBilateral);
Point faceCenter = new Point(ImageBilateral.Width / 2, (int)Math.Round(ImageBilateral.Height * FACE_ELLIPSE_CY));
Size size = new Size((int)Math.Round(ImageBilateral.Width * FACE_ELLIPSE_W), (int)Math.Round(ImageBilateral.Width * FACE_ELLIPSE_H));
// Filter out the corners of the face, since we mainly just care about the middle parts.
// Draw a filled ellipse in the middle of the face-sized image.
Image<Gray, Byte> mask = new Image<Gray, Byte>(ImageBilateral.Size);
// draw Ellipse on Mask
CvInvoke.Ellipse(mask, faceCenter, size, 0, 0, 360, new MCvScalar(255, 255, 255), -1, Emgu.CV.CvEnum.LineType.AntiAlias, 0);
mask.Save(String.Format("C:\\Temp\\mask{0}.bmp", counter));
Image<Gray, Byte> temp1 = ImageBilateral.Copy(mask);
ImageBilateral.Save(String.Format("C:\\Temp\\ImageBilateral.CopyTo(mask){0}.bmp", counter));
temp1.Save(String.Format("C:\\Temp\\temp1{0}.bmp", counter));
I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote this code for contour detection:
public List<int> GetDiameters()
{
    string inputFile = @"path.jpg";
    Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);
    Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();
    Image<Gray, byte> bluredImage = grayImage;
    CvInvoke.MedianBlur(grayImage, bluredImage, 9);
    Image<Gray, byte> edgedImage = bluredImage;
    CvInvoke.Canny(bluredImage, edgedImage, 50, 5);
    Image<Gray, byte> closedImage = edgedImage;
    Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size { Height = 100, Width = 250 }, new System.Drawing.Point(-1, -1));
    CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 0, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());
    Image<Gray, byte> contoursImage = closedImage;
    Image<Bgr, byte> imageOut = imageInput;
    VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
            Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
        MCvScalar color = new MCvScalar(0, 0, 255);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour,
                    0.01 * CvInvoke.ArcLength(contour, true), true);
                var area = CvInvoke.ContourArea(contour);
                if (area > 0 && approxContour.Size > 10)
                {
                    rescontours1.Push(approxContour);
                }
                CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
            }
        }
    }
}
Result so far:
I think there is a problem with the approximation. How can I get rid of the internal lines and close the external contour?
I might need some more information to pinpoint your issue exactly, but it could be something to do with your median blur. I would check whether you are blurring enough for the Canny edge detection to work well. Another method you could use is dilation: try dilating your Canny edge detection output and see if you get better results.
EDIT
Here is the code below
public List<int> GetDiameters()
{
    //List to hold output diameters
    List<int> diameters = new List<int>();
    //File path to where the image is located
    string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";
    //Read in the image and store it as a Mat object
    Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);
    //Mat object that will hold the output of the Gaussian blur
    Mat gaussianBlur = new Mat();
    //Blur the image
    CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);
    //Mat object that will hold the output of the Canny
    Mat canny = new Mat();
    //Canny the image
    CvInvoke.Canny(gaussianBlur, canny, 40, 40);
    //Mat object that will hold the output of the dilate
    Mat dilate = new Mat();
    //Dilate the Canny image
    CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));
    //Vector that will hold all found contours
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
    //Find the contours and draw them on the image
    CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);
    //Variables to hold relevant info on what is the biggest contour
    int biggest = 0;
    int index = 0;
    //Find the biggest contour (compare each contour's point count, not the contour count)
    for (int i = 0; i < contours.Size; i++)
    {
        if (contours[i].Size > biggest)
        {
            biggest = contours[i].Size;
            index = i;
        }
    }
    //Once all contours have been looped over, add the biggest contour's index to the list
    diameters.Add(index);
    //Return the list
    return diameters;
}
The first thing you do is blur the image.
Then you canny the image.
Then you dilate the image, as to make the final output contours more uniform.
Then you just find contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle around with some of the settings and the code above to make the result a little cleaner.
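If you need an actual diameter in pixels rather than the index of the biggest contour, one option (a sketch on my part, not from the original answer) is to fit a minimum enclosing circle around that contour:
//Hypothetical follow-up: measure the biggest contour instead of storing its index.
//MinEnclosingCircle fits the smallest circle that contains every point of the contour.
CircleF enclosingCircle = CvInvoke.MinEnclosingCircle(contours[index]);
int diameterInPixels = (int)Math.Round(enclosingCircle.Radius * 2);
diameters.Add(diameterInPixels);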
I want to draw a rotated rectangle. The Canny image is not one contour, it is a set of contours. So I want to get all points of all contours, e.g.:
Point[] pts = points of all contours;
Please see attached image.
Canny shows all edges in your image, not an external contour. This is one way (but not the only one) to do object detection (foreground / background extraction).
The following snippet extracts the object ROI (min area rect) by using a Canny edge detector. Be careful about the Canny threshold parameters coming from the Otsu threshold.
Hope it helps.
Detection result
Image<Gray,byte> imageFullSize = new Image<Gray, byte>(imagePath);
Rectangle roi = new Rectangle(Your Top Left X,Your Top Left Y,Your ROI Width,Your ROI Height);
//Now get a ROI (no copy, it is a new header pointing to the original image)
Image<Gray,byte> image = imageFullSize.GetSubRect(roi);
//Step 1 : contour detection using canny
//Gaussian noise removal, eliminates background false detections
image._SmoothGaussian(15);
//Canny thresh based on otsu threshold value:
Mat binary = new Mat();
double otsuThresh = CvInvoke.Threshold(image, binary, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);
double highThresh = otsuThresh;
double lowThresh = otsuThresh * 0.5;
var cannyImage = image.Canny(lowThresh, highThresh);
//Step 2 : contour extraction
Mat hierarchy = new Mat();
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(cannyImage, contours, hierarchy, RetrType.External, ChainApproxMethod.ChainApproxSimple);
//Step 3 : contour fusion
List<Point> points = new List<Point>();
for(int i = 0; i < contours.Size; i++)
{
points.AddRange(contours[i].ToArray());
}
//Step 4 : Rotated rect
RotatedRect minAreaRect = CvInvoke.MinAreaRect(points.Select(pt=>new PointF(pt.X,pt.Y)).ToArray());
Point[] vertices = minAreaRect.GetVertices().Select(pt => new Point((int)pt.X, (int)pt.Y)).ToArray();
//Step 5 : draw result
Image<Bgr, byte> colorImageFullSize = new Image<Bgr, byte>(imagePath);
Image<Bgr, byte> colorImage = colorImageFullSize.GetSubRect(roi);
colorImage.Draw(vertices, new Bgr(Color.Red), 2);
I have a Kinect mounted on the ceiling pointing down at a table (roughly 2 to 3 meters from the Kinect), and my objective is to use the Kinect depth stream to locate the objects on the table and get their positions to later send to Unity. For that I am using C# and OpenCV (Emgu wrapper): I first use Canny to get the edges and then use BoundingRect to create a box around each object.
The result is as follows:
Original:
As you can see, the stationary objects are no problem, but the box around the hands (the first one had a flyer on it) is too big and sometimes even worse. Is there any other way (using OpenCV) of getting the positions of the objects?
The objective is then to send each object's position and dimensions to Unity (through TCP/IP), so preferably the shapes have to be squares or rectangles for easy manipulation in Unity.
The code I have so far is:
Image<Bgr, Byte> grayImage = new Image<Bgr, Byte>("C:\\Users\\Pedro\\Desktop\\imgRva\\KinectSnapshot-03-48-01.png");
Image<Gray, Byte> gray = grayImage.Convert<Gray, Byte>().PyrDown().PyrUp();
CvInvoke.cvShowImage("texto", gray);
CvInvoke.cvWaitKey(0);
Image<Gray, Byte> bin = gray.ThresholdBinary(new Gray(40), new Gray(255));
Image<Gray, Byte> cannyEdges = bin.Canny(300, 300);
CvInvoke.cvShowImage("texto", cannyEdges);
CvInvoke.cvWaitKey(0);
Image<Gray, Byte> canny = new Image<Gray, byte>(cannyEdges.Size);
using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
    }
using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
        Debug.WriteLine(r.Location + " x " + r.Width + " y " + r.Height);
    }
CvInvoke.cvShowImage("texto", canny);
CvInvoke.cvWaitKey(0);
Thank you.
I have a blood image and I applied watershed on it. It works and determines the cells, but I don't know how to put each cell in a separate image. I'm working with Emgu CV; can I get some help?
Here I segment the picture using my watershed method and then put the markers on the original image:
Image<Gray, Int32> boundaryImage = watershedSegmenter.Process(image);
Image<Gray, Byte> test = watershedSegmenter.GetWatersheds();
Image<Bgr, byte> dest = new Image<Bgr, byte>(image.Width, image.Height);
dest = image.And(image, test);
pictureBox1.Width = boundaryImage.ToBitmap().Width;
pictureBox1.Height = boundaryImage.ToBitmap().Height;
pictureBox1.BackgroundImage = boundaryImage.ToBitmap();
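One possible way to split the labelled cells into separate images (a sketch on my part, not from the original post; it assumes the label image returned by watershedSegmenter.Process gives each cell a distinct positive integer label, and that a hypothetical numberOfCells variable holds how many cells there are):
Image<Gray, Int32> labels = watershedSegmenter.Process(image);
for (int label = 1; label <= numberOfCells; label++)
{
    // Build a mask that is 255 where the label image equals the current cell's label.
    Image<Gray, Byte> cellMask = new Image<Gray, Byte>(labels.Size);
    CvInvoke.cvCmpS(labels, label, cellMask, CMP_TYPE.CV_CMP_EQ);
    // Copy only this cell's pixels from the original image and save it on its own.
    Image<Bgr, Byte> singleCell = image.Copy(cellMask);
    singleCell.Save(String.Format("cell_{0}.png", label));
}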
It seems like Emgu CV's cvWatershed() has some bugs which are not yet resolved. The marker image always comes back as a white image. Can you share your first-stage output images?
I'm not able to view any images using this code:
Image<Bgr, Byte> image = Img_Source_Bgr.Copy();
Image<Gray, Int32> marker = new Image<Gray, Int32>(image.Width, image.Height);
Rectangle rect = image.ROI;
marker.Draw(
new CircleF(
new PointF(rect.Left + rect.Width / 2.0f, rect.Top + rect.Height / 2.0f),
/*(float)(Math.Min(image.Width, image.Height) / 20.0f)*/ 5.0f),
new Gray(255),
0);
Image<Bgr, Byte> result = image.ConcateHorizontal(marker.Convert<Bgr, byte>());
Image<Gray, Byte> mask = new Image<Gray, byte>(image.Size);
CvInvoke.cvWatershed(image, marker);
CvInvoke.cvCmpS(marker, 0.10, mask, CMP_TYPE.CV_CMP_GT);
imageBox1.Image = mask;
I'm doing a small experiment using the Fourier transform in Emgu CV. My aim is to take the Fourier transform of an image, then take the inverse Fourier transform, and check whether the image shows up again or not. Mathematically, it should.
This is my code, which I believe is correct:
Image<Gray, float> image = new Image<Gray, float>("c://box1.png");
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage); // Initialize all elements to Zero
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
Matrix<float> idft = new Matrix<float>(dft.Rows, dft.Cols, 2);
CvInvoke.cvDFT(dft, idft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, 0);
IntPtr complexImage2 = CvInvoke.cvCreateImage(idft.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_8U, 2);
CvInvoke.cvShowImage("picture", idft);
System.Threading.Thread.Sleep(99999); // to wait and see the picture
I have two problems:
1. An error shows up saying "OpenCV: Source image must have 1, 3 or 4 channels". I believe it is related to the IDFT, but I couldn't solve it.
2. It still shows an output image, but unfortunately it is not the original image that was input. All that shows is a plain grey image.
Thanks.
Try this:
Image<Gray, System.Single> _image = new Image<Gray, System.Single>("c://box1.png");
Image<Gray, System.Single> DFTimage = new Image<Gray, System.Single>(_image.Size);
Image<Gray, System.Single> Original = new Image<Gray, System.Single>(_image.Size);
CvInvoke.cvDFT(_image.Ptr, DFTimage.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT(DFTimage.Ptr, Original.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
CvInvoke.cvShowImage("picture", Original);
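One thing to watch with this approach (a note on my part, not from the original answer): cvDFT does not normalize the inverse transform, so the reconstructed values come back scaled by the number of pixels, and cvShowImage renders a 32-bit float image as if its values were in the 0..1 range. A minimal follow-up to rescale before displaying:
// Undo the implicit rows*cols scaling of the unnormalized inverse DFT,
// then convert to 8-bit so cvShowImage renders the usual 0..255 range.
Image<Gray, Byte> reconstructed = (Original / (_image.Rows * _image.Cols)).Convert<Gray, Byte>();
CvInvoke.cvShowImage("picture", reconstructed);
CvInvoke.cvWaitKey(0);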