I am trying to quantify the width (in pixels) of hair using OpenCV.
Right now I use segmentation to binarize the image. Then my idea is to generate a line mask over the image and AND it with the binarized image, so the lines are cut into short segments wherever they cross a hair. I use FindContours to get the contours of those segments, ContourArea to compute the area of each contour, sum the areas, and finally estimate the pixel width as the square root of the total area divided by the number of contours:
This is the segmented and binarized crop of hair:
Then, this is the line mask I will apply to the previous image:
And finally, this is the result of the AND gate between both images:
Then, the code I am using to calculate the pixel width, given the contours of the previous image:
double area = 0;
for (int i = 0; i < blobs.Size; i++)   // blobs is the result of FindContours
    area += CvInvoke.ContourArea(blobs[i]);

double pixelWidth = Math.Sqrt(area / blobs.Size);
return (int)Math.Ceiling(pixelWidth);
The result I am obtaining here is a width of 5 pixels, whereas the real width, which I can check with GIMP, is about 6-8 pixels (depending on the section).
I tested this method with several hairs; in most occasions the measurement is off by about 1 pixel, in others it is correct, and in cases like the one shown it is off by several pixels.
Do you know a better way to approach this problem?
Algorithm
Step 1: Detect contour points using FindContours().
Step 2: Find the bounding rectangle of the detected contour points using BoundingRect().
Step 3: The width of the rectangle is the desired output.
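A minimal sketch of these three steps with the EmguCV 3.x API; `binary` is assumed to be the binarized `Image<Gray, byte>` from the question, and the variable names are illustrative:

```csharp
// Step 1: detect contours in the binarized image.
VectorOfVectorOfPoint blobs = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binary, blobs, null,
    RetrType.External, ChainApproxMethod.ChainApproxSimple);

// Steps 2-3: take the bounding rectangle of each line segment;
// its width is the estimate of the hair width at that point.
for (int i = 0; i < blobs.Size; i++)
{
    Rectangle box = CvInvoke.BoundingRectangle(blobs[i]);
    int hairWidth = box.Width; // Step 3: the rectangle's width is the output
}
```

Measuring each segment separately also gives you a per-section estimate, which matches the observation that the real width varies between 6 and 8 pixels depending on the section.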
Related
I'm using C# WinForms.
The rotated polygon is drawn on a picturebox. The width and height of the rotated polygon are 101, 101. Now I want to transfer the contents of the rotated polygon to a new bitmap of size 101, 101.
I tried to paint pixels of the rectangle using this code
for (int h = 0; h < pictureBox1.Image.Height; h++)
{
    for (int w = 0; w < pictureBox1.Image.Width; w++)
    {
        if (IsPointInPolygon(rotatedpolygon, new PointF(w, h)))
        {
            g.FillRectangle(Brushes.Black, w, h, 1, 1); // paint pixel inside polygon
        }
    }
}
The pixels are painted in the following manner:
Now, how do I know which location on the rotated rectangle goes to which location in the new bitmap? That is, how do I translate pixel coordinates of the rotated rectangle to the new bitmap?
Or, simply put, is it possible to map rows and columns from the rotated rectangle to the new bitmap as shown below?
Sorry if the question is not clear.
What you are asking to do is not literally possible. Look at your diagram:
On the left side, you've drawn pixels that are themselves oriented diagonally. But, that's not how the pixels actually are oriented in the source bitmap. The source bitmap will have square pixels oriented horizontally and vertically.
So, let's just look at a little bit of your original image:
Consider those four pixels. You can see in your drawing that, considered horizontally and vertically, the top and bottom pixels overlap the left and right pixels. More specifically, if we overlay the actual pixel orientations of the source bitmap with your proposed locations of source pixels, we get something like this:
As you can see, when you try to get the value of the pixel that will eventually become the top-right pixel of the target image, you are asking for the top pixel in that group of four. But that top pixel is actually made up of two different pixels in the original source image!
The bottom line: if the visual image that you are trying to copy will be rotated during the course of copying, there is no one-to-one correspondence between source and target pixels.
To be sure, resampling algorithms that handle this sort of geometric projection do apply concepts similar to what you're proposing. But they do so in a mathematically sound way, in which pixels are merged or interpolated as necessary to map the square, horizontally- and vertically-oriented pixels of the source to the square, horizontally- and vertically-oriented pixels of the target.
The only way you could get literally what you're asking for — to map the pixels on a one-to-one basis without any change in the actual pixel values themselves — would be to have a display that itself rotated.
Now, all that said: I claim that you're trying to solve a problem that not only is not solvable, but also is not worth solving.
Display resolution is so high on modern computing devices, even on the phone that you probably have next to you or in your pocket, that the resampling that occurs when you rotate bitmap images is of no consequence to the human perception of the bitmap.
Just do it the normal way. It will work fine.
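"The normal way" here means letting GDI+ do the resampling: apply a rotation transform to a Graphics over the target bitmap and draw the source into it. A sketch, assuming a `source` bitmap; the angle, target size, and interpolation mode are illustrative:

```csharp
// Draw 'source' rotated by 45 degrees into a new 101x101 bitmap,
// letting GDI+ interpolate the pixel values during the rotation.
Bitmap target = new Bitmap(101, 101);
using (Graphics g = Graphics.FromImage(target))
{
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    g.TranslateTransform(target.Width / 2f, target.Height / 2f); // rotate about the center
    g.RotateTransform(45f);
    g.DrawImage(source, -source.Width / 2f, -source.Height / 2f);
}
```

The interpolation mode controls how the overlapping source pixels are merged; bicubic gives the smoothest result for this kind of projection.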
I am using C# to write a program to detect the paper edges and crop out the square edges of the paper from the images.
Below is the image I wish to crop. The paper will always appear at the bottom of the pages.
I had read through these links but I still have no idea how to do it.
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Edit: I am using EMGU for this OCR project
You can also:
Convert your image to grayscale
Apply ThresholdBinary by the pixel intensity
Find contours.
To see examples on finding contours you can look on this post.
The FindContours method doesn't care about contour size. The only thing to do before finding contours is to emphasize them by binarizing the image (which we do in step 2).
For more info also look at OpenCV docs: findContours, example.
Find proper contour by the size and position of its bounding box.
(In this step we iterate over all the found contours and try to figure out which one is the contour of the paper sheet, using the known paper dimensions, their proportions, and the expected position in the bottom-left corner of the image.)
Crop the image using bounding box of the sheet of paper.
Image<Gray, byte> grayImage = new Image<Gray, byte>(colorImage);
Image<Bgr, byte> color = new Image<Bgr, byte>(colorImage);

grayImage = grayImage.ThresholdBinary(new Gray(thresholdValue), new Gray(255));

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours = grayImage.FindContours(
             Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE,
             Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store);
         contours != null; contours = contours.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours, 1);
        // filter contours by position and size of the box
    }

// crop the image using the found bounding box
UPD: I have added more details.
Decide on the Paper color
Decide on a delta to allow the color to vary
Decide on points along the bottom to do vertical tests
Do the vertical tests going up, collecting the minimum y where the color stops appearing
Do at least 10-20 such tests
The resulting y should be 1 more than what you want to keep. You may need to add a limit to avoid cropping everything if the image is too bright; either refine the algorithm or mark such an image as an exception for manual treatment!
To crop, use the DrawImage overload that takes source and destination rectangles!
Here are a few more hints:
To find the paper color you can go up and right diagonally from the bottom-left corner until you hit a pixel with a Color.GetBrightness() > 0.8; then go 2 pixels further to get clear of any antialiased pixels.
A reasonable delta will depend on your images; start with 10%
Use a random walk along the bottom; when you are done maybe add one extra pass in the close vicinity of the minimum found in pass one.
The vertical test can use GetPixel to get at the colors or if that is too slow you may want to look into LockBits. But get the search algorithm right first, only then think about optimizing!
If you run into trouble with your code, expand your question!
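A sketch of the scan-and-crop idea above; `bmp` is the loaded Bitmap, and the sampled `paper` color, the `delta`, and the number of tests are assumptions you would tune as described:

```csharp
// Scan up from the bottom at random x positions; at each, find the
// first row (from below) where the paper color stops appearing, and
// keep the minimum over all tests.
Color paper = Color.White; // paper color sampled as described above (assumption)
int delta = 26;            // allowed per-channel deviation, roughly 10% (assumption)
Random rnd = new Random();
int stopY = bmp.Height - 1;

bool IsPaper(Color c) =>
    Math.Abs(c.R - paper.R) <= delta &&
    Math.Abs(c.G - paper.G) <= delta &&
    Math.Abs(c.B - paper.B) <= delta;

for (int test = 0; test < 20; test++)
{
    int x = rnd.Next(bmp.Width);
    int y = bmp.Height - 1;
    while (y >= 0 && IsPaper(bmp.GetPixel(x, y)))
        y--;
    stopY = Math.Min(stopY, y); // first non-paper row, scanning upward
}

// Crop with the DrawImage overload that takes source and
// destination rectangles, keeping the region below stopY.
Rectangle src = new Rectangle(0, stopY + 1, bmp.Width, bmp.Height - stopY - 1);
Bitmap cropped = new Bitmap(src.Width, src.Height);
using (Graphics g = Graphics.FromImage(cropped))
    g.DrawImage(bmp, new Rectangle(0, 0, src.Width, src.Height), src, GraphicsUnit.Pixel);
```

GetPixel is slow, so if 20 test columns plus the final pass take too long, switch the inner loop to LockBits, as the hints suggest, but only after the search logic works.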
When I take the mouse coordinates relative to the top-left corner and set that pixel to a colour, the painted pixel is not at the mouse's position, and the offset even differs from bitmap to bitmap: with one bitmap the coordinates seemed to be multiplied by about 0.8, with another by about 0.2. I tried using PageUnit = GraphicsUnit.Pixel;, but that also didn't work. I think the bitmaps might be set to use different pixel sizes, but even if that's the case, I don't know how to handle it.
Looks like your bitmaps have varying dpi settings.
You may need to correct them to be the same as the Graphics object has:
Bmp.SetResolution(g.DpiX, g.DpiY);
g.DrawImage(Bmp, 0, 0);
I use OpenCV for image pre-processing. I need to cut out only the scanned photo, without the white area.
I use this algorithm to find the rectangular photo on the scanned image:

image_canny <- apply canny edge detector to this channel
for threshold in bunch_of_increasing_thresholds:
    image_thresholds[threshold] <- apply threshold to this channel
for each contour found in {image_canny} U image_thresholds:
    approximate contour with polygons
    keep the approximation if it has four corners and the angles are close to 90 degrees

But this approach does not work if I put the picture in the corner of the scanner!
Can anybody advise how I can find the photo on the scanned image? Any examples or methods?
There are several ways to achieve your goal. I will give code for OpenCvSharp, it will be similar for plain C++.
Try to add a neutral border around your image. For example, you can just add 10-20 pixels of white around the source image. This can create false contours, but the target part of the image will no longer be in the corner.
Mat img = Cv2.ImRead("test.jpg");
Mat imgExtended = new Mat(new OpenCvSharp.Size(img.Size().Width + 40, img.Size().Height + 40), MatType.CV_8UC3, Scalar.White);
OpenCvSharp.Rect roi = new OpenCvSharp.Rect(new OpenCvSharp.Point(20, 20), img.Size());
img.CopyTo(imgExtended.SubMat(roi));
img = imgExtended;
Mat coloredImage = img.Clone();
Cv2.CvtColor(img, img, ColorConversionCodes.BGR2GRAY);
OpenCvSharp.Point[][] contours;
HierarchyIndex[] hierarchy;
Mat result = new Mat();
Cv2.Canny(img, result, 80, 150);
Cv2.FindContours(result, out contours, out hierarchy, RetrievalModes.External, ContourApproximationModes.ApproxSimple);
You have an object on a practically white background. You can apply any thresholding operation and then take the biggest blob.
Update.
In both cases, the dark line at the top of your image and the dark area in the left corner can still be a problem. In that case you can choose the contour with the largest area using the function
double Cv2.ContourArea(Point[] Contour);
And then try to create bounding box, which will minimize the error.
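Continuing the OpenCvSharp snippet above, selecting the largest-area contour and cropping to its bounding box might look like this (a sketch; `contours` and `coloredImage` come from the earlier code):

```csharp
// Pick the contour with the largest area...
double maxArea = 0;
OpenCvSharp.Point[] best = null;
foreach (OpenCvSharp.Point[] contour in contours)
{
    double area = Cv2.ContourArea(contour);
    if (area > maxArea)
    {
        maxArea = area;
        best = contour;
    }
}

// ...and crop the original color image to its bounding box.
if (best != null)
{
    OpenCvSharp.Rect box = Cv2.BoundingRect(best);
    Mat photo = new Mat(coloredImage, box); // view into the cropped region
}
```

Taking the area maximum filters out the false contours created by the added white border as well as small dark artifacts at the image edges.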
Okay, so I have an Image which holds my tile set, and a PictureBox used as my "game screen". All the code does is take a snippet of my tile set (a tile) and place it on the game screen.
Here's my code.
private void picMap_Click(object sender, EventArgs e)
{
    //screenMain = picMap.CreateGraphics();
    // Create image.
    //gfxTiles = Image.FromFile(Program.resourceMapFilePath + "poatiles.png");

    // Create coordinates for upper-left corner of image.
    int x = 0;
    int y = 0;

    // Create rectangle for source image.
    Rectangle srcRect = new Rectangle(16, 16, 16, 16);
    GraphicsUnit units = GraphicsUnit.Pixel;

    // Draw image to screen.
    screenMain.DrawImage(gfxTiles, x, y, srcRect, units);
    screenMain.DrawImage(gfxTiles, 16, 0, srcRect, units);
    screenMain.DrawImage(gfxTiles, 32, 0, srcRect, units);
    screenMain.DrawImage(gfxTiles, 16, 16, srcRect, units);
}
And here is my output:
Any reason why that space between each "tile" is there (it's a 2-pixel gap)? I could ghetto-rig the code, but I plan to use algebra to programmatically figure out where tiles need to go, so a ghetto rig throughout the entire game would be troublesome and, at the very least, sloppy.
I think the call to DrawImage is okay; in the image you posted it looks like 16x16 tiles drawn next to each other. I'd check poatiles.png. I'm not sure what's at Rectangle(16, 16, 16, 16) in it; it may not be what you think.
EDIT: I don't know what to say. I made a png almost the size of poatiles, put a 16x16 square in it at (16, 16), and it drew exactly like you'd expect.
The code looks fine, and since it works on smaller images, the only thing I can think of is that there's a problem with poatiles.
There's the following comment in MSDN about Graphics.DrawImage Method (Image, Int32, Int32, Rectangle, GraphicsUnit)
This method draws a portion of an image using its physical size, so the image portion will have its correct size in inches regardless of the resolution (dots per inch) of the display device. For example, suppose an image portion has a pixel width of 216 and a horizontal resolution of 72 dots per inch. If you call this method to draw that image portion on a device that has a resolution of 96 dots per inch, the pixel width of the rendered image portion will be (216/72)*96 = 288.
Since you're specifying pixels as the unit, I'd expect it to ignore that. But in the absence of better ideas, you might want to compare the dpi of poatiles versus the smaller images (Image.HorizontalResolution and Image.VerticalResolution).
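If the resolutions do turn out to differ, you can normalize them before drawing. A sketch using `gfxTiles` from the question; 96 dpi is just the common screen default, not a value the question confirms:

```csharp
// Compare the DPI of the tile sheet with your smaller test images...
Console.WriteLine(gfxTiles.HorizontalResolution + " x " + gfxTiles.VerticalResolution);

// ...and, if they differ, force a known value before drawing.
// SetResolution lives on Bitmap, not Image, hence the cast.
if (gfxTiles is Bitmap tiles)
    tiles.SetResolution(96f, 96f);
```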
I'm not sure that all of the information is there to start with, but here's some suggestions I have from looking at what you've done so far.
1) Check poatiles.png to make sure that it's definitely laid out on a 16x16 grid with no stray black pixels between the tiles.
2) Note that the four-int Rectangle constructor is Rectangle(x, y, width, height), so new Rectangle(16, 16, 16, 16) selects a 16x16 region starting at (16, 16), not at the top-left corner; make sure that offset points at the tile you intend.
3) You might want to determine your positions on screen by multiplying the width and height of the Rectangle you're drawing by the row and column index and adding that to the origin (0, 0).
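For point 3, a sketch of deriving screen positions from row and column indices; `screenMain` and `gfxTiles` are from the question, while the tile size and the 10x10 map dimensions are illustrative:

```csharp
const int tileSize = 16;
Rectangle srcRect = new Rectangle(16, 16, tileSize, tileSize);

// Destination position = origin + index * tile size, so tiles butt
// up against each other with no gaps regardless of map size.
for (int row = 0; row < 10; row++)
{
    for (int col = 0; col < 10; col++)
    {
        screenMain.DrawImage(gfxTiles,
            col * tileSize, row * tileSize,
            srcRect, GraphicsUnit.Pixel);
    }
}
```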