Detecting Paper Edges and Cropping - C#

I am using C# to write a program to detect the paper edges and crop out the square edges of the paper from the images.
Below is the image I wish to crop. The paper will always appear at the bottom of the page.
I had read through these links but I still have no idea how to do it.
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Edit: I am using Emgu CV for this OCR project.

You can also:
Convert your image to grayscale
Apply ThresholdBinary by the pixel intensity
Find contours.
To see examples of finding contours you can look at this post.
The FindContours method doesn't care about the contours' size. The only thing to do before finding contours is to emphasize them by binarizing the image (and we do this in step 2).
For more info also look at OpenCV docs: findContours, example.
Find the proper contour by the size and position of its bounding box.
(In this step we iterate over all found contours and try to figure out which one is the contour of the paper sheet, using the known paper dimensions, their proportions, and the relative position: the bottom-left corner of the image.)
Crop the image using bounding box of the sheet of paper.
Image<Gray, byte> grayImage = new Image<Gray, byte>(colorImage);
Image<Bgr, byte> color = new Image<Bgr, byte>(colorImage);

grayImage = grayImage.ThresholdBinary(new Gray(thresholdValue), new Gray(255));

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours = grayImage.FindContours(
             Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE,
             Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE,
             store);
         contours != null;
         contours = contours.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours, 1);
        // filter contours by position and size of the bounding box
    }

// crop the image using the found bounding box
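To make the filtering and cropping steps concrete, here is a minimal pure-Python sketch of the same idea (the helper names and thresholds are my own, and the image is modeled as a row-major list of rows rather than an Emgu `Image`):

```python
def pick_paper_box(boxes, img_w, img_h, min_w, min_h):
    """Pick the bounding box that looks like the paper sheet:
    large enough, and anchored near the bottom-left of the image."""
    for (x, y, w, h) in boxes:
        near_left = x < img_w * 0.25          # paper starts near the left edge
        near_bottom = (y + h) > img_h * 0.75  # and reaches toward the bottom
        if w >= min_w and h >= min_h and near_left and near_bottom:
            return (x, y, w, h)
    return None

def crop(image, box):
    """Crop a row-major image (list of rows) to the bounding box."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]
```

The same position/size tests go inside the `for` loop over contours in the C# code above, with `Rectangle r` playing the role of `box`.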
UPD: I have added more details.

Decide on the Paper color
Decide on a delta to allow the color to vary
Decide on points along the bottom to do vertical tests
Do the vertical tests going up, collecting the minimum y where the color stops appearing
Do at least 10-20 such tests
The resulting y should be 1 more than what you want to keep. You may need to insert a limit to avoid cropping everything if the image is too bright. Either refine the algorithm or mark such an image as an exception for manual treatment!
To crop you use the DrawImage overload with source and dest rectangles!
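The vertical tests above can be sketched in a few lines. This is a toy pure-Python version (my own function name and conventions; the image is a row-major grid of grayscale values, with row 0 at the top):

```python
def crop_top_y(image, paper, delta, xs):
    """Scan upward from the bottom at each test column in xs and
    collect the minimum y where the paper color stops appearing.
    paper: expected paper brightness; delta: allowed deviation.
    The returned y is the first (topmost) paper row, i.e. one more
    than the last row you want to keep above the paper."""
    h = len(image)
    min_y = h  # if no paper is found, nothing gets cropped
    for x in xs:
        y = h - 1
        while y >= 0 and abs(image[y][x] - paper) <= delta:
            y -= 1  # still looks like paper, keep going up
        min_y = min(min_y, y + 1)  # y + 1 is the topmost paper row here
    return min_y
```

With 10-20 test columns this gives the crop boundary; the `DrawImage` overload with source and destination rectangles then does the actual cropping.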
Here are a few more hints:
To find the paper color you can go up-right diagonally from the bottom-left edge until you hit a pixel with a Color.GetBrightness of > 0.8; then go 2 pixels further to get clear of any antialiased pixels.
A reasonable delta will depend on your images; start with 10%
Use a random walk along the bottom; when you are done maybe add one extra pass in the close vicinity of the minimum found in pass one.
The vertical test can use GetPixel to get at the colors or if that is too slow you may want to look into LockBits. But get the search algorithm right first, only then think about optimizing!
If you run into trouble with your code, expand your question!

Related

How can I measure the width (diameter) of a hair using OpenCV?

I am trying to quantify the width (in pixels) of hair using OpenCV.
Right now, I use segmentation to binarize the image. My idea is then to generate lines over the image, AND the two images together to get the line widths, use FindContours to get the contours, use ContourArea to calculate the area of each contour, sum them, and finally calculate pixelWidth as the square root of the total area divided by the number of contours:
This is the segmented and binarized crop of hair:
Then, this is the line mask I will apply to the previous image:
And finally, this is the result of the AND gate between both images:
Then, the code I am using to calculate the pixel width, given the contours of the previous image:
for (int i = 0; i < blobs.Size; i++) // blobs is the result of FindContours
    area += CvInvoke.ContourArea(blobs[i]);
pixelWidth += Math.Sqrt(area / blobs.Size);
return (int)Math.Ceiling(pixelWidth);
The result I am obtaining here is a width of 5 pixels, whereas the real pixel width, which I can check with GIMP, is about 6-8 (depending on the section).
I tested this method with several hairs, and on most occasions the measure is off by about 1 pixel; in others the measure is correct, and in others, like the one shown, it is off by several pixels.
Do you know any way to face this problem better?
Algorithm
Step 1: Detect contour points using FindContours().
Step 2: Find the bounding rectangle of the detected contour points using BoundingRect().
Step 3: The width of the rectangle is the desired output.
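As a sketch of the suggested algorithm, here is a pure-Python stand-in for BoundingRect() over a list of integer contour points (my own helper; a real pipeline would feed it the points from FindContours, and note this measures the axis-aligned width, so a strongly tilted hair would need a rotated rectangle instead):

```python
def bounding_rect(points):
    """Axis-aligned bounding rectangle of contour points, returned as
    (x, y, width, height) -- the same convention as BoundingRect()."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x, y = min(xs), min(ys)
    return (x, y, max(xs) - x + 1, max(ys) - y + 1)
```

The third element of the result is the hair width in pixels.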

Transfer Contents Inside a Rotated Polygon to a Bitmap

I'm using C# WinForms.
The rotated polygon is drawn on a PictureBox. The width and height of the rotated polygon are 101, 101. Now, I want to transfer the contents of the rotated polygon to a new bitmap of size 101, 101.
I tried to paint pixels of the rectangle using this code
for (int h = 0; h < pictureBox1.Image.Height; h++)
{
for (int w = 0; w < pictureBox1.Image.Width; w++)
{
if (IsPointInPolygon(rotatedpolygon, new PointF(w, h)))
{
g.FillRectangle(Brushes.Black, w, h, 1, 1); //paint pixel inside polygon
}
}
}
The pixels are painted in the following manner:
Now, how do I know which location on the rotated rectangle goes to which location in the new bitmap? That is, how do I translate pixel coordinates of the rotated rectangle to the new bitmap?
Or, simply put, is it possible to map rows and columns from the rotated rectangle to the new bitmap as shown below?
Sorry, if the question is not clear.
What you're asking to do is not literally possible. Look at your diagram:
On the left side, you've drawn pixels that are themselves oriented diagonally. But, that's not how the pixels actually are oriented in the source bitmap. The source bitmap will have square pixels oriented horizontally and vertically.
So, let's just look at a little bit of your original image:
Consider those four pixels. You can see in your drawing that, considered horizontally and vertically, the top and bottom pixels overlap the left and right pixels. More specifically, if we overlay the actual pixel orientations of the source bitmap with your proposed locations of source pixels, we get something like this:
As you can see, when you try to get the value of the pixel that will eventually become the top-right pixel of the target image, you are asking for the top pixel in that group of four. But that top pixel is actually made up of two different pixels in the original source image!
The bottom line: if the visual image that you are trying to copy will be rotated during the course of copying, there is no one-to-one correspondence between source and target pixels.
To be sure, resampling algorithms that handle this sort of geometric projection do apply concepts similar to that which you're proposing. But they do so in a mathematically sound way, in which pixels are necessarily merged or interpolated as necessary to map the square, horizontally- and vertically-oriented pixels from the source, to the square, horizontally- and vertically-oriented pixels in the target.
The only way you could get literally what you're asking for — to map the pixels on a one-to-one basis without any change in the actual pixel values themselves — would be to have a display that itself rotated.
Now, all that said: I claim that you're trying to solve a problem that not only is not solvable, but also is not worth solving.
Display resolution is so high on modern computing devices, even on the phone that you probably have next to you or in your pocket, that the resampling that occurs when you rotate bitmap images is of no consequence to the human perception of the bitmap.
Just do it the normal way. It will work fine.
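To illustrate what "mathematically sound" resampling means here, this is a toy pure-Python sketch of inverse-mapping rotation with nearest-neighbor sampling (my own code, not GDI+'s actual algorithm, which also interpolates): for every *target* pixel we compute which source location lands on it, so source pixels get merged or duplicated rather than moved one-to-one.

```python
import math

def rotate_nearest(img, angle_deg):
    """Rotate a row-major image clockwise by angle_deg about its center,
    using inverse mapping with nearest-neighbor sampling."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # inverse-rotate the target coordinate back into the source
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            isx, isy = round(sx), round(sy)
            if 0 <= isx < w and 0 <= isy < h:
                out[y][x] = img[isy][isx]
    return out
```

At 45 degrees several target pixels round to the same source pixel while others fall outside the grid, which is exactly the overlap problem shown in the diagram above.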

Crop Image in Aspect Fill UIImageView

I have been trying to figure this out for hours with no answer.
I have a UIImageView with its ContentMode set to UIViewContentMode.ScaleAspectFill, which I would like to crop using an overlaid resizable frame. The cropping frame no longer maintains a 1:1 relationship between the bounds that contain both the UIImageView and the cropping tool. The cropping usually would be as simple as:
using (var imageRef = image.CGImage.WithImageInRect(frame)) {
return UIImage.FromImage(imageRef);
}
But in this case extra calculations are required: how would I calculate the offset to match the cropping-tool frame to the newly inflated UIImageView (or rather, UIImage)? Here's an image to help paint the picture:
This picture shows a few key things.
Upper Right Image: What will be cropped currently (area contained in blue section)
Blue Rectangle: Where the cropping is currently taking place; notice the position and height are skewed compared to red.
Red Rectangle: The cropping control, essentially where the cropping should be taking place. INSTEAD of where blue is.
Upper Blue Textbox: Ignore this.
Essentially, I want the blue rect to be where the red frame is.
Any help would be greatly appreciated. I am using C# for this, but Objective-C answers are more than welcome.
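For what it's worth, the mapping the question asks about can be sketched with plain arithmetic, assuming standard aspect-fill behavior (image scaled by the larger of the two ratios, overflow centered and clipped). This is a hedged pure-Python sketch of that math, with hypothetical names; the resulting rectangle is what you would pass to CGImage.WithImageInRect:

```python
def view_frame_to_image_rect(frame, view_size, image_size):
    """Map a crop frame given in view coordinates to image coordinates,
    for a view displaying the image with aspect-fill scaling."""
    fx, fy, fw, fh = frame
    vw, vh = view_size
    iw, ih = image_size
    # aspect-fill: scale so the image covers the whole view
    scale = max(vw / iw, vh / ih)
    # the scaled image overflows the view; the overflow is split evenly
    off_x = (iw * scale - vw) / 2.0
    off_y = (ih * scale - vh) / 2.0
    return ((fx + off_x) / scale, (fy + off_y) / scale,
            fw / scale, fh / scale)
```

For example, a 200x100 image in a 100x100 view is not scaled (scale 1.0) but shifted left by 50 points, so a full-view crop frame maps to the centered 100x100 region of the image.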

Document detection on scanned image OpenCV

I use OpenCV for image pre-processing. I need to cut out only the scanned photo, without the white area.
I use this algorithm:
image_canny <- apply canny edge detector to this channel
for threshold in bunch_of_increasing_thresholds:
    image_thresholds[threshold] <- apply threshold to this channel
for each contour found in {image_canny} U image_thresholds:
    approximate contour with polygons
    if the approximation has four corners and the angles are close to 90 degrees, keep it
to find a rectangular object on the scanned image. But this does not work if I put my picture in the corner of the scanner!
Can anybody advise, how i can find this photo on scanned image? any examples, methods?
There are several ways to achieve your goal. I will give code for OpenCvSharp, it will be similar for plain C++.
Try adding a neutral border around your image. For example, you can just add 10-20 pixels of white around the source image. This can create false contours, but your target part of the image will no longer be in the corner.
Mat img = Cv2.ImRead("test.jpg");
Mat imgExtended = new Mat(
    new OpenCvSharp.Size(img.Size().Width + 40, img.Size().Height + 40),
    MatType.CV_8UC3, Scalar.White);
OpenCvSharp.Rect roi = new OpenCvSharp.Rect(new OpenCvSharp.Point(20, 20), img.Size());
img.CopyTo(imgExtended.SubMat(roi));
img = imgExtended;

Mat coloredImage = img.Clone();
Cv2.CvtColor(img, img, ColorConversionCodes.BGR2GRAY);

OpenCvSharp.Point[][] contours;
HierarchyIndex[] hierarchy;
Mat result = new Mat();
Cv2.Canny(img, result, 80, 150);
Cv2.FindContours(result, out contours, out hierarchy,
    RetrievalModes.External, ContourApproximationModes.ApproxSimple);
You have an object on a practically white background. You can do any thresholding operation and then take the biggest blob.
Update.
In both cases the dark line at the top of your image and the dark area in the left corner can still be a problem. In that case you can choose the contour with the largest area using the function
double Cv2.ContourArea(Point[] Contour);
And then try to create bounding box, which will minimize the error.
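To show what "largest area" selection amounts to, here is a pure-Python sketch using the shoelace formula as a stand-in for Cv2.ContourArea (my own helpers; real code would pass the point arrays from FindContours):

```python
def contour_area(pts):
    """Polygon area via the shoelace formula -- a stand-in for
    Cv2.ContourArea on a closed contour of (x, y) points."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def largest_contour(contours):
    """Pick the contour enclosing the most area."""
    return max(contours, key=contour_area)
```

False contours from the added border or from dark scanner edges are typically much smaller than the photo itself, which is why the largest-area contour is a reasonable pick.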

How to invert frames in EmguCV?

Currently, I am working on a real-time IRIS detection application.
I want to perform an invert operation to the frames taken from the web camera, like this:
I managed to get this line of code, but it does not give the above results. Maybe the parameters need to be changed, but I am not sure.
CvInvoke.cvThreshold(grayframeright, grayframeright, 160, 255.0, Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY_INV);
From the images above it seems that the second image is the negative of the first image (correct me if I am wrong).
The function you are using is a threshold function, i.e. it will render a pixel white if it falls within the specified range and black otherwise.
To find the negative of an image you can use one of the following methods.
Taking the NOT of an image.
Image<Bgr, Byte> img2 = img1.Not(); // img1 can be a static image or your current captured frame
For more details you can refer to the documentation here.
If you want to invert an image you can do the following (note that the matrix of ones must be scaled to 255, otherwise you subtract from 1 instead of from full intensity):
Mat white = Mat::ones(grayframeright.rows, grayframeright.cols, grayframeright.type()) * 255;
Mat dst = white - grayframeright;
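Both approaches (Not() on an 8-bit image, and subtracting from a white matrix) compute the same per-pixel negative, 255 - v. A minimal pure-Python sketch of that operation on a row-major grayscale grid (my own helper name):

```python
def invert(img, max_val=255):
    """Negative of a grayscale image: every pixel becomes max_val - v."""
    return [[max_val - v for v in row] for row in img]
```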
Also note that the pupil can be detected with an OpenCV detector initialized with the HAAR cascade for eyes that ships with OpenCV.
