Transfer Contents inside a Rotated Polygon to Bitmap - C#

I'm using C# WinForms.
The rotated polygon is drawn on a PictureBox. The width and height of the rotated polygon are 101 × 101. Now, I want to transfer the contents of the rotated polygon to a new bitmap of size 101 × 101.
I tried to paint the pixels of the rectangle using this code:
for (int h = 0; h < pictureBox1.Image.Height; h++)
{
    for (int w = 0; w < pictureBox1.Image.Width; w++)
    {
        if (IsPointInPolygon(rotatedpolygon, new PointF(w, h)))
        {
            g.FillRectangle(Brushes.Black, w, h, 1, 1); // paint pixel inside polygon
        }
    }
}
The pixels are painted in the following manner:
Now, how do I know which location on the rotated rectangle goes to which location in the new bitmap? That is, how do I translate pixel coordinates of the rotated rectangle to the new bitmap?
Or, simply put, is it possible to map rows and columns from the rotated rectangle to the new bitmap as shown below?
Sorry, if the question is not clear.

What you're asking to do is not literally possible. Look at your diagram:
On the left side, you've drawn pixels that are themselves oriented diagonally. But that's not how the pixels are actually oriented in the source bitmap. The source bitmap will have square pixels oriented horizontally and vertically.
So, let's just look at a little bit of your original image:
Consider those four pixels. You can see in your drawing that, considered horizontally and vertically, the top and bottom pixels overlap the left and right pixels. More specifically, if we overlay the actual pixel orientations of the source bitmap with your proposed locations of source pixels, we get something like this:
As you can see, when you try to get the value of the pixel that will eventually become the top-right pixel of the target image, you are asking for the top pixel in that group of four. But that top pixel is actually made up of two different pixels in the original source image!
The bottom line: if the visual image that you are trying to copy will be rotated during the course of copying, there is no one-to-one correspondence between source and target pixels.
To be sure, resampling algorithms that handle this sort of geometric projection do apply concepts similar to what you're proposing. But they do so in a mathematically sound way, in which pixels are merged or interpolated as necessary to map the square, horizontally and vertically oriented pixels of the source to the square, horizontally and vertically oriented pixels of the target.
The only way you could get literally what you're asking for — to map the pixels on a one-to-one basis without any change in the actual pixel values themselves — would be to have a display that itself rotated.
Now, all that said: I claim that you're trying to solve a problem that not only is not solvable, but also is not worth solving.
Display resolution is so high on modern computing devices, even on the phone that you probably have next to you or in your pocket, that the resampling that occurs when you rotate bitmap images is of no consequence to the human perception of the bitmap.
Just do it the normal way. It will work fine.
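For illustration, a minimal sketch of that "normal way", assuming you know the rotated rectangle's centre (cx, cy) and its rotation angle in degrees (names are illustrative); GDI+ does the resampling for you:

using System.Drawing;
using System.Drawing.Drawing2D;

Bitmap CopyRotatedRect(Bitmap source, float cx, float cy, float angleDeg, int size = 101)
{
    var target = new Bitmap(size, size);
    using (var g = Graphics.FromImage(target))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;

        // Move the origin to the centre of the target bitmap, undo the rotation,
        // then bring the rotated rectangle's centre to the origin.
        g.TranslateTransform(size / 2f, size / 2f);
        g.RotateTransform(-angleDeg);
        g.TranslateTransform(-cx, -cy);

        g.DrawImage(source, 0, 0);
    }
    return target;
}

Every target pixel is interpolated from the source pixels it overlaps, which is exactly the merging described above.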

Related

How can I measure the width (diameter) of a hair using OpenCV?

I am trying to quantify the width (in pixels) of hair using OpenCV.
Right now, I segment the image to binarize it. My idea is then to generate a line mask over the image, AND it with the binarized hair, use FindContours to get the contours of the resulting segments, use ContourArea to compute the area of each contour, sum them, and finally estimate pixelWidth as the square root of the total area divided by the number of contours:
This is the segmented and binarized crop of hair:
Then, this is the line mask I will apply to the previous image:
And finally, this is the result of the AND gate between both images:
Then, the code I am using to calculate the pixel width, given the contours of the previous image:
double area = 0, pixelWidth = 0;
for (int i = 0; i < blobs.Size; i++)        // blobs is the result of FindContours
{
    area += CvInvoke.ContourArea(blobs[i]);
}
pixelWidth += Math.Sqrt(area / blobs.Size); // sqrt(total area / number of contours)
return (int)Math.Ceiling(pixelWidth);
The result I am obtaining here is a width of 5 pixels, whereas the real width, which I can check with GIMP, is about 6-8 pixels (depending on the section).
I tested this method with several hairs; on most occasions the measure is off by about 1 pixel, on others it is correct, and on others, like the one shown, it is off by several pixels.
Do you know a better way to approach this problem?
Algorithm
Step 1: Detect contour points using FindContours().
Step 2: Find the bounding rectangle using BoundingRect() and the detected contour points.
Step 3: The width of the rectangle is the desired output.
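A minimal sketch of those three steps with Emgu CV (the library the question already uses); it assumes binary holds the binarized hair mask and that the hair crosses the crop roughly vertically:

using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static int MeasureHairWidth(Image<Gray, byte> binary)
{
    using (var contours = new VectorOfVectorOfPoint())
    {
        // Step 1: contour points of the binarized hair segments.
        CvInvoke.FindContours(binary, contours, null,
                              RetrType.External, ChainApproxMethod.ChainApproxSimple);

        int width = 0;
        for (int i = 0; i < contours.Size; i++)
        {
            // Step 2: axis-aligned bounding rectangle of the contour.
            Rectangle box = CvInvoke.BoundingRectangle(contours[i]);

            // Step 3: the box width is the measured hair width.
            width = Math.Max(width, box.Width);
        }
        return width;
    }
}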

C# : Issue with scaling graphics when image is resized

I have a program that allows you to draw lines over an image, which will eventually be used for calculating distance.
To make things simple, my current image (which is in a PictureBox) is an image of a ruler. When you left click, a path is created and drawn.
Originally, to zoom in, I had it so that a new bitmap would be created with the image's new size; I was able to use Graphics.ScaleTransform and it worked fine, but it would just crop the image.
I needed the image to actually change width and height so now what I'm doing is just adding/subtracting a constant zoom amount to the width & height when zooming in/out.
With this approach, I can't seem to scale the graphics: the paths are skewed in different directions and are not the right size when the image is zoomed in.
I completely understand why this is happening, because the image is getting larger and the graphics are staying the same, I just need whatever math is required to scale the graphics.
I've tried using Graphics.ScaleTransform as well as moving the graphics x & y to their current position plus the current zoom amount (offset).
As directed by @TaW, I changed the zooming functionality to calculate a new width and height based on whatever zoom was applied, then create a new Bitmap containing the original image at the new width and height.
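In case it helps anyone else, here is roughly what that looks like (a sketch; originalImage, zoom and linePath are illustrative names, and the path points are assumed to be stored in unzoomed image coordinates):

using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

float zoom = 1.0f;   // 1.0 = 100 %

void ApplyZoom(PictureBox box, Image originalImage)
{
    int w = (int)(originalImage.Width * zoom);
    int h = (int)(originalImage.Height * zoom);

    // Build a bitmap at the new size and stretch the original into it,
    // instead of cropping it to the old bounds.
    var scaled = new Bitmap(w, h);
    using (var g = Graphics.FromImage(scaled))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(originalImage, new Rectangle(0, 0, w, h));
    }
    box.Image = scaled;
}

// In the Paint handler, scale the stored path by the same factor so the
// drawn lines keep lining up with the image features:
//     e.Graphics.ScaleTransform(zoom, zoom);
//     e.Graphics.DrawPath(Pens.Red, linePath);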

Detecting Paper Edge and Crop it

I am using C# to write a program to detect the paper edges and crop out the square edges of the paper from the images.
Below is the image I wish to crop. The paper will always appear at the bottom of the pages.
I had read through these links but I still have no idea how to do it.
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Edit: I am using EMGU for this OCR project
You can also:
Convert your image to grayscale
Apply ThresholdBinary by the pixel intensity
Find contours.
To see examples of finding contours you can look at this post.
The FindContours method doesn't care about the contour size. The only thing to be done before finding contours is emphasizing them by binarizing the image (which we do in step 2).
For more info also look at OpenCV docs: findContours, example.
Find proper contour by the size and position of its bounding box.
(In this step we iterate over all the found contours and try to figure out which one is the contour of the paper sheet, using the known paper dimensions, their proportions, and the relative position - the bottom-left corner of the image.)
Crop the image using bounding box of the sheet of paper.
// step 1: load the image as grayscale (and keep a color copy for cropping)
Image<Gray, byte> grayImage = new Image<Gray, byte>(colorImage);
Image<Bgr, byte> color = new Image<Bgr, byte>(colorImage);

// step 2: binarize by pixel intensity to emphasize the contours
grayImage = grayImage.ThresholdBinary(new Gray(thresholdValue), new Gray(255));

// step 3: find the contours
using (MemStorage store = new MemStorage())
{
    for (Contour<Point> contours = grayImage.FindContours(
             Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE,
             Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store);
         contours != null; contours = contours.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours, 1);
        // step 4: filter contours by position and size of the box
    }
}
// step 5: crop the image using the found bounding box
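One possible way to do the final crop with the same (Emgu CV 2.x) image API, assuming paperBox is the rectangle you kept while filtering the contours:

color.ROI = paperBox;                     // restrict the image to the paper's bounding box
Image<Bgr, byte> cropped = color.Copy();  // Copy() materializes only the ROI
color.ROI = Rectangle.Empty;              // reset the ROI on the source image
cropped.Save("paper.jpg");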
UPD: I have added more details.
Decide on the Paper color
Decide on a delta to allow the color to vary
Decide on points along the bottom to do vertical tests
Do the vertical tests going up, collecting the minimum y where the color stops appearing
Do at least 10-20 such tests
The resulting y should be 1 more than what you want to keep. You may need to insert a limit to avoid cropping everything if the image is too bright. Either refine the algorithm or mark such an image as an exception for manual treatment!
To crop you use the DrawImage overload with source and dest rectangles!
Here are a few more hints:
To find the paper color you can go diagonally up and to the right from the bottom-left corner until you hit a pixel with a Color.GetBrightness of > 0.8; then go 2 pixels further to get clear of any antialiased pixels.
A reasonable delta will depend on your images; start with 10%
Use a random walk along the bottom; when you are done maybe add one extra pass in the close vicinity of the minimum found in pass one.
The vertical test can use GetPixel to get at the colors or if that is too slow you may want to look into LockBits. But get the search algorithm right first, only then think about optimizing!
If you run into trouble with your code, expand your question!
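Putting those steps together, a rough sketch of the scan-up crop could look like this (bmp is the scanned page, paperColor is the colour sampled near the bottom-left corner as described above, and the delta and probe count are placeholder values):

using System;
using System.Drawing;

bool IsPaper(Color c, Color paper, int delta) =>
    Math.Abs(c.R - paper.R) <= delta &&
    Math.Abs(c.G - paper.G) <= delta &&
    Math.Abs(c.B - paper.B) <= delta;

Bitmap CropPaper(Bitmap bmp, Color paperColor, int delta = 25, int samples = 20)
{
    int cropY = bmp.Height - 1;
    var rnd = new Random();

    // Several vertical probes along the bottom edge; remember the smallest y
    // at which the paper colour is still present.
    for (int s = 0; s < samples; s++)
    {
        int x = rnd.Next(bmp.Width);
        int y = bmp.Height - 1;
        while (y > 0 && IsPaper(bmp.GetPixel(x, y), paperColor, delta)) y--;
        cropY = Math.Min(cropY, y + 1);
    }

    // Keep everything from cropY down, using the DrawImage overload
    // with source and destination rectangles.
    var src = new Rectangle(0, cropY, bmp.Width, bmp.Height - cropY);
    var result = new Bitmap(src.Width, src.Height);
    using (var g = Graphics.FromImage(result))
        g.DrawImage(bmp, new Rectangle(Point.Empty, src.Size), src, GraphicsUnit.Pixel);
    return result;
}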

Bitmap - wrong coordinates

When I take the mouse coordinates relative to the top-left corner and set that pixel to a colour, the pixel is not at the mouse's position, and the offset even differs from bitmap to bitmap. For one bitmap the coordinates seemed to be multiplied by about 0.8, but for the second one I tried it was more like 0.2. I tried using PageUnit = GraphicsUnit.Pixel; that also didn't work. I think the bitmaps might be set to use different pixel sizes, but even if that's the case, I don't know how to handle it.
Looks like your bitmaps have varying dpi settings.
You may need to correct them to be the same as the Graphics object has:
Bmp.SetResolution(g.DpiX, g.DpiY);
g.DrawImage(Bmp, 0, 0);
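For example, if the bitmap is painted in the control's Paint event (a sketch; the handler name is illustrative):

private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    // Give the bitmap the same dpi as the Graphics it is drawn on, so one
    // bitmap pixel lands on exactly one screen pixel and the mouse
    // coordinates match the pixels you set.
    Bmp.SetResolution(e.Graphics.DpiX, e.Graphics.DpiY);
    e.Graphics.DrawImage(Bmp, 0, 0);
}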

Document detection on scanned image OpenCV

I use OpenCV for image pre-processing. I need to cut out only the scanned photo, without the white area.
I use this algorithm to find the rectangular object on the scanned image:
image_canny <- apply Canny edge detector to this channel
for threshold in bunch_of_increasing_thresholds:
    image_thresholds[threshold] <- apply threshold to this channel
for each contour found in {image_canny} U image_thresholds:
    approximate the contour with polygons
    keep it if the approximation has four corners and the angles are close to 90 degrees
But this approach does not work if I put my picture in the corner of the scanner!
Can anybody advise how I can find the photo on the scanned image? Any examples or methods?
There are several ways to achieve your goal. I will give code for OpenCvSharp; it will be similar for plain C++.
Try to add some neutral border around your image. For example, you can just add 10-20 pixels of white around your source image. It may create false contours, but the target part of the image will no longer be in the corner.
Mat img = Cv2.ImRead("test.jpg");

// add a 20-pixel white border on every side so the photo is never in a corner
Mat imgExtended = new Mat(new OpenCvSharp.Size(img.Size().Width + 40, img.Size().Height + 40),
                          MatType.CV_8UC3, Scalar.White);
OpenCvSharp.Rect roi = new OpenCvSharp.Rect(new OpenCvSharp.Point(20, 20), img.Size());
img.CopyTo(imgExtended.SubMat(roi));
img = imgExtended;

Mat coloredImage = img.Clone();
Cv2.CvtColor(img, img, ColorConversionCodes.BGR2GRAY);

OpenCvSharp.Point[][] contours;
HierarchyIndex[] hierarchy;
Mat result = new Mat();   // edge map produced by Canny
Cv2.Canny(img, result, 80, 150);
Cv2.FindContours(result, out contours, out hierarchy,
                 RetrievalModes.External, ContourApproximationModes.ApproxSimple);
You have an object on a practically white background. You can do any thresholding operation and then take the biggest blob.
Update.
In both cases the dark line at the top of your image and the dark area in the left corner can still be a problem. In that case you can choose the contour with the largest area using the function
double Cv2.ContourArea(Point[] Contour);
Then try to create a bounding box, which will minimize the error.
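A small sketch of that selection, reusing the contours and coloredImage variables from the snippet above:

OpenCvSharp.Point[] best = null;
double bestArea = 0;
foreach (var contour in contours)
{
    double area = Cv2.ContourArea(contour);
    if (area > bestArea)
    {
        bestArea = area;
        best = contour;
    }
}

if (best != null)
{
    // Axis-aligned bounding box of the largest contour, then crop the colour image.
    OpenCvSharp.Rect box = Cv2.BoundingRect(best);
    Mat photo = new Mat(coloredImage, box);
    photo.SaveImage("photo.jpg");
}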
