I have been trying to figure this out for hours with no answer.
I have a UIImageView with its ContentMode set to UIViewContentMode.ScaleAspectFill, which I would like to crop using an overlying resizable frame. The cropping frame no longer maintains a 1:1 relationship with the bounds that contain both the UIImageView and the cropping tool. The cropping would usually be as simple as:
// frame must be expressed in the CGImage's own (pixel) coordinate space
using (var imageRef = image.CGImage.WithImageInRect(frame)) {
    return UIImage.FromImage(imageRef);
}
But extra calculations are required in this case: how would I calculate the offset to match the cropping tool's frame to the scaled UIImageView (or rather, the UIImage)? Here's an image to help paint the picture:
This picture shows a few key things.
Upper Right Image: What will currently be cropped (the area contained in the blue section)
Blue Rectangle: Where the cropping is currently taking place; notice the position and height are skewed compared to the red rectangle.
Red Rectangle: The cropping control, essentially where the cropping should be taking place INSTEAD of where the blue is.
Upper Blue Textbox: Ignore this.
Essentially, I want the blue rect to be where the red frame is.
Any help would be greatly appreciated. I am using C# for this, but Objective-C answers are more than welcome.
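For reference, here is a rough sketch of the kind of conversion I believe is needed, assuming the crop frame is expressed in the image view's coordinate space and the UIImage has a scale factor of 1 (the method name and variables are illustrative, not tested code):

// using System; using CoreGraphics; using UIKit;
CGRect ViewRectToImageRect(CGRect cropRect, UIImage image, UIImageView imageView)
{
    // AspectFill scales by the larger of the two ratios and centers the result.
    double scale = Math.Max(
        imageView.Bounds.Width / image.Size.Width,
        imageView.Bounds.Height / image.Size.Height);

    // Offset of the scaled image relative to the view (negative when it overflows).
    double offsetX = (imageView.Bounds.Width - image.Size.Width * scale) / 2;
    double offsetY = (imageView.Bounds.Height - image.Size.Height * scale) / 2;

    // Undo the offset and the scaling to land in the image's own coordinates,
    // which is what CGImage.WithImageInRect expects.
    return new CGRect(
        (cropRect.X - offsetX) / scale,
        (cropRect.Y - offsetY) / scale,
        cropRect.Width / scale,
        cropRect.Height / scale);
}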
Related
I'm using C# WinForms.
The rotated polygon is drawn on a PictureBox. The width and height of the rotated polygon are 101 and 101. Now, I want to transfer the contents of the rotated polygon to a new bitmap of size 101 x 101.
I tried to paint the pixels of the rectangle using this code:
for (int h = 0; h < pictureBox1.Image.Height; h++)
{
    for (int w = 0; w < pictureBox1.Image.Width; w++)
    {
        if (IsPointInPolygon(rotatedpolygon, new PointF(w, h)))
        {
            g.FillRectangle(Brushes.Black, w, h, 1, 1); // paint pixel inside polygon
        }
    }
}
The pixels are painted in the following manner:
Now, how do I know which location on the rotated rectangle goes to which location in the new bitmap? That is, how do I translate pixel coordinates of the rotated rectangle to the new bitmap?
Or, simply put, is it possible to map rows and columns from the rotated rectangle to the new bitmap as shown below?
Sorry, if the question is not clear.
What you're asking to do is not literally possible. Look at your diagram:
On the left side, you've drawn pixels that are themselves oriented diagonally. But, that's not how the pixels actually are oriented in the source bitmap. The source bitmap will have square pixels oriented horizontally and vertically.
So, let's just look at a little bit of your original image:
Consider those four pixels. You can see in your drawing that, considered horizontally and vertically, the top and bottom pixels overlap the left and right pixels. More specifically, if we overlay the actual pixel orientations of the source bitmap with your proposed locations of source pixels, we get something like this:
As you can see, when you try to get the value of the pixel that will eventually become the top-right pixel of the target image, you are asking for the top pixel in that group of four. But that top pixel is actually made up of two different pixels in the original source image!
The bottom line: if the visual image that you are trying to copy will be rotated during the course of copying, there is no one-to-one correspondence between source and target pixels.
To be sure, resampling algorithms that handle this sort of geometric projection do apply concepts similar to what you're proposing. But they do so in a mathematically sound way, in which pixels are merged or interpolated as necessary to map the square, horizontally- and vertically-oriented pixels from the source to the square, horizontally- and vertically-oriented pixels in the target.
The only way you could get literally what you're asking for — to map the pixels on a one-to-one basis without any change in the actual pixel values themselves — would be to have a display that itself rotated.
Now, all that said: I claim that you're trying to solve a problem that not only is not solvable, but also is not worth solving.
Display resolution is so high on modern computing devices, even on the phone that you probably have next to you or in your pocket, that the resampling that occurs when you rotate bitmap images is of no consequence to the human perception of the bitmap.
Just do it the normal way. It will work fine.
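For example, here is a minimal sketch of the conventional approach, letting GDI+ resample during the copy; the center point, angle, and output size are placeholders for whatever actually describes your rotated rectangle:

// using System.Drawing; using System.Drawing.Drawing2D;
Bitmap ExtractRotated(Bitmap source, PointF center, float angleDegrees, Size outputSize)
{
    var target = new Bitmap(outputSize.Width, outputSize.Height);
    using (Graphics g = Graphics.FromImage(target))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        // Transforms are prepended, so they apply to source points in reverse order:
        // move the rectangle's center to the origin, undo the rotation, then move
        // the result to the center of the target bitmap.
        g.TranslateTransform(outputSize.Width / 2f, outputSize.Height / 2f);
        g.RotateTransform(-angleDegrees);
        g.TranslateTransform(-center.X, -center.Y);
        g.DrawImage(source, 0, 0, source.Width, source.Height);
    }
    return target;
}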
I'm developing a drawing tool control using InkCanvas in WPF.
When I draw a rectangle, this is the original image:
Original Image
Then I set the editing mode to erase-by-point. When I erase the rectangle, this is the image afterwards:
Erased Image
I want the erase function to work like the default drawing tools in Windows, i.e. working by pixel and erasing the shape pixel by pixel.
Part of the source code:
public class Label : Stroke
{
    protected override void DrawCore(DrawingContext drawingContext, DrawingAttributes drawingAttributes)
    {
        Rect rect = new Rect((Point)this.StylusPoints[0], (Point)this.StylusPoints[1]);
        drawingContext.DrawRectangle(...)
    }
}
This is not possible with the standard API.
Individual pixels can only be manipulated by translating the ink (strokes, which are essentially enhanced vectors) into a bitmap image (which contains pixels), but then the stroke information is lost.
With quite some effort an application could be created that keeps track of all its user's actions (draw some strokes, add a pixel, remove a pixel) and replays them all every time the UI needs a redraw, injecting the transformation from vector to bitmap where needed.
This is a nice exercise but might prove to be too slow and/or complex.
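As a rough sketch of that vector-to-bitmap translation (assuming an InkCanvas named inkCanvas; this is not the questioner's code), the canvas can be rendered into a RenderTargetBitmap, after which individual pixels can be manipulated at the cost of losing the strokes:

// using System.Windows; using System.Windows.Media; using System.Windows.Media.Imaging;
var bounds = new Rect(0, 0, inkCanvas.ActualWidth, inkCanvas.ActualHeight);
var rtb = new RenderTargetBitmap((int)bounds.Width, (int)bounds.Height, 96, 96, PixelFormats.Pbgra32);
rtb.Render(inkCanvas);                      // rasterize the strokes
var pixels = new WriteableBitmap(rtb);      // editable per-pixel copy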
I am using C# to write a program that detects the paper edges and crops the square paper region out of the images.
Below is the image I wish to crop. The paper will always appear at the bottom of the pages.
I had read through these links but I still have no idea how to do it.
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Edit: I am using EMGU for this OCR project
You can also:
Convert your image to grayscale
Apply ThresholdBinary by the pixel intensity
Find contours.
To see examples on finding contours you can look on this post.
The FindContours method doesn't care about the contour size. The only thing to be done here before finding contours is emphasizing them by binarizing the image (and we do this in step 2).
For more info also look at OpenCV docs: findContours, example.
Find the proper contour by the size and position of its bounding box.
(In this step we iterate over all found contours and try to figure out which one is the contour of the paper sheet, using the known paper dimensions, their proportions, and the relative position - the bottom-left corner of the image.)
Crop the image using bounding box of the sheet of paper.
Image<Gray, byte> grayImage = new Image<Gray, byte>(colorImage);
Image<Bgr, byte> color = new Image<Bgr, byte>(colorImage);

// binarize so the contours stand out (step 2)
grayImage = grayImage.ThresholdBinary(new Gray(thresholdValue), new Gray(255));

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours = grayImage.FindContours(
             Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE,
             Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE,
             store);
         contours != null;
         contours = contours.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours, 1);
        // filter contours by position and size of the box
    }

// crop the image using found bounding box
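For the last step, here is a hypothetical sketch of the crop itself, assuming r is the bounding box that passed the filter: set the ROI of the color image to the box and copy it.

color.ROI = r;                          // restrict the image to the bounding box
Image<Bgr, byte> sheet = color.Copy();  // Copy() only copies the ROI
color.ROI = Rectangle.Empty;            // reset the ROI on the original image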
UPD: I have added more details.
Decide on the Paper color
Decide on a delta to allow the color to vary
Decide on points along the bottom to do vertical tests
Do the vertical tests going up, collecting the minimum y where the color stops appearing
Do at least 10-20 such tests
The resulting y should be 1 more than what you want to keep. You may need to insert a limit to avoid cropping everything if the image is too bright; either refine the algorithm or mark such an image as an exception for manual treatment!
To crop you use the DrawImage overload with source and dest rectangles!
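A rough sketch of these steps in code; the paper color, delta, and sample count are placeholders, and GetPixel is used for clarity only (it could be swapped for LockBits later, see the hints below):

// using System; using System.Drawing;
static bool IsPaperColor(Color c, Color paper, int delta)
{
    return Math.Abs(c.R - paper.R) <= delta &&
           Math.Abs(c.G - paper.G) <= delta &&
           Math.Abs(c.B - paper.B) <= delta;
}

static Bitmap CropPaperFromBottom(Bitmap source, Color paperColor, int delta, int samples)
{
    var rnd = new Random();
    int minStopY = source.Height;              // topmost row where the paper color stopped

    for (int i = 0; i < samples; i++)          // 10-20 vertical tests
    {
        int x = rnd.Next(source.Width);        // random walk along the bottom
        int y = source.Height - 1;
        while (y >= 0 && IsPaperColor(source.GetPixel(x, y), paperColor, delta))
            y--;
        if (y < minStopY) minStopY = y;
    }

    // The paper region starts one row below the first non-paper row found;
    // clamp so the sketch never tries to crop the entire image away.
    int cropTop = Math.Min(Math.Max(0, minStopY + 1), source.Height - 1);

    // Crop with the DrawImage overload that takes source and destination rectangles.
    var srcRect = new Rectangle(0, cropTop, source.Width, source.Height - cropTop);
    var destRect = new Rectangle(0, 0, srcRect.Width, srcRect.Height);
    var result = new Bitmap(destRect.Width, destRect.Height);
    using (Graphics g = Graphics.FromImage(result))
        g.DrawImage(source, destRect, srcRect, GraphicsUnit.Pixel);
    return result;
}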
Here are a few more hints:
To find the paper color you can go up and to the right diagonally from the bottom-left corner until you hit a pixel with a Color.GetBrightness() of > 0.8; then go 2 pixels further to get clear of any antialiased pixels.
A reasonable delta will depend on your images; start with 10%
Use a random walk along the bottom; when you are done maybe add one extra pass in the close vicinity of the minimum found in pass one.
The vertical test can use GetPixel to get at the colors or if that is too slow you may want to look into LockBits. But get the search algorithm right first, only then think about optimizing!
If you run into trouble with your code, expand your question!
I have a UIImageView as a subview of a UIScrollView; the image view is filled using a CGBitmapContext, which is drawn to using a DrawTileAtIndex method.
In order to increase performance, I would like to draw only those tiles that are visible. Is there any way I can detect how much of my UIImageView is currently visible to the user, i.e. what area, which pixels, etc.? If so, how do I detect this in order to draw the correct tiles?
CATiledLayer is not an option.
Thanks in advance,
Jack
Assuming you have an image that is larger than can be viewed at any one time, create a UIView subclass with the image as a property. When you are sent drawRect:, you can use the passed-in rect to test your tiles against; only those tiles that overlap that rect need to be drawn.
You can use the 'CGRectIntersectsRect' function to do the test.
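A minimal C# sketch of that idea, assuming tiles are tracked as a list of CGRects and a hypothetical DrawTileAtIndex method renders a single tile (only the visibility test is shown):

// using System.Collections.Generic; using CoreGraphics; using UIKit;
public class TiledImageView : UIView
{
    public List<CGRect> TileRects = new List<CGRect>();

    public override void Draw(CGRect rect)
    {
        base.Draw(rect);
        for (int i = 0; i < TileRects.Count; i++)
        {
            // CGRect.IntersectsWith is the C# counterpart of CGRectIntersectsRect:
            // only tiles overlapping the rect being redrawn are drawn.
            if (TileRects[i].IntersectsWith(rect))
                DrawTileAtIndex(i);
        }
    }

    void DrawTileAtIndex(int index)
    {
        // placeholder for the tile-drawing code from the question
    }
}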
With a mobile device I take a picture of a flat, light object on a dark surface (for instance, a coupon clipped out of a newspaper).
The image is then run through a brightness/contrast filter. If it is too dark, vital components are left out. If it is too bright, the writing on the coupon is lost.
This image is then converted into a bitonal image. Any pixel that is 50% or more dark is converted to black, everything else is white. (done)
I am left with a skewed bitonal image (think of a white trapezoid inside a larger rectangle with a black background).
I need to figure out how to crop the image - which when it's on a black background is easier than when it's on a white background. Then, I have to de-skew the image so it is rectangular instead of trapezoidal, while attempting to preserve aspect.
The end result should be a nicely cropped, bitonal, readable image of the coupon.
To crop your image, you can use the LockBits method and scan through all your pixels to find the first pixel with content from the top, left, right and bottom, respectively. How to use LockBits is described nicely here: https://web.archive.org/web/20141229164101/http://bobpowell.net/lockingbits.aspx
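Here is a rough sketch of that scan, assuming the bitonal image is available as a 32bpp System.Drawing.Bitmap and that "content" means a light pixel on the dark background (the threshold and names are illustrative):

// using System.Drawing; using System.Drawing.Imaging; using System.Runtime.InteropServices;
static Rectangle FindContentBounds(Bitmap bmp)
{
    var full = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(full, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    var pixels = new byte[data.Stride * bmp.Height];
    Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
    bmp.UnlockBits(data);

    int left = bmp.Width, right = -1, top = bmp.Height, bottom = -1;
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            // bytes are B, G, R, A; any light pixel counts as coupon content
            if (pixels[y * data.Stride + x * 4] > 128)
            {
                if (x < left) left = x;
                if (x > right) right = x;
                if (y < top) top = y;
                if (y > bottom) bottom = y;
            }
        }
    }

    return right < left ? Rectangle.Empty
                        : Rectangle.FromLTRB(left, top, right + 1, bottom + 1);
}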
Assuming your image is not rotated, and that the skewing comes from the camera held at an angle against the table where the coupon is being photographed, you should now have a skewed image of the coupon, fitting perfectly within the bounds of the cropped bitmap. You should also know the four corners of the trapezoid.
"Undistorting" an image is not as easy as you might think though. However, good people have solved this problem and you can probably port their code to your own use. Here is a link I used to explore this problem in a similar case some time ago:
http://ryoushin.com/cmerighi/en-US/2007-10-29_61/Image_Distortion_Enhancements
I also have some code stored somewhere if you can't make any sense of what you find.