I am working on a C# program to process images (given as int[,]).
I have a 2D array of pixels, and I need to rotate them around a point, then scale the result down to fit the original array. I have already found articles about using a matrix to translate to a point, rotate, and translate back. What remains is to scale the resulting image to fit an array of the original size.
How can that be done? (Preferably with two equations, one for x and one for y.)
In the Matrix class you have both RotateAt and Scale functions. What else would you need?
Have a look here. That should give you all the math behind doing coordinate rotations.
You need to find a transform from the resultant array back to the original image. You then transform points in the destination to points in the source image and copy the pixels. Anti-aliasing via oversampling is also an option. Your rotation matrix can also apply a scaling: just multiply the matrix by the scale factor (this assumes a 2x2 matrix). If you're using a 3x3 matrix for rotation, scaling, and translation, then just multiply the upper-left 2x2 by the scale factor.
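For concreteness, here is a hedged sketch of that destination-to-source mapping, assuming a rotation of theta radians around a center (cx, cy) combined with a uniform scale factor s (s < 1 shrinks the result so the rotated content fits); all names are illustrative. The two per-axis equations are the inverse transform:

    srcX = cx + ((x - cx) * cos(theta) + (y - cy) * sin(theta)) / s
    srcY = cy + (-(x - cx) * sin(theta) + (y - cy) * cos(theta)) / s

    using System;

    static int[,] RotateScale(int[,] src, double theta, double s, double cx, double cy)
    {
        int h = src.GetLength(0), w = src.GetLength(1);
        var dst = new int[h, w];
        double cos = Math.Cos(theta), sin = Math.Sin(theta);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                double dx = x - cx, dy = y - cy;
                // Inverse mapping: one equation for x, one for y.
                int sx = (int)Math.Round(cx + (dx * cos + dy * sin) / s);
                int sy = (int)Math.Round(cy + (-dx * sin + dy * cos) / s);
                // Nearest-neighbor sample, skipping points outside the source.
                if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                    dst[y, x] = src[sy, sx];
            }
        return dst;
    }

Oversampling (averaging several sub-pixel samples per destination pixel) would slot into the inner loop if anti-aliasing is needed.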
Lastly, at the risk of some self-promotion, here is a link to some old TP6/asm DOS code I wrote for doing full-screen roto-zooming. Strange the stuff that sticks around on the net:
http://www.hornet.org/cgi-bin/scene-search.cgi?search=Paul%20H.%20Kahler
Everything you need to do can be done with Bitmap images in GDI+ (using the System.Drawing... namespaces). These classes are designed and optimized for doing exactly this sort of thing (image manipulation). Is there any particular reason you can't work with an actual Bitmap instead of an int[,]? You could even write a very simple routine to create a Bitmap from an int[,], do whatever you need to do on the Bitmap, and then convert it back to an int[,] at the end.
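If that route appeals, a minimal round-trip sketch might look like the following, assuming each int holds a pixel in the ARGB layout that Color.ToArgb() produces:

    using System.Drawing;

    static Bitmap ToBitmap(int[,] pixels)
    {
        int h = pixels.GetLength(0), w = pixels.GetLength(1);
        var bmp = new Bitmap(w, h);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                bmp.SetPixel(x, y, Color.FromArgb(pixels[y, x]));
        return bmp;
    }

    static int[,] ToArray(Bitmap bmp)
    {
        var pixels = new int[bmp.Height, bmp.Width];
        for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
                pixels[y, x] = bmp.GetPixel(x, y).ToArgb();
        return pixels;
    }

SetPixel/GetPixel are slow on large images; Bitmap.LockBits is the usual faster alternative once the simple version works.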
I need to combine two images in C# (.NET Framework 4.7.2), and have the top image transformed so that each of its four corners ends up at specific coordinates in the result.
Is that possible? Preferably with a solution that doesn't require spending a ton of money. As far as I can tell, I can't do it with the Bitmap/Graphics classes.
Image of what I'm trying to do
Shear (or skew), which is what an affine transform such as those in GDI+ or WPF provides, is unlikely to do what you want, if I understand the question correctly. With shear/skew the transformed coordinate space is still a parallelogram, whereas in your image the original rectangle is squeezed or stretched arbitrarily.
Assuming that's correct, I would recommend using the features in the WPF Media3D namespace (WPF simply because it's the most accessible 3D API in the .NET context). In particular, you will want to define a texture that is your original bitmap. Then define a quadrilateral 2D surface in 3D coordinate space with sufficient resolution (i.e. triangles) for your purposes (see below), where the triangles in that surface are constructed by tessellating the shape that you want as your final image, and where you've interpolated the texture (UV) coordinates for that shape across the vertices that result from the tessellation.
How many triangles you actually want depends on the desired quality. In theory, you could use just two. This is the simplest approach, and determining the UV coordinates is trivial, because you only have your original four corners. But there will be a visual discontinuity along the diagonal where the two triangles meet, where the interpolation of the texture pixels changes direction due to the triangles not being square to each other.
For better results, you'll need to use more triangles. But then this complicates the assignment of the UV coordinates. For each inner vertex of this surface, you'll need to interpolate across the surface. This is probably easier to do if you generate the tessellation in the first place by subdividing the quadrilateral with lines connecting opposite sides (which will form smaller interior quadrilaterals bounded by intersecting lines) and then just divide each of those quadrilaterals into pairs of triangles. If you do it this way, then you can use the distance along each line to determine the appropriate U or V coordinate at each vertex that line goes through.
Having created the appropriate texture and geometry, it's a simple matter to render the result into a RenderTargetBitmap via the Viewport3DVisual class, and then do whatever you want with that bitmap.
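As a hedged illustration of that pipeline, here is a minimal sketch using just the two-triangle case described above (so it will show the diagonal artifact); the corner ordering (top-left, top-right, bottom-right, bottom-left in [-1, 1] camera space) and a square output are assumptions of the sketch:

    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Media.Media3D;

    static BitmapSource WarpToQuad(BitmapSource texture, Point3D[] corners, int px, int py)
    {
        var mesh = new MeshGeometry3D();
        foreach (var c in corners) mesh.Positions.Add(c);
        mesh.TextureCoordinates.Add(new Point(0, 0));   // UV of top-left corner
        mesh.TextureCoordinates.Add(new Point(1, 0));   // top-right
        mesh.TextureCoordinates.Add(new Point(1, 1));   // bottom-right
        mesh.TextureCoordinates.Add(new Point(0, 1));   // bottom-left
        // Two counter-clockwise triangles sharing the TL-BR diagonal.
        foreach (int i in new[] { 0, 3, 2, 0, 2, 1 }) mesh.TriangleIndices.Add(i);

        var model = new GeometryModel3D(mesh, new DiffuseMaterial(new ImageBrush(texture)));
        var visual = new Viewport3DVisual
        {
            Camera = new OrthographicCamera(new Point3D(0, 0, 1),
                                            new Vector3D(0, 0, -1),
                                            new Vector3D(0, 1, 0), 2.0),
            Viewport = new Rect(0, 0, px, py)
        };
        // Ambient light so the DiffuseMaterial shows the texture at full brightness.
        visual.Children.Add(new ModelVisual3D { Content = new AmbientLight(Colors.White) });
        visual.Children.Add(new ModelVisual3D { Content = model });

        var target = new RenderTargetBitmap(px, py, 96, 96, PixelFormats.Pbgra32);
        target.Render(visual);
        return target;
    }

For the higher-quality version, replace the two triangles with the subdivided tessellation and interpolated UVs described above.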
Now, all that said…
If it turns out that your problem can be simplified such that shear/skew is sufficient for your needs, you can look at De-skew characters in binary image for help with that. In that particular example, they are trying to undo skew caused by optical effects, but skewing is skewing; the same exact principle works in either direction.
Even if your problem is not amenable to shear/skew approaches, before you implement your own solution (e.g. based on my outline above), you may want to look at other available tools. Information about some options can be found in, for example, Image Modification (cropping and de-skewing) in C# and Image comparison - rotation, alignment and scaling.
I have two sets of X,Y coordinates as separate lists. Both represent the same irregular polygonal shape, but in different orientations and sizes/scales.
I need to write a program in C# that compares both point sets and rotates one of the shapes so that it aligns with the other, i.e. so they end up in the same orientation.
I tried searching for a solution, and learned that a concave hull together with angle differences can help, but I could not find a good C# implementation of this.
Can someone suggest a minimal way to achieve this?
Edit: The two point sets might not be the same. One may contain more points than the other.
I have the contour coordinates of a shape, and a PNG of the same shape in a different orientation. I want to read the PNG and calculate the angle to turn it by so that it fits the contour.
Calculate image moments for each point cloud.
Evaluate the orientation of each cloud via the theta angle.
Rotate one cloud by the theta difference.
Use the other moments (centroid etc.) to find the translation and scale.
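A hedged sketch of the first two steps, using the standard second-order central moments (theta is the usual 0.5 * atan2(2 * mu11, mu20 - mu02)); run it on both clouds and rotate one by the difference:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static double Orientation(IReadOnlyList<(double X, double Y)> cloud)
    {
        // Centroid (the first-order moments).
        double cx = cloud.Average(p => p.X);
        double cy = cloud.Average(p => p.Y);
        // Second-order central moments.
        double mu20 = 0, mu02 = 0, mu11 = 0;
        foreach (var (x, y) in cloud)
        {
            double dx = x - cx, dy = y - cy;
            mu20 += dx * dx;
            mu02 += dy * dy;
            mu11 += dx * dy;
        }
        // Angle of the major axis, in radians.
        return 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);
    }

Note the 180° ambiguity of this angle (a cloud and its flipped version share a theta); the centroid gives the translation, and comparing the clouds' spread (e.g. mu20 + mu02 per point) gives a rough scale estimate.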
I have to detect all the points of a white polygon on a black background in C#. Here is an image of several examples. I wouldn't have thought it too difficult, but I am unable to detect them properly across all the variations. My code is too long to post here, but basically I go along each side and look for where it changes from black to white. Should I use OpenCV? I was hoping for a simple algorithm I could implement in C#. Any suggestions? Thank you.
In your case I would do this:
Pre-process the image
So remove color noise if present (like JPEG distortion etc.) and binarize the image.
Select circumference pixels
Simply loop through all pixels and set each white pixel that has at least one black neighbor to a distinct color that will represent your circumference ROI mask, or add the pixel position to some list of points instead.
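A minimal sketch of this step, assuming a binarized image where 1 is white, collecting positions into a list (the mask variant is analogous):

    using System.Collections.Generic;

    static bool HasBlackNeighbor(int[,] img, int x, int y)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
            {
                if (dx == 0 && dy == 0) continue;
                int ny = y + dy, nx = x + dx;
                // Treat pixels beyond the border as black.
                if (ny < 0 || nx < 0 || ny >= h || nx >= w || img[ny, nx] == 0)
                    return true;
            }
        return false;
    }

    static List<(int X, int Y)> CircumferencePixels(int[,] img)
    {
        var roi = new List<(int X, int Y)>();
        for (int y = 0; y < img.GetLength(0); y++)
            for (int x = 0; x < img.GetLength(1); x++)
                if (img[y, x] != 0 && HasBlackNeighbor(img, x, y))
                    roi.Add((x, y));   // white pixel touching black = circumference
        return roi;
    }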
Apply connected component analysis
You need to find out the order of the points (how they are connected together). The easiest way to do this is to flood fill the ROI from the first found pixel until the whole ROI is filled, remembering the order of the filled points (similar to A*). At some point there should be 2 distinct paths, and both should join at the end. So identify these 2 points and construct the circumference point order (by reversing one half and handling the shared part if present).
Find vertices
If you compute the angle change between consecutive pixels, then on straight lines the angle change should be near zero and near vertices much bigger. So threshold that and you have your vertices. To make this robust you need to compute the slope angle from slightly more distant pixels rather than the closest ones. Thresholding this angle change against a sliding average also often provides more stable results.
So find out how far apart the pixels should be for the angle computation so that the noise stays small while vertices still produce big peaks, and find a threshold value that is safely above any noise.
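A hedged sketch of this detection, assuming the circumference points are already in connected order (step 3) and wrap around; k is the sampling distance and threshold is in radians, both to be tuned as described above:

    using System;
    using System.Collections.Generic;

    static List<int> FindVertexCandidates(IReadOnlyList<(int X, int Y)> contour, int k, double threshold)
    {
        var candidates = new List<int>();
        int n = contour.Count;
        for (int i = 0; i < n; i++)
        {
            var prev = contour[(i - k + n) % n];   // k pixels behind (wraps)
            var cur  = contour[i];
            var next = contour[(i + k) % n];       // k pixels ahead
            double a1 = Math.Atan2(cur.Y - prev.Y, cur.X - prev.X);
            double a2 = Math.Atan2(next.Y - cur.Y, next.X - cur.X);
            // Wrap the angle change into [-pi, pi] before thresholding.
            double change = Math.Abs(Math.Atan2(Math.Sin(a2 - a1), Math.Cos(a2 - a1)));
            if (change > threshold)                // near zero on straight edges
                candidates.Add(i);
        }
        return candidates;
    }

Each true vertex typically produces a small cluster of candidate indices, so keep the local maximum of each cluster as the vertex.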
This can also be done with a Hough transform and/or the find-contours functions present in many CV libs. Another option is to regress/fit lines to the point list directly and compute their intersections, which can provide sub-pixel precision.
For more info see related QAs:
Backtracking in A star
Finding holes in 2d point sets
growth fill
I was thinking about writing a very simple program to compare 2 ARGB arrays pixel by pixel. Both images are the same resolution, taken with the same camera source.
Since the camera is being held still, I was expecting it to be a fairly simple program to compare the bitmap sources by:
Converting every pixel into a grayscale pixel
Literally comparing each pixel from position 0 to N.
Having an isClose method to do an approximate +/- 3 comparison.
The result is that I get far too many error bits. But when I take JPEGs of the frames and view them with the naked eye, they seem to be identical (which is the case).
Why do you think I am seeing so many errors when comparing them?
BTW, I am trying to write a very basic version of motion detection.
If you are tracking a known object you can pre-process your images before you compare them. For example, if it's a ball you're tracking and it appears brighter than its surroundings, you can threshold your greyscale image, which will produce an image with only black or white. You then detect what are known as "contours" (see the OpenCV documentation). Once you get the contour you are after (the ball) in any image, you can compare its location in each successive image. There are algorithms that help figure out where a moving object will be next, which helps with finding it in the next frame.
Without knowing exactly what you are doing, it's hard to give anything concrete.
And I see you're using C#... maybe this will help: .Net (dotNet) wrappers for OpenCV?
Because the pictures are not the same.
Each time, you pressed the camera button a little differently.
The change is "huge" if you compare pixel by pixel.
I'm not an expert on motion detection, but try comparing averages around each pixel; I think it will give you better results.
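A hedged sketch of that idea, comparing neighborhood averages of two grayscale frames with a tolerance (all names and the +/- 3 tolerance are illustrative):

    using System;

    static bool IsClose(byte[,] a, byte[,] b, int x, int y, int radius = 1, int tolerance = 3)
    {
        int h = a.GetLength(0), w = a.GetLength(1);
        int sumA = 0, sumB = 0, count = 0;
        for (int dy = -radius; dy <= radius; dy++)
            for (int dx = -radius; dx <= radius; dx++)
            {
                int ny = y + dy, nx = x + dx;
                if (ny < 0 || nx < 0 || ny >= h || nx >= w)
                    continue;                    // skip samples outside the frame
                sumA += a[ny, nx];
                sumB += b[ny, nx];
                count++;
            }
        // Averaging suppresses per-pixel sensor noise before comparing.
        return Math.Abs(sumA - sumB) <= tolerance * count;
    }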
I'm new to image processing, and now I have a problem.
I'm writing a simple program in C# that has to identify certain objects in images by comparing them against samples.
For example here's the sample:
Later I have to compare objects that I find in a loaded image against it.
The sizes of objects and samples are always equal. The images are binarized. We always know the rotation point (it's the image's center). The samples are always normalized, but we never know an object's rotation angle relative to the normal orientation.
Here are some objects that I find in the loadable image:
The question is how to find angle #1.
Sorry for my English and thanks.
If you are using the AForge libraries, you can also use their extension, Accord.NET.
Accord.NET is similar to AForge: you install it, add the references to your project, and you are done.
After that you can simply use RawMoments, passing in the target image, and then use them to compute CentralMoments.
At this point you can get the angle of your image with the CentralMoments method GetOrientation().
I used it in a hand-gesture recognition project and it worked like a charm.
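Based on the description above, usage might look like the following minimal sketch (hedged: check the constructor signatures against the Accord.NET version you install):

    using System.Drawing;
    using Accord.Imaging.Moments;

    static double OrientationOf(Bitmap image)
    {
        var raw = new RawMoments(image);          // raw moments of the target image
        var central = new CentralMoments(raw);    // derived central moments
        return central.GetOrientation();          // angle of the major axis, radians
    }

    // Angle to rotate the found object so it matches the sample:
    // double delta = OrientationOf(objectBitmap) - OrientationOf(sampleBitmap);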
UPDATE:
I have just checked: GetOrientation gets only the angle, not the direction.
So an upside-down image has the same angle as the original.
A fix can be pixel counting, but this time you will have only 2 samples to check (worst case) instead of 360 (worst case).
UPDATE 2:
If you have a lot of samples, I suggest filtering them by the size of the rotated image.
Example:
I get the image, I see that it is in a horizontal position (90°), I rotate it by 90°, and now I have the original width and height that I can use to skip the samples that are not similar, like:
if (found.Width != sample.Width)   // you can also allow a small range here, since
    continue;                      // the rotation can add some pixels
To recap: you have a sample image and a rotated image of the same source image, and the pixels take only the two values 0 and 1.
Simple pseudo-code that can yield moderate success can be implemented using a binary search:
Start with a rotation value of 180 degrees, both clockwise and counter-clockwise.
Rotate the image to both values.
XOR the original image with the rotated one.
Count the number of 1 (mismatch) pixels and check whether it's below the margin of error you define.
Continue the search with half of the rotation angle (see the sketch after this list).
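A hedged sketch of that search; Rotate(int[,], double) is an assumed helper that rotates a binary image about its center, and only the XOR-and-count step is spelled out:

    using System;

    static long MismatchCount(int[,] a, int[,] b)
    {
        long errors = 0;
        for (int y = 0; y < a.GetLength(0); y++)
            for (int x = 0; x < a.GetLength(1); x++)
                errors += a[y, x] ^ b[y, x];      // 1 wherever the binary pixels differ
        return errors;
    }

    static double FindRotation(int[,] original, int[,] rotated, long maxError)
    {
        double angle = 0, step = 180;
        while (step >= 0.5)
        {
            long stay = MismatchCount(original, Rotate(rotated, angle));
            long cw   = MismatchCount(original, Rotate(rotated, angle + step));
            long ccw  = MismatchCount(original, Rotate(rotated, angle - step));
            if (cw < stay && cw <= ccw) angle += step;        // clockwise probe wins
            else if (ccw < stay)        angle -= step;        // counter-clockwise wins
            if (Math.Min(stay, Math.Min(cw, ccw)) <= maxError)
                break;                                        // within the error margin
            step /= 2;                                        // halve the rotation angle
        }
        return angle;
    }

Note that the mismatch count is not strictly unimodal in the angle, so this converges to a local minimum; starting from a moments-based estimate (as in the other answers) makes it much more reliable.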
Look at this:
Rotation angle of scanned document