I have a set of 3D points that should fit neatly along a line segment. I need to get the center of that line (no problem, a mean of X, Y and Z will work great for that). I also need a couple of vectors that describe the orientation of the line in 3D space. In other words, I need to describe how much the sampled data's X, Y and Z axes are rotated.
Imagine an airplane (this is not an aviation application, just a handy example) with the 3D points randomly spread over the area of the wings. I need to use these points to describe the orientation of the airplane in 3D space: exactly what direction the nose is pointed and where the tips of the wings are.
I have been looking for linear-fit libraries, but they all seem to be either for 2D data sets or commercial.
I could also fit two linear equations through the x/y and x/z data and combine those, but this feels wrong, like a workaround.
Does anybody have any thoughts on how to solve this problem?
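One standard approach is principal component analysis: the direction of the best-fit line is the dominant eigenvector of the covariance matrix of the centered points (a second orientation vector, e.g. along the wings, would be the next eigenvector). Below is a minimal sketch in plain C#, with no external libraries, using simple power iteration to find that eigenvector; all names are illustrative.

```csharp
using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

static class LineFit3D
{
    // Returns the centroid and unit direction of the best-fit line.
    public static (Vec3 centroid, Vec3 direction) Fit(Vec3[] pts)
    {
        // Centroid: the mean of X, Y and Z, as noted in the question.
        double cx = 0, cy = 0, cz = 0;
        foreach (var p in pts) { cx += p.X; cy += p.Y; cz += p.Z; }
        int n = pts.Length;
        cx /= n; cy /= n; cz /= n;

        // 3x3 covariance matrix of the centered points (unnormalized;
        // scaling does not change the eigenvectors).
        double xx = 0, xy = 0, xz = 0, yy = 0, yz = 0, zz = 0;
        foreach (var p in pts)
        {
            double dx = p.X - cx, dy = p.Y - cy, dz = p.Z - cz;
            xx += dx * dx; xy += dx * dy; xz += dx * dz;
            yy += dy * dy; yz += dy * dz; zz += dz * dz;
        }

        // Power iteration: repeatedly multiplying a trial vector by the
        // covariance matrix converges to the dominant eigenvector,
        // which is the line's direction.
        double vx = 1, vy = 1, vz = 1;
        for (int i = 0; i < 100; i++)
        {
            double nx = xx * vx + xy * vy + xz * vz;
            double ny = xy * vx + yy * vy + yz * vz;
            double nz = xz * vx + yz * vy + zz * vz;
            double len = Math.Sqrt(nx * nx + ny * ny + nz * nz);
            vx = nx / len; vy = ny / len; vz = nz / len;
        }

        return (new Vec3(cx, cy, cz), new Vec3(vx, vy, vz));
    }
}
```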
I've been trying to extract an arbitrarily oriented planar cross-section from a 3-dimensional array, but have been unable to find a solution to my problem in C#.
I have a 3D array which is basically a stack of images (x and y correspond to the height and width of the images, z to the number of the image within the stack). The user can define three points (x,y,z), which indicate the position and orientation of a plane within the array. It can lie straight or angled in any direction within the 3D array.
I find it tricky to get all the values lying on this plane, both between the three points and beyond them, since the points do not necessarily lie on the edges of the 3D array. And because the points can be arranged in any way (as long as all three are not collinear), the width and height of the plane are also unknown.
Does someone have an idea how to approach this problem please?
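One possible approach, sketched below under the assumption that nearest-neighbour sampling is acceptable: build an orthonormal basis (u, v) inside the plane from the three points, then step across the plane one voxel at a time and keep any sample that lands inside the array. The `extent` parameter bounds the walk, since the plane's size within the array is unknown; all names are illustrative.

```csharp
using System;
using System.Collections.Generic;

static class PlaneSampler
{
    public static List<float> SamplePlane(
        float[,,] volume, double[] p1, double[] p2, double[] p3, int extent)
    {
        // u: first in-plane axis; v: second in-plane axis, perpendicular to u.
        double[] u = Normalize(Sub(p2, p1));
        double[] n = Normalize(Cross(u, Sub(p3, p1)));   // plane normal
        double[] v = Cross(n, u);

        var samples = new List<float>();
        // Walk outward from p1 in both directions along u and v.
        for (int s = -extent; s <= extent; s++)
        for (int t = -extent; t <= extent; t++)
        {
            int x = (int)Math.Round(p1[0] + s * u[0] + t * v[0]);
            int y = (int)Math.Round(p1[1] + s * u[1] + t * v[1]);
            int z = (int)Math.Round(p1[2] + s * u[2] + t * v[2]);
            if (x >= 0 && x < volume.GetLength(0) &&
                y >= 0 && y < volume.GetLength(1) &&
                z >= 0 && z < volume.GetLength(2))
                samples.Add(volume[x, y, z]);
        }
        return samples;
    }

    static double[] Sub(double[] a, double[] b) =>
        new[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    static double[] Cross(double[] a, double[] b) =>
        new[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
    static double[] Normalize(double[] a)
    {
        double len = Math.Sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
        return new[] { a[0]/len, a[1]/len, a[2]/len };
    }
}
```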
I have two sets of X,Y co-ordinates as separate lists. Both represent the same irregular polygonal shape, but in different orientations and sizes/scale.
I need to write a program in C# that compares both point sets and rotates one of the shapes so that it aligns with the other, i.e. so they end up in the same orientation.
I tried searching for a solution and learned that using a concave hull with angle differences can help, but I could not find a good C# implementation of it.
Can someone help me? Is there a minimal way to achieve this?
Edit: The two point sets might not be identical. One may contain more points than the other.
I have the contour co-ordinates of a shape and a PNG of the same shape, but in a different orientation. I want to read the PNG and calculate the angle to turn it so it fits the contour.
Calculate image moments for each point cloud.
Evaluate the orientation of both clouds from the theta angle (sketched below).
Rotate one cloud by the theta difference.
Use the other moments (centroid etc.) to find the translation and scale.
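A minimal sketch of the first two steps, assuming the clouds are given as 2D point lists; the orientation formula is the standard image-moment one, theta = 0.5 * atan2(2*mu11, mu20 - mu02). Names are illustrative.

```csharp
using System;

static class Moments2D
{
    // Orientation of the cloud's principal axis, in radians.
    public static double Orientation((double X, double Y)[] pts)
    {
        // Centroid (first raw moments divided by the zeroth moment).
        double mx = 0, my = 0;
        foreach (var p in pts) { mx += p.X; my += p.Y; }
        mx /= pts.Length; my /= pts.Length;

        // Second-order central moments mu20, mu02, mu11.
        double mu20 = 0, mu02 = 0, mu11 = 0;
        foreach (var p in pts)
        {
            double dx = p.X - mx, dy = p.Y - my;
            mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
        }

        // Standard image-moment orientation formula.
        return 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);
    }
}
```

Note that the principal-axis angle is only defined up to 180 degrees, so the computed rotation may need a flip check afterwards.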
I have an application I'm working on that requires a fair amount of 3D graphics programming. I have a series of lines that create both text and 3D cylindrical holes (see images).
I would like to be able to click and drag the objects in question with the mouse through the X,Y plane (Z constant). My understanding is that for the bounding boxes to be set up correctly, I would have to represent everything as 3D polygons (triangles). I would like to do the collision detection without that conversion. Is this possible? If I must convert, can anyone point me to a piece of code that does this rather painlessly?
You can treat each line segment as a cylinder and check those for collision.
Here's the math, as well as more alternatives.
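Since the drag here happens in the X,Y plane with Z constant, the cylinder test reduces to a 2D point-to-segment distance check: the click hits a segment if its distance to the segment is within the cylinder's radius. A minimal sketch (names are illustrative):

```csharp
using System;

static class PickTest
{
    public static bool HitsSegment(
        (double X, double Y) click,
        (double X, double Y) a, (double X, double Y) b,
        double radius)
    {
        double abx = b.X - a.X, aby = b.Y - a.Y;
        double apx = click.X - a.X, apy = click.Y - a.Y;

        // Project the click onto the segment, clamped to [0, 1] so the
        // closest point stays between the endpoints. The guard handles
        // the degenerate case where the segment has zero length.
        double len2 = abx * abx + aby * aby;
        double t = len2 > 0 ? (apx * abx + apy * aby) / len2 : 0;
        t = Math.Max(0, Math.Min(1, t));

        // Compare squared distances to avoid a square root.
        double cx = a.X + t * abx - click.X;
        double cy = a.Y + t * aby - click.Y;
        return cx * cx + cy * cy <= radius * radius;
    }
}
```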
Hi, I am writing a program which retrieves the z coordinate of a ball from a disparity map, using the EmguCv wrapper. At present I have most of the elements working, admittedly not perfectly just yet, but they only need some tweaking. The steps completed so far are as follows:
The two cameras operate at the same time, with each camera's view displayed in an image box.
Camera calibration is carried out with the chessboard squares identified and the intrinsic and extrinsic parameters stored.
The images are rectified and undistorted in order to remove as much noise and distortion as possible.
The ball is identified in each image, with the centre of the ball marked and its x and y coordinates retrieved.
The disparity map is created and displayed, and the reprojectImageTo3D() method is used to give the x, y and z coordinates of the pixels in the map.
The issue I am having at present is how to isolate the ball in the disparity map in order to get only the x, y and especially z coordinates. I have seen instances where a single object is extracted from a disparity map, e.g. http://disparity.wikidot.com/, under the heading "Adding Color and Motion to Disparity Maps".
Is there a method that could be used to identify and extract the ball, or is the extraction performed by things such as SURF or SIFT?
Thanks in advance
Steve
You need to provide a lot more details to get a useful answer.
For example, are you looking for a fully automated solution, or is it acceptable to have a human operator provide some input ("hints")? If the latter, the problem becomes a lot easier. A common method is to get mouse input (a click) on one or a few pixels of an input image, look up the corresponding depth(s) through the disparity map, then "grow" a fitted sphere from there by adding neighboring pixels; you'll want to start with a loose fit-error threshold and tighten it as the number of added samples increases.
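A minimal sketch of the seeded region-growing step, assuming the depth image is available as a 2D float array; the sphere fitting and the tightening threshold are left out and replaced with a fixed tolerance against the region's running mean depth. All names are illustrative.

```csharp
using System;
using System.Collections.Generic;

static class BallSegmentation
{
    public static List<(int X, int Y)> GrowRegion(
        float[,] depth, int seedX, int seedY, float tolerance)
    {
        int w = depth.GetLength(0), h = depth.GetLength(1);
        var region = new List<(int, int)>();
        var visited = new bool[w, h];
        var queue = new Queue<(int X, int Y)>();
        queue.Enqueue((seedX, seedY));
        visited[seedX, seedY] = true;

        double sum = 0;  // running sum for the region's mean depth

        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            region.Add((x, y));
            sum += depth[x, y];
            double mean = sum / region.Count;

            // Accept 4-connected neighbours whose depth stays close to
            // the current mean depth of the grown region.
            foreach (var (nx, ny) in new[] { (x+1,y), (x-1,y), (x,y+1), (x,y-1) })
            {
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                if (visited[nx, ny]) continue;
                if (Math.Abs(depth[nx, ny] - mean) > tolerance) continue;
                visited[nx, ny] = true;
                queue.Enqueue((nx, ny));
            }
        }
        return region;
    }
}
```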
I am attempting to create a function that takes a plane in 3D space and returns a plane which will fit in its entirety inside one section of a grid on the screen.
The grid on the screen is fixed and is defined by either divisions in X and Y, or by a set of lines across the screen.
The original plane can be any size or orientation on the screen, though it will never take the whole screen.
I am working in Unity3.5.2f2 with C#. I have posted this on SO as it is quite heavily math based, as opposed to general Unity knowledge. Ideally a solution will not use external libraries, though that is a possibility.
I have a few methods in mind and would appreciate any input:
1. Project the plane to screen space and get the min/max x and y values of the mesh (its bounding box). Use these to calculate a scaling transform (from the difference between the height/width of the mesh and that of a screen division). Re-project into world space after snapping two edges of the mesh to a selected division.
2. As the divisions are rectangular in nature, create several view frustums, and come up with some method of scaling/translating the plane in 3D space to fit the frustum.
The function prototype would be:
Plane adjustPlaneToFitScreens(Plane _plane)
Any thoughts?
I solved this issue using method 1 above. Unity provides several handy functions that made the math easy, and calculating the scaling and translation in pixel/screen space was far easier than working in 3D space while having to take the view angle / FOV into account.
There are issues with the re-projection into world space after the scaling, but this particular application doesn't move the camera when viewing the scaled object, so in black-box testing the issues are not actually noticeable.
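For reference, a minimal sketch of the screen-space part of method 1, using standard Unity calls (Transform.TransformPoint, Camera.WorldToScreenPoint). The division size in pixels and the snapping/translation step are assumed to be handled elsewhere, and all other names are illustrative.

```csharp
using UnityEngine;

public class FitToDivision : MonoBehaviour
{
    public Camera cam;
    public float divWidth;   // width of one grid division, in pixels
    public float divHeight;  // height of one grid division, in pixels

    public void FitPlane(MeshFilter planeMesh)
    {
        // Screen-space bounding box of the mesh vertices.
        float minX = float.MaxValue, maxX = float.MinValue;
        float minY = float.MaxValue, maxY = float.MinValue;
        foreach (Vector3 v in planeMesh.sharedMesh.vertices)
        {
            Vector3 world = planeMesh.transform.TransformPoint(v);
            Vector3 screen = cam.WorldToScreenPoint(world);
            minX = Mathf.Min(minX, screen.x); maxX = Mathf.Max(maxX, screen.x);
            minY = Mathf.Min(minY, screen.y); maxY = Mathf.Max(maxY, screen.y);
        }

        // Uniform scale so the screen-space box fits inside one division.
        float scale = Mathf.Min(divWidth / (maxX - minX),
                                divHeight / (maxY - minY));
        planeMesh.transform.localScale *= scale;
    }
}
```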