In C# WinForms, I am drawing a figure onto the form, and you can move the figure around in a 2D game-like fashion. Left and right turn the figure (change its heading), and the up and down keys move the figure forwards or backwards (change its velocity). However, let's say the figure is pointed at 135 degrees. How would I know how to move the x, y coordinates accordingly?
In the image below the figure is at coordinates (140, 140) with a heading of 135 degrees. To move forward, how would I calculate the new position?
Here is the big picture of what I am trying to create
y = mx + c will help you decide the y position from the x coordinate.
Take a look at this image:
(x1, y1) is (140, 140) in your case.
y = mx + c
With your figure's heading the line has a slope of 1, i.e. m = tan(45°), and c = 0 because the line passes through (140, 140) with that slope.
y = x·tan(45°)
tan(45°) = 1, hence
y = x
So in C#:
void MoveObject1()
{
    int speed = 2;
    x += speed;
    y += speed;
}
// This case is easy because the heading is 135° and the start point is (140, 140);
// when they don't line up like this, the calculation will be a bit different.
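For an arbitrary heading, the usual approach is to step along the heading with cosine and sine rather than relying on y = mx + c. A minimal sketch, assuming the heading is stored in degrees with 0° pointing along +x and angles increasing counter-clockwise (in WinForms screen coordinates y grows downward, so you may need to negate dy for your convention):

using System;

// Sketch only: advance (x, y) by 'speed' units along 'headingDegrees'.
static (float x, float y) MoveForward(float x, float y, float headingDegrees, float speed)
{
    double radians = headingDegrees * Math.PI / 180.0;
    float dx = (float)(speed * Math.Cos(radians));
    float dy = (float)(speed * Math.Sin(radians));
    return (x + dx, y + dy);
}

Moving backwards is then just a negative speed, and the special case above falls out when cos and sin are equal.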
I'm trying to rotate a beam/cuboid around a pivot using MRTK, Unity, and the HoloLens 1 when you're doing the pinch-and-hold gesture. The beam should remain in place once you've let go of the pinch.
My initial thoughts were to get the cartesian coordinates of the pinch and, based on their position relative to the pivot, have the beam rotate by however many degrees needed. E.g. the hand position while pinching is (1,1,0), and the pivot position is (0,0,0). Thus, the beam should be rotated 45 deg in the XY plane (we ignore the z components). I'm not sure how to go about doing this as the documentation seems to indicate that the way to get the coordinates of the hand/pinch only works on the HoloLens 2. (https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/Input/HandTracking.html#hand-tracking-events & https://microsoft.github.io/MixedRealityToolkit-Unity/api/Microsoft.MixedReality.Toolkit.Input.IMixedRealityHand.html#Microsoft_MixedReality_Toolkit_Input_IMixedRealityHand_TryGetJoint_)
Does anyone know how to go about doing this, or can at least point me in the right direction? (Tutorials/code/assets would be much appreciated!)
Thank you!
I'm not sure how to go about doing this as the documentation seems to indicate that the way to get the coordinates of the hand/pinch only works on the HoloLens 2
Yes, HoloLens 1 does not support hand tracking, such as touching holograms directly with your hands or pointing and committing with hands. It is recommended that you use the Gaze and commit interaction model instead, so that you can easily get the position of the GGVPointer.
Pinch-to-rotate interaction can be achieved by adding the ManipulationHandler component from MRTK to your cube. The component can be configured to allow two-handed manipulation.
I'm not sure how to go about doing this as the documentation seems to indicate that the way to get the coordinates of the hand/pinch only works on the HoloLens 2.
There are a few ways to query pointer position. The code below should return the right-hand GGVPointer position on the HoloLens:
// Get the right-hand GGV (gaze-gesture-voice) pointer and read its position.
Vector3 pos;
GGVPointer pointer = PointerUtils.GetPointer<GGVPointer>(Handedness.Right);
if (pointer != null)
{
    pos = pointer.Position;
}
In case you just want to rotate the object around its center, you can use the BoundingBox component. It creates handles that can be pinched and moved to rotate an object, and you can disable the axes you don't want. It works even on the HoloLens 1.
I'm still making the game about tower building using Unity, and now I have a problem that has haunted me for about a week.
The losing mechanic is a line that rises at a certain speed; when it goes above the tower, the game should end. Is there any way of checking the highest object's highest point (accounting for rotated objects and irregularly stacked objects)?
There are a few ways to achieve this:
1) You can shoot a bunch of rays down from high up in the sky. Find all the hit.point positions and then loop through the points and store which building is the highest.
2) Another would be: for each block of your building that is added, keep it as a child of an empty Building GameObject. Then all you need to do is see which Building GameObject has the most children and you know it's the tallest. This assumes all blocks are the same size in Y, and then you can easily calculate the height with highestChildCount * blockSizeY
3) Another way to do it would be to use the point on the line that is traveling up. Shoot a ray out of that point to the left and right. If it is hitting a building then the game continues. If it doesn't hit anything the game is over. This is the simplest as it doesn't require calculating any heights, and your buildings can be made any way you like as long as they have colliders for the rays to hit (a rough sketch follows below). <--- This is likely the best method for what I'm hearing you asking.
(Note: I might have some spelling mistakes in the naming of methods, so proofread before copy-pasting.)
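A rough Unity sketch of option 3, assuming a 3D physics setup (use Physics2D.Raycast for a 2D project), that the rising line is the object this script sits on, and that every block has a collider; the class and field names here are just illustrative:

using UnityEngine;

public class GameOverLine : MonoBehaviour
{
    public float rayLength = 100f;   // how far to the sides the tower can be (assumption)

    void Update()
    {
        // Cast one ray to the left and one to the right at the line's current height.
        bool hitLeft  = Physics.Raycast(transform.position, Vector3.left,  rayLength);
        bool hitRight = Physics.Raycast(transform.position, Vector3.right, rayLength);

        // If neither ray hits a block, the line has passed the top of the tower.
        if (!hitLeft && !hitRight)
        {
            Debug.Log("Game over: the line is above the tower.");
        }
    }
}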
Since you are using a line, you might want to find the bounding box of the object. I have never tried the bounding-box method, so it might not work. The second method uses a little bit of math.

If your line is vertical, finding the highest point is easy. All you need to do is find the y position of the object and add half the y-scale to get the highest point. Note that this only works if the transform origin of the object is at its center. If the origin is at the bottom of the line you have to add the full y-scale value; if it is one third of the way up, then two thirds of the y-scale value. I think you get the idea. The same rule applies in the next case too.

If your line is at an angle, it gets a little more complicated. We need the absolute value of the line's rotation, and the line should be rotated less than 90 degrees away from vertical. We also need the length of the line. Imagine a right triangle in which the line itself is the hypotenuse, the base is the horizontal distance between the leftmost and rightmost points of the line, and the height is the vertical distance from the lowest point of the line to its highest point. What we want is the ratio between that height and the hypotenuse (the length of the line); for a given angle this ratio is the same for every right triangle. Because the rotation is measured away from vertical, that ratio is the cosine of the rotation (the sine would give the base, i.e. the horizontal extent), so use Mathf.Cos(). Remember to convert the rotation value (which is stored in degrees) to radians by multiplying it by Mathf.Deg2Rad. Multiplying the length of the line by that value gives the distance from the bottom of the line to the top.

Again, if the origin is in the middle, we add half of that value to the y position; if the origin is at the bottom, we add the whole value. The same rule as before applies.

I am also quite new to Unity, with only a little over a year of experience, so there may be fallacies in my answer. Hope it helps. :)
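Putting that calculation into code, a rough sketch, assuming the object's length runs along its local Y axis (so its length is localScale.y for a default cube), it is rotated around Z, and its origin is at its center; the class and method names are just illustrative:

using UnityEngine;

public static class LineHeight
{
    // Highest world-space Y of a line-like object: length along local Y,
    // rotated around Z, origin at its center.
    public static float HighestPoint(Transform line)
    {
        float length = line.localScale.y;
        float rotationFromVertical = line.eulerAngles.z;
        // Vertical extent of the whole line = length * cos(rotation away from vertical).
        float verticalExtent = length * Mathf.Abs(Mathf.Cos(rotationFromVertical * Mathf.Deg2Rad));
        // Origin at the center, so the top is half the vertical extent above the y position.
        return line.position.y + verticalExtent * 0.5f;
    }
}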
I am continuing to build upon a voxel-based game engine made in OpenTK (a .NET/Mono binding of OpenGL). In this engine, there is a basic class called Volume which possesses traits such as position, rotation and scale, as well as rules to edit these values for animation.
How would I go about providing a function to rotate one point about another point?
I could quite easily rotate an object about its center (by changing its rotation property), but what if I need the object to rotate about origin or a random point in space? This would be useful for grouping blocks together, as I could therefore rotate objects as if they were stuck together - rather than them rotating individually.
I heard I would need to dive in at the deep end and learn about rotation matrices, but honestly it went over my head. The closest resource I have been able to find so far was this link; however, it details rotating around an axis. Could somebody adapt these instructions, or even better, give me basic pseudocode for a function that rotates a position about a given point of rotation?
EDIT:
The following solution doesn't seem to work. My code is as simple as:
void RotateAboutPoint(Vector3 point, Vector3 amount)
{
    v.Translate(point);
    v.Rotate(amount);
    v.Translate(-point);
}
Should this work, and if not, could anyone help further now that the situation is explained properly?
As far as I can tell, this may as well just be:
void RotateAboutPoint(Vector3 point, Vector3 amount)
{
    v.Rotate(amount);
}
Which defeats the object of performing this around a point.
These co-ordinates are not in relation to the object... Sorry if my poor explanation made this unclear before!
I answered a similar question here: Rotating around a point different from origin
In the link you provided, the author lists the steps of rotation:
(1) Translate space so that the rotation axis passes through the origin.
(2) Rotate space about the z axis so that the rotation axis lies in the xz plane.
(3) Rotate space about the y axis so that the rotation axis lies along the z axis.
(4) Perform the desired rotation by θ about the z axis.
(5) Apply the inverse of step (3).
(6) Apply the inverse of step (2).
(7) Apply the inverse of step (1).
Actually, steps (2), (3), (5) and (6) are unnecessary if you only need to rotate about a point. Those steps are needed when you rotate your object around an arbitrary line (axis).
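Put differently, rotating a point p about a pivot c by an angle θ is just translate, rotate, translate back: p' = c + R(θ)·(p − c). A minimal sketch for rotation about the z axis, assuming OpenTK's Vector3 and System.Math (the general-axis case is where the full seven steps, or a quaternion, come in):

// Rotate 'point' about 'pivot' by 'angleDegrees' around the z axis (counter-clockwise).
static Vector3 RotateAboutZ(Vector3 point, Vector3 pivot, float angleDegrees)
{
    double rad = angleDegrees * Math.PI / 180.0;
    float cos = (float)Math.Cos(rad), sin = (float)Math.Sin(rad);
    Vector3 d = point - pivot;                           // translate the pivot to the origin
    return new Vector3(pivot.X + d.X * cos - d.Y * sin,  // rotate in the XY plane
                       pivot.Y + d.X * sin + d.Y * cos,  // ...and translate back
                       point.Z);                         // z is unchanged for a z-axis rotation
}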
In your case, let's say you want to rotate your object around (a, b):

GL.PushMatrix();                  // save the current matrix
// The calls compose so that each vertex is translated by (-a, -b),
// rotated, then translated back by (a, b): a rotation about (a, b).
GL.Translate(a, b, 0);
GL.Rotate(angle, 0, 0, 1);        // 'angle' is in degrees, rotation about the z axis
GL.Translate(-a, -b, 0);
// ... draw your object here ...
GL.PopMatrix();                   // restore the saved matrix
EDIT:
Sorry, I forgot to add the encapsulation of your rotation process. (It was in the post I linked, though.)
Further info:
What is this encapsulation, and why do we need it? The answer is simple. OpenGL stores a 4x4 matrix which is initially the identity matrix. When you perform a translate or rotate operation, OpenGL updates that matrix, and in the final state OpenGL multiplies each vertex by it. (If you perform no operation at all, vertices multiplied by the identity matrix keep the same coordinates.)
The problem in your code is that when you don't encapsulate your rotate/translate block, the final matrix ends up the same for every object in the scene. With encapsulation (PushMatrix/PopMatrix) we guarantee that the updated matrix is used only inside that block.
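To make that concrete, here is a rough sketch of the per-object encapsulation in a render loop; DrawVolume, the pivots and the angles are purely illustrative names, not part of your engine:

// Each object gets its own Push/Pop pair, so its transforms do not leak
// into the next object's matrix.
GL.PushMatrix();
GL.Translate(pivotA.X, pivotA.Y, 0);
GL.Rotate(angleA, 0, 0, 1);
GL.Translate(-pivotA.X, -pivotA.Y, 0);
DrawVolume(volumeA);              // hypothetical draw call for the first object
GL.PopMatrix();                   // matrix restored: the next object is unaffected

GL.PushMatrix();
GL.Translate(pivotB.X, pivotB.Y, 0);
GL.Rotate(angleB, 0, 0, 1);
GL.Translate(-pivotB.X, -pivotB.Y, 0);
DrawVolume(volumeB);              // hypothetical draw call for the second object
GL.PopMatrix();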
I'm making a small game in XNA. I have a camera up in the air 20 pixels on the y axis. Below it I have a grid of tiles that are 100x100. Right now what I'm trying to do is have a 3D object move with the mouse along the X and Z axis of the grid. I'm using viewport.unproject to convert the 2D screen coordinates to 3D ones, but whatever I try it doesn't seem to be quite right. At the moment I have this:
Vector3 V1 = graphicsDevice.Viewport.Unproject(new Vector3(mouse.X, mouse.Y, 0f), camera.Projection, camera.View, camera.World);
If I use this then it moves, but only by a tiny amount. I've tried replacing the Z value with a 1, but then it moves a drastic amount (I understand why, just not really sure how to fix it).
I've tried various other methods, such as unprojecting two vectors, one with a Z of 0 and one with a Z of 1, and then subtracting/normalizing them, but that wasn't it either.
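For reference, here is roughly what I had for that two-vector version, completed with a ray-plane intersection against the grid (assuming my tiles lie on the Y = 0 plane):

// Unproject the mouse position at the near (z = 0) and far (z = 1) planes.
Vector3 nearPoint = graphicsDevice.Viewport.Unproject(
    new Vector3(mouse.X, mouse.Y, 0f), camera.Projection, camera.View, camera.World);
Vector3 farPoint = graphicsDevice.Viewport.Unproject(
    new Vector3(mouse.X, mouse.Y, 1f), camera.Projection, camera.View, camera.World);

// Build a pick ray and intersect it with the ground plane (assumed to be Y = 0).
Vector3 direction = Vector3.Normalize(farPoint - nearPoint);
Ray ray = new Ray(nearPoint, direction);
Plane ground = new Plane(Vector3.Up, 0f);

float? distance = ray.Intersects(ground);
if (distance.HasValue)
{
    Vector3 worldPosition = nearPoint + direction * distance.Value;  // point under the mouse on the grid
}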
The closest I got was multiplying the result by the amount it's zoomed, but it wasn't perfect, was slightly offset, and would go crazy whenever I scrolled the screen, so I figured that was the wrong approach too.
Any help would be greatly appreciated, thanks.
I was wondering how (if at all) it would be possible to determine a shape given a set of X,Y coordinates of mouse clicks?
We're dealing with a number of issues here; there may be clicks (coordinates) which are irrelevant to the shape. Here is an example: http://tinypic.com/view.php?pic=286tlkx&s=6 The green dots represent mouse clicks, and the search is for a square at least x in height/width, at most y in height/width and comprised of four points; the red lines indicate the shape found. I'd like to be able to find a number of basic shapes, such as squares, rectangles, triangles and ideally circles too.
I've heard that Least Squares is something that would help me, but it's not clear to me how this would help me if at all. I'm using C# and examples are more than welcome :)
You can create detectors for each shape you want to support. These detectors tell whether a set of points forms the shape.
So, for example, you would pass 4 points to the quad detector and it returns whether or not the 4 points are arranged as a quad. The quad detector could work like this:
for each point:
    find the closest neighbour point
    compute the inner angle
    compute the distance to the neighbours
if all inner angles are 90° ± some threshold -> ok
if all distances are equal ± some threshold (percentage) -> ok
otherwise it is not a quad
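A rough C# sketch of such a quad/square detector, assuming the four points arrive in arbitrary order; the tolerance values are illustrative, not tuned:

using System;
using System.Linq;
using System.Drawing;

static class QuadDetector
{
    // Returns true if the four points form an approximate square.
    public static bool IsSquare(PointF[] pts, double angleTolDeg = 10, double sideTolPct = 0.15)
    {
        if (pts.Length != 4) return false;

        // Order the points around their centroid so that neighbours are adjacent.
        float cx = pts.Average(p => p.X), cy = pts.Average(p => p.Y);
        PointF[] ordered = pts.OrderBy(p => Math.Atan2(p.Y - cy, p.X - cx)).ToArray();

        double[] sides = new double[4];
        double[] angles = new double[4];
        for (int i = 0; i < 4; i++)
        {
            PointF prev = ordered[(i + 3) % 4], cur = ordered[i], next = ordered[(i + 1) % 4];
            double ax = prev.X - cur.X, ay = prev.Y - cur.Y;   // edge to the previous neighbour
            double bx = next.X - cur.X, by = next.Y - cur.Y;   // edge to the next neighbour
            sides[i] = Math.Sqrt(bx * bx + by * by);
            double lenA = Math.Sqrt(ax * ax + ay * ay);
            // Inner angle at 'cur' between the two adjacent edges.
            angles[i] = Math.Acos((ax * bx + ay * by) / (lenA * sides[i])) * 180.0 / Math.PI;
        }

        bool anglesOk = angles.All(a => Math.Abs(a - 90) <= angleTolDeg);
        double mean = sides.Average();
        bool sidesOk = sides.All(s => Math.Abs(s - mean) / mean <= sideTolPct);
        return anglesOk && sidesOk;
    }
}

The same structure extends to rectangles (drop the equal-sides check) and to triangles (three points, angles summing to 180°).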
A naive way to use these detectors is to pass every subset of points to them. If you have enough time, this is the easiest way. If you want better performance, you can be smarter about which points you pass.
E.g. if quads are always axis-aligned, you can start at any point, go right until you hit another point (again with some threshold), then go down, then go left.
Those are just some thoughts that might help you further. I can imagine that there are algorithms in AI that can solve this problem in a more pragmatic way, maybe neural networks.