Background
I am producing a physics teaching platform using XNA C# + Kinect. A user may set up a scene with objects, including:
Sphere
Block
Plane
The obvious fun thing to do is to use different gestures to represent different objects. The plane seems to be the easiest one, so I intend to start there.
The flow of inputting an object is like this:
Choose an object (through gesture recognition)
Scale the object
Rotate the object
Place the object
My idea
Here is my idea. We track 6 joints of the upper body:
LEFT + RIGHT wrist
LEFT + RIGHT elbow
LEFT + RIGHT shoulder
If this set of points is collinear, i.e. both arms are held out horizontally, we will say this is an input gesture for a plane.
If my idea is to be used, then there is a need to determine how collinear the points are, for example, some algorithm which returns the "collinearity" of a set of points as a float in the interval [0, 1]. Then I can say, for example, that anything > 0.9 will be accepted as a plane, allowing some room for error.
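To make the idea concrete, here is a minimal sketch of one such measure (my own rough version, not an existing library routine): anchor a line on the two outermost joints and score the remaining joints by their perpendicular distance from it, normalised by half the wrist-to-wrist span. The class and method names are made up.

using System;
using Microsoft.Xna.Framework;

static class Collinearity
{
    // Returns a score in [0, 1]: 1.0 means perfectly collinear, lower means more bent.
    // Assumes points[0] and points[points.Length - 1] are the two outermost joints
    // (e.g. left wrist and right wrist), so the reference line is anchored on them.
    public static float Measure(Vector3[] points)
    {
        Vector3 a = points[0];
        Vector3 b = points[points.Length - 1];
        float span = Vector3.Distance(a, b);
        if (span < 1e-5f)
            return 0f;                                // degenerate: endpoints coincide

        Vector3 dir = Vector3.Normalize(b - a);
        float worst = 0f;
        foreach (Vector3 p in points)
        {
            // Perpendicular distance from p to the line through a with direction dir.
            Vector3 offset = p - a;
            Vector3 offLine = offset - Vector3.Dot(offset, dir) * dir;
            worst = Math.Max(worst, offLine.Length());
        }

        // Normalise by half the span so the score does not depend on the user's size.
        return MathHelper.Clamp(1f - worst / (span * 0.5f), 0f, 1f);
    }
}

Feeding this the six tracked joint positions and accepting anything above roughly 0.9 would then correspond to the "both arms horizontal" plane gesture.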
OR
Alternatively, the "Template Based Posture Detection" approach described in the link below sounds great:
http://blogs.msdn.com/b/eternalcoding/archive/2011/08/02/kinect-toolkit-1-1-template-based-posture-detector-and-voice-commander.aspx
But from what I understand, this is a generic learning algorithm for ANY gesture. As a start, I would rather write something of my own to try out the Kinect.
Question
So... does anyone know of an existing algorithm that determines the degree of "collinearity" of a set of points?
Related
I'm trying to rotate a beam/cuboid around a pivot using MRTK, Unity, and the HoloLens 1 when the user performs the pinch-and-hold gesture. The beam should remain in place once the pinch is released.
My initial thought was to get the Cartesian coordinates of the pinch and, based on their position relative to the pivot, rotate the beam by however many degrees are needed. E.g. the hand position while pinching is (1,1,0) and the pivot position is (0,0,0), so the beam should be rotated 45 deg in the XY plane (we ignore the z components). I'm not sure how to go about this, as the documentation seems to indicate that the way to get the coordinates of the hand/pinch only works on the HoloLens 2. (https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/Input/HandTracking.html#hand-tracking-events & https://microsoft.github.io/MixedRealityToolkit-Unity/api/Microsoft.MixedReality.Toolkit.Input.IMixedRealityHand.html#Microsoft_MixedReality_Toolkit_Input_IMixedRealityHand_TryGetJoint_).
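Roughly what I have in mind, as a sketch (pinchPos would have to come from wherever a pointer position can actually be queried; beam and pivotPos are my own names):

// Angle of the pinch position around the pivot in the XY plane (Z ignored).
Vector3 offset = pinchPos - pivotPos;
float angleDeg = Mathf.Atan2(offset.y, offset.x) * Mathf.Rad2Deg;   // (1,1,0) -> 45
// Rotate the beam about the Z axis; assumes the beam's transform origin sits at the pivot.
beam.transform.rotation = Quaternion.Euler(0f, 0f, angleDeg);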
Does anyone know how to do this, or can you at least point me in the right direction? (Tutorials/code/assets would be much appreciated!)
Thank you!
I'm not sure how to go about this, as the documentation seems to indicate that the way to get the coordinates of the hand/pinch only works on the HoloLens 2
Yes, HoloLens 1 does not support hand tracking, such as touching holograms directly with your hands or pointing and committing with them. It is recommended that you use the gaze-and-commit interaction model instead, so that you can easily get the position of the GGVPointer.
Pinch-to-rotate interaction can be achieved by adding the ManipulationHandler component from MRTK to your cube. The component can be configured to allow two-handed manipulation.
I'm not sure how to go about this, as the documentation seems to indicate that the way to get the coordinates of the hand/pinch only works on the HoloLens 2.
There are a few ways to query the pointer position. The code below should return the right-hand GGVPointer position on the HoloLens.
// Get the right-hand GGV (gaze-gesture-voice) pointer, if one is currently active.
Vector3 pos = Vector3.zero;
GGVPointer pointer = PointerUtils.GetPointer<GGVPointer>(Handedness.Right);
if (pointer != null)
{
    // World-space position of the pointer.
    pos = pointer.Position;
}
In case you just want to rotate the object around its center, you can use the BoundingBox component. It creates handles that can be pinched and moved to rotate an object, and you can disable the axes you don't want. It works even on the HoloLens 1.
I am developing a racing game. But it's not your usual racing game: the bike is supposed to always have the same Y coordinate as the closest point on the map mesh; in other words, it is ALWAYS touching it.
What I do not want is for the Y coordinate to be a simple function of the horizontal (X, Z) position, as there will be 2 (or maybe more) floors.
I have absolutely no idea how to implement this. Completely zero. I am rather new to scripting, and this is way out of my league; I don't even know where to start... The map is not a simple plane, so simple maths won't help.
I'll appreciate any help at all, not necessarily a complete solution.
Thanks in advance
This idea is an adapted version of @LeoBartkus's answer.
I suggest casting 2 raycasts down from the bottom of the bike's wheels and using the 2 hits to position and rotate the bike. This allows accurate positioning of the bike for all kinds of terrain, except for spikes narrow and tall enough to appear to pierce the bike. Using a single raycast from the bike's center might cause problems if the ground is uneven, like a crater, for example.
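A rough, untested sketch of that idea in Unity C# (frontWheel and rearWheel are empty transforms placed at the wheel contact points, the map mesh needs a collider, and all the names are made up):

using UnityEngine;

public class StickToGround : MonoBehaviour
{
    public Transform frontWheel;   // empty child at the front wheel's contact point
    public Transform rearWheel;    // empty child at the rear wheel's contact point
    public LayerMask groundMask;   // layer(s) used by the map mesh
    public float maxDistance = 10f;

    void LateUpdate()
    {
        RaycastHit frontHit, rearHit;
        bool f = Physics.Raycast(frontWheel.position + Vector3.up, Vector3.down,
                                 out frontHit, maxDistance, groundMask);
        bool r = Physics.Raycast(rearWheel.position + Vector3.up, Vector3.down,
                                 out rearHit, maxDistance, groundMask);
        if (!f || !r) return;

        // Place the bike midway between the two hit points
        // (assumes the bike's origin sits midway between the wheels at ground level)...
        transform.position = (frontHit.point + rearHit.point) * 0.5f;
        // ...and pitch it so it lies along the line between them.
        Vector3 forward = (frontHit.point - rearHit.point).normalized;
        transform.rotation = Quaternion.LookRotation(forward, frontHit.normal);
    }
}

Casting from just above each wheel, rather than from high overhead, is what keeps the bike on its current floor when several floors overlap.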
I need to develop a compass for a device we are using. This device is directionally unaware (no gyroscope), but has a GPS module. How can I make a compass, with a needle, that leads from a start coordinate (likely their current position) to an end coordinate?
My current thoughts are:
Poll coordinates on the GPS sensor as quickly as appropriate.
Record coordinates where the PDOP is within a respectable range (maybe less than 2.0).
Determine the direction they are facing based on how the coordinates change as they walk.
I have a few issues with this though:
Firstly, the unit has to be moving before you can get any sense of which way the user is facing.
It doesn't seem like it would be the most accurate approach, e.g. how many past points do you use to determine a change of direction?
I'm not really sure if this is a feasible solution. Is there some implementation theory I can read on this?
Is there a better way to solve my problem? The scope of the project involves going from a 'current location' to some geo-tagged item in an oil field.
Using a Windows Mobile 6.5 device - C# on VS2008.
I think the issue you have is that the way people use a compass doesn't fit with what you can achieve with a 'getting closer' technique. When you look at a compass you tend to spin on the spot to get a sense of direction, which is going to fail in your case no matter how clever you get with distances. In fact the direction the phone is facing is going to be irrelevant, because the device doesn't know which way it is pointing and so cannot adjust any compass display.
I think you'd be better off dropping the compass idea and developing an interface that works with the 'getting closer' method you suggest. Maybe a simple distance-to-target display would be better? I think showing something that doesn't do what it should do is likely to be counterproductive. If it were sat nav inside a moving vehicle then it might work, as you can't easily spin on the spot and it's likely to update itself before you've turned around.
Giving it some further thought, I think using a map and showing your last movements on it as some sort of line (although the map may display upside down in relation to the user's direction), together with a vector from your current location towards your target, would be better for locating the target. At least if the vector and your movement line were aligned you'd know you were travelling in the right direction.
If I were in your shoes, I would display a notice on the screen that tells the user to always keep the top of the device pointed in the direction of travel. While they are moving, calculate the direction they're going using their current position and their last known position. Then calculate the direction they should be moving based off of their current position and the target's position. Then calculate the difference between their direction of travel and the direction they should be traveling, and use that difference to point an arrow on the screen that shows which direction they should be going relative to their current direction.
Here's an example:
Let D represent Direction of travel. Let's say that's 100 degrees in this example.
Let T represent the direction they should be traveling to reach the target. Let's say that's 90 degrees in this example. T - D = -10, so draw an arrow on the screen pointing -10 degrees from straight up. Remember straight up is the same as D if they're following the instructions I mentioned earlier. This means that the arrow is pointing at D-10, which is 90 degrees, which is the way they should be going.
Now you have another problem: if they stop moving, you no longer have any way to tell which way the device is pointing. In that case, hide the arrow and let the user know that it will return once the GPS starts moving again.
The last thing to keep in mind is that a 1 degree change in longitude covers less distance the farther you are from the equator, so determining your headings isn't as straightforward as plane geometry. Here's a link to an article that tells you how to calculate headings based on 2 GPS points: http://www.movable-type.co.uk/scripts/latlong.html
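For reference, here is a small C# sketch of the initial-bearing formula from that page, plus the arrow angle from the worked example above (lat/lon in degrees; class and method names are my own):

using System;

static class Heading
{
    // Initial bearing in degrees (0..360, clockwise from north) from point 1 to point 2.
    public static double Bearing(double lat1, double lon1, double lat2, double lon2)
    {
        double p1 = lat1 * Math.PI / 180.0;
        double p2 = lat2 * Math.PI / 180.0;
        double dLon = (lon2 - lon1) * Math.PI / 180.0;
        double y = Math.Sin(dLon) * Math.Cos(p2);
        double x = Math.Cos(p1) * Math.Sin(p2) - Math.Sin(p1) * Math.Cos(p2) * Math.Cos(dLon);
        return (Math.Atan2(y, x) * 180.0 / Math.PI + 360.0) % 360.0;
    }

    // Angle to draw the arrow at, relative to "straight up" (the direction of travel D).
    public static double ArrowAngle(double travelBearing, double targetBearing)
    {
        // Normalised into -180..180; e.g. T = 90, D = 100 gives -10, as in the example above.
        return (targetBearing - travelBearing + 540.0) % 360.0 - 180.0;
    }
}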
Good luck!
Edit: Most GPS devices give you the direction of travel, but unless you're using the same algorithm the GPS uses when you calculate the direction they should be going, there could be a discrepancy that would cause your arrow to be a little off.
In your case you can only create an instrument that shows the geographical direction in which you are moving.
This is as good as (or better than) a compass when the user points the device in the direction they are moving.
It is not the same as a compass, but in some cases it is even more useful: e.g. inside a vehicle, where a compass using the magnetic field would not work well.
GPS has a "course" attribute. Just use that, and you are ready.
I was wondering how (if at all) it would be possible to determine a shape given a set of X,Y coordinates of mouse clicks?
We're dealing with a number of issues here; there may be clicks (coords) which are irrelevant to the shape. Here is an example: http://tinypic.com/view.php?pic=286tlkx&s=6 The green dots represent mouse clicks, and the search is for a square at least x in height/width, at most y in height/width, and comprised of four points; the red lines indicate the shape found. I'd like to be able to find a number of basic shapes, such as squares, rectangles, triangles, and ideally circles too.
I've heard that least squares is something that could help me, but it's not clear to me how it would, if at all. I'm using C#, and examples are more than welcome :)
You can create a detector for each shape you want to support. Each detector tells you whether a set of points forms that shape.
So, for example, you would pass 4 points to the quad detector and it returns whether or not the 4 points are arranged as a quad. The quad detector could work like this:
for each point
    find the closest neighbour point
    compute the inner angle
    compute the distance to the neighbours
if all inner angles are 90° ± some threshold -> ok
if all distances are equal ± some threshold (percentage) -> ok
otherwise it is no quad.
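As a concrete (untested) C# sketch of such a detector for squares, assuming exactly 4 candidate points and ordering them around their centroid so that "closest neighbour" becomes "adjacent corner":

using System;
using System.Drawing;   // PointF
using System.Linq;

static class QuadDetector
{
    // Returns true if the 4 points form a square, within the given tolerances.
    public static bool IsSquare(PointF[] pts, double angleTolDeg = 10, double sideTolPct = 0.15)
    {
        if (pts.Length != 4) return false;

        // Order the points around their centroid so neighbours are adjacent in the array.
        float cx = pts.Average(p => p.X), cy = pts.Average(p => p.Y);
        PointF[] ordered = pts.OrderBy(p => Math.Atan2(p.Y - cy, p.X - cx)).ToArray();

        double[] sides = new double[4];
        for (int i = 0; i < 4; i++)
        {
            PointF prev = ordered[(i + 3) % 4], cur = ordered[i], next = ordered[(i + 1) % 4];

            // Side length to the next neighbour.
            sides[i] = Dist(cur, next);

            // Inner angle at 'cur' between its two neighbours.
            double a1 = Math.Atan2(prev.Y - cur.Y, prev.X - cur.X);
            double a2 = Math.Atan2(next.Y - cur.Y, next.X - cur.X);
            double angle = Math.Abs((a1 - a2) * 180.0 / Math.PI);
            if (angle > 180) angle = 360 - angle;
            if (Math.Abs(angle - 90) > angleTolDeg) return false;
        }

        // All side lengths equal within a percentage threshold.
        double mean = sides.Average();
        return sides.All(s => Math.Abs(s - mean) <= mean * sideTolPct);
    }

    static double Dist(PointF a, PointF b) =>
        Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));
}

Detectors for rectangles or triangles would follow the same pattern, just with different angle and side constraints.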
A naive way to use these detectors is to pass every subset of points to them. If you have enough time, then this is the easiest way. If you want to achieve some performance, you can select the points to pass a bit smarter.
E.g. if quads are always axis-aligned, you can start at any point, go right until you hit another point (again with some threshold), go down, go left.
Those are just some thoughts that might help you further. I can imagine that there are algorithms in AI that can solve this problem in a more pragmatic way, maybe neural networks.
I'm designing a 3D game with a camera not entirely unlike that in The Sims and I want to prevent the player character from being hidden behind objects, including walls, pillars and other objects.
One easy way to handle the walls case is to have them face inwards and not have a back side, but that won't cover the other cases at all.
What I had planned is to somehow check for objects that are "in front" of the player, relative to the camera, and hide them - be it by alpha blending or not rendering at all.
One probably-not-so-good idea I had in mind is to scan from the camera to the player in a straight line and see if you hit a non-hidden object, continuing until you reach the player. Unfortunately, I am an almost complete newbie at 3D programming.
In the demonstration SVG illustration, that wall panel obscures the player, so it must be hidden. Another unrelated and pretty much already solved problem is removing all three wall panels on that side, which is irrelevant to this question and only caused by the mapping system I came up with.
What I had planned is to somehow check for objects that are "in front" of the player, relative to the camera, and hide them - be it by alpha blending or not rendering at all.
This is a good plan. You'll want to incorporate some kind of bounding volume onto the player, so the entire player (plus a little extra) is visible at all times. Then, simply run the intersection algorithm for each corner of the bounding volume.
Finding which object is at a given point on screen is called picking. Here's an XNA link for you which should direct you to an example. The idea is that you retrieve the 3D point in the game from the 2D point, and then can use standard collision detection methods to work out which object is occupying that space. Then you can elect to render that object differently.
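In XNA, the usual way to build the ray for that picking test is Viewport.Unproject; a minimal sketch (the camera matrices and the per-object bounds are whatever your scene already has):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Build a world-space ray through a screen point (e.g. where the player is drawn).
static Ray ScreenPointToRay(Viewport viewport, Vector2 screenPoint,
                            Matrix projection, Matrix view)
{
    Vector3 near = viewport.Unproject(new Vector3(screenPoint, 0f),
                                      projection, view, Matrix.Identity);
    Vector3 far = viewport.Unproject(new Vector3(screenPoint, 1f),
                                     projection, view, Matrix.Identity);
    return new Ray(near, Vector3.Normalize(far - near));
}

// Any object whose bounds the ray hits closer than the player is an occluder:
// float? hit = ray.Intersects(objectBounds);   // objectBounds: that object's BoundingSphere
// if (hit.HasValue && hit.Value < distanceToPlayer) { /* fade or skip that object */ }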
One hack which might suffice if you have trouble with the picking approach is to render the character once as part of the scene, and then render it again at the end at half-alpha on top of everything. That way you can see the whole character and the wall, though you won't see through the wall as such.
One easy way, at least for prototyping, would be to always draw the player after you draw the rest of the scene. This would ensure that the player is rendered on top of anything else in the scene. Crude but effective.
Create a bounding volume from the camera to the extents of the player, determine what objects intersect that volume, and then render them in whatever alternate style you want?
There might be some ultra-clever way to do this, but this seems like the pretty straightforward version, and shouldn't be too much of a perf hit (you're probably doing collision every frame anyway....)
The simplest thing I can think of that should work relatively well is to model every obstacle as a plane perpendicular to your ground (assuming you have a ground), roughly treating everything that is an obstacle as a wall with some height.
Model your player as a point somewhere, and model your camera as another point. The 3D line that connects these two points is the interesting one, because if it crosses an "obstacle plane" at a height below the top of that obstacle, then that obstacle is blocking your view of the player point.
I hope that's somewhat clear. To make this into an algorithm you'd have to implement a general method for determining where a line and a plane intersect (to determine whether the obstacle is tall enough to block the view).
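A minimal sketch of that test (XNA-style Vector3; Wall is a hypothetical type holding a base point, a horizontal normal and a height, and a real version would also check that the crossing lies within the wall's horizontal extent):

using Microsoft.Xna.Framework;

struct Wall
{
    public Vector3 BasePoint;   // any point at the bottom of the wall
    public Vector3 Normal;      // horizontal normal of the wall plane
    public float Height;        // how far the wall rises above BasePoint
}

static bool WallBlocksView(Vector3 camera, Vector3 player, Wall wall)
{
    Vector3 toPlayer = player - camera;
    float denom = Vector3.Dot(wall.Normal, toPlayer);
    if (System.Math.Abs(denom) < 1e-5f)
        return false;                                   // sight line parallel to the wall plane

    // Parameter t of the crossing along the camera -> player segment.
    float t = Vector3.Dot(wall.Normal, wall.BasePoint - camera) / denom;
    if (t < 0f || t > 1f)
        return false;                                   // the wall plane is not between them

    // The wall blocks the view only if the sight line crosses it below its top edge.
    Vector3 crossing = camera + t * toPlayer;
    return crossing.Y <= wall.BasePoint.Y + wall.Height;
}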