Is it possible to change the elevation angle of the Kinect motor automatically?
I mean, so far I have this code (C#):
private void CameraAngleSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    // Apply the slider value (in degrees) to the sensor's tilt motor
    int angle = (int)CameraAngleSlider.Value;
    KinectSensorManager.ElevationAngle = angle;
}
and I change the angle manually using a slider called CameraAngleSlider.
Just an example: I would imagine that when the Kinect session starts, I stand in front of the Kinect and the sensor tries to adjust its angle based on my position.
This should be perfectly possible, but you will have to program it manually and take into account that the adjustment is not very fast. Also, the Kinect sensor doesn't push any data while it is adjusting its angle.
You have 2 situations:
1) the skeleton is already being tracked
=> move the angle up when the head is too close to the top of the screen
=> move the angle down when the head is below half the screen height
2) no skeleton being tracked
=> you will have to guess. I'd suggest moving the angle up and down until you get a tracked skeleton (and aborting after a few tries so it doesn't just keep adjusting). A sketch of situation 1 follows below.
There is no flag or function you can set on the Kinect to have it seek out the best position for tracking based on player position. The Kinect does detect clipping, however, and can be queried as to whether a portion of the player is outside the field of view (FOV).
The two APIs that allow you to detect clipping of the skeleton are:
FrameEdges Enumeration
Skeleton.ClippedEdges
Depending on how your application operates, you can monitor either of these and, when clipping of the skeleton is detected, adjust the KinectSensor.ElevationAngle accordingly.
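For example, a rough sketch (assuming SDK v1, sensor is your KinectSensor instance, and the 2-degree step is arbitrary):
if (skeleton.ClippedEdges.HasFlag(FrameEdges.Top))
{
    // Player is being cut off at the top of the FOV: tilt up
    sensor.ElevationAngle = Math.Min(sensor.ElevationAngle + 2, sensor.MaxElevationAngle);
}
else if (skeleton.ClippedEdges.HasFlag(FrameEdges.Bottom))
{
    // Player is clipped at the bottom: tilt down
    sensor.ElevationAngle = Math.Max(sensor.ElevationAngle - 2, sensor.MinElevationAngle);
}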
Related
Let's say I have a path, which could be a line or a curve, in Unity, and I want to allow my player to snap to that path and move along it. How would I achieve this? It would look like either a cover system or something similar to how ledges work in Sly Cooper.
If you want your player's movement to follow a path, there is a technique called waypoints, which simply uses the built-in navigation system in Unity. There are many articles teaching you how to build such a waypoint system manually, and you can also get a SimpleWayPoint module from the Asset Store.
But if your player is controlled by the user, meaning the user controls its movement (say, pressing 'W' moves forward and pressing 'S' moves backwards, while leftward and rightward movement is constrained by the path you defined), then unfortunately I haven't found any mature technique that lets you enforce this constraint.
However, in a project I completed several months ago, I handled the coordinates manually with success. Since I created a city whose streets strictly follow horizontal and vertical grid lines, I could easily control the target GameObject's X and Y coordinates, either by using the built-in freeze-position constraint API or by manually resetting the axis values over and over in the Update() method. So maybe you could use the Update() method to constrain the target's coordinates by dynamically calculating them from the functions representing the path, but I can imagine how complex that would be. A minimal sketch of the grid-street case is below.
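As a rough illustration of that idea (assumptions: a straight street running along the world X axis, and streetZ as a made-up field for the fixed line the player must stay on):
using UnityEngine;

public class GridPathConstraint : MonoBehaviour
{
    public float streetZ = 0f; // the fixed z line of the current street

    void Update()
    {
        // Reset the constrained axis every frame so the player stays on the street
        Vector3 p = transform.position;
        p.z = streetZ;
        transform.position = p;
    }
}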
You could try using a NavMeshAgent, which is enabled as soon as the player wants to use this "auto movement". In your code you can set the destination, and in your scene you can bake a NavMesh, which can be whatever shape you like.
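A minimal sketch of that, assuming a baked NavMesh and a NavMeshAgent component on the player (the class and method names here are made up):
using UnityEngine;
using UnityEngine.AI;

public class AutoMove : MonoBehaviour
{
    public NavMeshAgent agent; // keep disabled while the player moves freely

    public void StartAutoMove(Vector3 destination)
    {
        // Enable path-following only when "auto movement" is requested
        agent.enabled = true;
        agent.SetDestination(destination);
    }
}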
You can do it by:
Marking the position beyond which you do NOT want your player to move forward/cross the line.
Mentioning that position in your script.
LOGIC
if player position > marked position
then stop the movement
SYNTAX
private void Update()
{
    if (transform.position.x < -10)
    {
        // Snap back to the boundary; use Vector3 or Vector2 according to your game
        transform.position = new Vector3(-10, transform.position.y, transform.position.z);
    }
}
I have a simple application which allows me to control PTZ cameras.
My problem is getting the current camera position.
To do this I've used the instruction var position = ptzClient.GetStatus(profile.token).Position.
If I set a new preset (using the SetPreset method) without moving the camera device, the preset's position turns out to be different from status.Position.
I think that the actual camera position is the preset.PTZPosition value and not the status.Position value.
How can I know the true camera position? And which position is status.Position?
Thank you.
I am trying to drag an image within a certain area. For that I am using IDragHandler. To prevent the image from going outside the area, I put four box colliders in a square shape. The box was still moving out, so I set the fixed timestep to 0.0001. Now, when the image goes out of the boundary, it gets pushed back into the specified area, which is fine, but I want the image to stop moving the moment it touches the edge of the boundary.
Here's my code:
using UnityEngine;
using UnityEngine.EventSystems;

public class Draggable : MonoBehaviour, IDragHandler
{
    public GameObject box;

    private void OnCollisionEnter2D(Collision2D collision)
    {
        Debug.Log("Triggered");
    }

    public void OnDrag(PointerEventData eventData)
    {
        // Move the box to the pointer's position while dragging
        box.transform.position = eventData.position;
    }
}
Don't use physics and the fixed timestep to solve this problem; reducing the timestep is a performance killer.
One way to do it: use Physics.OverlapBox (see the documentation: https://docs.unity3d.com/ScriptReference/Physics.OverlapBox.html).
Do this test between your draggable object and the limits of your draggable zone in your OnDrag method. Calculate the "wanted" position based on the event you receive; if your object overlaps with your borders then don't move it, and if not, you are safe and can move it.
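Roughly like this (a sketch only, assuming your borders are colliders on a hypothetical "Borders" layer and your draggable fits inside the given half-extents box; the numbers are placeholders):
public void OnDrag(PointerEventData eventData)
{
    Vector3 wanted = eventData.position; // the "wanted" position from the event
    Vector3 halfExtents = new Vector3(0.5f, 0.5f, 0.5f);
    int borderMask = LayerMask.GetMask("Borders");

    // Only move if the wanted position would not overlap a border collider
    if (Physics.OverlapBox(wanted, halfExtents, Quaternion.identity, borderMask).Length == 0)
        box.transform.position = wanted;
}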
Try to express your boundaries as simple numbers. Maybe create a rectangle and take its corners. Then you get the best user experience by using min and max functions, effectively "clamping" the allowed coordinates before they are actually set.
It makes sense to clamp x and y separately. That way the user can still drag along the Y axis if there is room, even when the mouse is outside the boundary on the X axis.
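A minimal sketch of that clamping, assuming min and max are fields holding the bottom-left and top-right corners of the allowed area in the same space as eventData.position:
public Vector2 min; // bottom-left corner of the drag area (assumed)
public Vector2 max; // top-right corner of the drag area (assumed)

public void OnDrag(PointerEventData eventData)
{
    Vector2 p = eventData.position;
    p.x = Mathf.Clamp(p.x, min.x, max.x); // clamp each axis separately so the
    p.y = Mathf.Clamp(p.y, min.y, max.y); // user can still slide along an edge
    box.transform.position = p;
}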
I want to implement a zoom feature using the two-finger pinch in/out gesture that is commonly found in games such as Angry Birds. Right now I'm using a slider for zoom and it doesn't feel as good as the simple gesture. I've tried looking at the gesture implementation in MonoGame but haven't figured out what can actually help me achieve the described behaviour.
Any help will be appreciated, thanks!
Short answer: you need to use the TouchPanel gesture functionality to detect the Pinch gesture, then process the resultant gestures.
The longer answer...
You will get multiple GestureType.Pinch gesture events per user gesture, followed by a GestureType.PinchComplete when the user releases one or both fingers. Each Pinch event will have two pairs of vectors - a current position and a position change for each touch point. To calculate the actual change for the pinch you need to back-calculate the prior positions of each touch point, get the distance between the touch points at prior and current states, then subtract to get the total change. Compare this to the distance of the original pinch touch points (the original positions of the touch points from the first pinch event) to get a total distance difference.
First, make sure you initialize the TouchPanel.EnabledGestures property to include GestureType.Pinch and optionally GestureType.PinchComplete depending on whether you want to capture the end of the user's pinch gesture.
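For example, in your Game class's Initialize override:
// Enable only the gesture types you intend to handle
TouchPanel.EnabledGestures = GestureType.Pinch | GestureType.PinchComplete;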
Next, use something similar to this (called from your Game class's Update method) to process the events:
bool _pinching = false;
float _pinchInitialDistance;

private void HandleTouchInput()
{
    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample gesture = TouchPanel.GetGesture();
        if (gesture.GestureType == GestureType.Pinch)
        {
            // current positions
            Vector2 a = gesture.Position;
            Vector2 b = gesture.Position2;
            float dist = Vector2.Distance(a, b);

            // prior positions
            Vector2 aOld = gesture.Position - gesture.Delta;
            Vector2 bOld = gesture.Position2 - gesture.Delta2;
            float distOld = Vector2.Distance(aOld, bOld);

            if (!_pinching)
            {
                // start of pinch, record original distance
                _pinching = true;
                _pinchInitialDistance = distOld;
            }

            // work out zoom amount based on pinch distance...
            float scale = (distOld - dist) * 0.05f;
            ZoomBy(scale);
        }
        else if (gesture.GestureType == GestureType.PinchComplete)
        {
            // end of pinch
            _pinching = false;
        }
    }
}
The fun part is working out the zoom amounts. There are two basic options:
As shown above, use a scaling factor to alter zoom based on the raw change in distance represented by the current Pinch event. This is fairly simple and probably does what you need it to do. In this case you can probably drop the _pinching and _pinchInitialDistance fields and related code.
Track the distance between the original touch points and set zoom based on current distance as a percentage of initial distance (float zoom = dist / _pinchInitialDistance; ZoomTo(zoom);)
Which one you choose depends on how you're handling zoom at the moment.
In either case, you might also want to record the central point between the touch points to use as the center of your zoom rather than pinning the zoom point to the center of the screen. Or if you want to get really silly with it, record the original touch points (aOld and bOld from the first Pinch event) and do translation, rotation and scaling operations to have those two points follow the current touch points.
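If you want that central point, a minimal way to get it (inside the Pinch branch above) is:
// Midpoint between the two current touch points; use it as the zoom focus
Vector2 pinchCenter = (gesture.Position + gesture.Position2) / 2f;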
Is there a way to override the elevation angle property of the Kinect device?
As far as I know, the device calculates this angle with an internal sensor and uses its value to improve user detection and tracking.
What I want is to mount the Kinect on the ceiling and point it straight down at the ground, then somehow set the elevation angle to 0 and fool it into thinking it is placed in a normal position, thus maybe improving the tracking of a user lying on the ground underneath the sensor.