Getting the start location of a Flick gesture in XNA - C#

I'm using the MonoGame implementation of XNA to create a Windows Phone game. The player should be able to move objects across the level using flick gestures. I'm using the TouchPanel class to read gestures.
Here's how I initialize the TouchPanel:
TouchPanel.EnabledGestures = GestureType.Flick;
Here is how I read the gestures in Update method:
while (TouchPanel.IsGestureAvailable)
{
    var g = TouchPanel.ReadGesture();
    ...
}
However, the only field that gets filled is the Delta vector. How do I find out the point where the user started the gesture?
Since I want my game to be cross-platform, I cannot rely on non-XNA code such as Silverlight gesture handlers.

Using the Flick gesture to move objects is not a good idea, as Flicks are positionless: the GestureSample only carries a Delta (velocity), never a Position.
You should use the Hold or FreeDrag gesture instead: hit-test the object at the gesture's Position and then move it across the screen.
Flick is really intended for things like inertial scrolling, so I suggest you choose another gesture.
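A minimal sketch of what the FreeDrag approach could look like in Update (the hit-test is left as a comment, since it depends on your game objects):
TouchPanel.EnabledGestures = GestureType.FreeDrag | GestureType.DragComplete;

while (TouchPanel.IsGestureAvailable)
{
    GestureSample g = TouchPanel.ReadGesture();
    switch (g.GestureType)
    {
        case GestureType.FreeDrag:
            // g.Position is the current touch location, g.Delta the movement since the last sample.
            // On the first FreeDrag of a drag, hit-test g.Position to pick the object to move.
            break;
        case GestureType.DragComplete:
            // The finger was lifted; release the grabbed object here.
            break;
    }
}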

Related

How do I acquire the Eye-gaze location within 2D UWP Apps with HoloLens 2 in C#

Online docs like 2D app design considerations: UI/UX state that in 2D UWP apps:
When a user gazes towards something or points with a motion controller, a touch hover event will occur. This consists of a PointerPoint where PointerType is Touch, but IsInContact is false.
This implies that eye-gaze is automatically mapped into your 2D app window on HoloLens 2 and is available through events like PointerMoved. However, based on my tests, what is passed into the app as a touch hover event is not eye-gaze or even head-based gaze; on the HoloLens 2 it is actually the pointing ray cast by your finger through hand recognition when you point at the 2D app window frame in 3D space. If someone is aware of eye-gaze actually being mapped and available inside a 2D UWP app window, please let me know.
The next approach would seem to be leveraging the SpatialPointerPose API.
I am able to grab the starting position of the HoloLens when the app launches and hang on to a stationary reference to that location with the following lines of code:
private async void Page_Loaded(object sender, RoutedEventArgs e)
{
    if (EyesPose.IsSupported())
    {
        // Request eye-tracking access, then anchor a stationary frame of reference
        // at the device's current location.
        var ret = await EyesPose.RequestAccessAsync();
        originLocator = SpatialLocator.GetDefault();
        originRefFrame = originLocator.CreateStationaryFrameOfReferenceAtCurrentLocation();
        coordinateSystem = originRefFrame.CoordinateSystem;
    }
    gazeTimer.Start();
}
Using that reference frame I can get the coordinate system, and with a DispatcherTimer I can pass it along with a perception timestamp to get a SpatialPointerPose object, from which I can retrieve the eye-gaze origin and direction and display the values in XAML TextBlocks:
private void GazeTimer_Tick(object sender, object e)
{
    timestamp = PerceptionTimestampHelper.FromHistoricalTargetTime(DateTime.Now);
    spPose = SpatialPointerPose.TryGetAtTimestamp(coordinateSystem, timestamp);
    // TryGetAtTimestamp can return null, so check before touching Eyes.
    if (spPose != null && spPose.Eyes != null && spPose.Eyes.IsCalibrationValid)
    {
        if (spPose.Eyes.Gaze != null)
        {
            OriginXtextBlock.Text = spPose.Eyes.Gaze.Value.Origin.X.ToString();
            OriginYtextBlock.Text = spPose.Eyes.Gaze.Value.Origin.Y.ToString();
            OriginZtextBlock.Text = spPose.Eyes.Gaze.Value.Origin.Z.ToString();
            DirectionXtextBlock.Text = spPose.Eyes.Gaze.Value.Direction.X.ToString();
            DirectionYtextBlock.Text = spPose.Eyes.Gaze.Value.Direction.Y.ToString();
            DirectionZtextBlock.Text = spPose.Eyes.Gaze.Value.Direction.Z.ToString();
        }
    }
}
However, I haven't found a way to get the 2D UWP app window frame's bounding box location in 3D space, or specifics on how the app window's pixel resolution maps onto that frame, so that I could resolve the x,y screen coordinates where the eye-gaze ray intersects the 2D app window.
I'm basically looking to get a similar kind of result with eye gaze as is described in the online help docs (which seems to actually be implemented for the hand-recognition pointing finger). But if it is not exposed through something like a pointer event, I am fine with using the SpatialPointerPose API if there is a way to complete the last mile of querying the location of the 2D app window frame and resolving the screen coordinates intersected by the eye-gaze ray.
The gaze behavior described in the 2D app design considerations documentation is for first generation HoloLens.
On HoloLens 2, Gaze will trigger Pointer events in 2D apps only when the user is in Select mode. This is to prevent the "inadvertent pop-up or unwanted interaction" problem apps had with always-on Gaze; Gaze interaction now occurs only intentionally.
We'll update the docs to clarify.
There is no way for a 2D app to find its location in holographic space, so there is no consistent way to map the SpatialPointerPose into the 2D app's coordinate space. This API is useful only for holographic apps.
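For reference, a minimal sketch of reading that hover position through ordinary UWP pointer events (on HoloLens 2 this only fires for Gaze while the user is in Select mode, as noted above; RootGrid and GazeTextBlock are illustrative element names, not from the original post):
// Wired up in XAML or the page constructor, e.g. RootGrid.PointerMoved += RootGrid_PointerMoved;
// Requires Windows.UI.Input and Windows.UI.Xaml.Input.
private void RootGrid_PointerMoved(object sender, PointerRoutedEventArgs e)
{
    PointerPoint point = e.GetCurrentPoint(RootGrid);
    // Per the docs quoted above, the hover arrives as a Touch pointer that is not in contact.
    if (point.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch && !point.IsInContact)
    {
        GazeTextBlock.Text = $"{point.Position.X:F0}, {point.Position.Y:F0}";
    }
}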

Vuforia / Lean Touch and cancelling background interference with objects

I've been working on a project for a while, and ultimately what I have is a Minority Report-style setup with AR screens in front of me. I'm using a Unity plugin called Lean Touch which lets me tap on the mobile screen and select the 3D objects. I can then move them around and scale them, which is great.
I have an undesired side effect though; under a different situation this would be fantastic, but in this case it's hurting my head. If you trigger the target using a user-defined target, the screen appears in front of you with the camera (real world) behind it. I then tap on the item and can drag on my mobile device to interact. The problem is, if someone walks past my mobile in the background, the tracking 'pushes' my item off my screen as they walk past.
Under a different setup I could take advantage of this: I wave my hand behind the camera and can interact with the item. Unfortunately all I can do is push it from side to side, not scale it or anything else, and it's making the experience problematic. I would therefore like help to find out how to stop the tracking from moving my objects when the camera background changes, thus leaving my objects on screen until I touch the screen to interact.
I'm not sure exactly what needs turning on or off, whether it's a Vuforia setting or a Lean Touch/Unity setting with layers etc., or a Rigidbody or something to add. I need directional advice on which way to go, as I'm going in circles at the moment.
Screenshot below.
http://imgur.com/a/NYekP
If I wave my hand behind the device or someone walks past, it moves the AR elements.
Ideas would be appreciated.
The only code that interacts with an object is below. It sits on a plane used as a holder object for the interactive screen prefab, so Lean Touch will move the plane (the screen itself is treated as a GUI element and is therefore not interactable). The last line is added to hide the original panel so the duplicate isn't visible when I Lean Touch drag the screen.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class activateChildObjects : MonoBehaviour
{
    public GameObject MyObjName;
    //Get the original GUI item to disable on drag of the cloned item
    public string pathToDeactivate;

    void Update()
    {
        if (DefaultTrackableEventHandler.lostFound == true)
        {
            foreach (Transform child in MyObjName.transform)
            {
                child.gameObject.SetActive(true);
            }
            Debug.Log("GOT IT");
        }
        else
        {
            foreach (Transform child in MyObjName.transform)
            {
                child.gameObject.SetActive(false);
            }
            Debug.Log("Lost");
        }
        //Remove original panel
        GameObject.Find("UserDefinedTarget/" + pathToDeactivate).SetActive(false);
    }
}
EDIT
OK, I haven't stopped it from happening, but after numerous code attempts I just ticked all the tracking options to try to prevent additional tracking once found. It's not right, but it's an improvement and will have to do for now.
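For what it's worth, one direction that could be tried (a hedged sketch, not from the original post): once the target has been found, detach the spawned screen content from the trackable so later pose updates no longer push it around. The component name below is illustrative; lostFound reuses the custom flag from the code above.
using UnityEngine;

// Hypothetical helper placed on the spawned screen content.
public class DetachAfterFound : MonoBehaviour
{
    private bool detached;

    void Update()
    {
        // Once the user-defined target has been found, unparent the content so
        // further tracking jitter (e.g. someone walking through the background)
        // no longer moves it. SetParent(null, true) keeps the current world pose.
        if (!detached && DefaultTrackableEventHandler.lostFound)
        {
            transform.SetParent(null, true);
            detached = true;
        }
    }
}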

How to initiate click-to-activate code in Unity 3D?

I am designing a shooting game in Unity 3D, with the requirement that shooting should only occur when an image is tapped. Here is the code I'm using for this purpose:
if (Input.GetMouseButtonDown(0))
{
    if (!clickedConfirmed)
    {
        GameObjectClicked = GetClickedGameObject();
        clickedConfirmed = true;
    }
}
Currently, shooting occurs when anywhere on the screen is clicked. How can shooting be bound only to the GameObject (the image) instead of being activated by a click anywhere on the screen?
You need to give more context in your example, especially what GetClickedGameObject() is actually doing internally.
As such I'm limited in what I can say is the problem, but I suspect that you aren't using GameObjectClicked correctly.
I would expect that you'd do a check such as:
this.gameObject == GameObjectClicked
Note that if this is being used as a way of developing a UI, there are more elegant solutions out there, including the long-awaited new GUI system included in the Unity3D 4.6 open beta.
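A minimal sketch of how that check could be combined with a raycast-based GetClickedGameObject() (the question doesn't show its implementation, so this is an assumption; the component name and the Debug.Log stand-in for the shooting routine are illustrative):
using UnityEngine;

// Hypothetical component placed on the image object (which needs a Collider).
public class ShootOnTap : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Only shoot when the tap actually landed on this object.
            if (GetClickedGameObject() == gameObject)
            {
                Debug.Log("Shoot!"); // replace with the actual shooting routine
            }
        }
    }

    // One common implementation: raycast from the tap/click position into the scene.
    GameObject GetClickedGameObject()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            return hit.collider.gameObject;
        }
        return null;
    }
}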

Generating Keystrokes with the Kinect

I'm currently working on a project for the final year of my degree and am having some trouble with the coding aspect, being fairly new to C# and Unity. I'm using the Kinect, Unity and the Zigfu package, and I want to use the person's position in relation to the Kinect to generate keystrokes. E.g. if the player is closer to the Kinect it will trigger the forward button to be pressed; if they are farther away it will trigger the back button, with a neutral area in the middle.
//has user moved back
if (rootPosition.z < -2)
{
    //print(rootPosition.z);
    v = -1;
}
//has user moved forward
if (rootPosition.z > -1)
{
    //print(rootPosition.z);
    v = 1;
}
I've managed to find the section which registers where I am in relation to the Kinect, but I don't know how to trigger a keystroke. Any help with this matter would be greatly appreciated, as the deadline is fast approaching and I'm struggling with the technical side of things.
I've used something like this in the past for Simulated Keystrokes:
Windows Input Simulator (C# SendInput Wrapper - Simulate Keyboard and Mouse)
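A hedged sketch of how that library is typically used (assuming the Windows Input Simulator assembly is referenced and the game runs as a Windows build; the choice of UP/DOWN keys and the method name are illustrative):
using WindowsInput;
using WindowsInput.Native;

// ...
private readonly InputSimulator inputSimulator = new InputSimulator();

// v comes from the depth check above: 1 = player moved forward, -1 = moved back.
void SendMoveKey(int v)
{
    if (v == 1)
    {
        inputSimulator.Keyboard.KeyPress(VirtualKeyCode.UP);   // "forward" key, illustrative
    }
    else if (v == -1)
    {
        inputSimulator.Keyboard.KeyPress(VirtualKeyCode.DOWN); // "back" key, illustrative
    }
}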

Capture Kinect event to a ScatterViewItem

I'm attempting to adapt an MS Surface application to allow use of a Kinect. Using the code4fun libraries, I'm able to generate an event from the Kinect when a user puts their hand towards the screen. What I'm missing is how to trigger a ScatterViewItem's touch or click event to grab the item, and then release it once finished moving. From the Kinect skeleton model I can get adjusted x/y coordinates, which I could apply if I can trap the right events in the ScatterViewItem. Any code suggestions would be appreciated.
regards,
Rob
If you are just looking to move the item, the easiest thing is to set the ScatterViewItem's Center property to the translated x/y coordinates. You can then control when the item is 'grabbed' fairly easily using whatever conditions you want.
If you are also after pinch/zoom, you'll have to do some playing around. Since the Kinect doesn't have the resolution to detect the fingers pinching and zooming, you could implement this by mapping the Z coordinate of the hand to preset sizes on the grabbed ScatterViewItem.
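A hedged sketch of that approach (the Surface SDK's ScatterViewItem exposes a Center property; scatterItem, grabbed and the depth-to-size mapping values below are illustrative, not from the original post):
// Called whenever a new skeleton frame gives an adjusted hand position.
void OnHandMoved(double screenX, double screenY, double handZ)
{
    if (!grabbed)
        return;

    // Move the item by setting its Center to the screen-mapped hand position.
    scatterItem.Center = new Point(screenX, screenY);

    // Optional pinch/zoom substitute: map hand depth (metres) to a preset size range.
    double size = MapRange(handZ, 1.0, 2.5, 400, 150); // near = larger, far = smaller (example values)
    scatterItem.Width = size;
    scatterItem.Height = size;
}

// Linear interpolation of value from [inMin, inMax] to [outMin, outMax], clamped.
static double MapRange(double value, double inMin, double inMax, double outMin, double outMax)
{
    double t = (value - inMin) / (inMax - inMin);
    t = Math.Max(0, Math.Min(1, t));
    return outMin + (outMax - outMin) * t;
}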
