I'm currently working on my final-year degree project and am having some trouble with the coding side, being fairly new to C# and Unity. I'm using the Kinect with Unity and the Zigfu package, and I want to use the person's position relative to the Kinect to generate keystrokes. For example, if the player is closer to the Kinect it will trigger the forward button; if they are farther away it will trigger the back button, with a neutral area in the middle.
// Has the user moved back?
if (rootPosition.z < -2)
{
    //print(rootPosition.z);
    v = -1;
}
// Has the user moved forward?
else if (rootPosition.z > -1)
{
    //print(rootPosition.z);
    v = 1;
}
// Neutral area in the middle
else
{
    v = 0;
}
I've managed to find the section that registers where I am in relation to the Kinect, but I don't know how to trigger a keystroke. Any help would be greatly appreciated, as the deadline is fast approaching and I'm struggling with the technical side of things.
I've used something like this in the past for simulated keystrokes:
Windows Input Simulator (C# SendInput Wrapper - Simulate Keyboard and Mouse)
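Putting the two pieces together, one way (a sketch, not the only approach) is to map the z value to a direction zone once, and only send a key event when the zone changes. The zone logic below is plain C# with thresholds taken from the question's code; the `InputSimulator` calls in the comment are hypothetical usage of the Windows Input Simulator library linked above:

```csharp
using System;

static class KinectZones
{
    // Map the user's distance from the Kinect to a direction:
    // -1 = back zone, 0 = neutral zone, +1 = forward zone.
    // Thresholds match the question's code (z < -2 back, z > -1 forward).
    public static int ZoneFor(float z)
    {
        if (z < -2f) return -1;
        if (z > -1f) return 1;
        return 0;
    }
}

// In the Unity update loop you would track the previous zone and only
// simulate a key event when the zone changes, e.g. (hypothetical usage
// of the Windows Input Simulator library):
//
//   int zone = KinectZones.ZoneFor(rootPosition.z);
//   if (zone != lastZone)
//   {
//       if (zone == 1)  sim.Keyboard.KeyDown(VirtualKeyCode.VK_W);
//       if (zone == -1) sim.Keyboard.KeyDown(VirtualKeyCode.VK_S);
//       if (zone == 0)  { sim.Keyboard.KeyUp(VirtualKeyCode.VK_W);
//                         sim.Keyboard.KeyUp(VirtualKeyCode.VK_S); }
//       lastZone = zone;
//   }
```

Sending the key only on a zone change avoids flooding the target application with repeated key-down events every frame.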
The online docs (2D app design considerations: UI/UX) state that in 2D UWP apps:
When a user gazes towards something or points with a motion controller, a touch hover event will occur. This consists of a PointerPoint where PointerType is Touch, but IsInContact is false.
This implies that eye gaze is automatically mapped into your 2D app window on HoloLens 2 and is available through events like PointerMoved. However, based on my tests, it is not eye gaze (or even head gaze) that is passed into the app as a touch hover event; on the HoloLens 2 it is actually the pointing ray cast by your finger through hand recognition when you point at the 2D app window frame in 3D space. But if someone is aware of eye gaze actually being mapped and available inside a 2D UWP app window, please let me know.
The next approach would seem to be leveraging the SpatialPointerPose API.
I am able to grab the starting position of the HoloLens when the app launches and hang on to a stationary reference to that location with the following lines of code:
private async void Page_Loaded(object sender, RoutedEventArgs e)
{
    if (EyesPose.IsSupported())
    {
        var ret = await EyesPose.RequestAccessAsync();
        originLocator = SpatialLocator.GetDefault();
        originRefFrame = originLocator.CreateStationaryFrameOfReferenceAtCurrentLocation();
        coordinateSystem = originRefFrame.CoordinateSystem;
    }
    gazeTimer.Start();
}
I have also had success using that reference frame to get the coordinate system. By using a dispatcher timer, I can pass the coordinate system along with a perception timestamp to get a SpatialPointerPose object, from which I can retrieve the eye-gaze origin and direction and display the values on screen in XAML text blocks:
private void GazeTimer_Tick(object sender, object e)
{
    timestamp = PerceptionTimestampHelper.FromHistoricalTargetTime(DateTime.Now);
    spPose = SpatialPointerPose.TryGetAtTimestamp(coordinateSystem, timestamp);
    // TryGetAtTimestamp can return null, so check before dereferencing
    if (spPose != null && spPose.Eyes != null && spPose.Eyes.IsCalibrationValid)
    {
        if (spPose.Eyes.Gaze != null)
        {
            OriginXtextBlock.Text = spPose.Eyes.Gaze.Value.Origin.X.ToString();
            OriginYtextBlock.Text = spPose.Eyes.Gaze.Value.Origin.Y.ToString();
            OriginZtextBlock.Text = spPose.Eyes.Gaze.Value.Origin.Z.ToString();
            DirectionXtextBlock.Text = spPose.Eyes.Gaze.Value.Direction.X.ToString();
            DirectionYtextBlock.Text = spPose.Eyes.Gaze.Value.Direction.Y.ToString();
            DirectionZtextBlock.Text = spPose.Eyes.Gaze.Value.Direction.Z.ToString();
        }
    }
}
However, I haven't found a way to get the 2D UWP app window frame's bounding box location in 3D space, or specifics of how the app window's pixel resolution maps to that frame, so that I could resolve the x,y screen coordinates where the eye-gaze ray intersects the 2D app window.
I'm basically looking to get a similar kind of result with eye gaze as is described in the online help docs (which seems to actually be implemented for the hand-recognition pointing finger). But if it is not exposed through something like a pointer event, I am fine with using the SpatialPointerPose API if there is a way to complete the last mile: querying the location of the 2D app window frame and resolving the screen coordinates intersected by the eye-gaze ray.
The gaze behavior described in the 2D app design considerations documentation is for first generation HoloLens.
On HoloLens 2, gaze will trigger pointer events in 2D apps when the user is in select mode. This is to prevent the "inadvertent pop-up or unwanted interaction" problem apps had with gaze; gaze interaction now occurs only intentionally.
We'll update the docs to clarify.
There is no way for a 2D app to find its location in holographic space, so there is no consistent way to map the SpatialPointerPose into the 2D app's coordinate space. This API is useful only for holographic apps.
I've been working on a project for a while, and ultimately what I have is a Minority Report-style setup with AR screens in front of me. I'm using a Unity plugin called Lean Touch, which allows me to tap on the mobile screen and select the 3D objects. I can then move them around and scale them, which is great.
I have an undesired side effect, though. In a different situation this would be fantastic, but in this case it's hurting my head. If you trigger the target using a user-defined target, the screen appears in front of you with the camera (real world) behind it. I then tap on the item and can drag on my mobile device to interact. The problem is, if someone walks past my mobile in the background, the tracking 'pushes' my item off the screen as they walk past.
Under a different setup I could take advantage of this: I can wave my hand behind the camera and interact with the item. Unfortunately, all I can do is push it from side to side, not scale it or anything else, and it's making the experience problematic. I would therefore like help figuring out how to stop tracking my object's position when the camera background changes, leaving my objects on screen until I touch the screen to interact.
I'm not sure exactly what needs turning on or off: whether it's a Vuforia setting, a Lean Touch/Unity setting with layers, a Rigidbody, or something else to add. I need directional advice on which way to go, as I'm going in circles at the moment.
Screenshot below.
http://imgur.com/a/NYekP
If I wave my hand behind it, or someone walks past, it moves the AR elements.
Ideas would be appreciated.
The only code that interacts with an object is below. It sits on a plane used as a holder object for the interactive screen prefab, so Lean Touch will move the plane (the screen thinks it's a GUI element and is therefore not interactable). The last line is added to hide the original plane so the duplicate isn't visible when I Lean Touch-drag the screen.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class activateChildObjects : MonoBehaviour
{
    public GameObject MyObjName;
    // Get the original GUI item to disable on drag of the cloned item
    public string pathToDeactivate;

    void Update()
    {
        if (DefaultTrackableEventHandler.lostFound)
        {
            foreach (Transform child in MyObjName.transform)
            {
                child.gameObject.SetActive(true);
            }
            Debug.Log("GOT IT");
        }
        else
        {
            foreach (Transform child in MyObjName.transform)
            {
                child.gameObject.SetActive(false);
            }
            Debug.Log("Lost");
        }

        // Remove original panel. GameObject.Find cannot find inactive objects,
        // so guard against null once the panel has been deactivated.
        var original = GameObject.Find("UserDefinedTarget/" + pathToDeactivate);
        if (original != null)
        {
            original.SetActive(false);
        }
    }
}
EDIT
OK, I haven't stopped it from happening, but after numerous code attempts I just ticked all the tracking options to try to prevent additional tracking once the target is found. It's not right, but it's an improvement and will have to do for now.
I am pretty new to C# and I wanted to make a simple 2D RPG (role-playing game) character that can move around with a walking animation using the 'W', 'A', 'S', 'D' keys. To do that, I used a PictureBox to hold the character image and two Timer controls: one manages the walking animation by changing the picture every 100 ms, and the other moves the PictureBox every 1 ms.
In the Form_KeyDown event I set both timers' Enabled = true whenever the user presses one of the movement keys, and in the Form_KeyUp event I set them back to Enabled = false to indicate that the character is no longer moving.
Here is the first timer's code, which controls the animation by changing the picture on each tick:
private void timerchangepic_Tick(object sender, EventArgs e)
{
    // movementPhase determines the picture to be displayed; incrementing it
    // every tick means the character image changes every tick
    movementPhase++;
    if (movementPhase > 4) movementPhase = 1;

    // determine which image should be displayed
    if (charDirection == Direction.Front)
    {
        if (movementPhase == 1)
            pbcharacter.BackgroundImage = Image.FromFile("Icon\\front.png");
        else if (movementPhase == 2)
            pbcharacter.BackgroundImage = Image.FromFile("Icon\\front2.png");
        else if (movementPhase == 3)
            pbcharacter.BackgroundImage = Image.FromFile("Icon\\front3.png");
        else if (movementPhase == 4)
            pbcharacter.BackgroundImage = Image.FromFile("Icon\\front4.png");
    }
    // and the same for the other 3 directions (left, right, and back)
}
Here is the second timer's code, which moves the character's location on each tick:
private void timermovement_Tick(object sender, EventArgs e)
{
    if (charDirection == Direction.Front)
    {
        pbcharacter.Location = new Point(pbcharacter.Location.X, pbcharacter.Location.Y + 5);
    }
    // and the same for the other 3 directions (left, right, and back)
}
My problem is that the character doesn't move well when I hold one of the movement keys. For the first second it works fine, but after a few seconds (2-3) of pressing and holding 'S', the character stops, moves a little, stops again, and so on. Also, the animation only works for one lap: the picture changes from 'front' through 'front4' correctly, but never goes from 'front4' back to 'front'. In short, the animation runs for one cycle and then the character becomes a static image that stutters along whenever I hold the 'S' key.
What is wrong with my code? Are there any better approaches to implementing a moving 2D character with animation?
I suggest you use something more specific to build your application: XNA, MonoGame, or Unity3D. But if you are sticking with WinForms, I have several suggestions:
1) Cache images instead of loading them from file every time.
2) Because the Timer event interval is not very accurate, calculate the elapsed time since the last event and make the +5 movement proportional to it.
3) Instead of using several timers, organize a game loop to handle your events.
4) Use double buffering on your form.
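Suggestions 1) and 2) can be sketched roughly as follows. The class and member names here are illustrative, not from the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// 1) Cache: load each resource once and reuse it, instead of calling
//    Image.FromFile on every tick (disk I/O every 100 ms causes stutter).
class ResourceCache<T>
{
    private readonly Dictionary<string, T> cache = new Dictionary<string, T>();
    private readonly Func<string, T> load;

    public ResourceCache(Func<string, T> load) { this.load = load; }

    public T Get(string key)
    {
        if (!cache.TryGetValue(key, out var value))
        {
            value = load(key);       // loaded only on the first request
            cache[key] = value;
        }
        return value;
    }
}

// 2) Frame-rate independence: scale movement by real elapsed time
//    instead of a fixed +5 pixels per tick.
class MovementClock
{
    private readonly Stopwatch sw = Stopwatch.StartNew();
    private double last;

    // Seconds since the previous call.
    public double NextDelta()
    {
        double now = sw.Elapsed.TotalSeconds;
        double dt = now - last;
        last = now;
        return dt;
    }

    // Pixels to move this frame at `speed` pixels per second.
    public static int Step(double speed, double dtSeconds)
        => (int)Math.Round(speed * dtSeconds);
}
```

In the tick handlers you would then fetch sprites via something like `images.Get("Icon\\front2.png")` and move by `MovementClock.Step(100, clock.NextDelta())` pixels, so a late or skipped tick no longer slows the character down.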
Building a game using Windows Forms can be incredibly hard, and also extremely inefficient. If you were to use XNA, which isn't that difficult to learn, you could create a much better and much stronger game.
If you're using WinForms, I will presume you're a beginner, so I wouldn't bother with Unity or MonoGame, as they are much more complicated. If you still reject this, I would advise you to:
Cache images
Calculate the elapsed time more accurately and make movement depend on it
Use a loop to handle each tick (a game loop); this point should fix your problem
Hope to have been of help.
I think if you must make your game using Windows Forms, then you need to at least handle your updates with a game loop instead of form timers, and if you're moving things around, you're going to need some kind of clock to help your game run at the same speed on any CPU.
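A common shape for that loop-plus-clock combination is a fixed-timestep accumulator. The sketch below (plain C#, illustrative names) decides how many fixed update steps to run for a given amount of real elapsed time, so the simulation advances at the same rate on any CPU:

```csharp
using System;

// Fixed-timestep accumulator: real elapsed time is banked, and the game
// state is advanced in fixed-size steps so simulation speed is CPU-independent.
class FixedTimestep
{
    private readonly double stepSeconds;
    private double accumulator;

    public FixedTimestep(double stepSeconds) { this.stepSeconds = stepSeconds; }

    // Returns how many fixed update steps to run for this frame;
    // any leftover time stays banked for the next frame.
    public int Advance(double elapsedSeconds)
    {
        accumulator += elapsedSeconds;
        int steps = (int)(accumulator / stepSeconds);
        accumulator -= steps * stepSeconds;
        return steps;
    }
}
```

In a WinForms app the driving loop is often hooked to Application.Idle: each pass measures the real elapsed time, asks Advance how many steps to simulate, runs the game logic that many times, and then repaints once.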
I'm using MonoGame implementation of XNA to create a Windows Phone game. The player should be able to move objects across the level using flick gestures. I'm using the TouchPanel class to read gestures.
Here's how I initialize the TouchPanel:
TouchPanel.EnabledGestures = GestureType.Flick;
Here is how I read the gestures in the Update method:
while (TouchPanel.IsGestureAvailable)
{
var g = TouchPanel.ReadGesture();
...
}
However, the only field that is filled in is the Delta vector. How do I find out the point where the user started the gesture?
Since I want my game to be cross-platform, I cannot rely on non-XNA code such as Silverlight gesture handlers.
Using the Flick gesture to move objects is not a good idea, as flicks are positionless.
You should use the Hold or FreeDrag gesture instead, detecting the object and then moving it across the screen.
Flick is intended for quick swipe interactions such as scrolling through content, so I suggest you choose another gesture.
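A rough sketch of the drag flow suggested above. The MonoGame-specific parts are shown in comments; the hit test itself is plain C# with illustrative names (your object type and fields will differ):

```csharp
using System;

// Minimal axis-aligned hit test: does the gesture's position fall inside
// the object's bounds? FreeDrag (unlike Flick) carries a Position.
struct Box
{
    public float X, Y, Width, Height;

    public bool Contains(float px, float py)
        => px >= X && px <= X + Width && py >= Y && py <= Y + Height;
}

// In Update you would enable FreeDrag instead of Flick:
//
//   TouchPanel.EnabledGestures = GestureType.FreeDrag | GestureType.DragComplete;
//
//   while (TouchPanel.IsGestureAvailable)
//   {
//       var g = TouchPanel.ReadGesture();
//       if (g.GestureType == GestureType.FreeDrag)
//       {
//           // g.Position is the current touch point, g.Delta the movement
//           if (dragged == null && box.Contains(g.Position.X, g.Position.Y))
//               dragged = pickedObject;           // start the drag on this object
//           if (dragged != null)
//               dragged.Move(g.Delta.X, g.Delta.Y);
//       }
//       else if (g.GestureType == GestureType.DragComplete)
//       {
//           dragged = null;                       // finger lifted, release
//       }
//   }
```

Because FreeDrag fires continuously while the finger moves, you get both the position (for picking the object) and the delta (for moving it), which is exactly what Flick's delta-only GestureSample cannot provide.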
I'm attempting to adapt an MS Surface application to allow use of a Kinect. Using the Coding4Fun libraries, I'm able to generate an event from the Kinect when a user puts their hand towards the screen, but what I'm missing is how to trigger a ScatterViewItem's touch or click event to grab an item, and then release it once finished moving. From the Kinect skeleton model I can get adjusted x/y coordinates, which I could apply if I can trap the right events in the ScatterViewItem. Any code suggestions would be appreciated.
regards,
Rob
If you are just looking to move the item, the easiest thing is to set the ScatterViewItem's Center property to the translated x/y coordinates. You can then control when the item is 'grabbed' fairly easily using whatever conditions you want.
If you are also after pinch/zoom, you'll have to do some playing around. Since the Kinect doesn't have the resolution to detect the fingers pinching and zooming, you could implement this by mapping the Z coordinate of the hand to preset sizes on the grabbed ScatterViewItem.
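The z-to-size idea in the last paragraph could be sketched like this (plain C#; the range constants are illustrative assumptions to tune for your setup, not values from the Kinect SDK):

```csharp
using System;

static class HandZoom
{
    // Map the hand's distance from the sensor (meters) to an item size
    // (pixels), linearly interpolating between near/far presets and
    // clamping outside the range.
    public static double SizeFor(double handZ,
                                 double nearZ = 0.8, double farZ = 2.0,
                                 double maxSize = 600, double minSize = 200)
    {
        if (handZ <= nearZ) return maxSize;   // hand close -> item large
        if (handZ >= farZ) return minSize;    // hand far   -> item small
        double t = (handZ - nearZ) / (farZ - nearZ);
        return maxSize + t * (minSize - maxSize);
    }
}
```

While the item is in its 'grabbed' state, you would apply the result to the ScatterViewItem's Width and Height (keeping Center at the translated x/y), so pushing the hand toward or away from the sensor stands in for pinch/zoom.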