How to display only certain Kinect data? - C#

I am using Kinect v2 with C# to detect hand gestures. All of the algorithms I use work correctly. The problem is that I want to display only the detected hand, not the whole body, and to draw all of the hand's points on a black background. How can I do that?
This is the code that gets the points of the hand contour.
private void HandsController_HandsDetected(object sender, HandCollection e)
{
    // Display the results!
    if (e.HandLeft != null)
    {
        point = e.HandLeft.ContourDepth;
    }
}

For example, you can write it like this:
// left hand in front of the left elbow, right hand below the spine base
if (body.Joints[JointType.HandLeft].Position.Z < body.Joints[JointType.ElbowLeft].Position.Z &&
    body.Joints[JointType.HandRight].Position.Y < body.Joints[JointType.SpineBase].Position.Y)
{
    // Action here
}
You can see a sample of how other libraries use this kind of check here.
I also have a tutorial on how to implement swipe gestures using the Vitruvius library; do check it out! :D
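For the black-background part of the question, here is a rough sketch of drawing the contour points onto a plain WPF Canvas. It assumes ContourDepth is a collection of points with X/Y in depth-frame pixels (as in the Vitruvius samples) and that HandCanvas is a Canvas declared in your XAML with its Background set to black; both names are illustrative, not part of the library:
private void HandsController_HandsDetected(object sender, HandCollection e)
{
    // Start from an empty black canvas every frame.
    HandCanvas.Children.Clear();

    if (e.HandLeft != null)
    {
        // Draw one small white dot per contour point; only the hand is shown.
        foreach (var contourPoint in e.HandLeft.ContourDepth)
        {
            var dot = new Ellipse { Width = 2, Height = 2, Fill = Brushes.White };
            Canvas.SetLeft(dot, contourPoint.X);
            Canvas.SetTop(dot, contourPoint.Y);
            HandCanvas.Children.Add(dot);
        }
    }
}
The same loop can be repeated for e.HandRight if you want both hands rendered.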

Related

OnHandMeshUpdated() not triggered in Mixed Reality Toolkit

I'm working on a Mixed Reality app in Unity.
I'm trying to update my own hand mesh according to:
https://learn.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/features/input/hand-tracking?view=mrtkunity-2021-05#hand-mesh-prefab
public void OnHandMeshUpdated(InputEventData<HandMeshInfo> eventData)
{
    print("Hand mesh update!");
    if (eventData.Handedness == myHandedness)
    {
        myMesh.vertices = eventData.InputData.vertices;
        myMesh.normals = eventData.InputData.normals;
        myMesh.triangles = eventData.InputData.triangles;
        if (eventData.InputData.uvs != null && eventData.InputData.uvs.Length > 0)
        {
            myMesh.uv = eventData.InputData.uvs;
        }
    }
}
However, OnHandMeshUpdated(InputEventData&lt;HandMeshInfo&gt; eventData) never triggers. I use Holographic Remoting, and I made sure I set the Hand Mesh Visualization Modes to "Everything" in MRTK -> Articulated Hand Tracking.
[Screenshot: MRTK Articulated Hand Tracking settings]
What am I missing? How can I animate my own hand mesh?
Unity: 2020.3.26f1
MRTK: 2.7.3
Check whether you registered for global input events.
Maybe this link will provide some information:
https://learn.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/mrtk2/features/input/input-events?view=mrtkunity-2022-05
I tried the global input registration example from that link and it worked.
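For reference, a minimal sketch of the global registration pattern from that page, assuming the script implementing OnHandMeshUpdated also declares IMixedRealityHandMeshHandler (the mesh-updating body itself stays the same as in the question):
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class HandMeshListener : MonoBehaviour, IMixedRealityHandMeshHandler
{
    private void OnEnable()
    {
        // Without global registration the component only receives focus-based
        // (pointer) events, so OnHandMeshUpdated never fires.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealityHandMeshHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealityHandMeshHandler>(this);
    }

    public void OnHandMeshUpdated(InputEventData<HandMeshInfo> eventData)
    {
        // Copy vertices/normals/triangles/uvs into your own mesh here,
        // exactly as in the code above.
    }
}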

Camera stretched after deinitializing

I am using Cardboard v1.4.1 and want to be able to toggle in and out of Cardboard VR. When transitioning out of Cardboard, the camera is stretched (I assume one of the eyes is stretched across the screen). I am using the following code, which was provided in the example.
private void StopXR()
{
    Debug.Log("Stopping XR...");
    XRGeneralSettings.Instance.Manager.StopSubsystems();
    Debug.Log("XR stopped.");

    Debug.Log("Deinitializing XR...");
    XRGeneralSettings.Instance.Manager.DeinitializeLoader();
    Debug.Log("XR deinitialized.");
}
What would be the best way to approach this?
Try this:
void StopXR()
{
    var xrManager = XRGeneralSettings.Instance.Manager;
    if (!xrManager.isInitializationComplete)
        return; // Safety check

    xrManager.StopSubsystems();
    xrManager.DeinitializeLoader();
}
And if you are experiencing a distorted or stretched screen after exiting VR:
camera.ResetAspect();
camera.fieldOfView = defaultFov;
camera.ResetProjectionMatrix();
camera.ResetWorldToCameraMatrix();
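Putting the two pieces together, a rough sketch of a full toggle might look like the following. InitializeLoader() is a coroutine, so starting XR again has to be yielded; defaultFov is simply whatever vertical field of view your non-VR camera normally uses (the field names here are placeholders, not part of the Cardboard sample):
using System.Collections;
using UnityEngine;
using UnityEngine.XR.Management;

public class XRToggle : MonoBehaviour
{
    [SerializeField] private Camera mainCamera;
    [SerializeField] private float defaultFov = 60f;

    public IEnumerator StartXR()
    {
        yield return XRGeneralSettings.Instance.Manager.InitializeLoader();
        if (XRGeneralSettings.Instance.Manager.activeLoader != null)
        {
            XRGeneralSettings.Instance.Manager.StartSubsystems();
        }
    }

    public void StopXR()
    {
        var xrManager = XRGeneralSettings.Instance.Manager;
        if (!xrManager.isInitializationComplete)
            return; // Safety check

        xrManager.StopSubsystems();
        xrManager.DeinitializeLoader();

        // Undo the per-eye projection the VR subsystem left on the camera.
        mainCamera.ResetAspect();
        mainCamera.fieldOfView = defaultFov;
        mainCamera.ResetProjectionMatrix();
        mainCamera.ResetWorldToCameraMatrix();
    }
}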

Webcam - Camera Preview rotates wrong way

I would like your help, please, with a web camera problem. I've used a library available on NuGet: WebEye.Controls.Wpf.WebCameraControl (version 1.0.0). The URL is https://www.nuget.org/packages/WebEye.Controls.Wpf.WebCameraControl/
The article and instructions are available at: http://www.codeproject.com/Articles/330177/Yet-another-Web-Camera-control
Due to project constraints, I developed a WPF application for a Linx tablet (Windows 10), rather than as a Universal Windows Application. I used the WebEye library to connect to the webcam on the tablet and take pictures with it. It works fine when I hold the tablet in landscape, but not when I hold the tablet in portrait mode. In portrait mode, the CameraPreview/VideoWindow automatically rotates -90 degrees.
I tried to resolve the problem, to no avail:
Rotating the control surrounding the video window via the WebCameraControl's RenderTransform or LayoutTransform property: the control rotates, but the video image does not rotate correctly.
Rotating the VideoWindow inside the WebCameraControl's Content property: I got the source code from GitHub, made the VideoWindow accessible, recompiled the library, and used it to rotate the VideoWindow via its RenderTransform property. https://github.com/jacobbo/WebEye/tree/master/WebCameraControl
No matter what I do, the camera preview is always rotated -90 degrees.
The library is simple and does not have many properties for manipulating the video window.
The web camera control is declared in XAML:
<wpf:WebCameraControl x:Name="webCameraControl"
                      MouseDoubleClick="webCameraControl_MouseDoubleClick"
                      StylusButtonUp="webCameraControl_StylusButtonUp"
                      MouseUp="webCameraControl_MouseUp"
                      TouchUp="webCameraControl_TouchUp"
                      GotMouseCapture="webCameraControl_GotMouseCapture" />
This is how I initialised the web camera. When the UserControl is loaded, it automatically connects to the webcam on the tablet; see the startViewing() function.
private WebCameraId _cameraID = null;

private void UserControl_Loaded(object sender, RoutedEventArgs e)
{
    startViewing();
}

private void startViewing()
{
    List<WebCameraId> cams = (List<WebCameraId>)webCameraControl.GetVideoCaptureDevices();
    if (cams.Count > 0)
    {
        _cameraID = (WebCameraId)cams[0];
        webCameraControl.StartCapture(_cameraID);
    }
}
I tried to force the control to rotate correctly when the app detects a change in the display; see the DisplaySettingsChanged event.
public ucWebCam()
{
    InitializeComponent();
    Microsoft.Win32.SystemEvents.DisplaySettingsChanged += SystemEvents_DisplaySettingsChanged;
}

private void SystemEvents_DisplaySettingsChanged(object sender, EventArgs e)
{
    try
    {
        double angle = 0;
        if (SystemParameters.PrimaryScreenWidth > SystemParameters.PrimaryScreenHeight)
        {
            angle = 0;
        }
        else
        {
            angle = -90;
        }

        webCameraControl.StopCapture();
        adjustWebcamAngle(angle);
        webCameraControl.StartCapture(_cameraID);
    }
    catch (Exception ex)
    {
    }
}
private void adjustWebcamAngle(double angle)
{
    try
    {
        // IGNORE portrait boolean flag
        bool portrait = false;
        if (angle == 90 || angle == 180)
        {
            portrait = true;
        }

        // TRIED TO SET THE ANGLE OF THE CONTROL TO NO AVAIL
        RotateTransform rotTransform = new RotateTransform(angle);
        //rotTransform.Angle = angle;

        ScaleTransform scaleTransform = new ScaleTransform();
        //scaleTransform.ScaleX = (portrait) ? 0 : 1;
        //scaleTransform.ScaleY = (portrait) ? 0 : 1;

        TransformGroup tGroup = new TransformGroup();
        //tGroup.Children.Add(scaleTransform);
        tGroup.Children.Add(rotTransform);

        // ROTATE CAMERA!
        webCameraControl.RenderTransform = tGroup;
    }
    catch (Exception ex)
    {
    }
}
So far I only rotated the WebCam control, not the video image.
I looked through the comments in Alexander's article, with no joy: http://www.codeproject.com/Articles/330177/Yet-another-Web-Camera-control
How can I rotate the Camera preview correctly? Can you please advise where I'm going wrong?
I have attached two images to illustrate my problem.
If you are not a stranger to C#, I would recommend using the Emgu.CV NuGet package and creating your own preview application. It's not that hard, and you could use MVVM to bind the frames to your view, which is better than putting everything in the code-behind of your control.
PS: This is not an answer, but it is a solution to your problem. I cannot post comments yet, as I do not have 50 points :(
Have a nice day.
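If you go the Emgu.CV route, a rough sketch of the capture loop might look like the following. The rotation is applied to the frame pixels themselves rather than to the hosting control, which sidesteps the VideoWindow problem; the conversion of the rotated Mat to a WPF ImageSource is left as a comment because it depends on which Emgu.CV WPF helper you install. Treat the whole thing as a starting point under those assumptions, not drop-in code:
using System;
using System.Windows;
using System.Windows.Threading;
using Emgu.CV;
using Emgu.CV.CvEnum;

public class RotatedPreview : IDisposable
{
    private readonly VideoCapture _capture = new VideoCapture(0); // first camera
    private readonly DispatcherTimer _timer = new DispatcherTimer
    {
        Interval = TimeSpan.FromMilliseconds(33) // roughly 30 fps
    };

    public RotatedPreview()
    {
        _timer.Tick += (s, e) => GrabFrame();
        _timer.Start();
    }

    private void GrabFrame()
    {
        using (Mat frame = _capture.QueryFrame())
        {
            if (frame == null) return;

            bool portrait = SystemParameters.PrimaryScreenWidth <
                            SystemParameters.PrimaryScreenHeight;

            using (Mat display = new Mat())
            {
                if (portrait)
                {
                    // Rotate the pixels, not the control hosting them.
                    CvInvoke.Rotate(frame, display, RotateFlags.Rotate90Clockwise);
                }
                else
                {
                    frame.CopyTo(display);
                }

                // TODO: convert 'display' to a BitmapSource and push it to the
                // Image element in your view (bound via MVVM, as suggested above).
            }
        }
    }

    public void Dispose()
    {
        _timer.Stop();
        _capture.Dispose();
    }
}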

Managing multi-touch events in WPF

I am writing a program using the Surface SDK and .NET 4.0. I have to handle multi-touch events, and I'm having trouble distinguishing between gestures.
With two fingers I want to be able to zoom and rotate, but because the fingers generally do not move in straight lines or perfect circles on the screen, the result is a combination of zoom and rotate. Could someone point out how this problem can be overcome? I am using thresholds to ignore small deviations, but these thresholds need to be tweaked manually and I couldn't find good values for them.
I was thinking I could detect which kind of gesture it is in the OnManipulationStarting method and ignore the rest, but sometimes the gesture starts with just one finger on the screen, and then I identify the wrong gesture.
I am including some code below:
private void OnManipulationDeltaHandler(object sender, ManipulationDeltaEventArgs mdea)
{
    var zoomAmount = Math.Abs(mdea.DeltaManipulation.Scale.Length - Math.Sqrt(2));

    // ZOOM ACTION: 2 fingers and scaling bigger than a threshold
    if ((TouchesOver.Count() == 2) && (zoomAmount > scaleThreshold))
    {
        if (ZoomCommand != null)
        {
            if (Math.Abs(zoomAmount - 0) > 0.1)
            {
                ZoomCommand.Execute((-zoomAmount).ToString());
            }
        }
    }
    else
    {
        var rotateAmount = -mdea.DeltaManipulation.Rotation;
        if ((TouchesOver.Count() == 2))
        {
            headValue += rotateAmount;
            if (HeadRotationCommand != null)
            {
                HeadRotationCommand.Execute(new Orientation(pitchValue, headValue, rotateAmount));
            }
        }
    }

    mdea.Handled = true;
    base.OnManipulationDelta(mdea);
}
Can someone help? Thanks!
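One way to keep zoom and rotation from bleeding into each other is to accumulate the deltas and commit to whichever gesture crosses its threshold first, ignoring the other until the manipulation completes. A rough sketch of that idea, reusing scaleThreshold from the code above together with a hypothetical rotationThreshold (the threshold values still need manual tuning):
private enum ActiveGesture { None, Zoom, Rotate }

private ActiveGesture _gesture = ActiveGesture.None;
private double _accumulatedRotation;
private double _accumulatedScale;

private void OnManipulationDeltaHandler(object sender, ManipulationDeltaEventArgs mdea)
{
    _accumulatedRotation += mdea.DeltaManipulation.Rotation;
    _accumulatedScale += mdea.DeltaManipulation.Scale.Length - Math.Sqrt(2);

    // Lock onto whichever gesture crosses its threshold first.
    if (_gesture == ActiveGesture.None)
    {
        if (Math.Abs(_accumulatedScale) > scaleThreshold)
            _gesture = ActiveGesture.Zoom;
        else if (Math.Abs(_accumulatedRotation) > rotationThreshold)
            _gesture = ActiveGesture.Rotate;
    }

    if (_gesture == ActiveGesture.Zoom)
    {
        // Apply only the zoom command here.
    }
    else if (_gesture == ActiveGesture.Rotate)
    {
        // Apply only the rotation command here.
    }

    mdea.Handled = true;
}

private void OnManipulationCompletedHandler(object sender, ManipulationCompletedEventArgs e)
{
    // Re-arm for the next gesture once all fingers lift.
    _gesture = ActiveGesture.None;
    _accumulatedRotation = 0;
    _accumulatedScale = 0;
}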

WPF Render Transform Behaving Weird

I am experiencing a weird problem with a RenderTransform in WPF. The project I'm working on needs to display a user-clicked point over an image. When the user clicks a point, a custom control is placed at the location of the click. The image should then be scalable around any point using the mouse wheel, and the custom control should be translated (not scaled) to the correct location.
To do this, I handle the MouseWheel event as follows:
private void MapPositioner_MouseWheel(object sender, MouseWheelEventArgs e)
{
    Point location = Mouse.GetPosition(MainWindow.Instance.imageMap);
    MainWindow.Instance.imageMap.RenderTransform = null;

    ScaleTransform st = new ScaleTransform(scale + (e.Delta < 0 ? -0.2 : 0.2),
                                           scale += (e.Delta < 0 ? -0.2 : 0.2));
    st.CenterX = location.X;
    st.CenterY = location.Y;

    TransformGroup tg = new TransformGroup();
    tg.Children.Add(st);
    //tg.Children.Add(tt);
    MainWindow.Instance.imageMap.RenderTransform = tg;

    if (scale <= 1)
    {
        MainWindow.Instance.imageMap.RenderTransform = null;
    }

    if (TransformationChanged != null)
        TransformationChanged();
}
Then, I implemented an event handler in the custom control for the TransformationChanged event seen at the end of the above code block as follows:
private void Instance_TransformationChanged()
{
    // check image coords
    if (MainWindow.Instance.imageMap.RenderTransform != null)
    {
        if (MainWindow.Instance.imageMap.RenderTransform != Transform.Identity)
        {
            Transform st = MainWindow.Instance.imageMap.RenderTransform;
            Point image = MainWindow.VideoOverlayCanvas.TransformToVisual(MainWindow.Instance.MapImage).Transform(loc2);
            Point trans = st.Transform(image);
            Point final = MainWindow.Instance.MapImage.TransformToVisual(MainWindow.VideoOverlayCanvas).Transform(trans);

            // selected = anchor2;
            // final = ClipToOverlay(final);
            // selected = null;

            connector.X2 = final.X;
            connector.Y2 = final.Y;
            Canvas.SetLeft(anchor2, final.X);
            Canvas.SetTop(anchor2, final.Y);
        }
    }
    else
    {
        connector.X2 = loc2.X;
        connector.Y2 = loc2.Y;
        Canvas.SetLeft(anchor2, loc2.X);
        Canvas.SetTop(anchor2, loc2.Y);
    }
}
This way, I can ensure that the custom control's position is updated only after the new transform is set. Note that since I am applying the transform to the point, no scaling is applied to the control; the effect is that it is translated to where it should be. This works fine as long as the user only scales around one point. If they change that point, it doesn't work.
Here are some images that show the problem:
[Image: the user clicks a point]
[Image: the user zooms out, what happened here?]
[Image: after zooming out (all the way out in this case) it looks OK]
I've been messing with this for about two days now, so I apologize if my code looks messy. I know this is a pretty obscure question, so any help would be appreciated.
Thanks,
Max
If anyone is looking for an answer to this: because of deadlines, I had to write a workaround where the user pans with the right mouse button and zooms with the mouse wheel. This way zooming always happens around the center of the image, so the controls always line up. I'm still looking for an answer to the original question, though, if anyone can figure it out.
Thanks,
Max
I'm not sure what's wrong with your transform, but have you considered an alternate approach? For example, you could add a transparent Canvas kept at the same size as the image and z-ordered above it (either set explicitly or just by putting the Canvas element right after the Image element). Then you can simply use Canvas.SetLeft and Canvas.SetTop to place the user control where the user clicked and to move it around. A lot easier than using a transform.
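A minimal sketch of that overlay idea, with illustrative names (imageMap for the existing image, OverlayCanvas for the new transparent canvas); the Background must be set to Transparent rather than left null so the canvas still receives mouse input:
// XAML: put the canvas in the same Grid cell as the image so it covers it:
//   <Grid>
//       <Image x:Name="imageMap" ... />
//       <Canvas x:Name="OverlayCanvas" Background="Transparent" />
//   </Grid>

private void OverlayCanvas_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    Point p = e.GetPosition(OverlayCanvas);

    var marker = new Ellipse { Width = 10, Height = 10, Fill = Brushes.Red };
    OverlayCanvas.Children.Add(marker);

    // Center the marker on the clicked point; later calls to Canvas.SetLeft /
    // Canvas.SetTop are all that is needed to move it around.
    Canvas.SetLeft(marker, p.X - marker.Width / 2);
    Canvas.SetTop(marker, p.Y - marker.Height / 2);
}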
