As a side note, this question has nothing to do with SharpDX, it is purely a Kinect 2.0 SDK question.
I am migrating a completed Kinect 1.8 project to the Kinect 2.0 SDK. In this program I have a WPF front end, but 99% of the code is written in SharpDX for C#. The program hides the KinectRegion cursor and uses the cursor location and grip data as the input to the SharpDX code. With this new version of the Kinect SDK, however, I can't find any way to get the relative cursor data (hand position relative to the user).
I tried using skeleton data to extrapolate the cursor location, just a simple primary shoulder location minus primary hand location (see the sketch below). This has the issue that when the hand occludes the shoulder, the cursor shoots around. If I switch shoulders by reflecting across the spine when the occlusion occurs, I get a momentary jump.
I can think of a way to get this to work, but it will take quite a bit of code. I want to make sure there is no other way before I dive into that. Thanks in advance for any help!
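A minimal sketch of that joint-difference extrapolation, using the Kinect v2 body API (this assumes a tracked Body is already available from a BodyFrameReader; the variable names are only placeholders):
CameraSpacePoint hand = body.Joints[JointType.HandRight].Position;
CameraSpacePoint shoulder = body.Joints[JointType.ShoulderRight].Position;

// "Cursor" as the hand position relative to the shoulder; sign and scaling are
// up to the application. This jumps when the hand occludes the shoulder.
float relX = hand.X - shoulder.X;
float relY = hand.Y - shoulder.Y;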
You should check out Mike Taulty's blog. He uses the KinectCoreWindow to grab the pointer movement. One note of caution, though: this event gets raised for both hands even if they're not "active." I mitigated this by using the body frame to check which hand was higher (in Y) and treating that hand as "active."
...
var window = KinectCoreWindow.GetForCurrentThread();
window.PointerMoved += window_PointerMoved;
...

void window_PointerMoved(object sender, KinectPointerEventArgs e)
{
    if ((!rightHand && e.CurrentPoint.Properties.HandType == HandType.LEFT) ||
        (rightHand && e.CurrentPoint.Properties.HandType == HandType.RIGHT))
    {
        //do something with this hand pointer's location
    }
}

void _bodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    //get and check frame
    //...
    using (frame)
    {
        frame.GetAndRefreshBodyData(_bodies);
        foreach (Body body in _bodies)
        {
            if (body.IsTracked)
            {
                CameraSpacePoint left = body.Joints[JointType.HandLeft].Position;
                CameraSpacePoint right = body.Joints[JointType.HandRight].Position;
                if (left.Y > right.Y)
                    rightHand = false;
                else
                    rightHand = true;
                break; //for this example I'm just looking at the first
                       //tracked body - other handling is required if you
                       //want to keep track of more than one body's hands
            }
        }
    }
}
The other part is that your previous application could hide the KinectRegion cursor; I have not yet found out how to do that with the Kinect v2 (which is actually what brought me to this question, lol).
I am creating a simple application which scrapes some XML relating to the status of some machine tools which are outputting live sensor data and uses the X/Y coordinates of the device to make a little rat dance around the screen.
The rat is placed in the correct location the first time the machine is polled but doesn't move each time the draw function is called by subsequent timer driven events.
I assumed this was just due to the machine being on standby and the only coordinate changes being little jitters of the servos, but just to check I created a random number generator and had the system use the randomly generated coordinates instead of the scaled X/Y data coming in.
I then found that the rat doesn't move!
This is the function where I am drawing the rat(s). (There are 2 systems, but we are only worrying about 'Bakugo' right now.) We are looking particularly at the (DekuWake == false) and (BakuWake == true) branch.
Here I have had the values printed to the console (driven by a timer), and the System.Drawing.Point values are shown to be valid (in range and changing).
The timer is initiated by a button in form1.
Timer event calls polling function which scrapes the XY variables from the site (See my question here for that function - What is wrong with my use of XPath in C#?)
At this point it ascertains whether the status was 'AVAILABLE' (which it is) and sets the 'rat's' 'awake' bool to true (this determines which images are drawn; if a machine is offline the 'rat' stays in its box).
It then scales the coordinates to the resolution of the program window (normally, that is; right now it is stepping through 2 arrays of integers generated when the polling first begins). The update-coordinate function sets the X,Y coords of ImageRat.Bakugo and calls DrawRats().
Why does changing the location of my images not actually relocate the pictureboxes?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Reflection;
using System.Drawing;
using System.Windows.Forms;

namespace XMLRats3
{
    public class Drawing
    {
        private PictureBox HouseImage;
        private PictureBox DekuImage;
        private PictureBox BakuImage;

        public Drawing(PictureBox house, PictureBox deku, PictureBox baku)
        {
            HouseImage = house;
            DekuImage = deku;
            BakuImage = baku;
        }

        public void ClearRats()
        {
            HouseImage.Hide();
            DekuImage.Hide();
            BakuImage.Hide();
        }

        public void DrawRats(bool DekuWake, bool BakuWake) // Call this function using active status of 2 machines
        {
            ClearRats();

            /*// This shows that the generated coordinates are reaching this point successfully
            Console.WriteLine("BAKU X: " + ImageRat.Bakugo.PosX);
            Console.WriteLine("BAKU Y: " + ImageRat.Bakugo.PosY);
            */

            System.Drawing.Point DekuCoord = new System.Drawing.Point(ImageRat.Deku.PosX, ImageRat.Deku.PosY);     // Create a 'System Point' for Deku
            System.Drawing.Point BakuCoord = new System.Drawing.Point(ImageRat.Bakugo.PosX, ImageRat.Bakugo.PosY); // Create a 'System Point' for Bakugo

            if (DekuWake == false)
            {
                DekuImage.Hide();
                if (BakuWake == false)
                {
                    BakuImage.Hide();
                    HouseImage.Image = DesktopApp1.Properties.Resources.bothsleep; // set HouseImage to both sleep
                }
                else
                {
                    BakuImage.Location = BakuCoord;
                    Console.WriteLine("Point:" + BakuCoord);
                    //Console.WriteLine("Reaching Relocation condition"); // Ensure we are getting here as animation not working
                    BakuImage.Show();
                    HouseImage.Image = DesktopApp1.Properties.Resources.dekuSleep; // Set HouseImage to DekuSleep
                }
            }
            else // DekuWake == true
            {
                DekuImage.Show();
                if (BakuWake == true)
                {
                    HouseImage.Image = DesktopApp1.Properties.Resources.nosleep; // Set House image to nosleep
                    DekuImage.Location = DekuCoord;
                    DekuImage.Show();
                    BakuImage.Location = BakuCoord;
                    BakuImage.Show();
                }
                else
                {
                    BakuImage.Hide();
                    HouseImage.Image = DesktopApp1.Properties.Resources.bakusleep; // Set house image to bakusleep
                    DekuImage.Location = DekuCoord;
                    DekuImage.Show();
                }
            }

            HouseImage.Show(); // Out here as it should always happen
        }
    }
}
Ok so I don't have an exact answer on how to resolve this but I can tell you why it occurs and point you in the direction of some knowledge that will help you.
At the time I wrote this code I was (and still am) very new to C# and the concept of multithreaded applications in general.
It's a matter of poor software architecture.
The problem here is that the UI can only be updated from a single thread in C#, and since the timer runs on another thread, anything called from the timer is not allowed to update the UI.
I think it's possible to dip into the UI thread using delegates, though I haven't read into this enough yet to give you exact information on how this is done.
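For what it's worth, a minimal sketch of what that looks like in WinForms (this assumes the timer's Elapsed handler lives in Form1, and 'PollMachines', 'drawing', 'dekuAwake' and 'bakuAwake' are placeholders for your own members):
private void pollTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    // Polling/scraping can stay on the timer's worker thread...
    PollMachines();

    // ...but anything that touches a Control must be marshalled back to the UI thread.
    this.Invoke((Action)(() => drawing.DrawRats(dekuAwake, bakuAwake)));
}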
Hopefully this will help the people who have starred my question!
How do I update the GUI from another thread?
Assuming that you are using a timer from a different class, which occupies a different thread than the main one, for the timer to safely use a UI object you should set that object as the timer's SynchronizingObject. So, assuming that your timer is called timer1, in the method where you set the properties of the timer you should write the following line of code:
timer1.SynchronizingObject = yourPictureBox;
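A minimal sketch of the whole setup (assuming a System.Timers.Timer created in code; the form itself, or any other control, can serve as the synchronizing object, and the DrawRats call here is just a placeholder):
private System.Timers.Timer timer1;

private void SetUpTimer()
{
    timer1 = new System.Timers.Timer(1000);  // poll once per second
    timer1.SynchronizingObject = this;       // 'this' is the form; a PictureBox works too
    timer1.Elapsed += (s, e) => drawing.DrawRats(dekuAwake, bakuAwake); // now raised on the UI thread
    timer1.Start();
}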
I hope that solves your problem
I'm developing an application for a Windows Surface Pro. I need to capture the pen pressure on the screen, and this is what I have figured out so far.
After solving some issues with events (StylusDown, StylusUp and the other Stylus events are never raised, only mouse events), I landed on a workaround.
I will show you the code (taken from a Microsoft guide).
Basically it is a filter for raw stylus input events:
class MyFilterPlugin : StylusPlugIn
{
    protected override void OnStylusDown(RawStylusInput rawStylusInput)
    {
        // Call the base class before modifying the data.
        base.OnStylusDown(rawStylusInput);
        Console.WriteLine("Here");
        // Restrict the stylus input.
        Filter(rawStylusInput);
    }

    private void Filter(RawStylusInput rawStylusInput)
    {
        // Get the StylusPoints that have come in.
        StylusPointCollection stylusPoints = rawStylusInput.GetStylusPoints();
        // Modify the (X,Y) data to move the points
        // inside the acceptable input area, if necessary.
        for (int i = 0; i < stylusPoints.Count; i++)
        {
            Console.WriteLine("p: " + stylusPoints[i].PressureFactor);
            StylusPoint sp = stylusPoints[i];
            if (sp.X < 50) sp.X = 50;
            if (sp.X > 250) sp.X = 250;
            if (sp.Y < 50) sp.Y = 50;
            if (sp.Y > 250) sp.Y = 250;
            stylusPoints[i] = sp;
        }
        // Copy the modified StylusPoints back to the RawStylusInput.
        rawStylusInput.SetStylusPoints(stylusPoints);
    }
}
It is added as a filter to the StylusPlugIns collection:
StylusPlugIns.Add(new MyFilterPlugin());
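(For reference, StylusPlugIns is a protected member of UIElement, so the registration has to live inside a class derived from the element that should receive the raw input; a minimal sketch with a hypothetical class name:)
public class FilteredCanvas : System.Windows.Controls.Canvas
{
    public FilteredCanvas()
    {
        // Route this element's raw stylus input through the filter above.
        StylusPlugIns.Add(new MyFilterPlugin());
    }
}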
When I run this, however, I always get 0.5 as the PressureFactor (the default value), and inspecting more deeply I can see that the stylus environment is not properly set up (its Id is 0, for example).
Is there a way to receive stylus events correctly?
The main point is: I need to capture how much pressure the pen applies to the screen.
Thanks a lot.
Some info: I'm developing with Visual Studio 2012 Ultimate on a Windows 7 PC. I deploy the application to a Surface Pro running Windows 8.1. I have installed and configured the Surface SDK 2.0 and the Surface Runtime.
For completeness, I have solved this.
The Stylus events conflicted (I don't know how) with the Surface SDK. Once I removed the SDK reference from the project, everything went smoothly and the problem was solved.
Now I can correctly read the pressure of each pen touch and pen move.
I post this only for community information.
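As a rough illustration of what that ends up looking like (a minimal sketch, assuming a plain WPF Window with no Surface SDK referenced; the handler name is a placeholder), the pressure can then be read from the regular stylus events:
public partial class MainWindow : System.Windows.Window
{
    public MainWindow()
    {
        InitializeComponent();
        StylusMove += OnStylusMoveHandler;
    }

    private void OnStylusMoveHandler(object sender, System.Windows.Input.StylusEventArgs e)
    {
        // One event can carry several packets; read the pressure of each point (0.0 - 1.0).
        foreach (System.Windows.Input.StylusPoint p in e.GetStylusPoints(this))
        {
            System.Console.WriteLine("Pressure: " + p.PressureFactor);
        }
    }
}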
In my opinion the MS Office Smooth Typing is a very innovative feature in the Office Suite, and I'd like to know if this feature is available for programmers in the .NET Framework, specifically in the C# language.
If so, could you please post in your answer a usage example and link to the documentation?
Thanks.
By "smooth typing" I'm referring to the typing animation, that makes the cursor slide during typing.
I don't own Office, so I can't look at the feature, but I needed to fiddle around with the caret in RichTextBoxes a while ago and decided that it wasn't worth the effort. Basically you are on your own: there are no helper functions in .NET, and everything is handled by the backing Win32 control. You will have a hard time defeating what already happens under the hood, and you will probably end up intercepting window messages and writing lots of ugly code.
So my basic advice is: don't do it. At least not for basic form controls like the TextBox or RichTextBox. You may have more luck trying to remote-control a running Office instance from within .NET, but that is a totally different can of worms.
If you really insist on going the SetCaretPos route, here is some code to get you up and running with a basic version that you can improve upon:
// import the functions (which are part of Win32 API - not .NET)
// (needs: using System.Drawing; using System.Threading; using System.Runtime.InteropServices;)
[DllImport("user32.dll")] static extern bool SetCaretPos(int x, int y);
[DllImport("user32.dll")] static extern bool GetCaretPos(out Point point);

public Form1()
{
    InitializeComponent();

    // target position to animate towards
    Point targetCaretPos;
    GetCaretPos(out targetCaretPos);

    // richTextBox1 is some RichTextBox that I dragged on the form in the Designer
    richTextBox1.TextChanged += (s, e) =>
    {
        // we need to capture the new position and restore to the old one
        Point temp;
        GetCaretPos(out temp);
        SetCaretPos(targetCaretPos.X, targetCaretPos.Y);
        targetCaretPos = temp;
    };

    // Spawn a new thread that animates toward the new target position.
    Thread t = new Thread(() =>
    {
        Point current = targetCaretPos; // current is the actual position within the current animation
        while (true)
        {
            if (current != targetCaretPos)
            {
                // The "30" is just some number to have a boundary when not animating
                // (e.g. when pressing enter). You can experiment with your own distances..
                if (Math.Abs(current.X - targetCaretPos.X) + Math.Abs(current.Y - targetCaretPos.Y) > 30)
                    current = targetCaretPos; // target too far. Just move there immediately
                else
                {
                    current.X += Math.Sign(targetCaretPos.X - current.X);
                    current.Y += Math.Sign(targetCaretPos.Y - current.Y);
                }
                // you need to invoke SetCaretPos on the thread which created the control!
                richTextBox1.Invoke((Action)(() => SetCaretPos(current.X, current.Y)));
            }
            // 7 is just some number I liked. The more, the slower.
            Thread.Sleep(7);
        }
    });
    t.IsBackground = true; // The animation thread won't prevent the application from exiting.
    t.Start();
}
Use SetCaretPos with your own animation timing function. Create a new thread that interpolates the caret's position based on the previous location and the new desired location.
I'm writing an application that can take several different external inputs (keyboard presses, motion gestures, speech) and produce similar outputs (for instance, pressing "T" on the keyboard will do the same thing as saying the word "Travel" out loud). Because of that, I don't want any of the input managers to know about each other. Specifically, I don't want the Kinect manager (as much as possible) to know about the Speech manager and vice versa, even though I'm using the Kinect's built-in microphone (the Speech manager should work with ANY microphone). I'm using System.Speech in the Speech manager as opposed to Microsoft.Speech.
I'm having a problem where as soon as the Kinect motion recognition module is enabled, the speech module stops receiving input. I've tried a whole bunch of things like inverting the skeleton stream and audio stream, capturing the audio stream in different ways, etc. I finally narrowed down the problem: something about how I'm initializing my modules does not play nicely with how my application deals with events.
The application works great until motion capture starts. If I completely exclude the Kinect module, this is how my main method looks:
// Main.cs
public static void Main()
{
    // Create input managers
    KeyboardMouseManager keymanager = new KeyboardMouseManager();
    SpeechManager speechmanager = new SpeechManager();

    // Start listening for keyboard input
    keymanager.start();

    // Start listening for speech input
    speechmanager.start();

    try
    {
        Application.Run();
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.StackTrace);
    }
}
I'm using Application.Run() because my GUI is handled by an outside program. This C# application's only job is to receive input events and run external scripts based on that input.
Both the keyboard and speech modules receive events sporadically. The Kinect, on the other hand, generates events constantly. If my gestures happened just as infrequently, a polling loop might be the answer with a wait time between each poll. However, I'm using the Kinect to control mouse movement... I can't afford to wait between skeleton event captures, because then the mouse would be very laggy; my skeleton capture loop needs to be as constant as possible. This presented a big problem, because now I can't have my Kinect manager on the same thread (or message pump? I'm a little hazy on the difference, hence why I think the problem lies here): from the way I understand it, being on the same thread would not allow keyboard or speech events to consistently get through. Instead, I kind of hacked together a solution where I made my Kinect manager inherit from System.Windows.Forms, so that it would work with Application.Run().
Now, my main method looks like this:
// Main.cs
public static void Main()
{
    // Create input managers
    KeyboardMouseManager keymanager = new KeyboardMouseManager();
    KinectManager kinectManager = new KinectManager();
    SpeechManager speechmanager = new SpeechManager();

    // Start listening for keyboard input
    keymanager.start();

    // Attempt to launch the kinect sensor
    bool kinectLoaded = kinectManager.start();

    // Use the default microphone (if applicable) if kinect isn't hooked up
    // Use the kinect microphone array if the kinect is working
    if (kinectLoaded)
    {
        speechmanager.start(kinectManager);
    }
    else
    {
        speechmanager.start();
    }

    try
    {
        // THIS IS THE PLACE I THINK I'M DOING SOMETHING WRONG
        Application.Run(kinectManager);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.StackTrace);
    }
}
For some reason, the Kinect microphone loses its "default-ness" as soon as the Kinect sensor is started (if this observation is incorrect, or there is a workaround, PLEASE let me know). Because of that, I was required to make a special start() method in the Speech manager, which looks like this:
// SpeechManager.cs
/** For use with the Kinect Microphone **/
public void start(KinectManager kinect)
{
    // Get the speech recognizer information
    RecognizerInfo recogInfo = SpeechRecognitionEngine.InstalledRecognizers().FirstOrDefault();
    if (null == recogInfo)
    {
        Console.WriteLine("Error: No recognizer information found on Kinect");
        return;
    }

    SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine(recogInfo.Id);

    // Loads all of the grammars into the recognizer engine
    loadSpeechBindings(recognizer);

    // Set speech event handler
    recognizer.SpeechRecognized += speechRecognized;

    using (var s = kinect.getAudioSource().Start())
    {
        // Set the input to the Kinect audio stream
        recognizer.SetInputToAudioStream(s, new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
        // Recognize asynchronous speech events
        recognizer.RecognizeAsync(RecognizeMode.Multiple);
    }
}
For reference, the start() method in the Kinect manager looks like this:
// KinectManager.cs
public bool start()
{
    // Code from Microsoft Sample
    kinect = (from sensorToCheck in KinectSensor.KinectSensors
              where sensorToCheck.Status == KinectStatus.Connected
              select sensorToCheck).FirstOrDefault();

    // Fail elegantly if no kinect is detected
    if (kinect == null)
    {
        connected = false;
        Console.WriteLine("Couldn't find a Kinect");
        return false;
    }

    // Start listening
    kinect.Start();

    // Enable listening for all skeletons
    kinect.SkeletonStream.Enable();

    // Obtain the KinectAudioSource to do audio capture
    source = kinect.AudioSource;
    source.EchoCancellationMode = EchoCancellationMode.None; // No AEC for this sample
    source.AutomaticGainControlEnabled = false;              // Important to turn this off for speech recognition

    kinect.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(allFramesReady);

    connected = true;
    return true;
}
So when I disable motion capture (by having my main() look similar to the first code segment), speech recognition works fine. When I enable motion capture, motion works great but no speech gets recognized. In both cases, keyboard events always work. There are no errors, and through tracing I found out that all the data in the speech manager is initialized correctly... it seems like the speech recognition events just disappear. How can I reorganize this code so that the input modules can work independently? Do I use threading, or just Application.Run() in a different way?
The Microsoft Kinect SDK has several known issues, one of them being that audio is not processed if you begin tracking the skeleton after starting the audio processor. From the known issues:
Audio is not processed if skeleton stream is enabled after starting audio capture
Due to a bug, enabling or disabling the SkeletonStream will stop the AudioSource
stream returned by the Kinect sensor. The following sequence of instructions will
stop the audio stream:
kinectSensor.Start();
kinectSensor.AudioSource.Start(); // --> this will create an audio stream
kinectSensor.SkeletonStream.Enable(); // --> this will stop the audio stream as an undesired side effect
The workaround is to invert the order of the calls or to restart the AudioSource after changing SkeletonStream status.
Workaround #1 (start audio after skeleton):
kinectSensor.Start();
kinectSensor.SkeletonStream.Enable();
kinectSensor.AudioSource.Start();
Workaround #2 (restart audio after skeleton):
kinectSensor.Start();
kinectSensor.AudioSource.Start(); // --> this will create an audio stream
kinectSensor.SkeletonStream.Enable(); // --> this will stop the audio stream as an undesired side effect
kinectSensor.AudioSource.Start(); // --> this will create another audio stream
Resetting the SkeletonStream engine status is an expensive call. It should be made at application startup only, unless the app has specific needs that require turning Skeleton on and off.
I also hope that when you say you're using "version 1" of the SDK, you mean "version 1.6". If you are using anything but 1.5 or 1.6, you are only hurting yourself due to the many changes that were made in 1.5.
I have a Windows Media Player COM control in my Windows Forms project that opens and plays videos admirably. However, I would like to be able to grab the first frame of the loaded video so my program's users can preview the video (and, ideally, tell one video from another).
How can I update the frame displayed by the windows media player object?
I have tried using the following code at the end of my openFileDialog event response:
private void openFileDialog1_FileOk(object sender, CancelEventArgs e)
{
    Text = openFileDialog1.SafeFileName + " - MPlayer 2.0";
    //mediaPlayer1.openPlayer(openFileDialog1.FileName);
    mediaPlayer1.URL = openFileDialog1.FileName;

    //hopefully, this will load the first frame.
    mediaPlayer1.Ctlcontrols.play();
    mediaPlayer1.Ctlcontrols.pause();
}
However, when I run this, the pause command gets ignored (auto-play on load is turned off, so the video won't start playing without the call to .play() above). If I had to guess, I'd say this is because of some threading: the code calls play, moves on and calls pause, and only then does the play resolve and the video start. Because the .pause() resolved before the .play(), the net effect is that the .pause() is ultimately unheeded.
Firstly, is there a way other than .play(); .pause(); to snag a preview image of the video for the AxWindowsMediaPlayer object? If not, how can I make sure that my .pause() doesn't get ignored?
(I know that .play(); .pause(); works in the general case, because I tested with a separate button that invoked those two methods after the video finished loading, and it worked as expected)
You can't do a lot of things with this COM control. However, follow this link and you will find a class that will help you extract an image from a video file. You could simply extract the image and put it on top of the video, or next to it. This is a simple workaround for your requirement. If you are not happy with it, I would strongly recommend not using this COM control at all and using some other open-source video player/plugin; there are a lot of really good ones, but I would recommend the VLC plugin, or try finding another.
Good luck in your quest.
Hanlet
While the Windows Media Player COM might not officially support a feature like this, it's quite easy to 'fake' it. If you use IWMPControls2, you have access to the built-in step(1) function, which advances exactly one frame.
I've discovered that if you call step(1) after calling pause(), seeking with the trackbar will also update the video.
It's not pretty, but it works!
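A minimal sketch of that idea (assuming an AxWindowsMediaPlayer named mediaPlayer1 as in the question, with its PlayStateChange event wired up once, e.g. in the form constructor): wait for the player to actually reach the playing state, then pause and step one frame so the first frame is rendered as a preview.
private void mediaPlayer1_PlayStateChange(object sender, AxWMPLib._WMPOCXEvents_PlayStateChangeEvent e)
{
    // Only act once the video has really started playing, so the pause can't be
    // overtaken by the asynchronous play() call.
    if ((WMPLib.WMPPlayState)e.newState == WMPLib.WMPPlayState.wmppsPlaying)
    {
        mediaPlayer1.Ctlcontrols.pause();
        // step(1) renders one frame while paused, so the first frame shows up.
        ((WMPLib.IWMPControls2)mediaPlayer1.Ctlcontrols).step(1);
    }
}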
This is a little trick to solve the common step(-1) not working issue.
IWMPControls2 Ctlcontrols2 = (IWMPControls2)WindowsMediaPlayer.Ctlcontrols;
double frameRate = WindowsMediaPlayer.network.encodedFrameRate;
Console.WriteLine("FRAMERATE: " + frameRate); //Debug
double step = 1.0 / frameRate;
Console.WriteLine("STEP: " + step); //Debug
WindowsMediaPlayer.Ctlcontrols.currentPosition -= step; //Go backwards
WindowsMediaPlayer.Ctlcontrols.pause();
Ctlcontrols2.step(1);