I'm using a Wiimote controller as an input device.
I'm using this wrapper for HID calls / polling.
In the demo scene that comes with this wrapper, polling the controller is done in the Update event.
In many Wii games aiming extremely up and down quickly triggers an action.
The wrapper indicates extreme vertical aiming positions (where the aim goes out of scope / is "offscreen") as Y = -1.
I tried to detect such rapid up-down movements by:
1) Detecting if aim is off-screen
2) If yes, have a look if the aim is within the screen again
3) Detect if aim is off-screen again and if all this happened in a certain time period
The problem, however, is that step 2 doesn't necessarily occur (I think due to the nature of polling only in the Update event). It's possible that the aim was within the screen, but the controller wasn't polled at that moment.
I would like to ask what might be a valid solution to this problem.
Your polling would have to be pretty bad for this to be an issue, but other than increasing the poll rate, there isn't really anything you can do here.
The only other option would be to use the gyroscope or accelerometer instead (sorry, didn't bother to check if your wrapper exposes those). You could essentially combine a harsh vertical shake with the point being off screen, but if your polling is an issue in the original solution, it will still probably be an issue here too.
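If the wrapper lets you drive the HID reads yourself, one way to raise the effective poll rate is to poll from a background thread and latch the "aim was on-screen" observation so Update can't miss it. Below is a minimal Unity sketch of the idea; wiimote.Poll() and wiimote.AimY are hypothetical placeholders for whatever your wrapper actually exposes, and not every wrapper tolerates being called off the main thread.

```csharp
using System.Threading;
using UnityEngine;

public class WiimotePoller : MonoBehaviour
{
    private Thread pollThread;
    private volatile bool running;
    private volatile bool sawOnScreen;   // latched by the poll thread, consumed in Update

    private void OnEnable()
    {
        running = true;
        pollThread = new Thread(PollLoop) { IsBackground = true };
        pollThread.Start();
    }

    private void OnDisable()
    {
        running = false;
        pollThread.Join();
    }

    private void PollLoop()
    {
        while (running)
        {
            // wiimote.Poll();          // hypothetical: request a fresh HID report
            // float y = wiimote.AimY;  // hypothetical: Y = -1 means the aim is off-screen
            float y = 0f;               // placeholder so the sketch compiles
            if (y > -1f)
                sawOnScreen = true;     // latch it; a missed frame can't lose this
            Thread.Sleep(2);            // ~500 Hz, far above any frame rate
        }
    }

    private void Update()
    {
        bool wasOnScreen = sawOnScreen; // consume the latch once per frame
        sawOnScreen = false;
        // Feed wasOnScreen into your off-screen / on-screen / off-screen
        // state machine here, together with a time window for the gesture.
    }
}
```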
I am having an issue where I'm supposed to load a large number of objects into my scene on an event.
Whenever I start loading these scenes, my Vive/VR goes to the compositor screen, and it flickers between my main scene and the compositor screen until the loading is finished.
So my question is how to stop this flickering. I'd be happy to show the compositor screen until my loading is completed and then switch back to my main scene, or anything similar that solves the flickering issue.
I have been searching around for how to call the compositor screen on my own, or how to stop it from being called while my loading is in progress, but in vain.
Any help would be much appreciated, because I am out of ideas.
Thanks...
Flickering and showing the compositor screen is generally caused by Unity not being able to render a frame in time. You should be able to find which object(s), if any in particular, are causing the lag in the Profiler tab.
I assume that by "loading" objects you mean instantiating them. If so, it depends:
a) If you are instantiating multiple identical objects, look into GPU instancing methods.
b) If the objects are unique, you may try to:
Load the scene asynchronously (see the sketch after this list), or
Place all objects (deactivated) in the first scene and, instead of instantiating them or loading a new scene, just activate them. This will only increase the initial loading time.
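For the asynchronous option, a minimal sketch using the standard Unity API ("HeavyScene" is a placeholder scene name) might look like this:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class AsyncLoader : MonoBehaviour
{
    public IEnumerator LoadHeavyScene()
    {
        AsyncOperation op = SceneManager.LoadSceneAsync("HeavyScene", LoadSceneMode.Additive);
        op.allowSceneActivation = false;   // hold activation until loading is done

        // Unity reports progress up to 0.9 while loading; keep rendering meanwhile.
        while (op.progress < 0.9f)
            yield return null;

        op.allowSceneActivation = true;    // activation itself can still hitch briefly
        yield return op;
    }
}
```

Kick it off with StartCoroutine(LoadHeavyScene()); the main thread keeps rendering frames while the scene streams in, which is exactly what keeps the compositor from taking over.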
Other things that you can use:
Single Pass Stereo Rendering
Only (or mostly) baked lighting, which will significantly increase performance (it let me display a VR scene in real time with a model of more than 40 million vertices).
By default, SteamVR fades to grid whenever an app hangs. While of course it's ideal to keep your framerate above the threshold that would normally trigger this, there are many cases (such as loading scenes) where all the optimization in the world won't keep Unity from hanging long enough to trigger this issue.
Fortunately, this behavior can be disabled entirely! SteamVR Settings -> Developer -> Do not fade to grid when app hangs. This setting can also be changed in the steamvr.vrsettings file, which can be found using vrpathreg.exe. There, set steamvr.doNotFadeToGrid to true. This file is modified by the SteamVR Settings interface as well, but accessing the file directly allows you to modify settings for your target devices from within a game installer, for example.
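For reference, steamvr.vrsettings is plain JSON; assuming the usual section layout, the relevant fragment (all other sections omitted) would look something like this:

```json
{
  "steamvr": {
    "doNotFadeToGrid": true
  }
}
```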
Unfortunately, as of now there isn't a way to configure this on a per-app basis, meaning the behavior will be turned off for all SteamVR apps on a user's machine. Tweaking this setting comes with the caveat that low framerates make users feel sick, and the fade-to-grid feature prevents this from happening. However, in my experience, the feature is a nuisance that detracts from immersion far more than a temporary dip in framerate does, as long as you're following performance best practices. And if the framerate is dropping anyway, the fade-to-grid feature causes intense flickering that's just as uncomfortable as a low framerate.
I am developing a WPF app that uses Kinect v2, and I use the hand to simulate the mouse. It works, but I have a little problem: when I close my hand to simulate a click, the cursor drops a little relative to where it was when the hand was open, and sometimes this ends in a click on the wrong button or place.
Any ideas on how can I solve this?
I already tried to track the wrist and the thumbs instead of the hand but the problem still happens.
Thanks!
Here are some ideas:
Filter and smooth the hand position data a bit more (see the sketch after this list). For a UI/menu system, some added latency should be acceptable; it doesn't demand low latency as much as other uses do.
Modify the hand position based on the hand's open/closed state. Introduce a constant to bump the hand position up when the hand is closed, with appropriate smoothing to make this feel and look correct.
Keep a list of hand positions and use the data from a few frames before (though it might be tricky to get this to feel and look correct)
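For the first idea, here is a minimal sketch of exponential smoothing for the mapped cursor position. This is generic filtering, not a Kinect SDK API; tune Alpha to trade lag against jitter.

```csharp
using System.Windows; // Point, available in a WPF app

public class CursorSmoother
{
    // 0 < Alpha <= 1. Smaller values give a steadier cursor but more lag.
    public double Alpha { get; set; } = 0.3;

    private Point current;
    private bool hasValue;

    public Point Smooth(Point raw)
    {
        if (!hasValue)
        {
            current = raw;        // first sample: nothing to smooth yet
            hasValue = true;
        }
        else
        {
            current = new Point(
                current.X + Alpha * (raw.X - current.X),
                current.Y + Alpha * (raw.Y - current.Y));
        }
        return current;
    }
}
```

You could combine this with the second idea by temporarily freezing the smoothed position for a few frames whenever the hand state flips to closed.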
As a note, also consider these points:
Use bigger buttons. Buttons should have appropriate spacing, placement, and sizes. The app's UI should be specifically designed for a Kinect application.
Use a different gesture for a mouse click, such as push or press, which is the approach recommended in the Kinect Human Interface Guidelines 2.0.
I have a DirectShow graph which records and displays a video source. When I move the Video Renderer window to another monitor, what I have recorded gets deleted and recording starts again. I searched and found this link, which says that changing monitors stops and starts the graph. How can I stop the graph from being restarted? I don't want to lose my recording while switching between monitors.
Thanks
The behavior you are describing is basically behavior by design (even though the side effect is quite annoying and confusing). Moving a video renderer between monitors makes it re-allocate the hardware resources used to present video, and this in turn needs a state transition. For recording, a state transition means closing and re-opening the file.
Your solution is to either split the pipeline into presentation and recording graphs, or to use a custom allocator/presenter to take care of presentation yourself, the way you want. Graph splitting (what Wimmel suggests in another answer) is presumably the preferable way, in particular because it adds other degrees of freedom.
There is probably a good reason that the EC_DISPLAY_CHANGED message behaves that way, so I don't know what the disadvantages are when you handle this message yourself and don't restart the graph.
Instead you could separate the rendering graph from the recording using GMFBridge. Use one graph to capture and record. Use the second graph only for rendering, so restarting that graph would not stop the recording.
Edit: Possibly you need to disconnect before the second graph is restarted. That will mean you do need to process the EC_DISPLAY_CHANGED message, even if you use GMFBridge.
// Disconnect both sides of the bridge so the rendering graph can be
// rebuilt without disturbing the recording graph.
m_pController->BridgeGraphs(NULL, NULL);
I think a lot of people have used the application "Fraps" for recording video from games. I use it for displaying FPS (frames per second) in games; Fraps can show digits in the corner of the screen while a game runs.
I want to display the core temperature of the processor. The temperature I will find myself, but I need to know how I can display it in the game. (I need it for testing core temperatures in games, because the stress tests of Everest/AIDA64 don't load the system much.)
I want to use C# (but I'm open to all solutions: C++, Java).
Example games: Dirt2, Call of Duty 5 (DirectX)
P.S. This post was similar...
c# text/winForm overlay video games like xfire,PIX,steam,fraps etc
What you want to do is a bit more complex than you might think. There are various sources on the web about this, some of which may be a bit outdated. A good search term is "hook Direct3D"; there are also other threads on Stack Overflow about this topic. A good thread is also this.
One piece of advice: you are changing the runtime code of the game, which can be detected by anti-cheat mechanisms and can cause a ban if the game is a multiplayer game. It is even possible that widely known applications like Fraps are on some sort of whitelist against these checks, but I'm not sure about that.
An alternative to what you want could be to make your form always stay on top (form.TopMost = true;). Then you can set the transparency key to the same color as your form's background (by default that would be form.TransparencyKey = System.Drawing.SystemColors.Control;). After that you can remove the border of your form (form.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None;).
Be careful not to use the transparency color anywhere else (it'll make part of images transparent if it contains this color).
Make sure to have a way of closing the form. (and moving it if needed).
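Putting those pieces together, a minimal WinForms sketch of such an overlay might look like this (the temperature text is a placeholder; reading the sensor is a separate problem):

```csharp
using System.Drawing;
using System.Windows.Forms;

public class OverlayForm : Form
{
    private readonly Label label;

    public OverlayForm()
    {
        TopMost = true;                             // stay above other windows
        FormBorderStyle = FormBorderStyle.None;     // no border or title bar
        BackColor = Color.Magenta;                  // pick a color you won't draw with...
        TransparencyKey = Color.Magenta;            // ...and make that color see-through
        StartPosition = FormStartPosition.Manual;
        Location = new Point(20, 20);

        label = new Label
        {
            AutoSize = true,
            ForeColor = Color.Lime,
            BackColor = Color.Magenta,              // transparent background behind the text
            Font = new Font("Consolas", 16f),
            Text = "CPU: -- °C"                     // placeholder; update from your sensor code
        };
        Controls.Add(label);
    }
}
```

Note that this approach only shows over games running in windowed or borderless mode; an exclusive-fullscreen game will draw over it, which is why tools like Fraps hook Direct3D instead.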
I'm writing a Kinect application using the official Kinect SDK.
The results I want:
1) To identify that the body has been waving for 5 seconds, and do something if it has.
2) To identify leaning on one leg for 5 seconds, and do something if it has.
Does anyone know how to do this? I'm working in a WPF application.
I would like to see some examples; I'm rather new to Kinect.
Thanks in advance for all your help!
The Kinect provides you with the skeletons it's tracking; you have to do the rest. Basically, you need to create a definition for each gesture you want and run it against the skeletons every time the SkeletonFrameReady event is fired. This isn't easy.
Defining Gestures
Defining the gestures can be surprisingly difficult. The simplest (easiest) gestures are ones that happen at a single point in time, and therefore don't rely on past locations of the limbs. For example, if you want to detect when the user has their hand raised above their head, this can be checked on every individual frame. More complicated gestures need to take a period of time into account. For your waving gesture, you won't be able to tell from a single frame whether a person is waving or just holding their hand up in front of them.
So now you need to be able to store relevant information from the past, but what information is relevant? Should you keep a store of the last 30 frames and run an algorithm against that? 30 frames only gets you a second's worth of information... perhaps 60 frames? Or, for your 5 seconds, 300 frames? Humans don't move that fast, so maybe you could use every fifth frame, which would bring your 5 seconds back down to 60 frames. A better idea would be to pick and choose the relevant information out of the frames. For a waving gesture, the hand's current velocity, how long it's been moving, how far it's moved, etc. could all be useful information.
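As an illustration, here is a minimal sketch of keeping only a short window of timestamped hand samples and deriving a velocity from it. The types are plain placeholders, not Kinect SDK types.

```csharp
using System;
using System.Collections.Generic;

public struct HandSample
{
    public float X, Y;        // hand position in skeleton space (meters)
    public DateTime Time;
}

public class HandHistory
{
    private readonly Queue<HandSample> samples = new Queue<HandSample>();
    private readonly TimeSpan window = TimeSpan.FromSeconds(1);

    public void Add(HandSample s)
    {
        samples.Enqueue(s);
        // Keep only the last second of data; older frames aren't relevant to a wave.
        while (samples.Count > 1 && s.Time - samples.Peek().Time > window)
            samples.Dequeue();
    }

    // Average horizontal velocity (m/s) across the stored window.
    public float AverageHorizontalVelocity()
    {
        if (samples.Count < 2) return 0f;

        HandSample oldest = samples.Peek();
        HandSample newest = oldest;
        foreach (HandSample s in samples) newest = s;  // the queue's last element

        float seconds = (float)(newest.Time - oldest.Time).TotalSeconds;
        return seconds > 0f ? (newest.X - oldest.X) / seconds : 0f;
    }
}
```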
After you've figured out how to get and store all the information pertaining to your gesture, how do you turn those numbers into a definition? Waving could require a certain minimum speed, or a direction (left/right instead of up/down), or a duration. However, this duration isn't the 5 second duration you're interested in. This duration is the absolute minimum required to assume that the user is waving. As mentioned above, you can't determine a wave from one frame. You shouldn't determine a wave from 2, or 3, or 5, because that's just not enough time. If my hand twitches for a fraction of a second, would you consider that a wave? There's probably a sweet spot where most people would agree that a left to right motion constitutes a wave, but I certainly don't know it well enough to define it in an algorithm.
There's another problem with requiring a user to do a certain gesture for a period of time. Chances are, not every frame in those five seconds will appear to be a wave, regardless of how well you write the definition. Whereas you can easily determine whether someone held their hand over their head for five seconds (because it can be determined on a single-frame basis), it's much harder to do that for complicated gestures. And while waving isn't that complicated, it still shows this problem. As your hand changes direction at either side of a wave, it stops moving for a fraction of a second. Are you still waving then? If you answered yes, wave more slowly so you pause a little more at either side. Would that pause still be considered a wave? Chances are, at some point in that five-second gesture, the definition will fail to detect a wave. So now you need to allow some leniency in the gesture duration... if the waving gesture occurred for 95% of the last five seconds, is that good enough? 90%? 80%?
The point I'm trying to make here is there's no easy way to do gesture recognition. You have to think through the gesture and determine some kind of definition that will turn a bunch of joint positions (the skeleton data) into a gesture. You'll need to keep track of relevant data from past frames, but realize that the gesture definition likely won't be perfect.
Consider the Users
So now that I've said why the five second wave would be difficult to detect, allow me to at least give my thoughts on how to do it: don't. You shouldn't force users to repeat a motion based gesture for a set period of time (the five second wave). It is surprisingly tiring and just not what people expect/want from computers. Point and click is instantaneous; as soon as we click, we expect a response. No one wants to have to hold a click down for five seconds before they can open Minesweeper. Repeating a gesture over a period of time is okay if it's continually executing some action, like using a gesture to cycle through a list - the user will understand that they must continue doing the gesture to move farther through the list. This even makes the gesture easier to detect, because instead of needing information for the last 5 seconds, you just need enough information to know if the user is doing the gesture right now.
If you want the user to hold a gesture for a set amount of time, make it a stationary gesture (holding your hand at some position for x seconds is a lot easier than waving). It's also a very good idea to give some visual feedback, to say that the timer has started. If a user screws up the gesture (wrong hand, wrong place, etc) and ends up standing there for 5 or 10 seconds waiting for something to happen, they won't be happy, but that's not really part of this question.
Starting with Kinect Gestures
Start small... really small. First, make sure you know your way around the SkeletonData class. There are 20 joints tracked on each skeleton, and each has a TrackingState. The tracking state shows whether the Kinect can actually see the joint (Tracked), whether it is inferring the joint's position from the rest of the skeleton (Inferred), or whether it has entirely abandoned trying to find the joint (NotTracked). These states are important. You don't want to think the user is standing on one leg simply because the Kinect doesn't see the other leg and is reporting a bogus position for it. Each joint has a position, which is how you know where the user is standing... piece by piece. Become familiar with the coordinate system.
After you know the basics of how the skeleton data is reported, try some simple gestures. Print a message to the screen when the user raises a hand above their head. This only requires comparing each hand to the Head joint and seeing if either hand is higher than the head in the coordinate plane. After you get that working, move on to something more complicated. I'd suggest trying a swiping motion (hand in front of the body, moving either right to left or left to right over some minimum distance). This requires information from past frames, so you'll have to think through what information to store. If you can get that working, you could try stringing a series of swiping gestures together in a small amount of time and interpreting that as a wave.
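For that hand-above-head starter, a minimal sketch against the Kinect SDK v1 skeleton API (sensor setup and event wiring omitted; this method would live in your window class as the SkeletonFrameReady handler) could look like this:

```csharp
using System;
using Microsoft.Kinect;

// Handler for KinectSensor.SkeletonFrameReady.
void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return;

        Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
        frame.CopySkeletonDataTo(skeletons);

        foreach (Skeleton skeleton in skeletons)
        {
            if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
                continue;

            Joint head = skeleton.Joints[JointType.Head];
            Joint left = skeleton.Joints[JointType.HandLeft];
            Joint right = skeleton.Joints[JointType.HandRight];

            // Only trust joints the Kinect can actually see.
            if (head.TrackingState != JointTrackingState.Tracked)
                continue;

            bool raised =
                (left.TrackingState == JointTrackingState.Tracked && left.Position.Y > head.Position.Y) ||
                (right.TrackingState == JointTrackingState.Tracked && right.Position.Y > head.Position.Y);

            if (raised)
                Console.WriteLine("Hand raised above head!");
        }
    }
}
```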
tl;dr: Gestures are hard. Start small and build your way up. Don't make users do repetitive motions for a single action; it's tiring and annoying. Include visual feedback for duration-based gestures. And read the rest of this post.
The Kinect SDK helps you get the coordinates of the different joints. A gesture is nothing but a change in the position of a set of joints over a period of time.
To recognize gestures, you have to store the coordinates for a period of time and iterate through them to see if they obey the rules of a particular gesture (such as: the right hand always moves upwards).
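As a hypothetical illustration of such a rule, here is a sketch that checks a stored list of right-hand Y coordinates (oldest first, one per frame) for the "always moves upwards" condition; the tolerance and travel thresholds are made-up values you would tune:

```csharp
using System.Collections.Generic;

public static class SwipeUpRule
{
    // Small tolerance so sensor jitter doesn't break the gesture.
    private const float Tolerance = 0.01f;
    // Minimum overall travel (meters) so a stationary hand doesn't match.
    private const float MinTravel = 0.2f;

    public static bool Matches(IList<float> rightHandYs)
    {
        if (rightHandYs.Count < 2) return false;

        for (int i = 1; i < rightHandYs.Count; i++)
        {
            if (rightHandYs[i] < rightHandYs[i - 1] - Tolerance)
                return false;   // hand moved down: rule broken
        }

        return rightHandYs[rightHandYs.Count - 1] - rightHandYs[0] > MinTravel;
    }
}
```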
For more details, check out my blog post on the topic:
http://tinyurl.com/89o7sf5