I'm making a game in XNA and currently I'm checking the coordinates of the mouse click against the coordinates of each object that can be clicked.
This is fine for my small game, but for larger games it would become CPU-intensive to check every object every frame.
Is there a better way to approach this?
You will want to partition your world space with some sort of structure, such as a quadtree.
At its most basic, you want to be able to quickly throw out a large batch of objects before you even do your detailed check. For instance, if you are clicking on the right side of the screen, you want to throw out everything on the left side of the screen automagically.
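To make that concrete, here is a minimal broad-phase sketch in C#. It uses a uniform grid rather than a full quadtree (a grid is the quadtree's simpler cousin and easier to show briefly); the CellSize value and the Clickable type are assumptions invented for the example.

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Hypothetical clickable object; only its position matters for the broad phase.
    public class Clickable
    {
        public Vector2 Position;
    }

    // Minimal uniform-grid broad phase: a click only tests objects registered
    // in the clicked cell, so everything else is thrown out without a check.
    // Assumes non-negative world coordinates for simplicity.
    public class ClickGrid
    {
        private const int CellSize = 128; // pixels per cell; tune for your game
        private readonly Dictionary<Point, List<Clickable>> cells =
            new Dictionary<Point, List<Clickable>>();

        private static Point CellOf(Vector2 pos)
        {
            return new Point((int)(pos.X / CellSize), (int)(pos.Y / CellSize));
        }

        public void Add(Clickable obj)
        {
            Point cell = CellOf(obj.Position);
            List<Clickable> list;
            if (!cells.TryGetValue(cell, out list))
            {
                list = new List<Clickable>();
                cells[cell] = list;
            }
            list.Add(obj);
        }

        // Returns the few candidates worth a precise hit test. Objects with
        // real extents would also need to check the neighbouring cells.
        public IEnumerable<Clickable> Candidates(Vector2 click)
        {
            List<Clickable> list;
            return cells.TryGetValue(CellOf(click), out list)
                ? (IEnumerable<Clickable>)list
                : new Clickable[0];
        }
    }

The same interface works if you later swap the grid for a quadtree; only the Candidates lookup changes.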
I'm currently developing a solution for ARCore (Android) with Unity.
My problem is that I can't figure out how to give a fixed coordinate to an object to spawn. I don't want to spawn objects by touching the screen; I want them to already be there when the app loads. For example:
I want an underground wire system to show up in the application over a specific real-world zone, so the workers know where to dig instead of looking at random positions.
I've been reading about Anchors, but I have only found examples that use the trackable parameter from a raycast, i.e. spawning in real time while touching the screen, not preloading those objects in the app. Is there a way to give real GPS parameters to anchors?
I am developing a WPF app that uses the Kinect v2, and I use the hand to simulate the mouse. It works, but I have a small problem: when I close the hand to simulate a click, the cursor drops slightly from where it was while the hand was open, and sometimes that ends in a click on the wrong button or place.
Any ideas on how I can solve this?
I already tried to track the wrist and the thumbs instead of the hand but the problem still happens.
Thanks!
Here are some ideas:
Filter and smooth the hand position data a bit more (see the sketch after this list). For a UI/menu system some latency should be acceptable, since a menu doesn't demand low latency as much as other uses do.
Modify the hand position based on the hand's open/closed state. Introduce a constant offset to bump the hand position up when the hand is closed, with appropriate smoothing to make this feel and look correct.
Keep a list of hand positions and use the data from a few frames before (though it might be tricky to get this to feel and look correct)
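For the first two ideas, here is a minimal sketch of how the smoothing and the "hold position while closed" trick could look with the Kinect v2 types; the Alpha constant is an assumption you would need to tune.

    using Microsoft.Kinect; // Kinect for Windows SDK v2

    // Exponentially smooths the right-hand position, and freezes the cursor
    // while the hand is closed so the closing motion cannot drag the click
    // off its target.
    public class HandCursorFilter
    {
        // 1 = raw input; smaller = smoother but more latency (tune to taste).
        private const float Alpha = 0.35f;

        private CameraSpacePoint smoothed;
        private bool hasSample;

        public CameraSpacePoint Update(Body body)
        {
            CameraSpacePoint raw = body.Joints[JointType.HandRight].Position;

            // While the hand is closed, keep the last smoothed position.
            if (body.HandRightState == HandState.Closed && hasSample)
                return smoothed;

            if (!hasSample)
            {
                smoothed = raw;
                hasSample = true;
                return smoothed;
            }

            smoothed.X += Alpha * (raw.X - smoothed.X);
            smoothed.Y += Alpha * (raw.Y - smoothed.Y);
            smoothed.Z += Alpha * (raw.Z - smoothed.Z);
            return smoothed;
        }
    }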
As a note, also consider these points:
Use bigger buttons. Buttons should have appropriate spacing, placement, and sizes. The app's UI should be specifically designed for a Kinect application.
Use a different gesture for a mouse click, such as push or press, which is the recommended approach in the Kinect Human Interface Guidelines 2.0.
I'm new to Unity and I'm making a car racing game. Now I'm stuck at one point. I've been looking for a solution to my problem but haven't succeeded.
My problem is:
When I run my game on my phone, it stutters badly whenever there are several buildings in front of the car's camera, such as one building behind another. The reason is that there are so many vertices and edges in view at that moment that the camera can't render all that geometry at the same time.
How do I preload the 2nd Scene while loading 1st Scene?
I am using Unity free version.
In graphics programming there is a common technique: simply don't draw objects that aren't in the field of view. I'm sure Unity can handle this. Check link: Unity description on this topic
I'm not hugely knowledgeable about Unity, but as a 3D modeller there's a bunch of things you can do to improve performance:
Create a simplified version of your buildings with fewer polygons for use when buildings are a long way away. A skyscraper, for example, can be as simple as a textured box.
If you've done that already, reduce the distance at which the simpler imposters are substituted for the complex versions; a minimal LODGroup sketch follows these tips.
Reduce the number of polygons by other means. A good example: if you've got a window ledge sticking out of the side of a building, don't try to model it as an extension of the building's body. Instead, make it a separate box, delete the facet that won't be seen, and move it to intersect with the rest of the building.
Another good trick is to use bump maps or normal maps to approximate smaller features, rather than trying to model everything.
Opaqueness. Try not to have transparent windows in your buildings. It's computationally cheaper to make them just reflect the skybox or a suitably blurred reflection imposter. Also make sure that the material's shader is in Opaque mode, if it supports this.
You might also benefit a little from checking the 'Static' box on the game object, assuming that buildings can't be moved (e.g. by smashing through them with a bulldozer).
Collision detection can also be a big drain. Be sure to use the simplest possible detection mesh you can - either a box, cylinder, sphere or a combination.
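As a sketch of the impostor-swapping idea above using Unity's built-in LOD system: the transition heights here are illustrative, and the two Renderer fields are assumed to be wired up in the Inspector.

    using UnityEngine;

    // Wires a detailed and a simplified version of a building into a LODGroup.
    public class BuildingLodSetup : MonoBehaviour
    {
        public Renderer highDetail; // full building mesh
        public Renderer lowDetail;  // e.g. a textured box

        void Start()
        {
            LODGroup group = gameObject.AddComponent<LODGroup>();
            LOD[] lods = new LOD[2];
            // Use the detailed mesh while the building covers >15% of the screen...
            lods[0] = new LOD(0.15f, new Renderer[] { highDetail });
            // ...the simple box below that, and cull entirely below 2%.
            lods[1] = new LOD(0.02f, new Renderer[] { lowDetail });
            group.SetLODs(lods);
            group.RecalculateBounds();
        }
    }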
I'm building a Kinect application using the official Kinect SDK.
The result I want:
1) Identify that the body has been waving for 5 seconds, and do something if it has.
2) Identify leaning on one leg for 5 seconds, and do something if it has.
Does anyone know how to do this? I'm working in a WPF application.
I would like to see an example. I'm rather new to Kinect.
Thanks in advance for all your help!
The Kinect provides you with the skeletons it's tracking; you have to do the rest. Basically, you need to create a definition for each gesture you want, and run it against the skeletons every time the SkeletonFrameReady event is fired. This isn't easy.
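For reference, the basic event wiring looks roughly like this with the Kinect for Windows SDK v1 (a sketch assuming exactly one sensor is connected; error handling omitted):

    using Microsoft.Kinect;

    public class GestureHost
    {
        private Skeleton[] skeletons = new Skeleton[0];

        public void Start()
        {
            KinectSensor sensor = KinectSensor.KinectSensors[0];
            sensor.SkeletonStream.Enable();
            sensor.SkeletonFrameReady += OnSkeletonFrameReady;
            sensor.Start();
        }

        private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return; // frame skipped or already closed
                if (skeletons.Length != frame.SkeletonArrayLength)
                    skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);
                // Run each gesture definition against every tracked skeleton here.
            }
        }
    }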
Defining Gestures
Defining the gestures can be surprisingly difficult. The simplest (easiest) gestures are ones that happen at a single point in time, and therefore don't rely on past locations of the limbs. For example, if you want to detect when the user has their hand raised above their head, this can be checked on every individual frame. More complicated gestures need to take a period of time into account. For your waving gesture, you won't be able to tell from a single frame whether a person is waving or just holding their hand up in front of them.
So now you need to be able to store relevant information from the past, but what information is relevant? Should you keep a store of the last 30 frames and run an algorithm against that? 30 frames only gets you a second's worth of information.. perhaps 60 frames? Or for your 5 seconds, 300 frames? Humans don't move that fast, so maybe you could use every fifth frame, which would bring your 5 seconds back down to 60 frames. A better idea would be to pick and choose the relevant information out of the frames. For a waving gesture the hand's current velocity, how long it's been moving, how far it's moved, etc. could all be useful information.
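One hedged sketch of that "pick and choose the relevant information" approach: keep a small rolling buffer of right-hand samples and prune anything older than the window you care about. The HandSample type and the five-second window are assumptions for illustration.

    using System;
    using System.Collections.Generic;
    using Microsoft.Kinect;

    // Just the data a wave detector might need, not the whole frame.
    struct HandSample
    {
        public float X, Y;
        public DateTime Time;
    }

    class HandHistory
    {
        private static readonly TimeSpan Window = TimeSpan.FromSeconds(5);
        private readonly Queue<HandSample> samples = new Queue<HandSample>();

        public void Record(Skeleton skeleton)
        {
            SkeletonPoint p = skeleton.Joints[JointType.HandRight].Position;
            samples.Enqueue(new HandSample { X = p.X, Y = p.Y, Time = DateTime.UtcNow });

            // Drop everything older than the window we care about.
            while (samples.Count > 0 && DateTime.UtcNow - samples.Peek().Time > Window)
                samples.Dequeue();
        }

        public IEnumerable<HandSample> Samples { get { return samples; } }
    }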
After you've figured out how to get and store all the information pertaining to your gesture, how do you turn those numbers into a definition? Waving could require a certain minimum speed, or a direction (left/right instead of up/down), or a duration. However, this duration isn't the 5 second duration you're interested in. This duration is the absolute minimum required to assume that the user is waving. As mentioned above, you can't determine a wave from one frame. You shouldn't determine a wave from 2, or 3, or 5, because that's just not enough time. If my hand twitches for a fraction of a second, would you consider that a wave? There's probably a sweet spot where most people would agree that a left to right motion constitutes a wave, but I certainly don't know it well enough to define it in an algorithm.
There's another problem with requiring a user to do a certain gesture for a period of time. Chances are, not every frame in that five seconds will appear to be a wave, regardless of how well you write the definition. Whereas you can easily determine if someone held their hand over their head for five seconds (because it can be determined on a single frame basis), it's much harder to do that for complicated gestures. And while waving isn't that complicated, it still shows this problem. As your hand changes direction at either side of a wave, it stops moving for a fraction of a second. Are you still waving then? If you answered yes, wave more slowly so you pause a little more at either side. Would that pause still be considered a wave? Chances are, at some point in that five second gesture, the definition will fail to detect a wave. So now you need to take into account a leniency for the gesture duration.. if the waving gesture occurred for 95% of the last five seconds, is that good enough? 90%? 80%?
The point I'm trying to make here is there's no easy way to do gesture recognition. You have to think through the gesture and determine some kind of definition that will turn a bunch of joint positions (the skeleton data) into a gesture. You'll need to keep track of relevant data from past frames, but realize that the gesture definition likely won't be perfect.
Consider the Users
So now that I've said why the five second wave would be difficult to detect, allow me to at least give my thoughts on how to do it: don't. You shouldn't force users to repeat a motion based gesture for a set period of time (the five second wave). It is surprisingly tiring and just not what people expect/want from computers. Point and click is instantaneous; as soon as we click, we expect a response. No one wants to have to hold a click down for five seconds before they can open Minesweeper. Repeating a gesture over a period of time is okay if it's continually executing some action, like using a gesture to cycle through a list - the user will understand that they must continue doing the gesture to move farther through the list. This even makes the gesture easier to detect, because instead of needing information for the last 5 seconds, you just need enough information to know if the user is doing the gesture right now.
If you want the user to hold a gesture for a set amount of time, make it a stationary gesture (holding your hand at some position for x seconds is a lot easier than waving). It's also a very good idea to give some visual feedback, to say that the timer has started. If a user screws up the gesture (wrong hand, wrong place, etc) and ends up standing there for 5 or 10 seconds waiting for something to happen, they won't be happy, but that's not really part of this question.
Starting with Kinect Gestures
Start small.. really small. First, make sure you know your way around the SkeletonData class. There are 20 joints tracked on each skeleton, and they each have a TrackingState. This tracking state will show whether the Kinect can actually see the joint (Tracked), if it is figuring out the joint's position based on the rest of the skeleton (Inferred), or if it has entirely abandoned trying to find the joint (NotTracked). These states are important. You don't want to think the user is standing on one leg simply because the Kinect doesn't see the other leg and is reporting a bogus position for it. Each joint has a position, which is how you know where the user is standing.. piece by piece. Become familiar with the coordinate system.
After you know the basics of how the skeleton data is reported, try some simple gestures. Print a message to the screen when the user raises a hand above their head. This only requires comparing each hand to the Head joint and seeing if either hand is higher than the head in the coordinate plane. After you get that working, move up to something more complicated. I'd suggest trying a swiping motion (hand in front of body, moving either right to left or left to right some minimum distance). This requires information from past frames, so you'll have to think through what information to store. If you can get that working, you could try stringing a series of swiping gestures together in a small amount of time and interpreting that as a wave.
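A minimal sketch of that first exercise with the v1 SDK, including the TrackingState checks mentioned above (skeleton-space Y increases upwards):

    using Microsoft.Kinect;

    // Single-frame check: is either hand above the head?
    // Only trust joints the sensor is actually tracking.
    static bool IsHandRaised(Skeleton s)
    {
        Joint head = s.Joints[JointType.Head];
        Joint left = s.Joints[JointType.HandLeft];
        Joint right = s.Joints[JointType.HandRight];

        if (head.TrackingState != JointTrackingState.Tracked)
            return false;

        bool leftUp = left.TrackingState == JointTrackingState.Tracked
                      && left.Position.Y > head.Position.Y;
        bool rightUp = right.TrackingState == JointTrackingState.Tracked
                       && right.Position.Y > head.Position.Y;
        return leftUp || rightUp;
    }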
tl;dr: Gestures are hard. Start small, build your way up. Don't make users do repetitive motions for a single action, it's tiring and annoying. Include visual feedback for duration based gestures. Read the rest of this post.
The Kinect SDK helps you get the coordinates of different joints. A gesture is nothing but change in position of a set of joints over a period of time.
To recognize gestures, you have to store the coordinates for a period of time and iterate through them to see if they obey the rules for a particular gesture (such as: the right hand always moves upwards).
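As a rough illustration of such a rule (my own sketch, not from the linked post), here is a check that a stored series of right-hand heights rises monotonically, with a small tolerance for sensor jitter:

    using System.Collections.Generic;

    // True if the hand's height never dropped by more than `tolerance`
    // between consecutive stored samples (metres, in skeleton space).
    static bool HandAlwaysMovedUpwards(IList<float> handHeights, float tolerance = 0.01f)
    {
        if (handHeights.Count < 2) return false; // not enough history to judge
        for (int i = 1; i < handHeights.Count; i++)
        {
            if (handHeights[i] < handHeights[i - 1] - tolerance)
                return false;
        }
        return true;
    }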
For more details, check out my blog post on the topic:
http://tinyurl.com/89o7sf5
In the game I'm trying to create, players will be able to move in all directions.
I added a single image (a 1024x768 2D texture) as the background, or terrain.
Now, when the player moves around, I want to display some stuff.
For example, let's say a lamp: when the player moves far enough, he will see the lamp. If he goes back, the lamp will disappear because it is no longer on screen.
If I'm unclear, think about Mario: when you go further, coin boxes appear; if you go back, they disappear, but the background always stays the same.
My idea was to spawn ALL my sprites at once, even at positions like (1599, 1422) where they would be invisible because the screen is only 1024x768, and then, whenever the player moves, shift each sprite's position to (1599-1, 1422-1) and so on. Is this a good way to do it?
Are there better ways?
There are two ways you can achieve this result.
Keep player and camera stationary, move everything else.
Keep everything stationary except the player and the camera.
It sounds like you are trying to implement the first option. This is a fine solution, but it can become complicated quickly as the number of items grows. If you use a tile system, this can become much easier to manage. I recommend you look into using a tile engine of some sort. There are a lot of great tile map editors as well.
Some resources for using Tiles:
Tiled -- Nice Map Editor
TiledLib -- XNA Library for using Tiled Maps
What you're describing there is a viewport: a portion of the 'world' that is currently visible.
You need to define the contents of your 'world' somehow. This can be done with a data structure such as a scene graph, but for the simple 2D environment you're describing, you could probably store objects in an array. You would need to bind your direction keys to change the coordinates of the viewport (and your character if you want them to stay centered).
It's a good idea to only draw objects that are currently visible. Without knowing which language or packages you are using, it's difficult to comment on that in detail.
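Since the original question mentions XNA, here is a minimal sketch of that viewport idea using a camera translation matrix; the cameraPosition field is an assumption, and would be updated by your direction keys.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Inside your Game class: top-left corner of the visible region,
    // in world coordinates.
    Vector2 cameraPosition;

    void DrawWorld(SpriteBatch spriteBatch)
    {
        // Shifting the whole batch by -cameraPosition scrolls the view;
        // sprites keep their fixed world positions.
        Matrix view = Matrix.CreateTranslation(-cameraPosition.X, -cameraPosition.Y, 0f);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          null, null, null, null, view);
        // ... spriteBatch.Draw(...) each object at its world position ...
        spriteBatch.End();
    }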
I would look into Parallax scrolling. Here is an example of it in action.
If this is what you require, then here is a tutorial with source code.
XNA Parallax Scrolling
After you are finished with basic scrolling, try to implement some frustum culling: that is, only draw objects that are actually visible on the screen, and avoid unnecessarily drawing stuff that cannot be seen anyway.
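A hedged sketch of that culling test in XNA, reusing the cameraPosition idea from the sketch above; the sprite list and its Bounds rectangles (in world coordinates) are assumptions.

    // Inside Draw: skip anything outside the visible world rectangle.
    Rectangle visible = new Rectangle((int)cameraPosition.X, (int)cameraPosition.Y,
                                      1024, 768); // match your back-buffer size

    foreach (var sprite in sprites)
    {
        if (visible.Intersects(sprite.Bounds))
            spriteBatch.Draw(sprite.Texture, sprite.Position, Color.White);
    }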
I would prefer solution number 2 (move the player and camera). It would be easier for me, but maybe it's just personal preference.