I need to develop a compass for a device we are using. This device is directionally unaware (no gyroscope), but has a GPS module. How can I make a compass, with a needle, that leads from a start coordinate (likely their current position) to an end coordinate?
My current thoughts are:
Poll coordinates on the GPS sensor as quickly as appropriate.
Record coordinates where the PDOP is within a respectable range (maybe less than 2.0).
Determine the direction they are facing based on the coordinate changes of them walking.
I have a few issues with this though:
Firstly, the unit has to be moved around to get a sense of where they are.
Doesn't seem like it would be the most accurate, i.e. how many past points do you use to determine direction change?
I'm not really sure if this is a feasible solution. Is there some implementation theory I can read on this?
Is there a better way to solve my problem? The scope of the project involves going from a 'current location' to some geo-tagged item in an oil field.
Using a Windows Mobile 6.5 device - C# on VS2008.
I think the issue you have is that the way people use a compass doesn't fit with what you are achieving with a 'getting closer' technique. When you look at a compass you tend to spin to get a sense of direction, which is going to fail in your case, no matter how clever you get with distances. In fact the direction of the phone is going to be irrelevant, because it doesn't know which way up it is to be able to adjust any compass.
I think you'd be better off dropping the compass idea and developing an interface that works with the 'getting closer' method you suggest. Maybe a simple distance-to-target display would be better? Showing something that doesn't do what it should is likely to be counterproductive. If it were sat nav in a moving vehicle it might work, as you can't spin on the spot that easily and the display would update before you'd turned around.
Giving it some further thought, I think a map showing your recent movements as a line (although the map may display upside down relative to the user's direction), together with a vector from your current location towards your target, would be better for locating the target. At least when the vector and your movement line were aligned you'd know you were travelling in the right direction.
If I were in your shoes, I would display a notice on the screen that tells the user to always keep the top of the device pointed in the direction of travel. While they are moving, calculate the direction they're going using their current position and their last known position. Then calculate the direction they should be moving based off of their current position and the target's position. Then calculate the difference between their direction of travel and the direction they should be traveling, and use that difference to point an arrow on the screen that shows which direction they should be going relative to their current direction.
Here's an example:
Let D represent Direction of travel. Let's say that's 100 degrees in this example.
Let T represent the direction they should be traveling to reach the target. Let's say that's 90 degrees in this example. T - D = -10, so draw an arrow on the screen pointing -10 degrees from straight up. Remember straight up is the same as D if they're following the instructions I mentioned earlier. This means that the arrow is pointing at D-10, which is 90 degrees, which is the way they should be going.
Now you have another problem: if they stop moving, you no longer have any way to tell which way the device is pointing. In that case, hide the arrow and let the user know that it will return once the GPS starts moving again.
The last thing to keep in mind is that a 1 degree change in latitude represents more distance than a 1 degree change in longitude, so determining your headings isn't as straightforward as you might think. Here's a link to an article that tells you how to calculate headings based on 2 GPS points: http://www.movable-type.co.uk/scripts/latlong.html
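For illustration, here is a minimal C# sketch of the initial-bearing formula from that article, plus the relative arrow angle described above. The method names are just placeholders:

    using System;

    // Initial bearing from point 1 to point 2, in degrees clockwise from north.
    static double BearingDegrees(double lat1, double lon1, double lat2, double lon2)
    {
        double f1 = lat1 * Math.PI / 180, f2 = lat2 * Math.PI / 180;
        double dLon = (lon2 - lon1) * Math.PI / 180;
        double y = Math.Sin(dLon) * Math.Cos(f2);
        double x = Math.Cos(f1) * Math.Sin(f2) - Math.Sin(f1) * Math.Cos(f2) * Math.Cos(dLon);
        return (Math.Atan2(y, x) * 180 / Math.PI + 360) % 360;
    }

    // Angle to draw the arrow at, relative to "straight up" (the direction of travel).
    static double ArrowAngle(double travelBearing, double targetBearing)
    {
        // T - D, normalised to -180..180 so the arrow always turns the short way.
        return (targetBearing - travelBearing + 540) % 360 - 180;
    }

With D = 100 and T = 90 as in the example above, ArrowAngle returns -10.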
Good luck!
Edit: Most GPS devices give you the direction of travel, but unless you're using the same algorithm the GPS uses when you calculate the direction they should be going, there could be a discrepancy that would cause your arrow to be a little off.
In your case you can only create an instrument that shows the geographical direction in which you are moving.
This is equal to (or even better than) a compass as long as the user points the device in the direction they are moving.
It is not the same as a compass, but in some cases it is even more useful: e.g. inside a vehicle, where a compass relying on the magnetic field would not work well.
GPS has a "course" attribute. Just use that, and you are ready.
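If your GPS stack only hands you raw NMEA sentences rather than a parsed position object, the course value is the 9th field of the $GPRMC sentence. A minimal sketch, assuming you receive one sentence per line:

    using System.Globalization;

    // Sketch: pull "course over ground" (degrees true) out of a $GPRMC sentence.
    static double? CourseFromRmc(string sentence)
    {
        if (sentence == null || !sentence.StartsWith("$GPRMC")) return null;

        string[] f = sentence.Split(',');
        // f[2] == "A" means the fix is valid; f[8] is the course over ground.
        double course;
        if (f.Length > 8 && f[2] == "A" &&
            double.TryParse(f[8], NumberStyles.Float, CultureInfo.InvariantCulture, out course))
            return course;

        return null;
    }

Note that the course field is typically empty while the receiver is stationary, which ties in with the "only valid while moving" caveat above.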
I am developing a racing game. But it's not your usual racing game: the bike is supposed to always have the same Y coordinate as the closest point on the map mesh, in other words it is ALWAYS touching it.
What i do not want is the Y coordinate to be dependent upon the X and Y position, as there will be 2 (or maybe more floors).
I have absolutely no idea how to implement this. Completely zero. I am rather new to scripting, and this is way out of my league; I don't even know how to start... The map is not a simple plane, so simple maths won't help.
I'll appreciate any help at all, not necessarily a solution.
Thanks in advance
This idea is an adapted version of #LeoBartkus's
I suggest using 2 raycasts from the bottom of the bike's wheels and using the 2 hits to position and rotate the bike. This allows for accurate positioning of the bike on all kinds of terrain, except for spikes narrow and tall enough to appear to pierce the bike. Using a single raycast from the bike's center might cause problems if the ground is uneven, like a crater for example.
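For illustration, a rough sketch of that two-raycast idea, assuming Unity C#; the wheel transforms, ray length and layer mask are placeholders for your own setup:

    using UnityEngine;

    // Sketch: keep the bike glued to the track using one ray per wheel.
    public class StickToTrack : MonoBehaviour
    {
        public Transform frontWheel;     // empty child objects at the wheel contact points
        public Transform rearWheel;
        public float rayLength = 5f;
        public LayerMask trackLayer;     // the layer your map mesh collider is on

        void LateUpdate()
        {
            // Cast straight down from just above each wheel, so only the floor
            // directly below the bike is hit even when another floor exists.
            if (Physics.Raycast(frontWheel.position + Vector3.up * 0.5f, Vector3.down,
                                out RaycastHit frontHit, rayLength, trackLayer) &&
                Physics.Raycast(rearWheel.position + Vector3.up * 0.5f, Vector3.down,
                                out RaycastHit rearHit, rayLength, trackLayer))
            {
                // Point the bike along the line between the two contact points,
                // using the averaged surface normal as "up".
                Vector3 forward = (frontHit.point - rearHit.point).normalized;
                Vector3 up = (frontHit.normal + rearHit.normal).normalized;
                transform.rotation = Quaternion.LookRotation(forward, up);

                // Drop the bike onto the surface, midway between the two hits.
                transform.position = (frontHit.point + rearHit.point) * 0.5f;
            }
        }
    }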
I am currently working on a project in which we have a set of photos of trucks going past a camera. I need to detect what type of truck it is (how many wheels it has), so I am using EMGU to try to detect this.
The problem is that I cannot seem to detect the wheels using EMGU's HoughCircle detection: it doesn't detect all the wheels and it also detects random circles in the foliage.
So I don't know what I should try next. I tried implementing the SURF algorithm to match wheels between images, but this does not seem to work either since they aren't exactly the same. Is there a way I could implement a "loose" SURF algorithm?
This is what I start with.
This is what I get after the Hough Circle detection. There are many erroneous detections, as some are not even close to being circles, and the back wheels are detected as a single circle for some reason.
Would it be possible to either confirm that the detected circle are actually wheels using SURF and matching them between themselves? I am a bit lost on what I should do next, any help would be greatly appreciated.
(sorry for the bad English)
UPDATE
Here is what I did.
I used blob tracking to find the blob in my set of photos, which effectively lets me locate the moving truck. Then I split the blob's rectangle in two and take the lower half; from there I know I get the zone that should contain the wheels, which greatly improves the detection. I then run a loose light-intensity check on the wheels I get: since they are generally darker, I should get a decently low value for them and can discard anything that is too white (180/255 and up). I also know that my circle radius cannot be greater than half the height of the detection zone.
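For reference, those constraints boil down to a couple of lines of rectangle math (a sketch using System.Drawing; the blob rectangle comes from whatever blob tracker is in use):

    using System.Drawing;

    // Lower half of the tracked blob = the zone that should contain the wheels.
    static Rectangle WheelZone(Rectangle blob)
    {
        return new Rectangle(blob.X, blob.Y + blob.Height / 2,
                             blob.Width, blob.Height / 2);
    }

    // A wheel cannot be taller than the zone, so cap the Hough radius accordingly.
    static int MaxWheelRadius(Rectangle wheelZone)
    {
        return wheelZone.Height / 2;
    }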
In this answer I describe an approach that was tested successfully with the following images:
The image processing pipeline begins by either downsampling the input image or performing a color reduction operation to decrease the amount of data (colors) in the image. This creates smaller groups of pixels to work with. I chose to downsample:
The 2nd stage of the pipeline performs a gaussian blur in order to smooth/blur the images:
Next, the images are ready to be thresholded, i.e. binarized:
The 4th stage requires executing Hough Circles on the binarized image to locate the wheels:
The final stage of the pipeline would be to draw the circles that were found over the original image:
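For illustration, a rough Emgu CV (C#) sketch of that pipeline using the classic Image<,> API; every numeric parameter is a placeholder you would have to tune for your images:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // Sketch: downsample -> blur -> threshold -> Hough circles -> draw.
    var original = new Image<Bgr, byte>("truck.jpg");

    // 1) Downsample to reduce the amount of data.
    Image<Bgr, byte> small = original.PyrDown();

    // 2) Gaussian blur on a grayscale copy to smooth out noise.
    Image<Gray, byte> gray = small.Convert<Gray, byte>().SmoothGaussian(5);

    // 3) Binarize.
    Image<Gray, byte> binary = gray.ThresholdBinary(new Gray(90), new Gray(255));

    // 4) Hough circles (canny threshold, accumulator threshold, dp, min distance,
    //    min/max radius - all of these need tuning).
    CircleF[] circles = binary.HoughCircles(new Gray(120), new Gray(50), 2.0, 20.0, 5, 60)[0];

    // 5) Draw what was found back onto the downsampled image.
    foreach (CircleF c in circles)
        small.Draw(c, new Bgr(Color.Red), 2);

HoughCircles returns one array of circles per channel, hence the [0].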
This approach is not a robust solution. It's meant only to inspire you to continue your search for answers.
I don't do C#, sorry. Good luck!
First, the wheel projections are ellipses, not circles. Second, some background gradient can easily produce circle-like objects, so there should be no surprise here. The problem with ellipses, of course, is that they have 5 DOF and not 3 DOF like circles. Note that a five-dimensional Hough space becomes impractical. Some generalized Hough transforms can probably solve the ellipse problem at the expense of a lot of additional false alarm (FA) circles. To counter FA you have to verify that they really are wheels that belong to a truck and nothing else.
You probably need to start by specifying your problem in terms of objects and backgrounds rather than wheel detection. This is important since objects create a visual context in which to detect wheels, and background analysis will show how easy it would be to segment a truck (the object) in the first place. If the camera is static, one can use motion to detect the background. If the background is relatively uniform, a Gaussian mixture model of its colors may help to eliminate much of it.
I strongly suggest using:
http://cvlabwww.epfl.ch/~lepetit/papers/hinterstoisser_pami11.pdf
and the C# implementation:
https://github.com/dajuric/accord-net-extensions
(take a look at samples)
This algorithm can achieve real-time performance (20-30 fps) even when using more than 2000 templates - so you can cover both the ellipse (projection) and circle shape cases.
You can modify the hand tracking sample (FastTemplateMatchingDemo) by putting in your own binary templates (make them in Paint :-)).
P.S:
To suppress false positives, some kind of tracking is also incorporated. The library I linked above also contains some tracking algorithms, such as the discrete Kalman filter and the particle filter, all with samples!
This library is still under development, so there is a possibility that something will not work.
Please do not hesitate sending me a message.
Background
I am producing a physics teaching platform using XNA C# + Kinect. A user may set up a scene with objects, including:
Sphere
Block
Plane
The obvious fun thing to do is to use different gestures to represent an object. Plane seems to be the easiest one so I intend to start there.
The flow of inputting an object is like this:
Choosing an object (through gesture recognition)
Scale the object
Rotate the object
Place the object
My idea
Here is my idea. We track 6 joints of the upper body:
LEFT + RIGHT wrist
LEFT + RIGHT elbow
LEFT + RIGHT shoulder
If this set of points is collinear, i.e. both arms held horizontal, we will say this is an input gesture for a plane.
If my idea is to be used, then there is a need to determine how collinear the points are, for example some algorithm which returns the "collinearity" of a set of points as a float in the interval [0, 1]. Then I can say, for example, that anything > 0.9 will be accepted as a plane, allowing some room for error.
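For illustration, here is a minimal sketch of one such metric, assuming 2D screen-space joint positions and using the eigenvalues of the points' covariance matrix (1.0 means perfectly collinear):

    using System;
    using System.Drawing;
    using System.Linq;

    // Returns a collinearity score in [0, 1]: 1 means the points lie exactly on a
    // line, lower values mean more spread away from the best-fit line.
    static double Collinearity(PointF[] pts)
    {
        double mx = pts.Average(p => p.X), my = pts.Average(p => p.Y);

        // Entries of the 2x2 covariance (scatter) matrix.
        double sxx = pts.Sum(p => (p.X - mx) * (p.X - mx));
        double syy = pts.Sum(p => (p.Y - my) * (p.Y - my));
        double sxy = pts.Sum(p => (p.X - mx) * (p.Y - my));

        // Its eigenvalues: lambda1 >= lambda2 >= 0.
        double tr = sxx + syy;
        double det = sxx * syy - sxy * sxy;
        double root = Math.Sqrt(Math.Max(0, tr * tr / 4 - det));
        double lambda1 = tr / 2 + root, lambda2 = tr / 2 - root;

        // If all the variance is along one axis, lambda2 is 0 and the score is 1.
        return lambda1 <= 1e-9 ? 1.0 : 1.0 - lambda2 / lambda1;
    }

The same idea extends to the 3D skeleton joints by using the 3x3 covariance matrix instead.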
OR
Alternatively, the idea of "Template Based Posture Detection" sounds great, stated in the link below:
http://blogs.msdn.com/b/eternalcoding/archive/2011/08/02/kinect-toolkit-1-1-template-based-posture-detector-and-voice-commander.aspx
But from what I understand, this seems to be a generic learning algorithm for ANY gestures. I would tend to write something on my own to try out Kinect as a start.
Question
So... does anyone know of an existing algorithm that determines the degree of "collinearity" of a set of points?
I was wondering how (if at all) it would be possible to determine a shape given a set of X,Y coordinates of mouse clicks?
We're dealing with a number of issues here; there may be clicks (coords) which are irrelevant to the shape. Here is an example: http://tinypic.com/view.php?pic=286tlkx&s=6 The green dots represent mouse clicks, and the search is for a square at least x in height/width, at most y in height/width, and composed of four points; the red lines indicate the shape found. I'd like to be able to find a number of basic shapes, such as squares, rectangles, triangles and ideally circles too.
I've heard that Least Squares is something that would help me, but it's not clear to me how this would help me if at all. I'm using C# and examples are more than welcome :)
You can create detectors for each shape you want to support. Each detector will tell you whether a set of points forms that shape.
So, for example, you would pass 4 points to the quad detector and it returns whether the 4 points are arranged as a quad or not. The quad detector could work like this:
for each point
    find the closest neighbour point
    compute the inner angle
    compute the distance to the neighbours
if all inner angles are 90° +- some threshold -> ok
if all distances are equal +- some threshold (percentage) -> ok
otherwise it is no quad.
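A rough C# sketch of that detector (the thresholds are arbitrary starting values; it orders the corners around their centroid so that neighbours end up adjacent):

    using System;
    using System.Drawing;
    using System.Linq;

    // True if the 4 points form a quad: near-90° corners and near-equal sides.
    static bool IsQuad(PointF[] pts, double angleTolDeg = 10, double sideTolPct = 0.15)
    {
        if (pts.Length != 4) return false;

        // Order the corners by angle around the centroid so neighbours are adjacent.
        float cx = pts.Average(p => p.X), cy = pts.Average(p => p.Y);
        PointF[] p = pts.OrderBy(pt => Math.Atan2(pt.Y - cy, pt.X - cx)).ToArray();

        var sides = new double[4];
        var angles = new double[4];
        for (int i = 0; i < 4; i++)
        {
            PointF prev = p[(i + 3) % 4], cur = p[i], next = p[(i + 1) % 4];
            sides[i] = Dist(cur, next);

            // Inner angle at 'cur' between the edges to its two neighbours.
            double a1 = Math.Atan2(prev.Y - cur.Y, prev.X - cur.X);
            double a2 = Math.Atan2(next.Y - cur.Y, next.X - cur.X);
            double deg = Math.Abs((a1 - a2) * 180.0 / Math.PI) % 360;
            angles[i] = deg > 180 ? 360 - deg : deg;
        }

        double mean = sides.Average();
        return angles.All(a => Math.Abs(a - 90) <= angleTolDeg)
            && sides.All(s => Math.Abs(s - mean) / mean <= sideTolPct);
    }

    static double Dist(PointF a, PointF b)
    {
        return Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));
    }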
A naive way to use these detectors is to pass every subset of points to them. If you have enough time, then this is the easiest way. If you want to achieve some performance, you can select the points to pass a bit smarter.
E.g. if quads are always axis-aligned, you can start at any point, go right until you hit another point (again with some threshold), then go down, then go left.
Those are just some thoughts that might help you further. I can imagine that there are algorithms in AI that can solve this problem in a more pragmatic way, maybe neural networks.
I think the title is rather self-explanatory, but just to clarify: I am trying to figure out how to tell which side the collision has occurred on.
For a bit more detail, I'm trying to make a maze-like game so I can't simply stop all movement upon a collision. Instead I need to be able to tell which side the collision has happened on so I can block that direction.
Any help is appreciated and if there is a better approach to this issue, I'm all for trying it out.
I hope this is enough details but if you need anymore, ask and I'll edit. Thanks in advance.
[edit]
#viggity - No, I'm not using any specific game engine, and I would post the current "detection" code but it's a little absurdly robust.
#Streklin - I'm using the this.Paint event to draw onto the form itself, as it was recommended I start that way to get better at drawing in real time. I'm also using a location that's updated each time the timer ticks, based on what I press (left, right, up, down). Yes, the maze is tile-based. Currently it only consists of 3 colors. I'm not a very advanced programmer.
#Eric - Definitely a one-d game. Again, I only have 3 colors: the lines are black, the background is white and the square (the user) is green. I'm using DrawImage() with Bitmaps to draw onto the screen.
[edit pseudo-code summary]
foreach(Wall _wall in walls)
    if(player.intersectsWith(_wall))
        stop movement;
#JeffH - I'm not really sure what you're asking, as that's pretty much all there is besides the testing code I was using to try to get it working. The only thing I left out was the if statement that checks whether it was the x axis or not, so that x and y can move independently of each other. That way, instead of getting "stuck" because you touched the wall, you can slide along it. I didn't see the point in including that, though, since the problem occurs before that.
Assuming you're talking about a 3D game here.
The normal of the face you can see points towards you, so the dot product of your direction vector with the face normal will be negative. If it's positive then you are coming at the face from the back.
If it's zero you're travelling at right angles to the face.
| <---------- your direction of travel
|
|----------> <- face normal
|
| <- face
If you're not in 3D then you could store the direction the wall is facing (as a 2D vector) and do the same dot product with your 2D direction of movement.
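A tiny 2D sketch of that test; the Vec2 type and the stored wall normal are stand-ins for whatever your own wall and player classes provide:

    // Minimal 2D vector with just the dot product we need.
    struct Vec2
    {
        public double X, Y;
        public Vec2(double x, double y) { X = x; Y = y; }
        public static double Dot(Vec2 a, Vec2 b) { return a.X * b.X + a.Y * b.Y; }
    }

    static string ClassifyApproach(Vec2 movement, Vec2 wallOutwardNormal)
    {
        double d = Vec2.Dot(movement, wallOutwardNormal);
        if (d < 0) return "moving into the front of the wall";   // this is the face you hit
        if (d > 0) return "approaching the wall from behind";
        return "moving parallel to the wall";                    // at right angles
    }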
Based on your edit, can you only go in one direction at a time, or can you go in diagonal directions? If it's the latter, ChrisF has provided the answer in 3D and the corresponding information for 2D. If not, you should just have to stop travel in the direction of travel - since there are only four possibilities it should be easy enough to check them all for a simple starter game.
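As a minimal sketch of "stop travel only in the direction of travel" (System.Drawing rectangles and a per-tick dx/dy are assumptions about your setup), which also lets the player slide along walls:

    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;

    // Moves the player by (dx, dy), cancelling only the axis that would collide,
    // so the player slides along walls instead of sticking to them.
    static Point ResolveMove(Rectangle player, IEnumerable<Rectangle> walls, int dx, int dy)
    {
        var movedX = new Rectangle(player.X + dx, player.Y, player.Width, player.Height);
        var movedY = new Rectangle(player.X, player.Y + dy, player.Width, player.Height);

        int allowedDx = walls.Any(w => movedX.IntersectsWith(w)) ? 0 : dx;   // blocked left/right
        int allowedDy = walls.Any(w => movedY.IntersectsWith(w)) ? 0 : dy;   // blocked up/down

        return new Point(player.X + allowedDx, player.Y + allowedDy);
    }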