Vector direction for gravity in a circular orbit - C#

I am currently working on a project in C# where I play around with planetary gravitation, which I know is a hardcore topic to grasp to its fullest, but I like challenges. I've been reading up on Newton's laws and Kepler's laws, but one thing I cannot figure out is how to get the correct gravitational direction.
In my example I only have two bodies: a satellite and a planet. This is to simplify it, so I can grasp it - but my plan is to have multiple objects that dynamically affect each other, and hopefully end up with a somewhat realistic multi-body system.
When you have an orbit, the satellite experiences a gravitational force, and that is of course in the direction of the planet, but that direction isn't constant. To explain my problem better, I'll try using an example:
Let's say we have a satellite moving at a speed of 50 m/s that accelerates towards the planet at 10 m/s/s, in a radius of 100 m (all theoretical numbers). If we then say that the framerate is one frame per second, then after one second the object will be 50 units forward and 10 units down.
As the satellite moves multiple units in a frame, about 50% of the radius, the gravitational direction has shifted a lot during this frame, but the applied force has only been "downwards". This creates a big margin of error, especially if the object is moving a big percentage of the radius.
In our example we'd probably need our gravitational direction to be based upon the average between our current position and the position at the end of this frame.
How would one go about calculating this?
I have a basic understanding of trigonometry, but mainly with focus on triangles. Assume I am stupid, because compared to any of you, I probably am.
(I made a previous question but ended up deleting it, as it created some hostility, was basically not that well phrased, and was all too general - it wasn't really a specific question. I hope this is better. If not, then please inform me, I am here to learn :) )
Just for reference, this is the function i have right now for movement:
foreach (ExtTerBody OtherObject in UniverseController.CurrentUniverse.ExterTerBodies.Where(x => x != this))
{
    double massOther = OtherObject.Mass;
    double R = Vector2Math.Distance(Position, OtherObject.Position);
    double V = (massOther) / Math.Pow(R, 2) * UniverseController.DeltaTime;
    Vector2 NonNormTwo = (OtherObject.Position - Position).Normalized() * V;
    Vector2 NonNormDir = Velocity + NonNormTwo;
    Velocity = NonNormDir;
    Position += Velocity * Time.DeltaTime;
}
If I have phrased myself badly, please ask me to rephrase parts - English isn't my native language, and specific subjects can be hard to phrase when you don't know the correct technical terms. :)
I have a hunch that this is covered in Kepler's second law, but if it is, then I'm not sure how to use it, as I don't understand his laws to the fullest.
Thank you for your time - it means a lot!
(Also, if anyone sees any mistakes in my function, then please point them out!)

I am currently working on a project in C# where I play around with planetary gravitation
This is a fun way to learn simulation techniques, programming and physics at the same time.
One thing I cannot figure out is how to get the correct gravitational direction.
I assume that you are not trying to simulate relativistic gravitation. The Earth isn't in orbit around the Sun, the Earth is in orbit around where the sun was eight minutes ago. Correcting for the fact that gravitation is not instantaneous can be difficult. (UPDATE: According to commentary this is incorrect. What do I know; I stopped taking physics after second year Newtonian dynamics and have only the vaguest understanding of tensor calculus.)
You'll do best at this early stage to assume that the gravitational force is instantaneous and that planets are points with all their mass at the center. The gravitational force vector is a straight line from one point to another.
Let's say we have a satellite moving at a speed of 50 m/s ... If we then say that the framerate is one frame per second then after one second the object will be 50 units right and 10 units down.
Let's make that more clear. Force is equal to mass times acceleration. You work out the force between the bodies. You know their masses, so you now know the acceleration of each body. Each body has a position and a velocity. The acceleration changes the velocity. The velocity changes the position. So if the particle starts off having a velocity of 50 m/s to the left and 0 m/s down, and then you apply a force that accelerates it by 10 m/s/s down, then we can work out the change to the velocity, and then the change to the position. As you note, at the end of that second the position and the velocity will have both changed by a huge amount compared to their existing magnitudes.
As the satellite moves multiple units in a frame, about 50% of the radius, the gravitational direction has shifted a lot during this frame, but the applied force has only been "downwards". This creates a big margin of error, especially if the object is moving a big percentage of the radius.
Correct. The problem is that the frame rate is enormously too low to correctly model the interaction you're describing. You need to be running the simulation so that you're looking at tenths, hundredths or thousandths of seconds if the objects are changing direction that rapidly. The size of the time step is usually called the "delta t" of the simulation, and yours is way too large.
For planetary bodies, what you're doing now is like trying to model the earth by simulating its position every few months and assuming it moves in a straight line in the meanwhile. You need to actually simulate its position every few minutes, not every few months.
In our example we'd probably need our gravitational direction to be based upon the average between our current position and the position at the end of this frame.
You could do that but it would be easier to simply decrease the "delta t" for the computation. Then the difference between the directions at the beginning and the end of the frame is much smaller.
Once you've got that sorted out then there are more techniques you can use. For example, you could detect when the position changes too much between frames and go back and redo the computations with a smaller time step. If the positions change hardly at all then increase the time step.
Once you've got that sorted, there are lots of more advanced techniques you can use in physics simulations, but I would start by getting basic time stepping really solid first. The more advanced techniques are essentially variations on your idea of "do a smarter interpolation of the change over the time step" -- you are on the right track here, but you should walk before you run.
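If you want to experiment with that "redo the step in smaller pieces" idea, here is a rough sketch using the names from your question. Advance and PlanetPosition are placeholders for your own single-step code and the planet's position, and the 1% threshold is arbitrary; this is not a drop-in implementation:
void Step(double deltaTime)
{
    // Remember the state and take a trial step with the existing single-step code.
    Vector2 oldPosition = Position;
    Vector2 oldVelocity = Velocity;
    Advance(deltaTime);

    double moved = Vector2Math.Distance(oldPosition, Position);
    double radius = Vector2Math.Distance(oldPosition, PlanetPosition);

    // If we covered more than 1% of the radius in one step, roll back and redo it in smaller pieces.
    if (moved > 0.01 * radius)
    {
        Position = oldPosition;
        Velocity = oldVelocity;
        int subSteps = (int)Math.Ceiling(moved / (0.01 * radius));
        for (int i = 0; i < subSteps; i++)
            Advance(deltaTime / subSteps);
    }
}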

I'll start with a technique that is almost as simple as the Euler-Cromer integration you've been using but is markedly more accurate. This is the leapfrog technique. The idea is very simple: position and velocity are kept at half time steps from one another.
The initial state has position and velocity at time t0. To get that half step offset, you'll need a special case for the very first step, where velocity is advanced half a time step using the acceleration at the start of the interval and then position is advanced by a full step. After this first time special case, the code works just like your Euler-Cromer integrator.
In pseudo code, the algorithm looks like
void calculate_accel (orbiting_body_collection, central_body) {
    foreach (orbiting_body : orbiting_body_collection) {
        delta_pos = central_body.pos - orbiting_body.pos;
        orbiting_body.acc =
            (central_body.mu / pow(delta_pos.magnitude(), 3)) * delta_pos;
    }
}

void leapfrog_step (orbiting_body_collection, central_body, delta_t) {
    static bool initialized = false;
    calculate_accel (orbiting_body_collection, central_body);
    if (! initialized) {
        initialized = true;
        foreach orbiting_body {
            orbiting_body.vel += orbiting_body.acc*delta_t/2.0;
            orbiting_body.pos += orbiting_body.vel*delta_t;
        }
    }
    else {
        foreach orbiting_body {
            orbiting_body.vel += orbiting_body.acc*delta_t;
            orbiting_body.pos += orbiting_body.vel*delta_t;
        }
    }
}
Note that I've added acceleration as a field of each orbiting body. This was a temporary step to keep the algorithm similar to yours. Note also that I moved the calculation of acceleration to its own separate function. That is not a temporary step. It is the first essential step toward even more advanced integration techniques.
The next essential step is to undo that temporary addition of the acceleration. The accelerations properly belong to the integrator, not the body. On the other hand, the calculation of accelerations belongs to the problem space, not the integrator. You might want to add relativistic corrections, or solar radiation pressure, or planet-to-planet gravitational interactions. The integrator should be unaware of how those accelerations are calculated. The function calculate_accel is a black box called by the integrator.
Different integrators have very different concepts of when accelerations need to be calculated. Some store a history of recent accelerations, some need an additional workspace to compute an average acceleration of some sort. Some do the same with velocities (keep a history, have some velocity workspace). Some more advanced integration techniques use a number of techniques internally, switching from one to another to provide the best balance between accuracy and CPU usage. If you want to simulate the solar system, you need an extremely accurate integrator. (And you need to move far, far away from floats. Even doubles aren't good enough for a high precision solar system integration. With floats, there's not much point going past RK4, and maybe not even leapfrog.)
Properly separating what belongs to whom (the integrator versus the problem space) makes it possible to refine the problem domain (add relativity, etc.) and makes it possible to easily switch integration techniques so you can evaluate one technique versus another.
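To make that separation concrete, here is a rough C# sketch. The Body, AccelerationModel and LeapfrogIntegrator names are mine, not an existing API, and Vector2 is assumed to support the usual arithmetic with floats as in the question:
public class Body
{
    public Vector2 Position;
    public Vector2 Velocity;
}

// The problem space: given the bodies, produce an acceleration for each of them.
public delegate Vector2[] AccelerationModel(IReadOnlyList<Body> bodies);

public class LeapfrogIntegrator
{
    private readonly AccelerationModel accelerations;
    private bool initialized;

    public LeapfrogIntegrator(AccelerationModel accelerations)
    {
        this.accelerations = accelerations;
    }

    public void Step(IReadOnlyList<Body> bodies, float dt)
    {
        // Black box call: gravity, radiation pressure, whatever the problem space defines.
        Vector2[] acc = accelerations(bodies);
        float velDt = initialized ? dt : dt / 2.0f;   // half step only the very first time
        for (int i = 0; i < bodies.Count; i++)
        {
            bodies[i].Velocity += acc[i] * velDt;
            bodies[i].Position += bodies[i].Velocity * dt;
        }
        initialized = true;
    }
}
The point of this structure is that the integrator never knows whether the accelerations came from a single planet, N bodies, or something with extra corrections; you just pass in a different AccelerationModel.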

So I found a solution. It might not be the smartest, but it works. It came to mind after reading both Eric's answer and the comment made by Marcus, so you could say it's a combination of the two:
This is the new code:
foreach (ExtTerBody OtherObject in UniverseController.CurrentUniverse.ExterTerBodies.Where(x => x != this))
{
    double massOther = OtherObject.Mass;
    double R = Vector2Math.Distance(Position, OtherObject.Position);
    double V = (massOther) / Math.Pow(R, 2) * Time.DeltaTime;
    float VRmod = (float)Math.Round(V / (R * 0.001), 0, MidpointRounding.AwayFromZero);

    if (V > R * 0.01f)
    {
        for (int x = 0; x < VRmod; x++)
        {
            EulerMovement(OtherObject, Time.DeltaTime / VRmod);
        }
    }
    else
        EulerMovement(OtherObject, Time.DeltaTime);
}

public void EulerMovement(ExtTerBody OtherObject, float deltaTime)
{
    double massOther = OtherObject.Mass;
    double R = Vector2Math.Distance(Position, OtherObject.Position);
    double V = (massOther) / Math.Pow(R, 2) * deltaTime;
    Vector2 NonNormTwo = (OtherObject.Position - Position).Normalized() * V;
    Vector2 NonNormDir = Velocity + NonNormTwo;
    Velocity = NonNormDir;
    //Debug.WriteLine("Velocity=" + Velocity);
    Position += Velocity * deltaTime;
}
To explain it:
I came to the conclusion that if the problem was that the satellite had too much velocity in one frame, then why not separate it into multiple frames? So this is what "it" does now.
When the velocity of the satellite is more than 1% of the current radius, it splits the calculation into multiple smaller steps, making it more precise. This will of course lower the framerate when working with high velocities, but that's okay for a project like this.
Different solutions are still very welcome. I might tweak the trigger amounts, but the most important thing is that it works; then I can worry about making it smoother!
Thanks to everybody who took a look, and to everyone who helped me find the conclusion myself! :) It's awesome that people can help like this!

Related

Unity - Moving the player or the scenario - Endless runner

I'm going to create an endless runner in Unity and I was wondering if I should move the player or the scenario during the runs.
The most obvious answer sounds like "the player" because you are moving fewer objects, but... does the performance get affected if the size of the scenes is too large? I don't think so, but my real worry is about the coordinates:
What happens if the player runs so far away that the coordinates can't fit in a float variable? I think that the Transform component uses a Vector3 to store the coordinates, and this Vector3 uses float variables (with a limit of +3.4E+38) for each of the coordinates.
Thank you for your answers in advance,
Guillem Poy
Even though I don't know what happens if the value of the coordinates exceeds the capacity of a float variable (I think the number will simply be wrong), let's do some calculations...
Even if the player is moving 1000 units per second, to create that problem the player would have to play for about 9.4E+31 hours (roughly 1.1E+28 years). I don't think anybody is going to play that much.
So I think the best option is to move the player.
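As a rough sanity check of that arithmetic (assuming a constant 1000 units per second and a float limit of roughly 3.4E+38):
double unitsPerSecond = 1000.0;
double maxFloat = 3.4e38;                    // roughly float.MaxValue
double seconds = maxFloat / unitsPerSecond;  // ~3.4e35 s
double hours = seconds / 3600.0;             // ~9.4e31 h
double years = hours / (24.0 * 365.25);      // ~1.1e28 years
Console.WriteLine(hours.ToString("E1") + " hours, " + years.ToString("E1") + " years");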

Calculating Speed Based on Randomly-Timed Received Position Coordinates

I'm writing an application that has a need to know the speed you're traveling. My application talks to several pieces of equipment, all with different built-in GPS receivers. Where the hardware I'm working with reports speed, I use that parameter. But in some cases, I have hardware which does NOT report speed, simply latitude and longitude.
What I have been doing in that case is marking the time that I receive the first coordinate, then waiting for another coordinate to come in. I then calculate the distance traveled and divide by the elapsed time.
The problem I'm running into is that some of the hardware reports position quickly (5-10 times per second) while some reports position slowly (0.5 times per second). When I'm receiving the GPS position quickly, my algorithm fails to accurately calculate the speed due to the inherent inaccuracies of GPS receivers. In other words, the position will naturally move due to GPS inaccuracy, and since the elapsed time span from the last received position is so small, my algorithm thinks we've moved far over a short time - meaning we are going fast (when in reality we may be standing still).
How can I go about averaging the speed to avoid this problem? It seems like the process will have to be adaptive based on how fast the points come in. For example if I simply average the last 5 points collected to do my speed calculation, it will probably work great for "fast" reporting units but it will hurt my accuracy for "slow" reporting units.
Any ideas?
Use a simple filter:
Take a position only if it is more than 10 meters away from the last taken position.
Then calculate the distance between lastGood and thisGood, and divide by timeDiff.
You further want to ignore all speeds under 5 km/h, where GPS is most noisy.
You can further optimize by calculating the direction between the last and this position; if it stays stable, you take it. This helps filtering.
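A minimal sketch of that filter could look like the following. GeoPosition, the distance function and the thresholds are placeholders, not an existing API; plug in your own types and a proper great-circle distance:
class GeoPosition
{
    public double Latitude;
    public double Longitude;
}

class SpeedFilter
{
    private GeoPosition lastGood;
    private DateTime lastGoodTime;
    private const double MinDistanceMeters = 10.0;  // ignore jitter below this
    private const double MinSpeedKmh = 5.0;         // treat anything slower as standing still

    // Returns a speed in km/h, or null if this fix was rejected by the filter.
    public double? Update(GeoPosition current, DateTime timestamp)
    {
        if (lastGood == null)
        {
            lastGood = current;
            lastGoodTime = timestamp;
            return null;
        }

        double meters = DistanceMeters(lastGood, current);
        if (meters < MinDistanceMeters)
            return null;                            // has not moved enough to trust

        double hours = (timestamp - lastGoodTime).TotalHours;
        double kmh = (meters / 1000.0) / hours;

        lastGood = current;
        lastGoodTime = timestamp;

        return kmh < MinSpeedKmh ? 0.0 : kmh;       // clamp noisy low speeds to zero
    }

    private static double DistanceMeters(GeoPosition a, GeoPosition b)
    {
        // Placeholder: use a proper great-circle (haversine) distance here.
        throw new NotImplementedException();
    }
}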
I would average the speed over the last X seconds. Let's pick X=3. For your fast reporters that means averaging your speed with about 20 data points. For your slow reporters, that may only get you 6 data points. This should keep the accuracy fairly even across the board.
I'd try using the average POSITION over the last X seconds.
This should "average out" the random noise associated with the high frequency location input....which should yield a better speed computation.
(Obviously you'd use "averaged" positions to compute your speed)
You probably have an existing data point structure to pull a LINQ query from?
In light of the note that we need to account for negative vectors, and the suggestion to account for known margins of error, here is a more complex example:
class GPS
{
    List<GPSData> recentData;
    TimeSpan speedCalcZone = new TimeSpan(100000);  // averaging window, expressed in ticks
    decimal acceptableError = .5m;

    double CalcAverageSpeed(GPSData newestPoint)
    {
        // Only use recent points whose error margin against the newest point is acceptable.
        var vectors = (from point in recentData
                       where point.timestamp > DateTime.Now - speedCalcZone
                       where newestPoint.VectorErrorMargin(point) < acceptableError
                       select new
                       {
                           xVector = newestPoint.XVector(point),
                           yVector = newestPoint.YVector(point)
                       });

        var averageXVector = (from vector in vectors
                              select vector.xVector).Average();
        var averageYVector = (from vector in vectors
                              select vector.yVector).Average();

        // The speed is the magnitude of the averaged vector.
        var averagedSpeed = Math.Sqrt(Math.Pow(averageXVector, 2) + Math.Pow(averageYVector, 2));
        return averagedSpeed;
    }
}
But as pointed out in comments, there is no one magic algorithm, you have to tweak it for your circumstances and needs.
You're looking for one ideal algorithm that may not exist, for one very simple reason: you can't invent data where there isn't any, and sometimes you can't even tell where the data ends and the error begins.
That being said, there are ways to reduce the "noise", as you've discovered with averaging 5 consecutive measurements. I'd add that you can throw away the "outliers" and choose the 3 of the 5 that are closest to each other.
The question here is what would work best (or acceptably well) for your situation. If you're tracking trucks moving around the continent a few mph won't matter as the errors will cancel themselves out, but if you're tracking a flying drone that moves between buildings the difference can be quite significant.
Here are some more ideas, you can pick and choose how far you can go, I'm assuming the truck scenario and the idea is to get most probable speed when you don't have an accurate reading:
- discard "improbable" speeds - tall buildings can reflect GPS signal causing speeds of over 100mph when you're just walking, having a "highway map" (see below) can help managing the cut-off value
- transmit, store and calculate with error ranges rather than point values (some GPS reports error range).
- keep average error per location
- keep average error per reporting device
- keep average speed per location, you'll end up having a map of highways vs other roads
- you can correlate location speed and direction
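For the error-range idea above, a small sketch. The accuracy values are whatever your receivers report, and the names are made up for illustration:
// Compute a speed range instead of a single value, using each fix's reported accuracy.
static void SpeedRangeKmh(double distanceMeters, double errorAMeters, double errorBMeters,
                          TimeSpan elapsed, out double minKmh, out double maxKmh)
{
    double hours = elapsed.TotalHours;
    // Worst cases: the two fixes erred toward each other, or away from each other.
    double minMeters = Math.Max(0.0, distanceMeters - errorAMeters - errorBMeters);
    double maxMeters = distanceMeters + errorAMeters + errorBMeters;
    minKmh = minMeters / 1000.0 / hours;
    maxKmh = maxMeters / 1000.0 / hours;
}
If even the lower bound is above your "improbable" cut-off for that location, the reading can be discarded outright.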

What's the best way to move a sprite faster than the update rate in a simple 2d game?

At the moment, I have a sprite that I have arbitrarily set to move 1 pixel per second. The code is basically this (the code isn't optimised at all, I could do it much nicer, but it is the principle I am trying to solve first):
private const long MOVEMENT_SPEED = 10000000; // Ticks in 1 second
private long movementTimeSpan = MOVEMENT_SPEED;

protected void PerformMovement(GameTime gameTime)
{
    movementTimeSpan -= gameTime.ElapsedGameTime.Ticks;
    if (movementTimeSpan <= 0)
    {
        // Do the movement of 1 pixel in here, and set movementTimeSpan back to MOVEMENT_SPEED
    }
}
PerformMovement is called in a loop as you'd expect, and it equates to updating around 10 times per second. So if I lower MOVEMENT_SPEED, my sprite speeds up, but it never gets any faster than 10 pixels per second. For projectiles and other stuff I obviously want it to update much faster than this.
If I alter the movement to 2 pixels or more, it creates issues with calculating collisions and suchlike, but these might be possible to overcome.
The other alternative is to store x and y as a float rather than an int, and increase the values as a fraction of the number of elapsed ticks. I am not sure if this will create smooth movement or not as there still has to be some rounding involved.
So my question is, does anyone know the standard way?
Should I increase the amount to more than 1 pixel and update my collision detection to be recursive, should I store X,Y as floats and move as a % of elapsed time, or is there a 3rd better way of doing it?
The standard way is to not count down a timer to move, but instead the opposite:
private const float MOVEMENT_SPEED = 10.0f; // pixels per second
private float time;

protected void PerformMovement(GameTime gameTime)
{
    time = (float)gameTime.ElapsedGameTime.TotalSeconds;
    character.X += MOVEMENT_SPEED * time;
}
Make the movement based on the time elapsed. The reason floats are commonly used is to get the fractional value of motion. Fixed-point is another common fractional representation but uses ints instead.
As for collision, collision can be very tricky but in general you don't absolutely need to do it once per pixel of motion (as you suggested with recursion); that's overkill and will lead to terrible performance in no time. If you are currently having trouble with 2-pixel motion, I would reevaluate how you're doing your collisions. In general, it becomes problematic when you're moving very fast to the point of skipping over thin walls, or even passing over to the "wrong side" of a wall, depending on how your collision is set up. This is known as "tunnelling". There are many ways of solving this. Look here and scroll down to "Preventing Tunnelling". As the article states many people just cap their speed at a safe value. But another common method is to "step" through your algorithm in smaller time steps than is currently being passed in. For example, if the current elapsed time is 0.1, you could step by 0.01 within a loop and check each small step.
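A rough sketch of that sub-stepping idea, where MoveAndCheckCollision stands in for whatever integration and collision code you already run per update, and the 0.01 cap is an arbitrary choice:
const float MaxStepSeconds = 0.01f;

void UpdateWithSubSteps(float elapsedSeconds)
{
    // Split a big elapsed time into several small steps so a fast sprite
    // cannot jump across a thin wall within a single step.
    int steps = Math.Max(1, (int)Math.Ceiling(elapsedSeconds / MaxStepSeconds));
    float dt = elapsedSeconds / steps;

    for (int i = 0; i < steps; i++)
        MoveAndCheckCollision(dt);
}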
A way to do what you request, although not very recommended, is to increase your game's update frequency to a higher value than the usual 30 or 60 fps, but only draw to the screen every N frames. You can do it by just having your graphics engine ignore Draw calls until a count or timer reaches the desired value.
Of course, this solution should be avoided unless it is specifically desired, because performance can degrade quite fast as the number of updated elements increases.
For example, Proun (not an XNA game) uses this trick for exactly your reasons.
With the default of IsFixedTimeStep = true, XNA behaves in a similar fashion, skipping calls to Draw if Update takes too long.

XNA - 3D Rotation about local (changing) ship axes - What am I missing?

I'm developing a 3D spaceshooter in XNA as a school project (basically Asteroids in 3D with power-ups), and have been working to implement roll, pitch, and yaw with respect to the ship's local axes. (I should emphasize: the rotation is not with respect to the absolute/world x, y, and z axes.) Sadly, I've been struggling with this for the last few weeks. Google and my neolithic monkey brain have failed me; maybe you folks can help!
Here's my setup:
Via keyboard input, I have the following variables ready to go:
yawRadians, which stores the desired yaw away from the ship's initial position
pitchRadians, which stores the desired pitch away from the ship's initial position
rollRadians, which stores the desired roll away from the ship's initial position
The ship also maintains its own Front, Back, Right, Left, Top and Bottom unit vectors, which are used both for the rotations and also for propulsion. (Different keys will propel the ship toward the Front, Back, etc. This part is working great.)
Ultimately, I generate the rotation matrix mShipRotation, representing all of the ship's rotations, which is passed to the ship's draw method.
The problem I have is with the rotations themselves. Different solutions I've tried have had differing results. Here's what I've gone with so far:
Method 1 – Yaw, Pitch, and Roll relative to the absolute/world x, y, and z axes
At first, I naively tried using the following in my ship's Update method:
qYawPitchRoll = Quaternion.CreateFromYawPitchRoll(yawRadians, pitchRadians, rollRadians);
vFront = Vector3.Transform(vOriginalFront, qYawPitchRoll);
vBack = -1 * vFront;
vRight = Vector3.Transform(vOriginalRight, qYawPitchRoll);
vLeft = -1 * vRight;
vTop = Vector3.Transform(vOriginalTop, qYawPitchRoll);
vBottom = -1 * vTop;
mShipRotation = Matrix.CreateFromQuaternion(qYawPitchRoll);
(vOriginalFront, vOriginalRight, and vOriginalTop just store the ship's initial orientation.)
The above actually works without any errors, except that the rotations are always with respect to the x, y, and z axes, and not with respect to the ship's Front/Back/Right/Left/Top/Bottom vectors. This results in the ship not always yawing and pitching as expected. (Specifically, yawing degenerates to rolling if you have pitched up so the ship is pointing to the top. This makes sense, as yawing in this solution is just rotating about the world up axis.)
I heard about the Quaternion.CreateFromAxisAngle method, which sounded perfect. I could just combine three Quaternion rotations, one around each of the ship's local axes. What could go wrong?
Method 2 – Quaternion.CreateFromAxisAngle
Here's the second code snippet I used in my ship's Update method:
qPitch = Quaternion.CreateFromAxisAngle(vRight, pitchRadians);
qYaw = Quaternion.CreateFromAxisAngle(vTop, yawRadians);
qRoll = Quaternion.CreateFromAxisAngle(vFront, rollRadians);
qPitchYawAndRoll = Quaternion.Concatenate(Quaternion.Concatenate(qPitch, qYaw), qRoll);
vFront = Vector3.Normalize(Vector3.Transform(vOriginalFront, qPitchYawAndRoll));
vBack = -1 * vFront;
vRight = Vector3.Normalize(Vector3.Transform(vOriginalRight, qPitchYawAndRoll));
vLeft = -1 * vRight;
vTop = Vector3.Normalize(Vector3.Transform(vOriginalTop, qPitchYawAndRoll));
vBottom = -1 * vTop;
mShipRotation = Matrix.CreateFromQuaternion(qPitchYawAndRoll);
The above works perfectly if I only do one rotation at a time (yaw, pitch, or roll), but if I combine more than one rotation simultaneously, the ship begins to wildly spin and point in many different directions, getting more and more warped until it disappears entirely.
I've tried variants of the above where I first apply the Pitch to all the vectors, then the Yaw, then the Roll, but no luck.
I also tried it using Matrices directly, despite concerns of Gimbal Lock:
Method 3: Matrices
mShipRotation = Matrix.Identity;
mShipRotation *= Matrix.CreateFromAxisAngle(vRight, pitchRadians);
mShipRotation *= Matrix.CreateFromAxisAngle(vFront, rollRadians);
mShipRotation *= Matrix.CreateFromAxisAngle(vTop, yawRadians);
vFront = Vector3.Normalize(Vector3.Transform(vOriginalFront, mShipRotation));
vBack = -1 * vFront;
vRight = Vector3.Normalize(Vector3.Transform(vOriginalRight, mShipRotation));
vLeft = -1 * vRight;
vTop = Vector3.Normalize(Vector3.Transform(vOriginalTop, mShipRotation));
vBottom = -1 * vTop;
No luck; I got the same behavior. One rotation at a time is okay, but rotating about multiple axes resulted in the same bizarre spinning behavior.
After some brilliant debugging (read as: blindly outputting variables to the console), I noticed that the Front/Right/Top vectors were slowly, over time, becoming less orthogonal to one another. I added Normalization to vectors basically every step of the way, and also tried computing new vectors based on cross products, to try to ensure that they always remained perpendicular to one another, but even then they were not perfectly orthogonal. I'm guessing this is due to floating point math not being perfectly precise.
Note that I regenerate the mShipRotation matrix every Update method, so it cannot be accumulating drift or inaccuracies directly. I think that applying multiple Quaternion rotations may be accumulating error (as I can do one rotation just fine), but my attempts to fix it have not worked.
In short:
I can pitch/roll/yaw relative to the world axes x, y, and z just fine. It's just not what the player would expect to happen, as the rolling/pitching/yawing is not relative to the ship, but to the world.
I can roll, pitch, or yaw around the ship's local axes (Front/Back/Top/Bottom/Left/Right) just fine, but only one at a time. Any combination of them will cause the ship to spiral and deform rapidly.
I've tried Quaternions and Matrices. I've tried suggestions I've found in various forums, but ultimately do not wind up with a working solution. Often people recommend using Quaternion.CreateFromYawPitchRoll, not really realizing that the intent is to have a ship rotate about its own (constantly changing) axes, and not the (fixed) world axes.
Any ideas? Given a situation where you are given the roll, pitch, and yaw about a ship's front, right, and top vectors, how would you go about creating the rotation matrix?
You seem to be applying your overall angles (yawRadians, pitchRadians, rollRadians) to your local axes in your methods 2 and 3. These values are married to the world axes and have no meaning in local space. The root of your problem is wanting to hang onto the 3 angles.
In local space, use an angular amount that is the amount you want to rotate between frames. If you only pitched up 0.002f radians since the last frame, that would be what you would use when you rotate around the vRight axis.
This will screw with your overall angle values (yawRadians, pitchRadians, & rollRadians) and render them useless but most folks who stick with 3d programming quickly drop the angle approach to storing the orientation anyway.
Simply rotate your matrix or quaternion little by little each frame around your local axis and store the orientation in that structure (the quat or matrix) instead of the 3 angles.
There are no worries about gimbal lock when you are rotating a matrix about local axes like this. You would have to have 90-degree rotations between frames to bring that into the picture.
If you want to avoid error accumulation use a quat to store the orientation and normalize it each frame. Then the matrix you send to the effect will be made each frame from the quat and will be ortho-normal. Even if you didn't use a quat and stored your orientation in a matrix it would take hours or days to accumulate enough error to be visually noticeable.
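For concreteness, here is a sketch of that approach with XNA types. The per-frame delta angles and the vRight/vTop/vFront/vOriginal* fields are from the question; the rest is illustrative, not a drop-in solution:
Quaternion qOrientation = Quaternion.Identity;   // the only orientation state that is stored

void UpdateOrientation(float pitchDelta, float yawDelta, float rollDelta)
{
    // Small per-frame rotations about the ship's current (world-space) local axes.
    Quaternion qPitch = Quaternion.CreateFromAxisAngle(vRight, pitchDelta);
    Quaternion qYaw   = Quaternion.CreateFromAxisAngle(vTop, yawDelta);
    Quaternion qRoll  = Quaternion.CreateFromAxisAngle(vFront, rollDelta);
    Quaternion qFrame = Quaternion.Concatenate(Quaternion.Concatenate(qPitch, qYaw), qRoll);

    // Existing orientation first, then this frame's small rotation; renormalize to fight drift.
    qOrientation = Quaternion.Concatenate(qOrientation, qFrame);
    qOrientation.Normalize();

    // Rebuild the local axes and the draw matrix from the single stored orientation.
    vFront = Vector3.Normalize(Vector3.Transform(vOriginalFront, qOrientation));
    vRight = Vector3.Normalize(Vector3.Transform(vOriginalRight, qOrientation));
    vTop   = Vector3.Normalize(Vector3.Transform(vOriginalTop, qOrientation));
    vBack = -1 * vFront;
    vLeft = -1 * vRight;
    vBottom = -1 * vTop;
    mShipRotation = Matrix.CreateFromQuaternion(qOrientation);
}
The key differences from Method 2 in the question are that the angles here are per-frame deltas rather than accumulated totals, and the result is folded back into qOrientation instead of always starting again from the original orientation.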
This blog might help: http://stevehazen.wordpress.com/2010/02/15/matrix-basics-how-to-step-away-from-storing-an-orientation-as-3-angles/
I think this might be what you're looking for:
http://forums.create.msdn.com/forums/t/33807.aspx
I'm pretty sure that CreateFromAxisAngle is the way to go.

WPF: Speed of Movement (Translation) varies with distance

With reference to this programming game I am currently building.
I wrote the below method to move (translate) a canvas to a specific distance and according to its current angle:
private void MoveBot(double pix, MoveDirection dir)
{
    if (dir == MoveDirection.Forward)
    {
        Animator_Body_X.To = Math.Sin(HeadingRadians) * pix;
        Animator_Body_Y.To = ((Math.Cos(HeadingRadians) * pix) * -1);
    }
    else
    {
        Animator_Body_X.To = ((Math.Sin(HeadingRadians) * pix) * -1);
        Animator_Body_Y.To = Math.Cos(HeadingRadians) * pix;
    }

    Animator_Body_X.To += Translate_Body.X;
    Animator_Body_Y.To += Translate_Body.Y;

    Animator_Body_X.From = Translate_Body.X;
    Translate_Body.BeginAnimation(TranslateTransform.XProperty, Animator_Body_X);

    Animator_Body_Y.From = Translate_Body.Y;
    Translate_Body.BeginAnimation(TranslateTransform.YProperty, Animator_Body_Y);

    TriggerCallback();
}
One of the parameters it accepts is a number of pixels that should be covered when translating.
As regards the above code, Animator_Body_X and Animator_Body_Y are of type DoubleAnimation, which are then applied to the robot's TranslateTransform object: Translate_Body
The problem that I am facing is that the robot (which is a canvas) moves at a different speed according to the inputted distance. Thus, the longer the distance, the faster the robot moves! To put this into perspective, if the inputted distance is 20, the robot moves fairly slowly, but if the inputted distance is 800, it literally shoots off the screen.
I need to make this speed constant, regardless of the inputted distance.
I think I need to tweak some of the Animator_Body_X and Animator_Body_Y properties according to the inputted distance, but I don't know what to tweak exactly (I think some math has to be done as well).
Here is a list of the DoubleAnimation properties that maybe you will want to take a look at to figure this out.
Is there a reason you're using DoubleAnimation? DoubleAnimation is designed to take a value from A to B over a specific time period, using linear interpolation with acceleration/deceleration at the start/end of that period if required (which is why it's "faster" for longer distances: it has further to go in the same time!). By the looks of things, what you are trying to do is move something a fixed distance each "frame" depending on what direction it is facing? That doesn't seem to fit to me.
You could calculate the length of the animation depending on the distance, so the length is longer for longer distances; then the item is always moving at the same "speed". To me, it makes more sense to just move the item yourself, though: you can calculate an object's velocity based on your angle criteria, then each "frame" manually move the item as far as it needs to go based on that velocity. With this method you could also easily apply friction etc. to the velocity if required.
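A rough sketch of that "move it yourself each frame" approach in WPF, using the Translate_Body and HeadingRadians names from the question; the speed constant and the DateTime-based timing are assumptions for illustration:
private const double PixelsPerSecond = 100.0;   // arbitrary constant speed
private DateTime lastFrame = DateTime.Now;

public void StartMoving()
{
    CompositionTarget.Rendering += OnRendering;  // fires once per rendered frame
}

private void OnRendering(object sender, EventArgs e)
{
    DateTime now = DateTime.Now;
    double dt = (now - lastFrame).TotalSeconds;
    lastFrame = now;

    // Constant speed regardless of the total distance: advance by speed * elapsed time
    // along the current heading (same sign convention as the Forward case above).
    Translate_Body.X += Math.Sin(HeadingRadians) * PixelsPerSecond * dt;
    Translate_Body.Y -= Math.Cos(HeadingRadians) * PixelsPerSecond * dt;
}
If you go this way, don't mix it with BeginAnimation on the same X/Y properties, because an active animation takes precedence over values you set directly.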
The math you have to do is: velocity*time=distance
So, to keep the speed constant you have to change the animation's duration:
double pixelsPerSecond = 5;
animation.Duration = TimeSpan.FromSeconds(distance/pixelsPerSecond);
BTW, I don't think animations are the best solution for moving your robots.
