I'm creating an FPS game in Unity and chose not to use the included First Person Controller script, to reduce complexity. The camera object is constantly set to the player's position and can rotate. I have created a script containing the following code for movement:
float v = Input.GetAxis("Vertical");
float h = Input.GetAxis("Horizontal");
Vector2 rotation = new Vector2(
    Mathf.Cos(cam.transform.eulerAngles.y * deg2rad),
    Mathf.Sin(cam.transform.eulerAngles.y * deg2rad)
);
rb.velocity = new Vector3(
    (-rotation.x * h * speed) + (rotation.y * v * speed),
    rb.velocity.y,
    (-rotation.y * h * speed) + (-rotation.x * v * speed)
);
When I test the game, movement is correct along the x-axis in both directions, but becomes unusual once the player's Y rotation is no longer aligned with the x-axis (for example, moving the player backwards will actually move them forwards, and vice versa).
I'm open to an alternative to trig functions for the movement; I have already tried transform.forward and transform.right, but they didn't entirely work either.
First thing I'd say: unless you're specifically trying to learn trigonometry and geometry, you should not reinvent the wheel. You've stated you want to create an FPS game, so you should really leverage the scripts and prefabs others have created to enable the creation of your game.
If you don't want to use the built-in FPS Controller script, I'd recommend a free asset package named '3rdPerson+Fly'. It appears a bit complex at first, but you'll learn about states and stackable behaviours/modes, which will get you to an outcome much faster than building from scratch. You'll also get the flexibility and openness that comes with a non-built-in package: you can peek at the inner workings if desired, or simply build on top of them. Don't fall for NIH (Not Invented Here) syndrome; stand on the shoulders of giants instead :)
Good luck!
The issue you're having is likely caused by the fact that sin and cos can't determine by themselves which "quadrant" they're in. For example, a 30-degree angle gives the same result as a 150-degree angle as far as sin is concerned.
This video is a fast and good explanation of the issue:
https://www.youtube.com/watch?v=736Wkg8uxA8
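Since the question mentions that transform.forward and transform.right "didn't work entirely": the usual quadrant-safe recipe is to build the flattened forward direction as (sin yaw, cos yaw) and the right direction as (cos yaw, -sin yaw) in the XZ plane, then combine them with the input axes. A minimal sketch of that math (the MoveMath name and tuple return are mine; in Unity you would feed the result into rb.velocity, keeping rb.velocity.y for gravity), assuming Unity's convention that yaw 0 faces +Z:

```csharp
using System;

static class MoveMath
{
    // Horizontal velocity from camera yaw (radians) and input axes.
    // forward = (sin yaw, cos yaw), right = (cos yaw, -sin yaw): consistent
    // in every quadrant, unlike hand-assembled sin/cos sign combinations.
    public static (float x, float z) Velocity(float yaw, float h, float v, float speed)
    {
        float fx = (float)Math.Sin(yaw), fz = (float)Math.Cos(yaw); // forward
        float rx = fz, rz = -fx;                                    // right
        return (speed * (rx * h + fx * v), speed * (rz * h + fz * v));
    }
}
```

With yaw = 180 degrees and v = -1 (walking backwards), this returns (0, +speed), i.e. the player moves towards +Z, which is the behaviour the question is missing.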
I'm developing a 3D space game where the camera is in a constant 2D (top-down) state. I am able to fire a projectile of speed (s) at a target moving at a given velocity and hit it every time. Great! But what if that target has an angular velocity around a parent? I noticed that if the target has a parent object that is rotating, my projection isn't correct, since it doesn't account for the angular velocity.
My initial code was built around the assumption that:
Position_target + Velocity_target * t = Position_shooter + Velocity_shooter * t + Bulletspeed * t
I assume that the shooter is stationary (or potentially moving) and needs to fire a bullet with a constant magnitude.
I simplify the above to this
Delta_Position = Position_target - Position_shooter
Delta_Velocity = Velocity_target - Velocity_shooter
|Delta_Position + Delta_Velocity * t| = BulletSpeed * t
Squaring both sides, I arrive at a quadratic equation where I can solve for t given the discriminant's outcomes or zeros. This works perfectly: I return a t value, project the target's position and current velocity out to that t, and then my turret scripts rotate at a given angular velocity towards that point. If the turret reports it is looking at that point within 1% on all axes, it fires the bullet at speed (s), and it's a 100% hit if the target doesn't alter its course or velocity.
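For readers wanting the closed form: squaring |Delta_Position + Delta_Velocity * t| = s * t gives the quadratic (dv·dv − s²)t² + 2(dp·dv)t + dp·dp = 0. A hedged sketch of that solve (the Lead class and tuple types are mine, standing in for Vector3):

```csharp
using System;

static class Lead
{
    // Solve |dp + dv t| = s t for the smallest positive t, by squaring:
    // (dv·dv - s^2) t^2 + 2 (dp·dv) t + dp·dp = 0.
    public static float TimeToHit(
        (float x, float y, float z) dp, (float x, float y, float z) dv, float s)
    {
        float a = dv.x * dv.x + dv.y * dv.y + dv.z * dv.z - s * s;
        float b = 2f * (dp.x * dv.x + dp.y * dv.y + dp.z * dv.z);
        float c = dp.x * dp.x + dp.y * dp.y + dp.z * dp.z;
        if (Math.Abs(a) < 1e-6f)                      // near-linear case
            return b < 0f ? -c / b : float.NaN;
        float disc = b * b - 4f * a * c;
        if (disc < 0f) return float.NaN;              // target can't be hit
        float sq = (float)Math.Sqrt(disc);
        float t1 = (-b - sq) / (2f * a), t2 = (-b + sq) / (2f * a);
        float t = Math.Min(t1, t2);
        if (t < 0f) t = Math.Max(t1, t2);             // earliest non-negative root
        return t >= 0f ? t : float.NaN;
    }
}
```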
I then started adding components to my ships/asteroids that are children of a parent object, like a turret attached to a ship, where the turret itself is a target. If the ship is rotating around an axis (for example, the Y axis) and the turret is not at x = 0 and z = 0, my projection no longer works. I thought that using r * sin(theta + omega * t) as the angular velocity component for the X position and r * cos(theta + omega * t) for the Z position could work. Theta is the current rotation (with respect to world coordinates) and omega is the Euler-angle rotation rate around the Y axis.
I quickly realized this only works for rotation around the Y axis, and I can't put the sin into a quadratic equation because I can't extract the t from it, so I can't really project this appropriately. I tried using hyperbolics, but it was the same situation. I can pick an arbitrary t, say t = 2, and calculate where the object will be in 2 seconds, but I am struggling to find a way to combine that with the bullet-speed projection.
Position_targetparent + Velocity_targetparent * t + [ANGULAR VELOCITY COMPONENT] = Position_shooter + Velocity_shooter * t + Bulletspeed * t
Delta_Position_X + Delta_Velocity_X * t + S * t = r * sin (theta + Omegay * t)
Delta_Position_Z + Delta_Velocity_Z * t + S * t = r * cos (theta + Omegay * t)
From here I have been spinning my wheels endlessly trying to figure out a workable solution. I am using eulerAngle.y for omega, which works well. Ultimately I just need the instantaneous point in space I should fire at, which is a product of the speed of the bullet and the distance of the projection; my turrets' aiming scripts will take care of the rest.
I have been looking at a spherical coordinate system based around the parent's position (the center of the rotation):
Vector3 deltaPosition = target.transform.position - target.transform.root.position;
float r = deltaPosition.magnitude;
float theta = Mathf.Acos(deltaPosition.z / r);
float phi = Mathf.Atan2(deltaPosition.y, deltaPosition.x);
float xPos = r * Mathf.Sin(theta) * Mathf.Cos(phi);
float yPos = r * Mathf.Sin(theta) * Mathf.Sin(phi);
float zPos = r * Mathf.Cos(theta);
Vector3 currentRotation = transform.root.rotation.eulerAngles * Mathf.Deg2Rad;
Vector3 angularVelocity = transform.root.GetComponent<Rigidbody>().angularVelocity;
I can calculate the position of the object given these angles ... but I am struggling to turn this into something I can use with the omega * t (angular velocity) approach.
I am wondering if there is a more elegant approach to this problem, or if someone can point me in the right direction of a formula to help me think this through? I am not the best with Quaternions and EulerAngles but I am learning them slowly. Maybe there's something clever I can do with those?
Although the math is likely still tough, I suspect you can simplify it substantially by having the "target" calculate its future position in local space, then pass that location to its parent, which converts it into its own local space, and so on up the hierarchy until you reach world space. Once you have the future position in world space, you can aim your turret at it.
For example, an orbiting ship should be able to calculate its future orbit easily; that's just the equation of an ellipse. It can then send that local position to its parent (planet), which is presumably also orbiting, to calculate the position relative to itself. The planet then sends this local position to its own parent (star), and so on, until you get to world space.
You can simplify the math further by making the bullet's travel time constant (and its speed flexible), so that figuring out the future position at a specific time becomes easy. Depending on the scale of your game, the actual difference in speed might not be noticeable.
Another idea: instead of doing all the calculations by brute force, you could "simulate" the target object forward in time. Make sure all the code that affects its position can be run separately from your actual update loop. Then simply advance the clock way ahead and read its future position without actually moving it, go back to the present, and fire the gun at that future position.
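For the single-level case in the question (a turret mounted at an offset on a ship spinning about the world Y axis), the "compute the future position up the hierarchy" step reduces to rotating the local offset by omega * t and adding the parent's translation. A sketch, with tuples standing in for Vector3 and Unity's left-handed positive-Y rotation convention assumed:

```csharp
using System;

static class FuturePos
{
    // World position at time t of a child mounted at localOffset on a parent
    // that translates at parentVel and spins about the world Y axis at omega rad/s.
    public static (float x, float y, float z) At(
        (float x, float y, float z) parentPos,
        (float x, float y, float z) parentVel,
        (float x, float y, float z) localOffset,
        float omega, float t)
    {
        float a = omega * t;
        float c = (float)Math.Cos(a), s = (float)Math.Sin(a);
        // Rotate the local offset about Y (left-handed), then add the translation.
        float ox = c * localOffset.x + s * localOffset.z;
        float oz = -s * localOffset.x + c * localOffset.z;
        return (parentPos.x + parentVel.x * t + ox,
                parentPos.y + parentVel.y * t + localOffset.y,
                parentPos.z + parentVel.z * t + oz);
    }
}
```

Deeper hierarchies just apply this transform repeatedly, child to parent, until world space is reached.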
I suggest solving this problem approximately.
If you can describe the position of your target by a function over time, f(t), then you can approximate the impact time using a divide-and-conquer strategy like this:
Algorithm (pseudo code):
Let f(t:float):Vector3 be a function that calculates the position of the target at time t
Let g(p:Vector3):float be a function that calculates how long the bullet would need to reach p
float begin = 0                  // Lower bound of bullet travel time to hit the target
float end = g(target.position)   // Upper bound

// Find an upper bound so that the bullet can hit the target between begin and end time
while g(f(end)) > end:
    begin = end
    end = end * 2  // Exponential growth for fast convergence
    // Add break condition in case the target can't be hit (faster than bullet)
end

// Narrow down the possible aim target, doubling the precision in every step
for i = 1...[precision]:
    float center = begin + (end - begin) / 2
    float travelTime = g(f(center))
    if travelTime > center:  // Bullet can't reach target yet
        begin = center
    else:                    // Bullet overtook target
        end = center
    end
end

float finalTravelTime = begin + (end - begin) / 2
Vector3 aimPosition = f(finalTravelTime)  // You should aim here...
You need to experiment with the value for [precision]. It should be as small as possible, but large enough for the bullet to always hit the target.
You can also use another break condition, like restricting the absolute error (distance of the bullet to the target at the finalTravelTime).
In case that the target can travel faster than the bullet, you need to add a break condition on the upper bounds loop, otherwise it can become an infinite loop.
Why this is useful:
Instead of calculating a complex equality function to determine the time of impact, you can approximate it with a rather simple position function and this method.
This algorithm is independent of the actual position function, thus works with various enemy movements, as long as the future position can be calculated.
Downsides:
This method evaluates f(t) many times, which can be CPU-intensive if f(t) is complex.
Also, it is only an approximation, and the precision of the result gets worse the longer the travel time is.
Note:
I wrote this algorithm off the top of my head.
I don't guarantee the correctness of the pseudocode, but the algorithm should work.
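The pseudocode above translates almost line-for-line into C#. A hedged sketch (the delegate-based f and g, the tuple positions, and the maxTime cut-off are my additions):

```csharp
using System;

static class Intercept
{
    // f: target position at time t; g: bullet travel time to a position.
    public static float SolveTravelTime(
        Func<float, (float x, float y, float z)> f,
        Func<(float x, float y, float z), float> g,
        int precision = 40, float maxTime = 60f)
    {
        float begin = 0f;
        float end = Math.Max(g(f(0f)), 1e-3f);
        // Grow the upper bound until the bullet can reach the target in time.
        while (g(f(end)) > end)
        {
            begin = end;
            end *= 2f;
            if (end > maxTime) return float.NaN; // target may outrun the bullet
        }
        // Bisect: halve the bracket [begin, end] each step.
        for (int i = 0; i < precision; i++)
        {
            float center = begin + (end - begin) / 2f;
            if (g(f(center)) > center) begin = center; // bullet too slow: need more time
            else end = center;                         // bullet overtook: need less time
        }
        return begin + (end - begin) / 2f;
    }
}
```

Aiming then means firing at f(SolveTravelTime(f, g)); the returned NaN signals an unhittable target.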
I'm making a game in Unity where the main camera is controlled by the phone's orientation. The problem I have is that the gyroscope's data is very noisy and makes the camera rotate in a very jittery way. I tried different phones to make sure the hardware wasn't the problem (Galaxy S6 and Sony Xperia T2).
I've tried the following but none seems to work:
- Slerp between the current rotation and the new attitude (too much jitter no matter what I multiply Time.deltaTime by)
// called at each update
transform.rotation = Quaternion.Slerp(transform.rotation,
    new Quaternion(-Input.gyro.attitude.x, -Input.gyro.attitude.y,
                   Input.gyro.attitude.z, Input.gyro.attitude.w),
    60 * Time.deltaTime);
- Average the last gyro samples (either Euler-angle average or quaternion average; both cases still produce too much jitter no matter how many samples I track)
// called at each update
if (vectQueue.Count >= 5)
{
    vectQueue.Enqueue(Input.gyro.attitude.eulerAngles);
    vectQueue.Dequeue();
    avgr = Vector3.zero; // reset the accumulator each frame
    foreach (Vector3 vect in vectQueue)
    {
        avgr = avgr + vect;
    }
    // note: naive Euler averaging misbehaves near the 0/360 wrap-around
    avgr = avgr / vectQueue.Count;
    transform.rotation = Quaternion.Slerp(transform.rotation, Quaternion.Euler(avgr), Time.deltaTime * 100);
}
else
{
    vectQueue.Enqueue(Input.gyro.attitude.eulerAngles);
}
- Round the gyroscope data (best solution so far to prevent jitter, but obviously not smooth at all)
- Apply a high/low-pass filter (doesn't seem to do anything)
public float alpha = 0.5f;
private bool initialized; // Vector3 is a struct, so a null check never fires

// called at each update
v1 = Input.gyro.attitude.eulerAngles;
if (!initialized)
{
    initialized = true;
    v2 = v1;
    v3 = Input.gyro.attitude.eulerAngles;
}
else
{
    v3 = (1 - alpha) * v3 + (1 - alpha) * (v1 - v2);
    v2 = v1;
    transform.Rotate(v3);
}
- Copy-paste code from people like this; but the jitter is way too strong in all cases.
So now I'm at the point where I guess I'll try learning Kalman filters and implement one in Unity, but obviously I'd prefer a black-box solution to save time.
Always use noise cancellation with analogue input. You can do so by calculating the difference between the gyro values in the current frame and those in the previous frame, and only rotating your camera when the difference is greater than a desired threshold (0.002 or 0.03 might be good).
This should solve your jittering problem. Hope it works for you.
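The threshold described above is effectively a dead zone; pairing it with an exponential low-pass filter is a common next step. A sketch of the per-channel math (the JitterFilter class and the alpha/deadZone values are placeholders to tune; in Unity you would feed it each frame's reading, or apply the same idea with Quaternion.Slerp):

```csharp
using System;

sealed class JitterFilter
{
    readonly float alpha;    // smoothing factor in (0, 1]; lower = smoother
    readonly float deadZone; // ignore changes smaller than this
    float value;
    bool initialized;

    public JitterFilter(float alpha, float deadZone)
    {
        this.alpha = alpha;
        this.deadZone = deadZone;
    }

    public float Step(float sample)
    {
        if (!initialized) { value = sample; initialized = true; return value; }
        if (Math.Abs(sample - value) < deadZone) return value; // noise gate
        value += alpha * (sample - value);                     // exponential low-pass
        return value;
    }
}
```

The dead zone kills the small-amplitude sensor noise while the low-pass smooths the larger, real movements.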
Found the problem: all of the approaches I mentioned earlier are valid under normal circumstances (use a combination for the best effect). What was causing the insane jitter was the position of the camera. It had very large numbers as coordinates, and I guess this was messing up the engine (likely floating-point precision loss at large coordinates).
Hello. Although I think you have already found a solution, I had an idea about how to smooth it using Quaternion.Lerp:
// rot is used to face forward
Quaternion rot = new Quaternion(0, 0, 1, 0);
transform.rotation = Quaternion.Lerp(transform.rotation * rot, gyro * rot, 10 * Time.deltaTime);
I have code that displays a mirror as a plane with a texture drawn from a RenderTarget2D each frame.
This all works fine, but I still think something is wrong with the way the reflection behaves (as if the mirror isn't looking exactly where it's supposed to be looking).
There's a screenshot of the mirror that doesn't really look bad; the distortion mainly occurs when the player gets close to the mirror.
Here is my code for creating the mirror texture; notice that the mirror is rotated by 15 degrees on the X axis.
RenderTarget2D rt;
...
rt = new RenderTarget2D(device, (int)(graphics.PreferredBackBufferWidth * 1.5), (int)(graphics.PreferredBackBufferHeight * 1.5));
...
device.SetRenderTarget(rt);
device.Clear(Color.Black);
Vector3 camerafinalPosition = camera.position;
if (camera.isCrouched) camerafinalPosition.Y -= (camera.characterOffset.Y * 6 / 20);
Vector3 mirrorPos = new Vector3((room.boundingBoxes[8].Min.X + room.boundingBoxes[8].Max.X) / 2, (room.boundingBoxes[8].Min.Y + room.boundingBoxes[8].Max.Y) / 2, (room.boundingBoxes[8].Min.Z + room.boundingBoxes[8].Max.Z) / 2);
Vector3 cameraFinalTarget = new Vector3((2 * mirrorPos.X) - camera.position.X, (2 * mirrorPos.Y) - camerafinalPosition.Y, camera.position.Z);
cameraFinalTarget = Vector3.Transform(cameraFinalTarget - mirrorPos, Matrix.CreateRotationX(MathHelper.ToRadians(-15))) + mirrorPos;
Matrix mirrorLookAt = Matrix.CreateLookAt(mirrorPos, cameraFinalTarget, Vector3.Up);
room.DrawRoom(mirrorLookAt, camera.projection, camera.position, camera.characterOffset, camera.isCrouched);
device.SetRenderTarget(null);
And then the mirror is being drawn using the rt texture.
I suppose something isn't completely right with the reflection math or the way I create the LookAt matrix. Thanks for the help.
I didn't use XNA, but I did some Managed C# DirectX a long time ago, so I don't remember much. Still, are you sure mirrorLookAt should point at cameraFinalTarget? Matrix.CreateLookAt builds a matrix from from/to/up vectors, and 'to' in your example is the point where the mirror aims. You need to calculate the vector from the camera position to the mirror position and then reflect it, and I don't see that in your code.
Unless your room.DrawRoom method calculates another mirrorLookAt matrix internally, I'm pretty sure your mirror target vector is the problem.
edit: Your reflection vector would be
Vector3 vectorToMirror = mirrorPos - camera.position;
Vector3 mirrorReflectionVector = vectorToMirror - 2 * Vector3.Dot(vectorToMirror, mirrorNormal) * mirrorNormal;
Also, I don't remember whether mirrorReflectionVector should be inverted (whether it points towards the mirror or away from it); just check both ways and you'll see. Then, since CreateLookAt expects a target point rather than a direction, you create your mirrorLookAt from
Matrix mirrorLookAt = Matrix.CreateLookAt(mirrorPos, mirrorPos + mirrorReflectionVector, Vector3.Up);
Though I don't know where the normal of your mirror is. Also, I've noticed one line I can't really understand:
if (camera.isCrouched) camerafinalPosition.Y -= (camera.characterOffset.Y * 6 / 20);
What's the point of that? If your camera is crouched, shouldn't its Y value be lowered already? I don't know how you render your main camera, but look at the mirror's rendering position: it's way lower than your main eye. I also don't know how you use your isCrouched member, but if you want to lower the camera, write yourself a Crouch() method (or similar) that lowers the Y value a little. Later on you call your DrawRoom method and pass camera.position as a parameter, yet it's not "lowered" by the crouch value; it's just the "pure" camera.position. That may be the reason it's not rendering properly. Let me know if any of that helped.
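For reference, the reflection used in this answer is the standard formula r = d − 2(d·n)n for a unit normal n; XNA also ships this as Vector3.Reflect. A standalone sketch with tuples standing in for Vector3:

```csharp
using System;

static class Reflect
{
    // Reflect incident direction d about unit normal n: r = d - 2 (d·n) n.
    public static (float x, float y, float z) About(
        (float x, float y, float z) d, (float x, float y, float z) n)
    {
        float k = 2f * (d.x * n.x + d.y * n.y + d.z * n.z);
        return (d.x - k * n.x, d.y - k * n.y, d.z - k * n.z);
    }
}
```

For example, reflecting a downward-slanting direction (1, -1, 0) about the up normal (0, 1, 0) yields (1, 1, 0).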
Let's say I have two points, A and B, in my 3D space.
Now I want to calculate points starting from A, heading towards B, and continue the calculation past B along the same line.
How do I do it?
What I am actually working on is bullets fired from a plane.
If I understood your question correctly, you should first get the direction vector by calculating
dir = B - A
and then you can continue the travel by
C = B + dir
Otherwise, please clarify your question, for example what you mean by "calculate points from A to B", because mathematically there is an infinite number of points between A and B.
Edit:
If you want to trace a bullet path you have two options:
1) You implement it as a hitscan weapon, as done in many FPSes: any bullet fired immediately hits where it was aimed. This is best achieved with a raytrace via Ray.Intersects, and is probably the simplest and least computationally intensive way to do it. Of course, it's also not terribly realistic.
2) You make the bullet a physical entity in your world and move it during your Update() call via a "normal" combination of current position vector and movement/velocity vector, doing a manual collision detection with any hittable surfaces and objects. The advantage of this is that you can implement proper physics like travel time, drop, friction, etc., however it is also orders of magnitude more difficult to implement in a robust way. If you want to go this way, I suggest using either a pre-made physics API (like Bullet, but I'm not sure if there's a version for XNA) or at least doing extensive Google research on the subject, so you can avoid the many pitfalls of collision detection with fast-moving objects.
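Option 2 amounts to normalizing the A→B direction once and advancing the bullet by speed * dt each frame; the line naturally continues past B because the direction never changes. A hedged sketch (the Bullet class and tuple types are mine):

```csharp
using System;

static class Bullet
{
    // Advance a position along the normalized A->B direction by speed * dt.
    public static (float x, float y, float z) Step(
        (float x, float y, float z) pos,
        (float x, float y, float z) a,
        (float x, float y, float z) b,
        float speed, float dt)
    {
        float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        float len = (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
        dx /= len; dy /= len; dz /= len; // unit direction, so speed is in units/second
        return (pos.x + dx * speed * dt,
                pos.y + dy * speed * dt,
                pos.z + dz * speed * dt);
    }
}
```

Because the direction is normalized, the bullet covers exactly speed units per second on all three axes combined, which is the usual fix when one axis appears not to move.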
It's an attempt to implement a 2D technique in 3D. I calculate the following once:
position = start;
dx = destination.X - start.X;
dy = destination.Y - start.Y;
dz = destination.Z - start.Z;
distance = (float)Math.Sqrt( dx * dx + dy * dy + dz * dz );
scale = 2f / distance;
Then I go on calculating:
position.X += dx * scale;
position.Y += dy * scale;
position.Z += dz * scale;
but the result still doesn't work in 3D space; I only get movement in two dimensions, and the third axis never changes.
For a project, my team and I have been trying to track a Wiimote in 3D space using the built-in accelerometer and the WiiMotion Plus gyroscope.
We've been able to track the rotation and position using an ODE solver (found at http://www.alglib.net/), but we've run into a problem removing the gravity component from the accelerometer.
We looked at Accelerometer gravity components which had the formula (implemented in C# / XNA)
private Vector3 RemoveGravityFactor(Vector3 accel)
{
    float g = -1f;
    float pitchAngle = Rotation.Z;
    float rollAngle = Rotation.Y;
    float yawAngle = Rotation.X;

    float x = (float)(g * Math.Sin(pitchAngle));
    float y = (float)(-g * Math.Cos(pitchAngle) * Math.Sin(rollAngle));
    float z = (float)(-g * Math.Cos(pitchAngle) * Math.Cos(rollAngle));

    Vector3 offset = new Vector3(x, y, z);
    accel = accel - offset;
    return accel;
}
But it doesn't work at all. For reference, the acceleration comes straight from the accelerometer, and the rotation is measured in radians after it has been worked through the ODE.
Also, we're having trouble understanding how this formula works. Since our tracking takes all dimensions into account, why is yaw not taken into account?
Thanks in advance for any advice or help that is offered.
EDIT:
After discussing it with my teammates and boss, we've found that this formula would actually work if we were using X, Y, and Z correctly. We've hit another stumbling block, though.
The problem is that the Wiimote library we're using returns relative rotational values based on the gyroscope movement. In other words, if the buttons are facing up, rotating the Wiimote left and right is yaw; and if the buttons are facing towards you, yaw is still the same axis, when it SHOULD be the rotation of the entire Wiimote.
We've found that Euler angles may be our answer, but we're unsure how to use them appropriately. If there is any input on this new development or any other suggestions please give them.
I'd bet that your accelerometer was not calibrated in zero gravity, so removing the effect of gravity will be difficult, at the very least.
First, I'd suggest not storing the rotation as individual components (gimbal lock); a matrix would work better. Calibrate by holding the Wiimote still and measuring (it will read 1 g downward). Then, for each rotation, multiply the rotation matrix by it. That way you can tell which way is up and can subtract a 1-g-down vector from the vector representing the acceleration. I know that doesn't make a lot of sense, but I'm in a bit of a rush; add comments if you have questions.
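The matrix idea above can be sketched as follows: keep a sensor-to-world rotation matrix R; at rest an accelerometer typically reads +1 g "up", which expressed in the sensor frame is Rᵀ · (0, g, 0), i.e. g times the middle row of R. Subtract that from the raw reading. This is a hedged illustration (the Gravity class, the row-major float[,] layout, and the at-rest sign convention are my assumptions; verify the sign with your Wiimote held still):

```csharp
using System;

static class Gravity
{
    // accel: raw reading in the sensor frame.
    // r: 3x3 sensor-to-world rotation matrix, row-major.
    // With identity r the expected at-rest reading is (0, g, 0), so the
    // gravity term in the sensor frame is g times the middle row of r.
    public static (float x, float y, float z) Remove(
        (float x, float y, float z) accel, float[,] r, float g = 1f)
    {
        float gx = g * r[1, 0], gy = g * r[1, 1], gz = g * r[1, 2];
        return (accel.x - gx, accel.y - gy, accel.z - gz);
    }
}
```

Because the whole rotation lives in one matrix, yaw is handled implicitly and the gimbal-lock issue with separate pitch/roll/yaw angles goes away.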