Properly getting linear depth with a non-GLM projection matrix - c#

I'm porting over a C++ volumetric clustered rendering engine to C#, and the code is here: https://github.com/Kermalis/HybridRenderingEngine-CS
Now, I'm trying to get a linear depth from the current fragment, because it is required for volumetric clustered rendering.
If you're using GLM to create the projection matrix, this is easy with the following equation in your shader:
float ndc = 2.0 * depthSample - 1.0;
float linear = 2.0 * zNear * zFar / (zFar + zNear - ndc * (zFar - zNear));
return linear;
This is because GLM projection matrices are created this way:
float num = (float)Math.Tan(fovy / 2f);
mat4 result = mat4.identity();
result[0, 0] = 1f / (aspect * num);
result[1, 1] = 1f / num;
result[2, 2] = (-(zFar + zNear)) / (zFar - zNear);
result[2, 3] = -1f;
result[3, 2] = (-2f * zFar * zNear) / (zFar - zNear);
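As a sanity check (a Python sketch mirroring the shader math, not engine code), projecting an eye-space distance through these GLM matrix entries and feeding the result to the shader equation recovers the original distance:

```python
def glm_ndc_depth(d, z_near, z_far):
    """NDC z in [-1, 1] for a point at eye-space distance d (camera looks down -z)."""
    z_eye = -d
    z_clip = (-(z_far + z_near) / (z_far - z_near)) * z_eye \
             - 2.0 * z_far * z_near / (z_far - z_near)
    w_clip = -z_eye  # comes from the -1 in result[2, 3]
    return z_clip / w_clip

def linearize_glm(ndc, z_near, z_far):
    """The shader equation from above (ndc already remapped to [-1, 1])."""
    return 2.0 * z_near * z_far / (z_far + z_near - ndc * (z_far - z_near))

for d in (1.0, 150.0, 300.0):
    assert abs(linearize_glm(glm_ndc_depth(d, 1.0, 300.0), 1.0, 300.0) - d) < 1e-6
```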
However, with C# Matrix4x4, the projection matrix is created differently:
float num = (float)Math.Tan(fieldOfView * 0.5f);
float yScale = 1f / num;
float xScale = yScale / aspectRatio;
result.M11 = xScale;
result.M22 = yScale;
result.M33 = zFar / (zNear - zFar);
result.M34 = -1f;
result.M43 = zNear * zFar / (zNear - zFar);
M33 and M43 are different from GLM's. There are more differences if you create an "off center" projection matrix as well, but I'm not sure whether those matter here.
The issue is that when you use that same shader code with the Matrix4x4 matrices, you get incorrect values.
If you do the math, you can see why (the original post showed hand-written derivations; C1 denotes M33 and C2 denotes M43). With 1 Near, 300 Far, Matrix4x4 gives C1 = 300 / (1 - 300) ≈ -1.0033 and C2 = 1 * 300 / (1 - 300) ≈ -1.0033, whereas GLM gives -(300 + 1) / (300 - 1) ≈ -1.0067 and -2 * 300 * 1 / (300 - 1) ≈ -2.0067. With 0.1 Near, 1500 Far, Matrix4x4 gives C1 ≈ -1.0001 and C2 ≈ -0.1000, while GLM gives ≈ -1.0001 and ≈ -0.2000.
You can see that dividing C1 by C2 yields different ratios for the two kinds of projection matrix. So the shader code will not work with Matrix4x4: it no longer maps depth so that 0 = near and 1 = far, because the matrix entries differ and distribute a different depth curve.
I did find that multiplying by 0.5 fixes it for some near/far values, which matches the roughly factor-of-two difference between the two sets of coefficients.
What I want to know is, how would I get depth properly with these matrices in the shader, without having to multiply each fragment by the inverse projection matrix? That's being done in the compute shader (which is why I need the same linear result), but that only gets run one time, not every frame, so I don't care about performance there.
How is the first equation derived? If that becomes clear to me (trust me, I tried to make sense of it), then another one can be created for Matrix4x4, or any other custom one that is passed in. I know an inverse projection matrix will do the trick, but it'll be awfully slow. I can't think of any other robust solution that will cover any matrix. I just don't understand how that shader equation is created or how it applies the inverse.
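For what it's worth, the same algebra can be carried out for the Matrix4x4 entries quoted above. Writing depth = (M33 * z_eye + M43) / (-z_eye), with z_eye = -d for a point at distance d, and solving for d suggests linear = near * far / (far - depth * (far - near)), with no [-1, 1] remap since the depth sample is already in [0, 1]. A Python sketch (an illustration of the derivation, not the engine's shader) to verify:

```python
def m4x4_depth(d, n, f):
    """[0, 1] depth-buffer value for eye-space distance d, using the
    System.Numerics-style entries: M33 = f/(n-f), M43 = n*f/(n-f), M34 = -1."""
    z_clip = (f / (n - f)) * (-d) + n * f / (n - f)
    w_clip = d  # M34 = -1 negates z_eye
    return z_clip / w_clip

def linearize_m4x4(depth, n, f):
    """Candidate shader replacement for Matrix4x4 projections."""
    return n * f / (f - depth * (f - n))

for n, f in ((1.0, 300.0), (0.1, 1500.0)):
    for d in (n, (n + f) / 2, f):
        assert abs(linearize_m4x4(m4x4_depth(d, n, f), n, f) - d) < 1e-4
```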

Related

Why does my angle-between-2-vectors function return NaN even though I follow the formula

I'm making a function that calculates the angle between 2 given vectors for my Unity game using the dot product formula:
a · b = |a| * |b| * cos(angle)
so I figured the angle would equal:
angle = acos((a · b) / (|a| * |b|))
Anyway here's my code:
float rotateAngle(Vector2 a, Vector2 b)
{
    float dot = a.x * b.x + a.y * b.y;
    float magA = Mathf.Sqrt(a.x * a.x + a.y * a.y);
    float magB = Mathf.Sqrt(b.x * b.x + b.y * b.y);
    return Mathf.Acos(dot / (magA * magB)) * (180 / Mathf.PI);
}
But when I played it, the console showed NaN. I've tried reviewing the code and the formula but came up empty-handed.
Can someone help me? Thank you in advance!!
float.NaN is the result of mathematical operations that are undefined (for real numbers), such as 0 / 0 (note from the docs that x / 0 where x != 0 rather returns positive or negative infinity) or the square root of a negative value. As soon as one operand in an operation is already NaN, the entire operation again returns NaN.
The second case (square root of a negative value) cannot happen here since you are using squared values, so most probably your vectors have a magnitude of 0.
If you look at the Vector2 source code you will find their implementation of Vector2.Angle or Vector2.SignedAngle (which you should rather use btw as they are tested and way more efficient).
public static float Angle(Vector2 from, Vector2 to)
{
// sqrt(a) * sqrt(b) = sqrt(a * b) -- valid for real numbers
float denominator = (float)Math.Sqrt(from.sqrMagnitude * to.sqrMagnitude);
if (denominator < kEpsilonNormalSqrt)
return 0F;
float dot = Mathf.Clamp(Dot(from, to) / denominator, -1F, 1F);
return (float)Math.Acos(dot) * Mathf.Rad2Deg;
}
// Returns the signed angle in degrees between /from/ and /to/. Always returns the smallest possible angle
public static float SignedAngle(Vector2 from, Vector2 to)
{
float unsigned_angle = Angle(from, to);
float sign = Mathf.Sign(from.x * to.y - from.y * to.x);
return unsigned_angle * sign;
}
There you will find that the first thing they check is
float denominator = (float)Math.Sqrt(from.sqrMagnitude * to.sqrMagnitude);
if (denominator < kEpsilonNormalSqrt)
return 0F;
which basically makes exactly sure that both given vectors have a "big enough" magnitude, in particular one that is not 0 ;)
Long story short: Don't reinvent the wheel and rather use already built-in Vector2.Angle or Vector2.SignedAngle
NaN are typically the result of invalid mathematical operations on floating point numbers. A common source is division by zero, so my guess would be that the vector is 0,0.
I would also recommend using the built in functions for computing the normalization, Length/Magnitude, Dot etc. that will make the code much easier to read, and the compiler should be fairly good at optimizing that kind of code. If you need to do any additional optimization, only do so after you have done some measurements.
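The zero-magnitude failure mode and the epsilon guard can be sketched outside Unity (a minimal illustration, not Unity's actual implementation):

```python
import math

def angle_deg(a, b, eps=1e-15):
    """Angle between 2D vectors in degrees; returns 0 for degenerate input
    instead of NaN (mirroring the epsilon guard in Unity's Vector2.Angle)."""
    denom = math.sqrt((a[0]**2 + a[1]**2) * (b[0]**2 + b[1]**2))
    if denom < eps:
        return 0.0  # a zero-length vector would otherwise cause 0 / 0 -> NaN
    # Clamp: rounding can push the ratio slightly outside [-1, 1],
    # which would also make acos return NaN.
    dot = max(-1.0, min(1.0, (a[0] * b[0] + a[1] * b[1]) / denom))
    return math.degrees(math.acos(dot))

print(angle_deg((1, 0), (0, 1)))   # 90.0
print(angle_deg((0, 0), (1, 0)))   # 0.0 -- the unguarded version yields NaN
```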

deciding whether cw or acw rotation from gesture, 3 ordered sample points

The user starts from A and moves to C via B (sample points on screen) in Unity3D. At this point I have calculated the angle (theta), which in both images is almost 45 degrees. The problem is that I want to conclude that in the left image the user intended counter-clockwise motion, and in the right image clockwise rotation.
Ahh, it's really more complicated than I imagined; please suggest.
Currently the Unity code looks like:
protected void calcAngles(){
v1 = go2.position - go1.position;
v2 = go3.position - go1.position;
v3 = go3.position - go2.position;
float angle = Vector3.Angle (v2, v1);
Debug.Log ("angle:" + angle.ToString ());
//float atan = Mathf.Atan2 (Vector3.Dot (v3, Vector3.Cross (v1, v2)), Vector3.Dot (v1, v2)) * Mathf.Rad2Deg;
//Debug.Log ("atan2:" + atan.ToString ());
}
Any ideas? Pseudocode? Huge thanks in advance. Cheers, lala
It is incredibly difficult to do this.
This may help...
private float bestKnownXYCWAngleFromTo(Vector3 a, Vector3 b)
// the best technology we have so far
{
a.z = 0f;
b.z = 0f;
float __angleCCW = Mathf.Atan2(b.y, b.x) - Mathf.Atan2(a.y, a.x);
if ( __angleCCW < 0f ) __angleCCW += (2.0f * Mathf.PI);
float __angleCWviaatan = (2.0f * Mathf.PI) - __angleCCW;
__angleCWviaatan = __angleCWviaatan * (Mathf.Rad2Deg);
if ( __angleCWviaatan >= 360.0 ) __angleCWviaatan = __angleCWviaatan-360.0f;
return __angleCWviaatan;
}
note that this is a 2D concept, modify as you need.
note that obviously "a" is just your (b-a) and "b" is just your (c-a)
Please note that true gesture recognition is a very advanced research field. I encourage you to get one of the solutions out there,
https://www.assetstore.unity3d.com/en/#!/content/14458
https://www.assetstore.unity3d.com/en/#!/content/21660
which represent literally dozens of engineer-years of effort. PDollar is great (that implementation is even free on the asset store!)
Oops, my answer from before was completely wrong. It seems that Vector3.Angle always gives an unsigned angle. But we need the sign to understand whether it is rotating clockwise or counter-clockwise.
Now, this piece of code will give you a SIGNED angle between your two vectors. The Normal argument should be the normal to the plane you want to consider.
float SignedAngle(Vector3 from, Vector3 to, Vector3 normal){
float angle = Vector3.Angle(from,to);
float sign = Mathf.Sign(Vector3.Dot(normal, Vector3.Cross(from,to)));
float signed_angle = angle * sign;
return signed_angle;
}
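For the original A → B → C question, the sign of the 2D cross product alone decides the direction; a minimal sketch, assuming y-up coordinates (flip the interpretation for y-down screen coordinates):

```python
def turn_direction(a, b, c):
    """Sign of the z-component of (b - a) x (c - a) for 2D points:
    > 0 -> counter-clockwise, < 0 -> clockwise, 0 -> collinear
    (assuming y-up axes; y-down screen coordinates flip the sign)."""
    cross_z = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    if cross_z > 0:
        return "CCW"
    if cross_z < 0:
        return "CW"
    return "collinear"

print(turn_direction((0, 0), (1, 0), (1, 1)))  # CCW
print(turn_direction((0, 0), (0, 1), (1, 1)))  # CW
```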

Modifying set_Basis values for a transform in Revit

I am using a transform in Revit to show elevation views of individual beams (for the purpose of detailing). This works fine as long as the beam is flat (identical start and end offsets), but if I have a beam that is sloping, I am forced to "flatten" the endpoints.
I tried to define a unit vector along the actual start/end points, and a perpendicular to that vector on an XY plane running through the defined ".Origin" of the transform. I then used simple equations to define a normal to those two vectors...
// Cross product of 'first' and 'second', then normalized to unit length:
double newx = first.Y * second.Z - first.Z * second.Y;
double newy = first.Z * second.X - first.X * second.Z;
double newz = first.X * second.Y - first.Y * second.X;
double vectlong = Math.Sqrt(newx * newx + newy * newy + newz * newz);
XYZ normal = new Autodesk.Revit.DB.XYZ(newx / vectlong, newy / vectlong, newz / vectlong);
I then used those three vectors as my ".set_Basis" 0, 1 & 2.
This code works as long as I've forced the beam's start and end points to be flat (which shows that the generated "normal" is valid), but when I remove the code to flatten and use the actual Z values of the endpoints of a sloping beam, the program fails when I try to use these values.
The SDK sample to generate a section through the middle of a beam (CreateViewSection) seems to have found the same problem, but the programmer gave up and simply forces the program to accept only beams that are already on the same XY plane, which is not really the "rule" for beams.
I exported the calculated values of my three vectors and verified that they were all unit length and orthonormal, which should be all that is required for the transform. Can anyone explain why these basis values fail?
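Outside the Revit API, the basis construction described above can be sanity-checked numerically. This Python sketch (function names are illustrative, not Revit's) builds a right-handed orthonormal basis whose x-axis follows a sloping beam direction and verifies unit length and orthogonality:

```python
import math

def orthonormal_basis(direction, up=(0.0, 0.0, 1.0)):
    """Right-handed orthonormal basis with x along the (possibly sloping)
    beam direction. Degenerate when the beam is parallel to `up`."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])
    x = norm(direction)
    y = norm(cross(up, x))   # horizontal, perpendicular to the beam
    z = cross(x, y)          # already unit length: x and y are orthonormal
    return x, y, z

x, y, z = orthonormal_basis((3.0, 0.0, 1.0))  # a sloping beam
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
assert all(abs(dot(u, u) - 1.0) < 1e-12 for u in (x, y, z))
assert abs(dot(x, y)) < 1e-12 and abs(dot(x, z)) < 1e-12 and abs(dot(y, z)) < 1e-12
```

If the same checks pass for the exported Revit values, the failure likely lies in how the transform is applied rather than in the basis itself.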
Please use this code to set the assembly transform. It will align the assembly origin and axes properly so that assembly views are always aligned to the XYZ axes!
var assyTransform = Transform.Identity;
var beamInst = mainElement as FamilyInstance;
if (beamInst != null)
{
    assyTransform = beamInst.GetTransform();
    assyTransform.Origin = (assyInstance.Location as LocationPoint).Point;
}
if (!assyInstance.GetTransform().AlmostEqual(assyTransform))
{
    assyInstance.SetTransform(assyTransform);
    return true;
}

Relationship between projected and unprojected Z-Values in Direct3D

I've been trying to figure this relationship out, but I can't; maybe I'm just not searching for the right thing. If I project a world-space coordinate to clip space using Vector3.Project, the X and Y coordinates make sense, but I can't figure out how it computes the Z (0..1) coordinate. For instance, if my near plane is 1 and far plane is 1000, projecting a Vector3 of (0, 0, 500) (camera center, 50% of the distance to the far plane) to screen space gives me (1050, 500, 0.9994785).
The resulting X and Y coordinates make perfect sense but I don't understand where it's getting the resulting Z-value.
I need this because I'm actually trying to UNPROJECT screen-space coordinates and I need to be able to pick a Z-value to tell it the distance from the camera I want the world-space coordinate to be, but I don't understand the relationship between clip space Z (0-1) and world-space Z (nearplane-farplane).
In case this helps, my transformation matrices are:
World = Matrix.Identity;
//basically centered at 0,0,0 looking into the screen
View = Matrix.LookAtLH(
new Vector3(0,0,0), //camera position
new Vector3(0,0,1), //look target
new Vector3(0,1,0)); //up vector
Projection = Matrix.PerspectiveFovLH(
(float)(Math.PI / 4), //FieldOfViewY
1.6f, // AspectRatio
1, //NearPlane
1000); //FarPlane
Standard perspective projection creates a reciprocal relationship between the scene depth and the depth buffer value, not a linear one. This causes a higher percentage of buffer precision to be applied to objects closer to the near plane than those closer to the far plane, which is typically desired. As for the actual math, here's the breakdown:
The bottom-right 2x2 elements (corresponding to z and w) of the projection matrix are:
[far / (far - near) ] [1]
[-far * near / (far - near)] [0]
This means that after multiplying, z' = z * far / (far - near) - far * near / (far - near) and w' = z. After this step, there is the perspective divide, z'' = z' / w'.
In your specific case, the math works out to the value you got:
z = 500
z' = z * 1000 / (1000 - 1) - 1000 * 1 / (1000 - 1) = 499.499499499...
w' = z = 500
z'' = z' / w' = 0.998998998...
To recover the original depth, simply reverse the operations. Writing a = far / (far - near):
z = (a * near) / (a - z'')
(with near = 1, as here, this reduces to a / (a - z''))
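The forward and inverse mappings can be checked numerically with the question's numbers (a sketch of the math above, with the recovery written in its general near-aware form):

```python
def project_z(z, near, far):
    """Depth-buffer value produced by the D3D-style projection above."""
    a = far / (far - near)
    z_clip = z * a - far * near / (far - near)
    return z_clip / z  # perspective divide: w' = z

def unproject_z(depth, near, far):
    """Invert the projection: z = a * near / (a - z'')."""
    a = far / (far - near)
    return (a * near) / (a - depth)

d = project_z(500.0, 1.0, 1000.0)
print(d)                            # 0.998998998...
print(unproject_z(d, 1.0, 1000.0))  # 500.0
```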

Move a 2D sprite in a curved upward arc in XNA?

Alright, so here's my problem.
I've been trying to create a sort of a visual day/night cycle in XNA, where I have an underlying class that updates and holds time and a Sky class which outputs a background based on the time that the class updates.
What I can't figure out though is how to make the moon/sun move in a curved upward arc that spans the screen based on what time of the day it is. The most problematic part is getting the Y axis to curve while the X axis moves as the time progresses.
Anyone that could help me here?
EDIT:
Alright, looks like Andrew Russell's example helped me do what I needed.
Although I had to experiment for a bit, I finally reached a suitable solution:
float Time = (float)Main.inGameTime.Seconds / (InGameTime.MaxGameHours * 60 * 60 / 2);
this.Position.X = Time * (Main.Viewport.X + Texture.Width * 2) - Texture.Width;
this.Position.Y = Main.Viewport.Y - (Main.Viewport.Y * (float)Math.Sin(Time * MathHelper.Pi) / 2) - (Main.Viewport.Y / 2) + 50;
Try looking at the Math.Sin or Math.Cos functions. These are the trigonometric functions you're looking for.
Something like this (giving a position for SpriteBatch):
float width = GraphicsDevice.Viewport.Width;
float height = GraphicsDevice.Viewport.Height;
float time = 0.5f; // assuming 0 to 1 is one day
Vector2 sunPosition = new Vector2(time * width,
    height - height * (float)Math.Sin(time * MathHelper.Pi));
(Disclaimer: I haven't tested this code.)
There is also the Curve class.
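The half-sine arc idea generalizes beyond XNA; a minimal sketch with a normalized day time in [0, 1] and a hypothetical 800x600 viewport (y grows downward, as on screen; names are illustrative, not XNA API):

```python
import math

def sun_position(time, width=800.0, height=600.0):
    """Position of the sun for normalized day time in [0, 1]."""
    x = time * width                                 # sweeps left to right
    y = height - height * math.sin(time * math.pi)   # horizon -> top -> horizon
    return x, y

print(sun_position(0.0))  # (0.0, 600.0) -- horizon at dawn
print(sun_position(0.5))  # (400.0, 0.0) -- top of screen at noon
print(sun_position(1.0))  # back near the horizon at dusk (y ~ 600)
```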
