Unity ShaderLab - transform from screen space to world space - C#

I'm making a post-processing shader (in unity) that requires world-space coordinates. I have access to the depth information of a certain pixel, as well as the onscreen location of that pixel. How can I find the world position that that pixel corresponds to, much like the function ViewportToWorldPos()?

It's been three years! I was working on this recently, and a senior engineer helped me solve the problem. Here is the code.
First, we need to pass a camera transform matrix to the shader from a script:
void OnRenderImage(RenderTexture src, RenderTexture dst)
{
    Camera currentCamera = Camera.main;
    Matrix4x4 matrixCameraToWorld = currentCamera.cameraToWorldMatrix;
    Matrix4x4 matrixProjectionInverse = GL.GetGPUProjectionMatrix(currentCamera.projectionMatrix, false).inverse;
    Matrix4x4 matrixHClipToWorld = matrixCameraToWorld * matrixProjectionInverse;

    Shader.SetGlobalMatrix("_MatrixHClipToWorld", matrixHClipToWorld);
    Graphics.Blit(src, dst, _material);
}
Then, in the shader, we use the depth information to build the clip-space position and transform it, like this:
// Assumed to be declared elsewhere in the shader:
// sampler2D _CameraDepthTexture;
// float4x4 _MatrixHClipToWorld;

inline half3 TransformUVToWorldPos(half2 uv)
{
    half depth = tex2D(_CameraDepthTexture, uv).r;
    #ifndef SHADER_API_GLCORE
        half4 positionCS = half4(uv * 2 - 1, depth, 1) * LinearEyeDepth(depth);
    #else
        half4 positionCS = half4(uv * 2 - 1, depth * 2 - 1, 1) * LinearEyeDepth(depth);
    #endif
    return mul(_MatrixHClipToWorld, positionCS).xyz;
}
That's all.

Have a look at this tutorial: http://flafla2.github.io/2016/10/01/raymarching.html
Essentially:
store one vector per corner of the screen, passed as constants, that goes from the camera position to said corner.
interpolate the vectors based on the screen space position, or the uvs of your screenspace quad
compute final position as cameraPosition + interpolatedVector * depth
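For reference, here is a minimal C# sketch of the script-side half of that corner-ray approach. The component name, the _material field, and the "_FrustumCorners" shader property are illustrative, not taken from the tutorial; treat it as a starting point rather than the tutorial's exact code.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class FrustumCornerRays : MonoBehaviour
{
    public Material _material; // post-processing material that reconstructs world positions

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Camera cam = GetComponent<Camera>();

        // One vector per screen corner, from the camera to the corresponding corner of the far plane.
        Vector3[] farCorners = new Vector3[4];
        cam.CalculateFrustumCorners(new Rect(0, 0, 1, 1), cam.farClipPlane,
                                    Camera.MonoOrStereoscopicEye.Mono, farCorners);

        // Pack the four world-space corner rays into a matrix (one ray per row) so the
        // shader can interpolate them across the fullscreen quad and compute
        // worldPos = _WorldSpaceCameraPos + interpolatedRay * linear01Depth.
        Matrix4x4 corners = Matrix4x4.identity;
        for (int i = 0; i < 4; i++)
        {
            corners.SetRow(i, cam.transform.TransformVector(farCorners[i]));
        }
        _material.SetMatrix("_FrustumCorners", corners);

        Graphics.Blit(src, dst, _material);
    }
}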

Related

Unity3D: How to show only the intersection/cross-section between two meshes at runtime?

The Problem
Hi, I'm basically trying to do the same thing as described here:
Unity Intersections Mask
With the caveat that the plane isn't exactly a plane but a 3D cone (very large relative to the arbitrary 3D object), and the camera I'm using has to be an orthographic camera (so no deferred rendering).
I also need to do this basically every frame.
What I tried
I've tried looking up various intersection depth shaders but they all seem to be done with the perspective camera.
Even then they don't render the non-intersecting parts of the 3D objects as transparent, instead coloring parts of them differently.
The linked stackoverflow question mentions rendering the plane normally as an opaque object, and then using a fragment shader to render only the part of objects that intersect the plane.
However, based on my (admittedly) very limited understanding of shaders, I'm uncertain how to go about doing this. As far as I know, each fragment only has one value as its depth, which is the distance from the camera's near clipping plane to the point on the object closest to the camera that is shown by that fragment/pixel.
Since the rest of the object is transparent in this case, and I need to show parts of the object that would normally be covered (and thus, from what I understand, whose depth is not known), I can't see how I could draw only the parts that intersect my cone.
I've tried the following approaches other than using shaders:
Use a CSG algorithm to actually do a boolean intersect operation between the cone and objects and render that.
Couldn't do it because the CSG algorithms were too expensive to do every frame.
Try using the contactPoints from the Collision generated by Unity to extract all points (vertices) where the two meshes intersect and construct a new mesh from those points.
This led me down the path of 3D Delaunay triangulation, which was too much for me to understand, probably too expensive like the CSG attempt, and I'm pretty sure there is a much simpler solution to this problem that I'm just missing.
Some Code
The shader I initially tried using (and which didn't work) was based on code from here:
https://forum.unity.com/threads/depth-buffer-with-orthographic-camera.355878/#post-2302460
And applied to each of the objects.
With the line float partY = i.projPos.y + (i.projPos.y/_ZBias); modified to remove the hard-coded _ZBias correction factor (and other color-related values slightly changed).
From my understanding, it should work since it seems to me like it's comparing the depth buffer and the actual depth of the object and only coloring it as the _HighlightColor when the two are sufficiently similar.
Of course, I know almost nothing about shaders, so I have little faith in my assessment of this code.
//Highlights intersections with other objects
Shader "Custom/IntersectionHighlights"
{
    Properties
    {
        _RegularColor("Main Color", Color) = (1, 1, 1, 0) //Color when not intersecting
        _HighlightColor("Highlight Color", Color) = (0, 0, 0, 1) //Color when intersecting
        _HighlightThresholdMax("Highlight Threshold Max", Float) = 1 //Max difference for intersections
        _ZBias("Highlight Z Bias", Float) = 2.5 //Balance out the Z-axis fading
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off
            Cull Off

            CGPROGRAM
            #pragma target 3.0
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform sampler2D _CameraDepthTexture; //Depth Texture
            uniform float4 _RegularColor;
            uniform float4 _HighlightColor;
            uniform float _HighlightThresholdMax;
            uniform float _ZBias;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 projPos : TEXCOORD1; //Screen position of pos
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.projPos = ComputeScreenPos(o.pos);
                return o;
            }

            half4 frag(v2f i) : COLOR
            {
                float4 finalColor = _RegularColor;

                //Get the distance to the camera from the depth buffer for this point
                float sceneZ = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)).r * 400;

                //Actual distance to the camera
                float partY = i.projPos.y;// + (i.projPos.y/_ZBias);

                //If the two are similar, then there is an object intersecting with our object
                float diff = (abs(sceneZ - partY)) / _HighlightThresholdMax;
                if (diff <= 1)
                {
                    finalColor = _HighlightColor;
                }

                half4 c;
                c.r = finalColor.r;
                c.g = finalColor.g;
                c.b = finalColor.b;
                c.a = (diff <= 1) ? 1.0f : 0.0f;
                return c;
            }
            ENDCG
        }
    }
    FallBack "VertexLit"
}
The result of the (erroneous) code above is that the object always becomes transparent, regardless of whether or not it intersects the cone:
(The object is fully transparent even though it intersects the cone; pic taken from the Scene view at runtime.)
Ultimately it just seems to me like it comes back to shaders. How would I go about achieving this effect? It doesn't necessarily have to be with shaders; anything that works is fine for me tbh. Example code would be great.

How to calculate the (proper) transformation matrix between two frames (axial systems) in Unity3D

For a project in Unity3D I'm trying to transform all objects in the world by changing frames. What this means is that the origin of the new frame is rotated, translated, and scaled to match the origin of the old frame, then this operation is applied to all other objects (including the old origin).
For this, I need a generalized, 3-dimensional (thus 4x4) Transformation-Matrix.
I have looked at using Unity's built-in Matrix4x4.TRS() method, but this seems useless, as it only applies the translation, rotation & scale to a single defined point.
What I'm looking for, is a change of frames, in which the new frame has a different origin, rotation, AND scale, with regards to the original one.
To visualize the problem, I've made a small GIF (I currently have a working version in 3D, without using a Matrix, and without any rotation):
https://gyazo.com/8a7ab04dfef2c96f53015084eefbdb01
The values for each sphere:
Origin1 (Red Sphere)
Before:
Position (-10, 0, 0)
Rotation (0,0,0)
Scale (4,4,4)
After:
Position (10, 0, 0)
Rotation (0,0,0)
Scale (8,8,8)
-
Origin2 (Blue Sphere)
Before:
Position (-20, 0, 0)
Rotation (0,0,0)
Scale (2,2,2)
After:
Position (-10, 0, 0)
Rotation (0,0,0)
Scale (4,4,4)
-
World-Object (White Sphere)
Before:
Position (0, 0, 10)
Rotation (0,0,0)
Scale (2,2,2)
After:
Position (30, 0, 20)
Rotation (0,0,0)
Scale (4,4,4)
Currently I'm simply taking the vector from the first origin to the object, scaling it by the scale factor between the two origins, and applying that on top of the new position of the original (first) origin.
This will of course not work when rotation is applied to any of the 2 origins.
// Position in original axes
Vector3 positionBefore = testPosition.TestPosition - origin.TestPosition;
// Position in new axes
Vector3 positionAfter = (positionBefore * scaleFactor) + origin.transform.position;
What I'm looking for is a Matrix that can do this (and include rotation, such that Origin2 is rotated to the rotation Origin1 was in before the transformation, and all other objects are moved to their correct positions).
Is there a way to do this without doing the full calculation on every vector (i.e. transforming the positionBefore vector)? It needs to be applied to a (very) large number of objects every frame, so it needs to be (fairly) optimized.
Edit: Scaling will ALWAYS be uniform.
There might be other solutions but here is what I would do
Wrap your objects into the following hierarchy
WorldAnchorObject
|- Red Sphere
|- Blue Sphere
|- White Sphere
Make sure the WorldAnchorObject has
position: 0,0,0
rotation: 0,0,0
localScale: 1,1,1
position/rotate/scale the Spheres (this will now happen relative to WorldAnchorObject)
Now all that is left is to transform the WorldAnchorObject -> it will move, scale and rotate anything else and keeps the relative transforms intact.
How exactly you move the world anchor is up to you. I guess you want to always center and normalize a certain child object. Maybe something like
public void CenterOnObject(GameObject targetCenter)
{
    var targetTransform = targetCenter.transform;

    // The localPosition and localRotation are relative to the parent (this anchor)
    var targetPos = targetTransform.localPosition;
    var targetRot = targetTransform.localRotation;
    var targetScale = targetTransform.localScale;

    // First reset everything
    transform.position = Vector3.zero;
    transform.rotation = Quaternion.identity;
    transform.localScale = Vector3.one;

    // Set yourself to the inverted target position
    // After this action the target should be on 0,0,0
    transform.position = targetPos * -1;

    // Scale yourself relative to 0,0,0
    var newScale = Vector3.one * 1 / targetScale.x;
    ScaleAround(gameObject, Vector3.zero, newScale);

    // Now everything should be scaled so that the target
    // has scale 1,1,1 and still is on position 0,0,0

    // Now rotate around 0,0,0 so that the rotation of the target gets normalized
    transform.rotation = Quaternion.Inverse(targetRot);
}

// This scales an object around a certain pivot and
// takes care of the correct position translation as well
private void ScaleAround(GameObject target, Vector3 pivot, Vector3 newScale)
{
    Vector3 A = target.transform.localPosition;
    Vector3 B = pivot;
    Vector3 C = A - B; // diff from object pivot to desired pivot/origin

    float RS = newScale.x / target.transform.localScale.x; // relative scale factor

    // calc final position post-scale
    Vector3 FP = B + C * RS;

    // finally, actually perform the scale/translation
    target.transform.localScale = newScale;
    target.transform.localPosition = FP;
}
Now you call it passing one of the children like e.g.
worldAnchorReference.CenterOnObject(RedSphere);
should result in what you wanted to achieve. (Hacking this in on my smartphone so no warranties but if there is trouble I can check it as soon as I'm with a PC again. ;))
Nevermind..
Had to apply the rotation & scale to the Translation before creating the TRS
D'Oh
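For anyone landing here later, a minimal sketch of that idea (the class and parameter names are illustrative, and it assumes uniform scale, as stated in the edit). Composing the two frame matrices this way effectively bakes the rotation and scale into the translation, which is what the note above refers to:
using UnityEngine;

public static class FrameChange
{
    // Maps everything expressed in the old frame into the new frame:
    // first undo the old frame's TRS, then apply the new one.
    public static void Apply(Transform[] objects,
                             Vector3 oldPos, Quaternion oldRot, Vector3 oldScale,
                             Vector3 newPos, Quaternion newRot, Vector3 newScale)
    {
        Matrix4x4 changeOfFrame = Matrix4x4.TRS(newPos, newRot, newScale)
                                * Matrix4x4.TRS(oldPos, oldRot, oldScale).inverse;
        float scaleFactor = newScale.x / oldScale.x; // uniform scale only

        foreach (Transform obj in objects)
        {
            // MultiplyPoint3x4 is enough here since there is no projection involved.
            obj.position   = changeOfFrame.MultiplyPoint3x4(obj.position);
            obj.rotation   = newRot * Quaternion.Inverse(oldRot) * obj.rotation;
            obj.localScale = obj.localScale * scaleFactor;
        }
    }
}
With the example values from the question (old origin at (-10,0,0) with scale 4, new origin at (10,0,0) with scale 8), the white sphere at (0,0,10) maps to (30,0,20) with scale 4, matching the "After" values listed above.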

How to instantiate tiles based on a 2D Array into a Platform Game using Unity?

I'm building a very simple platform game using 2D array to build the map based on it.
There are two simple goals I want and I'm currently not finding the answer:
Ensure that the camera is 16:9 and my scene will be 100% displayed in it
Build a 2D platform tileset as in an array
My environment:
Unity 5.5.0f3 (in 2D Mode)
Camera projection: orthographic, size 10.9
Game displayed in 16:9
Tileset dimensions are 128x128 px
Here is my current code:
public Camera camera;
public GameObject prefab;

void Start () {
    Vector3 pos = camera.ScreenToWorldPoint(new Vector3(0, 0, 0));
    Vector3 nextPosition = pos;
    for (int i = 0; i < 32; i++)
    {
        for (int j = 0; j < 18; j++)
        {
            GameObject a = Instantiate(prefab, new Vector3(nextPosition.x, nextPosition.y, 0), Quaternion.identity);
            nextPosition = new Vector3(pos.x + (prefab.GetComponent<Renderer>().bounds.size.x) * i, pos.y + (prefab.GetComponent<Renderer>().bounds.size.y) * j, 0);
        }
    }
}
There are 3 things to notice about it:
I'm using ScreenToWorldPoint to get me the position for 0,0,0
My building order goes from bottom left to top right; each tile is placed at the previous position plus the block width/height (x and y)
I'm using a 16:9 scheme, which is 32:18 tiles
Using it, this is my result:
As you can see it stays outside of the camera boundary, and even though both the camera and the code are 16:9, it exceeds it by 1 column. Also note that the instantiate point is exactly in the middle of my GameObject, so I start instantiating at half of the GameObject's width and height, meaning:
Vector3 pos = camera.ScreenToWorldPoint(new Vector3 (64, 64, 0));
And the result is the following:
Not what I expected at all. By trial and error I figured out it is supposed to be at 16,16:
Vector3 pos = camera.ScreenToWorldPoint(new Vector3 (16, 16, 0));
Now it's a perfect fit, but it exceeds by 1 row at the top and 1.5 columns at the right, which shouldn't happen because both are 16:9.
I'm clearly doing something wrong but I can't see what; I've been through this problem in the past but I don't remember what I figured out.
"Pos" needs a shift at the start. It can be achieved using Bounds.extents
Vector3 pos = camera.ScreenToWorldPoint(new Vector3 (0, 0, 0));
pos = new Vector3( pos.x + prefab.GetComponent<Renderer>().bounds.extents.x,
pos.y + prefab.GetComponent<Renderer>().bounds.extents.y,
0);
//....the rest is the same as your code
This will be better than using the magic numbers (16,16,0). The first tile will be positioned at the left-bottom corner no matter what scale you use.
128x128 px only tells me that you're using a square tile. So it's 128x128 px, but I can fill the whole screen with one tile or make it as tiny as I want (thinking in world coordinates). The solution is either to scale the tiles or to change the orthographicSize of the camera.
The easy solution is to change the orthographicSize to fit the tiles.
camera.orthographicSize = 18 * prefab.GetComponent<Renderer>().bounds.size.y * 0.5f;
orthographicSize equals half the view height in world coordinates, and you need 18 tiles in height.
So all code combined:
Bounds bound = prefab.GetComponent<Renderer>().bounds;
camera.orthographicSize = 18 * bound.size.y * 0.5f;
Vector3 pos = camera.ScreenToWorldPoint(Vector3.zero);
pos = new Vector3( pos.x + bound.extents.x, pos.y + bound.extents.y, 0);
//....The rest is the same as your code
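Putting it together, a sketch of what the full Start() could look like with both changes applied. The class name is illustrative, and the loop is restructured slightly so each tile's position is computed directly from its indices rather than carried over from the previous iteration:
using UnityEngine;

public class TileMapBuilder : MonoBehaviour
{
    public Camera camera;      // orthographic camera
    public GameObject prefab;  // square tile prefab

    void Start()
    {
        Bounds bound = prefab.GetComponent<Renderer>().bounds;

        // Fit exactly 18 tiles vertically: orthographicSize is half the vertical
        // view size in world units.
        camera.orthographicSize = 18 * bound.size.y * 0.5f;

        // Bottom-left corner of the view, shifted by half a tile so the first
        // tile's centre sits exactly in the corner of the screen.
        Vector3 pos = camera.ScreenToWorldPoint(Vector3.zero);
        pos = new Vector3(pos.x + bound.extents.x, pos.y + bound.extents.y, 0);

        for (int i = 0; i < 32; i++)
        {
            for (int j = 0; j < 18; j++)
            {
                Vector3 tilePos = new Vector3(pos.x + bound.size.x * i,
                                              pos.y + bound.size.y * j,
                                              0);
                Instantiate(prefab, tilePos, Quaternion.identity);
            }
        }
    }
}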

How to get coords inside a transformed sprite?

I am trying to get the x and y coordinates inside a transformed sprite. I have a simple 200x200 sprite which rotates in the middle of the screen, with an origin of (0,0) to keep things simple.
I have written a piece of code that can transform the mouse coordinates but only with a specified x OR y value.
int ox = (int)(MousePos.X - Position.X);
int oy = (int)(MousePos.Y - Position.Y);
Relative.X = (float)((ox - (Math.Sin(Rotation) * Y /* problem here */)) / Math.Cos(Rotation));
Relative.Y = (float)((oy + (Math.Sin(Rotation) * X /* problem here */)) / Math.Cos(Rotation));
How can I achieve this? Or how can I fix my equation?
The most general way is to express the transformation as a matrix. This way, you can add any other transformation later, if you find you need it.
For the given transformation, the matrix is:
var mat = Matrix.CreateRotationZ(Rotation) * Matrix.CreateTranslation(Position);
This matrix can be interpreted as the system transformation from sprite space to world space. You want the inverse transformation - the system transformation from world space to sprite space.
var inv = Matrix.Invert(mat);
You can transform the mouse coordinates with this matrix:
var mouseInSpriteSpace = Vector2.Transform(MousePos, inv);
And you get the mouse position in the sprite's local system.
You can check whether you have the correct matrix mat by using the overload of SpriteBatch.Begin() that takes a matrix: pass the matrix and draw the sprite at (0, 0) with no rotation; if mat is correct, the sprite appears exactly where it normally does.
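A quick sketch of that check. Here spriteBatch and spriteTexture are assumed fields (not from the original question), Position is assumed to be a Vector2, and the Begin overload shown is the XNA 4.0 one that accepts a transform matrix:
var mat = Matrix.CreateRotationZ(Rotation) * Matrix.CreateTranslation(new Vector3(Position, 0f));

// Draw at the origin with no rotation; with mat passed to Begin(), the sprite should
// land exactly where the normally-transformed sprite appears.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, mat);
spriteBatch.Draw(spriteTexture, Vector2.Zero, Color.White);
spriteBatch.End();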

How can I rotate this matrix around the center?

My XNA game uses Farseer Physics, which is a 2d physics engine with an optional renderer for physics engine data, to help you debug. Visual debug data is very useful, so I have it setup to be drawn according to my camera's state. This works perfectly, except for z axis rotation. See, I have a camera class that supports movement, zoom, and z axis rotation. My debug class uses the Farseer's debug renderer to create matrices that make the debug data be drawn according to the camera, and it does it well, except for one thing.. the z axis rotation uses the top-left corner of the screen for (0, 0), while my camera rotates using the center of the viewport as (0, 0). Does anyone have any tips for me? If I can make the debug drawer rotate from the center, it would work perfectly with my camera.
public void Draw(Camera2D camera, GraphicsDevice graphicsDevice)
{
    // Projection (location and zoom)
    float width = (1f / camera.Zoom) * ConvertUnits.ToSimUnits(graphicsDevice.Viewport.Width / 2);
    float height = (-1f / camera.Zoom) * ConvertUnits.ToSimUnits(graphicsDevice.Viewport.Height / 2);

    //projection = Matrix.CreateOrthographic(width, height, 1f, 1000000f);
    projection = Matrix.CreateOrthographicOffCenter(
        -width,
        width,
        -height,
        height,
        0f, 1000000f);

    // View (translation and rotation)
    float xTranslation = -1 * ConvertUnits.ToSimUnits(camera.Position.X);
    float yTranslation = -1 * ConvertUnits.ToSimUnits(camera.Position.Y);
    Vector3 translationVector = new Vector3(xTranslation, yTranslation, 0f);

    view = Matrix.CreateRotationZ(camera.Rotation) * Matrix.Identity;
    view.Translation = translationVector;

    DebugViewXNA.RenderDebugData(ref projection, ref view);
}
One common approach to solving this sort of issue is to move the object in question to the 'centre', rotate it, and then move it back.
So in this case, I'd suggest applying a transformation that moves the camera "up and across" by half the screen dimensions, applying the rotation, and then moving it back.
In general, in order to perform rotation around point (x, y, z), the operation needs to be broken down into 3 conceptual parts:
T is a translation matrix that translates by (-x, -y, -z)
R is a rotation matrix that rotates around the relevant axis.
T^-1 is the matrix that translates back to (x, y, z)
The matrix you're after is the result of the multiplication of these 3, in reverse order:
M = T^-1 * R * T
The x,y,z you should use are your camera's position.
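For concreteness, a sketch of how that composition could be wired into the Draw() method above; whether the pivot should be the camera position converted to sim units depends on the rest of the setup, so treat this as a starting point. Note that XNA multiplies row vectors (v * M), so the factors appear in application order here, the reverse of the column-vector formula M = T^-1 * R * T given above:
// Pivot: the camera position, converted to the same (sim) units as the debug data.
Vector3 pivot = new Vector3(ConvertUnits.ToSimUnits(camera.Position.X),
                            ConvertUnits.ToSimUnits(camera.Position.Y),
                            0f);

Matrix rotationAroundPivot =
      Matrix.CreateTranslation(-pivot)           // T    : move the pivot to the origin
    * Matrix.CreateRotationZ(camera.Rotation)    // R    : rotate around the z axis
    * Matrix.CreateTranslation(pivot);           // T^-1 : move the pivot back

// Compose with the existing camera translation instead of overwriting view.Translation.
view = rotationAroundPivot * Matrix.CreateTranslation(translationVector);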
