My main plane is Rectangle(0,0,10000,10000) for example.
My screen plane (ie virtual position) is Rectangle(1000,1000,1920,1080).
My Texture2D is Rectangle(1500,1200,200,100) in main plane.
I need to translate my Texture2D coordinates to my screen plane. I tried with Matrix.Translate without success.
I must get Texture2D = Rectangle(500,200,200,100) in screen plane.
In order to get the Texture2D from (1500, 1200) to (500, 200) you have to use a translation of (-1000, -1000), which is the negation of your screen plane's position. In code, your translation would be something like this:
Matrix transform = Matrix.CreateTranslation(-screenPlane.X, -screenPlane.Y, 0);
The theory is that you want to move the texture as if your camera were at (0, 0) instead of (1000, 1000). You have to move the texture by (-1000, -1000) to do so.
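As a usage sketch (assuming a spriteBatch and a myTexture variable, plus the rectangles from your example), you can either hand the matrix to SpriteBatch.Begin or transform the position yourself:
Rectangle screenPlane = new Rectangle(1000, 1000, 1920, 1080);
Vector2 texturePosition = new Vector2(1500, 1200); // top-left of the texture in the main plane
Matrix transform = Matrix.CreateTranslation(-screenPlane.X, -screenPlane.Y, 0);

// Option 1: let SpriteBatch apply the transform to everything it draws.
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, transform);
spriteBatch.Draw(myTexture, texturePosition, Color.White);
spriteBatch.End();

// Option 2: transform the position manually; this yields (500, 200).
Vector2 screenPosition = Vector2.Transform(texturePosition, transform);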
Check the web for 2D camera classes; it's always useful to know how cameras work :)
This one for example: http://www.david-amador.com/2009/10/xna-camera-2d-with-zoom-and-rotation/
I want to change the floor's vertex normals' direction as the ball is rolling on the floor. I just need some direction on how to achieve this.
So far this is the direction that I'm heading:
Make a copy of all the vertex normals of the floor on start.
On collision, get the contact point and raycast/spherecast/boxcast to get the affected vertices. (Set a variable offset to control how many vertices I want to be affected by the casting.)
Find normals related to the vertices.
Rotate the affected normals parallel to the ball's closest surface point.
As the ball moves away from the affected floor vertices, slowly return the floor normals back to their original direction. (Set a variable to control how quickly the normals rotate back to their original direction.)
I just need help figuring out which type of casting to use and how to rotate the normals parallel to the ball's surface. This is for a mobile platform so performance is a must.
Thanks in advance.
Here's how you'd go about modifying the normals:
Mesh mesh = GetComponent<MeshFilter>().mesh;   // the floor's mesh
Vector3[] vertices = mesh.vertices;            // copy of the vertex positions (local space)
Vector3[] normals = mesh.normals;              // copy of the normals you'll be modifying
You'd want to use the vertices list to figure out which indexes to modify (presumably also needing to convert from local space to world space). You could then raycast from the world-space coordinate to the ball's center¹ and use the RaycastHit.normal to figure out what the angle to the ball is.
Some clever vector math from there to figure out the new normal for your plane:
Find the vector perpendicular to both hit.normal and Vector3.up (their cross product): this vector will be parallel to the plane. If the two vectors are parallel, bail out: your normal is unchanged (or should be returned to its original value, which will be the same vector as the raycast to find the sphere).
Find the vector perpendicular to that vector and hit.normal: this vector will be your new normal.
¹ Actually, you'll want to know how far down from the ball's center you should target; otherwise, you'll get the most extreme offsets as the ball moves farther away from the plane. So you want the ball's position on X and Z, but a fixed offset up from the plane for Y. This won't be too difficult to calculate.
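Here's a minimal sketch of that vector math, assuming hit is the RaycastHit from the affected vertex toward the ball, i is the index of the vertex being modified, and originalNormals is the copy you made on start:
Vector3 toBall = hit.normal;                          // normal on the ball's surface, pointing back toward the vertex
Vector3 tangent = Vector3.Cross(toBall, Vector3.up);  // parallel to the floor plane

if (tangent.sqrMagnitude < 1e-6f)
{
    // hit.normal is parallel to Vector3.up: keep (or restore) the original normal.
    normals[i] = originalNormals[i];
}
else
{
    // Perpendicular to both the tangent and hit.normal: the new, tilted normal.
    normals[i] = Vector3.Cross(tangent, toBall).normalized;
}

mesh.normals = normals; // write the modified normals back to the mesh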
I would try something like:
Create a texture for the normals, where each pixel is the normal of a vertex (like a grid). Calculate the corresponding coordinate between the 3D ball and the position of the normal in the texture, and draw a ball/circle/sprite onto it each frame. Then you could use a compute shader to slowly revert the normals back to the default up vector.
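A rough sketch of the coordinate mapping part, assuming the floor spans floorMin..floorMax in world XZ, ball is the ball's Transform, and normalTexture is the texture holding one normal per vertex:
// Map the ball's world XZ position into the normal texture's pixel space.
Vector2 uv = new Vector2(
    Mathf.InverseLerp(floorMin.x, floorMax.x, ball.position.x),
    Mathf.InverseLerp(floorMin.z, floorMax.z, ball.position.z));

int px = Mathf.RoundToInt(uv.x * (normalTexture.width - 1));
int py = Mathf.RoundToInt(uv.y * (normalTexture.height - 1));
// Draw the ball/circle sprite into the texture centered on (px, py) each frame.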
I have a prefab cube in Unity. I already ran a script to position it and move some of the vertices to change its shape. Now, I know that to texture it, I have to do something like:
Texture2D myTexture = Resources.Load("sample") as Texture2D;
cube.GetComponent<Renderer>().material.mainTexture = (Texture)myTexture;
And that works very well. But now, I want to understand how to use UV mapping to assign two different textures to my cube (one for the top and one for the sides).
Texture2D topTexture = Resources.Load("topTex") as Texture2D;
Texture2D sideTexture = Resources.Load("sideTex") as Texture2D;
// And now, how do I specify which side each texture should apply to?
cube.GetComponent<Renderer>().material.mainTexture = (Texture)topTexture;
cube.GetComponent<Renderer>().material.mainTexture = (Texture)sideTexture;
Your vertices and UVs match up so that each position in the vertices array corresponds to the UV vector at the same position.
vertices[0] // will map to...
uvs[0]
The vertices array describes each corner point of your cube. You then essentially pin points of your texture to them. Remember that your texture coordinates range from a minimum of 0 to a maximum of 1.
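A minimal sketch of that idea, assuming you pack both textures into a single atlas (say topTexture in the upper half and sideTexture in the lower half, stored in atlasTexture) and that you know which UV indices belong to the cube's top face (the indices below are placeholders):
Mesh mesh = cube.GetComponent<MeshFilter>().mesh;
Vector2[] uvs = mesh.uv;

// Hypothetical: the four UV indices that belong to the cube's top face.
int[] topFaceUvIndices = { 8, 9, 10, 11 };

for (int i = 0; i < uvs.Length; i++)
{
    // Top-face UVs sample the upper half of the atlas, everything else the lower half.
    bool isTop = System.Array.IndexOf(topFaceUvIndices, i) >= 0;
    float vOffset = isTop ? 0.5f : 0f;
    uvs[i] = new Vector2(uvs[i].x, vOffset + uvs[i].y * 0.5f);
}

mesh.uv = uvs;
cube.GetComponent<Renderer>().material.mainTexture = atlasTexture;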
I'm currently programming a 2D top-view Unity game, and I want to set up the camera so that only a specific area is visible. That means I know the size of my area, and when the camera, which follows the player, reaches the border of that area, I want the visible region to stop there.
So here is my question: I know where the camera is and how to make it follow the player, but I don't know how to calculate the distance between the border of the field and the border of what the camera sees. How can I do that?
Essentially, treat your playable area as a rectangle. Then, make a smaller rectangle within that rectangle that accounts for the camera's orthographic size. Don't forget to include the aspect ratio of your camera when calculating horizontal bounds.
Rect myArea; // this stores the bounds of your playable area
Camera cam; // this is your orthographic camera, probably Camera.main
GameObject playerObject; // this is your player
float newX = Mathf.Clamp(
playerObject.transform.position.x,
myArea.xMin + cam.orthographicSize * cam.aspect,
myArea.xMax - cam.orthographicSize * cam.aspect
);
float newY = Mathf.Clamp(
playerObject.transform.position.y,
myArea.yMin + cam.orthographicSize,
myArea.yMax - cam.orthographicSize
);
cam.transform.position = new Vector3(newX,newY,cam.transform.position.z);
If you're using an alternative plane (say xz instead of xy), just swap out the corresponding dimensions in all the calculations.
I have a 360 spherical video. I use this video as a texture on a sphere in Unity. Inside the sphere is a camera and this functions as the setup for my Virtual Reality experience. Pretty basic.
I am trying to write a bit of code for the web where people can upload 360 images and videos, place a marker/hotspot on the 360 spherical image/video, and then apply the image/video texture to the sphere in Unity3D. If I overlay a simple x/y coordinate grid on the 360 video/image texture, pick some x/y coordinates to place the marker/hotspot, and put the texture back on the sphere, Unity will not interpret this correctly, since we are now in 3D space looking at the texture from within the sphere, with all the distortion that mapping the flat texture onto the sphere introduces.
My question is, how do I convert these x and y coordinates on the 2D plane of the 360 video texture to coordinates that can be understood in 3D within Unity3D?
My first thought was to use 2-dimensional Cartesian coordinates and convert these into spherical coordinates, but I seem to be missing a z-axis in the Cartesian coordinates to make this work.
Is the z-axis simply 0, or is it the radius from the center of the sphere to the x/y coordinate? What does the z-axis represent? Are there maybe two coordinate systems: one for coordinates on the plane, and one measured from the centre of the sphere?
This is the conversion code that I have so far:
public static void CartesianToSpherical(Vector3 cartCoords, out float outRadius, out float outPolar, out float outElevation){
    if (cartCoords.x == 0)
        cartCoords.x = Mathf.Epsilon;  // avoid dividing by zero below
    outRadius = Mathf.Sqrt((cartCoords.x * cartCoords.x)
                    + (cartCoords.y * cartCoords.y)
                    + (cartCoords.z * cartCoords.z));
    outPolar = Mathf.Atan(cartCoords.z / cartCoords.x);  // angle around the sphere in the XZ plane
    if (cartCoords.x < 0)
        outPolar += Mathf.PI;                            // put the angle in the correct quadrant
    outElevation = Mathf.Asin(cartCoords.y / outRadius); // angle up/down from the XZ plane
}
This is my very first post so please excuse me if I am doing anything wrong and let me know how to improve.
Spherical coordinates are different from 2D or 3D Cartesian coordinates in that they are angles measured in radians. A point is usually described by its radius from the center of the sphere plus two angles (a polar angle and an elevation), which together pick out a point on the sphere's surface. Please refer to this link for spherical coordinates in Unity: https://blog.nobel-joergensen.com/2010/10/22/spherical-coordinates-in-unity/
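For the reverse direction (which is what you ultimately need to place a marker), here is a minimal sketch assuming your 2D coordinates are normalized (0..1) UVs on an equirectangular texture, using the same angle conventions as your CartesianToSpherical method; the exact offsets depend on how the texture is oriented on your sphere:
public static Vector3 EquirectangularToCartesian(float u, float v, float radius)
{
    float polar = u * 2f * Mathf.PI;          // longitude: angle around the sphere in the XZ plane
    float elevation = (v - 0.5f) * Mathf.PI;  // latitude: -PI/2 at the bottom edge, +PI/2 at the top

    float planeRadius = radius * Mathf.Cos(elevation);
    return new Vector3(planeRadius * Mathf.Cos(polar),
                       radius * Mathf.Sin(elevation),
                       planeRadius * Mathf.Sin(polar));
}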
I'd like to rotate a Texture in XNA. I know I can rotate it when it is drawn, but I would like the Texture2D variable to be the rotated texture. Is there any way to do this?
Use a RenderTarget2D: draw your texture rotated into the render target, then take the resulting texture and save or reuse it.
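A minimal sketch of that approach in XNA 4.0, assuming spriteBatch and GraphicsDevice are available and source is the texture you want to bake; this example rotates by 90 degrees, so the target's width and height are swapped:
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, source.Height, source.Width);

GraphicsDevice.SetRenderTarget(target);
GraphicsDevice.Clear(Color.Transparent);

spriteBatch.Begin();
spriteBatch.Draw(source,
    new Vector2(target.Width / 2f, target.Height / 2f),  // position: center of the render target
    null,
    Color.White,
    MathHelper.PiOver2,                                   // rotation in radians (90 degrees)
    new Vector2(source.Width / 2f, source.Height / 2f),   // origin: center of the source texture
    1f, SpriteEffects.None, 0f);
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null);
Texture2D rotated = target; // RenderTarget2D derives from Texture2D in XNA 4.0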
You could provide a new shader that manages texture coordinate rotation. As the HLSL code of the BasicEffect is public, it should be pretty easy to add this behaviour.
Basic Effect HLSL code
Passing an angle parameter to the shader, the transform should be:
newU = U*cos(alfa) - V*sin(alfa);
newV = U*sin(alfa) + V*cos(alfa);
One way would be to pass a rotation matrix to your shader and multiply your texcoords by that before calling the texture sampler.
I'm not sure if XNA/DirectX has the same concept as OpenGL's texture matrix.