Meshes created from code don't have a position unity - c#

So, I tried to create a grid so that I can instantiate objects on it. I check for the position of said hit object (one of the squares I created) and then set the instantiated object to that position. Problem is, the squares I created with code don't have a position and are all set to 0, 0, 0.
{
GameObject tileObject = new GameObject(string.Format("{0}, {1}", x, y));
tileObject.transform.parent = transform;
Mesh mesh = new Mesh();
tileObject.AddComponent<MeshFilter>().mesh = mesh;
tileObject.AddComponent<MeshRenderer>().material = tileMaterial;
Vector3[] vertices = new Vector3[4];
vertices[0] = new Vector3(x * tileSize, 0, y * tileSize);
vertices[1] = new Vector3(x * tileSize, 0, (y +1) * tileSize);
vertices[2] = new Vector3((x +1) * tileSize, 0, y * tileSize);
vertices[3] = new Vector3((x +1) * tileSize, 0, (y +1) * tileSize);
int[] tris = new int[] { 0, 1, 2, 1, 3, 2 };
mesh.vertices = vertices;
mesh.triangles = tris;
mesh.RecalculateNormals();
tileObject.layer = LayerMask.NameToLayer("Tile");
tileObject.AddComponent<BoxCollider>();
//var xPos = Mathf.Round(x);
//var yPos = Mathf.Round(y);
//tileObject.gameObject.transform.position = new Vector3(xPos , 0f, yPos);
return tileObject;
}

As you already noted, the issue is that you leave all tiles at position 0,0,0 and only set their vertices to the desired world space positions.
You would rather want to keep your vertices local, e.g.
// I would use the offset of -0.5f so the mesh is centered at the transform pivot
// Also no need to recreate the arrays every time, you can simply reference the same ones
private readonly Vector3[] vertices = new Vector3[4]
{
    new Vector3(-0.5f, 0, -0.5f),
    new Vector3(-0.5f, 0, 0.5f),
    new Vector3(0.5f, 0, -0.5f),
    new Vector3(0.5f, 0, 0.5f)
};
private readonly int[] tris = new int[] { 0, 1, 2, 1, 3, 2 };
and then in your method do
GameObject tileObject = new GameObject($"{x},{y}");
tileObject.transform.parent = transform;
tileObject.transform.localScale = new Vector3(tileSize, 1, tileSize);
tileObject.transform.localPosition = new Vector3(x * tileSize, 0, y * tileSize);
The latter depends of course on your needs. Actually I would prefer to have the tiles also centered around the grid object so something like e.g.
// The "+0.5f" centers the tile itself correctly (its pivot sits at the tile center)
// The "-gridWidth/2f" makes the entire grid centered around the parent
tileObject.transform.localPosition = new Vector3((x + 0.5f - gridWidth / 2f) * tileSize, 0, (y + 0.5f - gridHeight / 2f) * tileSize);
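For example, with gridWidth = 2 and tileSize = 1 this places tile x = 0 at (0 + 0.5 - 1) * 1 = -0.5 and tile x = 1 at +0.5, so the two tile centers sit symmetrically around the parent.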
In order to later find out which tile you are standing on (e.g. via raycasts, collisions, etc.) I would then rather use a dedicated component and simply tell it its coordinates, e.g.
// Note that Tile is a built-in type so you would want to avoid confusion
public class MyTile : MonoBehaviour
{
public Vector2Int GridPosition;
}
and then while generating your grid you would simply add
var tile = tileObject.AddComponent<MyTile>();
tile.GridPosition = new Vector2Int(x,y);
You can still also access its transform.position to get the actual world space center of each tile.
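Putting the pieces together, a minimal sketch of such a tile factory could look like this (CreateTile is a hypothetical name; tileMaterial, tileSize, gridWidth and gridHeight, as well as the shared vertices/tris arrays from above, are assumed to be fields on the grid component):
private GameObject CreateTile(int x, int y)
{
    GameObject tileObject = new GameObject($"{x},{y}");
    // false -> treat the tile as local to the grid parent
    tileObject.transform.SetParent(transform, false);
    tileObject.transform.localScale = new Vector3(tileSize, 1, tileSize);
    tileObject.transform.localPosition = new Vector3((x + 0.5f - gridWidth / 2f) * tileSize, 0, (y + 0.5f - gridHeight / 2f) * tileSize);

    Mesh mesh = new Mesh();
    mesh.vertices = vertices;   // the shared, pivot-centered array from above
    mesh.triangles = tris;
    mesh.RecalculateNormals();
    mesh.RecalculateBounds();

    tileObject.AddComponent<MeshFilter>().mesh = mesh;
    tileObject.AddComponent<MeshRenderer>().material = tileMaterial;
    tileObject.AddComponent<BoxCollider>();
    tileObject.layer = LayerMask.NameToLayer("Tile");

    var tile = tileObject.AddComponent<MyTile>();
    tile.GridPosition = new Vector2Int(x, y);
    return tileObject;
}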

Related

How can I adjust rotation based on another gameObject in Unity?

I'm trying to make a chess game in Augmented Reality. I wrote a script which places the chessboard on the plane in AR. Then I created a mesh with 64 squares which match the chessboard tiles. I have a problem placing the mesh to match my chessboard (screenshots). I think I should rotate the mesh around the Y axis, but I wasn't able to do that.
Placing the chessboard:
GameObject placedObject = Instantiate(objectToPlace, placementPose.position, placementPose.rotation * Quaternion.Euler(-90f, 0f, 0f));
The script that creates and places the mesh:
private float yAdjust = 0F;
private Vector3 boardCenter = GameObject.Find("Interaction").GetComponent<TapToPlaceObject>().placementPose.position;
private void GenerateSquares(float squareSize)
{
adjust = new Vector3(-4 * squareSize, 0, -4 * squareSize) + boardCenter;
squares = new GameObject[8,8];
for (int i = 0; i < 8; i++)
{
for (int j = 0; j < 8; j++)
{
squares[i, j] = CreateSquare(squareSize,i,j);
}
}
}
private GameObject CreateSquare(float squareSize, int i, int j)
{
GameObject square = new GameObject(string.Format("{0},{1}", i, j));
square.transform.parent = transform;
Vector3[] vertices = new Vector3[4];
vertices[0] = new Vector3(i * squareSize, yAdjust, j * squareSize) + adjust;
vertices[1] = new Vector3(i * squareSize, yAdjust, (j + 1) * squareSize) + adjust;
vertices[2] = new Vector3((i + 1) * squareSize, yAdjust, j * squareSize) + adjust;
vertices[3] = new Vector3((i + 1) * squareSize, yAdjust, (j + 1) * squareSize) + adjust;
int[] triangles = new int[] { 0, 1, 2, 1, 3, 2 };
Mesh mesh = new Mesh();
square.AddComponent<MeshFilter>().mesh = mesh;
square.AddComponent<MeshRenderer>().material = squareMaterial;
//square.transform.rotation = Quaternion.Euler(new Vector3(0, boardRotation.eulerAngles.y, 0));
//square.transform.rotation = boardRotation;
mesh.vertices = vertices;
mesh.triangles = triangles;
square.AddComponent<BoxCollider>();
return square;
}
[screenshot of my problem]
You could probably try and just add another
* Quaternion.Euler(0f, 45f, 0f)
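Applied to the Instantiate call from your question, one way to read that suggestion would be the following (the 45° is just a first guess, tweak as needed):
GameObject placedObject = Instantiate(objectToPlace, placementPose.position, placementPose.rotation * Quaternion.Euler(-90f, 0f, 0f) * Quaternion.Euler(0f, 45f, 0f));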
In general though, for your squares there is Transform.SetParent, which lets you pass the optional parameter worldPositionStays as false, which is probably what you want to do here. Setting
transform.parent = parent
is equivalent to calling
transform.SetParent(parent);
which in turn equals calling
transform.SetParent(parent, true);
so the objects keep their original world space orientation and position.
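For your squares that would mean a minimal change like this in CreateSquare (a sketch; the rest of the method stays as it is):
// instead of: square.transform.parent = transform;
square.transform.SetParent(transform, false); // false -> keep the square local to the board, so it follows the board's rotation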
However, I would actually rather recommend that you create that board once in edit mode, make the entire board a prefab, and then when you spawn the board you only need to take care of placing and rotating one single object; all children will already be correctly placed and rotated within the board.

Unity3D - How to add textures to a mesh

I am making a cubic voxel game. I have chunks, world, blocks and mesh generation done, but there's one problem: I couldn't get the texturing to work.
All I need is to add a texture to each side of a 3D mesh (the texture of every side is different!). I've seen some implementations, but it's hard to read somebody else's code (I tried to use them, but it didn't work). I've tried to do it by myself, but with no results.
Can anybody explain how to do this?
Here is my current code:
[ExecuteInEditMode]
[RequireComponent(typeof(MeshFilter))]
[RequireComponent(typeof(MeshRenderer))]
public class Chunk : MonoBehaviour
{
private ushort[] _voxels = new ushort[16 * 16 * 16];
private MeshFilter meshFilter;
private Vector3[] cubeVertices = new[] {
new Vector3 (0, 0, 0),
new Vector3 (1, 0, 0),
new Vector3 (1, 1, 0),
new Vector3 (0, 1, 0),
new Vector3 (0, 1, 1),
new Vector3 (1, 1, 1),
new Vector3 (1, 0, 1),
new Vector3 (0, 0, 1),
};
private int[] cubeTriangles = new[] {
// Front
0, 2, 1,
0, 3, 2,
// Top
2, 3, 4,
2, 4, 5,
// Right
1, 2, 5,
1, 5, 6,
// Left
0, 7, 4,
0, 4, 3,
// Back
5, 4, 7,
5, 7, 6,
// Bottom
0, 6, 7,
0, 1, 6
};
public ushort this[int x, int y, int z]
{
get { return _voxels[x * 16 * 16 + y * 16 + z]; }
set { _voxels[x * 16 * 16 + y * 16 + z] = value; }
}
void Start()
{
meshFilter = GetComponent<MeshFilter>();
}
private void Update()
{
GenerateMesh();
}
public void GenerateMesh()
{
Mesh mesh = new Mesh();
List<Vector3> vertices = new List<Vector3>();
List<int> triangles = new List<int>();
for (var x = 0; x < 16; x++)
{
for (var y = 0; y < 16; y++)
{
for (var z = 0; z < 16; z++)
{
var voxelType = this[x, y, z];
if (voxelType == 0)
continue;
var pos = new Vector3(x, y, z);
var verticesPos = vertices.Count;
foreach (var vert in cubeVertices)
vertices.Add(pos + vert);
foreach (var tri in cubeTriangles)
triangles.Add(verticesPos + tri);
}
}
}
mesh.SetVertices(vertices);
mesh.SetTriangles(triangles.ToArray(), 0);
meshFilter.mesh = mesh;
}
}
NOTE: This is a repost with many edits so it is focused on one problem plus has better explanation. Sorry for that.
Like your SetVertices() and SetTriangles() calls, you can call SetUVs() with a list of the UV coordinates of each vertex on your texture.
The UV list size must match the vertices list size!
UV coordinates are expressed as Vector2 values between 0 and 1.
For example, to apply the whole texture to the front face of your cube, the first 4 UVs would look like this:
private Vector2[] cubeUVs = new[] {
new Vector2 (0, 0),
new Vector2 (1, 0),
new Vector2 (1, 1),
new Vector2 (0, 1),
...
}
...
mesh.SetUVs(0, cubeUVs);
If your texture is not a square, then it will be stretched.
You should also call RecalculateBounds() and RecalculateNormals() at the end of your GenerateMesh() method to avoid some issues later.
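Inside your GenerateMesh() that could look roughly like this (a sketch assuming a cubeUVs array with one Vector2 per cube vertex, i.e. 8 entries, mirroring your cubeVertices):
List<Vector2> uvs = new List<Vector2>();
// ... inside the x/y/z voxel loops, right after adding the 8 cube vertices:
foreach (var uv in cubeUVs)
    uvs.Add(uv);
// ... after the loops:
mesh.SetVertices(vertices);
mesh.SetTriangles(triangles.ToArray(), 0);
mesh.SetUVs(0, uvs);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
meshFilter.mesh = mesh;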
EDIT
If you really want different texture files for each side of the cube, then the cleanest and most performant solution for me is to set a different vertex color for each side of your cube, e.g. (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1) and (0,1,1).
However, you will have to duplicate all your vertices 3 times (because the vertex color is bound to a vertex, and each vertex of a cube belongs to 3 sides).
(You still have to set the UVs like I said previously, but each side has the whole texture instead of only a part of the texture)
Then, you will have to create a custom shader with 6 textures in inputs (one for each side).
And in the fragment function, you select the right texture color according to the vertex color.
For that you can do some ifs to select the texture, but it will not be very performant:
float3 finalColor;
if (vertexColor.r > 0.5f && vertexColor.g < 0.5f && vertexColor.b < 0.5f)
{
    finalColor = tex2D(_TopTexture, i.uv);
}
else if (...)
{
    ...
}
...
Or if you want more perf (with a lot of cubes), you can instead do some multiplications to select the right texture:
float3 topTexColor = tex2D(_TopTexture, i.uv) * vertexColor.r * (1.0f - vertexColor.g) * (1.0f - vertexColor.b);
float3 frontTexColor = ...;
...
float3 finalColor = topTexColor + frontTexColor + ...;
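On the C# side, those marker colors would be assigned per (duplicated) vertex via Mesh.SetColors, roughly like this (a sketch; faceColor stands for one of the six colors listed above, picked per face):
List<Color> colors = new List<Color>();
// when emitting the 4 duplicated vertices of one face:
Color faceColor = new Color(1, 0, 0); // e.g. the "top" marker color
for (int v = 0; v < 4; v++)
    colors.Add(faceColor);
// after all vertices have been added:
mesh.SetColors(colors);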

How to check if device has been rotated on all axis in Unity

I want to check in Unity if the device has been rotated on all of its axes.
So, I am reading the rotation of all the axes.
What should I do in order to validate, for example, that the user has "flipped" his device over the X axis? I need to check the values and see that they pass through 0, 90, 180 and 270 degrees in a loop.
Here is part of my code:
void Update () {
float X = Input.acceleration.x;
float Y = Input.acceleration.y;
float Z = Input.acceleration.z;
xText.text = ((Mathf.Atan2(Y, Z) * 180 / Mathf.PI)+180).ToString();
yText.text = ((Mathf.Atan2(X, Z) * 180 / Mathf.PI)+180).ToString();
zText.text = ((Mathf.Atan2(X, Y) * 180 / Mathf.PI)+180).ToString();
}
The accelerometer only tells you how the acceleration of the device changes, so you get values when the device starts or stops moving; you can't retrieve its orientation from that.
Instead you need to use the gyroscope of the device. Most devices have one nowadays.
Fortunately, Unity supports the gyroscope through the Gyroscope class
Simply using
Input.gyro.attitude
will give you the orientation of the device in space, in the form of a quaternion.
To check the angles, use the eulerAngles property. For instance, to see whether the device is flipped on the x axis:
Vector3 angles = Input.gyro.attitude.eulerAngles;
bool xFlipped = angles.x > 180;
Be careful, you might have to invert some values if you want to apply the rotation in Unity (because it depends on which handedness the device uses for positive values, left or right):
// The Gyroscope is right-handed. Unity is left handed.
// Make the necessary change to the camera.
private static Quaternion GyroToUnity(Quaternion q)
{
return new Quaternion(q.x, q.y, -q.z, -q.w);
}
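To then validate the 0/90/180/270 loop the question asks about, one option is to remember which of the four target angles have been reached, with some tolerance (a sketch; the 20° tolerance, the choice of the x axis and the field names are arbitrary assumptions):
private static readonly float[] targetAngles = { 0f, 90f, 180f, 270f };
private readonly bool[] visited = new bool[4];

void Update()
{
    // assumes Input.gyro.enabled = true; was set somewhere (e.g. in Start)
    float x = Input.gyro.attitude.eulerAngles.x;
    for (int i = 0; i < targetAngles.Length; i++)
    {
        // Mathf.DeltaAngle handles the wrap-around at 0/360
        if (Mathf.Abs(Mathf.DeltaAngle(x, targetAngles[i])) < 20f)
            visited[i] = true;
    }
    bool flippedAllTheWayOnX = visited[0] && visited[1] && visited[2] && visited[3];
}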
Here is the full example from the doc (Unity version 2017.3), in case the link above is broken. It shows how to read value from the gyroscope, and apply them to an object in Unity.
// Create a cube with camera vector names on the faces.
// Allow the device to show named faces as it is oriented.
using UnityEngine;
public class ExampleScript : MonoBehaviour
{
// Faces for 6 sides of the cube
private GameObject[] quads = new GameObject[6];
// Textures for each quad, should be +X, +Y etc
// with appropriate colors, red, green, blue, etc
public Texture[] labels;
void Start()
{
// make camera solid colour and based at the origin
GetComponent<Camera>().backgroundColor = new Color(49.0f / 255.0f, 77.0f / 255.0f, 121.0f / 255.0f);
GetComponent<Camera>().transform.position = new Vector3(0, 0, 0);
GetComponent<Camera>().clearFlags = CameraClearFlags.SolidColor;
// create the six quads forming the sides of a cube
GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
quads[0] = createQuad(quad, new Vector3(1, 0, 0), new Vector3(0, 90, 0), "plus x",
new Color(0.90f, 0.10f, 0.10f, 1), labels[0]);
quads[1] = createQuad(quad, new Vector3(0, 1, 0), new Vector3(-90, 0, 0), "plus y",
new Color(0.10f, 0.90f, 0.10f, 1), labels[1]);
quads[2] = createQuad(quad, new Vector3(0, 0, 1), new Vector3(0, 0, 0), "plus z",
new Color(0.10f, 0.10f, 0.90f, 1), labels[2]);
quads[3] = createQuad(quad, new Vector3(-1, 0, 0), new Vector3(0, -90, 0), "neg x",
new Color(0.90f, 0.50f, 0.50f, 1), labels[3]);
quads[4] = createQuad(quad, new Vector3(0, -1, 0), new Vector3(90, 0, 0), "neg y",
new Color(0.50f, 0.90f, 0.50f, 1), labels[4]);
quads[5] = createQuad(quad, new Vector3(0, 0, -1), new Vector3(0, 180, 0), "neg z",
new Color(0.50f, 0.50f, 0.90f, 1), labels[5]);
GameObject.Destroy(quad);
}
// make a quad for one side of the cube
GameObject createQuad(GameObject quad, Vector3 pos, Vector3 rot, string name, Color col, Texture t)
{
Quaternion quat = Quaternion.Euler(rot);
GameObject GO = Instantiate(quad, pos, quat);
GO.name = name;
GO.GetComponent<Renderer>().material.color = col;
GO.GetComponent<Renderer>().material.mainTexture = t;
GO.transform.localScale += new Vector3(0.25f, 0.25f, 0.25f);
return GO;
}
protected void Update()
{
GyroModifyCamera();
}
protected void OnGUI()
{
GUI.skin.label.fontSize = Screen.width / 40;
GUILayout.Label("Orientation: " + Screen.orientation);
GUILayout.Label("input.gyro.attitude: " + Input.gyro.attitude);
GUILayout.Label("iphone width/font: " + Screen.width + " : " + GUI.skin.label.fontSize);
}
/********************************************/
// The Gyroscope is right-handed. Unity is left handed.
// Make the necessary change to the camera.
void GyroModifyCamera()
{
transform.rotation = GyroToUnity(Input.gyro.attitude);
}
private static Quaternion GyroToUnity(Quaternion q)
{
return new Quaternion(q.x, q.y, -q.z, -q.w);
}
}

Particle effect displaying same particles -- not randomizing, Monogame

When I create particle effects, they all have the same pattern. They are rotated, but they all have the same pattern and same colored particles (see picture).
This is how a new ParticleEffect gets created:
ParticleEffect p = new ParticleEffect(textures, Vector2.Zero, destination, speed);
Where textures is a Texture2D list, Vector2.Zero is the starting location, and so on.
Whenever a new ParticleEffect gets created it gets added to the ParticleList list, which I later loop through, calling Update and Draw for each effect inside.
Here is where the particles are randomised:
private Particle GenerateNewParticle()
{
Random random = new Random();
Texture2D texture = textures[random.Next(textures.Count)];
Vector2 position = EmitterLocation;
Vector2 velocity = new Vector2(
1f * (float)(random.NextDouble() * 2 - 1),
1f * (float)(random.NextDouble() * 2 - 1));
float angle = 0;
float angularVelocity = 0.1f * (float)(random.NextDouble() * 2 - 1);
Color color = new Color(
(float)random.NextDouble(),
(float)random.NextDouble(),
(float)random.NextDouble());
float size = (float)random.NextDouble();
int ttl = 20 + random.Next(40);
return new Particle(texture, position, velocity, angle, angularVelocity, color, size, ttl);
}
A bunch of randoms there, but each effect still comes out the same.
Comment if you want to see more code.
Edit:
Here is how a particle gets drawn:
public void Draw(SpriteBatch spriteBatch)
{
Rectangle sourceRectangle = new Rectangle(0, 0, Texture.Width, Texture.Height);
Vector2 origin = new Vector2(Texture.Width / 2, Texture.Height / 2);
spriteBatch.Draw(Texture, Position, sourceRectangle, Color,
Angle, origin, Size, SpriteEffects.None, 0f);
}
By default, Random instances are seeded with the current time, which means the same sequence of numbers will be generated again if you create instances at (nearly) the same time. Do not create new instances; reuse an existing instance (in your case e.g. a static Random instance) to get properly randomized behavior.
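A minimal sketch of that change in your generator (only the Random handling differs from your code; everything else is unchanged):
private static readonly Random random = new Random();

private Particle GenerateNewParticle()
{
    Texture2D texture = textures[random.Next(textures.Count)];
    Vector2 position = EmitterLocation;
    Vector2 velocity = new Vector2(
        1f * (float)(random.NextDouble() * 2 - 1),
        1f * (float)(random.NextDouble() * 2 - 1));
    float angle = 0;
    float angularVelocity = 0.1f * (float)(random.NextDouble() * 2 - 1);
    Color color = new Color(
        (float)random.NextDouble(),
        (float)random.NextDouble(),
        (float)random.NextDouble());
    float size = (float)random.NextDouble();
    int ttl = 20 + random.Next(40);
    return new Particle(texture, position, velocity, angle, angularVelocity, color, size, ttl);
}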

Matrix transformations to recreate camera "Look At" functionality

Summary:
I'm given a series of points in 3D space, and I want to analyze them from any viewing angle. I'm trying to figure out how to reproduce the "Look At" functionality of OpenGL in WPF. I want the mouse move X,Y to manipulate the Phi and Theta spherical coordinates (respectively) of the camera, so that as I move my mouse the camera appears to orbit around the center of mass (generally the origin) of the point cloud, which represents the target of the Look At.
What I've done:
I have made the following code, but so far it isn't doing what I want:
internal static Matrix3D CalculateLookAt(Vector3D eye, Vector3D at = new Vector3D(), Vector3D up = new Vector3D())
{
if (Math.Abs(up.Length - 0.0) < double.Epsilon) up = new Vector3D(0, 1, 0);
var zaxis = (at - eye);
zaxis.Normalize();
var xaxis = Vector3D.CrossProduct(up, zaxis);
xaxis.Normalize();
var yaxis = Vector3D.CrossProduct(zaxis, xaxis);
return new Matrix3D(
xaxis.X, yaxis.X, zaxis.X, 0,
xaxis.Y, yaxis.Y, zaxis.Y, 0,
xaxis.Z, yaxis.Z, zaxis.Z, 0,
Vector3D.DotProduct(xaxis, -eye), Vector3D.DotProduct(yaxis, -eye), Vector3D.DotProduct(zaxis, -eye), 1
);
}
I got the algorithm from this link: http://msdn.microsoft.com/en-us/library/bb205342(VS.85).aspx
I then apply the returned matrix to all of the points using this:
var vector = new Vector3D(p.X, p.Y, p.Z);
var projection = Vector3D.Multiply(vector, _camera); // _camera is the LookAt Matrix
if (double.IsNaN(projection.X)) projection.X = 0;
if (double.IsNaN(projection.Y)) projection.Y = 0;
if (double.IsNaN(projection.Z)) projection.Z = 0;
return new Point(
(dispCanvas.ActualWidth * projection.X / 320),
(dispCanvas.ActualHeight * projection.Y / 240)
);
I am calculating the center of all the points as the at vector, and I've been setting my initial eye vector at (center.X,center.Y,center.Z + 100) which is plenty far away from all the points
I then take the mouse move and apply the following code to get the Spherical Coordinates and put that into the CalculateLookAt function:
var center = GetCenter(_points);
var pos = e.GetPosition(Canvas4); //e is of type MouseButtonEventArgs
var delta = _previousPoint - pos;
double r = 100;
double theta = delta.Y * Math.PI / 180;
double phi = delta.X * Math.PI / 180;
var x = r * Math.Sin(theta) * Math.Cos(phi);
var y = r * Math.Cos(theta);
var z = -r * Math.Sin(theta) * Math.Sin(phi);
_camera = MathHelper.CalculateLookAt(new Vector3D(center.X * x, center.Y * y, center.Z * z), new Vector3D(center.X, center.Y, center.Z));
UpdateCanvas(); // Redraws the points on the canvas using the new _camera values
Conclusion:
This does not make the camera orbit around the points. So either my understanding of how to use the Look At function is off, or my math is incorrect.
Any help would be very much appreciated.
A Vector3D won't pick up the translation part of the matrix. It is a vector, i.e. a direction, which doesn't exist in affine space (3D vector space with a translation component), only in vector space, so only the rotation/scale part applies to it. For a position you need a Point3D:
var m = new Matrix3D(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
10, 10, 10, 1);
var v = new Point3D(1, 1, 1);
var r = Point3D.Multiply(v, m); // 11,11,11
Note your presumed answer is also incorrect, as it should be 10 + 1 for each component, since your vector is [1,1,1].
Well, it turns out that the Matrix3D libraries have some interesting issues.
I noticed that Vector3D.Multiply(vector, matrix) would not translate the vector.
For example:
var matrixTest = new Matrix3D(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
10, 10, 10, 1
);
var vectorTest = new Vector3D(1, 1, 1);
var result = Vector3D.Multiply(vectorTest, matrixTest);
// result = {1,1,1}, should be {11,11,11}
I ended up having to rewrite some of the basic matrix math functions in order for the code to work.
Everything was fine on the logic side; it was the basic math (handled by the Matrix3D library) that was the problem.
Here is the fix. Replace all Vector3D.Multiply method calls with this:
public static Vector3D Vector3DMultiply(Vector3D vector, Matrix3D matrix)
{
return new Vector3D(
vector.X * matrix.M11 + vector.Y * matrix.M12 + vector.Z * matrix.M13 + matrix.OffsetX,
vector.X * matrix.M21 + vector.Y * matrix.M22 + vector.Z * matrix.M23 + matrix.OffsetY,
vector.X * matrix.M31 + vector.Y * matrix.M32 + vector.Z * matrix.M33 + matrix.OffsetZ
);
}
And everything works!
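For reference, running the earlier test case through this helper now yields the translated result:
var result = Vector3DMultiply(new Vector3D(1, 1, 1), matrixTest);
// result = {11, 11, 11}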
