Unity - Determining UVs for a circular plane mesh generated by code - C#

I'm trying to generate a circular mesh made up of triangles with a common center at the center of the circle.
The mesh is generated properly, but the UVs are not, and I'm having some trouble understanding how to add them.
I assumed I could just copy the vertex pattern, but it didn't work out.
Here is the function:
private void _MakeMesh(int sides, float radius = 0.5f)
{
    m_LiquidMesh.Clear();
    float angleStep = 360.0f / (float) sides;
    List<Vector3> vertexes = new List<Vector3>();
    List<int> triangles = new List<int>();
    List<Vector2> uvs = new List<Vector2>();
    Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);

    // Make first triangle.
    vertexes.Add(new Vector3(0.0f, 0.0f, 0.0f));
    vertexes.Add(new Vector3(radius, 0.0f, 0.0f));
    vertexes.Add(rotation * vertexes[1]);

    // First UV ??
    uvs.Add(new Vector2(0, 0));
    uvs.Add(new Vector2(1, 0));
    uvs.Add(rotation * uvs[1]);

    // Add triangle indices.
    triangles.Add(0);
    triangles.Add(1);
    triangles.Add(2);

    for (int i = 0; i < sides - 1; i++)
    {
        triangles.Add(0);
        triangles.Add(vertexes.Count - 1);
        triangles.Add(vertexes.Count);
        // UV ??
        vertexes.Add(rotation * vertexes[vertexes.Count - 1]);
    }

    m_LiquidMesh.vertices = vertexes.ToArray();
    m_LiquidMesh.triangles = triangles.ToArray();
    m_LiquidMesh.uv = uvs.ToArray();
    m_LiquidMesh.RecalculateNormals();
    m_LiquidMesh.RecalculateBounds();
    Debug.Log("<color=yellow>Liquid mesh created</color>");
}
How does UV mapping work in a case like this?
Edit: I'm trying to use this circle as an effect of something flowing outwards from the center (think: liquid mesh for a brewing pot)

This is an old post, but maybe someone else will benefit from my solution.
So basically I gave my center point the center of the UV square (0.5, 0.5) and then used the unit circle formula to give every other point its UV coordinate. Of course I had to remap the cos and sin results from -1..1 to 0..1, and everything works great.
Vector2[] uv = new Vector2[vertices.Length];
uv[uv.Length - 1] = new Vector2(0.5f, 0.5f);
for (int i = 0; i < uv.Length - 1; i++)
{
    float radians = (float) i / (uv.Length - 1) * 2 * Mathf.PI;
    uv[i] = new Vector2(Mathf.Cos(radians).Remap(-1f, 1f, 0f, 1f), Mathf.Sin(radians).Remap(-1f, 1f, 0f, 1f));
}
mesh.uv = uv;
Where Remap is an extension method like this; it takes a value in one range and remaps it to another range (in this case from -1..1 to 0..1):
public static float Remap(this float value, float from1, float to1, float from2, float to2)
{
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}

Related

Random Rotation on a 3D sphere given an angle

This question is in between computer graphics, probability, and programming, but since I am coding it for a Unity project in C# I decided to post it here. Sorry if it's not appropriate.
I need to solve this problem: given an object at a certain position on a 3D sphere, and given a range of degrees, sample points on the sphere uniformly within that range.
For example:
Left picture: the cube represents the center of the sphere and the green sphere is the starting position. I want to uniformly cover the surface of the sphere within a certain angular range, for example from -90 to 90 degrees around the green sphere. My approach (right picture) doesn't work, as it over-samples points that are close to the starting position.
My sampler:
Vector3 getRandomEulerAngles(float min, float max)
{
    float degree = Random.Range(min, max);
    return degree * Vector3.Normalize(new Vector3(Random.Range(min, max), Random.Range(min, max), Random.Range(min, max)));
}
and to cover the top half of the sphere I would call getRandomEulerAngles(-90, 90).
Any idea?
Try this:
public class Sphere : MonoBehaviour
{
    public float Radius = 10f;
    public float Angle = 90f;

    private void Start()
    {
        for (int i = 0; i < 10000; i++)
        {
            var randomPosition = GetRandomPosition(Angle, Radius);
            Debug.DrawLine(transform.position, randomPosition, Color.green, 100f);
        }
    }

    private Vector3 GetRandomPosition(float angle, float radius)
    {
        var rotationX = Quaternion.AngleAxis(Random.Range(-angle, angle), transform.right);
        var rotationZ = Quaternion.AngleAxis(Random.Range(-angle, angle), transform.forward);
        var position = rotationZ * rotationX * transform.up * radius + transform.position;
        return position;
    }
}
We can use a uniform sphere sampling for that. Given two random variables u and v (uniformly distributed), we can calculate a random point (p, q, r) on the sphere (also uniformly distributed) with:
float azimuth = v * 2.0 * PI;
float cosDistFromZenith = 1.0 - u;
float sinDistFromZenith = sqrt(1.0 - cosDistFromZenith * cosDistFromZenith);
(p, q, r) = (cos(azimuth) * sinDistFromZenith, sin(azimuth) * sinDistFromZenith, cosDistFromZenith);
If we put our reference direction (your object location) into zenithal position, we need to sample v from [0, 1] to get all directions around the object and u in [cos(minDistance), cos(maxDistance)], where minDistance and maxDistance are the angle distances from the object you want to allow. A distance of 90° or Pi/2 will give you a hemisphere. A distance of 180° or Pi will give you the full sphere.
Now that we can sample the region around the object in zenithal position, we need to account for other object locations as well. Let the object be positioned at (ox, oy, oz), which is a unit vector describing the direction from the sphere center.
We then build a local coordinate system:
rAxis = (ox, oy, oz)
pAxis = if |ox| < 0.9 : (1, 0, 0)
else : (0, 1, 0)
qAxis = normalize(cross(rAxis, pAxis))
pAxis = cross(qAxis, rAxis)
And finally, we can get our random point (x, y, z) on the sphere surface:
(x, y, z) = p * pAxis + q * qAxis + r * rAxis
Adapted from Nico Schertler's answer, this is the code I am using:
Vector3 GetRandomAroundSphere(float angleA, float angleB, Vector3 aroundPosition)
{
    // Requires: using UnityEngine.Assertions;
    Assert.IsTrue(angleA >= 0 && angleB >= 0 && angleA <= 180 && angleB <= 180, "Both angles should be in [0, 180]");
    var v = Random.Range(0F, 1F);
    var a = Mathf.Cos(Mathf.Deg2Rad * angleA);
    var b = Mathf.Cos(Mathf.Deg2Rad * angleB);
    float azimuth = v * 2.0F * Mathf.PI;
    float cosDistFromZenith = Random.Range(Mathf.Min(a, b), Mathf.Max(a, b));
    float sinDistFromZenith = Mathf.Sqrt(1.0F - cosDistFromZenith * cosDistFromZenith);
    Vector3 pqr = new Vector3(Mathf.Cos(azimuth) * sinDistFromZenith, Mathf.Sin(azimuth) * sinDistFromZenith, cosDistFromZenith);
    Vector3 rAxis = aroundPosition; // Vector3.up when around zenith
    Vector3 pAxis = Mathf.Abs(rAxis[0]) < 0.9 ? new Vector3(1F, 0F, 0F) : new Vector3(0F, 1F, 0F);
    Vector3 qAxis = Vector3.Normalize(Vector3.Cross(rAxis, pAxis));
    pAxis = Vector3.Cross(qAxis, rAxis);
    Vector3 position = pqr[0] * pAxis + pqr[1] * qAxis + pqr[2] * rAxis;
    return position;
}
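For completeness, a small usage sketch (the radius, counts, and colors here are illustrative, not from the original post). The function returns a unit direction, so scale it by your sphere radius:

// Scatter 100 points within 45 degrees of Vector3.up on a sphere of
// radius 5 centered at the origin.
for (int i = 0; i < 100; i++)
{
    Vector3 dir = GetRandomAroundSphere(0f, 45f, Vector3.up); // unit direction
    Vector3 point = dir * 5f;                                 // scale to radius
    Debug.DrawLine(Vector3.zero, point, Color.cyan, 10f);
}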

Rotating cubes to face the origin using Quaternions

I'm in the process of setting up a relatively simple voxel-based world for a game. The high-level idea is to first generate voxel locations following a Fibonacci grid, then rotate the cubes such that the outer surface of the Fibonacci grid resembles a sphere, and finally size the cubes such that they roughly cover the surface of the sphere (overlap is fine). The code for generating the voxels along the Fibonacci grid is below:
public static Voxel[] CreateInitialVoxels(int numberOfPoints, int radius)
{
    float goldenRatio = (1 + Mathf.Sqrt(5)) / 2;
    Voxel[] voxels = new Voxel[numberOfPoints];
    for (int i = 0; i < numberOfPoints; i++)
    {
        float n = i - numberOfPoints / 2; // Center at zero
        float theta = 2 * Mathf.PI * n / goldenRatio;
        float phi = (Mathf.PI / 2) + Mathf.Asin(2 * n / numberOfPoints);
        voxels[i] = new Voxel(new Location(theta, phi, radius));
    }
    return voxels;
}
This generates a sphere that looks roughly like a staircase.
So my current approach to get this looking a bit more spherical is to rotate each cube in each pair of axes, then combine all of the rotations:
private void DrawVoxel(Voxel voxel, GameObject voxelContainer)
{
    GameObject voxelObject = Instantiate<GameObject>(GetVoxelPrefab());
    voxelObject.transform.position = voxel.location.cartesianCoordinates;
    voxelObject.transform.parent = voxelContainer.transform;

    Vector3 norm = voxel.location.cartesianCoordinates.normalized;
    float xyRotationDegree = Mathf.Atan(norm.y / norm.x) * (180 / Mathf.PI);
    float zxRotationDegree = Mathf.Atan(norm.z / norm.x) * (180 / Mathf.PI);
    float yzRotationDegree = Mathf.Atan(norm.z / norm.y) * (180 / Mathf.PI);

    Quaternion xyRotation = Quaternion.AngleAxis(xyRotationDegree, new Vector3(0, 0, 1));
    Quaternion zxRotation = Quaternion.AngleAxis(zxRotationDegree, new Vector3(0, 1, 0));
    Quaternion yzRotation = Quaternion.AngleAxis(yzRotationDegree, new Vector3(1, 0, 0));

    voxelObject.transform.rotation = zxRotation * yzRotation * xyRotation;
}
The primary thing I'm getting caught on is that each of these rotations seems to work fine in isolation, but when I combine them, things tend to go a bit haywire (pictures below). I'm not sure exactly what the issue is. My best guess is that I've made some sign/rotation mismatch, so the rotations don't combine correctly. I can get two working, but never all three together.
Above are pictures of one and two successful rotations, followed by the failure mode when I attempt to combine all of them. Any help, either telling me that the approach I'm following is too convoluted or helping me understand the right way to combine these rotations, would be very appreciated. The Cartesian coordinate conversion is below for reference.
[System.Serializable]
public struct Location
{
    public float theta, phi, r;
    public Vector3 polarCoordinates;
    public float x, y, z;
    public Vector3 cartesianCoordinates;

    public Location(float theta, float phi, float r)
    {
        this.theta = theta;
        this.phi = phi;
        this.r = r;
        this.polarCoordinates = new Vector3(theta, phi, r);
        this.x = r * Mathf.Sin(phi) * Mathf.Cos(theta);
        this.y = r * Mathf.Sin(phi) * Mathf.Sin(theta);
        this.z = r * Mathf.Cos(phi);
        this.cartesianCoordinates = new Vector3(x, y, z);
    }
}
I managed to find a solution to this problem, though it's still not clear to me what the issue with the above code is.
Unity has an extremely handy function called Quaternion.FromToRotation that generates the appropriate rotation if you simply pass in the source and destination vectors.
In my case I was able to just do:
voxelObject.transform.rotation = Quaternion.FromToRotation(new Vector3(0, 0, 1), voxel.location.cartesianCoordinates);
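Putting it together, the whole DrawVoxel above reduces to something like the following sketch (assuming the same Voxel and prefab setup as in the question):

private void DrawVoxel(Voxel voxel, GameObject voxelContainer)
{
    GameObject voxelObject = Instantiate<GameObject>(GetVoxelPrefab());
    voxelObject.transform.position = voxel.location.cartesianCoordinates;
    voxelObject.transform.parent = voxelContainer.transform;
    // One rotation from the cube's local +Z to the outward radial direction
    // replaces the three hand-built per-plane rotations.
    voxelObject.transform.rotation = Quaternion.FromToRotation(
        Vector3.forward, voxel.location.cartesianCoordinates.normalized);
}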

Trying to map a sphere with Perlin Noise

So, as the title says, I'm trying to properly map an octahedron sphere using 3D Perlin noise as a procedural texture.
I suppose it has something to do with the UVs or with the edge pixels of the texture (left, right, probably also top and bottom). The texture is 512*512, but it could be 1024*1024.
I've been reading documentation and trying other techniques (normal maps, tangents, etc.), but I still can't figure out how to remove that seam. Keep in mind the texture should be procedurally generated, so I can update the surface of the sphere at runtime (that way I can change the noise, i.e. the shape of the terrain, as well as the colours).
By the way, when I do the same with a prepared texture (1024*512) with properly corrected edges, the seam disappears; but what I want is the ability to change the texture at runtime (I can survive without it, but it would be nice to have).
private void OnEnable()
{
    if (autoUpdateTexture)
    {
        if (texture == null)
        {
            texture = new Texture2D(resolution, resolution, TextureFormat.RGB24, true);
            texture.name = "Procedural Texture";
            texture.wrapMode = TextureWrapMode.Repeat;
            texture.filterMode = FilterMode.Trilinear;
            texture.anisoLevel = 9;
            GetComponent<MeshRenderer>().sharedMaterial.mainTexture = texture;
        }
        FillTexture();
    }
}
public void FillTexture()
{
    if (texture.width != resolution)
    {
        texture.Resize(resolution, resolution);
    }

    Vector3 point00 = transform.TransformPoint(new Vector3(-0.5f, -0.5f));
    Vector3 point10 = transform.TransformPoint(new Vector3(0.5f, -0.5f));
    Vector3 point01 = transform.TransformPoint(new Vector3(-0.5f, 0.5f));
    Vector3 point11 = transform.TransformPoint(new Vector3(0.5f, 0.5f));

    NoiseMethod method = Noise.noiseMethods[(int)type][dimensions - 1];
    float stepSize = 1f / resolution;
    for (int y = 0; y < resolution; y++)
    {
        Vector3 point0 = Vector3.Lerp(point00, point01, (y + 0.5f) * stepSize);
        Vector3 point1 = Vector3.Lerp(point10, point11, (y + 0.5f) * stepSize);
        for (int x = 0; x < resolution; x++)
        {
            Vector3 point = Vector3.Lerp(point0, point1, (x + 0.5f) * stepSize);
            float sample = Noise.Sum(method, point, frequency, octaves, lacunarity, persistence);
            if (type != NoiseMethodType.Value)
            {
                sample = sample * 0.5f + 0.5f;
            }
            texture.SetPixel(x, y, coloring.Evaluate(sample));
        }
    }
    texture.Apply();
}
So, I have two images: one showing the generated 3D noise saved out as a flat texture (when I save the texture to PNG it's 2D, obviously), and the other showing that 3D noise ON the sphere, with a seam at the edges. The goal is to get that 3D noise onto the sphere without the seam.
If you need any more related info, please let me know, as this is giving me quite a headache.
(Image: procedural texture in 2D)
(Image: 3D noise on the sphere, with seam)
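Not from the original post, but one hedged sketch of a possible approach: if the sphere's UVs are a longitude/latitude mapping, the seam can be avoided by sampling the noise at the 3D point on the unit sphere that each texel corresponds to, instead of across a flat quad in object space. The left and right texture columns then sample adjacent points on the same meridian and wrap cleanly with TextureWrapMode.Repeat. This assumes the same Noise.Sum API and fields (resolution, type, frequency, etc.) used in FillTexture above:

// Hedged sketch: fill an equirectangular texture by sampling 3D noise on
// the unit sphere, so the U=0 and U=1 columns sample adjacent points and
// the texture wraps without a seam.
for (int y = 0; y < resolution; y++)
{
    float phi = Mathf.PI * (y + 0.5f) / resolution;             // latitude: 0..PI
    for (int x = 0; x < resolution; x++)
    {
        float theta = 2f * Mathf.PI * (x + 0.5f) / resolution;  // longitude: 0..2PI
        Vector3 point = new Vector3(
            Mathf.Sin(phi) * Mathf.Cos(theta),
            Mathf.Cos(phi),
            Mathf.Sin(phi) * Mathf.Sin(theta));
        float sample = Noise.Sum(method, point, frequency, octaves, lacunarity, persistence);
        if (type != NoiseMethodType.Value)
        {
            sample = sample * 0.5f + 0.5f;
        }
        texture.SetPixel(x, y, coloring.Evaluate(sample));
    }
}
texture.Apply();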

OpenTK GL.Translate 2D camera on GLControl

I'm making an ASCII game using OpenTK & WinForms, and I'm stuck on camera movement and the 2D view.
This is the code I'm using for the mouse event and for translating positions:
public static Point convertScreenToWorldCoords(int x, int y)
{
    int[] viewport = new int[4];
    Matrix4 modelViewMatrix, projectionMatrix;
    GL.GetFloat(GetPName.ModelviewMatrix, out modelViewMatrix);
    GL.GetFloat(GetPName.ProjectionMatrix, out projectionMatrix);
    GL.GetInteger(GetPName.Viewport, viewport);

    Vector2 mouse;
    mouse.X = x;
    mouse.Y = y;
    Vector4 vector = UnProject(ref projectionMatrix, modelViewMatrix, new Size(viewport[2], viewport[3]), mouse);
    Point coords = new Point((int)vector.X, (int)vector.Y);
    return coords;
}

public static Vector4 UnProject(ref Matrix4 projection, Matrix4 view, Size viewport, Vector2 mouse)
{
    Vector4 vec;
    vec.X = 2.0f * mouse.X / (float)viewport.Width - 1;
    vec.Y = 2.0f * mouse.Y / (float)viewport.Height - 1;
    vec.Z = 0;
    vec.W = 1.0f;

    Matrix4 viewInv = Matrix4.Invert(view);
    Matrix4 projInv = Matrix4.Invert(projection);
    Vector4.Transform(ref vec, ref projInv, out vec);
    Vector4.Transform(ref vec, ref viewInv, out vec);

    // Only divide through by W when it is not (near-)zero.
    if (vec.W > float.Epsilon || vec.W < -float.Epsilon)
    {
        vec.X /= vec.W;
        vec.Y /= vec.W;
        vec.Z /= vec.W;
    }
    return vec;
}

// On mouse click event:
Control control = sender as Control;
Point worldCoords = convertScreenToWorldCoords(e.X, control.ClientRectangle.Height - e.Y);
playerX = (int)Math.Floor((double)worldCoords.X / 9d);
playerY = (int)Math.Floor((double)worldCoords.Y / 9d);
And this code sets up my projection, but something is wrong here:
//Set Projection
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, Width * charWidth * scale, Height * charHeight * scale, 0, -1, 1);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.Translate(playerX, playerY, 0);
Well, my problem is GL.Translate. As written, the view is not centered on playerX/playerY, and the "camera" movement seems reversed. If I use GL.Translate(-playerX, -playerY, 0), it moves in the correct direction, but the view is still not centered on the player object (the player object should always be at the center of the view, a typical top-down camera). I don't know how to set this up correctly; my experiments (multiplying, dividing, etc. with the X,Y positions) don't give me the correct view. How should it be done in this case?
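For reference, a typical top-down camera translates the world by (viewport center - player position in projection units), so the player lands in the middle of the view. A hedged sketch using the variables from the snippets above (viewW/viewH are illustrative locals; it assumes the player's world position is its cell index times the cell size, the same 9-unit cell used in the click handler):

// Hedged sketch: center the modelview on the player for a top-down camera.
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
float viewW = Width * charWidth * scale;   // must match the GL.Ortho extents
float viewH = Height * charHeight * scale;
GL.Translate(viewW / 2f - playerX * charWidth,
             viewH / 2f - playerY * charHeight, 0);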

C# / OpenTK, why does my sphere not look smooth?

This should hopefully be a simple question. I finally figured out how to render things in 3D in OpenTK. Great! The only problem is, it doesn't quite look how I expect. I'm drawing a sphere using the polar method, and drawing with PrimitiveType.Polygon.
Here's the algorithm for calculating the coordinates. I step through each phi, then each theta, in the sphere, incrementally appending adjacent quads to my final point list:
Point 1: Theta1, Phi1
Point 2: Theta1, Phi2
Point 3: Theta2, Phi2
Point 4: Theta2, Phi1
protected static RegularPolygon3D _create_unit(int n)
{
    List<Vector3> pts = new List<Vector3>();
    float theta = 0.0f;
    float theta2 = 0.0f;
    float phi = 0.0f;
    float phi2 = 0.0f;
    float segments = n;
    float cosT = 0.0f;
    float cosT2 = 0.0f;
    float cosP = 0.0f;
    float cosP2 = 0.0f;
    float sinT = 0.0f;
    float sinT2 = 0.0f;
    float sinP = 0.0f;
    float sinP2 = 0.0f;
    List<Vector3> current = new List<Vector3>(4);

    for (float lat = 0; lat < segments; lat++)
    {
        phi = (float)Math.PI * (lat / segments);
        phi2 = (float)Math.PI * ((lat + 1.0f) / segments);
        cosP = (float)Math.Cos(phi);
        cosP2 = (float)Math.Cos(phi2);
        sinP = (float)Math.Sin(phi);
        sinP2 = (float)Math.Sin(phi2);

        for (float lon = 0; lon < segments; lon++)
        {
            current = new List<Vector3>(4);
            theta = TWO_PI * (lon / segments);
            theta2 = TWO_PI * ((lon + 1.0f) / segments);
            cosT = (float)Math.Cos(theta);
            cosT2 = (float)Math.Cos(theta2);
            sinT = (float)Math.Sin(theta);
            sinT2 = (float)Math.Sin(theta2);

            current.Add(new Vector3(cosT * sinP, sinT * sinP, cosP));
            current.Add(new Vector3(cosT * sinP2, sinT * sinP2, cosP2));
            current.Add(new Vector3(cosT2 * sinP2, sinT2 * sinP2, cosP2));
            current.Add(new Vector3(cosT2 * sinP, sinT2 * sinP, cosP));
            pts.AddRange(current);
        }
    }

    var rtn = new RegularPolygon3D(pts);
    rtn.Translation = Vector3.ZERO;
    rtn.Scale = Vector3.ONE;
    return rtn;
}
And so my Sphere class looks like this:
public class Sphere : RegularPolygon3D
{
    public static Sphere Create(Vector3 center, float radius)
    {
        var rp = RegularPolygon3D.Create(30, center, radius);
        return new Sphere(rp);
    }

    private Sphere(RegularPolygon3D polygon) : base(polygon)
    {
    }
}
I should also mention that the color of this sphere is not constant. In 2 dimensions, I have code that works great for gradients; in 3D... not so much. That's why my sphere has multiple colors. The way the 2D gradient code works is that there is a list of colors coming from a class I created called GeometryColor. When the polygon is rendered, every vertex gets colored based on the list of colors within GeometryColor. So if there are 3 colors the user wishes to gradient between, and there are 6 vertices (a hexagon), then the code assigns the first 2 vertices color 1, the next 2 color 2, and the last 2 color 3. The following code shows how the color for a vertex is calculated:
public ColorLibrary.sRGB GetVertexFillColor(int index)
{
    var pct = ((float)index + 1.0f) / (float)Vertices.Count;
    var colorIdx = (int)Math.Round((FillColor.Colors.Count - 1.0f) * pct);
    return FillColor.Colors[colorIdx];
}
Anyway, here's the output I'm getting... hope somebody can spot my error.
Thanks.
Edit: If I only use ONE vertex color (i.e. instead of my array of 4 different colors), then I get a completely smooth sphere (although without lighting and such, it's hard to tell it's anything but a circle).
Edit: ...so somehow my sphere is slightly see-through, even though all my alphas are set to 1.0f and I'm doing depth testing:
GL.DepthMask(true);
GL.Enable(EnableCap.DepthTest);
GL.ClearDepth(1.0f);
GL.DepthFunc(DepthFunction.Lequal);
Final edit: OK, it has SOMETHING to do with my vertices I'm guessing, because when I use PrimitiveType.Quads it works perfectly....
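That last observation points at the likely cause: in the legacy pipeline, PrimitiveType.Polygon treats the entire submitted vertex list as one single polygon, while PrimitiveType.Quads treats every consecutive group of four vertices as a separate quad, which is exactly how the point list above was built. A hedged sketch of the draw call (the post's actual render loop isn't shown, so the surrounding code here is assumed):

// Assumed immediate-mode render loop (not shown in the original post).
// With PrimitiveType.Polygon, ALL of these vertices would be fused into a
// single self-overlapping polygon, which matches the artifacts described.
GL.Begin(PrimitiveType.Quads); // every consecutive 4 vertices form one quad
foreach (var v in pts)
    GL.Vertex3(v.X, v.Y, v.Z);
GL.End();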
