Mirrored mesh and wrong UV map on runtime export - C#

EDIT: After a brief exchange with the Assimp dev, I was pointed towards the import process. As I took over the code from someone else, I had not thought to look at that part:
using (var importer = new AssimpContext())
{
    scene = importer.ImportFile(file, PostProcessSteps.Triangulate | PostProcessSteps.FlipUVs | PostProcessSteps.JoinIdenticalVertices);
}
FlipUVs does exactly what it says: it flips UVs on the y axis so the origin is the top-left corner. Now I am able to get the model with the proper UVs, but the mesh is still mirrored. Setting the parent object's scale to x = -1 flips it back and makes it look fine, but I guess that is not how it is meant to be done, so I keep looking.
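One thing still on my list to try is letting Assimp do the handedness conversion at import time instead of fixing it afterwards; a rough, untested sketch (the flag names come from AssimpNet's PostProcessSteps enum):
using (var importer = new AssimpContext())
{
    // MakeLeftHanded converts Assimp's right-handed output to a left-handed system like Unity's,
    // and FlipWindingOrder keeps the triangles facing the right way after that conversion.
    scene = importer.ImportFile(file,
        PostProcessSteps.Triangulate |
        PostProcessSteps.MakeLeftHanded |
        PostProcessSteps.FlipWindingOrder |
        PostProcessSteps.JoinIdenticalVertices);
}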
See the picture: there are two crane models. The one on the left is loaded at runtime via serialization and reconstruction, while the one on the right is the original, simply dragged into the scene.
Serialization happens with the Assimp library.
The floor happens to be created first and seems to get the right UV map, while the other items get the wrong one. Yet when I print the values of the UV maps they seem to match the original, as they should.
This is how I serialize. Mesh here is the Assimp Mesh class, not the Unity Mesh class, and the app doing the serializing is a Windows application built on UWP:
private static void SerializeMeshes(BinaryWriter writer, IEnumerable<Mesh> meshes)
{
    foreach (Mesh mesh in meshes)
    {
        ICollection<int> triangles = MeshLoadTriangles(mesh);
        MeshSerializeHeader(writer, mesh.Name, mesh.VertexCount, triangles.Count, mesh.MaterialIndex);
        MeshSerializeVertices(writer, mesh.Vertices);
        MeshSerializeUVCoordinate(writer, mesh.TextureCoordinateChannels);
        MeshSerializeTriangleIndices(writer, triangles);
    }
}
private static void MeshSerializeUVCoordinate(BinaryWriter writer, List<Vector3D>[] textureCoordinateChannels)
{
    // Get the first channel and serialize it to the writer. Discard the z component.
    // This is Vector3D since this happens outside Unity.
    List<Vector3D> list = textureCoordinateChannels[0];
    bool hasUv = list != null && list.Count > 0;
    writer.Write(hasUv); // flag consumed by MeshReadUVCoordinate on the Unity side
    if (!hasUv) { return; }
    foreach (Vector3D v in list)
    {
        writer.Write(v.X);
        writer.Write(v.Y);
    }
}
private static void MeshSerializeVertices(BinaryWriter writer, IEnumerable<Vector3D> vertices)
{
    foreach (Vector3D vertex in vertices)
    {
        writer.Write(vertex.X);
        writer.Write(vertex.Y);
        writer.Write(vertex.Z);
    }
}
private static void MeshSerializeTriangleIndices(BinaryWriter writer, IEnumerable<int> triangleIndices)
{
    foreach (int index in triangleIndices) { writer.Write(index); }
}
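MeshLoadTriangles is not shown above; since PostProcessSteps.Triangulate leaves every face with exactly three indices, the helper presumably just flattens the face index lists. A sketch of what it might look like (my assumption, not the original code):
private static ICollection<int> MeshLoadTriangles(Mesh mesh)
{
    // With Triangulate enabled each Assimp Face holds three indices,
    // so concatenating them yields the flat triangle index list.
    List<int> indices = new List<int>(mesh.FaceCount * 3);
    foreach (Face face in mesh.Faces)
    {
        indices.AddRange(face.Indices);
    }
    return indices;
}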
And this is the inverse process:
private static void DeserializeMeshes(BinaryReader reader, SceneGraph scene)
{
    MeshData[] meshes = new MeshData[scene.meshCount];
    for (int i = 0; i < scene.meshCount; i++)
    {
        meshes[i] = new MeshData();
        MeshReadHeader(reader, meshes[i]);
        MeshReadVertices(reader, meshes[i]);
        MeshReadUVCoordinate(reader, meshes[i]);
        MeshReadTriangleIndices(reader, meshes[i]);
    }
    scene.meshes = meshes as IEnumerable<MeshData>;
}
private static void MeshReadUVCoordinate(BinaryReader reader, MeshData meshData)
{
    bool hasUv = reader.ReadBoolean();
    if (hasUv == false) { return; }
    Vector2[] uvs = new Vector2[meshData.vertexCount];
    for (int i = 0; i < uvs.Length; i++)
    {
        uvs[i] = new Vector2(reader.ReadSingle(), reader.ReadSingle()); // x then y, matching the write order
    }
    meshData.uvs = uvs;
}
private static void MeshReadHeader(BinaryReader reader, MeshData meshData)
{
    meshData.name = reader.ReadString();
    meshData.vertexCount = reader.ReadInt32();
    meshData.triangleCount = reader.ReadInt32();
    meshData.materialIndex = reader.ReadInt32();
}
private static void MeshReadVertices(BinaryReader reader, MeshData meshData)
{
    Vector3[] vertices = new Vector3[meshData.vertexCount];
    for (int i = 0; i < vertices.Length; i++)
    {
        vertices[i] = new Vector3(reader.ReadSingle(), reader.ReadSingle(), reader.ReadSingle());
    }
    meshData.vertices = vertices;
}
private static void MeshReadTriangleIndices(BinaryReader reader, MeshData meshData)
{
    int[] triangleIndices = new int[meshData.triangleCount];
    for (int i = 0; i < triangleIndices.Length; i++)
    {
        triangleIndices[i] = reader.ReadInt32();
    }
    meshData.triangles = triangleIndices;
}
MeshData is just a temporary container holding the deserialized values from the FBX.
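Judging from the reads and writes above, MeshData is roughly this kind of plain container (reconstructed here for reference, not the original class):
public class MeshData
{
    public string name;
    public int vertexCount;
    public int triangleCount;
    public int materialIndex;
    public Vector3[] vertices;
    public Vector2[] uvs;
    public Vector3[] normals;
    public int[] triangles;
}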
Then, meshes are created:
private static Mesh[] CreateMeshes(SceneGraph scene)
{
    Mesh[] meshes = new Mesh[scene.meshCount];
    int index = 0;
    foreach (MeshData meshData in scene.meshes)
    {
        meshes[index] = new Mesh();
        meshes[index].vertices = meshData.vertices;
        meshes[index].triangles = meshData.triangles;
        meshes[index].uv = meshData.uvs;
        meshes[index].normals = meshData.normals;
        meshes[index].RecalculateNormals();
        index++;
    }
    return meshes;
}
I don't see anything in this code that should result in this kind of behaviour; I'd expect wrong values to completely mangle the mesh rather than mirror it.
I can see that the FBX files I have use quads instead of triangles for the indexing.
Could it be that Assimp does not deal too well with this?

I did not solve the issue in a proper way on the Assimp side.
The basic solution we used was to negatively scale the flipped axis on the object's transform.
A more appropriate solution would have been to run all the vertices through a transformation matrix on the Unity side so the vertex positions are resolved properly (see the sketch after these steps):
Get vertex list
foreach vertex multiply with rotation matrix
Assign array to mesh
Use mesh to render
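A minimal sketch of that idea, assuming the correction is a simple mirror on the x axis (the matrix would have to be chosen to match whatever the import actually did to the data):
// Assumed correction: mirror on X to undo the handedness flip.
Matrix4x4 correction = Matrix4x4.Scale(new Vector3(-1f, 1f, 1f));
Vector3[] vertices = meshData.vertices;                      // get vertex list
for (int i = 0; i < vertices.Length; i++)
{
    vertices[i] = correction.MultiplyPoint3x4(vertices[i]);  // multiply each vertex with the matrix
}
Mesh mesh = new Mesh();
mesh.vertices = vertices;                                    // assign array to mesh
// Note: a mirror also flips the triangle winding, so the indices
// may need to be reversed per triangle to keep the faces pointing outwards.
mesh.triangles = meshData.triangles;
mesh.uv = meshData.uvs;
mesh.RecalculateNormals();                                   // then use the mesh to render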

Related

Instantiate a predefined number of objects along a raycast in Unity

I have a raycast that's being rendered every frame based on 2 points, and those 2 points change position each frame.
What I need is a system that doesn't need a direction or a number of objects, but instead takes in the 2 points and then instantiates or destroys as many objects as necessary to span from one side to the other, minus spaceBetweenPoints. You could think of it as an Angry Birds style slingshot HUD, except without gravity and based on raycasts.
My Script:
public int numOfPoints; // The number of points that are generated (this would need to change based on the distance in the end)
public float spaceBetweenPoints; // The spacing between the generated points
public GameObject predictionPointPrefab; // The prefab to be generated
public Transform firePoint; // Where the prediction starts from
public Vector2 direction; // The momentum, set in the editor (see the note below)
private GameObject[] predictionPoints; // The generated prediction points
private Vector2 firstPathStart; // The starting point for the raycast (changes each frame)
private Vector2 firstPathEnd; // The ending point for the raycast (changes each frame)
void Start()
{
    predictionPoints = new GameObject[numOfPoints];
    for (int i = 0; i < numOfPoints; i++)
    {
        predictionPoints[i] = Instantiate(predictionPointPrefab, firePoint.position,
            Quaternion.identity);
    }
}
void Update()
{
    Debug.DrawLine(firstPathStart, firstPathEnd, UnityEngine.Color.black);
    DrawPredictionDisplay();
}
void DrawPredictionDisplay()
{
    for (int i = 0; i < numOfPoints; i++)
    {
        predictionPoints[i].transform.position = predictionPointPosition(i * spaceBetweenPoints);
    }
}
Vector2 predictionPointPosition(float time)
{
    Vector2 position = firstPathStart + direction.normalized * 10f * time;
    return position;
}
The current system simply takes a starting position and a direction, and then moves a preset number of objects in that direction based on time. This also causes problems because it's endless instead of stopping at the end of the raycast (pardon my drawing skills):
Blue line = raycast
Black dots = instantiated prefab
Orange dot = raycast origin
Green dot = end of raycast
Notes:
direction is the momentum, which I set in the editor. I needed it to put together what I currently have working, but it shouldn't be necessary when running based on the two points.
If you ask me, it is fairly easy if you know a little bit of math trickery. I'm not saying I'm very good at math, but once you get it, it's easy to pull off the next time. If I try to explain everything here I won't be able to explain it clearly, so take a look at the code below; I've commented the whole thing so you can follow it easily.
Basically I used a method called Vector2.Lerp(), linear interpolation, which returns a value between point1 and point2 based on the value of the 3rd argument t, which goes from 0 to 1.
public class TestScript : MonoBehaviour
{
    public Transform StartPoint;
    public Transform EndPoint;
    public float spaceBetweenPoints;
    [Space]
    public Vector2 startPosition;
    public Vector2 endPosition;
    [Space]
    public List<Vector3> points;
    private float distance;
    private void Update()
    {
        startPosition = StartPoint.position; // Setting the starting point and ending point.
        endPosition = EndPoint.position;
        // Finding the distance between the points
        distance = Vector2.Distance(startPosition, endPosition);
        // Generating the points
        GeneratePointsObjects();
        Debug.DrawLine(StartPoint.position, EndPoint.position, Color.black);
    }
    private void OnDrawGizmos()
    {
        // Drawing a dummy gizmo sphere to see the points
        Gizmos.color = Color.black;
        foreach (Vector3 p in points)
        {
            Gizmos.DrawSphere(p, spaceBetweenPoints / 2);
        }
    }
    private void OnValidate()
    {
        // Validating that the space between two objects is not 0, because that would raise a "divide by zero" exception
        if (spaceBetweenPoints <= 0)
        {
            spaceBetweenPoints = 0.01f;
        }
    }
    private void GeneratePointsObjects()
    {
        // Clearing the list so that we don't iterate over old points
        points.Clear();
        float numbersOfPoints = distance / spaceBetweenPoints; // Find the number of objects to spawn by dividing "distance" by "spaceBetweenPoints"
        float increment = 1 / numbersOfPoints; // Find the increment for Lerp; it is always between 0 and 1, because Lerp() takes a value between 0 and 1
        for (int i = 1; i < numbersOfPoints; i++)
        {
            Vector3 v = Vector2.Lerp(startPosition, endPosition, increment * i); // Find the next position using Vector2.Lerp()
            points.Add(v); // Add the newly found position to the list so that we can spawn an object at that position.
        }
    }
}
Update: added how to set the prefab on the positions.
I simply destroyed the old objects and instantiated new ones. But remember that frequently instantiating and destroying objects in Unity will eat up memory on your player's machine, so I would suggest you use object pooling. For reference I'll add a link to a tutorial.
public Transform pointParent; // parent object that holds the spawned points
public Transform pointPrefab; // the point prefab to spawn
private void Update()
{
    startPosition = StartPoint.position; // Setting the starting point and ending point.
    endPosition = EndPoint.position;
    // Finding the distance between the points
    distance = Vector2.Distance(startPosition, endPosition);
    // Generating the points
    GeneratePointsObjects();
    // Update: placing points/dots on all the locations
    InstantiatePrefabsOnPositions();
    Debug.DrawLine(StartPoint.position, EndPoint.position, Color.black);
}
private void InstantiatePrefabsOnPositions()
{
    // Remove all old prefabs/objects/points
    for (int i = 0; i < pointParent.childCount; i++)
    {
        Destroy(pointParent.GetChild(i).gameObject);
    }
    // Instantiate a new object at each position calculated in GeneratePointsObjects()
    foreach (Vector3 v in points)
    {
        Transform t = Instantiate(pointPrefab);
        t.SetParent(pointParent);
        t.localScale = Vector3.one;
        t.position = v;
        t.gameObject.SetActive(true);
    }
}
Hope this helps; please see the links below for more reference:
OBJECT POOLING in Unity
Vector2.Lerp
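A very small sketch of the pooling idea mentioned above (my own illustration, not the linked tutorial's code): keep the spawned points around and just reposition or toggle them instead of destroying them every frame.
// Reuse instead of Destroy/Instantiate: deactivate spares, activate what is needed.
private readonly List<Transform> pool = new List<Transform>();
private void PlacePointsPooled()
{
    // Grow the pool only when there are not enough points yet
    while (pool.Count < points.Count)
    {
        Transform t = Instantiate(pointPrefab, pointParent);
        pool.Add(t);
    }
    // Position the needed points and hide the rest
    for (int i = 0; i < pool.Count; i++)
    {
        bool used = i < points.Count;
        pool[i].gameObject.SetActive(used);
        if (used) { pool[i].position = points[i]; }
    }
}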
I hope I understood you right.
First, compute the A to B line, so B minus A.
To get the number of needed objects, divide the line's magnitude by the object spacing. You could also add the diameter of the prediction point object to avoid overlapping.
Then, to get each object's position, write (almost) the same for loop.
Here's what I came up with; I haven't tested it, let me know if it helps!
public class CustomLineRenderer : MonoBehaviour
{
    public float SpaceBetweenPoints;
    public GameObject PredictionPointPrefab;
    // remove transforms if you need to
    public Transform start;
    public Transform end;
    private List<GameObject> _predictionPoints;
    // these are your raycast start & end points, make them public or whatever
    private Vector2 _firstPathStart;
    private Vector2 _firstPathEnd;
    private void Awake()
    {
        _firstPathStart = start.position;
        _firstPathEnd = end.position;
        _predictionPoints = new List<GameObject>();
    }
    private void Update()
    {
        _firstPathStart = start.position;
        _firstPathEnd = end.position;
        // using any small value to clamp everything and avoid division by zero
        if (SpaceBetweenPoints < 0.001f) SpaceBetweenPoints = 0.001f;
        var line = _firstPathEnd - _firstPathStart;
        var objectsNumber = Mathf.FloorToInt(line.magnitude / SpaceBetweenPoints);
        var direction = line.normalized;
        // Update the collection so that the line isn't too short
        for (var i = _predictionPoints.Count; i <= objectsNumber; ++i)
            _predictionPoints.Add(Instantiate(PredictionPointPrefab));
        for (var i = 0; i < objectsNumber; ++i)
        {
            _predictionPoints[i].SetActive(true);
            _predictionPoints[i].transform.position = _firstPathStart + direction * (SpaceBetweenPoints * i);
        }
        // You could destroy objects, but it's better to add them to the pool since you'll use them quite often
        for (var i = objectsNumber; i < _predictionPoints.Count; ++i)
            _predictionPoints[i].SetActive(false);
    }
}

Move GameObject generating objects based on data

I am generating GameObjects (spheres) based on coordinates which are stored in a .csv file. I have a GameObject with a single sphere primitive as a child object. Based on the data, the object clones this sphere 17 times and moves the clones around. I can move the whole thing around the way I want by accessing the parent object, but in edit mode the position of the root sphere makes it awkward to use.
The following code makes this possible.
public GameObject parentObj;
public TextAsset csvFile;
[SerializeField]
private float scaleDownFactor = 10;
private int index = 0;
//class Deck : MonoBehaviour
//{
[SerializeField]
private GameObject[] deck;
private GameObject[] instantiatedObjects;
private void Start()
{
    Fill();
}
public void Fill()
{
    instantiatedObjects = new GameObject[deck.Length];
    for (int i = 0; i < deck.Length; i++)
    {
        instantiatedObjects[i] = Instantiate(deck[i]) as GameObject;
    }
}
//}
// Update is called once per frame
void Update()
{
    readCSV();
}
void readCSV()
{
    string[] frames = csvFile.text.Split('\n');
    int[] relevant = {
        0
    };
    string[] coordinates = frames[index].Split(',');
    for (int i = 0; i < 17; i++)
    {
        float x = float.Parse(coordinates[relevant[i] * 3]) / scaleDownFactor;
        float y = float.Parse(coordinates[relevant[i] * 3 + 1]) / scaleDownFactor;
        float z = float.Parse(coordinates[relevant[i] * 3 + 2]) / scaleDownFactor;
        //objectTest.transform.Rotate(float.Parse(fields[1]), float.Parse(fields[2]), float.Parse(fields[3]));
        //objectTest.transform.Translate(x, y, z);
        //parentObj.transform.position = new Vector3(x, y, z);
        instantiatedObjects[i].transform.position = new Vector3(parentObj.transform.position.x, parentObj.transform.position.y, parentObj.transform.position.z);
        instantiatedObjects[i].transform.eulerAngles = new Vector3(parentObj.transform.eulerAngles.x, parentObj.transform.eulerAngles.y, parentObj.transform.eulerAngles.z);
        //instantiatedObjects[i].transform.position = new Vector3(x, y, z);
        instantiatedObjects[i].transform.Translate(x, y, z);
    }
    if (index < frames.Length - 1)
    {
        index++;
    }
    if (index >= frames.Length - 1)
    {
        index = 0;
    }
}
Here is a screenshot:
So my questions are:
How can I set the position of this sphere to one of the moving points without changing the positions of the cloned objects, since all of them behave based on the BaseSphere?
Is it possible to make the BaseSphere invisible while the objects are being cloned or generated?
I am looking for a solution that makes it easier to move the data-generated object around in the editor.
I would appreciate any kind of input.
Make all spheres children of an empty GameObject (for example Sphere_root) and use that for moving the spheres.
Also check out Scriptable Objects. It is a simple and very quick way to manage data in Unity.
#Edit
public void Fill()
{
    instantiatedObjects = new GameObject[deck.Length];
    for (int i = 0; i < deck.Length; i++)
    {
        instantiatedObjects[i] = Instantiate(deck[i]) as GameObject;
        instantiatedObjects[i].transform.parent = Baumer; // or Sphere_root or something else
    }
}
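To illustrate the idea, and the second question about hiding the BaseSphere, here is a small sketch of my own (baseSphere is an assumed field for the template sphere): the empty root is created at runtime and the template's renderer is simply disabled once the clones exist.
public GameObject baseSphere; // assumed field: the template sphere, assigned in the Inspector
private Transform sphereRoot;
public void Fill()
{
    // Empty parent that can be moved/rotated in the editor or from code
    sphereRoot = new GameObject("Sphere_root").transform;
    instantiatedObjects = new GameObject[deck.Length];
    for (int i = 0; i < deck.Length; i++)
    {
        instantiatedObjects[i] = Instantiate(deck[i]);
        instantiatedObjects[i].transform.SetParent(sphereRoot, false);
    }
    // Hide the original template without touching the clones
    baseSphere.GetComponent<Renderer>().enabled = false;
}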

Saving cubes like in a Minecraft game

I'm trying to build a game for Android, just like Minecraft, using Unity. How can I save my progress?
I'm trying this code but I'm still clueless as to whether I'm on the right track.
public class SavePref : MonoBehaviour {
    GameObject[] objects;
    float x;
    float y;
    float z;
    // Use this for initialization
    void Start () {
        objects = GameObject.FindGameObjectsWithTag("ObjectSnap");
    }
    // Update is called once per frame
    void Update () {
    }
    public void Load()
    {
        foreach (GameObject obj in objects)
        {
            obj.name = PlayerPrefs.GetString("Objects");
            x = PlayerPrefs.GetFloat("X");
            y = PlayerPrefs.GetFloat("Y");
            z = PlayerPrefs.GetFloat("Z");
        }
    }
    public void Save()
    {
        objects = GameObject.FindObjectsOfType(typeof(GameObject)) as GameObject[];
        Debug.Log(objects.Length);
        foreach (GameObject obj in objects)
        {
            PlayerPrefs.SetString("Objects", obj.name);
            Debug.Log(obj.name);
            x = obj.transform.position.x;
            PlayerPrefs.SetFloat("X", x);
            y = obj.transform.position.y;
            PlayerPrefs.SetFloat("Y", y);
            z = obj.transform.position.z;
            PlayerPrefs.SetFloat("Z", z);
            Debug.Log(obj.transform.position.x);
            Debug.Log(obj.transform.position.y);
            Debug.Log(obj.transform.position.z);
        }
    }
}
The reason is that you are overwriting the same values.
Every object in the scene will overwrite the same 'X', 'Y', 'Z' and 'Objects' keys in PlayerPrefs.
So, if you want to save just the block positions,
each block in the scene has to have its own scene ID.
When you write to PlayerPrefs, use these IDs.
For example:
public GameObject[] inscene;
void SaveBlock(){
    inscene = GameObject.FindGameObjectsWithTag("ObjectSnap");
    for(int i = 0; i < inscene.Length; i++){
        Vector3 p = inscene[i].transform.position;
        PlayerPrefs.SetFloat(i.ToString() + "_x", p.x);
        PlayerPrefs.SetFloat(i.ToString() + "_y", p.y);
        PlayerPrefs.SetFloat(i.ToString() + "_z", p.z);
    }
    PlayerPrefs.SetInt("Count", inscene.Length);
}
void LoadBlocks(){
    int count = PlayerPrefs.GetInt("Count");
    inscene = new GameObject[count];
    for(int i = 0; i < count; i++)
    {
        float x = PlayerPrefs.GetFloat(i.ToString() + "_x");
        float y = PlayerPrefs.GetFloat(i.ToString() + "_y");
        float z = PlayerPrefs.GetFloat(i.ToString() + "_z");
        inscene[i] = GameObject.CreatePrimitive(PrimitiveType.Cube);
        inscene[i].transform.position = new Vector3(x, y, z);
        inscene[i].tag = "ObjectSnap";
    }
}
This code will just save the block positions and will recreate the world with white cubes.
If you want to save the type of each block, you should have all block types as prefabs and instantiate the prefabs in Load().
P.S. Anyway, such an implementation of a Minecraft clone for Android is terrible.
Imagine you have one little chunk (32*32*32) full of blocks: the RAM then has to handle 32768 block objects, which will just kill the application.
So you have to operate not on blocks but on the sides of those blocks, and cull the sides which aren't visible, as in the sketch below.
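A tiny sketch of that visibility test (my illustration of the idea, assuming the chunk is a simple bool array of solid cells): a face only needs geometry when the neighbouring cell is empty or outside the chunk.
const int Size = 32;
bool[,,] solid = new bool[Size, Size, Size]; // true where a block exists
bool IsEmpty(int x, int y, int z)
{
    // Outside the chunk counts as empty, so border faces are still drawn
    if (x < 0 || y < 0 || z < 0 || x >= Size || y >= Size || z >= Size) return true;
    return !solid[x, y, z];
}
int CountVisibleFaces()
{
    int faces = 0;
    for (int x = 0; x < Size; x++)
    for (int y = 0; y < Size; y++)
    for (int z = 0; z < Size; z++)
    {
        if (!solid[x, y, z]) continue;
        // Only faces touching an empty neighbour need to be generated
        if (IsEmpty(x + 1, y, z)) faces++;
        if (IsEmpty(x - 1, y, z)) faces++;
        if (IsEmpty(x, y + 1, z)) faces++;
        if (IsEmpty(x, y - 1, z)) faces++;
        if (IsEmpty(x, y, z + 1)) faces++;
        if (IsEmpty(x, y, z - 1)) faces++;
    }
    return faces;
}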
And you shouldn't save scene information via PlayerPrefs; use System.IO instead. As far as I know, PlayerPrefs stores its data in the registry (on Windows), which isn't good for this kind of data.
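A minimal sketch of the System.IO route (my illustration, untested in this project): write the count and the block positions to a binary file under Application.persistentDataPath, mirroring the PlayerPrefs version above.
// requires: using System.IO;
void SaveBlocksToFile(GameObject[] blocks)
{
    string path = Path.Combine(Application.persistentDataPath, "blocks.dat");
    using (var writer = new BinaryWriter(File.Open(path, FileMode.Create)))
    {
        writer.Write(blocks.Length);
        foreach (GameObject block in blocks)
        {
            Vector3 p = block.transform.position;
            writer.Write(p.x);
            writer.Write(p.y);
            writer.Write(p.z);
        }
    }
}
void LoadBlocksFromFile()
{
    string path = Path.Combine(Application.persistentDataPath, "blocks.dat");
    using (var reader = new BinaryReader(File.Open(path, FileMode.Open)))
    {
        int count = reader.ReadInt32();
        for (int i = 0; i < count; i++)
        {
            Vector3 p = new Vector3(reader.ReadSingle(), reader.ReadSingle(), reader.ReadSingle());
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = p;
            cube.tag = "ObjectSnap";
        }
    }
}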

Get edge vertex points then draw lines across the edge

I am trying to get the vertex points of a 3D object and then draw a line with LineRenderer. I want the lines to be drawn in order along the edges and not crossing through the middle of the model, but the lines pass through the model. The image below shows what it looks like:
As you can see, I am almost there, but the line passes through the model.
The vertex points probably need to be sorted. How can I fix this? How can I make the lines follow the edges without passing through the object?
Just attach the code below to a simple cube to see the problem described above:
public class MapEdgeOutline : MonoBehaviour
{
    Color beginColor = Color.yellow;
    Color endColor = Color.red;
    float highlightSize = 0.1f;
    void Start()
    {
        createEdgeLineOnModel();
    }
    void createEdgeLineOnModel()
    {
        EdgeVertices allVertices = findAllVertices();
        //Get the points from the vertices
        Vector3[] verticesToDraw = allVertices.vertices.ToArray();
        drawLine(verticesToDraw);
    }
    //Draws lines through the provided vertices
    void drawLine(Vector3[] verticesToDraw)
    {
        //Create a LineRenderer object and make this GameObject its parent
        GameObject childLineRendererObj = new GameObject("LineObj");
        childLineRendererObj.transform.SetParent(transform);
        //Create a new LineRenderer if it does not exist
        LineRenderer lineRenderer = childLineRendererObj.GetComponent<LineRenderer>();
        if (lineRenderer == null)
        {
            lineRenderer = childLineRendererObj.AddComponent<LineRenderer>();
        }
        //Assign a material to the new LineRenderer
        //Hidden/Internal-Colored
        //Particles/Additive
        lineRenderer.material = new Material(Shader.Find("Hidden/Internal-Colored"));
        //Set color and width
        lineRenderer.SetColors(beginColor, endColor);
        lineRenderer.SetWidth(highlightSize, highlightSize);
        //Convert local to world points
        for (int i = 0; i < verticesToDraw.Length; i++)
        {
            verticesToDraw[i] = gameObject.transform.TransformPoint(verticesToDraw[i]);
        }
        //5. Set the SetVertexCount of the LineRenderer to the length of the points
        lineRenderer.SetVertexCount(verticesToDraw.Length + 1);
        for (int i = 0; i < verticesToDraw.Length; i++)
        {
            //Draw the line
            Vector3 finalLine = verticesToDraw[i];
            lineRenderer.SetPosition(i, finalLine);
            //Check if this is the last loop, then close the drawn line
            if (i == (verticesToDraw.Length - 1))
            {
                finalLine = verticesToDraw[0];
                lineRenderer.SetPosition(verticesToDraw.Length, finalLine);
            }
        }
    }
    EdgeVertices findAllVertices()
    {
        //Get the mesh from the MeshFilter on the cube
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] meshVerts = mesh.vertices;
        //Temporary vert array
        float[][] meshVertsArray = new float[meshVerts.Length][];
        //Where to store the vertex points
        EdgeVertices allVertices = new EdgeVertices();
        allVertices.vertices = new List<Vector3>();
        int indexCounter = 0;
        //Get the x,y,z of each vertex
        while (indexCounter < meshVerts.Length)
        {
            meshVertsArray[indexCounter] = new float[3];
            meshVertsArray[indexCounter][0] = meshVerts[indexCounter].x;
            meshVertsArray[indexCounter][1] = meshVerts[indexCounter].y;
            meshVertsArray[indexCounter][2] = meshVerts[indexCounter].z;
            Vector3 tempVect = new Vector3(meshVertsArray[indexCounter][0], meshVertsArray[indexCounter][1], meshVertsArray[indexCounter][2]);
            //Store the vertex point
            allVertices.vertices.Add(tempVect);
            indexCounter++;
        }
        return allVertices;
    }
}
public struct EdgeVertices
{
    public List<Vector3> vertices;
}

How to show vertices on a cube when selected in Unity (during runtime)?

I'm trying to write a script that makes a cube turn blue and display its vertices when it is selected during runtime. So basically, when clicked, it will show all the vertices and then allow a user to select individual vertices of the cube.
This is what I have so far, which essentially only turns the cube blue. How can I display the vertices of the cube as spheres? I want to be able to select these vertices later.
using UnityEngine;
using System.Collections;
public class Cube : MonoBehaviour {
    void OnMouseDown() {
        Renderer rend = GetComponent<Renderer>();
        rend.material.color = Color.blue;
        //insert method to display vertices
    }
}
Since both the cube and the spheres are primitives, you find the vertices like this (assuming cube is a GameObject):
Vector3[] vertices = cube.GetComponent<MeshFilter>().mesh.vertices;
And then to create your spheres (this needs using System.Linq):
GameObject[] spheres = vertices.Select(vert =>
{
    GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    sphere.transform.position = vert;
    return sphere;
})
.ToArray();
UPDATE: Okay, I see your problem with too many vertices now. This is how I would do it: I would personally make threshold the radius of your spheres, so that once the spheres begin overlapping they just become one:
float threshold = 0.1f;
public void CreateSpheres()
{
    List<GameObject> spheres = new List<GameObject>();
    foreach (Vector3 vert in vertices)
    {
        // Skip this vertex if an existing sphere is already within the threshold distance
        if (spheres.Any(sph => (sph.transform.position - vert).magnitude < threshold)) continue;
        GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.transform.position = vert;
        spheres.Add(sphere);
    }
}
Couldn't get the Select function to work as shown in one of the previous examples. The coordinates were found using GetComponent<MeshFilter>().mesh.vertices, as described in the other answer.
void OnMouseDown() {
    Renderer rend = GetComponent<Renderer>();
    rend.material.color = Color.blue;
    Vector3[] vertices = GetComponent<MeshFilter>().mesh.vertices;
    Vector3[] verts = removeDuplicates(vertices);
    drawSpheres(verts);
}
The vertices array contains 24 elements for the reason shown here. So the function removeDuplicates was written to get rid of the duplicate vertices. This function is shown below:
Vector3[] removeDuplicates(Vector3[] dupArray) {
    Vector3[] newArray = new Vector3[8]; //change 8 to a variable dependent on the shape
    int newArrayIndex = 0;
    for (int i = 0; i < dupArray.Length; i++) {
        bool isDup = false; // reset for every vertex
        for (int j = 0; j < newArray.Length; j++) {
            if (dupArray[i] == newArray[j]) {
                isDup = true;
            }
        }
        if (!isDup) {
            newArray[newArrayIndex] = dupArray[i];
            newArrayIndex++;
        }
    }
    return newArray;
}
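As a side note, for exact duplicates (which is what a primitive cube's vertex array contains) the same result can be had in one line with LINQ, assuming using System.Linq:
// Vector3 overrides Equals/GetHashCode, so Distinct() drops exact duplicates
Vector3[] verts = vertices.Distinct().ToArray();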
Lastly, the spheres were drawn from the new verts array using the drawSpheres function shown here:
void drawSpheres(Vector3[] verts) {
    GameObject[] Spheres = new GameObject[verts.Length];
    for (int i = 0; i < verts.Length; i++) {
        Spheres[i] = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        Spheres[i].transform.position = verts[i];
        Spheres[i].transform.localScale -= new Vector3(0.8F, 0.8F, 0.8F);
    }
}
