Can you create a plane from Vuforia target images? [closed] - c#

As seen in the photo below, I have 4 Vuforia target images. What I want to achieve is to measure the angle of the red axis joining two target images against a plane which I want to generate using the three target images connected by the green line. Hopefully this is more understandable. How would I go about generating that plane from those three target images?
[screenshot: the four image targets, with the red axis between two of them and the green lines connecting the other three]

Your question is extremely broad! However, I'll give it a shot:
I'm assuming that by "plane" you mean an actual visible 3D mesh. Otherwise you could directly use the 3 given coordinates to create a mathematical Plane.
First of all, for a plane you usually use a rectangular mesh. It is unclear in your question how exactly you want to build that from the 3 coordinates you get.
I'm simply assuming here that you create a parallelogram directly using the 3 coordinates for the corners A, B and C, and then add a 4th corner D by adding the vector B→C to A:
using UnityEngine;
[RequireComponent(typeof(MeshFilter))]
[RequireComponent(typeof(MeshRenderer))]
public class PlaneController : MonoBehaviour
{
public Transform target1;
public Transform target2;
public Transform target3;
private MeshFilter meshFilter;
private Mesh mesh;
private Vector3[] vertices;
private void Awake()
{
meshFilter = GetComponent<MeshFilter>();
mesh = meshFilter.mesh;
vertices = new Vector3[4];
// Set the 4 triangles (front and backside)
// using the 4 vertices
var triangles = new int[3 * 4];
// triangle 1 - ABC
triangles[0] = 0;
triangles[1] = 1;
triangles[2] = 2;
// triangle 2 - ACD
triangles[3] = 0;
triangles[4] = 2;
triangles[5] = 3;
// triangle 3 - BAC
triangles[6] = 1;
triangles[7] = 0;
triangles[8] = 2;
// triangle 4 - ADC
triangles[9] = 0;
triangles[10] = 3;
triangles[11] = 2;
mesh.vertices = vertices;
mesh.triangles = triangles;
}
// Update is called once per frame
void Update()
{
// build triangles according to target positions
vertices[0] = target1.position - transform.position; // A
vertices[1] = target2.position - transform.position; // B
vertices[2] = target3.position - transform.position; // C
// D = A + B->C
vertices[3] = vertices[0] + (vertices[2] - vertices[1]);
// update the mesh vertex positions
mesh.vertices = vertices;
// Has to be done in order to update the bounding box culling
mesh.RecalculateBounds();
}
}
Edit
After you changed the question: in this case you don't really need the visual plane but, as mentioned before, can create the purely mathematical construct Plane.
The plane's normal is a vector that always stands at 90° to the plane. You can then use e.g. Vector3.Angle in order to get the angular difference between the normal and your red vector (let's say between target1 and target4). Something like:
#if UNITY_EDITOR
using UnityEditor;
#endif
using UnityEngine;
public class AngleChecker : MonoBehaviour
{
[Header("Input")]
public Transform target1;
public Transform target2;
public Transform target3;
public Transform target4;
[Header("Output")]
public float AngleOnPlane;
private void Update()
{
AngleOnPlane = GetAngle();
}
private float GetAngle()
{
// get the plane (green line/area)
var plane = new Plane(target1.position, target2.position, target3.position);
// get the red vector
var vector = target4.position - target1.position;
// get difference
var angle = Vector3.Angle(plane.normal, vector);
// since the normal itself stands 90° on the plane
return 90 - angle;
}
#if UNITY_EDITOR
// ONLY FOR VISUALIZATION
private void OnDrawGizmos()
{
// draw the plane
var mesh = new Mesh
{
vertices = new[]
{
target1.position,
target2.position,
target3.position,
target3.position + (target1.position - target2.position)
},
triangles = new[] {
0, 1, 2,
0, 2, 3,
1, 0, 2,
0, 3, 2
}
};
mesh.RecalculateNormals();
Gizmos.color = Color.white;
Gizmos.DrawMesh(mesh);
// draw the normal at target1
var plane = new Plane(target1.position, target2.position, target3.position);
Gizmos.color = Color.blue;
Gizmos.DrawRay(target1.position, plane.normal);
Handles.Label(target1.position + plane.normal, "plane normal");
// draw the red vector
Gizmos.color = Color.red;
Gizmos.DrawLine(target1.position, target4.position);
Handles.Label(target4.position , $"Angle to plane: {90 - Vector3.Angle(plane.normal, target4.position - target1.position)}");
}
#endif
}
After your comment I see you also want to make it visible in the scene.
So we are back to the first part of my answer, where exactly this was done.
Simply add the visible plane to the scene as well by merging both of the previously mentioned scripts.
For making the text visible I recommend going through the Unity UI Manual.
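If it helps, here is a minimal sketch of creating such a world-space text label from code. The class, object names, scale factor and the built-in font name are assumptions on my side (and may differ between Unity versions); you can of course also set this up directly in the editor and just reference the Text in the angleText field of the script below.
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch (assumption, not part of the original answer): creates a
// world-space Canvas with a Text child that the angle script can write into.
public class WorldSpaceAngleLabel : MonoBehaviour
{
    public Text Label { get; private set; }

    private void Awake()
    {
        var canvasGO = new GameObject("AngleCanvas", typeof(Canvas));
        var canvas = canvasGO.GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;
        // shrink the canvas so UI units roughly match scene units
        canvasGO.transform.localScale = Vector3.one * 0.01f;

        var textGO = new GameObject("AngleText", typeof(Text));
        textGO.transform.SetParent(canvasGO.transform, false);

        Label = textGO.GetComponent<Text>();
        // the built-in font name may differ depending on the Unity version
        Label.font = Resources.GetBuiltinResource<Font>("Arial.ttf");
        Label.alignment = TextAnchor.MiddleCenter;
        Label.color = Color.white;
    }
}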
For making the line appear there are two options: you can either use a LineRenderer (API) or simply use a cube you rotate and scale accordingly:
// ! UNCOMMENT THIS IF YOU RATHER WANT TO USE THE LINERENDERER !
//#define USE_LINERENDERER
using UnityEngine;
using UnityEngine.UI;
[RequireComponent(typeof(MeshFilter))]
[RequireComponent(typeof(MeshRenderer))]
public class PlaneController : MonoBehaviour
{
[Header("Input")]
// Reference these in the Inspector
public Transform target1;
public Transform target2;
public Transform target3;
public Transform target4;
[Space]
public MeshFilter meshFilter;
[Header("Output")]
public float AngleOnPlane;
public Text angleText;
public float lineWith = 0.05f;
#if USE_LINERENDERER
public LineRenderer lineRenderer;
#else
public MeshRenderer cubeLine;
#endif
private Mesh mesh;
private Vector3[] vertices;
private Vector3 redLineDirection;
// if using line renderer
private Vector3[] positions = new Vector3[2];
private void Start()
{
InitializePlaneMesh();
#if USE_LINERENDERER
InitializeLineRenderer();
#endif
}
// Update is called once per frame
private void Update()
{
// update the plane mesh
UpdatePlaneMesh();
// update the angle value
UpdateAngle();
#if USE_LINERENDERER
// update the line either using the line renderer
UpdateLineUsingLineRenderer();
#else
// update the line rather using a simple scaled cube instead
UpdateLineUsingCube();
#endif
}
private void InitializePlaneMesh()
{
if (!meshFilter) meshFilter = GetComponent<MeshFilter>();
mesh = meshFilter.mesh;
vertices = new Vector3[4];
// Set the 4 triangles (front and backside)
// using the 4 vertices
var triangles = new int[3 * 4];
// triangle 1 - ABC
triangles[0] = 0;
triangles[1] = 1;
triangles[2] = 2;
// triangle 2 - ACD
triangles[3] = 0;
triangles[4] = 2;
triangles[5] = 3;
// triangle 3 - BAC
triangles[6] = 1;
triangles[7] = 0;
triangles[8] = 2;
// triangle 4 - ADC
triangles[9] = 0;
triangles[10] = 3;
triangles[11] = 2;
mesh.vertices = vertices;
mesh.triangles = triangles;
}
#if USE_LINERENDERER
private void InitializeLineRenderer()
{
lineRenderer.positionCount = 2;
lineRenderer.startWidth = lineWith;
lineRenderer.endWidth = lineWith;
lineRenderer.loop = false;
lineRenderer.alignment = LineAlignment.View;
lineRenderer.useWorldSpace = true;
}
#endif
private void UpdatePlaneMesh()
{
// build triangles according to target positions
vertices[0] = target1.position - transform.position; // A
vertices[1] = target2.position - transform.position; // B
vertices[2] = target3.position - transform.position; // C
// D = A + B->C
vertices[3] = vertices[0] + (vertices[2] - vertices[1]);
// update the mesh vertex positions
mesh.vertices = vertices;
// Has to be done in order to update the bounding box culling
mesh.RecalculateBounds();
}
private void UpdateAngle()
{
// get the plane (green line/area)
var plane = new Plane(target1.position, target2.position, target3.position);
// get the red vector
redLineDirection = target4.position - target1.position;
// get difference
var angle = Vector3.Angle(plane.normal, redLineDirection);
// since the normal itself stands 90° on the plane
AngleOnPlane = Mathf.Abs(90 - angle);
// write the angle value to a UI Text element
angleText.text = $"Impact Angle = {AngleOnPlane:F1}°";
// move text to center of red line and a bit above it
angleText.transform.position = (target1.position + target4.position) / 2f + Vector3.up * 0.1f;
// make text always face the user (Billboard)
// using Camera.main here is not efficient! Just for the demo
angleText.transform.forward = Camera.main.transform.forward;
}
#if USE_LINERENDERER
private void UpdateLineUsingLineRenderer()
{
positions[0] = target1.position;
positions[1] = target4.position;
lineRenderer.SetPositions(positions);
}
#else
private void UpdateLineUsingCube()
{
// simply rotate the cube so it is facing in the line direction
cubeLine.transform.forward = redLineDirection.normalized;
// scale it to match both target positions with its ends
cubeLine.transform.localScale = new Vector3(lineWith, lineWith, redLineDirection.magnitude);
// and move it to the center between both targets
cubeLine.transform.position = (target1.position + target4.position) / 2f;
}
#endif
}
Result using the LineRenderer
Result using the cube which looks almost the same

Related

How to change the neighboring color of selected triangle?

Image from unity scene
Hello, with the help of Stack Overflow (Unity | mesh.colors won't color my custom mesh object) I was able to solve the problem where I can see the colorized triangle's face after it is selected using a ray-cast in Unity, as shown in the attached image. I was wondering how I can improve my method by making the neighbors of the selected triangle change their color as well. For example, as shown in the image, the selected triangle's neighbors change their color to white and stay that way only for a few seconds.
I have shared my implemented code below. Any guidance will be really appreciated! Thank you.
using System.Linq;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.SceneManagement;
public class MyRayDraw : MonoBehaviour
{
public GameObject spherePrefab;
// Better to reference those already in the Inspector
[SerializeField] private MeshFilter meshFilter;
[SerializeField] private MeshRenderer meshRenderer;
[SerializeField] private MeshCollider meshCollider;
private Mesh mesh;
private void Awake()
{
if (!meshFilter) meshFilter = GetComponent<MeshFilter>();
if (!meshRenderer) meshRenderer = GetComponent<MeshRenderer>();
if (!meshCollider) meshCollider = GetComponent<MeshCollider>();
mesh = meshFilter.mesh;
// create new colors array where the colors will be created
var colors = new Color[mesh.vertices.Length];
for(var i = 0; i < colors.Length; i++)
{
colors[i] = new Color(1f, 0f, 0.07f);
}
// assign the array of colors to the Mesh.
mesh.colors = colors;
}
private void Update()
{
if (!Input.GetMouseButtonDown(0)) return;
var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit hit;
if (Physics.Raycast(ray, out hit))
{
Debug.Log(hit.triangleIndex);
//cube.transform.position = hit.point;
// Get current vertices, triangles, uvs and colors
var vertices = mesh.vertices;
var triangles = mesh.triangles;
//var uv = _mesh.uv;
var colors = mesh.colors;
// Get the vert indices for this triangle
var vertIndex_1 = triangles[hit.triangleIndex * 3 + 0];
var vertIndex_2 = triangles[hit.triangleIndex * 3 + 1];
var vertIndex_3 = triangles[hit.triangleIndex * 3 + 2];
// Get the positions for the vertices
var vertPos_1 = vertices[vertIndex_1];
var vertPos_2 = vertices[vertIndex_2];
var vertPos_3 = vertices[vertIndex_3];
// Now for all three vertices we first check if any other triangle if using it
// by simply count how often the indices are used in the triangles list
var verticesOccur_1 = 0;
var verticesOccur_2 = 0;
var verticesOccur_3 = 0;
for (var i = 0; i < triangles.Length; i++)
{
if (triangles[i] == vertIndex_1) verticesOccur_1++;
if (triangles[i] == vertIndex_2) verticesOccur_2++;
if (triangles[i] == vertIndex_3) verticesOccur_3++;
}
// Create copied Lists so we can dynamically add entries
var newVertices = vertices.ToList();
var newTriangles = triangles.ToList();
//var newUV = uv.ToList();
var newColors = colors.ToList();
// Now if a vertex is shared we need to add a new individual vertex
// and also an according entry for the color array
// and update the vertex index
// otherwise we will simply use the vertex we already have
if (verticesOccur_1 >= 1)
{
newVertices.Add(vertPos_1);
// newUV.Add(vert1UV);
newColors.Add(Color.green);
vertIndex_1 = newVertices.Count - 1;
}
if (verticesOccur_2 >= 1)
{
newVertices.Add(vertPos_2);
//newUV.Add(vert2UV);
newColors.Add(Color.green);
vertIndex_2 = newVertices.Count - 1;
}
if (verticesOccur_3 >= 1)
{
newVertices.Add(vertPos_3);
//newUV.Add(vert3UV);
newColors.Add(Color.green);
vertIndex_3 = newVertices.Count - 1;
}
// Update the indices of the hit triangle to use the (eventually) new
// vertices instead
triangles[hit.triangleIndex * 3 + 0] = vertIndex_1;
triangles[hit.triangleIndex * 3 + 1] = vertIndex_2;
triangles[hit.triangleIndex * 3 + 2] = vertIndex_3;
// Now we can simply add the new triangle
newTriangles.Add(vertIndex_1);
newTriangles.Add(vertIndex_2);
newTriangles.Add(vertIndex_3);
// And update the mesh
mesh.vertices = newVertices.ToArray();
mesh.triangles = newTriangles.ToArray();
//_mesh.uv = newUV.ToArray();
mesh.colors = newColors.ToArray();
// color these vertices
newColors[vertIndex_1] = Color.red;
newColors[vertIndex_2] = Color.red;
newColors[vertIndex_3] = Color.red;
// Recalculate the normals
mesh.RecalculateNormals();
}
else
{
Debug.Log("no hit");
}
}
}

Why is my line renderer not showing up in Unity 2D

I am trying to create a Fruit Ninja style game on Unity 2D and I want to create a trail that follows where the player has "cut". I've tried to instantiate a "cut" object that contains the line renderer every time a user drags. However, the line renderer is not showing up. Can anyone correct any errors or suggest a new method?
public class CreateCuts : MonoBehaviour
{
public GameObject cut;
public float cutDestroyTime;
private bool dragging = false;
private Vector2 swipeStart;
// Update is called once per frame
void Update()
{
if (Input.GetMouseButtonDown(0))
{
dragging = true;
swipeStart = Camera.main.ScreenToWorldPoint(Input.mousePosition);
}
else if (Input.GetMouseButtonUp(0) && dragging)
{
createCut();
}
}
private void createCut()
{
this.dragging = false;
Vector2 swipeEnd = Camera.main.ScreenToWorldPoint(Input.mousePosition);
GameObject cut = Instantiate(this.cut, this.swipeStart, Quaternion.identity) as GameObject;
cut.GetComponent<LineRenderer>().positionCount = 1 ;
cut.GetComponent<LineRenderer>().enabled = true;
cut.GetComponent<LineRenderer>().SetPosition(0, this.swipeStart);
cut.GetComponent<LineRenderer>().SetPosition(1, swipeEnd);
Vector2[] colliderPoints = new Vector2[2];
colliderPoints[0] = new Vector2(0.0f, 0.0f);
colliderPoints[1] = swipeEnd - swipeStart;
cut.GetComponent<EdgeCollider2D>().points = colliderPoints;
Destroy(cut.gameObject, this.cutDestroyTime);
}
}
I expect there to be a line, but nothing shows up. There is also a warning stating that the SetPosition(1, swipeEnd) is out of bounds.
EDIT: Here are the settings of my cut object
1st part of cut object settings
2nd part of cut object settings
Positions tab of line renderer
I want to create a trail that follows where the player has "cut".
The word "trail" indicates that you should rather use a trail renderer!
Manual: https://docs.unity3d.com/Manual/class-TrailRenderer.html
API reference: https://docs.unity3d.com/ScriptReference/TrailRenderer.html
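A minimal sketch of the trail approach, just to illustrate the idea: drag an object with a TrailRenderer along the pointer while the button is held, instead of instantiating a cut on release. The class, the trailPrefab field and zDistance are my own assumptions, not something taken from your project.
using UnityEngine;

// Minimal sketch: moves a TrailRenderer instance along the mouse/touch
// position while the button is held, so the trail draws the "cut".
public class CutTrail : MonoBehaviour
{
    public TrailRenderer trailPrefab;   // assign a prefab with a TrailRenderer
    public float zDistance = 10f;       // distance in front of the camera

    private TrailRenderer currentTrail;

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            currentTrail = Instantiate(trailPrefab, GetWorldPosition(), Quaternion.identity);
        }
        else if (Input.GetMouseButton(0) && currentTrail)
        {
            currentTrail.transform.position = GetWorldPosition();
        }
        else if (Input.GetMouseButtonUp(0) && currentTrail)
        {
            // let the trail fade out before destroying it
            Destroy(currentTrail.gameObject, currentTrail.time);
            currentTrail = null;
        }
    }

    private Vector3 GetWorldPosition()
    {
        var screenPos = Input.mousePosition;
        screenPos.z = zDistance;
        return Camera.main.ScreenToWorldPoint(screenPos);
    }
}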
Back to your original question:
Your LineRenderer is probably rendered, but at an unexpected position because of the Vector2 to Vector3 conversion. I don't know your project structure, but this can be the case.
Please post a picture of one of your cut GameObjects that holds your LineRenderer, and also expand the Positions tab on the LineRenderer so we can see your points' xyz coordinates.
Also apply the changes mentioned by the commenters, because you really need 2 vertices for a line :P
Update:
public class CreateCuts : MonoBehaviour
{
public GameObject cut;
public float cutDestroyTime;
private bool dragging = false;
private Vector3 swipeStart;
// Update is called once per frame
void Update()
{
if (Input.GetMouseButtonDown(0))
{
dragging = true;
swipeStart = Camera.main.ScreenToWorldPoint(Input.mousePosition);
Debug.Log("Swipe Start: " + swipeStart);
}
else if (Input.GetMouseButtonUp(0) && dragging)
{
createCut();
}
}
private void createCut()
{
this.dragging = false;
Vector3 swipeEnd = Camera.main.ScreenToWorldPoint(Input.mousePosition);
Debug.Log("SwipeEnd: " + swipeEnd);
GameObject cut = Instantiate(this.cut, swipeStart, Quaternion.identity);
cut.GetComponent<LineRenderer>().positionCount = 2;
// why is it not enabled by default if you just instantiate the gameobject O.o?
cut.GetComponent<LineRenderer>().enabled = true;
cut.GetComponent<LineRenderer>().SetPositions(new Vector3[]{
new Vector3(swipeStart.x, swipeStart.y, 10),
new Vector3(swipeEnd.x, swipeEnd.y, 10)
// in Unity 2D the up axis is Y; z would normally be 0, but we set it to 10 here for visibility reasons
});
// Commented out cos atm we are "debugging" your linerenderer
// Vector2[] colliderPoints = new Vector2[2];
// colliderPoints[0] = new Vector2(0.0f, 0.0f);
// colliderPoints[1] = swipeEnd - swipeStart;
// cut.GetComponent<EdgeCollider2D>().points = colliderPoints;
//Destroy(cut.gameObject, this.cutDestroyTime);
}
}

How would you lerp the positions of two matrices?

In my code, I create 2 matrices that store information about two different meshes, using the following code:
Mesh mesh;
MeshFilter filter;
public SphereMesh SphereMesh;
public CubeMesh CubeMesh;
public Material pointMaterial;
public Mesh pointMesh;
public List<Matrix4x4> matrices1 = new List<Matrix4x4>();
public List<Matrix4x4> matrices2 = new List<Matrix4x4>();
[Space]
public float normalOffset = 0f;
public float globalScale = 0.1f;
public Vector3 scale = Vector3.one;
public int matricesNumber = 1; //determines which matrices to store info in
public void StorePoints()
{
Vector3[] vertices = mesh.vertices;
Vector3[] normals = mesh.normals;
Vector3 scaleVector = scale * globalScale;
// Initialize chunk indexes.
int startIndex = 0;
int endIndex = Mathf.Min(1023, mesh.vertexCount);
int pointCount = 0;
while (pointCount < mesh.vertexCount)
{
// Create points for the current chunk.
for (int i = startIndex; i < endIndex; i++)
{
var position = transform.position + transform.rotation * vertices[i] + transform.rotation * (normals[i].normalized * normalOffset);
var rotation = Quaternion.identity;
pointCount++;
if (matricesNumber == 1)
{
matrices1.Add(Matrix4x4.TRS(position, rotation, scaleVector));
}
if (matricesNumber == 2)
{
matrices2.Add(Matrix4x4.TRS(position, rotation, scaleVector));
}
rotation = transform.rotation * Quaternion.LookRotation(normals[i]);
}
// Modify start and end index to the range of the next chunk.
startIndex = endIndex;
endIndex = Mathf.Min(startIndex + 1023, mesh.vertexCount);
}
}
The GameObject starts as a Cube mesh, and this code stores the info about the mesh in matrices1. Elsewhere, in code not shown, I have it so that the Mesh changes to a Sphere, matricesNumber changes to 2, and then the code above is triggered again to store the info of the new Sphere mesh in matrices2.
This seems to be working, as I'm able to use code like Graphics.DrawMesh(pointMesh, matrices1[i], pointMaterial, 0);
to draw 1 mesh per vertex of the Cube mesh. And I can use that same line (but with matrices2[i]) to draw 1 mesh per vertex of the Sphere mesh.
The Question: How would I draw a mesh for each vertex of the Cube mesh (info stored in matrices1) on the screen and then make those vertex meshes Lerp into the positions of the Sphere mesh's (info stored in matrices2) vertices?
I'm trying to hack at it with things like
float t = Mathf.Clamp((Time.time % 2f) / 2f, 0f, 1f);
matrices1.position.Lerp(matrices1.position, matrices2.position, t);
but obviously this is invalid code. What might be the solution? Thanks.
Matrix4x4 is usually only used in special cases for directly calculating transformation matrices.
You rarely use matrices in scripts; most often using Vector3s, Quaternions and functionality of Transform class is more straightforward. Plain matrices are used in special cases like setting up nonstandard camera projection.
In your case it seems to me you actually only need to somehow store and access position, rotation and scale.
I would suggest not using Matrix4x4 at all, but rather something simple like e.g.
using System;
using UnityEngine;

// The [Serializable] attribute makes it actually appear in the Inspector
// which might be very helpful in your case
[Serializable]
public class TransformData
{
    public Vector3 position;
    public Quaternion rotation;
    public Vector3 scale;
}
Then with
public List<TransformData> matrices1 = new List<TransformData>();
public List<TransformData> matrices2 = new List<TransformData>();
you could simply lerp over
var lerpedPosition = Vector3.Lerp(matrices1[index].position, matrices2[index].position, lerpFactor);
And if you then need it as a Matrix4x4 you can still create it ad-hoc like e.g.
var lerpedMatrix = Matrix4x4.Translate(lerpedPosition);
or however you want to use the values.
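A minimal sketch of the whole lerp, reusing the pointMesh, pointMaterial, matrices1 and matrices2 names from your question (and assuming both lists have the same length); everything else is illustrative:
// Minimal sketch: lerp position, rotation and scale between the two stored
// sets and rebuild a Matrix4x4 only for drawing.
float t = Mathf.Clamp01((Time.time % 2f) / 2f);
for (var i = 0; i < matrices1.Count; i++)
{
    var from = matrices1[i];
    var to = matrices2[i];

    var position = Vector3.Lerp(from.position, to.position, t);
    var rotation = Quaternion.Slerp(from.rotation, to.rotation, t);
    var scale = Vector3.Lerp(from.scale, to.scale, t);

    Graphics.DrawMesh(pointMesh, Matrix4x4.TRS(position, rotation, scale), pointMaterial, 0);
}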

How to draw latitude/longitude lines on the surface of a Sphere in Unity 3D?

I'm a beginner of Unity 3D. And I'm trying to create a globe with Unity 3D as shown below. I created a Sphere game object on a scene and set the radius as 640. Then, I want to draw latitude/longitude lines (every 10 degree) on the surface of this Sphere in C# script.
I tried to draw each lat/long line by using a LineRenderer, but did not get it to work.
My code:
public class EarthController : MonoBehaviour {
private float _radius = 0;
// Use this for initialization
void Start () {
_radius = gameObject.transform.localScale.x;
DrawLatLongLines();
}
// Update is called once per frame
void Update()
{
}
private void DrawLatLongLines()
{
float thetaStep = 0.0001F;
int size = (int)((2.0 * Mathf.PI) / thetaStep);
// draw lat lines
for (int latDeg = 0; latDeg < 90; latDeg += 10)
{
// throw error here.
// seems I cannot add more than one component per type
LineRenderer latLineNorth = gameObject.AddComponent<LineRenderer>();
latLineNorth.startColor = new Color(255, 0, 0);
latLineNorth.endColor = latLineNorth.startColor;
latLineNorth.startWidth = 0.2F;
latLineNorth.endWidth = 0.2F;
latLineNorth.positionCount = size;
LineRenderer latLineSouth = Object.Instantiate<LineRenderer>(latLineNorth);
float theta = 0;
var r = _radius * Mathf.Cos(Mathf.Deg2Rad * latDeg);
var z = _radius * Mathf.Sin(Mathf.Deg2Rad * latDeg);
for (int i = 0; i < size; i++)
{
var x = r * Mathf.Sin(theta);
var y = r * Mathf.Cos(theta);
Vector3 pos = new Vector3(x, y, z);
latLineNorth.SetPosition(i, pos);
pos.z = -z;
latLineSouth.SetPosition(i, pos);
theta += thetaStep;
}
}
}
}
What's the correct way to do this?
I don't want to write a custom shader (if possible) since I know nothing about that.
The usual way to customize the way 3d objects look is to use shaders.
In your case, you would need a wireframe shader, and if you want control on the number of lines, then you might have to write it yourself.
Another solution is to use a texture. In Unity, you have many default materials that will apply a texture to your object. You can apply an image texture that contains your lines.
If you don't want a texture and really just the lines, you could use the line renderer. LineRenderer doesn't need a 3D object to work. You just give it a number of points and it is going to link them with a line.
Here is how I would do it:
Create an object with a LineRenderer and enter points that create a circle (you can do it dynamically in C# or manually in the editor on the LineRenderer).
Store this as a prefab (optional) and duplicate it in your scene (copy/paste). Each copy draws a new line.
Just by modifying the rotation, scale and position of your lines you can recreate a sphere.
If your question is "What is the equation of a circle so I can find the proper x and y coord?" here is a short idea to compute x and y coord
for(int i =0; i< nbPointsOnTheCircle; ++i)
{
var x = Mathf.Cos(nbPointsOnTheCircle / 360);
var y = Mathf.Sin(nbPointsOnTheCircle / 360);
}
If your question is "How to assign points on the line renderer dynamicaly with Unity?" here is a short example:
public class Circle : MonoBehavior
{
private void Start()
{
Vector3[] circlePoints = computePoints(); // a function that compute points of a circle
var lineRenderer = GetComponent<LineRenderer>();
linerenderer.Positions = circlePoints;
}
}
EDIT
You can only have one LineRenderer per object. This is why the example above only draws one circle. You already have an earth controller, but this controller can't add many LineRenderers to itself. Instead, the idea would be that the earth object has a script that does something like the following:
private void Start()
{
    for (int i = 0; i < nbLines; ++i)
    {
        GameObject go = new GameObject();
        go.AddComponent<LineRenderer>();
        go.AddComponent<Circle>();
        go.transform.SetParent(transform);
        go.name = "Circle" + i;
    }
}
Then you will see several objects created in your scene, each having exactly one LineRenderer and one Circle component.
From there you should be able to do what you want (for instance, pass parameters to the Circle so each Circle is a bit different).
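For completeness, a minimal sketch of what such a Circle component could look like. The radius, height and segments parameters (and the whole implementation) are my own assumptions, so adapt them to the latitude/longitude maths you need:
using UnityEngine;

// Minimal sketch: draws one horizontal circle of the given radius at the
// given height, using the LineRenderer on the same GameObject.
[RequireComponent(typeof(LineRenderer))]
public class Circle : MonoBehaviour
{
    public float radius = 1f;
    public float height = 0f;
    public int segments = 64;

    private void Start()
    {
        var points = new Vector3[segments];
        for (var i = 0; i < segments; i++)
        {
            var angle = i * 2f * Mathf.PI / segments;
            points[i] = new Vector3(radius * Mathf.Cos(angle), height, radius * Mathf.Sin(angle));
        }

        var lineRenderer = GetComponent<LineRenderer>();
        lineRenderer.loop = true;            // close the circle
        lineRenderer.useWorldSpace = false;  // follow the parent sphere
        lineRenderer.positionCount = points.Length;
        lineRenderer.SetPositions(points);
    }
}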

How can I change position of instantiate objects (clones)?

I created a series of sphere clones in my game. After that I adapted the scale so that they appear smaller. However, now there is a gap between these spheres and I would have to change the position of these instantiated game objects. I already changed my code exactly at this position, but nothing happens. So please, I need your help! How can I do this? I would like to have very small spheres which are located close together.
Here the code:
using UnityEngine;
using System.Collections;
public class SineWave : MonoBehaviour {
private GameObject plotPointObject;
private int numberOfPoints= 100;
private float animSpeed =1.0f;
private float scaleInputRange = 8*Mathf.PI; // scale number from [0 to 99] to [0 to 2Pi]; the factor in front of Mathf.PI sets the number of arcs
private float scaleResult = 2.5f; // Y axis range
public bool animate = true;
GameObject[] plotPoints;
// Use this for initialization
void Start () {
if (plotPointObject == null) //if user did not fill in a game object to use for the plot points
plotPointObject = GameObject.CreatePrimitive(PrimitiveType.Sphere); //create a sphere
//add Material to the spheres , load material in the folder Resources/Materials
Material myMaterial = Resources.Load("Materials/green", typeof(Material)) as Material;
plotPointObject.GetComponent<MeshRenderer> ().material = myMaterial;
//change the scale of the spheres
//plotPointObject.transform.localScale = Vector3.one * 0.5f ;
plotPointObject.transform.localScale -= new Vector3(0.5f,0.5f,0.5f);
plotPoints = new GameObject[numberOfPoints]; //creat an array of 100 points.
//plotPointObject.GetComponent<MeshRenderer> ().material =Material.Load("blue") as Material
//plotPointObject.transform.localScale -= new Vector3 (0.5F, 0.5F, 0.5F); //neu: change the scale of the spheres
for (int i = 0; i < numberOfPoints; i++)
{
// this specifies what object to create, where to place it and how to orient it
plotPoints[i] = (GameObject)GameObject.Instantiate(plotPointObject, new Vector3(i - (numberOfPoints/2), 0, 0), Quaternion.identity);
}
//we now have an array of 100 points- your should see them in the hierarchy when you hit play
plotPointObject.SetActive(false); //hide the original
}
Thank you already in advance!
Edit:
As I said in the comment, I now managed to place my spheres without a gap in between. However, as soon as I animate my spheres (with a sine wave) there is still that gap between the spheres. How can I fix this? Should I copy the code from the Start function into the Update function?
I would be very happy to get some help. Thank you very much!
void Update()
{
for (int i = 0; i < numberOfPoints; i++)
{
float functionXvalue = i * scaleInputRange / numberOfPoints; // scale number from [0 to 99] to [0 to 2Pi]
if (animate)
{
functionXvalue += Time.time * animSpeed;
}
plotPoints[i].transform.position = new Vector3(i - (numberOfPoints/2), ComputeFunction(functionXvalue) * scaleResult, 0);
//print (plotPointObject.GetComponent<MeshRenderer> ().bounds.size.x);
// put the position information of sphere clone 50 in a vector3 named posSphere
posSphere = plotPoints [50].transform.position;
}
//print position of sphere 50 in console
//print (posSphere);
}
float ComputeFunction(float x)
{
return Mathf.Sin(x);
}
}
I think you could follow Barış's suggestion.
For each new object you instantiate, set its position to the previously instantiated position plus the size of the object itself, or whatever distance you want them to have from each other.
var initialPosition = 0;
var distanceFromEachOther = 20;
for (int i = 0; i < numberOfPoints; i++) {
var newPos = new Vector3(initialPosition + (i * distanceFromEachOther), 0, 0);
plotPoints[i] = (GameObject)GameObject.Instantiate(plotPointObject, newPos, Quaternion.identity);
}
That will make a gap between the spheres along the X axis, depending on their size. Change the distanceFromEachOther variable to adjust it to your needs.
You could also get the object's size with plotPointObject.GetComponent<MeshRenderer>().bounds.size, so distanceFromEachOther could be, for example, distanceFromEachOther = plotPointObject.GetComponent<MeshRenderer>().bounds.size.x + 5. Then the objects will be spaced exactly 5 apart from each other.
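Putting that together, a minimal sketch of the instantiation loop using the renderer bounds for spacing (plotPointObject, plotPoints and numberOfPoints are the fields from your question; the rest is illustrative):
// Minimal sketch: space the clones by the sphere's actual rendered width so
// they sit right next to each other (add an extra offset for a visible gap).
var sphereWidth = plotPointObject.GetComponent<MeshRenderer>().bounds.size.x;
for (int i = 0; i < numberOfPoints; i++)
{
    var newPos = new Vector3((i - numberOfPoints / 2) * sphereWidth, 0, 0);
    plotPoints[i] = (GameObject)GameObject.Instantiate(plotPointObject, newPos, Quaternion.identity);
}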
give this a try:
Transform objectToSpawn; // assign this (e.g. a sphere prefab Transform) before using it
for (int i = 0; i < numberOfPoints; i++)
{
    float someX = 200;
    float someY = 200;
    Transform t = Instantiate(objectToSpawn, new Vector3(i - (numberOfPoints/2), 0, 0), Quaternion.identity) as Transform;
    plotPoints[i] = t.gameObject;
    t.position = new Vector3(someX, someY);
}
