I'm trying to build a game for Android just like Minecraft using Unity. How can I save my progress?
I'm trying the code below, but I'm not sure whether I'm on the right track.
public class SavePref : MonoBehaviour {

    GameObject[] objects;
    float x;
    float y;
    float z;

    // Use this for initialization
    void Start () {
        objects = GameObject.FindGameObjectsWithTag("ObjectSnap");
    }

    // Update is called once per frame
    void Update () {
    }

    public void Load()
    {
        foreach (GameObject obj in objects)
        {
            obj.name = PlayerPrefs.GetString("Ojects");
            x = PlayerPrefs.GetFloat("X");
            y = PlayerPrefs.GetFloat("Y");
            z = PlayerPrefs.GetFloat("Z");
        }
    }

    public void Save()
    {
        objects = GameObject.FindObjectsOfType(typeof(GameObject)) as GameObject[];
        Debug.Log(objects.Length);

        foreach (GameObject obj in objects)
        {
            PlayerPrefs.SetString("Objects", obj.name);
            Debug.Log(obj.name);

            x = obj.transform.position.x;
            PlayerPrefs.SetFloat("X", x);
            y = obj.transform.position.y;
            PlayerPrefs.SetFloat("Y", y);
            z = obj.transform.position.z;
            PlayerPrefs.SetFloat("Z", z);

            Debug.Log(obj.transform.position.x);
            Debug.Log(obj.transform.position.y);
            Debug.Log(obj.transform.position.z);
        }
    }
}
The reason is that you are overwriting the same values.
Every object in the scene overwrites the same "X", "Y", "Z" and "Objects" keys in PlayerPrefs.
So, if you want to save just the block positions,
each block in the scene has to have its own scene ID,
and you should use these IDs when writing to PlayerPrefs.
For example:
public GameObject[] inscene;

void SaveBlock(){
    inscene = GameObject.FindGameObjectsWithTag("ObjectSnap");
    for(int i = 0; i < inscene.Length; i++){
        PlayerPrefs.SetFloat(i.ToString() + "_x", inscene[i].transform.position.x);
        PlayerPrefs.SetFloat(i.ToString() + "_y", inscene[i].transform.position.y);
        PlayerPrefs.SetFloat(i.ToString() + "_z", inscene[i].transform.position.z);
    }
    PlayerPrefs.SetInt("Count", inscene.Length);
}
void LoadBlocks(){
    int count = PlayerPrefs.GetInt("Count");
    inscene = new GameObject[count];
    for(int i = 0; i < count; i++)
    {
        float x = PlayerPrefs.GetFloat(i.ToString() + "_x");
        float y = PlayerPrefs.GetFloat(i.ToString() + "_y");
        float z = PlayerPrefs.GetFloat(i.ToString() + "_z");
        inscene[i] = GameObject.CreatePrimitive(PrimitiveType.Cube);
        inscene[i].transform.position = new Vector3(x, y, z);
        inscene[i].tag = "ObjectSnap";
    }
}
This code will just save the block positions, and will recreate the world with white cubes.
If you want to save the type of each block, you should have all block types as prefabs and instantiate the matching prefab in Load(), as sketched below.
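Here is a rough sketch of that idea (the blockPrefabs array and the per-block prefab index are assumptions for the example; the keys follow the same i-based scheme as above):

public GameObject[] blockPrefabs; // assumed: filled in the Inspector with one prefab per block type

void SaveBlock(int i, GameObject block, int prefabIndex){
    PlayerPrefs.SetInt(i.ToString() + "_type", prefabIndex);
    PlayerPrefs.SetFloat(i.ToString() + "_x", block.transform.position.x);
    PlayerPrefs.SetFloat(i.ToString() + "_y", block.transform.position.y);
    PlayerPrefs.SetFloat(i.ToString() + "_z", block.transform.position.z);
}

void LoadBlock(int i){
    int type = PlayerPrefs.GetInt(i.ToString() + "_type");
    Vector3 pos = new Vector3(
        PlayerPrefs.GetFloat(i.ToString() + "_x"),
        PlayerPrefs.GetFloat(i.ToString() + "_y"),
        PlayerPrefs.GetFloat(i.ToString() + "_z"));
    Instantiate(blockPrefabs[type], pos, Quaternion.identity);
}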
P.S. In any case, this kind of Minecraft-clone implementation will perform terribly on Android.
Imagine you have one little chunk (32*32*32) full of blocks: the RAM would have to hold 32768 block objects in memory, which will just kill the application.
So you should work not with whole blocks but with their faces, and cull the faces that aren't visible.
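To illustrate the face-culling idea, here is a rough sketch (the blocks[,,] occupancy array and the AddFace helper are hypothetical; AddFace would append one quad to a chunk mesh):

// Hypothetical occupancy array: true = solid block, false = air.
bool IsSolid(bool[,,] blocks, int x, int y, int z)
{
    if (x < 0 || y < 0 || z < 0 ||
        x >= blocks.GetLength(0) || y >= blocks.GetLength(1) || z >= blocks.GetLength(2))
        return false; // treat everything outside the chunk as air
    return blocks[x, y, z];
}

void BuildVisibleFaces(bool[,,] blocks)
{
    for (int x = 0; x < blocks.GetLength(0); x++)
        for (int y = 0; y < blocks.GetLength(1); y++)
            for (int z = 0; z < blocks.GetLength(2); z++)
            {
                if (!blocks[x, y, z]) continue;
                // Only emit a face when the neighbouring cell is empty (AddFace is hypothetical).
                if (!IsSolid(blocks, x + 1, y, z)) AddFace(x, y, z, Vector3.right);
                if (!IsSolid(blocks, x - 1, y, z)) AddFace(x, y, z, Vector3.left);
                if (!IsSolid(blocks, x, y + 1, z)) AddFace(x, y, z, Vector3.up);
                if (!IsSolid(blocks, x, y - 1, z)) AddFace(x, y, z, Vector3.down);
                if (!IsSolid(blocks, x, y, z + 1)) AddFace(x, y, z, Vector3.forward);
                if (!IsSolid(blocks, x, y, z - 1)) AddFace(x, y, z, Vector3.back);
            }
}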
And you shouldn't save the scene information via PlayerPrefs; use System.IO instead. As far as I know, PlayerPrefs stores its data in the registry on Windows, which isn't a good fit for this kind of data.
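As a minimal sketch of the System.IO route (the file name "blocks.dat" and the position-only format are assumptions for the example; block types and so on could be written the same way):

using System.Collections.Generic;
using System.IO;
using UnityEngine;

public static class BlockFile
{
    static string SavePath => Path.Combine(Application.persistentDataPath, "blocks.dat");

    public static void Save(IList<Vector3> blockPositions)
    {
        using (var writer = new BinaryWriter(File.Open(SavePath, FileMode.Create)))
        {
            writer.Write(blockPositions.Count);
            foreach (Vector3 p in blockPositions)
            {
                writer.Write(p.x);
                writer.Write(p.y);
                writer.Write(p.z);
            }
        }
    }

    public static List<Vector3> Load()
    {
        var blockPositions = new List<Vector3>();
        if (!File.Exists(SavePath)) return blockPositions;

        using (var reader = new BinaryReader(File.Open(SavePath, FileMode.Open)))
        {
            int count = reader.ReadInt32();
            for (int i = 0; i < count; i++)
            {
                blockPositions.Add(new Vector3(reader.ReadSingle(), reader.ReadSingle(), reader.ReadSingle()));
            }
        }
        return blockPositions;
    }
}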
I have created a procedurally generated 'tiled floor' in unity3d, using a block prefab asset and script as follows:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;

public class Wall : MonoBehaviour
{
    public GameObject block;
    public int width = 10;
    public int height = 10;
    public int timeDestroy = 1;

    List<GameObject> blockList = new List<GameObject>();

    void Start(){
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                Vector3 offset = new Vector3(x, 0, y);
                GameObject hello = (GameObject)Instantiate(block, transform.position + offset, Quaternion.identity);
                blockList.Add(hello);
            }
        }
        StartCoroutine(SelfDestruct());
    }

    void Update()
    {
        SelfDestruct();
    }

    IEnumerator SelfDestruct()
    {
        yield return new WaitForSeconds(timeDestroy);
        Destroy(blockList[UnityEngine.Random.Range(0, (width * height))]);
    }
}
When I run the game, one of the 100 blocks is destroyed, but then nothing happens. From my script, I was expecting one block to be destroyed every second, as defined by:
yield return new WaitForSeconds(timeDestroy);
where int timeDestroy = 1;
and for that to repeat until all the blocks are gone - game over. How can I change my script so the 100 GameObjects are destroyed one after another, until none are left?
You need to put the body of your IEnumerator SelfDestruct() in a while loop:
IEnumerator SelfDestruct()
{
    while (true)
    {
        Destroy(blockList[UnityEngine.Random.Range(0, (width * height))]);
        yield return new WaitForSeconds(timeDestroy);
    }
}
That way it will keep destroying the blocks, and you need to remove the SelfDestruct() call in Update().
P.S. Thanks derHugo, I had forgotten that ^^
A couple of issues here:
- You start a coroutine once, but it doesn't really wait for anything (only at the end) and there is no repetition anywhere. You probably wanted to use a loop there.
- You call SelfDestruct every frame within Update, which will execute everything up to the first yield!
- You potentially try to destroy the same object multiple times, since you never remove destroyed blocks from your list.
Actually the entire thing can be one single coroutine!
I would also use Linq OrderBy to randomize the order of the blocks once and then simply iterate and destroy them one by one in the already randomized order.
Something like e.g.
using System.Linq;
...
public class Wall : MonoBehaviour
{
    public GameObject block;
    public int width = 10;
    public int height = 10;
    public int timeDestroy = 1;

    // Yes this is valid and Unity will automatically
    // run this as a Coroutine!
    private IEnumerator Start()
    {
        // Don't even need this as a field, only locally;
        // already pre-allocate the needed amount
        var blockList = new List<GameObject>(height * width);

        for (var y = 0; y < height; ++y)
        {
            for (var x = 0; x < width; ++x)
            {
                var offset = new Vector3(x, 0, y);
                var newBlock = Instantiate(block, transform.position + offset, Quaternion.identity);
                blockList.Add(newBlock);
            }
        }

        // shuffle the blocks once
        var randomizedBlocks = blockList.OrderBy(blockInstance => Random.value);

        // then simply iterate them in already randomized order
        foreach (var blockInstance in randomizedBlocks)
        {
            yield return new WaitForSeconds(timeDestroy);

            Destroy(blockInstance);
        }

        // Now you additionally also have direct control over when they are all destroyed
        Debug.Log("All blocks have been destroyed!");
    }
}
I am trying to generate some islands in a 2D game in Unity using this code:
public class terraingen : MonoBehaviour
{
    int generate = 10;
    public int distance = 5;
    int i;

    void Start()
    {
        for (i = 1; i <= generate; i++)
        {
            float x = transform.position.x;
            float a = x + distance;
            transform.position = new Vector3(a, 2, 0);
            Instantiate(gameObject);
        }
    }
}
The problem is that when I enter Play mode it doesn't load; it only shows "Hold up (busy for mins:sec)" and the "busy" time never ends.
I tried using other code from the internet, but none of it worked; some had the same result and the rest just didn't work at all.
I am generating GameObjects (spheres) based on coordinates stored in a .csv file. I have a GameObject with a single sphere primitive as a child object. Based on the data, the object clones this sphere 17 times and moves the clones around. I can move the whole thing around as I want by accessing the parent object, but in edit mode the position of the root sphere makes it awkward to use.
The following code makes this possible.
public GameObject parentObj;
public TextAsset csvFile;

[SerializeField]
private float scaleDownFactor = 10;

private int index = 0;

//class Deck : MonoBehaviour
//{
[SerializeField]
private GameObject[] deck;
private GameObject[] instanciatedObjects;

private void Start()
{
    Fill();
}

public void Fill()
{
    instanciatedObjects = new GameObject[deck.Length];
    for (int i = 0; i < deck.Length; i++)
    {
        instanciatedObjects[i] = Instantiate(deck[i]) as GameObject;
    }
}
//}

// Update is called once per frame
void Update()
{
    readCSV();
}

void readCSV()
{
    string[] frames = csvFile.text.Split('\n');
    int[] relevant = { 0 };
    string[] coordinates = frames[index].Split(',');

    for (int i = 0; i < 17; i++)
    {
        float x = float.Parse(coordinates[relevant[i] * 3]) / scaleDownFactor;
        float y = float.Parse(coordinates[relevant[i] * 3 + 1]) / scaleDownFactor;
        float z = float.Parse(coordinates[relevant[i] * 3 + 2]) / scaleDownFactor;

        //objectTest.transform.Rotate(float.Parse(fields[1]), float.Parse(fields[2]), float.Parse(fields[3]));
        //objectTest.transform.Translate(x, y, z);
        //parentObj.transform.position = new Vector3(x, y, z);

        instanciatedObjects[i].transform.position = new Vector3(parentObj.transform.position.x, parentObj.transform.position.y, parentObj.transform.position.z);
        instanciatedObjects[i].transform.eulerAngles = new Vector3(parentObj.transform.eulerAngles.x, parentObj.transform.eulerAngles.y, parentObj.transform.eulerAngles.z);
        //instanciatedObjects[i].transform.position = new Vector3(x, y, z);
        instanciatedObjects[i].transform.Translate(x, y, z);
    }

    if (index < frames.Length - 1)
    {
        index++;
    }
    if (index >= frames.Length - 1)
    {
        index = 0;
    }
}
Here is a Screenshot:
So my questions are:
How can I set the position of this sphere to one of the moving points without changing the position of the cloned objects, since they all behave based on the BaseSphere?
Is it possible to make the BaseSphere invisible while the objects are being cloned or generated?
I am looking for a solution that makes it easier to move the data-generated object around in the Editor.
I would appreciate any kind of input.
Make all spheres children of an empty GameObject (for example, Sphere_root) and use that for moving the spheres.
Also check out ScriptableObjects; they are a simple and very quick way to manage data in Unity.
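For reference, a minimal ScriptableObject sketch (the asset name and fields here are just placeholders, not part of the original setup):

using UnityEngine;

// Create instances via Assets > Create > Data > SphereData and reference them from your scripts.
[CreateAssetMenu(fileName = "SphereData", menuName = "Data/SphereData")]
public class SphereData : ScriptableObject
{
    public TextAsset csvFile;          // the same kind of csv source used above
    public float scaleDownFactor = 10; // tuning values can live in the asset instead of the scene
}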
EDIT
public void Fill()
{
    instanciatedObjects = new GameObject[deck.Length];
    for (int i = 0; i < deck.Length; i++)
    {
        instanciatedObjects[i] = Instantiate(deck[i]) as GameObject;
        instanciatedObjects[i].transform.parent = Baumer; // or Sphere_root or something else.
    }
}
I'm a beginner with Unity 3D, and I'm trying to create a globe as shown below. I created a Sphere game object in a scene and set the radius to 640. Now I want to draw latitude/longitude lines (every 10 degrees) on the surface of this sphere in a C# script.
I tried to draw each lat/long line using a LineRenderer, but could not get it to work.
My code:
public class EarthController : MonoBehaviour {

    private float _radius = 0;

    // Use this for initialization
    void Start () {
        _radius = gameObject.transform.localScale.x;
        DrawLatLongLines();
    }

    // Update is called once per frame
    void Update()
    {
    }

    private void DrawLatLongLines()
    {
        float thetaStep = 0.0001F;
        int size = (int)((2.0 * Mathf.PI) / thetaStep);

        // draw lat lines
        for (int latDeg = 0; latDeg < 90; latDeg += 10)
        {
            // throws an error here:
            // it seems I cannot add more than one component per type
            LineRenderer latLineNorth = gameObject.AddComponent<LineRenderer>();
            latLineNorth.startColor = new Color(255, 0, 0);
            latLineNorth.endColor = latLineNorth.startColor;
            latLineNorth.startWidth = 0.2F;
            latLineNorth.endWidth = 0.2F;
            latLineNorth.positionCount = size;

            LineRenderer latLineSouth = Object.Instantiate<LineRenderer>(latLineNorth);

            float theta = 0;
            var r = _radius * Mathf.Cos(Mathf.Deg2Rad * latDeg);
            var z = _radius * Mathf.Sin(Mathf.Deg2Rad * latDeg);

            for (int i = 0; i < size; i++)
            {
                var x = r * Mathf.Sin(theta);
                var y = r * Mathf.Cos(theta);
                Vector3 pos = new Vector3(x, y, z);
                latLineNorth.SetPosition(i, pos);
                pos.z = -z;
                latLineSouth.SetPosition(i, pos);
                theta += thetaStep;
            }
        }
    }
}
What's the correct way to do this?
I'd rather not write a custom shader (if possible) since I know nothing about shaders.
The usual way to customize how 3D objects look is to use shaders.
In your case you would need a wireframe shader, and if you want control over the number of lines, you might have to write it yourself.
Another solution is to use a texture. Unity's default materials can apply a texture to your object, so you can use an image texture that contains your lines.
If you don't want a texture and really just want the lines, you can use the LineRenderer. A LineRenderer doesn't need a 3D object to work: you just give it a number of points and it links them with a line.
Here is how I would do it:
- Create an object with a LineRenderer and enter points that form a circle (you can do this dynamically in C# or manually in the editor on the LineRenderer).
- Store this as a prefab (optional) and duplicate it in your scene (copy/paste); each copy draws a new line.
- Just by modifying the rotation, scale and position of your lines you can recreate a sphere.
If your question is "What is the equation of a circle so I can find the proper x and y coord?" here is a short idea to compute x and y coord
for(int i = 0; i < nbPointsOnTheCircle; ++i)
{
    float angle = (2f * Mathf.PI * i) / nbPointsOnTheCircle;
    var x = Mathf.Cos(angle);
    var y = Mathf.Sin(angle);
}
If your question is "How to assign points on the line renderer dynamicaly with Unity?" here is a short example:
public class Circle : MonoBehaviour
{
    private void Start()
    {
        Vector3[] circlePoints = computePoints(); // a function that computes the points of a circle
        var lineRenderer = GetComponent<LineRenderer>();
        lineRenderer.positionCount = circlePoints.Length;
        lineRenderer.SetPositions(circlePoints);
    }
}
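For completeness, computePoints() could look something like this (a sketch; radius and nbPointsOnTheCircle are assumed fields of the Circle component):

private Vector3[] computePoints()
{
    var points = new Vector3[nbPointsOnTheCircle];
    for (int i = 0; i < nbPointsOnTheCircle; ++i)
    {
        float angle = (2f * Mathf.PI * i) / nbPointsOnTheCircle;
        // circle in the XY plane; rotate the whole object to orient it as a latitude/longitude line
        points[i] = new Vector3(radius * Mathf.Cos(angle), radius * Mathf.Sin(angle), 0f);
    }
    return points;
}

If your Unity version supports it, setting lineRenderer.loop = true closes the circle.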
EDIT
You can only have one LineRenderer per object. This is why the example above only draws one circle. You already have an earth controller, but this controller can't add many LineRenderers to itself. Instead, the idea would be that the earth object has a script that does something like the following:
private void Start()
{
    for(int i = 0; i < nbLines; ++i)
    {
        GameObject go = new GameObject();
        go.AddComponent<LineRenderer>();
        go.AddComponent<Circle>();
        go.transform.SetParent(transform);
        go.name = "Circle" + i;
    }
}
Then you will see several objects created in your scene, each having exactly one LineRenderer and one Circle component.
From there you should be able to do what you want (for instance, pass parameters to each Circle so each one is a bit different).
EDIT: After a brief contact with the Assimp dev, I was pointed towards the import process. As I took over the code from someone else, I did not think of looking at that part:
using (var importer = new AssimpContext())
{
    scene = importer.ImportFile(file, PostProcessSteps.Triangulate | PostProcessSteps.FlipUVs | PostProcessSteps.JoinIdenticalVertices);
}
FlipUVs does exactly what it says: it flips the UVs on the y axis so the origin is now the top left corner. So now I get the model with proper UVs but a still-mirrored mesh. Setting the parent object's scale to x = -1 flips it back to normal and makes it look fine, but I guess this is not how it's meant to be done. So I keep looking.
See the picture: there are two crane models. The one on the left is loaded at runtime via serialization and reconstruction, while the right one is the original, simply dragged into the scene.
Serialization happens with the Assimp library.
The floor happens to be created first and seems to get the right UV map, while the other items get the wrong UV map, even though I am printing the values of the UV maps and they seem to match the originals as they should.
This is how I serialize (this is the Mesh class from Assimp, not the Unity Mesh class; the app doing the serializing is a Windows application built on UWP):
private static void SerializeMeshes(BinaryWriter writer, IEnumerable<Mesh> meshes)
{
    foreach (Mesh mesh in meshes)
    {
        ICollection<int> triangles = MeshLoadTriangles(mesh);
        MeshSerializeHeader(writer, mesh.Name, mesh.VertexCount, triangles.Count, mesh.MaterialIndex);
        MeshSerializeVertices(writer, mesh.Vertices);
        MeshSerializeUVCoordinate(writer, mesh.TextureCoordinateChannels);
        MeshSerializeTriangleIndices(writer, triangles);
    }
}

private static void MeshSerializeUVCoordinate(BinaryWriter writer, List<Vector3D>[] textureCoordinateChannels)
{
    // get first channel and serialize to writer. Discard z channel
    // This is Vector3D since happening outside Unity
    List<Vector3D> list = textureCoordinateChannels[0];
    foreach (Vector3D v in list)
    {
        float x = v.X;
        float y = v.Y;
        writer.Write(x);
        writer.Write(y);
    }
}

private static void MeshSerializeVertices(BinaryWriter writer, IEnumerable<Vector3D> vertices)
{
    foreach (Vector3D vertex in vertices)
    {
        Vector3D temp = vertex;
        writer.Write(temp.X);
        writer.Write(temp.Y);
        writer.Write(temp.Z);
    }
}

private static void MeshSerializeTriangleIndices(BinaryWriter writer, IEnumerable<int> triangleIndices)
{
    foreach (int index in triangleIndices) { writer.Write(index); }
}
And this is the inverse process:
private static void DeserializeMeshes(BinaryReader reader, SceneGraph scene)
{
    MeshData[] meshes = new MeshData[scene.meshCount];
    for (int i = 0; i < scene.meshCount; i++)
    {
        meshes[i] = new MeshData();
        MeshReadHeader(reader, meshes[i]);
        MeshReadVertices(reader, meshes[i]);
        MeshReadUVCoordinate(reader, meshes[i]);
        MeshReadTriangleIndices(reader, meshes[i]);
    }
    scene.meshes = meshes as IEnumerable<MeshData>;
}

private static void MeshReadUVCoordinate(BinaryReader reader, MeshData meshData)
{
    bool hasUv = reader.ReadBoolean();
    if (hasUv == false) { return; }

    Vector2[] uvs = new Vector2[meshData.vertexCount];
    for (int i = 0; i < uvs.Length; i++)
    {
        uvs[i] = new Vector2();
        uvs[i].x = reader.ReadSingle();
        uvs[i].y = reader.ReadSingle();
    }
    meshData.uvs = uvs;
}

private static void MeshReadHeader(BinaryReader reader, MeshData meshData)
{
    meshData.name = reader.ReadString();
    meshData.vertexCount = reader.ReadInt32();
    meshData.triangleCount = reader.ReadInt32();
    meshData.materialIndex = reader.ReadInt32();
}

private static void MeshReadVertices(BinaryReader reader, MeshData meshData)
{
    Vector3[] vertices = new Vector3[meshData.vertexCount];
    for (int i = 0; i < vertices.Length; i++)
    {
        vertices[i] = new Vector3();
        vertices[i].x = reader.ReadSingle();
        vertices[i].y = reader.ReadSingle();
        vertices[i].z = reader.ReadSingle();
    }
    meshData.vertices = vertices;
}

private static void MeshReadTriangleIndices(BinaryReader reader, MeshData meshData)
{
    int[] triangleIndices = new int[meshData.triangleCount];
    for (int i = 0; i < triangleIndices.Length; i++)
    {
        triangleIndices[i] = reader.ReadInt32();
    }
    meshData.triangles = triangleIndices;
}
MeshData is just a temporary container with the deserialized values from the fbx.
Then, meshes are created:
private static Mesh[] CreateMeshes(SceneGraph scene)
{
    Mesh[] meshes = new Mesh[scene.meshCount];
    int index = 0;
    foreach (MeshData meshData in scene.meshes)
    {
        meshes[index] = new Mesh();
        Vector3[] vec = meshData.vertices;
        meshes[index].vertices = vec;
        meshes[index].triangles = meshData.triangles;
        meshes[index].uv = meshData.uvs;
        meshes[index].normals = meshData.normals;
        meshes[index].RecalculateNormals();
        index++;
    }
    return meshes;
}
I don't see anything in the code that should result in this kind of behaviour; I'd say it would totally mangle the mesh if the values were wrong.
I can see that the fbx files I have use quads instead of triangles for the indexing.
Could it be that Assimp does not cope well with this?
I did not solve the issue properly on the Assimp side.
The basic solution we used was to negatively scale the flipped axis on the object's transform.
A more appropriate solution would have been to run all the vertices through a correction matrix on the Unity side so the vertex positions are resolved properly:
Get the vertex list
For each vertex, multiply it by the correction matrix
Assign the array back to the mesh
Use the mesh for rendering
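As a rough sketch of that matrix approach (assuming the model is mirrored on the X axis and mesh is the Unity Mesh built from the deserialized data):

Matrix4x4 mirrorX = Matrix4x4.Scale(new Vector3(-1f, 1f, 1f)); // assumed mirrored axis

// get the vertex list and run every vertex through the matrix
Vector3[] vertices = mesh.vertices;
for (int i = 0; i < vertices.Length; i++)
{
    vertices[i] = mirrorX.MultiplyPoint3x4(vertices[i]);
}

// assign the array back to the mesh
mesh.vertices = vertices;

// mirroring flips the winding order, so the triangles need to be reversed as well
int[] triangles = mesh.triangles;
System.Array.Reverse(triangles);
mesh.triangles = triangles;
mesh.RecalculateNormals();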