I want to create collision vertices to attach to bodies in XNA with Farseer, based on loaded Texture2Ds.
A caveat, first of all, is that I'm not using Farseer for anything other than collision. Rendering and all other game code is done by my own engine; Farseer is just used as a background physics simulator (it will only tell me when a collision happens, and then I'll handle that myself).
I should point out here that I'm 100% new to Farseer. Never used it before.
So, if I create my List&lt;Vertices&gt; using BayazitDecomposer.ConvexPartition(verts), should I then store this data alongside the Texture2D objects, and then create the collision bodies on the fly when I create my collidable actors? Or am I doing something wrong?
Furthermore, in the example at http://farseerphysics.codeplex.com/documentation, it scales the vertices by Vertices.Scale()... If I keep all my Farseer bodies in pixel space, do I need to do this?
Thank you.
So, if I create my List&lt;Vertices&gt; using BayazitDecomposer.ConvexPartition(verts), should I then store this data alongside the Texture2D objects, and then create the collision bodies on the fly when I create my collidable actors? Or am I doing something wrong?
You create an instance of the Body class from a list of verts using the BodyFactory. I would suggest doing this for each actor you have. However, you can save yourself some processor power by reusing bodies: if an actor dies, add its body to a queue and then snap it to the next actor that is created.
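If it helps, here is a rough, untested sketch of what that could look like with the Farseer 3.x factory API (world and verts are stand-ins for your physics world and the texture outline from the question):

    // Decompose the (possibly concave) outline into convex pieces,
    // then build one compound body from them.
    List<Vertices> pieces = BayazitDecomposer.ConvexPartition(verts);
    Body body = BodyFactory.CreateCompoundPolygon(world, pieces, 1f); // 1f = density
    body.BodyType = BodyType.Dynamic;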
Furthermore, in the example at http://farseerphysics.codeplex.com/documentation, it scales the vertices by Vertices.Scale()... If I keep all my Farseer bodies in pixel space, do I need to do this?
Nope. However, that means a pixel is a meter as far as Farseer is concerned. I believe the formula for torque is Radius * Force, so anything reliant on a radius wouldn't behave as expected. I suggest making a Farseer meter equivalent to what is actually a meter in the game; it's just a bit of extra division.
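If you do decide to work in meters, the scaling step from the documentation boils down to something like this (a sketch; 64 pixels per meter is just an example ratio):

    // Convert pixel-space vertices to meters before building the body.
    Vector2 scale = new Vector2(1f / 64f, 1f / 64f); // 64 px = 1 m; pick your own ratio
    verts.Scale(ref scale); // mutates the vertex list in place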
I am currently writing a Minecraft clone in Unity as my pet project, and I'm trying to implement raycasts for it, so that I can know which block the player is looking at (I will use raycasts for other purposes too). The world of the game is a 3D grid of perfect unit cubes; each element of the grid is either solid or not. I want to be able to shoot a ray from any point of my world and get the point where that ray hits the surface of the first solid block in its way.
Here's some C# pseudocode of what my game approximately looks like:
// An approximation of what Unity's Vector3 looks like.
public struct Vector3
{
    public float x, y, z;
}

public class World
{
    public bool[,,] blocks;

    public bool IsSolid(Vector3 pos) // i.e. whether pos is inside a solid block
    {
        return blocks[(int)Math.Floor(pos.x), (int)Math.Floor(pos.y), (int)Math.Floor(pos.z)];
    }

    public Vector3 Raycast(Vector3 origin, Vector3 direction)
    {
        // some algorithm that returns the point at which the ray hits a solid block
    }
}
Note that the coordinates of any Vector3 may not be whole numbers; it is entirely possible for rays to start (or end) at fractional coordinates. For simplicity you may (or may not) assume that the world is infinite and a ray will always hit some block eventually. Remember that Raycast() should return the point at which the ray hits the surface of a cube.
What is the best algorithm I can use for this?
My priorities (in order) are:
Speed - making a raycast should be fast
Elegance - the algorithm should be reasonably straightforward and concise
Generality - the algorithm should be easy to modify (e.g. to add some extra functionality to it)
Here's a Q/A for some possible questions:
Q: Why not use Unity's native colliders and raycasts?
A: Unity's colliders and raycasts are too slow and aren't optimized for my needs; furthermore, that approach is by no means elegant or generic.
Q: Do you want an implementation of an algorithm or just the basic concept?
A: I'm fine with just understanding the basis of an algorithm, but I would really appreciate an implementation (preferably in C#).
Here are some basic principles. You will need to be fairly comfortable with linear algebra to even attempt this, and read and understand how ray-plane intersection works.
Your ray will start inside a cube, and it will hit one of the 6 faces on its way out of the cube. In normal cases we can quickly eliminate three of the faces by checking whether the ray direction points the same way as each face's normal, via the sign of their dot product. To find the first hit among the three remaining faces, we do an intersection test against each corresponding plane and pick the closest one; once you know the hit face, you know which cube the ray enters next. If that cube is empty you repeat the process until you find a non-empty cube. You may also add some checks to avoid edge cases, like eliminating all planes that are parallel to your ray as early as possible.
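To make this concrete, here is an untested C# sketch of that cube-stepping loop, fitted to the World class from the question. It is essentially the Amanatides & Woo "fast voxel traversal" algorithm: instead of explicit plane intersections it tracks, per axis, the ray distance to the next cube boundary. The IntBound helper is my own name, and bounds checks on blocks are omitted since the question allows assuming an infinite world.

    // Inside the World class from the question.
    public Vector3 Raycast(Vector3 origin, Vector3 direction)
    {
        // Cube the ray currently occupies.
        int x = (int)Math.Floor(origin.x);
        int y = (int)Math.Floor(origin.y);
        int z = (int)Math.Floor(origin.z);

        // Which way we step through the grid on each axis.
        int stepX = direction.x >= 0 ? 1 : -1;
        int stepY = direction.y >= 0 ? 1 : -1;
        int stepZ = direction.z >= 0 ? 1 : -1;

        // Ray distance t at which we cross the next boundary on each axis...
        float tMaxX = IntBound(origin.x, direction.x);
        float tMaxY = IntBound(origin.y, direction.y);
        float tMaxZ = IntBound(origin.z, direction.z);

        // ...and the distance between successive boundaries on each axis.
        // (Division by zero yields Infinity, which the comparisons handle.)
        float tDeltaX = Math.Abs(1f / direction.x);
        float tDeltaY = Math.Abs(1f / direction.y);
        float tDeltaZ = Math.Abs(1f / direction.z);

        while (true)
        {
            // Step into the neighbouring cube across the nearest face.
            float t;
            if (tMaxX < tMaxY && tMaxX < tMaxZ) { t = tMaxX; tMaxX += tDeltaX; x += stepX; }
            else if (tMaxY < tMaxZ)             { t = tMaxY; tMaxY += tDeltaY; y += stepY; }
            else                                { t = tMaxZ; tMaxZ += tDeltaZ; z += stepZ; }

            if (blocks[x, y, z]) // solid: t is where the ray pierced its face
                return new Vector3 { x = origin.x + t * direction.x,
                                     y = origin.y + t * direction.y,
                                     z = origin.z + t * direction.z };
        }
    }

    // Ray distance until coordinate s crosses its next integer boundary.
    static float IntBound(float s, float ds)
    {
        if (ds == 0) return float.PositiveInfinity; // parallel: never crosses
        return ds > 0 ? ((float)Math.Floor(s) + 1f - s) / ds
                      : (s - (float)Math.Floor(s)) / -ds;
    }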
However, to get any real speed you need some kind of tree structure to reduce the number of checks done. There are many alternatives (kd-trees, R-trees, etc.), but in this specific case I would probably consider a sparse octree. This means your cubes become part of larger 2x2x2 sections, where a whole section can be tagged as empty, filled, partially filled, etc. These sections are in turn grouped into larger 2x2x2 sections and so on, until you hit some maximum size that contains either your entire play area or one independently loadable unit of it, with some logic to test multiple units if needed.
Raycasting is done more or less the same way with an octree as in the simple case, except that you now have variable-sized cubes, and when you hit a face you need to traverse the tree to find the next cube to test.
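As a rough illustration of the data structure (names and layout are just for exposition, not a tuned implementation), a node and its solidity query might look like this:

    // Sketch of a sparse octree node. size is the node's edge length in
    // cubes (a power of two); x, y, z are local coordinates inside it.
    public enum Fill { Empty, Solid, Partial }

    public class OctreeNode
    {
        public Fill fill;             // Empty/Solid leaves store no children
        public OctreeNode[] children; // exactly 8 when fill == Partial

        public bool IsSolid(int x, int y, int z, int size)
        {
            if (fill != Fill.Partial)
                return fill == Fill.Solid; // whole region answered at once

            int h = size / 2; // descend into the matching child octant
            int index = (x >= h ? 1 : 0) | (y >= h ? 2 : 0) | (z >= h ? 4 : 0);
            return children[index].IsSolid(x % h, y % h, z % h, h);
        }
    }

The payoff during raycasting is that an Empty node lets the ray skip a whole size-wide region in one step instead of visiting every unit cube inside it.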
But actually making a well-optimized implementation of this requires quite a bit of experience, both with the concepts involved and with the language and hardware. You may also need insight into the structure of your actual game world, since there might be non-obvious shortcuts that significantly help performance. So it is not guaranteed that your implementation will be faster than Unity's built-in raycasts.
I am an inexperienced programmer, and I am looking for advice on a new Unity project:
I need to generate terrain for a 3D game from fairly large tiles. For now I only need one type of tile, but I was thinking I had better set up a registry system now and dynamically generate that default tile in an infinite grid. I have a few concerns, though: for example, will the objects continue to load as the character moves into the render distance of a new tile (or chunk, if you prefer)? Also, all the tutorials I have found are wrong for me in some way: they only work in 2D and don't have collision, or they use a static registry that does not allow changing the content of the tiles in-game.
Right now I don't even know what the code looks like to place a 3D object in the scene without building it from vectors, which maybe I could do. I also don't know how I would want to trigger the code.
Could someone give me an idea of what the code would look like / terminology to look up / a tutorial that gives me what I need?
This looks like a pretty big scope for a new programmer, but let's give it a shot. Generating terrain will be a large learning experience in performance and optimization when you don't know what you're doing.
First off, you'll probably want to make a script that acts as a controller for generating your objects and put it on the player. I would start by generating only a small area, or one chunk, and then move on to generating multiple chunks once you understand what you're doing. To 'place' an object in your scene, you make an instance of the object. I'd start by building your grid of objects; for testing purposes this can be done easily on initialization (in the Start() function) with a pair of for loops. E.g., if you are trying to make 16x16 squares like Minecraft, have a for loop that runs 16 times (for the x) and a for loop inside it that runs 16 times (for the z). That way you can make a complete square of, in this case, cubes. Here is some very untested code just to give you an example of what I'm talking about.
public GameObject cube; // Cube you want to make copies of; this will appear in the editor

void Start()
{
    for (var x = 0; x < 16; x++)
    {
        for (var z = 0; z < 16; z++)
        {
            GameObject newCube = Instantiate(cube); // Creates an instance of the 'cube' object, think of this like a copy.
            newCube.transform.position = new Vector3(x, 0, z); // Places the cube at the x and z from the for loops
        }
    }
}
Now, where you go from here will differ a lot depending on what exactly you're trying to do, but you can start by looking into Perlin noise to add a randomized y level that looks good. It's very easy to use once you grasp the general concept, and the example I provided should help you understand how to apply it. The Unity docs also give good examples of how to use it.
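As a hedged illustration, inside the double loop above you might sample Unity's built-in Mathf.PerlinNoise to pick a height (scale and heightScale are made-up tuning constants):

    // Sample 2D Perlin noise at this grid cell; the result is roughly in [0, 1].
    // 'scale' stretches the noise pattern, 'heightScale' turns it into a y level.
    float scale = 0.1f;
    float heightScale = 4f;
    float y = Mathf.PerlinNoise(x * scale, z * scale) * heightScale;
    newCube.transform.position = new Vector3(x, Mathf.Floor(y), z);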
In all, programming is all about learning. You'll have to learn to take only the parts of resources that you need for what you're trying to create. I think what I've provided should give you a good start on what you want to create, but it will take a deeper understanding to carry things out on your own. Just test different things, truly try to understand how they work, and you'll be able to implement parts of them in your own project.
I hope this helps, good luck!
I am doing some calculations with meshes in Unity and adding MeshColliders to calculate pathfinding. Some of the generated meshes are very big and get split into two or more 'mesh parts'. I set the MeshCollider like so:
testRoom = GameObject.Find("testroom");
MeshFilter[] _filters = testRoom.GetComponentsInChildren<MeshFilter>();
foreach (MeshFilter _filter in _filters)
{
    _verts.AddRange(_filter.mesh.vertices);
    _triangles.AddRange(_filter.mesh.triangles);
    testRoom.AddComponent<MeshCollider>();
    testRoom.GetComponent<MeshCollider>().sharedMesh = _filter.mesh;
}
This obviously doesn't work with several mesh parts, since the collider gets overwritten each iteration. Since the mesh is too big, I cannot merge the meshes together and set a single collider, right? When I tick the Generate Colliders box in Unity it works fine, but I want a programmable solution since I generate the meshes at runtime.
Does anyone know how to make a single collider for a big mesh that is split into several parts due to having too many vertices?
Here is a picture of the mesh (as requested). It might look a bit strange, but it's (part of) an office building. I made the mesh myself using a Tango tablet. I didn't include a tango tag because I did not think how the mesh looked would affect the answer.
I don't fully understand your situation, to be honest, but if you need colliders that will cover a complex object created from script, you can still add one to each sub-object separately at runtime, right? So instead of
testRoom.AddComponent<MeshCollider>();
testRoom.GetComponent<MeshCollider>().sharedMesh = _filter.mesh;
you can do
var collider = _filter.gameObject.AddComponent<MeshCollider>();
collider.sharedMesh = _filter.mesh;
You can't add more than one MeshFilter component to a GameObject anyway, so every filter you get from testRoom.GetComponentsInChildren<MeshFilter>() is attached to a different object. Therefore you can add a new collider to each one of them individually.
You can of course add the colliders at runtime as you are already doing. This gives you a very heavy, poorly performing solution: it will work, but slowly. Normal practice in this case would be to model a low-poly version of the mesh and use that as the collider.
Another solution, if your objects aren't too complex, would be box colliders. You would need to grab renderer.bounds
http://docs.unity3d.com/ScriptReference/Renderer-bounds.html
and create a new box collider with those bounds:
http://docs.unity3d.com/ScriptReference/Collider-bounds.html
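As a sketch of that idea inside the loop from the question (here I size the box from the mesh's local-space bounds, which map directly onto BoxCollider's local center/size; renderer.bounds is in world space and would need converting first):

    // Rough fit: one axis-aligned box per mesh part.
    BoxCollider box = _filter.gameObject.AddComponent<BoxCollider>();
    Bounds local = _filter.mesh.bounds; // local-space AABB of the mesh
    box.center = local.center;
    box.size = local.size;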
Hope that makes sense.
Let's say I want to replicate Planeshifting from Legacy of Kain: Soul Reaver in Unity.
There are 2 realms: Spectral realm and Material realm.
The Spectral realm is based on the Material realm, only with the geometry distorted and certain objects faded out or made non-interactive.
In Soul Reaver, it is used as a means to reach areas you normally wouldn't be able to in Material (via the distorted geometry) and to use other powers (such as going through grates).
My question is: is it even possible to implement this in Unity 3D? (I would need the scene (level) or the objects to somehow have two states that I could switch between/distort in real time.)
I would call this a rather advanced topic, and there are multiple ways of accomplishing an at least similar effect.
But to answer your actual question straight away - Yes it is possible.
And here are some approaches I would take (I guess that would be your next question ;))
The easiest way is obviously having game objects whose collider and renderer (or the whole object) get disabled when "changing realms"; see the sketch after the third approach below. This for sure isn't the best-looking way of doing it, even though a lot of motion blur or other image effects could help. (Depending on what shaders you use, animating the alpha value can create a fading effect as well.)
The more advanced way would be the actual manipulation of vertices (changing the object). There are quite a few tutorials on how to change the geometry of an object. Take a look at Mesh in the official documentation ("A class that allows creating or modifying meshes from scripts."): http://docs.unity3d.com/ScriptReference/Mesh.html
Another way (which I haven't tried) that's rather easy would be using shape keys. I don't know which software you use to create your world/models, but Blender has this function, which allows you to define a base shape, then edit the vertices in Blender and save the result as a second (or more) shape. Unity can blend smoothly between those shapes, as shown in this video: https://www.youtube.com/watch?v=6vvNV1VeXhk
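To illustrate the first approach, a minimal sketch of a per-object realm switch (RealmObject, existsInMaterial, and OnRealmChanged are made-up names; a manager script would call OnRealmChanged on every such object when the player shifts):

    using UnityEngine;

    // Attach to any object that exists in only one of the two realms.
    public class RealmObject : MonoBehaviour
    {
        public bool existsInMaterial; // false = Spectral-only object

        // Called by whatever manages the realm shift.
        public void OnRealmChanged(bool nowMaterial)
        {
            bool active = (nowMaterial == existsInMaterial);
            GetComponent<Renderer>().enabled = active; // hide/show
            GetComponent<Collider>().enabled = active; // make (non-)interactive
        }
    }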
Yes, it is possible in Unity3D, but your question is quite general. You could try something like having two models per GameObject (perhaps as children, or as fields on a script) and disabling one of them depending on which realm the player is in. You could have two scenes per level and switch between them, though that might be too slow. You could see if there are any plugins/assets that allow you to define two models and morph between them. There are probably a number of other routes you could take, but I can't really help more until you've chosen a path.
I'm attempting to simulate a ship/space station with internal gravity.
To accomplish this, I'm making the player and all contents of the ship children of the ship. The ship itself has colliders, but no rigid body components. The idea is that as the ship moves, so will all of its contents. Pretty straightforward so far.
To simulate gravity in the ship, the player controller and all rigid bodies have the default gravity turned off. Instead, each frame a force is applied along the negative up vector of the parent ship.
This sort of works, but there is one major problem that I have to sort out before this thing is solid. All rigid bodies slide around the interior of the ship very slowly.
I'm aware that this is probably due to the updated position of the floor combined with the gravity force resulting in some kind of shear force. The objects always slide against the rotation of the ship.
I've tried mucking around with all of the physics properties from Physic materials to drag to mass, etc. None of these have worked, and I'm pretty sure it's due to the fundamental fact that the floor is moving, even though the RBs are children of the object that the floor is a part of.
Anyone have a solution to this that isn't some kind of duct tape? I could try to make everything kinematic and only "wake up" when certain external collisions occur or something, but that could get very cumbersome. I need for this to work in as much of a general purpose way as possible.
Some code:
On the ship
void Update()
{
    transform.Rotate(new Vector3(Time.deltaTime * 0.125f, Time.deltaTime * 0.5f, 0));
}

void FixedUpdate()
{
    foreach (Rigidbody rb in rigidBodies)
    {
        // Gravity!! ForceMode.Acceleration ignores mass, and AddForce is
        // already integrated over the fixed timestep, so no mass or
        // deltaTime factors are needed here.
        rb.AddForce(transform.up * -9.81f, ForceMode.Acceleration);
    }
}
I've also worked on a version where the ship was following the movements of a rigid body. I couldn't do direct parenting, so I had to simply set the transform manually each frame to match the physics proxy. This still had the same effect as above, though it's probably ultimately how I want to move the ship, since that will tie into the flight mechanics more properly.
If you equate this to a real-world scenario, the only thing that stops us from sliding around on the floor is friction.
Does the physics library correctly apply friction based on the contacting materials? If not, applying a certain amount of friction (or requiring a minimum amount of force to overcome it) should have the effect of preventing objects from sliding around on the floor.
Although this is pretty much "duct tape" as above, it could neatly fit in and extend your physics handling if it doesn't already contain a way to enforce this.
As suggested above, the issue is because of how the physics engine applies friction. If I'm not mistaken, there will be some other forces acting on objects in a rotating frame (some of which are very unintuitive - check out this video: https://www.youtube.com/watch?v=bJ_seXo-Enc). However, despite all that (plus likely rounding errors arising from the engine itself and the joys of floating-point mathematics), in the real world, static friction is greater than moving (kinetic) friction. I don't think this is often implemented in game physics engines, which is why we so often see "wobbly" near-static objects. Also, you may run into the issue that even if this is implemented, the physics engine may be interpreting two contacting rotating bodies as non-static (even though their contact surfaces are static from a local perspective, the engine may be thinking globally)... [Insert joke about Newton and Einstein arguing].
https://i.stack.imgur.com/AMDr2.gif shows an idealised version of what friction actually looks like in the real world: until you overcome the static friction, nothing moves.
One way you could implement this (if you can access the physics engine at that low a level) would be to round all movement forces below a certain threshold to zero, i.e. force < 0.001* is set to 0 (or possibly velocity < 0.001 is set to zero, whichever is easier).
*Some threshold - you'll have to work out what this is.
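In Unity specifically you can't easily get below PhysX, but the velocity variant can be approximated from a script (a sketch; rigidBodies is the list from the question, and the threshold is something you would tune):

    void FixedUpdate()
    {
        const float threshold = 0.001f; // tune: below this we treat the body as static
        foreach (Rigidbody rb in rigidBodies)
        {
            if (rb.velocity.sqrMagnitude < threshold * threshold)
                rb.velocity = Vector3.zero; // kill residual creep
        }
    }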
Otherwise, maybe you could instruct those objects to stop calculating physics and "stick" them to the parent surface, until such time as you want to do something with them? (This is probably a bad solution, but most of the other ideas above rely on hacking the underlying physics code).