Bullet Physics convex hulls with cubes - C#

I'm developing a game engine in C# and am using BulletSharp for physics. It's working well except with cubes:
http://i.stack.imgur.com/EPfrw.png
(The axis-aligned bounding box is the transparent red; the model is the white.)
At rest, they stand on their edges. Because I'm loading from Collada models, I create a ConvexHullShape() and add the data as a point cloud. While using BoxShape() would be more efficient (and would work correctly), I can't, since it is not guaranteed that all models are cubes. I cannot figure out why they rest on vertices rather than on their flat faces. Is my implementation of ConvexHullShape wrong, or do I need to use a different shape type for the physics to work correctly?
public RigidBody AddDynamicGeometry(ColladaGeometry geometry, Matrix4 transform)
{
    List<Vector3> points = new List<Vector3>();
    foreach (Triangle tri in geometry.triangles)
    {
        points.Add(tri.vertices[0]);
        points.Add(tri.vertices[1]);
        points.Add(tri.vertices[2]);
    }

    CollisionShape shape = new ConvexHullShape(points);
    shape.UserObject = geometry;
    collisionShapes.Add(shape);

    RigidBody body = CreateRigidBody(geometry.triangles.Count * 10, transform, shape);
    return body;
}

public RigidBody CreateRigidBody(float mass, Matrix4 startTransform, CollisionShape shape)
{
    bool isDynamic = (mass != 0.0f);
    Vector3 localInertia = Vector3.Zero;
    if (isDynamic)
        shape.CalculateLocalInertia(mass, out localInertia);

    DefaultMotionState myMotionState = new DefaultMotionState(startTransform);
    RigidBodyConstructionInfo rbInfo = new RigidBodyConstructionInfo(mass, myMotionState, shape, localInertia);
    RigidBody body = new RigidBody(rbInfo);
    physics_world.AddRigidBody(body);
    return body;
}

ConvexHullShape expects the center of mass (COM) to be at (0,0,0), but the cube's vertices are offset from the origin, so the body tilts toward a corner.
You can find the correct COM with ConvexTriangleMeshShape.CalculatePrincipalAxisTransform. You could then subtract the COM from each vertex to bring the COM back to the origin. However, it's easier to create a CompoundShape that holds the cube at a local offset:
// Create a ConvexTriangleMeshShape from the points
const int indexStride = 3 * sizeof(int);
const int vertexStride = 12; // 3 floats * 4 bytes
int vertexCount = points.Count;
int indexCount = vertexCount / 3;

TriangleIndexVertexArray vertexArray = new TriangleIndexVertexArray();
IndexedMesh mesh = new IndexedMesh();
mesh.Allocate(vertexCount, vertexStride, indexCount, indexStride);

Vector3Array vdata = mesh.Vertices;
IntArray idata = mesh.TriangleIndices;
for (int i = 0; i < vertexCount; i++)
{
    vdata[i] = points[i];
    idata[i] = i;
}
vertexArray.AddIndexedMesh(mesh);
ConvexTriangleMeshShape shape = new ConvexTriangleMeshShape(vertexArray, true);

// Calculate the center of mass
Matrix center = Matrix.Identity;
Vector3 inertia;
float volume;
shape.CalculatePrincipalAxisTransform(ref center, out inertia, out volume);

// Create a CompoundShape with the COM offset
CompoundShape compound = new CompoundShape();
compound.AddChildShape(Matrix.Invert(center), shape);
Note: ConvexTriangleMeshShape.CalculatePrincipalAxisTransform works in SVN trunk, but not in BulletSharp 2.82. There will be a bugfix release soon.
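For reference, a minimal sketch of the recentring alternative mentioned above. This is my assumption of how it would look, not code from the answer; in particular, reading the translation part of the principal-axis transform is shown here as center.Origin, which you may need to adjust to your matrix type:

// Sketch: recentre the hull manually instead of using a CompoundShape.
// Assumes the translation of the principal-axis transform is available
// as center.Origin (adjust to your math library).
Vector3 com = center.Origin;
List<Vector3> centered = new List<Vector3>(points.Count);
foreach (Vector3 p in points)
    centered.Add(p - com);   // shift so the COM sits at (0,0,0)
ConvexHullShape hull = new ConvexHullShape(centered);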

Related

Why is the radius of my circle too large (Unity using Line Renderer)?

I am trying to draw a circle in Unity using Line Renderer, but I'm having a few problems:
1. A circle is being drawn, but its radius is larger than it should be (the yellow sphere shows the intended radius; the black circle is what is actually drawn). For example, when trying to draw a circle with a radius of 10.0f, the drawn circle has a radius of something larger than 10.0f.
2. The circle doesn't fully connect. There are a couple of points in the circle where something went wrong (these areas are circled in the picture).
using UnityEngine;

[RequireComponent(typeof(LineRenderer))]
public static class GameObjectEx
{
    public static void DrawCircle(this GameObject container, float radius, float lineWidth)
    {
        var segments = 360;
        LineRenderer line = container.GetComponent<LineRenderer>();
        line.material = new Material(Shader.Find("Sprites/Default"));
        line.startColor = Color.black;
        line.endColor = Color.black;
        line.useWorldSpace = false;
        line.startWidth = lineWidth;
        line.endWidth = lineWidth;
        line.positionCount = segments + 1;

        var pointCount = segments + 1;
        var points = new Vector3[pointCount];
        for (int i = 0; i < pointCount; i++)
        {
            var rad = Mathf.Deg2Rad * i;
            points[i] = new Vector3(Mathf.Sin(rad) * radius, 0f, Mathf.Cos(rad) * radius);
        }
        line.SetPositions(points);
    }
}
I've tried following a few different tutorials for drawing a circle with Line Renderer, and the code I posted is the closest I've gotten to something that works. I'm pretty sure problem #1 is caused by "var rad", but I haven't been able to figure it out. For problem #2, I've tried increasing the value of segments, but the two broken parts are still present and unchanged.
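For comparison, here is a minimal sketch of a working circle as a separate MonoBehaviour (not the asker's extension method, and a guess at the cause rather than a confirmed fix). It assumes the GameObject's transform has unit scale: with useWorldSpace = false, the local positions are multiplied by the transform's scale, which is one common cause of an oversized circle. It also uses LineRenderer.loop to close the ring without a duplicated end point:

using UnityEngine;

[RequireComponent(typeof(LineRenderer))]
public class CircleDrawer : MonoBehaviour
{
    public float radius = 10f;
    public float lineWidth = 0.1f;
    public int segments = 128;

    void Start()
    {
        LineRenderer line = GetComponent<LineRenderer>();
        line.useWorldSpace = false;     // positions are scaled by the transform!
        line.startWidth = lineWidth;
        line.endWidth = lineWidth;
        line.loop = true;               // closes the circle cleanly
        line.positionCount = segments;

        var points = new Vector3[segments];
        for (int i = 0; i < segments; i++)
        {
            float rad = Mathf.Deg2Rad * (360f * i / segments);
            points[i] = new Vector3(Mathf.Sin(rad) * radius, 0f, Mathf.Cos(rad) * radius);
        }
        line.SetPositions(points);
    }
}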

Extracting points and edge vectors

I am creating a program to generate a path for a CNC laser/plasma cutting machine. The user should be able to cut shapes into the base element and obtain the points and vectors of those cuts. I added the ability to draw arrows (points and vectors) on selected walls, indicating the direction the tool should travel. This is based on obtaining the normal vector of the selected wall, which is used to determine the cutting angle.
Unfortunately, I do not know how to get the same effect on walls with a variable normal vector. An example of such an edge is an inclined cylinder: when I apply arrows to such an edge, they all have the same vector.
Code sample:
public List<Mesh> DrawArrowsOnSelectedFace(Entity entity)
{
    List<Mesh> arrowList = new List<Mesh>();
    Brep ent = (Brep)entity;
    for (int i = 0; i < ent.Faces.Length; i++)
    {
        if (ent.GetFaceSelection(i))
        {
            Surface[] sf = ent.Faces[i].ConvertToSurface(ent);
            foreach (Surface surf in sf)
            {
                ICurve[] extractedEdges = surf.ExtractEdges();
                Vector3D rotation = CalculatePerpenticularToNormalVector(surf);
                foreach (ICurve curve in extractedEdges)
                {
                    Point3D[] segmented = curve.GetPointsByLengthPerSegment(5);
                    for (int j = 1; j <= segmented.Length - 1; j++)
                    {
                        Point3D point1 = segmented[j - 1];
                        Mesh arrow = CreateArrow(point1, rotation);
                        arrowList.Add(arrow);
                    }
                }
            }
        }
    }
    return arrowList;
}

private Vector3D CalculatePerpenticularToNormalVector(Surface surface)
{
    Point3D point3D1 = new Point3D(surface.ControlPoints[0, 0].X, surface.ControlPoints[0, 0].Y, surface.ControlPoints[0, 0].Z);
    Point3D point3D2 = new Point3D(surface.ControlPoints[0, 1].X, surface.ControlPoints[0, 1].Y, surface.ControlPoints[0, 1].Z);
    Point3D point3D3 = new Point3D(surface.ControlPoints[1, 0].X, surface.ControlPoints[1, 0].Y, surface.ControlPoints[1, 0].Z);
    Plane plane = new Plane(point3D1, point3D2, point3D3);
    Vector3D equation = new Vector3D(plane.Equation.X, plane.Equation.Y, plane.Equation.Z);
    Vector3D vectorZ = new Vector3D();
    vectorZ.PerpendicularTo(Vector3D.AxisMinusY);
    Vector3D result = CalculateRotation(vectorZ, equation);
    result.Normalize();
    return result;
}

private Mesh CreateArrow(Point3D point3D, Vector3D rotation)
{
    if (point3D.Z >= -0.5)
    {
        return Mesh.CreateArrow(point3D, rotation, 0.3, 5, 0.35, 2, 36, Mesh.natureType.Smooth, Mesh.edgeStyleType.Sharp);
    }
    else
        return null;
}

private Vector3D CalculateRotation(Vector3D vector, Vector3D equation)
{
    return vector - Vector3D.Dot(vector, equation) * equation;
}
What type would be best to use for the Boolean operations?
I also have a version of the code where the arrows are drawn based on the intersection of the base element and the cut shape. Both of these shapes are BREPs. Unfortunately, this uses a lot of memory and takes some time.
You can convert the yellow face to a Surface using Brep.Faces[i].ConvertToSurface() and generate U or V isocurves of the resulting surface at equally spaced parameters using Surface.IsocurveU(t) or Surface.IsocurveV(t).
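A rough sketch of that approach, reusing the question's own helpers. This is hedged: the exact signatures of IsocurveU/IsocurveV and their parameter range are assumptions on my part; check the library documentation:

// Sketch: walk the selected face with isocurves and build arrows along
// them. Assumes IsocurveU/IsocurveV take a normalized parameter and
// return an ICurve (adjust to the actual API).
Surface[] sf = ent.Faces[i].ConvertToSurface(ent);
foreach (Surface surf in sf)
{
    for (double t = 0.0; t <= 1.0; t += 0.1)
    {
        ICurve iso = surf.IsocurveV(t);   // or surf.IsocurveU(t)
        Point3D[] segmented = iso.GetPointsByLengthPerSegment(5);
        foreach (Point3D p in segmented)
        {
            // Ideally derive the arrow direction from the surface normal
            // near p rather than once per face, so arrows on curved walls
            // (e.g. the inclined cylinder) follow the surface.
            Vector3D rotation = CalculatePerpenticularToNormalVector(surf); // per-face fallback
            Mesh arrow = CreateArrow(p, rotation);
            if (arrow != null) arrowList.Add(arrow);
        }
    }
}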

Trying to map a sphere with Perlin Noise

So, as the title says, I'm trying to properly map an octahedron sphere using 3D Perlin noise as a procedural texture.
I suppose it has something to do with the UVs or with the edge texels of the texture (left, right, probably top and bottom as well). The texture is 512*512, but it could be 1024*1024.
I've been reading up and trying other techniques (normal maps, tangents, etc.), but I still can't figure out how to remove that seam. Keep in mind the texture should be procedurally generated, so the surface around the sphere can be updated at runtime (that way I can change the noise, i.e. the shape of the terrain, as well as the colours).
By the way, when I do the same with a prepared texture (1024*512) with the edges properly corrected, the seam disappears; but what I want is the ability to change it at run time (I can survive without it, but it would be nice to have).
private void OnEnable()
{
    if (autoUpdateTexture)
    {
        if (texture == null)
        {
            texture = new Texture2D(resolution, resolution, TextureFormat.RGB24, true);
            texture.name = "Procedural Texture";
            texture.wrapMode = TextureWrapMode.Repeat;
            texture.filterMode = FilterMode.Trilinear;
            texture.anisoLevel = 9;
            GetComponent<MeshRenderer>().sharedMaterial.mainTexture = texture;
        }
        FillTexture();
    }
}

public void FillTexture()
{
    if (texture.width != resolution)
    {
        texture.Resize(resolution, resolution);
    }

    Vector3 point00 = transform.TransformPoint(new Vector3(-0.5f, -0.5f));
    Vector3 point10 = transform.TransformPoint(new Vector3(0.5f, -0.5f));
    Vector3 point01 = transform.TransformPoint(new Vector3(-0.5f, 0.5f));
    Vector3 point11 = transform.TransformPoint(new Vector3(0.5f, 0.5f));

    NoiseMethod method = Noise.noiseMethods[(int)type][dimensions - 1];
    float stepSize = 1f / resolution;
    for (int y = 0; y < resolution; y++)
    {
        Vector3 point0 = Vector3.Lerp(point00, point01, (y + 0.5f) * stepSize);
        Vector3 point1 = Vector3.Lerp(point10, point11, (y + 0.5f) * stepSize);
        for (int x = 0; x < resolution; x++)
        {
            Vector3 point = Vector3.Lerp(point0, point1, (x + 0.5f) * stepSize);
            float sample = Noise.Sum(method, point, frequency, octaves, lacunarity, persistence);
            if (type != NoiseMethodType.Value)
            {
                sample = sample * 0.5f + 0.5f;
            }
            texture.SetPixel(x, y, coloring.Evaluate(sample));
        }
    }
    texture.Apply();
}
So, I have two images: one showing the generated noise in 2D (when I save the texture to PNG it's just a flat 2D image, obviously), and the other showing that 3D noise on the sphere with a seam at the edges. The goal is to get that 3D noise onto the sphere without the seam.
If you need any more related info, please let me know, as this is giving me a nice headache.
(Image: procedural texture in 2D)
(Image: 3D noise on sphere)
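One way to sidestep the seam entirely is to skip the 2D texture and sample the 3D noise per vertex on the sphere itself, writing the result into vertex colors, so there is no wrap-around to produce a seam. This is a sketch under the assumption that the Noise/NoiseMethodType helpers from the question's code are available, and it requires a shader that reads vertex colors:

using UnityEngine;

// Sketch (not the asker's code): per-vertex 3D noise on the sphere mesh.
[RequireComponent(typeof(MeshFilter))]
public class VertexNoise : MonoBehaviour
{
    public NoiseMethodType type = NoiseMethodType.Perlin;
    public Gradient coloring;
    public float frequency = 4f;
    public int octaves = 4;
    public float lacunarity = 2f;
    public float persistence = 0.5f;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices;
        Color[] colors = new Color[vertices.Length];
        NoiseMethod method = Noise.noiseMethods[(int)type][2]; // 3D variant
        for (int i = 0; i < vertices.Length; i++)
        {
            // Sample in object space so the pattern sticks to the sphere
            float sample = Noise.Sum(method, vertices[i], frequency, octaves, lacunarity, persistence);
            if (type != NoiseMethodType.Value)
                sample = sample * 0.5f + 0.5f;
            colors[i] = coloring.Evaluate(sample);
        }
        mesh.colors = colors; // needs a vertex-color shader to be visible
    }
}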

Split Texture using a Curved Line in Unity3D C#

I have a texture that I want to slice into 2 parts, using a Vector2 array.
I have all the Vector2 points for the curved line.
Question
How can I slice the texture into 2 parts using the curved line of points?
Alternative Solutions/Questions
How can I 'pixel' fill a Vector2[] shape to create a Texture?
My attempts
1) Generating Vector2 points to create a square, with the top edge being the curve. This looked promising, but when I tried generating a Mesh, the point sorting was incorrect.
2) Dynamically creating a PolygonCollider2D mimicking the bottom part of the sliced texture. This had the same point-ordering issue as attempt 1, so converting the collider to a mesh obviously gave the same results as attempt 1.
In the picture below:
The red line simulates my Vector2 array
The gray+green square is the texture 1024 x 1024 pixels
The green area is the target area I want
This makes a mesh in the shape you want (but with jagged edges on top); hopefully that is a step in the right direction. The Vector2 points[] array contains your red line. It should be sorted by x coordinate, and all the values should be between 0 and 1. It needs a MeshFilter and a MeshRenderer with your texture.
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
[RequireComponent(typeof(MeshRenderer))]
public class createMesh : MonoBehaviour
{
    void Start()
    {
        Vector2[] points = new Vector2[4];
        points[0] = new Vector2(0, .5f);
        points[1] = new Vector2(.33f, 1f);
        points[2] = new Vector2(.66f, .5f);
        points[3] = new Vector2(1, 1f);

        MeshFilter mf = GetComponent<MeshFilter>();
        Mesh mesh = new Mesh();
        Vector3[] vertices = new Vector3[points.Length * 2];
        int[] triangles = new int[(points.Length - 1) * 6];
        Vector3[] normals = new Vector3[points.Length * 2];
        Vector2[] uv = new Vector2[points.Length * 2];
        int vIndex = 0;
        int tIndex = 0;
        int nIndex = 0;
        int uvIndex = 0;

        for (int i = 0; i < points.Length; i++)
        {
            // One vertex on the curve, one directly below it on the bottom edge
            Vector3 topVert = points[i];
            Vector3 bottomVert = topVert;
            bottomVert.y = 0;
            vertices[vIndex++] = bottomVert;
            vertices[vIndex++] = topVert;

            // UVs match the 0..1 positions directly
            uv[uvIndex++] = bottomVert;
            uv[uvIndex++] = topVert;

            // Normals face the camera
            normals[nIndex++] = -Vector3.forward;
            normals[nIndex++] = -Vector3.forward;

            if (i < points.Length - 1)
            {
                // Two triangles per segment of the strip
                triangles[tIndex++] = i * 2;
                triangles[tIndex++] = i * 2 + 1;
                triangles[tIndex++] = i * 2 + 2;
                triangles[tIndex++] = i * 2 + 2;
                triangles[tIndex++] = i * 2 + 1;
                triangles[tIndex++] = i * 2 + 3;
            }
        }

        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.normals = normals;
        mesh.uv = uv;
        mf.mesh = mesh;
    }
}
Bonus: here's a way to do it just with the texture. To use this, the texture has to be set to Advanced in the import settings, with Read/Write Enabled. This method uses 0 to 1023 (or however large your texture is) for coordinates, and should also work for numbers outside that range.
using UnityEngine;
using System.Collections;

public class tex2d : MonoBehaviour
{
    public Vector2[] points;

    void Start()
    {
        MeshRenderer mr;
        Texture2D t2d;
        Texture2D newTex = new Texture2D(1024, 1024);
        mr = GetComponent<MeshRenderer>();
        t2d = mr.material.GetTexture(0) as Texture2D;
        MakeTex(points, t2d, ref newTex, 1024);
        mr.material.SetTexture(0, newTex);
    }

    void MakeTex(Vector2[] pnts, Texture2D inputTex, ref Texture2D outputTex, int size)
    {
        Color bgcolor = new Color(1, 0, 1, 1);
        for (int i = 0; i < (pnts.Length - 1); i++)
        {
            Vector2 p1 = pnts[i];
            Vector2 p2 = pnts[i + 1];
            // Skip segments that are entirely out of range
            if ((p1.x < 0 && p2.x < 0) || (p1.x > size && p2.x > size)) continue;
            for (int x = (int)p1.x; x < (int)p2.x; x++)
            {
                if (x < 0) continue;
                if (x >= size) break;
                // Linearly interpolate the curve's height at this column
                float interpX = (x - p1.x) / (p2.x - p1.x);
                int interpY = (int)((p2.y - p1.y) * interpX + p1.y);
                // Keep the original texture below the line...
                for (int y = 0; y < interpY; y++)
                {
                    outputTex.SetPixel(x, y, inputTex.GetPixel(x, y));
                }
                // ...and fill everything above it with the background color
                for (int y = interpY; y < size; y++)
                {
                    outputTex.SetPixel(x, y, bgcolor);
                }
            }
        }
        outputTex.Apply();
    }
}
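A possible follow-up on performance (my addition, not part of the answer): per-texel SetPixel/GetPixel calls are slow, so batching the whole buffer through GetPixels32/SetPixels32 is usually much faster for a 1024x1024 texture. A sketch of the same fill using that approach:

// Same fill as MakeTex, but batched through Color32 arrays.
// GetPixels32 returns rows bottom-to-top; index = y * width + x.
void MakeTexFast(Vector2[] pnts, Texture2D inputTex, Texture2D outputTex, int size)
{
    Color32 bgcolor = new Color32(255, 0, 255, 255);
    Color32[] src = inputTex.GetPixels32();
    Color32[] dst = outputTex.GetPixels32();
    for (int i = 0; i < pnts.Length - 1; i++)
    {
        Vector2 p1 = pnts[i];
        Vector2 p2 = pnts[i + 1];
        if ((p1.x < 0 && p2.x < 0) || (p1.x > size && p2.x > size)) continue;
        for (int x = (int)p1.x; x < (int)p2.x; x++)
        {
            if (x < 0) continue;
            if (x >= size) break;
            float interpX = (x - p1.x) / (p2.x - p1.x);
            int interpY = (int)((p2.y - p1.y) * interpX + p1.y);
            for (int y = 0; y < size; y++)
                dst[y * size + x] = y < interpY ? src[y * size + x] : bgcolor;
        }
    }
    outputTex.SetPixels32(dst);
    outputTex.Apply();
}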

How to rotate BoundingBox

Here is my code for displaying a bounding box:
Vector3[] corners = box.GetCorners();
for (int i = 0; i < 8; i++)
{
    verts[i].Position = Vector3.Transform(corners[i], modelMatrix);
    verts[i].Color = Color.White;
}
vbo.SetData(verts);
ibo.SetData(indices);

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    effect.World = Matrix.Identity;
    effect.View = view;
    effect.Projection = projection;
    pass.Apply();
    ContentLoader.SetBuffers(ibo, vbo);
}
I'd like to achieve the same result using the BoundingBox class.
I tried to do it like this, but it doesn't work:
for (int i = 0; i < boundingBoxes.Count; i++)
{
    Vector3 min = Vector3.Transform(boundingBoxes[i].Min, modelMatrix);
    Vector3 max = Vector3.Transform(boundingBoxes[i].Max, modelMatrix);
    boundingBoxes[i] = new BoundingBox(min, max);
}
The code above works if there is no rotation. With rotation, things get messed up. Any idea why, and how to fix it?
You cannot rotate a BoundingBox object in XNA. The built-in collision detection methods of the BoundingBox class are always calculated from Min and Max for an axis-aligned box. By transforming Min and Max, you are not rotating the box; you are only changing the x, y, z extents of the axis-aligned box.
You might be better off studying up on "oriented bounding boxes". You would draw an oriented box by using the corners as verts and choosing LineList as your PrimitiveType instead of TriangleList in the DrawIndexedPrimitives method. Collision detection for an oriented box is different and more complex than for an axis-aligned box.
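If an axis-aligned box that merely encloses the rotated model is enough for your collision tests, one common approach (a sketch, not from the answer above) is to transform all eight corners and rebuild the box from them:

// Transform each corner, then let XNA compute a new axis-aligned box
// that contains all of them. The box stays axis-aligned (so it grows
// under rotation), but its collision tests remain valid.
for (int i = 0; i < boundingBoxes.Count; i++)
{
    Vector3[] corners = boundingBoxes[i].GetCorners();
    for (int c = 0; c < corners.Length; c++)
        corners[c] = Vector3.Transform(corners[c], modelMatrix);
    boundingBoxes[i] = BoundingBox.CreateFromPoints(corners);
}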
