Skeletal skinning algorithm bunches up everything at the model's feet - C#

I'm trying to implement skinning using skeletal animations stored in a Collada file. I managed to load the file and render the model correctly without skinning, but I can't figure out why, when I apply my skinning algorithm, all the parts get bunched up at the model's feet or end up extremely deformed. The entire project is stored on GitHub for reference (the skinning branch).
I believe the vertex shader is correct, since passing identity transforms for the bones gives me the default-pose model; it's the calculation of the bone transforms from the skeletal animation in the .dae file that's somehow broken. This is what my problem looks like, versus how the model looks in the default pose:
I believe my problem is somewhere in applying the recursive bone transforms:
public void Update(double deltaSec)
{
    if (CurrentAnimationName is null) return;

    var anim = animations[CurrentAnimationName];
    currentAnimationSec = (currentAnimationSec + deltaSec) % anim.Duration.TotalSeconds;

    void calculateBoneTransforms(BoneNode boneNode, Matrix4x4 parentTransform)
    {
        var bone = anim.Bones.FirstOrDefault(b => b.Id == boneNode.Id);
        var nodeTransform = bone?[TimeSpan.FromSeconds(currentAnimationSec)] ?? boneNode.Transform;
        var globalTransform = parentTransform * nodeTransform;

        if (boneNode.Id >= 0)
            for (int meshIdx = 0; meshIdx < perMeshData.Length; ++meshIdx)
                perMeshData[meshIdx].FinalBoneMatrices[boneNode.Id] = globalTransform * perMeshData[meshIdx].boneOffsetMatrices[boneNode.Id];

        foreach (var child in boneNode.Children)
            calculateBoneTransforms(child, globalTransform);
    }

    calculateBoneTransforms(rootBoneNode, Matrix4x4.Identity);
}
Or when building the recursive structure of bone data with their transforms:
BoneNode visitTransforms(Node node, Matrix4x4 mat)
{
    var boneNode = new BoneNode
    {
        Children = new BoneNode[node.ChildCount],
        Id = boneIds.TryGetValue(node.Name, out var id) ? id : -1,
        Transform = Matrix4x4.Transpose(node.Transform.ToNumerics()),
    };

    mat = node.Transform.ToNumerics() * mat;
    foreach (var meshIndex in node.MeshIndices)
        transformsDictionary[scene.Meshes[meshIndex]] = mat;

    int childIdx = 0;
    foreach (var child in node.Children)
        boneNode.Children[childIdx++] = visitTransforms(child, mat);

    return boneNode;
}
rootBoneNode = visitTransforms(scene.RootNode, Matrix4x4.Identity);
I believe the bone-to-vertex weights are gathered and uploaded to the shader correctly, and that the final bone array uniform is uploaded correctly (but maybe not calculated correctly). I'm also not sure about the order of matrix multiplications and whether anything should be transposed when uploading to the shader, though I've tried both ways on every attempt.

If anyone runs into a similar issue: my problem was that my keyframe bone transforms were transposed relative to how the rest of the chain of transforms was calculated, so when I multiplied them together everything went crazy. So keep track of which matrices are row-major and which are column-major!
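A minimal sketch of the kind of fix this was, using the names from my Update method above (assuming, as in my loader, that the keyframe sampler hands back matrices in Assimp's column-major layout while the rest of the chain is row-major System.Numerics):
// Hypothetical fix sketch: transpose the sampled keyframe so it matches the
// convention of boneNode.Transform (which was already transposed at load time).
Matrix4x4? keyframe = bone?[TimeSpan.FromSeconds(currentAnimationSec)];
var nodeTransform = keyframe.HasValue
    ? Matrix4x4.Transpose(keyframe.Value) // align matrix conventions before multiplying
    : boneNode.Transform;                 // bind pose, transposed in visitTransforms
var globalTransform = parentTransform * nodeTransform;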

Related

Confused on Clipper in C#

I'm creating a 2D game in Unity which has procedurally placed tiles. I want to simplify the collision geometry using Angus Johnson's Clipper library (specifically the union function), but I'm running into an issue with the library returning empty solutions and I'm not sure why.
For reference, here are the Polygon Colliders I've been using to test.
And here is a simplified version of the function I'm using to combine the geometry:
List<List<Vector2>> unitedPolygons = new List<List<Vector2>>();
Clipper clipper = new Clipper();
Paths solution = new Paths();
ClipperOffset offset = new ClipperOffset();

//Use a scaling factor for floats and convert the Polygon Colliders' points to Clipper's desired format
int scalingFactor = 10000;
for (int i = 0; i < polygons.Count; i++)
{
    Path allPolygonsPath = new Path(polygons[i].points.Length);
    for (int j = 0; j < polygons[i].points.Length; j++)
    {
        allPolygonsPath.Add(new IntPoint(Mathf.Floor(polygons[i].points[j].x * scalingFactor), Mathf.Floor(polygons[i].points[j].y * scalingFactor)));
    }
    bool succeeded = clipper.AddPath(allPolygonsPath, PolyType.ptSubject, true);
}

//Execute the union
bool success = clipper.Execute(ClipType.ctUnion, solution);
Debug.Log("Polygons after union: " + solution.Count);

//Offset the polygons
offset.AddPaths(solution, JoinType.jtMiter, EndType.etClosedPolygon);
offset.Execute(ref solution, 5f);

//Convert back to a format Unity can use
foreach (Path path in solution)
{
    List<Vector2> unitedPolygon = new List<Vector2>();
    foreach (IntPoint point in path)
    {
        unitedPolygon.Add(new Vector2(point.X / (float)scalingFactor, point.Y / (float)scalingFactor));
    }
    unitedPolygons.Add(unitedPolygon);
}

return unitedPolygons;
What I've discovered through debugging is that the first Execute (for the union) is returning an empty solution. I've figured out that the "BuildResult" function in the "Clipper" class is indeed running, and "m_PolyOuts" has data in it, but the "Pts" property of the "OutRec"s in that list are all null. I can't figure out where this happens or if they were ever set in the first place.
I'm convinced this is proper behavior and I'm just using the library wrong, but I can't find any documentation or examples explaining what I need to change to make the union succeed.
Thanks.
EDIT: I've narrowed it down a bit more. During "ExecuteInternal" in the Clipper class, the "Pts" lists aren't null until the "FixupOutPolygon" function is run. After that, all of the lists are null. "JoinCommonEdges" also makes a couple of the lists null, but not all of them.
I've been working on my own game project as well and stumbled upon a similar problem with Clipper. What worked for me was, instead of writing this:
clipper.Execute(ClipType.ctUnion, solution);
... I specified the PolyFillType for the Execute method:
clipper.Execute(ClipType.ctUnion, solution, PolyFillType.pftNonZero, PolyFillType.pftNonZero);
I'm not sure why it worked for me, but I think it's because some Paths can share common edges, so with the default pftEvenOdd filling rule those regions get cut out.
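For reference, a minimal self-contained sketch of that call with the Clipper 6.x C# API (the UnionNonZero helper is my own naming, not part of the library):
using System.Collections.Generic;
using ClipperLib;
using Path = System.Collections.Generic.List<ClipperLib.IntPoint>;
using Paths = System.Collections.Generic.List<System.Collections.Generic.List<ClipperLib.IntPoint>>;

static class ClipperUnionExample
{
    public static Paths UnionNonZero(Paths subjects)
    {
        var clipper = new Clipper();
        clipper.AddPaths(subjects, PolyType.ptSubject, true); // true = closed paths
        var solution = new Paths();
        // NonZero keeps regions that EvenOdd would cancel out where
        // paths overlap or share common edges.
        clipper.Execute(ClipType.ctUnion, solution,
            PolyFillType.pftNonZero, PolyFillType.pftNonZero);
        return solution;
    }
}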

Move each UV vertex by offset/vector

I want to create a second UV set, and then move each UV vertex in an object by the vector u=0, v=1.0/number of vertices. The new UV vertex coordinates created for a 4-vertex plane should go like this: for vertex 0 (u=0, v=0), for vertex 1 (u=0, v=0.25), for vertex 2 (u=0, v=0.5), for vertex 3 (u=0, v=0.75), etc.
I have source code in C#:
Vector2[] UV2 = new Vector2[m.vertexCount];
float HalfTexelSize = (1f / (float)m.vertexCount) / 2f;
for (int i = 0; i < m.vertexCount; i++) {
    UV2[i] = new Vector2(0f, (float)i / (float)m.vertexCount) + new Vector2(0f, HalfTexelSize);
}
m.uv2 = UV2;
meshFilter.mesh = m;
As far as my research goes, there are no vectors in Python, and now I'm stuck. So far I came up with this:
import maya.cmds as cmds

cmds.polyUVSet(create=True, uvSet='map2')
vertexCount = cmds.polyEvaluate(v=True)
vertexCount_float = float(vertexCount)
HalfTextureSize = (1.0 / vertexCount / 2.0)
x = 1.0 / vertexCount
sel = cmds.ls(sl=1, fl=1)
for i in sel:
    i=0, i<sel
    cmds.polyEditUV(uValue=0.0, vValue=x)
But the output I get is the second UV set with every vertex at (0,0) in UV coordinates. Can anyone help me? Any MEL/Python solution would be appreciated.
There are no vectors in default Maya Python.
If you want a vanilla Python-only solution, this project is a single-file vector library.
You can also get 3-vectors from maya directly using the API:
from maya.api.OpenMaya import MVector
v1 = MVector(1,2,3)
v2 = MVector(3,4,5)
print v1 + v2
These are 3D vectors, but you can just ignore the third component.
For actually creating and editing the UV set, there are a couple of complexities to consider.
Just creating a new UV set gives you an empty set; the set exists, but it won't contain any UVs. The relationship between verts and UVs is not always 1:1: you can have more than one UV for a physical vert, and you can have un-mapped vertices. To create a 1-UV-per-vertex mapping, it's easiest to just apply a simple planar or sphere map.
You'll need to set the 'current uv set' so Maya knows which set you want to work on. Unfortunately, when you ask Maya for UV set names it gives you unicode, but when you set them it expects strings (at least in 2016; I think this is a bug).
You don't really need a vector for this, if your current code is any indication.
The final solution will look something like this:
import maya.cmds as cmds

# work on only one selected object at a time
selected_object = cmds.ls(sl=1, fl=1)[0]
vertexCount = cmds.polyEvaluate(selected_object, v=True)

# add a projection so you have a 1:1 vert:uv mapping
cmds.polyPlanarProjection(selected_object, cm=True)

# get the name of the current uv set
latest_uvs = cmds.polyUVSet(selected_object, q=True, auv=True)

# it's unicode by default; it has to be a str
new_uv = str(latest_uvs[-1])

# set the current uv set to the newly created one
cmds.polyUVSet(selected_object, e=True, cuv=True, uvs=new_uv)

vert_interval = 1.0 / vertexCount
for vert in range(vertexCount):
    uv_vert = "{}.map[{}]".format(selected_object, vert)
    cmds.polyEditUV(uv_vert, u=0, v=vert * vert_interval, relative=False, uvs=new_uv)

How to get the center point of a Face or a PlanarFace element in Revit

I'm writing a Revit macro to get the center point of a part (a floor part) to check whether it is inside a room or a space.
I couldn't get much out of the BoundingBox object, which gives me a point outside the part, so I tried to use the geometry element's internal faces, getting the mesh vertices, but I'm stuck calculating the mid point.
I'm using the rather naive algorithm shown in the snippet below, but it's giving me false results, as it seems to be affected by the initial default values of the min/max variables.
Any suggestions?
PS: DebugTools is a custom helper class of my own.
public void ZoneDetect()
{
    Document doc = this.ActiveUIDocument.Document;
    using (Transaction t = new Transaction(doc, "Set Rooms By Region"))
    {
        t.Start();
        IEnumerable<Part> fec =
            new FilteredElementCollector(doc)
                .OfClass(typeof(Part))
                .OfCategory(BuiltInCategory.OST_Parts)
                .Cast<Part>();
        foreach (Part p in fec)
        {
            Options op = new Options();
            op.View = doc.ActiveView;
            op.ComputeReferences = true;
            GeometryElement gm = p.get_Geometry(op);
            Solid so = gm.First() as Solid;
            PlanarFace fc = so.Faces.get_Item(0) as PlanarFace;
            foreach (PlanarFace f in so.Faces)
            {
                if (f.Normal == new XYZ(0, 0, -1)) fc = f;
            }
            XYZ max = new XYZ();
            XYZ min = new XYZ();
            int no = 0;
            foreach (XYZ vx in fc.Triangulate().Vertices)
            {
                // Just for debugging
                DebugTools.DrawModelTick(vx, doc, "Max");
                doc.Regenerate();
                TaskDialog.Show("Point:" + no.ToString(), vx.ToString());
                no++;
                // Comparing points
                if (vx.X > max.X) max = new XYZ(vx.X, max.Y, 0);
                if (vx.Y > max.Y) max = new XYZ(max.X, vx.Y, 0);
                if (vx.X < min.X) min = new XYZ(vx.X, min.Y, 0);
                if (vx.Y < min.Y) min = new XYZ(min.X, vx.Y, 0);
            }
            XYZ mid = new XYZ(max.X - min.X, max.Y - min.Y, 0);
            DebugTools.DrawModelTick(mid, doc, "Mid");
            DebugTools.DrawModelTick(max, doc, "Max");
            DebugTools.DrawModelTick(min, doc, "Min");
        }
        t.Commit();
    }
}
It seems like you're looking for the center of gravity of a polygon. An algorithm for that can be found here: Center of gravity of a polygon
Once you have a Face object, you can enumerate its edges to receive a list of vertex points. Use the longest of the EdgeLoops in the face. Collect all the points and make sure that they are in the right order (the start and end points of the edges might need to be swapped), as in the sketch below.
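A sketch of that approach (my own illustration, assuming the Revit 2014+ API; GetLongestLoopVertices is a hypothetical helper). AsCurveFollowingFace orients each edge consistently along the loop, which avoids swapping start/end points by hand:
IList<XYZ> GetLongestLoopVertices(Face face)
{
    // pick the longest edge loop, i.e. the outer boundary
    EdgeArray longest = null;
    foreach (EdgeArray loop in face.EdgeLoops)
        if (longest == null || loop.Size > longest.Size)
            longest = loop;

    // take the start point of each edge, oriented along the face
    var points = new List<XYZ>();
    foreach (Edge edge in longest)
        points.Add(edge.AsCurveFollowingFace(face).GetEndPoint(0));
    return points;
}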
Daren & Matt thanks a lot for your answers,
Since I'm dealing with rather simple shapes ( mainly rectangles ) I just needed to get a point roughly near the center to test whether it is inside a room, my problem was with the naive algorithm I was using which turned out to be wrong.
I corrected it as follows:
XYZ midSum = Max + Min;
XYZ mid = new XYZ(midSum.X / 2, midSum.Y / 2, 0);
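For completeness, a sketch that also avoids the seeding problem from my question, by initializing min/max from the first vertex instead of new XYZ() (which defaults to (0,0,0) and skews the result for faces that don't straddle the origin):
IList<XYZ> verts = fc.Triangulate().Vertices;
XYZ min = new XYZ(verts[0].X, verts[0].Y, 0);
XYZ max = min;
foreach (XYZ vx in verts)
{
    min = new XYZ(Math.Min(min.X, vx.X), Math.Min(min.Y, vx.Y), 0);
    max = new XYZ(Math.Max(max.X, vx.X), Math.Max(max.Y, vx.Y), 0);
}
XYZ mid = (min + max) / 2; // center of the 2D bounding box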
I will look into refining it using the link you've provided, but for now I will get on with finishing the task at hand.
Many thanks

getting vertex points of GeometryModel3D to draw a wireframe

I've loaded a 3D model using the Helix Toolkit like this:
modelGroupScull = importer.Load("C:\\Users\\Robert\\Desktop\\a.obj");
GeometryModel3D modelScull = (GeometryModel3D)modelGroupScull.Children[0];
and I also have _3DTools, which can draw lines from point to point in 3D space. Now, to draw a wireframe of my GeometryModel3D, I guess I have to cycle through its vertices and add them to a ScreenSpaceLines3D.
ScreenSpaceLines3D wireframe = new ScreenSpaceLines3D();
// need to cycle through all vertexes of modelScull as Points, to add them to wireframe
wireframe.Points.Add(new Point3D(1, 2, 3));
wireframe.Color = Colors.LightBlue;
wireframe.Thickness = 3;
Viewport3D1.Children.Add(wireframe);
But... how do I actually get these vertex points?
EDIT:
Thanks for the answer. It did add the points
ScreenSpaceLines3D wireframe = new ScreenSpaceLines3D();
MeshGeometry3D mg3 = (MeshGeometry3D)modelScull.Geometry;
foreach (Point3D point3D in mg3.Positions)
{
    wireframe.Points.Add(point3D);
}
wireframe.Color = Colors.LightBlue;
wireframe.Thickness = 1;
Viewport3D1.Children.Add(wireframe);
but the wireframe is messed up )
maybe someone knows of other ways to draw wireframes? )
Normally the triangles are drawn with index buffers (so shared vertices aren't duplicated). Take a look at the TriangleIndices.
If you do something like this (not tested):
MeshGeometry3D mg3 = (MeshGeometry3D)modelScull.Geometry;
for (int index = 0; index < mg3.TriangleIndices.Count; index += 3)
{
    ScreenSpaceLines3D wireframe = new ScreenSpaceLines3D();
    wireframe.Points.Add(mg3.Positions[mg3.TriangleIndices[index]]);
    wireframe.Points.Add(mg3.Positions[mg3.TriangleIndices[index + 1]]);
    wireframe.Points.Add(mg3.Positions[mg3.TriangleIndices[index + 2]]);
    wireframe.Points.Add(mg3.Positions[mg3.TriangleIndices[index]]);
    wireframe.Color = Colors.LightBlue;
    wireframe.Thickness = 1;
    Viewport3D1.Children.Add(wireframe);
}
But this can create some overdraw (two lines on the same coordinates) and will probably be very slow.
If you put each side into a list and run something like a Distinct over it, it will be better (see the sketch below).
The problem with ScreenSpaceLines3D is that it continues the line instead of creating one line (start/end).
If you can manage an algorithm that draws your model with one line, it will go faster.
Wireframes are very slow in WPF (because they are built from triangles).
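A sketch of the Distinct idea (my own illustration, untested; it assumes each consecutive pair of Points in a ScreenSpaceLines3D is drawn as an independent segment, as in the 3DTools implementation, and needs C# 7+ for the tuples):
var mg3 = (MeshGeometry3D)modelScull.Geometry;

// collect each undirected edge exactly once, keyed by its sorted index pair
var edges = new HashSet<(int, int)>();
for (int i = 0; i < mg3.TriangleIndices.Count; i += 3)
{
    int a = mg3.TriangleIndices[i];
    int b = mg3.TriangleIndices[i + 1];
    int c = mg3.TriangleIndices[i + 2];
    edges.Add((Math.Min(a, b), Math.Max(a, b)));
    edges.Add((Math.Min(b, c), Math.Max(b, c)));
    edges.Add((Math.Min(c, a), Math.Max(c, a)));
}

// one ScreenSpaceLines3D, two points per edge
var wireframe = new ScreenSpaceLines3D { Color = Colors.LightBlue, Thickness = 1 };
foreach (var (i0, i1) in edges)
{
    wireframe.Points.Add(mg3.Positions[i0]);
    wireframe.Points.Add(mg3.Positions[i1]);
}
Viewport3D1.Children.Add(wireframe);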
You should find the vertex points in the MeshGeometry3D.Positions property:
foreach (var point3D in ((MeshGeometry3D)modelScull.Geometry).Positions)

Local Position of Bone wrt Model

I have a requirement to get either position matrix or position vector of a bone (say wheel) with respect to my model (car).
What I have tried:
Vector3.Transform(mesh.BoundingSphere.Center, transforms[mesh.ParentBone.Index] * Matrix.CreateScale(o.Scaling))
The above doesn't give accurate results.
What you want is to calculate the absolute transforms for each bone. The CopyAbsoluteBoneTransformsTo method can do it for you.
It is equivalent to the following code:
/// <summary>Calculates the absolute bone transformation matrices in model space</summary>
private void calculateAbsoluteBoneTransforms() {
    // Obtain the local transform for the bind pose of all bones
    this.model.CopyBoneTransformsTo(this.absoluteBoneTransforms);

    // Convert the relative bone transforms into absolute transforms
    ModelBoneCollection bones = this.model.Bones;
    for (int index = 0; index < bones.Count; ++index) {
        // Take over the bone transform and apply its user-specified transformation
        this.absoluteBoneTransforms[index] =
            this.boneTransforms[index] * bones[index].Transform;

        // Calculate the absolute transform of the bone in model space.
        // Content processors sort bones so that parent bones always appear
        // before their children, thus this works like a matrix stack,
        // resolving the full bone hierarchy in minimal steps.
        ModelBone bone = bones[index];
        if (bone.Parent != null) {
            int parentIndex = bone.Parent.Index;
            this.absoluteBoneTransforms[index] *= this.absoluteBoneTransforms[parentIndex];
        }
    }
}
Taken from here.
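A short usage sketch with the built-in helper (my own illustration; it assumes a loaded Model field, and "wheel" stands in for whatever the bone is actually called in your model):
// resolve every bone's transform relative to the model root in one call
Matrix[] absoluteTransforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(absoluteTransforms);

// hypothetical bone name; look it up by whatever your rig calls it
Matrix wheelInModelSpace = absoluteTransforms[model.Bones["wheel"].Index];
Vector3 wheelPositionInModelSpace = wheelInModelSpace.Translation;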
