How to fix these weird transparent areas in a 3D model? - c#

Something is not working as it should. If you take a look at the screenshot you will see that the result is weird. The floor of the pavilion is rendered correctly, but the columns are kind of transparent, and the roof is completely weird. I used Assimp.NET to import this mesh from a .obj file. In other engines it looked correct. Another thing: if I set CullMode to Back, it culls the front faces?! I think it could be one of three things: the mesh was imported wrong, the z-buffer is not working, or maybe I need multiple world matrices (I'm using only one).
Does anybody know what this could be?!
Screenshot:
Here is some code:
DepthBuffer/DepthStencilView
var depthBufferDescription = new Texture2DDescription
{
Format = Format.D32_Float_S8X24_UInt,
ArraySize = 1,
MipLevels = 1,
Width = BackBuffer.Description.Width,
Height = BackBuffer.Description.Height,
SampleDescription = swapChainDescription.SampleDescription,
BindFlags = BindFlags.DepthStencil
};
var depthStencilViewDescription = new DepthStencilViewDescription
{
Dimension = SwapChain.Description.SampleDescription.Count > 1 || SwapChain.Description.SampleDescription.Quality > 0 ? DepthStencilViewDimension.Texture2DMultisampled : DepthStencilViewDimension.Texture2D
};
var depthStencilStateDescription = new DepthStencilStateDescription
{
IsDepthEnabled = true,
DepthComparison = Comparison.Always,
DepthWriteMask = DepthWriteMask.All,
IsStencilEnabled = false,
StencilReadMask = 0xff,
StencilWriteMask = 0xff,
FrontFace = new DepthStencilOperationDescription
{
Comparison = Comparison.Always,
PassOperation = StencilOperation.Keep,
FailOperation = StencilOperation.Keep,
DepthFailOperation = StencilOperation.Increment
},
BackFace = new DepthStencilOperationDescription
{
Comparison = Comparison.Always,
PassOperation = StencilOperation.Keep,
FailOperation = StencilOperation.Keep,
DepthFailOperation = StencilOperation.Decrement
}
};
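For completeness, here is a sketch of how such descriptions are typically turned into objects and bound, SharpDX-style; the names device, context and renderTargetView are assumptions, not taken from the question:
// Illustrative SharpDX-style setup; device/context/renderTargetView are assumed to exist.
var depthBuffer = new Texture2D(device, depthBufferDescription);
var depthStencilView = new DepthStencilView(device, depthBuffer, depthStencilViewDescription);
var depthStencilState = new DepthStencilState(device, depthStencilStateDescription);
context.OutputMerger.SetDepthStencilState(depthStencilState);
context.OutputMerger.SetRenderTargets(depthStencilView, renderTargetView);
// Clear the depth buffer every frame, or the z-test compares against stale data:
context.ClearDepthStencilView(depthStencilView, DepthStencilClearFlags.Depth, 1f, 0);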
Loading mesh files:
public static Mesh Stadafax_ModelFromFile(string path)
{
if (_importContext.IsImportFormatSupported(Path.GetExtension(path)))
{
var imported = _importContext.ImportFile(path, PostProcessSteps.Triangulate | PostProcessSteps.FindDegenerates | PostProcessSteps.FindInstances | PostProcessSteps.FindInvalidData | PostProcessSteps.JoinIdenticalVertices | PostProcessSteps.OptimizeGraph | PostProcessSteps.ValidateDataStructure | PostProcessSteps.FlipUVs);
Mesh engineMesh = new Mesh();
Assimp.Mesh assimpMesh = imported.Meshes[0];
foreach(Face f in assimpMesh.Faces)
{
engineMesh.Structure.Faces.Add(new Rendering.Triangle((uint)f.Indices[0], (uint)f.Indices[1], (uint)f.Indices[2]));
}
List<Vector3D>[] uv = assimpMesh.TextureCoordinateChannels;
for(int i = 0; i < assimpMesh.Vertices.Count; i++)
{
engineMesh.Structure.Vertices.Add(new Vertex(new Vector4(assimpMesh.Vertices[i].X, assimpMesh.Vertices[i].Y, assimpMesh.Vertices[i].Z, 1), RenderColorRGBA.White, new Vector2(uv[0][i].X, uv[0][i].Y)));
}
return engineMesh;
}
else
{
NoëlEngine.Common.Output.Log("Model format not supported!", "Importeur", true);
return null;
}
}
If anybody has even the smallest idea what this could be, please just write a comment.

What you see are polygons actually behind others still being drawn above them.
When you configure the depth buffer via DepthStencilStateDescription, you set up the DepthComparison to Comparison.Always. This is not what you want, you want to use Comparison.Less.
What's the logic behind it? Every depth value for a pixel is checked whether it can actually write to the depth buffer. This check is configured with the Comparison you specified.
Comparison.Always always allows the new value to be written. So no matter if a polygon is actually behind others or above them or whatever, it will always override ("draw above") what's already there - even if it's behind it spatially.
Comparison.Less only writes the value if it is less than the current value in the depth buffer. Don't forget that smaller depth values are closer to the viewer. So a polygon closer to an existing one will override ("draw above") the old one. But if it is behind it, it won't draw. That's exactly what you want.
You can also guess what the other Comparison enumerations now do, and play around with them :)
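For reference, a minimal sketch of the corrected state description, using the same SharpDX-style API as the question; only DepthComparison changes, and the stencil members are omitted since stencil is disabled:
var depthStencilStateDescription = new DepthStencilStateDescription
{
IsDepthEnabled = true,
// Less: a pixel is only written if it is closer than what the depth buffer already holds
DepthComparison = Comparison.Less,
DepthWriteMask = DepthWriteMask.All,
IsStencilEnabled = false
};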


Skeletal skinning algorithm bunches up everything at the model's feet

I'm trying to implement skinning using skeletal animations stored in a Collada file, and while I managed to load it and render the model without skinning correctly, I can't figure out why when I apply my skinning algorithm all the parts get bunched up at the model's feet, or extremely deformed. The entire project is stored on GitHub for reference (the skinning branch).
I believe the vertex shader is correct, since if I pass identity transforms to the bones I get my default-pose model; it's the calculation of the bone transforms from the skeletal animation in the .dae file that's somehow broken. This is what my problem looks like, versus how the model looks in the default pose:
I believe my problem is somewhere while applying the recursive bone transforms:
public void Update(double deltaSec)
{
if (CurrentAnimationName is null) return;
var anim = animations[CurrentAnimationName];
currentAnimationSec = (currentAnimationSec + deltaSec) % anim.Duration.TotalSeconds;
void calculateBoneTransforms(BoneNode boneNode, Matrix4x4 parentTransform)
{
var bone = anim.Bones.FirstOrDefault(b => b.Id == boneNode.Id);
var nodeTransform = bone?[TimeSpan.FromSeconds(currentAnimationSec)] ?? boneNode.Transform;
var globalTransform = parentTransform * nodeTransform;
if (boneNode.Id >= 0)
for (int meshIdx = 0; meshIdx < perMeshData.Length; ++meshIdx)
perMeshData[meshIdx].FinalBoneMatrices[boneNode.Id] = globalTransform * perMeshData[meshIdx].boneOffsetMatrices[boneNode.Id];
foreach (var child in boneNode.Children)
calculateBoneTransforms(child, globalTransform);
}
calculateBoneTransforms(rootBoneNode, Matrix4x4.Identity);
}
Or when building the recursive structure of bone data with their transforms:
BoneNode visitTransforms(Node node, Matrix4x4 mat)
{
var boneNode = new BoneNode
{
Children = new BoneNode[node.ChildCount],
Id = boneIds.TryGetValue(node.Name, out var id) ? id : -1,
Transform = Matrix4x4.Transpose(node.Transform.ToNumerics()),
};
mat = node.Transform.ToNumerics() * mat;
foreach (var meshIndex in node.MeshIndices)
transformsDictionary[scene.Meshes[meshIndex]] = mat;
int childIdx = 0;
foreach (var child in node.Children)
boneNode.Children[childIdx++] = visitTransforms(child, mat);
return boneNode;
}
rootBoneNode = visitTransforms(scene.RootNode, Matrix4x4.Identity);
I believe the bone to vertex weights are gathered and uploaded to the shader correctly, and that the final bone array uniform is uploaded correctly (but maybe not calculated correctly). I'm also not sure of the order of matrix multiplications and whether or not to transpose anything when uploading to the shader, though I've tried it both ways every attempt.
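For context, this is the computation the shader is expected to perform per vertex; a CPU-side C# sketch of the standard four-weight skinning sum (my illustration, not code from the project):
// Illustrative CPU-side equivalent of the vertex-shader skinning sum,
// assuming four bone influences per vertex (System.Numerics types).
static Vector4 Skin(Vector4 position, int[] boneIds, float[] weights, Matrix4x4[] finalBoneMatrices)
{
var result = Vector4.Zero;
for (int i = 0; i < 4; i++)
result += Vector4.Transform(position, finalBoneMatrices[boneIds[i]]) * weights[i];
return result;
}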
If anyone runs into a similar issue: my problem was that my keyframe bone transforms were transposed compared to how the rest of the chain of transforms was calculated, so when I multiplied them everything went crazy. So, keep track of which matrices are row-major and which are column-major!
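One way to avoid this class of bug is to funnel every Assimp matrix through a single conversion helper, so keyframe matrices, node transforms and offset matrices all end up in the same convention before being multiplied. A minimal sketch, assuming the ToNumerics() extension used above:
// Hypothetical helper: one single place decides whether Assimp matrices get transposed.
static System.Numerics.Matrix4x4 ToEngineMatrix(Assimp.Matrix4x4 m)
=> System.Numerics.Matrix4x4.Transpose(m.ToNumerics());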

How to filter values in a point array

I have a constantly feeding point array with a length of 4, and want to filter certain "outliers" in the array.
I'm creating a VR/AR app with Opencvforunity and Unity.
Using a live feed from the webcam, I have a 4-element point array which updates constantly and contains x, y 2D coordinates representing the four corners of a tracked object. I'm using them as source values to draw a Rect in Unity.
Each slot in the array contains data such as this:
{296.64151, 88.096649}
However, Unity throws errors and crashes when a value in the array has
negative values (sometimes happens because of tracking error)
large values exceeding the canvas size (same reason, currently using 1280 x 720)
An example of a "bad value" will be like this :
{-1745.10614, 46.908913} <- negative / big value on X
{681.00519, 1234.15828} <- big value on Y
So I somehow have to create a filter for the array to make the app work.
The order should not be altered, and the data constantly updates, so ignoring/skipping bad values would be optimal. I'm new to C# and I have searched, but with no luck for "point array".
Here's my code:
Point[] ptsArray = patternTrackingInfo.points2d.toArray();
pt1 = ptsArray[0];
pt2 = ptsArray[2];
pt3 = new OpenCVForUnity.CoreModule.Point(ptsArray[2].x + 5, ptsArray[2].y + 5);
for (int i = 0; i < 4; i++)
{
cropRect = new OpenCVForUnity.CoreModule.Rect(pt1, pt3);
}
pt1 represents the left-top corner and pt2 for right-bottom.
I heard that the right-bottom point is exclusive in OpenCV itself, so I tried to add a new point for that (pt3), but it still crashes, so I believe it is not related to that matter.
Any suggestions for creating a filter for a point array will be very much helpful. Thank you.
I would just create a new list of Points and loop through the existing list, adding only the valid points to the new list. Then that becomes the list that you convert to an array for your OpenCV calls.
List<Point> filteredList = new List<Point>();
Point[] allPoints = patternTrackingInfo.points2d.toArray();
for (int i = 0; i < allPoints.Length; i++)
{
// Skip outliers: negative coordinates or values outside the 1280 x 720 canvas
if (allPoints[i].x < 0 || allPoints[i].y < 0 || allPoints[i].x > 1280 || allPoints[i].y > 720)
continue;
filteredList.Add(allPoints[i]);
}
Point[] ptsArray = filteredList.ToArray();
pt1 = ptsArray[0];
pt2 = ptsArray[2];
pt3 = new OpenCVForUnity.CoreModule.Point(ptsArray[2].x + 5, ptsArray[2].y + 5);
for (int i = 0; i < 4; i++)
{
cropRect = new OpenCVForUnity.CoreModule.Rect(pt1, pt3);
}
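Since the question notes that the corner order must not be altered (and filtering can drop corners and shift the remaining ones), an alternative is to clamp bad values into the canvas instead of removing them, so the array always keeps four corners in order. A small sketch of my own, assuming the 1280 x 720 canvas size mentioned in the question:
// Clamp each coordinate into the canvas instead of dropping the point,
// so the array keeps its length and corner order.
Point[] pts = patternTrackingInfo.points2d.toArray();
for (int i = 0; i < pts.Length; i++)
{
pts[i].x = Math.Max(0, Math.Min(1280, pts[i].x));
pts[i].y = Math.Max(0, Math.Min(720, pts[i].y));
}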

C# WinForms Chart Control: get Size,Region,Position of Bar

Is there a way to get the rectangles of the stacked-column chart bars?
This code snippet shows how it can work, but it's very ugly:
var points = new List<Point>();
for (int x = 0; x < chart.Size.Width; x++)
{
for (int y = 0; y < chart.Size.Height; y++)
{
var hp = chart.HitTest(x, y, false, ChartElementType.DataPoint);
var result = hp.Where(h => h.Series?.Name == "Cats");
if (result.Count() > 0)
{
points.Add(new Point(x, y));
}
}
}
var bottomright = points.First();
var topleft = points.Last();
I will try to describe my purpose:
I would like to create a chart from various test results and make it available as an HTML file. The generated chart is inserted as an image file in the HTML document. Now I would like to link each part of a bar in the chart to an external document. Since the graphic is static, my only option is the HTML "map" element to make an area into a link. The "map" element requires a rectangle, i.e. its coordinates. That's the reason why I need the coordinates of each part of a bar.
I have to mention that I am not really familiar with the Chart control yet.
The graphic is generated on a trial basis.
[SOLVED]
I got the solution:
var area = chart.ChartAreas[0]; // the chart area the pixel positions are computed against
var stackedColumns = new List<Tuple<string, string, Rectangle>>();
for (int p = 0; p < chart.Series.Select(sm => sm.Points.Count).Max(); p++)
{
var totalPoints = 0;
foreach (var series in chart.Series)
{
var width = int.Parse(series.GetCustomProperty("PixelPointWidth"));
var x = (int)area.AxisX.ValueToPixelPosition(p + 1) - (width / 2);
int y = (int)area.AxisY.ValueToPixelPosition(totalPoints);
totalPoints += series.Points.Count > p ? (int)series.Points[p].YValues[0] : 0;
int y_total = (int)area.AxisY.ValueToPixelPosition(totalPoints);
var rect = new Rectangle(x, y_total, width, Math.Abs(y - y_total));
stackedColumns.Add(new Tuple<string, string, Rectangle>(series.Name, series.Points.ElementAtOrDefault(p)?.AxisLabel, rect));
}
}
This workaround works for stacked columns whose points start at x-axis = 0.
Just note that the PixelPointWidth property has to be set manually to get the right width; I have not yet found a way to get the default bar width.
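For example, you can set it manually like this; the value 40 here is an arbitrary assumption, not something derived from the chart:
// Give every series an explicit pixel width so GetCustomProperty can read it back.
foreach (var series in chart.Series)
series.SetCustomProperty("PixelPointWidth", "40");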
This is extremely tricky and I really wish I knew how to get the bounds from some chart functionality!
Your code snippet is actually a good start for a workaround. I agree, though, that it has issues:
It is ugly
It doesn't always work
It has terrible performance
Let's tackle these issues one by one:
Yes it is ugly, but then that's the way of workarounds. My solution is even uglier ;-)
There are two things I found don't work:
You can't call a HitTest during a Pre/PostPaint event or terrible things will happen, like some Series going missing, StackOverflowExceptions or other crashes.
The results for the widths of the last Series are off by 1-2 pixels.
The performance of testing each pixel in the chart will be terrible even for small charts, and it gets worse and worse as you enlarge the chart. This is relatively easy to prevent, though:
What we are searching are bounding rectangles for each DataPoint of each Series.
A rectangle is defined by left and right or width plus top and bottom or height.
We can get precise values for top and bottom by using the axis function ValueToPixelPosition feeding in the y-value and 0 for each point. This is simple and cheap.
With that out of the way we still need to find the left and right edges of the points. To do so, all we need to do is test along the zero-line. (All points will either start or end there!)
This greatly reduces the number of tests.
I have decided to do the testing for each series separately, restarting at 0 each time. For even better performance one could do it all in one go.
Here is a function that returns a List<Rectangle> for a given Series:
List<Rectangle> GetColumnSeriesRectangles(Series s, Chart chart, ChartArea ca)
{
ca.RecalculateAxesScale();
List<Rectangle> rex = new List<Rectangle>();
int loff = s == chart.Series.Last() ? 2 : 0;
int y0 = (int)ca.AxisY.ValueToPixelPosition(0);
int left = -1;
int right = -1;
foreach (var dp in s.Points)
{
left = -1;
int delta = 0;
int off = dp.YValues[0] > 0 ? delta : -delta;
for (int x = 0; x < chart.Width; x++)
{
var hitt = chart.HitTest(x, y0 + off);
if (hitt.ChartElementType == ChartElementType.DataPoint &&
((DataPoint)hitt.Object) == dp)
{
if (left < 0) left = x;
right = x;
}
else if (left > 0 && right > left) break;
}
int y = (int)ca.AxisY.ValueToPixelPosition(dp.YValues[0]);
rex.Add(new Rectangle(left, Math.Min(y0, y),
right - left + 1 - loff, Math.Abs(y - y0)));
left = -1;
}
return rex;
}
A few notes:
I start by doing a RecalculateAxesScale because we can't HitTest before the current layout has been calculated.
I use a helper variable loff to hold the offset for the width in the last Series.
I start searching at the last x coordinate, as the points should all lie in sequence. If they don't, because you have used funny x-values or inserted points, you may need to start at 0 instead.
I use y0 as the baseline of the zero values for both the hittesting y and also the points' base.
I use a little Math to get the bounds right for both positive and negative y-values.
Here is a structure to hold those rectangles for all Series and code to collect them:
Dictionary<string, List<Rectangle>> ChartColumnRectangles = null;
Dictionary<string, List<Rectangle>> GetChartColumnRectangles(Chart chart, ChartArea ca)
{
Dictionary<string, List<Rectangle>> allrex = new Dictionary<string, List<Rectangle>>();
foreach (var s in chart.Series)
{
allrex.Add(s.Name, GetColumnSeriesRectangles(s, chart, ca));
}
return allrex;
}
We need to re-calculate the rectangles whenever we add points or resize the chart; also whenever the axis view changes. The common code for AxisViewChanged, ClientSizeChanged, Resize and any spot you add or remove points could look like this:
Chart chart = sender as Chart;
ChartColumnRectangles = GetChartColumnRectangles(chart, chart.ChartAreas[0]);
Let's test the result with a Paint event:
private void chart1_Paint(object sender, PaintEventArgs e)
{
Graphics g = e.Graphics;
chart1.ApplyPaletteColors();
foreach (var kv in ChartColumnRectangles)
{
foreach (var r in kv.Value)
g.DrawRectangle(Pens.Black, r);
}
}
Here it is in action:
Well, I've been down this path, and the BIG issue for me is that the custom property 'PixelPointWidth' is just that: it is custom. You cannot retrieve it unless you've set it. I needed the width of the item and had to scwag/calculate it myself. Keep in mind that many charts can be panned/zoomed, so once you go down this path you need to recalculate it and set it in the chart's prepaint events.
Here is a crude little function I made (it is more verbose than needed, for educational purposes, and has no error handling :)):
private int CalculateChartPixelPointWidth(Chart chart, ChartArea chartArea, Series series)
{
// Get right side - takes some goofy stuff - as the pixel location isn't available
var areaRightX = Math.Round(GetChartAreaRightPositionX(chart, chartArea));
var xValue = series.Points[0].XValue;
var xPixelValue = chartArea.AxisX.ValueToPixelPosition(xValue);
var seriesLeftX = chart.Location.X + xPixelValue;
var viewPointWidth = Math.Round((areaRightX - seriesLeftX - (series.Points.Count * 2)) / series.Points.Count, 0);
return Convert.ToInt32(viewPointWidth);
}
And this as well:
private double GetChartAreaRightPositionX(Chart chart, ChartArea area)
{
var xLoc = chart.Location.X;
return xLoc + (area.Position.Width + area.Position.X) / 100 * chart.Size.Width;
}
The reason I'm calculating this is because I need to draw some graphical overlays on top of the normal chart item objects (my own rendering for my own purposes).
In the 'prepaint' event for the chart, I need to calculate the 'PixelPointWidth' that matches the current chart view (which might be panned/zoomed). I then use that value to SET the chart custom property to match, such that the normal chart entities and MINE are correctly aligned/scaled (this ensures we're in exactly the right x-axis position):
In my prepaint event, I do the following - just prior to drawing my graphical entities:
// Pretty close scwag . . .
var viewPointWidth = CalculateChartPixelPointWidth(e.Chart, e.Chart.ChartAreas[0], e.Chart.Series[0]);
// Set the custom property and use the same point width for my own entities . .
chart1.Series[0].SetCustomProperty("PixelPointWidth", viewPointWidth.ToString("D"));
// . . . now draw my entities below . . .

Visual Studio C# chart with only one axis

I'm trying to create, in C# in Visual Studio, a nice way to show the minimum, maximum and actual value of a "variable" (Variable is a class). I was trying to use the charts to do that, but I have two problems:
1) It shows in 2D and I only need one dimension.
2) I can't write tags on the values, in this case to show which is the minimum, the maximum and the current value.
Is there a SeriesChartType that does that?
I would appreciate ideas. Thanks!
It is not so much the chart type but the various styling options you need to play with.
Here is an example using ChartType.Point:
// no legend:
chart.Legends.Clear();
// a couple of short references:
ChartArea ca = chart.ChartAreas[0];
Series S1 = chart.Series[0];
// no y-axis:
ca.AxisY.Enabled = AxisEnabled.False;
ca.AxisY.Minimum = 0;
ca.AxisY.Maximum = 1;
// use your own values:
ca.AxisX.Minimum = 0;
ca.AxisX.Maximum = 100;
// style the ticks, use your own values:
ca.AxisX.MajorTickMark.Size = 7;
ca.AxisX.MajorTickMark.Interval = 10;
ca.AxisX.MinorTickMark.Enabled = true;
ca.AxisX.MinorTickMark.Size = 3;
ca.AxisX.MinorTickMark.Interval = 2;
// I turn the axis labels off.
ca.AxisX.LabelStyle.Enabled = false;
// If you want to show them pick a reasonable Interval!
ca.AxisX.Interval = 1;
// no gridlines
ca.AxisY.MajorGrid.Enabled = false;
ca.AxisX.MajorGrid.Enabled = false;
// the most logical type
// note that you can change colors, sizes, shapes and marker styles..
S1.ChartType = SeriesChartType.Point;
// display x-value above; make sure you have enough room!
S1.Label = "#VALX";
// a few test data:
S1.Points.AddXY(1, 0.1);
S1.Points.AddXY(11, 0.1);
S1.Points.AddXY(17, 0.1);
S1.Points.AddXY(81, 0.1);
Note that you can play with the Series.SmartLabelStyle to display values when they are too dense!
I add the DataPoints at a y-value of 0.1. You can change that to move the points up or down a little.
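If you want the tags from the question rather than bare values, you can label individual points instead of the whole series. A small sketch extending the code above (the data values are made up):
// Label each point individually instead of using the series-wide Label:
S1.Points.AddXY(5, 0.1);   // minimum
S1.Points.AddXY(95, 0.1);  // maximum
S1.Points.AddXY(42, 0.1);  // current value
S1.Points[0].Label = "Min: #VALX";
S1.Points[1].Label = "Max: #VALX";
S1.Points[2].Label = "Now: #VALX";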

Display marks in contour points

I create a single contour and add it to the chart, add points with label text, and also subscribe to the GetSeriesMark event, but the text is not displayed and the event never gets fired:
Contour contour1 = new Contour();
contour1.IrregularGrid = true;
//
// contour1
contour1.Brush.Color = Color.FromArgb(68, 102, 163);
contour1.ColorEach = false;
contour1.EndColor = Color.FromArgb(192, 0, 0);
contour1.FillLevels = checkEditFillLevels.Checked;
//
//
contour1.Marks.Style = MarksStyles.Label;
contour1.Marks.Visible = true;
//
//
contour1.NumLevels = 8;
contour1.PaletteMin = 0;
contour1.PaletteStep = 0;
contour1.PaletteStyle = PaletteStyles.Pale;
//
//
contour1.Pen.Color = Color.FromArgb(192, 192, 192);
contour1.Pen.Style = DashStyle.Dot;
//
//
contour1.Pointer.HorizSize = 2;
//
//
contour1.Pointer.Pen.Visible = false;
contour1.Pointer.Style = PointerStyles.Rectangle;
contour1.Pointer.VertSize = 2;
contour1.Pointer.Visible = true;
contour1.StartColor = Color.FromArgb(255, 255, 192);
contour1.Title = "contour1";
Adding points is done with this:
contour1.Add(x, y, z, "My Point 1");
Is there a way to display marks on the exact points in the Contour, and moreover is there a way to display marks only on specific points in the Contour (some points are actual data, others are made using interpolation to be able to show the contour)?
I'm afraid not: the Contour series calculates and displays isolines from a custom array of X, Y and Z points. Levels are calculated automatically from the user's data. What would you like to get, exactly? You might be interested in using Annotation tools. Here you can find an example about custom annotation tool positioning.
Since it is not possible to mark single points in a Contour (see @Narcís Calvet's answer), I ended up adding a Points series with marks on them.
However, I still wanted only the Contour levels to be shown in the legend, and the X axis to display its values instead of the marks of the Points series, so I needed to add the following lines:
tChart1.Legend.LegendStyle = LegendStyles.Values;
tChart1.Legend.Series = _currentContour;
tChart1.Axes.Bottom.Labels.Style = AxisLabelStyle.Value;
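For completeness, a minimal sketch of that extra Points series, assuming TeeChart's standard Styles.Points API (the x/y values and mark text are placeholders):
// A separate Points series carries marks only for the real (non-interpolated) data:
var markedPoints = new Steema.TeeChart.Styles.Points(tChart1.Chart);
markedPoints.Marks.Visible = true;
markedPoints.ShowInLegend = false; // keep only the Contour levels in the legend
markedPoints.Add(x, y, "My Point 1");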
