XNA performance - C#

I'm writing a Geometry Wars inspired game, except with added 2D rigid body physics, AI pathfinding, waypoint analysis, line-of-sight checks, load balancing, etc. With around 80-100 enemies on screen it runs reasonably fast with all of that enabled, but performance completely breaks down once you reach a total of about 250 objects (150 enemies). I've searched for any O(n^2) parts in the code, but there don't seem to be any left. I'm also using spatial grids.
Even if I disable pretty much everything from the supposedly expensive AI-related processing, it doesn't seem to matter; it still breaks down at 150 enemies.
Now, I implemented all the code from scratch, currently even the matrix multiplication code, and I'm relying almost completely on the GC, as well as using C# closures for some things, so I expect this to be seriously far from optimized. Still, it doesn't make sense to me that with about 1/15 of the processing work but double the objects, the game suddenly slows down to a crawl. Is this normal? How is the XNA platform supposed to scale as far as the number of objects being processed is concerned?
I remember that a slerp-spinning-cube demo I did at first could handle more than 1000 cubes at once, so I think I'm doing something wrong.
edit:
Here's the grid structure's class
public abstract class GridBase
{
    public const int WORLDHEIGHT = (int)AIGridInfo.height;
    public const int WORLDWIDTH = (int)AIGridInfo.width;
    protected float cellwidth;
    protected float cellheight;
    int no_of_col_types;

    // a dictionary of lists that gets cleared every frame
    // 3 (= no_of_col_types) groups of objects (enemy side, player side, neutral)
    // 4000 initial Dictionary hash positions for each group
    // I have also tried using an array of lists of 100*100 cells
    // with pretty much identical results
    protected Dictionary<CoordsInt, List<Collidable>>[] grid;

    public GridBase(float cellwidth, float cellheight, int no_of_col_types)
    {
        this.no_of_col_types = no_of_col_types;
        this.cellheight = cellheight;
        this.cellwidth = cellwidth;
        grid = new Dictionary<CoordsInt, List<Collidable>>[no_of_col_types];
        for (int u = 0; u < no_of_col_types; u++)
            grid[u] = new Dictionary<CoordsInt, List<Collidable>>(4000);
    }

    public abstract void InsertCollidable(Collidable c);
    public abstract void InsertCollidable(Grid_AI_Placeable aic);

    // gets called in the update loop
    public void Clear()
    {
        for (int u = 0; u < no_of_col_types; u++)
            grid[u].Clear();
    }

    // gets the grid cell of the left down corner
    protected void BaseCell(Vector3 v, out int gx, out int gy)
    {
        gx = (int)((v.X + (WORLDWIDTH / 2)) / cellwidth);
        gy = (int)((v.Y + (WORLDHEIGHT / 2)) / cellheight);
    }

    // gets all cells covered by the AABB
    protected void Extent(Vector3 pos, float aabb_width, float aabb_height, out int totalx, out int totaly)
    {
        var xpos = pos.X + (WORLDWIDTH / 2);
        var ypos = pos.Y + (WORLDHEIGHT / 2);
        totalx = -(int)(xpos / cellwidth) + (int)((xpos + aabb_width) / cellwidth) + 1;
        totaly = -(int)(ypos / cellheight) + (int)((ypos + aabb_height) / cellheight) + 1;
    }
}
public class GridBaseImpl1 : GridBase
{
    public GridBaseImpl1(float widthx, float widthy)
        : base(widthx, widthy, 3)
    {
    }

    // adds a collidable to the grid /
    // caches for intersection test
    // checks if it should be tested to prevent penetration /
    // tests penetration
    // updates close, intersecting, touching lists
    // Collidable is an interface for all objects that can be tested geometrically
    // the dictionary is indexed by some simple struct that wraps the row and column number in the grid
    public override void InsertCollidable(Collidable c)
    {
        // some tag so that objects don't get checked more than once
        Grid_Query_Counter.current++;
        // the AABB is allocated on the heap
        var aabb = c.CollisionAABB;
        if (aabb == null) return;

        int gx, gy, totalxcells, totalycells;
        BaseCell(aabb.Position, out gx, out gy);
        Extent(aabb.Position, aabb.widthx, aabb.widthy, out totalxcells, out totalycells);

        // gets which groups to test this object with in an IEnumerable (from a statically created array)
        var groupstestedagainst = CollidableCalls.GetListPrevent(c.CollisionType).Select(u => CollidableCalls.group[u]);
        var groups_tested_against = groupstestedagainst.Distinct();
        var own_group = CollidableCalls.group[c.CollisionType];

        foreach (var list in groups_tested_against)
            for (int i = -1; i < totalxcells + 1; i++)
                for (int j = -1; j < totalycells + 1; j++)
                {
                    var index = new CoordsInt((short)(gx + i), (short)(gy + j));
                    if (grid[list].ContainsKey(index))
                        foreach (var other in grid[list][index])
                        {
                            if (Grid_Query_Counter.Check(other.Tag))
                            {
                                // marks the pair as close, I've tried only keeping the 20 closest but it's still slow
                                other.Close.Add(c);
                                c.Close.Add(other);
                                // caches the pair so that checking if the pair intersects
                                // doesn't go through the grid structure loop again
                                c.CachedIntersections.Add(other);
                                var collision_function_table_id = c.CollisionType * CollidableCalls.size + other.CollisionType;
                                // gets the function to use on the pair for testing penetration
                                // the function is in a delegate array statically created to simulate multiple dispatch
                                // the function decides what coarse test to use until descending to some complete geometric query
                                var prevent_delegate = CollidableCalls.preventfunctions[collision_function_table_id];
                                if (prevent_delegate == null) { Grid_Query_Counter.Put(other.Tag); continue; }

                                var a = CollidableCalls.preventfunctions[collision_function_table_id](c, other);
                                // if the query returns true mark as touching
                                if (a) { c.Contacted.Add(other); other.Contacted.Add(c); }
                                // marks it as tested in this query
                                Grid_Query_Counter.Put(other.Tag);
                            }
                        }
                }

        // adds it to the grid; if the key doesn't exist it creates the list first
        for (int i = -1; i < totalxcells + 1; i++)
            for (int j = -1; j < totalycells + 1; j++)
            {
                var index = new CoordsInt((short)(gx + i), (short)(gy + j));
                if (!grid[own_group].ContainsKey(index)) grid[own_group][index] = new List<Collidable>();
                grid[own_group][index].Add(c);
            }
    }

    [...]
}

First: profile your code, even if you just use manually inserted timestamps around the blocks you're interested in. I prefer the profiler that comes built into Visual Studio Pro.
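For the timestamp route, here is a minimal sketch with Stopwatch (UpdateCollisions is just a stand-in for whatever block you want to measure, not something from your code):

// Minimal manual-timing sketch; UpdateCollisions() is a placeholder for the
// block under suspicion. Average over many frames before drawing conclusions.
var sw = System.Diagnostics.Stopwatch.StartNew();
UpdateCollisions();
sw.Stop();
System.Diagnostics.Debug.WriteLine(
    string.Format("collisions: {0:F2} ms", sw.Elapsed.TotalMilliseconds));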
However, based on your description, I would assume your problem is too many draw calls. Once you exceed 200-400 draw calls per frame, performance can drop dramatically. Try batching your rendering and see if this improves performance.
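If you are currently issuing one draw call per entity, a sketch of what batching looks like with SpriteBatch (spriteBatch, enemies and enemyTexture are assumed names, not from the question):

// One Begin/End pair per frame batches all sprite draws into a few device
// calls instead of one per object. Names here are placeholders.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
foreach (var enemy in enemies)
    spriteBatch.Draw(enemyTexture, enemy.Position, Color.White);
spriteBatch.End();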

You can use a profiler such as ANTS Profiler to see what may be the problem.
Without any code there's not much more I can do.

Unity - Combining data increases RAM usage -> Garbage Collector causes frame drops

Right now I'm working on a QuadTree LOD system for planets. In general, everything is working quite nicely.
As a rough description of how the mesh is generated:
Every node generated by the QuadTree contains a container called "NodeMeshData", which holds all the information needed to generate a simple Mesh:
public class NodeMeshData
{
    public Vector3[] vertices;
    public int[] indices;
    public Vector2[] uvs;

    public void Clear()
    {
        vertices = null;
        indices = null;
        uvs = null;
    }
}
Whenever the mesh needs to be regenerated, all QuadTree nodes without any children (leaf nodes) are asked for their NodeMeshData. All of these are put into one array of NodeMeshData objects.
public void UpdateQuadTree(Vector3 playerPosition)
{
    _quadTree.UpdateTree(playerPosition);
    NodeMeshData[] data = _quadTree.GetNodeMeshData().ToArray();
    CombinedMeshData.Combine(data);
}
(CombinedMeshData is a NodeMeshData property of the class this method belongs to.)
The problem lies in combining all these NodeMeshData objects into one to generate the final mesh: the extension method "Combine". By process of elimination I found out that it must be this method that causes my problem.
public static class NodeMeshHelper
{
    public static void Combine(this NodeMeshData newData, IList<NodeMeshData> nodeMeshData)
    {
        int verticesCount = nodeMeshData.Sum(static nmd => nmd.vertices.Length);
        int indicesCount = nodeMeshData.Sum(static nmd => nmd.indices.Length);
        int uvsCount = nodeMeshData.Sum(static nmd => nmd.uvs.Length);
        List<Vector3> vertices = new(verticesCount);
        List<int> indices = new(indicesCount);
        List<Vector2> uvs = new(uvsCount);

        int lastIndex = 0;
        foreach (var meshData in nodeMeshData)
        {
            vertices.AddRange(meshData.vertices);
            int[] shiftedIndices = meshData.indices.Select(index => index + lastIndex).ToArray();
            lastIndex += meshData.indices.Last() + 1;
            indices.AddRange(shiftedIndices);
            uvs.AddRange(meshData.uvs);
        }

        newData.vertices = vertices.ToArray();
        newData.indices = indices.ToArray();
        newData.uvs = uvs.ToArray();
    }
}
Whenever it is called and executed, the RAM usage rises. After a few seconds the garbage collector kicks in and cleans up all the unused data, as it is supposed to do, but this results in a nasty frame drop every single time.
My idea is that all these declarations of new Lists, or the AddRange calls, cause the rising RAM usage. From asking my colleagues I learned that declaring Lists with a fixed capacity should eliminate the internal re-allocation of the arrays within the List objects when AddRange is called. But as you can see, I already do that and it doesn't help. I also tried changing the NodeMeshHelper into a non-static class with fields for the lists or for the final NodeMeshData result. It didn't help either.
What can I do to fix this? Is there a simple way, e.g. changing the "Combine" method so that it doesn't allocate new memory every single time? Or do I need to rethink my QuadTree or mesh creation algorithm completely (I hope not)? What about compute shaders?
Whoever is interested in the full code:
Github
UPDATE
I changed the NodeMeshHelper to no longer use Lists; now it uses arrays. I did that because it seems to improve performance a bit, and I hope it is a step in the right direction towards fixing my major problem. Yes, I know there are still "new" statements for the arrays.
public static class NodeMeshHelper
{
    public static void Combine(this NodeMeshData newData, NodeMeshData[] nodeMeshData)
    {
        int vertexIterator = 0;
        int indexIterator = 0;
        int uvIterator = 0;

        int verticesCount = 0;
        foreach (var nmd in nodeMeshData)
            verticesCount += nmd.vertices.Length;

        int indicesCount = 0;
        foreach (var nmd in nodeMeshData)
            indicesCount += nmd.indices.Length;

        int uvsCount = 0;
        foreach (var nmd in nodeMeshData)
            uvsCount += nmd.uvs.Length;

        Vector3[] vertices = new Vector3[verticesCount];
        int[] indices = new int[indicesCount];
        Vector2[] uvs = new Vector2[uvsCount];

        int lastIndex = 0;
        foreach (var meshData in nodeMeshData)
        {
            // vertices
            foreach (Vector3 vertex in meshData.vertices)
            {
                vertices[vertexIterator] = vertex;
                vertexIterator++;
            }

            // indices
            foreach (int index in meshData.indices)
            {
                indices[indexIterator] = index + lastIndex;
                indexIterator++;
            }
            lastIndex += meshData.indices[^1] + 1;

            // uvs
            foreach (Vector2 uv in meshData.uvs)
            {
                uvs[uvIterator] = uv;
                uvIterator++;
            }
        }

        newData.vertices = vertices;
        newData.indices = indices;
        newData.uvs = uvs;
    }
}
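One direction I am considering (a sketch with assumed names, not my actual code): keep the combined buffers alive between calls and grow them only when the required size increases, so that once the sizes stabilize, Combine allocates nothing and the GC has nothing to collect. The consumer would then have to use only the filled prefix of each buffer, e.g. via the Mesh.SetVertices(array, start, length) overloads available in newer Unity versions.

// Grow-only scratch buffers reused across Combine calls. Once capacities
// stabilize there are no new allocations per call. Names are illustrative.
private static Vector3[] _vertices = Array.Empty<Vector3>();
private static int[] _indices = Array.Empty<int>();
private static Vector2[] _uvs = Array.Empty<Vector2>();

private static void EnsureCapacity<T>(ref T[] buffer, int needed)
{
    if (buffer.Length < needed)
        buffer = new T[Mathf.NextPowerOfTwo(needed)]; // amortized growth, no shrink
}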
UPDATE 2
I also tried to implement this with Unity's Job System, but unfortunately it does not support NativeArrays of NativeArrays, which (in my head) is necessary for my purposes.
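A workaround I may try (sketched here with illustrative names, not working project code) is to flatten the per-node arrays into one NativeArray plus per-node offset tables, since nested native containers are not allowed:

// Sketch of the flattening workaround for the missing NativeArray<NativeArray<T>>:
// all node indices concatenated into one array, plus where each node's slice
// begins and which vertex offset to add for that node.
using Unity.Collections;
using Unity.Jobs;

struct ShiftIndicesJob : IJob
{
    [ReadOnly] public NativeArray<int> srcIndices;  // all node indices, concatenated
    [ReadOnly] public NativeArray<int> sliceStart;  // start of each node's index slice
    [ReadOnly] public NativeArray<int> vertexBase;  // vertex offset to add per node
    public NativeArray<int> dstIndices;

    public void Execute()
    {
        for (int node = 0; node < sliceStart.Length; node++)
        {
            int end = node + 1 < sliceStart.Length ? sliceStart[node + 1] : srcIndices.Length;
            for (int i = sliceStart[node]; i < end; i++)
                dstIndices[i] = srcIndices[i] + vertexBase[node];
        }
    }
}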

Molecular MC simulation is not equilibrating

Suppose a polymer has N monomers in its chain. I want to simulate its movement using the bead-spring model. There is no periodic boundary condition applied, because the points are generated such that they never cross the boundary.
So, I wrote the following program.
Polymer chain simulation with Monte Carlo method
I am using 1 million steps. The energy is not fluctuating as expected. After several thousand steps the curve goes totally flat.
The X-axis is steps. Y-axis is total energy.
Can anyone check the source code and tell me what I should change?
N.B. I am especially concerned with the function that calculates the total energy of the polymer.
Probably, the algorithm is incorrect.
public double GetTotalPotential()
{
    double totalBeadPotential = 0.0;
    double totalSpringPotential = 0.0;

    // calculate total bead energy
    for (int i = 0; i < beadsList.Count; i++)
    {
        Bead item_i = beadsList[i];
        Bead item_i_plus_1 = null;
        try
        {
            item_i_plus_1 = beadsList[i + 1];
            if (i != beadsList.Count - 1)
            {
                // calculate total spring energy.
                totalSpringPotential += item_i.GetHarmonicPotential(item_i_plus_1);
            }
        }
        catch { }

        for (int j = 0; j < beadsList.Count; j++)
        {
            if (i != j)
            {
                Bead item_j = beadsList[j];
                totalBeadPotential += item_i.GetPairPotential(item_j);
                //Console.Write(totalBeadPotential + "\n");
                //Thread.Sleep(100);
            }
        }
    }
    return totalBeadPotential + totalSpringPotential;
}
The problem with this application is that the simulation (Simulation.SimulateMotion) runs in a separate thread, in parallel with the draw timer (SimulationGuiForm.timer1_Tick), and they share the same state (polymerChain) without any synchronization or signaling. As a result, some mutations of polymerChain are skipped completely (never drawn), and when the simulation finishes (far before the drawing does), timer1_Tick keeps redrawing the same polymerChain. You can easily check that by adding a counter to Simulation and incrementing it in SimulateMotion:
public class Simulation
{
    public static int Simulations = 0; // counter

    public static void SimulateMotion(PolymerChain polymerChain, int iterations)
    {
        Random random = new Random();
        for (int i = 0; i < iterations; i++)
        {
            Simulations++; // bump the counter
            // rest of the code
            // ...
And checking it in timer1_Tick:
private void timer1_Tick(object sender, EventArgs e)
{
    // ...
    // previous code
    if (Simulation.Simulations == totalIterations)
    {
        // breakpoint or Console.WriteLine() ...
        // will be hit as soon as "the curve goes totally flat"
    }
    DrawZGraph();
}
You need to rewrite the application so that SimulateMotion either stores the iterations in some collection that is consumed by timer1_Tick (basically implementing the producer-consumer pattern; for example, you can try a BlockingCollection, like I do in the pull request) or performs its mutations only after the current state has been rendered.
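A minimal sketch of that producer-consumer shape (Clone, MutateChain and the field names are illustrative stand-ins, not your existing members; it needs System.Collections.Concurrent and System.Threading.Tasks):

// Producer: the simulation thread publishes a snapshot after every step.
private readonly BlockingCollection<PolymerChain> snapshots =
    new BlockingCollection<PolymerChain>(boundedCapacity: 1000);

private void RunSimulation(PolymerChain polymerChain, int iterations)
{
    Task.Run(() =>
    {
        for (int i = 0; i < iterations; i++)
        {
            MutateChain(polymerChain);            // one Monte Carlo step
            snapshots.Add(polymerChain.Clone());  // publish a deep copy; blocks when full
        }
        snapshots.CompleteAdding();
    });
}

// Consumer: the GUI timer draws at most one snapshot per tick.
private void timer1_Tick(object sender, EventArgs e)
{
    PolymerChain chain;
    if (snapshots.TryTake(out chain))
        DrawZGraph(chain); // assumes DrawZGraph is changed to take explicit state
}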

How to search for values in Point3DCollection?

Basically, I am using a MeshGeometry3D to load points, positions and normals from an STL file.
The STL file format duplicates points, so I want to search MeshGeometry3D.Positions for a duplicate before adding each newly read point.
Mesh.Positions.IndexOf(somePoint3D) does not work, because it compares based on the object reference rather than the X, Y, Z values of the Point3D. This is why I iterate the entire collection to find duplicates manually:
// triangle is a custom class containing three vertices of type Point3D
// for each 3 points read from the STL the triangle object is reinitialized
vertex1DuplicateIndex = -1;
vertex2DuplicateIndex = -1;
vertex3DuplicateIndex = -1;
for (int q = tempMesh.Positions.Count - 1; q >= 0; q--)
{
    if (vertex1DuplicateIndex == -1)
        if (tempMesh.Positions[q] == triangle.Vertex1)
            vertex1DuplicateIndex = q;
    if (vertex2DuplicateIndex == -1)
        if (tempMesh.Positions[q] == triangle.Vertex2)
            vertex2DuplicateIndex = q;
    if (vertex3DuplicateIndex == -1)
        if (tempMesh.Positions[q] == triangle.Vertex3)
            vertex3DuplicateIndex = q;
    if (vertex1DuplicateIndex != -1 && vertex2DuplicateIndex != -1 && vertex3DuplicateIndex != -1)
        break;
}
This code is actually very efficient when duplicates are found, but when there is no duplicate the collection is iterated entirely, which is very slow for big meshes with more than a million positions.
Is there another approach to the search?
Is there a way to force Mesh.Positions.IndexOf(newPoint3D) to compare based on value, like Mesh.Positions[index] == somePoint3D does, rather than the reference comparison it does now?
I don't know of a built-in way to do this, but you could use a hash map to cache the indices of the 3D vectors.
Depending on the quality of your hash function for the vectors, you'll get a 'sort of' constant-time lookup (collisions cannot be ruled out entirely, but it should be faster than iterating through all the vertex data for each new triangle point).
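Since the WPF Point3D is a struct whose Equals and GetHashCode already compare by coordinates, it can key a Dictionary directly; a sketch (GetOrAddPosition is an invented helper name, not part of the question's code):

// Value-based duplicate lookup without scanning: Point3D's value semantics
// make it usable as a dictionary key as-is.
private readonly Dictionary<Point3D, int> positionIndex = new Dictionary<Point3D, int>();

private int GetOrAddPosition(MeshGeometry3D mesh, Point3D p)
{
    int index;
    if (positionIndex.TryGetValue(p, out index))
        return index;               // duplicate: reuse the existing position
    index = mesh.Positions.Count;   // next free slot
    mesh.Positions.Add(p);
    positionIndex.Add(p, index);
    return index;
}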
Using hakononakani's idea I've managed to speed things up a bit, using a combination of a HashSet and a Dictionary. The following is a simplified version of my code:
class CustomTriangle
{
    private Vector3D normal;
    private Point3D vertex1, vertex2, vertex3;
}
private void loadMesh()
{
    CustomTriangle triangle;
    MeshGeometry3D tempMesh = new MeshGeometry3D();
    HashSet<string> meshPositionsHashSet = new HashSet<string>();
    Dictionary<string, int> meshPositionsDict = new Dictionary<string, int>();
    int vertex1DuplicateIndex, vertex2DuplicateIndex, vertex3DuplicateIndex;
    int numberOfTriangles = GetNumberOfTriangles();

    for (int i = 0, j = 0; i < numberOfTriangles; i++)
    {
        triangle = ReadTriangleDataFromSTLFile();
        vertex1DuplicateIndex = -1;
        if (meshPositionsHashSet.Add(triangle.Vertex1.ToString()))
        {
            tempMesh.Positions.Add(triangle.Vertex1);
            meshPositionsDict.Add(triangle.Vertex1.ToString(), tempMesh.Positions.IndexOf(triangle.Vertex1));
            tempMesh.Normals.Add(triangle.Normal);
            tempMesh.TriangleIndices.Add(j++);
        }
        else
        {
            vertex1DuplicateIndex = meshPositionsDict[triangle.Vertex1.ToString()];
            tempMesh.TriangleIndices.Add(vertex1DuplicateIndex);
            tempMesh.Normals[vertex1DuplicateIndex] += triangle.Normal;
        }
        //Do the same for vertex2 and vertex3
    }
}
At the end tempMesh will have only unique points. All you have to do is normalize all Normals and you're ready to visualize.
The same can be achieved only with the Dictionary using:
if (!meshPositionsDict.Keys.Contains(triangle.Vertex1.ToString()))
I just like using the HashSet, because it's fun to work with :)
In both cases the final result is a ~60 times faster algorithm than before!

Calculating adjacency matrix from randomly generated graphs

I have developed a small program which randomly generates several connections between the graph nodes (the number of connections could be random too, but for testing purposes I have defined a const value; it can be changed back to a random value at any time).
Code is C#: http://ideone.com/FDCtT0
( result: Success time: 0.04s memory: 36968 kB returned value: 0 )
If you don't know what an adjacency matrix is, see: http://en.wikipedia.org/wiki/Adjacency_matrix
I think my version of the code is rather unoptimized.
Suppose I shall work with large matrices of size 10k x 10k.
1) What are your suggestions on how best to parallelize the calculations in this task? Should I use some locking model, like semaphores etc., for multi-threaded calculations on large matrices?
2) What are your suggestions for redesigning the architecture of the program? How should I prepare it for large matrices?
3) As you can see above at ideone, I have shown the execution time and the memory allocated in RAM. What is the asymptotic complexity of my program? Is it O(n^2)?
So I want to hear your advice on how to improve the asymptotic mark, and how to parallelize the calculations using semaphores (or maybe a better locking model for threads).
Thank you!
PS:
SO doesn't allow posting a topic without formatted code, so I'm posting it at the end (full program):
/*
  Oleg Orlov, 2012(c), generating randomly adjacency matrix and graph connections
*/
using System;
using System.Collections.Generic;

class Graph
{
    internal int id;
    private int value;
    internal Graph[] links;

    public Graph(int inc_id, int inc_value)
    {
        this.id = inc_id;
        this.value = inc_value;
        links = new Graph[Program.random_generator.Next(0, 4)];
    }
}

class Program
{
    private const int graphs_count = 10;
    private static List<Graph> list;
    public static Random random_generator;

    private static void Init()
    {
        random_generator = new Random();
        list = new List<Graph>(graphs_count);
        for (int i = 0; i < list.Capacity; i++)
        {
            list.Add(new Graph(i, random_generator.Next(100, 255) * i + random_generator.Next(0, 32)));
        }
    }

    private static void InitGraphs()
    {
        for (int i = 0; i < list.Count; i++)
        {
            Graph graph = list[i] as Graph;
            graph.links = new Graph[random_generator.Next(1, 4)];
            for (int j = 0; j < graph.links.Length; j++)
            {
                graph.links[j] = list[random_generator.Next(0, 10)];
            }
            list[i] = graph;
        }
    }

    private static bool[,] ParseAdjectiveMatrix()
    {
        bool[,] matrix = new bool[list.Count, list.Count];
        foreach (Graph graph in list)
        {
            int[] links = new int[graph.links.Length];
            for (int i = 0; i < links.Length; i++)
            {
                links[i] = graph.links[i].id;
                matrix[graph.id, links[i]] = matrix[links[i], graph.id] = true;
            }
        }
        return matrix;
    }

    private static void PrintMatrix(ref bool[,] matrix)
    {
        for (int i = 0; i < list.Count; i++)
        {
            Console.Write("{0} | [ ", i);
            for (int j = 0; j < list.Count; j++)
            {
                Console.Write(" {0},", Convert.ToInt32(matrix[i, j]));
            }
            Console.Write(" ]\r\n");
        }
        Console.Write("{0}", new string(' ', 7));
        for (int i = 0; i < list.Count; i++)
        {
            Console.Write("---");
        }
        Console.Write("\r\n{0}", new string(' ', 7));
        for (int i = 0; i < list.Count; i++)
        {
            Console.Write("{0} ", i);
        }
        Console.Write("\r\n");
    }

    private static void PrintGraphs()
    {
        foreach (Graph graph in list)
        {
            Console.Write("\r\nGraph id: {0}. It references to the graphs: ", graph.id);
            for (int i = 0; i < graph.links.Length; i++)
            {
                Console.Write(" {0}", graph.links[i].id);
            }
        }
    }

    [STAThread]
    static void Main()
    {
        try
        {
            Init();
            InitGraphs();
            bool[,] matrix = ParseAdjectiveMatrix();
            PrintMatrix(ref matrix);
            PrintGraphs();
        }
        catch (Exception exc)
        {
            Console.WriteLine(exc.Message);
        }
        Console.Write("\r\n\r\nPress enter to exit this program...");
        Console.ReadLine();
    }
}
I will start from the end, if you don't mind. :)
3) Of course it is O(n^2), and so is the memory usage.
2) Since sizeof(bool) == 1 byte, not 1 bit, you can optimize memory usage by using bit masks instead of raw bool values: storing 1 bit per entry instead of 8 makes the matrix 8 times smaller.
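In C# the ready-made form of such a bit mask is System.Collections.BitArray; a sketch (n, a and b are placeholders for the vertex count and two vertex ids):

// One BitArray row per vertex: n*n bits instead of n*n bytes.
var rows = new BitArray[n];
for (int i = 0; i < n; i++)
    rows[i] = new BitArray(n);      // all bits start as false

rows[a][b] = rows[b][a] = true;     // mark the undirected edge a-b
bool connected = rows[a][b];        // O(1) lookup, 1 bit per cell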
1) I don't know C# that well, but from a quick search it seems that C# reads and writes of primitive types are atomic, which means you can safely use them in multi-threading. That makes this a super easy multi-threading task: just split your vertices across threads and let each thread fill in its own part of the matrix. The parts are independent, so that's not going to be any problem; you don't need any semaphores, locks and so on.
The thing is that you won't be able to have an adjacency matrix of size 10^9 x 10^9. You simply can't store it in memory. But there is another way.
Create an adjacency list for each vertex, holding a list of all the vertices it is connected to. After building those lists from your graph, sort the list of each vertex. Then you can answer 'is a connected to b' in O(log(size of the adjacency list of vertex a)) time using binary search, which is really fast for common usage.
Now, if you want to implement Dijkstra algorithm really fast, you won't need an adj. matrix at all, just those lists.
Again, it all depends on the future tasks and constraints. You cannot store a matrix of that size, that's all. You don't need it for Dijkstra or BFS, that's a fact. :) There is no conceptual difference from the graph's point of view: the graph stays the same no matter what data structure it's stored in.
If you really want the matrix, then this is the solution:
We know that the number of connections (the 1s in the matrix) is much smaller than its maximum, which is n^2. By building those lists we simply store the positions of the 1s (this is also called a sparse matrix), which consumes no unneeded memory.
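A sketch of those lists in C# (assuming vertices are numbered 0..n-1):

// Sorted adjacency lists with binary-search membership queries.
var adj = new List<int>[n];
for (int v = 0; v < n; v++)
    adj[v] = new List<int>();

void AddEdge(int a, int b) { adj[a].Add(b); adj[b].Add(a); }

// Sort each list once, after all edges have been inserted.
foreach (var neighbours in adj)
    neighbours.Sort();

// 'is a connected to b' in O(log deg(a)).
bool Connected(int a, int b) => adj[a].BinarySearch(b) >= 0;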

Problems with double array?

I am required to create a program which reads in data from a .csv file and uses these (x, y and z) values for a series of calculations.
I read in the file as a string and then split it into three smaller strings for x, y and z.
The x and y values represent the coordinates of the contours of a lake, and z the depth.
One of the calculations I have to do is the surface area of the lake, using the formula (x[i]*y[i+1]) - (x[i+1]*y[i]) where z (depth) = 0.
I can get my code to run perfectly up until the x[i+1] and y[i+1], where it keeps giving me a value of 0.
Can someone please tell me how to fix this?
Here is my code:
{
    string[] ss = File.ReadAllLines(@"C:File.csv");
    for (int i = 1; i < ss.Length; i++)
    {
        string[] valuesAsString = ss[i].Split(new char[] { ' ', ',' }, StringSplitOptions.RemoveEmptyEntries);
        double[] X = new double[valuesAsString.Length];
        double[] Y = new double[valuesAsString.Length];
        double[] Z = new double[valuesAsString.Length];
        for (int n = 0; n < 1; n++)
        {
            X[n] = double.Parse(valuesAsString[0]);
            Y[n] = double.Parse(valuesAsString[1]);
        }
        do
        {
            double SurfaceArea = (X[n] * Y[n + 1]) - (X[n + 1] * Y[n]);
            Console.WriteLine(SurfaceArea);
        }
        while (Z[n] == 0);
    }
}
OK, I'm not sure if I got it right, so please take a look at what I did and tell me if it's of any help.
After reviewing it a little, I came up with the following:
A class for the values:
public class ValueXyz
{
    public double X { get; set; }
    public double Y { get; set; }
    public int Z { get; set; }
}
A class to manage the calculation:
public class SurfaceCalculator
{
    private ValueXyz[] _valuesXyz;
    private double _surface;
    private readonly string _textWithValues;

    public SurfaceCalculator(string textWithValues)
    {
        _textWithValues = textWithValues;
        SetValuesToCalculate();
    }

    public double Surface
    {
        get { return _surface; }
    }

    public void CalculateSurface()
    {
        for (var i = 0; i < _valuesXyz.Length; i++)
        {
            if (_valuesXyz[i].Z == 0)
                _surface = (_valuesXyz[i].X * _valuesXyz[i + 1].Y) - (_valuesXyz[i + 1].X * _valuesXyz[i].Y);
        }
    }

    private void SetValuesToCalculate()
    {
        var valuesXyz = _textWithValues.Split(' ');
        _valuesXyz = valuesXyz.Select(item => new ValueXyz
        {
            X = Convert.ToDouble(item.Split(',')[0]),
            Y = Convert.ToDouble(item.Split(',')[1]),
            Z = Convert.ToInt32(item.Split(',')[2])
        }).ToArray();
    }
}
So now your client code could do something like:
[TestMethod]
public void TestSurfaceCalculatorGetsAValue()
{
    //var textWithValues = File.ReadAllText(@"C:File.csv");
    var textWithValues = "424.26,424.26,0 589.43,231.46,0 720.81,14.22,1";
    var calculator = new SurfaceCalculator(textWithValues);
    calculator.CalculateSurface();
    Assert.IsNotNull(calculator.Surface);
}
I'm not very sure I got the idea of how to implement the formula correctly, but I just wanted to show an alternative you can use; you can never have too many ways of doing one thing :).
Cheers.
By the way, part of my intent was not to tie your functionality to the CSV, in case the source of the text changes in the future.
Step through your code in the debugger. Pay special attention to the behavior of the line
for (int n = 0; n < 1; n++)
This loop will execute how many times? What will the value of n be during each iteration through the loop?
Well, one thing I noticed is that when you're setting your X, Y and Z vars, you're setting them to the Length of the array object instead of its values; is that intentional?
Put a debug break on the line with:
double SurfaceArea = (X[n] * Y[n + 1]) - (X[n + 1] * Y[n]);
and check the data type of "X", "Y" and "Z".
I've had problems in the past where it tried to calculate them as strings (because they came out of the data source as strings). I ended up fixing it by wrapping each of the variables in CInt() (or Convert.ToInt32()).
Hope this helps.
As this looks like it might be a homework problem, I am trying not to give a direct solution in my answer, but I see a number of questionable parts in your code that you should examine.
Why are X, Y and Z arrays? You create a new array each time through the outer loop, set the length of the array to the number of elements in the line, then only assign a value to one element of X and Y, and never assign anything to Z.
As phoog suggests in his answer, what is the purpose of: for (int n = 0; n < 1; n++)?
What are you trying to accomplish with the do-while loop? As Mr Skeet mentioned in the comments, X[n], Y[n] and Z[n] don't exist there, because n is not in scope outside the loop it is declared for. Even if it were, Z[n] would always be zero, because you never assign anything to the Z array after it is initialized, so the do-while loop would run forever.
