I've got a method that transforms a number of cylinders. If I run the method a second time it transforms the cylinders from their original position rather than their new position.
Is there any way of 'applying' the transformation so that it changes the underlying values of the cylinders, allowing me to re-transform from the new values?
Can anyone assist?
Cheers,
Andy
void TransformCylinders(double angle)
{
var rotateTransform3D = new RotateTransform3D { CenterX = 0, CenterY = 0, CenterZ = 0 };
var axisAngleRotation3D = new AxisAngleRotation3D { Axis = new Vector3D(1, 1, 1), Angle = angle };
rotateTransform3D.Rotation = axisAngleRotation3D;
var myTransform3DGroup = new Transform3DGroup();
myTransform3DGroup.Children.Add(rotateTransform3D);
_cylinders.ForEach(x => x.Transform = myTransform3DGroup);
}
You are remaking the Transform3DGroup every time the method is called:
var myTransform3DGroup = new Transform3DGroup();
Transforms are essentially a stack of matrices that get multiplied together. You are clearing that stack every time you make a new group. You need to add consecutive transforms to the existing group rather than remake it.
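For illustration, a minimal sketch of that idea, assuming the group is kept in a field (hypothetically named _myTransform3DGroup here) that is created once and reused, so every call appends another rotation on top of the previous ones:

private readonly Transform3DGroup _myTransform3DGroup = new Transform3DGroup();

void TransformCylinders(double angle)
{
    var rotateTransform3D = new RotateTransform3D
    {
        CenterX = 0, CenterY = 0, CenterZ = 0,
        Rotation = new AxisAngleRotation3D { Axis = new Vector3D(1, 1, 1), Angle = angle }
    };

    // Append to the existing group instead of replacing it, so the earlier
    // rotations stay on the matrix stack and the new one multiplies onto them.
    _myTransform3DGroup.Children.Add(rotateTransform3D);

    // The cylinders keep pointing at the same, growing group.
    _cylinders.ForEach(x => x.Transform = _myTransform3DGroup);
}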
Hi, I want to write an animation where a car drives along a road, waits until a train has passed, and then drives on again, but I can't find anything on the internet about it. I also want the cars to drive in a "row", but at the moment they drive through each other.
Is it even possible?
namespace Symulator_ruchu
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
Random r = new Random();
IEnumerable<PathFigure> colle;
DispatcherTimer dispatcherTimer = new DispatcherTimer();
DispatcherTimer train = new DispatcherTimer();
public MainWindow()
{
InitializeComponent();
dispatcherTimer.Tick += new EventHandler(dispTimer_tick);
dispatcherTimer.Interval = new TimeSpan(0,0,1);
dispatcherTimer.Start();
train.Tick += new EventHandler(train_tick);
train.Interval = new TimeSpan(0, 0, 10);
train.Start();
}
private void train_tick(object sender, EventArgs e)
{
Path car = new Path
{
Name = "AnimatedMatrixTrain",
Fill = Train.Fill,
Data = new RectangleGeometry
{
Rect = new Rect(0, 0, 55, 270)
},
LayoutTransform = new RotateTransform
{
Angle = 90
},
RenderTransform = new MatrixTransform
{
Matrix = new Matrix
{
OffsetX = 100,
OffsetY = 100
}
}
};
MatrixAnimationUsingPath maup = new MatrixAnimationUsingPath
{
Duration = TimeSpan.FromSeconds(r.Next(5, 10)),
DoesRotateWithTangent = true,
AutoReverse = false,
PathGeometry = new PathGeometry
{
Figures = PathFigureCollection.Parse("M 745,900 L 745,0")
}
};
Storyboard.SetTarget(maup, car);
Storyboard.SetTargetProperty(maup, new PropertyPath("(UIElement.RenderTransform).(MatrixTransform.Matrix)"));
Storyboard storyboard = new Storyboard();
storyboard.Children.Add(maup);
//Canvas.SetTop(car, r.Next(1, 500));
//Canvas.SetLeft(car, r.Next(1, 500));
storyboard.Begin(car);
Canv.Children.Add(car);
}
private void dispTimer_tick(object sender, EventArgs e)
{
Path car = new Path
{
Name = "AnimatedMatrixCar",
Fill = Car.Fill,
Data = new RectangleGeometry
{
Rect = new Rect(0,0,60,30)
},
LayoutTransform = new RotateTransform
{
Angle = 180
},
RenderTransform = new MatrixTransform
{
Matrix = new Matrix
{
OffsetX = 100,
OffsetY = 100
}
}
};
MatrixAnimationUsingPath maup = new MatrixAnimationUsingPath
{
Duration = TimeSpan.FromSeconds(r.Next(5,10)),
DoesRotateWithTangent = true,
AutoReverse = false,
PathGeometry = new PathGeometry
{
Figures = PathFigureCollection.Parse("m 10,286 h 800 c 130,10 190, 200 -105, 180 h -450 c -160,10 -200,250 -35,290 H 1200")
}
};
Storyboard.SetTarget(maup, car);
Storyboard.SetTargetProperty(maup, new PropertyPath("(UIElement.RenderTransform).(MatrixTransform.Matrix)"));
Storyboard storyboard = new Storyboard();
storyboard.Children.Add(maup);
//Canvas.SetTop(car, r.Next(1, 500));
//Canvas.SetLeft(car, r.Next(1, 500));
storyboard.Begin(car);
Canv.Children.Add(car);
}
    }
}
Here is what the WPF window looks like:
[screenshot of the WPF window]
I don't think you can use collision detection in an animation. You'd have to write your own code that moved the car pixel by pixel. You could then use a distance calculation between some calculated edge point of your rectangle and anything else.
That's all possible, but you'd be calculating the angle of rotation yourself. It's a fair bit of fiddly work and it might not even be so smooth once you're finished.
I'd go with a storyboard and animations.
Some trial and error will be necessary here so that the train passes just after a car stops.
Use a storyboard and at least two matrix animations within that.
Each animation can have a begin time and duration. The first animation should have no begin time, you want that to start straight away.
You might want one of the ease-out easing functions so it slows down as it nears the crossing.
The second animation should have a begin time which makes it start up again after the train has passed. This time with an ease-in easing function so it accelerates up to speed.
You will need to split your path geometry into two parts: up to the crossing, and then from the crossing to the end of the road. You could use Inkscape, or manually find where your crossing is using the points in the geometry.
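A rough sketch of that two-stage storyboard, assuming car is a Path built as in the question and that the two path strings below are placeholders for the two halves of your real road geometry (the BeginTime of 12 seconds is just an example of "after the train has passed"):

var toCrossing = new MatrixAnimationUsingPath
{
    Duration = TimeSpan.FromSeconds(4),
    DoesRotateWithTangent = true,
    // Placeholder: first half of the road, up to the crossing.
    PathGeometry = new PathGeometry { Figures = PathFigureCollection.Parse("M 0,300 L 400,300") }
};

var afterCrossing = new MatrixAnimationUsingPath
{
    BeginTime = TimeSpan.FromSeconds(12), // starts once the train is clear
    Duration = TimeSpan.FromSeconds(4),
    DoesRotateWithTangent = true,
    // Placeholder: second half, from the crossing to the end of the road.
    PathGeometry = new PathGeometry { Figures = PathFigureCollection.Parse("M 400,300 L 800,300") }
};

var storyboard = new Storyboard();
storyboard.Children.Add(toCrossing);
storyboard.Children.Add(afterCrossing);
foreach (var anim in new[] { toCrossing, afterCrossing })
{
    Storyboard.SetTarget(anim, car);
    Storyboard.SetTargetProperty(anim, new PropertyPath("(UIElement.RenderTransform).(MatrixTransform.Matrix)"));
}
storyboard.Begin(car);

Between the first animation finishing and the second one beginning, the car simply holds its position at the crossing (the default HoldEnd fill behaviour), which gives the waiting-for-the-train effect.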
You will apply this to your first car.
Off it goes.
Use async/await to introduce a delay, then repeat using your storyboard with the second car. It will of course do the same as the first one, but start a bit later, giving a gap.
There is a bit of a complication though. Maybe you want your second car to stop behind the first.
That means you need two different geometries with the second ending earlier.
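A minimal sketch of the delayed second car, assuming a hypothetical BuildCarStoryboard helper that wraps the car/storyboard construction above (the flag name is made up for illustration):

private async void StartSecondCar()
{
    // Wait a few seconds so the second car starts behind the first.
    await Task.Delay(TimeSpan.FromSeconds(3));

    // Hypothetical helper: builds the car Path and its two-stage storyboard,
    // optionally using the shorter geometry that stops behind the first car.
    var (car, storyboard) = BuildCarStoryboard(stopShortOfFirstCar: true);
    Canv.Children.Add(car);
    storyboard.Begin(car);
}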
Not sure if you have your geometries.
I would use Inkscape to draw the path. It's free. You could import your base picture, add a layer and draw a vector in the new layer. That has line start and end nodes and curve handle nodes. You can drag the nodes and add more. There's a bit of a learning curve, but it'll be fairly easy to do your neat line. Then select your line and save as XAML. You'll get a file with a path in it, and your geometry. Add extra nodes for a second and third car stop position.
Are your cars looking ok with the matrix animation? I wrote a facing converter to rotate moving (infantry or cavalry) columns so they face the direction they are moving in.
I have rotated a line using a TransformGroup in UWP. After the rotation I need to get the new bounds of the line using the transform matrix value. Is it possible to get the current points of the line using the matrix value? Can anyone help me out on this?
RotateTransform rotate = new RotateTransform();
rotate.Angle = -angle;
var translate = new TranslateTransform
{
X = offset,
Y = offset
};
var group = new TransformGroup
{
Children = { (Transform)translate.Inverse, rotate, translate }
};
line.RenderTransform = group;
var matrix = ((line.RenderTransform as TransformGroup).Value);
If you know and can access the parent of the Line, then I think you can do this:
var line_bound = line.TransformToVisual(parent).TransformBounds(new Rect(0, 0, Math.Abs(line.X2 - line.X1), Math.Abs(line.Y2 - line.Y1)));
Here the parent may be a Grid or a Canvas that you attach the line to.
Read more about the TransformBounds(Rect) method here.
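If you want the transformed end points themselves rather than just the bounds, a small sketch using the same group from the question could transform each point directly with GeneralTransform.TransformPoint:

// Transform each end point of the line through the same TransformGroup.
var start = group.TransformPoint(new Point(line.X1, line.Y1));
var end = group.TransformPoint(new Point(line.X2, line.Y2));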
Calculating area:
var coordinates = new List<Coordinate> {
new Coordinate(55, 35),
new Coordinate(55, 35.1),
new Coordinate(55.1, 35.1),
new Coordinate(55.1, 35),
};
Console.WriteLine(new Polygon(coordinates).Area); // ~0.01
The calculation is right, because it happens in an orthogonal coordinate system.
But how do I indicate that the coordinates are in WGS84?
It seems the task is more complicated than I expected. I found this useful discussion on Google Groups.
First we need to find the projection system that is most suitable for the region where we need to compute the area. For example, you can take one of the UTM zones:
using DotSpatial.Projections;
using DotSpatial.Topology;
public static double CalculateArea(IEnumerable<double> latLonPoints)
{
// source projection is WGS1984
var projFrom = KnownCoordinateSystems.Geographic.World.WGS1984;
// the most complicated problem - you have to find the most suitable projection
var projTo = KnownCoordinateSystems.Projected.UtmWgs1984.WGS1984UTMZone37N;
// prepare for ReprojectPoints (it mutates array)
var z = new double[latLonPoints.Count() / 2];
var pointsArray = latLonPoints.ToArray();
Reproject.ReprojectPoints(pointsArray, z, projFrom, projTo, 0, pointsArray.Length / 2);
// assembling a new points array to create the polygon
var points = new List<Coordinate>(pointsArray.Length / 2);
for (int i = 0; i < pointsArray.Length / 2; i++)
points.Add(new Coordinate(pointsArray[i * 2], pointsArray[i * 2 + 1]));
var poly = new Polygon(points);
return poly.Area;
}
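For example (a sketch only; the result is in the units of the chosen UTM projection, i.e. square metres), the square from the question could be passed in as a flat sequence of coordinate pairs:

// Flattened coordinate pairs for the 0.1 x 0.1 degree square from the question.
var points = new double[] { 55, 35, 55, 35.1, 55.1, 35.1, 55.1, 35 };
Console.WriteLine(CalculateArea(points)); // area in square metres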
You can get the area directly from IGeometry or from Feature.Geometry. Also, you need to repeat the first coordinate to close your polygon.
FeatureSet fs = new FeatureSet(FeatureType.Polygon);
Coordinate[] coord = new Coordinate[]
{
new Coordinate(55, 35),
new Coordinate(55, 35.1),
new Coordinate(55.1, 35.1),
new Coordinate(55.1, 35),
new Coordinate(55, 35)
};
fs.AddFeature(new Polygon(new LinearRing(coord)));
var area = fs.Features.First().Geometry.Area;
I'm having trouble getting my head around the colour/material system of C# WPF projects. Currently I am updating the colour of an entire system of points on each update of the model, when I would instead like to update just the colour of a single point (as it is added).
AggregateSystem Class
public class AggregateSystem {
// stack to store each particle in aggregate
private readonly Stack<AggregateParticle> particle_stack;
private readonly GeometryModel3D particle_model;
// positions, indices and texture co-ordinates for particles
private readonly Point3DCollection particle_positions;
private readonly Int32Collection triangle_indices;
private readonly PointCollection tex_coords;
// brush to apply to particle_model.Material
private RadialGradientBrush rad_brush;
// ellipse for rendering
private Ellipse ellipse;
private RenderTargetBitmap render_bitmap;
public AggregateSystem() {
particle_stack = new Stack<AggregateParticle>();
particle_model = new GeometryModel3D { Geometry = new MeshGeometry3D() };
ellipse = new Ellipse {
Width = 32.0,
Height = 32.0
};
rad_brush = new RadialGradientBrush();
// fill ellipse interior using rad_brush
ellipse.Fill = rad_brush;
ellipse.Measure(new Size(32,32));
ellipse.Arrange(new Rect(0,0,32,32));
render_bitmap = new RenderTargetBitmap(32, 32, 96, 96, PixelFormats.Pbgra32);
ImageBrush img_brush = new ImageBrush(render_bitmap);
DiffuseMaterial diff_mat = new DiffuseMaterial(img_brush);
particle_model.Material = diff_mat;
particle_positions = new Point3DCollection();
triangle_indices = new Int32Collection();
tex_coords = new PointCollection();
}
public Model3D AggregateModel => particle_model;
public void Update() {
// get the most recently added particle
AggregateParticle p = particle_stack.Peek();
// compute position index for triangle index generation
int position_index = particle_stack.Count * 4;
// create points associated with particle for circle generation
Point3D p1 = new Point3D(p.position.X, p.position.Y, p.position.Z);
Point3D p2 = new Point3D(p.position.X, p.position.Y + p.size, p.position.Z);
Point3D p3 = new Point3D(p.position.X + p.size, p.position.Y + p.size, p.position.Z);
Point3D p4 = new Point3D(p.position.X + p.size, p.position.Y, p.position.Z);
// add points to particle positions collection
particle_positions.Add(p1);
particle_positions.Add(p2);
particle_positions.Add(p3);
particle_positions.Add(p4);
// create points for texture co-ords
Point t1 = new Point(0.0, 0.0);
Point t2 = new Point(0.0, 1.0);
Point t3 = new Point(1.0, 1.0);
Point t4 = new Point(1.0, 0.0);
// add texture co-ords points to texcoords collection
tex_coords.Add(t1);
tex_coords.Add(t2);
tex_coords.Add(t3);
tex_coords.Add(t4);
// add position indices to indices collection
triangle_indices.Add(position_index);
triangle_indices.Add(position_index + 2);
triangle_indices.Add(position_index + 1);
triangle_indices.Add(position_index);
triangle_indices.Add(position_index + 3);
triangle_indices.Add(position_index + 2);
// update colour of points - **NOTE: UPDATES ENTIRE POINT SYSTEM**
// -> want to just apply colour to single particles added
rad_brush.GradientStops.Add(new GradientStop(p.colour, 0.0));
render_bitmap.Render(ellipse);
// set particle_model Geometry model properties
((MeshGeometry3D)particle_model.Geometry).Positions = particle_positions;
((MeshGeometry3D)particle_model.Geometry).TriangleIndices = triangle_indices;
((MeshGeometry3D)particle_model.Geometry).TextureCoordinates = tex_coords;
}
public void SpawnParticle(Point3D _pos, Color _col, double _size) {
AggregateParticle agg_particle = new AggregateParticle {
position = _pos, colour = _col, size = _size
};
// push most-recently-added particle to stack
particle_stack.Push(agg_particle);
}
}
where AggregateParticle is a POD class consisting of Point3D position, Color colour and double size fields, which are self-explanatory.
Is there any simple and efficient method to update the colour of the single particle as it is added in the Update method rather than the entire system of particles? Or will I need to create a List (or similar data structure) of DiffuseMaterial instances for each and every particle in the system and apply brushes for the necessary colour to each?
[The latter is something I want to avoid at all costs, partly due to the fact it would require large structural changes to my code, and I am certain that there is a better way to approach this than that - i.e. there MUST be some simple way to apply colour to a set of texture co-ordinates, surely?!.]
Further Details
AggregateModel is a single Model3D instance corresponding to the field particle_model which is added to a Model3DGroup of the MainWindow.
I should note that what I am trying to achieve, specifically, here is a "gradient" of colours for each particle in an aggregate structure where a particle has a Color in a "temperature-gradient" (computed elsewhere in the program) which is dependent upon order in which it was generated - i.e. particles have a colder colour if generated earlier and a warmer colour if generated later. This colour list is pre-computed and passed to each particle in the Update method as can be seen above.
One solution I attempted involved creating a separate AggregateComponent instance for each particle where each of these objects has an associated Model3D and thus a corresponding brush. Then an AggregateComponentManager class was created which contained the List of each AggregateComponent. This solution works, however it is horrendously slow as each component has to be updated every time a particle is added so memory usage explodes - is there a way to adapt this where I can cache already rendered AggregateComponents without having to call their Update method each time a particle is added?
Full source code (C# code in the DLAProject directory) can be found on GitHub: https://github.com/SJR276/DLAProject
We create WPF 3D models for smallish point clouds (+/- 100 k points) where each point is added as an octahedron (8 triangles) to a MeshGeometry3D.
To allow different colors for different points (we use this for selecting one or a subset of points) in such a point cloud, we assign texture coordinates from a small Bitmap.
At a high level we have some code like this:
BitmapSource bm = GetColorsBitmap(new List<Color> { BaseColor, SelectedColor });
ImageBrush ib = new ImageBrush(bm)
{
ViewportUnits = BrushMappingMode.Absolute,
Viewport = new Rect(0, 0, 1, 1) // Matches the pixels in the bitmap.
};
GeometryModel3D model = new GeometryModel3D { Material = new DiffuseMaterial(ib) };
and now the texture coordinates are just
new Point(0, 0);
new Point(1, 0);
... etc.
The colors Bitmap comes from:
// Creates a bitmap that has a single row containing single pixels with the given colors.
// At most 256 colors.
public static BitmapSource GetColorsBitmap(IList<Color> colors)
{
if (colors == null) throw new ArgumentNullException("colors");
if (colors.Count > 256) throw new ArgumentOutOfRangeException("colors", "More than 256 colors");
int size = colors.Count;
for (int j = colors.Count; j < 256; j++)
{
colors.Add(Colors.White);
}
var palette = new BitmapPalette(colors);
byte[] pixels = new byte[size];
for (int i = 0; i < size; i++)
{
pixels[i] = (byte)i;
}
var bm = BitmapSource.Create(size, 1, 96, 96, PixelFormats.Indexed8, palette, pixels, 1 * size);
bm.Freeze();
return bm;
}
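To give an individual point one of the colours, you then pick its texture coordinate from the matching pixel. A small sketch of that mapping, assuming the one-row bitmap above and sampling at pixel centres:

// Map a colour index to a texture coordinate that samples the centre
// of the corresponding pixel in the one-row colour bitmap.
static Point TextureCoordinateForColor(int colorIndex, int colorCount)
{
    return new Point((colorIndex + 0.5) / colorCount, 0.5);
}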
We also go to some effort to cache and reuse the internal Geometry structure when updating the point cloud.
Finally we display this with the awesome Helix Toolkit.
I have the following code (Test only right now)
PlotPoint[] starLocations = new PlotPoint[4];
starLocations[0] = new PlotPoint(9,-2,1);
starLocations[1] = new PlotPoint(-3,6,1);
starLocations[2] = new PlotPoint(4,2,-3);
starLocations[3] = new PlotPoint(7,-8,9);
//draw the sector map
SectorMap ourMap = new SectorMap();
ourMap.reDraw(PlotPoint.getILLocs(starLocations));
SectorMap.cs
public void reDraw(float[,] givenLocs){
ILArray<float> ourPositions = givenLocs;
textBox1.Text = ourPositions.ToString();
var scene = new ILScene();
var plotCube = scene.Add(new ILPlotCube(false));
var ourPosBuffer = new ILPoints();
ourPosBuffer.Positions = ourPositions;
ourPosBuffer.Size = 3;
plotCube.Add(ourPosBuffer);
iLStarChart.Scene = scene;
}
Doing this, when I check the matrix in PlotPoint.getILLocs, I get a 4x3 matrix. When I check the passed matrix, it's a 3x4.
When I check ourPositions in SectorMap.cs, it has also become a 3x4 matrix, which is not what I intended. What am I doing wrong?
I messed up, and I'm explaining how here for other people's reference:
1) It was, as Haymo Kutschbach said, a difference in storage scheme.
2) The confusion and lack of a 3D grid were due to
var plotCube = scene.Add(new ILPlotCube(false));
instead of
var plotCube = scene.Add(new ILPlotCube(null, false));
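For reference on the storage-scheme point: ILNumerics lays arrays out column-major, so a .NET float[,] can show up with its dimensions swapped. A minimal sketch, assuming you want the original 4x3 orientation back, is to transpose it with the T property:

ILArray<float> ourPositions = givenLocs; // arrives as 3x4 because of the column-major storage
ILArray<float> transposed = ourPositions.T; // back to 4x3: one row per star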