I hope to develop a program that takes a Kinect depth image and converts it into a 3D point cloud as my final-year project.
I have to write a program to save those depth images into the bin directory of the project, but I'm unable to convert those images into a 3D point cloud.
If anyone has an idea of how to implement this, or any working projects for it, please help me.
I suggest that you check out the PCL library. It's an open-source project for 3D point cloud processing, and it's quite well documented. There are many tutorials, but for your task you should take a look at:
the Kinect OpenNI tutorial (fast access to the point cloud using PCL and the OpenNI library)
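If you want to stay with the Kinect SDK and C# instead, the conversion itself is just a pinhole back-projection of each depth pixel into camera space. Below is a minimal sketch under that assumption; the class, struct, method name and focal-length value are mine (not from the SDK), and the SDK's own coordinate-mapping APIs will give you a more accurate mapping:

using System.Collections.Generic;

public struct Point3 { public double X, Y, Z; }

public static class PointCloudSketch
{
    // Back-project a 320x240 Kinect v1 depth frame into camera-space points (metres).
    public static List<Point3> DepthToPointCloud(short[] depthData, int width = 320, int height = 240)
    {
        const double FocalLength = 285.0;            // approximate focal length in pixels at 320x240
        double cx = width / 2.0, cy = height / 2.0;  // assume the principal point is the image centre

        var cloud = new List<Point3>(width * height);
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                // Kinect SDK v1 depth pixels carry 3 player-index bits in the low bits.
                int depthMm = ((ushort)depthData[x + y * width]) >> 3;
                if (depthMm <= 0) continue;          // 0 means "no reading" for this pixel

                double z = depthMm / 1000.0;         // millimetres -> metres
                cloud.Add(new Point3
                {
                    X = (x - cx) * z / FocalLength,  // pinhole back-projection
                    Y = (y - cy) * z / FocalLength,
                    Z = z
                });
            }
        }
        return cloud;
    }
}

You can then dump the resulting list to a simple .xyz or PLY text file and open it in a viewer such as MeshLab to check the result.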
See i-programmer for how to make a point cloud. And if you just want to make one without understanding it (which I strongly advise against), here is the code:
XAML:
<Window x:Class="PointCloudWPF.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="Point Cloud" Height="653" Width="993" Background="Black" Loaded="Window_Loaded">
<Grid Height="1130" Width="1626">
<Canvas Height="611" HorizontalAlignment="Left" Name="canvas1" VerticalAlignment="Top" Width="967" Background="Black" />
</Grid>
</Window>
C#:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Windows.Media.Media3D;
using Microsoft.Kinect;
namespace PointCloudWPF
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
GeometryModel3D[] points = new GeometryModel3D[320 * 240];
int s = 4;
KinectSensor sensor;
public MainWindow()
{
InitializeComponent();
}
void DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
// Dispose the frame when done with it (standard practice with Kinect SDK frame objects).
using (DepthImageFrame imageFrame = e.OpenDepthImageFrame())
{
if (imageFrame == null)
return;
short[] pixelData = new short[imageFrame.PixelDataLength];
imageFrame.CopyPixelDataTo(pixelData);
int i = 0;
for (int y = 0; y < 240; y += s)
for (int x = 0; x < 320; x += s)
{
// Shift out the 3 player-index bits to get the depth in millimetres,
// then move the corresponding triangle along Z by that amount.
int depth = ((ushort)pixelData[x + y * 320]) >> 3;
((TranslateTransform3D)points[i].Transform).OffsetZ = depth;
i++;
}
}
}
private GeometryModel3D Triangle(double x, double y, double s, SolidColorBrush color)
{
Point3DCollection corners = new Point3DCollection();
corners.Add(new Point3D(x, y, 0));
corners.Add(new Point3D(x, y + s, 0));
corners.Add(new Point3D(x + s, y + s, 0));
Int32Collection Triangles = new Int32Collection();
Triangles.Add(0);
Triangles.Add(1);
Triangles.Add(2);
MeshGeometry3D tmesh = new MeshGeometry3D();
tmesh.Positions = corners;
tmesh.TriangleIndices = Triangles;
tmesh.Normals.Add(new Vector3D(0, 0, -1));
GeometryModel3D msheet = new GeometryModel3D();
msheet.Geometry = tmesh;
msheet.Material = new DiffuseMaterial(color);
return msheet;
}
private void Window_Loaded(object sender, RoutedEventArgs e)
{
DirectionalLight DirLight1 = new DirectionalLight();
DirLight1.Color = Colors.White;
DirLight1.Direction = new Vector3D(1, 1, 1);
PerspectiveCamera Camera1 = new PerspectiveCamera();
Camera1.FarPlaneDistance = 8000;
Camera1.NearPlaneDistance = 100;
Camera1.FieldOfView = 10;
Camera1.Position = new Point3D(160, 120, -1000);
Camera1.LookDirection = new Vector3D(0, 0, 1);
Camera1.UpDirection = new Vector3D(0, -1, 0);
Model3DGroup modelGroup = new Model3DGroup();
int i = 0;
for (int y = 0; y < 240; y += s)
{
for (int x = 0; x < 320; x += s)
{
points[i] = Triangle(x, y, s, new SolidColorBrush(Colors.White));
// points[i]=MCube(x,y);
points[i].Transform = new TranslateTransform3D(0, 0, 0);
modelGroup.Children.Add(points[i]);
i++;
}
}
modelGroup.Children.Add(DirLight1);
ModelVisual3D modelsVisual = new ModelVisual3D();
modelsVisual.Content = modelGroup;
Viewport3D myViewport = new Viewport3D();
myViewport.IsHitTestVisible = false;
myViewport.Camera = Camera1;
myViewport.Children.Add(modelsVisual);
canvas1.Children.Add(myViewport);
myViewport.Height = canvas1.Height;
myViewport.Width = canvas1.Width;
Canvas.SetTop(myViewport, 0);
Canvas.SetLeft(myViewport, 0);
// Use the first Kinect sensor, enable the 320x240 depth stream, hook the frame event and start.
sensor = KinectSensor.KinectSensors[0];
sensor.SkeletonStream.Enable();
sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
sensor.DepthFrameReady += DepthFrameReady;
sensor.Start();
}
}
}
My goal is: a click should only be allowed to remove a rectangle if there is no rectangle in front of the target rectangle.
Maybe there is a way to get a layer level (z-order) from an object? I have found nothing about this, and I'm new to WPF. So could someone explain a solution, or the way to think about this problem?
Code xaml.cs file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Windows.Threading;
namespace WpfApp1
{
/// <summary>
/// Logique d'interaction pour MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
DispatcherTimer gameTimer = new DispatcherTimer();
List<Rectangle> removeThis = new List<Rectangle>();
int posX;
int posY;
int width;
int rectangleVariation;
int height;
Random rand = new Random();
Brush brush;
public MainWindow()
{
InitializeComponent();
Random rand = new Random();
int nbObjects = rand.Next(15,30);
for (int i = 0; i < nbObjects; i++)
{
brush = new SolidColorBrush(Color.FromRgb((byte)rand.Next(1, 255), (byte)rand.Next(1, 255), (byte)rand.Next(1, 255)));
posX = rand.Next(15, 700);
posY = rand.Next(50, 250);
rectangleVariation = rand.Next(0, 2);
if (rectangleVariation == 0)
{
width = 200;
height = 50;
}
else
{
width = 50;
height = 200;
}
Rectangle rectangle = new Rectangle
{
Tag = "rectangle",
Height = height,
Width = width,
Stroke = Brushes.Black,
StrokeThickness = 1,
Fill = brush
};
Canvas.SetLeft(rectangle, posX);
Canvas.SetTop(rectangle, posY);
MyCanvas.Children.Add(rectangle);
}
}
private void ClickOnCanvas(object sender, MouseButtonEventArgs e)
{
if (e.OriginalSource is Rectangle)
{
Rectangle rectangle = (Rectangle)e.OriginalSource;
MyCanvas.Children.Remove(rectangle);
}
}
}
}
Code xaml file :
<Window x:Class="WpfApp1.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:local="clr-namespace:WpfApp1"
mc:Ignorable="d"
Title="MainWindow" Height="450" Width="800">
<Canvas Name="MyCanvas" MouseLeftButtonDown="ClickOnCanvas" Background="DarkGray">
</Canvas>
</Window>
You should first determine whether the clicked Rectangle intersects with any other Rectangle elements, and if it does, you can determine whether it's on top of all of them by looking at its index in the Canvas's Children collection.
The Rect type has an IntersectsWith method that you can use. Something like this:
private void ClickOnCanvas(object sender, MouseButtonEventArgs e)
{
if (e.OriginalSource is Rectangle)
{
Rectangle clickedRectangle = (Rectangle)e.OriginalSource;
Rect clickedRect = new Rect()
{
Location = new Point(Canvas.GetLeft(clickedRectangle), Canvas.GetTop(clickedRectangle)),
Size = new Size(clickedRectangle.Width, clickedRectangle.Height)
};
for (int i = 0; i < MyCanvas.Children.Count; i++)
{
if (MyCanvas.Children[i] is Rectangle rectangle && rectangle != clickedRectangle)
{
Rect rect = new Rect()
{
Location = new Point(Canvas.GetLeft(rectangle), Canvas.GetTop(rectangle)),
Size = new Size(rectangle.Width, rectangle.Height)
};
if (clickedRect.IntersectsWith(rect) && MyCanvas.Children.IndexOf(clickedRectangle) < i)
return;
}
}
MyCanvas.Children.Remove(clickedRectangle);
}
}
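One caveat: the index comparison assumes the z-order is simply the order of MyCanvas.Children, which holds as long as Panel.ZIndex is never set on the rectangles. If you do set it, a hedged variant of the check inside the loop could compare the attached property instead:

// Hypothetical variant: a rectangle is "in front" if it has a higher ZIndex, falling
// back to the Children index when the ZIndex values are equal.
int clickedZ = Panel.GetZIndex(clickedRectangle);
int otherZ = Panel.GetZIndex(rectangle);
bool otherIsInFront = otherZ > clickedZ ||
    (otherZ == clickedZ && i > MyCanvas.Children.IndexOf(clickedRectangle));
if (clickedRect.IntersectsWith(rect) && otherIsInFront)
    return;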
I've got two images; one of them I'll create using Graphics (a simple circle/ellipse).
Now I want to remove a part of the circle using another image. It should support removing alpha values too.
I hope the link works; if not, please say so in the comments and I'll fix it.
Thanks for any advice.
EDIT:
Image 2 does not really have any border; it is just there to show the frame size...
The following will do:
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
namespace WpfApplication4
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
Loaded += MainWindow_Loaded;
}
private void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
var image1 = new BitmapImage(new Uri("1.png", UriKind.Relative));
var image2 = new BitmapImage(new Uri("2.png", UriKind.Relative));
var bitmap1 = BitmapFactory.ConvertToPbgra32Format(image1);
var bitmap2 = BitmapFactory.ConvertToPbgra32Format(image2);
var width = 256;
var height = 256;
var bitmap3 = BitmapFactory.New(width, height);
var transparent = Color.FromArgb(0, 0, 0, 0);
for (var y = 0; y < height; y++)
{
for (var x = 0; x < width; x++)
{
var color1 = bitmap1.GetPixel(x, y);
var color2 = bitmap2.GetPixel(x, y);
Color color3;
if (color1.Equals(transparent))
{
color3 = transparent;
}
else
{
if (color2.Equals(transparent))
{
color3 = color1;
}
else
{
color3 = transparent;
}
}
bitmap3.SetPixel(x, y, color3);
}
}
Image1.Source = bitmap3;
}
}
}
I've used https://www.nuget.org/packages/WriteableBitmapEx/ to make things simpler. Be careful to use 32-bit PNGs, and to check what the transparent color actually is, because in WPF it is in fact a transparent white.
You should be able to translate this to Windows Forms easily, if that is what you're using, with https://msdn.microsoft.com/en-us/library/system.drawing.bitmap%28v=vs.110%29.aspx.
EDIT: you could have used an opacity mask, but since picture 2 does not have its outer area dark, it wouldn't have worked.
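Because of that "transparent white" detail, comparing a pixel against Color.FromArgb(0, 0, 0, 0) can miss transparent pixels whose RGB channels are not zero. A safer check (a small sketch, assuming WriteableBitmapEx's GetPixel returns a System.Windows.Media.Color) looks only at the alpha channel:

// Treat any pixel with zero alpha as transparent, regardless of its RGB values, so that
// "transparent white" (A=0, R=G=B=255) is handled the same way as transparent black.
static bool IsTransparent(System.Windows.Media.Color c)
{
    return c.A == 0;
}

In the loop above you would then write, for example, if (IsTransparent(color1)) { color3 = transparent; } instead of the Equals comparison.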
Finally, I wrote the code myself. Here it is:
// Wherever 'toRemove' has a non-transparent pixel, the corresponding 'source' pixel keeps
// its colour but takes the mask pixel's alpha value; transparent mask pixels leave 'source'
// untouched. Both bitmaps are assumed to be the same size.
public static Bitmap RemovePart(Bitmap source, Bitmap toRemove)
{
Color c1, c2, c3;
c3 = Color.FromArgb(0, 0, 0, 0);
for (int x = 0; x < source.Width; x++)
{
for (int y = 0; y < source.Height; y++)
{
c1 = source.GetPixel(x, y);
c2 = toRemove.GetPixel(x, y);
if (c2 != c3)
{
source.SetPixel(x, y, Color.FromArgb(c2.A, c1));
}
}
}
return source;
}
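A hypothetical call site (the file names are placeholders, and both bitmaps are assumed to have the same dimensions):

using (Bitmap circle = new Bitmap("circle.png"))     // the image to cut from
using (Bitmap mask = new Bitmap("mask.png"))         // the image describing what to remove
{
    Bitmap result = RemovePart(circle, mask);        // modifies and returns 'circle'
    result.Save("result.png");
}

Note that GetPixel/SetPixel is slow for larger images; Bitmap.LockBits is the usual faster alternative if this ever becomes a bottleneck.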
Let me start off by saying that I have indeed followed many tutorials, such as the one on EmguCV's main site, in their entirety, but I still get a TypeInitializationException thrown.
Now, listen closely, because here comes the extremely weird part. I'll start by saying that there are three "levels" to my problem; however, the code in all "levels" is EXACTLY the same, without even the slightest change. This would naturally suggest that I have a reference or linkage problem, but again, I've made multiple attempts following different tutorials, to no avail.
Level 1 (this level produces a TypeInitializationException)
I create a new project, properly reference everything and such, then write my code in this new project. Upon debug, I get that exception thrown and my program exits. Here's a link to a picture of the problem: http://prntscr.com/uychc
Level 2 (this level runs completely fine and no exception is thrown)
In this level, I pretty much located one of EmguCV's example projects (VideoSurveilance in this case), then deleted the default code and copied and pasted all of my code in there. After adding a few more references that I needed, the program works fine. I can't post more than 3 links, but you'll have to trust me that the video picture displays correctly.
Level 3 (this level does not throw an exception but warns me of a "first chance" of one)
In this level, I copy and paste my whole Level 2 project into a different directory. After finding and relinking missing files/references, I am able to run the program, but the pictures do not show and I get the warning "A first chance exception of type 'System.TypeInitializationException' occurred in Emgu.CV.dll". http://prntscr.com/uycmn
I currently run Windows 7 x64 (yes, I changed the build options to x64 and used the x64 .dlls) and am running EmguCV 2.4.9 and 2.4.2 (tested on both) with Visual Studio 2010 and 2012 (tested on both).
Here's the code, for what it may be worth:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
//using System.Threading.Tasks;
using System.Windows.Forms;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Microsoft.Kinect;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using System.IO;
namespace VideoSurveilance
{
public partial class VideoSurveilance : Form
{
KinectSensor sensor;
WriteableBitmap depthBitmap;
WriteableBitmap colorBitmap;
DepthImagePixel[] depthPixels;
byte[] colorPixels;
int blobCount = 0;
public VideoSurveilance()
{
InitializeComponent();
}
private void VideoSurveilance_Load(object sender, System.EventArgs e)
{
foreach (var potentialSensor in KinectSensor.KinectSensors)
{
if (potentialSensor.Status == KinectStatus.Connected)
{
this.sensor = potentialSensor;
break;
}
}
if (null != this.sensor)
{
this.sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
this.sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];
this.depthPixels = new DepthImagePixel[this.sensor.DepthStream.FramePixelDataLength];
this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth, this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
this.depthBitmap = new WriteableBitmap(this.sensor.DepthStream.FrameWidth, this.sensor.DepthStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
WriteableBitmap bitmap;
bitmap = new WriteableBitmap(this.sensor.DepthStream.FrameWidth, this.sensor.DepthStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
byte[] retVal = new byte[bitmap.PixelWidth * bitmap.PixelHeight * 4];
bitmap.CopyPixels(new Int32Rect(0, 0, bitmap.PixelWidth, bitmap.PixelHeight), retVal, bitmap.PixelWidth * 4, 0);
Bitmap b = new Bitmap(bitmap.PixelWidth, bitmap.PixelHeight);
int k = 0;
byte red, green, blue, alpha;
for (int i = 0; i < bitmap.PixelWidth; i++)
{
for (int j = 0; j < bitmap.PixelHeight && k < retVal.Length; j++)
{
alpha = retVal[k++];
blue = retVal[k++];
green = retVal[k++];
red = retVal[k++];
System.Drawing.Color c = new System.Drawing.Color();
c = System.Drawing.Color.FromArgb(alpha, red, green, blue);
b.SetPixel(i, j, c);
}
}
Image<Bgr, Byte> temp = new Image<Bgr, byte>(b);
this.ibOriginal.Image = temp;
this.sensor.AllFramesReady += this.sensor_AllFramesReady;
try
{
this.sensor.Start();
}
catch (IOException)
{
this.sensor = null;
}
}
}
private void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
blobCount = 0;
BitmapSource depthBmp = null;
Image<Bgr, Byte> openCVImg;
using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
if (depthFrame != null)
{
blobCount = 0;
if (colorFrame != null)
{
byte[] pixels = new byte[colorFrame.PixelDataLength];
colorFrame.CopyPixelDataTo(pixels);
int stride = colorFrame.Width * 4;
BitmapSource color = BitmapImage.Create(colorFrame.Width, colorFrame.Height, 96, 96, PixelFormats.Bgr32, null, pixels, stride);
openCVImg = new Image<Bgr, byte>(color.ToBitmap());
}
else
{
return;
}
Image<Gray, byte> gray_image;
using (MemStorage stor = new MemStorage())
{
gray_image = openCVImg.InRange(new Bgr(0, 0, 150), new Bgr(200, 200, 255));
gray_image = gray_image.SmoothGaussian(9);
CircleF[] circles = gray_image.HoughCircles(new Gray(100),
new Gray(50),
2,
gray_image.Height / 4,
10,
400)[0];
foreach (CircleF circle in circles)
{
CvInvoke.cvCircle(openCVImg,
new System.Drawing.Point(Convert.ToInt32(circle.Center.X), Convert.ToInt32(circle.Center.Y)),
3,
new MCvScalar(0, 255, 0),
-1,
LINE_TYPE.CV_AA,
0);
openCVImg.Draw(circle,
new Bgr(System.Drawing.Color.Red),
3);
}
}
ibOriginal.Image = openCVImg;
ibProcessed.Image = gray_image;
}
}
}
}
}
}
using System;
using System.Globalization;
using System.IO;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Kinect;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Emgu.CV;
namespace VideoSurveilance
{
public static class Helper
{
private const int MaxDepthDistance = 4000;
private const int MinDepthDistance = 850;
private const int MaxDepthDistanceOffset = 3150;
public static BitmapSource SliceDepthImage(this DepthImageFrame image, int min = 20, int max = 1000)
{
int width = image.Width;
int height = image.Height;
//var depthFrame = image.Image.Bits;
short[] rawDepthData = new short[image.PixelDataLength];
image.CopyPixelDataTo(rawDepthData);
var pixels = new byte[height * width * 4];
const int BlueIndex = 0;
const int GreenIndex = 1;
const int RedIndex = 2;
for (int depthIndex = 0, colorIndex = 0;
depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
depthIndex++, colorIndex += 4)
{
// Calculate the distance represented by the two depth bytes
int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;
// Map the distance to an intensity that can be represented in RGB
var intensity = CalculateIntensityFromDistance(depth);
if (depth > min && depth < max)
{
// Apply the intensity to the color channels
pixels[colorIndex + BlueIndex] = intensity; //blue
pixels[colorIndex + GreenIndex] = intensity; //green
pixels[colorIndex + RedIndex] = intensity; //red
}
}
return BitmapSource.Create(width, height, 96, 96, PixelFormats.Bgr32, null, pixels, width * 4);
}
public static byte CalculateIntensityFromDistance(int distance)
{
// This will map a distance value to a 0 - 255 range
// for the purposes of applying the resulting value
// to RGB pixels.
int newMax = distance - MinDepthDistance;
if (newMax > 0)
return (byte)(255 - (255 * newMax
/ (MaxDepthDistanceOffset)));
else
return (byte)255;
}
public static System.Drawing.Bitmap ToBitmap(this BitmapSource bitmapsource)
{
System.Drawing.Bitmap bitmap;
using (var outStream = new MemoryStream())
{
// from System.Media.BitmapImage to System.Drawing.Bitmap
BitmapEncoder enc = new BmpBitmapEncoder();
enc.Frames.Add(BitmapFrame.Create(bitmapsource));
enc.Save(outStream);
bitmap = new System.Drawing.Bitmap(outStream);
return bitmap;
}
}
[DllImport("gdi32")]
private static extern int DeleteObject(IntPtr o);
/// <summary>
/// Convert an IImage to a WPF BitmapSource. The result can be used in the Set Property of Image.Source
/// </summary>
/// <param name="image">The Emgu CV Image</param>
/// <returns>The equivalent BitmapSource</returns>
public static BitmapSource ToBitmapSource(IImage image)
{
using (System.Drawing.Bitmap source = image.Bitmap)
{
IntPtr ptr = source.GetHbitmap(); //obtain the Hbitmap
BitmapSource bs = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
ptr,
IntPtr.Zero,
Int32Rect.Empty,
System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());
DeleteObject(ptr); //release the HBitmap
return bs;
}
}
}
}
I sincerely thank anyone who even attempts to help me and hope that anyone with similar problems can benefit from this long question.
Found a 'brute force' solution:
You have to put the "cvextern.dll" file into your build directory (x64, Debug, Release, or whatever) manually.
This does the job. I can't believe I spent a whole day trying to figure this out.
As Oliver said, your problem is very similar to this one: EmguCV TypeInitializationException
The exception is thrown because some DLL, such as opencv_highgui220.dll, cannot be accessed. In CvInvoke you will find the DllImports; maybe you should set a breakpoint and observe what comes up.
I suppose you have already read this on the Emgu website.
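A quick way to see the real cause behind a TypeInitializationException is to force the native DLLs to load early, inside a try/catch, and look at InnerException. A small sketch (the probe object here is arbitrary; any call that touches CvInvoke will do):

try
{
    // Constructing any Emgu image forces CvInvoke's static initializer (and the native DLLs) to load.
    using (var probe = new Emgu.CV.Image<Emgu.CV.Structure.Bgr, byte>(1, 1)) { }
}
catch (TypeInitializationException ex)
{
    // The InnerException typically names the native DLL that failed to load (e.g. cvextern.dll).
    Console.WriteLine(ex.InnerException);
}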
Here's a little Brownian motion demo in WPF:
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;
using System.Windows.Threading;
using System.Threading;
namespace WpfBrownianMotion
{
static class MiscUtils
{
public static double Clamp(this double n, double low, double high)
{
return Math.Min(Math.Max(n, low), high);
}
}
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
Width = 500;
Height = 500;
var canvas = new Canvas();
Content = canvas;
var transform = new TranslateTransform(250, 250);
var circle = new Ellipse()
{
Width = 25,
Height = 25,
Fill = Brushes.PowderBlue,
RenderTransform = transform
};
canvas.Children.Add(circle);
var random = new Random();
var thread =
new Thread(
() =>
{
while (true)
{
Dispatcher.Invoke(
DispatcherPriority.Normal,
(ThreadStart)delegate()
{
transform.X += -1 + random.Next(3);
transform.Y += -1 + random.Next(3);
transform.X = transform.X.Clamp(0, 499);
transform.Y = transform.Y.Clamp(0, 499);
});
}
});
thread.Start();
Closed += (s, e) => thread.Abort();
}
}
}
My question is this. In cases like this where the standard WPF animation facility isn't being used, is the above approach using Thread and Dispatcher the recommended approach to take?
In general, I sometimes have states that need to be updated and rendered and the animation facility isn't a good fit. So I need a way to do the update and rendering in a separate thread. Just wondering if the above approach is the Right Thing.
Clemens in the comments above suggested using DispatcherTimer. Indeed, this does simplify the code quite a bit. Here's a version taking that approach:
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;
using System.Windows.Threading;
namespace WpfBrownianMotion
{
static class MiscUtils
{
public static double Clamp(this double n, double low, double high)
{
return Math.Min(Math.Max(n, low), high);
}
}
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
Width = 500;
Height = 500;
var canvas = new Canvas();
Content = canvas;
var transform = new TranslateTransform(250, 250);
var circle = new Ellipse()
{
Width = 25,
Height = 25,
Fill = Brushes.PowderBlue,
RenderTransform = transform
};
canvas.Children.Add(circle);
var random = new Random();
var timer = new DispatcherTimer();
timer.Tick += (s, e) =>
{
transform.X += -1 + random.Next(3);
transform.Y += -1 + random.Next(3);
transform.X = transform.X.Clamp(0, 499);
transform.Y = transform.Y.Clamp(0, 499);
};
timer.Start();
}
}
}
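DispatcherTimer ticks on the UI thread, so no Dispatcher.Invoke is needed in the handler. If you also want to control the update rate (the Interval defaults to zero), setting it explicitly is enough; for example:

timer.Interval = TimeSpan.FromMilliseconds(16);  // roughly 60 updates per second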
I have a WPF canvas on which I'm dynamically creating objects from code. These objects are being transformed by setting the RenderTransform property, and an animation needs to be applied to one of those transforms. Currently, I can't get the properties of any transform to animate (although no exception gets raised and the animation appears to run - the Completed event gets raised).
In addition, if the animation system is stressed, sometimes the Storyboard.Completed event is never raised.
All the examples I've come across animate the transforms from XAML. The MSDN documentation suggests that the x:Name property of a transform must be set for it to be animatable, but I haven't found a working way to set it from code.
Any ideas?
Here's the full code listing that reproduces the problem:
using System;
using System.Diagnostics;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
namespace AnimationCompletedTest {
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window {
Canvas panel;
public MainWindow() {
InitializeComponent();
MouseDown += DoDynamicAnimation;
Content = panel = new Canvas();
}
void DoDynamicAnimation(object sender, MouseButtonEventArgs args) {
for (int i = 0; i < 12; ++i) {
var e = new Ellipse {
Width = 16,
Height = 16,
Fill = SystemColors.HighlightBrush
};
Canvas.SetLeft(e, Mouse.GetPosition(this).X);
Canvas.SetTop(e, Mouse.GetPosition(this).Y);
var tg = new TransformGroup();
var translation = new TranslateTransform(30, 0);
tg.Children.Add(translation);
tg.Children.Add(new RotateTransform(i * 30));
e.RenderTransform = tg;
panel.Children.Add(e);
var s = new Storyboard();
Storyboard.SetTarget(s, translation);
Storyboard.SetTargetProperty(s, new PropertyPath(TranslateTransform.XProperty));
s.Children.Add(
new DoubleAnimation(3, 100, new Duration(new TimeSpan(0, 0, 0, 1, 0))) {
EasingFunction = new PowerEase {EasingMode = EasingMode.EaseOut}
});
s.Completed +=
(sndr, evtArgs) => {
Debug.WriteLine("Animation {0} completed {1}", s.GetHashCode(), Stopwatch.GetTimestamp());
panel.Children.Remove(e);
};
Debug.WriteLine("Animation {0} started {1}", s.GetHashCode(), Stopwatch.GetTimestamp());
s.Begin();
}
}
[STAThread]
public static void Main() {
var app = new Application();
app.Run(new MainWindow());
}
}
}
Leave out the Storyboard:
var T = new TranslateTransform(40, 0);
Duration duration = new Duration(new TimeSpan(0, 0, 0, 1, 0));
DoubleAnimation anim = new DoubleAnimation(30, duration);
T.BeginAnimation(TranslateTransform.YProperty, anim);
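To connect this back to the question's code, the transform just has to be attached to the element's RenderTransform before BeginAnimation is called. A small sketch, assuming e is the Ellipse created in DoDynamicAnimation:

var T = new TranslateTransform(40, 0);
e.RenderTransform = T;                                   // attach the transform to the element
var anim = new DoubleAnimation(30, new Duration(TimeSpan.FromSeconds(1)));
T.BeginAnimation(TranslateTransform.YProperty, anim);    // no Storyboard, no name registration needed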
It seems that after a bit of Googling I solved the problem myself. Many thanks to the MSDN documentation and a post on the MSDN forums by Antares19.
In summary:
For a Freezable object (like TranslateTransform) to be targetable by a Storyboard, it must have a registered name. This can be done by calling FrameworkElement.RegisterName(..).
I added the Storyboard object to the ResourceDictionary of the same FrameworkElement with which I registered the TranslateTransform. This can be done by calling ResourceDictionary.Add(..)
Here's the updated code, which now animates nicely, and registers/deregisters the added resources:
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
namespace AnimationCompletedTest {
public partial class MainWindow : Window {
Canvas panel;
public MainWindow() {
InitializeComponent();
MouseDown += DoDynamicAnimation;
Content = panel = new Canvas();
}
void DoDynamicAnimation(object sender, MouseButtonEventArgs args) {
for (int i = 0; i < 12; ++i) {
var e = new Ellipse { Width = 16, Height = 16, Fill = SystemColors.HighlightBrush };
Canvas.SetLeft(e, Mouse.GetPosition(this).X);
Canvas.SetTop(e, Mouse.GetPosition(this).Y);
var tg = new TransformGroup();
var translation = new TranslateTransform(30, 0);
var translationName = "myTranslation" + translation.GetHashCode();
RegisterName(translationName, translation);
tg.Children.Add(translation);
tg.Children.Add(new RotateTransform(i * 30));
e.RenderTransform = tg;
panel.Children.Add(e);
var anim = new DoubleAnimation(3, 100, new Duration(new TimeSpan(0, 0, 0, 1, 0))) {
EasingFunction = new PowerEase { EasingMode = EasingMode.EaseOut }
};
var s = new Storyboard();
Storyboard.SetTargetName(s, translationName);
Storyboard.SetTargetProperty(s, new PropertyPath(TranslateTransform.YProperty));
var storyboardName = "s" + s.GetHashCode();
Resources.Add(storyboardName, s);
s.Children.Add(anim);
s.Completed +=
(sndr, evtArgs) => {
panel.Children.Remove(e);
Resources.Remove(storyboardName);
UnregisterName(translationName);
};
s.Begin();
}
}
[STAThread]
public static void Main() {
var app = new Application();
app.Run(new MainWindow());
}
}
}
I have a solution using a XAML / C# combo. In your XAML file, define:
<UserControl.RenderTransform>
<TranslateTransform x:Name="panelTrans" Y="0"></TranslateTransform>
</UserControl.RenderTransform>
This will give you the ability to do the following in the C# code:
Storyboard.SetTargetName(mFlyInDA, "panelTrans");
Storyboard.SetTargetProperty(mFlyInDA, new PropertyPath("Y"));
The rest is business as usual: create a DoubleAnimation, set its properties, add it as a child to your Storyboard, and call Begin on the Storyboard, as sketched below.
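For completeness, here is a sketch of "the rest" under this answer's assumptions: mFlyInDA is the DoubleAnimation referred to above, and this is the UserControl that declares panelTrans:

// Fly the panel in from 200 px below over half a second (the values are placeholders).
var mFlyInDA = new DoubleAnimation(200, 0, new Duration(TimeSpan.FromSeconds(0.5)));
Storyboard.SetTargetName(mFlyInDA, "panelTrans");
Storyboard.SetTargetProperty(mFlyInDA, new PropertyPath("Y"));

var storyboard = new Storyboard();
storyboard.Children.Add(mFlyInDA);
// Pass the UserControl so that the name "panelTrans" can be resolved in its namescope.
storyboard.Begin(this);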
No need to register the transform or add the storyboard to the resource collection if the target is the FrameworkElement you want to animate. You can use Blend to generate the PropertyPath syntax.
// 'translate' is the TranslateTransform that will be animated; it needs to exist before building the group.
var translate = new TranslateTransform();
frameworkElement.RenderTransform = new TransformGroup
{
Children =
{
new ScaleTransform(),
new SkewTransform(),
new RotateTransform(),
translate
}
};
translate.X = 100.0;
var animation = new DoubleAnimation(0.0, TimeSpan.FromSeconds(1));
var sb = new Storyboard();
sb.Children.Add(animation);
Storyboard.SetTarget(animation, frameworkElement);
Storyboard.SetTargetProperty(animation, new PropertyPath("(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.X)"));
sb.Begin();