I am trying to create a WPF application using C# to run on PixelSense that is a basic version of the tangram puzzle. I am able to draw my 7 shapes and translate and rotate them all around the screen.
Could anyone give me advice on how I should go about saving the pattern (with shapes in specific positions and orientations), so that when a user recreates the pattern next time, the application can match it against the saved one and tell the user whether it's correct?
It's a pattern matching and recognition problem that I am trying to solve.
I have been stuck on this for a while now :(
Define the solution as a collection of objects with shapeType, position, and orientation properties. Have the solution include one shape at position 0,0 and an orientation of 0.
Now loop over all the shapes the user has actually placed to find the ones with a shapeType that matches the shape your solution has at 0,0,0. Calculate the position and orientation of every other shape relative to where the user put this one, and compare those values to the rest of your solution.
You'll need to experiment with how much tolerance to allow, because this stuff is not precise; to make the game fun, err on the side of high tolerances. If needed, you can follow this up with some performance optimizations to only re-evaluate pieces that moved.
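A minimal sketch of that matching loop (all names are illustrative, not from any SDK; note that tangram sets contain duplicate shapes, e.g. two large triangles, so a real implementation should "consume" each placed piece once it has been matched):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;

public class PieceState
{
    public string ShapeType { get; set; }
    public Point Position { get; set; }
    public double Orientation { get; set; }   // degrees
}

public static class PatternMatcher
{
    public static bool Matches(IList<PieceState> placed, IList<PieceState> solution,
                               double posTolerance, double angleTolerance)
    {
        // The solution is stored with its anchor piece at (0, 0), orientation 0.
        PieceState anchorSpec = solution[0];

        // Try every placed piece of the right type as the anchor.
        foreach (var anchor in placed.Where(p => p.ShapeType == anchorSpec.ShapeType))
        {
            if (solution.Skip(1).All(t => HasMatch(placed, anchor, t, posTolerance, angleTolerance)))
                return true;
        }
        return false;
    }

    static bool HasMatch(IList<PieceState> placed, PieceState anchor, PieceState target,
                         double posTolerance, double angleTolerance)
    {
        // Rotate the solution's relative offset by the anchor's orientation,
        // then translate it to the anchor's position.
        double rad = anchor.Orientation * Math.PI / 180.0;
        var expected = new Point(
            anchor.Position.X + target.Position.X * Math.Cos(rad) - target.Position.Y * Math.Sin(rad),
            anchor.Position.Y + target.Position.X * Math.Sin(rad) + target.Position.Y * Math.Cos(rad));
        double expectedAngle = target.Orientation + anchor.Orientation;

        return placed.Any(p =>
            p.ShapeType == target.ShapeType &&
            (p.Position - expected).Length <= posTolerance &&
            AngleDiff(p.Orientation, expectedAngle) <= angleTolerance);
    }

    // Smallest absolute difference between two angles, in degrees.
    static double AngleDiff(double a, double b)
    {
        double d = Math.Abs(a - b) % 360.0;
        return d > 180.0 ? 360.0 - d : d;
    }
}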
Hopefully you are using physical shape pieces with tags on them instead of making this a purely virtual game. I always wanted to build this when I was on the Surface team, but it never happened. One challenge you will run into is defining how the tag's position/orientation relates to the actual shape. If you'll be putting tag stickers on multiple tangram sets, you almost certainly won't get them on precisely the same way each time, so you may need to add a "calibration" mode to your app (have the user place each piece in a specific spot and then push a button so you can record where the tag is relative to that spot). The TagVisualizer WPF control should help a lot for building your UI; definitely look into using it (this scenario was top of mind when we designed that API). The default behavior of that control (if you tell it the ID of a tag to look for but not how to visualize it) is a "crosshair" that can help you fine-tune your offset values.
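A rough sketch of such a calibration step (all names here are hypothetical, not Surface SDK APIs):

using System;
using System.Windows;

public class TagCalibration
{
    public Vector CenterOffset { get; private set; }  // shape center relative to the tag, in the tag's frame
    public double AngleOffset { get; private set; }   // shape orientation minus tag orientation, degrees

    // Call this while the piece sits on a known outline on screen.
    public void Record(Point tagPos, double tagAngleDeg, Point knownCenter, double knownAngleDeg)
    {
        // Store the positional offset in the tag's local frame so it stays
        // valid whatever orientation the piece is placed at later.
        Vector world = knownCenter - tagPos;
        double rad = -tagAngleDeg * Math.PI / 180.0;
        CenterOffset = new Vector(
            world.X * Math.Cos(rad) - world.Y * Math.Sin(rad),
            world.X * Math.Sin(rad) + world.Y * Math.Cos(rad));
        AngleOffset = knownAngleDeg - tagAngleDeg;
    }

    // Converts a live tag reading into the shape's actual center.
    public Point ShapeCenter(Point tagPos, double tagAngleDeg)
    {
        double rad = tagAngleDeg * Math.PI / 180.0;
        return tagPos + new Vector(
            CenterOffset.X * Math.Cos(rad) - CenterOffset.Y * Math.Sin(rad),
            CenterOffset.X * Math.Sin(rad) + CenterOffset.Y * Math.Cos(rad));
    }

    public double ShapeOrientation(double tagAngleDeg)
    {
        return tagAngleDeg + AngleOffset;
    }
}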
Good luck! If you wouldn't mind recording a YouTube video when you are done and posting a comment here linking to it, I'd really appreciate it.
You can use an ObservableCollection or List of a custom class. That class can expose values such as position and orientation as properties.
When a new shape is placed, or when a shape changes its position, you can update that particular object stored in the collection. Since you have all the details of each shape (position and orientation), you can loop over the collection and check the position of a new shape when it is added.
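For instance (a sketch, assuming a piece class with ShapeType, Position and Orientation properties like the PieceState in the earlier sketch; requires System.Collections.ObjectModel and System.Linq):

ObservableCollection<PieceState> pieces = new ObservableCollection<PieceState>();

// Update the stored entry in place whenever a shape is moved or rotated.
void OnShapeManipulated(string shapeType, Point position, double orientation)
{
    var piece = pieces.FirstOrDefault(p => p.ShapeType == shapeType);
    if (piece == null)
    {
        pieces.Add(new PieceState { ShapeType = shapeType, Position = position, Orientation = orientation });
    }
    else
    {
        piece.Position = position;
        piece.Orientation = orientation;
    }
}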
I have some data that comes via messages to my chart. It's an electric current over time (seconds) chart. How can I change the behaviour of the FitToView mode (or write a different one) so that the plotter doesn't zoom out and scale to fit the whole line graph, but moves left instead, showing for example only the last 100 seconds?
I thought of calculating minimums and maximums on every message and changing the plotter's constraints explicitly, but that doesn't seem very optimal. It would also require setting the constraints in code-behind, while all the data lives in the ViewModel (I'm using MVVM with Caliburn).
Edit: I've found the functionality for this (adding a FollowWidthConstraint to the FitToView constraints), but the line graph gets moved further than the axis and then compensates back to where it should be, making the whole graph glitch on every iteration. How can this be fixed?
Apparently I forgot to answer this.
I made the graph move instead of scale by adding a MinimalSizeConstraint and a FollowWidthConstraint to the ConstraintCollection in the constructor of the D3 Viewport2D class. The names are pretty self-explanatory. Basically this changes the FitToView behaviour of the graph to the desired one.
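If you'd rather not edit the D3 sources, the same constraints can probably be attached from the outside. A sketch; MinimalSizeConstraint and FollowWidthConstraint are real D3 types, but verify the exact member names against your DynamicDataDisplay version, as I'm writing them from memory:

// Attach the constraints to an existing plotter's viewport instead of
// modifying Viewport2D's constructor.
plotter.Viewport.Constraints.Add(new MinimalSizeConstraint());
var follow = new FollowWidthConstraint();
// follow.Width = 100;  // the property holding the visible width may be
//                      // named differently in your D3 version -- check first
plotter.Viewport.Constraints.Add(follow);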
This is my first question, although I'm a long-time lurker. I'll split this into two parts: the first explaining what I'm doing and why I think this is the way to go, the second being the actual question that I can't solve by myself.
What am I doing?
I'm currently developing a framework for rendering 2-dimensional features meant to be displayed in real-time. You can think of an application like Google Maps in your browser, however the framework is meant to render all kinds of geographical data (not just axis-aligned raster data, like those Google Tiles).
The framework is to be integrated into our (the company's) newest product which is a WPF application for the desktop and laptop.
Therefore I chose WPF for actually rendering geometry only; visibility and occlusion culling are done by myself, as are input handling (mouse picking), moving the camera, etc.
Being a real-time application, it needs to achieve at least 30 FPS. The framework performs adequately when rendering images: I can draw several thousand bitmaps per frame without a problem. Polygonal data, however, turns out to be a major problem.
The actual question
I'm rendering a fair amount of polyline and polygon data using WPF, specifically using DrawingContext and StreamGeometry. My understanding so far is that this is the way to go if I need performance. However, I am not able to achieve the results I expected.
This is how I fill the StreamGeometry with actual data:
using (StreamGeometryContext ctx = Geometry.Open())
{
    foreach (var segment in segments)
    {
        // Start a new open, unfilled figure at the segment's first point.
        var first = ToWpf(segment[0]);
        ctx.BeginFigure(first, false, false);

        // Skip the first point, obviously.
        List<Point> points = segment.Skip(1).Select(ToWpf).ToList();

        // isStroked = true, isSmoothJoin = false
        ctx.PolyLineTo(points, true, false);
    }
}
Geometry.Freeze();
And this is how I draw my geometry:
// Map from Mercator space into view space, draw, then restore the stack.
_dc.PushTransform(_mercatorToView);
_dc.DrawGeometry(null, _pen, polyline);
_dc.Pop();
As a test, I loaded ESRI shapefiles from OpenStreetMap into my application to test its performance, but I'm not satisfied at all:
My test data consists of ~3500 line segments with a total of ~20k lines.
Mapping each segment to its own StreamGeometry performed extremely badly, but I kind of expected that already: rendering takes about 14 seconds.
I've then tried packing more segments into the same StreamGeometry, using multiple figures:
80 StreamGeometry objects: rendering takes about 50 ms.
However, I can't get any better results than this. Increasing the number of lines to around 100k makes my application nearly unusable: rendering takes more than 100 ms.
What else can I do, besides freezing both the geometry and the pen, when rendering vector data?
I'm at the point where I'd rather use DirectX myself than rely on WPF to do it for me, because something seems to be going terribly wrong.
Edit
To further clarify what I am doing: the application visualizes geographic data in real-time, very much like Google Maps in the browser, except it is supposed to visualize much, much more data. As you may know, Google Maps allows both zooming and panning, which requires > 25 FPS to appear as a fluid animation; anything less does not feel fluid.
Sorry, but I shouldn't upload a video of this before the actual product is released. You can, however, envision something like Google Maps, but with tons of vector data (polygons and polylines).
There are two solutions, one of which is very often stated:
Cache heavy drawings in a bitmap
The implementation seems kind of easy, but I see some problems with this approach: in order to properly implement panning, I need to avoid drawing the heavy stuff each frame. That leaves me with the choice of either not updating the cached bitmap while panning the camera, or creating a bitmap that covers a region bigger than the viewport, so that I only need to update the cached bitmap every so often.
The second "problem" is related to zooming. However it's more of a visual artifact than a real problem: Since the cached bitmap can't properly be updated at 30 FPS, I need to avoid that when zooming as well. I may very well scale the bitmap while zooming, only creating a new bitmap when the zoom ends, however the width of the polylines would not have a constant thickness, although they should.
This approach does seem to be used by MapInfo, however I can't say I'm too fond of it. It does seem to be the easiest to implement though.
Split geometry up into different drawing visuals
This approach seems to deal with the problem differently. I'm not sure whether it works at all: it depends on whether I've correctly understood how WPF is supposed to work in this area.
Instead of using one DrawingVisual for all the stuff that needs to be drawn, I should use several, so that not every one of them needs to be re-opened with RenderOpen(). I could simply change parameters, for example the matrix in the sample above, to reflect both camera panning and zooming.
However, I see some problems with this approach as well: panning the camera will inevitably bring new geometry into the viewport, so I would need to do something similar to the first approach and actually render stuff that is currently not visible but may become visible as the camera shifts. Drawing everything up front is out of the question, as it may take a ridiculous amount of time for a rather small amount of data.
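A minimal sketch of the multi-visual idea (class and member names are illustrative): render static geometry into a few DrawingVisuals once, then pan by changing a shared transform instead of calling RenderOpen() every frame.

using System.Windows;
using System.Windows.Media;

public class MapCanvas : FrameworkElement
{
    private readonly VisualCollection _children;
    private readonly TranslateTransform _pan = new TranslateTransform();

    public MapCanvas()
    {
        _children = new VisualCollection(this);
    }

    public void AddLayer(StreamGeometry geometry, Pen pen)
    {
        var visual = new DrawingVisual { Transform = _pan };
        using (DrawingContext dc = visual.RenderOpen())
        {
            // Drawn once; WPF retains the drawing instructions.
            dc.DrawGeometry(null, pen, geometry);
        }
        _children.Add(visual);
    }

    // Panning only updates the transform; nothing is re-rendered.
    public void Pan(double dx, double dy)
    {
        _pan.X += dx;
        _pan.Y += dy;
    }

    protected override int VisualChildrenCount
    {
        get { return _children.Count; }
    }

    protected override Visual GetVisualChild(int index)
    {
        return _children[index];
    }
}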
Problem related to both approaches
One big problem that neither of these approaches can solve: even if the overall frame rate is stable, occasional hiccups seem inevitable, either when updating the cached bitmaps (admittedly, this doesn't apply if the cached bitmap is only updated once the camera stops panning) or when calling RenderOpen to draw the visible chunk of geometry.
My thoughts so far
Since these are the only two solutions I ever see to this problem (and I've done my fair share of googling for more than a year), I guess the only options so far are to accept frame-rate hiccups even on the most powerful GPUs (which should be able to rasterize hundreds of millions of primitives per second), to accept delayed updating of the viewport (when bitmaps are only updated once the viewport stops moving), or to not use WPF at all and resort to DirectX directly.
I'm very glad for the help, but I can't say I'm impressed by WPF's rendering performance so far.
To improve 2D WPF rendering performance, you could have a look at the RenderTargetBitmap class (WPF >= 3.5) or the BitmapCache class (WPF >= 4).
Those classes are used for Cached Composition.
From MSDN:
By using the new BitmapCache and BitmapCacheBrush classes, you can cache a complex part of the visual tree as a bitmap and greatly improve rendering time. The bitmap remains responsive to user input, such as mouse clicks, and you can paint it onto other elements just like any brush.
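For example, turning on cached composition for the element that hosts the heavy drawing is a one-liner (the element name is illustrative):

// Cache the element's rendered output as a bitmap on the GPU (WPF 4).
// RenderAtScale > 1 keeps the cache reasonably crisp while zooming in.
heavyDrawingHost.CacheMode = new BitmapCache { RenderAtScale = 2.0 };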
I am writing a geoscience visualization application that uses WPF 3D. The user needs to be able to zoom deep into detail and back out quickly with minimum resources used. I've decided to divide my slice (a ModelVisual3D) into subrectangles (GeometryModel3D), so that each has its own texture that changes when the camera zooms in (similar to Google Maps).
The problem is that "cracks" are appearing between subrectangles, even though they actually have no empty space between them.
How can I hide these? Or is there any other way to assign multiple materials of different sizes to one ModelVisual3D?
PS: I've tried making the background gray, light gray, silver, and white-smoke. It helps a little, but it's not acceptable. I've also tried overlapping the subrectangles, with no result.
Instead of your current setup, you might want to make several textures at different resolutions and switch between them depending on the zoom level (mipmaps).
When getting really close, you might replace the entire object (switch it for a much smaller one) and use a highly detailed texture.
It will require a bit more pre-processing but you will be able to use a single geometry.
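A sketch of that idea (illustrative names; the mip levels are pre-rendered once):

// Pick a texture resolution from a pre-built set depending on the zoom
// level, and swap the material's brush instead of rebuilding geometry.
ImageSource PickTexture(double zoom, ImageSource[] mipLevels)
{
    // mipLevels[0] is the coarsest level, the last one the most detailed.
    int index = (int)Math.Max(0, Math.Min(mipLevels.Length - 1, Math.Log(zoom, 2)));
    return mipLevels[index];
}

void UpdateMaterial(GeometryModel3D model, double zoom, ImageSource[] mipLevels)
{
    model.Material = new DiffuseMaterial(new ImageBrush(PickTexture(zoom, mipLevels)));
}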
It seems that changing the ImageBrush's stretch to Stretch.None and using textures larger than the subrectangle helps. Although now I need more precise control over the texture coordinates for the surface.
We're currently creating a simple application for image manipulation in Silverlight, and we've hit a bit of a snag. We want users to be able to select an area of an image (either by drawing a freehand line around their chosen area or by creating a polygon around it), and then be able to apply effects to the pixels within that selection.
Creating a selection on an image is easy enough, but we want a really fast algorithm for deciding which pixels should be manipulated (i.e. something to detect which pixels are within the user's selection).
We've thought of three possibilities so far, but we're sure that there must be a really efficient and quick way of doing this that's better than these.
1. Pixel by pixel.
We just go through every pixel in an image and check whether it's within the user selection. Obviously this is far too slow!
2. Using a Line Crossing Algorithm.
The type of thing seen here (a point-in-polygon test; see the sketch after this list).
3. Flood Fill.
Select the pixels along the path of the selection and then perform a flood fill within that selection. This might work fine.
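For reference, the crossing test from option 2 is only a few lines. A sketch; run it per pixel inside the polygon's bounding box rather than over the whole image:

using System.Windows;

// Ray casting: a point is inside the polygon if a horizontal ray from it
// crosses the polygon's edges an odd number of times.
static bool IsInside(Point[] polygon, double x, double y)
{
    bool inside = false;
    for (int i = 0, j = polygon.Length - 1; i < polygon.Length; j = i++)
    {
        // Does edge (j -> i) straddle the horizontal line through y, and
        // does it cross that line to the right of x?
        if ((polygon[i].Y > y) != (polygon[j].Y > y) &&
            x < (polygon[j].X - polygon[i].X) * (y - polygon[i].Y) /
                (polygon[j].Y - polygon[i].Y) + polygon[i].X)
        {
            inside = !inside;
        }
    }
    return inside;
}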
This must be a problem that's commonly solved, so we're guessing there are a ton more solutions that we've not even thought of.
What would you recommend?
The flood fill algorithm is a good choice.
Take a look at this implementation:
Queue-Linear Flood Fill: A Fast Flood Fill Algorithm
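The gist of the queue-based approach, reduced to producing a boolean selection mask (a sketch, not the linked implementation):

using System.Collections.Generic;

// Flood-fills from an interior seed, stopping at the pixels that lie on
// the user's selection outline; the result marks the selected pixels.
static bool[,] FillSelection(bool[,] outline, int seedX, int seedY)
{
    int w = outline.GetLength(0), h = outline.GetLength(1);
    var selected = new bool[w, h];
    var queue = new Queue<KeyValuePair<int, int>>();
    queue.Enqueue(new KeyValuePair<int, int>(seedX, seedY));

    while (queue.Count > 0)
    {
        var p = queue.Dequeue();
        int x = p.Key, y = p.Value;
        if (x < 0 || y < 0 || x >= w || y >= h) continue;
        if (selected[x, y] || outline[x, y]) continue;
        selected[x, y] = true;

        // Visit the four neighbours.
        queue.Enqueue(new KeyValuePair<int, int>(x + 1, y));
        queue.Enqueue(new KeyValuePair<int, int>(x - 1, y));
        queue.Enqueue(new KeyValuePair<int, int>(x, y + 1));
        queue.Enqueue(new KeyValuePair<int, int>(x, y - 1));
    }
    return selected;
}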
You should be able to use your polygon to create a clipping path. The mini-language for describing polygons in Silverlight is quite well documented.
Alter the pixels on a copy of your image (modifying all pixels is usually easier than modifying some), then use the clipping path to render only the desired area of the changes back over the original image (probably using an extra buffer bitmap for the result).
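For instance (a sketch; modifiedImage and polygonPoints are made-up names, and System.Linq is assumed):

// Build a clip from the user's polygon and apply it to the modified copy,
// which is stacked on top of the original image.
var figure = new PathFigure { StartPoint = polygonPoints[0], IsClosed = true };
foreach (var p in polygonPoints.Skip(1))
{
    figure.Segments.Add(new LineSegment { Point = p });
}
var clip = new PathGeometry();
clip.Figures.Add(figure);
modifiedImage.Clip = clip;   // only the selected region of the copy shows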
Hope this helps. Just throwing the ideas out and see if any stick :)
I am designing a CAD application using a variation of the MVC architecture. My model and view are independent of each other; they communicate through the controller. My problem: if I need to draw an object (say a line or polyline), I need a number of input points. What would be the best way to get those points? All the events from the view are subscribed to by the controller, and the controller has to keep the points, then generate the line or polyline, and finally add it to the view. But I don't know how to capture the mouse points efficiently, because each object will have a different number of inputs and different input-validation algorithms.
Any help would be highly appreciated.
I was working on a CAD application 3 years ago, and these are some tips from what we did (BTW: the application is free; you can download it, register your copy, and make use of the features in the Truss Editor):
1- You may add buttons for shape drawing, example: a button for a line, a button for a polyline, a rectangle, ...etc.
2- Create a variable that holds the current state of your application (may be an enum): ready, drawing point, drawing line, drawing polyline, drawing circle, ...etc.
3- Whenever the user clicks a drawing button, the system enters the relevant state from those mentioned above.
4- The system returns to the "ready" state when drawing is finished, which can be detected automatically by the expected number of points (1 for a point, 2 for a line, 3 for an ellipse, ...etc.) or when the user presses Esc or right-clicks the drawing area (if the expected number of points is unknown, for example with a polyline). You may also end polyline drawing if the user re-clicks the first point after drawing 3+ points.
5- The system may cancel the current drawing operation if the user ends it before supplying the expected number of points.
...
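A skeleton of the state handling described above (illustrative names; _controller.AddLine is a hypothetical controller call):

enum DrawingState { Ready, DrawingPoint, DrawingLine, DrawingPolyline, DrawingCircle }

DrawingState _state = DrawingState.Ready;
readonly List<Point> _clicked = new List<Point>();

void OnDrawingAreaClick(Point p)
{
    if (_state == DrawingState.Ready) return;
    _clicked.Add(p);

    // Tools with a known point count finish automatically; the polyline is
    // open-ended and is finished by Esc / right-click instead (see tip 4).
    if (_state == DrawingState.DrawingLine && _clicked.Count == 2)
    {
        _controller.AddLine(_clicked[0], _clicked[1]);  // hypothetical controller call
        _clicked.Clear();
        _state = DrawingState.Ready;
    }
}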
When designing CAD software, you must think not only of flexibility and dynamism, but also of speed. You should use some kind of wrapper class that works as a very thin layer between you and the hardware driver; it should return stuff like the pixel array of the screen, the current bpp, etc. This is how I would do it (and actually did). Now in C#, seeing as it is a .NET language, I'm not sure you can go that low, but you can still have some kind of handler between the controller and your pen object, can't you?