Currently I'm using a System.Windows.Media.DrawingGroup to store some tiled images (ImageDrawing) inside the Children-DrawingCollection property.
The problem is that this approach gets really slow when the entire DrawingGroup is displayed in an Image control: since my DrawingGroup can contain hundreds or even thousands of small images, it really hurts performance.
So my first thought was to somehow render a single image from all the small ones inside the DrawingGroup and then only display that image, which would be much faster. But as you might have figured out, I haven't found any solution to simply combine several images with WPF Imaging.
It would really be great if someone could help with this problem, tell me how I can improve the performance of the DrawingGroup, or suggest another approach entirely.
One last thing: currently I'm using a RenderTargetBitmap to generate a single BitmapSource from the DrawingGroup. This approach isn't really faster, but it at least makes scrolling and working with the Image control a little smoother.
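A simplified sketch of that RenderTargetBitmap step (the size and DPI values are placeholders):

// Sketch: flatten the DrawingGroup into a single BitmapSource once,
// then display only that bitmap. "tiles", width/height and the 96 DPI
// values are placeholders for whatever the real drawing uses.
var visual = new DrawingVisual();
using (DrawingContext dc = visual.RenderOpen())
{
    dc.DrawDrawing(tiles); // tiles is the DrawingGroup holding the ImageDrawings
}

var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
bitmap.Render(visual);
bitmap.Freeze(); // safe to share across threads, no further change tracking

myImage.Source = bitmap; // the Image control now shows one flattened bitmap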
I'm not sure this applies to your problem, but I'll post it here anyway since it might help someone:
If not all of the images in the drawing group are going to be visible at the same time, you could set the ClipGeometry property to whatever region you actually want drawn. This effectively tells WPF that the parts outside the geometry are not needed, so they will be skipped during rendering.
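For example (a small sketch; viewportRect is a placeholder for whatever region is currently visible, in the drawing's coordinate space):

// Sketch: clip the DrawingGroup to the currently visible region so WPF can
// skip everything outside it. "viewportRect" is a placeholder.
drawingGroup.ClipGeometry = new RectangleGeometry(viewportRect);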
A few other things you could experiment with are:
Freeze the DrawingGroup before applying it to your image source. At the very least it can't hurt (unless you really need to modify the instance, but remember you could always create a new one instead of modifying the old one).
Try different ways to display your DrawingGroup, for instance as a DrawingVisual (which is considered lightweight) or as a DrawingBrush used to paint e.g. a rectangle.
Combine the small images into smaller groups which you then combine into your bigger group. I have no idea whether this improves or hurts performance, but it could be worth trying.
I'm working on a problem having to do with tiled images, with the added complication of having to display some images overlaid on others.
I haven't run into any performance issues (and likely won't, since my images are rather small and relatively few in number, around 100 max), but I did come across the Freeze method in a code sample for the DrawingImage class. The comment on it was "Freeze the DrawingImage for performance benefits."
The Freeze method also applies to the DrawingGroup class. Maybe give it a try?
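Something like this (a minimal sketch, assuming the DrawingGroup is fully built before it is displayed):

// Sketch: freeze the finished DrawingGroup before handing it to the UI.
// A frozen Freezable is read-only, so WPF can skip change notifications.
if (drawingGroup.CanFreeze)
{
    drawingGroup.Freeze();
}
myImage.Source = new DrawingImage(drawingGroup);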
I've got this problem:
I created a pretty large Canvas in WPF with a lot of children.
I want to add a Print button. PrintVisual doesn't seem to work, because the image is too big (I am using a scrollbar). I want to split the Canvas across multiple pages.
What I've done so far:
I take a VisualBrush for every part of the Canvas,
create a new Canvas and make the VisualBrush its background,
and add the new Canvas to a page, which then goes into a FixedDocument (roughly as in the sketch below).
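In code, those steps look roughly like this (a sketch only; sourceCanvas and pageSize are placeholder names, and only vertical splitting is shown):

// Sketch of the steps above: tile the big Canvas into page-sized slices via
// a VisualBrush with an absolute Viewbox, and collect them in a FixedDocument.
FixedDocument BuildDocument(Canvas sourceCanvas, Size pageSize)
{
    var document = new FixedDocument();
    int pageCount = (int)Math.Ceiling(sourceCanvas.ActualHeight / pageSize.Height);

    for (int i = 0; i < pageCount; i++)
    {
        // VisualBrush showing only this page's slice of the big canvas.
        var brush = new VisualBrush(sourceCanvas)
        {
            Stretch = Stretch.Fill, // 1:1, because the viewbox has the page's size
            ViewboxUnits = BrushMappingMode.Absolute,
            Viewbox = new Rect(0, i * pageSize.Height, pageSize.Width, pageSize.Height)
        };

        var pageCanvas = new Canvas
        {
            Width = pageSize.Width,
            Height = pageSize.Height,
            Background = brush
        };

        var fixedPage = new FixedPage { Width = pageSize.Width, Height = pageSize.Height };
        fixedPage.Children.Add(pageCanvas);

        document.Pages.Add(new PageContent { Child = fixedPage });
    }

    return document;
}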
Well, now I've got a FixedDocument and am going to print it via PrintDocument.
The problem is that the whole printing process takes a lot of time, and sometimes it doesn't work at all. It's as if there were a preprocessing step that converts the FixedDocument into a bitmap.
My question: is a VisualBrush over such a Canvas too big? Should I convert the Canvas into a bitmap first?
I've found this great article: http://www.codeproject.com/Articles/339416/Printing-large-WPF-UserControls.
Anyway: is it a good way to convert a huge Canvas into a bitmap first and then print parts of the bitmap? I could very well imagine that one would run into blurring problems this way.
I've also got no idea how to add a bitmap to a page in WPF.
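To make my question more concrete, this is roughly how I imagine the bitmap route would look (just a sketch; pageSize, verticalOffset and the 96 DPI values are placeholders, and I haven't tried whether the print quality is acceptable):

// Sketch: render the big Canvas into a bitmap once, then put a slice of it
// on a FixedPage via an Image control.
var bitmap = new RenderTargetBitmap(
    (int)bigCanvas.ActualWidth, (int)bigCanvas.ActualHeight, 96, 96, PixelFormats.Pbgra32);
bitmap.Render(bigCanvas);
bitmap.Freeze();

// One page showing the slice starting at verticalOffset (placeholder).
var image = new Image
{
    Source = new CroppedBitmap(bitmap,
        new Int32Rect(0, verticalOffset, (int)pageSize.Width, (int)pageSize.Height))
};

var page = new FixedPage { Width = pageSize.Width, Height = pageSize.Height };
page.Children.Add(image);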
The worst thing is that I couldn't find any really good sources or a standard way (and I think there has to be one, because this should be a pretty standard problem) for printing dynamically produced canvases in WPF.
I am really grateful for any good sources, help or code.
Thank you for your time.
This is my first question, though I'm a long-time lurker. I'll split this into two parts: one explaining what I'm doing and why I think this is the way to go, and a second with the actual question that I can't solve myself.
What am I doing?
I'm currently developing a framework for rendering 2-dimensional features meant to be displayed in real time. You can think of an application like Google Maps in your browser; however, the framework is meant to render all kinds of geographical data (not just axis-aligned raster data like Google's tiles).
The framework is to be integrated into our (the company's) newest product, which is a WPF application for desktops and laptops.
I therefore chose to use WPF only for actually rendering geometry; visibility and occlusion culling are done by myself, as are input handling (mouse picking), moving the camera, etc.
Being a real-time application, it needs to achieve at least 30 FPS. The framework performs adequately when rendering images: I can draw several thousand bitmaps per frame without a problem. Polygonal data, however, turns out to be a major problem.
The actual question
I'm rendering a fair amount of polyline and polygon data with WPF, specifically using DrawingContext and StreamGeometry. My understanding so far is that this is the way to go if I need performance. However, I am not able to achieve the results I expected.
This is how I fill the StreamGeometry with actual data:
using (StreamGeometryContext ctx = Geometry.Open())
{
    foreach (var segment in segments)
    {
        var first = ToWpf(segment[0]);
        ctx.BeginFigure(first, false, false);

        // Skip the first point, obviously
        List<Point> points = segment.Skip(1).Select(ToWpf).ToList();
        ctx.PolyLineTo(points, true, false);
    }
}
Geometry.Freeze();
And this is how I draw my geometry:
_dc.PushTransform(_mercatorToView);
_dc.DrawGeometry(null, _pen, polyline);
_dc.Pop();
As a test, I loaded ESRI shapes from OpenStreetMap into my application to check its performance, but I'm not satisfied at all:
My test data consists of ~3500 line segments with a total of ~20k lines.
Mapping each segment to its own StreamGeometry performed extremely badly, but I kind of expected that already: rendering takes about 14 seconds.
I then tried packing more segments into the same StreamGeometry, using multiple figures:
80 StreamGeometry objects; rendering takes about 50 ms.
However, I can't get any better results than this. Increasing the number of lines to around 100k makes my application nearly unusable: rendering takes more than 100 ms.
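To illustrate what I mean by packing, a simplified sketch (chunkSize is a placeholder; ToWpf and segments are the same as in the code above):

// Sketch: pack many segments into a limited number of StreamGeometry objects
// (many figures per geometry) instead of one geometry per segment.
var geometries = new List<StreamGeometry>();

foreach (var chunk in segments
    .Select((segment, index) => new { segment, index })
    .GroupBy(x => x.index / chunkSize, x => x.segment))
{
    var geometry = new StreamGeometry();
    using (StreamGeometryContext ctx = geometry.Open())
    {
        foreach (var segment in chunk)
        {
            ctx.BeginFigure(ToWpf(segment[0]), false, false);
            ctx.PolyLineTo(segment.Skip(1).Select(ToWpf).ToList(), true, false);
        }
    }
    geometry.Freeze();
    geometries.Add(geometry);
}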
What else can I do, besides freezing both the geometry and the pen, when rendering vector data?
I'm at the point where I'd rather use DirectX myself than rely on WPF to do it for me, because something seems to be going terribly wrong.
Edit
To further clarify what I am doing: the application visualizes geographic data in real time, very much like Google Maps in the browser, except that it is supposed to visualize much, much more data. As you may know, Google Maps allows both zooming and panning, which requires more than 25 FPS to appear as a fluent animation; anything less does not feel fluent.
Sorry, but I shouldn't upload a video of this before the actual product is released. You may, however, envision something like Google Maps, but with tons of vector data (polygons and polylines).
There are two solutions, one of which is very often stated:
Cache heavy drawings in a bitmap
The implementation seems rather easy, but I see some problems with this approach: in order to implement panning properly, I need to avoid drawing the heavy stuff each frame. That leaves me with the choice of either not updating the cached bitmap while panning the camera, or creating a bitmap that covers a region larger than the viewport, so that the cached bitmap only needs to be updated every so often.
The second "problem" is related to zooming, although it's more of a visual artifact than a real problem: since the cached bitmap can't be updated at 30 FPS, I have to avoid updating it while zooming as well. I could scale the bitmap while zooming and only create a new one once the zoom ends, but then the polylines would not keep a constant stroke width, although they should.
This approach does seem to be used by MapInfo, though I can't say I'm too fond of it. It does seem to be the easiest to implement, however.
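To make that concrete, this is roughly how I picture the cached-bitmap variant (a sketch only; margin, RenderRegion and cachedImage are placeholder names, and it assumes one world unit per pixel for simplicity):

// Sketch: cache a region larger than the viewport in a bitmap, translate it
// while panning, and only re-render when the viewport leaves the cached area.
Rect cachedRegion = Rect.Inflate(viewport, margin, margin);

var visual = new DrawingVisual();
using (DrawingContext dc = visual.RenderOpen())
{
    RenderRegion(dc, cachedRegion); // placeholder: draws only geometry intersecting the region
}

var cache = new RenderTargetBitmap(
    (int)cachedRegion.Width, (int)cachedRegion.Height, 96, 96, PixelFormats.Pbgra32);
cache.Render(visual);
cache.Freeze();

// While panning: just move the cached image instead of redrawing geometry.
cachedImage.Source = cache;
cachedImage.RenderTransform = new TranslateTransform(
    cachedRegion.X - viewport.X, cachedRegion.Y - viewport.Y);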
Split geometry up into different drawing visuals
This approach seems to deal with the problem differently. I'm not sure whether it works at all; that depends on whether I have correctly understood how WPF is supposed to work in this area.
Instead of using one DrawingVisual for everything that needs to be drawn, I would use several, so that not every one of them has to be RenderOpen()ed each frame. I could then simply change parameters, for example the matrix in the sample above, to reflect camera panning and zooming.
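Roughly what I have in mind (a sketch only; chunks is a placeholder, and the ContainerVisual is assumed to be hosted by some element elsewhere):

// Sketch: one DrawingVisual per spatial chunk, each rendered once, plus a
// single shared transform that is updated on pan/zoom instead of re-opening
// any DrawingContext.
var worldToView = new MatrixTransform(); // updated when the camera moves
var root = new ContainerVisual { Transform = worldToView };

foreach (var chunk in chunks)
{
    var visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        dc.DrawGeometry(null, _pen, chunk.Geometry); // drawn once, retained by WPF
    }
    root.Children.Add(visual);
}

// On pan/zoom: no RenderOpen, just a new matrix.
void OnCameraChanged(Matrix mercatorToView)
{
    worldToView.Matrix = mercatorToView;
}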
However, I see some problems with this approach as well: panning the camera will inevitably bring new geometry into the viewport, so I would need to do something similar to the first approach and render stuff that is currently not visible but may become visible as the camera shifts. Drawing everything up front is out of the question, as it may take a ridiculous amount of time for a rather small amount of data.
Problem related to both approaches
One big problem that neither of these approaches can solve: even if the overall frame rate is stable, occasional hiccups seem inevitable, either when updating the cached bitmaps (granted, this doesn't apply if the cached bitmap is only updated once the camera stops panning) or when calling RenderOpen to draw the visible chunk of geometry.
My thoughts so far
Since these are the only two solutions I ever see to this problem (and I've done my fair share of googling for more than a year), I guess the options so far are to accept frame-rate hiccups even on the most powerful GPUs (which should be able to rasterize hundreds of millions of primitives per second), to accept delayed updating of the viewport (when bitmaps are only updated once the viewport stops moving), or to not use WPF at all and resort to DirectX directly.
I'm very grateful for the help, but I can't say I'm impressed by WPF's rendering performance so far.
To improve 2D WPF rendering performance you could have a look at the RenderTargetBitmap (for WPF >= 3.5) or the BitmapCache class (for WPF >= 4).
Those classes are used for Cached Composition.
From MSDN:
By using the new BitmapCache and BitmapCacheBrush classes, you can cache a complex part of the visual tree as a bitmap and greatly improve rendering time. The bitmap remains responsive to user input, such as mouse clicks, and you can paint it onto other elements just like any brush.
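For example, enabling cached composition on an element might look like this (a small sketch; heavyElement is a placeholder for whatever element is expensive to redraw):

// Sketch: cache a complex element's rendering as a bitmap on the GPU.
heavyElement.CacheMode = new BitmapCache
{
    RenderAtScale = 1.0,   // increase when zooming in to keep the cache sharp
    EnableClearType = false,
    SnapsToDevicePixels = false
};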
I am writing a geoscience visualization application that uses WPF 3D. The user needs to be able to zoom deep into detail and back out quickly, with minimum resources used. I've decided to divide my slice (a ModelVisual3D) into subrectangles (GeometryModel3D), so that each has its own texture that changes when the camera zooms in (similar to Google Maps).
The problem is that "cracks" are appearing between the subrectangles, even though there is actually no empty space between them.
How can I hide these? Or is there any other way to assign multiple materials with different sizes to one ModelVisual3D?
PS: I've tried making the background gray, light gray, silver and white smoke. It helps a little, but it's not acceptable. I've also tried overlapping the subrectangles, with no result.
Instead of your current setup, you might want to make several textures at different resolutions and switch between them depending on the zoom level (mipmaps).
When getting really close, you might replace the entire object (switch it for a much smaller one) and use a highly detailed texture.
It will require a bit more pre-processing, but you will be able to use a single geometry.
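A sketch of the level-switching idea (hypothetical names; textureLevels would hold pre-scaled copies of the same texture, ordered from most to least detailed):

// Sketch: pick a pre-scaled texture based on how close the camera is, and
// swap it into the existing material. textureLevels, cameraDistance and
// material are placeholders for the real scene objects.
ImageSource PickTexture(double cameraDistance, IList<ImageSource> textureLevels)
{
    // Nearest camera -> highest-resolution level (index 0 here).
    int level = (int)Math.Min(textureLevels.Count - 1,
                              Math.Max(0, Math.Log(cameraDistance, 2)));
    return textureLevels[level];
}

void UpdateMaterial(DiffuseMaterial material, ImageSource texture)
{
    material.Brush = new ImageBrush(texture) { Stretch = Stretch.Fill };
}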
It seems that changing the ImageBrush's Stretch to Stretch.None and using textures larger than the subsquare helps, although now I need more precise control over the texture coordinates for the surface.
We're currently creating a simple application for image manipulation in Silverlight, and we've hit a bit of a snag. We want users to be able to select an area of an image (either by drawing a freehand line around their chosen area or by creating a polygon around it), and then be able to apply effects to the pixels within that selection.
Creating the selection itself is easy enough, but we want a really fast algorithm for deciding which pixels should be manipulated (i.e. detecting which pixels are within the user's selection).
We've thought of three possibilities so far, but we're sure that there must be a really efficient and quick way of doing this that's better than these.
1. Pixel by pixel.
We just go through every pixel in an image and check whether it's within the user selection. Obviously this is far too slow!
2. Using a Line-Crossing Algorithm.
The type of thing seen here (a sketch of this is below, after the list).
3. Flood Fill.
Select the pixels along the path of the selection and then perform a flood fill within that selection. This might work fine.
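For option 2, this is the sort of point-in-polygon test we have in mind (a ray-casting sketch; the polygon is assumed to be the closed selection outline):

// Sketch of a ray-casting point-in-polygon test: count how many polygon
// edges a horizontal ray from the point crosses; an odd count means inside.
static bool IsInside(Point p, IList<Point> polygon)
{
    bool inside = false;
    for (int i = 0, j = polygon.Count - 1; i < polygon.Count; j = i++)
    {
        Point a = polygon[i], b = polygon[j];
        bool crosses = (a.Y > p.Y) != (b.Y > p.Y) &&
                       p.X < (b.X - a.X) * (p.Y - a.Y) / (b.Y - a.Y) + a.X;
        if (crosses)
        {
            inside = !inside;
        }
    }
    return inside;
}

Combined with the selection's bounding box, only pixels inside the box would need this test.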
This must be a problem that's commonly solved, so we're guessing there are a ton more solutions that we've not even thought of.
What would you recommend?
A flood fill algorithm is a good choice.
Take a look at this implementation:
Queue-Linear Flood Fill: A Fast Flood Fill Algorithm
You should be able to use your polygon to create a clipping path. The mini-language for describing polygons in Silverlight is quite well documented.
Alter the pixels on a copy of your image (modifying all pixels is usually easier than modifying only some), then use the clipping path to render only the desired area of the changes back onto the original image (probably using an extra buffer bitmap for the result).
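A rough sketch of that layering idea (hedged; selectionPoints and modifiedImage are placeholder names, the latter being the element that displays the modified copy on top of the original):

// Sketch: show the modified copy only inside the selection by clipping the
// element that displays it, layered on top of the original image.
var figure = new PathFigure { StartPoint = selectionPoints[0], IsClosed = true };
foreach (var pt in selectionPoints.Skip(1))
{
    figure.Segments.Add(new LineSegment { Point = pt });
}
var clip = new PathGeometry();
clip.Figures.Add(figure);

modifiedImage.Clip = clip; // only the selected region of the modified copy shows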
Hope this helps. Just throwing the ideas out and see if any stick :)
I have written a chart that displays financial data. Performance was good while I was drawing fewer than 10,000 points as a connected line using PathGeometry together with PathFigure and LineSegments. But now I need to display up to 100,000 points at the same time (without scrolling), and it's already very slow with 50,000 points. I was thinking of StreamGeometry, but I'm not sure, since it's basically the same as a PathGeometry, storing the information as a byte stream. Does anyone have an idea how to make this much more performant, or has anyone done something similar already?
EDIT: These data points do not change once drawn, so if there is potential for optimizing that, please let me know (the line segments are frozen right now).
EDIT: I tried StreamGeometry. Creating the graphic took even longer for some reason, but that is not the issue. Drawing on the chart after adding all the points is still as slow as with the previous method. I think it's just too many data points for WPF to deal with.
EDIT: I've experimented a bit and noticed that performance improved slightly when I converted the coordinates, which were previously doubles, to ints, preventing WPF from anti-aliasing sub-pixel lines.
EDIT: Thanks for all the responses suggesting to reduce the number of line segments. I have reduced them to at most twice the horizontal resolution for stepped lines and at most the horizontal resolution for simple lines and the performance is pretty good now.
I'd consider downsampling the number of points you are trying to render. You may have 50,000 points of data, but you're unlikely to be able to fit them all on the screen; even if you charted every single point in one display, you'd need 100,000 pixels of horizontal resolution to draw them all. Even in D3D that's a lot to draw.
Since you are more likely to have something like 2,048 pixels, you may as well reduce the number of points you are graphing and draw an approximate curve that fits onto the screen with only a couple of thousand vertices. If, for example, the user graphs a time frame including 10,000 points, downsample those 10,000 points to 1,000 before graphing. There are numerous techniques you could try, from simple averaging to median-neighbor to Gaussian convolution to (my suggestion) bicubic interpolation. Drawing any number of points greater than half the screen resolution is simply a waste.
As the user zooms in on a part of a graph, you can resample to get higher resolutions and more accurate curve fitting.
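As one hedged example of the simpler techniques, a bucket-averaging downsampler might look like this (targetCount would be on the order of the horizontal pixel count):

// Sketch: reduce "points" to roughly targetCount points by averaging each
// bucket. For financial data a min/max-per-bucket variant preserves spikes
// better; this is just the simplest case.
static List<Point> Downsample(IList<Point> points, int targetCount)
{
    if (points.Count <= targetCount)
        return new List<Point>(points);

    var result = new List<Point>(targetCount);
    double bucketSize = (double)points.Count / targetCount;

    for (int b = 0; b < targetCount; b++)
    {
        int start = (int)(b * bucketSize);
        int end = Math.Min(points.Count, (int)((b + 1) * bucketSize));
        double sumX = 0, sumY = 0;
        for (int i = start; i < end; i++)
        {
            sumX += points[i].X;
            sumY += points[i].Y;
        }
        int n = Math.Max(1, end - start);
        result.Add(new Point(sumX / n, sumY / n));
    }
    return result;
}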
When you start dealing with hundreds of thousands of distinct vertices and vectors in your geometry, you should probably consider migrating your graphics code to use a graphics framework instead of depending on WPF (which, while built on top of Direct3D and therefore capable of remarkably efficient vector graphics rendering, has a lot of extra overhead going on that hampers its efficiency). It's possible to host both Direct3D and OpenGL graphics rendering windows within WPF -- I'd suggest moving that direction instead of continuing to work solely within WPF.
(EDIT: changed "DirectX" in original answer to "Direct3D")
Just ran into this question, but as I mentioned in this thread, the most performant approach might be to program against WPF's Visual layer.
Everything visual in WPF is eventually built on this layer ... and so it is the most lightweight approach of them all.
See this and this for more info. Chapter 14 of Matthew MacDonald's Pro WPF in C# 2008 book also has a good section on it.
As another reference ... see Chapter 2 of Pavan Podila's book WPF Control Development Unleashed. On page 13, he discusses how DrawingVisuals would be an excellent choice for a charting component.
Finally, I just noticed that Charles Petzold wrote an MSDN Magazine article where the best overall (performant anyway) solution (to a scatter plot) was a DrawingVisual approach.
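The core of that approach is a small host element exposing DrawingVisual children; a minimal sketch (not taken from any of the references above):

// Sketch: a minimal FrameworkElement that hosts DrawingVisuals directly on
// the Visual layer, bypassing the heavier shape/control machinery.
public class VisualHost : FrameworkElement
{
    private readonly VisualCollection _children;

    public VisualHost()
    {
        _children = new VisualCollection(this);

        var visual = new DrawingVisual();
        using (DrawingContext dc = visual.RenderOpen())
        {
            // Example content; real code would draw the chart geometry here.
            dc.DrawLine(new Pen(Brushes.Black, 1), new Point(0, 0), new Point(100, 100));
        }
        _children.Add(visual);
    }

    protected override int VisualChildrenCount => _children.Count;

    protected override Visual GetVisualChild(int index) => _children[index];
}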
Another idea would be to use the Image control with the Source property set to a DrawingImage that you've dynamically created.
According to Pavan Podila in WPF Control Development Unleashed, this approach can be very helpful when you have thousands and thousands of visuals that don't need any interactivity. Check out page 25 of his book for more info.
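A rough sketch of that idea (chartGeometry and chartImage are placeholders for the frozen geometry and the Image control):

// Sketch: build a Drawing once, freeze it, and display it via a DrawingImage
// in an Image control; no per-visual overhead and no interactivity.
var drawing = new GeometryDrawing(
    null,
    new Pen(Brushes.SteelBlue, 1),
    chartGeometry);

var drawingImage = new DrawingImage(drawing);
drawingImage.Freeze();

chartImage.Source = drawingImage;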
This is an old thread, but I thought it was worth mentioning that you could attain interactivity with the above method by handling the MouseUp event. You know the size of the image's viewport, the resolution of the image, and the mouse's position. For example, you could maintain the collection actualScreenPoints in a handler for your UserControl_SizeChanged event:
double xworth = viewport.ActualWidth / (XEnd - XStart);
double yworth = viewport.ActualHeight / (YEnd - YStart);
List<Point> actualScreenPoints = new List<Point>();
for (var i = 0; i < points.Count; i++)
{
    double posX = points[i].X * xworth;
    double posY = points[i].Y * yworth;
    actualScreenPoints.Add(new Point(posX, posY));
}
Then, when your MouseUp event fires, check whether any of the points in the collection are within +-2 px of the mouse position. There's your MouseUp on a given point.
I don't know how well it scales, but I've had some success using ZedGraph in WPF (a WinForms control inside a WindowsFormsHost). I'm surprised no one has mentioned it yet. It's worth taking a look at, even if you're not planning on using it for your current project.
ZedGraph
Good luck!
I believe the only method that might be faster while remaining in the WPF framework would be to override OnRender in a custom control. You can then render your geometry directly to the persisted scene, culling anything out of view. If the user can only see a small part of the data set at a time, culling could be enough on its own.
With this many data points, it's unlikely that the user can see full detail when the entire dataset is in view. So it might also be worthwhile to consider simplifying the dataset for full view and then showing a more detailed view if and when they zoom in.
Edit: Also, give StreamGeometry a shot. Its whole reason for existing is performance, and you never know until you try.
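Going back to the OnRender idea, a minimal sketch with simple culling (hedged; the geometry list and the visible-bounds lookup are placeholders):

// Sketch: custom element that draws only the geometry intersecting the
// currently visible bounds. _geometries, _pen and GetVisibleWorldBounds are
// placeholders for the real data and scroll/zoom state.
public class ChartElement : FrameworkElement
{
    private readonly List<Geometry> _geometries = new List<Geometry>();
    private readonly Pen _pen = new Pen(Brushes.Black, 1);

    protected override void OnRender(DrawingContext dc)
    {
        Rect visible = GetVisibleWorldBounds();

        foreach (Geometry geometry in _geometries)
        {
            // Cheap cull: skip anything whose bounds don't touch the viewport.
            if (geometry.Bounds.IntersectsWith(visible))
            {
                dc.DrawGeometry(null, _pen, geometry);
            }
        }
    }

    private Rect GetVisibleWorldBounds()
    {
        // Placeholder: in a real control this would come from scroll/zoom state.
        return new Rect(0, 0, ActualWidth, ActualHeight);
    }
}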
This is a very good question, and at its heart begs the question "Can any user make practical use of, or business decisions from, a screen containing 100,000 discrete points?"
Following best practice in GUI design philosophy, the answer should be no, which would lead me to question whether there isn't a different way to meet the application's requirement.
If there really is a bona fide case for displaying 100,000 points on screen, with no scrolling, then using an off-screen buffer is the way to go. Composite your image into a bitmap, then whack that bitmap onto your Window / Page as needed. This way the heavy lifting is only done once, after which hardware acceleration can be used every time the window needs to be drawn.
Hope this helps.
I haven't worked with WPF (disclaimer), but I suspect that your performance problem is because your code is trying to fit a smooth curved line through all of your data, and the time required increases geometrically (or worse) with the number of data points.
I don't know if this would be acceptable appearance-wise, but try graphing your data by connecting each point to the last with a straight line. This should make the time-to-graph proportional to the number of data points, and with as many points as you have the graph may end up looking exactly the same anyway.