Application, improve performance of touch events - c#

Basically, I have an application which is 8000px by 8000px. We can zoom in to view a specific part, for example the radio, or we can zoom out to view everything.
Each part of the car is a control that we can manipulate with fingers, on a dual touch or multitouch monitor.
My problem is: for manipulating a control, for example the Volume button, the user needs to move the mouse exactly like in real life, so with a circular movement.
With the mouse everything is perfect, it responds instantly without any delay. I use the OnMouseLeftButtonDown, OnMouseMove, etc.
With touch, it seems to be very difficult for the computer to get the touch position and there is a huge lag, especially when the user moves 2 different buttons with 2 fingers at the same time. I use OnTouchDown, OnTouchMove, etc.
The only difference between the mouse and the touch is when we need to get the position, with the Mouse I use: (e is a MouseButtonEventArgs)
Point currentPosition = e.GetPosition(this);
With the Touch I use: (e is a TouchEventArgs)
Point currentPosition = e.GetTouchPoint(this).Position;
Everything after this is the same.
I don't know if it's because I have too many controls in my application (over 5000 that we can manipulate, but even when we zoom in on only 2 controls it's the same thing) or because it is really difficult for the computer to get the position from a touch event.
Can someone help me with this? I need to find a solution to eliminate the lag.
I use Visual Studio 2010, Blend 4, .NET 4.0
Windows 7 64-bit
7 GB RAM
Xeon 2.13 GHz, 2 cores, 8 threads
Screen: ELO technology, in a NEC 2490WUXi2 screen

This seems to be a bug. Take a look at this post.
http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/756a2ccf-6cf1-4e4a-a101-30b6f3d9c2c9/#ed51226a-6319-49f8-be45-cf5ff48d56bb
Or
http://www.codeproject.com/Articles/692286/WPF-and-multi-touch

It's hard to say why you have such an issue. Maybe OnTouchMove is triggered more often than MouseMove, and you should add some extra processing to smooth the touch position data.
I'd try commenting out all the code after
Point currentPosition = e.GetTouchPoint(this).Position;
and looking at the performance.
Another approach is to count how often OnTouchMove is triggered.
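For example, a minimal sketch of both ideas (assuming a WPF control subclass; the class name and the distance threshold are arbitrary): it counts how often OnTouchMove fires and skips updates when the finger has barely moved, so you can see whether the event rate itself is the problem.

using System.Diagnostics;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public class VolumeKnob : Control
{
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private int _moveCount;
    private Point _lastProcessed;

    protected override void OnTouchMove(TouchEventArgs e)
    {
        base.OnTouchMove(e);
        _moveCount++;

        // Log the event rate roughly once per second.
        if (_clock.ElapsedMilliseconds >= 1000)
        {
            Debug.WriteLine("OnTouchMove calls in the last second: " + _moveCount);
            _moveCount = 0;
            _clock.Restart();
        }

        Point current = e.GetTouchPoint(this).Position;

        // Ignore sub-pixel jitter so the expensive rotation logic runs less often.
        if ((current - _lastProcessed).Length < 2.0)
            return;

        _lastProcessed = current;
        // ... existing rotation / manipulation logic goes here ...
        e.Handled = true;
    }
}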

The problem may be the touch device itself; try another one to see if the lag is still there.

Related

Positioning Unity Admob Banner at the Bottom on a Device with a Display Cutout

I'm trying to position my ad banner (generated via the official Admob package for Unity) at the very bottom of the screen, but it doesn't seem to work very well with devices with a notch.
bv = new BannerView(adUnitId, adSize, AdPosition.Bottom);
This line of code positions the banner at the very bottom perfectly on notch-less devices (such as the Pixel, Pixel XL, Pixel 2 & 2 XL), but looks like this when the device has a notch at the top:
(The demo photo is taken from this Github Issue. Ignore the visible navigation bar.)
The space between the banner and the bottom of the screen is exactly the height of the notch; I tested it with multiple notch heights.
In order to position the banner at the very bottom, I think the best solution is to get the height of the notch and then use the following line of code:
new BannerView(adUnitId, adSize, 0, (int)ConvertPixelsToDP(Screen.height - adHeight + notchHeight));
But I couldn't find any way to get the cutout height. I tried using the Screen and Display class properties, but none of them helped me get the real resolution of the screen, including the notch, so that I could subtract the safeArea height and get the exact height of the notch.
Is there any way to get this real screen resolution, or any other possible solution to this problem?
From the main GitHub issue thread about this the AdMob Unity plugin developers stated "Will be prioritizing notch compatibility for the next release." back in October, indicating they are aware of the bug, and they posted on January 22nd saying "This work got pushed back but will be making it into the next release. No ETAs yet but should be soon." So who knows when the issue will ever get fixed.
But to anyone who is wrestling with this issue: I found a workaround. In my case it's a workaround for getting the ad to appear at the bottom of the screen, but I will also mention how you can probably get it to display at the top properly. I still hope for a release that just fixes this soon, but here is an option for those who don't want to wait around...
I discovered this workaround because I first started with the obvious solution: compute the y position manually via the obvious formula: int yDP = DensityIndependentPixels(Mathf.RoundToInt(Screen.safeArea.y + Screen.safeArea.height) - ScreenPixels(adSize.Height));
This would work, except that as of Unity 2018.3.5 safeArea always has 0, 0 for x and y. The width and height are usable to detect that there's a notch when compared to Display.main.systemHeight (although as far as I can tell, Screen.height == safeArea.height and the same for width, so safeArea is entirely useless right now, since it only conveys information you can get from Screen.height and Screen.width), but you can't actually detect where the notch is. This means that in this workaround I assume the notch is at the top of the screen. That's a safe assumption, but some phones have two notches, one on top and one on bottom, and for those devices my workaround will leave the ad partially covered by the bottom notch. If safeArea ever gets useful values, my workaround can account for that. Anyway, I digress.
With that assumption made we can use a slightly modified formula: int yDP = DensityIndependentPixels(Screen.height - ScreenPixels(adSize.Height));
But frustratingly, while I was logging the values I'd expect for the y position, the banner was still showing up in the same weird offset position! I started manually increasing the y position from 0 and confirmed that beyond a certain y coordinate, even if you increase the value, the position it appears in stays the same. I then discovered that if you add a fair bit more to the y value, the banner inexplicably pops back up at the top of the screen, and you can continue incrementing the y position to bring it back down the screen, except this time it doesn't get stuck at the weird offset position. After much experimentation I found that the formula for this inexplicable extra wrap-around y offset is (DensityIndependentPixels(Display.main.systemHeight) + adSize.Height); add it to the intended density-independent pixel position and you have the banner in the y position you were expecting.
Once you simplify the math, the manual y position that is a functional workaround for placing the banner at the bottom on a notched device is the shockingly simple:
int yDP = DensityIndependentPixels(Screen.height + Display.main.systemHeight);
I've tested this with all the simulated notches in Android 9 and it works (with the caveat that the double notch partially covers the ad still, but that's better than the ad covering the UI of the app), but be warned that I haven't yet tested this on actual notched devices!
Note that you only need to do this when Screen.height != Display.main.systemHeight; otherwise you're on a notchless device and should just use AdPosition.Bottom.
For those who want their banner to appear at the top, the manual computation for the y position is even simpler. Under the same assumption that notches are always on top, the y position of the ad should be set to DensityIndependentPixels(Display.main.systemHeight - Screen.height), with no bizarre inexplicable wrap-around offset necessary. Note that I haven't tested displaying the banner at the top, given that the banner in my app is always displayed at the bottom.
The x position manual formula to make sure the ad is centered is exactly as you'd expect: int xDP = Mathf.RoundToInt(DensityIndependentPixels(Display.main.systemWidth - ScreenPixels(adSize.Width)) / 2f); Nothing particularly weird there.
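Putting the pieces together, here is a rough sketch of the whole workaround in one place (PixelsToDp is a hypothetical helper based on Screen.dpi; the wrap-around formula is the one derived above and, as said, has only been verified against simulated notches):

using GoogleMobileAds.Api;
using UnityEngine;

public static class NotchAwareBanner
{
    // Hypothetical helper: convert physical pixels to density-independent pixels.
    static int PixelsToDp(float pixels)
    {
        return Mathf.RoundToInt(pixels * 160f / Screen.dpi);
    }

    public static BannerView CreateBottomBanner(string adUnitId, AdSize adSize)
    {
        bool hasNotch = Screen.height != Display.main.systemHeight;

        if (!hasNotch)
        {
            // Notchless device: the built-in anchor works fine.
            return new BannerView(adUnitId, adSize, AdPosition.Bottom);
        }

        // Manual placement with the wrap-around offset folded in:
        // yDP = DP(Screen.height + Display.main.systemHeight)
        int yDp = PixelsToDp(Screen.height + Display.main.systemHeight);

        // Centered horizontally: xDP = DP(systemWidth - adWidthPx) / 2
        float adWidthPx = adSize.Width * Screen.dpi / 160f;
        int xDp = Mathf.RoundToInt(PixelsToDp(Display.main.systemWidth - adWidthPx) / 2f);

        return new BannerView(adUnitId, adSize, xDp, yDp);
    }
}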
One final note: I noticed that after I started manually specifying the ad position, I couldn't call Show() after calling Hide() on the BannerView; it would get into a weird state where it was invisible yet clickable. To fix this, simply Destroy() and create a new AdRequest instead of calling Show(). I've seen some statements on Google support forums that say this is better practice anyway.
I hope the day I lost diving into this bizarre issue and finding this workaround helps someone here.
Unfortunately this is an issue caused by AdMob not accounting correctly for the top notch on Android. Note that on iOS the ads code seems to acknowledge the existence of the notch and adds an extra 10 units of padding under it, which is weird.
I just spent a day or so trying to figure out a very similar issue with the banner.
There are several ways to fix this:
Option 1. Create a new Java plugin and add the code below, or call the equivalent of this code from C#:
public int getDisplayRealHeightPixels() {
    Point realSize = new Point();
    Display display = getWindowManager().getDefaultDisplay();
    if (Build.VERSION.SDK_INT >= 17) {
        display.getRealSize(realSize);
    } else {
        display.getSize(realSize);
    }
    return realSize.y;
}
Then use getDisplayRealHeightPixels() to position the banner. That's what I currently do.
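If you prefer to avoid a separate Java plugin, a rough C# equivalent using Unity's AndroidJavaObject bridge could look like this (a sketch; it omits the pre-API-17 getSize fallback and simply falls back to Screen.height outside the Android player):

using UnityEngine;

public static class DisplayMetricsHelper
{
    public static int GetDisplayRealHeightPixels()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        using (var unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
        using (var activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity"))
        using (var windowManager = activity.Call<AndroidJavaObject>("getWindowManager"))
        using (var display = windowManager.Call<AndroidJavaObject>("getDefaultDisplay"))
        using (var realSize = new AndroidJavaObject("android.graphics.Point"))
        {
            // Display.getRealSize(Point) is available from API level 17.
            display.Call("getRealSize", realSize);
            return realSize.Get<int>("y");
        }
#else
        return Screen.height;
#endif
    }
}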
Option 2. Enable painting in the cutout area: create a new Java plugin and, in onCreate, add:
protected void onCreate(Bundle savedInstanceState) {
    ...
    // allow painting in the notch (cutout) area
    WindowManager.LayoutParams attributes = getWindow().getAttributes();
    attributes.layoutInDisplayCutoutMode = WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES;
    getWindow().setAttributes(attributes);
    ...
I'll use this option once I adapt my app to support cutouts properly.
Option 3: Wait for Unity 2018.3 to support notches properly
According to https://unity3d.com/unity/beta/2018.3:
"Android: Added notch support for Android."
Gotchas:
None of the above fixes are perfect (perhaps option 3? But I haven't tried it). Make sure to test on iOS (iPhone X), because ads behave differently there than on Android.
On Android Ads, y==0 is the top of the screen, while on iPhone X y==0 renders the banner about 10 units below the notch.
For iOS I just hardcode the "notch size" to 30 and draw my app in the notch area.
Android split screen is a pain - I haven't tested it completely there, and my banner flies around the screen in that case. I just decided to fix it later.

Improving DrawingContext performance rendering Geometry (Polygons and Polylines)

This is my first question, though I'm a long-time lurker. I'll split this up into two parts, one part explaining what I'm doing and why I think this is the way to go, the second one being the actual question that I can't solve for myself.
What am I doing?
I'm currently developing a framework for rendering 2-dimensional features meant to be displayed in real-time. You can think of an application like Google Maps in your browser, however the framework is meant to render all kinds of geographical data (not just axis-aligned raster data, like those Google Tiles).
The framework is to be integrated into our (the company's) newest product which is a WPF application for the desktop and laptop.
Therefore I chose WPF for rendering the geometry only; visibility and occlusion culling are done by myself, as well as input handling (mouse picking), moving the camera, etc.
Being a real-time application, it needs to achieve at least 30 FPS. The framework performs adequately when rendering images: I can draw several thousand bitmaps per frame without a problem; polygonal data, however, turns out to be a major problem.
The actual question
I'm rendering a fair amount of polyline and polygon data using WPF, specifically using DrawingContext and StreamGeometry. My understanding so far is that this is the way to go if I need performance. However, I am not able to achieve the results that I expected from this.
This is how I fill the StreamGeometry with actual data:
using (StreamGeometryContext ctx = Geometry.Open())
{
    foreach (var segment in segments)
    {
        var first = ToWpf(segment[0]);
        ctx.BeginFigure(first, false, false);

        // Skip the first point, obviously
        List<Point> points = segment.Skip(1).Select(ToWpf).ToList();
        ctx.PolyLineTo(points, true, false);
    }
}
Geometry.Freeze();
And this is how I draw my geometry:
_dc.PushTransform(_mercatorToView);
_dc.DrawGeometry(null, _pen, polyline);
_dc.Pop();
As a test, I loaded ESRI shapes from OpenStreetMap into my application to check its performance; however, I'm not satisfied at all:
My test data consists of ~3500 line segments with a total of ~20k lines.
Mapping each segment to its own StreamGeometry performed extremely badly, but I kind of expected that already: rendering takes about 14 seconds.
I've then tried packing more segments into the same StreamGeometry, using multiple figures:
80 StreamGeometry objects: rendering takes about 50 ms.
However, I can't get any better results than this. Increasing the number of lines to around 100k makes my application nearly unusable: rendering takes more than 100 ms.
What else can I do, besides freezing both the geometry and the pen, when rendering vector data?
I'm at the point where I'd rather use DirectX myself than rely on WPF to do it for me, because something seems to be going terribly wrong.
Edit
To further clarify what I am doing: The application visualizes geographic data in real-time, very much like an application like Google Maps in the browser: However it is supposed to visualize much, much more data. As you may know, Google Maps allows both zooming and panning, which requires > 25 FPS for it to appear as a fluent animation; anything less does not feel fluent.
Sorry but I shouldn't upload a video of this before the actual product is released. You may however envision something like Google Maps, however with tons of vector data (polygons and polylines).
There are two solutions, one of which is very often stated:
Cache heavy drawings in a bitmap
The implementation seems kinda easy, however I see some problems with this approach: In order to properly implement panning, I need to avoid drawing the heavy stuff each frame, and therefore I am left with the choice of either not updating the cached bitmap while panning the camera, or creating a bitmap which covers an even bigger region than the viewport, so that I only need to update the cached bitmap every so often.
The second "problem" is related to zooming. However it's more of a visual artifact than a real problem: Since the cached bitmap can't properly be updated at 30 FPS, I need to avoid that when zooming as well. I may very well scale the bitmap while zooming, only creating a new bitmap when the zoom ends, however the width of the polylines would not have a constant thickness, although they should.
This approach does seem to be used by MapInfo, however I can't say I'm too fond of it. It does seem to be the easiest to implement though.
Split geometry up into different drawing visuals
This approach seems to deal with the problem differently. I'm not sure if this approach works at all: It depends on whether or not I correctly understood how WPF is supposed to work in this area.
Instead of using one DrawingVisual for all stuff that needs to be drawn, I should use several, so that not every one needs to be RenderOpened(). I could simply change parameters, for example the matrix in the sample above, in order to reflect both camera panning and moving.
However I see some problems with this approach as well: Panning the camera will inevitably bring new geometry into the viewport, hence I would need to perform something similar than in the first approach, actually render stuff which is currently not visible, but may become visible due to the camera shifting; Drawing everything is out of the question as it may take ridiculous amounts of times for a rather small amount of data.
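A minimal sketch of this idea (class and member names are hypothetical): several DrawingVisuals share one MatrixTransform, so panning or zooming only updates the matrix instead of calling RenderOpen again.

using System.Windows;
using System.Windows.Media;

public class TiledGeometryHost : FrameworkElement
{
    private readonly VisualCollection _children;
    private readonly MatrixTransform _worldToView = new MatrixTransform();

    public TiledGeometryHost()
    {
        _children = new VisualCollection(this);
    }

    public void AddChunk(StreamGeometry geometry, Pen pen)
    {
        var visual = new DrawingVisual { Transform = _worldToView };
        using (DrawingContext dc = visual.RenderOpen())
        {
            dc.DrawGeometry(null, pen, geometry);   // drawn once, retained by WPF
        }
        _children.Add(visual);
    }

    // Called on pan/zoom: no RenderOpen, just a new matrix.
    public void SetViewMatrix(Matrix mercatorToView)
    {
        _worldToView.Matrix = mercatorToView;
    }

    protected override int VisualChildrenCount => _children.Count;
    protected override Visual GetVisualChild(int index) => _children[index];
}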
Problem related to both approaches
One big problem which neither of these approaches can solve is that even if the overall frame rate is stable, occasional hiccups seem to be inevitable, either when updating the cached bitmaps (okay, this doesn't apply if the cached bitmap is only updated when the camera is no longer being panned) or when calling RenderOpen to draw the visible chunk of geometry.
My thoughts so far
Since these are the only two solutions I ever see to this problem (I've done my fair share of googling for more than a year), I guess the only options so far are to accept frame-rate hiccups even on the most powerful GPUs (which should be able to rasterize hundreds of millions of primitives per second), a delayed update of the viewport (in the case where bitmaps are only updated when the viewport is no longer being moved), or to not use WPF at all and resort to DirectX directly.
I'm very grateful for the help; however, I can't say I'm impressed by WPF's rendering performance so far.
To improve 2D WPF rendering performance you could have a look at the RenderTargetBitmap (for WPF >= 3.5) or the BitmapCache class (for WPF >= 4).
Those classes are used for Cached Composition
From MSDN:
By using the new BitmapCache and BitmapCacheBrush classes, you can cache a complex part of the visual tree as a bitmap and greatly improve rendering time. The bitmap remains responsive to user input, such as mouse clicks, and you can paint it onto other elements just like any brush.
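For example, a minimal sketch of turning on cached composition for the element that hosts the heavy geometry (mapLayer is a hypothetical name):

using System.Windows;
using System.Windows.Media;

public static class MapLayerCaching
{
    public static void EnableBitmapCache(UIElement mapLayer)
    {
        // RenderAtScale > 1 keeps lines reasonably crisp when the cached layer is scaled up.
        mapLayer.CacheMode = new BitmapCache
        {
            RenderAtScale = 2.0,
            EnableClearType = false,
            SnapsToDevicePixels = false
        };
    }
}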

Designing a CAD application

I am designing a CAD application using a variation of the MVC architecture. My model and view are independent of each other. They communicate through the controller. My problem is that if I need to draw an object (say a line or polyline), I need a number of input points. What would be the best way to get the points? All the events from the view are subscribed to by the controller, and the controller has to keep the points, then generate the line or polyline, and finally add this line to the view. But I don't know how capturing the mouse points can be done efficiently, because each object will have a different number of inputs and different input-validation algorithms.
Any help would be highly appreciated.
I was working on a CAD application 3 years ago, and these are some tips based on what we did (BTW: the application is free, you can download it, register your copy, and make use of the features in the Truss Editor):
1- You may add buttons for shape drawing, example: a button for a line, a button for a polyline, a rectangle, ...etc.
2- Create a variable that holds the current state of your application (maybe an enum): ready, drawing point, drawing line, drawing polyline, drawing circle, etc.
3- Whenever the user clicks a drawing button, the system enters the relevant state from those mentioned above.
4- The system returns to the "ready" mode when it finishes drawing, which can be detected automatically by the expected number of points (1 for a point, 2 for a line, 3 for an ellipse, etc.) or when the user presses Esc or right-clicks the drawing area (if the expected number of points is unknown, for example a polyline). You may also end polyline drawing if the user re-clicks the first point after having drawn 3+ points.
5- The system may cancel the current drawing operation if the user ends it before providing the expected number of points (a minimal sketch of this state machine follows after the list).
...
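A minimal sketch of that state machine (all names hypothetical): the controller tracks the active tool, collects clicked points, and finishes or cancels the shape as described above.

using System.Collections.Generic;
using System.Windows;

public enum DrawingMode { Ready, Point, Line, Ellipse, Polyline }

public class DrawingController
{
    private DrawingMode _mode = DrawingMode.Ready;
    private readonly List<Point> _points = new List<Point>();

    public void BeginDrawing(DrawingMode mode)
    {
        _mode = mode;
        _points.Clear();
    }

    // Called by the view for every click in the drawing area.
    public void OnCanvasClick(Point p)
    {
        if (_mode == DrawingMode.Ready)
            return;

        _points.Add(p);

        int expected = ExpectedPointCount(_mode);   // -1 means "unknown" (e.g. polyline)
        if (expected > 0 && _points.Count == expected)
            FinishShape();
    }

    // Esc or a right-click in the drawing area ends open-ended shapes such as polylines.
    public void OnCancelOrFinishGesture()
    {
        if (_mode == DrawingMode.Polyline && _points.Count >= 2)
            FinishShape();
        else
            BeginDrawing(DrawingMode.Ready);        // cancel an incomplete shape
    }

    private static int ExpectedPointCount(DrawingMode mode)
    {
        switch (mode)
        {
            case DrawingMode.Point:   return 1;
            case DrawingMode.Line:    return 2;
            case DrawingMode.Ellipse: return 3;
            default:                  return -1;
        }
    }

    private void FinishShape()
    {
        // ... hand _points to the model, which creates the shape and notifies the view ...
        _mode = DrawingMode.Ready;
        _points.Clear();
    }
}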
When designing CAD software, you must think not only of flexibility and dynamism, but also of speed. You should use some kind of wrapper class that works as a very thin layer between you and the hardware driver; it should return things like the pixel array of the screen, the current bpp, etc. This is how I would do it (and actually did). Now in C#, seeing as it is a .NET language, I'm not sure you can go that low, but you can still have some kind of handler between the controller and your pen object, can't you?

Most performant way to graph thousands of data points with WPF?

I have written a chart that displays financial data. Performance was good while I was drawing fewer than 10,000 points displayed as a connected line using PathGeometry together with PathFigure and LineSegments. But now I need to display up to 100,000 points at the same time (without scrolling), and it's already very slow with 50,000 points. I was thinking of StreamGeometry, but I am not sure, since it's basically the same as a PathGeometry, storing the information as a byte stream. Does anyone have an idea to make this much more performant, or has someone perhaps done something similar already?
EDIT: These data points do not change once drawn so if there is potential optimizing it, please let me know (line segments are frozen right now).
EDIT: I tried StreamGeometry. Creating the graphic took even longer for some reason, but this is not the issue. Drawing on the chart after drawing all the points is still as slow as the previous method. I think it's just too many data points for WPF to deal with.
EDIT: I've experimented a bit and I noticed that performance improved a bit by converting the coordinates which were previously in double to int to prevent WPF anti-aliasing sub-pixel lines.
EDIT: Thanks for all the responses suggesting to reduce the number of line segments. I have reduced them to at most twice the horizontal resolution for stepped lines and at most the horizontal resolution for simple lines and the performance is pretty good now.
I'd consider downsampling the number of points you are trying to render. You may have 50,000 points of data but you're unlikely to be able to fit them all on the screen; even if you charted every single point in one display you'd need 100,000 pixels of horizontal resolution to draw them all! Even in D3D that's a lot to draw.
Since you are more likely to have something like 2,048 pixels, you may as well reduce the points you are graphing and draw an approximate curve that fits onto the screen and has only a couple thousand verts. If for example the user graphs a time frame including 10000 points, then downsample those 10000 points to 1000 before graphing. There are numerous techniques you could try, from simple averaging to median-neighbor to Gaussian convolution to (my suggestion) bicubic interpolation. Drawing any number of points greater than 1/2 the screen resolution will simply be a waste.
As the user zooms in on a part of a graph, you can resample to get higher resolutions and more accurate curve fitting.
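As a starting point, here is a minimal bucket-averaging downsampler (a sketch; the fancier kernels mentioned above can be dropped into the same loop):

using System.Collections.Generic;
using System.Windows;

public static class ChartDownsampler
{
    public static List<Point> Downsample(IList<Point> source, int targetCount)
    {
        if (source.Count <= targetCount)
            return new List<Point>(source);

        var result = new List<Point>(targetCount);
        double bucketSize = (double)source.Count / targetCount;

        for (int b = 0; b < targetCount; b++)
        {
            int start = (int)(b * bucketSize);
            int end = (int)((b + 1) * bucketSize);
            if (end > source.Count) end = source.Count;

            // Average all points that fall into this bucket.
            double sumX = 0, sumY = 0;
            for (int i = start; i < end; i++)
            {
                sumX += source[i].X;
                sumY += source[i].Y;
            }
            int n = end - start;
            result.Add(new Point(sumX / n, sumY / n));
        }
        return result;
    }
}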
When you start dealing with hundreds of thousands of distinct vertices and vectors in your geometry, you should probably consider migrating your graphics code to use a graphics framework instead of depending on WPF (which, while built on top of Direct3D and therefore capable of remarkably efficient vector graphics rendering, has a lot of extra overhead going on that hampers its efficiency). It's possible to host both Direct3D and OpenGL graphics rendering windows within WPF -- I'd suggest moving that direction instead of continuing to work solely within WPF.
(EDIT: changed "DirectX" in original answer to "Direct3D")
Just ran into this question, but as I mentioned in this thread, the most performant approach might be to program against WPF's Visual layer.
Everything Visual in WPF eventually goes against this layer ... and so it is the most lightweight approach of them all.
See this and this for more info. Chapter 14 of Matthew MacDonald's Pro WPF in C# 2008 book also has a good section on it.
As another reference ... see Chapter 2 of Pavan Podila's book WPF Control Development Unleashed. On page 13, he discusses how DrawingVisuals would be an excellent choice for a charting component.
Finally, I just noticed that Charles Petzold wrote an MSDN Magazine article where the best overall (performant anyway) solution (to a scatter plot) was a DrawingVisual approach.
Another idea would be to use the Image control with the Source property set to a DrawingImage that you've dynamically created.
According to Pavan Podila in WPF Control Development Unleashed, this approach can be very helpful when you have thousands and thousands of visuals that don't need any interactivity. Check out page 25 of his book for more info.
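A minimal sketch of that approach (helper name is hypothetical): build a frozen DrawingImage once and hand it to an Image control, so the whole chart becomes a single retained visual.

using System.Windows.Controls;
using System.Windows.Media;

public static class ChartImageFactory
{
    public static Image CreateChartImage(Geometry chartGeometry, Pen pen)
    {
        var drawing = new GeometryDrawing(null, pen, chartGeometry);

        var drawingImage = new DrawingImage(drawing);
        drawingImage.Freeze();   // no further changes, so WPF can skip change tracking

        return new Image { Source = drawingImage, Stretch = Stretch.None };
    }
}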
This is an old thread, but I thought it was worth mentioning that you could attain interactivity with the above method by using the MouseUp() event. You know the size of the image's viewport, the resolution of the image, and the mouse's position. For example, you could maintain the collection actualScreenPoints through a timer attached to your UserControl_SizeChanged event:
double xworth = viewport.ActualWidth / (XEnd - XStart);
double yworth = viewport.ActualHeight / (YEnd - YStart);
List<Point> actualScreenPoints = new List<Point>();
for (var i = 0; i < points.Count; i++)
{
    double posX = points[i].X * xworth;
    double posY = points[i].Y * yworth;
    actualScreenPoints.Add(new Point(posX, posY));
}
And then, when your MouseUp() event fires, check whether any of the points in the collection are within ±2 px of the mouse position. There's your MouseUp on a given point.
I don't know how well it scales, but I've had some success using ZedGraph in WPF (a WinForms control hosted in a WindowsFormsHost). I'm surprised no one has mentioned it yet. It's worth taking a look at, even if you're not planning on using it for your current project.
ZedGraph
Good luck!
I believe the only method that might be faster while remaining in the WPF framework would be to override OnRender in a custom control. You can then render your geometry directly to the persisted scene, culling anything out of view. If the user can only see a small part of the data set at a time, culling could be enough on its own.
With this many data points, it's unlikely that the user can see full detail when the entire dataset is in view. So it might also be worthwhile to consider simplifying the dataset for full view and then showing a more detailed view if and when they zoom in.
Edit: Also, give StreamGeometry a shot. Its whole reason for existing is performance, and you never know until you try.
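A minimal sketch of the OnRender idea (names such as Viewport and LineSegmentData are hypothetical): draw only the segments that intersect the current viewport, with a frozen pen.

using System.Collections.Generic;
using System.Windows;
using System.Windows.Media;

public class ChartCanvas : FrameworkElement
{
    private readonly List<LineSegmentData> _segments = new List<LineSegmentData>();
    private static readonly Pen LinePen = CreateFrozenPen();

    // Set by the pan/zoom logic; call InvalidateVisual() after changing it.
    public Rect Viewport { get; set; }

    protected override void OnRender(DrawingContext dc)
    {
        foreach (var seg in _segments)
        {
            // Simple culling: skip segments entirely outside the viewport.
            if (!Viewport.IntersectsWith(seg.Bounds))
                continue;
            dc.DrawLine(LinePen, seg.From, seg.To);
        }
    }

    private static Pen CreateFrozenPen()
    {
        var pen = new Pen(Brushes.SteelBlue, 1.0);
        pen.Freeze();   // frozen pens avoid per-frame change tracking
        return pen;
    }
}

public class LineSegmentData
{
    public Point From, To;
    public Rect Bounds;
}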
This is a very good question, and at its heart it begs the question: "Can any user make practical use of, or business decisions from, a screen containing 100,000 discrete points?"
Following best practice in GUI design philosophy, the answer should be no, which would lead me to question whether there isn't a different way to meet the requirement for the application.
If there really is a bona fide case for displaying 100,000 points on screen, with no scrolling, then using an off-screen buffer is the way to go. Composite your image to a bitmap, then whack that bitmap onto your Window / Page as needed. This way the heavy lifting is only done once, after which hardware acceleration can be used every time the window needs to be drawn.
Hope this helps.
I haven't worked with WPF (disclaimer), but I suspect that your performance problem is because your code is trying to fit a smooth curved line through all of your data, and the time required increases geometrically (or worse) with the number of data points.
I don't know if this would be acceptable appearance-wise, but try graphing your data by connecting each point to the last with a straight line. This should make the time-to-graph proportional to the number of data points, and with as many points as you have the graph may end up looking exactly the same anyway.

WPF - ScreenSaver graphics performance improvements

I took this WPF-VS2008 ScreenSaver template and started to make a new screen saver. I have some experience with WinForms (GDI+) screen savers, so I am a little bit lost with WPF.
Background-element for my screen saver is Canvas.
A DispatcherTimer tick is set to 33 msec, which is ~ 30 FPS.
Background-color is just one huge LinearGradientBrush.
On the screen I have (per available screen; on my local computer I have 2) N Ellipses drawn with randomly calculated (at initialization) background colors plus an alpha channel. They are all in the Canvas's Children collection.
I'm moving those Ellipses around the screen with some logic (every DispatcherTimer tick). I make a move per-ellipse, and then just call Canvas.SetLeft(...) and Canvas.SetTop(...) for each Ellipse.
If N (the number of Ellipses) is higher than 70-80, I begin to notice graphics slow-downs.
Now I wonder if there is anything I could do to improve the graphics smoothness when choosing higher N values. Can I "freeze" something before moving my Ellipses and "un-freeze" it when I'm finished? Or is there any other trick I could use?
Not that I would be too picky about the mentioned performance drops, because when N == 50 everything works as smoothly as it should. Even if the Ellipses are ALL in the SAME place (loads of transparency), there are no problems at all.
Have you tried rendering in the CompositionTarget.Rendering event, rather than in a timer? I've gotten impressive performance in a 3D screen saver when using the Rendering event and doing my own double buffering. (See http://stuff.seans.com/2008/08/21/simple-water-animation-in-wpf/ , http://stuff.seans.com/2008/08/24/raindrop-animation-in-wpf/ , and http://stuff.seans.com/2008/09/01/writing-a-screen-saver-in-wpf/ )
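For illustration, a minimal sketch of driving the animation from CompositionTarget.Rendering instead of a DispatcherTimer (MoveEllipses stands in for your existing per-tick logic):

using System;
using System.Windows;
using System.Windows.Media;

public partial class ScreenSaverWindow : Window
{
    private TimeSpan _lastRenderTime;

    public ScreenSaverWindow()
    {
        InitializeComponent();
        CompositionTarget.Rendering += OnRendering;
    }

    private void OnRendering(object sender, EventArgs e)
    {
        var args = (RenderingEventArgs)e;

        // The event can fire more than once per frame; only animate when the
        // rendering time actually advances.
        if (args.RenderingTime == _lastRenderTime)
            return;

        double dt = (args.RenderingTime - _lastRenderTime).TotalSeconds;
        _lastRenderTime = args.RenderingTime;

        MoveEllipses(dt);   // existing per-tick logic, now based on the elapsed time
    }

    private void MoveEllipses(double deltaSeconds)
    {
        // ... Canvas.SetLeft / Canvas.SetTop updates go here ...
    }
}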
You will improve performance if you call the Freeze method on objects that inherit from Freezable - brushes for example.
The reason is that Freezable supports extra change notifications that have to be handled by the graphics system, when you call Freeze the object can no longer change and so there are no more change notifications.
For an example of this notification system, if you create a brush, use it to paint a rectangle (for example) and then change the brush the on-screen rectangle will change color.
It is not possible to unfreeze something once it has been frozen (although a copy of the object is unfrozen by default). Double buffering is also enabled by default in WPF, so you cannot gain anything there.
One way to improve performance, if you have not already done so, is to use geometry objects such as EllipseGeometry rather than shapes, if you do not need all of their events, as these are lighter weight.
I have also found this MSDN article, Optimizing Performance: 2D Graphics and Imaging, which suggests a CachingHint may help, along with some other tips.
Finally, ensure that you are using the latest service pack (SP1), as it has many performance improvements, outlined here.
