I have a SplitContainer which contains Atalasoft's AnnotateViewer. Class hierarchy is as follows:
System.Windows.Forms.Control
Atalasoft.Imaging.WinControls.ScrollPort
...
Atalasoft.Annotate.UI.AnnotateViewer
My.AnnotateViewer
Now the problem: as long as the content of the SplitContainer is smaller than the viewport (so no scrollbars are visible), touch input is interpreted as left mouse down, mouse move, and left mouse up, which is exactly what I'd expect and love to see. I can still use two-finger panning to scroll the view. BUT: if I zoom the viewer so that my content gets larger than the viewport, scrollbars appear and touch input behaves differently: horizontal panning stays the same, but vertical panning now causes scrolling, even with a single finger.
The question is: is this behavior Atalasoft-specific, WinForms-specific, or system-specific, and can I do something to change it? I'd like a single finger to always translate to left click and move. Two fingers for scrolling is fine (and already works).
I fear that it is system-specific, because you can find the exact same behavior in Word 2010. Still, Word is a Microsoft product.
I'm beginning to hate how often sudden inspiration strikes right after you've finally typed your problem up for a forum or similar.
The problem is now solved by re-registering for gesture events: you can register for all pan gestures except horizontal and/or vertical single-finger pan.
// Adapt the gesture registration for this window.
GESTURECONFIG[] gestureConfig = new[]
{
    // Register for the zoom gesture.
    new GESTURECONFIG { dwID = GID_ZOOM, dwWant = GC_ZOOM, dwBlock = 0 },

    // Register for pan gestures, but ignore single-finger pans
    // (only two-finger pan should scroll).
    new GESTURECONFIG
    {
        dwID = GID_PAN,
        dwWant = GC_PAN,
        dwBlock = GC_PAN_WITH_SINGLE_FINGER_HORIZONTALLY | GC_PAN_WITH_SINGLE_FINGER_VERTICALLY
    }
};

SetGestureConfig(this.Handle, 0, (uint)gestureConfig.Length, gestureConfig,
    (uint)Marshal.SizeOf(typeof(GESTURECONFIG)));
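For completeness, here is a sketch of the interop declarations the snippet above assumes (constants and struct layout taken from winuser.h). Note that although the native signature takes a pointer to an array of GESTURECONFIG structures, declaring the parameter as GESTURECONFIG[] is enough; the marshaller passes a pointer to the first element.

private const int GID_ZOOM = 3;
private const int GID_PAN = 4;
private const int GC_ZOOM = 0x00000001;
private const int GC_PAN = 0x00000001;
private const int GC_PAN_WITH_SINGLE_FINGER_VERTICALLY = 0x00000002;
private const int GC_PAN_WITH_SINGLE_FINGER_HORIZONTALLY = 0x00000004;

[StructLayout(LayoutKind.Sequential)]
private struct GESTURECONFIG
{
    public int dwID;    // gesture ID, e.g. GID_PAN
    public int dwWant;  // messages to receive
    public int dwBlock; // messages to block
}

[DllImport("user32.dll", SetLastError = true)]
private static extern bool SetGestureConfig(IntPtr hWnd, uint dwReserved, uint cIDs,
    GESTURECONFIG[] pGestureConfig, uint cbSize);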
Details here:
http://msdn.microsoft.com/de-de/library/dd353241%28v=vs.85%29.aspx
I think this is the cleanest solution you can get.
The SetGestureConfig API accepts a GESTURECONFIG struct as its 4th parameter. How can you pass in a GESTURECONFIG[] array?
I have a WPF touch app (MS Surface, VS 2017, .NET 4.7) that shows a number of simple shapes (lines, rects, circles) above an image, all on a canvas. I'm having trouble selecting the right ManipulationContainer for touch.
Roughly what I have is this:
<ScrollViewer x:Name="MyViewer"
              ManipulationStarting="Viewer_ManipulationStarting">
    <Canvas x:Name="MyCanvas">
        <Image x:Name="MyImage" Source="{Binding MyImageSource}"/>
        <ItemsControl x:Name="MyShapes" ItemsSource="{Binding MyShapes}"/>
    </Canvas>
</ScrollViewer>
I want to let my user do two different things with touch.
With one finger they can draw new shapes (relative to the canvas)
With two fingers they can zoom the entire canvas (relative to the scrollviewer)
So if the user is drawing a new shape, then ManipulationContainer must be "MyCanvas". But if the user is zooming the whole scene, then the ManipulationContainer must be "MyViewer" (because in that case I'm changing the LayoutTransform of the whole canvas).
So I need (it would seem) to select one of two different manipulation containers, depending on which of these operations is happening. But I can't seem to figure out how. Below is what I have (which doesn't work), where I choose the container in the ManipulationStarting handler.
private void Viewer_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
    // If zooming, the container is the parent viewer. Otherwise the user
    // is drawing a shape relative to the canvas.
    if (e.Manipulators.Count() >= 2)
        e.ManipulationContainer = MyViewer; // Zooming the whole canvas
    else
        e.ManipulationContainer = MyCanvas; // Drawing on the canvas

    e.Handled = true;
}
As you likely guessed, when this first fires I always get just one manipulator; the user is extremely unlikely to make his very first touch with two fingers at exactly the same instant. So my code always thinks I'm drawing a shape.
It's only later, in ManipulationDelta, that I start getting more than one manipulator. But by then it's too late to choose a container; it's already been chosen. And if I check the number of manipulators there, my coordinates, delta, and origin are all relative to the very thing I'm trying to move, which makes the zooming jump all over the place.
I don't know what to do about this. I have to let my user zoom at any time so I do not have the option of forcing them to choose a "zoom tool".
I'm sure I'm missing something simple but I don't know what it is and I'd like to avoid blind alleys. Any suggestions?
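One avenue that might be worth trying (an untested sketch, not a verified fix): keep the outer viewer as the only manipulation container, so it never has to change mid-manipulation, and instead translate coordinates into canvas space when a single finger is drawing. Names follow the XAML above; the zoom math is omitted.

private void Viewer_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
    // Always manipulate relative to the outer viewer; the container
    // then never needs to change when a second finger arrives.
    e.ManipulationContainer = MyViewer;
    e.Handled = true;
}

private void Viewer_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    if (e.Manipulators.Count() >= 2)
    {
        // Two or more fingers: zoom the whole canvas, e.g. by updating
        // its LayoutTransform from e.DeltaManipulation.Scale.
    }
    else
    {
        // One finger: translate the origin from viewer space into canvas
        // space before using it to draw the new shape.
        Point onCanvas = MyViewer.TranslatePoint(e.ManipulationOrigin, MyCanvas);
        // ... extend the shape being drawn at onCanvas ...
    }
    e.Handled = true;
}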
I'm trying to implement a utility for showing throughput over time in a system, and am using OxyPlot to visualise the data.
Currently, zoom and pan are working as expected, but I would like some visual indication to the user which clearly shows whether the graph can be zoomed or panned.
After ditching the idea of using a scroll bar (I was neither able to accurately get the position of the visible section of the graph, nor to correctly position the thumb of the scroll bar relative to the chart), I have settled on using icons to show whether any data on the chart is hidden beyond the left or rightmost side.
I would like these icons to work as buttons which allow the user to page left and right on the graph; however, as with all things OxyPlot-related, the implementation is far more complex than it first seems.
I'm using the WPF implementation, which uses a ViewModel representing the overall data set, with each series item represented by its own model.
This effectively renders almost every tutorial useless, as the WPF implementation is significantly different from the basic OxyPlot package.
Currently, the code-behind in the view handles the click on the page left/right buttons. I cannot put this in my ViewModel as it must interact directly with the PlotControl object.
private void btnPageRight_Click(object sender, RoutedEventArgs e)
{
    CategoryAxis axis = (CategoryAxis)PlotControl.ActualModel.Axes[0];
    double xAxisMin = axis.ActualMinimum;
    double xAxisMax = axis.ActualMaximum;
    double visibleSpan = xAxisMax - xAxisMin;
    double newMinOffset = xAxisMax + visibleSpan;

    PlotControl.Axes[0].Minimum = newMinOffset;
    PlotControl.Axes[0].Maximum = newMinOffset + visibleSpan;
    PlotControl.ActualModel.InvalidatePlot(true);
}
As it stands, the above code throws no errors, but it does not work either.
If anybody can advise a possible way to make OxyPlot scroll to a given position using just code behind, I would be grateful.
As a last resort, I have pondered trying to simulate a mouse drag event to make this finicky beast behave.
I find the need to work around the problem in that way quite offensive, but desperation leads to odd solutions...
In case anybody else runs into this issue, the following snippet will scroll the graph in pages based on the number of columns visible on the graph at the time.
The snippet takes the number of visible columns as the viewport, and will move the visible area by the viewport size.
Although this applies to the WPF implementation, the only way I could find to make this work was to run this method from the code behind in the View containing the OxyPlot chart.
This should work correctly regardless of the zoom amount at the time.
The CategoryAxis reference must be obtained from the ActualModel, as the WPF Axis does not provide the ActualMinimum and ActualMaximum needed to calculate the viewable area.
The visibleSpan in this case represents the number of visible columns, with panStep denoting the amount to pan by in pixels.
private void ScrollInPages()
{
    // The axis must come from the ActualModel (see note above).
    CategoryAxis axis = (CategoryAxis)PlotControl.ActualModel.Axes[0];
    double visibleSpan = axis.ActualMaximum - axis.ActualMinimum;

    // Scroll right one page (swap in the commented line to scroll left instead):
    double panStep = axis.Transform(0 - (axis.Offset + visibleSpan));
    //double panStep = axis.Transform(axis.Offset + visibleSpan);

    axis.Pan(panStep);
    PlotControl.InvalidateFlag++;
}
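If transforming offsets into screen pixels feels fragile, an alternative sketch (assuming OxyPlot's Axis.Zoom(newMinimum, newMaximum) overload, which sets the visible range directly in data coordinates) could look like this:

private void PageRight()
{
    // The axis must still come from the ActualModel (see note above).
    CategoryAxis axis = (CategoryAxis)PlotControl.ActualModel.Axes[0];
    double visibleSpan = axis.ActualMaximum - axis.ActualMinimum;

    // Shift the visible window one page to the right in data coordinates.
    axis.Zoom(axis.ActualMinimum + visibleSpan, axis.ActualMaximum + visibleSpan);

    // Redraw without re-collecting the data.
    PlotControl.ActualModel.InvalidatePlot(false);
}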
I am working with the Windows Universal Sample for OCR located here:
https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/OCR/cs
Specifically the OcrCapturedImage.xaml.cs
It seems that the camera often becomes unfocused and blurry, and is nowhere near as good in quality as the native camera app. How can I set up autofocusing and/or tap to fix exposure?
What I have tried so far is looking at the other camera samples, which helped with setting the resolution, but I cannot find anything about focus/exposure.
Update:
I think I need something like
await mediaCapture.VideoDeviceController.FocusControl.FocusAsync();
and
await mediaCapture.VideoDeviceController.ExposureControl.SetAutoAsync(true);
But this isn't working (it does nothing; still blurry, etc.), and it could be built upon if someone knows how to tap a certain area and apply focus/exposure accordingly.
Native camera: (screenshot)
App camera: (screenshot)
Update based on answer:
I must have been putting my focus methods in the wrong spot, because my original update code works. Sergi's also works. I want to use the Tapped event in combination with it, something like this:
// Where e is the TappedRoutedEventArgs from a Tapped event on my preview screen.
Point tapped = e.GetPosition(null);

await mediaCapture.VideoDeviceController.RegionsOfInterestControl.ClearRegionsAsync();
await mediaCapture.VideoDeviceController.RegionsOfInterestControl.SetRegionsAsync(
    new[] { new RegionOfInterest() { Bounds = new Rect(tapped.X, tapped.Y, 0.02, 0.02) } }); // Throws "parameter incorrect"
But it throws "parameter incorrect". Also, how would I overlay a Rectangle on the preview screen, so the user knows how big the region of interest is?
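For what it's worth, a sketch of what the tap handler might need, under two assumptions: that RegionOfInterest.Bounds expects normalized (0-1) coordinates rather than pixels (which would explain the "parameter incorrect" for raw tap positions), and that PreviewControl is a hypothetical name for the CaptureElement showing the preview.

private async void PreviewControl_Tapped(object sender, TappedRoutedEventArgs e)
{
    // Tap position in pixels, relative to the preview element.
    Point tapped = e.GetPosition(PreviewControl);

    // Normalize to 0-1 and clamp so the 0.02-wide region stays inside the frame.
    double x = Math.Min(Math.Max(tapped.X / PreviewControl.ActualWidth, 0), 0.98);
    double y = Math.Min(Math.Max(tapped.Y / PreviewControl.ActualHeight, 0), 0.98);

    var roiControl = mediaCapture.VideoDeviceController.RegionsOfInterestControl;
    await roiControl.ClearRegionsAsync();
    await roiControl.SetRegionsAsync(new[]
    {
        new RegionOfInterest() { Bounds = new Rect(x, y, 0.02, 0.02) }
    });
}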
This is a great link https://github.com/Microsoft/Windows-universal-samples/blob/master/Samples/CameraManualControls/cs/MainPage.Focus.xaml.cs
Configure the auto focus using the Configure method of the FocusControl class:
mediaCapture.VideoDeviceController.FocusControl.Configure(
new FocusSettings { Mode = FocusMode.Auto });
await mediaCapture.VideoDeviceController.FocusControl.FocusAsync();
In order to focus on a certain area, the RegionsOfInterestControl property can be used. It has a SetRegionsAsync method that takes a collection of RegionOfInterest instances. RegionOfInterest has a Bounds property which defines the region of focus; note that the example below passes normalized values between 0 and 1 rather than pixels, which is likely why raw pixel coordinates are rejected as "parameter incorrect". This example shows how to set the focus in the center:
// Clear previous regions of interest.
await mediaCapture.VideoDeviceController.RegionsOfInterestControl.ClearRegionsAsync();

// Focus in the center of the screen.
await mediaCapture.VideoDeviceController.RegionsOfInterestControl.SetRegionsAsync(
    new[]
    {
        new RegionOfInterest() { Bounds = new Rect(0.49, 0.49, 0.02, 0.02) }
    });
I've been working on a proper slider for my C# WPF project.
I wanted to create a slider with a background that indicates different parts of the process by giving each section of the slider a different color. Furthermore, I wanted to add small indicators (like the default ticks, but with a custom shape and irregular positions) to the background.
I achieved this by creating a DrawingBrush and adding correspondingly colored rectangles. This seemed to work fine, but a small distortion was still present, so I investigated further and realized the following:
With slider.ActualWidth I get the width of the whole widget. So in order to create a background covering only the actual "slider" part, I have to be aware of the distance from the edge of the widget to the actual slider. (See image.)
I measured the distance in a very small window, in fullscreen, and stretched across two screens. It seems this distance is always 5 pixels. I tried Google and looked through the information WPF provides on its pages, but either I read past it, or there is no information on this.
Can I be sure this distance is always 5 pixels? Is there any place where such information is kept? Is there maybe another way to determine the size of the slider itself?
Assuming you haven't tinkered with the Slider template, you can just walk down the visual tree and check the ActualWidth of the track:
// Default Slider template: root Border -> Grid -> track Border (third child).
Border b = VisualTreeHelper.GetChild(slider, 0) as Border;
Grid g = VisualTreeHelper.GetChild(b, 0) as Grid;
Border track = VisualTreeHelper.GetChild(g, 2) as Border;
Console.WriteLine("Track ActualWidth: " + track.ActualWidth);
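If you'd rather not rely on exact child indices, a variant that asks the template for its named track part should also work; the default Slider template names the track "PART_Track". Call this only after the template has been applied, e.g. in the Loaded event:

// Track lives in System.Windows.Controls.Primitives.
Track track = slider.Template.FindName("PART_Track", slider) as Track;
if (track != null)
    Console.WriteLine("Track ActualWidth: " + track.ActualWidth);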
How would I draw something on a Canvas in C# for Windows Phone?
Okay, let me be a little more clear.
Say the user taps his finger down at 386,43 on the canvas (the canvas is 768 by 480).
I would like my application to be able to respond by placing a red dot at 386,43 on the canvas.
I have no prior experience with Canvas whatsoever.
If this is too complex to be answered in one question (which it probably is), please give me links to other websites with Canvas and Drawing articles.
There are various ways of doing this. Depending on the nature of the red dot, you could make it a UserControl. For a basic circle, you can simply handle your canvas' ManipulationStarted event.
private void myCanvas_ManipulationStarted(object sender, ManipulationStartedEventArgs e)
{
    // Create a small red circle...
    Ellipse el = new Ellipse();
    el.Width = 10;
    el.Height = 10;
    el.Fill = new SolidColorBrush(Colors.Red);

    // ...center it on the touch point and add it to the canvas.
    Canvas.SetLeft(el, e.ManipulationOrigin.X - el.Width / 2);
    Canvas.SetTop(el, e.ManipulationOrigin.Y - el.Height / 2);
    myCanvas.Children.Add(el);
}
I think you need to approach the problem differently (and I'm deliberately not including code, because of that).
Forms and controls in a Windows application (including Phone) can be refreshed for several reasons, at any time. If you draw on a canvas in response to a touch action, you have an updated canvas only until the next refresh. If a refresh occurs and the canvas repaints itself, you end up with a blank canvas.
I have no idea what your end goal is, but you likely want to keep track of what the user has done, store that state somewhere, and show it on the canvas whenever it repaints. This could be done by storing all the actions and "replaying" them on the canvas, or by simply storing the view of the canvas as a bitmap and reloading the canvas with that bitmap when refreshed. In the latter case, though, I think a canvas isn't the right solution.
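That said, a minimal sketch of the "store and replay" idea, applied to the red-dot scenario from the question (the dots list and RedrawDots are hypothetical names), might look like this:

private readonly List<Point> dots = new List<Point>();

private void myCanvas_ManipulationStarted(object sender, ManipulationStartedEventArgs e)
{
    dots.Add(e.ManipulationOrigin); // remember the action...
    RedrawDots();                   // ...and render the current state
}

// Re-run this whenever the canvas needs to be restored.
private void RedrawDots()
{
    myCanvas.Children.Clear();
    foreach (Point p in dots)
    {
        Ellipse el = new Ellipse { Width = 10, Height = 10, Fill = new SolidColorBrush(Colors.Red) };
        Canvas.SetLeft(el, p.X - el.Width / 2);
        Canvas.SetTop(el, p.Y - el.Height / 2);
        myCanvas.Children.Add(el);
    }
}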