I am facing a major problem with custom touch event handling.
The goal is to route touch events to different controls, depending on how many fingers are used.
For example:
We have a ScrollViewer with many WebViews arranged in it.
The zoom factor is set so that one WebView fills the screen.
Currently I am using the PointerPressed / PointerReleased handlers like this to configure the controls, so the event is caught by the right one:
int pointerCount = 0;
private void ScrollView_PointerPressed(object sender, PointerRoutedEventArgs e)
{
try
{
pointerCount++;
if (pointerCount > 3)
pointerCount = 0;
switch (pointerCount)
{
case 1:
// I don't do anything, so it goes to the webview anyway
break;
case 2:
EnableScrolling();
break;
case 3:
ZoomInOut();
break;
default:
return;
}
}
catch (Exception ex)
{
System.Diagnostics.Debug.WriteLine(ex.Message);
}
}
private void EnableScrolling()
{
ScrollViewer.ZoomMode = ZoomMode.Enabled;
ScrollViewer.VerticalScrollMode = ScrollMode.Enabled;
ScrollViewer.HorizontalScrollMode = ScrollMode.Enabled;
}
1-finger events should go to the webview // this works
2-finger events should go to the ScrollView // this is not working, because the webview grabs the touch event and will not release it again
3-finger events should zoom out // this works
PointerPressed is always called, but PointerReleased is not called when the press happened on the WebView.
This also means pointerCount is never decreased, so three 1-finger taps in a row trigger the 3-finger event and zoom out, which should not happen.
Hopefully you can see the problem and help to resolve it.
If you think this approach is completely wrong, feel free to suggest an alternative that works better.
Well, I couldn't find a proper solution, but I was able to remove the unwanted side effect of the pointer count increasing even when the fingers had been released.
private void ScrollViewer_PointerReleased(object sender, PointerRoutedEventArgs e)
{
pointerCount = 0;
}
So routing to the right control works fine now, but I still can't do something like:
currentWebView.CapturePointer(e.Pointer);
because it won't route the pointer event into the WebView's content: it calls the pointer events of the WebView itself, but the event never reaches its HTML & JS content.
I also tried to use
.... wb.ManipulationStarted += Wb_ManipulationStarted; ....
private void Wb_ManipulationStarted(object sender, ManipulationStartedRoutedEventArgs e)
{
// something here
}
but this also does not reach the WebView's content.
Any ideas for another approach?
I figured out how to handle it, and it is so simple that I could cry...
I just added this line of code to tell the WebView's JavaScript that there was an event:
await webView.InvokeScriptAsync("eval", new string[] { "document.elementFromPoint(" + pointX + ", " + pointY + ").click();" });
and the WebView handles the rest. So there is no custom invoke as I had worried; it's just the standard handler that is always invoked when a touch/click is performed.
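For context, a minimal sketch of how pointX and pointY could be obtained before invoking the script (the helper name is illustrative, and the coordinates may need scaling by the zoom factor because elementFromPoint works in the page's CSS pixel space):
// Illustrative helper, not from my actual code: forwards a tap into the WebView's HTML content.
private async void ForwardTapToWebViewContent(WebView webView, PointerRoutedEventArgs e)
{
    // Pointer position relative to the WebView; cast to int to avoid culture-specific
    // decimal separators ending up in the script string.
    var position = e.GetCurrentPoint(webView).Position;
    string script = "document.elementFromPoint(" + (int)position.X + ", " + (int)position.Y + ").click();";
    await webView.InvokeScriptAsync("eval", new string[] { script });
}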
Related
I'm working on a Kinect project. I'm recording data from the received frames. When I have to process the frame data, the program takes a lot of time. In the meantime, I want to disable the MultiSourceFrameArrived event handler.
The thing is, I have been reading different posts but I can't find an answer that fits my problem. The -= expression seems to work only if it is in the same scope as the += expression. When I write them in different scopes, the frames keep arriving and I can't disable the event handler.
The NON WORKING code is this one:
public MainWindow()
{
//Initialize components
if (this.multiSourceFrameReader != null)
{
EnableFrameArrived();
}
}
private void MultiSourceFrameReader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
DisableFrameArrived();
}
private void DisableFrameArrived()
{
this.multiSourceFrameReader.MultiSourceFrameArrived -= this.MultiSourceFrameReader_MultiSourceFrameArrived;
//This doesn't cancel my subscription to the event.
}
private void EnableFrameArrived()
{
this.multiSourceFrameReader.MultiSourceFrameArrived += MultiSourceFrameReader_MultiSourceFrameArrived;
}
The WORKING code is this one:
public MainWindow()
{
//Initialize components
if (this.multiSourceFrameReader != null)
{
this.multiSourceFrameReader.MultiSourceFrameArrived += this.MultiSourceFrameReader_MultiSourceFrameArrived;
//I subscribe to the event
}
this.multiSourceFrameReader.MultiSourceFrameArrived -= this.MultiSourceFrameReader_MultiSourceFrameArrived;
//I cancel my subscription. But I need to cancel my subscription in another scope.
}
I commented out the rest of the code and just left this part running. I keep receiving incoming frames even though the Disable method is reached. Why is cancelling my event subscription only possible in the same scope? I've been reading different blogs but I can't solve this.
Any suggestions?
Thanks!
I'd like to subscribe to an event which tells me that scrolling has started in a ListView and get the direction of scrolling.
Is there any way to do this in the Windows 10 UWP API?
Thanks
You should first obtain the ScrollViewer inside the ListView and then subscribe to its DirectManipulationStarted event.
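For that first step, a minimal sketch (assuming a ListView named myListView and a hypothetical FindScrollViewer helper that walks the visual tree after the control has loaded):
private void HookScrollingStarted()
{
    // The visual tree must already be built, so call this after the ListView has loaded.
    ScrollViewer sv = FindScrollViewer(myListView);
    if (sv != null)
    {
        // Raised when a touch-driven scroll/manipulation begins on the inner ScrollViewer.
        sv.DirectManipulationStarted += (s, args) => Debug.WriteLine("scrolling started");
    }
}
private static ScrollViewer FindScrollViewer(DependencyObject root)
{
    for (int i = 0; i < VisualTreeHelper.GetChildrenCount(root); i++)
    {
        var child = VisualTreeHelper.GetChild(root, i);
        if (child is ScrollViewer found)
            return found;
        var nested = FindScrollViewer(child);
        if (nested != null)
            return nested;
    }
    return null;
}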
However, getting the direction of the scrolling can be tricky. I'd suggest you have a look at the new Windows Composition API, where there's a way to use an ExpressionAnimation to link the ScrollViewer's translation to a value of your choice.
A good start is to have a look at this demo from James Clarke.
private void MainPage_Loaded(object sender, RoutedEventArgs e)
{
CompositionPropertySet scrollerViewerManipulation = ElementCompositionPreview.GetScrollViewerManipulationPropertySet(myScroller);
Compositor compositor = scrollerViewerManipulation.Compositor;
ExpressionAnimation expression = compositor.CreateExpressionAnimation("ScrollManipulation.Translation.Y * ParallaxMultiplier");
expression.SetScalarParameter("ParallaxMultiplier", 0.3f);
expression.SetReferenceParameter("ScrollManipulation", scrollerViewerManipulation);
Visual textVisual = ElementCompositionPreview.GetElementVisual(background);
textVisual.StartAnimation("Offset.Y", expression);
}
Update
Actually, I just thought of an easier way to detect the scrolling direction: register a callback for changes to the VerticalOffset and compare each new value to the previous one.
private double _previousOffset;
sv.RegisterPropertyChangedCallback(ScrollViewer.VerticalOffsetProperty, (s, dp) =>
{
if (Math.Abs(sv.VerticalOffset - _previousOffset) < 3)
{
// ignore when offset difference is too small
}
else if (sv.VerticalOffset > _previousOffset)
{
Debug.WriteLine($"up {sv.VerticalOffset - _previousOffset}");
}
else {
Debug.WriteLine($"down {sv.VerticalOffset - _previousOffset}");
}
_previousOffset = sv.VerticalOffset;
});
I had been using mouse events. Using TouchFrameReported, I wanted single touch only, but it supports multitouch. How can I disable multitouch in TouchFrameReported, or is there another way to implement this so that multitouch is not supported?
void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
foreach (TouchPoint touchPoint in e.GetTouchPoints(this.mainGrid))
{
if (touchPoint.Action == TouchAction.Down)
{
currentPoint.X = touchPoint.Position.X;
currentPoint.Y = touchPoint.Position.Y;
glowDot();
}
else if (touchPoint.Action == TouchAction.Up)
{
circPathGlow.Visibility = Visibility.Collapsed;
}
else if (touchPoint.Action == TouchAction.Move)
{
}
}
}
You can find more information on:
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/123e9381-fc0b-441e-a738-dcd35f526a6e/how-to-disable-multitouch
I wouldn't try to fiddle with the touch messages here. If the goal is to limit the dragging to one control at a time then limit it to the controls. Once one is moving, don't move the others.
At the pointer message level you can track the PointerId in PointerPressed and ignore other PointerIds until you get a PointerReleased or PointerCaptureLost.
Question: Do you want to disable certain multi-gestures or all of them?
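A rough sketch of that pointer-id tracking idea with UWP-style pointer handlers (the mainGrid element and handler names are illustrative, not taken from the question's code):
private uint? activePointerId = null;
private void MainGrid_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    // Only accept the first pointer; additional fingers are ignored.
    if (activePointerId == null)
    {
        activePointerId = e.Pointer.PointerId;
        // handle the touch here
    }
}
private void MainGrid_PointerReleased(object sender, PointerRoutedEventArgs e)
{
    if (e.Pointer.PointerId == activePointerId)
        activePointerId = null; // allow the next touch to be tracked
}
private void MainGrid_PointerCaptureLost(object sender, PointerRoutedEventArgs e)
{
    if (e.Pointer.PointerId == activePointerId)
        activePointerId = null;
}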
After reading http://www.wintellect.com/blogs/jprosise/building-touch-interfaces-for-windows-phones-part-2
I came to know that I was using e.GetTouchPoints instead of e.GetPrimaryTouchPoint.
Now I use e.GetPrimaryTouchPoint, which captures only the first touch point:
TouchPoint touchPoint = e.GetPrimaryTouchPoint(this.mainGrid);
The rest of the code stays the same; this solved my problem.
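For completeness, a sketch of the full handler with that change applied (reusing the fields from the question; the null check is a defensive assumption):
void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    // Only the primary (first) touch point is considered, so extra fingers are ignored.
    TouchPoint touchPoint = e.GetPrimaryTouchPoint(this.mainGrid);
    if (touchPoint == null)
        return;
    if (touchPoint.Action == TouchAction.Down)
    {
        currentPoint.X = touchPoint.Position.X;
        currentPoint.Y = touchPoint.Position.Y;
        glowDot();
    }
    else if (touchPoint.Action == TouchAction.Up)
    {
        circPathGlow.Visibility = Visibility.Collapsed;
    }
}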
Background:
In my winforms form, I have a Checked ListView and a "master" checkbox called checkBoxAll.
The behaviour of the master is as follows:
If the master is checked or unchecked, all ListViewItems must change accordingly.
If the user unchecks a ListViewItem, the master must change accordingly.
If the user checks a ListViewItem and all other ListViewItems are checked as well, the master must change accordingly.
I have written the following code to mimic this behaviour:
private bool byProgram = false; //Flag to determine the caller of the code. True for program, false for user.
private void checkBoxAll_CheckedChanged(object sender, EventArgs e)
{
//Check if the user raised this event.
if (!byProgram)
{
//Event was raised by user!
//If checkBoxAll is checked, all listviewitems must be checked too and vice versa.
//Check if there are any items to (un)check.
if (myListView.Items.Count > 0)
{
byProgram = true; //Raise flag.
//(Un)check every item.
foreach (ListViewItem lvi in myListView.Items)
{
lvi.Checked = checkBoxAll.Checked;
}
byProgram = false; //Lower flag.
}
}
}
private void myListView_ItemChecked(object sender, ItemCheckedEventArgs e)
{
//Get the appropriate ListView that raised this event
var listView = sender as ListView;
//Check if the user raised this event.
if (!byProgram)
{
//Event was raised by user!
//If all items are checked, set checkBoxAll checked, else: uncheck him!
bool allChecked = true; //This boolean will be used to set the value of checkBoxAll
//This event was raised by an ListViewItem so we don't have to check if any exist.
//Check all items until one is not checked.
foreach (ListViewItem lvi in listView.Items)
{
allChecked = lvi.Checked;
if (!allChecked) break;
}
byProgram = true; //Raise flag.
//Set the checkBoxAll according to the value determined for allChecked.
checkBoxAll.Checked = allChecked;
byProgram = false; //Lower flag.
}
}
In this example, I use a flag (byProgram) to track whether an event was caused by the user or not, thereby preventing an infinite loop (one event can fire another, which can fire the first one again, and so on). IMHO, this is a hacky solution.
I searched around but I couldn't find an MSDN-documented way to determine whether a user-control event was fired directly by the user. Which strikes me as odd (again, IMHO).
I know that FormClosingEventArgs has a field we can use to determine whether the user is closing the form or not. But as far as I know, that is the only EventArgs that provides this kind of functionality...
So in summary:
Is there a way (other than my example) to determine if an event was fired directly by the user?
Please note: I don't mean the sender of an event! Whether I write someCheckBox.Checked = true; in code or check someCheckBox by hand, the sender of the event will always be someCheckBox. I want to find out whether it is possible to determine if the change came from the user (a click) or from the program (.Checked = true).
Aaand also: 30% of the time it took to write this question went into formulating the question and the title correctly. Still not sure it is 100% clear, so please edit if you think you can do better :)
No, there's no practical way to determine whether the change came from the GUI or was done by the program (in fact, you could analyze the call stack, but that's not recommended because it's very slow and error-prone).
BTW, there's one other thing you could do instead of setting byProgram: remove the event handler before changing your controls and add it back afterwards:
checkBoxAll.CheckedChanged -= checkBoxAll_CheckedChanged;
// do something
checkBoxAll.CheckedChanged += checkBoxAll_CheckedChanged;
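For example (a sketch using the handlers from the question), the master handler could detach the item handler while it updates the list:
private void checkBoxAll_CheckedChanged(object sender, EventArgs e)
{
    // Detach so the programmatic Checked changes below don't re-enter the item handler.
    myListView.ItemChecked -= myListView_ItemChecked;
    foreach (ListViewItem lvi in myListView.Items)
    {
        lvi.Checked = checkBoxAll.Checked;
    }
    myListView.ItemChecked += myListView_ItemChecked;
}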
Instead of using the changed event, you could use the click event to cascade the change through to the relevant controls. This fires in response to a user click, not when the value is changed programmatically.
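A sketch of that idea for the master checkbox from the question; Click only fires for user interaction, and Checked already holds the new value by the time it runs:
private void checkBoxAll_Click(object sender, EventArgs e)
{
    // Cascade the user's choice to every item; programmatic Checked changes never raise Click.
    foreach (ListViewItem lvi in myListView.Items)
    {
        lvi.Checked = checkBoxAll.Checked;
    }
}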
This is something I come across quite a lot, and what I tend to do is not split it between user interaction vs. program interaction; I use more generic code, i.e. the UI is being updated and doesn't require any events to be handled. I usually package this up in BeginUpdate/EndUpdate methods, e.g.:
private int updates = 0;
public bool Updating { get { return updates > 0; } }
public void BeginUpdate()
{
updates++;
}
public void EndUpdate()
{
updates--;
}
public void IndividualCheckBoxChanged(...)
{
if (!Updating)
{
// run code
}
}
public void CheckAllChanged(...)
{
BeginUpdate();
try
{
// run code
}
finally
{
EndUpdate();
}
}
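Applied to the question's controls, that could look roughly like this (a sketch; the All call assumes using System.Linq):
private void checkBoxAll_CheckedChanged(object sender, EventArgs e)
{
    if (Updating) return; // change came from code, not from the user
    BeginUpdate();
    try
    {
        foreach (ListViewItem lvi in myListView.Items)
            lvi.Checked = checkBoxAll.Checked;
    }
    finally
    {
        EndUpdate();
    }
}
private void myListView_ItemChecked(object sender, ItemCheckedEventArgs e)
{
    if (Updating) return;
    BeginUpdate();
    try
    {
        // The master is checked only when every item is checked.
        checkBoxAll.Checked = myListView.Items.Cast<ListViewItem>().All(lvi => lvi.Checked);
    }
    finally
    {
        EndUpdate();
    }
}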
We are evaluating touchscreen keyboards, and for one of them we need to track 10 fingers at the same time. The problem is that the touchscreen driver is very wonky (and there is no fixed version). It fires the FrameReported event 2500+ times per second with that many fingers. There is just no way to handle all of them, even if we discard 90% up front; it's simply impossible to keep up and do anything meaningful with the data.
Instead of System.Windows.Input.Touch.FrameReported, I also tried the (Preview) TouchMove events of the window; same problem there.
So now I want to poll in a separate thread instead of using events, but I cannot find information on how to get all the current touch points.
The only thing I found is a WinForms hack, but that isn't an option, since then I will be unable to render any WPF controls in my window.
Any solutions?
Edit 1:
This is the code that handles all the move events:
private void UserControlTouchMove(object sender, TouchEventArgs e)
{
//Update Position of the corresponding point
var touch = e.GetTouchPoint(this);
var id = touch.TouchDevice.Id;
e.Handled = true;
var position = touch.Position;
//update finger on display, quick and dirty
if (m_ShowFingers)
{
foreach (var finger in m_Fingers)
{
if (id == (int)finger.DataContext)
{
finger.RenderTransform = new TranslateTransform(position.X - HalfFingerSize, position.Y - HalfFingerSize);
break;
}
}
}
}