I'm trying to understand the proper way of handling mouse input when there are multiple clickable objects on the screen. I have a custom Mouse class and many "game objects" on the screen that can be clicked (like the buttons of a GUI). In many simple tutorials I see people check for mouse-pressed or mouse-released inside the Update method of each game object in the scene, and the clicked object then invokes an event announcing that it has been clicked. But with, say, 20 or 30 or more of these objects, each with a different function, the "top layer" (the manager of these objects) has to subscribe to the OnClick events of all the buttons and call the respective function. Is this the correct way? (I'm not asking for an opinion; I know that everyone can program their own way.) Is there a less cumbersome way of programming this?
The "solution" that I was thinking is putting the click event ivocation in the Mouse class. So the mouse will notify the game/state/ui-manager that a click occured, passing the mouse coordinates and buttin clicked in the eventArgs. Then (and not at each game loop as before) the "manager" would find the game-object that has been clicked over. This might work for click events but is it possible to implement the mouse hover functionality in this way too?
Thanks for your 2 cents.
I prefer to implement a static InputManager class to provide common input handling across platforms and input devices.
Here are snippets of that class relevant to your question:
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

public static class InputManager
{
    public static bool LeftClicked = false;

    private static MouseState ms = new MouseState(), oms;

    public static void Update()
    {
        oms = ms;
        ms = Mouse.GetState();

        // True on left release, like Windows buttons.
        LeftClicked = ms.LeftButton != ButtonState.Pressed && oms.LeftButton == ButtonState.Pressed;
    }

    public static bool Hover(Rectangle r)
    {
        // MouseState.Position is a Point in MonoGame.
        return r.Contains(ms.Position);
    }
}
The first line of Update in Game1.cs:
InputManager.Update();
In any of your game objects' Update method with a collision or draw rectangle named rect:

if (InputManager.Hover(rect))
{
    // Do hover code here.
    if (InputManager.LeftClicked)
    {
        // Do click action here.
    }
}
else
{
    // Undo hover code here.
}
The only time I used a native input event callback with MonoGame was in an Android app, to register and process swipes that could occur faster than one frame at 60 fps (16.66 ms). (There were no threading issues involved in that case.)
You know how, after editing a value on (say) an image asset and then clicking anywhere outside without first clicking Apply, a window appears asking whether you want to save or discard the changes?
So, I wanted to do pretty much that from an OnGUI() or OnSceneGUI(), and right after starting to write the MyClassEditor : Editor I realized that not only did I not know how to accomplish such a thing, but apparently I didn't even know where to start searching on how to detect the mouse "entering" or "leaving" anything in the UI... if that is even possible.
This is what I found first when googling "detect mouse leaving inspector unity", but as far as I understood, it is about detecting the edges of the screen and game window boundaries. So I went to the next result, which looked promising at the beginning since it seems to describe my issue, but it is misleading: that solution is about detecting the mouse inside the scene view without losing focus of the UI, and I want to detect the focus being lost. So I headed back to Google only to stumble on several more similar cases (like the Unity reference for MonoBehaviour.OnMouseEnter/Exit).
Which probably shows how lost I am. I don't mean to ask for a complete solution, but maybe a little push in the right direction will do? I appreciate every little bit of help.
Edit:
So I tried this inside a [CustomEditor(typeof(MyClass))] public class MyClassEditor : Editor:
public override void OnInspectorGUI()
{
    Event e = Event.current;
    switch (e.type)
    {
        case EventType.MouseDown:
            Debug.Log("mouse down");
            break;
        case EventType.MouseEnterWindow:
            Debug.Log("mouse entered a window");
            break;
        case EventType.MouseLeaveWindow:
            Debug.Log("mouse left a window");
            break;
        default:
            break;
    }

    base.OnInspectorGUI();

    var click = GUILayout.Button("Quick Fill");
    if (click)
    {
        MyClassEditorWindow.Open((MyClass)target);
    }
}
and even though the button works and the mouse-down message fires (only when clicking on top of what I think would be UIElements, though, not outside them on the empty inspector area), the other two don't seem to fire... I'm pretty sure I'm doing more than one thing wrong; I have no clue which.
There is a property in EditorWindow called wantsMouseEnterLeaveWindow.
If you set it to true, you will receive EventType.MouseLeaveWindow/EventType.MouseEnterWindow events.
Normally, I enable it in the OnEnable function, like this:
private void OnEnable() { wantsMouseEnterLeaveWindow = true; }
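Put together, a minimal sketch of an EditorWindow that receives both events (the window and menu names here are invented for the example):

using UnityEditor;
using UnityEngine;

public class HoverProbeWindow : EditorWindow
{
    [MenuItem("Window/Hover Probe")]
    private static void ShowWindow()
    {
        GetWindow<HoverProbeWindow>();
    }

    private void OnEnable()
    {
        // Without this flag the window never receives the two events below.
        wantsMouseEnterLeaveWindow = true;
    }

    private void OnGUI()
    {
        if (Event.current.type == EventType.MouseEnterWindow)
            Debug.Log("mouse entered the window");
        else if (Event.current.type == EventType.MouseLeaveWindow)
            Debug.Log("mouse left the window");
    }
}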
bool clickingGuiElement = false;

if (UnityEngine.EventSystems.EventSystem.current.IsPointerOverGameObject())
{
    if (UnityEngine.EventSystems.EventSystem.current.currentSelectedGameObject != null)
    {
        if (UnityEngine.EventSystems.EventSystem.current.currentSelectedGameObject
                .GetComponent<CanvasRenderer>() != null)
        {
            // The user is clicking on a UI element (button).
            clickingGuiElement = true;
        }
        else
            clickingGuiElement = false;
    }
    else
        clickingGuiElement = false;
}
else
    clickingGuiElement = false;
The code above utilizes Unity's EventSystem. This small control structure handles objects currently being clicked on: it sets a boolean that tells the program whether or not the user is clicking on a UI element. In my case it is primarily used on buttons. Using this boolean, one can block UI clicks from propagating through the UI and interacting with any gameObjects that may be behind the UI.
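For example, a hypothetical consumer of that boolean, in the same Update that sets it:

// Hypothetical usage: ignore world interaction while the pointer is on UI.
if (Input.GetMouseButtonDown(0) && !clickingGuiElement)
{
    // Safe to raycast into the scene and interact with objects behind the UI.
}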
Running through the EventSystem code above on the HoloLens with breakpoints and/or printouts, the result is always the same: false. The program never makes it into the first if statement on the HoloLens.
I need to be able to detect whether the user's airtap/clicker input, when they click, is on a UI element. If I know when a user clicks on a UI element, I can block it from propagating through the UI (as is done above; again, it works in the Editor but not on the HoloLens).
So, what substitutes the same EventSystem functionality achieved here (in the Editor) on the HoloLens?
P.S. I am using IPointerClickHandler from UnityEngine.EventSystems in other parts of my code and it works fine, so I am not sure why this implementation utilizing the EventSystem isn't working. My best guess (from implementing features in the Editor vs. the HoloLens) is that the 'pointer' used in IsPointerOverGameObject doesn't treat airtap input the way it treats mouse click input.
if (GazeManager.Instance.IsGazingAtObject && GazeManager.Instance.HitObject != null)
{
    Debug.Log("guiElementName: " + GazeManager.Instance.HitObject.transform.name);
}
I did not find an EventSystem code equivalent in the end; however, there is an easy solution in the 'GazeManager' script from the HoloToolkit provided by Microsoft.
Utilizing a simple check like the one above, one is able to determine whether the user's gaze is on a GUI element. Like so:
// Object assignment
GameObject cursor;

// "If on GUI menu" control structure
if (GazeManager.Instance.IsGazingAtObject && GazeManager.Instance.HitObject != null &&
    (GazeManager.Instance.HitObject.transform.name == "icon" ||
     GazeManager.Instance.HitObject.transform.name == "guiManager"))
{
    // Scale
    cursor.transform.localScale = halfScale;

    // Move the cursor to the point where the raycast hit
    // to display it on the GUI element.
    cursor.transform.position = GazeManager.Instance.HitInfo.point;

    // Enable the cursor mesh.
    cursor.GetComponent<MeshRenderer>().enabled = true;
}
Self-explanatory, really: the code above scales, moves, and displays a cursor gameObject on a GUI element when the user's gaze is on it.
In my case [I used] the boolean output [from the EventSystem code attempting to] block UI clicks from propagating through the UI and interacting with any gameObjects that may be behind the UI.
Rather than blocking the clicks, one can properly handle a user's gaze input when gazing at UI elements by using the GazeManager.
Your UI element needs to have a collider, and you need to use a raycast to detect whether your UI element is being hit. On the HoloLens, the only way I know of to detect that an element or game object has been hit is using gaze.
Edit: Upon clarification of the problem: you need to make sure that the only layer the raycast is able to hit is the layer your button is on.
The question asks about detecting whether a button is clicked, and the comments bring more understanding of the full problem, but the question is asked from the perspective of the asker's experience with other systems, not from an understanding of how the HoloLens and Unity work together; giving the answer you are looking for is not possible because it's the wrong question.
Everything in Unity is essentially a game object, but each one can sit on different (and multiple) layers in the scene; you can see the layers at the top right in Unity. The way to resolve the problem is to put the button on its own layer, not shared with other game objects, which is not the default behavior in Unity. When you do the raycast, ignore all of the other layers; then you will no longer interact with the background objects behind the button. The link below tells you how to do that. Once you have done that, wire the airtap event to the button like any other game object.
Here is a link on how to work with layers in Unity.
https://answers.unity.com/questions/416919/making-raycast-ignore-multiple-layers.html
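A minimal sketch of that idea (the layer name "UIButtons" is an assumption; use whichever layer your button actually sits on):

// Build a mask that matches only the button layer, so the gaze raycast
// can never hit geometry behind the button.
int buttonMask = LayerMask.GetMask("UIButtons");

RaycastHit hit;
if (Physics.Raycast(Camera.main.transform.position,
                    Camera.main.transform.forward,
                    out hit, 10f, buttonMask))
{
    Debug.Log("Gaze hit: " + hit.transform.name);
}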
I'm writing a complex GUI for my game's main menu. I have already created a ButtonGUI class with its own constructor taking parameters for button text, text color, etc. I draw a background texture behind the text of every button, to make it look pretty. So I implemented a simple mouse input system:
void DetectBtnInput(ButtonGUI btn)
{
    if (mouseRect.Intersects(btn.SourceRect))
    {
        btn.IsHovered = true;
        if (mouseState.LeftButton == ButtonState.Pressed) settingsWindow.isOpen = true;
    }
}
This method is called in Update(GameTime gameTime) for each instance of the ButtonGUI class, where mouseRect = new Rectangle(mouseState.X, mouseState.Y, 1, 1);.
Each instance of the ButtonGUI class has a Rectangle SourceRect property, set from the position passed to the constructor (the button's size is the same every time). The logic behind the code above is simple: if the mouse is hovering over the button instance's SourceRect, btn.IsHovered is set to true and the text color changes. When clicked, my WindowGUI class instance opens with additional settings.
My aim is to make these buttons look nice and behave like Windows-style buttons. So I am looking for a way to check for a mouse-hold-like event, with, for example, a bool that changes the button texture, or whatever I can adapt myself. The problem is that I tried incorporating the solution people suggest with previousMouseState and newMouseState, but I am not certain whether it works in MonoGame as it did in XNA... or was my implementation wrong? Can I have a clearer example of how you handle mouse-hold in your games? Thanks in advance!
If you mean the player can click and hold (as per the question title) and nothing happens until they release, then previous and current state should work fine, with an implementation such as the one below.
Forgive any syntax errors; hopefully you can take this as close enough to pseudocode to make your own.
MouseState prev, curr = Mouse.GetState();

public void Update()
{
    prev = curr;
    curr = Mouse.GetState();

    // Simple check for a mouse click (released after being held down).
    if (curr.LeftButton == ButtonState.Released && prev.LeftButton == ButtonState.Pressed)
    {
        // Do mouse stuff here.
    }
}
I would like to use the Kinect hand cursor as a 'normal' mouse cursor. Specifically, I want to be able to interact with an Awesomium browser object.
The problem is that no Awesomium browser event is raised when the Kinect hand cursor is (for example) over a link, when I click, or on any other typical mouse event.
I modified the Control Basics-WPF example program that you can find in the example directory of the Kinect SDK.
I am using C#, Visual Studio 2012, Kinect SDK 1.7, and Awesomium 1.7.1.
It's been a month since this question was asked, so perhaps you've already found your own solution.
In any case, I found myself in this scenario as well, and here was my solution:
Inside MainWindow.xaml, you'll need the Awesomium control inside a KinectRegion (from the SDK).
You'll have to somehow tell the SDK that you want a control to also handle hand events. You can do this by adding this inside MainWindow.xaml.cs, in the Window_Loaded handler:
KinectRegion.AddHandPointerMoveHandler(webControl1, OnHandleHandMove);
KinectRegion.AddHandPointerLeaveHandler(webControl1, OnHandleHandLeave);
Elsewhere in MainWindow.xaml.cs, you can define the hand handler events. Incidentally, I did it like this:
private void OnHandleHandLeave(object source, HandPointerEventArgs args)
{
    // This just moves the cursor to the top-left corner of the screen.
    // You can handle it differently, but this is just one way.
    System.Drawing.Point mousePt = new System.Drawing.Point(0, 0);
    System.Windows.Forms.Cursor.Position = mousePt;
}

private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    // The meat of the hand handling.
    HandPointer ptr = args.HandPointer;
    Point newPoint = kinectRegion.PointToScreen(ptr.GetPosition(kinectRegion));
    clickIfHandIsStable(newPoint);       // basically handles a click; not showing code here
    changeMouseCursorPosition(newPoint); // this is where you make the hand and mouse positions the same!
}

private void changeMouseCursorPosition(Point newPoint)
{
    cursorPoint = newPoint;
    System.Drawing.Point mousePt = new System.Drawing.Point((int)cursorPoint.X, (int)cursorPoint.Y);
    System.Windows.Forms.Cursor.Position = mousePt;
}
For me, the tricky parts were:
1. Diving into the SDK and figuring out which handlers to add. Documentation wasn't terribly helpful on this.
2. Mapping the mouse cursor to the Kinect hand. As you can see, it involves dealing with System.Drawing.Point (separate from another library's Point) and System.Windows.Forms.Cursor (separate from another library's Cursor).
I have a WPF app that draws a compass. There is a large ring with tick marks and labels. I have a checkbox that toggles the compass graphics on and off. When I first start up the app, the compass turns on and off instantly.
Meanwhile, I have a combo box that grabs some data from a local database and uses that to render some overlay graphics. After using this combo box, the compass graphics no longer toggle quickly. In fact, the UI completely freezes for about 4 seconds whenever I click the checkbox.
I attempted to profile my app using the Windows Performance Profiling Tool for WPF. When I activated the checkbox, not only did my app freeze, but so did the profiler. The graphs "caught up" afterward, but this tells me something must be seriously wrong.
I've managed to nail down that the problem graphics are the tick marks (not the numeric labels). If I eliminate them, the freezing problem stops. If I cut them down from 360 to, say, 36, the app still freezes, but for less time. Again, no matter how many tick marks I have, they toggle instantly when the app first starts.
My question is, How do I figure out why the toggle for my compass graphics goes from instant to horribly slow? I've tried extensive profiling and debugging, and I just can't come up with any reason why setting the Visibility on some tick marks should ever cause the app to freeze.
Edit
Okay, I've stripped everything out of my app to just the bare essentials, zipped it up, and uploaded it to Sendspace. Here is the link (it's about 143K):
http://www.sendspace.com/file/n1u3yg
[Note: don't accidentally click the banner ad, the real download link is much smaller and lower on the page.]
Two requests:
Do you experience the problem on your machine? Try opening Compass.exe (in bin\Release) and clicking the check box rapidly. The compass tick marks should turn on and off with no delay. Then, select an item from the combo box and try rapidly clicking the check box again. On my machine, it's very laggy, and after I stop rapid-fire clicking, it takes a few seconds for the graphics to catch up.
If you do experience the lag, do you see anything in the code that could be causing this odd behavior? The combo box is not connected to anything, so why should selecting an item from it affect the future performance of other graphics on the window?
Although ANTS didn't indicate a particular performance 'hotspot', I think your technique is slightly flawed: it seems that every tick has a ViewModel responsible for handling that individual tick, and you are individually binding those ticks to the view. You end up creating 720 view models for these ticks, each of which fires a similar event every time the entire compass is shown or hidden. You also create a new LineGeometry every time that field is accessed.
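As an aside, that last cost can be avoided by caching and freezing the geometry (a sketch; the property name and coordinates are invented):

private LineGeometry _tickGeometry;

public LineGeometry TickGeometry
{
    get
    {
        if (_tickGeometry == null)
        {
            _tickGeometry = new LineGeometry(new Point(0, 0), new Point(0, 10));
            _tickGeometry.Freeze(); // frozen Freezables skip change tracking entirely
        }
        return _tickGeometry;
    }
}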
The recommended approach for WPF in a custom-drawn situation like this is to use a DrawingVisual and embrace the retained-mode aspect of WPF's rendering system. There are several googleable resources that discuss this technique, but the gist is to declare a compass class that inherits from FrameworkElement, plus some smaller classes that inherit from DrawingVisual, and use those to render the compass. With this technique you can still have a ViewModel drive the compass behavior, but you wouldn't have individual ViewModels for each part of the compass. I'd be inclined to decompose the compass into parts such as bezel, arrow, sight, etc., but your problem may require a different approach.
// Marker interface for compass parts; implementers derive from DrawingVisual.
interface ICompassPart { }

class Compass : FrameworkElement
{
    private readonly List<ICompassPart> _children = new List<ICompassPart>();

    // Painted in OnRender; FrameworkElement has no Background of its own.
    public Brush Background { get; set; } = Brushes.Transparent;

    public void AddVisualChild(ICompassPart currentObject)
    {
        _children.Add(currentObject);
        AddVisualChild((Visual)currentObject);
    }

    protected override int VisualChildrenCount { get { return _children.Count; } }

    protected override Visual GetVisualChild(int index)
    {
        if (index < 0 || index >= _children.Count) throw new ArgumentOutOfRangeException();
        return _children[index] as Visual;
    }

    protected override void OnRender(DrawingContext dc)
    {
        // The element automatically renders its visual children;
        // there's really nothing to do here beyond painting the background.
        dc.DrawRectangle(Background, null, new Rect(RenderSize));
    }
}

class Bezel : DrawingVisual, ICompassPart
{
    private bool _visible;

    public bool Visible
    {
        get { return _visible; }
        set
        {
            _visible = value;
            Update();
        }
    }

    private void Update()
    {
        // RenderOpen returns the DrawingContext itself; disposing it
        // closes and commits the visual's new content.
        using (DrawingContext dc = RenderOpen())
        {
            if (!_visible) return; // an empty context leaves the bezel hidden

            // Placeholder drawing; substitute the real bezel geometry.
            dc.DrawLine(new Pen(Brushes.Black, 1.0), new Point(0, 0), new Point(0, 10));
        }
    }
}
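For completeness, a hypothetical usage of those classes, just to show the shape of the API:

// Build the compass once; toggling a part later re-opens only that one
// DrawingVisual's content instead of churning hundreds of elements.
var compass = new Compass();
var bezel = new Bezel();
compass.AddVisualChild(bezel);
bezel.Visible = true;

// Later, e.g. from the checkbox handler:
bezel.Visible = false;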