Unity setting Button's onClick in script - c#

I am using Unity 2020.3 and am making, basically, a question-and-answer game.
Assume that I have these two functions:
public void TrueAnswer()
{
    Debug.Log("True !");
}

public void FalseAnswer()
{
    Debug.Log("False !");
}
In every question, the button holding the true answer changes (for example, in the xth question the answer is on the first button, but in the (x+1)th question it is on the third button). So I have to change the answer buttons' onClick events for every question. Based on my research, there are a few ways to do this, such as:
using UnityEngine.UI;
m_YourFirstButton.onClick.AddListener(TaskOnClick);
m_YourSecondButton.onClick.AddListener(delegate {TaskWithParameters("Hello"); });
m_YourThirdButton.onClick.AddListener(() => ButtonClicked(42));
m_YourThirdButton.onClick.AddListener(TaskOnClick);
(This is from Unity Documentation, version 2019.1)
using UnityEngine.UIElements;
button.GetComponent<Button>().clicked += ButtonClicked;
(This is what I understand from the Unity documentation for version 2020.3, and VS 2019 IntelliSense also recommends using the "clicked" event. 2020.3 also has an onClicked event, but it is obsolete.)
First, I used the first way to set the buttons' onClick events, and it did not work.
Then I read the documentation and tried the second solution, which also did not work, but this time I got an error:
ArgumentException: GetComponent requires that the requested component 'Button' derives from MonoBehaviour or Component or is an interface.
I understand what the error says, but I am very confused. After Unity 2019.1 I can't find the UnityEngine.UI namespace in the Unity documentation, yet if I use it in a script it works without any error. So, going by the documentation, I apparently must use the UnityEngine.UIElements namespace instead; but if I do, how can I make the buttons derive from MonoBehaviour (or Component)?

I don't think you need to change the buttons' click events at all.
Instead, each button can have a "static" click event that reports which button index was clicked:
[0] Button 1 -> ButtonClicked(0)
[1] Button 2 -> ButtonClicked(1)
[2] Button 3 -> ButtonClicked(2)
You can then have the answers in an array
[0] Answer 1
[1] Answer 2
[2] Answer 3
and use the clicked button's index to determine which answer was picked.
When you go to ask a new question, all you have to do is create (or replace the values of) the answers array with the relevant answers (in any order you wish).
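A minimal sketch of that idea, assuming the classic UnityEngine.UI Button and Text components (all class, field, and method names here are made up for illustration):

using UnityEngine;
using UnityEngine.UI;

public class QuizController : MonoBehaviour
{
    public Button[] answerButtons;   // assigned in the Inspector, fixed order
    public Text[] answerLabels;      // one label per button, same order

    string[] currentAnswers;         // answers shown for the current question
    int correctIndex;                // which slot currently holds the true answer

    void Start()
    {
        // Wire each button exactly once; the captured index never changes.
        for (int i = 0; i < answerButtons.Length; i++)
        {
            int index = i; // copy so the lambda doesn't capture the loop variable
            answerButtons[i].onClick.AddListener(() => ButtonClicked(index));
        }
    }

    // Call this whenever a new question is displayed.
    public void ShowQuestion(string[] answers, int indexOfCorrectAnswer)
    {
        currentAnswers = answers;
        correctIndex = indexOfCorrectAnswer;
        for (int i = 0; i < answerButtons.Length; i++)
            answerLabels[i].text = currentAnswers[i];
    }

    void ButtonClicked(int index)
    {
        Debug.Log(index == correctIndex ? "True !" : "False !");
    }
}

Only the data (the answers array and correctIndex) changes per question; the listeners themselves are never touched again.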

Unity Keep Selected Color on after press even if other buttons are pressed

I know this is a very long question, but I tried my best to be as clear as possible with this weird problem. Any help or suggestions would be appreciated. Let me know if I can provide any more resources.
The Setup
I am trying to create a sort-of navigation system where the user can still see what path they are on. I have three sets of buttons:
Year Buttons - these are three static buttons that are always on the screen (year 1, 2 and 3).
Discipline Buttons - these are dynamically instantiated and destroyed based on which year button has been selected.
Module Buttons - these function the same as the discipline buttons, but take into account the year AND discipline button when instantiating and destroying.
How It Should Work
When the user clicks on a button, the button must change color and stay that color until a different button of THE SAME SET (mentioned above) is pressed.
Example
User presses "YEAR 1" button. Button changes from orange to green. Discipline instances are instantiated based on year = 1.
User then presses one of the instantiated DISCIPLINE buttons. This button changes from orange to green. The YEAR button should still stay green. Module instances are instantiated based on year=1 and discipline text.
User presses "YEAR 2" button. module and discipline instances are destroyed, and YEAR 1 button must change back to original color. YEAR 2 button must change to green and discipline instances are instantiated.
My Code
I have three lists: one for buttons, one for disciplines, one for modules. The button list is static, while the other two are destroyed and added to as needed (the list adding/clearing does work).
When I instantiate disciplines or modules, the previous ones are all destroyed (and the list is cleared), and then each new instance is added to the list. Each instance has an onClick listener (added in the script; adding the listener Editor-side does not work) which calls this general method:
public void ChangeAllWhite(List<GameObject> list)
{
    foreach (var button in list)
    {
        button.GetComponent<Image>().color = orangeColor;
    }
}
and then calls one of these three methods to change that instance's color:
public void YearColorChange(GameObject buttonToChange)
{
    ChangeAllWhite(yearButtons);
    buttonToChange.GetComponent<Image>().color = selectedColor;
}

public void DisciplineColorChange(GameObject buttonToChange)
{
    ChangeAllWhite(disciplineButtons);
    buttonToChange.GetComponent<Image>().color = selectedColor;
}

public void ModuleColorChange(GameObject buttonToChange)
{
    ChangeAllWhite(moduleButtons);
    buttonToChange.GetComponent<Image>().color = selectedColor;
}
This is the code to instantiate the disciplines (same as module pretty much) after the previous ones have been destroyed:
foreach (string item in disciplines)
{
    GameObject disciplineInstance = Instantiate(DisciplineItem);
    disciplineInstance.transform.SetParent(DisciplineLayoutGroup.transform, false);
    disciplineInstance.GetComponentInChildren<Text>().text = item;
    disciplineButtons.Add(disciplineInstance);
    disciplineInstance.GetComponent<Button>().onClick.AddListener(() =>
    {
        UnityEngine.Debug.Log("debug discipline");
        AppManager.instance.classroomInfo.SetDiscipline(item);
        StartCoroutine(AppManager.instance.web.GetModules(AppManager.instance.classroomInfo.year, item));
        DisciplineColorChange(disciplineInstance);
        // needs year, discipline
    });
}
What currently happens
I don't get any errors, but the colors don't change. When I call the methods from an OnClick set up Editor-side, it breaks and/or doesn't work. I think what is happening is that onClick methods from previously instantiated (and now destroyed) instances are trying to do something, and then nothing happens at all. Any suggestions are welcome!
Your code seems incredibly over-complicated.
Why not use the various button-state colors that are built into Button?
Alternatively, build a class that has the colors (animations, images, fonts, whatever) you want as a property, add that component to your buttons, and use that. (In an ECS system, "adding a component" like that is a bit like extending a class in normal OO programming; you're just adding more behavior to the Button.)
So you would have your own cool SugarfreeButton, and then you can do stuff like:
sugarfreeButton.Look = YourLooks.Possible;
sugarfreeButton.Look = YourLooks.CannotChoose;
sugarfreeButton.Look = YourLooks.Neon;
etc. You really HAVE to do this, you can't be dicking with setting colors etc in your higher-level code.
It's only a few lines of code to achieve this.
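A minimal sketch of what such a component might look like (the enum values are taken from the snippet above; the colors and field names are placeholders):

using UnityEngine;
using UnityEngine.UI;

public enum YourLooks { Possible, CannotChoose, Neon }

[RequireComponent(typeof(Image))]
public class SugarfreeButton : MonoBehaviour
{
    public Color possibleColor = Color.white;
    public Color cannotChooseColor = Color.grey;
    public Color neonColor = Color.green;

    Image image;
    YourLooks look;

    void Awake() { image = GetComponent<Image>(); }

    // Setting the Look is the only thing higher-level code ever does.
    public YourLooks Look
    {
        get { return look; }
        set
        {
            look = value;
            switch (look)
            {
                case YourLooks.Possible: image.color = possibleColor; break;
                case YourLooks.CannotChoose: image.color = cannotChooseColor; break;
                case YourLooks.Neon: image.color = neonColor; break;
            }
        }
    }
}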
I've attached a completely random example of such "behaviors you might add to a button" at the bottom of this - DippyButton
When you do "panels of buttons" it is almost a certainty that you use a layout group (HorizontalLayoutGroup or the vertical one). Do everything flawlessly in storyboard, and then just drop them inside that in the code with absolutley no positioning etc. concerns in the code.
Unfortunately, u won't be able to do anything in Unity UI until extremely familiar with the two layout groups (and indeed the subtle issues of LayoutElement, the fitters, etc.).
You should almost certainly be using prefabs for the various button types. All of your code inside the "foreach" (where you add listeners and so on) is really not good / extremely not good. :) All of that should be "inside" that style of button. So you will have a prefab for your "discipline buttons" thing, with of course class(es) as part of that prefab which "do everything". Everything in Unity should be agent based - each "thing" should take care of itself and have its own intelligence. Here's a typical such prefab:
So these "lobby heads" (whatever the hell that is!) entirely take care of themselves.
In your example, when one is instantiated it would, on its own, find out what the currently selected "year" is. Does that make sense? They should look in the discipline database (or whatever) and decide on their own what their value should be, perhaps based on their position in the layout group (easy: the sibling index), or whatever is relevant.
You will never be able to engineer and debug your code the way you are doing it, since it is "upside down". In Unity, all game objects/components should "look after themselves".
You should surely be using a toggle group as part of the solution, as user #derHugo says. In general, anything #derHugo says, I do, and you should also :)
The random code example mentioned above ... DippyButton
using UnityEngine;
using UnityEngine.UI;

// simply make a button go away for awhile after being clicked
public class DippyButton : MonoBehaviour
{
    Button b;
    CanvasGroup fader;

    void Start()
    {
        b = GetComponent<Button>();
        if (b == null)
        {
            Destroy(this);
            return;
        }
        b.onClick.AddListener(Dip);

        fader = GetComponent<CanvasGroup>();
        // generally in Unity "if there's something you'll want to fade,
        // naturally you add a CanvasGroup"
    }

    public void Dip()
    {
        b.interactable = false;
        if (fader != null) { fader.alpha = 0.5f; }
        Invoke("_undip", 5f);
    }

    public void _undip()
    {
        b.interactable = true;
        if (fader != null) { fader.alpha = 1f; }
    }
}

Unity C#: change what a button does via a script

What I want to do is change the parameters of a button's OnClick event via script (for example, if the called method (GETFromYTS.DisplayMovies) wants a string, change that string). I have not seen any good answers.
The GETFromYTS.DisplayMovies code:
public void DisplayMovies(string[] movieUrls)
{
    // create text objects with the movie URLs
}
My IDE is JetBrains Rider.
EDIT 1: New problem: how do I change the text of the button?
EDIT 2: The first problem is solved; an answer has been selected.
According to the documentation, you can use button.onClick.AddListener(function);.
The other functions (like removing one or more functions already associated with the button) are listed here.
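As a rough sketch of how that could look for the case above (the field names and the GETFromYTS reference are assumptions based on the question), you can clear the old listener and add a new one that calls DisplayMovies with a different argument:

using UnityEngine;
using UnityEngine.UI;

public class MovieButtonSetup : MonoBehaviour
{
    public Button searchButton;      // the button whose OnClick should change
    public GETFromYTS getFromYTS;    // the script that owns DisplayMovies

    public void SetButtonUrls(string[] movieUrls)
    {
        searchButton.onClick.RemoveAllListeners();                                   // drop the old parameters
        searchButton.onClick.AddListener(() => getFromYTS.DisplayMovies(movieUrls)); // add the new ones
    }
}

Note that RemoveAllListeners only removes listeners added from code; persistent calls wired up in the Inspector remain.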
Aside from daniel's answer, I'd like to add that in your onClick method you could raise a UnityEvent: https://docs.unity3d.com/ScriptReference/Events.UnityEvent.html
This has some Editor support, so in the Inspector you can hook up functions of other GameObjects.

Return selected letter in hololens keyboard using Unity

So, I am super new to Unity and HoloLens, but I wanted to get an idea of how to add this specific feature. I want to be able to print the individual letter that is selected as the user moves through the HoloLens keyboard. For example, if the user moves across the middle row of the keyboard, the application should print a new line for every letter that is selected, and it would look something like below.
Selected: a
Selected: s
Selected: d
....
Selected: j
Selected: k
Selected: l
I have done some research into this, and the closest thing I have found is the GetKey() method, but from what I understand the user needs to actually click the individual letter for it to be registered. From what I read on the Unity forum, this feature looks feasible, but I haven't found any specifics on how to do it. I'd really appreciate any suggestions. Thank you in advance.
We recommend you use the Non-Native Keyboard of MRTK 2.3 to make things easier. You just need to implement the IPointerEnterHandler interface on the KeyboardValueKey class in the KeyboardValueKey.cs script:
// In KeyboardValueKey.cs: add "using UnityEngine.EventSystems;", declare the class
// as implementing IPointerEnterHandler, and then add:
public void OnPointerEnter(PointerEventData eventData)
{
    Debug.Log("Select: " + Value);
}
This method is called when the pointer starts hovering over a given GameObject. Besides, if you are not familiar with the NonNativeKeyboard, the example scene in MixedRealityToolkit.Examples\Experimental\NonnativeKeyboard\Scenes\NonNativeKeyboardExample shows how to use the Non-Native Keyboard.

Debug Event being "used" in Unity 3D

I'm having an issue with the user interface on a project I'm working on. It involves "nodes" and connecting them together. Connecting two nodes is tied to pressing a button on an initial node and then clicking the other node to connect to it. The latter half is implemented by checking for when Event.current.type == EventType.MouseDown and then joining the initial node with the node you last hovered over.
This works fine most of the time; however, I noticed that sometimes when you click on another node it does not join them together instantly, but only once you click off of the node. After adding Debug.Log(Event.current.type) I saw that sometimes the event was coming up as "Used" when I clicked, instead of "MouseDown", and as such the join code would not run until I clicked somewhere else. It seems to only happen for some nodes.
Here are two gifs of the behaviour with the console output:
Problem code:
private bool detectEscape()
{
    Debug.Log(Event.current.type);
    return (Event.current.type == EventType.MouseDown);
}
This function sometimes returns false on mouse clicks because the event sometimes comes up as "Used". It is called from my GUI code.
Do you know of any reason why the current event gets used? I do comparisons like the one above in a number of places in my code. Could that be what is causing the current event to be used? How do I avoid it?
Am I using the Event system correctly?
Do you know of a better way to capture mouse clicks? Using Input.GetMouseButtonDown(0) unfortunately isn't an option, as it only works while the game is running, and this program is meant to be an extension to the Editor.
Otherwise, do you know of a way to put a breakpoint in Unity 3D's source code, so that I can put one in the function Event.Use() and determine what is consuming the mouse event?
Presumably you are calling detectEscape from OnGUI somewhere, right? The current event is only valid during OnGUI. Additionally, OnGUI can be called multiple times per frame, with different current events. Sometimes the event type might be Repaint, sometimes it might be MouseDown, sometimes it might be something else. So if the event type is not MouseDown, you don't want to assume that the mouse is not down; the mouse might still be down but a different event might be occurring.
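As a small sketch of that idea (the window class and its contents are made up; only the Event.current handling is the point), react only when the current event actually is a MouseDown and let the other event types pass through:

using UnityEngine;
using UnityEditor;

// Editor-side sketch: put this in an Editor folder.
public class NodeEditorWindow : EditorWindow
{
    void OnGUI()
    {
        Event e = Event.current;

        switch (e.type)
        {
            case EventType.MouseDown:
                Debug.Log("Mouse down at " + e.mousePosition);
                // ... join the pending node to the node under the mouse here ...
                e.Use(); // mark the event as consumed so other controls skip it
                break;

            case EventType.Layout:
            case EventType.Repaint:
                // Drawing passes: the mouse may well be down right now,
                // but these events simply don't say anything about it.
                break;
        }
    }
}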

DockPanelSuite's DockState and AutoHide

Working with DockState and AutoHide, I am looking for the following things:
Find out if the DockContent is in AutoHide mode
Ability to toggle between 'regular' and AutoHide mode.
Trigger an event when an AutoHide dock has come into view.
Trigger an event when an AutoHide dock has 'left' and is now docked back into its tab.
Answer Wiki:
IsAutoHide - get:
private WeifenLuo.WinFormsUI.Docking.DockState[] AutoHideStates = new WeifenLuo.WinFormsUI.Docking.DockState[] {
    WeifenLuo.WinFormsUI.Docking.DockState.DockBottomAutoHide,
    WeifenLuo.WinFormsUI.Docking.DockState.DockLeftAutoHide,
    WeifenLuo.WinFormsUI.Docking.DockState.DockRightAutoHide,
    WeifenLuo.WinFormsUI.Docking.DockState.DockTopAutoHide };

public bool IsAutoHide { get { return AutoHideStates.Contains(DockContent.DockState); } }
IsAutoHide - set:
No code yet - basically iterate through the modes or use a dictionary of interchangeable modes (e.g. DockBottomAutoHide to DockBottom); a sketch of that idea follows below.
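A rough, untested sketch of the setter using such a dictionary (it assumes using WeifenLuo.WinFormsUI.Docking;, System.Collections.Generic, and System.Linq, and that DockContent.DockState is writable in this context):

private static readonly Dictionary<DockState, DockState> AutoHideToggleMap =
    new Dictionary<DockState, DockState>
{
    { DockState.DockBottom,         DockState.DockBottomAutoHide },
    { DockState.DockBottomAutoHide, DockState.DockBottom },
    { DockState.DockLeft,           DockState.DockLeftAutoHide },
    { DockState.DockLeftAutoHide,   DockState.DockLeft },
    { DockState.DockRight,          DockState.DockRightAutoHide },
    { DockState.DockRightAutoHide,  DockState.DockRight },
    { DockState.DockTop,            DockState.DockTopAutoHide },
    { DockState.DockTopAutoHide,    DockState.DockTop },
};

public bool IsAutoHide
{
    get { return AutoHideStates.Contains(DockContent.DockState); }
    set
    {
        if (value == IsAutoHide) return;    // already in the requested mode
        DockState mapped;
        if (AutoHideToggleMap.TryGetValue(DockContent.DockState, out mapped))
            DockContent.DockState = mapped; // swap e.g. DockBottom <-> DockBottomAutoHide
    }
}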
I have no idea, but this looks interesting and might have the answer.
I have no idea.
1 is a decent way to accomplish this. The library has an internal method, DockHelper.IsDockStateAutoHide(), that does basically the same thing. This should actually be made into a public extension method and included as part of the library.
2 Your solution is good.
3 & 4 would probably be best implemented as a new event in the DockPanel: ActiveAutoHideContentChanged. You could then track the last autohide content on your own and when the event is raised you know that #3 is occurring if the new value is not null and #4 is occurring if the last known value was not null.
Feel free to open a request on GitHub to have the event added.
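As a rough, hypothetical sketch of the tracking described for #3 and #4 (it assumes such an ActiveAutoHideContentChanged event were added to DockPanel, which it is not today, and that dockPanel is your DockPanel instance):

private IDockContent lastAutoHideContent;

private void DockPanel_ActiveAutoHideContentChanged(object sender, EventArgs e)
{
    IDockContent current = dockPanel.ActiveAutoHideContent;

    if (current != null)
        Console.WriteLine("#3: an auto-hide dock slid into view");

    if (lastAutoHideContent != null)
        Console.WriteLine("#4: the previous auto-hide dock slid back into its tab");

    lastAutoHideContent = current;
}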
