I'm building a Unity app whose target environments must include Mixed Reality. I've been able to find very good file picker assets on the Asset Store, but none of them seem to work in a Mixed Reality headset, although they do appear on screen even in VR mode.
Are there any default MR assets that I should be using, or is there anything in particular I should be looking for? Or do I have to build all this from scratch?
Thanks
The difference in VR is that there's no cursor, so the normal EventSystem doesn't work out of the box. The simplest workaround that worked for me was this:
Add a BoxCollider component to your UI elements and raycast from the controller against those colliders. If the hit collider has a component that implements the IPointerClickHandler interface, you can invoke its OnPointerClick(PointerEventData e) method and it will be treated as a valid click (although that bypasses EventSystem navigation).
You'll need to pass a PointerEventData object. I can't remember whether you can just pass null, but I'm pretty sure passing a new PointerEventData(EventSystem.current) is fine.
For drags and more complicated events you might need to fill in some additional fields for the UI to behave correctly.
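To illustrate, here's a minimal sketch of that approach; the class name ControllerRaycastClicker and the way you call it from a trigger press are placeholders for your own input setup:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class ControllerRaycastClicker : MonoBehaviour
{
    [SerializeField] private float maxDistance = 10f;

    // Call this from whatever handles your controller's "select" / trigger press.
    public void TryClick()
    {
        // Raycast forward from the controller against the box colliders on your UI.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, maxDistance))
        {
            // A mostly empty PointerEventData is usually enough for a simple click.
            var eventData = new PointerEventData(EventSystem.current);

            // ExecuteHierarchy walks up from the hit object and invokes OnPointerClick
            // on the first component implementing IPointerClickHandler it finds.
            ExecuteEvents.ExecuteHierarchy(hit.collider.gameObject, eventData,
                ExecuteEvents.pointerClickHandler);
        }
    }
}
```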
I ended up writing my own file picker using a "file manager" asset purchased from the Asset Store and the Mixed Reality Toolkit. Would it be worthwhile for me to put it on the Asset Store, or is this overtaken by events now that we have a better MRTK available?
I'm new to Unity and my goal is not to make a game but a mobile application. I'm not using the Android SDK directly because I need to do something with augmented reality, and I need to do it with Unity. It's getting quite complicated because I'm not familiar with the language or with anything else. In short, I was able to change scenes by means of a button, with a check that validates some things first, but Unity doesn't have a built-in scene transition effect or anything similar. I've seen several videos where a transition with effects is done, but they trigger when you touch a button or touch the screen. I would like to know whether it's possible to run an effect after the information is successfully validated, and then play a transition effect into the other scene. Is this possible? Thanks!
You could use:
SceneManager.LoadSceneAsync
Unity SceneManager.LoadSceneAsync manual
for loading the scene while you run your own transition. A fade in/out is the most common transition for such occasions.
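A minimal sketch of what that could look like, assuming a full-screen CanvasGroup (e.g. a black Image) used as the fade overlay; the class and field names here are just placeholders:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class SceneFadeLoader : MonoBehaviour
{
    [SerializeField] private CanvasGroup fadeOverlay;   // full-screen black Image
    [SerializeField] private float fadeDuration = 0.5f;

    // Call this once your validation succeeds.
    public void LoadWithFade(string sceneName)
    {
        StartCoroutine(FadeAndLoad(sceneName));
    }

    private IEnumerator FadeAndLoad(string sceneName)
    {
        // Fade the overlay to fully opaque.
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            fadeOverlay.alpha = t / fadeDuration;
            yield return null;
        }
        fadeOverlay.alpha = 1f;

        // Load the next scene while the screen is covered.
        AsyncOperation op = SceneManager.LoadSceneAsync(sceneName);
        while (!op.isDone)
            yield return null;
    }
}
```

You can do the same in reverse (fade the overlay back out) from a script in the newly loaded scene.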
I'm working on a small game project using Windows Mixed Reality headset (Lenovo Explorer) and Unity. I'm currently running the latest MRTK v2.1 release.
I'm using a custom right hand controller. It's a prefab where the main object has the following components:
The MRTK's WindowsMixedRealityControllerVisualizer script
An input controller script (using IMixedRealityInputHandler) that maps input to actions (shoot, jump, etc.)
A custom script dealing with the actual actions in the VR world.
Besides these components, it has a child object which is the 3D model prefab of the object I want rendered. It's a child so I could place it correctly with some offset; AFAIK this is not a problem. This whole thing is its own prefab that I then assign in my custom MixedRealityControllerVisualizationProfile under Global Right Hand Controller Model. In general this works just as I want: the controller is rendered correctly on my right hand and the inputs behave like I wanted them to.
My problem is that once in the game, when I click the Home button (Windows logo) to show the floating menu and then click a second time to go back to the game, a new controller is spawned at 0,0,0 (or wherever my hand is at the time of returning to the game). I still have one on my hand, though, and this new one also responds to input the same way as the one on my hand. If I open/close the Home menu again this repeats, and I end up with several controllers spawned. So when I shoot, a shot is fired both from my hand and from the new controller at 0,0,0 (or from as many controllers as I've ended up with in the scene by then).
I don't think my controller ever loses tracking, so I don't know why the MRTK is spawning a new one. I've considered checking for extra controller objects in the scene and deleting them manually every update, but that sounds silly; there must be some configuration somewhere that can take care of this, no? Doesn't the visualizer script take care of this?
I've looked around online but haven't found anything specific about this. Any clues will be greatly welcomed.
This sounds like a bug in the MRTK. I would recommend filing an issue on the GitHub repository at https://github.com/microsoft/MixedRealityToolkit-Unity/issues
EXCUSE MY ENGLISH, I AM USING GOOGLE TRANSLATE
Hi, I'm creating a multiplayer game with AAA graphics; among other things I added the PostProcessingBehaviour component to the camera. In the UI menu I made, under graphics settings, I would like a button that disables all the components I put on the camera for the AAA graphics, so that less powerful PCs can still play. For example, to disable the PostProcessingBehaviour via a UI button I tried using OnClick, dragging in the camera and toggling the component's enabled bool. That works, but the game is online, and then it doesn't work; if it were offline my problem would be solved just by using the UI button, but with the online game I can't, of course. What script could I use? The script would go on the UI button and disable the script component, in this case PostProcessingBehaviour.
How can I do this? Can you recommend a script? I have little experience and I'm still learning; I really need this. Thanks in advance.
IN SHORT
I would like a script that disables the PostProcessingBehaviour component on the "camera" object.
I'm using C#.
I'm using Unity 3D.
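Not knowing the exact setup, here's a minimal sketch of the OnClick approach described above, assuming the legacy PostProcessingBehaviour from the Post Processing Stack v1; a graphics setting like this only affects the local player's own camera, so it shouldn't need to be synchronized over the network:

```csharp
using UnityEngine;
using UnityEngine.PostProcessing;   // Post Processing Stack v1

public class GraphicsQualityToggle : MonoBehaviour
{
    // Drag the camera's PostProcessingBehaviour here in the Inspector.
    [SerializeField] private PostProcessingBehaviour postProcessing;

    // Hook this method up to the UI Button's OnClick (or a Toggle's OnValueChanged).
    public void SetPostProcessingEnabled(bool value)
    {
        postProcessing.enabled = value;
    }
}
```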
Not sure whether this is best suited for here or the Unity or MS forums anymore, but we'll try Stack Overflow.
I've been trying to reproduce HoloLens tutorial 211 using the HoloToolkit. I'm just trying to do section 1 and reproduce the hand recognition.
In this situation, I've used all the files that are in the HoloToolkit that shared a name with those in the tutorial - except for Singleton, which seems to work differently in the two cases. For any files in the tutorial that were not in the Toolkit, I copied them over.
The HandsManager is triggered: private void InteractionManager_SourceDetected(InteractionSourceState hand) gets called, handsDetected is set to true, and handDetectedGameObject is set to active. Even so, nothing seems to change regarding the cursor. I'm not sure what information would be useful to reproduce this beyond what I wrote (I don't think it makes sense to drop so many files here on SO), but does anyone know why this might be? I'm using the same CursorFeedback script, and I've attached the HandDetectedFeedback prefab as its Hand Detected Asset, using a homemade prefab with a Billboard.cs component as the Feedback Parent.
If any more information here is useful let me know and I can provide it.
I haven't looked at the tutorials in a long time, but last time I did they were horribly out of date. Getting input from the Toolkit has changed dramatically since they were written.
You need to add the InputManager from the Toolkit to your scene. Then create a script, add it to an object in your scene, and have it implement the ISourceStateHandler interface with "OnSourceDetected" and "OnSourceLost", which are triggered when a hand is detected and when it is lost.
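A rough sketch of what that script might look like, based on the HoloToolkit's InputModule (the exact interface and event data names may differ slightly between Toolkit versions, and the indicator GameObject is just a placeholder):

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class HandDetectionFeedback : MonoBehaviour, ISourceStateHandler
{
    [SerializeField] private GameObject handDetectedIndicator;

    public void OnSourceDetected(SourceStateEventData eventData)
    {
        // An input source (e.g. a hand) came into the sensor's view.
        handDetectedIndicator.SetActive(true);
    }

    public void OnSourceLost(SourceStateEventData eventData)
    {
        // The input source left the sensor's view.
        handDetectedIndicator.SetActive(false);
    }
}
```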
For more details you can reference the documentation from the HoloToolkit:
https://github.com/Microsoft/HoloToolkit-Unity/blob/master/Assets/HoloToolkit/Input/README.md
or check out the more complete, up-to-date tutorial I have on my web site. This part of the tutorial specifically implements hand and click recognition:
http://www.cameronvetter.com/2017/01/03/hololens-tutorial-finalize-spatial-understanding/
Hello, I exported a Blender .fbx file into my Assets folder in Unity and it was working perfectly. I positioned it in front of the camera, but when I press Play the gun looks like it's been taken apart. What's going on?! If this helps, I separated some parts so I could use them for animation, since it looks like all the animated parts are separate. Is this a Unity thing or a Blender thing?
http://www.flickr.com/photos/87198010#N07/7985274177/in/photostream
Two options:
It could be that your animation is incorrect. Remove the animation to test.
The position of the child objects, barrel and trigger, may be incorrect in relation to the parent. I don't know for a fact; these are just a few things that come to mind.
Let us know when you find out.
When you load a mesh into Unity, Unity will by default add an Animation component to it, even if you don't have any animations yet. This may be what is making the gun bug out.
If there is an Animation component on it, remove said component and try again.
It might also be that different parts of the gun have different scales or rotations. Before you export from Blender, apply rotation and scale to the objects ('Apply > Rotation & Scale'), and then export.