I'm working on a small game project using Windows Mixed Reality headset (Lenovo Explorer) and Unity. I'm currently running the latest MRTK v2.1 release.
I'm using a custom right hand controller. It's a prefab where the main object has the following components:
The MRTK's WindowsMixedRealityControllerVisualizer script
An input controller script (implementing IMixedRealityInputHandler) that maps input to actions (shoot, jump, etc.)
A custom script that carries out the actual actions in the VR world
Besides those components, the prefab has a child object holding the 3D model I want rendered. Keeping the model in a child lets me position it with some offset; AFAIK this is not a problem. This whole thing is its own prefab, which I then assign in my custom MixedRealityControllerVisualizationProfile under Global Right Hand Controller Model. In general this works just as I want: the controller is rendered correctly on my right hand and the inputs behave as I intended.
My problem is that once in the game, when I press the Home button (Windows logo) to show the floating menu and then press it again to return to the game, a new controller is spawned at (0,0,0) (or wherever my hand is at the moment of returning). I still have one on my hand, though, and this new one also responds to input the same way the one on my hand does. If I open and close the Home menu again this repeats, and I end up with several controllers spawned. So when I shoot, a shot is fired both from my hand and from the new controller at (0,0,0), or from as many controllers as I've ended up with in the scene by then.
I don't think my controller is ever losing tracking, so I don't know why the MRTK is spawning a new one. I've considered checking for extra controller objects in the scene and deleting them manually every update, but that sounds silly; surely there is some configuration somewhere that can take care of this? Doesn't the visualizer script handle it?
I've looked around online but haven't found anything specific about this. Any clues will be greatly welcomed.
This sounds like a bug in the MRTK. I would recommend filing an issue on the GitHub repository at https://github.com/microsoft/MixedRealityToolkit-Unity/issues
I would like to make a color system for my player. In my game scene the player can pick up coins, and the coin count is saved with PlayerPrefs, but I don't know how to use that coin count in my menu scene.
I also need some help with a player color selector. When the player selects a color, my game scene must instantiate the player with that color.
So I think I need to know how to communicate between two scenes.
Can somebody help me with a tutorial?
There are multiple ways to achieve this, and which one you pick largely comes down to taste: save data to a file, use DontDestroyOnLoad...
But from what I understand, the recommended approach now is to create a "manager scene" that stays alive throughout the lifetime of your app and passes data to and from your other scenes as they are opened and closed, instead of using DontDestroyOnLoad:
It is recommended to avoid using DontDestroyOnLoad to persist manager GameObjects that you want to survive across scene loads. Instead, create a manager scene that has all your managers and use SceneManager.LoadScene(, LoadSceneMode.Additive) and SceneManager.UnloadScene to manage your game progress. src
See the Unity guide here. Basically you would have two scenes open at any given moment: the manager scene and whatever the currently active game scene is. You can then communicate between scripts in the two open scenes via event delegates. The way it would work is:
Player selects color in scene1
Color is sent from scene1 to manager scene via event delegate
scene1 is unloaded and scene2 is loaded
Color is sent from manager scene to scene2
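The steps above could be sketched roughly like this (the class name ColorManager and the event name ColorChosen are my own placeholders, not an official API):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Lives in the manager scene, which is never unloaded.
public class ColorManager : MonoBehaviour
{
    public static ColorManager Instance { get; private set; }

    // Scripts in game scenes subscribe to receive the chosen color.
    public event System.Action<Color> ColorChosen;

    public Color CurrentColor { get; private set; } = Color.white;

    void Awake() => Instance = this;

    // Called by a script in scene1 when the player picks a color.
    public void SetColor(Color color)
    {
        CurrentColor = color;
        ColorChosen?.Invoke(color);
    }

    // Swap the active game scene while the manager scene stays loaded.
    public void SwitchScene(string from, string to)
    {
        SceneManager.UnloadSceneAsync(from);
        SceneManager.LoadScene(to, LoadSceneMode.Additive);
    }
}
```

A script in scene2 could then read ColorManager.Instance.CurrentColor in its Start(), or subscribe to ColorChosen, to color the spawned player.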
This is the approach I've been using for a project, which looks like this:
"0-Attract", "1-Sealant", "2-Paint", and "3-Conclusion" are my actual game scenes, and "Manager Scene" contains everything that exists in every other scene (so there's no reason to kill and respawn those objects) as well as all of my "manager" scripts, which handle the passing of data between scenes.
Note that multi-scene editing can be confusing at first, as there are new things you need to pay attention to (e.g., which scene is currently "active" and what that means), so be sure to go through the Unity guide I posted above. Good luck!
You can call the DontDestroyOnLoad method on any object that you want to survive when moving to a new scene, so it will not be destroyed when the old scene is unloaded.
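For example, a minimal sketch (the Persistent class name is mine):

```csharp
using UnityEngine;

// Attach to any GameObject that should survive scene loads.
public class Persistent : MonoBehaviour
{
    void Awake()
    {
        // Note: without a duplicate check, re-entering the scene that
        // originally contained this object will create a second copy.
        DontDestroyOnLoad(gameObject);
    }
}
```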
I'm working on a mod engine for a unity game. It is 2D based and stores all the level data in separate files ("level1", "level2") in the game data folder (I believe this is standard for unity). I am looking to be able to edit these in post to add/remove game objects in the scene. I wish to be able to do this programmatically in C#.
I've already had a look at the file in the hex editor, and it seems like this is possible (I can see basic game object data).
Currently I'm loading the scene and then moving all the objects around or instantiating new ones, but this is proving unstable because of the way the game handles objects.
If anyone could point me in the direction of how I would go about this, it would be greatly appreciated.
Update for those asking for additional info: yes, by levels I mean scenes; Unity saves them as "level0", "level1", etc.
I am not the author of the game, and the game was not designed with changing the scenes in mind. Almost all of the interactable objects have special triggers attached to them, so moving them requires me to be extremely careful or the game crashes.
From what you write, it seems your separate files are actual Unity scenes. You should have only one scene, plus a system in place that reads and loads your level data, for example from text files, and then programmatically instantiates those objects at run time.
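A minimal sketch of that idea, assuming a made-up text format where each line is "prefabName x y" (the format and class name are my own, not from the game in question):

```csharp
using System.IO;
using UnityEngine;

// Reads a plain-text level file and spawns the objects it describes.
public class LevelLoader : MonoBehaviour
{
    public void Load(string path)
    {
        foreach (string line in File.ReadAllLines(path))
        {
            // Expected line format: "prefabName x y"
            string[] parts = line.Split(' ');
            var prefab = Resources.Load<GameObject>(parts[0]);
            var pos = new Vector2(float.Parse(parts[1]), float.Parse(parts[2]));
            Instantiate(prefab, pos, Quaternion.identity);
        }
    }
}
```

This keeps level data out of the binary scene files entirely, so a mod engine can edit the text files without touching Unity's serialized scenes.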
I am developing an application using the new version of the SteamVR plugin (2.2) for Unity, and I need to navigate between several scenes during the game.
All my scenes contain a Player object from the SteamVR plugin that handles input and events. It is a DontDestroyOnLoad singleton, and I don't know the best way to handle this object when changing scenes.
There are three options in my opinion:
Every scene contains a Player object, but SteamVR doesn't check whether an instance already exists, so I add that check myself so that only one Player remains after scene initialization.
Destroy the current instance before loading the new scene; but with the new event system Valve introduced in this version, I don't know whether that is a good way to guarantee a single Player at a time, and it could potentially create event conflicts (I haven't seen any yet).
The last one, which I'm not really confident about, is to allow two Players per scene, one of which is the proper instance of the singleton class (much more annoying to retrieve the right Player instance, and could they conflict with each other?).
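The first option (a duplicate check layered on top of SteamVR's Player) could be sketched like this; the PlayerGuard class name is mine, not part of the plugin:

```csharp
using UnityEngine;

// Attach to the same GameObject as SteamVR's Player prefab in every scene.
public class PlayerGuard : MonoBehaviour
{
    static PlayerGuard instance;

    void Awake()
    {
        if (instance != null)
        {
            // A Player already survived from the previous scene:
            // remove the duplicate that was baked into this one.
            Destroy(gameObject);
            return;
        }
        instance = this;
        DontDestroyOnLoad(gameObject);
    }
}
```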
If someone has an idea, or can give me a better understanding of my design issue, you are welcome :)
I have a problem with the project I am working on in Unity and have been trying to solve it for days.
I am using the Meta 2 and the HTC Vive base station. I am continuing the work of a colleague who was working on another computer, and I am trying to recreate the project he created. He gave me the project and everything included, but he also doesn't know how to solve my problem.
It looks like this:
I can open and play the project. When I play it, I can see everything I want to see in 3D with the Meta 2, but I cannot look around. When I move the Meta I see some kind of "vibrating" movement on the screen, but in the end everything stays in position.
I also connected the Sense Gloves to my computer. I can move the fingers of the Sense Gloves on the screen (they are shown as blue circles which I can move), BUT I cannot move the whole hand. It stays where it was initialized in the room.
I think it is a problem with the transfer of the Vive tracking data to Unity (the position of the tracker coordinate system in Unity is not moving even though it should).
I hope this explanation is accurate enough. If you need more information, do not hesitate to ask me, please.
Kind regards
Alex
I'm building a Unity app whose target environment must include Mixed Reality. I've been able to find very good file picker assets on the Asset Store, but none of these seem to work in the Mixed Reality headset, although they appear on screen even in VR mode.
Are there any default MR assets that I should be using or is there anything I should be looking for? Or do I have to build all this from scratch?
Thanks
The difference in VR is that there's no cursor, so the normal EventSystem doesn't work out of the box. The simplest workaround that worked for me was this:
Add a box collider component to your UI elements and raycast from the controller against those colliders. If the hit collider has a component that implements the IPointerClickHandler interface, you can call its OnPointerClick(PointerEventData e) method and it will be considered a valid click (although that bypasses EventSystem navigation).
You'll need to pass a PointerEventData object. I can't remember if you can just pass null, but I am pretty sure passing a new PointerEventData(EventSystem.current) is fine.
For drags and more complicated events you might need to fill in some additional fields for the UI to behave correctly.
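The workaround above could be sketched like this (the class name and the controller field are my own assumptions; wire TryClick to whatever button your controller SDK exposes):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Physics raycast from the controller, forwarding hits to any
// UI element that implements IPointerClickHandler.
public class VRUIRaycaster : MonoBehaviour
{
    public Transform controller;   // assumed reference to the controller transform
    public float maxDistance = 10f;

    // Call this when the controller's click/trigger is pressed.
    public void TryClick()
    {
        if (Physics.Raycast(controller.position, controller.forward,
                            out RaycastHit hit, maxDistance))
        {
            var handler = hit.collider.GetComponent<IPointerClickHandler>();
            if (handler != null)
            {
                // Minimal event data; drags would need more fields filled in.
                var data = new PointerEventData(EventSystem.current);
                handler.OnPointerClick(data);
            }
        }
    }
}
```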
I ended up writing my own file picker using a "file manager" asset purchased from the Asset Store together with the Mixed Reality Toolkit. Would it be worthwhile for me to put it on the Asset Store, or is this overtaken by events now that we have a better MRTK available?