I have a problem with a Unity project I am working on and have been trying to solve it for days.
I am using the Meta 2 and an HTC Vive base station. I am continuing the work of a colleague who developed the project on another computer, and I am trying to recreate what he built. He gave me the project and everything that goes with it, but he doesn't know how to solve my problem either.
It looks like this:
I can open and play the project. When I play it, I can see everything I want to see in 3D with the Meta 2, but I cannot look around. When I move the Meta I see some kind of "vibrating" movement on the screen, but in the end everything stays in the same position.
I also connected the Sense Gloves I have to my computer. I can move the fingers of the Sense Gloves on the screen (they are shown as blue circles which I can move), BUT I cannot move the whole hand. It stays where it was initialized in the room.
I think it is a problem with the transfer of the Vive tracking data to Unity (the position of the tracker coordinate system in Unity is not moving even though it should).
I hope this explanation is accurate enough. If you need more information, do not hesitate to ask me, please.
Kind regards
Alex
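One way to narrow this down is to check whether any tracking data reaches Unity at all. Below is a minimal diagnostic sketch, assuming the project runs through Unity's built-in XR pipeline (a Meta 2 rig driven by its own SDK may bypass this, so treat it as a first probe, not a definitive test):

```csharp
using UnityEngine;
using UnityEngine.XR;

// Diagnostic sketch: logs the head pose reported by the XR subsystem once
// per second. If the logged position never changes while the headset is
// moved, the problem is upstream of the scene (SteamVR, drivers, base
// station calibration), not in the project's own scripts.
public class TrackingProbe : MonoBehaviour
{
    float nextLog;

    void Update()
    {
        if (Time.time < nextLog) return;
        nextLog = Time.time + 1f;

        Vector3 pos = InputTracking.GetLocalPosition(XRNode.Head);
        Quaternion rot = InputTracking.GetLocalRotation(XRNode.Head);
        Debug.LogFormat("Head pose: pos={0} rot={1}", pos, rot.eulerAngles);
    }
}
```

Attach it to any GameObject in the scene and watch the Console while moving the headset.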
I've been trying to figure out how to use the RTS plugin from the Unity Asset Store, but there seems to be a problem. I have no idea why this is happening, but the z and y values are mixed up. I can scroll left and right perfectly, but whenever I press "W" the screen zooms out instead of moving upwards. The same applies to scrolling: when I scroll, the screen moves upwards/downwards and doesn't zoom like it's supposed to. I tried creating my own RTS camera via Brackeys and the same thing happened. His game would zoom, mine would just move upwards. I'm not sure what's wrong. I'm fairly new to all this Unity jazz. ANY help would be appreciated.
It's a little hard to know exactly why this happens with the information you have provided. But if I were you, the first thing I would check is your Input Manager! Have you done that and made sure your inputs correlate to exactly what you want?
It sounds like your settings may not be the defaults, which would cause the difference.
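For reference, a minimal sketch of an RTS camera that keeps pan and zoom on separate inputs (using only the default Input Manager axes, so it is easy to see when an axis is mapped to the wrong action):

```csharp
using UnityEngine;

// Minimal RTS-style camera sketch: W/S pan forward/back on the ground
// plane, A/D pan sideways, and the mouse scroll wheel zooms by moving
// the camera along its own view direction. If W zooms instead of
// panning, the "Vertical" axis is being fed into the zoom code.
public class RtsCamera : MonoBehaviour
{
    public float panSpeed = 10f;
    public float zoomSpeed = 200f;

    void Update()
    {
        // "Horizontal" and "Vertical" are the default Input Manager axes (A/D, W/S).
        float x = Input.GetAxis("Horizontal");
        float z = Input.GetAxis("Vertical");

        // Pan on the XZ plane only, so W never moves the camera toward the ground.
        Vector3 pan = new Vector3(x, 0f, z) * panSpeed * Time.deltaTime;
        transform.Translate(pan, Space.World);

        // Zoom along the camera's forward axis with the scroll wheel.
        float scroll = Input.GetAxis("Mouse ScrollWheel");
        transform.Translate(Vector3.forward * scroll * zoomSpeed * Time.deltaTime, Space.Self);
    }
}
```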
I'm trying to make a simple AR Images app for students to use, but since I don't have much programming experience, I'm trying to just use and change the example files as the base for the app.
So far, after many hours of trying and reading about it, I still couldn't make it work with my own images and prefabs... Using Augmented Images should be the easiest thing in ARCore, right?
Could someone please help, by uploading a working base project, that is independent of the ARCore example files? Or at least the main scripts with these improvements:
using several images and prefabs, like 15 or 20 max.
an easier way, like a drag-and-drop prefab list, of matching each image's number or name with the correct prefab...
...maybe like shown in ARCore + Unity + Augmented Images - Load different prefabs for different Images
an easier way of updating trackables: their state, adding prefabs when images become visible and removing them when they get out of sight
I have seen some answers to individual problems, but it's hard to put all that into the scripts without messing something else up... at least for me it is ;-)
I think this could help a lot more people besides me, by updating this problem and putting several answers in one place. Thank you!
Using: Unity 2019.2 beta - ARCore 1.10 for Unity
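A sketch of the drag-and-drop mapping asked for above, assuming the GoogleARCore Unity SDK (around 1.10): a serializable list of image-name/prefab pairs filled in the Inspector, plus per-frame trackable updates that spawn a prefab when its image is tracked and hide it when tracking is lost. The image names must match the entries in your Augmented Image database; this is a starting point, not a drop-in replacement for the example controller.

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class AugmentedImagePrefabs : MonoBehaviour
{
    [System.Serializable]
    public struct ImagePrefabPair
    {
        public string imageName;   // name in the AugmentedImageDatabase
        public GameObject prefab;  // prefab to place on that image
    }

    // Fill this list in the Inspector: one entry per database image.
    public List<ImagePrefabPair> pairs = new List<ImagePrefabPair>();

    readonly Dictionary<int, GameObject> spawned = new Dictionary<int, GameObject>();
    readonly List<AugmentedImage> updated = new List<AugmentedImage>();

    void Update()
    {
        // Only process images whose state changed this frame.
        Session.GetTrackables<AugmentedImage>(updated, TrackableQueryFilter.Updated);

        foreach (AugmentedImage image in updated)
        {
            GameObject go;
            spawned.TryGetValue(image.DatabaseIndex, out go);

            if (image.TrackingState == TrackingState.Tracking)
            {
                if (go == null)
                {
                    GameObject prefab = FindPrefab(image.Name);
                    if (prefab == null) continue; // no pair configured for this image
                    Anchor anchor = image.CreateAnchor(image.CenterPose);
                    go = Instantiate(prefab, anchor.transform);
                    spawned[image.DatabaseIndex] = go;
                }
                go.SetActive(true);
            }
            else if (go != null)
            {
                go.SetActive(false); // image lost: hide instead of destroying
            }
        }
    }

    GameObject FindPrefab(string imageName)
    {
        foreach (ImagePrefabPair p in pairs)
            if (p.imageName == imageName) return p.prefab;
        return null;
    }
}
```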
I have a problem using HoloLens and Unity 2017.3.0f3. I open a menu (a Canvas in World Space) depending on where the user is looking, using voice commands.
When the menu appears it seems stable, but when walking around it there is a strange point where it starts to shake and separates into different colors, like a rainbow. Very shocking. After 2 seconds it stabilizes and looks fine again, but every time I return to that point the same thing happens.
It's quite strange, as it only happens at a specific angle, around 65° to the left (clockwise, imagining you are at 6 o'clock and the object is the center of the circle).
I have improved the general stabilization using SetFocusPointForFrame, but it still shows that strange color shake. I also tried reducing the frame rate to 24 FPS, with no results. Quality is set to lowest... I don't know what more I can do. Any help?
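For context, a minimal sketch of how the focus point can be pinned to the menu each frame, assuming Unity 2017.x's `UnityEngine.XR.WSA.HolographicSettings` API. Color separation on HoloLens comes from late-stage reprojection, which only fully stabilizes content on the focus plane, so the plane should sit on the hologram the user is looking at and face the camera:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

// Sets the stabilization plane onto a world-space menu every frame.
// Content on this plane gets the best color-separation correction.
public class StabilizeOnMenu : MonoBehaviour
{
    public Transform menu; // the world-space canvas to stabilize

    void LateUpdate()
    {
        if (menu == null) return;
        Vector3 normal = -Camera.main.transform.forward; // plane faces the user
        HolographicSettings.SetFocusPointForFrame(menu.position, normal);
    }
}
```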
To understand better the effect I found this video:
https://youtu.be/QMrx-BU4Hnc?t=6m25s
That final effect is what happens to my hologram, but the color separation is bigger and everything shakes.
Thank you!
EDIT: I tried to record a video using the Device Portal, but the objects disappear instead of shaking O_o really weird.
Well, it's not really a fix, but if I use Unity 2017.1.0f and Visual Studio 2015 with UWP SDK 10.0.14393.0, the problem disappears... I'll try to find out whether it's a Unity problem or an SDK problem, but for the moment this is a valid workaround to avoid the terrible shake with color separation.
Hope this helps someone! :)
In Unity3D, objects are not rendered unless they are inside the camera's field of view. This is obviously the way to go for optimization purposes. However, Unity still renders meshes that cannot be seen by the player because they are occluded by other geometry. I would like to solve this and was wondering whether there is already a method to do so or whether I have to do it myself.
Here's a picture to help illustrate my point.
So far my only real idea is to use layer cull distances, but that culls by range, which is not necessarily the same as what is actually visible.
https://docs.unity3d.com/ScriptReference/Camera-layerCullDistances.html
I guess essentially what I need to know is how to do occlusion culling after a scene starts, because the scene is generated at runtime, not premade.
For anyone who's interested, I asked the Unity community:
Answer by Bunny83 · 4 hours ago
No, that's not possible. At least not with the occlusion culling system that Unity uses. The calculation of which parts are visible from which points is quite complicated and has to be precomputed in the editor, so it won't work for procedurally generated levels. You have to roll your own solution if you need something like that.
Just a few weeks (or months?) ago I implemented a VisPortals solution similar to those used by Doom 3 (basically how most id Tech engines work). It's more a proof of concept than a ready-to-use solution. Usually I upload a webplayer demo to my Dropbox, but I just realised that Dropbox now prevents viewing HTML pages directly from my public folder. They force a download of the page, which breaks everything. So if you want to try it, you have to download the project.
Of course, vis portals don't work in all situations. They are perfect for closed environments that can be split nicely into separate areas. This splitting into areas and the creation of the vis portals is currently done by hand, so you would need to automate it yourself.
Be careful with static batching: it might break the system, as each area has to be separate so it can be enabled/disabled separately.
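The hand-authored area toggling described above can be sketched roughly like this (a proof of concept in the same spirit, not Bunny83's actual implementation): each area is a parent object for one region's geometry and knows its neighbors; only the area containing the viewer and its portal-connected neighbors stay active.

```csharp
using System.Collections.Generic;
using UnityEngine;

// One Area per region of the level, linked to the areas it shares a
// portal with. A real implementation would also test each portal's
// screen-space visibility before enabling the neighbor behind it.
public class Area : MonoBehaviour
{
    public List<Area> neighbors = new List<Area>(); // areas sharing a portal
    public Bounds bounds;                           // region this area covers

    public bool Contains(Vector3 point) { return bounds.Contains(point); }
}

public class AreaCuller : MonoBehaviour
{
    public List<Area> areas = new List<Area>();
    public Transform viewer; // usually the camera

    void Update()
    {
        // Find the area the viewer is currently inside.
        Area current = null;
        foreach (Area a in areas)
            if (a.Contains(viewer.position)) { current = a; break; }
        if (current == null) return; // outside all areas: leave state as-is

        // Keep the current area and its direct neighbors enabled, disable the rest.
        var visible = new HashSet<Area>(current.neighbors) { current };
        foreach (Area a in areas)
            a.gameObject.SetActive(visible.Contains(a));
    }
}
```

As noted above, this only works if static batching does not merge geometry across areas, since each area must be toggled independently.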
We've set up the Leap Motion, got it to run successfully in standard Unity by moving the DLLs around per the instructions, and can successfully track hand positions when running the scenes in this demo. But we cannot grab objects in any scene. We have only gotten the Boxing and Flying scenes to work, because those in fact require no gestures: simply pushing outwards knocks the bag around, and flight is driven just by the relative positions of the hands. But we cannot get the actual grab action to execute, in Unity only. The Airspace apps (orientation + freeform) work fine, and the Visualizer works fine.
See this short video of us trying: http://youtu.be/9kTXCEwUhoc The documentation for Boxing, ATVDriving, and Weapons all just says to grab when colliding, but we've tried many times and cannot get it to execute even once. The rings should turn red exactly like here: https://www.youtube.com/watch?v=RA7a6foNlHo&t=1m8s but they never do; they always stay blue no matter what we do.
Any idea what's wrong?
Demo Pack Documentation: https://developer.leapmotion.com/documentation/skeletal/csharp/devguide/Unity_Demo_Pack.html
GitHub project: https://github.com/GameMakersUnion/LeapTest (already has DLLs setup for Standard -free- Unity)
This question was answered on the Leap Motion forum; thought I'd copy it here:
Are you trying to grab objects in general or are you trying to get the demo pack to work?
I don't really know why the demo pack doesn't work as I don't know very much about it, but if you're trying to make an app to grab objects you might check out our v2 tracking and skeletal assets.
developer.leapmotion.com //
https://www.assetstore.unity3d.com/en/#!/content/177703
There are a few demos in our gallery that show grabbing, specifically the RagdollThrower example.
Also, next week we'll be updating the skeletal unity assets with more powerful grabbing.
Not sure if this is what you're looking for, but I thought it might be of interest.
(Source: https://community.leapmotion.com/t/anyone-here-tried-the-unity-demo-pack-we-cant-grab-objects-in-it/1415)
What you need is to look for the physics model for hand pinching, or something like that; the word "pinch" is the trick.
You can add the pinching to your hand controller or sandbox; then the object needs a Rigidbody to be grabbable.
Please contact me if you want to share info about Leap and C#.
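A rough sketch of the pinch approach, assuming the Leap v2 skeletal API (which exposes a per-hand `PinchStrength`): when pinch strength passes a threshold, the nearest rigidbody within reach is picked up and carried; releasing the pinch drops it. The `ToUnityScaled()` conversion, the threshold, and the device-to-world mapping via this object's transform are assumptions that depend on your HandController setup:

```csharp
using Leap;
using UnityEngine;

public class PinchGrab : MonoBehaviour
{
    public float pinchThreshold = 0.8f; // PinchStrength runs from 0 to 1
    public float grabRadius = 0.1f;     // grab reach in world-space meters

    Controller controller = new Controller();
    Rigidbody held;

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) { Release(); return; }

        Hand hand = frame.Hands[0];
        // Map the Leap palm position (millimeters, device-relative) into world space.
        Vector3 pinchPos = transform.TransformPoint(
            hand.StabilizedPalmPosition.ToUnityScaled());

        if (hand.PinchStrength > pinchThreshold)
        {
            if (held == null) TryGrab(pinchPos);
            if (held != null) held.MovePosition(pinchPos);
        }
        else
        {
            Release();
        }
    }

    void TryGrab(Vector3 pos)
    {
        // Grab the first rigidbody found within reach of the pinch point.
        foreach (Collider c in Physics.OverlapSphere(pos, grabRadius))
        {
            if (c.attachedRigidbody != null)
            {
                held = c.attachedRigidbody;
                held.isKinematic = true; // carry it without fighting physics
                return;
            }
        }
    }

    void Release()
    {
        if (held == null) return;
        held.isKinematic = false;
        held = null;
    }
}
```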