I am working on a project using ARCore.
I need the real-world image that is visible through the ARCore camera; previously I used the approach of hiding the UI and capturing the screen.
But that was so slow that I looked for an alternative and found Frame.CameraImage.Texture in the ARCore API.
It worked normally in the Unity Editor environment.
But when I build to my phone and check, the texture 'has no data'.
Texture2D snap = (Texture2D)Frame.CameraImage.Texture;
Perhaps the texture is null.
What is the reason?
If this doesn't work, is there another way to get only the real-world image in the mobile phone environment?
The real-world image is needed for image segmentation and must not include the augmented objects.
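In case it helps to show what I need: below is a rough sketch of the CPU-image path I am considering instead, assuming the GoogleARCore Unity SDK's Frame.CameraImage.AcquireCameraImageBytes() (the class name RealWorldImageGrabber is just for illustration; I have not confirmed this on a device yet).

using System;
using System.Runtime.InteropServices;
using GoogleARCore;
using UnityEngine;

public class RealWorldImageGrabber : MonoBehaviour
{
    void Update()
    {
        // CPU-side camera image (YUV); this path is meant to be populated
        // on the device even when Frame.CameraImage.Texture has no data.
        using (CameraImageBytes image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable)
            {
                return; // no camera frame available this frame
            }

            int width = image.Width;
            int height = image.Height;
            byte[] yPlane = new byte[width * height];

            // Copy the luminance (Y) plane row by row, respecting the row stride.
            for (int row = 0; row < height; row++)
            {
                IntPtr rowStart = new IntPtr(image.Y.ToInt64() + (long)row * image.YRowStride);
                Marshal.Copy(rowStart, yPlane, row * width, width);
            }

            // yPlane now holds only the grayscale real-world image (no augmented
            // objects), which could be handed to the segmentation step; the U/V
            // planes can be copied the same way if color is needed.
        }
    }
}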
I want to design VR lenses for customized VR boxes and develop Android apps in Unity, but I cannot figure out how to change the distortion of the image on the screen to match the lenses.
[Image: image distorted for VR]
Source
All sources I can find about this issue are created about 5 years ago.
This mentions that vertex-displacement lens correction can be done with the Cardboard SDK for Unity.
The SDK contains a CG include file titled CardboardDistortion.cginc.
This file contains a method which converts a world-space vertex into inverse-lens-distorted screen space ('lens space'), using the Brown–Conrady model for radial distortion correction.
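To make the correction concrete, here is a minimal C# sketch of the radial (Brown–Conrady) term that such a method applies per point; the coefficients k1 and k2 are placeholders that would come from the lens profile.

using UnityEngine;

public static class LensDistortion
{
    // Radial part of the Brown–Conrady model: a point p in normalized
    // lens space (origin at the lens centre) is scaled by 1 + k1*r^2 + k2*r^4.
    // k1 and k2 are the radial distortion coefficients of the lens profile.
    public static Vector2 Distort(Vector2 p, float k1, float k2)
    {
        float r2 = Vector2.Dot(p, p);              // squared distance from the centre
        float scale = 1f + k1 * r2 + k2 * r2 * r2;
        return p * scale;
    }
}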
Is there a simpler way to do VR Distortion Correction in Unity?
Can Unity distort the scene by changing some coefficients?
Are there any other alternative programs to solve that?
And also, are there any alternatives to the Google Cardboard SDK for developing VR apps on Android phones?
Some video player apps enable users to change settings like this in order to set the perfect angle and distortion coefficients (e.g. DeoVR). Can it be done visually in Unity?
I am trying to do an AR project using Unity. I don't have much experience in Unity and I am using this project to learn and understand it a little better. In my project the device's camera is always connected and its feed is projected on my device.
I would like to understand if there is any way (or any property) of making my real-time image transparent, i.e. changing the transparency of what I see / what is captured.
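Something along these lines is what I have in mind; it is only a rough sketch assuming the feed is shown on a UI RawImage fed by a WebCamTexture (if an AR SDK draws the camera background instead, the same idea would presumably apply to that background material's color):

using UnityEngine;
using UnityEngine.UI;

public class TransparentCameraFeed : MonoBehaviour
{
    public RawImage feedImage;        // UI element that displays the camera feed

    [Range(0f, 1f)]
    public float feedAlpha = 0.5f;    // 0 = invisible, 1 = fully opaque

    private WebCamTexture camTexture;

    void Start()
    {
        // Show the device camera on the RawImage.
        camTexture = new WebCamTexture();
        feedImage.texture = camTexture;
        camTexture.Play();
    }

    void Update()
    {
        // Fading the RawImage makes the real-world feed look transparent,
        // while objects rendered by other cameras keep their own opacity.
        Color c = feedImage.color;
        c.a = feedAlpha;
        feedImage.color = c;
    }
}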
Thank you in advance
I have a third-party package for mobile VR. It consists of two cameras that give the feel of the VR look. I have designed a user interface which I want to show through both VR cameras, but the problem is that both cameras show a single UI. My UI is not duplicated for each camera; instead only a single UI is shown, as depicted in the image. How can I duplicate my UI so that it shows through both VR cams?
I was able to get my answer from here; it is a slightly outdated post but still valid for Unity 5.2:
Create two different canvases in your scene.
Select the Render Mode for each canvas as Screen Space - Camera.
Then assign the camera for each of those canvases as required.
Then that particular canvas will be drawn using the camera assigned to it.
This renders the UI for both cameras, as needed in my mobile VR project.
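For completeness, the same setup can also be done from a script; this is only a sketch, and the camera/canvas field names are illustrative:

using UnityEngine;

public class DualEyeCanvasSetup : MonoBehaviour
{
    public Camera leftEyeCamera;
    public Camera rightEyeCamera;
    public Canvas leftEyeCanvas;   // duplicate of the UI for the left eye
    public Canvas rightEyeCanvas;  // duplicate of the UI for the right eye

    void Awake()
    {
        // Screen Space - Camera draws each canvas through the assigned camera,
        // so each eye camera gets its own copy of the UI.
        leftEyeCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        leftEyeCanvas.worldCamera = leftEyeCamera;

        rightEyeCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        rightEyeCanvas.worldCamera = rightEyeCamera;
    }
}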
I followed this guide (How-to-build-an-ar-android-app-with-vuforia-and-unity) but added my own mobile picture to the database (as a marker) so that I could see the animated object on my mobile. I am able to run my app, but the camera is not rendering the real world in the scene; only the animated object is showing.
What am I doing wrong?
Note that Vuforia doesn't recognize my webcam profile and applies the default profile to it (could this be the cause?).
I am working on a project which requires me to render a virtual character onto the Kinect video feed in which the player appears.
I am attempting to use Unity3D to accomplish this. I have looked at Zigfu, but I don't think it directly helps. I still want to be able to send data from my C# WPF program to the game engine (I am forking my project off from Kinect Fusion Explorer). Ideally, Unity would render the character and its movement, while my WPF program would send information to Unity about the landscape and run the Kinect feed.
Has anyone attempted this or have any idea how this could be achieved?
If this is not possible with Unity, are there other game dev libraries I could use to render a character onto the Kinect feed?
Thanks
If you want to send data over the network (sockets) you will face problems with the size of the frames, so my opinion is to use WCF. I'm not sure if it will work for you, but this is how I managed it in my project (sending position and orientation).
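As a rough sketch of what such a WCF contract can look like (the type and member names here are illustrative, not the actual code from my project):

using System.Runtime.Serialization;
using System.ServiceModel;

// Data sent from the WPF/Kinect side to Unity each frame.
[DataContract]
public class PoseUpdate
{
    [DataMember] public float PosX { get; set; }
    [DataMember] public float PosY { get; set; }
    [DataMember] public float PosZ { get; set; }

    // Orientation as a quaternion.
    [DataMember] public float RotX { get; set; }
    [DataMember] public float RotY { get; set; }
    [DataMember] public float RotZ { get; set; }
    [DataMember] public float RotW { get; set; }
}

[ServiceContract]
public interface IPoseService
{
    // One-way call so the Kinect loop does not block waiting for Unity.
    [OperationContract(IsOneWay = true)]
    void SendPose(PoseUpdate pose);
}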