Transparency of the image captured by the camera in Unity - C#

I am trying to build an AR project using Unity. I don't have much experience with Unity, and I am using this project to learn and understand it a little better. In my project the device's camera is always connected, and its feed is displayed on my device.
I would like to know whether there is any way (or any property) to make my real-time image transparent, i.e. to change the transparency of what I see / what is captured.
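For example, if the feed were displayed on a UI RawImage, I imagine something like the following sketch, which plays a WebCamTexture and fades it through the image's color alpha (the class and field names are just my illustration):

using UnityEngine;
using UnityEngine.UI;

// Drives the device camera into a UI RawImage and fades the live feed by
// lowering the image's color alpha (0 = invisible, 1 = fully opaque).
public class TransparentCameraFeed : MonoBehaviour
{
    [Range(0f, 1f)] public float alpha = 0.5f;
    public RawImage target; // the UI element that displays the feed

    private WebCamTexture camTexture;

    void Start()
    {
        camTexture = new WebCamTexture();
        target.texture = camTexture;
        camTexture.Play();
    }

    void Update()
    {
        // RawImage tints its texture with this color, so the alpha channel
        // controls how transparent the captured image appears.
        Color c = target.color;
        c.a = alpha;
        target.color = c;
    }
}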
Thank you in advance

Related

UWP Video SlowMotion and Reverse Effect

I am developing a UWP desktop application in which I would like to capture video with my webcam and apply some video effects to it.
I am using MediaComposition, MediaClip, and MediaOverlay to combine multiple videos and intro PNGs into a composition and put some overlays on it.
Now I am trying to implement a slow-motion and reverse (boomerang/ping-pong) video effect in my composition. I expected that there would already be some IBasicVideoEffect implementations available for this, but I have been searching for three days and have not been able to find anything similar.
There is also FFmpegInteropX, which I have tried, but I still could not achieve my goal, and I would like to avoid using FFmpeg in my project anyway.
Does anyone have any ideas on how to implement a slow-motion and reverse video effect with UWP?
Thank you in advance.
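Edit: for context, a minimal sketch of the baseline composition described above (file handling simplified). As far as I can tell, MediaComposition has no built-in playback-rate or reverse effect, so the re-timing step is exactly what is missing:

using System.Threading.Tasks;
using Windows.Media.Editing;
using Windows.Storage;

// Baseline: load one clip into a composition and render it back out.
async Task RenderAsync(StorageFile input, StorageFile output)
{
    MediaClip clip = await MediaClip.CreateFromFileAsync(input);
    var composition = new MediaComposition();
    composition.Clips.Add(clip);

    // The slow-motion / boomerang segment would have to be produced here,
    // e.g. by extracting and re-timing frames manually.
    await composition.RenderToFileAsync(output);
}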

Unity Fog looks different on Android

The fog in my 3D game looks good in the Unity editor (desktop), but on Android it looks different. Can anyone explain why?
PC Unity: [screenshot]
Android: [screenshot]
There are a few settings you should check first.
Project Settings/Quality: make sure your editor quality level is the same as in your Android build.
Project Settings/Player/Other Settings: check Color Space and Graphics APIs.
According to your Android screenshot, you also have a color banding issue, and the two displays have different gamma values as well.
Oversized scaled cubes and low-vertex objects also often cause fog issues on mobile, since fog there is frequently evaluated per vertex and interpolates poorly across large faces with few vertices.
The solution would be to cut the cube into several smaller cubes, or to add more vertices to it.
Here is the thread I found this reference from
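To rule out settings drift between the editor and the device, the fog state can also be forced from a script at startup; a minimal sketch using Unity's RenderSettings (the mode and values below are placeholders, not taken from the screenshots):

using UnityEngine;

// Forces an explicit fog configuration so the editor and the Android build
// start from identical render settings.
public class ForceFogSettings : MonoBehaviour
{
    void Awake()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.ExponentialSquared;
        RenderSettings.fogDensity = 0.02f;                   // placeholder density
        RenderSettings.fogColor = new Color(0.5f, 0.5f, 0.5f);
    }
}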

Frame.CameraImage.Texture has no data in ARCore

I am working on a project using ARCore.
I need the real-world image that is visible to the ARCore camera; previously I used the approach of hiding the UI and capturing the screen.
But that was too slow, so I looked for an alternative and found Frame.CameraImage.Texture in the ARCore API.
It works normally in the Unity editor environment.
But when I build to my phone and check, the texture has no data.
Texture2D snap = (Texture2D)Frame.CameraImage.Texture;
Perhaps the texture is null.
What is the reason?
If this doesn't work, is there another way to get only the real-world image on a mobile phone?
The real-world image is required for image segmentation, so it must not include any of the augmented objects.
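One hedged note: in the Google ARCore SDK for Unity, the on-device camera image lives in a GPU-side texture, which would explain why the Texture2D cast has no data on the device even though it works in the editor. The SDK's documented CPU path is Frame.CameraImage.AcquireCameraImageBytes(); a minimal sketch (YUV-to-RGB conversion left out):

using GoogleARCore;
using UnityEngine;

// Reads the raw camera frame on the CPU; it contains only the real-world
// image, with no augmented objects composited in.
public class CameraImageGrabber : MonoBehaviour
{
    void Update()
    {
        using (CameraImageBytes image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable)
                return; // no frame available this tick

            // image.Y, image.U, image.V point to the YUV-420 planes; copy
            // and convert them to RGB before running image segmentation.
            Debug.Log("Camera frame: " + image.Width + "x" + image.Height);
        }
    }
}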

Using Unity3D to render a character onto a Kinect WPF application

I am working on a project which requires me to render a virtual character onto the Kinect video feed in which the player appears.
I am attempting to use Unity3D to accomplish this. I have looked at Zigfu, but I don't think it directly helps: I still want to be able to send data from my C# WPF program to the game engine (I am forking my project off from Kinect Fusion Explorer). Ideally, Unity would render the character and its movement, and my WPF program would send Unity information about the landscape and run the Kinect feed.
Has anyone attempted this, or does anyone have an idea of how it could be achieved?
If this is not possible with Unity, are there other game dev libraries I could use to render a character onto the Kinect feed?
Thanks
If you want to send data over the network (sockets), you will face problems with the size of the frames, so my suggestion is to use WCF. I'm not sure whether it works for you, but this is how I managed it in my project (sending position and orientation):
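The original snippet is not included in the post; as a rough reconstruction, a one-way WCF contract for this could look like the following sketch (the service and member names are illustrative):

using System.ServiceModel;

// One-way contract: the WPF/Kinect host pushes joint poses to the renderer.
[ServiceContract]
public interface ISkeletonService
{
    [OperationContract(IsOneWay = true)]
    void UpdateJoint(string jointName,
                     float px, float py, float pz,              // position
                     float qx, float qy, float qz, float qw);   // orientation (quaternion)
}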

Is there a way to make a 3D game in WinForms or WPF in C#?

I'm thinking about making a 3D point-and-click game. Is it possible to make one in WinForms or WPF? I don't need any physics or anything; all I need is for the application to render 3D objects. I know I can use XNA, but if I do, I will have to relearn almost everything again. My third approach would be to build the scenes in a 3D game engine, take screenshots of them, and load them as images. Any suggestions would be appreciated.
There's a big difference between a 3D game and just letting players interact with a rendered image.
Your approach of loading a pre-rendered image is possible in both WinForms and WPF. You would just need to capture click events on the image and check the click location against your list of active areas, then handle whatever needs to be done, i.e. move to the next area, activate an item, etc.
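A minimal WinForms sketch of that hotspot idea (the image file, regions, and actions are all illustrative):

using System;
using System.Drawing;
using System.Windows.Forms;

// Displays a pre-rendered scene and hit-tests clicks against hotspot
// rectangles mapped to game actions.
public class SceneForm : Form
{
    readonly PictureBox view = new PictureBox { Dock = DockStyle.Fill };

    readonly (Rectangle area, string action)[] hotspots =
    {
        (new Rectangle(40, 60, 120, 80), "open-door"),
        (new Rectangle(300, 200, 60, 60), "pick-up-key"),
    };

    public SceneForm()
    {
        view.Image = Image.FromFile("scene01.png"); // pre-rendered frame
        view.MouseClick += OnSceneClick;
        Controls.Add(view);
    }

    void OnSceneClick(object sender, MouseEventArgs e)
    {
        foreach (var (area, action) in hotspots)
            if (area.Contains(e.Location))
                Console.WriteLine("Trigger: " + action); // e.g. load the next scene
    }
}

WPF would work the same way, with an Image element and a MouseDown handler instead.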
Edit from comment:
It's not so much about which is friendlier; you can host an XNA viewport in WinForms/WPF. It's more about how you want your game to work. If you never have moving 3D scenes, XNA is overkill, and images will work just fine.
If you want dynamic scenes, you'll need to render them on the fly, and then XNA makes more sense. It is a lot more work, though, compared to just displaying images.
If you just want to show pre-rendered 3D images in your game, why not create them using a real 3D graphics tool, such as 3D Studio Max or Maya (or a free one, such as Blender)? It sounds like there's no need to actually render the 3D scenes in a game engine at all.
