Implementing YOLO/TensorFlow object detection on HoloLens - C#

I'm developing an object detection project on HoloLens. YOLO/TensorFlow have been the best choices for object detection, but they don't have solid support on UWP, which is a bummer. Even Unity ML-Agents cannot run on UWP devices (correct me if I'm wrong). So, I'm wondering: what are the possible ways to run YOLO/TensorFlow on HoloLens using Unity3D?

Meanwhile, Unity has its own inference engine called Barracuda. You can use it to execute (some) models in ONNX format.
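
For example, here is a minimal sketch of running an ONNX-converted YOLO model with Barracuda (assuming the Barracuda package is installed and the model has already been converted to ONNX; class and field names here are illustrative, and the output decoding/NMS step is not shown):

    using Unity.Barracuda;
    using UnityEngine;

    public class YoloRunner : MonoBehaviour
    {
        public NNModel modelAsset;   // assign the imported .onnx asset in the Inspector

        private IWorker worker;

        void Start()
        {
            var model = ModelLoader.Load(modelAsset);
            worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
        }

        public Tensor Detect(Texture2D frame)
        {
            // Texture pixels are read as floats in [0, 1]; YOLO variants may
            // expect different normalization, so adjust preprocessing as needed.
            using (var input = new Tensor(frame, channels: 3))
            {
                worker.Execute(input);
                // PeekOutput returns the raw output tensor (owned by the worker);
                // decoding boxes/classes and non-max suppression is still needed.
                return worker.PeekOutput();
            }
        }

        void OnDestroy() => worker?.Dispose();
    }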

Related

How to use LibVLCSharp to stream in game footage from Unity?

I would like to implement an application composed of a server application (built with Unity for Windows), where the actual simulation runs, and a client application (built with Unity for Android) to which the rendered game view is streamed; in other words, I want to do remote rendering in Unity, with both ends implemented in Unity.
I have found, among others, LibVLCSharp, which looks quite promising.
My questions are:
- Is it actually possible to achieve the indicated functionality with LibVLCSharp and Unity?
- Does someone know sources where a potentially similar project has been documented?
The Unity implementation is currently being actively worked on. It hasn't been open-sourced yet and doesn't work all that well for now.
Expect announcements on LibVLCSharp if we manage to make it work!
First question: no.
Second question: there can't be any.
"Unity is not officially supported yet, this was a closed beta test. Unity will be supported in LibVLCSharp 4.0" source

Is it possible to port a WPF 3D app to UWP / UAP

I wrote a WPF app using all the 3D features of WPF; it contains several views displaying 3D data. IMHO, WPF is not the platform for further development of the app, so I am searching for ways to port it to a Universal App/Windows Store App/UWP.
The way Microsoft promotes is to use Unity, meaning a complete redevelopment and complicated workflows, because it should be a 2D app with some 3D views.
The other way I was looking at was Helix Toolkit, because it offers not just the WPF way of displaying 3D but also the DirectX way via SharpDX. Unfortunately, its UWP support is still immature and not currently usable to the extent I need (e.g., displaying custom geometries).
Is anybody out there who has hit the same problem and found a workaround?
Thanks!
You may also like to consider MonoGame. Like Helix Toolkit, it's based on SharpDX (when it's on Windows, anyway). It's not quite as "componentised" as Helix Toolkit, but it's possible to make your app UWP-first and MonoGame-second.
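
To illustrate, here is a minimal sketch of rendering custom geometry with MonoGame's BasicEffect (class names are illustrative; the same Game code runs in a MonoGame UWP project):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class CustomGeometryGame : Game
    {
        private readonly GraphicsDeviceManager graphics;
        private BasicEffect effect;
        private VertexPositionColor[] triangle;

        public CustomGeometryGame()
        {
            graphics = new GraphicsDeviceManager(this);
        }

        protected override void LoadContent()
        {
            effect = new BasicEffect(GraphicsDevice) { VertexColorEnabled = true };
            // Any custom geometry is just a vertex array here.
            triangle = new[]
            {
                new VertexPositionColor(new Vector3( 0f,  1f, 0f), Color.Red),
                new VertexPositionColor(new Vector3( 1f, -1f, 0f), Color.Green),
                new VertexPositionColor(new Vector3(-1f, -1f, 0f), Color.Blue),
            };
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            effect.View = Matrix.CreateLookAt(new Vector3(0, 0, 3), Vector3.Zero, Vector3.Up);
            effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 100f);

            foreach (var pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, triangle, 0, 1);
            }
            base.Draw(gameTime);
        }
    }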

How to get WiFi-Signal Strength with Unity engine and C# on Android?

I need the WiFi signal strength of nearby access points to triangulate the position of the Android device's user and show them the part of the building they are currently in. It's for a location-based game for new students at my university. There are easy ways to do this with Java, or with Xamarin and C#, but I couldn't find a way to solve the problem using the Unity engine and C#. Is there a way to get it?
I have a complete Android solution in Java from a fellow student, and I tried to use AndroidJavaObject to call it from my program, but without success; it seems to be too complex for that. Are there other ways to use that Java Android code in Unity?
Thanks in advance for the help.
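
For reference, a minimal sketch of the AndroidJavaObject route, calling the stock Android WifiManager directly from Unity rather than wrapping custom Java code (assuming ACCESS_WIFI_STATE and, on newer Android versions, ACCESS_FINE_LOCATION permissions are declared in the manifest):

    using System.Collections.Generic;
    using UnityEngine;

    public static class WifiScanner
    {
        // Returns BSSID -> RSSI (dBm) for the access points in the last scan.
        public static Dictionary<string, int> GetSignalLevels()
        {
            var levels = new Dictionary<string, int>();
    #if UNITY_ANDROID && !UNITY_EDITOR
            using (var unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
            using (var activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity"))
            using (var wifi = activity.Call<AndroidJavaObject>("getSystemService", "wifi"))
            using (var scans = wifi.Call<AndroidJavaObject>("getScanResults"))
            {
                int count = scans.Call<int>("size");
                for (int i = 0; i < count; i++)
                {
                    using (var scan = scans.Call<AndroidJavaObject>("get", i))
                    {
                        // ScanResult.BSSID is the AP's MAC; ScanResult.level is dBm.
                        levels[scan.Get<string>("BSSID")] = scan.Get<int>("level");
                    }
                }
            }
    #endif
            return levels;
        }
    }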

Integrating WPF, Unity3D (and Kinect)

I created a WPF project in Visual Studio. The XAML markup is managed by C# code-behind. What I want to do is create a component on the user interface which will show a 3D scene. I would like this 3D scene to be managed by Unity, because I need to take advantage of Unity's physics engine. The user must be able to interact with this 3D scene via gestures recognized by Kinect (for example, tossing a ball).
Is there any way I can connect WPF, Unity3D, and Kinect so that the user can manipulate the 3D scene in this manner? If so, could you provide me with some examples/tutorials? If not, what is the best approach for letting the user manipulate a 3D scene with Kinect gestures?
For using Unity3D with WPF, I would look at "How get the Unity3D View In wpf winform". The tricky part will be the Kinect interaction.
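
To illustrate the hosting side, a rough sketch of embedding a Unity standalone Windows player in a WPF window via the player's -parentHWND command-line argument (the build path is hypothetical):

    using System;
    using System.Diagnostics;
    using System.Windows;
    using System.Windows.Interop;

    public partial class HostWindow : Window
    {
        // Hypothetical path to a Unity standalone Windows build.
        private const string UnityPlayerPath = @"C:\Builds\PhysicsScene.exe";

        private Process unityProcess;

        protected override void OnSourceInitialized(EventArgs e)
        {
            base.OnSourceInitialized(e);
            var hwnd = new WindowInteropHelper(this).Handle;
            // -parentHWND makes the Unity player render inside this window.
            unityProcess = Process.Start(UnityPlayerPath,
                $"-parentHWND {hwnd.ToInt64()} delayed");
        }

        protected override void OnClosed(EventArgs e)
        {
            if (unityProcess != null && !unityProcess.HasExited)
                unityProcess.Kill();
            base.OnClosed(e);
        }
    }
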
You can interface with Kinect through WPF and use ZigFu for Unity. The downside to using WPF for the Kinect interaction is that you won't be able to send data to Unity unless you use the new Kinect Client Server System and send all of your information to a web server, then retrieve it in Unity. That is a very bad idea; it will probably not be fast enough and will experience serious lag.
The second option, using ZigFu in Unity for the Kinect support, is worth considering. The issue with it is that if you want to use the Kinect in WPF as well, you would have to disconnect from Unity. This is the only way I can see to implement it.
Overall, this is achievable but will be very difficult. I suggest you use ZigFu, but there are drawbacks to both methods.
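
For the WPF side of the Kinect interaction, a minimal sketch of reading skeleton data with the Kinect for Windows SDK v1.x (the hand-off to Unity, e.g. over a local socket, is an assumption and not shown):

    using System.Linq;
    using Microsoft.Kinect;

    public class KinectGestureSource
    {
        private KinectSensor sensor;

        public void Start()
        {
            sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            sensor.SkeletonStream.Enable();
            sensor.SkeletonFrameReady += OnSkeletonFrameReady;
            sensor.Start();
        }

        private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (var frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);
                // Recognize gestures here and forward joint positions to the
                // Unity process (e.g. over a local UDP socket -- not shown).
            }
        }
    }
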
This is quite a big and generic question, but I suggest you use a web deployment of your Unity project; under your project's References/COM there is a UnityWebPlayerAXLib control that will help you a lot.

iOS project with (just a little) Unity 3D?

The standard way to develop with Unity 3D is to work in the Unity 3D IDE and have it generate Xcode projects when necessary. If we need UIKit support or other Cocoa Touch features, we need to write plugins or wrappers for bridging.
Is it possible to build the overall structure with Cocoa Touch and Objective-C and only leverage the 3D capabilities of Unity 3D on certain occasions?
If Unity automatically wraps your entire 3D project inside an Obj-C project, then no, it's not possible.
Why not just accomplish what you're trying to do within the game? Unity supports a lot of .NET libraries; surely you could do what you need within Unity itself?
I agree: just use Unity, or don't use Unity at all; it's very complex to mix SDKs together.
