First, let me clarify something: I don't know Python.
I am currently working on my final-year project and needed a good object detection technique. After trying many methods (color thresholding, Haar classifiers), I stumbled upon TensorFlow, found a good tutorial, followed it, and got the detector I want.
The problem:
I need and want to work in Unity, and Unity only supports C#.
I found an asset called TensorflowSharp but didn't know how to use it. The thing is, I don't want to train in Unity; I trained in Python, and I just need to use the "frozen inference graph" (as it is called in the tutorial) in Unity to detect the object I want.
Please, I have to present in a month, any help is appreciated.
You can take a look at my TFClassify-Unity example. It uses a trained model from the official TensorFlow repo, but you can try your own by renaming its extension from '.pb' to '.bytes' and replacing the default one with it.
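To give an idea of what that looks like in code, here is a minimal sketch (not a drop-in script from the example project): it assumes the TensorFlowSharp plugin is installed, the frozen inference graph has been renamed to detector.bytes and placed in a Resources folder, and the tensor names are the usual ones exported by the TensorFlow Object Detection API (image_tensor, detection_boxes, detection_scores, detection_classes); check yours if they differ.

```csharp
using TensorFlow;
using UnityEngine;

public class FrozenGraphDetector : MonoBehaviour
{
    private TFGraph graph;
    private TFSession session;

    void Start()
    {
        // Load the frozen inference graph that was renamed from .pb to .bytes.
        TextAsset modelAsset = Resources.Load<TextAsset>("detector");
        graph = new TFGraph();
        graph.Import(modelAsset.bytes);
        session = new TFSession(graph);
    }

    // 'imageBytes' must be the RGB pixels of one image as uint8, laid out as [1, height, width, 3].
    public void Detect(byte[] imageBytes, int width, int height)
    {
        var imageTensor = TFTensor.FromBuffer(
            new TFShape(1, height, width, 3), imageBytes, 0, imageBytes.Length);

        var runner = session.GetRunner();
        runner.AddInput(graph["image_tensor"][0], imageTensor);
        runner.Fetch(graph["detection_boxes"][0],
                     graph["detection_scores"][0],
                     graph["detection_classes"][0]);

        TFTensor[] output = runner.Run();

        // Each output is a float array; e.g. scores[0][i] is the confidence of detection i.
        var scores = (float[][])output[1].GetValue(jagged: true);
        Debug.Log("Best detection score: " + scores[0][0]);
    }
}
```

Converting a Texture2D or webcam frame into the byte layout the model expects is the fiddly part; the TFClassify-Unity project shows one way to do it.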
I am developing a 2D platformer RPG game. The game will have many characters in it, each with different abilities (or powers). My question is: how can I structure my project so that I can add as many characters or abilities as I want in the future without making a complete mess of my code?
For example:
I have a character, let's say Iron Man, whom I want to be able to use thrusters, but another player might be using Captain America as their character, who can't use thrusters.
How do I make a system so that I can add characters and abilities, or have characters exchange any ability with another at runtime?
I've heard about using interfaces to make code cleaner, and also about using ScriptableObjects, but I haven't really worked with them.
I would like to know a concrete method of building this type of system (if there is one).
Any links to tutorials would be appreciated.
PS: My game has just one character in it. Every day I open my project hoping to add a new character, but I am always afraid I will break my code, and I tell myself that if I surf the internet a little more I will find a proper structure to start with so I don't take any risk. I just end up changing some things here and there and closing the project.
A few things to check out (mainly related to OOP), plus a small code sketch after the links:
SOLID design and Design Patterns Videos
https://unity3d.college/2017/11/24/solid-unity3d-code-architecture-open-closed-principal/
https://www.youtube.com/watch?v=FGVkio4bnPQ
https://www.youtube.com/watch?v=UoNumkMTx-U
There are lots more on his channel; those were just a few.
Design Patterns Code to study
https://github.com/Naphier/unity-design-patterns
https://github.com/QianMo/Unity-Design-Pattern
Lastly:
Unity's free learning (for the next month or so): https://learn.unity.com/
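To make the interface/ScriptableObject idea from the question concrete, here is a minimal sketch (not taken from any of the links above): each ability is a ScriptableObject asset, and a character just holds a list of them, so adding Iron Man's thrusters means creating a new asset and class rather than touching existing code. The names Ability, ThrusterAbility, and Character are purely illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Base type for all abilities; each concrete ability is its own asset.
public abstract class Ability : ScriptableObject
{
    public string abilityName;
    public abstract void Activate(GameObject user);
}

[CreateAssetMenu(menuName = "Abilities/Thruster")]
public class ThrusterAbility : Ability
{
    public float thrust = 10f;

    public override void Activate(GameObject user)
    {
        // Push the character upward; a real implementation would handle fuel, animation, etc.
        var body = user.GetComponent<Rigidbody2D>();
        if (body != null)
            body.AddForce(Vector2.up * thrust, ForceMode2D.Impulse);
    }
}

public class Character : MonoBehaviour
{
    // Assign any mix of ability assets in the Inspector; swapping entries in this
    // list at runtime is how characters can exchange abilities.
    public List<Ability> abilities = new List<Ability>();

    public void UseAbility(int index)
    {
        if (index >= 0 && index < abilities.Count)
            abilities[index].Activate(gameObject);
    }
}
```

In a real project each class would live in its own file, and you would decide whether abilities keep per-character state (spawn runtime copies) or stay stateless shared assets as sketched here.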
I am rather new to ML and started using ML.NET early this year. Perhaps I am not educated enough on it, but I am attempting to find information on implementing GPU-based binary classification using C# and LightGBM. Despite numerous searches I cannot find any documentation or examples. I would very much appreciate any assistance anyone can offer.
B., I am on the ML.NET team at Microsoft. We do not currently support GPU for LightGBM, but this is something we are considering implementing over the next few months. If you would like, please make an issue on our GitHub repository and provide more detail about your use case.
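In the meantime, and not as an official sample, here is a minimal CPU-only sketch of binary classification with the LightGBM trainer in ML.NET (the Microsoft.ML and Microsoft.ML.LightGbm packages); the InputRow/Prediction classes, column names, and file path are made up for illustration.

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

public class InputRow
{
    [LoadColumn(0)] public float Feature1;
    [LoadColumn(1)] public float Feature2;
    [LoadColumn(2)] public bool Label;
}

public class Prediction
{
    [ColumnName("PredictedLabel")] public bool PredictedLabel;
    public float Probability;
}

public static class Program
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // Load training data from a CSV file (the path is a placeholder).
        IDataView data = mlContext.Data.LoadFromTextFile<InputRow>(
            "train.csv", hasHeader: true, separatorChar: ',');

        // Combine the feature columns, then train a LightGBM binary classifier.
        var pipeline = mlContext.Transforms
            .Concatenate("Features", nameof(InputRow.Feature1), nameof(InputRow.Feature2))
            .Append(mlContext.BinaryClassification.Trainers.LightGbm(
                labelColumnName: "Label",
                featureColumnName: "Features",
                numberOfIterations: 100));

        ITransformer model = pipeline.Fit(data);

        // Score a single example.
        var engine = mlContext.Model.CreatePredictionEngine<InputRow, Prediction>(model);
        var result = engine.Predict(new InputRow { Feature1 = 0.5f, Feature2 = 1.2f });
        System.Console.WriteLine($"Predicted: {result.PredictedLabel} ({result.Probability:P1})");
    }
}
```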
I'm using Unity 2018.3 and am building an application in which I'm trying to do something very simple: turn off the pointer after some command.
I can't find any scripting documentation in the GitHub repository and am wondering if it exists somewhere. What I do find is how to configure everything through the Unity Inspector, half of which isn't working yet (e.g. the controller model), but once things are configured, I can't find anything on how to control them from a script. Am I out to lunch? Does anyone know where to look?
Thanks!
From another example, I've chosen the following path, but maybe I'm wrong... let me know.
The API documentation for MRTK is available at https://microsoft.github.io/MixedRealityToolkit-Unity/api/Microsoft.MixedReality.Toolkit.html
It looks like you are also asking a second question here; if you could please post a separate question about that specific issue, it would be great. Here is a guide on how to ask good questions on Stack Overflow, which might help give ideas for how to frame your question: https://stackoverflow.com/help/how-to-ask.
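As a starting point for the original question, here is a hedged sketch of turning pointers off from script. It assumes the PointerUtils helper (namespace Microsoft.MixedReality.Toolkit.Input), which was added around MRTK 2.1 and may not exist in older builds of the toolkit.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

public class PointerToggle : MonoBehaviour
{
    // Call this from whatever command/event should hide the pointers.
    public void DisablePointers()
    {
        // Turn off the hand rays for both hands and the gaze pointer.
        PointerUtils.SetHandRayPointerBehavior(PointerBehavior.AlwaysOff, Handedness.Any);
        PointerUtils.SetGazePointerBehavior(PointerBehavior.AlwaysOff);
    }

    public void EnablePointers()
    {
        // PointerBehavior.Default hands control back to the pointer profile settings.
        PointerUtils.SetHandRayPointerBehavior(PointerBehavior.Default, Handedness.Any);
        PointerUtils.SetGazePointerBehavior(PointerBehavior.Default);
    }
}
```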
I've just downloaded the new Unity 2018.1 beta and I'm wondering how to use all the new fanciness they introduced, like the two things mentioned in the title. Can I just put pathfinding and raycasting code into an IJob.Execute method definition and it will just work, or are there more specialized interfaces I need to implement for those two to work (like IJobParallelForTransform)?
I'm asking because the docs and a Google search turned up nothing, which is to be expected since this version was released earlier today, but maybe someone already has some knowledge.
For anyone interested in this topic, here's the discussion on unity forums: https://forum.unity.com/threads/asynchronous-raycasting-and-pathfinding.511973/#post-3349101
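For the raycasting half, here is a minimal sketch using RaycastCommand.ScheduleBatch, which ships with 2018.1 and runs the ray queries through the job system. It only illustrates the API shape and is not code from the forum thread.

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class AsyncRaycastExample : MonoBehaviour
{
    void Update()
    {
        const int count = 2;
        var commands = new NativeArray<RaycastCommand>(count, Allocator.TempJob);
        var results = new NativeArray<RaycastHit>(count, Allocator.TempJob);

        // Each command describes one ray; the whole batch is processed on worker threads.
        commands[0] = new RaycastCommand(transform.position, Vector3.forward, 100f);
        commands[1] = new RaycastCommand(transform.position, Vector3.down, 100f);

        JobHandle handle = RaycastCommand.ScheduleBatch(commands, results, minCommandsPerJob: 1);

        // Do other work here, then block only when the results are actually needed.
        handle.Complete();

        for (int i = 0; i < count; i++)
        {
            if (results[i].collider != null)
                Debug.Log("Hit: " + results[i].collider.name);
        }

        commands.Dispose();
        results.Dispose();
    }
}
```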
I'm trying to implement NLP in my project.
I need to tag words as Person, Location, Organisation, etc. If anybody knows the logic, please let me know.
Regards,
Stack
The task you want to perform is known as Named Entity Recognition (NER).
The majority of software for doing NER is written in Java, for example the Stanford NER system and the OpenNLP NER system. There are far fewer similar libraries written in C#; however, I found SharpNLP through a Google search. I have not used it personally, so I have no idea how well it works.
There is a nice web-service by Reuters: http://www.opencalais.com/.
You can access it via an API.
I thought the demo was impressive: http://viewer.opencalais.com/.
I did not pursue it further, as I want to create a German application and Calais only supports English.
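For completeness, here is a rough sketch of what calling such a web service from C# might look like. The endpoint URL and header names below are placeholders, so check the OpenCalais documentation for the real values and for how to obtain an API key.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class NerClient
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task<string> TagTextAsync(string text, string apiKey)
    {
        // Placeholder endpoint; replace with the service's documented URL.
        using (var request = new HttpRequestMessage(HttpMethod.Post,
            "https://api.example-ner-service.com/tag"))
        {
            // Most such services take the API key and output format as headers
            // (placeholder header names).
            request.Headers.Add("x-api-key", apiKey);
            request.Headers.Add("outputFormat", "application/json");
            request.Content = new StringContent(text);

            HttpResponseMessage response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();

            // The JSON response lists the detected entities (Person, Location, Organisation, ...).
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```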