Creating a visualizer in Unity3D - C#

I have been trying to find some things online about how to make your own visualizer that responds to sound, but so far I could only find tutorials for a Mac.
I also found some premade visualizers, but I'd prefer not to spend $50 on an asset for it.
I was wondering if somebody could tell me how to code a particle system so it responds to sound, such as certain frequencies.

I've found a nice tutorial here: http://www.41post.com/4776/programming/unity-making-a-simple-audio-visualization.
It shows how to read different frequencies from the sound and react accordingly; you can use those values to drive the particle system's variables.
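In case the link ever goes down, here is a minimal sketch of the same idea, assuming a GameObject that has both an AudioSource and a ParticleSystem attached; the number of bins summed (16) and the gain factor (500) are arbitrary values you would tune by ear:

using UnityEngine;

[RequireComponent(typeof(AudioSource), typeof(ParticleSystem))]
public class AudioReactiveParticles : MonoBehaviour
{
    private AudioSource source;
    private ParticleSystem particles;
    private readonly float[] spectrum = new float[256]; // must be a power of two

    void Start()
    {
        source = GetComponent<AudioSource>();
        particles = GetComponent<ParticleSystem>();
    }

    void Update()
    {
        // Sample the current audio spectrum into the buffer.
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Sum the lowest 16 bins (roughly the bass range) into one value.
        float bass = 0f;
        for (int i = 0; i < 16; i++) bass += spectrum[i];

        // Scale the particle emission rate with the bass energy.
        var emission = particles.emission;
        emission.rateOverTime = bass * 500f;
    }
}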
Another thing I found is a project which analyses the spectrum of sound. You can download it here.
I also found this tutorial which generates a level based on the sound.
These links no longer work.
Try them out and let us know what you thought of them and if they helped! Good luck.

Related

How can I create an Android object recognition application that displays information about recognized objects when they are clicked, using Vuforia and Unity?

Right now I am trying to create an Android object recognition application (it will work through an image) with Vuforia and Unity.
What I will show is a 3D map, but in addition to displaying it, I want to present information about certain parts of the model (buildings, etc.), either by pressing them or by pointing at them, similar to how it would work in virtual reality.
What I want to create is something like this:
https://www.youtube.com/watch?v=y70yStPCBHA
I am a complete novice, and as much as I try to understand how to do it, I do not succeed. I have tried searching for information on different websites, but I couldn't find anything similar to the video.
I hope you can help me; I would appreciate it.
I am sorry for my bad English.
Your problem is not as complex as you might think. Rather than searching for the complete structure of every piece of functionality you need, you should break it down into smaller components.
In this instance you could just search for how to detect a touch on a game object in Unity, and you will find a lot of material about that. It's a good habit to get used to, because you can find separate guides for all the components you need and then combine everything together.
Check out this video about raycasting from a touch to a game object; it should be everything you need: https://www.youtube.com/watch?v=0sFrDJKwsdM
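As a rough sketch of the approach from the video: cast a ray from the touch position through the camera and see which collider it hits. This assumes the objects you want to tap have Colliders and the scene camera is tagged MainCamera; the info-panel logic is left as a placeholder.

using UnityEngine;

public class TouchSelector : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            // Cast a ray from the touch position into the scene.
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                // hit.collider.gameObject is the object that was tapped;
                // this is where you would show its info panel.
                Debug.Log("Tapped: " + hit.collider.gameObject.name);
            }
        }
    }
}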
Hope this helps!

Reproducing Hololens Gesture tutorial with HoloToolkit

Not sure whether this is best suited for here or the Unity or MS forums anymore, but we'll try Stack Overflow.
I've been trying to reproduce HoloLens tutorial 211 using the HoloToolkit. I'm just trying to do section 1 and reproduce the hand recognition.
In this situation, I've used all the files that are in the HoloToolkit that shared a name with those in the tutorial - except for Singleton, which seems to work differently in the two cases. For any files in the tutorial that were not in the Toolkit, I copied them over.
While the HandsManager is triggered, private void InteractionManager_SourceDetected(InteractionSourceState hand) gets called, handsDetected is set to true, and handDetectedGameObject is set to active, nothing seems to change regarding the cursor. I'm not sure what information would be useful to reproduce this beyond what I wrote (I don't think it makes sense to drop so many files here on SO), but does anyone know why this might be? I'm using the same CursorFeedback script, and I've attached the HandDetectedFeedback prefab as its HandDetected Asset, using a homemade prefab with a Billboard.cs component as the Feedback Parent.
If any more information here is useful let me know and I can provide it.
I haven't looked at the tutorials in a long time, but last time I did they were horribly out of date. Getting input from the Toolkit has changed dramatically since they were written.
You need to add the InputManager component from the Toolkit to your scene. Then create a script, add it to an object in your scene, and have it implement the ISourceState interface, with OnSourceDetected and OnSourceLost methods that trigger when a hand is detected and when it is lost.
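A rough sketch of such a script, assuming a HoloToolkit version where the interface is called ISourceStateHandler and lives in the HoloToolkit.Unity.InputModule namespace (naming and signatures have shifted between Toolkit versions, so check your copy):

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class HandDetectedIndicator : MonoBehaviour, ISourceStateHandler
{
    // Hypothetical feedback object to toggle, e.g. an instance of the
    // HandDetectedFeedback prefab.
    public GameObject feedbackObject;

    void Start()
    {
        // In many Toolkit versions input events only reach the focused object,
        // so register this object as a global listener (assumption: this API
        // exists in your version).
        InputManager.Instance.AddGlobalListener(gameObject);
    }

    // Called when an input source (e.g. a hand) is detected.
    public void OnSourceDetected(SourceStateEventData eventData)
    {
        feedbackObject.SetActive(true);
    }

    // Called when the input source is lost.
    public void OnSourceLost(SourceStateEventData eventData)
    {
        feedbackObject.SetActive(false);
    }
}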
For more details you can reference the documentation from the HoloToolkit:
https://github.com/Microsoft/HoloToolkit-Unity/blob/master/Assets/HoloToolkit/Input/README.md
or check out the more complete, up-to-date tutorial I have on my website. This part of the tutorial specifically implements hand and click recognition:
http://www.cameronvetter.com/2017/01/03/hololens-tutorial-finalize-spatial-understanding/

Screen scraper for creating a log of gathering in an RPG

I want to create a program that can log gathering in an RPG for me; the problem is that I don't know where to start. I know C++ and some C#, but I haven't got a clue how to scan the screen.
What I've looked at:
Screen scraping (most info is on scraping HTML pages)
OCR (most info is on OCR of an image file, not an active window)
Spy++ (I haven't got a clue)
I can do it in any language, but I'd prefer C++ or C#, since those are what I'm most experienced in.
What is the best way to do it? Has anyone got some helpful links/tips?
Examples of things I'd like to know (amongst others):
Would it be wise to take a screenshot about once every second and then analyze that image?
Should I learn and use the Windows API for this?
Before you bash me:
Yes, I know that this will be a big project, not something I expect to complete easily.
Yes, I understand that I will have to learn more about programming to do this (that's part of the goal).
Yes, I understand that it might be more than I can handle at my current skill level, but I'd like to figure that out by trying :)
Please help me; I really don't have a clue where to start.
If you are serious about creating this program, I would suggest you forget about scanning the screen; image analysis gets very complicated very quickly.
Instead, look into memory reading: the information shown in the game can also be read as a straight-up value from the game's working memory (a rough sketch follows the list below).
There are probably resources out there for your game already; Google it:
World of Warcraft
Diablo 3
etc
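To make the idea concrete, here is a rough sketch of the Windows approach in C#, using P/Invoke to read a 4-byte value out of another process. The process name and the address are placeholders; the real work is finding the address that holds the value you care about (tools like Cheat Engine are commonly used for that). Also be aware that reading a game's memory may violate its terms of service.

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class MemoryReader
{
    const int PROCESS_VM_READ = 0x0010;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr OpenProcess(int dwDesiredAccess, bool bInheritHandle, int dwProcessId);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress,
        byte[] lpBuffer, int dwSize, out IntPtr lpNumberOfBytesRead);

    static void Main()
    {
        // Hypothetical process name; the game must already be running.
        Process game = Process.GetProcessesByName("MyGame")[0];
        IntPtr handle = OpenProcess(PROCESS_VM_READ, false, game.Id);

        // Placeholder address; finding the real one is the hard part.
        IntPtr address = new IntPtr(0x00A3C5F0);
        byte[] buffer = new byte[4];
        IntPtr bytesRead;
        if (ReadProcessMemory(handle, address, buffer, buffer.Length, out bytesRead))
            Console.WriteLine("Value: " + BitConverter.ToInt32(buffer, 0));
    }
}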

Suggestions for a video montage application (extracting and playing randomized short clips from a playlist)?

I'm an electronics engineer used to coding in embedded C and assembly, but I decided to start learning higher-level stuff like C#, .NET, etc., so I can start making software as a hobby. I have a great idea for one of my first projects, but after searching several forums for days on end, I'm left not really knowing what would be the easiest path forward.
The functionality that I'm looking to create is pretty similar to the idea of a photo slideshow, but applied to videos instead. The program would open a playlist or a folder full of videos and then play the videos in a random order, starting from a random starting position, and with a fixed duration (let's say 10 seconds as an example). You would end up being able to watch a sort of "video montage" that consisted of small clips from random parts of the videos in the playlist, shown in a random order, ad infinitum until the program is closed.
There are a number of ways I could tackle the problem:
Develop a standalone video player with the fixed functionality of showing "video slideshows." DirectX has the Microsoft.DirectX.AudioVideoPlayback API, which could be a good starting point. I found an example here: http://www.dreamincode.net/forums/topic/111181-adding-video-to-an-application/
Modify an open source project to add the desired functionality. I've seen a few cool projects that could get me started, like this simple C# Movie Player: http://www.codeproject.com/Articles/18552/C-Movie-Player
Use a scripting interface to implement this functionality on an existing media player, like VLC or Winamp. You could also control VLC via C#, like the example here: Controlling VLC via c#
I realize that the obvious answer for most people would be to "use whatever you're most comfortable with," but since I'm a pure beginner, I don't really have any allegiances to a particular language or development environment. So, I was just curious if anybody had an idea of what might be the least painful option for a beginner.
I also apologize that this is not a very specific programming question. I'm sort of just testing the waters to get my footing. Hopefully, once I get started on the project, I'll be able to come back and post more intelligent and relevant questions!
While your background would lend you toward C#, I recommend investigating something like this and using WPF for the media player. You can then control the media player using a background worker in order to stop the video or queue up the next one. Some other .NET concepts that will be of use to you are FileInfo and DirectoryInfo objects, to provide you with the necessary information about the files. I'm not sure if you've had experience with generic data structures in .NET, but the System.Collections.Generic namespace would be a good place to start to get a feel for data structure you want to keep your playlist in. WPF will also be able to help you with transitions between video clips.
Admittedly WPF is easier with an understanding of the MVVM or MVC design patterns, but I think you'll be able to get something working without having to delve too far into that right up front.
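As a minimal sketch of that suggestion, not a definitive implementation: the snippet below assumes a WPF window whose XAML contains a MediaElement named Player with LoadedBehavior="Manual", a hard-coded folder of .mp4 files, and uses a DispatcherTimer (rather than a BackgroundWorker) to switch to a new random clip every 10 seconds.

using System;
using System.IO;
using System.Windows;
using System.Windows.Threading;

public partial class MainWindow : Window
{
    private readonly Random rng = new Random();
    private readonly FileInfo[] videos;
    private readonly DispatcherTimer clipTimer;

    public MainWindow()
    {
        InitializeComponent();

        // Gather the playlist (hypothetical folder and extension).
        videos = new DirectoryInfo(@"C:\Videos").GetFiles("*.mp4");

        // Switch to a new random clip every 10 seconds.
        clipTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(10) };
        clipTimer.Tick += (s, e) => PlayRandomClip();

        PlayRandomClip();
        clipTimer.Start();
    }

    private void PlayRandomClip()
    {
        FileInfo file = videos[rng.Next(videos.Length)];
        Player.MediaOpened += SeekToRandomPosition;
        Player.Source = new Uri(file.FullName);
        Player.Play();
    }

    private void SeekToRandomPosition(object sender, RoutedEventArgs e)
    {
        Player.MediaOpened -= SeekToRandomPosition;

        // Jump to a random start, leaving at least 10 seconds before the end.
        if (Player.NaturalDuration.HasTimeSpan)
        {
            double total = Player.NaturalDuration.TimeSpan.TotalSeconds;
            if (total > 10)
                Player.Position = TimeSpan.FromSeconds(rng.Next((int)(total - 10)));
        }
    }
}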

Implement gestures on Microsoft Surface

Is there any tutorial or example available that shows how to implement a custom gesture on Microsoft Surface? After hours of googling, I couldn't find any.
Unfortunately, the SDK does not even provide a framework to recognize gestures.
I'm particularly interested in gestures like a circle, a ?, or an x.
Edit:
Is there any news here? Or any good hints on how to recognize an X over a UI element?
Unfortunately the Surface doesn't have any gesture framework. You just have to dive in and do it yourself by tracking Contacts as they move, appear, and disappear.
Generally it is not a good idea to create complex gestures like that in your application, because they are really hard for people to discover. That said, you could track the touches to create a polyline and then apply an algorithm like the one outlined at http://faculty.washington.edu/wobbrock/pubs/uist-07.1.pdf to distinguish the path within a set of gestures.
Another option, though maybe not what you want, is to use the SurfaceInkCanvas and recognize the text entered there.
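The linked paper describes the $1 unistroke recognizer; as a rough illustration of its core (with the rotation and scale normalization steps from the paper omitted), you could resample each recorded polyline to a fixed number of points and score it against stored templates by average point-to-point distance:

using System;
using System.Collections.Generic;
using System.Windows; // Point (WindowsBase)

static class StrokeMatcher
{
    // Resample a stroke to n equidistant points ($1 recognizer, step 1).
    public static List<Point> Resample(IList<Point> stroke, int n)
    {
        var pts = new List<Point>(stroke);
        double interval = PathLength(pts) / (n - 1);
        double accumulated = 0;
        var result = new List<Point> { pts[0] };
        for (int i = 1; i < pts.Count; i++)
        {
            double d = Distance(pts[i - 1], pts[i]);
            if (accumulated + d >= interval)
            {
                double t = (interval - accumulated) / d;
                var q = new Point(pts[i - 1].X + t * (pts[i].X - pts[i - 1].X),
                                  pts[i - 1].Y + t * (pts[i].Y - pts[i - 1].Y));
                result.Add(q);
                pts.Insert(i, q); // continue measuring from the interpolated point
                accumulated = 0;
            }
            else
            {
                accumulated += d;
            }
        }
        while (result.Count < n) result.Add(pts[pts.Count - 1]); // rounding guard
        return result;
    }

    // Average distance between corresponding points; lower means a closer match.
    public static double Score(List<Point> candidate, List<Point> template)
    {
        double sum = 0;
        for (int i = 0; i < candidate.Count; i++)
            sum += Distance(candidate[i], template[i]);
        return sum / candidate.Count;
    }

    static double Distance(Point a, Point b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }

    static double PathLength(List<Point> pts)
    {
        double length = 0;
        for (int i = 1; i < pts.Count; i++) length += Distance(pts[i - 1], pts[i]);
        return length;
    }
}

You would resample both the recorded stroke and each template to the same n (the paper uses 64), pick the template with the lowest score, and reject the gesture if even the best score is above some threshold.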
Thanks!
In the meantime I have implemented my own gesture recognition tool following this documentation.
If somebody is interested, I could share the source.
