TL;DR - I'm looking for a plugin or technique to pitch shift audio assets precisely.
I'm working on a Unity project to create a binaural VR music visualizer (which is why I'm trying this). I've created the basic visualizer and music player components easily enough with online tutorials, but I'd like to find an easy way to frequency-shift audio tracks. This is similar to pitch shifting, which Unity supports via the Audio Pitch Shifter effect, but that effect is expressed as a multiplier rather than a unit of measure, and it affects playback speed. I need to shift the audio while maintaining the playback rate, because the unprocessed audio plays in one ear while the shifted audio plays in the other.
The goal is to use a standard slider to set the frequency of the binaural beat and apply it across the left and right audio outputs; with the VR solution, I can then provide similar bi-visual effects at the appropriate frequency in each eye for a more intense effect.
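For reference, the underlying math is simple when working with pure tones: play a base frequency in the left ear and the base plus the beat frequency in the right. Below is a minimal sketch of that idea (function names are my own, and it's Python for brevity; a real Unity implementation would be C#, e.g. in an OnAudioFilterRead callback, and shifting a full music track rather than a sine wave requires a true frequency shifter like the plugin above):

```python
import math

SAMPLE_RATE = 44100  # samples per second (illustrative output rate)

def binaural_samples(base_hz, beat_hz, n_samples, rate=SAMPLE_RATE):
    """Generate (left, right) sample pairs: the right channel is
    frequency-shifted by beat_hz relative to the left, so the listener
    perceives a beat at beat_hz."""
    out = []
    for i in range(n_samples):
        t = i / rate
        left = math.sin(2 * math.pi * base_hz * t)
        right = math.sin(2 * math.pi * (base_hz + beat_hz) * t)
        out.append((left, right))
    return out
```

The slider in the question would drive `beat_hz` directly, which is exactly the "unit of measure" control that a pitch multiplier doesn't give you.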
I think I might have found my own solution buried in the Asset Store, though it's much more specific than I would have expected: https://www.assetstore.unity3d.com/en/#!/content/66518
Though I would like to hear anyone else's approach to advanced audio effects like this.
Related
I'm the developer on a game which uses gesture recognition with the HTC Vive roomscale VR headset, and I'm trying to improve the accuracy of our gesture recognition.
(The game, for context: http://store.steampowered.com//app/488760 . It's a game where you cast spells by drawing symbols in the air.)
Currently I'm using the $1 recognizer algorithm for 2D gesture recognition, with an orthographic camera tied to the player's horizontal rotation to flatten the gesture the player draws in space.
However, I'm sure there must be better approaches to the problem!
I have to represent the gestures in 2D in the instructions, so ideally I'd like to:
Find the optimal vector on which to flatten the gesture.
Flatten it into 2D space.
Use the best gesture recognition algorithm to recognise what gesture it is.
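For steps 1 and 2, here is a sketch of the projection math: given the player's horizontal forward vector, build a right/up basis for the plane perpendicular to it and take dot products to get 2D coordinates. This mirrors what the orthographic camera is doing; the helper names are hypothetical, it's Python rather than the C# a Unity project would use, and a data-driven alternative for step 1 would be fitting the best plane via PCA on the gesture points.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _norm(v):
    m = math.sqrt(_dot(v, v))
    return tuple(x / m for x in v)

def flatten_gesture(points, forward, world_up=(0.0, 1.0, 0.0)):
    """Project 3D gesture points onto the plane perpendicular to
    `forward` (the player's horizontal facing), yielding 2D points."""
    right = _norm(_cross(world_up, forward))   # screen-space x axis
    up = _norm(_cross(forward, right))         # screen-space y axis
    return [(_dot(p, right), _dot(p, up)) for p in points]
```

The flattened 2D stroke can then be handed to whatever 2D recognizer you choose for step 3.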
It would be really good to get close to 100% accuracy under all circumstances. Currently, for example, the game tends to get confused when players try to draw a circle in the heat of battle, and it assumes they're drawing a Z shape instead.
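For step 3, the $1 recognizer's core idea is to resample every stroke to the same number of evenly spaced points and score candidates by average point-to-point distance against stored templates; confusions like circle-vs-Z tend to arise at this stage when a hastily drawn stroke resamples to a similar point sequence. A simplified sketch of just the resampling and scoring (it omits the recognizer's rotation, scale, and translation normalization, which a real implementation needs):

```python
import math

def resample_path(points, n=32):
    """Resample a 2D stroke to n evenly spaced points (the first step
    of the $1 recognizer)."""
    pts = [tuple(p) for p in points]
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = total / (n - 1)
    out = [pts[0]]
    acc = 0.0
    i = 1
    prev = pts[0]
    while i < len(pts):
        d = math.dist(prev, pts[i])
        if acc + d >= interval and d > 0:
            # Step partway along this segment to hit the next sample point.
            t = (interval - acc) / d
            q = (prev[0] + t * (pts[i][0] - prev[0]),
                 prev[1] + t * (pts[i][1] - prev[1]))
            out.append(q)
            prev = q
            acc = 0.0
        else:
            acc += d
            prev = pts[i]
            i += 1
    if len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def avg_distance(a, b):
    """Score two equal-length resampled strokes; lower is more similar."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
```

Collecting several templates per symbol, drawn sloppily as well as carefully, usually helps more than tweaking the scoring itself.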
All suggestions welcomed. Thanks in advance.
Believe it or not, I found this post two months ago and decided to test my VR/AI skills by preparing a Unity package intended for recognising magic gestures in VR. Now, I'm back with a complete VR demo: https://ravingbots.itch.io/vr-magic-gestures-ai
The recognition system tracks a gesture vector and then projects it onto a 2D grid. You can also set up a 3D grid very easily if you want the system to work with 3D shapes, but don't forget to provide a proper training set that captures a large number of shape variations.
Of course, the package is universal, and you can use it for non-magical applications as well. The code is well documented; the online documentation rendered to a PDF runs to 1000+ pages: https://files.ravingbots.com/docs/vr-magic-gestures-ai/
The package was tested with the HTC Vive. Support for Gear VR and other VR devices is being added progressively.
It seems to me this plugin called Gesture Recognizer 3.0 could give you great insight into what steps to take:
Gesture Recognizer 3.0
Also, I found this JavaScript gesture recognition library on GitHub:
Jester
Hope it helps.
Personally I recommend AirSig
It covers more features like authentication using controllers.
The Vive version and Oculus version are free.
"It would be really good to get close to 100% accuracy under all circumstances." In my experience, its built-in gestures achieve over 90% accuracy, and the signature part over 96%. Hope that fits your requirements.
Hi, I want to decrease the playback speed of my audio tracks in C# using NAudio; i.e., I want tracks to play slower than their original speed.
Previously I was using Windows Media Player object for just this thing and NAudio for everything else, but I want to shift completely to NAudio.
NAudio does not have a built-in feature for this. When I need to change the playback rate, I create a managed wrapper around the SoundTouch DLL. I keep meaning to blog about how to do this, but for now, check out the PracticeSharp project, which also uses SoundTouch and NAudio.
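To see why a library like SoundTouch is needed at all: naive resampling changes speed but drags pitch along with it, which is exactly what time-stretching avoids. An illustrative sketch of naive linear-interpolation resampling (this is not NAudio or SoundTouch code, just Python showing the trade-off):

```python
def resample(samples, rate_factor):
    """Naive linear-interpolation resampling of a mono sample list.
    rate_factor < 1.0 slows playback (more output samples) but also
    lowers pitch, which is why a time-stretch library like SoundTouch
    is needed when pitch must be preserved."""
    if not samples:
        return []
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Interpolate between the two neighbouring input samples.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += rate_factor
    return out
```

SoundTouch's tempo control, by contrast, stretches the waveform in time while keeping the pitch intact, which is what the wrapper approach above gives you.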
I'm not fully sure what I'm looking for, so I was hoping to gain some insight. I'm brainstorming an application I want to write, but I've never dealt with this type of project. I was wondering if any of you knew of any resources, both literature and libraries/APIs, that could lead me down the path to creating an application involving audio playback, audio visualization (like displaying the waveform with a scrolling position indicator), setting playback points, maybe some effects like fade in and out, maybe even beat mapping - all in .NET.
For example, an application that displays a waveform and has a position indicator that moves with playback. It allows you to visually set multiple start and stop points, etc.
Any suggestions?
Check out http://naudio.codeplex.com/
Does the current version of the Silverlight Media Framework support streaming with multiple smooth streaming media elements? For example, I would like a player with one element for a screen capture and another for the presenter, just like the Microsoft PDC example. Is this possible with SMF?
Are there any solutions/examples available that would allow me to simultaneously stream multiple videos in one player?
Thanks for any suggestions.
There is a multi-cam demo sample in the IIS showcase, which is similar to the effect you are looking for...
IIS Showcase Demos
However, I must say I have not actually seen anyone use this yet, nor any code samples in how to achieve it.
It's worth noting, though - that in the case of Microsoft PDC, what you are watching is, in fact, a single stream. The camera and screen capture sources are simply combined into that layout pre-encoding (via either a video switcher or perhaps a software encoder like Wirecast). It requires a little more work on the "production" side, but makes your deployment a lot easier, and you don't need 2x the encoding power.
I am using XNA to develop a game which requires both sound effects and music, and I'm trying to figure out how to implement the sound engine. Microsoft provides the ability to use the Content Pipeline to load and play audio, but I've also seen people use XACT to do the same thing. My question is: what's the difference, and which would be the better approach to building a sound engine?
XACT is feature-rich but complex to use. It was originally the only way to play sound, but people wanted something simpler, so Microsoft added the Content Pipeline method.
Use the Content Pipeline if you want:
- To play a sound (2D or 3D)
- To avoid investing a lot of time learning an audio framework

Use XACT if you want:
- Categories of sounds that can be independently controlled, e.g. muting game sounds but not menu sounds
- Real-time advanced control over sound pitch and volume, for things like turrets spinning up, cars accelerating, etc.
- Multiple varieties of a single sound effect, like seven different pain sounds, with XACT choosing which one to play
- A sound that plays with slightly different pitch, volume, filter, or 3D pan every time it is played. This is really good for bullets and repetitive things like that; nothing says "fake computer simulation" like a repeating sound with no variance
- To give a game designer or sound designer full control to edit and change sounds without touching the code
- Sound banks (collections of sounds) that you can load or unload as a group, which can use different compression settings and can be in memory or streamed
- To mix the volume, pitch, and priority of sounds in an editor
- To apply filtering to a sound
- To control whether a sound loops
- To use DSP effects
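The per-play variation point is easy to approximate in code even without XACT. A sketch of the idea (the names are mine, not XACT's API, and it's Python rather than the C# an XNA game would use):

```python
import random

def randomized_play_params(base_volume=1.0, base_pitch=1.0,
                           volume_jitter=0.1, pitch_jitter=0.05,
                           rng=random):
    """Return a (volume, pitch) pair with slight random variation, the
    same trick XACT applies so repeated sounds (gunshots, footsteps)
    don't sound machine-like."""
    volume = base_volume * (1.0 + rng.uniform(-volume_jitter, volume_jitter))
    pitch = base_pitch * (1.0 + rng.uniform(-pitch_jitter, pitch_jitter))
    return volume, pitch
```

The difference with XACT is that the jitter ranges live in the sound designer's tool rather than in code, so they can be tuned without a recompile.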
One of my favourite things about XACT is editing and previewing sound functions in the editor, for example a volume fade on a turret overheat sound. With XACT you can sit down with the sound designer, even if he's not a technical guy, and edit the control curves until he's happy with the sound. Once you've set it up, it's really easy to change later on. In this example image, a turret has overheated; at the end of the overheat cycle the hissing steam noise is reduced in volume, and because it's a curve I have a lot of control over how the sound fades out. I've used this with a beam weapon as well, dropping the frequency along a curve as it runs out of ammo.