I'm making a war game for iOS using MonoTouch and C#, and I'm running into some problems with the audio sound effects.
Here's what I require: the ability to play many sound effects simultaneously (possibly up to 10-20 at once) and the ability to adjust volume (for example, if the user zooms in on the battlefield, the gunshot volume gets louder).
Here are my problems:
With AVAudioPlayer, I can adjust volume, but I can only play one sound per thread. So if I want to play multiple sounds, I have to have dozens and dozens of threads going just in case they overlap. This is a war game; picture 20 soldiers on the battlefield. Each soldier would have a "sound thread" to play gunfire sounds when they shoot, because every soldier could happen to fire at the same exact time. I don't have a problem with making lots of threads, but my game already has dozens of threads running all the time, and adding dozens more could get me into trouble... right? So I'd rather not go down that road unless I have to.
With SystemSound, I can play as many sounds as I want in the same thread, but I can't adjust the volume. My workaround is, for every sound effect I have, to save it four times at four different volumes. That is a big pain. Is there any way to adjust volume with SystemSound?
Both of these meet some of my requirements, but neither is a seamless fit. Should I go down the AVAudioPlayer multi-threading nightmare road, or the SystemSound multiple-files-at-different-volumes nightmare road? Or is there a better way to do this?
Thanks in advance.
Finally found the solution to my problem. AVAudioPlayer IS capable of playing multiple sounds at once, but only with certain file formats; the details are at the link below. The reason I couldn't play my sound effects simultaneously was that the files were compressed, and the iPhone has only one hardware decompressor.
http://brainwashinc.wordpress.com/2009/08/14/iphone-playing-2-sounds-at-once/
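For anyone hitting the same wall, here is a minimal MonoTouch sketch of that fix: convert the effects to an uncompressed format (e.g. PCM .caf or .wav, which bypasses the lone hardware decoder) and give each sound its own AVAudioPlayer with its own Volume. The file path and helper name are mine, not from the linked post.

```csharp
using MonoTouch.AVFoundation;
using MonoTouch.Foundation;

public static class SoundFx
{
    // Plays an uncompressed (PCM .caf/.wav) effect at the given volume.
    // Uncompressed files skip the single hardware decompressor, so many
    // players can run at once on the same thread.
    public static AVAudioPlayer Play(string path, float volume)
    {
        var player = AVAudioPlayer.FromUrl(NSUrl.FromFilename(path));
        player.Volume = volume; // 0.0-1.0, e.g. louder when zoomed in
        player.Play();
        return player;          // hold a reference so the GC can't stop it mid-play
    }
}

// e.g. SoundFx.Play("Sounds/gunshot.caf", 0.8f);
```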
The reason I want to do this is to be able to layer the background music (e.g., a simple song starts playing, the player triggers something, and an instrument is added). I can work out the timing issues, if any.
I thought I could do that with MediaPlayer/Song, but it wouldn't work.
All I'm really looking for is the downsides of using SoundEffectInstance.
P.S. I don't use XACT, since I'll be changing over to MonoGame eventually.
Thanks
Actually, that's what SoundEffectInstance is for!
It has limitations, though, depending on the platform your game is running on:
On Windows Phone, a game can have a maximum of 16 total playing SoundEffectInstance instances at one time, combined across all loaded SoundEffect objects. The only limit to the total number of loaded SoundEffectInstance and SoundEffect objects is available memory. However, the user can play only 16 sound effects at one time; attempts to play a SoundEffectInstance beyond this limit will fail.
On Windows, there is no hard limit, but playing too many instances can lead to performance degradation.
On Xbox 360, the limit is 300 sound effect instances loaded or playing. Dispose of old instances if you need more.
Oh, and by the way, it's been a long time since I played with XNA, but I'm pretty sure the XACT tool was no longer necessary by the end of its life cycle.
I seem to recall that you could load an MP3 in the Content folder and play it via the SoundEffectInstance object.
Actually, I think you'll find using the MediaPlayer class combined with the Song class is the recommended way to play background music.
Provides methods and properties to play, pause, resume, and stop songs. MediaPlayer also exposes shuffle, repeat, volume, play position, and visualization capabilities.
I think the primary difference is that the MediaPlayer can stream the data rather than loading it all into memory at once. So, for long music tracks this is the way to go.
Also, in MonoGame these classes are implemented by wrapping the platform-specific classes that do the same thing. For example, on Android the SoundEffectInstance uses the Android SoundPool (intended for sound effects) and the MediaPlayer uses the Android MediaPlayer (intended for music). See this post on the MonoGame forums for reference.
slygamer says: "MediaPlayer for background music and SoundEffect for sound effects is how it is designed to be used."
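To make the layering idea concrete, here is a minimal XNA sketch along those lines: each instrument layer is a looped SoundEffectInstance that starts muted and is unmuted when the player triggers it, while MediaPlayer streams the base track. The asset names are placeholders, not real content.

```csharp
// In LoadContent: one looped, initially-silent instance per layer.
SoundEffect drumsFx = Content.Load<SoundEffect>("Audio/drums"); // placeholder asset
SoundEffectInstance drums = drumsFx.CreateInstance();
drums.IsLooped = true;
drums.Volume = 0f;  // silent until triggered
drums.Play();       // started now so it stays in time with the music

// Base track streamed through MediaPlayer rather than loaded whole.
Song baseTrack = Content.Load<Song>("Audio/theme");             // placeholder asset
MediaPlayer.IsRepeating = true;
MediaPlayer.Play(baseTrack);

// Later, when the player triggers something:
drums.Volume = 1f;  // the layer is already running in sync; just unmute it
```

If the layers have to stay tightly locked to the base track, playing the base track as a looped SoundEffectInstance as well should avoid any drift between the MediaPlayer clock and the effect voices, at the cost of loading it fully into memory.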
I'm not fully sure what I'm looking for, so I was hoping to gain some insight. I'm brainstorming an application I want to write, but I've never dealt with this type of project. I was wondering if any of you knew of any resources, both literature and libraries/APIs, that could lead me down the path to creating an application that involves audio playback, audio visualization (like displaying the waveform with a scrolling position indicator), setting playback points, maybe some effects like fades in and out, and maybe even beat mapping, all in .NET.
For example, an application that displays a waveform and has a position indicator that moves with playback. It allows you to visually set multiple start and stop points, etc.
Any suggestions?
Check out http://naudio.codeplex.com/
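To give a feel for what NAudio provides, here is a minimal sketch using its AudioFileReader to reduce a file to one min/max pair per block, which is the usual starting point for a scrolling waveform display. The file name and block size are arbitrary.

```csharp
using NAudio.Wave; // NAudio package

// Reduce "track.mp3" (placeholder) to min/max peaks per block; each
// pair becomes one vertical line of the waveform display.
using (var reader = new AudioFileReader("track.mp3"))
{
    var buffer = new float[1024]; // samples per waveform "pixel"
    int read;
    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        float min = 0f, max = 0f;
        for (int i = 0; i < read; i++)
        {
            if (buffer[i] < min) min = buffer[i];
            if (buffer[i] > max) max = buffer[i];
        }
        // Draw a vertical line from min to max here; reader.CurrentTime
        // gives the position for a scrolling playback cursor.
    }
}
```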
I am using XNA to develop a game which requires both sound effects and music. I'm trying to figure out how to implement the sound engine. Microsoft provides the ability to use the Content Pipeline to load and play audio, but I've also seen people use XACT to do the same thing. My question is: what's the difference, and which would be the better approach to building a sound engine?
XACT is feature-rich but complex to use. It was originally the only way to play sound, but people wanted something simpler, so Microsoft added the Content Pipeline method.
Use the Content Pipeline if you want (a two-line sketch follows this list):
- To play a sound (2D or 3D)
- To not have to invest a lot of time learning an audio framework
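For reference, the Content Pipeline route really is only a couple of lines. A sketch, with a placeholder asset name:

```csharp
// In LoadContent:
SoundEffect shot = Content.Load<SoundEffect>("Audio/shot"); // placeholder asset

// Wherever the sound should fire: volume 0.0-1.0, pitch and pan -1.0 to 1.0.
shot.Play(1.0f, 0.0f, 0.0f);
```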
Use XACT if you want (a minimal playback sketch follows this list):
- Categories of sounds that can be independently controlled, like muting game sounds but not menu sounds.
- Real-time advanced control over sound pitch and volume, for things like turrets spinning up, cars accelerating, etc.
- To have multiple varieties of a single sound effect, like seven different pain sounds, and have XACT choose which one to play.
- To have a sound play with slightly different pitch, volume, filter, or 3D pan every time it is played. This is really good for bullets and repetitive things like that. Nothing says "fake computer simulation" like a repeating sound with no variance.
- To allow a game designer or sound designer full control to edit and change sounds without touching the code.
- To have sound banks (collections of sounds) that you can load or unload as a group, which can use different compression settings and can be in memory or streamed.
- To mix the volume, pitch, and priority of sounds in an editor.
- To apply filtering to a sound.
- To control whether the sound is looping or not.
- To use DSP effects.
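By contrast, even minimal XACT playback goes through the runtime files the authoring tool produces. A sketch, with placeholder file and cue names:

```csharp
// Files built by the XACT authoring tool and included in the project.
AudioEngine engine  = new AudioEngine(@"Content\audio.xgs");
WaveBank waveBank   = new WaveBank(engine, @"Content\Wave Bank.xwb");
SoundBank soundBank = new SoundBank(engine, @"Content\Sound Bank.xsb");

// Play a cue; XACT applies the authored variations, categories and curves.
soundBank.PlayCue("turret_overheat"); // placeholder cue name

// Once per frame, so fades and RPC curves keep advancing.
engine.Update();
```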
One of my favourite things about XACT is editing and previewing sound functions in the editor, for example a volume fade on a turret overheat sound. With XACT you can sit down with the sound designer, even if he's not a technical guy, and edit the control curves until he's happy with the sound. Once you've set it up, it's really easy to change later on. As an example, when a turret is overheated, the hissing steam noise volume is reduced at the end of the overheat cycle, but because it's a curve I have a lot of control over how the sound fades out. I've used this with a beam weapon as well, dropping the frequency according to a curve as it runs out of ammo.
I recently joined a project where I need to build a vehicle-based computer vision system. What sort of special functionality does a camera need to be able to capture images while traveling at varying speeds? For example, how high a frame rate is required, and what exposure duration and shutter speed? Do you think webcams (even high-end ones) will be able to achieve it? The project requires the camera to be programmable in C#.
Thank you very much in advance!
Unless video is capable of producing high-quality, low-blur images, I would go with a camera with a really fast shutter speed and a very short exposure duration. As for frame rate, following Seth's math, 44 centimeters is a little more than a foot, which should be decent for calculations.
Reaction time for a human to respond to someone hitting the brakes in front of them is about 1.5 seconds. If you can determine they hit their brake light within 1/30th of a second, and it takes you 1 second to calculate and apply the brakes, you have already beaten a human's reaction time.
How fast your shutter speed needs to be depends on how fast your vehicle is moving. A faster shutter speed reduces motion blur, giving a more accurate picture to analyze.
Try different speeds (a camera with this value configurable might help).
I'm not sure that's an answerable question. It sounds like the sort of thing that the DARPA Grand Challenge hopes to determine :)
With regard to frame rate: if your vehicle is going 30 miles per hour, a 30 FPS webcam will capture one frame for every 44 centimeters the vehicle travels. Whether or not that's "enough" depends on what you're planning to do with the image.
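That per-frame distance is easy to sanity-check for other speeds and frame rates. A plain C# version of the same arithmetic:

```csharp
using System;

class FrameDistance
{
    static void Main()
    {
        double mph = 30.0; // vehicle speed
        double fps = 30.0; // camera frame rate

        double metersPerSecond = mph * 1609.344 / 3600.0; // ~13.4 m/s
        double metersPerFrame  = metersPerSecond / fps;   // ~0.447 m

        Console.WriteLine("{0:F1} cm of travel per frame", metersPerFrame * 100.0);
    }
}
```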
Not sure about the out-of-the-box C# programmability, but a specific webcam-style camera to consider would be the PS3 Eye.
It was specially engineered for motion capture and (as I understand it) is capable of higher-quality images at high framerates than the majority of the competition. Windows drivers are available for it, and that opens the door to creating a C# wrapper.
Here is the product page; note the 120 fps upper-end spec (I'm not sure the Windows drivers run at this rate, but obviously the hardware is capable of it).
One note on shutter speed: images taken at a high framerate in low light will likely be underexposed and unusable. If you need this to work in varying light conditions, the framerate will likely either need to be fixed at the low end of your acceptable range, or need to self-adjust based on available light.
These guys, Mobileye, develop such commercial systems for lane departure warnings and vehicle and pedestrian detection.
If you go to the "Manufacturer Products -> Development and Evaluation Platforms -> Cameras" section,
you can see what they use as cameras and also their processing platforms.
30 fps should be sufficient for the applications mentioned above.
If money isn't an issue, take a look at cameras from companies like Opeton and others. You can control every aspect of every image capture, including capture time, image size, and more.
My iPhone can take pictures out the side of a car that are fairly blur-free past 10-20 feet. Closer than that, things are simply moving too fast; the shutter speed would need to be higher to avoid blurring them.
Start with a middle-of-the-road webcam, and move up as necessary? A laptop and a ride in your car while capturing still images would probably give you an idea of how well it works.
I am looking for a decent programmatic approach to delivering the illusion of "riding in a van". Here is the synopsis:
I have a friend who is opening up a bar in San Francisco with a room interior designed to be like the inside of a van (picture the inside of the Scooby Doo Mystery Machine). Set into the walls are “windows”, and behind those windows are monitors. There are two servers (for the left and right sides) that are delivering simultaneous presentations from pre-recorded footage of a vehicle driving down the road.
At the moment the screens are split across a shared workspace, so as items in the background move from right to left, the impression of motion is flawless. However, once you move the screens apart, there is no delay for the empty "wall space", the natural delay one would expect to perceive as an object progresses from one screen, through the space in the wall, to the next screen.
Is there a managed code approach I could take to construct a project that could take a time delay argument for delivering content between monitors in this case? Or is there even an off-the-shelf program that might do the trick as well?
EDIT:
What I am really looking for is advice on how to program this: can I load a Windows Media file and stream it to separate monitors on separate threads with a slight delay?
Sure, you just have to do playback on both monitors separately and delay one of the videos.
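As a sketch of one managed-code way to do that, assuming a WPF app with two MediaElement controls, one window per monitor, and a hand-tuned delay (the path, monitor offset, and delay value are all placeholders):

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Threading;

// Run from a WPF Application's startup code. Same clip in two
// manually-controlled players; the second starts after the "wall gap" delay.
var clip  = new Uri(@"C:\van\drive.wmv"); // placeholder path
var left  = new MediaElement { LoadedBehavior = MediaState.Manual, Source = clip };
var right = new MediaElement { LoadedBehavior = MediaState.Manual, Source = clip };

new Window { Content = left,  Left = 0    }.Show(); // monitor 1
new Window { Content = right, Left = 1920 }.Show(); // monitor 2 (assumed width)

left.Play();
var delay = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(250) }; // tune by eye
delay.Tick += (s, e) => { right.Play(); delay.Stop(); };
delay.Start();
```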