SharpDX XAudio2: 6 SourceVoice limit - c#

I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there) the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;

public cSoundEngine()
{
    engine = new XAudio2();
    master = new MasteringVoice(engine);
    sourceVoices = new List<VoiceChannel>();
}
public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
    /**
     * Set up and start SourceVoice
     */
    NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
    SoundStream soundStream = new SoundStream(fileStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };
    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source);
    sourceVoices.Add(voice);
    //Volume
    source.SetVolume((float)vol);
    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();
    return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
    string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
    VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it holds 'SourceVoice source', which is the one parameter passed to its constructor!
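For reference, that is all VoiceChannel.cs amounts to at this stage; a minimal sketch reconstructed from the description above:
class VoiceChannel
{
    public SourceVoice source;

    public VoiceChannel(SourceVoice source)
    {
        this.source = source;
    }
}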
Everything runs fine with up to 5 sounds (total = 5). All you hear is the blissful drone of Planet.wav. Anything higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a native/C++ error the debugger can't handle). Sadly there is no error message for us to look at.
From testing:
- Will not crash as long as there are no more than 5 running SourceVoices.
- Changing the sample rate does not seem to help.
- Setting inputChannels for the master object to a different number makes no difference.
- MasteringVoice seems to say the maximum number of input voices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume on the SourceVoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate but please try and explain things in layman terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh

As Chuck suggested, I have created a data bank which holds the .wav data, and each buffer now references that single data store. This has raised the sound limit to 20 - however, it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
    /**
     * Holds a single byte array for each sound
     */
    Dictionary<eSFX, Byte[]> bank;
    string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;

    public SoundDataBank()
    {
        bank = new Dictionary<eSFX, byte[]>();
        bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
        bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
    }

    public Byte[] GetSoundData(eSFX sfx)
    {
        byte[] output = bank[sfx];
        return output;
    }
}
In SoundEngine we create a SoundBank object (initialised in SoundEngine constructor):
SoundDataBank soundBank;

public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
    /**
     * sourcevoice will be automatically added to MasteringVoice and engine in the constructor.
     */
    byte[] buffer = soundBank.GetSoundData(sfx);
    MemoryStream memoryStream = new MemoryStream(buffer);
    SoundStream soundStream = new SoundStream(memoryStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };
    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());
    //Volume
    source.SetVolume((float)vol);
    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();
    sourceVoices.Add(voice);
    return voice;
}
This implementation now lets me play up to 20 sound effects - but NOT because we are playing from the soundbank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit has improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the SoundBank constructor.
I suspect NativeFile keeps a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), both are playing from memory?
Either way, this has quadrupled the limit from before, so it has been incredibly useful - but it requires further work.
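A possible next step (a sketch under assumptions, not a confirmed fix): parse each WAV once into a reusable AudioBuffer, and destroy each SourceVoice once its buffer has finished so finished voices stop accumulating. The CachedSound helper and the cleanup queue below are my own naming, not SharpDX API:
using System.Collections.Concurrent;
using SharpDX.IO;
using SharpDX.Multimedia;
using SharpDX.XAudio2;

// Hypothetical helper: decode a WAV once, keep the buffer for reuse.
class CachedSound
{
    public WaveFormat Format;
    public AudioBuffer Buffer;
    public uint[] DecodedPacketsInfo;

    public CachedSound(string filepath)
    {
        using (var stream = new SoundStream(new NativeFileStream(
            filepath, NativeFileMode.Open, NativeFileAccess.Read)))
        {
            Format = stream.Format;
            DecodedPacketsInfo = stream.DecodedPacketsInfo;
            Buffer = new AudioBuffer
            {
                Stream = stream.ToDataStream(),
                AudioBytes = (int)stream.Length,
                Flags = BufferFlags.EndOfStream
            };
        }
    }
}

// In cSoundEngine: play from the cache and queue finished voices for disposal.
// XAudio2 forbids destroying a voice from inside its own callback, so the
// queue is drained elsewhere (e.g. once per frame in the game loop).
ConcurrentQueue<SourceVoice> finishedVoices = new ConcurrentQueue<SourceVoice>();

public void PlayCached(CachedSound sound)
{
    var source = new SourceVoice(engine, sound.Format, true);
    source.BufferEnd += context => finishedVoices.Enqueue(source);
    source.SubmitSourceBuffer(sound.Buffer, sound.DecodedPacketsInfo);
    source.Start();
}

public void CleanUpFinishedVoices()
{
    SourceVoice v;
    while (finishedVoices.TryDequeue(out v))
        v.DestroyVoice();
}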


Send multiple tone generators to a mixer and play the output of the mixer

Using NAudio Library
I have seen the following link:
https://markheath.net/post/mixing-and-looping-with-naudio (don't fully understand the detail)
I am able to play generated tones one after the other.
What I don't understand is how to combine multiple tone generators/sources into a mixer and play the output of that mixer, as attempted below.
I'm missing some fundamental understanding of the process here I think, so any pointers or further detail would really help me move forward.
Creating the mixer:
WaveMixerStream32 mixer = new WaveMixerStream32();
mixer.AddInputStream(GenerateUpperSine(args[0]));
mixer.AddInputStream(GenerateLowerSine(args[1]));
Creating the Sine Generators:
private static WaveStream GenerateUpperSine(string frequency)
{
    var outFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);
    const int durationInSeconds = 5;
    var UpperFreq = new SignalGenerator(44100, 1)
    {
        Gain = 0.2,
        Frequency = Double.Parse(frequency),
        Type = SignalGeneratorType.Sin
    };
    // Keep the samples as IEEE float so they match outFormat
    // (ToWaveProvider16 would produce 16-bit PCM and mislabel the data).
    var sp = UpperFreq.ToWaveProvider();
    byte[] data = new byte[outFormat.AverageBytesPerSecond * durationInSeconds];
    var bytesRead = sp.Read(data, 0, data.Length);
    return new RawSourceWaveStream(new MemoryStream(data), outFormat);
}
The GenerateLowerSine function is an exact duplicate of the upper one, just with some variables changed.
Calling this application from the command line as follows:
c:\path\to\exe\my.exe 1000 750
I have some code which I can add to the generation code that plays the tones as expected, one after the other. It resides within the generator function and is as follows:
var wo = new WaveOutEvent();
wo.Init(UpperFreq);
wo.Play();
This indicates the basic functionality is working, at least in part.
What I believe I am missing is either:
- how to send the output of the tone generators to the mixer and then initiate playback from the output of the mixer, or
- how to connect the output of the mixer to a WaveOut device and send the audio/data from the tone generators to the inputs of the mixer.
Perhaps it is something else though.
Any help would be greatly appreciated.
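For illustration, a minimal sketch of the second option (connecting the mixer output to a WaveOut device); it assumes the corrected GenerateUpperSine above, which returns IEEE-float streams, since as far as I know WaveMixerStream32 only accepts 32-bit floating-point inputs:
// Sketch: WaveMixerStream32 is itself a WaveStream, so it can be handed
// straight to an output device instead of an individual generator.
WaveMixerStream32 mixer = new WaveMixerStream32();
mixer.AddInputStream(GenerateUpperSine(args[0]));
mixer.AddInputStream(GenerateLowerSine(args[1]));

var wo = new WaveOutEvent();
wo.Init(mixer);  // init from the mixer's output
wo.Play();
while (wo.PlaybackState == PlaybackState.Playing)
    System.Threading.Thread.Sleep(100);  // keep the console process alive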

BASS WASAPI BPMCounter

I want to analyse my default playback device and detect the beats. I've been using the BASS WASAPI to get the FFT data of the selected device with:
int ret = BassWasapi.BASS_WASAPI_GetData(_fft, (int)BASSData.BASS_DATA_FFT2048);
Now I was using the data to generate spectrum data and display it to the user. In addition, I want to detect beats using the BPMCounter class from BASS. However, as far as I can tell, the BPMCounter.ProcessAudio() function requires a stream (which I don't get with WASAPI) in order to work. Is there a way I can use BPMCounter with WASAPI? It would be great if someone could point me in the right direction. Thanks
Edit:
Tried this to convert the data to a stream, but without success:
int ret = BassWasapi.BASS_WASAPI_GetData(_fft, (int)BASSData.BASS_DATA_FFT2048); //get channel fft data
var chan = Bass.BASS_StreamCreate(0, 44100, BASSFlag.BASS_DEFAULT, BASSStreamProc.STREAMPROC_PUSH);
Bass.BASS_ChannelPlay(chan, false);
Bass.BASS_StreamPutData(chan, _fft, _fft.Length);
bool beat = _count.ProcessAudio(chan, true);
Debug.Write(beat);
beat is always False; however, I can see from the spectrum that the captured FFT data is correct.
I've just started playing with this lib a few hours ago and I am still going through the examples. So my answer may not be what you want. For my project I also want to turn WASAPI into a stream and use it to display a spectrum. What I did was create a push stream right after BASS_WASAPI initialization.
To init your WASAPI use this call and this delegate:
private void InitWasapi()
{
    WASAPIPROC _process = new WASAPIPROC(Process); // Delegate
    bool res = BassWasapi.BASS_WASAPI_Init(_YourDeviceNumber, 0, 0, BASSWASAPIInit.BASS_WASAPI_BUFFER, 1f, 0f, _process, IntPtr.Zero);
    if (!res)
    {
        // Do error checking
    }
    // This is the part you are looking for (maybe!)
    // Use these flags because WASAPI needs 32-bit sample data
    var info = BassWasapi.BASS_WASAPI_GetInfo();
    _stream = Bass.BASS_StreamCreatePush(info.freq, info.chans, BASSFlag.BASS_STREAM_DECODE | BASSFlag.BASS_SAMPLE_FLOAT, IntPtr.Zero);
    BassWasapi.BASS_WASAPI_Start();
}

private int Process(IntPtr buffer, int length, IntPtr user)
{
    Bass.BASS_StreamPutData(_stream, buffer, length);
    return length;
}
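If the push stream behaves like an ordinary decode channel, the BPMCounter from the question could presumably be pointed at it as well; a minimal, untested sketch (assuming _count is the BPMCounter instance and a UI timer fires every ~20 ms):
// Untested sketch: poll the decode stream on a timer and check for beats.
private void OnBeatTimerTick(object sender, EventArgs e)
{
    bool beat = _count.ProcessAudio(_stream, true);
    if (beat)
    {
        // A beat was detected; _count.BPM should hold the current estimate.
    }
}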
Please note: This works, but I am still experimenting. For example, I am not getting the same spectrum output as when I create the stream from the music file itself. There are some (small) differences. Maybe it's because I am using a custom EQ in Winamp while playing the same .mp3. So if anyone knows more on this subject, I would also like to hear it!

error in realtime wasapi capture(from sound card) manipulate and play

I'm trying to capture all sound going through the computer, then manipulate and play it in real time (there can be a slight delay due to the manipulation, but nothing too serious).
I'm trying to do this using NAudio WASAPI. The problem is:
When I do it in exclusive mode, this line: audioClient.Initialize(shareMode, AudioClientStreamFlags.EventCallback, latencyRefTimes, latencyRefTimes,outputFormat, Guid.Empty);
throws this exception:
An unhandled exception of type 'System.Runtime.InteropServices.COMException' occurred in NAudio.dll
Additional information: HRESULT: 0x88890016
When I do it in shared mode I get a lot of noise, which I think is caused by sound feedback (similar to what happens when recording and playing at the same time).
Here's my code:
WasapiLoopbackCapture source = new WasapiLoopbackCapture();
source.DataAvailable += CaptureOnDataAvailable;
bufferedWaveProvider = new BufferedWaveProvider(source.WaveFormat);
volumeProvider = new VolumeSampleProvider(bufferedWaveProvider.ToSampleProvider());

WasapiOut soundOut = new WasapiOut(AudioClientShareMode.Shared, 0);
soundOut.Init(volumeProvider);
soundOut.Play();
source.StartRecording();
soundOut.Volume = 0.5f;

private void CaptureOnDataAvailable(object sender, WaveInEventArgs waveInEventArgs)
{
    int length = waveInEventArgs.Buffer.Length;
    byte[] byteSamples = new Byte[length];
    //buffer to contain the samples about to be manipulated
    float[] buffer = waveInEventArgs.Buffer.toFloatArray(waveInEventArgs.BytesRecorded);
    fixer.fixSamples(length / 2, buffer, ref fixedSamples);
    if (fixedSamples.Count > 0)
    {
        //convert the fixed samples back to bytes so they can be played out
        byteSamples = fixedSamples.convertToByteArray(position);
        bufferedWaveProvider.AddSamples(byteSamples, 0, byteSamples.Length);
        volumeProvider.Volume = .5f;
    }
    position = fixedSamples.Count;
}
How can I solve these problems?
Also, I don't know if this is the best approach for what I'm trying to do, so if anyone has a better idea I'm more than happy to hear it.
(I thought about using ASIO, but decided against it since a lot of computers don't have an ASIO driver.)
This error is AUDCLNT_E_UNSUPPORTED_FORMAT.
You can only capture audio in certain formats with WASAPI; it usually has to be IEEE float, stereo, at 44.1 kHz or 48 kHz.
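A sketch of how the format could be checked up front with NAudio before calling Initialize (the device selection is illustrative; outputFormat is the variable from the question):
using NAudio.CoreAudioApi;

// Illustrative: ask the device what it supports instead of guessing.
var enumerator = new MMDeviceEnumerator();
var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
var audioClient = device.AudioClient;

// In shared mode, the engine's mix format is always accepted.
var mixFormat = audioClient.MixFormat;

// In exclusive mode, verify the format before initializing.
if (audioClient.IsFormatSupported(AudioClientShareMode.Exclusive, outputFormat))
{
    // outputFormat is safe to pass to audioClient.Initialize(...)
}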

Writing to MIDI file in C#

I have been trying to find a way to write to a MIDI file using the C# MIDI Toolkit. However, I am constantly running into problems with time synchronization. The resulting MIDI file is always off beat. More precisely, it has correct rhythm in relation to itself, but when imported into a sequencer, it does not seem to contain any tempo information (which is understandable, since I never specify it from within my program). There is no documentation on how to do this.
I am using the following code to insert notes into a track.
public const int NOTE_LENGTH = 32;
private static void InsertNote(Track t, int pitch, int velocity, int position, int duration, int channel)
{
ChannelMessageBuilder builder = new ChannelMessageBuilder();
builder.Command = ChannelCommand.NoteOn;
builder.Data1 = pitch;
builder.Data2 = velocity;
builder.MidiChannel = channel;
builder.Build();
t.Insert(position * NOTE_LENGTH, builder.Result);
builder.Command = ChannelCommand.NoteOff;
builder.Build();
t.Insert((position + duration) * NOTE_LENGTH, builder.Result);
}
I am sure the notes themselves are okay, since the resulting output is audible, but has no tempo information. How do I enter tempo information into the Sequence object that contains my tracks?
Stumbled upon an answer by brute-force trying: the NOTE_LENGTH should be evenly divisible by 3.
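On the tempo question itself: if I recall the C# MIDI Toolkit API correctly, tempo is a MetaMessage built with TempoChangeBuilder and inserted at tick 0 of a track; a small sketch (500000 microseconds per quarter note, i.e. 120 BPM, as an example):
// Sketch: give the sequence tempo information by inserting a tempo
// meta event at the very start of a track.
TempoChangeBuilder tempoBuilder = new TempoChangeBuilder();
tempoBuilder.Tempo = 500000; // microseconds per quarter note (120 BPM)
tempoBuilder.Build();
t.Insert(0, tempoBuilder.Result);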

Audio generation software or .NET library

I need to be able to play certain tones in a C# application. I don't care if it generates them on the fly or plays them from a file, but I need SOME way to generate tones that have not only variable volume and frequency, but variable timbre. It would be especially helpful if whatever I use to generate these tones had many timbre presets, and it would be even more awesome if these timbres didn't all sound midi-ish (meaning some of them sounded like they might have been recordings of actual instruments).
Any suggestions?
You might like to take a look at my question Creating sine or square wave in C#
Using NAudio in particular was a great choice
This article helped me with something similar:
http://social.msdn.microsoft.com/Forums/vstudio/en-US/18fe83f0-5658-4bcf-bafc-2e02e187eb80/beep-beep
The part in particular is the Beep Class:
public class Beep
{
    public static void BeepBeep(int Amplitude, int Frequency, int Duration)
    {
        // Amplitude is given per mille of full scale (100 -> ~10% volume),
        // scaled here to the 16-bit sample range.
        double A = ((Amplitude * (System.Math.Pow(2, 15))) / 1000) - 1;
        // Phase increment per sample at a 44.1 kHz sample rate.
        double DeltaFT = 2 * Math.PI * Frequency / 44100.0;

        int Samples = 441 * Duration / 10; // Duration is in milliseconds
        int Bytes = Samples * 4;           // 16-bit stereo = 4 bytes per sample
        // A canonical 44-byte WAV header as ints: "RIFF", size, "WAVE",
        // "fmt ", PCM, 2 channels, 44100 Hz, byte rate, then the "data" chunk.
        int[] Hdr = {0X46464952, 36 + Bytes, 0X45564157, 0X20746D66, 16, 0X20001, 44100, 176400, 0X100004, 0X61746164, Bytes};

        using (MemoryStream MS = new MemoryStream(44 + Bytes))
        {
            using (BinaryWriter BW = new BinaryWriter(MS))
            {
                for (int I = 0; I < Hdr.Length; I++)
                {
                    BW.Write(Hdr[I]);
                }
                for (int T = 0; T < Samples; T++)
                {
                    // Write the same sine sample to the left and right channels.
                    short Sample = System.Convert.ToInt16(A * Math.Sin(DeltaFT * T));
                    BW.Write(Sample);
                    BW.Write(Sample);
                }
                BW.Flush();
                MS.Seek(0, SeekOrigin.Begin);
                using (SoundPlayer SP = new SoundPlayer(MS))
                {
                    SP.PlaySync();
                }
            }
        }
    }
}
It can be used as follows
Beep.BeepBeep(100, 1000, 1000); /* 10% volume */
There's a popular article on CodeProject along these lines:
http://www.codeproject.com/KB/audio-video/CS_ToneGenerator.aspx
You might also check out this thread:
http://episteme.arstechnica.com/eve/forums/a/tpc/f/6330927813/m/197000149731
In order for your generated tones not to sound 'midi-ish', you'll have to use real-life samples and play them back. Try to find a good real-instrument sample bank, like http://www.sampleswap.org/filebrowser-new.php?d=INSTRUMENTS+single+samples%2F
Then, when you want to compose a melody from them, just alter the playback frequency relative to the original sample's frequency.
Please drop me a line if you find this answer useful.
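For illustration, a minimal sketch of that playback-rate trick using NAudio (the file path and semitone math are illustrative; note that relabeling the sample rate changes duration along with pitch):
using System;
using NAudio.Wave;

// Play a WAV sample pitched up/down by `semitones` by relabeling its sample rate.
static void PlayPitched(string path, int semitones)
{
    using (var reader = new WaveFileReader(path))
    {
        int newRate = (int)(reader.WaveFormat.SampleRate * Math.Pow(2, semitones / 12.0));
        var pitchedFormat = new WaveFormat(newRate, reader.WaveFormat.BitsPerSample, reader.WaveFormat.Channels);
        using (var pitched = new RawSourceWaveStream(reader, pitchedFormat))
        using (var output = new WaveOutEvent())
        {
            output.Init(pitched);
            output.Play();
            while (output.PlaybackState == PlaybackState.Playing)
                System.Threading.Thread.Sleep(100);
        }
    }
}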
