BASS WASAPI BPMCounter - C#

I want to analyse my default playback device and detect the beats. I've been using BASS WASAPI to get the FFT data of the selected device with:
int ret = BassWasapi.BASS_WASAPI_GetData(_fft, (int)BASSData.BASS_DATA_FFT2048);
I have been using that data to generate spectrum data and display it to the user. In addition I want to detect the beats using the BPMCounter class from BASS. However, as far as I can tell, the BPMCounter.ProcessAudio() function requires a stream (which I don't get with WASAPI) in order to work. Is there a way I can use BPMCounter with WASAPI? It would be great if someone could point me in the right direction. Thanks.
Edit:
I tried this to convert the data to a stream, but without success:
int ret = BassWasapi.BASS_WASAPI_GetData(_fft, (int)BASSData.BASS_DATA_FFT2048); //get channel fft data
var chan = Bass.BASS_StreamCreate(0, 44100, BASSFlag.BASS_DEFAULT, BASSStreamProc.STREAMPROC_PUSH);
Bass.BASS_ChannelPlay(chan, false);
Bass.BASS_StreamPutData(chan, _fft, _fft.Length);
bool beat = _count.ProcessAudio(chan, true);
Debug.Write(beat);
beat is always false; however, I can see from the spectrum that the FFT data is being captured correctly.

I've just started playing with this lib a few hours ago and I am still going through the examples, so my answer may not be what you want. For my project I also want to turn WASAPI into a stream and use it for displaying a spectrum. What I did was create a push stream right after the BASS_WASAPI initialization.
To init your WASAPI use this call and this delegate:
private void InitWasapi()
{
WASAPIPROC _process = new WASAPIPROC(Process); // Delegate
bool res = BassWasapi.BASS_WASAPI_Init(_YourDeviceNumber, 0, 0, BASSWASAPIInit.BASS_WASAPI_BUFFER, 1f, 0f, _process, IntPtr.Zero);
if (!res)
{
// Do error checking
}
// This is the part you are looking for (maybe!)
// Use these flags because Wasapi needs 32-bit sample data
var info = BassWasapi.BASS_WASAPI_GetInfo();
_stream = Bass.BASS_StreamCreatePush(info.freq, info.chans, BASSFlag.BASS_STREAM_DECODE | BASSFlag.BASS_SAMPLE_FLOAT, IntPtr.Zero);
BassWasapi.BASS_WASAPI_Start();
}
private int Process(IntPtr buffer, int length, IntPtr user)
{
Bass.BASS_StreamPutData(_stream, buffer, length);
return length;
}
Please note: this works, but I am still experimenting. For example, I am not getting the same spectrum output as when I create the stream from the music file itself. There are some (small) differences. Maybe it's because I am using a custom EQ in Winamp when playing the same .mp3. So if anyone knows more on this subject, I would also like to hear it!

Related

Send multiple tone generators to a mixer and play the output of the mixer

Using the NAudio library.
I have seen the following link:
https://markheath.net/post/mixing-and-looping-with-naudio (I don't fully understand the detail)
I am able to play generated tones one after the other.
What I don't understand is how to combine multiple tone generators/sources into a mixer and play the output of that mixer, as shown below.
I think I'm missing some fundamental understanding of the process here, so any pointers or further detail would really help me move forward.
Creating the mixer:
WaveMixerStream32 mixer = new WaveMixerStream32();
mixer.AddInputStream(GenerateUpperSine(args[0]));
mixer.AddInputStream(GenerateLowerSine(args[1]));
Creating the Sine Generators:
private static WaveStream GenerateUpperSine(string frequency)
{
    var outFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);
    const int durationInSeconds = 5;
    var UpperFreq = new SignalGenerator()
    {
        Gain = 0.2,
        Frequency = double.Parse(frequency),
        Type = SignalGeneratorType.Sin
    };
    var sp = UpperFreq.ToWaveProvider16();
    byte[] data = new byte[outFormat.AverageBytesPerSecond * durationInSeconds];
    var bytesRead = sp.Read(data, 0, data.Length);
    return new RawSourceWaveStream(new MemoryStream(data), outFormat);
}
The GenerateLowerSine function is an exact duplicate of the upper one, just with some variables changed.
Calling this application from the command line as follows:
c:\path\to\exe\my.exe 1000 750
I have some code which I can add to the generation code which plays the tones as expected, one after the other. This may reside within the function and is as follows:
var wo = new WaveOutEvent();
wo.Init(UpperFreq);
wo.Play();
This indicates the basic functionality is working, at least in part.
What I believe I am missing is either:
how to send the output of the tone generators to the mixer and then initiate the playback from the output of the mixer
or
possibly connect the output of the mixer to a WaveOut device and send
the audio/data from the tone generators to the inputs of the mixer
Perhaps it is something else though.
Any help would be greatly appreciated.
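One thing to watch in the code above: outFormat is IEEE float, but ToWaveProvider16() produces 16-bit PCM, so the RawSourceWaveStream's declared format does not match the bytes it wraps. As a rough sketch of the sample-provider route instead (MixingSampleProvider and the Take() extension are standard NAudio; the fixed 5-second duration is just a placeholder):

using System;
using System.Threading;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class MixerDemo
{
    static void Main(string[] args)
    {
        // Two mono float sine generators at the frequencies passed on the command line.
        var upper = new SignalGenerator(44100, 1) { Gain = 0.2, Frequency = double.Parse(args[0]), Type = SignalGeneratorType.Sin };
        var lower = new SignalGenerator(44100, 1) { Gain = 0.2, Frequency = double.Parse(args[1]), Type = SignalGeneratorType.Sin };

        // The mixer's format must match its inputs (IEEE float, mono, 44.1 kHz).
        var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 1));
        mixer.AddMixerInput(upper.Take(TimeSpan.FromSeconds(5)));
        mixer.AddMixerInput(lower.Take(TimeSpan.FromSeconds(5)));

        // Play the mixer's output; both tones sound at the same time.
        using (var wo = new WaveOutEvent())
        {
            wo.Init(mixer.ToWaveProvider());
            wo.Play();
            while (wo.PlaybackState == PlaybackState.Playing)
                Thread.Sleep(100);
        }
    }
}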

SharpDX XAudio2: 6 SourceVoice limit

I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there), the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;
public cSoundEngine()
{
engine = new XAudio2();
master = new MasteringVoice(engine);
sourceVoices = new List<VoiceChannel>();
}
public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
/**
* Set up and start SourceVoice
*/
NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
SoundStream soundStream = new SoundStream(fileStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source);
sourceVoices.Add(voice);
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it holds 'SourceVoice source' which is the one parameter sent in the constructor!
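For completeness, a minimal sketch of what VoiceChannel.cs could look like at this stage (nothing here beyond what is described above; the later edit passes extra constructor parameters):

class VoiceChannel
{
    // Wrapper around a single SourceVoice; sfx-related data can be added later.
    public SourceVoice Source { get; private set; }

    public VoiceChannel(SourceVoice source)
    {
        Source = source;
    }
}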
Everything runs fine with up to 5 sounds (total = 5); all you hear is the blissful drone of Planet.wav. Anything higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a C++ error the debugger can't handle). Sadly there is no error message for us to look at.
From testing:
- Will not crash as long as you do not have more than 5 running SourceVoices.
- Changing the sample rate does not seem to help.
- Setting inputChannels for the master object to a different number makes no difference.
- MasteringVoice seems to say the maximum number of input voices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume for the SourceVoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate, but please try to explain things in layman's terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck suggested, I have created a databank which holds the .wav data, and I just reference the single data store with each buffer. This has raised the sound limit to 20 - however, it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
/**
* Holds a single byte array for each sound
*/
Dictionary<eSFX, Byte[]> bank;
string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;
public SoundDataBank()
{
bank = new Dictionary<eSFX, byte[]>();
bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
}
public Byte[] GetSoundData(eSFX sfx)
{
byte[] output = bank[sfx];
return output;
}
}
In SoundEngine we create a SoundBank object (initialised in SoundEngine constructor):
SoundDataBank soundBank;
public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
/**
* sourcevoice will be automatically added to MasteringVoice and engine in the constructor.
*/
byte[] buffer = soundBank.GetSoundData(sfx);
MemoryStream memoryStream = new MemoryStream(buffer);
SoundStream soundStream = new SoundStream(memoryStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
sourceVoices.Add(voice);
return voice;
}
Following this implementation now lets me play up to 20 sound effects - but NOT because we are playing from the soundbank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit has improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the constructor of the SoundBank.
I suspect NativeFile is holding a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), they are both running from memory?
Either way, this has quadrupled the limit from before, so this has been incredibly useful - but requires further work.
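For reference, here is a minimal sketch of the 'single data store' idea taken one step further: decode each .wav once, keep the decoded DataStream, format and packet info, and let each play create only a new SourceVoice around that shared data. The CachedSound name is made up for this sketch; the calls themselves are the same SharpDX APIs already used above.

class CachedSound
{
    // Decoded once, then shared by every play of this sound.
    public SharpDX.Multimedia.WaveFormat Format;
    public SharpDX.DataStream Data;
    public uint[] DecodedPacketsInfo;
    public int Length;

    public CachedSound(string filepath)
    {
        var fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
        var soundStream = new SoundStream(fileStream);
        Format = soundStream.Format;
        DecodedPacketsInfo = soundStream.DecodedPacketsInfo;
        Data = soundStream.ToDataStream();
        Length = (int)soundStream.Length;
    }
}

// In cSoundEngine: only a SourceVoice (plus an AudioBuffer header) is created per play.
public VoiceChannel PlayCached(CachedSound sound, double vol)
{
    var source = new SourceVoice(engine, sound.Format);
    var audioBuffer = new AudioBuffer()
    {
        Stream = sound.Data,
        AudioBytes = sound.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };
    source.SetVolume((float)vol);
    source.SubmitSourceBuffer(audioBuffer, sound.DecodedPacketsInfo);
    source.Start();
    var voice = new VoiceChannel(source);
    sourceVoices.Add(voice);
    return voice;
}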

error in realtime wasapi capture(from sound card) manipulate and play

I'm trying to capture all sound going through the computer, manipulate it, and play it back in real time (there can be a slight delay due to the manipulation, but nothing too serious).
I'm trying to do this using NAudio WASAPI. The problem is:
When I do it in exclusive mode, this line: audioClient.Initialize(shareMode, AudioClientStreamFlags.EventCallback, latencyRefTimes, latencyRefTimes,outputFormat, Guid.Empty);
throws this exception:
An unhandled exception of type 'System.Runtime.InteropServices.COMException' occurred in NAudio.dll
Additional information: HRESULT: 0x88890016
When I do it in shared mode I get a lot of noise, which I think is caused by sound feedback (similar to what happens when recording and playing at the same time).
Here's my code:
WasapiLoopbackCapture source = new WasapiLoopbackCapture();
source.DataAvailable += CaptureOnDataAvailable;
bufferedWaveProvider = new BufferedWaveProvider(source.WaveFormat);
volumeProvider = new VolumeSampleProvider(bufferedWaveProvider.ToSampleProvider());
WasapiOut soundOut = new WasapiOut(AudioClientShareMode.Shared, 0);
soundOut.Init(volumeProvider);
soundOut.Play();
source.StartRecording();
soundOut.Volume = 0.5f;
}
private void CaptureOnDataAvailable(object sender, WaveInEventArgs waveInEventArgs)
{
int length = waveInEventArgs.Buffer.Length;
byte[] byteSamples = new Byte[length];
float[] buffer = waveInEventArgs.Buffer.toFloatArray(waveInEventArgs.BytesRecorded); // buffer to contain the samples about to be manipulated
fixer.fixSamples(length / 2, buffer, ref fixedSamples);
if (fixedSamples.Count > 0)
{
//convert the fixed samples back to bytes in order for them to be able to play out
byteSamples = fixedSamples.convertToByteArray(position);
bufferedWaveProvider.AddSamples(byteSamples, 0, byteSamples.Length);
volumeProvider.Volume = .5f;
}
position = fixedSamples.Count;
}
How can I solve these problems?
Also, I don't know if this is the best approach for what I'm trying to do, so if anyone has a better idea of how to do this I'm more than happy to hear it.
(I thought about using ASIO, but decided against it since a lot of computers don't have an ASIO driver.)
This error (HRESULT 0x88890016) is AUDCLNT_E_UNSUPPORTED_FORMAT.
You can only capture audio in certain formats with WASAPI; it usually has to be IEEE float, stereo, at 44.1 kHz or 48 kHz.
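As a rough sketch of how one could check this up front with NAudio (MMDeviceEnumerator, AudioClient.MixFormat and AudioClient.IsFormatSupported are standard NAudio; the candidate format below is just an example):

using System;
using NAudio.CoreAudioApi;
using NAudio.Wave;

// Ask the render endpoint which formats its AudioClient will accept
// before calling Initialize in exclusive mode.
var enumerator = new MMDeviceEnumerator();
var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
var audioClient = device.AudioClient;

Console.WriteLine("Shared-mode mix format: " + audioClient.MixFormat);

var candidate = new WaveFormat(48000, 16, 2); // 48 kHz, 16-bit, stereo PCM - just an example
bool okExclusive = audioClient.IsFormatSupported(AudioClientShareMode.Exclusive, candidate);
Console.WriteLine("Exclusive mode supports candidate format: " + okExclusive);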

Xamarin Monotouch Audio Units Callbacks

I have a problem with Audio Units in MonoTouch/Xamarin.
It seems like I can't get a callback on recording, just playback.
I used this example:
https://github.com/xamarin/monotouch-samples/blob/master/AUSoundTriggeredPlayingSoundMemoryBased/ExtAudioBufferPlayer.cs
and looked at Obj-C examples. The Obj-C examples are pretty much the same as my code, so I'm a little confused about this.
The output when running my example is:
INPUT0
Which is the bus number for output.
So the expected output should be:
INPUT1
So my question is: how do I get a recording callback and a playback callback running at the same time, or failing that, just a recording callback?
My Code:
void prepareAudioUnit()
{
// AudioSession
AudioSession.Initialize();
AudioSession.Category = AudioSessionCategory.PlayAndRecord;
AudioSession.PreferredHardwareIOBufferDuration = Config.packetLength;
AudioSession.PreferredHardwareSampleRate = Format.samplingRate;
//AudioSession.SetActive (false);
AudioSession.SetActive(true);
Logger.log("HWSR:" + AudioSession.CurrentHardwareSampleRate);
// Getting AudioComponent Remote output
_audioComponent = AudioComponent.FindComponent(AudioTypeOutput.VoiceProcessingIO);
// creating an audio unit instance
_audioUnit = new AudioUnit(_audioComponent);
// turning on microphone
_audioUnit.SetEnableIO(true,
AudioUnitScopeType.Input,
1 // Remote Input
);
_audioUnit.SetEnableIO(true,
AudioUnitScopeType.Output,
0 // Remote output
);
// setting audio format
_audioUnit.SetAudioFormat(Format.AudioStreamBasicDescription,
AudioUnitScopeType.Output,
1
);
_audioUnit.SetAudioFormat(Format.AudioStreamBasicDescription,
AudioUnitScopeType.Input,
0
);
// setting callback method
_audioUnit.SetRenderCallback(_audioUnit_OutputCallback, AudioUnitScopeType.Global, 0);
_audioUnit.SetRenderCallback(_audioUnit_InputCallback, AudioUnitScopeType.Global, 1);
}
AudioUnitStatus _audioUnit_OutputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioBuffers data)
{
Logger.log("OUTPUT" + busNumber);
return AudioUnitStatus.NoError;
}
AudioUnitStatus _audioUnit_InputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioBuffers data)
{
Logger.log("INPUT" + busNumber);
return AudioUnitStatus.NoError;
}
This problem is caused by a bug in Xamarin: they forgot to add a method for input callbacks.
I reported the bug, but for people who need the same thing:
http://nopaste.info/8d0aca98d9.html
It's not pretty, but it shows how to write a fix yourself until Xamarin updates this.

How to process audio data from sound card output using Bass.NET

I want to capture and process data using Bass.NET with the BASS_ChannelGetData method. The examples I've seen that use this play audio files through the Bass.NET library and then sample that; however, I wish to sample the data my sound card outputs, so that I can capture and process audio from third-party audio players, for example Spotify.
Bass.BASS_ChannelGetData(handle, buffer, (int)BASSData.BASS_DATA_FFT256);
How would I get a handle that would allow me to process this data?
BASS recording does give you a handle, but if you look closely at the documentation, that handle is only used for playing (actually starting) the record channel; the code sample retrieves the audio samples via a callback.
Take a look at the Bass.BASS_RecordStart method documentation.
private RECORDPROC _myRecProc; // make it global, so that the GC can not remove it
private int _byteswritten = 0;
private byte[] _recbuffer; // local recording buffer
...
if ( Bass.BASS_RecordInit(-1) )
{
_myRecProc = new RECORDPROC(MyRecording);
int recHandle = Bass.BASS_RecordStart(44100, 2, BASSFlag.BASS_RECORD_PAUSE, _myRecProc, IntPtr.Zero);
...
// start recording
Bass.BASS_ChannelPlay(recHandle, false);
}
...
private bool MyRecording(int handle, IntPtr buffer, int length, IntPtr user)
{
bool cont = true;
if (length > 0 && buffer != IntPtr.Zero)
{
// increase the rec buffer as needed
if (_recbuffer == null || _recbuffer.Length < length)
_recbuffer = new byte[length];
// copy from managed to unmanaged memory
Marshal.Copy(buffer, _recbuffer, 0, length);
_byteswritten += length;
// write to file
...
// stop recording after a certain amount (just to demo)
if (_byteswritten > 800000)
cont = false; // stop recording
}
return cont;
}
Note that you should be able to use BASS_ChannelGetData inside that callback instead of Marshal.Copy.
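For example, something along these lines inside the callback should give FFT data straight from the record channel (a sketch only, reusing the BASS_DATA_FFT256 flag from the question):

private float[] _fft = new float[128]; // BASS_DATA_FFT256 returns 128 floats

private bool MyRecording(int handle, IntPtr buffer, int length, IntPtr user)
{
    // 'handle' is the recording channel, so it can be queried like any other channel.
    int ret = Bass.BASS_ChannelGetData(handle, _fft, (int)BASSData.BASS_DATA_FFT256);
    if (ret > 0)
    {
        // process _fft (e.g. feed a spectrum display or beat detection)
    }
    return true; // keep recording
}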
Did you mean resample instead of sample? If so, the BassMix class will handle that job.
