I want an audio source to be played back after a delay of 3 minutes. I tried to make a kind of buffered WaveSource, but the sound plays back almost instantly.
This is the code I'm currently using:
using (var soundIn = new WasapiCapture()
{
Device = GetDevice("VoiceMeeter", DataFlow.Capture)
})
{
soundIn.Initialize();
var source = new SoundInSource(soundIn)
{
FillWithZeros = true
};
soundIn.Start();
using (var soundOut = new WasapiOut())
{
soundOut.Initialize(source);
soundOut.Play();
Console.ReadKey();
}
}
If anyone has a good idea how to delay the data for 3 minutes, I would be very thankful. In case it's not obvious: I'm using the CSCore library.
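Conceptually, what I'm after is something like the sketch below (untested; DelayedWaveSource is just my working name, not a CSCore type). It pre-fills a FIFO with silence so that, with FillWithZeros = true, every byte read from the capture comes back out exactly one delay later:
using System;
using System.Collections.Generic;
using CSCore;

class DelayedWaveSource : IWaveSource
{
    private readonly IWaveSource _source;
    private readonly Queue<byte> _fifo = new Queue<byte>();

    public DelayedWaveSource(IWaveSource source, TimeSpan delay)
    {
        _source = source;
        // Pre-fill with silence, rounded down to whole sample frames.
        long silence = (long)(delay.TotalSeconds * source.WaveFormat.BytesPerSecond);
        silence -= silence % source.WaveFormat.BlockAlign;
        for (long i = 0; i < silence; i++)
            _fifo.Enqueue(0);
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        // Push the freshly captured bytes to the back of the queue...
        var temp = new byte[count];
        int read = _source.Read(temp, 0, count);
        for (int i = 0; i < read; i++)
            _fifo.Enqueue(temp[i]);

        // ...and hand out the oldest bytes (silence for the first 3 minutes).
        int n = Math.Min(count, _fifo.Count);
        for (int i = 0; i < n; i++)
            buffer[offset + i] = _fifo.Dequeue();
        return n;
    }

    public WaveFormat WaveFormat => _source.WaveFormat;
    public bool CanSeek => false;
    public long Position { get { return 0; } set { } }
    public long Length => 0;
    public void Dispose() => _source.Dispose();
}
I would then plug it in via soundOut.Initialize(new DelayedWaveSource(source, TimeSpan.FromMinutes(3))). Be aware that 3 minutes of raw WASAPI audio amounts to tens of megabytes of queued bytes.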
I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there), the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;
public cSoundEngine()
{
engine = new XAudio2();
master = new MasteringVoice(engine);
sourceVoices = new List<VoiceChannel>();
}
public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
/**
* Set up and start SourceVoice
*/
NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
SoundStream soundStream = new SoundStream(fileStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source);
sourceVoices.Add(voice);
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it holds 'SourceVoice source', which is the one parameter passed to the constructor!
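For completeness, my reconstruction of that entire class (exactly what the sentence above describes):
class VoiceChannel
{
    public SourceVoice source;

    public VoiceChannel(SourceVoice source)
    {
        this.source = source;
    }
}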
Everything is fine and runs well with up to 5 sounds (total = 5). All you hear is the blissful drone of Planet.wav. Any higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a native error the debugger can't handle). Sadly, there is no error message for us to look at.
From testing:
- Will not crash as long as you do not have more than 5 running SourceVoices.
- Changing the sample rate does not seem to help.
- Setting inputChannels for the master object to a different number makes no difference.
- MasteringVoice seems to say the max number of input voices is 64.
- Making each sfx play from a different .wav file makes no difference.
- Setting the volume for the SourceVoices and/or master makes no difference.
From the XAudio2 API documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.' This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate but please try and explain things in layman terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck has suggested, I have created a data bank which holds the .wav data, and I just reference the single data store with each buffer. This has raised the sound limit to 20 - however, it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
/**
* Holds a single byte array for each sound
*/
Dictionary<eSFX, Byte[]> bank;
string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;
public SoundDataBank()
{
bank = new Dictionary<eSFX, byte[]>();
bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
}
public Byte[] GetSoundData(eSFX sfx)
{
byte[] output = bank[sfx];
return output;
}
}
In SoundEngine we create a SoundBank object (initialised in SoundEngine constructor):
SoundDataBank soundBank;
public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
/**
* sourcevoice will be automatically added to MasteringVoice and engine in the constructor.
*/
byte[] buffer = soundBank.GetSoundData(sfx);
MemoryStream memoryStream = new MemoryStream(buffer);
SoundStream soundStream = new SoundStream(memoryStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
sourceVoices.Add(voice);
return voice;
}
Following this implementation now lets me play up to 20 sound effects - but NOT because we are playing from the sound bank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the constructor of the SoundBank.
I suspect NativeFile is holding a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), they are both running from memory?
Either way, this has quadrupled the limit from before, so it has been incredibly useful - but it requires further work.
I am trying to post a video to Twitter using Tweetinvi library:
byte[] video = DownloadBlobFromUrl(parameters.VideoUrl);
IMedia media = Upload.ChunkUploadBinary(new UploadQueryParameters
{
    Binaries = new List<byte[]> { video },
    MediaType = "video/mp4",
    MediaCategory = "tweet_video",
    MaxChunkSize = VIDEO_MB_CHUNK_SIZE * 1024 * 1024
});
publishParameters.Medias = new List<IMedia> { media };
ITweet tweet = Tweet.PublishTweet(message, publishParameters);
The problem is that publishing fails, unless I add, before publishing, some sort of sleep, like:
await Task.Delay(25000);
With the delay it works. Interestingly, IMedia's member HasBeenUploaded is set to true. I also tried using chunked upload, but with the same result. How can I wait until the video is fully uploaded to Twitter, assuming this is the issue?
I am the developer of Tweetinvi.
The problem you are encountering is a problem of the Twitter UPLOAD API: when an upload completes, it takes between a few milliseconds and one second for their upload service to process it and make it available to you.
From there you have 2 solutions.
Solution 1 (simplicity)
Don't specify the MediaCategory and use the classical Upload as follows:
var videoBinary = File.ReadAllBytes("file_path");
var videoMedia = Upload.UploadVideo(videoBinary);
Tweet.PublishTweet("test", new PublishTweetOptionalParameters()
{
Medias = { videoMedia }
});
This video should be available straight away, but I have experienced times when a delay is required. Therefore I usually add a delay of 500 ms for the Twitter servers to be ready for the incoming Tweet, as sketched below.
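For illustration, a sketch of where that grace period goes (500 ms is just the rule of thumb mentioned above; "file_path" is a placeholder):
var videoBinary = File.ReadAllBytes("file_path");
var videoMedia = Upload.UploadVideo(videoBinary);

// Give the upload service a moment to make the media available
// before referencing it in a Tweet.
Thread.Sleep(500);

Tweet.PublishTweet("test", new PublishTweetOptionalParameters()
{
    Medias = { videoMedia }
});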
Solution 2 (amplify_video)
amplify_video is a more robust solution as it is the solution provided by Twitter to solve the delay problem.
var videoBinary = File.ReadAllBytes(@"C:\Users\linvi\Pictures\mov_bbb.mp4");
var videoMedia = Upload.UploadVideo(videoBinary, "video/mp4", "amplify_video");
var isProcessed = videoMedia.UploadedMediaInfo.ProcessingInfo.State == "succeeded";
var timeToWait = videoMedia.UploadedMediaInfo.ProcessingInfo.CheckAfterInMilliseconds;
while (!isProcessed)
{
Thread.Sleep(timeToWait);
// The second parameter (false) informs Tweetinvi that you are manually awaiting the media to be ready
var mediaStatus = Upload.GetMediaStatus(videoMedia, false);
isProcessed = mediaStatus.ProcessingInfo.State == "succeeded";
timeToWait = mediaStatus.ProcessingInfo.CheckAfterInMilliseconds;
}
I realize that this is complicated, but few people use amplify_video.
In the next release I will add a new method that will do all this logic automatically for you.
If you want to be informed when this feature is released, you can find the work item here: https://github.com/linvi/tweetinvi/issues/347.
I will also provide a new enum for ProcessingInfo.State (https://github.com/linvi/tweetinvi/issues/348).
I hope this answer helps you.
Have a great day.
I found an answer; it is not so elegant, but it works. You have to set the media category to amplify_video. For anyone else with this issue:
byte[] video = DownloadBlobFromUrl(parameters.VideoUrl);
IMedia media = Upload.ChunkUploadBinary(new UploadQueryParameters
{
    Binaries = new List<byte[]> { video },
    MediaType = "video/mp4",
    MediaCategory = "amplify_video",
    MaxChunkSize = VIDEO_MB_CHUNK_SIZE * 1024 * 1024
});
publishParameters.Medias = new List<IMedia> { media };
IUploadedMediaInfo status = Upload.GetMediaStatus(media);
int numberOfTries = 1;
while (status.ProcessingInfo.State != "succeeded" && numberOfTries < VIDEO_UPLOAD_TRY_COUNT)
{
numberOfTries++;
await Task.Delay(VIDEO_UPLOAD_WAIT_SECONDS * 1000);
status = Upload.GetMediaStatus(media);
}
if (status.ProcessingInfo.State == "succeeded")
{
ITweet tweet = Tweet.PublishTweet(message, publishParameters);
return tweet.IdStr;
}
Hello, I am trying to map the system mic's audio to an external sound card's speaker, and the external sound card's mic audio to the system speaker, using the code below:
public void MapForManualCall()
{
try
{
if (db.getResultOnQuery("SELECT [Value] FROM [dbo].[SystemProperties] where property='RecordingEnabled'").Rows[0][0].ToString().Equals("YES"))
{
SystemMic = new NAudio.Wave.WaveInEvent();
SystemMic.DeviceNumber = 0;
SystemMic.WaveFormat = new NAudio.Wave.WaveFormat(44100, NAudio.Wave.WaveIn.GetCapabilities(SystemMic.DeviceNumber).Channels);
SoundcardMic = new NAudio.Wave.WaveInEvent();
SoundcardMic.DeviceNumber = 1;
SoundcardMic.WaveFormat = new NAudio.Wave.WaveFormat(44100, NAudio.Wave.WaveIn.GetCapabilities(SoundcardMic.DeviceNumber).Channels);
//NAudio.Wave.WaveInProvider waveIn = new NAudio.Wave.WaveInProvider(sourceStream);
// Acts like the "listen to this device" option for the mic.
var waveOutReceiver = new NAudio.Wave.WaveOut();
waveOutReceiver.DeviceNumber = 0;
// Used to play the caller's voice on the receiver's speaker.
NAudio.Wave.WaveInProvider waveInProviderCaller = new NAudio.Wave.WaveInProvider(SystemMic);
waveOutReceiver.Init(waveInProviderCaller);
waveOutReceiver.Play();
var waveOutCaller = new NAudio.Wave.WaveOut();
waveOutCaller.DeviceNumber = 1;
// Used to play the receiver's voice on the caller's speaker.
NAudio.Wave.WaveInProvider waveInProviderReceiver = new NAudio.Wave.WaveInProvider(SoundcardMic);
waveOutCaller.Init(waveInProviderReceiver);
waveOutCaller.Play();
//sourceStream.StartRecording();
//waveOut.Play();
// SoundcardMic.DataAvailable += new EventHandler<NAudio.Wave.WaveInEventArgs>(waveIn_DataAvailable1);
// writer1 = new NAudio.Wave.WaveFileWriter(outputFilenameReceiver, SoundcardMic.WaveFormat);
SoundcardMic.StartRecording();
//SystemMic.DataAvailable += new EventHandler<NAudio.Wave.WaveInEventArgs>(waveIn_DataAvailable);
//writer = new NAudio.Wave.WaveFileWriter(outputFilenameCaller, SystemMic.WaveFormat);
SystemMic.StartRecording();
// MapSpeakerNMic();
}
}
catch (Exception ex)
{
MessageBox.Show("Please Check Headphone and Device Cable Connected Properly!");
}
}
The code above works, but there is a delay of 3-4 seconds in the mapping. When I do the same thing using the Listen functionality of Windows 7, it works perfectly. I believe it could be an issue of the read/write buffer sizes, but I don't know how to adjust them...
Latency is the issue here. There is latency at both the recording and the playback stage. You will find it very hard to reduce this to small values without using something like ASIO. However, all the NAudio APIs allow you to specify buffer sizes, so you can see how low you can go before you get dropouts.
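For example, here is a minimal sketch against the same WaveInEvent/WaveOut pipeline as in the question; the figures are starting points to experiment with, not recommendations:
// Capture side: shorter, fewer buffers mean lower latency
// but a higher risk of dropouts.
var mic = new NAudio.Wave.WaveInEvent
{
    DeviceNumber = 0,
    WaveFormat = new NAudio.Wave.WaveFormat(44100, 2),
    BufferMilliseconds = 50, // default is 100
    NumberOfBuffers = 2
};

// Playback side: DesiredLatency defaults to 300 ms.
var speaker = new NAudio.Wave.WaveOut
{
    DeviceNumber = 1,
    DesiredLatency = 100,
    NumberOfBuffers = 2
};

speaker.Init(new NAudio.Wave.WaveInProvider(mic));
speaker.Play();
mic.StartRecording();
Halve the values until you hear glitches, then back off.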
I am using NAudio for a screen recording software I am designing and I need to know if it's possible to not only control the specific application's volume but also display a VU Meter for the application's sound.
I've Googled all over the place and it seems I can only get a VU Meter for the devices currently on my computer and set the volume for those devices.
Even though I am using NAudio, I am open to other solutions.
I asked the question in more detail after this question and have since found the answer, so I will leave it here for those who stumble upon it. Working with NAudio & CSCore has gotten me quite familiar with both, so please ask if you need further assistance.
This block of code uses CSCore and is a modified and commented version of the answer found here: Getting individual windows application current volume output level as visualized in audio Mixer
class PeakClass
{
static int CurrentProcessID = 0000;
private static void Main(string[] args)
{
//Basically gets your default audio device and the session manager attached to it
using (var sessionManager = GetDefaultAudioSessionManager2(DataFlow.Render))
{
using (var sessionEnumerator = sessionManager.GetSessionEnumerator())
{
//This will go through the list of all processes using the device
//that the code acquired two lines above.
foreach (var session in sessionEnumerator)
{
//This block of code will get the peak value (the value needed for a VU Meter)
//for whatever process you need it for. (I believe you can also check by name,
//but I found that less reliable.)
using (var session2 = session.QueryInterface<AudioSessionControl2>())
{
if(session2.ProcessID == CurrentProcessID)
{
using (var audioMeterInformation = session.QueryInterface<AudioMeterInformation>())
{
Console.WriteLine(audioMeterInformation.GetPeakValue());
}
}
}
//Uncomment this block of code if you need the peak values
//of all the processes
//
//using (var audioMeterInformation = session.QueryInterface<AudioMeterInformation>())
//{
// Console.WriteLine(audioMeterInformation.GetPeakValue());
//}
}
}
}
}
private static AudioSessionManager2 GetDefaultAudioSessionManager2(DataFlow dataFlow)
{
using (var enumerator = new MMDeviceEnumerator())
{
using (var device = enumerator.GetDefaultAudioEndpoint(dataFlow, Role.Multimedia))
{
Console.WriteLine("DefaultDevice: " + device.FriendlyName);
var sessionManager = AudioSessionManager2.FromMMDevice(device);
return sessionManager;
}
}
}
}
The following code block will allow you to change the volume of the device using NAudio:
MMDevice VUDevice;
public void SetVolume(float vol)
{
if(vol > 0)
{
VUDevice.AudioEndpointVolume.Mute = false;
VUDevice.AudioEndpointVolume.MasterVolumeLevelScalar = vol;
}
else
{
VUDevice.AudioEndpointVolume.Mute = true;
}
Console.WriteLine(vol);
}
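Note that VUDevice has to be assigned before SetVolume is called; here is a minimal sketch of one way to resolve it, assuming you want the default render endpoint via NAudio's MMDeviceEnumerator:
using NAudio.CoreAudioApi;

// Point VUDevice at the default playback device
// (swap DataFlow/Role to target a different endpoint).
var enumerator = new MMDeviceEnumerator();
VUDevice = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);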
I have code from two different libraries only to answer the question I posted directly, which was how to both set the volume and get VU Meter values (peak values). CSCore and NAudio are very similar, so most of the code here is interchangeable.
I am trying to pair my Wiimotes using the 32Feet API, and I succeed in doing so with the following code.
var client = new InTheHand.Net.Sockets.BluetoothClient();
var devices = client.DiscoverDevices();
var count = (from d in devices
where d.DeviceName.Contains("Nintendo")
select d).Count();
foreach (var device in devices)
{
if (device.DeviceName.Contains("Nintendo"))
{
if (device.InstalledServices.Length > 0)
{
InTheHand.Net.Bluetooth.BluetoothSecurity.RemoveDevice(device.DeviceAddress);
//while it's being removed
Thread.Sleep(2000);
}
device.SetServiceState(InTheHand.Net.Bluetooth.BluetoothService.HumanInterfaceDevice, false);
device.SetServiceState(InTheHand.Net.Bluetooth.BluetoothService.HumanInterfaceDevice, true);
//Here I am confused! What to do to read from stream?
}
}
The line I have commented with "Here I am confused!..." is what is messing everything up. Can someone help me connect to all the Wiimotes one by one and then read from their streams, please?
Don't try to reinvent the wheel; use an existing library: http://wiimotelib.codeplex.com/
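For instance, once the Wiimotes are paired as in the question, WiimoteLib can take over the HID connection. A rough sketch from memory of its 1.x API - verify the exact names against the release you download:
using WiimoteLib;

// Find every paired Wiimote and hook its state events.
var wiimotes = new WiimoteCollection();
wiimotes.FindAllWiimotes(); // throws if no paired Wiimote is found

foreach (Wiimote wm in wiimotes)
{
    wm.WiimoteChanged += (sender, e) =>
    {
        // e.WiimoteState carries buttons, accelerometer, extension data...
        Console.WriteLine(e.WiimoteState.ButtonState.A);
    };
    wm.Connect();
    wm.SetReportType(InputReport.ButtonsAccel, true); // buttons + accelerometer
}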