I am trying to increase the amplitude of the sound wave in my code. I have a buffer consisting of all the bytes needed to make the wave.
Here is my code for the audio playback:
public void AddSamples(byte[] buffer)
{
//somehow adjust the buffer to make the sound louder
bufferedWaveProvider.AddSamples(buffer, 0, buffer.Length);
WaveOut waveout = new WaveOut();
waveout.Init(bufferedWaveProvider);
waveout.Play();
//to make the program more memory efficient
bufferedWaveProvider.ClearBuffer();
}
You could convert to an ISampleProvider and then try to amplify the signal by passing it through a VolumeSampleProvider with a gain > 1.0. However, you could end up with hard clipping if any of the amplified samples go above 0 dB (the maximum amplitude the format can represent).
WaveOut waveout = new WaveOut();
var volumeSampleProvider = new VolumeSampleProvider(bufferedWaveProvider.ToSampleProvider());
volumeSampleProvider.Volume = 2.0f; // double the amplitude of every sample - may go above 0dB
waveout.Init(volumeSampleProvider);
waveout.Play();
A better solution would be to use a dynamic range compressor effect, but NAudio does not come with one out of the box.
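If you want to handle the clipping risk yourself, a simple hard limiter can be written as a custom ISampleProvider that applies a gain and clamps each sample to the valid [-1.0, 1.0] range before playback. This is only a minimal sketch; the class name and default gain are mine, not part of NAudio:
using System;
using NAudio.Wave;

// Hypothetical gain + hard limiter stage (not part of NAudio).
public class LimitingSampleProvider : ISampleProvider
{
    private readonly ISampleProvider source;

    public LimitingSampleProvider(ISampleProvider source)
    {
        this.source = source;
    }

    // amplification factor applied to every sample
    public float Gain { get; set; } = 2.0f;

    public WaveFormat WaveFormat => source.WaveFormat;

    public int Read(float[] buffer, int offset, int count)
    {
        int samplesRead = source.Read(buffer, offset, count);
        for (int i = 0; i < samplesRead; i++)
        {
            float amplified = buffer[offset + i] * Gain;
            // clamp to [-1, 1] so over-amplified samples clip predictably instead of wrapping
            buffer[offset + i] = Math.Max(-1.0f, Math.Min(1.0f, amplified));
        }
        return samplesRead;
    }
}
You would then wrap bufferedWaveProvider.ToSampleProvider() in this provider before calling waveout.Init, exactly as with the VolumeSampleProvider above.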
I had a similar problem too, but I was able to solve it with the help of this link:
http://mark-dot-net.blogspot.hu/2009/10/playback-of-sine-wave-in-naudio.html
If you know all the details about your sound, that would be helpful, I think.
I'm trying to record the speaker output to a WAV file using the WasapiLoopbackCapture class.
I notice that if the speaker is initially silent, the WAV file starts recording when the first sound is emitted from the speaker, for example after 5 or 10 seconds from the beginning of the recording.
Is there a way to record also the initial silence in the WAV file?
Here is the code I wrote:
WasapiLoopbackCapture _speakerWave;
protected WaveFileWriter _speakerWriter;
_speakerWave = new WasapiLoopbackCapture();
_speakerWave.DataAvailable += (s, a) =>
{
_speakerWriter.Write(a.Buffer, 0, a.BytesRecorded);
};
_speakerWriter = new WaveFileWriter("test.wav", _speakerWave.WaveFormat);
_speakerWave.StartRecording();
Thanks
From the NAudio WasapiLoopbackCapture documentation:
Now there is one gotcha with WasapiLoopbackCapture. If no audio is playing whatsoever, then the DataAvailable event won't fire. So if you want to record "silence", one simple trick is to simply use an NAudio playback device to play silence through that device for the duration of time you're recording. Alternatively, you could insert silence yourself when you detect gaps in the incoming audio.
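A minimal sketch of the first suggestion, assuming a NAudio version that includes SilenceProvider (otherwise a trivial IWaveProvider that fills its buffer with zeros does the same job):
// Play silence on the default render device so that WasapiLoopbackCapture
// keeps firing DataAvailable even when nothing else is playing.
var silencePlayer = new WasapiOut();
silencePlayer.Init(new SilenceProvider(_speakerWave.WaveFormat));
silencePlayer.Play();

_speakerWave.StartRecording();

// ...later, when stopping:
// silencePlayer.Stop();
// _speakerWave.StopRecording();
// _speakerWriter.Dispose();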
Requirement:
I am trying to capture audio/video of the Windows screen using the SharpAvi example, combined with the loopback audio stream from the NAudio example.
I am using C# and WPF to achieve this.
I'm using a couple of NuGet packages:
SharpAvi - for video capturing
NAudio - for audio capturing
What has been achieved:
I have successfully integrated the provided samples, and I'm now trying to capture the audio through NAudio and write it into the SharpAvi video stream so that the recording contains both video and audio.
Issue:
Whatever I write to the audio stream of the SharpAvi video, the output file contains only video and the audio is empty.
Checking audio alone to make sure:
But when I capture the audio to a separate file called "output.wav", it is recorded as expected and I can hear the audio. So, for now, I'm concluding that the issue lies only in the integration with video via SharpAvi:
writterx = new WaveFileWriter("Out.wav", audioSource.WaveFormat);
Full code to reproduce the issue:
https://drive.google.com/open?id=1H7Ziy_yrs37hdpYriWRF-nuRmmFbsfe-
Code glimpse from Recorder.cs
NAudio Initialization:
audioSource = new WasapiLoopbackCapture();
audioStream = CreateAudioStream(audioSource.WaveFormat, encodeAudio, audioBitRate);
audioSource.DataAvailable += audioSource_DataAvailable;
Capturing the audio bytes and writing them to the SharpAvi audio stream:
private void audioSource_DataAvailable(object sender, WaveInEventArgs e)
{
var signalled = WaitHandle.WaitAny(new WaitHandle[] { videoFrameWritten, stopThread });
if (signalled == 0)
{
audioStream.WriteBlock(e.Buffer, 0, e.BytesRecorded);
audioBlockWritten.Set();
Debug.WriteLine("Bytes: " + e.BytesRecorded);
}
}
Can you please help me out with this? Any other way to achieve my requirement is also welcome.
Let me know if any further details are needed.
Obviously the author doesn't need this anymore, but since I ran into the same problem, others might find it useful.
The problem in my case was that I was receiving audio every 0.1 seconds and attempted to write both new video and audio at the same time. Getting the new video data (taking a screenshot) took too long, so each frame ended up being added every 0.3 seconds instead of 0.1. That caused the audio stream to get out of sync with the video and the file not to be played properly by video players (or whatever the exact symptom was). After optimizing the code a little so that one iteration stays within 0.1 seconds, the problem was gone.
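For context, this is roughly how the video side of the SharpAvi sample interleaves with the audio callback shown in the question; the screenshot helper is a placeholder, and the whole loop body has to fit inside the frame interval (0.1 s at 10 fps) for the two streams to stay in sync:
// Video thread (sketch based on the SharpAvi sample's handshake).
while (!stopThread.WaitOne(0))
{
    var frame = GetScreenshot(); // hypothetical capture helper returning a byte[]
    videoStream.WriteFrame(true, frame, 0, frame.Length);

    // let the audio callback write its matching block, then wait for it
    videoFrameWritten.Set();
    WaitHandle.WaitAny(new WaitHandle[] { audioBlockWritten, stopThread });
}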
I have been looking for a way to convert live frames into video. I found the ffmpeg wrapper library (NReco.VideoConverter) to convert live frames to video, but the problem is that writing each frame to the ConvertLiveMediaTask (asynchronous live media conversion task) takes too long.
I have an event that provides raw frames (1920x1080, 25 fps) from an IP camera. Whenever I get a frame, I do the following:
//Image available event fired
//...
//...
// Record video is true
if(record)
{
//////////////############# Time taking part ##############//////////////////////
var bd = frameBmp.LockBits(new Rectangle(0, 0, frameBmp.Width, frameBmp.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
var buf = new byte[bd.Stride * frameBmp.Height];
Marshal.Copy(bd.Scan0, buf, 0, buf.Length);
// write to ConvertLiveMediaTask
convertLiveMediaTask.Write(buf, 0, buf.Length); // ffMpegTask
frameBmp.UnlockBits(bd);
//////////////////////////////////////////////////////////////////////////////////
}
Since the above part takes a lot of time, I am losing frames.
//Stop recording
convertLiveMediaTask.Stop(); //ffMpegTask
For stopping the recording I have used a BackgroundWorker, because saving the media to the file takes too much time.
My question is: how can I write the frames to the ConvertLiveMediaTask in a faster way? Is there any possibility of writing them in the background?
Please give me suggestions.
I'm sure most of the time is spent encoding and compressing the raw bitmaps (if you encode them with H.264 or something similar) in FFMpeg, because of the Full HD resolution (NReco.VideoConverter is a wrapper around FFMpeg). You must know that real-time encoding of Full HD is a VERY CPU-intensive task; if your computer is not able to keep up, you may try to play with the FFMpeg encoding parameters (decrease video quality / compression ratio, etc.) or use an encoder that requires fewer CPU resources.
If you need to record a live stream of limited duration, you can split video capturing and compressing/saving into two threads.
Use, for example, a ConcurrentQueue to buffer live frames (enqueue) on one thread without delay, while another thread saves those frames at whatever pace it can manage (dequeue). This way you will not lose frames, as shown in the sketch below.
Obviously you will put some strain on RAM, and after stopping the live video there will be a delay while the saving thread finishes.
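A minimal sketch of that split, reusing convertLiveMediaTask from the question; the capturing flag and the way frames arrive are placeholders for your own code:
using System.Collections.Concurrent;
using System.Threading;

var frames = new ConcurrentQueue<byte[]>();
var capturing = true;

// Producer: call this from the image-available event; it never blocks on FFMpeg.
void OnFrameCaptured(byte[] rawFrame) => frames.Enqueue(rawFrame);

// Consumer: a dedicated thread that feeds FFMpeg at whatever pace it can manage.
var saverThread = new Thread(() =>
{
    while (capturing || !frames.IsEmpty)
    {
        if (frames.TryDequeue(out var buf))
            convertLiveMediaTask.Write(buf, 0, buf.Length); // same call as in the question
        else
            Thread.Sleep(1); // queue is momentarily empty
    }
    convertLiveMediaTask.Stop();
});
saverThread.Start();

// when recording ends: set capturing = false and call saverThread.Join()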
I'm streaming music from Spotify with the C# wrapper ohLibSpotify and playing it with NAudio. Now I'm trying to create a spectrum visualization for the data I receive.
When I get data from libspotify, the following callback gets called:
public void MusicDeliveryCallback(SpotifySession session, AudioFormat format, IntPtr frames, int num_frames)
{
//handle received music data from spotify for streaming
//format: audio format for streaming
//frames: pointer to the byte-data in storage
var size = num_frames * format.channels * 2;
if (size != 0)
{
_copiedFrames = new byte[size];
Marshal.Copy(frames, _copiedFrames, 0, size); //Copy Pointer Bytes to _copiedFrames
_bufferedWaveProvider.AddSamples(_copiedFrames, 0, size); //adding bytes from _copiedFrames as samples
}
}
Is it possible to analyze the data I pass to the BufferedWaveProvider to create a real-time visualization? And can somebody explain how?
The standard tool for transforming time-domain signals such as audio samples into frequency-domain information is the Fourier transform.
Grab the fast Fourier transform library of your choice and throw it at your data; you will get a decomposition of the signal into its constituent frequencies. You can then take that data and visualize it however you like. Spectrograms are particularly easy; you just need to plot the magnitude of each frequency component against frequency and time.
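NAudio itself ships a small DSP helper for this. As a rough sketch (the block size, the windowing choice and the GetLatestSamples helper are mine; you would feed it float samples converted from the bytes received in MusicDeliveryCallback):
using System;
using NAudio.Dsp; // FastFourierTransform, Complex

// one channel of float samples; the length must be a power of two
float[] samples = GetLatestSamples(1024); // hypothetical helper

var fftBuffer = new Complex[samples.Length];
for (int i = 0; i < samples.Length; i++)
{
    // apply a window to reduce spectral leakage
    fftBuffer[i].X = (float)(samples[i] * FastFourierTransform.HammingWindow(i, samples.Length));
    fftBuffer[i].Y = 0f;
}

// in-place FFT; the second argument is log2 of the buffer length
FastFourierTransform.FFT(true, (int)Math.Log(samples.Length, 2.0), fftBuffer);

// magnitude per frequency bin (only the first half is meaningful for real input)
var magnitudes = new float[samples.Length / 2];
for (int i = 0; i < magnitudes.Length; i++)
{
    magnitudes[i] = (float)Math.Sqrt(fftBuffer[i].X * fftBuffer[i].X + fftBuffer[i].Y * fftBuffer[i].Y);
}
Plotting the magnitudes of every incoming block over time gives you the spectrogram mentioned above.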
In my Silverlight application, I read audio data from a File with a ZipInputStream, then store it in a MemoryStream. Here's the code I'm using:
byte[] buf = new byte[1024];
MemoryStream memoryStream = new MemoryStream();
int len;
while ((len = zipInputStream.Read(buf, 0, buf.Length)) > 0)
{
memoryStream.Write(buf, 0, len);
}
// Reset the position for reading.
memoryStream.Position = 0;
// Check how large the byte[] is.
textBox.Text = memoryStream.ToArray().Length.ToString();
MediaElement me = new MediaElement();
me.SetSource(memoryStream);
me.Play();
This code partly works; the song from the input file starts playing. In addition, the byte[] always has the same length for the same song. I take this to mean that the song is being completely read each time.
However, my problem is that the audio randomly stops playing at a different point each run through. The song has not yet fully played, either. I'm not exactly sure why this happens.
If anyone knows, I'd like to know why this is happening. I'd also like to know if there's something wrong with my code, or if there's a different way I should go about storing the audio (that doesn't involve storing a file on the user's computer).
I was finally able to find a solution. By making the MediaElement and MemoryStream global variables (class-level fields rather than locals), the song played through completely each time. I'm still not 100% sure what caused this error, although my best guess is that the problem was caused by the garbage collector collecting the stream once the method returned.
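As a sketch of that fix (field names and the helper are illustrative), the point is simply that the stream and the MediaElement must stay referenced for as long as the song plays:
// Fields rather than locals, so the GC cannot collect them while playback is running.
private MemoryStream _memoryStream;
private MediaElement _mediaElement;

private void PlayFromZip()
{
    _memoryStream = ReadZipEntryIntoMemory(); // hypothetical helper doing the read loop above
    _memoryStream.Position = 0;

    _mediaElement = new MediaElement();
    _mediaElement.SetSource(_memoryStream);
    _mediaElement.Play();
}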