Can someone give an example of how to record, play, and save a .wav file, and also how to encode its PCM data (to u-law, for example)? I would like to create an RTP stream. Thanks.
NAudio can be used to record audio, convert it to u-law and save it to a WAV file. The included NAudioDemo application demonstrates how to perform all three tasks.
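The u-law companding step itself (which NAudio's MuLawEncoder handles for you) is small enough to sketch. Here is the G.711 encoding of one sample, written in Python purely for illustration; the bit-twiddling ports directly to C#:

```python
BIAS = 0x84   # 132, added so small samples land in segment 0
CLIP = 32635  # largest magnitude that survives the bias without overflow

def linear_to_ulaw(sample):
    """Encode one 16-bit signed PCM sample as a G.711 u-law byte."""
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    # Find the segment (exponent): highest set bit above bit 7.
    exponent = 7
    mask = 0x4000
    while (magnitude & mask) == 0 and exponent > 0:
        exponent -= 1
        mask >>= 1
    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    # u-law bytes are transmitted complemented.
    return ~(sign | (exponent << 4) | mantissa) & 0xFF
```

Silence (sample 0) encodes to 0xFF and full-scale positive to 0x80, which is the standard G.711 mapping.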
For creating an RTP stream, you could try RTP.NET
I am converting an RTSP live stream to a .wav file using ffmpeg. The conversion works, but I also want to get the audio as a byte stream in parallel, while the RTSP-to-.wav conversion is running. Thanks in advance.
1) Launch ffmpeg as a Process, with the option to decode to standard output in the arguments (usually a '-' at the end of the argument list).
2) Read the standard output from that Process according to the output format being written.
This keeps the output in memory and lets you consume it as soon as it has been written to stdout.
See this question for an example command for getting raw audio out of ffmpeg:
Can ffmpeg convert audio to raw PCM? If so, how?
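Putting the two steps together, here is a minimal sketch (Python for brevity; the same pattern works with System.Diagnostics.Process in C#). The RTSP URL and format flags are placeholders to adapt to your stream:

```python
import subprocess

def stream_stdout(cmd, chunk_size=4096):
    """Launch a process and yield its stdout in chunks as they arrive."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    try:
        while True:
            chunk = proc.stdout.read(chunk_size)
            if not chunk:
                break
            yield chunk
    finally:
        proc.stdout.close()
        proc.wait()

# Hypothetical invocation: decode an RTSP stream to raw 16-bit little-endian
# PCM on stdout (the trailing '-' sends the output to stdout).
ffmpeg_cmd = ["ffmpeg", "-i", "rtsp://example.com/stream",
              "-f", "s16le", "-ar", "44100", "-ac", "2", "-"]
# for pcm_chunk in stream_stdout(ffmpeg_cmd):
#     ...consume the bytes (buffer them, feed a player, etc.)
```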
You can't expect a player to understand the data without the WAV headers, because the sample size and sample rate are not defined anywhere in a raw PCM stream.
Finally, I think you'll find it's better to output WAV format and read WAV format, which contains everything a player needs to play back the audio.
You would then be providing WAV audio instead of raw PCM, which works in NAudio as well as many other players quite easily.
In my Windows Phone game, I have to use lots of sounds. I have found that Windows Phone does not support MP3 files here, so I need to use WAV files. An MP3 file that is just 500 KB becomes at least 2.5 MB when converted to .wav, which is eating up my app and making it unnecessarily large.
Does anyone know how I can use the MP3 files instead? In my solution I have an Assets folder, and all of the .wav files are located inside it.
Here is how I am doing it:
    SoundEffect effect;

Inside the constructor:

    {
        ...
        var soundFile = "Assets/Jump.wav";
        Stream stream = TitleContainer.OpenStream(soundFile);
        effect = SoundEffect.FromStream(stream);
    }

And elsewhere in the code:

    effect.Play();
Is there a better approach? In another thread I read that doing it this way is not good practice, since it creates objects and uses up memory. Please suggest what to do: how can I use MP3 files, and how should I write better code for working with sound files?
You can use BackgroundAudioPlayer to play your WAV and MP3 files; the SoundEffect class cannot play MP3 data.
Go through this; it's an entire sample app on its own:
Background Audio WP
To use an MP3 file, you would have to decode the MP3 into PCM in memory and pass the resulting stream to the SoundEffect.FromStream method.
Another thing you could try is encoding the WAV files as ADPCM, which typically compresses them at a ratio of about 4:1. Even if you can't use such a stream directly, decoding ADPCM is much more straightforward than decoding an MP3 file.
One more way to save space is to convert the uncompressed WAV files to mono and a lower sampling rate. You can do listening tests on the converted sounds to check whether the loss in quality is acceptable.
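As a sketch of the mono conversion (Python's standard wave module here, just to show the arithmetic; it assumes 16-bit stereo input, and the same averaging works in any language):

```python
import array
import wave

def stereo_to_mono(src_path, dst_path):
    """Downmix a 16-bit stereo WAV to mono by averaging the two channels."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        rate = src.getframerate()
        # Interleaved L, R, L, R ... 16-bit signed samples.
        samples = array.array("h", src.readframes(src.getnframes()))
    mono = array.array("h", ((samples[i] + samples[i + 1]) // 2
                             for i in range(0, len(samples), 2)))
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(rate)
        dst.writeframes(mono.tobytes())
```

Halving the channel count alone halves the file; dropping the sample rate as well multiplies the saving.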
Having failed to find a way to programmatically convert a CCITT u-Law WAV file to a PCM file (which SoundPlayer demands), as discussed in this question: How to play non-PCM file or convert it to PCM on the fly?
(SoX looks like it might work, but I can't find any examples of using it from C# to convert a CCITT u-Law .wav file to a "regular" PCM .wav file),
I wonder if I'm going about this the wrong way: maybe I should find a way to play CCITT u-Law .wav files directly, rather than trying to convert them to PCM .wav files.
Does anybody know whether this is possible? SoundPlayer always says, "Sound API only supports playing PCM wave files", so maybe there's another API I can use?
Note: Alvas.Audio is also "not an option" due to it not being free or open source.
The way to do it is to use newkie's code at: http://www.codeproject.com/Articles/175030/PlaySound-A-Better-Way-to-Play-Wav-Files-in-C?msg=4366037#xx4366037xx
In my case, at least, I had to change all of the lowercase x's to uppercase x's, though, to get it to work.
It would be great if you could tell me how I can save a byte[] to a WAV file. Sometimes I need to set a different sample rate, number of bits, or number of channels.
Thanks for your help.
You have to set up the WAV header first, which contains the sample rate, file size, sample size, number of channels (mono or stereo), and so on. Then write the actual audio data after it.
Finally, write it all out as a binary file with a '.wav' file extension.
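As a sketch of what that looks like, here is a minimal canonical 44-byte PCM header written in Python; the field layout is fixed by the RIFF/WAVE format, so the same offsets apply in any language:

```python
import struct

def write_wav(path, pcm_bytes, sample_rate, bits_per_sample, channels):
    """Write raw PCM bytes out as a .wav file with a canonical 44-byte header."""
    block_align = channels * bits_per_sample // 8
    byte_rate = sample_rate * block_align
    header = b"RIFF" + struct.pack("<I", 36 + len(pcm_bytes)) + b"WAVE"
    # fmt chunk: size 16, format tag 1 (uncompressed PCM), then the format fields.
    header += b"fmt " + struct.pack("<IHHIIHH", 16, 1, channels, sample_rate,
                                    byte_rate, block_align, bits_per_sample)
    header += b"data" + struct.pack("<I", len(pcm_bytes))
    with open(path, "wb") as f:
        f.write(header + pcm_bytes)
```

Any standard WAV reader can then recover the sample rate, bit depth, and channel count from the header.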
I am trying to read bytes from a WAV file and send them across to a stream, but the audio plays slowly.
Could you please help me understand the right way to populate the byte[]?
Thanks for your help.
Are you using NAudio to both read the WAV file and play the data?
You need to make sure you use the same WaveFormat at both ends.
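The "plays slowly" symptom is usually exactly such a format mismatch: the player consumes bytes at its own configured rate, so data produced at a higher sample rate is stretched out proportionally. A tiny illustration of the arithmetic (Python, values purely for example):

```python
def bytes_per_second(sample_rate, channels, bits_per_sample):
    """PCM data rate: samples per second x channels x bytes per sample."""
    return sample_rate * channels * bits_per_sample // 8

# File written as 44.1 kHz stereo 16-bit, but played back as if it
# were 22.05 kHz: the player consumes half as many bytes per second,
# so every sound lasts twice as long and drops an octave.
file_rate = bytes_per_second(44100, 2, 16)   # rate the data was written at
play_rate = bytes_per_second(22050, 2, 16)   # rate the player consumes it at
slowdown = file_rate / play_rate             # 2.0 => plays at half speed
```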