It would be great if you could tell me how I could save a byte[] to a .wav file. Sometimes I need to set a different sample rate, number of bits, and number of channels.
Thanks for your help.
You have to set up the WAV header, which contains info about the sample rate, file size, sample size, stereo or mono, etc. Then write the actual audio data.
Then you just have to write it all out as a binary file with a '.wav' file extension.
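For example, here is a minimal sketch of writing the canonical 44-byte PCM header followed by the data (it assumes your byte[] already holds raw PCM samples matching the sample rate, bit depth, and channel count you pass in; the WriteWav name is just for illustration):

using System;
using System.IO;
using System.Text;

// Writes raw PCM bytes to a .wav file with a canonical 44-byte RIFF header.
static void WriteWav(string path, byte[] data, int sampleRate, short bitsPerSample, short channels)
{
    short blockAlign = (short)(channels * (bitsPerSample / 8));
    int byteRate = sampleRate * blockAlign;

    using (var bw = new BinaryWriter(File.Create(path)))
    {
        bw.Write(Encoding.ASCII.GetBytes("RIFF"));
        bw.Write(36 + data.Length);                  // RIFF chunk size
        bw.Write(Encoding.ASCII.GetBytes("WAVE"));

        bw.Write(Encoding.ASCII.GetBytes("fmt "));
        bw.Write(16);                                // fmt chunk size for PCM
        bw.Write((short)1);                          // audio format: 1 = PCM
        bw.Write(channels);
        bw.Write(sampleRate);
        bw.Write(byteRate);                          // bytes per second
        bw.Write(blockAlign);                        // bytes per sample frame
        bw.Write(bitsPerSample);

        bw.Write(Encoding.ASCII.GetBytes("data"));
        bw.Write(data.Length);                       // data chunk size
        bw.Write(data);
    }
}

BinaryWriter writes little-endian values, which is what the WAV format expects, so no byte swapping is needed.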
Thanks in advance. I am converting an RTSP live stream to a .wav file using ffmpeg. The conversion works fine, but I also want to get the .wav data as a byte stream in parallel, while the RTSP-to-.wav conversion is running.
1) Launch ffmpeg as a Process with the option to decode to standard output in the arguments (usually a '-' at the end of the arguments).
2) Read the standard output from that Process in accordance with the output format being written.
This keeps the output in memory and allows you to consume it as soon as it has been written to stdout.
See this question for an example command of how to get raw audio from ffmpeg:
Can ffmpeg convert audio to raw PCM? If so, how?
You can't expect a player to understand the data structures without the WAV headers, because the sample size and rate are not defined in a raw PCM stream.
Finally, I think you'll see that it's better to output WAV format and read the WAV format, which will contain everything a player needs to play back the audio.
You would then provide WAV audio instead of PCM, which will work in NAudio as well as many other players quite easily.
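Putting steps 1 and 2 together, a rough C# sketch might look like the following (the RTSP URL is a placeholder, and '-f wav -' asks ffmpeg to write WAV rather than raw PCM to stdout, as recommended above; adjust the arguments to your source):

using System.Diagnostics;
using System.IO;

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-i rtsp://example/stream -f wav -",  // placeholder URL; '-' sends output to stdout
    UseShellExecute = false,
    RedirectStandardOutput = true
};

using (var ffmpeg = Process.Start(psi))
using (Stream stdout = ffmpeg.StandardOutput.BaseStream)
{
    var buffer = new byte[4096];
    int read;
    while ((read = stdout.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Consume the bytes here, e.g. hand them to NAudio or your own WAV parser.
    }
    ffmpeg.WaitForExit();
}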
In my Windows Phone gaming apps I have to use lots of sounds. I have seen that Windows Phone does not support MP3 files, so I need to use WAV files. Any MP3 file that is just 500 KB in MP3 format becomes at least 2.5 MB when converted to .wav. It's eating up my app's size and unnecessarily making my app bigger.
Does anyone know how I can use the MP3 files? In my solution I have an Assets folder, and all the .wav files are located inside it.
Here is how I am doing it:
SoundEffect effect;
Inside the constructor:
{
...
var soundFile = "Assets/Jump.wav";
Stream stream = TitleContainer.OpenStream(soundFile);
effect = SoundEffect.FromStream(stream);
}
And later in the code:
effect.Play();
Is there a better approach? In some threads I have read that doing it this way is not good practice, because it creates objects and uses up memory. Please suggest what to do: how do I add MP3 files, and how do I write better code for working with sound files?
You can use BackgroundAudioPlayer to play your WAV and MP3 files; the SoundEffect class cannot play MP3 data.
Go through this; it's an entire app on its own:
Background Audio WP
To use an MP3 file you would have to decode the MP3 into PCM in memory, and use that resulting stream in the SoundEffect.FromStream method.
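Here is a rough sketch of that idea using NAudio's Mp3FileReader to decode into a WAV stream held in memory. This is illustrative only: NAudio may not be available on your target Windows Phone version, in which case you'd need a managed decoder that is (the file path is a placeholder):

using System.IO;
using Microsoft.Xna.Framework.Audio;
using NAudio.Utils;
using NAudio.Wave;

static SoundEffect LoadMp3AsSoundEffect(string path)
{
    var ms = new MemoryStream();
    using (var mp3 = new Mp3FileReader(path))        // decodes MP3 frames to PCM on Read
    using (var writer = new WaveFileWriter(new IgnoreDisposeStream(ms), mp3.WaveFormat))
    {
        var buffer = new byte[4096];
        int read;
        while ((read = mp3.Read(buffer, 0, buffer.Length)) > 0)
            writer.Write(buffer, 0, read);           // wrap the decoded PCM in a WAV container in memory
    }
    ms.Position = 0;
    return SoundEffect.FromStream(ms);               // SoundEffect expects a PCM WAV stream
}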
Another thing you could try is encoding the WAV files as ADPCM. This usually compresses the wave file at a ratio of about 4:1. If you can't use the stream directly, decoding ADPCM is much more straightforward than decoding an MP3 file.
And one more thing you could try to save space is converting the uncompressed wave files to mono and a lower sample rate. You can perform listening tests on the sounds after the conversion to check whether the loss in quality is acceptable.
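For the mono / lower sample rate option, the conversion would normally be a one-off, build-time step on your development machine rather than something you do on the phone. A sketch with NAudio (assuming NAudio is installed there and the input is stereo PCM; the file names and the 22050 Hz target are just examples):

using NAudio.Wave;
using NAudio.Wave.SampleProviders;

using (var reader = new AudioFileReader("Jump.wav"))                      // reads samples as 32-bit float
{
    ISampleProvider samples = reader;
    if (reader.WaveFormat.Channels == 2)
        samples = new StereoToMonoSampleProvider(reader);                 // mix left + right down to one channel
    var resampled = new WdlResamplingSampleProvider(samples, 22050);      // drop the sample rate
    WaveFileWriter.CreateWaveFile16("Jump_small.wav", resampled);         // write 16-bit PCM output
}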
I have a Stream of audio data coming from my mic for which I would like to display the current recording volume level. From what I've gathered, I need to store X number of bytes in an array and then I can use that data to process that one sample from the recording. How do I determine what X is, and what do I need to do to get the volume level from that data?
I'm working in C#, but even pseudocode would be very helpful.
WAV (LPCM) data stores raw amplitude samples, so each sample value is the instantaneous amplitude. Average the magnitude across a window of time and you get an average volume.
Things to watch out for:
The above only applies to uncompressed LPCM data. WAV files can be compressed, in which case you'd need to implement whatever decoder is needed to get uncompressed data to work with.
WAV files can be either 8-bit or 16-bit
WAV files do have some header info to skip past, the file format is well-documented (https://ccrma.stanford.edu/courses/422/projects/WaveFormat/)
Watch your endianness when reading the header (WAV data is little-endian)
Here's some sample .NET code for reading WAV files:
http://www.codeproject.com/KB/audio-video/WaveEdit.aspx
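As a rough C# sketch (assuming uncompressed 16-bit mono LPCM; the helper name is made up): X is however many bytes cover the time window you want to average over, e.g. for 50 ms at 44100 Hz that's 44100 * 0.05 samples * 2 bytes each. RMS is one common way to turn that window into a level:

using System;

// Returns a level between 0.0 (silence) and roughly 1.0 (full scale)
// for one buffer of 16-bit little-endian mono PCM.
static double GetRmsLevel(byte[] buffer, int bytesRecorded)
{
    double sumSquares = 0;
    int sampleCount = bytesRecorded / 2;
    for (int i = 0; i < sampleCount; i++)
    {
        short sample = BitConverter.ToInt16(buffer, i * 2);   // one 16-bit sample
        double normalized = sample / 32768.0;                 // scale to -1..1
        sumSquares += normalized * normalized;
    }
    return Math.Sqrt(sumSquares / sampleCount);
}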
I have the necessary chunk header bytes for a wave file stored in a text file. What I'd like to do is create a new .wav whose data length will vary, and write 50 ms / 10 kHz signals into it (these are stored in a separate file). How can I accomplish this with .NET/C#?
-Mickey
Use the System.IO.BinaryReader and System.IO.BinaryWriter classes to read the data from your signal files and write the binary data types to a file, as per the WAV file specification.
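A sketch of that approach (file names are placeholders; it assumes the header file holds the raw 44 header bytes in the canonical PCM layout and the signal file holds raw PCM bytes in the format the header describes, so if your header is stored as text you'd need to parse it into bytes first):

using System;
using System.IO;

byte[] header = File.ReadAllBytes("header.bin");
byte[] signal = File.ReadAllBytes("signal.raw");

// Since the data length varies, patch the sizes in the header to match:
// the RIFF chunk size lives at offset 4, the data chunk size at offset 40.
BitConverter.GetBytes(36 + signal.Length).CopyTo(header, 4);
BitConverter.GetBytes(signal.Length).CopyTo(header, 40);

using (var bw = new BinaryWriter(File.Create("output.wav")))
{
    bw.Write(header);
    bw.Write(signal);
}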
Can someone give an example of how to record, play, save, and also encode a .wav file as a PCM file (u-law encoding, for example)? I would like to create an RTP stream. Thanks.
NAudio can be used to record audio, convert it to u-law and save it to a WAV file. The included NAudioDemo application demonstrates how to perform all three tasks.
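For the u-law conversion step specifically, a sketch along these lines should work (file names are placeholders; it assumes the input is already 16-bit PCM at 8 kHz mono, which is what the ACM u-law codec expects, so resample first if it isn't):

using NAudio.Wave;

using (var reader = new WaveFileReader("input.wav"))
{
    var muLaw = WaveFormat.CreateMuLawFormat(reader.WaveFormat.SampleRate,
                                             reader.WaveFormat.Channels);
    using (var converter = new WaveFormatConversionStream(muLaw, reader))
    {
        WaveFileWriter.CreateWaveFile("output_mulaw.wav", converter);   // u-law encoded WAV
    }
}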
For creating an RTP stream, you could try RTP.NET