I'm trying to reimplement in C# an existing Matlab 8-band equalizer GUI that I created for a project last week. In Matlab, a song loads into a dynamic array in memory, where it can be freely manipulated, and playing it is as easy as sound(array).
I found the NAudio library, which conveniently already has MP3 extractors, players, and both convolution and FFT defined. I was able to open the MP3 and read all its data into an array (though I'm not positive I'm going about that correctly). However, even after looking through a couple of examples, I'm struggling to figure out how to write the array back into a stream in such a way that it plays properly (I don't need to write to a file).
Following the examples I found, I read my MP3s like this:
private byte[] CreateInputStream(string fileName)
{
    byte[] buffer;
    if (fileName.EndsWith(".mp3"))
    {
        using (WaveStream mp3Reader = new Mp3FileReader(fileName))
        {
            songFormat = mp3Reader.WaveFormat; // songFormat is a class field
            buffer = new byte[mp3Reader.Length];
            // Read is not guaranteed to fill the buffer in a single call
            int offset = 0;
            while (offset < buffer.Length)
            {
                int read = mp3Reader.Read(buffer, offset, buffer.Length - offset);
                if (read == 0) break;
                offset += read;
            }
        }
    }
    else
    {
        throw new InvalidOperationException("Unsupported file format");
    }
    return buffer;
}
Now I have an array of bytes presumably containing raw audio data, which I intend to eventually convert to floats so as to run it through the DSP module. Right now, however, I'm simply trying to see whether I can play the array of bytes at all.
Stream outStream = new MemoryStream();
WaveFileWriter wfr = new WaveFileWriter(outStream, songFormat);
// outputStream is the byte array returned by CreateInputStream, stored in a class field
wfr.Write(outputStream, 0, outputStream.Length);
WaveFileReader wr = new WaveFileReader(outStream);
volumeStream = new WaveChannel32(wr);
waveOutDevice.Init(volumeStream);
waveOutDevice.Play();
Right now I'm getting an error thrown in WaveFileReader(outStream) which says it can't read past the end of the stream. I suspect that's not the only thing I'm not doing correctly. Any insights?
Your code isn't working because you never close the WaveFileWriter, so its headers aren't written correctly; you would also need to rewind the MemoryStream before reading it back.
However, there is no need to write a WAV file if you just want to play back an array of bytes. Use a RawSourceWaveStream and pass in your MemoryStream.
You may also find the AudioFileReader class more suitable to your needs as it will provide the samples as floating point directly, and allow you to modify the volume.
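For example, a minimal sketch of the RawSourceWaveStream approach (rawBytes and songFormat stand in for your decoded byte array and its WaveFormat):

var memStream = new MemoryStream(rawBytes);
var rawStream = new RawSourceWaveStream(memStream, songFormat); // no WAV header needed
waveOutDevice.Init(rawStream);
waveOutDevice.Play();

And the AudioFileReader route, which decodes straight to 32-bit float samples:

var reader = new AudioFileReader(fileName);
reader.Volume = 0.8f; // built-in volume control; 1.0f is full scale
waveOutDevice.Init(reader);
waveOutDevice.Play();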
I have the following constructor, which successfully plays pink noise directly to my audio output device:
public WAVEManager(string inputFileName, string outputFileName)
{
    IWavePlayer outputDevice = new WaveOutEvent();
    SignalGenerator pinkNoiseGenerator = new SignalGenerator();
    pinkNoiseGenerator.Type = SignalGeneratorType.Pink;
    outputDevice.Init(pinkNoiseGenerator);
    outputDevice.Play();
    // Wait for 10 seconds
    System.Threading.Thread.Sleep(10000);
}
This all works fine. I understand that if I now want to write to a .wav file, I have to initialise a WaveFileWriter like so:
WaveFileWriter writer = new WaveFileWriter(outputFileName, pinkNoiseGenerator.WaveFormat);
And then write to the created WAVE file:
writer.WriteData(buffer, 0, numSamples);
The rub is that I have no idea how to populate the buffer directly from pinkNoiseGenerator. I have searched through the documentation and examples and can't find anything about this. I imagine it must involve the .Read() method of the SignalGenerator class, but since the generator plays indefinitely, it has no defined length. To me, this means the buffer can't be populated the same way it could be if we were, say, reading directly from an input WAVE file (as far as I can tell).
Could someone please point me in the right direction?
Thanks.
Here's how you can create a WAV file containing 10 seconds of pink noise:
var pinkNoiseGenerator = new SignalGenerator();
pinkNoiseGenerator.Type = SignalGeneratorType.Pink;
WaveFileWriter.CreateWaveFile16("pinkNoise.wav", pinkNoiseGenerator.Take(TimeSpan.FromSeconds(10)));
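(Take here is one of NAudio's ISampleProvider extension methods, wrapping an OffsetSampleProvider, so the otherwise endless generator is truncated to exactly ten seconds and CreateWaveFile16 stops when the truncated source runs dry.)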
I am recording sound bites using NAudio and I would like to store them directly in variables inside an object instead of writing them to a file.
At the moment, I am recording the data as follows:
internal void start_recording(int device_number, string recording_file)
{
    recording_file_name = recording_file;
    mic_source_stream = new WaveIn();
    mic_source_stream.DeviceNumber = device_number;
    mic_source_stream.WaveFormat = new WaveFormat(44100, WaveIn.GetCapabilities(device_number).Channels);
    mic_source_stream.DataAvailable += new EventHandler<WaveInEventArgs>(mic_source_stream_DataAvailable);
    wave_writer = new WaveFileWriter(recording_file, mic_source_stream.WaveFormat);
    mic_source_stream.StartRecording();
}

internal void mic_source_stream_DataAvailable(object sender, WaveInEventArgs e)
{
    if (wave_writer == null) return;
    wave_writer.Write(e.Buffer, 0, e.BytesRecorded);
    wave_writer.Flush();
}
This creates a *.wav file containing the recording.
Instead, I would like to keep the audio data in variables, so that all related recordings live in a single object instead of as multiple *.wav files in the file system, but NAudio seems to be geared towards recording to a file and playing from a file.
Is there an easy way to record audio to a variable and play it back from that variable, or should I go the silly-but-simple route of recording to a *.wav file, reading the file into a byte array, then writing the array back to a file before loading it for playback?
Recordings will be very small, so performance is not an issue; it's just grating to write to disk only to read it right back into memory, twice.
I'd recommend simply writing the recorded audio to a MemoryStream. Then when you want to play back, use a RawSourceWavestream passing in the MemoryStream and the WaveFormat you are recording in. No need to involve the WAV file format at all.
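A minimal sketch of that approach, reusing the names from the question (play_recording is a hypothetical method added for illustration):

private MemoryStream recorded_audio = new MemoryStream();

internal void mic_source_stream_DataAvailable(object sender, WaveInEventArgs e)
{
    // append the raw PCM straight into memory instead of a WaveFileWriter
    recorded_audio.Write(e.Buffer, 0, e.BytesRecorded);
}

internal void play_recording()
{
    recorded_audio.Position = 0; // rewind before playback
    var player = new WaveOut();
    player.Init(new RawSourceWaveStream(recorded_audio, mic_source_stream.WaveFormat));
    player.Play();
}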
WaveFileWriter also has a constructor which accepts an arbitrary Stream, so you can just pass a MemoryStream there. The name of the writer is a bit confusing in that case, of course.
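For instance (a short sketch; note that disposing the writer also disposes the underlying stream, so later NAudio versions provide NAudio.Utils.IgnoreDisposeStream to wrap it if you need the MemoryStream to survive):

var memory_stream = new MemoryStream();
// the writer emits a complete WAV, header included, into the MemoryStream
wave_writer = new WaveFileWriter(memory_stream, mic_source_stream.WaveFormat);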
I have a task for work: I need to convert a local .wav file into 2 separate PCM files, one per channel.
I have managed to read the file into a WaveStream or into a byte[], but I don't know how to write each channel to a separate file without losing the headers.
Sample code would be highly appreciated.
Thanks,
Nokky.
P.S. This is the code I used.
public void WavToPcmConvert(string filePath)
{
    string fileName = Path.GetFileNameWithoutExtension(filePath);
    using (var reader = new WaveFileReader(filePath))
    {
        using (var converter = WaveFormatConversionStream.CreatePcmStream(reader))
        {
            WaveFileWriter.CreateWaveFile(fileName + "_pcm.wav", converter);
        }
    }
}
In a PCM file, samples are interleaved: left, right, left, right. Assuming you have 16-bit samples, this means you get two bytes for the left channel, two bytes for the right, and so on.
So create two WaveFileWriters, then read a second's worth of audio from converter, loop through the bytes you've read, and write alternating pairs into each WaveFileWriter. Keep going until you reach the end of your input stream (converter.Read returns 0). Something like the sketch below.
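A minimal sketch of that loop (assuming 16-bit stereo input; method and variable names are illustrative):

public void SplitChannels(string inputPath, string leftPath, string rightPath)
{
    using (var reader = new WaveFileReader(inputPath))
    using (var converter = WaveFormatConversionStream.CreatePcmStream(reader))
    {
        var monoFormat = new WaveFormat(converter.WaveFormat.SampleRate, 16, 1);
        using (var leftWriter = new WaveFileWriter(leftPath, monoFormat))
        using (var rightWriter = new WaveFileWriter(rightPath, monoFormat))
        {
            byte[] buffer = new byte[converter.WaveFormat.AverageBytesPerSecond]; // about one second
            int bytesRead;
            while ((bytesRead = converter.Read(buffer, 0, buffer.Length)) > 0)
            {
                // 16-bit stereo frames: [left lo][left hi][right lo][right hi]
                for (int i = 0; i + 3 < bytesRead; i += 4)
                {
                    leftWriter.Write(buffer, i, 2);
                    rightWriter.Write(buffer, i + 2, 2);
                }
            }
        }
    }
}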
My objective is this: to allow users of my .NET program to choose their own .wav files for sound effects. These effects may be played simultaneously. NAudio seemed like my best bet.
I decided to use WaveMixerStream32. One early challenge was that my users had .wav files of different formats, so to be able to mix them together with WaveMixerStream32, I needed to "normalize" them to a common format. I wasn't able to find a good example of this to follow, so I suspect my problem is a result of doing this part wrong.
My problem is that when some sounds are played, there are very noticeable "clicking" sounds at their end. I can reproduce this myself.
Also, my users have complained that sometimes, sounds aren't played at all, or are "scratchy" all the way through. I haven't been able to reproduce this in development but I have heard this for myself in our production environment.
I've played the user's wav files myself using Windows Media and VLC, so I know the files aren't corrupt. It must be a problem with how I'm using them with NAudio.
My NAudio version is v1.4.0.0.
Here's the code I used. To set up the mixer:
_mixer = new WaveMixerStream32 { AutoStop = false, };
_waveOutDevice = new WaveOut(WaveCallbackInfo.NewWindow())
{
    DeviceNumber = -1,
    DesiredLatency = 300,
    NumberOfBuffers = 3,
};
_waveOutDevice.Init(_mixer);
_waveOutDevice.Play();
Surprisingly, if I set "NumberOfBuffers" to 2 here I found that sound quality was awful, with audible "ticks" occurring several times a second.
To initialize a sound file, I did this:
var sample = new AudioSample(fileName);
sample.Position = sample.Length; // To prevent the sample from playing right away
_mixer.AddInputStream(sample);
AudioSample is my class. Its constructor is responsible for the "normalization" of the wav file format. It looks like this:
private class AudioSample : WaveStream
{
    private readonly WaveChannel32 _channelStream;

    public AudioSample(string fileName)
    {
        MemoryStream memStream;
        using (var fileStream = File.OpenRead(fileName))
        {
            memStream = new MemoryStream();
            memStream.SetLength(fileStream.Length);
            fileStream.Read(memStream.GetBuffer(), 0, (int)fileStream.Length);
        }
        WaveStream originalStream = new WaveFileReader(memStream);
        var pcmStream = WaveFormatConversionStream.CreatePcmStream(originalStream);
        var blockAlignReductionStream = new BlockAlignReductionStream(pcmStream);
        var waveFormatConversionStream = new WaveFormatConversionStream(
            new WaveFormat(44100, blockAlignReductionStream.WaveFormat.BitsPerSample, 2),
            blockAlignReductionStream);
        var waveOffsetStream = new WaveOffsetStream(waveFormatConversionStream);
        _channelStream = new WaveChannel32(waveOffsetStream);
    }

    // remaining WaveStream overrides (WaveFormat, Length, Position, Read) delegate to _channelStream
}
Basically, the AudioSample delegates to its _channelStream object. To play an AudioSample, my code sets its Position to 0; the code that does this is marshalled onto the UI thread.
This almost works great. I can play multiple sounds simultaneously. Unfortunately the sound quality is bad as described above. Can anyone help me figure out why?
Some points in response to your question:
- Yes, you have to have all inputs at the same sample rate before you feed them into a mixer; this is simply how digital audio works. The ACM sample-rate conversion provided by WaveFormatConversionStream isn't brilliant (it has no aliasing protection). What sample rates are your input files typically at?
- You are passing every input through two WaveFormatConversionStreams. Only do this if it is absolutely necessary.
- I'm surprised that you are getting bad sound with NumberOfBuffers = 2, which is now the default in NAudio. Have you been pausing and resuming? There was a bug where a buffer could get dropped (fixed in the latest code, and it will be fixed in NAudio 1.4 final).
- A click at the end of a file can simply mean it doesn't end on a zero sample. You would have to add a fade-out to eliminate this (a lot of media players do this automatically).
- Whenever you are troubleshooting a bad-sound issue, I always recommend using WaveFileWriter to convert your WaveStream into a WAV file (taking care not to produce a never-ending file!), so you can listen to it in another media player. This quickly tells you whether the problem is in your audio processing or in the playback itself; see the sketch below.
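For example, to capture what the mixer is actually producing (a sketch; it caps the dump at 30 seconds, since with AutoStop = false the mixer stream never ends):

// do this instead of, not while, playing the mixer through the WaveOut device
byte[] buffer = new byte[_mixer.WaveFormat.AverageBytesPerSecond]; // one second per read
using (var writer = new WaveFileWriter("mixer-debug.wav", _mixer.WaveFormat))
{
    for (int second = 0; second < 30; second++)
    {
        int bytesRead = _mixer.Read(buffer, 0, buffer.Length);
        if (bytesRead == 0) break;
        writer.Write(buffer, 0, bytesRead);
    }
}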
I need to develop a WinForms app which will be able to decrypt a media file (a movie) and then play it without saving the decrypted file to the HDD (the decrypted file will ultimately live in a memory stream). The problem is: how do I then play that movie from the memory stream? Is it possible?
It is possible, but I expect you will need to write your own DirectShow filter to do so, which, once created, will act as a file reader (implementing the IFileSourceFilter interface) and, as the video plays, will read successive frames from the file, decrypt them, and pass them up to the next filter.
This will only work, however, if the file is encrypted in a sequential form (i.e. each individual frame is encrypted as a separate entity). Otherwise, you will have to decrypt the entire file at once, which could be intensive and slow, and would probably have to hit the hard drive to store the resulting file.
But anyway, this link should get you started: http://msdn.microsoft.com/en-us/library/dd375454%28VS.85%29.aspx
I'm afraid that in order to create the DirectShow filter, you will need to use C++, and it isn't the easiest API to get your head around.
An alternative way to do it may be to use the Windows Media Format SDK, which allows you to pass custom video packets to a renderer in real time. There is also a good interop library for C# (WindowsMediaLib).
First of all, it's a good idea to encrypt the source video piece by piece, so that the encrypted video file is a set of encrypted parts. Just split the original file into parts of the same size and encrypt each one.
Here is the scheme (OutputStream is the stream of the encrypted video file, InputStream is the original file stream, ChunkSize is the size of each part of the original file; we also write some metadata: the sizes of the original and encrypted pieces):
using (BinaryWriter Writer = new BinaryWriter(OutputStream))
{
    byte[] Buf = new byte[ChunkSize];
    List<int> SourceChunkSizeList = new List<int>();
    List<int> EncryptedChunkSizeList = new List<int>();
    int ReadBytes;
    while ((ReadBytes = InputStream.Read(Buf, 0, Buf.Length)) > 0)
    {
        byte[] EncryptedData = Encrypt(Buf, ReadBytes);
        OutputStream.Write(EncryptedData, 0, EncryptedData.Length);
        SourceChunkSizeList.Add(ReadBytes);
        EncryptedChunkSizeList.Add(EncryptedData.Length);
    }
    foreach (int SourceChunkSize in SourceChunkSizeList)
        Writer.Write(SourceChunkSize);
    foreach (int EncryptedChunkSize in EncryptedChunkSizeList)
        Writer.Write(EncryptedChunkSize);
}
Such metadata helps locate a given encrypted part quickly.
Secondly, don't decrypt the encrypted data on every read request. Cache it: video playback is, in most cases, just sequential reading. A rough sketch of such a cached chunk reader follows.
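This sketch uses hypothetical names: Decrypt is assumed to be the inverse of Encrypt above, and chunkOffsets/encryptedSizes are derived from the metadata (prefix sums of EncryptedChunkSizeList) written at the end of the file:

private Dictionary<int, byte[]> chunk_cache = new Dictionary<int, byte[]>();

byte[] GetDecryptedChunk(Stream encryptedFile, long[] chunkOffsets, int[] encryptedSizes, int chunkIndex)
{
    byte[] plain;
    if (chunk_cache.TryGetValue(chunkIndex, out plain))
        return plain; // sequential playback re-reads the same chunk repeatedly

    encryptedFile.Seek(chunkOffsets[chunkIndex], SeekOrigin.Begin);
    byte[] encrypted = new byte[encryptedSizes[chunkIndex]];
    int read = 0;
    while (read < encrypted.Length)
        read += encryptedFile.Read(encrypted, read, encrypted.Length - read);

    plain = Decrypt(encrypted); // hypothetical inverse of Encrypt
    chunk_cache[chunkIndex] = plain;
    return plain;
}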
The tricky part is how to play the encrypted video file. You may either write a DirectShow filter (a video-specific solution) or check out a 3rd-party product (a multipurpose solution) such as BoxedApp, a virtualization SDK. What's cool is that they have an article showing how to solve exactly your task: http://boxedapp.com/encrypted_video_streaming.html