Right now I have an audio file (2 channels, 44.1 kHz sample rate, 16-bit sample size, WAV). I would like to pass it into this method, but I am not sure of any way to convert the WAV file to a byte array.
/// <summary>
/// Process 16-bit samples.
/// </summary>
/// <param name="wave"></param>
public void Process(ref byte[] wave)
{
    _waveLeft = new double[wave.Length / 4];
    _waveRight = new double[wave.Length / 4];
    if (_isTest == false)
    {
        // Split out channels from sample
        int h = 0;
        for (int i = 0; i < wave.Length; i += 4)
        {
            _waveLeft[h] = (double)BitConverter.ToInt16(wave, i);
            _waveRight[h] = (double)BitConverter.ToInt16(wave, i + 2);
            h++;
        }
    }
    else
    {
        // Generate artificial sample for testing
        _signalGenerator = new SignalGenerator();
        _signalGenerator.SetWaveform("Sine");
        _signalGenerator.SetSamplingRate(44100);
        _signalGenerator.SetSamples(16384);
        _signalGenerator.SetFrequency(5000);
        _signalGenerator.SetAmplitude(32768);
        _waveLeft = _signalGenerator.GenerateSignal();
        _waveRight = _signalGenerator.GenerateSignal();
    }
    // Generate frequency domain data in decibels
    _fftLeft = FourierTransform.FFTDb(ref _waveLeft);
    _fftRight = FourierTransform.FFTDb(ref _waveRight);
}
Edit: Hi, sorry for the confusion. I'm new to audio signal processing, so my explanation of what I want was wrong. For this method to work correctly, I believe I need to pass in the byte array of the data chunk of the WAV file only. The end result would be to apply an FFT to it, as shown in the code, and transform it into a spectrogram. Thanks.
You need:
using System.IO;
and this code to get the byte array
byte[] data = File.ReadAllBytes(PathToFile);
where PathToFile is the location (as a string) of the .wav file.
Edit:
The question said: "Right now I have an audio file (2 channels, 44.1 kHz sample rate, 16-bit sample size, WAV). I would like to pass it into this method but I am not sure of any way to convert the WAV file to a byte array."
He asked for a way to get the byte array from the .wav file; he didn't say anything about getting only the specific part of the byte array that contains the music data.
So downvoting a correct answer is...
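That said, if only the data chunk is needed (as the question's edit clarifies), the RIFF header and any other chunks have to be skipped first. Below is a minimal sketch that walks the chunk list to find the "data" chunk; the helper name is hypothetical, it assumes a standard little-endian PCM WAV layout, and error handling is omitted.

using System;
using System.IO;
using System.Text;

static byte[] ReadWavDataChunk(string pathToFile)
{
    using (var reader = new BinaryReader(File.OpenRead(pathToFile)))
    {
        // Skip the 12-byte RIFF header: "RIFF", <file size>, "WAVE".
        reader.ReadBytes(12);
        while (reader.BaseStream.Position + 8 <= reader.BaseStream.Length)
        {
            string chunkId = Encoding.ASCII.GetString(reader.ReadBytes(4));
            int chunkSize = reader.ReadInt32(); // RIFF chunk sizes are little-endian
            if (chunkId == "data")
                return reader.ReadBytes(chunkSize); // raw PCM bytes, as Process(ref byte[]) expects
            reader.BaseStream.Seek(chunkSize, SeekOrigin.Current); // skip "fmt " and any other chunk
        }
        throw new InvalidDataException("No \"data\" chunk found.");
    }
}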
I need to merge two wave files in .NET Core, so I chose OpenTK as a wrapper for OpenAL.
I tried to merge two wave files with the same bits per sample and sample rate.
1) To do this I took this example.
2) Made two byte arrays:
var sound_data1 = LoadWave(File.Open(path1, FileMode.Open), out channels, out bits_per_sample, out sample_rate);
var sound_data2 = LoadWave(File.Open(path2, FileMode.Open), out channels, out bits_per_sample, out sample_rate);
3) Made a sum for each byte and divided it by 2:
for (int i = 0; i < sound_data1.Length; i++)
{
    result_sound_data[i] = (byte)((sound_data1[i] + sound_data2[i]) / 2);
}
4) Then:
AL.BufferData(buffer, GetSoundFormat(channels, bits_per_sample), result_sound_data, result_sound_data.Length, sample_rate);
AL.Source(source, ALSourcei.Buffer, buffer);
AL.SourcePlay(source);
And finally I got some damaged sound instead of a mixed signal. How do I solve it?
Merging audio streams basically amounts to summing (or averaging) the corresponding samples in each input audio file. Note that this means operating on whole samples, not individual bytes: a 16-bit sample spans two bytes, and averaging each byte independently, as in the question's loop, corrupts the samples. You can study the source code of this sample on CodeProject. The code is maybe not the cleanest, but it seems to do the job (I tested it).
Apart from handling the WAV file header, the actual merging logic for an array of input files is described here:
// 8. Do the merging..
// The basic algorithm for doing the merging is as follows:
//   while there is at least 1 sample remaining in any of the source files
//     sample = 0
//     for each source file
//       if the source file has any samples remaining
//         sample = sample + next available sample from the source file
//     sample = sample / # of source files
//     write the sample to the output file
This is implemented in that code sample as follows, for 8-bit samples:
while (SamplesRemain(scaledAudioFiles))
{
    byte sample = 0;
    for (var i = 0; i < scaledAudioFiles.GetLength(0); ++i)
    {
        if (scaledAudioFiles[i].NumSamplesRemaining > 0)
        {
            sample += scaledAudioFiles[i].GetNextSample_8bit();
        }
    }
    sample /= (byte)(scaledAudioFiles.GetLength(0));
    outputFile.AddSample_8bit(sample);
}
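Since the question's files are presumably 16-bit, the same averaging has to happen per 16-bit sample rather than per byte. Here is a minimal sketch of that idea; the helper name is hypothetical, and it assumes both buffers hold little-endian 16-bit PCM with the same channel layout:

using System;

static byte[] Mix16BitPcm(byte[] a, byte[] b)
{
    int length = Math.Min(a.Length, b.Length) & ~1; // whole 16-bit samples only
    var result = new byte[length];
    for (int i = 0; i < length; i += 2)
    {
        // Decode both 16-bit samples, average them, then re-encode little-endian.
        short sampleA = BitConverter.ToInt16(a, i);
        short sampleB = BitConverter.ToInt16(b, i);
        short mixed = (short)((sampleA + sampleB) / 2);
        result[i] = (byte)(mixed & 0xFF);
        result[i + 1] = (byte)((mixed >> 8) & 0xFF);
    }
    return result;
}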
The code in that sample is entirely compatible with .NET Core.
If you just want to take that code and merge some .wav files, here's how to do just that (still using the sample in question):
private static void Main(string[] args) => WAVFile.MergeAudioFiles(
    new[] { "file1.wav", "file2.wav", "file3.wav" },
    "result.wav",
    Path.GetTempPath()
);
I'm writing a small program that replaces specific bytes at their appropriate offsets.
Currently, the code positions a FileStream at the desired offset taken from a DataGrid entry, but I then want to scan the next 50 bytes for a byte sequence/pattern and set the FileStream's position to the starting point of the found pattern.
Here is my code as it sits now.
using (var fs = new FileStream(yOBJFile.FileName, FileMode.Open, FileAccess.ReadWrite))
{
    // Jump to the offset taken from the grid (stored as a hex string).
    fs.Position = Int32.Parse(rowp.Cells[4].Value.ToString(), System.Globalization.NumberStyles.HexNumber);
    // "675F66537065634C6576656C" is hex for the ASCII text "g_fSpecLevel";
    // decode the hex pairs into raw bytes (ASCII-encoding the hex text itself would not match).
    string BytePatternString = "675F66537065634C6576656C";
    byte[] BytePattern = new byte[BytePatternString.Length / 2];
    for (int i = 0; i < BytePattern.Length; i++)
        BytePattern[i] = Convert.ToByte(BytePatternString.Substring(i * 2, 2), 16);
}
I would very much appreciate any help toward a solution.
Thank you.
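One straightforward approach is a naive scan of the 50-byte window. A minimal sketch, assuming BytePattern holds the decoded raw bytes and fs is still open and positioned at the search offset as in the code above:

// Remember where the search starts, then read a 50-byte window.
long searchStart = fs.Position;
byte[] window = new byte[50];
int read = fs.Read(window, 0, window.Length);
// Naive scan for the pattern inside the window.
for (int i = 0; i <= read - BytePattern.Length; i++)
{
    bool match = true;
    for (int j = 0; j < BytePattern.Length; j++)
    {
        if (window[i + j] != BytePattern[j]) { match = false; break; }
    }
    if (match)
    {
        // Reposition the stream at the first byte of the found pattern.
        fs.Position = searchStart + i;
        break;
    }
}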
I'm currently trying to implement my own single-track MIDI file output. It turns an 8x8 grid of colours, stored in multiple frames, into a MIDI file that can be imported into a digital audio interface and played through a Novation Launchpad. Some more context here.
I've managed to output a file that programs recognize as MIDI, but the resulting MIDI does not play, and it doesn't match files generated from the same frame data. I've been doing comparisons by recording my program's live MIDI messages through a dedicated MIDI program and then spitting out a MIDI file via that. I then compare my generated file to that properly generated file in a hex editor. Things are correct as far as the headers go, but that seems to be it.
I've been slaving over multiple renditions of the MIDI specification and existing Stack Overflow questions with no 100% solution.
Here is my code, based on what I have researched. I can't help but feel I'm missing something simple. I'm avoiding the use of existing MIDI libraries, as I only need this one MIDI function to work (and want the learning experience of doing this from scratch). Any guidance would be very helpful.
/// <summary>
/// Outputs a MIDI file based on frames for the Novation Launchpad.
/// </summary>
/// <param name="filename"></param>
/// <param name="frameData"></param>
/// <param name="bpm"></param>
/// <param name="ppq"></param>
public static void WriteMidi(string filename, List<FrameData> frameData, int bpm, int ppq) {
    decimal totalLength = 0;
    using (FileStream stream = new FileStream(filename, FileMode.Create, FileAccess.Write)) {
        // Output midi file header
        stream.WriteByte(77);
        stream.WriteByte(84);
        stream.WriteByte(104);
        stream.WriteByte(100);
        for (int i = 0; i < 3; i++) {
            stream.WriteByte(0);
        }
        stream.WriteByte(6);
        // Set the track mode
        byte[] trackMode = BitConverter.GetBytes(Convert.ToInt16(0));
        stream.Write(trackMode, 0, trackMode.Length);
        // Set the track amount
        byte[] trackAmount = BitConverter.GetBytes(Convert.ToInt16(1));
        stream.Write(trackAmount, 0, trackAmount.Length);
        // Set the delta time
        byte[] deltaTime = BitConverter.GetBytes(Convert.ToInt16(60000 / (bpm * ppq)));
        stream.Write(deltaTime, 0, deltaTime.Length);
        // Output track header
        stream.WriteByte(77);
        stream.WriteByte(84);
        stream.WriteByte(114);
        stream.WriteByte(107);
        for (int i = 0; i < 3; i++) {
            stream.WriteByte(0);
        }
        stream.WriteByte(12);
        // Get our total byte length for this track. All colour arrays are the same length in the FrameData class.
        byte[] bytes = BitConverter.GetBytes(frameData.Count * frameData[0].Colours.Count * 6);
        // Write our byte length to the midi file.
        stream.Write(bytes, 0, bytes.Length);
        // Cycle through frames and output the necessary MIDI.
        foreach (FrameData frame in frameData) {
            // Calculate our relative delta for this frame. Frames are originally stored in milliseconds.
            byte[] delta = BitConverter.GetBytes((double) frame.TimeStamp / 60000 / (bpm * ppq));
            for (int i = 0; i < frame.Colours.Count; i++) {
                // Output the delta length to MIDI file.
                stream.Write(delta, 0, delta.Length);
                // Get the respective MIDI note based on the colours array index.
                byte note = (byte) NoteIdentifier.GetIntFromNote(NoteIdentifier.GetNoteFromPosition(i));
                // Check if the current color signals a MIDI off event.
                if (!CheckEqualColor(frame.Colours[i], Color.Black) && !CheckEqualColor(frame.Colours[i], Color.Gray) && !CheckEqualColor(frame.Colours[i], Color.Purple)) {
                    // Signal a MIDI on event.
                    stream.WriteByte(144);
                    // Write the current note.
                    stream.WriteByte(note);
                    // Check colour and write the respective velocity.
                    if (CheckEqualColor(frame.Colours[i], Color.Red)) {
                        stream.WriteByte(7);
                    } else if (CheckEqualColor(frame.Colours[i], Color.Orange)) {
                        stream.WriteByte(83);
                    } else if (CheckEqualColor(frame.Colours[i], Color.Green) || CheckEqualColor(frame.Colours[i], Color.Aqua) || CheckEqualColor(frame.Colours[i], Color.Blue)) {
                        stream.WriteByte(124);
                    } else if (CheckEqualColor(frame.Colours[i], Color.Yellow)) {
                        stream.WriteByte(127);
                    }
                } else {
                    // Calculate the delta that the frame had.
                    byte[] offDelta = BitConverter.GetBytes((double) (frameData[frame.Index - 1].TimeStamp / 60000 / (bpm * ppq)));
                    // Write the delta to MIDI.
                    stream.Write(offDelta, 0, offDelta.Length);
                    // Signal a MIDI off event.
                    stream.WriteByte(128);
                    // Write the current note.
                    stream.WriteByte(note);
                    // No need to set our velocity to anything.
                    stream.WriteByte(0);
                }
            }
        }
    }
}
BitConverter.GetBytes returns the bytes in the native byte order, but MIDI files use big-endian values. If you're running on x86 or ARM, you must reverse the bytes.
The third value in the file header is not called "delta time"; it is the number of ticks per quarter note, which you already have as ppq.
The length of the track is not 12; you must write the actual length.
Due to the variable-length encoding of delta times (see below), this is usually not possible before collecting all bytes of the track.
You need to write a tempo meta event that specifies the number of microseconds per quarter note.
A delta time is not an absolute time; it specifies the interval starting from the time of the previous event.
A delta time specifies the number of ticks; your calculation is wrong.
Use TimeStamp * bpm * ppq / 60000.
Delta times are not stored as a double floating-point number but as a variable-length quantity; the specification has example code for encoding it (see the sketch after these points).
The last event of the track must be an end-of-track meta event.
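For reference, here is a minimal sketch of that variable-length encoding, together with a big-endian helper for the 16-bit header fields. The helper names are hypothetical; the encoding follows the algorithm described in the SMF specification.

using System.Collections.Generic;
using System.IO;

static class MidiWriteHelpers {
    // MIDI file header fields are big-endian, so write the high byte first
    // (the opposite of what BitConverter.GetBytes produces on x86/ARM).
    public static void WriteUInt16BigEndian(Stream stream, ushort value) {
        stream.WriteByte((byte)(value >> 8));
        stream.WriteByte((byte)(value & 0xFF));
    }

    // Encode a delta time in ticks as a variable-length quantity: 7 bits per
    // byte, most significant group first, high bit set on all but the last byte.
    public static byte[] ToVariableLengthQuantity(uint ticks) {
        var bytes = new List<byte> { (byte)(ticks & 0x7F) };
        ticks >>= 7;
        while (ticks > 0) {
            bytes.Insert(0, (byte)((ticks & 0x7F) | 0x80));
            ticks >>= 7;
        }
        return bytes.ToArray();
    }
}

For example, a delta of 0 ticks encodes as the single byte 0x00, and 128 ticks encode as 0x81 0x00.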
Another approach would be to use one of the .NET MIDI libraries to write the MIDI file. You just have to convert your frames into Midi objects and pass them to the library to save. The library will take care of all the MIDI details.
You could try MIDI.NET and C# Midi Toolkit. I'm not sure whether NAudio supports writing MIDI files, or at what abstraction level that would be...
Here is more info on the MIDI File Format specification:
http://www.blitter.com/~russtopia/MIDI/~jglatt/tech/midifile.htm
Hope it helps,
Marc
I am uploading JPEG images as fast as I can to a web service (that is the requirement I have been given).
I am using an async call to the web service, and I am calling it within a timer.
I am trying to optimise as much as possible, and I tend to use an old laptop for testing. On a PC with a normal/reasonable build, all is OK. On the laptop I get high RAM usage.
I know I will get higher RAM usage on that old laptop, but I want to know the lowest-spec PC the app will work on.
As you can see in the code below, I am converting the JPEG image into a byte array and then uploading the byte array.
If I can reduce/compress/zip the byte array, then I am hoping this will be one way of improving memory usage.
I know JPEGs are already compressed, but if I compare the current byte array with the previous byte array and upload only the difference between the two arrays, I could perhaps compress it even more, on the basis that some of the byte values will be zero.
If I used a video encoder (which would do the trick), it would not be as real-time as I would like.
Is there an optimal way of comparing two byte arrays and outputting the result to a third byte array? I have looked around but could not find an answer that I liked.
This is my code on the client:
bool _uploaded = true;

private void tmrLiveFeed_Tick(object sender, EventArgs e)
{
    try
    {
        if (_uploaded)
        {
            _uploaded = false;
            _live.StreamerAsync(Shared.Alias, imageToByteArray((Bitmap)_frame.Clone()), Guid.NewGuid().ToString()); // web service being called here
        }
    }
    catch (Exception _ex)
    {
        // do something, but probably a timeout error here
    }
}

// web service has finished the client invoke
void _live_StreamerCompleted(object sender, AsyncCompletedEventArgs e)
{
    _uploaded = true; // we are now ready to upload the next byte array
}

private wsLive.Live _live = new wsLive.Live(); // web service

private byte[] imageToByteArray(Image imageIn)
{
    MemoryStream ms = new MemoryStream();
    imageIn.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg); // convert image to best image compression
    imageIn.Dispose();
    return ms.ToArray();
}
thanks...
As C.Evenhuis said, JPEG files are compressed, and changing even a few pixels results in a completely different file. So comparing the resulting JPEG files is useless.
BUT you can compare your Image objects; a quick search turns up this:
// Note: LockBitsDisposable appears to be a custom extension method that wraps
// LockBits/UnlockBits in an IDisposable; it is not part of System.Drawing.
unsafe Bitmap PixelDiff(Bitmap a, Bitmap b)
{
    Bitmap output = new Bitmap(a.Width, a.Height, PixelFormat.Format32bppArgb);
    Rectangle rect = new Rectangle(Point.Empty, a.Size);
    using (var aData = a.LockBitsDisposable(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb))
    using (var bData = b.LockBitsDisposable(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb))
    using (var outputData = output.LockBitsDisposable(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb))
    {
        byte* aPtr = (byte*)aData.Scan0;
        byte* bPtr = (byte*)bData.Scan0;
        byte* outputPtr = (byte*)outputData.Scan0;
        int len = aData.Stride * aData.Height;
        for (int i = 0; i < len; i++)
        {
            // For alpha use the average of both images (otherwise pixels with the same alpha won't be visible)
            if ((i + 1) % 4 == 0)
                *outputPtr = (byte)((*aPtr + *bPtr) / 2);
            else
                *outputPtr = (byte)~(*aPtr ^ *bPtr);
            outputPtr++;
            aPtr++;
            bPtr++;
        }
    }
    return output;
}
If your goal is to find out whether two byte arrays contain exactly the same data, you can create an MD5 hash of each and compare them, as others have suggested. However, in your question you mention you want to upload the difference, which means the result of the comparison must be more than a simple yes/no.
As JPEGs are already compressed, the smallest change to the image could lead to a large difference in the binary data. I don't think any two JPEGs contain binary data similar enough to easily compare.
For BMP files you may find that changing a single pixel affects only one or a few bytes, and more importantly, the data for the pixel at a certain offset in the image is located at the same position in both binary files (given that both images are of equal size and color depth). So for BMPs the difference in binary data directly relates to the difference in the images.
In short, I don't think obtaining the binary difference between JPEG files will improve the size of the data to be sent.
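If you do pursue the diff-then-compress idea on raw (uncompressed) pixel data, as the BMP comparison above suggests (an XOR diff is mostly zeros when frames are similar, which compresses well), here is a minimal sketch of compressing a byte array with the standard System.IO.Compression.GZipStream:

using System.IO;
using System.IO.Compression;

static byte[] Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(data, 0, data.Length);
        }
        // The GZipStream must be closed before reading, so it can flush its final block.
        return output.ToArray();
    }
}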
I need a fast method to store all samples of a wav file in an array. I am currently working around this problem by playing the music and storing the values from the Sample Provider, but this is not very elegant.
From the NAudio demo I have the AudioPlayer class with this method:
private ISampleProvider CreateInputStream(string fileName)
{
    if (fileName.EndsWith(".wav"))
    {
        fileStream = OpenWavStream(fileName);
    }
    else
    {
        throw new InvalidOperationException("Unsupported extension");
    }
    var inputStream = new SampleChannel(fileStream, true);
    var sampleStream = new NotifyingSampleProvider(inputStream);
    SampleRate = sampleStream.WaveFormat.SampleRate;
    sampleStream.Sample += (s, e) => { aggregator.Add(e.Left); }; // at this point the aggregator gets the current sample value, while playing the wav file
    return sampleStream;
}
I want to skip this process of getting the sample values while playing the file; instead, I want the values immediately, without waiting until the end of the file. Basically like the wavread command in MATLAB.
Use AudioFileReader to read the file. This will automatically convert to IEEE float samples. Then repeatedly call the Read method to read a block of samples into a float[] array.
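A minimal sketch of that approach (the helper name and buffer size are hypothetical; AudioFileReader implements ISampleProvider, so Read fills a float[] with interleaved samples already converted to IEEE float):

using System.Collections.Generic;
using NAudio.Wave;

static float[] ReadAllSamples(string fileName)
{
    using (var reader = new AudioFileReader(fileName))
    {
        var samples = new List<float>();
        // Read about one second of interleaved samples per call until the file is exhausted.
        var buffer = new float[reader.WaveFormat.SampleRate * reader.WaveFormat.Channels];
        int read;
        while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            for (int i = 0; i < read; i++)
            {
                samples.Add(buffer[i]);
            }
        }
        return samples.ToArray();
    }
}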