How do I deal with NaN results from FFT? - C#

I am trying to implement a function which takes a WAV file and runs a hundredth of a second's worth of audio through the FFT from AForge. When I change the offset to alter where in the audio the FFT is computed, I sometimes get results that I can show in my graph, but most of the time I get a complex array of NaNs. Why could this be?
Here is my code.
public double[] test()
{
    OpenFileDialog file = new OpenFileDialog();
    file.ShowDialog();

    WaveFileReader reader = new WaveFileReader(file.FileName);
    byte[] data = new byte[reader.Length];
    reader.Read(data, 0, data.Length);

    sampleRate = reader.WaveFormat.SampleRate;
    bitDepth = reader.WaveFormat.BitsPerSample;
    channels = reader.WaveFormat.Channels;
    Console.WriteLine("audio has " + channels + " channels, a sample rate of " + sampleRate + " and bitdepth of " + bitDepth + ".");

    // Reinterpret the raw WAV bytes as 32-bit float samples.
    float[] floats = new float[data.Length / sizeof(float)];
    Buffer.BlockCopy(data, 0, floats, 0, data.Length);

    size = 2048;
    int inputSamples = sampleRate / 100;     // a hundredth of a second of audio
    int offset = sampleRate * 15 * channels; // start 15 seconds in

    int y = 0;
    Complex[] complexData = new Complex[size];
    float[] window = CalcWindowFunction(inputSamples);

    // Copy one channel's samples, scaled by the window, into the FFT buffer.
    for (int i = 0; i < inputSamples; i++)
    {
        complexData[y] = new Complex(floats[i * channels + offset] * window[i], 0);
        y++;
    }

    // Zero-pad the remainder of the buffer.
    while (y < size)
    {
        complexData[y] = new Complex(0, 0);
        y++;
    }

    FourierTransform.FFT(complexData, FourierTransform.Direction.Forward);

    double[] arr = new double[complexData.Length];
    for (int i = 0; i < complexData.Length; i++)
    {
        arr[i] = complexData[i].Magnitude;
    }

    Console.Write("complete, ");
    return arr;
}

private float[] CalcWindowFunction(int inputSamples)
{
    // Rectangular window: all 1s.
    float[] arr = new float[size];
    for (int i = 0; i < size; i++)
    {
        arr[i] = 1;
    }
    return arr;
}

A complex array of NaNs is usually the result of one of the inputs to the FFT being a NaN. To debug, you might check all the values in the input array before the FFT to make sure they are within some valid range, given the audio input scaling.
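For example, a minimal sketch of that check, assuming the samples should already be normalized floats in [-1, 1]:

for (int i = 0; i < inputSamples; i++)
{
    float s = floats[i * channels + offset];
    if (float.IsNaN(s) || float.IsInfinity(s) || Math.Abs(s) > 1.0f)
        Console.WriteLine("suspect sample at " + i + ": " + s);
}

Note too that the Buffer.BlockCopy into a float[] is only meaningful if the WAV file actually stores 32-bit IEEE float samples; for a 16-bit PCM file the reinterpreted bit patterns are essentially random, and many of them decode as NaN or huge values.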

Related

NAudio frequency analyser giving inconsistent results

I'm developing a simple program that analyses the frequencies of audio files.
Using an FFT length of 8192 and a sample rate of 44100, if I use as input a constant-frequency WAV file - say 65 Hz, 200 Hz or 300 Hz - the output is a constant graph at that value.
If I use a recording of someone speaking, the frequencies have peaks as high as 4000 Hz, with an average around 450 Hz on a 90-second file.
At first I thought it was because the recording was in stereo, but converting it to mono with the exact same bitrate as the test files doesn't change much (the average goes down from 492 to 456, but that's still way too high).
Has anyone got an idea as to what could cause this?
I think I shouldn't find the highest value, but perhaps take either an average or a median value?
EDIT: using the average of the magnitudes per 8192-byte buffer and getting the index that's closest to that magnitude messes everything up.
This is the code for the handler of the event the SampleAggregator fires when it has calculated the FFT for the current buffer:
void FftCalculated(object sender, FftEventArgs e)
{
    int length = e.Result.Length;
    float[] magnitudes = new float[length];

    // Only the first half of the FFT output holds unique frequency bins.
    for (int i = 0; i < length / 2; i++)
    {
        float real = e.Result[i].X;
        float imaginary = e.Result[i].Y;
        // Note: 10 * log10(sqrt(x)) equals 5 * log10(x); amplitude in dB is conventionally 20 * log10(amplitude).
        magnitudes[i] = (float)(10 * Math.Log10(Math.Sqrt((real * real) + (imaginary * imaginary))));
    }

    // Pick the bin with the largest magnitude.
    float max_mag = float.MinValue;
    int max_index = -1;
    for (int i = 0; i < length / 2; i++)
    {
        if (magnitudes[i] > max_mag)
        {
            max_mag = magnitudes[i];
            max_index = i;
        }
    }

    var currentFrequency = max_index * SAMPLERATE / 8192.0;
    Console.WriteLine("frequency be " + currentFrequency);
}
ADDITION: this is the code that reads and sends the file to the analysing part:
using (var rdr = new WaveFileReader(audioFilePath))
{
    var newFormat = new WaveFormat(Convert.ToInt32(SAMPLERATE/*44100*/), 16, 1);
    byte[] buffer = new byte[8192];
    var audioData = new AudioData(); // custom class for project

    using (var conversionStream = new WaveFormatConversionStream(newFormat, rdr))
    {
        // Used to send audio in realtime; it's a timestamp issue for the graphs.
        // I'm working on fixing this, but it has lower priority, so disregard it :p
        TimeSpan audioDuration = conversionStream.TotalTime;
        long audioLength = conversionStream.Length;
        int waitTime = (int)(audioDuration.TotalMilliseconds / audioLength * 8192);

        while (conversionStream.Read(buffer, 0, buffer.Length) != 0)
        {
            audioData.AudioDataBase64 = Utils.Base64Encode(buffer);
            Thread.Sleep(waitTime);
            SendMessage("AudioData", Utils.StringToAscii(AudioData.GetJSON(audioData)));
        }
        Console.WriteLine("Reached End of File");
    }
}
This is the code that receives the audio data
{
    var audioData = new AudioData();
    audioData = AudioData.GetStateFromJSON(Utils.AsciiToString(receivedMessage));
    QueueAudio(Utils.Base64Decode(audioData.AudioDataBase64));
}
followed by
var waveFormat = new WaveFormat(Convert.ToInt32(SAMPLERATE/*44100*/), 16, 1);
_bufferedWaveProvider = new BufferedWaveProvider(waveFormat);
_bufferedWaveProvider.BufferDuration = new TimeSpan(0, 2, 0);

void QueueAudio(byte[] data)
{
    _bufferedWaveProvider.AddSamples(data, 0, data.Length);
    if (_bufferedWaveProvider.BufferedBytes >= fftLength)
    {
        byte[] buffer = new byte[_bufferedWaveProvider.BufferedBytes];
        _bufferedWaveProvider.Read(buffer, 0, _bufferedWaveProvider.BufferedBytes);

        // Convert 16-bit little-endian PCM bytes to normalized floats.
        for (int index = 0; index < buffer.Length; index += 2)
        {
            short sample = (short)(buffer[index] | buffer[index + 1] << 8);
            float sample32 = sample / 32767f;
            sampleAggregator.Add(sample32);
        }
    }
}
And then the SampleAggregator fires the event above when it's done with the FFT.
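A minimal sketch of the median idea floated above, assuming each buffer's peak frequency is collected into a list as the file is analysed:

var peakFrequencies = new List<float>();
// inside FftCalculated, after computing currentFrequency:
//     peakFrequencies.Add((float)currentFrequency);

// once the whole file has been processed:
peakFrequencies.Sort();
float medianFrequency = peakFrequencies[peakFrequencies.Count / 2];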

32-bit Grayscale Tiff with floating point pixel values to array using LibTIFF.NET C#

I just started using LibTiff.Net in my C# application to read TIFF images as heightmaps obtained from ArcGIS servers. All I need is to populate an array with the image's pixel values for terrain generation based on smooth gradients. The image is an LZW-compressed 32-bit grayscale TIFF with floating-point pixel values representing elevation in meters.
It's been some days now that I've struggled to get the right values back, but all I get is "0" values, as if it were a totally black or white image!
Here's the code so far (updated - see Update 1):
using (Tiff inputImage = Tiff.Open(fileName, "r"))
{
    int width = inputImage.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
    int height = inputImage.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
    int bytesPerPixel = 4;
    int count = (int)inputImage.RawTileSize(0); // Has to be: "width * height * bytesPerPixel" ?
    int resolution = (int)Math.Sqrt(count);
    byte[] inputImageData = new byte[count]; // Has to be: byte[] inputImageData = new byte[width * height * bytesPerPixel];
    int offset = 0;

    for (int i = 0; i < inputImage.NumberOfTiles(); i++)
    {
        offset += inputImage.ReadEncodedTile(i, inputImageData, offset, (int)inputImage.RawTileSize(i));
    }

    float[,] outputImageData = new float[resolution, resolution]; // Has to be: float[,] outputImageData = new float[width, height];
    int length = inputImageData.Length;
    Buffer.BlockCopy(inputImageData, 0, outputImageData, 0, length);

    using (StreamWriter sr = new StreamWriter(fileName.Replace(".tif", ".txt")))
    {
        string row = "";
        for (int i = 0; i < resolution; i++) // Change "resolution" to "width" in order to have the correct array size
        {
            for (int j = 0; j < resolution; j++) // Change "resolution" to "height" in order to have the correct array size
            {
                row += outputImageData[i, j] + " ";
            }
            sr.Write(row.Remove(row.Length - 1) + Environment.NewLine);
            row = "";
        }
    }
}
Sample files & results: http://terraunity.com/SampleElevationTiff_Results.zip
I've already searched everywhere on the internet and couldn't find a solution for this specific issue, so I'd really appreciate help that makes this useful for others too.
Update 1:
I changed the code based on Antti Leppänen's answer but got weird results, which seem to be a bug - or am I missing something? Please see the uploaded zip file to see the results with the new 32x32 TIFF images here:
http://terraunity.com/SampleElevationTiff_Results.zip
Results:
LZW compressed: RawStripSize = ArraySize = 3081 = 55x55 grid
Uncompressed: RawStripSize = ArraySize = 65536 = 256x256 grid
Has to be: RawStripSize = ArraySize = 4096 = 32x32 grid
As you can see from the results, LibTiff skips some rows and gives irrelevant orderings, and it gets even worse if the image size is not a power of 2!
Your example file seems to be a tiled TIFF and not a stripped one. The console says:
ElevationMap.tif: Can not read scanlines from a tiled image
I changed your code to read tiles. This way it seems to be reading data.
for (int i = 0; i < inputImage.NumberOfTiles(); i++)
{
    offset += inputImage.ReadEncodedTile(i, inputImageData, offset, (int)inputImage.RawTileSize(i));
}
I know it could be late, but I had the same mistake recently and found the solution, so it may be helpful. The mistake is in the count parameter of the function Tiff.ReadEncodedTile(tile, buffer, offset, count). It must be the decompressed byte size, not the compressed byte size. That's why you aren't getting all the information: you are not saving the whole data into your buffer. See how-to-translate-tiff-readencodedtile-to-elevation-terrain-matrix-from-height.
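A sketch of that fix against the question's loop, using LibTiff.Net's TileSize(), which returns the decompressed tile size in bytes:

for (int i = 0; i < inputImage.NumberOfTiles(); i++)
{
    // Pass the decompressed size, not RawTileSize (the compressed on-disk size).
    offset += inputImage.ReadEncodedTile(i, inputImageData, offset, inputImage.TileSize());
}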
A fast method to read a floating-point TIFF:
public static unsafe float[,] ReadTiff(Tiff image)
{
    const int pixelStride = 4; // bytes per pixel
    int imageWidth = image.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
    int imageHeight = image.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
    float[,] result = new float[imageWidth, imageHeight];

    int tileCount = image.NumberOfTiles();
    int tileWidth = image.GetField(TiffTag.TILEWIDTH)[0].ToInt();
    int tileHeight = image.GetField(TiffTag.TILELENGTH)[0].ToInt();
    int tileStride = (imageWidth + tileWidth - 1) / tileWidth; // tiles per row

    int bufferSize = tileWidth * tileHeight * pixelStride;
    byte[] buffer = new byte[bufferSize];

    fixed (byte* bufferPtr = buffer)
    {
        float* array = (float*)bufferPtr;
        for (int t = 0; t < tileCount; t++)
        {
            image.ReadEncodedTile(t, buffer, 0, buffer.Length);

            // Top-left corner of this tile in image coordinates.
            int x = tileWidth * (t % tileStride);
            int y = tileHeight * (t / tileStride);

            // Edge tiles may extend past the image bounds; clip the copy.
            var copyWidth = Math.Min(tileWidth, imageWidth - x);
            var copyHeight = Math.Min(tileHeight, imageHeight - y);

            for (int j = 0; j < copyHeight; j++)
            {
                for (int i = 0; i < copyWidth; i++)
                {
                    result[x + i, y + j] = array[j * tileWidth + i];
                }
            }
        }
    }
    return result;
}
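Hypothetical usage, with the file name taken from the question's sample data:

using (Tiff image = Tiff.Open("ElevationMap.tif", "r"))
{
    float[,] heights = ReadTiff(image);
}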

FFT: which frequencies are in which bins?

I would like to see how certain frequencies, specifically low bass at 20-60 Hz, are present in a piece of audio. I have the audio as a byte array; I convert it to an array of shorts, then into complex numbers via (short[i] / (double)short.MaxValue, 0). Then I pass this to the FFT from AForge.
The audio is mono with a sample rate of 44100. I understand I can only put chunks through the FFT whose size is a power of 2, so 4096 for example. I don't understand what frequencies will be in the output bins.
If I am taking 4096 samples from audio at a 44100 sample rate, does this mean I am taking about 93 milliseconds' worth of audio? Or only getting some of the frequencies that will be present?
I add the output of the FFT to an array. My understanding is that, as I am taking 4096 samples, bin 0 would contain 0*44100/4096 = 0 Hz, bin 1 would hold 1*44100/4096 = 10.7666015625 Hz, and so on. Is this correct, or am I doing something fundamentally wrong here?
My goal is to average the frequencies between, say, 20 and 60 Hz, so that a song with very low, heavy bass would score higher than a soft piano piece with very little bass.
Here is my code.
OpenFileDialog file = new OpenFileDialog();
file.ShowDialog();

WaveFileReader reader = new WaveFileReader(file.FileName);
byte[] data = new byte[reader.Length];
reader.Read(data, 0, data.Length);

sampleRate = reader.WaveFormat.SampleRate;
bitDepth = reader.WaveFormat.BitsPerSample;
channels = reader.WaveFormat.Channels;
Console.WriteLine("audio has " + channels + " channels, a sample rate of " + sampleRate + " and bitdepth of " + bitDepth + ".");

short[] shorts = data.Select(b => (short)b).ToArray();

int size = 4096;
int window = 44100 * 10; // start offset: 10 seconds in
int y = 0;

Complex[] complexData = new Complex[size];
for (int i = window; i < window + size; i++)
{
    Complex tmp = new Complex(shorts[i] / (double)short.MaxValue, 0);
    complexData[y] = tmp;
    y++;
}

FourierTransform.FFT(complexData, FourierTransform.Direction.Forward);

double[] arr = new double[complexData.Length];
//print out sample of conversion
for (int i = 0; i < complexData.Length; i++)
{
    arr[i] = complexData[i].Magnitude;
}
Console.Write("complete, ");
return arr;
Edit: changed DFT to FFT.
Here's a modified version of your code. Note the comments starting with "***".
OpenFileDialog file = new OpenFileDialog();
file.ShowDialog();

WaveFileReader reader = new WaveFileReader(file.FileName);
byte[] data = new byte[reader.Length];
reader.Read(data, 0, data.Length);

sampleRate = reader.WaveFormat.SampleRate;
bitDepth = reader.WaveFormat.BitsPerSample;
channels = reader.WaveFormat.Channels;
Console.WriteLine("audio has " + channels + " channels, a sample rate of " + sampleRate + " and bitdepth of " + bitDepth + ".");

// *** NAudio "thinks" in floats
float[] floats = new float[data.Length / sizeof(float)];
Buffer.BlockCopy(data, 0, floats, 0, data.Length);

int size = 4096;

// *** You don't have to fill the FFT buffer to get valid results.
//     More noisy & smaller "magnitudes", but better freq. res.
int inputSamples = sampleRate / 100; // 10ms... adjust as needed
int offset = sampleRate * 10 * channels;
int y = 0;

Complex[] complexData = new Complex[size];

// *** Get a "scaling" curve to make both ends of the sample region 0 but still allow full amplitude in the middle of the region.
float[] window = CalcWindowFunction(inputSamples);
for (int i = 0; i < inputSamples; i++)
{
    // *** "floats" is stored as LRLRLR interleaved data for stereo audio
    complexData[y] = new Complex(floats[i * channels + offset] * window[i], 0);
    y++;
}

// make sure the back portion of the buffer is set to all 0's
while (y < size)
{
    complexData[y] = new Complex(0, 0);
    y++;
}

// *** Consider using a DCT here instead... It returns less "noisy" results
FourierTransform.FFT(complexData, FourierTransform.Direction.Forward);

double[] arr = new double[complexData.Length];
for (int i = 0; i < complexData.Length; i++)
{
    // *** I assume we don't care about phase???
    arr[i] = complexData[i].Magnitude;
}
Console.Write("complete, ");
return arr;
Once you get the results, and assuming a 44100 Hz sample rate and size = 4096, elements 2 - 4 should be the values you are looking for. There's a way to convert them to dB, but I don't remember it offhand.
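For example, a minimal sketch of the averaging step for the 20-60 Hz goal (the exact indices depend on how you round the band edges; the bin width is sampleRate / size ≈ 10.77 Hz here):

double binWidth = (double)sampleRate / size;   // ≈ 10.77 Hz per bin
int loBin = (int)Math.Ceiling(20 / binWidth);  // first bin at or above 20 Hz
int hiBin = (int)Math.Floor(60 / binWidth);    // last bin at or below 60 Hz
double bassAverage = 0;
for (int i = loBin; i <= hiBin; i++)
{
    bassAverage += arr[i];
}
bassAverage /= (hiBin - loBin + 1);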
Good luck!

Midpoint displacement algorithm - weird results

I am writing my own midpoint displacement algorithm for learning purposes, and I decided to implement it in my own way to see if I was 1) able to understand the algorithm and 2) able to modify it to my liking.
Here is the code to generate the fractal:
public void Generate(double rg, int size)
{
    Random rand = new Random();
    int min = -127;
    int max = 128;

    // Starting points of the rectangle
    MDP_Point s1 = new MDP_Point(0, 0, rand.Next(min, max));
    MDP_Point s2 = new MDP_Point(size, 0, rand.Next(min, max));
    MDP_Point s3 = new MDP_Point(size, size, rand.Next(min, max));
    MDP_Point s4 = new MDP_Point(0, size, rand.Next(min, max));

    // Lists containing the rectangles
    List<MDP_Rect> newRect = new List<MDP_Rect>(); // Newly created rectangles
    List<MDP_Rect> oldRect = new List<MDP_Rect>(); // Rectangles being divided

    // Starting rectangle is added to the list
    oldRect.Add(new MDP_Rect(s1, s2, s3, s4));

    // Distance between 2 points in a rectangle
    int h = size;

    while (h > 1)
    {
        foreach (MDP_Rect r in oldRect)
        {
            // Middle points of rectangle segments
            MDP_Point m1 = new MDP_Point();
            MDP_Point m2 = new MDP_Point();
            MDP_Point m3 = new MDP_Point();
            MDP_Point m4 = new MDP_Point();
            // Middle point of rectangle
            MDP_Point mm = new MDP_Point();

            m1.x = (r.C1.x + r.C2.x) / 2;
            m1.y = (r.C1.y + r.C2.y) / 2;
            m1.z = ((r.C1.z + r.C2.z) / 2) + (rand.Next(min, max) * rg);

            m2.x = (r.C2.x + r.C3.x) / 2;
            m2.y = (r.C2.y + r.C3.y) / 2;
            m2.z = ((r.C2.z + r.C3.z) / 2) + (rand.Next(min, max) * rg);

            m3.x = (r.C3.x + r.C4.x) / 2;
            m3.y = (r.C3.y + r.C4.y) / 2;
            m3.z = ((r.C3.z + r.C4.z) / 2) + (rand.Next(min, max) * rg);

            m4.x = (r.C1.x + r.C4.x) / 2;
            m4.y = (r.C1.y + r.C4.y) / 2;
            m4.z = ((r.C1.z + r.C4.z) / 2) + (rand.Next(min, max) * rg);

            mm.x = (r.C1.x + r.C2.x + r.C3.x + r.C4.x) / 4;
            mm.y = (r.C1.y + r.C2.y + r.C3.y + r.C4.y) / 4;
            mm.z = ((r.C1.z + r.C2.z + r.C3.z + r.C4.z) / 4) + (rand.Next(min, max) * rg);

            newRect.Add(new MDP_Rect(r.C1, m1, mm, m4));
            newRect.Add(new MDP_Rect(m1, r.C2, m2, mm));
            newRect.Add(new MDP_Rect(mm, m2, r.C3, m3));
            newRect.Add(new MDP_Rect(m4, mm, m3, r.C4));
        }

        oldRect.Clear();
        oldRect = new List<MDP_Rect>(newRect);
        newRect.Clear();
        h /= 2;
    }

    List<MDP_Rect> sorted = oldRect.OrderBy(y => y.C1.y).ThenBy(x => x.C1.x).ToList();

    List<MDP_Point> mapArray = new List<MDP_Point>();
    mapArray.AddRange(CreateArray(sorted));
    CreateImage(size, mapArray, rg);
}
MDP_Point only contains x, y and z values.
MDP_Rect contains 4 points, creating a rectangle.
The CreateArray() method just takes the ordered rectangle list and outputs a list of points in the correct order to create an image.
CreateArray():
private List<MDP_Point> CreateArray(List<MDP_Rect> lRect)
{
    List<MDP_Point> p = new List<MDP_Point>();
    int size = (int)Math.Sqrt(lRect.Count);
    int i = 0;

    foreach (MDP_Rect r in lRect)
    {
        p.Add(new MDP_Point((int)r.C1.x, (int)r.C1.y, (int)r.C1.z));
        if (i > 0 && i % size == size - 1)
        {
            p.Add(new MDP_Point((int)r.C2.x, (int)r.C2.y, (int)r.C2.z));
        }
        i++;
    }

    // Append the bottom edge from the last row of rectangles.
    for (int a = 0; a < size; a++)
    {
        p.Add(new MDP_Point((int)lRect[(size * size - size) + a].C4.x,
                            (int)lRect[(size * size - size) + a].C4.y,
                            (int)lRect[(size * size - size) + a].C4.z));
        if (a > 0 && a % size == size - 1)
        {
            p.Add(new MDP_Point((int)lRect[(size * size - size) + a].C3.x,
                                (int)lRect[(size * size - size) + a].C3.y,
                                (int)lRect[(size * size - size) + a].C3.z));
        }
    }
    return p;
}
This is the method to create the image:
private void CreateImage(int size, List<MDP_Point> arr, double roughness)
{
    Bitmap map = new Bitmap(size, size);
    int ver = 0;

    for (int i = 0; i < map.Height; i++)
    {
        for (int n = 0; n < map.Width; n++)
        {
            // Shift heights from [-127, 128] into the 0-255 grayscale range, clamping.
            int h = (int)arr[ver].z + 127;
            if (h < 0)
            {
                h = 0;
            }
            else if (h > 255)
            {
                h = 255;
            }
            Color c = Color.FromArgb(h, h, h);
            //map.SetPixel(n, i, c);
            map.SetPixel(i, n, c);
            ver++;
        }
    }

    // Save under a unique numbered filename.
    Bitmap m = new Bitmap(map);
    bool saved = true;
    int num = 0;
    while (saved)
    {
        if (File.Exists("map_S" + size + "_R" + roughness + "_" + num + ".png"))
        {
            num++;
        }
        else
        {
            m.Save("map_S" + size + "_R" + roughness + "_" + num + ".png", System.Drawing.Imaging.ImageFormat.Png);
            saved = false;
        }
    }
    map.Dispose();
    m.Dispose();
}
The fact that any value below 0 is set to 0 and any value above 255 is set to 255 is probably a big issue... not sure what to do about it.
Here's an image generated by the code:
Size: 1024
Roughness: 0.5
The most obvious issues are the diagonal "ridge lines" and the tiled look.
At this point I'm not sure what to do to fix this and make it look more natural.
Any ideas?
Here, I think part of the problem is your clamping of the h variable to 0 and 255. I tried the code using:
int h = (int)arr[ver].z;
if (h < 0)
{
    h = Math.Abs(h);
}
while (h > 255)
{
    h -= 255;
}
On my PC the result was:
and when I used:
int h = (int) arr[ver].z + 127;
Note that I just had to create a testing MDP_Point class and MDP_Rect to test this...
To avoid these artifacts, you should use the two-stage method proposed by Peitgen et al. in "The Science of Fractal Images". They also suggest adding additional random displacements to all vertices after each subdivision step. I found a scanned excerpt here (page 45):
http://cs455.cs.byu.edu/lectureslides/FractalsPart2.pdf
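One detail worth noting about the Generate method above: it scales the random displacement by the same constant rg at every level. Classic midpoint displacement shrinks the displacement range after each subdivision pass, roughly like this sketch (the names follow the question's code; the exact decay factor is an assumption):

double amplitude = max; // initial displacement range
while (h > 1)
{
    // ... subdivide as above, but displace each midpoint by
    // (rand.NextDouble() * 2 - 1) * amplitude instead of rand.Next(min, max) * rg ...

    amplitude *= Math.Pow(2, -rg); // e.g. rg = 1 halves the range at each level
    h /= 2;
}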

Fast conversion byte array to short array of audio data

I need the fastest way to convert a byte array to a short array of audio data.
The audio data byte array contains data from two audio channels placed in this way:
C1C1C2C2 C1C1C2C2 C1C1C2C2 ...
where
C1C1 - two bytes of the first channel
C2C2 - two bytes of the second channel
Currently I use the following algorithm, but I feel there is a better way to perform this task.
byte[] rawData = // from audio device
short[] shorts = new short[rawData.Length / 2];
short[] channel1 = new short[rawData.Length / 4];
short[] channel2 = new short[rawData.Length / 4];

System.Buffer.BlockCopy(rawData, 0, shorts, 0, rawData.Length);
for (int i = 0, j = 0; i < shorts.Length; i += 2, ++j)
{
    channel1[j] = shorts[i];
    channel2[j] = shorts[i + 1];
}
You can leave out copying the buffer:
byte[] rawData = // from audio device
short[] channel1 = new short[rawData.Length / 4];
short[] channel2 = new short[rawData.Length / 4];
for (int i = 0, j = 0; i < rawData.Length; i += 4, ++j)
{
    channel1[j] = (short)(((ushort)rawData[i + 1]) << 8 | (ushort)rawData[i]);
    channel2[j] = (short)(((ushort)rawData[i + 3]) << 8 | (ushort)rawData[i + 2]);
}
To make the loop faster, you can have a look at the Task Parallel Library, especially Parallel.For:
[EDIT]
System.Threading.Tasks.Parallel.For(0, shorts.Length / 2, (i) =>
{
    channel1[i] = shorts[i * 2];
    channel2[i] = shorts[i * 2 + 1];
});
[/EDIT]
Another way is loop unrolling, but I think the TPL will speed this up as well.
You could use unsafe code to avoid array addressing or bit shifts. But as PVitt said, on newer PCs you are better off using standard managed code and the TPL if your data size is significant.
short[] channel1 = new short[rawData.Length / 4];
short[] channel2 = new short[rawData.Length / 4];
fixed (byte* pRawDataFixed = rawData)
fixed (short* pChannel1Fixed = channel1)
fixed (short* pChannel2Fixed = channel2)
{
    // Fixed pointers can't be reassigned, so copy them to working pointers.
    byte* pRawData = pRawDataFixed;
    short* pChannel1 = pChannel1Fixed;
    short* pChannel2 = pChannel2Fixed;
    byte* end = pRawData + rawData.Length;
    while (pRawData < end)
    {
        (*(pChannel1++)) = *((short*)pRawData);
        pRawData += sizeof(short);
        (*(pChannel2++)) = *((short*)pRawData);
        pRawData += sizeof(short);
    }
}
As with all optimization problems you need to time carefully. Pay special attention to your buffer allocations: channel1 and channel2 could be static (big) buffers growing automatically, of which you use only the first n bytes. You would then skip two big array allocations on each execution of this function and make the GC work less (always better when timing is important).
As noted by CodeInChaos, endianness could be important. If your data is not in the correct endianness you will need to do the conversion; for example, to convert between big and little endian, assuming 8-bit atomic elements, the code would look like:
short[] channel1 = new short[rawData.Length / 4];
short[] channel2 = new short[rawData.Length / 4];
fixed (byte* pRawDataFixed = rawData)
fixed (short* pChannel1Fixed = channel1)
fixed (short* pChannel2Fixed = channel2)
{
    byte* pRawData = pRawDataFixed;
    byte* pChannel1 = (byte*)pChannel1Fixed;
    byte* pChannel2 = (byte*)pChannel2Fixed;
    byte* end = pRawData + rawData.Length;
    while (pRawData < end)
    {
        // Channel 1: copy two bytes, swapping their order.
        pChannel1[1] = *pRawData++;
        pChannel1[0] = *pRawData++;
        pChannel1 += sizeof(short);
        // Channel 2: copy two bytes, swapping their order.
        pChannel2[1] = *pRawData++;
        pChannel2[0] = *pRawData++;
        pChannel2 += sizeof(short);
    }
}
I didn't compile any code in this post with an actual compiler, so if you find errors feel free to edit it.
You can benchmark it yourself! Remember to use Release mode and run without the debugger (Ctrl+F5):
using System;
using System.Diagnostics;
using System.Linq;
using System.Runtime.InteropServices;

class Program
{
    [StructLayout(LayoutKind.Explicit)]
    struct UnionArray
    {
        [FieldOffset(0)]
        public byte[] Bytes;
        [FieldOffset(0)]
        public short[] Shorts;
    }

    unsafe static void Main(string[] args)
    {
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
        byte[] rawData = new byte[10000000];
        new Random().NextBytes(rawData);

        // 1. Original: BlockCopy into shorts, then de-interleave.
        Stopwatch sw1 = Stopwatch.StartNew();
        short[] shorts = new short[rawData.Length / 2];
        short[] channel1 = new short[rawData.Length / 4];
        short[] channel2 = new short[rawData.Length / 4];
        System.Buffer.BlockCopy(rawData, 0, shorts, 0, rawData.Length);
        for (int i = 0, j = 0; i < shorts.Length; i += 2, ++j)
        {
            channel1[j] = shorts[i];
            channel2[j] = shorts[i + 1];
        }
        sw1.Stop();

        // 2. BitConverter.
        Stopwatch sw2 = Stopwatch.StartNew();
        short[] channel1b = new short[rawData.Length / 4];
        short[] channel2b = new short[rawData.Length / 4];
        for (int i = 0, j = 0; i < rawData.Length; i += 4, ++j)
        {
            channel1b[j] = BitConverter.ToInt16(rawData, i);
            channel2b[j] = BitConverter.ToInt16(rawData, i + 2);
        }
        sw2.Stop();

        // 3. "Super-unsafe" union struct: reinterpret the byte[] as short[] without copying.
        Stopwatch sw3 = Stopwatch.StartNew();
        short[] shortsc = new UnionArray { Bytes = rawData }.Shorts;
        short[] channel1c = new short[rawData.Length / 4];
        short[] channel2c = new short[rawData.Length / 4];
        for (int i = 0, j = 0; i < shorts.Length; i += 2, ++j)
        {
            channel1c[j] = shortsc[i];
            channel2c[j] = shortsc[i + 1];
        }
        sw3.Stop();

        // 4. PVitt's bit shifts.
        Stopwatch sw4 = Stopwatch.StartNew();
        short[] channel1d = new short[rawData.Length / 4];
        short[] channel2d = new short[rawData.Length / 4];
        for (int i = 0, j = 0; i < rawData.Length; i += 4, ++j)
        {
            channel1d[j] = (short)((short)(rawData[i + 1]) << 8 | (short)rawData[i]);
            channel2d[j] = (short)((short)(rawData[i + 3]) << 8 | (short)rawData[i + 2]);
            // Equivalent warning-less version:
            //channel1d[j] = (short)(((ushort)rawData[i + 1]) << 8 | (ushort)rawData[i]);
            //channel2d[j] = (short)(((ushort)rawData[i + 3]) << 8 | (ushort)rawData[i + 2]);
        }
        sw4.Stop();

        // 5. VirtualBlackFox's unsafe pointer walk.
        Stopwatch sw5 = Stopwatch.StartNew();
        short[] channel1e = new short[rawData.Length / 4];
        short[] channel2e = new short[rawData.Length / 4];
        fixed (byte* pRawData = rawData)
        fixed (short* pChannel1 = channel1e)
        fixed (short* pChannel2 = channel2e)
        {
            byte* pRawData2 = pRawData;
            short* pChannel1e = pChannel1;
            short* pChannel2e = pChannel2;
            byte* end = pRawData2 + rawData.Length;
            while (pRawData2 < end)
            {
                (*(pChannel1e++)) = *((short*)pRawData2);
                pRawData2 += sizeof(short);
                (*(pChannel2e++)) = *((short*)pRawData2);
                pRawData2 += sizeof(short);
            }
        }
        sw5.Stop();

        // 6. TPL: Parallel.For over the BlockCopy'd shorts.
        Stopwatch sw6 = Stopwatch.StartNew();
        short[] shortse = new short[rawData.Length / 2];
        short[] channel1f = new short[rawData.Length / 4];
        short[] channel2f = new short[rawData.Length / 4];
        System.Buffer.BlockCopy(rawData, 0, shortse, 0, rawData.Length);
        System.Threading.Tasks.Parallel.For(0, shortse.Length / 2, (i) =>
        {
            channel1f[i] = shortse[i * 2];
            channel2f[i] = shortse[i * 2 + 1];
        });
        sw6.Stop();

        // Sanity check: all variants must produce identical output.
        if (!channel1.SequenceEqual(channel1b) || !channel1.SequenceEqual(channel1c) || !channel1.SequenceEqual(channel1d) || !channel1.SequenceEqual(channel1e) || !channel1.SequenceEqual(channel1f))
        {
            throw new Exception();
        }
        if (!channel2.SequenceEqual(channel2b) || !channel2.SequenceEqual(channel2c) || !channel2.SequenceEqual(channel2d) || !channel2.SequenceEqual(channel2e) || !channel2.SequenceEqual(channel2f))
        {
            throw new Exception();
        }

        Console.WriteLine("Original: {0}ms", sw1.ElapsedMilliseconds);
        Console.WriteLine("BitConverter: {0}ms", sw2.ElapsedMilliseconds);
        Console.WriteLine("Super-unsafe struct: {0}ms", sw3.ElapsedMilliseconds);
        Console.WriteLine("PVitt shifts: {0}ms", sw4.ElapsedMilliseconds);
        Console.WriteLine("unsafe VirtualBlackFox: {0}ms", sw5.ElapsedMilliseconds);
        Console.WriteLine("TPL: {0}ms", sw6.ElapsedMilliseconds);
        Console.ReadKey();
        return;
    }
}
On x86 the fastest is VirtualBlackFox's unsafe code, second the "super-unsafe" struct trick from C# unsafe value type array to byte array conversions, third PVitt's shifts.
On x64 the fastest is VirtualBlackFox's unsafe code, second PVitt's shifts.
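For what it's worth, on newer runtimes a span cast gives the same reinterpretation without unsafe code. A minimal sketch, assuming .NET Core or the System.Memory package is available (MemoryMarshal.Cast is the relevant API):

using System.Runtime.InteropServices;

ReadOnlySpan<short> samples = MemoryMarshal.Cast<byte, short>(rawData);
short[] channel1 = new short[samples.Length / 2];
short[] channel2 = new short[samples.Length / 2];
for (int i = 0, j = 0; i < samples.Length; i += 2, ++j)
{
    channel1[j] = samples[i];     // interleaved: even shorts are channel 1
    channel2[j] = samples[i + 1]; // odd shorts are channel 2
}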
