NAudio - Convert 32-bit WAV to 16-bit WAV - C#

I've been trying to convert a 32-bit stereo WAV to a 16-bit mono WAV. I use NAudio to capture the sound, and I thought that keeping just the two most significant bytes of each four-byte sample would work.
Here is the DataAvailable implementation:
void _waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    byte[] newArray = new byte[e.BytesRecorded / 2];
    short two;
    for (int i = 0, j = 0; i < e.BytesRecorded; i += 4, j += 2)
    {
        // take the two most significant bytes of each 32-bit sample
        two = BitConverter.ToInt16(e.Buffer, i + 2);
        newArray[j] = (byte)(two & 0xFF);
        newArray[j + 1] = (byte)((two >> 8) & 0xFF);
    }
    //do something with the new array:
}
Any help would be greatly appreciated!

I finally found the solution. The captured samples are 32-bit floats, so I just had to read each one with BitConverter.ToSingle, multiply it by 32767 and cast it to short:
void _waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    byte[] newArray16Bit = new byte[e.BytesRecorded / 2];
    short two;
    float value;
    for (int i = 0, j = 0; i < e.BytesRecorded; i += 4, j += 2)
    {
        value = BitConverter.ToSingle(e.Buffer, i);
        two = (short)(value * short.MaxValue);
        newArray16Bit[j] = (byte)(two & 0xFF);
        newArray16Bit[j + 1] = (byte)((two >> 8) & 0xFF);
    }
}

A 32-bit unsigned sample can be as high as 4,294,967,295 and a 16-bit unsigned sample can be as high as 65,535, so you'll have to scale the 32-bit sample down into the 16-bit range. More or less, you're doing something like this...
SixteenBitSample = (ThirtyTwoBitSample / 4294967295) * 65535;
EDIT:
For the stereo-to-mono portion: if the two channels carry the same data, just dump one of them; otherwise, add the waveforms together, and if the sum falls outside the 16-bit range, scale it down similarly to the above equation. A sketch of the averaging variant follows below.
Hope that helps.
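For the IEEE-float capture from the question, a minimal sketch of that averaging approach (an assumption here: interleaved 32-bit float stereo input, so each frame is 8 bytes and the mono 16-bit output is a quarter of the input size; monoArray16Bit is an illustrative name):
byte[] monoArray16Bit = new byte[e.BytesRecorded / 4];
for (int i = 0, j = 0; i < e.BytesRecorded; i += 8, j += 2)
{
    // average the left and right float samples into one channel
    float left = BitConverter.ToSingle(e.Buffer, i);
    float right = BitConverter.ToSingle(e.Buffer, i + 4);
    short mono = (short)((left + right) / 2f * short.MaxValue);
    monoArray16Bit[j] = (byte)(mono & 0xFF);
    monoArray16Bit[j + 1] = (byte)((mono >> 8) & 0xFF);
}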

Related

C# signed fixed point to floating point conversion

I have a temperature sensor returning 2 bytes.
The temperature is encoded as a sign-magnitude fixed-point value: a sign bit followed by 15 magnitude bits with weights from 2^6 down to 2^-8.
What is the best way in C# to convert these 2 bytes to a float?
My solution is the following, but I don't like the powers of 2 and the for loop:
static void Main(string[] args)
{
    byte[] sensorData = new byte[] { 0b11000010, 0b10000001 }; // (-1) * (2^6 + 2^1 + 2^-1 + 2^-8) = -66.50390625
    Console.WriteLine(ByteArrayToTemp(sensorData));
}

static double ByteArrayToTemp(byte[] data)
{
    // Convert the byte array to a short so we can shift it
    if (BitConverter.IsLittleEndian)
        Array.Reverse(data);
    Int16 dataInt16 = BitConverter.ToInt16(data, 0);
    double temp = 0;
    for (int i = 0; i < 15; i++)
    {
        // Take the LSB of the data and multiply it by the corresponding power of two (from -8 to 6),
        // then shift the data for the next iteration
        temp += (dataInt16 & 0x01) * Math.Pow(2, -8 + i);
        dataInt16 >>= 1;
    }
    if ((dataInt16 & 0x01) == 1) temp *= -1; // Sign bit
    return temp;
}
This might be slightly more efficient, but I can't see it making much difference:
static double ByteArrayToTemp(byte[] data)
{
    if (BitConverter.IsLittleEndian)
        Array.Reverse(data);
    ushort bits = BitConverter.ToUInt16(data, 0);
    double scale = 1 << 6;
    double result = 0;
    for (int i = 0, bit = 1 << 14; i < 15; ++i, bit >>= 1, scale /= 2)
    {
        if ((bits & bit) != 0)
            result += scale;
    }
    if ((bits & 0x8000) != 0)
        result = -result;
    return result;
}
Strictly speaking, though, you can avoid the loop entirely: the lower 15 bits are a plain fixed-point magnitude with 8 fractional bits, so dividing them by 2^8 gives the same result directly.
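Here is a minimal loop-free sketch, assuming the sign-magnitude layout implied by the example above:
static double ByteArrayToTemp(byte[] data)
{
    if (BitConverter.IsLittleEndian)
        Array.Reverse(data);
    ushort bits = BitConverter.ToUInt16(data, 0);
    // the low 15 bits are the magnitude in units of 2^-8
    double magnitude = (bits & 0x7FFF) / 256.0;
    return (bits & 0x8000) != 0 ? -magnitude : magnitude;
}
For the sample bytes above this returns -66.50390625, matching the loop versions.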

generate a random biginteger between two values c#

I use a custom BigInteger class, and I want to generate a BigInteger number between two values min and max randomly, so I used this method found on Stack Overflow:
public BigInteger getRandom(int n)
{
    var rng = new RNGCryptoServiceProvider();
    byte[] bytes = new byte[n / 8];
    rng.GetBytes(bytes);
    return new BigInteger(bytes);
}
But I cannot generate numbers between min and max, because this method's parameter is a number of bits. Can someone help me? Thank you in advance!
min and max are also BigIntegers.
Try this one:
// max exclusive (not included!)
public static BigInteger GetRandom(RNGCryptoServiceProvider rng, BigInteger min, BigInteger max)
{
    // shift to 0...max-min
    BigInteger max2 = max - min;
    int bits = max2.bitCount();
    // 1 bit for the sign (which we will ignore, we only want positive numbers!)
    bits++;
    // round up to the next whole byte
    int bytes = (bits + 7) / 8;
    int uselessBits = bytes * 8 - bits;
    var bytes2 = new byte[bytes];
    while (true)
    {
        rng.GetBytes(bytes2);
        // The maximum number of useless bits is 1 (sign) + 7 (rounding) == 8
        if (uselessBits == 8)
        {
            // and that is exactly one byte!
            bytes2[0] = 0;
        }
        else
        {
            // Remove the sign and the useless bits
            for (int i = 0; i < uselessBits; i++)
            {
                // Equivalent to
                // byte bit = (byte)(1 << (7 - (i % 8)));
                byte bit = (byte)(1 << (7 & (~i)));
                // Equivalent to
                // bytes2[i / 8] &= (byte)~bit;
                bytes2[i >> 3] &= (byte)~bit;
            }
        }
        var bi = new BigInteger(bytes2);
        // If it is too big, retry!
        if (bi >= max2)
        {
            continue;
        }
        // unshift the number
        bi += min;
        return bi;
    }
}
There are some comments that explain a little how it works.
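Usage might look like this (a sketch with made-up bounds; it assumes the custom class has a constructor taking a long, and bitCount() also comes from that class, not from System.Numerics):
var rng = new RNGCryptoServiceProvider(); // using System.Security.Cryptography
BigInteger min = new BigInteger(1000);
BigInteger max = new BigInteger(2000);
BigInteger r = GetRandom(rng, min, max); // 1000 <= r < 2000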

Resample loopback capture

I successfully captured sound from WASAPI using the following code:
IWaveIn waveIn = new WasapiLoopbackCapture();
waveIn.DataAvailable += OnDataReceivedFromWaveOut;
What I need to do now, is to resample the in-memory data to pcm with a sample rate of 8000 and 16 bits per sample mono.
I can't use ACMStream to resample it, because the recorded audio is 32 bits per sample.
I have tried this code to convert the samples from 32-bit to 16-bit, but all I get every time is just blank audio:
byte[] newArray16Bit = new byte[e.BytesRecorded / 2];
short two;
float value;
for (int i = 0, j = 0; i < e.BytesRecorded; i += 4, j += 2)
{
    value = BitConverter.ToSingle(e.Buffer, i);
    two = (short)(value * short.MaxValue);
    newArray16Bit[j] = (byte)(two & 0xFF);
    newArray16Bit[j + 1] = (byte)((two >> 8) & 0xFF);
}
source = newArray16Bit;
I use this routine to resample on the fly from WASAPI IeeeFloat to the format I need in my app, which is 16kHz, 16 bit, 1 channel. My formats are fixed, so I'm hardcoding the conversions I need, but it can be adapted as needed.
private void ResampleWasapi(object sender, WaveInEventArgs e)
{
    //the result of downsampling
    var resampled = new byte[e.BytesRecorded / 12];
    var indexResampled = 0;
    //a variable to flag the mod-3-ness of the current sample
    var arity = -1;
    var runningSamples = new short[3];
    for (var offset = 0; offset < e.BytesRecorded; offset += 8)
    {
        var float1 = BitConverter.ToSingle(e.Buffer, offset);
        var float2 = BitConverter.ToSingle(e.Buffer, offset + 4);
        //simple average to collapse 2 channels into 1
        float mono = (float)((double)float1 + (double)float2) / 2;
        //convert the (-1, 1) range float to a short
        short sixteenbit = (short)(mono * 32767);
        //the input is 48000Hz and the output is 16000Hz, so we need 1/3rd of the data points,
        //so save up 3 running samples and then mix and write to the output
        arity = (arity + 1) % 3;
        //record the value
        runningSamples[arity] = sixteenbit;
        //if we've hit the third one
        if (arity == 2)
        {
            //simple average of the 3, put in the 0th position
            runningSamples[0] = (short)(((int)runningSamples[0] + (int)runningSamples[1] + (int)runningSamples[2]) / 3);
            //copy that short (2 bytes) into the result array at the current location
            Buffer.BlockCopy(runningSamples, 0, resampled, indexResampled, 2);
            //for next pass
            indexResampled += 2;
        }
    }
    //and tell listeners that we've got data
    RaiseDataEvent(resampled, resampled.Length);
}
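Wiring it up might look like this (a sketch; RaiseDataEvent and its listeners are application-specific, and the fixed 1/3 decimation above assumes the loopback device is running at 48000Hz stereo float):
IWaveIn waveIn = new WasapiLoopbackCapture();
waveIn.DataAvailable += ResampleWasapi;
waveIn.StartRecording();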

HEX to bit[] array (also known as bool[])

I'm kind of new to C# and I was looking for ideas on 2 things. I have looked far and wide for answers but haven't found an answer to my exact problem.
I have a byte array (called BA) within a for loop which keeps overwriting itself, and there is no way for me to print it as a whole array. Is there a way to export it outside the for loop (maybe with a different name) so I can use it later on in the program? I just want something like this:
byte[] BA2 = {3 187, 3 203, 111 32, ...etc}; //(groups of 2 bytes separated by commas)
from the original
string hexValues = "03 BB,03 CB,6F 20,57 6F,72 6C,64 21";
(and also to represent this information in bits (boolean), so {00000011 10111011, 00000011 11001011, ...etc})
The second thing I must do is to shift these two bytes by 4 and apply an AND with 0xFFF0 (which is the same as masking the first byte with 0xFF and the second with 0xF0). Then put this in a ushort[] (unsigned short array) which holds the transformed bytes in binary format, and from there convert it back to HEX.
I understand that this might be unclear (my code is kind of messy) and pretty complex, but I was hoping some of you C# gurus could lend me a hand.
Here's my code so far; I have put the bits that don't work in comments so the code runs, but I desperately need to fix them.
class Program
{
    static void Main(string[] args)
    {
        string hexValues = "03 BB,03 CB,6F 20,57 6F,72 6C,64 21";
        string[] hex2byte = hexValues.Split(',');
        for (int j = 0; j < 6; j++)
        {
            Console.WriteLine("\n2 byte String is: " + hex2byte[j]);
            string[] hex1byte = hex2byte[j].Split(' ');
            for (int k = 0; k < 2; k++)
            {
                Console.WriteLine("byte " + hex1byte[k]);
                byte[] BA = StringToByteArray(hex1byte[k]);
                //bool[] AA = BitConverter.ToBoolean(BA); // I'm essentially stuck here. I need something that actually works.
                //for (int i2 = 0; i2 < 2; i2++) // This is my attempt to perform the shift and AND.
                //{
                //    ushort[] X = new ushort[1];
                //    X[0] = (ushort)((ushort)(BA[0] << 4) + (ushort)((BA[1] & 0xF0) >> 4)); // They have to be in this order: ((1stByte & 0xFF) << 4) + ((2ndByte & 0xF0) >> 4); first to the left, then to the right.
                //}
                Console.WriteLine("Converted " + BA[0]);
            }
        }
        //Console.WriteLine(BA[4]); // It says: Does not exist in current context. Can it only be accessed in the for loop?
        Console.ReadKey();
    } // Main method finishes.

    // Define the StringToByteArray method.
    public static byte[] StringToByteArray(String hex)
    {
        int NumberChars = hex.Length;
        byte[] bytes = new byte[NumberChars / 2];
        for (int i = 0; i < NumberChars; i += 2)
        {
            bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
        }
        return bytes;
    }
}
Is this what you are looking for?
string[] hexValues = new string[] { "03BB", "03CB", "6F20", "576F", "726C", "6421" };
ushort result = 0;
for (int i = 0; i < hexValues.Length; i++)
{
    Console.WriteLine("2 byte String is {0}", hexValues[i]);
    result = ushort.Parse(hexValues[i], NumberStyles.AllowHexSpecifier);
    Console.WriteLine("converted: {0}", result.ToString());
    Console.WriteLine("converted: {0}", result.ToString("x")); // the "x" format in ToString is very useful for creating hex strings
}
For your shifting you can use the << and >> operators, and | and & for bitwise operations.
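Here is a sketch of the mask-and-shift the question's commented-out code aims at, plus a bool[] view of the bits (needs using System.Globalization; the variable names are illustrative):
ushort value = ushort.Parse("03BB", NumberStyles.AllowHexSpecifier);
// keep the top 12 bits and shift them down: same as ((BA[0] << 4) | (BA[1] >> 4))
ushort shifted = (ushort)((value & 0xFFF0) >> 4); // 0x003B here
// a bool[] view of the original 16 bits, most significant bit first
bool[] bits = new bool[16];
for (int i = 0; i < 16; i++)
    bits[i] = (value & (1 << (15 - i))) != 0;
Console.WriteLine(shifted.ToString("X4")); // prints 003B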

Counting common bits in a sequence of unsigned longs

I am looking for a faster algorithm than the one below. Given a sequence of 64-bit unsigned integers, return a count of the number of times each of the sixty-four bits is set in the sequence.
Example:
4608 = 0000000000000000000000000000000000000000000000000001001000000000
4097 = 0000000000000000000000000000000000000000000000000001000000000001
2048 = 0000000000000000000000000000000000000000000000000000100000000000
counts 0000000000000000000000000000000000000000000000000002101000000001
Example:
2560 = 0000000000000000000000000000000000000000000000000000101000000000
530 = 0000000000000000000000000000000000000000000000000000001000010010
512 = 0000000000000000000000000000000000000000000000000000001000000000
counts 0000000000000000000000000000000000000000000000000000103000010010
Currently I am using a rather obvious and naive approach:
static int bits = sizeof(ulong) * 8;

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[bits];
    int length = values.Length;
    for (int i = 0; i < length; i++)
    {
        ulong value = values[i];
        for (int j = 0; j < bits && value != 0; j++, value = value >> 1)
        {
            counts[j] += (int)(value & 1UL);
        }
    }
    return counts;
}
A small speed improvement might be achieved by first OR'ing the integers together, then using the result to decide which bits to check: bit positions where the OR has a 0 can be skipped after a single test, rather than tested values.Length times.
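A sketch of that idea, reusing the values and counts arrays from the method above:
// OR everything together to find which bit positions are used at all
ulong mask = 0;
foreach (ulong v in values)
    mask |= v;
for (int j = 0; j < 64; j++)
{
    if ((mask & (1UL << j)) == 0)
        continue; // no value has this bit set, skip the inner loop
    foreach (ulong v in values)
        counts[j] += (int)((v >> j) & 1UL);
}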
I'll direct you to the classic Bit Twiddling Hacks page. Your goal seems slightly different than typical bit counting (your 'counts' output is a tally per bit position rather than a single total), but maybe it'll be useful.
The best I can do here is just get silly with it and unroll the inner loop... that seems to have cut the running time in half (roughly 4 seconds as opposed to the 8 yours takes to process 100 ulongs 100,000 times)... I used a quick command-line app to generate the following code:
for (int i = 0; i < length; i++)
{
    ulong value = values[i];
    if (0ul != (value & 1ul)) counts[0]++;
    if (0ul != (value & 2ul)) counts[1]++;
    if (0ul != (value & 4ul)) counts[2]++;
    //etc...
    if (0ul != (value & 4611686018427387904ul)) counts[62]++;
    if (0ul != (value & 9223372036854775808ul)) counts[63]++;
}
That was the best I can do... As per my comment, you'll waste some amount (I know not how much) running this in a 32-bit environment. If you're that concerned about performance, it may benefit you to first convert the data to uint.
Tough problem... it may even benefit you to marshal it into C++, but that entirely depends on your application. Sorry I couldn't be more help; maybe someone else will see something I missed.
Update: a few more profiler sessions show a steady 36% improvement. Shrug. I tried.
Ok let me try again :D
Turn each byte of the 64-bit integer into a 64-bit integer in which bit n of the byte becomes the lowest bit of byte n (i.e. bit n moves to position n*8).
For instance:
10110101 -> 0000000100000000000000010000000100000000000000010000000000000001
(use a lookup table for that translation)
Then just sum everything together; you end up with an array of unsigned chars holding the per-bit counts.
You have to make 8 * (number of 64-bit integers) summations.
Code in C:
//LOOKUPTABLE is external and is int64[256]
unsigned char* bitcounts(int64* int64array, int len)
{
    int64* array64;
    int64 tmp;
    array64 = (int64*)malloc(8 * sizeof(int64)); // 8 lanes of 8 one-byte counters = 64 counters
    for (int i = 0; i < 8; i++) array64[i] = 0;  // set to 0
    for (int j = 0; j < len; j++)
    {
        tmp = int64array[j];
        for (int i = 7; tmp; i--)
        {
            array64[i] += LOOKUPTABLE[tmp & 0xFF];
            tmp = tmp >> 8;
        }
    }
    return (unsigned char*)array64;
}
This reduces the time compared to the naive implementation by a factor of 8, because it counts 8 bits at a time.
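The code assumes LOOKUPTABLE is precomputed. A sketch (in C#, to match the rest of the thread) of building that 256-entry spread-bits table, where bit n of the input byte becomes the lowest bit of byte n of the result:
static ulong[] BuildLookup()
{
    var table = new ulong[256];
    for (int b = 0; b < 256; b++)
        for (int n = 0; n < 8; n++)
            if ((b & (1 << n)) != 0)
                table[b] |= 1UL << (n * 8); // one count lane per result byte
    return table;
}
// BuildLookup()[0xB5] == 0x0100010100010001UL, the expansion of 10110101 shown above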
EDIT:
I fixed the code to break out of the inner loop early for smaller integers, but I am still unsure about endianness.
Also, this works only for up to 255 inputs, because it uses unsigned char to store the counts. If you have a longer input, you can change this code to hold up to 2^16 counts per bit, at half the speed.
const unsigned int BYTESPERVALUE = 64 / 8;
unsigned int bcount[BYTESPERVALUE][256];
memset(bcount, 0, sizeof bcount);
for (int i = values.length; --i >= 0; )
    for (int j = BYTESPERVALUE; --j >= 0; ) {
        const unsigned int jth_byte = (values[i] >> (j * 8)) & 0xff;
        bcount[j][jth_byte]++; // count byte value (0..255) instances
    }

unsigned int count[64];
memset(count, 0, sizeof count);
for (int i = BYTESPERVALUE; --i >= 0; )
    for (int j = 256; --j >= 0; )     // check each byte value instance
        for (int k = 8; --k >= 0; )   // for each bit in a given byte
            if (j & (1 << k))         // if the bit was set, add its count
                count[i * 8 + k] += bcount[i][j];
Another approach that might be profitable would be to build an array of 256 elements which encodes the actions you need to take in incrementing the count array.
Here is a sample for a 4-element table, which handles 2 bits instead of 8.
int bitToSubscript[4][3] =
{
    {0},     // No bits set
    {1,0},   // Bit 0 set
    {1,1},   // Bit 1 set
    {2,0,1}  // Bit 0 and bit 1 set
};
The algorithm then degenerates to:
Pick the 2 right-hand bits off of the number.
Use that as a small integer to index into the bitToSubscript array.
In that array, pull off the first integer. That is the number of elements in the count array that you need to increment.
Based on that count, iterate through the remainder of the row, incrementing count based on the subscripts you pull out of the bitToSubscript array.
Once that loop is done, shift your original number two bits to the right... rinse, repeat as needed.
Now, there is one issue I ignored in that description: the actual subscripts are relative. You need to keep track of where you are in the count array. Every time you loop, you add two to an offset, and to that offset you add the relative subscript from the bitToSubscript array.
It should be possible to scale this up to the size you want, based on this small example. I would think that another program could be used to generate the source code for the bitToSubscript array, so that it can simply be hard-coded in your program.
There are other variations on this scheme, but I would expect it to run faster on average than anything that does it one bit at a time.
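A C# sketch of the 2-bit version described above (the table follows the sample; an 8-bit version would use 256 rows generated the same way):
static readonly int[][] bitToSubscript =
{
    new[] { 0 },       // no bits set
    new[] { 1, 0 },    // bit 0 set
    new[] { 1, 1 },    // bit 1 set
    new[] { 2, 0, 1 }, // bit 0 and bit 1 set
};

static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];
    foreach (ulong value in values)
    {
        ulong v = value;
        for (int offset = 0; offset < 64 && v != 0; offset += 2, v >>= 2)
        {
            int[] row = bitToSubscript[(int)(v & 3)];
            for (int k = 1; k <= row[0]; k++)
                counts[offset + row[k]]++; // relative subscript plus running offset
        }
    }
    return counts;
}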
Good Hunting.
Evil.
I believe this should give a nice speed improvement:
const ulong mask = 0x1111111111111111;

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];
    ulong accum0 = 0, accum1 = 0, accum2 = 0, accum3 = 0;
    int i = 0;
    foreach (ulong v in values)
    {
        if (i == 15)
        {
            // flush the nibble-wide counters before they can overflow (max 15 per nibble)
            for (int j = 0; j < 64; j += 4)
            {
                counts[j] += ((int)accum0) & 15;
                counts[j + 1] += ((int)accum1) & 15;
                counts[j + 2] += ((int)accum2) & 15;
                counts[j + 3] += ((int)accum3) & 15;
                accum0 >>= 4;
                accum1 >>= 4;
                accum2 >>= 4;
                accum3 >>= 4;
            }
            i = 0;
        }
        accum0 += (v) & mask;
        accum1 += (v >> 1) & mask;
        accum2 += (v >> 2) & mask;
        accum3 += (v >> 3) & mask;
        i++;
    }
    // final flush of whatever is left in the accumulators
    for (int j = 0; j < 64; j += 4)
    {
        counts[j] += ((int)accum0) & 15;
        counts[j + 1] += ((int)accum1) & 15;
        counts[j + 2] += ((int)accum2) & 15;
        counts[j + 3] += ((int)accum3) & 15;
        accum0 >>= 4;
        accum1 >>= 4;
        accum2 >>= 4;
        accum3 >>= 4;
    }
    return counts;
}
Demo: http://ideone.com/eNn4O (needs more test cases)
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
One of them:
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
    v &= v - 1; // clear the least significant bit set
}
Keep in mind that this loop runs once per set bit, so it's at most O(log2(n)) for value n; for 10 (binary 1010) it needs only 2 iterations.
You could also take the method for counting 32 bits with 64-bit arithmetic and apply it to each half of the word, which would take about 2*15 + 4 instructions:
// option 3, for at most 32-bit values in v:
c = ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += ((v >> 24) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
If you have an SSE4.2-capable processor, you can use the POPCNT instruction.
http://en.wikipedia.org/wiki/SSE4
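For what it's worth, on .NET Core 3.0 and later the BCL exposes this directly via System.Numerics.BitOperations.PopCount, which compiles down to POPCNT where the hardware supports it (note it counts the set bits of a single value, not the per-position tallies the question asks for):
using System.Numerics;

int setBits = BitOperations.PopCount(4608UL); // 2 - the bits set in the first example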
