Convert Amplitude BACK to .wav file (C#)

What I have done is convert a wave file to amplitude values in a short[] array, as described in Mean amplitude of a .wav in C#.
I modified the values and now want to convert back to the .wav format, or to a byte[] array which can then be written to a file.

void SetShortToBuffer(short val, byte[] outArray, int Offset)
{
    // little-endian: low byte first, then high byte (the byte order WAV data uses)
    outArray[Offset] = (byte)(val & 0x00FF);
    Offset++;
    outArray[Offset] = (byte)((val >> 8) & 0x00FF);
}

// Converts Count samples starting at Offset into a little-endian byte[]
byte[] ConvertShortArray(short[] Data, int Offset, int Count)
{
    byte[] helper = new byte[Count * sizeof(short)];
    int end = Offset + Count;
    int io = 0;
    for (int i = Offset; i < end; i++)
    {
        SetShortToBuffer(Data[i], helper, io);
        io += sizeof(short);
    }
    return helper;
}
In C this would not be an issue: you could simply tell the compiler that your previously declared short array should now be treated as a byte array (a simple cast). After failing to do that in C# outside of an unsafe context, I came up with this code :)
You can use the ConvertShortArray function to convert the data in chunks in case your wave is large.
EDIT:
Quick and dirty wave header creator, not tested
byte[] CreateWaveFileHeader(int SizeOfData, short ChannelCount, uint SamplesPerSecond, short BitsPerSample)
{
    short BlockAlign = (short)(ChannelCount * (BitsPerSample / 8));
    uint AverageBytesPerSecond = SamplesPerSecond * (uint)BlockAlign; // cast needed so the result stays uint
    List<byte> pom = new List<byte>();
    pom.AddRange(ASCIIEncoding.ASCII.GetBytes("RIFF"));
    pom.AddRange(BitConverter.GetBytes(SizeOfData + 36));   // size of data + rest of header
    pom.AddRange(ASCIIEncoding.ASCII.GetBytes("WAVEfmt "));
    pom.AddRange(BitConverter.GetBytes((uint)16));          // 16 for PCM
    pom.AddRange(BitConverter.GetBytes((short)1));          // PCM format
    pom.AddRange(BitConverter.GetBytes((short)ChannelCount));
    pom.AddRange(BitConverter.GetBytes((uint)SamplesPerSecond));
    pom.AddRange(BitConverter.GetBytes((uint)AverageBytesPerSecond));
    pom.AddRange(BitConverter.GetBytes((short)BlockAlign));
    pom.AddRange(BitConverter.GetBytes((short)BitsPerSample));
    pom.AddRange(ASCIIEncoding.ASCII.GetBytes("data"));
    pom.AddRange(BitConverter.GetBytes(SizeOfData));
    return pom.ToArray();
}
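For completeness, here is a minimal, untested sketch of how the two pieces above could be combined to write a modified short[] back out as a 16-bit mono PCM .wav file (the method name, samples array and output path are placeholders, not part of the original answer; requires using System.IO):
void WriteWaveFile(string path, short[] samples, uint sampleRate)
{
    byte[] data = ConvertShortArray(samples, 0, samples.Length); // samples as little-endian bytes
    byte[] header = CreateWaveFileHeader(data.Length, 1, sampleRate, 16);
    using (var fs = new FileStream(path, FileMode.Create))
    {
        fs.Write(header, 0, header.Length);
        fs.Write(data, 0, data.Length);
    }
}
// e.g. WriteWaveFile("output.wav", modifiedSamples, 44100);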

Related

int to byte[] consistency over network

I have a struct that gets used all over the place, that I store as a byte array on the HD, and that I also send to other platforms.
I used to do this by getting a string version of the struct and using getBytes(utf-8) and getString(utf-8) during serialization. I guess that way I avoided the little-endian/big-endian problems?
However, that added quite a bit of overhead, so I am now using this:
public static explicit operator byte[] (Int3 self)
{
    byte[] int3ByteArr = new byte[12]; // 4*3
    int x = self.x;
    int3ByteArr[0] = (byte)x;
    int3ByteArr[1] = (byte)(x >> 8);
    int3ByteArr[2] = (byte)(x >> 0x10);
    int3ByteArr[3] = (byte)(x >> 0x18);
    int y = self.y;
    int3ByteArr[4] = (byte)y;
    int3ByteArr[5] = (byte)(y >> 8);
    int3ByteArr[6] = (byte)(y >> 0x10);
    int3ByteArr[7] = (byte)(y >> 0x18);
    int z = self.z;
    int3ByteArr[8] = (byte)z;
    int3ByteArr[9] = (byte)(z >> 8);
    int3ByteArr[10] = (byte)(z >> 0x10);
    int3ByteArr[11] = (byte)(z >> 0x18);
    return int3ByteArr;
}

public static explicit operator Int3(byte[] self)
{
    int x = self[0] + (self[1] << 8) + (self[2] << 0x10) + (self[3] << 0x18);
    int y = self[4] + (self[5] << 8) + (self[6] << 0x10) + (self[7] << 0x18);
    int z = self[8] + (self[9] << 8) + (self[10] << 0x10) + (self[11] << 0x18);
    return new Int3(x, y, z);
}
It works quite well for me, but I am not quite sure how little/big endian works. Do I still have to take care of something here to be safe when another machine receives an int I sent as a byte array?
Your current approach will not work correctly when your application runs on a big-endian system. With the approach below you don't need to do the reordering yourself at all:
You don't need to reverse byte arrays yourself.
And you don't need to check the endianness of the system yourself.
The static method IPAddress.HostToNetworkOrder converts an integer to an integer in big-endian (network) byte order.
The static method IPAddress.NetworkToHostOrder converts an integer back to the byte order your system uses.
These methods check the endianness of the system and reorder the integers only when needed.
For getting bytes from an integer and back, use BitConverter:
public struct ThreeIntegers
{
    public int One;
    public int Two;
    public int Three;
}

// Extension method; it (and the helpers below) must be declared inside a static class.
public static byte[] ToBytes(this ThreeIntegers value)
{
    byte[] bytes = new byte[12];
    byte[] bytesOne = IntegerToBytes(value.One);
    Buffer.BlockCopy(bytesOne, 0, bytes, 0, 4);
    byte[] bytesTwo = IntegerToBytes(value.Two);
    Buffer.BlockCopy(bytesTwo, 0, bytes, 4, 4);
    byte[] bytesThree = IntegerToBytes(value.Three);
    Buffer.BlockCopy(bytesThree, 0, bytes, 8, 4);
    return bytes;
}

public static byte[] IntegerToBytes(int value)
{
    int reordered = IPAddress.HostToNetworkOrder(value);
    return BitConverter.GetBytes(reordered);
}
And converting from bytes to struct
public static ThreeIntegers GetThreeIntegers(byte[] bytes)
{
    int rawValueOne = BitConverter.ToInt32(bytes, 0);
    int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
    int rawValueTwo = BitConverter.ToInt32(bytes, 4);
    int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
    int rawValueThree = BitConverter.ToInt32(bytes, 8);
    int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
    return new ThreeIntegers(valueOne, valueTwo, valueThree);
}
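Note that the ThreeIntegers struct shown above does not declare the constructor used on the last line; something like this (an assumption, since the answer omits it) has to be added:
// Add inside the ThreeIntegers struct declared above:
public ThreeIntegers(int one, int two, int three)
{
    One = one;
    Two = two;
    Three = three;
}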
If you use BinaryReader and BinaryWriter for saving and for sending to other platforms, then BitConverter and the byte-array manipulation can be dropped entirely.
// BinaryWriter.Write has an overload for Int32
public static void SaveThreeIntegers(ThreeIntegers value)
{
    using (var stream = CreateYourStream())
    using (var writer = new BinaryWriter(stream))
    {
        int reorderedOne = IPAddress.HostToNetworkOrder(value.One);
        writer.Write(reorderedOne);
        int reorderedTwo = IPAddress.HostToNetworkOrder(value.Two);
        writer.Write(reorderedTwo);
        int reorderedThree = IPAddress.HostToNetworkOrder(value.Three);
        writer.Write(reorderedThree);
    }
}
For reading the values:
public static ThreeIntegers LoadThreeIntegers()
{
    using (var stream = CreateYourStream())
    using (var reader = new BinaryReader(stream))
    {
        int rawValueOne = reader.ReadInt32();
        int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
        int rawValueTwo = reader.ReadInt32();
        int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
        int rawValueThree = reader.ReadInt32();
        int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
        return new ThreeIntegers(valueOne, valueTwo, valueThree);
    }
}
Of course you can refactor the methods above into a cleaner solution, or add them as extension methods on BinaryWriter and BinaryReader.
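For example, a minimal sketch of such extension methods (my naming, not tested; requires using System.IO and using System.Net):
public static class EndianExtensions
{
    // Write an Int32 in network (big-endian) byte order.
    public static void WriteBigEndian(this BinaryWriter writer, int value)
    {
        writer.Write(IPAddress.HostToNetworkOrder(value));
    }

    // Read an Int32 that was written in network (big-endian) byte order.
    public static int ReadInt32BigEndian(this BinaryReader reader)
    {
        return IPAddress.NetworkToHostOrder(reader.ReadInt32());
    }
}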
Yes, you do. If the endianness changes, a serialization scheme that simply preserves byte order will run into trouble.
Take the int value 385 (hex 0x00000181).
On a big-endian system it is stored as the bytes
00 00 01 81
A little-endian system reading those same bytes interprets them as
0x81010000
which is a completely different (and, as a signed int, negative) value.
If you use the BitConverter class, there is a BitConverter.IsLittleEndian field that tells you the endianness of the system, and BitConverter can also produce the byte arrays for you.
You will have to pick one endianness and reverse the byte arrays according to the serializing or deserializing system's endianness.
The description on MSDN is actually quite detailed; there they use Array.Reverse for simplicity. I am not certain that your casting to/from byte to do the bit manipulation is in fact the fastest way of converting, but that is easily benchmarked.
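For illustration, a sketch of that BitConverter.IsLittleEndian + Array.Reverse approach (the method names are mine, choosing big-endian as the stored format):
static byte[] GetBigEndianBytes(int value)
{
    byte[] bytes = BitConverter.GetBytes(value); // machine byte order
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytes);                    // flip to big-endian for the wire/disk
    return bytes;
}

static int ReadBigEndianInt32(byte[] bytes)
{
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytes);                    // note: reverses the array in place
    return BitConverter.ToInt32(bytes, 0);
}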

Convert uint[] to byte[] [duplicate]

This might be a simple one, but I can't seem to find an easy way to do it. I need to save an array of 84 uints into an SQL database's BINARY field, so I'm using the following lines in my C# ASP.NET project:
//This is what I have
uint[] uintArray;
//I need to convert from uint[] to byte[]
byte[] byteArray = ???
cmd.Parameters.Add("#myBindaryData", SqlDbType.Binary).Value = byteArray;
So how do you convert from uint[] to byte[]?
How about:
byte[] byteArray = uintArray.SelectMany(BitConverter.GetBytes).ToArray();
This'll do what you want, in the machine's byte order (little-endian on typical platforms)...
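If you later need the uint[] back, the same one-liner style works in reverse (a sketch; roundTripped is just an illustrative name, and it assumes using System.Linq):
uint[] roundTripped = Enumerable.Range(0, byteArray.Length / sizeof(uint))
                                .Select(i => BitConverter.ToUInt32(byteArray, i * sizeof(uint)))
                                .ToArray();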
You can use System.Buffer.BlockCopy to do this:
byte[] byteArray = new byte[uintArray.Length * 4];
Buffer.BlockCopy(uintArray, 0, byteArray, 0, uintArray.Length * 4);
http://msdn.microsoft.com/en-us/library/system.buffer.blockcopy.aspx
This will be much more efficient than using a for loop or some similar construct. It directly copies the bytes from the first array to the second.
To convert back just do the same thing in reverse.
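That is, something along these lines (a sketch; the result array name is mine):
// Reverse direction: copy the raw bytes back into a uint[]
uint[] uints = new uint[byteArray.Length / 4];
Buffer.BlockCopy(byteArray, 0, uints, 0, byteArray.Length);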
There is no built-in conversion function to do this. Because of the way arrays work, a whole new array will need to be allocated and its values filled-in. You will probably just have to write that yourself. You can use the System.BitConverter.GetBytes(uint) function to do some of the work, and then copy the resulting values into the final byte[].
Here's a function that will do the conversion using BitConverter (so in the machine's byte order, little-endian on typical platforms):
private static byte[] ConvertUInt32ArrayToByteArray(uint[] value)
{
    const int bytesPerUInt32 = 4;
    byte[] result = new byte[value.Length * bytesPerUInt32];
    for (int index = 0; index < value.Length; index++)
    {
        byte[] partialResult = System.BitConverter.GetBytes(value[index]);
        for (int indexTwo = 0; indexTwo < partialResult.Length; indexTwo++)
            result[index * bytesPerUInt32 + indexTwo] = partialResult[indexTwo];
    }
    return result;
}
byte[] byteArray = Array.ConvertAll<uint, byte>(
    uintArray,
    new Converter<uint, byte>(
        delegate(uint u) { return (byte)u; }
    ));
Heed the advice from @liho1eye: make sure your uints really fit into bytes, otherwise you're losing data.
If you need all the bits from each uint, you're going to have to make an appropriately sized byte[] and copy each uint into the four bytes it represents.
Something like this ought to work:
uint[] uintArray;
//I need to convert from uint[] to byte[]
byte[] byteArray = new byte[uintArray.Length * sizeof(uint)];
for (int i = 0; i < uintArray.Length; i++)
{
    byte[] barray = System.BitConverter.GetBytes(uintArray[i]);
    for (int j = 0; j < barray.Length; j++)
    {
        byteArray[i * sizeof(uint) + j] = barray[j];
    }
}
cmd.Parameters.Add("@myBinaryData", SqlDbType.Binary).Value = byteArray;

Sending an image from a C# client to a C server

If I send plain text there is no problem; everything is OK.
However, if I try to send an image from the C# client, the server receives the correct number of bytes, but when I save the buffer to a file (in binary mode, "wb"), it always has 4 bytes.
I send it from the C# client using File.ReadAllBytes().
My saving code looks like
FILE * pFile;
char *buf = ReceiveMessage(s);
pFile = fopen (fileName , "wb");
fwrite(buf, sizeof(buf[0]), sizeof(buf)/sizeof(buf[0]), pFile);
fclose (pFile);
free(buf);
My receiving function looks like
static unsigned char *ReceiveMessage(int s)
{
    int prefix;
    recv(s, &prefix, 4, 0);
    int len = prefix;
    char *buffer = (char*)malloc(len + 1);
    int received = 0, totalReceived = 0;
    buffer[len] = '\0';
    while (totalReceived < len)
    {
        if (len - totalReceived > BUFFER_SIZE)
        {
            received = recv(s, buffer + totalReceived, BUFFER_SIZE, 0);
        }
        else
        {
            received = recv(s, buffer + totalReceived, len - totalReceived, 0);
        }
        totalReceived += received;
    }
    return buffer;
}
Your C code needs to pass len back from the ReceiveMessage() function.
char *buf = ReceiveMessage(s); // buf is a char*
... sizeof(buf)                // sizeof(char*) is 4 or 8
So you'll need something like
static unsigned char *ReceiveMessage(int s, int* lenOut)
{
    ...
    *lenOut = totalReceived;
}
You're making a beginner's mistake by using sizeof(buf). It doesn't return the number of bytes in the buffer but the size of the pointer (which is four or eight, depending on whether you run a 32- or 64-bit platform).
You need to change the ReceiveMessage function to also "return" the size of the received data.
You do not get the size of an array with sizeof. Change it to something like this:
int len = 0;
char *buf;

buf = ReceiveMessage(s, &len);
/* then use len to calculate write length */

static unsigned char *ReceiveMessage(int s, int *len)
/* or return len and pass ptr to buf */
{
    ...
}
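For reference, the C# sending side that this server expects presumably looks something like the sketch below (imagePath and socket are placeholders, not taken from the question; it assumes both machines are little-endian, since the C code recv()s the 4-byte prefix without any byte swapping):
// Hypothetical sending side: 4-byte length prefix, then the raw file bytes.
byte[] imageBytes = File.ReadAllBytes(imagePath);         // imagePath is a placeholder
byte[] prefix = BitConverter.GetBytes(imageBytes.Length); // 4 bytes, machine (little-endian) order
socket.Send(prefix);                                      // socket: an already-connected System.Net.Sockets.Socket
socket.Send(imageBytes);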

Converting binary reading function from C++ to C#

I am honestly really confused about reading binary files in C#.
I have C++ code for reading binary files:
FILE *pFile = fopen(filename, "rb");
uint n = 1024;
uint readC = 0;
do {
    short* pChunk = new short[n];
    readC = fread(pChunk, sizeof(short), n, pFile);
} while (readC > 0);
and it reads the following data:
-156, -154, -116, -69, -42, -36, -42, -41, -89, -178, -243, -276, -306,...
I tried to convert this code to C# but cannot read the same data. Here is the code:
using (var reader = new BinaryReader(File.Open(filename, FileMode.Open)))
{
    sbyte[] buffer = new sbyte[1024];
    for (int i = 0; i < 1024; i++)
    {
        buffer[i] = reader.ReadSByte();
    }
}
and I get the following data:
100, -1, 102, -1, -116, -1, -69, -1, -42, -1, -36
How can I get similar data?
A short is not a signed byte, it's a signed 16-bit value.
short[] buffer = new short[1024];
for (int i = 0; i < 1024; i++) {
    buffer[i] = reader.ReadInt16();
}
That's because in C++ you're reading shorts and in C# you're reading signed bytes (that's what SByte means). You should use reader.ReadInt16().
Your C++ code reads 2 bytes at a time (you're using sizeof(short)), while your C# code reads one byte at a time. An SByte (see http://msdn.microsoft.com/en-us/library/d86he86x(v=vs.71).aspx) uses 8 bits of storage.
You should use the same data type to get the correct output, or cast to a new type.
In C++ you are using short (I suppose the file is also written with short), so use short in C# as well, or you can use System.Int16.
You are getting different values because short and sbyte are not equivalent: short is 2 bytes and sbyte is 1 byte.
using (var reader = new BinaryReader(File.Open(filename, FileMode.Open)))
{
    System.Int16[] buffer = new System.Int16[1024];
    for (int i = 0; i < 1024; i++)
    {
        buffer[i] = reader.ReadInt16();
    }
}
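If you also want to mirror the C++ do/while loop and read the whole file in 1024-sample chunks, an untested sketch could look like this (filename is the same path used above; requires using System and using System.IO):
using (var reader = new BinaryReader(File.Open(filename, FileMode.Open)))
{
    const int n = 1024;
    long bytesLeft;
    while ((bytesLeft = reader.BaseStream.Length - reader.BaseStream.Position) >= sizeof(short))
    {
        int count = (int)Math.Min(n, bytesLeft / sizeof(short));
        short[] chunk = new short[count];
        for (int i = 0; i < count; i++)
            chunk[i] = reader.ReadInt16();
        // process chunk here, like the C++ loop processes pChunk
    }
}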

How can I convert an int to byte array without the 0x00 bytes?

I'm trying to convert an int value to a byte array, but I'm using the bytes for MIDI information (meaning that the 0x00 byte which is returned when using GetBytes acts as a separator), which renders my MIDI information useless.
I would like to convert the int to an array which leaves out the 0x00 bytes and just contains the bytes with actual values. How can I do this?
You've completely misunderstood what you need, but luckily you mentioned MIDI. You need to use the multi-byte encoding that MIDI defines, which is somewhat similar to UTF-8 in that fewer than 8 bits of data are placed into each octet, with the remaining bit indicating whether another octet follows.
See the description on wikipedia. Pay close attention to the fact that protobuf uses this encoding, you can probably reuse some of Google's code.
Based on the info Ben added, this should do what you require:
static byte[] VlqEncode(int value)
{
    uint uvalue = (uint)value;
    if (uvalue < 128) return new byte[] { (byte)uvalue }; // simplest case: a single byte
    // calculate length of buffer required
    int len = 0;
    do {
        len++;
        uvalue >>= 7;
    } while (uvalue != 0);
    // encode (this is untested, following the VLQ/MIDI/protobuf confusion)
    uvalue = (uint)value;
    byte[] buffer = new byte[len];
    for (int offset = len - 1; offset >= 0; offset--)
    {
        buffer[offset] = (byte)(128 | (uvalue & 127)); // only the last 7 bits, continuation bit set
        uvalue >>= 7;
    }
    buffer[len - 1] &= 127; // clear the continuation bit on the final byte
    return buffer;
}
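As a quick hand-worked sanity check (my own example, not part of the original answer): 385 is 0b1_1000_0001, which splits into the 7-bit groups 0000011 and 0000001, so the encoder should return two bytes:
byte[] encoded = VlqEncode(385);
// expected: encoded[0] == 0x83 (high group, continuation bit set)
//           encoded[1] == 0x01 (low group, continuation bit clear)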
