I'm trying to display a waveform (I recorded myself with the microphone).
I keep the data in a byte array.
In addition, my sample rate is 44100 Hz and the sample size is 16 bits.
My array contains only positive values, but in some examples I saw that the values are between -1.0 and 1.0. Why is that?
In addition, if my sample size is 16 bits and I'm using a byte array, I need to make a conversion.
This is what i tried to do so far:
double[] x = new double[Arc.Buffer.Count / 2];
byte[] y = new byte[2];
for (int i = 0; i < x.Length; i++)
{
    Array.Copy(Arc.Buffer.ToArray(), 2 * i, y, 0, 2);
    x[i] = (double)((y[1] << 8) | (y[0] & 0x00FF));
}
But after this code, my x array contains huge values (and not -1.0 to 1.0).
I'm new to this subject and will appreciate any help! (:
Assuming this is PCM data, there are a number of WAV/RIFF formats possible. If your sample size is 16 bits, the values you're reading are going to be signed, between -32,768 and 32,767.
A quick search for "wav riff formats" turned up a resource regarding parsing digital audio data.
Since you are processing 16-bit signed samples, the values will be between -32768 and 32767. To get them into the range -1.0 to 1.0 in double precision, combine each pair of bytes into a signed 16-bit value and divide by 32768.0:
double[] x = new double[Arc.Buffer.Count / 2];
byte[] buffer = Arc.Buffer.ToArray();
for (int i = 0; i < x.Length; i++)
{
    // Combine the two bytes into a signed 16-bit sample (little-endian),
    // then scale into the range [-1.0, 1.0).
    short sample = (short)((buffer[2 * i + 1] << 8) | buffer[2 * i]);
    x[i] = sample / 32768.0;
}
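The cast to short is what was missing from your version: without it, (y[1] << 8) | y[0] is read as an unsigned value between 0 and 65535, which is why your array contained only positive (and huge) values. If you prefer to avoid the bit twiddling, BitConverter can combine the bytes for you; a minimal sketch, assuming the capture produces little-endian samples matching your platform:
byte[] buffer = Arc.Buffer.ToArray();
for (int i = 0; i < x.Length; i++)
{
    // ToInt16 reads two bytes as one signed 16-bit sample.
    x[i] = BitConverter.ToInt16(buffer, 2 * i) / 32768.0;
}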
Related
I am trying to hide a string of text in a bitmap using the LSB algorithm, which replaces the least significant bit of the RGB values of each pixel. So far I have looped through the pixels of the image and cleared the LSB of each pixel. The part that I am struggling with is inserting the new LSB values that come from a string.
This is what I have done so far; any pointers on where to go next would be helpful.
string text = txtEncrypt.Text;
//Gets the ascii value of each character from the string
var n = ASCIIEncoding.ASCII.GetBytes(text);
Bitmap myBitmap = new Bitmap(myPictureBox.Image);
byte[] rgbBytes = new byte[0];
int R=0, G=0, B=0;
for (int i = 0; i < myBitmap.Width; i++)
{
for (int j = 0; j < myBitmap.Height; j++)
{
Color pixel = myBitmap.GetPixel(i, j);
// now, clear the least significant bit (LSB) from each pixel element
//Therefore three bits in each pixel are spare
R = pixel.R - pixel.R % 2;
G = pixel.G - pixel.G % 2;
B = pixel.B - pixel.B % 2;
// Need to insert new values
}
}
Although you can do bit manipulation using "regular" arithmetic (the kind they teach in the first grade), it is more common to use bit manipulation operators to achieve the same goal.
For example, writing R = pixel.R & ~1 is a lot more common than subtracting pixel.R % 2.
You don't need to clear the bit before setting it. To force a bit into 1 use R = pixel.R | 1. To force it into zero use the R = pixel.R & ~1 mentioned above.
To iterate the bits of the message stored as a sequence of N bytes, use this check:
if ((message[pos / 8] & (1 << (pos % 8))) != 0) {
// bit at position pos is 1
} else {
// bit at position pos is 0
}
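Putting the two together, a minimal embedding loop might look like the sketch below. Here message is the byte array you got from GetBytes and bitCount (= message.Length * 8) is the number of bits to embed; both names and the pixel traversal order are assumptions, and only the R channel is used for brevity:
int pos = 0;
for (int i = 0; i < myBitmap.Width && pos < bitCount; i++)
{
    for (int j = 0; j < myBitmap.Height && pos < bitCount; j++, pos++)
    {
        Color pixel = myBitmap.GetPixel(i, j);
        int bit = (message[pos / 8] >> (pos % 8)) & 1;  // current message bit
        int newR = (pixel.R & ~1) | bit;                // clear the LSB, then set it
        myBitmap.SetPixel(i, j, Color.FromArgb(pixel.A, newR, pixel.G, pixel.B));
    }
}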
Bitwise operators make this easy to do:
set last bit to 1:
var newR = pixel.R | 0b00000001
set last bit to 0
var newR = pixel.R & 0b11111110
How this works: | merges bits like an OR operator would, bit by bit, and & merges bits like an AND operator would (pseudocode):
10101000 | 00000001 = 10101001
10101001 & 11111110 = 10101000
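To copy an arbitrary message bit into the LSB you can combine the two:
var newR = (pixel.R & 0b11111110) | messageBit;
where messageBit is assumed to hold the current bit (0 or 1) of your message.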
I'm playing around with the Source RCON Protocol, but I'm having issues converting strings to byte arrays successfully.
Original Code (VB.NET) + Pastebin: http://pastebin.com/4BkbTRfD
Private Function RCON_Command(ByVal Command As String,
ByVal ServerData As Integer) As Byte()
Dim Packet As Byte() = New Byte(CByte((13 + Command.Length))) {}
Packet(0) = Command.Length + 9 'Packet Size (Integer)
Packet(4) = 0 'Request Id (Integer)
Packet(8) = ServerData 'SERVERDATA_EXECCOMMAND / SERVERDATA_AUTH (Integer)
For X As Integer = 0 To Command.Length - 1
Packet(12 + X) = System.Text.Encoding.Default.GetBytes(Command(X))(0)
Next
Return Packet
End Function
My current code in C# + Pastebin: http://pastebin.com/eVv0nZCf
byte[] RCONCommand(string cmd, int serverData)
{
int packetSize = cmd.Length + 12;
byte[] byteList = new byte[packetSize];
byteList[0] = (byte)packetSize;
byteList[4] = 0;
byteList[8] = (byte)serverData;
for(int X = 0; X < cmd.Length; X++)
{
byteList[12 + X] = Encoding.ASCII.GetBytes(cmd)[X];
}
return byteList;
}
When I use Encoding.ASCII.GetString(RCONCommand("Word", 3)); the result is a square mark (unprintable characters). I tried Encoding.UTF8.GetString() too, with the same result.
Packet structure can be found here: https://developer.valvesoftware.com/wiki/Source_RCON_Protocol#Basic_Packet_Structure
I just can't figure out what I've done wrong, because I'm not even familiar with bytes and such. PS. The example applications that have been posted in C# for the Source RCON Protocol documentation are garbled, because people use so much OOP and create millions of class files that I cannot even find the correct stuff.
A byte is 8 bits. That means that a value such as Size, which according to the spec is a 32-bit little-endian integer, will require 4 bytes (32 ÷ 8 = 4).
To fit 32 bits of information into four bytes, you have to split it up. Think of this as dividing a four-digit number into four strings, one for each digit. Except we're working in binary, so it's a little more complicated. We have to do some bit shifting and masking to get just the bits we want into each "string."
The spec calls for little-endian, so the least significant byte comes first; this takes some getting used to if you haven't done it before.
byte[] bytes = new byte[4];
bytes[0] = (byte)(size & 0xFF);
bytes[1] = (byte)((size >> 8) & 0xFF);
bytes[2] = (byte)((size >> 16) & 0xFF);
bytes[3] = (byte)((size >> 24) & 0xFF);
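Going the other direction, for example when you read a size field out of a response, shift the bytes back into place. A sketch, under the same little-endian assumption:
int size = bytes[0]
         | (bytes[1] << 8)
         | (bytes[2] << 16)
         | (bytes[3] << 24);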
If you want to rely on the CLR, you can use BitConverter, although it's platform dependent, and not all platforms are little-endian.
var tmp = BitConverter.GetBytes(size);
byte[] bytes = new byte[4];
if (BitConverter.IsLittleEndian)
{
    bytes[0] = tmp[0];
    bytes[1] = tmp[1];
    bytes[2] = tmp[2];
    bytes[3] = tmp[3];
}
else // in case you are running on a big-endian machine
{
    bytes[0] = tmp[3];
    bytes[1] = tmp[2];
    bytes[2] = tmp[1];
    bytes[3] = tmp[0];
}
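Putting it all together, here is a sketch of a full packet builder. The field layout below (size, request id, type, null-terminated body, plus the empty-string terminator) follows the structure described on the Valve wiki page linked in the question; the method name is just for illustration:
byte[] BuildPacket(int requestId, int type, string body)
{
    byte[] bodyBytes = Encoding.ASCII.GetBytes(body);
    int size = 4 + 4 + bodyBytes.Length + 2;            // id + type + body + two null bytes
    byte[] packet = new byte[4 + size];                 // the size field itself is excluded from 'size'

    BitConverter.GetBytes(size).CopyTo(packet, 0);      // packet size
    BitConverter.GetBytes(requestId).CopyTo(packet, 4); // request id
    BitConverter.GetBytes(type).CopyTo(packet, 8);      // SERVERDATA_AUTH / SERVERDATA_EXECCOMMAND
    bodyBytes.CopyTo(packet, 12);                       // command text
    // The last two bytes stay zero: the body's null terminator and the
    // empty-string terminator required by the protocol.
    return packet;
}
On a big-endian platform you would still need the IsLittleEndian check from above before the CopyTo calls.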
I am writing a checksum for a manifest file for a courier-based system written in C# in the .NET environment.
I need to have an 8 digit field representing the checksum which is calculated as per the following:
Record Check Sum Algorithm
Form the 32-bit arithmetic sum of the products of
• the 7 low order bits of each ASCII character in the record
• the position of each character in the record numbered from 1 for the first character.
for the length of the record up to but excluding the check sum field itself :
Sum = Σ_i ( ASCII( i-th character in the record ) × i )
where i runs over the length of the record excluding the check sum field.
After performing this calculation, convert the resultant sum to binary and split the 32 low order
bits of the Sum into eight blocks of 4 bits (octets). Note that each of the octets has a decimal
number value ranging from 0 to 15.
Add an offset of ASCII 0 ( zero ) to each octet to form an ASCII code number.
Convert the ASCII code number to its equivalent ASCII character thus forming printable
characters in the range 0123456789:;<=>?.
Concatenate each of these characters to form a single string of eight (8) characters in overall
length.
I am not the greatest at mathematics so I am struggling to write the code correctly as per the documentation.
I have written the following so far:
byte[] sumOfAscii = null;
for(int i = 1; i< recordCheckSum.Length; i++)
{
string indexChar = recordCheckSum.ElementAt(i).ToString();
byte[] asciiChar = Encoding.ASCII.GetBytes(indexChar);
for(int x = 0; x<asciiChar[6]; x++)
{
sumOfAscii += asciiChar[x];
}
}
//Turn into octets
byte firstOctet = 0;
for(int i = 0;i< sumOfAscii[6]; i++)
{
firstOctet += recordCheckSum;
}
Where recordCheckSum is a string made up of deliveryAddresses, product names etc and excludes the 8-digit checksum.
Any help with calculating this would be greatly appreciated as I am struggling.
There are notes in line as I go along. Some more notes on the calculation at the end.
uint sum = 0;
uint zeroOffset = 0x30; // ASCII '0'
byte[] inputData = Encoding.ASCII.GetBytes(recordCheckSum);
for (int i = 0; i < inputData.Length; i++)
{
int product = inputData[i] & 0x7F; // Take the low 7 bits from the record.
product *= i + 1; // Multiply by the 1 based position.
sum += (uint)product; // Add the product to the running sum.
}
byte[] result = new byte[8];
for (int i = 0; i < 8; i++) // if the checksum is reversed, make this:
// for (int i = 7; i >=0; i--)
{
uint current = (uint)(sum & 0x0f); // take the lowest 4 bits.
current += zeroOffset; // Add '0'
result[i] = (byte)current;
sum = sum >> 4; // Right shift the bottom 4 bits off.
}
string checksum = Encoding.ASCII.GetString(result);
One note: I use the & and >> operators, which you may or may not be familiar with. The & operator is the bitwise AND operator. The >> operator is a logical shift right.
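As a quick sanity check, take the two-character record "AB": 'A' is 65 and contributes 65 × 1 = 65, and 'B' is 66 and contributes 66 × 2 = 132, so the sum is 197 = 0x000000C5. Reading the eight low-order nibbles from least significant to most gives 5, 12, 0, 0, 0, 0, 0, 0; adding '0' (0x30) to each yields the checksum string "5<000000" (12 + 0x30 is 0x3C, the character '<').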
I've been trying to convert a 32-bit stereo WAV to a 16-bit mono WAV. I use NAudio to capture the sound, and I thought that using just the two more significant bytes of each four would work.
Here is the DataAvailable implementation:
void _waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
byte[] newArray = new byte[e.BytesRecorded / 2];
short two;
for (int i = 0, j = 0; i < e.BytesRecorded; i = i + 4, j = j + 2)
{
two = (short)BitConverter.ToInt16(e.Buffer, i + 2);
newArray[j] = (byte)(two & 0xFF);
newArray[j + 1] = (byte)((two >> 8) & 0xFF);
}
//do something with the new array:
}
Any help would be greatly appreciated!
I finally found the solution. I just had to multiply the converted value by 32767 and cast it to short:
void _waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    byte[] newArray16Bit = new byte[e.BytesRecorded / 2];
    short two;
    float value;
    for (int i = 0, j = 0; i < e.BytesRecorded; i += 4, j += 2)
    {
        // Each 32-bit sample is an IEEE float in [-1.0, 1.0].
        value = BitConverter.ToSingle(e.Buffer, i);
        // Scale to the 16-bit range and write it out little-endian.
        two = (short)(value * short.MaxValue);
        newArray16Bit[j] = (byte)(two & 0xFF);
        newArray16Bit[j + 1] = (byte)((two >> 8) & 0xFF);
    }
}
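One caveat: float samples can occasionally exceed [-1.0, 1.0], and then value * short.MaxValue wraps around when cast to short. Clamping first avoids that; a small sketch:
// Clamp to [-1.0, 1.0] before scaling to avoid wraparound.
float clamped = Math.Max(-1f, Math.Min(1f, value));
two = (short)(clamped * short.MaxValue);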
A 32-bit sample can be as high as 4,294,967,295 and a 16-bit sample can be as high as 65,535. So you'll have to scale the 32-bit sample down to fit into the 16-bit sample's range. More or less, you're doing something like this...
SixteenBitSample = ( ThirtyTwoBitSample / 4294967295.0 ) * 65535;
EDIT:
For the stereo-to-mono portion: if the two channels have the same data, just dump one of them; otherwise, add the waveforms together, and if the result falls outside the 16-bit sample range (65,535) scale it down as in the equation above.
Hope that helps.
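For what it's worth, here is a sketch of both steps for 32-bit signed integer PCM (an assumption; the accepted answer above shows this particular capture was actually 32-bit float, which is scaled by short.MaxValue instead):
short ToMono16(int left, int right)
{
    // Average the two channels in 64-bit to avoid overflow, then keep
    // the top 16 bits (equivalent to dividing by 65536).
    long mono = ((long)left + right) / 2;
    return (short)(mono >> 16);
}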
I have a 1-dimensional float array of root mean square values, each calculated with the same window length. Let's say
RMS = {0, 0.01, 0.4, ... }
Now the RMS for a larger window, which can be represented as a range of the original windows, can be calculated as the RMS of the "participating" RMS values from RMS[i] to RMS[i + len]. Here len is the length of the larger window divided by the length of the original windows.
I'd like to create a rolling window. I want
rollingRMS[0] = RMS from 0 to len
...
rollingRMS[n] = RMS from n to len+n
calculated as efficiently as possible. I know this isn't very hard to crack, but does anyone have ready code for this?
EDIT: I asked for sample code, so I guess it would be decent to provide some. The following is based on pierr's answer and is written in C#. It's a bit different from my original question, as I realized it would be nice for the resulting array to have the same size as the original and for the windows to end at each element.
// The RMS data to be analysed
float[] RMS = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
// The resulting rolling RMS values
float[] rollingRMS = new float[RMS.Length];
// Window length
int len = 3;
// Calculate: rollingRMS will hold root mean square from windows which end at
// each respective sample in the RMS array. For the first len samples the input
// will be treated as zero-padded
for (int i = 0; i < RMS.Length; i++)
{
if (i == 0)
rollingRMS[i] = (float)Math.Sqrt((RMS[i] * RMS[i] / len));
else if (i < len)
rollingRMS[i] = (float)Math.Sqrt(
( RMS[i] * RMS[i] +
len * (rollingRMS[i - 1] * rollingRMS[i - 1])
) / len);
else
rollingRMS[i] = (float)Math.Sqrt(
( len * (rollingRMS[i - 1] * rollingRMS[i - 1]) +
RMS[i] * RMS[i] -
RMS[i - len] * RMS[i - len]
) / len);
}
I am not sure that I have understood your problem correctly, but let me have a try.
a=[1,2,3,4,5,6,7,8,9,10]
LEN = 3
SquareOfRollingRMS[0] = ( a[0]^2 + a[1]^2 + a[2]^2 ) / LEN
SquareOfRollingRMS[1] = ( a[1]^2 + a[2]^2 + a[3]^2 ) / LEN
It's not difficult to notice that:
SquareOfRollingRMS[i] = ( SquareOfRollingRMS[i-1] * LEN - a[i-1]^2 + a[i+LEN-1]^2 ) / LEN
RollingRMS[i] = SquareOfRollingRMS[i]^(1/2)
Doing it this way, you avoid recalculating the overlapping parts of the windows.
EDIT:
You can save some divide and multiply operations by moving LEN to the left side of the equations. This might speed things up considerably, as division is usually relatively slow.
LEN_by_SquareOfRollingRMS[0] = a[0]^2 + a[1]^2 + a[2]^2
LEN_by_SquareOfRollingRMS[i] = LEN_by_SquareOfRollingRMS[i-1] - a[i-1]^2 + a[i+LEN-1]^2
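A C# sketch of this running-sum approach, keeping LEN_by_SquareOfRollingRMS as a running sum of squares (the names and the output length of one value per full window are my choices):
float[] RollingRms(float[] rms, int len)
{
    float[] result = new float[rms.Length - len + 1];
    double sumOfSquares = 0;                    // LEN_by_SquareOfRollingRMS
    for (int i = 0; i < len; i++)
        sumOfSquares += rms[i] * rms[i];
    result[0] = (float)Math.Sqrt(sumOfSquares / len);
    for (int i = 1; i < result.Length; i++)
    {
        // Slide the window: add the entering square, drop the leaving one.
        sumOfSquares += rms[i + len - 1] * rms[i + len - 1]
                      - rms[i - 1] * rms[i - 1];
        result[i] = (float)Math.Sqrt(sumOfSquares / len);
    }
    return result;
}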