Decoding RTP G.711 A-Law packet problem, difference between C# & C++ - c#

First, sorry for my English.
Using C#, I'm trying to decode an RTP A-law packet, but it gives me noise.
I checked my code against the Wireshark code, which produces the voice without noise, but I can't find the difference between the Wireshark code (C++) and my code (C#) because I can't debug Wireshark. I did, however, compare the bytes produced by my code with those produced by the Wireshark code.
Below are the bytes produced by the two code paths, plus a sample of my code and of the Wireshark code.
For example:
when alaw_exp_table[data[i]] = -8
my code produces the bytes: 248, 255
the Wireshark code produces the bytes: 255, 248
As you can see, 248, 255 vs. 255, 248 looks like the bytes are simply swapped, but the next example doesn't fit that pattern:
when alaw_exp_table[data[i]] = 8
my code produces the bytes: 8, 0
the Wireshark code produces the bytes: 0, 0
This is the Wireshark code:
int
decodeG711a(void *input, int inputSizeBytes, void *output, int *outputSizeBytes)
{
    guint8 *dataIn = (guint8 *)input;
    gint16 *dataOut = (gint16 *)output;
    int i;

    for (i = 0; i < inputSizeBytes; i++)
    {
        dataOut[i] = alaw_exp_table[dataIn[i]];
    }
    *outputSizeBytes = inputSizeBytes * 2;
    return 0;
}
static short[] alaw_exp_table= {
-5504, -5248, -6016, -5760, -4480, -4224, -4992, -4736,
-7552, -7296, -8064, -7808, -6528, -6272, -7040, -6784,
-2752, -2624, -3008, -2880, -2240, -2112, -2496, -2368,
-3776, -3648, -4032, -3904, -3264, -3136, -3520, -3392,
-22016,-20992,-24064,-23040,-17920,-16896,-19968,-18944,
-30208,-29184,-32256,-31232,-26112,-25088,-28160,-27136,
-11008,-10496,-12032,-11520, -8960, -8448, -9984, -9472,
-15104,-14592,-16128,-15616,-13056,-12544,-14080,-13568,
-344, -328, -376, -360, -280, -264, -312, -296,
-472, -456, -504, -488, -408, -392, -440, -424,
-88, -72, -120, -104, -24, -8, -56, -40,
-216, -200, -248, -232, -152, -136, -184, -168,
-1376, -1312, -1504, -1440, -1120, -1056, -1248, -1184,
-1888, -1824, -2016, -1952, -1632, -1568, -1760, -1696,
-688, -656, -752, -720, -560, -528, -624, -592,
-944, -912, -1008, -976, -816, -784, -880, -848,
5504, 5248, 6016, 5760, 4480, 4224, 4992, 4736,
7552, 7296, 8064, 7808, 6528, 6272, 7040, 6784,
2752, 2624, 3008, 2880, 2240, 2112, 2496, 2368,
3776, 3648, 4032, 3904, 3264, 3136, 3520, 3392,
22016, 20992, 24064, 23040, 17920, 16896, 19968, 18944,
30208, 29184, 32256, 31232, 26112, 25088, 28160, 27136,
11008, 10496, 12032, 11520, 8960, 8448, 9984, 9472,
15104, 14592, 16128, 15616, 13056, 12544, 14080, 13568,
344, 328, 376, 360, 280, 264, 312, 296,
472, 456, 504, 488, 408, 392, 440, 424,
88, 72, 120, 104, 24, 8, 56, 40,
216, 200, 248, 232, 152, 136, 184, 168,
1376, 1312, 1504, 1440, 1120, 1056, 1248, 1184,
1888, 1824, 2016, 1952, 1632, 1568, 1760, 1696,
688, 656, 752, 720, 560, 528, 624, 592,
944, 912, 1008, 976, 816, 784, 880, 848};
And this is my code:
public static void ALawDecode(byte[] data, out byte[] decoded)
{
    int size = data.Length;
    decoded = new byte[size * 2];
    for (int i = 0; i < size; i++)
    {
        // First byte is the less significant byte
        decoded[2 * i] = (byte)(alaw_exp_table[data[i]] & 0xff);
        // Second byte is the more significant byte
        decoded[2 * i + 1] = (byte)(alaw_exp_table[data[i]] >> 8);
    }
}
The alaw_exp_table is the same in my code and in the Wireshark code.
Please tell me what is wrong in my code that causes the noise.
Thanks in advance.

You are probably handling the endianness incorrectly.
Try swapping the two decoding operations in your C# sample, e.g.:
decoded[2 * i + 1] = (byte)(alaw_exp_table[data[i]] & 0xff);
decoded[2 * i] = (byte)(alaw_exp_table[data[i]] >> 8);

You are decoding eight-bit A-law samples into 16-bit signed PCM, so it would make sense for you to use an array of shorts for the output. This is close to what the C code is doing.
If you don't have a particular reason for using a byte array as output, I would suggest having the A-law lookup table be a short array and just moving 16-bit signed values around instead of messing around with byte ordering.
If you really do care about bytes and byte ordering, you need to get the byte ordering right, as @leppie says. This will depend on what you actually do with the output.
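For what it's worth, a minimal sketch of that short[]-based approach could look like the following (it assumes the same 256-entry alaw_exp_table shown in the question; how the PCM is played back is up to the caller):
public static short[] ALawDecode(byte[] data)
{
    short[] decoded = new short[data.Length];
    for (int i = 0; i < data.Length; i++)
    {
        // Each A-law byte maps directly to one signed 16-bit PCM sample
        decoded[i] = alaw_exp_table[data[i]];
    }
    return decoded;
}
If a raw byte stream is still required, Buffer.BlockCopy keeps the machine (little-endian) byte order of the shorts:
byte[] pcmBytes = new byte[decoded.Length * 2];
Buffer.BlockCopy(decoded, 0, pcmBytes, 0, pcmBytes.Length);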

Related

How can I store 4 8 bit coordinates into one integer (C#)?

Let's say I have the following four variables: player1X, player1Y, player2X, player2Y. These have, for example, the respective values 5, 10, 20, 12. Each of these values fits in 8 bits and I want to store them in one 32-bit integer. How can I achieve this?
By doing this, I want to create a dictionary, keeping count of how often certain states have happened in the game. For example, 5, 10, 20, 12 is one state, 6, 10, 20, 12 would be another.
You can use BitConverter
To get one Integer out of 4 bytes:
int i = BitConverter.ToInt32(new byte[] { player1X, player1Y, player2X, player2Y }, 0);
To get the four bytes out of the integer:
byte[] fourBytes = BitConverter.GetBytes(i);
To "squeeze" 4 8 bits value in a 32 bit space, you need to "shift" the bits for your various values, and add them together.
The opposite operations is to "unshift" and use some modulo to get the individual numbers you need.
Here is an alternative:
Make a struct with explicit packing. Expose:
the Int32 and all 4 bytes at the same time.
Make sure the packing overlaps (i.e. the int starts at offset 0, the byte fields at offsets 0, 1, 2, 3).
Done.
You can then easily access and work with the values WITHOUT a BitConverter et al., and you never have to allocate an array that is expensive just to throw away. A sketch is shown below.
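A rough sketch of that layout (the field names are made up for illustration; which byte of the packed value each field maps to follows the machine's endianness, little-endian on typical .NET targets):
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
public struct PackedState
{
    [FieldOffset(0)] public uint Packed;    // all 32 bits at once

    [FieldOffset(0)] public byte Player1X;  // overlaps byte 0 of Packed
    [FieldOffset(1)] public byte Player1Y;
    [FieldOffset(2)] public byte Player2X;
    [FieldOffset(3)] public byte Player2Y;
}

var state = new PackedState { Player1X = 5, Player1Y = 10, Player2X = 20, Player2Y = 12 };
uint key = state.Packed;  // usable as a dictionary key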
You can place the values by shifting them to the appropriate offsets.
Example:
// Composing
byte x1 = ...;
byte x2 = ...;
byte x3 = ...;
byte x4 = ...;
uint x = (uint)(x1 | (x2 << 0x8) | (x3 << 0x10) | (x4 << 0x18));

// Decomposing
uint x = ...;
byte x1 = (byte)(x & 0xFF);
byte x2 = (byte)((x >> 0x8) & 0xFF);
byte x3 = (byte)((x >> 0x10) & 0xFF);
byte x4 = (byte)((x >> 0x18) & 0xFF);
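Applied to the values from the question (5, 10, 20, 12), a quick round trip might look like this:
byte player1X = 5, player1Y = 10, player2X = 20, player2Y = 12;

uint packed = (uint)(player1X | (player1Y << 8) | (player2X << 16) | (player2Y << 24));
// packed == 0x0C140A05

byte p1x = (byte)(packed & 0xFF);          // 5
byte p1y = (byte)((packed >> 8) & 0xFF);   // 10
byte p2x = (byte)((packed >> 16) & 0xFF);  // 20
byte p2y = (byte)((packed >> 24) & 0xFF);  // 12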

Converting C# BitConverter.GetBytes() to PHP

I'm trying to port this C# code to PHP:
var headerList = new List<byte>();
headerList.AddRange(Encoding.ASCII.GetBytes("Hello\n"));
headerList.AddRange(BitConverter.GetBytes(1));
byte[] header = headerList.ToArray();
If I output header, what does it look like?
My progress so far:
$in_raw = "Hello\n";
for($i = 0; $i < mb_strlen($in_raw, 'ASCII'); $i++){
$in.= ord($in_raw[$i]);
}
$k=1;
$byteK=array(8); // should be 16? 32?...
for ($i = 0; $i < 8; $i++){
$byteK[$i] = (( $k >> (8 * $i)) & 0xFF); // Don't known if it is a valid PHP bitwise op
}
$in.=implode($byteK);
print_r($in);
Which gives me this output: 721011081081111010000000
I'm pretty confident that the first part, converting the string to ASCII bytes, is correct, but as for BitConverter... I don't know what to expect as output.
This string (or byte array) is used as a handshake for a socket connection. I know that the C# version works, but my ported code doesn't.
If you don't have access to a machine/tool that can run C#, there are a couple of REPL websites that you can use. I've taken your code, qualified a couple of the namespaces (just for convenience), wrapped it in a main() method to just run once as a CLI and put it here. It also includes a for loop that writes the contents of the array out so that you can see what is at each index.
Here's the same code for reference:
using System;

class MainClass {
    public static void Main (string[] args) {
        var headerList = new System.Collections.Generic.List<byte>();
        headerList.AddRange(System.Text.Encoding.ASCII.GetBytes("Hello\n"));
        headerList.AddRange(System.BitConverter.GetBytes(1));
        byte[] header = headerList.ToArray();

        foreach (byte b in header) {
            Console.WriteLine(b);
        }
    }
}
When you run this code, the following output is generated:
72
101
108
108
111
10
1
0
0
0
Encoding.ASCII.GetBytes("Hello\n").ToArray()
gives byte[6] { 72, 101, 108, 108, 111, 10 }
BitConverter.GetBytes((Int64)1).ToArray()
gives byte[8] { 1, 0, 0, 0, 0, 0, 0, 0 }
BitConverter.GetBytes((Int32)1).ToArray()
gives byte[4] { 1, 0, 0, 0 }
The last one is what you get for a plain literal 1, which the compiler treats as an Int32 by default.
In the PHP code, please try $byteK = array(4); and $i < 4.
The string "Hello\n" is already encoded in ASCII so you have nothing to do.
BitConverter.GetBytes() gives the binary representation of a 32-bit integer in machine byte order, which can be done in PHP with the pack() function and the l format.
So the PHP code is simply:
$in = "Hello\n";
$in .= pack('l', 1);

Text Hashing trick produces different results in Python and C#

I am trying to move a trained model into a production environment and have encountered an issue trying to replicate the behavior of the Keras hashing_trick() function in C#. When I go to encode the sentence my output is different in C# than it is in python:
Text: "Information - The configuration processing is completed."
Python: [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 217 142 262 113 319 413]
C#: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 433, 426, 425, 461, 336, 146, 52]
(copied from debugger, both sequences have length 30)
What I've tried:
Changing the encoding of the text bytes in C# to match the Python string.encode() default (UTF-8)
Changing the capitalization of letters to lowercase and uppercase
Using Convert.ToUInt32 instead of BitConverter (resulted in an overflow error)
Below is my implementation of the Keras hashing_trick function. A single input sentence is given, and the function returns the corresponding encoded sequence.
public uint[] HashingTrick(string data)
{
    const int VOCAB_SIZE = 534; // Determined through Python debugging of the model
    var filters = "!#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n".ToCharArray().ToList();
    filters.ForEach(x =>
    {
        data = data.Replace(x, '\0');
    });

    string[] parts = data.Split(' ');
    var encoded = new List<uint>();
    parts.ToList().ForEach(x =>
    {
        using (System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create())
        {
            byte[] inputBytes = System.Text.Encoding.UTF8.GetBytes(x);
            byte[] hashBytes = md5.ComputeHash(inputBytes);

            uint val = BitConverter.ToUInt32(hashBytes, 0);
            encoded.Add(val % (VOCAB_SIZE - 1) + 1);
        }
    });

    return PadSequence(encoded, 30);
}

private uint[] PadSequence(List<uint> seq, int maxLen)
{
    if (seq.Count < maxLen)
    {
        while (seq.Count < maxLen)
        {
            seq.Insert(0, 0);
        }
        return seq.ToArray();
    }
    else if (seq.Count > maxLen)
    {
        return seq.GetRange(seq.Count - maxLen - 1, maxLen).ToArray();
    }
    else
    {
        return seq.ToArray();
    }
}
The keras implementation of the hashing trick can be found here
If it helps, I am using an ASP.NET Web API as my solution type.
The biggest problem with your code is that it fails to account for the fact that Python's int is an arbitrary precision integer, while C#'s uint has only 32 bits. This means that Python is calculating the modulo over all 128 bits of the hash, while C# is not (and BitConverter.ToUInt32 is the wrong thing to do in any case, as the endianness is wrong). The other problem that trips you up is that \0 does not terminate strings in C#, and \0 can't just be added to an MD5 hash without changing the outcome.
Translated in as straightforward a manner as possible:
int[] hashingTrick(string text, int n, string filters, bool lower, string split) {
    var splitWords = String.Join("", text.Where(c => !filters.Contains(c)))
        .Split(new[] { split }, StringSplitOptions.RemoveEmptyEntries);

    return (
        from word in splitWords
        let bytes = Encoding.UTF8.GetBytes(lower ? word.ToLower() : word)
        let hash = MD5.Create().ComputeHash(bytes)
        // add a 0 byte to force a non-negative result, per the BigInteger docs
        let w = new BigInteger(hash.Reverse().Concat(new byte[] { 0 }).ToArray())
        select (int) (w % (n - 1) + 1)
    ).ToArray();
}
Sample use:
const int vocabSize = 534;
Console.WriteLine(String.Join(" ",
    hashingTrick(
        text: "Information - The configuration processing is completed.",
        n: vocabSize,
        filters: "!#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n",
        lower: true,
        split: " "
    ).Select(i => i.ToString())
));
217 142 262 113 319 413
This code has various inefficiencies: filtering characters with LINQ is very inefficient compared to using a StringBuilder and we don't really need BigInteger here since MD5 is always exactly 128 bits, but optimizing (if necessary) is left as an exercise to the reader, as is padding the outcome (which you already have a function for).
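For reference, a StringBuilder-based filter along those lines could look like this (a sketch intended to match the LINQ version's behavior of simply dropping the filtered characters):
static string StripFilters(string text, string filters)
{
    var sb = new System.Text.StringBuilder(text.Length);
    foreach (char c in text)
    {
        // keep only characters that are not in the filter set
        if (filters.IndexOf(c) < 0)
            sb.Append(c);
    }
    return sb.ToString();
}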
Instead of fighting with C# to get the hashing right, I took a different approach to the problem. When building the data set to train the model (this is a machine learning project, after all), I decided to use @Jeron Mostert's implementation of the hashing function to pre-hash the data set before feeding it into the model.
This solution was much easier to implement and ended up working just as well as the original text hashing. Word of advice for those attempting cross-language hashing like me: don't do it, it's a lot of headache! Use one language for hashing your text data and find a way to create a valid data set with all of the information required.

iOS & .NET Produce Different AES256 Results

I've been at this for a few days now. My original (and eventual) goal was to use CommonCrypto on iOS to encrypt a password with a given IV and key, then successfully decrypt it using .NET. After tons of research and failures I've narrowed down my goal to simply producing the same encrypted bytes on iOS and .NET, then going from there.
I've created simple test projects in .NET (C#, framework 4.5) and iOS (8.1). Please note the following code is not intended to be secure, but rather winnow down the variables in the larger process. Also, iOS is the variable here. The final .NET encryption code will be deployed by a client so it's up to me to bring the iOS encryption in line. Unless this is confirmed impossible the .NET code will not be changed.
The relevant .NET encryption code:
static byte[] EncryptStringToBytes_Aes(string plainText, byte[] Key, byte[] IV)
{
    byte[] encrypted;

    // Create an Aes object
    // with the specified key and IV.
    using (Aes aesAlg = Aes.Create())
    {
        aesAlg.Padding = PaddingMode.PKCS7;
        aesAlg.KeySize = 256;
        aesAlg.BlockSize = 128;

        // Create an encryptor to perform the stream transform.
        ICryptoTransform encryptor = aesAlg.CreateEncryptor(Key, IV);

        // Create the streams used for encryption.
        using (MemoryStream msEncrypt = new MemoryStream())
        {
            using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
            {
                using (StreamWriter swEncrypt = new StreamWriter(csEncrypt))
                {
                    // Write all data to the stream.
                    swEncrypt.Write(plainText);
                }
                encrypted = msEncrypt.ToArray();
            }
        }
    }
    return encrypted;
}
The relevant iOS encryption code:
+ (NSData *)AES256EncryptData:(NSData *)data withKey:(NSData *)key iv:(NSData *)ivector
{
    Byte keyPtr[kCCKeySizeAES256 + 1]; // Pointer with room for terminator (unused)

    // Pad to the required size
    bzero(keyPtr, sizeof(keyPtr));

    // fetch key data
    [key getBytes:keyPtr length:sizeof(keyPtr)];

    // -- IV LOGIC
    Byte ivPtr[16];
    bzero(ivPtr, sizeof(ivPtr));
    [ivector getBytes:ivPtr length:sizeof(ivPtr)];

    // Data length
    NSUInteger dataLength = data.length;

    // See the doc: For block ciphers, the output size will always be less than or equal to the input size plus the size of one block.
    // That's why we need to add the size of one block here
    size_t bufferSize = dataLength + kCCBlockSizeAES128;
    void *buffer = malloc(bufferSize);

    size_t numBytesEncrypted = 0;
    CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                          keyPtr, kCCKeySizeAES256,
                                          ivPtr,
                                          data.bytes, dataLength,
                                          buffer, bufferSize,
                                          &numBytesEncrypted);

    if (cryptStatus == kCCSuccess) {
        return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
    }

    free(buffer);
    return nil;
}
The relevant code for passing the passphrase, key, and IV in .NET and printing the result:
byte[] c_IV = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
byte[] c_Key = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
String passPhrase = "X";
// Encrypt
byte[] encrypted = EncryptStringToBytes_Aes(passPhrase, c_Key, c_IV);
// Print result
for (int i = 0; i < encrypted.Count(); i++)
{
    Console.WriteLine("[{0}] {1}", i, encrypted[i]);
}
The relevant code for passing the parameters and printing the result in iOS:
Byte c_iv[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
Byte c_key[16] = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
NSString *passPhrase = @"X";

// Convert to data
NSData *ivData = [NSData dataWithBytes:c_iv length:sizeof(c_iv)];
NSData *keyData = [NSData dataWithBytes:c_key length:sizeof(c_key)];

// Convert string to encrypt to data
NSData *passData = [passPhrase dataUsingEncoding:NSUTF8StringEncoding];

NSData *encryptedData = [CryptoHelper AES256EncryptData:passData withKey:keyData iv:ivData];

long size = sizeof(Byte);
for (int i = 0; i < encryptedData.length / size; i++) {
    Byte val;
    NSRange range = NSMakeRange(i * size, size);
    [encryptedData getBytes:&val range:range];
    NSLog(@"[%i] %hhu", i, val);
}
Upon running the .NET code it prints out the following bytes after encryption:
[0] 194
[1] 154
[2] 141
[3] 238
[4] 77
[5] 109
[6] 33
[7] 94
[8] 158
[9] 5
[10] 7
[11] 187
[12] 193
[13] 165
[14] 70
[15] 5
Conversely, iOS prints the following after encryption:
[0] 77
[1] 213
[2] 61
[3] 190
[4] 197
[5] 191
[6] 55
[7] 230
[8] 150
[9] 144
[10] 5
[11] 253
[12] 253
[13] 158
[14] 34
[15] 138
I cannot for the life of me determine what is causing this difference. Some things I've already confirmed:
Both iOS and .NET can successfully decrypt their encrypted data.
The lines of code in the .NET project:
aesAlg.Padding = PaddingMode.PKCS7;
aesAlg.KeySize = 256;
aesAlg.BlockSize = 128;
Do not affect the result. They can be commented out and the output is the same. I assume this means they are the default values. I've only left them in to make it obvious that I'm matching iOS's encryption properties as closely as possible for this example.
If I print out the bytes in the iOS NSData objects "ivData" and "keyData", they produce the same list of bytes that I created them with, so I don't think this is a C <-> ObjC bridging problem for the initial parameters.
If I print out the bytes in the iOS variable "passData" it prints the same single byte as .NET (88). So I'm fairly certain they are starting the encryption with the exact same data.
Due to how concise the .NET code is, I've run out of obvious avenues of experimentation. My only thought is that someone may be able to point out a problem in my "AES256EncryptData:withKey:iv:" method. That code has been modified from the ubiquitous iOS AES256 code floating around, because the key we are provided is a byte array, not a string. I'm pretty well versed in ObjC but not nearly as comfortable with the C nonsense, so it's certainly possible I've fumbled the required modifications.
All help or suggestions would be greatly appreciated.
I notice you are using AES-256 but have a 128-bit key (16 bytes x 8 bits). You cannot count on different implementations padding a short key the same way; that behavior is undefined.
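For example, a sketch of supplying a full 32-byte key on the .NET side, so neither platform has to pad anything (the values are only for illustration, and the iOS test code would need the same 32 bytes):
// AES-256 expects a 32-byte key; supplying one explicitly means
// neither platform has to pad it. (Values are only for illustration.)
byte[] c_Key = new byte[32]
{
    32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17,
    16, 15, 14, 13, 12, 11, 10,  9,  8,  7,  6,  5,  4,  3,  2,  1
};
byte[] c_IV = new byte[16] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };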
You're likely dealing with an issue of string encoding. In your iOS code I see that you are passing the string as UTF-8, which would result in a one-byte string of "X". .NET by default uses UTF-16, which means you have a two-byte string of "X".
You can use How to convert a string to UTF8? to convert your string to a UTF-8 byte array in .NET. You can try writing out the byte array of the plain-text string in both cases to determine that you are in fact passing the same bytes.
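One way to make the plaintext bytes explicit on the .NET side, so they can be compared byte-for-byte with what the iOS code hands to CCCrypt, is to skip the StreamWriter and encrypt a UTF-8 byte array directly. A sketch (assumes the System.Text and System.Security.Cryptography namespaces already used above):
static byte[] EncryptUtf8StringToBytes_Aes(string plainText, byte[] key, byte[] iv)
{
    // Encode the plaintext explicitly so the exact input bytes are known
    byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);

    using (Aes aesAlg = Aes.Create())
    {
        aesAlg.Padding = PaddingMode.PKCS7;
        using (ICryptoTransform encryptor = aesAlg.CreateEncryptor(key, iv))
        {
            return encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
        }
    }
}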

A 32-bit unsigned fixed-point number (16.16)

I have an array of bytes and I want to read it as a 32-bit unsigned fixed-point number (16.16) in C#.
The output must be 44100.
The array of bytes:
byte[] m = new byte[4] { 172, 68, 0, 0 };
Console.WriteLine("sample rate {0}", BitConverter.ToInt32(m, 0));
The output is 17580. This is wrong: it should be 44100.
How do I convert it to a 32-bit unsigned fixed-point number (16.16) in C#?
.NET doesn't have a built-in 32-bit fixed-point data type, but you can store the result pretty easily in a double.
This is not quite as efficient or elegant as what you're probably looking for, but you could do something like this to convert your byte array to a double:
byte[] m = new byte[4] { 172, 68, 0, 0 };
double[] magnitude = new[] { 256.0, 1.0, 1.0/256.0, 1.0/65536.0 };
double i = m.Zip(magnitude, (x, y) => x * y).Sum(); // 44100.0
Alternatively, if you change the way you store the bits like this:
byte[] m = new byte[4] { 0, 0, 68, 172 };
double i = BitConverter.ToUInt32(m, 0) / 65536.0; // 44100.0
The conversion between your original storage format and this one is fairly straightforward. You could probably simply reverse the bytes, although I'm not entirely sure which end of your array is the more significant.
How you answer @JonSkeet's comment above will determine the fractional value of this. However, this solution works for the integer part:
byte[] m = new byte[4] { 172, 68, 0, 0 };
byte[] fraction = m.Reverse().Take(2).ToArray();
byte[] integer = m.Reverse().Skip(2).Take(2).ToArray();
System.Diagnostics.Debug.Print("{0}", BitConverter.ToUInt16(integer, 0));
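If the fractional 16 bits are needed as well, the same reversed array can be reused (a sketch; needs System.Linq, and for the sample bytes the fraction is zero, so the result is still 44100.0):
byte[] m = new byte[4] { 172, 68, 0, 0 };
byte[] reversed = m.Reverse().ToArray();               // { 0, 0, 68, 172 }
ushort intPart = BitConverter.ToUInt16(reversed, 2);   // 44100
ushort fracPart = BitConverter.ToUInt16(reversed, 0);  // 0
double value = intPart + fracPart / 65536.0;           // 44100.0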
