Integer to UTF-8 string not working in C#

I'm converting a UTF-8 string to an integer and back the other way.
If I enter 卐 as a string, it converts to 21328.
But when I try to convert 21328 back to a string, I get "PS".
I tried:
int dec = Convert.ToInt32(decimal1.Text, 10);
byte[] bajti = new byte[4];
bajti[0] = (byte)(dec >> 24);
bajti[1] = (byte)(dec >> 16);
bajti[2] = (byte)(dec >> 8);
bajti[3] = (byte)dec;
znak1.Text = Encoding.UTF8.GetString(bajti);
I have also tried converting using BitConverter and got the same result.
I thought it could be a problem with the TextBox, so I tried writing it down in Notepad, but got the same result...

You can also try the following code:
// Conversion from String to Int32
string text = "§";
byte[] textBytes = Encoding.UTF8.GetBytes(text);
byte[] numberBytes = new byte[sizeof(int)];
Array.Copy(textBytes, numberBytes, textBytes.Length);
int number = BitConverter.ToInt32(numberBytes, 0);
//Conversion from Int32 to String
numberBytes = BitConverter.GetBytes(number);
text = Encoding.UTF8.GetString(numberBytes);
PS: The code will work, but some characters take up fewer than 4 bytes when encoded, so when converting back to a string from an Int32 (4 bytes), trailing \0 characters may appear (these are not rendered, because they represent a null character).
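If those trailing nulls are unwanted, one simple option (a sketch, assuming the decoded character itself contains no embedded nulls) is to trim them off after decoding:
text = Encoding.UTF8.GetString(numberBytes).TrimEnd('\0');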

Try this:
byte[] bajti = HexToBytes(hex1.Text); // HexToBytes: the answerer's hex-string-to-byte-array helper
char c = 'a';
if (bajti.Length == 1)
{
    c = (char)bajti[0];
}
else if (bajti.Length == 2)
{
    c = (char)((bajti[0] << 8) + bajti[1]);
}
else if (bajti.Length == 3)
{
    c = (char)((bajti[0] << 16) + (bajti[1] << 8) + bajti[2]);
}
else if (bajti.Length == 4)
{
    // note: a char is 16 bits, so anything above 0xFFFF is truncated by this cast
    c = (char)((bajti[0] << 24) + (bajti[1] << 16) + (bajti[2] << 8) + bajti[3]);
}
znak1.Text = c.ToString();
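For the round trip the original question seems to be after (a Unicode code point to a string and back), the framework already provides char.ConvertToUtf32 and char.ConvertFromUtf32. A minimal sketch, reusing the textbox names znak1 and decimal1 from the question:
// string -> code point; handles surrogate pairs, unlike casting s[0] to int
int codePoint = char.ConvertToUtf32(znak1.Text, 0); // 卐 gives 21328
decimal1.Text = codePoint.ToString();
// code point -> string
znak1.Text = char.ConvertFromUtf32(codePoint); // 21328 gives back 卐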

Related

Android base64 hash mismatch with server side hash using C# script

I am creating a base64 hash using HMAC-SHA256 in my Android application and sending it to the server to be matched against the server-side hash.
Following this tutorial.
Working Android code:
public String getHash(String data, String key)
{
    try
    {
        String secret = key;
        String message = data;
        Mac sha256_HMAC = Mac.getInstance("HmacMD5");
        SecretKeySpec secret_key = new SecretKeySpec(secret.getBytes(), "HmacMD5");
        sha256_HMAC.init(secret_key);
        String hash = Base64.encodeBase64String(sha256_HMAC.doFinal(message.getBytes()));
        System.out.println(hash);
        return hash;
    }
    catch (Exception e)
    {
        System.out.println("Error");
        return null; // the method must return on every path to compile
    }
}
The server code is in C# and is as below:
using System.Security.Cryptography;

namespace Test
{
    public class MyHmac
    {
        private string CreateToken(string message, string secret)
        {
            secret = secret ?? "";
            var encoding = new System.Text.ASCIIEncoding();
            byte[] keyByte = encoding.GetBytes(secret);
            byte[] messageBytes = encoding.GetBytes(message);
            using (var hmacsha256 = new HMACSHA256(keyByte))
            {
                byte[] hashmessage = hmacsha256.ComputeHash(messageBytes);
                return Convert.ToBase64String(hashmessage);
            }
        }
    }
}
but the hash generated on the Android side does not match the server side. Below is Objective-C code which generates the same result as the C# code.
Objective-C code:
#import "AppDelegate.h"
#import <CommonCrypto/CommonHMAC.h>
#implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
NSString* key = #"secret";
NSString* data = #"Message";
const char *cKey = [key cStringUsingEncoding:NSASCIIStringEncoding];
const char *cData = [data cStringUsingEncoding:NSASCIIStringEncoding];
unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
NSData *hash = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
NSLog(#"%#", hash);
NSString* s = [AppDelegate base64forData:hash];
NSLog(s);
}
+ (NSString*)base64forData:(NSData*)theData
{
const uint8_t* input = (const uint8_t*)[theData bytes];
NSInteger length = [theData length];
static char table[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
NSMutableData* data = [NSMutableData dataWithLength:((length + 2) / 3) * 4];
uint8_t* output = (uint8_t*)data.mutableBytes;
NSInteger i;
for (i=0; i < length; i += 3) {
NSInteger value = 0;
NSInteger j;
for (j = i; j < (i + 3); j++) {
value <<= 8;
if (j < length) { value |= (0xFF & input[j]);
}
}
NSInteger theIndex = (i / 3) * 4; output[theIndex + 0] = table[(value >> 18) & 0x3F];
output[theIndex + 1] = table[(value >> 12) & 0x3F];
output[theIndex + 2] = (i + 1) < length ? table[(value >> 6) & 0x3F] : '=';
output[theIndex + 3] = (i + 2) < length ? table[(value >> 0) & 0x3F] : '=';
}
return [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
}
#end
Please help me solve this issue.
Thanks in advance.
I have solved this issue by changing HmacSHA256 to HmacMD5, which gives the same hash value as the C# code.
I have updated my question with the working code. Check it.
I suspect this is an encoding issue.
In one sample you specify the string should be encoded using ASCII when converting the string to a byte array. In the other sample you do not specify an encoding.
If the default encoding is anything other than ASCII that means the byte arrays will be different, leading to different hash results.
On Android, secret.getBytes() uses the platform default charset, which is not necessarily ASCII; check the length of the result. In general, separate such calls out into separate statements for easier debugging.
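The practical fix is to pin the same encoding explicitly on both ends instead of relying on defaults. A minimal C# sketch of the server side using UTF-8 throughout (the Java side would then need getBytes("UTF-8") to match; "secret" and "Message" are the sample values from the Objective-C code):
using System;
using System.Security.Cryptography;
using System.Text;

class HmacDemo
{
    static string CreateToken(string message, string secret)
    {
        // use an explicit encoding so client and server hash exactly the same bytes
        byte[] keyBytes = Encoding.UTF8.GetBytes(secret ?? "");
        byte[] messageBytes = Encoding.UTF8.GetBytes(message);
        using (var hmac = new HMACSHA256(keyBytes))
        {
            return Convert.ToBase64String(hmac.ComputeHash(messageBytes));
        }
    }

    static void Main()
    {
        // for these ASCII-only inputs this should print the value shown in
        // the Objective-C output below: qnR8UCqJggD55PohusaBNviGoOJ67HC6Btry4qXLVZc=
        Console.WriteLine(CreateToken("Message", "secret"));
    }
}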
Not the answer, rather a demonstration of a simpler Obj-C implementation that provides the hash and Base64 values:
NSString* key = @"secret";
NSString* data = @"Message";
NSData *keyData = [key dataUsingEncoding:NSASCIIStringEncoding];
NSData *dataData = [data dataUsingEncoding:NSASCIIStringEncoding];
NSMutableData *hash = [NSMutableData dataWithLength:CC_SHA256_DIGEST_LENGTH];
CCHmac(kCCHmacAlgSHA256, keyData.bytes, keyData.length, dataData.bytes, dataData.length, hash.mutableBytes);
NSLog(@"hash: %@", hash);
NSString* s = [hash base64EncodedStringWithOptions:0];
NSLog(@"s: %@", s);
Output:
hash: <aa747c50 2a898200 f9e4fa21 bac68136 f886a0e2 7aec70ba 06daf2e2 a5cb5597>
s: qnR8UCqJggD55PohusaBNviGoOJ67HC6Btry4qXLVZc=

Decoding a base64 string from Windows Phone

I am getting a SOAP response which contains a base64 string. I am using XDocument to get the value of the element and a function like this to read it:
public void main()
{
    //****UPDATE
    string data64 = "";
    data64 = removeNewLinesFromString(data64); // user-defined helper that strips newlines
    char[] content = data64.ToCharArray();
    byte[] binaryData = Convert.FromBase64CharArray(content, 0, content.Length);
    Stream stream = new MemoryStream(binaryData);
    BinaryReader reader = new BinaryReader(stream, Encoding.UTF8);
    string object64 = SoapSerializable.ReadUTF(reader);
}
This is the ReadUTF function:
public static String ReadUTF(BinaryReader reader)
{
    // read the following string's length in bytes
    int length = Helpers.FlipInt32(reader.ReadInt32());
    // read the string's data bytes
    byte[] utfString = reader.ReadBytes(length);
    // get the string by interpreting the read data as UTF-8
    return System.Text.Encoding.UTF8.GetString(utfString, 0, utfString.Length);
}
and my FlipInt32 function:
public static Int32 FlipInt32(Int32 value)
{
    Int32 a = (value >> 24) & 0xFF;
    Int32 b = (value >> 16) & 0xFF;
    Int32 c = (value >> 8) & 0xFF;
    Int32 d = (value >> 0) & 0xFF;
    return (((((d << 8) | c) << 8) | b) << 8) | a;
}
but the resulting values are slightly different from the results an online decoder gives.
Am I missing something here?
I am not sure what you are trying to do with the BinaryReader, but here is what I do. The following is a dummy encoded base64 string standing in for your base64 data:
string data64 = "dGhpcyBpcyBhIGR1bW15IGVuY29kZWQgYmFzZTY0IHN0cmluZy4=";
var buf = Convert.FromBase64String(data64);
var str = Encoding.UTF8.GetString(buf);
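If the payload really is a length-prefixed string, a big-endian length followed by UTF-8 bytes as ReadUTF assumes, the hand-rolled FlipInt32 can be replaced with IPAddress.NetworkToHostOrder. A sketch under that assumption, reusing buf from above:
using (var reader = new BinaryReader(new MemoryStream(buf)))
{
    // the prefix is big-endian (network order); convert it to host order
    int length = System.Net.IPAddress.NetworkToHostOrder(reader.ReadInt32());
    string payload = Encoding.UTF8.GetString(reader.ReadBytes(length));
}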

C# equivalent to perl `pack("v",value)` while packing some values into `byte[]`

I am trying to replicate the behavior of a perl script in my C# code. When we convert any value into a Byte[], it should look the same irrespective of the language used.
So I have this function call, which looks like this in perl:
$diag_cmd = pack("V", length($s_part)) . $s_part;
where $s_part is the value returned by the following function, which reads the .pds file at the location C:\Users\c_desaik\Desktop\DIAG\PwrDB\offtarget\data\get_8084_gpio.pds:
sub read_pds
{
    my $bin_s;
    my $input_pds_file = $_[0];
    open(my $fh, '<', $input_pds_file) or die "cannot open file $input_pds_file";
    {
        local $/;
        $bin_s = <$fh>;
    }
    close($fh);
    return $bin_s;
}
My best guess is that this function is reading the .pds file and turning it into a byte array.
Now, I tried to replicate the behavior in C# code like the following:
static byte[] ConstructPacket()
{
    List<byte> retval = new List<byte>();
    retval.AddRange(System.IO.File.ReadAllBytes(@"C:\Users\c_desaik\Desktop\DIAG\PwrDB\offtarget\data\get_8084_gpio.pds"));
    return retval.ToArray();
}
But the resulting byte array does not look the same. Is there any special mechanism I have to follow to replicate the behavior of pack("V", length($s_part)) . $s_part?
As Simon Whitehead mentioned, the template character V tells pack to pack your values into unsigned long (32-bit) integers, in little-endian order. So you need to convert your bytes to a list (or array) of unsigned integers.
For example:
static uint[] UnpackUint32(string filename)
{
    var retval = new List<uint>();
    using (var filestream = System.IO.File.Open(filename, System.IO.FileMode.Open))
    {
        using (var binaryStream = new System.IO.BinaryReader(filestream))
        {
            var pos = 0;
            while (pos < binaryStream.BaseStream.Length)
            {
                retval.Add(binaryStream.ReadUInt32());
                pos += 4;
            }
        }
    }
    return retval.ToArray();
}
And call this function:
var list = UnpackUint32(@"C:\Users\c_desaik\Desktop\DIAG\PwrDB\offtarget\data\get_8084_gpio.pds");
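For the writing side, which is what pack("V", length($s_part)) . $s_part actually produces, a minimal sketch (PackLengthPrefixed is a hypothetical helper name; pass it the file path from the question):
static byte[] PackLengthPrefixed(string filename)
{
    byte[] payload = System.IO.File.ReadAllBytes(filename);
    var retval = new List<byte>();
    // "V" = 32-bit unsigned little-endian length, then the raw bytes;
    // BitConverter.GetBytes emits little-endian on little-endian machines
    // (check BitConverter.IsLittleEndian if portability matters)
    retval.AddRange(BitConverter.GetBytes((uint)payload.Length));
    retval.AddRange(payload);
    return retval.ToArray();
}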
Update
If you want to read one length-prefixed string or a list of them, you can use this function:
private string[] UnpackStrings(string filename)
{
    var retval = new List<string>();
    using (var filestream = System.IO.File.Open(filename, System.IO.FileMode.Open))
    {
        using (var binaryStream = new System.IO.BinaryReader(filestream))
        {
            var pos = 0;
            while ((pos + 4) <= binaryStream.BaseStream.Length)
            {
                // read the length of the string
                var len = binaryStream.ReadUInt32();
                // read the bytes of the string
                var byteArr = binaryStream.ReadBytes((int)len);
                // cast these bytes to chars and append them to a stringbuilder
                var sb = new StringBuilder();
                foreach (var b in byteArr)
                    sb.Append((char)b);
                // add the new string to our collection of strings
                retval.Add(sb.ToString());
                // calculate start position of next value
                pos += 4 + (int)len;
            }
        }
    }
    return retval.ToArray();
}
pack("V", length($s_part)) . $s_part
which can also be written as
pack("V/a*", $s_part)
creates a length-prefixed string. The length is stored as a 32-bit unsigned little-endian number.
+----------+----------+----------+----------+-------- ...
| Length | Length | Length | Length | Bytes
| ( 7.. 0) | (15.. 8) | (23..16) | (31..24) |
+----------+----------+----------+----------+-------- ...
This is how you recreate the original string from the bytes:
1. Read 4 bytes.
2. If using a machine other than a little-endian machine, rearrange the bytes into the native order.
3. Cast those bytes into a 32-bit unsigned integer.
4. Read a number of bytes equal to that number.
5. Convert that sequence of bytes into a string.
Some languages provide tools that perform more than one of these steps.
I don't know C#, so I can't write the code for you, but I can give you an example in two other languages.
In Perl, this would be written as follows:
sub read_bytes {
    my ($fh, $num_bytes_to_read) = @_;
    my $buf = '';
    while ($num_bytes_to_read) {
        my $num_bytes_read = read($fh, $buf, $num_bytes_to_read, length($buf));
        if (!$num_bytes_read) {
            die "$!\n" if !defined($num_bytes_read);
            die "Premature EOF\n";
        }
        $num_bytes_to_read -= $num_bytes_read;
    }
    return $buf;
}

sub read_uint32le { unpack('V', read_bytes($_[0], 4)) }
sub read_pstr { read_bytes($_[0], read_uint32le($_[0])) }

my $str = read_pstr($fh);
In C,
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int read_bytes(FILE* fh, void* buf, size_t num_bytes_to_read) {
    char* p = buf; /* advance through the buffer; arithmetic on void* is non-standard */
    while (num_bytes_to_read) {
        size_t num_bytes_read = fread(p, 1, num_bytes_to_read, fh);
        if (!num_bytes_read)
            return 0;
        num_bytes_to_read -= num_bytes_read;
        p += num_bytes_read;
    }
    return 1;
}

int read_uint32le(FILE* fh, uint32_t* p_i) {
    int ok = read_bytes(fh, p_i, sizeof(*p_i));
    if (!ok)
        return 0;
    { /* Rearrange bytes on non-LE machines */
        const unsigned char* p = (const unsigned char*)p_i;
        *p_i = ((((((uint32_t)p[3] << 8) | p[2]) << 8) | p[1]) << 8) | p[0];
    }
    return 1;
}

char* read_pstr(FILE* fh) {
    uint32_t len;
    char* buf = NULL;
    int ok;
    ok = read_uint32le(fh, &len);
    if (!ok)
        goto ERROR;
    buf = malloc(len + 1);
    if (!buf)
        goto ERROR;
    ok = read_bytes(fh, buf, len);
    if (!ok)
        goto ERROR;
    buf[len] = '\0';
    return buf;
ERROR:
    free(buf); /* free(NULL) is a no-op */
    return NULL;
}

char* str = read_pstr(fh);
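For completeness, a C# version of the same read_pstr logic is short, since BinaryReader reads multi-byte values as little-endian, which matches "V". A sketch, assuming the stream is positioned at the length prefix and the payload is raw bytes:
static string ReadPstr(System.IO.BinaryReader reader)
{
    int len = reader.ReadInt32(); // little-endian, like perl's "V"
    byte[] bytes = reader.ReadBytes(len);
    // the perl code returns raw bytes; Latin-1 maps each byte to one char
    return System.Text.Encoding.GetEncoding("ISO-8859-1").GetString(bytes);
}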

Getting upper and lower byte of an integer in C# and putting it as a char array to send to a com port, how?

In C I would do this
int number = 3510;
char upper = number >> 8;
char lower = number && 8;
SendByte(upper);
SendByte(lower);
Where upper and lower would both = 54
In C# I am doing this:
int number = Convert.ToInt16("3510");
byte upper = byte(number >> 8);
byte lower = byte(number & 8);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
comport.Write(data);
However, in the debugger number = 3510, upper = 13 and lower = 0.
This makes no sense; if I change the code to >> 6, upper = 54, which is absolutely strange.
Basically I just want to get the upper and lower byte from the 16-bit number and send them out the COM port after "GETDM".
How can I do this? It is so simple in C, but in C# I am completely stumped.
Your masking is incorrect - you should be masking against 255 (0xff) instead of 8. Shifting works in terms of "bits to shift by", whereas bitwise and/or work against a mask value... so if you want to keep only the bottom 8 bits, you need a mask which has just the bottom 8 bits set - i.e. 255.
Note that if you're trying to split a number into two bytes, it should really be a short or ushort to start with, not an int (which has four bytes).
ushort number = Convert.ToUInt16("3510");
byte upper = (byte) (number >> 8);
byte lower = (byte) (number & 0xff);
Note that I've used ushort here instead of byte as bitwise arithmetic is easier to think about when you don't need to worry about sign extension. It wouldn't actually matter in this case due to the way the narrowing conversion to byte works, but it's the kind of thing you should be thinking about.
You probably want to and it with 0x00FF
byte lower = Convert.ToByte(number & 0x00FF);
Full example:
ushort number = Convert.ToUInt16("3510");
byte upper = Convert.ToByte(number >> 8);
byte lower = Convert.ToByte(number & 0x00FF);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
Even though the accepted answer fits the question, I consider it incomplete, due to the simple fact that the question says int and not short in its header, and as we know Int32 in C# has 32 bits and thus 4 bytes, so the short-based answer is misleading in search results. I will post an example here that is useful in the case of an Int32. In the case of an Int32 we have:
LowWordLowByte
LowWordHighByte
HighWordLowByte
HighWordHighByte.
And as such, I have created the following method for converting an Int32 value into a little-endian hex string, in which every byte is separated from the others by a whitespace. This is useful when you transmit data and want the receiver to process it faster: it can just Split(" ") and get each byte represented as a standalone hex string.
public static String IntToLittleEndianWhitespacedHexString(int pValue, uint pSize)
{
    String result = String.Empty;
    pSize = pSize < 4 ? pSize : 4;
    byte tmpByte = 0x00;
    for (int i = 0; i < pSize; i++)
    {
        tmpByte = (byte)((pValue >> i * 8) & 0xFF);
        result += tmpByte.ToString("X2") + " ";
    }
    return result.TrimEnd(' ');
}
Usage:
String value1 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 4);
String value2 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 4);
String value3 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 2);
String value4 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 1);
The result is:
7C 92 00 00
FF FF 03 00
7C 92
FF
If it is hard to understand the method which I created, then the following might be a more comprehensible one:
public static String IntToLittleEndianWhitespacedHexString(int pValue)
{
    String result = String.Empty;
    byte lowWordLowByte = (byte)(pValue & 0xFF);
    byte lowWordHighByte = (byte)((pValue >> 8) & 0xFF);
    byte highWordLowByte = (byte)((pValue >> 16) & 0xFF);
    byte highWordHighByte = (byte)((pValue >> 24) & 0xFF);
    result = lowWordLowByte.ToString("X2") + " " +
             lowWordHighByte.ToString("X2") + " " +
             highWordLowByte.ToString("X2") + " " +
             highWordHighByte.ToString("X2");
    return result;
}
Remarks:
Of course, instead of uint pSize there could be an enum specifying Byte, Word, DoubleWord.
Instead of converting to a hex string and creating the little-endian string, you can convert to chars and do whatever else you want.
Hope this will help someone!
Shouldn't it be:
byte lower = (byte) ( number & 0xFF );
To be a little more creative
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Explicit)]
public struct IntToBytes
{
    [System.Runtime.InteropServices.FieldOffset(0)]
    public int Int32;
    [System.Runtime.InteropServices.FieldOffset(0)]
    public byte First;
    [System.Runtime.InteropServices.FieldOffset(1)]
    public byte Second;
    [System.Runtime.InteropServices.FieldOffset(2)]
    public byte Third;
    [System.Runtime.InteropServices.FieldOffset(3)]
    public byte Fourth;
}
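Usage is then just a matter of writing the int and reading the bytes back out; on a little-endian machine, First is the least significant byte:
var converter = new IntToBytes { Int32 = 3510 }; // 0x0DB6
byte lower = converter.First;  // 0xB6 = 182
byte upper = converter.Second; // 0x0D = 13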

How can I assign an integer to a 3-byte field?

I have retrieved the size of my struct using Marshal.SizeOf, like below:
int len = Marshal.SizeOf(packet);
Now len has a value of 40. I have to assign this 40 to a 3-byte field of my structure. My structure looks like below:
public struct TCP_CIFS_Packet
{
    public byte zerobyte;
    public byte[] lengthCIFSPacket;
    public CIFSPacket cifsPacket;
}
I tried assigning the values like the following:
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
tcpCIFSPacket.lengthCIFSPacket[0] = Convert.ToByte(0);
tcpCIFSPacket.lengthCIFSPacket[1] = Convert.ToByte(0);
tcpCIFSPacket.lengthCIFSPacket[2] = Convert.ToByte(40);
But this doesn't seem to be the right way. Is there any other way I can do this?
Edit (@ho1 and @Rune Grimstad):
After using BitConverter.GetBytes like the following:
tcpCIFSPacket.lengthCIFSPacket = BitConverter.GetBytes(lengthofPacket);
the size of lengthCIFSPacket changes to 4 bytes, but I have only 3 bytes of space for tcpCIFSPacket.lengthCIFSPacket in the packet structure.
int number = 500000;
byte[] bytes = new byte[3];
bytes[0] = (byte)((number & 0xFF) >> 0);
bytes[1] = (byte)((number & 0xFF00) >> 8);
bytes[2] = (byte)((number & 0xFF0000) >> 16);
or
byte[] bytes = BitConverter.GetBytes(number); // this will return 4 bytes of course
Edit: you can also do this:
byte[] bytes = BitConverter.GetBytes(number);
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
tcpCIFSPacket.lengthCIFSPacket[0] = bytes[0];
tcpCIFSPacket.lengthCIFSPacket[1] = bytes[1];
tcpCIFSPacket.lengthCIFSPacket[2] = bytes[2];
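One caveat: BitConverter.GetBytes returns the bytes in the machine's byte order (little-endian on most .NET platforms), whereas the original manual assignment put the 40 in the last byte, i.e. most significant byte first. If the protocol expects big-endian, shift the bytes out explicitly; a sketch using lengthofPacket from the question:
// big-endian 24-bit length (most significant byte first)
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
tcpCIFSPacket.lengthCIFSPacket[0] = (byte)((lengthofPacket >> 16) & 0xFF);
tcpCIFSPacket.lengthCIFSPacket[1] = (byte)((lengthofPacket >> 8) & 0xFF);
tcpCIFSPacket.lengthCIFSPacket[2] = (byte)(lengthofPacket & 0xFF);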
Look at BitConverter.GetBytes. It'll convert the int to an array of bytes. See here for more info.
You can use the BitConverter class to convert an Int32 to an array of bytes using the GetBytes method.
