I have a BINARY(16) column in table 'Chip' with the value 0xE1FC2E6F8674B7B9045C1104F9124C48, and in another table I have a column chip_i of type integer that holds the same value as an int: -116241336.
I'm using SQL Server 2012.
How can I convert 0xE1FC2E6F8674B7B9045C1104F9124C48 to -116241336 in C#?
I tried to convert it like this:
string hexString = "0xE1FC2E6F8674B7B9045C1104F9124C48";
byte[] hexByte = Encoding.ASCII.GetBytes(hexString);
var chip_i = BitConverter.ToInt32(hexByte, 0);
but the result is 826636336
-116241336 is simply the last 4 bytes of the binary value, 0xF9124C48, read as a signed 32-bit integer. So just parse the last 8 hex digits as-is; no ASCII involved. (For base-16 input, Convert.ToInt32 treats the string as a two's-complement bit pattern, which is why the result comes out negative.)
int chip_i = Convert.ToInt32(hexString.Substring(hexString.Length - 8, 8), 16);
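If you are reading the column as a byte[] rather than as a hex string, a minimal sketch of the same conversion (the hard-coded array below stands in for whatever your data access code returns):

using System;

byte[] chipBytes = {
    0xE1, 0xFC, 0x2E, 0x6F, 0x86, 0x74, 0xB7, 0xB9,
    0x04, 0x5C, 0x11, 0x04, 0xF9, 0x12, 0x4C, 0x48
};

// The int occupies the last 4 bytes, most significant byte first,
// so reverse them before BitConverter reads them on a little-endian machine.
byte[] last4 = new byte[4];
Array.Copy(chipBytes, chipBytes.Length - 4, last4, 0, 4);
if (BitConverter.IsLittleEndian)
    Array.Reverse(last4);

int chip_i = BitConverter.ToInt32(last4, 0);
Console.WriteLine(chip_i); // -116241336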
Related
Why, when I turn an INT value to bytes, then to ASCII, and back, do I get another value?
Example:
var asciiStr = new string(Encoding.ASCII.GetChars(BitConverter.GetBytes(2000)));
var intVal = BitConverter.ToInt32(Encoding.ASCII.GetBytes(asciiStr), 0);
Console.WriteLine(intVal);
// Result: 1855
ASCII is only 7-bit - code points above 127 are unsupported. Unsupported characters are converted to ? per the docs on Encoding.ASCII:
The ASCIIEncoding object that is returned by this property might not have the appropriate behavior for your app. It uses replacement fallback to replace each string that it cannot encode and each byte that it cannot decode with a question mark ("?") character.
So 2000 decimal = D0 07 00 00 hexadecimal (little endian) = [unsupported character] [BEL character] [NUL character] [NUL character] = ? [BEL character] [NUL character] [NUL character] = 3F 07 00 00 hexadecimal (little endian) = 1855 decimal.
TL;DR: Everything's fine. But you're a victim of character replacement.
We start with 2000. Let's acknowledge, first, that this number can be represented in hexadecimal as 0x000007d0.
BitConverter.GetBytes
BitConverter.GetBytes(2000) is an array of 4 bytes, because 2000 is a 32-bit integer literal. So the 32-bit integer representation, in little endian (least significant byte first), is given by the byte sequence { 0xd0, 0x07, 0x00, 0x00 }. In decimal, those same bytes are { 208, 7, 0, 0 }.
Encoding.ASCII.GetChars
Uh oh! Problem. Here's where things likely took an unexpected turn for you.
You're asking the system to interpret those bytes as ASCII-encoded data. The problem is that ASCII uses codes from 0-127. The byte with value 208 (0xd0) doesn't correspond to any character encodable by ASCII. So what actually happens?
When decoding ASCII, if it encounters a byte that is out of the range 0-127 then it decodes that byte to a replacement character and moves to the next byte. This replacement character is a question mark ?. So the 4 chars you get back from Encoding.ASCII.GetChars are ?, BEL (bell), NUL (null) and NUL (null).
BEL is the ASCII name of the character with code 7, which traditionally elicits a beep when presented on a capable terminal. NUL (code 0) is a null character traditionally used for representing the end of a string.
new string
Now you create a string from that array of chars. In C# a string is perfectly capable of representing a NUL character within its body, so your string will have two NUL chars in it. They can be represented in C# string literals with "\0", in case you want to try that yourself. A C# string literal representing the string you have would be "?\a\0\0". Did you know that the BEL character can be represented with the escape sequence \a? Many people don't.
Encoding.ASCII.GetBytes
Now you begin the reverse journey. Your string is comprised entirely of characters in the ASCII range. The encoding of a question mark is code 63 (0x3F), BEL is 7, and NUL is 0, so the bytes are { 0x3f, 0x07, 0x00, 0x00 }. Surprised? Well, you're encoding a question mark now, where before you provided a 208 (0xd0) byte that was not representable with ASCII encoding.
BitConverter.ToInt32
Converting these four bytes back to a 32-bit integer gives the integer 0x0000073f, which, in decimal, is 1855.
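If you want to watch the corruption happen, here is a small sketch (variable names are mine) that prints each intermediate step:

using System;
using System.Text;

byte[] raw = BitConverter.GetBytes(2000);
Console.WriteLine(BitConverter.ToString(raw));    // D0-07-00-00
string asciiStr = new string(Encoding.ASCII.GetChars(raw));
byte[] back = Encoding.ASCII.GetBytes(asciiStr);
Console.WriteLine(BitConverter.ToString(back));   // 3F-07-00-00, the 0xD0 became 0x3F ('?')
Console.WriteLine(BitConverter.ToInt32(back, 0)); // 1855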
String encoding (ASCII, UTF8, SHIFT_JIS, etc.) is designed to pigeonhole human language into a binary (byte) form. It isn't designed to store arbitrary binary data, such as the binary form of an integer.
While your binary data can be interpreted as a string, some of the information will be lost in the process, meaning that storing binary data in this way will fail in the general case. You can see the point where this fails using the following code:
for (int i = 0; i <= 255; ++i)
{
var byteData = new byte[] { (byte)i };
var stringData = System.Text.Encoding.ASCII.GetString(byteData);
var encodedAsBytes = System.Text.Encoding.ASCII.GetBytes(stringData);
Console.WriteLine("{0} vs {1}", i, (int)encodedAsBytes[0]);
}
As you can see, it starts off well because all of the character codes correspond to ASCII characters, but once we get up in the numbers (i.e. 128 and beyond), we start to require more than 7 bits to store the binary value. At this point it ceases to be decoded correctly, and we start seeing 63 come back instead of the input value.
Ultimately you will have this problem encoding binary data using any string encoding. You need to choose an encoding method specifically meant for storing binary data as a string.
Two popular methods are:
Hexadecimal
Base64 using ToBase64String and FromBase64String
Hexadecimal example (using ByteArrayToString and StringToByteArray helper methods; a sketch of both follows the example):
int initialValue = 2000;
Console.WriteLine(initialValue);
// Convert from int to bytes and then to hex
byte[] bytesValue = BitConverter.GetBytes(initialValue);
string stringValue = ByteArrayToString(bytesValue);
Console.WriteLine("As hex: {0}", stringValue); // outputs D0070000
// Convert form hex to bytes and then to int
byte[] decodedBytesValue = StringToByteArray(stringValue);
int intValue = BitConverter.ToInt32(decodedBytesValue, 0);
Console.WriteLine(intValue);
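ByteArrayToString and StringToByteArray are not part of the framework; a minimal sketch of both (one of many possible implementations) is:

public static string ByteArrayToString(byte[] ba)
{
    // BitConverter gives "D0-07-00-00"; strip the dashes to get "D0070000"
    return BitConverter.ToString(ba).Replace("-", "");
}

public static byte[] StringToByteArray(string hex)
{
    // Parse each pair of hex digits back into one byte
    byte[] bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    return bytes;
}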
Base64 example:
int initialValue = 2000;
Console.WriteLine(initialValue);
// Convert from int to bytes and then to base64
byte[] bytesValue = BitConverter.GetBytes(initialValue);
string stringValue = Convert.ToBase64String(bytesValue);
Console.WriteLine("As base64: {0}", stringValue); // outputs 0AcAAA==
// Convert form base64 to bytes and then to int
byte[] decodedBytesValue = Convert.FromBase64String(stringValue);
int intValue = BitConverter.ToInt32(decodedBytesValue, 0);
Console.WriteLine(intValue);
P.S. If you just wanted to convert your integer to a string (e.g. "2000"), you can simply use .ToString():
int initialValue = 2000;
string stringValue = initialValue.ToString();
I convert my hex dump to get special characters like the £ symbol, but when I try to convert "0x18" I get "\u0018" as the value. Can anyone give me a solution regarding this matter?
Here is my code:
public static string FromHexDump(string sText)
{
Int32 lIdx;
string prValue ="" ;
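// Mid is the 1-based substring helper from VB (Microsoft.VisualBasic.Strings.Mid), hence lIdx starting at 1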
for (lIdx = 1; lIdx < sText.Length; lIdx += 2)
{
string prString = "0x" + Mid(sText, lIdx, 2);
string prUniCode = Convert.ToChar(Convert.ToInt64(prString,16)).ToString();
prValue = prValue + prUniCode;
}
return prValue;
}
I used the VB language. I have a database that stores already-encrypted text for my password, and the value looks like BAA37D40186D, so I loop over it in steps of 2, which gives 0xBA,0xA3,0x7D,0x40,0x18,0x6D, and the VB result comes out like this: º£}@m
You can use this code:
var myHex = '\x0633';
string formattedString = string.Format(@"\x{0:x4}", (int)myHex);
Or you can use this code from MSDN (https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/types/how-to-convert-between-hexadecimal-strings-and-numeric-types):
string hexValues = "48 65 6C 6C 6F 20 57 6F 72 6C 64 21";
string[] hexValuesSplit = hexValues.Split(' ');
foreach (string hex in hexValuesSplit)
{
// Convert the number expressed in base-16 to an integer.
int value = Convert.ToInt32(hex, 16);
// Get the character corresponding to the integral value.
string stringValue = Char.ConvertFromUtf32(value);
char charValue = (char)value;
Console.WriteLine("hexadecimal value = {0}, int value = {1}, char value = {2} or {3}",
hex, value, stringValue, charValue);
}
The question is unclear - what is the database column's type? Does it contain 6 bytes, or 12 characters with the hex encoding of the bytes? In any case, this has nothing to do with special characters or encodings.
First, 0x18 is the byte value of the Cancel Character in the Latin 1 codepage, not the pound sign. That's 0xA3. It seems that the byte values in the question are just the Latin 1 bytes for the string in hex.
.NET strings are Unicode (UTF-16LE specifically). There's no UTF-8 string or Latin-1 string. Encodings and codepages apply when converting bytes to strings or vice versa. This is done using the Encoding class, e.g. Encoding.GetBytes.
In this case, this code will convert the bytes to the expected string form, including the unprintable character:
var dbBytes = new byte[] {0xBA, 0xA3, 0x7D, 0x40, 0x18, 0x6D};
var latinEncoding=Encoding.GetEncoding(1252);
var result=latinEncoding.GetString(dbBytes);
The result is:
º£}@m
With the Cancel character between @ and m.
If the database column contains the byte values as strings:
it takes double the required space and
the hex values have to be converted back to bytes before converting to strings
The x format specifier converts numbers or bytes to their hex form and vice versa. For each byte value, ToString("x2") returns a two-digit hex string (plain "x" would drop the leading zero for values below 0x10).
The hex string can be produced from the original buffer with:
var dbBytes = new byte[] {0xBA, 0xA3, 0x7D, 0x40, 0x18, 0x6D};
var hexString = String.Join("", dbBytes.Select(c => c.ToString("x2")));
There are many questions that show how to parse a hex string into a byte array. I'll just steal Jared Parson's LINQ answer:
public static byte[] StringToByteArray(string hex) {
return Enumerable.Range(0, hex.Length)
.Where(x => x % 2 == 0)
.Select(x => Convert.ToByte(hex.Substring(x, 2), 16))
.ToArray();
}
With that, we can parse the hex string into a byte array and convert it back to the original string:
var bytes=StringToByteArray(hexString);
var latinEncoding=Encoding.GetEncoding(1252);
var result=latinEncoding.GetString(bytes);
First of all, you don't need a hex dump but Unicode. I would recommend reading about Unicode, encodings, etc., and why this is a problem with strings.
I have a byte array containing a value like this:
byte[] data={0x04,0x00};
I need to convert it to a string and print it as str_data=0x400.
But when I convert this to a string, the data is printed as 40, where the last 0x00 is reduced to just 0.
I am new to C# and I am struggling to solve this. Please help.
Your question is a bit unclear, but I think what you want is the X2 format specifier for bytes, which will print your bytes as two hex digits, e.g.:
byte b = 0x40;
Console.WriteLine( b.ToString( "X2" ) ); // Prints '40'
Convert each of your bytes into a string (with e.g. LINQ's Select method), then join them and add the "0x" prefix.
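A minimal sketch of that approach, using the data array from the question:

using System;
using System.Linq;

byte[] data = { 0x04, 0x00 };
string str_data = "0x" + string.Concat(data.Select(b => b.ToString("X2")));
Console.WriteLine(str_data); // 0x0400

Note this prints 0x0400 rather than 0x400; keeping both hex digits of every byte means the original array can always be reconstructed from the string.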
I have a database table with all columns set to allow nulls for testing purposes. Among my columns I have int, varchar, and bit datatypes. When I try to submit the form I get the following error message:
Value was either too large or too small for an Int32.
Here is the code:
using (storeDataContext db = new storeDataContext())
{
db.Dealerssses.InsertOnSubmit(new Dealersss
{
AppFName = txtFName.Text,
AppLName = txtLName.Text,
AppTitle = ddlTitles.SelectedItem.Text,
AppPhone = Convert.ToInt32(txtPhone.Text),
AppEmail = txtEmail.Text,
AppAddress = txtAddress.Text,
AppCity = txtCity.Text,
AppState = txtState.Text,
AppZip = Convert.ToInt32(txtZip.Text),
BusName = txtBusName.Text,
BusCA = Convert.ToInt32(txtBusResale.Text),
BusContact = txtBusContact.Text,
BusDBA = txtDBA.Text,
BusEIN = Convert.ToInt32(txtBusEIN.Text),
BusEmail = txtBusEmail.Text,
BusFax = Convert.ToInt32(txtBusFax.Text),
BusMonth = ddlMonthStart.SelectedItem.Text,
BusNumEmployees = Convert.ToInt32(txtBusEmployees.Text),
BusPhone = Convert.ToInt32(txtBusPhone.Text),
BusYear = int.Parse(txtYearStart.Text),
Active = false
});
db.SubmitChanges();
};
Int32.MaxValue is 2,147,483,647, which is only 10 digits long.
Your values are too large for an Int32.
Your Phone, Fax, ZIP, and EIN fields should be strings (NVARCHARs), not numbers.
I'm betting it's the value for your phone number fields. Int32 is a 4 byte integer with a max value of 2147483647. Most phone numbers will overflow that.
My guess is it is your phone #'s. An Int32 has a range of:
-2,147,483,648 to 2,147,483,647
So if you have a phone # of 5171111111 (5,171,111,111), it is too large.
You should use varchar/char for phone numbers.
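A quick sketch of why a typical 10-digit phone number overflows an Int32:

using System;

string phone = "5171111111"; // a 10-digit phone number
Console.WriteLine(int.MaxValue);      // 2147483647, also 10 digits but starting with 2
Console.WriteLine(long.Parse(phone)); // 5171111111 fits in a long
// int.Parse(phone) or Convert.ToInt32(phone) would throw OverflowException here.
// Stored in a varchar column as a string, no conversion is needed at all.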
Hey, I was working on an application which converts any base number (2, 8, 10, 16, etc.) to the user's desired base system. I am having a problem converting a binary number to its octal equivalent. Can anyone help me out?
I tried everything, like:
// I am taking a binary number in value and then converting it to base 8
Int32 value = int.Parse(convertnumber);
Console.WriteLine(Convert.ToString(value, 8));
For example:
value = 10011
The answer should be "23", but using the above code I am getting "23433".
"23433" is is the correct answer, when converting "10011" in base 10 to base 8.
You may have meant to interpret "10011" as a binary number. In which case, you want:
int value = Convert.ToInt32(convertnumber, 2);
Edit: in response to comments, here's almost-complete code:
string val = "10011";
int convertnumber = Convert.ToInt32(val, 2);
Console.WriteLine(Convert.ToString(convertnumber, 8)); // prints "23"
string binary = "10011";
int integer = Convert.ToInt32(binary, 2);
Console.WriteLine(Convert.ToString(integer, 8));
Output: 23
In this example we convert the binary string representation to an integer and from an integer to the octal string representation.
int value = Convert.ToInt32(convertnumber, 2);
Console.WriteLine(Convert.ToString(value, 8));
You are taking the base-10 number 10011 and converting it to base 8, which is 23433.
If you want to do this manually (so you understand what is going on) here is a suggestion:
First pad the binary string so its length is divisible by 3 (3 bits = 1 octal digit):
string binary = "10011";
int pad = binary.Length % 3;
if (pad > 0) // pad only when the length is not already a multiple of 3
    binary = new string('0', 3 - pad) + binary;
Then process each group of three bits into one octal digit:
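// Note: Skip, Take and Aggregate below require: using System.Linq;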
int n = binary.Length / 3;
char[] bin_digits = binary.ToCharArray();
char[] oct_digits = new char[n];
for (int i = 0; i < n; i++)
{
int digit = bin_digits.Skip(3 * i).Take(3).Aggregate(0,
(x, v) => (int)v - (int)'0' + 2 * x);
// x is the value accumulation
// v is a char '0' or '1' representing a bit and is converted to int 0, 1
oct_digits[i] = (char)(digit + (int)'0');
// convert int to char digit
}
Convert the digits array into a string
string oct_value = new string(oct_digits);
Example results:
"10011" -> "23"
"11000" -> "30"
"1011011" -> "133"
Naturally, int.Parse parses a decimal number. If your input is binary, then you'll need to first do a conversion from binary to integer.
Int32 value = Convert.ToInt32( "10011", 2 );
Console.WriteLine(Convert.ToString(value, 8));
That's because int.Parse is converting 10011 to, well, 10011 in decimal. It is not converting it from 10011 binary to 23 octal (19 decimal) as you want it to.
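A minimal sketch contrasting the two interpretations:

using System;

int wrong = int.Parse("10011");                // "10011" read as decimal: 10011
Console.WriteLine(Convert.ToString(wrong, 8)); // 23433

int right = Convert.ToInt32("10011", 2);       // "10011" read as binary: 19
Console.WriteLine(Convert.ToString(right, 8)); // 23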