I am working on a problem in C# and am having issues converting my string of multiple hex values to a byte[].
string word = "\xCD\x01\xEF\xD7\x30";
(\x starts each new value, so I have: CD 01 EF D7 30)
This is my first time asking a question here, so please let me know if you need anything extra from me.
More information on the project:
I need to be able to change both
"apple" and "\xCD\x01\xEF\xD7\x30" to a byte array.
For the normal string "apple" I use
byte[] data = Encoding.ASCII.GetBytes(word);
This does not seem to work with "\xCD\x01\xEF\xD7\x30"; instead I am getting the values
63, 1, 63, 63, 48
Ok... You were trying to directly "downcast"/"upcast" char <-> byte (where a C# char is 16 bits long and a byte is 8 bits long).
There are various ways to do it. The simplest (though probably not the most performant) is to use the iso-8859-1 encoding, which maps the byte values 0-255 to the Unicode code points 0-255 (and back).
Encoding enc = Encoding.GetEncoding("iso-8859-1");
string str = "apple";
byte[] bytes = enc.GetBytes(str);   // each char becomes exactly one byte
string str2 = enc.GetString(bytes); // and decodes back to the same string
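Applied to the string from the question (reusing enc from above), this gives back exactly the byte values spelled out in the \x escapes:
string word = "\xCD\x01\xEF\xD7\x30";
byte[] data = enc.GetBytes(word); // { 0xCD, 0x01, 0xEF, 0xD7, 0x30 }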
You can even do a little LINQ:
string str = "apple";
// This is "bad" if the string contains codepoints > 255
byte[] bytes = str.Select(x => (byte)x).ToArray();
// This is always safe, because by definition any value of a byte
// is a legal unicode character
string str2 = string.Concat(bytes.Select(x => (char)x));
Related
I need to convert a string into binary and back.
But when the input is a Unicode character, the resulting binary string is longer than 8 characters, and converting it back using Convert.ToByte causes an OverflowException.
I used the Convert.ToString method to convert the string into binary.
String input = "⌓"; // Unicode character U+2313
String binary = Convert.ToString(input[0], 2); // convert the char to binary
//->> Returns "10001100010011"
byte re = Convert.ToByte("10001100010011", 2); // this causes an OverflowException
//What should I do here? Is there a better reverse of Convert.ToString than Convert.ToByte?
Is there an alternative or something similar?
I think the problem here is that a byte is only 8 bits, and you are putting in a string with 14 bits in it. Try using Convert.ToUInt16() and declare "re" as UInt16, like so:
UInt16 re = Convert.ToUInt16("10001100010011", 2);
// UInt16 is large enough to hold the data
Console.WriteLine(re); // prints 8979
//unicode decimal 8979: http://www.codetable.net/decimal/8979
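To go the other way (the "and back" part of the question), the value can be cast back to a char - a minimal sketch, assuming the code point fits in a single UTF-16 char:
UInt16 re = Convert.ToUInt16("10001100010011", 2);
string original = ((char)re).ToString(); // back to "⌓"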
I have a very specific requirement. I have some data in which strings and spaces are to be converted to EBCDIC, while numbers are converted to hexadecimal.
For Example, my string is "Test123"
Test => EBCDIC
123 => Hexadecimal.
What I am trying to do is check every character in the string to see whether it is a number or not, and then do my conversion based on that.
byte[] dataBuffer = new byte[length];
int i = 0;
if (toEBCDIC)
{
    foreach (char c in data)
    {
        byte[] temp = new byte[1];
        if (Char.IsNumber(c))
        {
            string hexValue = Convert.ToInt32(c).ToString("X");
            temp = Encoding.ASCII.GetBytes(hexValue);
            dataBuffer[i] = temp[0];
        }
        else
        {
            temp = Encoding.GetEncoding("IBM01140").GetBytes(c.ToString());
            dataBuffer[i] = temp[0];
        }
        i++;
    }
    dataBuffer.CopyTo(array, byteIndex);
}
The problem comes when I try to convert the number. I need to keep my output in a byte array, as I have to write the output to a memory stream and then to a file.
When I get the hex value of the number and then try to convert it to a byte, an unwanted conversion happens.
For "1", hexValue = 31.
Now I want to keep this 31 unchanged in the bytes. That is, when I write it to the byte array, it should remain 31. But when I call GetBytes, it builds the byte array by converting the 3 and the 1 to separate bytes.
Can anyone please help me with this?
The problem is here:
ToString("X")
Now it's a hexadecimal string. So in your example, from this point onward, the 3 and the 1 have become separated.
How to fix this: don't convert.
if (Char.IsNumber(c))
{
    dataBuffer[i] = (byte)c;
}
Not tested. I think that's what you want. At least, that's what you describe in the last paragraph. That wouldn't make the numbers hexadecimal though - it would make them ASCII, and it's a bit odd to be mixing that with EBCDIC.
You convert the char to its code and then convert that code to a string. You don't have to do the second step; instead, use the code directly:
if (Char.IsNumber(c))
{
    byte hexValue = Convert.ToByte(c);
    dataBuffer[i] = hexValue;
}
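Putting the two branches together, a minimal sketch of the corrected loop (the names data, length and toEBCDIC are taken from the question; untested against a real EBCDIC consumer):
byte[] dataBuffer = new byte[length];
if (toEBCDIC)
{
    Encoding ebcdic = Encoding.GetEncoding("IBM01140");
    int i = 0;
    foreach (char c in data)
    {
        if (Char.IsNumber(c))
            dataBuffer[i] = Convert.ToByte(c);                // '1' -> 0x31
        else
            dataBuffer[i] = ebcdic.GetBytes(c.ToString())[0]; // EBCDIC byte
        i++;
    }
}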
I am trying to port some JavaScript to C# and I'm having a bit of trouble. The JavaScript I am porting calls this:
var binary = out.map(function (c) {
return String.fromCharCode(c);
}).join("");
return btoa(binary);
out is an array of numbers. I understand that it is taking the numbers and using fromCharCode to add characters to a string. At first I wasn't sure if my C# equivalent of btoa was working correctly, but the only characters I'm having issues with are the first 6 or 8: my encoded string comes out the same except for the first few characters.
At first in C# I was doing this
String binary = "";
foreach (int val in output) {
    binary += ((char)val);
}
And then I tried
foreach (int val in output) {
    System.Text.ASCIIEncoding convertor = new System.Text.ASCIIEncoding();
    char o = convertor.GetChars(new byte[] { (byte)val })[0];
    binary += o;
}
Both work fine on the later characters of the string, but not at the start. I've researched, but I don't know what I'm missing.
My array of numbers is as follows: { 10, 135, 3, 10, 182, ....}
I know the 10s are newline characters, the 3 is end of text, and the 182 is ¶, but what's confusing me is that the 135 should be the double dagger ‡. The JavaScript does not show it when I print the string.
So what ends up happening is that when the string is converted to Base64, mine looks like Cj8DCj8CRFF... while the JavaScript string looks like CocDCrYCRFF... The rest of the strings are the same, and the int arrays used are identical.
Any ideas?
It's important to understand that binary data does not always represent valid text in a given encoding, and that some encodings have variable numbers of bytes to represent different characters. In short: binary data and text are not the same at all, and you can only convert between the two in some cases and by following clear, accurate rules. Treating them incorrectly will cause pain.
That said, if you have a list of ints that are always within the range 0-255 and that should become a base64 string, here is a way to do it:
var output = new[] { 0, 1, 2, 68, 69, 70, 254, 255 };
var binary = new List<byte>();
foreach (int val in output) {
    binary.Add((byte)val);
}
var result = Convert.ToBase64String(binary.ToArray());
If you have text that should be encoded as a base64 string... generally I'd recommend UTF-8 encoding, unless you need it to match the JS implementation.
var str = "Hello, world!";
var result = Convert.ToBase64String(Encoding.UTF8.GetBytes(str));
The encoding that JS uses appears to be the same as casting between byte and char (chars > 255 are invalid), which isn't one of the standard Encodings available.
Here's how you might combine raw numbers and strings, then convert that to base64.
checked // ensures that values outside of byte's range do not fail silently
{
    var output = new int[] { 10, 135, 3, 10, 182 };
    var binary = output.Select(x => (byte)x)
                       .Concat("Hello, world".Select(c => (byte)c))
                       .ToArray();
    var result = Convert.ToBase64String(binary);
}
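As a quick sanity check on the sketch above: decoding the result with Convert.FromBase64String returns the original byte values unchanged, and the first four base64 characters for { 10, 135, 3, 10, 182 } are "CocD", matching the prefix of the JavaScript output in the question.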
I have a uint value that I need to represent as a byte array and then convert to a string.
When I convert the string back to a byte array, I get different values.
I'm using the standard ASCII converter, so I don't understand why the values differ.
To be more clear, this is what I'm doing:
byte[] bArray = BitConverter.GetBytes((uint)49694);
string test = System.Text.Encoding.ASCII.GetString(bArray);
byte[] result = Encoding.ASCII.GetBytes(test);
The resulting byte array is different from the first one:
bArray ->
[0x00000000]: 0x1e
[0x00000001]: 0xc2
[0x00000002]: 0x00
[0x00000003]: 0x00
result ->
[0x00000000]: 0x1e
[0x00000001]: 0x3f
[0x00000002]: 0x00
[0x00000003]: 0x00
Notice that byte 1 differs between the two arrays.
string test = System.Text.Encoding.ASCII.GetString(bArray);
byte[] result = Encoding.ASCII.GetBytes(test);
Because raw data is not ASCII. Encoding.GetString is only meaningful if the data you are decoding is text data in that encoding. Anything else: you corrupt it. If you want to store a byte[] as a string, then base-n is necessary - typically base-64 because a: it is conveniently available (Convert.{To|From}Base64String), and b: you can fit it into ASCII, so you rarely hit code-page / encoding issues. For example:
byte[] bArray = BitConverter.GetBytes((uint)49694);
string test = Convert.ToBase64String(bArray); // "HsIAAA=="
byte[] result = Convert.FromBase64String(test);
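The round trip is now lossless: result contains the original { 0x1E, 0xC2, 0x00, 0x00 }, including the 0xC2 that ASCII could not represent.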
Because 0xC2 is not a valid ASCII char, it is replaced with '?' (0x3F).
Converting an arbitrary byte array to a string using SomeEncoding.GetString() is not safe, as #activwerx suggested in the comments. Instead use Convert.ToBase64String / Convert.FromBase64String.
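A minimal illustration of that replacement, using a made-up two-byte input:
byte[] raw = { 0x1E, 0xC2 };
string s = Encoding.ASCII.GetString(raw); // 0xC2 decodes as '?'
byte[] back = Encoding.ASCII.GetBytes(s); // { 0x1E, 0x3F }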
I am retrieving ASCII strings encoded with code page 437 from another system which I need to transform to Unicode so they can be mixed with other Unicode strings.
This is what I am working with:
var asciiString = "\u0094"; // 0x94 represents 'ö' in code page 437.
var asciiEncoding = Encoding.GetEncoding(437);
var unicodeEncoding = Encoding.Unicode;
// This is what I attempted, but it does not seem to support the eighth bit. Characters using the eighth bit are replaced with '?' (0x3F).
var asciiBytes = asciiEncoding.GetBytes(asciiString);
// This work-around does the job, but there must be built in functionality to do this?
//var asciiBytes = asciiString.Select(c => (byte)c).ToArray();
// This piece of code happily converts the character correctly to Unicode: { 0x94 } => { 0xF6, 0x00 }.
var unicodeBytes = Encoding.Convert(asciiEncoding, unicodeEncoding, asciiBytes);
var unicodeString = unicodeEncoding.GetString(unicodeBytes); // I want this to be 'ö'.
What I am struggling with is that I cannot find a suitable method in the .NET Framework to transform a string with character codes above 127 to a byte array. This seems strange, since there is support for transforming a byte array with characters above 127 into Unicode strings.
So my question is, is there any built in method to do this conversion properly or is my work-around the proper way to do it?
var asciiString = "\u0094";
Whatever you name it, this will always be a Unicode string. .NET only has Unicode strings.
I am retrieving ASCII strings encoded with code page 437 from another system
Treat the incoming data as byte[], not as string.
var asciiBytes = new byte[] { 0x94 }; // 0x94 represents 'ö' in code page 437.
var asciiEncoding = Encoding.GetEncoding(437);
var unicodeString = asciiEncoding.GetString(asciiBytes);
\u0094 is Unicode code-point 0094, which is a control character; it is not ö. If you wanted ö, the correct string is
string s = "ö";
which is LATIN SMALL LETTER O WITH DIAERESIS, aka code-point 00F6.
So:
var s = "\u00F6"; // Identical to "ö"
Now we get our encoding:
var enc = Encoding.GetEncoding(437);
var bytes = enc.GetBytes(s);
And we find that it is a single byte: decimal 148, which is hex 94 - i.e. what you were after.
The significance here is that in C# when you use the "\uXXXX" syntax, the XXXX is always referring to Unicode code-points, not the encoded value in some particular encoding.
You have to look earlier in the code. Once you have the data as a string, it has already been decoded. Any characters lost in that decoding are impossible to get back.
You need the input as bytes, so that you can use your encoding object for code page 437 to decode it into a string.
byte[] asciiData = new byte[] { 0x94 }; // character ö in codepage 437
Encoding asciiEncoding = Encoding.GetEncoding(437);
string unicodeString = asciiEncoding.GetString(asciiData);
Console.WriteLine(unicodeString);
Output:
ö