I am trying to port some JavaScript to C# and I'm having a bit of trouble. The JavaScript I am porting does this:
var binary = out.map(function (c) {
return String.fromCharCode(c);
}).join("");
return btoa(binary);
out is an array of numbers. I understand that it takes the numbers and uses fromCharCode to build up a string. At first I wasn't sure whether my C# equivalent of btoa was working correctly, but the only characters I'm having issues with are the first 6 or 8; my encoded string comes out the same except for the first few characters.
At first in C# I was doing this
String binary = "";
foreach(int val in output){
binary += ((char)val);
}
And then I tried
foreach(int val in output){
System.Text.ASCIIEncoding convertor = new System.Text.ASCIIEncoding();
char o = convertor.GetChars(new byte[] { (byte)val })[0];
binary += o;
}
Both work fine on the later characters of the String but not the start. I've researched but I don't know what I'm missing.
My array of numbers is as follows: { 10, 135, 3, 10, 182, ....}
I know the 10s are newline characters, the 3 is end of text, and the 182 is ¶, but what's confusing me is that the 135 should be the double dagger ‡. The JavaScript does not show it when I print the string.
So what ends up happening is that when the string is converted to Base64, my string looks like Cj8DCj8CRFF.... while the JavaScript string looks like CocDCrYCRFF.... The rest of the strings are the same and the int arrays used are identical.
Any ideas?
It's important to understand that binary data does not always represent valid text in a given encoding, and that some encodings use a variable number of bytes to represent different characters. In short: binary data and text are not the same at all, and you can only convert between the two in some cases and by following clear, accurate rules. Treating one as the other will cause pain.
That said, if you have a list of ints that are always within the range 0-255 and that should become a Base64 string, here is a way to do it:
var output = new[] { 0, 1, 2, 68, 69, 70, 254, 255 };
var binary = new List<byte>();
foreach(int val in output){
binary.Add((byte)val);
}
var result = Convert.ToBase64String(binary.ToArray());
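As a quick sanity check against the question's data (the first five values from that array rather than the sample above), this approach reproduces the JavaScript output:
// Base64 of the bytes 0A 87 03 0A B6
var check = Convert.ToBase64String(new byte[] { 10, 135, 3, 10, 182 });
Console.WriteLine(check); // prints "CocDCrY=", matching the "CocDCrY..." prefix of the JS result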
If you have text that should be encoded as a Base64 string, I'd generally recommend UTF-8 encoding, unless you need it to match the JavaScript implementation:
var str = "Hello, world!";
var result = Convert.ToBase64String(Encoding.UTF8.GetBytes(str));
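For this input, result comes out as "SGVsbG8sIHdvcmxkIQ==".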
The encoding the JS code effectively uses is the same as casting between byte and char (char codes above 255 are invalid for btoa); for the 0-255 range that matches ISO-8859-1 (Latin-1), which is available via Encoding.GetEncoding("iso-8859-1").
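A small sketch of that equivalence, assuming every character in the string is in the 0-255 range (needs using System.Linq and using System.Text):
var latin1 = Encoding.GetEncoding("iso-8859-1");
string s = "\u000A\u0087\u0003\u000A\u00B6"; // the question's first five values as chars
byte[] viaCast = s.Select(c => (byte)c).ToArray(); // { 10, 135, 3, 10, 182 }
byte[] viaLatin1 = latin1.GetBytes(s); // identical contents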
Here's how you might combine raw numbers and strings, then convert that to base64.
var output = new int[] { 10, 135, 3, 10, 182 };
// checked() makes each narrowing conversion throw instead of silently wrapping
// when a value falls outside byte's range
var binary = output.Select(x => checked((byte)x))
                   .Concat("Hello, world".Select(c => checked((byte)c)))
                   .ToArray();
var result = Convert.ToBase64String(binary);
I am working on a problem in C# and I am having issues with converting my string of multiple hex values to a byte[].
string word = "\xCD\x01\xEF\xD7\x30";
(\x starts each new value, so I have: CD 01 EF D7 30)
This is my first time asking a question here, so please let me know if you need anything extra from me.
More information on the project:
I need to be able to change both
"apple" and "\xCD\x01\xEF\xD7\x30" to a byte array.
For the normal string "apple" I use
byte[] data = Encoding.ASCII.GetBytes(word);
This does not seem to be working with "\xCD\x01\xEF\xD7\x30"; I am getting the values
63, 1, 63, 63, 48
OK... You were trying to directly "downcast"/"upcast" between char and byte (where char is the C# char, which is 16 bits long, and byte is 8 bits long).
There are various ways to do it. The simplest (though probably not the most performant) is to use the iso-8859-1 encoding, which "maps" the byte values 0-255 to the Unicode code points 0-255 (and back).
Encoding enc = Encoding.GetEncoding("iso-8859-1");
string str = "apple";
byte[] bytes = enc.GetBytes(str);
string str2 = enc.GetString(bytes);
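Applied to the string from the question, this gives exactly the bytes you listed:
// "\xCD\x01\xEF\xD7\x30" contains the chars U+00CD U+0001 U+00EF U+00D7 U+0030,
// and iso-8859-1 maps each of them straight back to a single byte
byte[] data = enc.GetBytes("\xCD\x01\xEF\xD7\x30"); // { 0xCD, 0x01, 0xEF, 0xD7, 0x30 }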
You can even do a little LINQ:
string str = "apple";
// This is "bad" if the string contains codepoints > 255
byte[] bytes = str.Select(x => (byte)x).ToArray();
// This is always safe, because by definition any value of a byte
// is a legal unicode character
string str2 = string.Concat(bytes.Select(x => (char)x));
I have the following problem: if the string contains a char that ASCII doesn't know, it becomes a 63 ('?').
Because of that I changed the encoding to UTF-8, but a char can then be two bytes long, so I get an out-of-range error.
How can I solve the problem?
System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
byte[] baInput = enc.GetBytes(strInput);
// Split byte array (6 Byte) in date (days) and time (ms) parts
byte[] baMsec = new byte[4];
byte[] baDays = new byte[2];
for (int i = 0; i < baInput.Length; i++)
{
if (4 > i)
{
baMsec[i] = baInput[i];
}
else
{
baDays[i - 4] = baInput[i];
}
}
The problem you seem to be having is that you know the number of characters, but not the number of bytes, when using UTF8. To solve just that problem, you could use:
byte[] baMsec = Encoding.UTF8.GetBytes(strInput.Substring(0, 4));
byte[] baDays = Encoding.UTF8.GetBytes(strInput.Substring(4));
Recommended Solution:
1) Split strInput using the Substring(Int32, Int32) method and get the date and time parts into separate String variables, say strDate and strTime.
2) Then call UTF8Encoding.GetBytes on strDate and strTime and collect the byte arrays in baDays and baMsec respectively.
Why this works:
A C# String is UTF-16 encoded internally, which can represent non-ASCII characters just fine. Hence, no data is lost.
General Caution:
Never try to manipulate encoded strings directly at the byte level; you'll get lost. Use the String and Encoding class methods of C# to get the bytes if you want bytes.
Alternate approach:
I'm wondering (like others) why your date-time data contains non-numeric characters. I saw in a comment that you get your data from reader["TIMESTAMP2"].ToString(); and that the sample content is §║ ê or l¦h. Check whether you are mistakenly interpreting numeric data stored in reader["TIMESTAMP2"] as a String when you should actually treat it as a numeric type. Otherwise, even with this method, you'll soon be getting unexpected output.
The problem is that your baInput can contain more values than baMsec and baDays together can hold. After six iterations, you run out of array space; hence the exception.
When you hit the seventh iteration, i - 4 yields 6 - 4 = 2.
Since baDays only has two items, you can only set values at indices 0 and 1, so index 2 throws.
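If it helps, here is a minimal guard (reusing the question's enc and strInput) that fails fast instead of running past the arrays:
// The format is 6 bytes total: 4 for milliseconds, 2 for days.
// Anything else means strInput was not produced or decoded the way you expect.
byte[] baInput = enc.GetBytes(strInput);
if (baInput.Length != 6)
    throw new InvalidOperationException("Expected 6 bytes, got " + baInput.Length);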
Trying to convert a huge hex string to a binary string, but an OverflowException keeps getting thrown. This is my code to convert an image file to a hex string (which, when used with a FlowDocument, works perfectly!):
string h = new System.Runtime.Remoting.Metadata.W3cXsd2001.SoapHexBinary(System.IO.File.ReadAllBytes(Path)).ToString();
Now, however, I want to take this hex string and convert it to a binary string so that it may also be displayed in a FlowDocument. First, I tried writing it to a temp text file and then attempting to read it into a byte array:
string TempPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "Text.txt");
using (System.IO.StreamWriter sw = new System.IO.StreamWriter(TempPath))
{
sw.WriteLine(Convert.ToString(Convert.ToInt64(h, 16), 2).PadLeft(12, '0'));
}
byte[] c = System.IO.File.ReadAllBytes(TempPath);
When that didn't work, I tried reading it into a string:
string c = System.IO.File.ReadAllText(TempPath);
Neither worked; both still throw an OverflowException. I have also tried just doing this, skipping writing to a file altogether:
string s = Convert.ToString(Convert.ToInt64(h, 16), 2).PadLeft(12, '0');
And no matter what approach I take, I still get an exception thrown. How are large strings like this normally handled?
Update
I've modified my algorithm to convert one character at a time, so now it looks like this:
string NewBinary = "";
try
{
int i = 0;
foreach (char c in h)
{
if (i == 100) break;
NewBinary = string.Concat(NewBinary, Convert.ToString(Convert.ToInt64(c.ToString(), 16), 2).PadLeft(12, '0'));
i++;
}
}
The problem with this is that the string is always going to be super long and the code above takes a LONG time to generate the binary string. I limited the length to 100 to test conversion, so the conversion itself is not an issue.
An Int64 can be represented by at most 16 hex characters, which is why attempting to convert a "huge string" causes an OverflowException: the value is more than an Int64 can hold. You will need to break the string up into groups of at most 16 chars, convert each group to binary, and concatenate the results.
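A minimal sketch of that chunked approach (the helper name is mine, and it assumes the input contains only hex digits):
static string HexToBinaryChunked(string hex)
{
    var sb = new System.Text.StringBuilder();
    for (int pos = 0; pos < hex.Length; pos += 16)
    {
        // Take at most 16 hex digits so each piece fits in 64 bits
        string chunk = hex.Substring(pos, Math.Min(16, hex.Length - pos));
        // ToUInt64 covers the full 16-digit range; the unchecked cast keeps the bit pattern
        long value = unchecked((long)Convert.ToUInt64(chunk, 16));
        // Each hex digit is exactly 4 bits, so pad each piece back to full width
        sb.Append(Convert.ToString(value, 2).PadLeft(chunk.Length * 4, '0'));
    }
    return sb.ToString();
}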
You could convert a nibble at a time using a lookup array, for example:
// Requires "using System.Linq;" and "using System.Text;"
public static string HexStringToBinaryString(string hexString)
{
var result = new StringBuilder();
string[] lookup =
{
"0000", "0001", "0010", "0011",
"0100", "0101", "0110", "0111",
"1000", "1001", "1010", "1011",
"1100", "1101", "1110", "1111"
};
foreach (char nibble in hexString.Select(char.ToUpper))
    // Map '0'-'9' and 'A'-'F' to their 4-bit patterns via the lookup table
    result.Append((nibble > '9') ? lookup[10 + nibble - 'A'] : lookup[nibble - '0']);
return result.ToString();
}
This converts each hex character of the string into its corresponding 4-bit binary pattern (e.g. A becomes 1010).
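For example:
Console.WriteLine(HexStringToBinaryString("A3")); // prints 10100011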
I have a very specific requirement. I have some data in which strings and spaces are to be converted to EBCDIC, while numbers are to be converted to hexadecimal.
For Example, my string is "Test123"
Test => EBCDIC
123 => Hexadecimal.
What I am trying to do is check whether each character in the string is a number or not, and then do my conversion based on that.
byte[] dataBuffer = new byte[length];
int i = 0;
if (toEBCDIC)
{
foreach (char c in data)
{
byte[] temp = new byte[1];
if (Char.IsNumber(c))
{
string hexValue = Convert.ToInt32(c).ToString("X");
temp = Encoding.ASCII.GetBytes(hexValue);
dataBuffer[i] = temp[0];
}
else
{
temp = Encoding.GetEncoding("IBM01140").GetBytes(c.ToString());
dataBuffer[i] = temp[0];
}
i++;
}
dataBuffer.CopyTo(array, byteIndex);
The problem comes when I try to convert the number. I need to keep my output in a byte array, as I have to write the output to a memory stream and then to a file.
When I get the hex value of the number and then try to convert it to bytes, an actual conversion happens.
For "1", hexValue = 31.
Now I want to keep this 31 unchanged in bytes. I mean that when I write it to the byte array, it should remain 31. But when I do GetBytes, it builds the byte array by converting the '3' and the '1' to bytes separately.
Can anyone please help me with this?
The problem is here:
ToString("X")
Now it's a hexadecimal string. So in your example, from this point onward, the 3 and the 1 have become separated.
How to fix this: don't convert.
if (Char.IsNumber(c))
{
dataBuffer[i] = (byte)c;
}
Not tested. I think that's what you want. At least, that's what you describe in the last paragraph. That wouldn't make the numbers hexadecimal though - it would make them ASCII, and it's a bit odd to be mixing that with EBCDIC.
You convert the char to its code and then convert that code to a string. You don't have to do the second step; instead, use the code directly:
if (Char.IsNumber(c))
{
byte hexValue = Convert.ToByte(c);
dataBuffer[i] = hexValue;
}
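Either way, for the character '1' the buffer ends up holding 49 (0x31), which is exactly the value the question wants to preserve:
char c = '1';
byte viaCast = (byte)c; // 49 == 0x31
byte viaConvert = Convert.ToByte(c); // also 49 == 0x31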
I am retrieving ASCII strings encoded with code page 437 from another system which I need to transform to Unicode so they can be mixed with other Unicode strings.
This is what I am working with:
var asciiString = "\u0094"; // 94 corresponds represents 'ö' in code page 437.
var asciiEncoding = Encoding.GetEncoding(437);
var unicodeEncoding = Encoding.Unicode;
// This is what I attempted to do, but it does not seem to support the eighth bit. Characters using the eighth bit are replaced with '?' (0x3F).
var asciiBytes = asciiEncoding.GetBytes(asciiString);
// This work-around does the job, but there must be built in functionality to do this?
//var asciiBytes = asciiString.Select(c => (byte)c).ToArray();
// This piece of code happily converts the character correctly to Unicode: { 0x94 } => { 0xF6, 0x00 }.
var unicodeBytes = Encoding.Convert(asciiEncoding, unicodeEncoding, asciiBytes);
var unicodeString = unicodeEncoding.GetString(unicodeBytes); // I want this to be 'ö'.
What I am struggling with is that I cannot find a suitable method in the .NET Framework to transform a string with character codes above 127 into a byte array. This seems strange, since there is support for transforming a byte array with values above 127 into Unicode strings.
So my question is: is there any built-in method to do this conversion properly, or is my work-around the proper way to do it?
var asciiString = "\u0094";
Whatever you name it, this will always be a Unicode string. .NET only has Unicode strings.
I am retrieving ASCII strings encoded with code page 437 from another system
Treat the incoming data as byte[], not as string.
var asciiBytes = new byte[] { 0x94 }; // 0x94 represents 'ö' in code page 437.
var asciiEncoding = Encoding.GetEncoding(437);
var unicodeString = asciiEncoding.GetString(asciiBytes);
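One note if you are on .NET Core / .NET 5+: the legacy code pages are not built in there, so Encoding.GetEncoding(437) throws unless the provider from the System.Text.Encoding.CodePages package is registered first:
// Needed once at startup before code page 437 can be requested
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);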
\u0094 is Unicode code-point 0094, which is a control character; it is not ö. If you wanted ö, the correct string is
string s = "ö";
which is LATIN SMALL LETTER O WITH DIAERESIS, aka code-point 00F6.
So:
var s = "\u00F6"; // Identical to "ö"
Now we get our encoding:
var enc = Encoding.GetEncoding(437);
var bytes = enc.GetBytes(s);
And we find that it is a single-byte decimal 148, which is hex 94 - i.e. what you were after.
The significance here is that in C# when you use the "\uXXXX" syntax, the XXXX is always referring to Unicode code-points, not the encoded value in some particular encoding.
You have to look earlier in the code. Once you have the data as a string, it has already been decoded. Any characters lost in that decoding are impossible to get back.
You need the input as bytes, so that you can use your encoding object for code page 437 to decode it into a string.
byte[] asciiData = new byte[] { 0x94 }; // character ö in codepage 437
Encoding asciiEncoding = Encoding.GetEncoding(437);
string unicodeString = asciiEncoding.GetString(asciiData);
Console.WriteLine(unicodeString);
Output:
ö