I've been trying to convert a string containing EBCDIC characters to ASCII. This is my code so far:
string data = "F2F1F0F2F2F5F4";
Encoding ascii = Encoding.ASCII;
Encoding ebcdic = Encoding.GetEncoding("IBM037");
byte[] ebcdicData = ebcdic.GetBytes(data);
// Convert to ASCII
byte[] ebcdicDataConverted = Encoding.Convert(ebcdic, ascii, ebcdicData);
string sample = ascii.GetString(ebcdicDataConverted);
But I was expecting the variable sample to contain this value: 2102254
Instead, it shows the same value as data: F2F1F0F2F2F5F4
Maybe I'm not understanding how this works, or I'm just burnt out. This page contains the conversion table that translates 8-bit EBCDIC characters to 7-bit ASCII.
Is the Encoding that I'm using the right one? Am I doing something wrong?
Thanks
I converted the string to bytes using a for loop like this:
byte[] ebcdicData = new byte[data.Length / 2];
for (int i = 0; i < ebcdicData.Length; i++)
{
    string chunk = data.Substring(i * 2, 2);
    ebcdicData[i] = Convert.ToByte(chunk, 16);
}
This way it worked. It turns out GetBytes() encodes the characters of the string itself ('F', '2', '1', and so on) into EBCDIC bytes rather than parsing them as hex digits, which is why the round trip gave back the same text.
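For reference, here is the whole conversion put together (a minimal sketch of what worked; IBM037 is the code page from the original snippet):

using System;
using System.Text;

string data = "F2F1F0F2F2F5F4"; // hex dump of EBCDIC bytes

// Parse each pair of hex digits into one byte.
byte[] ebcdicData = new byte[data.Length / 2];
for (int i = 0; i < ebcdicData.Length; i++)
{
    ebcdicData[i] = Convert.ToByte(data.Substring(i * 2, 2), 16);
}

// Decode the EBCDIC bytes straight into a .NET string.
Encoding ebcdic = Encoding.GetEncoding("IBM037");
string sample = ebcdic.GetString(ebcdicData); // "2102254"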
I would like to use the Replace() method, but using hex values instead of string values.
I have a program in C# that writes a text file.
I don't know why, but when the program writes the '°' character (the degree sign), it comes out as 'Â°' (in hex: C2 B0 instead of B0).
I would just like to patch it, in order to correct this.
Is it possible to do a replace, to replace C2B0 with B0? How can I do this?
Thanks a lot :)
Not sure if this is the best solution for your problem, but if you want a replace function for a string using hex values, this will work:
var newString = HexReplace(sourceString, "C2B0", "B0");
private static string HexReplace(string source, string search, string replaceWith) {
    var realSearch = string.Empty;
    var realReplace = string.Empty;
    if (search.Length % 2 == 1) throw new ArgumentException("Search parameter incorrect!");
    if (replaceWith.Length % 2 == 1) throw new ArgumentException("Replace parameter incorrect!");
    // Turn each pair of hex digits into the corresponding character.
    for (var i = 0; i < search.Length / 2; i++) {
        var hex = search.Substring(i * 2, 2);
        realSearch += (char)int.Parse(hex, System.Globalization.NumberStyles.HexNumber);
    }
    for (var i = 0; i < replaceWith.Length / 2; i++) {
        var hex = replaceWith.Substring(i * 2, 2);
        realReplace += (char)int.Parse(hex, System.Globalization.NumberStyles.HexNumber);
    }
    return source.Replace(realSearch, realReplace);
}
C# strings are Unicode. When they are written to a file, an encoding must be applied. The default encoding used by File.WriteAllText is UTF-8 with no byte order mark.
The two-byte sequence 0xC2 0xB0 is the UTF-8 representation of the degree sign ° (code point U+00B0).
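You can see this directly with a quick check (a small sketch):

using System;
using System.Text;

// UTF-8 encodes the degree sign (U+00B0) as two bytes.
byte[] utf8 = Encoding.UTF8.GetBytes("°");
Console.WriteLine(BitConverter.ToString(utf8)); // C2-B0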
To get rid of the 0xC2 part, apply a different single-byte encoding, for example Windows-1252 (often loosely called Latin-1):
var latin1 = Encoding.GetEncoding(1252);
File.WriteAllText(path, text, latin1);
To address the "hex replace" idea of the question: the best way to remove the UTF-8 lead byte from existing files is to do a ReadAllText with UTF-8, followed by a WriteAllText as shown above (or stream chunking if the files are too big to read into memory as a whole).
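For existing files, that looks roughly like this (a sketch; the path is a placeholder):

using System.IO;
using System.Text;

string path = @"C:\temp\output.txt"; // placeholder path
// Read as UTF-8 (how the file was written), then rewrite as Windows-1252
// so the degree sign is stored as the single byte 0xB0.
string text = File.ReadAllText(path, Encoding.UTF8);
File.WriteAllText(path, text, Encoding.GetEncoding(1252));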
Single-byte character encodings cannot represent all Unicode characters, so substitution will happen for any such character in your DataTable.
The rendition as Â° must be blamed on the viewer/editor you are using to display the file.
Further reading: https://stackoverflow.com/a/17269952/1132334
I have a column in the DB containing a UTF-16 string, and I want to convert the UTF-16 string into normal text. How do I achieve this in C#?
For example :
Source : 0645 0631 062D 0628 0627 0020 0627 0644 0639 0627 0644 0645
Convert : مرحبا العالم
I presume that the source is simply a string containing the byte values, as this is one thing that is not quite clear from your question.
You first need to turn that into a byte array. Of course, you first need to remove the blanks.
// Initialize the byte array
string sourceNoBlanks = source.Replace(" ", "").Trim();
if ((sourceNoBlanks.Length % 2) > 0)
    throw new ArgumentException("The length of the source string must be a multiple of 2!");
byte[] sourceBytes = new byte[sourceNoBlanks.Length / 2];

// Then, create the bytes
for (int i = 0; i < sourceBytes.Length; i++)
{
    string byteString = sourceNoBlanks.Substring(i * 2, 2);
    sourceBytes[i] = Byte.Parse(byteString, NumberStyles.HexNumber);
}
After that you can easily convert it to string:
string result = Encoding.BigEndianUnicode.GetString(sourceBytes);
I suggest you read up on the UTF-16 encoding to understand little-endian versus big-endian byte order; the hex dump in the question puts the high byte of each code unit first, which is why BigEndianUnicode is used here rather than Encoding.Unicode.
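Alternatively, since each group of four hex digits in the question is one UTF-16 code unit, you could parse each group directly into a char (a sketch, assuming the space-separated format shown above):

using System;
using System.Globalization;
using System.Linq;

string source = "0645 0631 062D 0628 0627 0020 0627 0644 0639 0627 0644 0645";
// Each 4-digit hex group is one UTF-16 code unit, so parse it straight to a char.
string result = new string(source
    .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
    .Select(h => (char)ushort.Parse(h, NumberStyles.HexNumber))
    .ToArray());
Console.WriteLine(result); // مرحبا العالم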
I am looking for a smart way to convert a string of hex byte values into a string of 'real text' (ASCII characters).
For example, I have the word "Hello" written in hexadecimal ASCII: 48 45 4C 4C 4F. Using some method, I want to receive the ASCII text of it (in this case "Hello").
// I have this hex string (the example word "Hello") ...
string strHexa = "48454C4C4F";
// ... and I want to convert strHexa to an ASCII string.
string strResult = ConvertToASCII(strHexa);
I am sure there is a framework method. If this is not the case of course I could implement my own method.
Thanks!
var str = Encoding.UTF8.GetString(SoapHexBinary.Parse("48454C4C4F").Value); //HELLO
PS: SoapHexBinary is in the System.Runtime.Remoting.Metadata.W3cXsd2001 namespace.
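If you happen to be on .NET 5 or later (an assumption about your target framework), Convert.FromHexString does the hex-parsing part without reaching into the Remoting namespace:

using System;
using System.Text;

// .NET 5+ only: parse the hex string straight to bytes, then decode as ASCII.
var str = Encoding.ASCII.GetString(Convert.FromHexString("48454C4C4F")); // HELLO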
I am sure there is a framework method.
A single framework method: no.
However, the second part of this (converting a byte array containing ASCII-encoded text into a .NET string, which is UTF-16 encoded Unicode) does exist: System.Text.ASCIIEncoding, and specifically its GetString method, available through the Encoding.ASCII singleton:
string result = Encoding.ASCII.GetString(byteArray);
The first part is easy enough to do yourself: take two hex digits at a time, parse as hex, and cast to a byte to store in the array. Something like:
byte[] HexStringToByteArray(string input) {
    Debug.Assert(input.Length % 2 == 0, "Must have two digits per byte");
    var res = new byte[input.Length / 2];
    for (var i = 0; i < input.Length / 2; i++) {
        var h = input.Substring(i * 2, 2);
        res[i] = Convert.ToByte(h, 16);
    }
    return res;
}
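Putting the two parts together (a quick usage sketch):

byte[] bytes = HexStringToByteArray("48454C4C4F");
string strResult = Encoding.ASCII.GetString(bytes); // "HELLO"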
Edit: Note: L.B.'s answer identifies a method in .NET that will do the first part more easily; this is a better approach than writing it yourself (while it lives in a perhaps obscure namespace, it is implemented in mscorlib rather than needing an additional reference).
StringBuilder sb = new StringBuilder();
for (int i = 0; i < hexStr.Length; i += 2)
{
    string hs = hexStr.Substring(i, 2);
    // Cast to char, otherwise the byte's numeric value is appended as text.
    sb.Append((char)Convert.ToByte(hs, 16));
}
string result = sb.ToString();
I am retrieving ASCII strings encoded with code page 437 from another system which I need to transform to Unicode so they can be mixed with other Unicode strings.
This is what I am working with:
var asciiString = "\u0094"; // 0x94 represents 'ö' in code page 437.
var asciiEncoding = Encoding.GetEncoding(437);
var unicodeEncoding = Encoding.Unicode;
// This is what I attempted to do, but it does not seem to support the eighth bit: characters using the eighth bit are replaced with '?' (0x3F).
var asciiBytes = asciiEncoding.GetBytes(asciiString);
// This work-around does the job, but surely there is built-in functionality to do this?
//var asciiBytes = asciiString.Select(c => (byte)c).ToArray();
// This piece of code happily converts the character correctly to Unicode: { 0x94 } => { 0xF6, 0x00 }.
var unicodeBytes = Encoding.Convert(asciiEncoding, unicodeEncoding, asciiBytes);
var unicodeString = unicodeEncoding.GetString(unicodeBytes); // I want this to be 'ö'.
What I am struggling with is that I cannot find a suitable method in the .NET Framework to transform a string with character codes above 127 into a byte array. This seems strange, since there is support for transforming a byte array with characters above 127 into Unicode strings.
So my question is, is there any built in method to do this conversion properly or is my work-around the proper way to do it?
var asciiString = "\u0094";
Whatever you name it, this will always be a Unicode string. .NET only has Unicode strings.
I am retrieving ASCII strings encoded with code page 437 from another system
Treat the incoming data as byte[], not as string.
var asciiBytes = new byte[] { 0x94 }; // 0x94 represents 'ö' in code page 437.
var asciiEncoding = Encoding.GetEncoding(437);
var unicodeString = asciiEncoding.GetString(asciiBytes);
\u0094 is Unicode code-point 0094, which is a control character; it is not ö. If you wanted ö, the correct string is
string s = "ö";
which is LATIN SMALL LETTER O WITH DIAERESIS, aka code-point 00F6.
So:
var s = "\u00F6"; // Identical to "ö"
Now we get our encoding:
var enc = Encoding.GetEncoding(437);
var bytes = enc.GetBytes(s);
And we find that it is a single byte, decimal 148, which is hex 0x94, i.e. what you were after.
The significance here is that in C# when you use the "\uXXXX" syntax, the XXXX is always referring to Unicode code-points, not the encoded value in some particular encoding.
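To make that distinction concrete (a small sketch, using System and System.Text):

// "\u00F6" names the Unicode code point U+00F6 ('ö'); which bytes it becomes
// depends entirely on the encoding applied afterwards.
string t = "\u00F6";
Console.WriteLine(Encoding.GetEncoding(437).GetBytes(t)[0]); // 148 (0x94 in code page 437)
Console.WriteLine(Encoding.UTF8.GetBytes(t).Length);         // 2   (0xC3 0xB6 in UTF-8)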
You have to look earlier in the code. Once you have the data as a string, it has already been decoded. Any characters lost in that decoding are impossible to get back.
You need the input as bytes, so that you can use your encoding object for code page 437 to decode it into a string.
byte[] asciiData = new byte[] { 0x94 }; // character ö in codepage 437
Encoding asciiEncoding = Encoding.GetEncoding(437);
string unicodeString = asciiEncoding.GetString(asciiData);
Console.WriteLine(unicodeString);
Output:
ö
I need to convert a string into its binary equivalent and keep it in a string, then return it back into its ASCII equivalent.
You can encode a string into a byte-wise representation by using an Encoding, e.g. UTF-8:
var str = "Out of cheese error";
var bytes = Encoding.UTF8.GetBytes(str);
To get back a .NET string object:
var strAgain = Encoding.UTF8.GetString(bytes);
// str == strAgain
You seem to want the representation as a series of '1' and '0' characters; I'm not sure why you do, but that's possible too:
var binStr = string.Join("", bytes.Select(b => Convert.ToString(b, 2).PadLeft(8, '0')));
Encodings take an abstract string (in the sense that strings are an opaque representation of a series of Unicode code points) and map them into a concrete series of bytes. The bytes are meaningless (again, because they're opaque) without the encoding. But with the encoding, they can be turned back into a string.
You seem to be mixing up "ASCII" with strings; ASCII is simply an encoding that deals only with code points below 128. If you have a string containing an 'é', for example, it has no ASCII representation, and so most definitely cannot be represented using a series of ASCII bytes, even though it can exist peacefully in a .NET string object.
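A quick illustration of that replacement behaviour (a sketch, using System and System.Text):

// 'é' (U+00E9) has no ASCII representation, so the ASCII encoder
// substitutes '?' (0x3F) by default.
byte[] b = Encoding.ASCII.GetBytes("é");
Console.WriteLine(b[0]); // 63, i.e. '?'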
See this article by Joel Spolsky for further reading.
You can use these functions to convert a string to binary and restore it back:
public static string BinaryToString(string data)
{
    List<Byte> byteList = new List<Byte>();
    for (int i = 0; i < data.Length; i += 8)
    {
        byteList.Add(Convert.ToByte(data.Substring(i, 8), 2));
    }
    return Encoding.ASCII.GetString(byteList.ToArray());
}
and for converting a string to binary:
public static string StringToBinary(string data)
{
    StringBuilder sb = new StringBuilder();
    foreach (char c in data.ToCharArray())
    {
        sb.Append(Convert.ToString(c, 2).PadLeft(8, '0'));
    }
    return sb.ToString();
}
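A quick round trip with the two helpers above (a sketch; the output assumes ASCII input, since BinaryToString decodes with Encoding.ASCII):

string bits = StringToBinary("Hi");  // "0100100001101001"
string text = BinaryToString(bits);  // "Hi"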
Hope this helps.
First convert the string into bytes, as described in my comment and in Cameron's answer; then iterate, converting each byte into an 8-digit binary number (possibly with Convert.ToString, padding appropriately), and concatenate. For the reverse direction, split into groups of 8 characters, run each through Convert.ToInt16, build up a byte array, then convert back to a string with GetString.