I have a .NET plugin which needs to get the text of the current buffer. I found this page, which shows a way to do it:
public static string GetDocumentText(IntPtr curScintilla)
{
    int length = (int)Win32.SendMessage(curScintilla, SciMsg.SCI_GETLENGTH, 0, 0) + 1;
    StringBuilder sb = new StringBuilder(length);
    Win32.SendMessage(curScintilla, SciMsg.SCI_GETTEXT, length, sb);
    return sb.ToString();
}
And that's fine, until we reach the character encoding issues. I have a buffer that is set in the Encoding menu to "UTF-8 without BOM", and I write that text to a file:
System.IO.File.WriteAllText(@"C:\Users\davet\BBBBBB.txt", sb.ToString());
when I open that file (in notepad++) the encoding menu shows UTF-8 without BOM but the ß character is broken (it shows up as the two characters ÃŸ).
I was able to get as far as finding the encoding for my current buffer:
int currentBuffer = (int)Win32.SendMessage(PluginBase.nppData._nppHandle, NppMsg.NPPM_GETCURRENTBUFFERID, 0, 0);
Console.WriteLine("currentBuffer: " + currentBuffer);
int encoding = (int) Win32.SendMessage(PluginBase.nppData._nppHandle, NppMsg.NPPM_GETBUFFERENCODING, currentBuffer, 0);
Console.WriteLine("encoding = " + encoding);
And that shows "4" for "UTF-8 without BOM" and "0" for "ASCII", but I cannot find what notepad++ or Scintilla thinks those values are supposed to represent.
So I'm a bit lost for where to go next (Windows not being my natural habitat). Anyone know what I'm getting wrong, or how to debug it further?
Thanks.
Removing the StringBuilder fixes this problem.
public static string GetDocumentTextBytes(IntPtr curScintilla)
{
    int length = (int)Win32.SendMessage(curScintilla, SciMsg.SCI_GETLENGTH, 0, 0) + 1;
    byte[] sb = new byte[length];
    unsafe
    {
        fixed (byte* p = sb)
        {
            IntPtr ptr = (IntPtr)p;
            Win32.SendMessage(curScintilla, SciMsg.SCI_GETTEXT, length, ptr);
        }
        return System.Text.Encoding.UTF8.GetString(sb).TrimEnd('\0');
    }
}
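If you'd rather not compile with /unsafe, the same idea can be expressed with Marshal; a sketch that assumes your Win32.SendMessage wrapper has an overload taking an IntPtr lParam (as the plugin template used above does), and needs System.Runtime.InteropServices and System.Text:
public static string GetDocumentTextManaged(IntPtr curScintilla)
{
    int length = (int)Win32.SendMessage(curScintilla, SciMsg.SCI_GETLENGTH, 0, 0) + 1;
    IntPtr ptr = Marshal.AllocHGlobal(length);
    try
    {
        // Scintilla copies the raw (UTF-8) document bytes into the unmanaged buffer.
        Win32.SendMessage(curScintilla, SciMsg.SCI_GETTEXT, length, ptr);
        byte[] buffer = new byte[length];
        Marshal.Copy(ptr, buffer, 0, length);
        return Encoding.UTF8.GetString(buffer).TrimEnd('\0');
    }
    finally
    {
        Marshal.FreeHGlobal(ptr);
    }
}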
Alternative approach:
The reason for the broken UTF-8 characters is that this line..
Win32.SendMessage(curScintilla, SciMsg.SCI_GETTEXT, length, sb);
..reads the string using [MarshalAs(UnmanagedType.LPStr)], which uses your computer's default ANSI encoding when decoding strings (MSDN). This means you get a string with one character per byte, which breaks for multi-byte UTF-8 characters.
Now, to save the original UTF-8 bytes to disk, you simply need to use the same default ANSI encoding when writing the file:
File.WriteAllText(@"C:\Users\davet\BBBBBB.txt", sb.ToString(), Encoding.Default);
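A minimal illustration of why this round-trips, assuming the default code page is Windows-1252: the ANSI decode gives you exactly one char per byte, and encoding those chars back with the same code page reproduces the original UTF-8 bytes.
byte[] utf8Bytes = Encoding.UTF8.GetBytes("ß");            // C3 9F
string mangled   = Encoding.Default.GetString(utf8Bytes);  // "ÃŸ" - one char per byte
byte[] restored  = Encoding.Default.GetBytes(mangled);     // C3 9F again
Console.WriteLine(Encoding.UTF8.GetString(restored));      // ß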
Related
I've got two strings which are derived from Windows filenames, which contain unicode characters that do not display correctly in Windows (they show just the square box "unknown character" instead of the correct character). However the filenames are valid and these files exist without problems in the operating system, which means I need to be able to deal with them correctly and accurately.
I'm loading the filenames the usual way:
string path = @"c:\folder";
foreach (FileInfo file in new DirectoryInfo(path).EnumerateFiles())
{
    string filename = file.FullName;
}
but for the purposes of explaining this problem, these are the two filenames I'm having issues with:
string filename1 = "\ude18.txt";
string filename2 = "\udca6.txt";
Two strings, two filenames with a single unicode character plus an extension, both different. So far this is fine: I can read and write these files without problems. However, I need to store these strings in a SQLite DB and later retrieve them. Every attempt I make to do so results in both of these characters being changed to the "unknown character", so the original data is lost and I can no longer differentiate the two strings. At first I thought this was a SQLite issue, and I made sure my DB is in UTF-16, but it turns out it's the conversion to UTF-16 in C# that is causing the problem.
If I ignore SQLite entirely and simply try to manually convert these strings to UTF-16 (or to any other encoding), these characters are converted to the "unknown character" and the original data is lost. If I do this:
System.Text.Encoding enc = System.Text.Encoding.Unicode;
string filename1 = "\ude18.txt";
string filename2 = "\udca6.txt";
byte[] name1Bytes = enc.GetBytes(filename1);
byte[] name2Bytes = enc.GetBytes(filename2);
and I then inspect the byte arrays 'name1Bytes' and 'name2Bytes', they are both identical. The unicode character in both cases has been converted to the byte pair 253, 255, which is U+FFFD, the "unknown character", in little-endian UTF-16. Sure enough, when I convert back
string newFilename1 = enc.GetString(name1Bytes);
string newFilename2 = enc.GetString(name2Bytes);
the original unicode character in each case is lost, replaced with the diamond question mark symbol. I have lost the original filenames altogether.
It seems that these encoding conversions rely on the system font being able to display the characters, which is a problem since these strings already exist as filenames and changing the filenames isn't an option. I need to preserve this data somehow when sending it to SQLite; on the way in it will go through a conversion to UTF-16, and it's this conversion that the data needs to survive without loss.
If you cast a char to an int, you get the numeric value, bypassing the Unicode conversion mechanism:
foreach (char ch in filename1)
{
    int i = ch; // 0x0000de18 == 56856 for the first char in filename1
    // ... do whatever, e.g., create an int array, store it as base64
}
This turns out to work as well, and is perhaps more elegant:
foreach (int ch in filename1)
{
    // ...
}
So perhaps something like this:
string Encode(string raw)
{
    byte[] bytes = new byte[2 * raw.Length];
    int i = 0;
    foreach (int ch in raw)
    {
        bytes[i++] = (byte)(ch & 0xff);
        bytes[i++] = (byte)(ch >> 8);
    }
    return Convert.ToBase64String(bytes);
}

string Decode(string encoded)
{
    byte[] bytes = Convert.FromBase64String(encoded);
    char[] chars = new char[bytes.Length / 2];
    for (int i = 0; i < chars.Length; ++i)
    {
        chars[i] = (char)(bytes[i * 2] | (bytes[i * 2 + 1] << 8));
    }
    return new string(chars);
}
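For what it's worth, the reason the plain conversions fail is that \ude18 and \udca6 are unpaired surrogates, which the UTF encoders replace with U+FFFD; the Encode/Decode pair above sidesteps that by storing the raw 16-bit char values. A quick usage sketch:
string filename1 = "\ude18.txt";
string stored = Encode(filename1);    // plain ASCII/base64, safe to put in a SQLite text column
string back   = Decode(stored);
Console.WriteLine(back == filename1); // True - the unpaired surrogate survives the round trip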
I would like to use the Replace() method, but with hex values instead of string values.
I have a program in C# which writes a text file.
I don't know why, but when the program writes the '°' (degree) character, it comes out as Â° (in hex: C2 B0 instead of B0).
I would just like to patch the output in order to correct this.
Is it possible to do a replace, replacing C2 B0 with B0? How can I do this?
Thanks a lot :)
Not sure if this is the best solution for your problem but if you want a replace function for a string using hex values this will work:
var newString = HexReplace(sourceString, "C2B0", "B0");
private static string HexReplace(string source, string search, string replaceWith) {
    var realSearch = string.Empty;
    var realReplace = string.Empty;
    if (search.Length % 2 == 1) throw new Exception("Search parameter incorrect!");
    for (var i = 0; i < search.Length / 2; i++) {
        var hex = search.Substring(i * 2, 2);
        realSearch += (char)int.Parse(hex, System.Globalization.NumberStyles.HexNumber);
    }
    for (var i = 0; i < replaceWith.Length / 2; i++) {
        var hex = replaceWith.Substring(i * 2, 2);
        realReplace += (char)int.Parse(hex, System.Globalization.NumberStyles.HexNumber);
    }
    return source.Replace(realSearch, realReplace);
}
C# strings are Unicode. When they are written to a file, an encoding must be applied. The default encoding used by File.WriteAllText is utf-8 with no byte order mark.
The two-byte sequence 0xC2B0 is the representation of the ° degree sign U+00B0 codepoint in utf-8.
To get rid of the 0xC2 part, apply a different encoding, for example latin-1:
var latin1 = Encoding.GetEncoding(1252);
File.WriteAllText(path, text, latin1);
To address the "hex replace" idea of the question: Best practice to remove the utf-8 leading byte from existing files would be to do a ReadAllText with utf-8, followed by a WriteAllText as shown above (or stream chunking if the files are too big to read to memory as a whole).
Single-byte character encodings cannot represent all Unicode characters, so substitution will happen for any such character in your data.
The rendition as Â° must be blamed on the viewer/editor you are using to display the file.
Further reading: https://stackoverflow.com/a/17269952/1132334
I have a MySQL dump with some special characters ("Ä, ä, Ö, ö, Ü, ü, ß"). I have to re-import this dump into the latest MySQL version, and because of the encoding the special characters get broken: the dump is not encoded as UTF-8.
Within this dump there are also some binary attachments which must not be touched, otherwise the attachments will be broken.
I have to overwrite every special character with the byte sequence that is valid UTF-8.
I'm currently trying it this way (this changes the ANSI ü into a UTF-8 readable ü):
newByteArray[y] = 195;
if (bytesFromLine[i] == 252)
{
    newByteArray[y + 1] = 188;
}
newByteArray[y + 2] = bytesFromLine[y + 1];
Byte 252 displays as 'ü' in Encoding.Default; bytes 195 188 display as 'ü' in Encoding.UTF8.
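I verified those byte values like this (my machine's default code page is Windows-1252):
Console.WriteLine(Encoding.Default.GetString(new byte[] { 252 }));   // ü
Console.WriteLine(Encoding.UTF8.GetString(new byte[] { 195, 188 })); // ü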
Now I need help with finding these specific characters in the dump file and overwriting those bytes with the right ones. I can't simply replace every '252' with '195 188', because that would break the attachments.
Thanks in advance.
DISCLAIMER: This might corrupt your data. The best way of dealing with this is to get a proper mysqldump from the source database. This solution should only be used when you don't have that option and are stuck with a potentially broken dump file.
Assuming all strings in the dump file are in single quotes (') and embedded quotes are escaped as \':
INSERT INTO `some_table` VALUES (123, 'this is a string', ...
I'm not too sure how binary data is represented; that part might need more checks. Look at your dump file and see whether these assumptions are correct.
const char quote = '\'';
const char escape = '\\';

using (var dumpOut = new FileStream("dump_out.txt", FileMode.Create, FileAccess.Write))
using (var dumpIn = new FileStream("dump_in.txt", FileMode.Open, FileAccess.Read))
{
    bool inquotes = false;
    byte previousByte = 0;
    var stringBytes = new List<byte>();
    while (true)
    {
        int readByte = dumpIn.ReadByte();
        if (readByte == -1) break;
        var b = (byte)readByte;
        if (b == quote && previousByte != escape)
        {
            if (inquotes) // closing quote
            {
                var buffer = stringBytes.ToArray();
                stringBytes.Clear();
                byte[] converted = Encoding.Convert(Encoding.Default, Encoding.UTF8, buffer);
                dumpOut.Write(converted, 0, converted.Length);
                dumpOut.WriteByte(b);
            }
            else // opening quote
            {
                dumpOut.WriteByte(b);
            }
            inquotes = !inquotes;
            continue;
        }
        previousByte = b;
        if (inquotes)
            stringBytes.Add(b);
        else
            dumpOut.WriteByte(b);
    }
}
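One tweak worth considering: Encoding.Default depends on whichever machine runs the converter, so if you know the dump's single-byte encoding it may be safer to name it explicitly. For example, assuming the dump's text is latin1/cp1252, the conversion of the collected string bytes would become:
Encoding sourceEncoding = Encoding.GetEncoding(1252); // assumption: the dump's text is cp1252/latin1
byte[] converted = Encoding.Convert(sourceEncoding, Encoding.UTF8, buffer);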
I'm trying to read a large file from disk and report the percentage loaded while reading it. The problem is that FileInfo.Length reports a different size than the total of my Encoding.ASCII.GetBytes().Length calls.
public void loadList()
{
    string ListPath = InnerConfig.dataDirectory + core.operation[operationID].Operation.Trim() + "/List.txt";
    FileInfo f = new FileInfo(ListPath);
    int bytesLoaded = 0;
    using (FileStream fs = File.Open(ListPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (BufferedStream bs = new BufferedStream(fs))
    using (StreamReader sr = new StreamReader(bs))
    {
        string line;
        while ((line = sr.ReadLine()) != null)
        {
            byte[] array = Encoding.ASCII.GetBytes(line);
            bytesLoaded += array.Length;
        }
    }
    MessageBox.Show(bytesLoaded + "/" + f.Length);
}
The result is
13357/15251
There are roughly 1,900 bytes 'missing'. The file contains a list of short strings. Any tips on why the sizes differ? Does it have anything to do with the '\r' and '\n' characters in the file? In addition, I have the following line:
int bytesLoaded = 0;
If the file is, let's say, 1 GB, do I have to use 'long' instead? Thank you for your time!
Your intuition is correct; the difference in the reported sizes is due to the newline characters. Per the MSDN documentation on StreamReader.ReadLine:
The string that is returned does not contain the terminating carriage return or line feed.
Depending on the source which created your file, each newline would consist of either one or two characters (most commonly: \r\n on Windows; just \n on Linux).
That said, if your intention is to read the file as a sequence of bytes (without regard to lines), you should use the FileStream.Read method, which avoids the overhead of ASCII encoding (as well as returns the correct count in total):
byte[] array = new byte[1024]; // buffer
int total = 0;
using (FileStream fs = File.Open(ListPath, FileMode.Open,
                                 FileAccess.Read, FileShare.ReadWrite))
{
    int read;
    while ((read = fs.Read(array, 0, array.Length)) > 0)
    {
        total += read;
        // process "array" here, up to index "read"
    }
}
Edit: spender raises an important point about character encodings; your code should only be used on ASCII text files. If your file was written using a different encoding – the most popular today being UTF-8 – then results may be incorrect.
Consider, for example, the three-byte hex sequence E2-98-BA. StreamReader, which uses UTF8Encoding by default, would decode this as a single character, ☺. However, this character cannot be represented in ASCII; thus, calling Encoding.ASCII.GetBytes("☺") would return a single byte corresponding to the ASCII value of the fallback character, ?, thereby leading to loss in character count (as well as incorrect processing of the byte array).
Finally, there is also the possibility of an encoding preamble (such as a Unicode byte order mark) at the beginning of the text file, which would be silently consumed by the StreamReader, resulting in a further discrepancy of a few bytes.
It's the line endings that get swallowed by ReadLine; it could also be because your source file is in a more verbose encoding than ASCII (perhaps it's UTF-8?).
int.MaxValue is 2147483647, so you're going to run into problem using an int for bytesLoaded if your file is >2GB. Switch to a long. After all, FileInfo.Length is defined as a long.
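Putting both answers together, a sketch of the progress loop that counts raw bytes and uses long throughout (variable names borrowed from the question):
long totalBytes = new FileInfo(ListPath).Length;
long bytesLoaded = 0;
byte[] buffer = new byte[8192];
using (FileStream fs = File.Open(ListPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    int read;
    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        bytesLoaded += read;
        int percent = (int)(bytesLoaded * 100 / totalBytes); // safe: both operands are long
        // report "percent" to the UI here
    }
}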
The ReadLine method removes the trailing line termination character.
I am using ReadProcessMemory in C#; the output is a byte[]. I want to convert this to a string. How do I do that? My code is below:
if (!ReadProcessMemory(appProcess.Handle, mbi.BaseAddress, buffer, mbi.RegionSize, ref nRead))
{
    int lastError = Marshal.GetLastWin32Error();
    if (lastError != 0)
        throw new ApplicationException(string.Format("ReadProcessMemory returned Win32 Error {0}", lastError));
}
I am using string szData = Encoding.UTF8.GetString(buffer); and I am getting the output below. How do I get a valid string?
�#y��Actx Actx �ȶ�+eMZ�Actx Actx Actx ��ؚ~���������������MZ�j��xIlj�u�z�uy�u͙�u�}�u:�u��If�՜��D$f��f�4$��5Q�G"��L[���T_�N�b�l"���aa1wa��[�ۖ+3�����⯚*�e%��m�v�a�����S�+ ��b�r��o���V�G�q�1)v��*��[k<�CP�C�FYYE^i>�o �R��敠{�u�B3�����w�/���E�{U-��v|5�馘���U1�7�ҡ��[�## P^�J�
S4����S�<���� ���cD$�$ډD$$���&,�}�34���e��_��U����V�,I�
R��}��=63S�L���M�z[�|�v�{Y^OZ�q<2�#u�c7��dzx����8�.��'h��Jsw���V�J�4)���˧JV#c�z�R��~i�
��c0g�r�|
e����e�t2�!. �+�X*m�#�U9�5�������������E
��q
�n�'s�Yi��
�������H�����vG�Z�O� �0d��C͕����{D %�#�C���Y�M_E
�6�;3�v��c��Ʌ1]�y}�ldu�����#t���A�h�9#�SVG���zfnuy�osKђ�N��q�OD$������E0�v��������������sȶ1+e�����?�������5��h0MZ��D$��M�z�uB|�u�;�ulj�uy�u���'��H[���&���
BEGINTHM�y[������RESCDIRRESCSEG{��"~��������D-x�.MZ���.�z�uB|�u�;�uK�u�E�uy�u�&��__�5����DD�.9���WU����~~�z==G�dd��]]�2+�ss�������OOѣ��D""fT**~;���
����FF����)k���(<���y�^^�
���v���;d22Vt::N
�II�
H$$l�\���]���nC����bb�9���1������7�yy����2���Cn77Y�mm�������d�NN�I����ll��VV�������%�ee��zz�G���o����xx�J%%o..r8$W���s��Ǘ��Q���#���|�tt�>!�KK�a���
�������pp�|>>Bq����ff��HH�����aa�j55_�WW�i���������X:''������8���+���"3�ii����p���3���-���<"������ ���I�UU�P((x���z���Y��� ���
e������1�BB��hh��AA�)���Z--w{��˨TT�m���,:��cc��||��ww��{{
����kk��ooT���P00��gg}V++���b����M����vvE��ʝ��#��ɇ�}}����YYɎGG
����A��g����_���E���#���S����rr[����u������=��jL&&Zl66A~??���O���\h44�Q��4�������qqs���Sb11?*R���eF##^���(0�7��
�/�� 6$���=���&���iN''�����uu ���tX,,.4-6��nn�ZZ�[����RRMv;;a����}��{R))>���q^//�����SSh���_ValidateTexInfoatToResourceFormat��y��{��"~����{��"~����RESCSEG�\�Ѕȶ1+e����ȶ1+e�������?�������'��P��W��n��W������9$�?������MZ��L$V3��y�t�ы��;T$t��F�Ѓx�u���ID�����ts.r�.��-�������#.MxX� p���O�.rsrc��lp���
�r�aaI��dGS��pOBB�W.�6t��g����MZ�����u��u�v�u���u��u��u�&\w~��u���u���u���u��\w�\w�=�uA\w��u#��u��uئ�u�D�u���uZ�u;��uܔ�u
You are reading raw binary data from a process; it will be a string only by accident. If it is a string at all, it is definitely not going to be encoded in UTF-8. That's a format you'd only ever see in files or data sent across the Internet. The in-memory representation of strings is ASCII or UTF-16.
But start out dumping this data in the same kind of format the debugger uses in the Debug + Windows + Memory 1 window. You can find the code to do so in this post.
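For illustration only (this is not the code from that post), a minimal hex+ASCII dump in that debugger style could look something like this (needs System and System.Text):
static void HexDump(byte[] bytes, int bytesPerLine = 16)
{
    for (int offset = 0; offset < bytes.Length; offset += bytesPerLine)
    {
        int count = Math.Min(bytesPerLine, bytes.Length - offset);
        var sb = new StringBuilder();
        sb.Append(offset.ToString("X8")).Append("  ");
        // hex column
        for (int i = 0; i < bytesPerLine; i++)
            sb.Append(i < count ? bytes[offset + i].ToString("X2") + " " : "   ");
        sb.Append(" ");
        // ASCII column: printable characters as-is, everything else as '.'
        for (int i = 0; i < count; i++)
        {
            byte b = bytes[offset + i];
            sb.Append(b >= 0x20 && b < 0x7F ? (char)b : '.');
        }
        Console.WriteLine(sb);
    }
}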
It depends on the text encoding. For UTF8, you can do this:
string s = Encoding.UTF8.GetString(buffer);
You need to specify an encoding and then use that to construct your string.
Example:
byte [] dBytes = ...
string str;
System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
str = enc.GetString(dBytes);
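Given where the question's buffer comes from (ReadProcessMemory), UTF-16 is the more likely encoding for any real strings in it, so a hedged variant would be:
// Assumption: the memory region holds UTF-16 text; most of a raw process dump
// is not text at all, so expect noise around any readable fragments.
string asUtf16 = Encoding.Unicode.GetString(buffer);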