Is there a C# version of the Unicode algorithm that takes a Unicode string and breaks it into runs that can be correctly rendered? Each run should be either left-to-right or right-to-left.
We understand this is part of ICU4J (the Java version of ICU), but that is a large library, and we're only looking for this specific functionality to render text correctly.
This is the Unicode standard for bidi handling: UAX #9, the Unicode Bidirectional Algorithm.
Reference implementations exist in Java (ICU4J) and C++ (ICU).
I'm sure you will be able to convert them to C# fairly simply.
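If all you need while porting is a rough split, here is a minimal C# sketch (my own, not from ICU) that groups consecutive characters into runs by checking whether they fall in the main right-to-left blocks. It is not the real UBA: neutrals, digits, embedding levels, and mirroring are all ignored, so treat it strictly as a starting point.

using System.Collections.Generic;

// Naive run splitter, NOT the full bidi algorithm from UAX #9: it only
// tests whether a char lies in the major right-to-left blocks and groups
// consecutive same-direction characters into runs, in logical order.
static class NaiveBidi
{
    static bool IsRtl(char c) =>
        (c >= '\u0590' && c <= '\u08FF') ||  // Hebrew, Arabic, Syriac, Thaana, NKo, ...
        (c >= '\uFB1D' && c <= '\uFDFF') ||  // Hebrew/Arabic presentation forms A
        (c >= '\uFE70' && c <= '\uFEFE');    // Arabic presentation forms B

    public static IEnumerable<(string Text, bool Rtl)> SplitRuns(string s)
    {
        for (int start = 0; start < s.Length; )
        {
            bool rtl = IsRtl(s[start]);
            int end = start + 1;
            while (end < s.Length && IsRtl(s[end]) == rtl) end++;
            yield return (s.Substring(start, end - start), rtl);
            start = end;
        }
    }
}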
In my case, I recently picked up the irrKlang library, which allows me to work with audio files without doing too much work. Then I ran into the issue that Unicode characters in file paths are not supported by the library. It either reads the path incorrectly (I would've thought that even if it was read wrong, it could still find the file), or simply ignores it, leaving me with invalid file paths.
I searched their support forums for a solution, but all I got out of it was a "Unicode? uhhh, why not just use ASCII?" kind of attitude towards Unicode, which I suppose is not uncommon.
What are some techniques that I could use to reliably pass Unicode strings to libraries that don't have Unicode support?
Simply put, you don't. You can pass the data through as a byte array and interpret it back as a Unicode string on the other end, but if the library doesn't do Unicode, it doesn't do Unicode.
There is no point in passing a Unicode string to a library that is incapable of interpreting it.
If you need to do something specific (like using a load command on a filesystem with Unicode paths, e.g. HFS+), then don't. Rather, use the system-provided file APIs and push the data into the uncooperative library's constructor.
If you're seriously having problems with this Unicode file path business because you don't work well with passing addresses and bitstreams around, then a simple solution is to write your own function:
// Sketch: copy (or symlink) the file to an ASCII-safe temp path (assumes the
// temp dir itself is ASCII), load via the non-Unicode API, then clean up.
// LoadViaIrrKlang is a placeholder for the library's actual load call.
object LoadObjFromUnicodePath(string path)   // needs System.IO
{
    string tmp = Path.Combine(Path.GetTempPath(), "tmp" + Path.GetExtension(path));
    File.Copy(path, tmp, overwrite: true);
    try { return LoadViaIrrKlang(tmp); }
    finally { File.Delete(tmp); }
}
We have parsers for various Microsoft languages (VB6, VB.net, C#, MS dialects of C/C++).
They are Unicode enabled to the extent that we all agree on what Unicode is. Where we don't agree, our lexers object.
Recent MS IDEs all seem to read/write their source code files in UTF-8... I'm not sure this is always true. Is there some reference document that makes it clear how MS will write a source code file? With or without byte order marks? Does it vary from IDE version to version? (I can't imagine that the old VB6 dev environment wrote anything other than an 8-bit character set, and I'd guess it would be in the CP-xxxx encoding established by the locale, right?)
For C# (and I assume other modern language dialects supported by MS), the character code \uFEFF can actually be found in the middle of a file. This code is defined as a zero-width no-break space. It appears to be ignored by VS 2010 when found in the middle of an identifier or in whitespace, but is significant in keywords and numbers. So, what are the rules? Or does MS have some kind of identifier normalization to handle things like composite characters, which allows different identifier strings to be treated as identical?
This is in a way a non-answer, because it does not tell you what Microsoft says but what the standards say. I hope it will be of assistance anyway.
U+FEFF as a regular character
As you stated, U+FEFF should be treated as a BOM (byte order mark) at the beginning of a file. Theoretically it could also appear in the middle of text, since it actually is a character denoting a zero-width non-breaking space (ZWNBSP). In some languages/writing systems, all words in a line are joined (= written together), and in such cases this character could be used as a separator, just like a regular space in English, except that it does not cause a typographically visible gap. I'm not actually familiar with such scripts, so my view might not be fully correct.
U+FEFF should only appear as a BOM
However, the usage of U+FEFF as a ZWNBSP has been deprecated as of Unicode version 3.2, and currently the purpose of U+FEFF is to act as a BOM. Instead of ZWNBSP as a separator, the U+2060 (word joiner) character is strongly preferred by the Unicode Consortium. Their FAQ also suggests that any U+FEFF occurring in the middle of a file can be treated as an unsupported character that should be displayed as invisible. Another possible solution that comes to mind would be to replace any U+FEFF occurring in the middle of a file with U+2060, or to just ignore it.
Accidentally added U+FEFF
I guess the most probable reason for U+FEFF to appear in the middle of text is that it is an erroneous result (or side effect) of a string concatenation. RFC 3629, which incorporated the usage of a BOM, notes that stripping a leading U+FEFF is necessary when concatenating strings. This also implies that the character could just be removed when found in the middle of text.
U+FEFF and UTF-8
U+FEFF as a BOM has no real effect when the text is encoded as UTF-8, since UTF-8 always has the same byte order. A BOM in UTF-8 interferes with systems that rely on the presence of certain leading characters, and with protocols that explicitly mandate the encoding or an encoding identification method. Real-world experience has also shown that some applications choke on UTF-8 with a BOM. Therefore the usage of a BOM is generally discouraged when using UTF-8. Removing the BOM from a UTF-8 encoded file should not cause incorrect interpretation of the file (unless there is some checksum or digital signature related to the byte stream of the file).
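As a concrete illustration of that advice, here is a minimal C# sketch of my own, assuming the input is UTF-8: StreamReader consumes a leading BOM, and any stray U+FEFF left in the body is stripped (replacing it with "\u2060" instead would preserve the old ZWNBSP meaning).

using System.IO;
using System.Text;

// Sketch: read a UTF-8 file, let StreamReader detect and swallow a leading
// BOM, then drop any U+FEFF left in the middle of the text (e.g. from a
// naive string concatenation).
static class BomTools
{
    public static string ReadWithoutFeff(string path)
    {
        using var reader = new StreamReader(path, Encoding.UTF8,
            detectEncodingFromByteOrderMarks: true);
        return reader.ReadToEnd().Replace("\uFEFF", string.Empty);
    }
}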
On "how MS will write a souce code file" : VS can save files with and without BOM, as well in whole bunch of other encodings. The default is UTF-8 with BOM. You can try it yourself by going File -> Save ... as -> click triangle on "Save" button and chose "save with encoding".
On usage of FEFF in actual code - never seen one using it in the code... wikipedia suggests that it should be treated as zero-width space if happened anywhere but first position ( http://en.wikipedia.org/wiki/Byte_order_mark ).
For C++, the file is either Unicode with a BOM, or will be interpreted as ANSI (meaning the system code page, not necessarily 1252). Yes, you can save with whatever encoding you want, but the compiler will choke if you try to compile a Shift-JIS file (Japanese, code page 932) on an OS with 1252 as the system code page.
In fact, even the editor will get it wrong. You can save it as Shift-JIS on a 1252 system, and it will look OK. But close the project and reopen it, and the text looks like junk. So the info is not preserved anywhere.
So that's your best guess: if there is no BOM, assume ANSI. That is what the editor and compiler do.
Also: this applies to VS 2008 and VS 2010; older editors were not as Unicode friendly.
And C++ has different rules than C# (for C++ the files are ANSI by default, for C# they are UTF-8).
I want to send a Cyrillic string as a parameter over a web service from an iPhone to a .NET server. How should I encode it correctly? I would like the result to be something like:
"myParam=\U0438\U0422"
If it's doable, would it matter if it is Cyrillic or just Latin letters?
And how should I decode it on the server, where I am using C#?
I would like the result to be something like "myParam=\U0438\U0422"
Really? That's not the standard for URL parameter encoding, which would be:
myParam=%d0%b8%d0%a2
assuming the UTF-8 encoding, which will be the default for an ASP.NET app. You don't need to manually decode anything then; the Request.QueryString/Form collections will give you native Unicode strings.
URL-encoding would normally be done using stringByAddingPercentEscapesUsingEncoding, except that it's a bit broken. See this question for background.
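To make the round trip concrete, a small C# sketch: Uri.EscapeDataString percent-escapes the UTF-8 bytes of each character, and on the ASP.NET side Request.QueryString already hands back the decoded Unicode string.

using System;

class PercentEncodingDemo
{
    static void Main()
    {
        string value = "\u0438\u0422";  // the Cyrillic "иТ" from the question

        // UTF-8 encode + percent-escape; prints: myParam=%D0%B8%D0%A2
        Console.WriteLine("myParam=" + Uri.EscapeDataString(value));

        // Server side: Request.QueryString["myParam"] would already contain
        // "\u0438\u0422", so no manual decoding is needed.
    }
}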
C# strings are Unicode (UTF-16) internally, so for you it's enough to ensure that your string is correctly encoded as Unicode.
Once it's encoded as Unicode, it makes no difference whether you put Cyrillic, Latin, Arabic, or any other letters in there; it should be enough to use the correct code page.
EDIT
While searching for this I found a good article: Globalization Step by Step.
Correction per @chibacity's note: even though the default string encoding in C# is Unicode (UTF-16), the web services in your case use UTF-8 (the more flexible one).
I have a C++ program that sends data via FTP in ASCII mode to an IBM mainframe. I am now doing this in C#.
When the file gets there and is viewed, it looks like garbage.
I cannot see anything in the C++ code that does anything special to encode the file into something like EBCDIC. The files sent by the C++ program are viewed OK. The only thing I see different is that the C++ code uses \015 & \012 for line endings, whereas C# uses \r\n.
Would these characters have an effect, and if so, how can I get my C# app to use \015?
Do I have to do any special encoding to make it appear OK?
It sounds like you should indeed be using an EBCDIC encoding, and then probably transferring the text in binary. I have an EBCDIC encoding class you can use, should you wish.
Note that \015\012 is \r\n - they're characters 13 and 10 in decimal, just different ways of representing them. If you think the C++ code really is producing the same files as C#, compare two files which should be the same in a binary file editor.
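For illustration, a sketch of the binary-transfer approach using the IBM037 (EBCDIC US/Canada) code page available in .NET; the code page choice is an assumption, since mainframes use several EBCDIC variants, and on .NET Core/5+ you need the System.Text.Encoding.CodePages package.

using System.IO;
using System.Text;

class EbcdicPrep
{
    static void Main()
    {
        // Required on .NET Core/5+ (System.Text.Encoding.CodePages package);
        // .NET Framework has the code-page encodings built in.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        // IBM037 = EBCDIC US/Canada; substitute your mainframe's variant.
        Encoding ebcdic = Encoding.GetEncoding("IBM037");
        byte[] bytes = ebcdic.GetBytes("HELLO MAINFRAME\r\n");
        File.WriteAllBytes("payload.bin", bytes); // then FTP this in binary mode
    }
}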
Make sure you issue a TYPE A (ASCII/text) command instead of TYPE I (binary/image) before you transfer the file.
If you are truly sending the files in ASCII mode, then the mainframe itself will convert that to EBCDIC (it's receiver-makes-good).
The fact that you're getting apparent garbage at the mainframe end, and character codes \015 and \012 (which are CR and LF respectively) means that you're not transferring in ASCII mode.
As an aside, the ISPF editor has been able to view ASCII data sets for quite a few versions now. Open up the file and enter the commands source ascii and lf.
The first converts the characters from ASCII to EBCDIC so you can see what they are; the second goes through and pads out "lines" so that linefeed markers are replaced with enough spaces to reach the record length.
Invaluable commands when dealing with mixed-encoding environments, which is where I do a lot of my work.
I'm looking for a way to programmatically show, in C#, the difference between two chunks of text.
The result, with deletes and adds, is going to be shown in HTML, but that is a second step and an optional part of the question.
I would like not to call/shell out to a command line if possible, i.e. calling a third-party diff tool or similar. The platform is Windows.
It must support Asian languages, such as Japanese, Chinese and Korean, meaning that traditional word break characters don't (necessarily) apply.
Have a look at this SO thread. A few choices of diff engine are listed there; perhaps one of them can suit you.
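If nothing there fits, character-level diffing sidesteps the word-break problem entirely, since every CJK character is its own token. A minimal sketch of my own (plain LCS, O(n*m) time and memory, so only suitable for short chunks, and it naively splits surrogate pairs):

using System;
using System.Text;

// Character-level diff via longest common subsequence. Deletions are marked
// [-x], insertions {+y}; mapping these markers to HTML <del>/<ins> tags is
// the "second step" mentioned above.
static class CharDiff
{
    public static string Diff(string a, string b)
    {
        // lcs[i, j] = LCS length of a[i..] and b[j..], filled back to front.
        int[,] lcs = new int[a.Length + 1, b.Length + 1];
        for (int i = a.Length - 1; i >= 0; i--)
            for (int j = b.Length - 1; j >= 0; j--)
                lcs[i, j] = a[i] == b[j]
                    ? lcs[i + 1, j + 1] + 1
                    : Math.Max(lcs[i + 1, j], lcs[i, j + 1]);

        var sb = new StringBuilder();
        int x = 0, y = 0;
        while (x < a.Length && y < b.Length)
        {
            if (a[x] == b[y]) { sb.Append(a[x]); x++; y++; }
            else if (lcs[x + 1, y] >= lcs[x, y + 1]) sb.Append("[-").Append(a[x++]).Append(']');
            else sb.Append("{+").Append(b[y++]).Append('}');
        }
        while (x < a.Length) sb.Append("[-").Append(a[x++]).Append(']');
        while (y < b.Length) sb.Append("{+").Append(b[y++]).Append('}');
        return sb.ToString();
    }
}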