How to make C# show Arabic?

I have a problem: while writing C# code, the output is sometimes words in the Arabic language, and they appear as strange symbols. How can I make C# read and show Arabic?

I don't know the precise problem you're having, but would suggest you read The Absolute Minimum Every Programmer Should Know About Unicode to give yourself a solid grounding in this often confusing topic.

Arabic Console Output/Input is not possible on Windows Platforms, according to Microsoft:
http://www.microsoft.com/middleeast/msdn/arabicsupp.aspx#12

C#/.NET will display Arabic characters without a problem, as it represents strings internally as UTF-16.
The issue is with how you display the characters.
If you are on the web, you need to ensure that you are including the correct charset encoding header or meta tag for the output.
Please provide more information on where you don't see the characters, and how you are outputting the strings.
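As a quick check (a minimal sketch; the sample string here is made up), you can dump the UTF-16 code units of the string: if they are valid Arabic code points (U+0600-U+06FF) but the screen shows boxes or question marks, the string itself is fine and the problem is purely in how it is displayed.

using System;

class UnicodeCheck
{
    static void Main()
    {
        // Hypothetical sample string ("Arabic" written in Arabic script).
        string arabic = "\u0639\u0631\u0628\u064A";

        // .NET strings are UTF-16 internally, so the data is intact here
        // no matter what the console or browser ends up rendering.
        foreach (char c in arabic)
            Console.WriteLine("U+{0:X4}", (int)c);
    }
}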

Maybe it is a problem with your system language. Go to Control Panel, then Language options, and try changing your system locale to Arabic; also ensure that the language for non-Unicode programs is set to Arabic.

Please make sure that you have the correct fonts installed. If you do have them on your system, it could be a font-fallback problem.
For web pages (ASP.NET), please make sure that:
You are using (and declaring) the correct encoding.
You have the correct fonts declared in your style definition.
I know that it sounds strange, but for Internet Explorer it helps to set the language for non-Unicode programs to what you need to support (in my case it was Chinese Simplified on a German Windows 2003).
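For an ASP.NET Web Forms page, a minimal sketch of the first point above (declaring the encoding) might look like this; the page class name and the choice of UTF-8 are assumptions, not something from the original answer.

using System;
using System.Text;
using System.Web.UI;

// Hypothetical code-behind class; the page name is made up.
public partial class ArabicPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Send the response as UTF-8 and advertise it in the Content-Type
        // header so the browser decodes the Arabic text correctly.
        Response.ContentEncoding = Encoding.UTF8;
        Response.Charset = "utf-8";
    }
}

The same thing can also be configured site-wide with the globalization element in web.config.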

Related

C# windows form cannot display simplified Chinese characters

Somehow my previous question has been marked as a duplicate.
Question:
I have a database with records in Chinese characters. I can take them out, and use them in button.Text.
However, when I use
Console.WriteLine(button.Text);
the output displays every Chinese character as "?".
Now, why is the question NOT duplicate?
I have THOROUGHLY searched for a solution, not just on Stack Overflow but everywhere I could search (with my limited skills), and read all the related posts. I found two potential solutions:
One:
Console.OutputEncoding = Encoding.Unicode;
Unicode, UTF8, UTF7, UTF32.
Two:
Change my computer's locale in Control Panel to a region with Simplified Chinese. Then reboot and run the solution again.
I have tried both of these suggested solutions, individually and together. Nothing works. The output changes from "?" to complete gibberish, unrecognizable characters.
Does anyone have any idea what to do here?
This is a more complete version of my comment. The way I was able to display Simplified Chinese characters was by changing the language for non-Unicode programs to Chinese, and then, in the cmd window's properties, setting the font to Consolas.
I didn't even need to set Console.OutputEncoding. With that, Chinese characters copied and pasted from the internet displayed correctly.
I think this is a duplicate of How to write Unicode characters to the console?, which indicates that although .NET and Unicode support your characters, the font you are using as the console's output font does not support them.
Your post does not indicate that you have tried adjusting the console font.
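Putting the two answers together, a minimal sketch would be: switch the console window to a font that actually contains CJK glyphs (for example NSimSun or MS Gothic - these font names are suggestions, not something the original posts verified), and optionally set the output encoding before writing.

using System;
using System.Text;

class ChineseConsole
{
    static void Main()
    {
        // Only helps if the console window's font actually has CJK glyphs;
        // pick such a font (e.g. via the cmd window's Properties dialog) first.
        Console.OutputEncoding = Encoding.UTF8;

        Console.WriteLine("\u4E2D\u6587\u6D4B\u8BD5"); // 中文测试
    }
}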

Is it possible to create a multilingual help (.chm) file?

I need to create a multilingual help (.chm) file for my WPF application. Please suggest the best way to create it.
I'd try to steer clear of that if possible. CHM is a proprietary format and, although it has been reverse engineered, I think you'll get far more benefit from doing a truly portable solution like on-disk HTML.
Back when we still used CHM files, we found no easy way to embed multi-language capability into a single file and we had to provide translations in independent CHM files, leading to massive duplication of things like charts, pictures and so forth (this was many years ago so you should check if the situation has improved since then, if you really want to use CHM).
The support for Unicode was, shall we say, less than adequate, and there were numerous security problems which caused many customers to disallow use of CHM files - seriously, who in their right mind allows arbitrary code to be run by a help system?
With on-disk HTML, not only did this duplication disappear (since each language version included common images), we also got much better Unicode support and the ability to have a default front page (in English) with links to alternative front pages for other locales.
And we gained a big boost in portability since it's an open standard. That means we could pretty much run it in any browser on any platform.
And, on top of that, it appears Microsoft doesn't support it any more. From the Wikipedia CHM article:
In 2002, Microsoft announced security risks associated with the .CHM format, as well as security bulletins and patches. They have since announced their intentions not to develop the .CHM format further.
As the comments in the previous answer state, the CHM format is both a very old Microsoft format and a proprietary one. Distributing HTML files with your application will accomplish the same functionality as a single CHM file. Worrying about the language of the user interface the user will see is largely a non-issue; chances are that if the user is reading Italian help, he or she will be using (1) an Italian-localized version of Windows and (2) an Italian-localized web browser.
That being said, because CHM is an old format and now seemingly only partially supported, you can generate the same kind of file based on the reverse-engineered specification that CHM files follow. Furthermore, because CHM is merely a binary container for HTML files, encoding those HTML files as UTF-8 will get the help documents themselves into whichever language you desire.
There is no Microsoft-supported CHM .NET API, so you would have to output the binary yourself using streams / BinaryWriter.
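If you go the on-disk HTML route instead, a minimal sketch (the file names and folder layout are made up for illustration) is simply to write one UTF-8 HTML page per language plus a default front page that links to them.

using System.IO;
using System.Text;

class HelpWriter
{
    static void Main()
    {
        Directory.CreateDirectory("help/it");

        // UTF-8 with a BOM so browsers and editors pick up the encoding.
        var utf8 = new UTF8Encoding(true);

        // Default (English) front page linking to the Italian version.
        File.WriteAllText("help/index.html",
            "<html><head><meta charset=\"utf-8\"/></head>" +
            "<body><a href=\"it/index.html\">Guida in italiano</a></body></html>",
            utf8);

        // Localized page; common images could live in a shared folder.
        File.WriteAllText("help/it/index.html",
            "<html><head><meta charset=\"utf-8\"/></head>" +
            "<body><h1>Guida</h1></body></html>",
            utf8);
    }
}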

How to fill out a PDF form and support multiple languages in iTextSharp?

I wanted to know if there is a way to support multiple languages when filling out a form field with iTextSharp. We need to support users filling out fields in English, European languages with diacritics, and Asian languages like Chinese and Japanese, but we do not know how to support these all on the same PDF (e.g. some form fields might be answered in English and some in Chinese). We have to work with Acrobat forms that are pre-defined, i.e. we cannot create a PDF completely from scratch in our scenario.
Is there a way to accomplish this within iTextSharp? At least to support most European languages and Chinese, and for the form-filling/generation process to know when to use the right font to support the particular character(s)?
Would it be an option to dynamically generate the PDF based on user input from another program, e.g. a Windows Forms app or a web page? Based on the user's selection in that app you could dynamically generate the PDF (based on a template) and apply the appropriate character sets.
Yes.
The problem you're having (best guess) is that the pre-defined fonts for the fields you're filling use WinAnsiEncoding (or some other mono-byte encoding that doesn't support all the diacritics you need).
And I see that iText does support setting a field's font directly. Excellent.
myAcroFields.SetFieldProperty(fldName, "textfont", myBaseFont, null);
I believe you're required to subset fonts with Chinese encodings, but for the European-encoded fonts you probably want to fully embed the font in question. Fonts in form fields (that can be edited) react poorly when you try to display a missing character... at least they used to, many moons ago. Last time I tried was around Acrobat 5, so it's quite likely that the behavior has improved.
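As a concrete sketch (the font path and the field name "comments" are assumptions; adjust them to your template), filling a field with a Unicode-capable embedded font in iTextSharp could look like this:

using System.IO;
using iTextSharp.text.pdf;

class FormFiller
{
    static void Main()
    {
        // A font that actually contains the glyphs you need (Arial Unicode MS is
        // assumed to be installed here); IDENTITY_H gives full Unicode coverage.
        BaseFont unicodeFont = BaseFont.CreateFont(
            @"C:\Windows\Fonts\ARIALUNI.TTF", BaseFont.IDENTITY_H, BaseFont.EMBEDDED);

        PdfReader reader = new PdfReader("template.pdf");
        PdfStamper stamper = new PdfStamper(reader,
            new FileStream("filled.pdf", FileMode.Create));

        AcroFields fields = stamper.AcroFields;

        // Override the field's template-defined (probably WinAnsi) font, then fill it.
        // "comments" is a hypothetical field name - use the names from your template.
        fields.SetFieldProperty("comments", "textfont", unicodeFont, null);
        fields.SetField("comments", "English, caf\u00E9, \u4E2D\u6587, \u65E5\u672C\u8A9E");

        stamper.Close();
        reader.Close();
    }
}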

How to show differences for Japanese, Chinese and other Unicode text

I'm looking for a way to programmatically show, in C#, the difference between two chunks of text.
The result, with deletions and additions, is going to be shown in HTML, but that is a second step and an optional part of the question.
I would prefer not to call/shell out to a command line if possible, i.e. calling a third-party diff tool or similar. The platform is Windows.
It must support Asian languages, such as Japanese, Chinese and Korean, meaning that traditional word break characters don't (necessarily) apply.
Have a look at this SO thread. A few choices for a diff engine are listed there; perhaps one of them will suit you.
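If you end up rolling your own rather than using one of the libraries from that thread, a minimal sketch of a character-level diff (a plain LCS walk, so it needs no word-break characters and works on CJK text; note it treats each UTF-16 code unit as one character, so surrogate pairs would be split, and it does no HTML escaping) could look like this:

using System;
using System.Text;

class CharDiff
{
    static void Main()
    {
        // Japanese example: "I am a student" vs. "I was a teacher".
        Console.WriteLine(DiffToHtml("\u79C1\u306F\u5B66\u751F\u3067\u3059",
                                     "\u79C1\u306F\u5148\u751F\u3067\u3057\u305F"));
    }

    static string DiffToHtml(string oldText, string newText)
    {
        // lcs[i, j] = length of the longest common subsequence of
        // oldText[i..] and newText[j..], built from the end backwards.
        int[,] lcs = new int[oldText.Length + 1, newText.Length + 1];
        for (int i = oldText.Length - 1; i >= 0; i--)
            for (int j = newText.Length - 1; j >= 0; j--)
                lcs[i, j] = oldText[i] == newText[j]
                    ? lcs[i + 1, j + 1] + 1
                    : Math.Max(lcs[i + 1, j], lcs[i, j + 1]);

        // Walk both strings, emitting unchanged characters as-is and
        // wrapping removed/added characters in <del>/<ins> tags.
        var html = new StringBuilder();
        int a = 0, b = 0;
        while (a < oldText.Length && b < newText.Length)
        {
            if (oldText[a] == newText[b]) { html.Append(oldText[a]); a++; b++; }
            else if (lcs[a + 1, b] >= lcs[a, b + 1]) { html.Append("<del>").Append(oldText[a++]).Append("</del>"); }
            else { html.Append("<ins>").Append(newText[b++]).Append("</ins>"); }
        }
        while (a < oldText.Length) html.Append("<del>").Append(oldText[a++]).Append("</del>");
        while (b < newText.Length) html.Append("<ins>").Append(newText[b++]).Append("</ins>");
        return html.ToString();
    }
}

This is quadratic in time and memory, so for large documents you would want a proper O(ND) diff implementation or one of the libraries from the linked thread.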

C# Reading Hebrew?

My program reads a CSV file that contains Hebrew text; it then displays the values in a form, but the text is unreadable. What am I doing wrong?
Thanks
James
Possible options for what you're doing wrong:
Reading the file with the wrong encoding
Using a font that doesn't support Hebrew
Using a control that doesn't support right-to-left
How are you reading the file? If you look at the data in the debugger, does it seem correct? Do you know what encoding the file is in to start with?
See my Debugging Unicode Problems for some suggestions - although they won't help with any right-to-left issues. (I'm afraid I don't know much about bidi displays.)
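For the first point, a minimal sketch of reading the CSV with an explicit encoding (the code page chosen here is an assumption; use whatever the file was actually saved as, e.g. UTF-8 or Windows-1255):

using System;
using System.IO;
using System.Text;

class HebrewCsvReader
{
    static void Main()
    {
        // Windows-1255 is the Hebrew ANSI code page; on .NET Core / .NET 5+ you
        // must first call Encoding.RegisterProvider(CodePagesEncodingProvider.Instance).
        Encoding hebrew = Encoding.GetEncoding(1255);

        using (var reader = new StreamReader("data.csv", hebrew))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Check these values in the debugger: if they are already garbled
                // here, the encoding is wrong; if they look right but render badly,
                // suspect the font or the control's right-to-left support.
                string[] fields = line.Split(',');
                Console.WriteLine(string.Join(" | ", fields));
            }
        }
    }
}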
