My desktop C# application gets various documents from users, possibly in different encodings.
I need to show users existing documents, let them manipulate the documents in my UI, and store the documents for future use.
Adding the notion of "encoding" to each of these steps seems complex to me. I was thinking of always converting user documents to UTF-8 internally, so that my UI and data store do not need to worry about it. Then, when the user wants the document back as a file, I would ask the user which encoding to use.
Does this make sense? Are encodings interoperable? What if I only support unicode?
In your application you should use the platform's native Unicode support (whatever the platform uses for storing Unicode). On Windows and OS X this is a form of UTF-16, but on Linux it is UTF-8.
When it comes to saving/loading files or communicating with external systems, go for UTF-8.
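To make the "UTF-8 at the boundaries" advice concrete, here is a minimal round-trip sketch (Python for brevity, though the question is about C#; the file name and the legacy Windows-1252 encoding are assumed purely for illustration):

```python
import pathlib
import tempfile

# A legacy document arrives in Windows-1252 (an assumed example encoding).
path = pathlib.Path(tempfile.mkdtemp()) / "doc.txt"
path.write_text("smart “quotes”", encoding="windows-1252")

# Decode once at the boundary; inside the app it is just a Unicode string.
text = path.read_text(encoding="windows-1252")

# Encode to UTF-8 only when writing out to files or external systems.
out = path.with_suffix(".utf8")
out.write_text(text, encoding="utf-8")
print(out.read_text(encoding="utf-8") == "smart “quotes”")  # True
```

The key point is that the encoding only exists at the edges; everything between the two file operations works on plain Unicode strings.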
Also, do not confuse code-pages with encodings.
Regarding code pages, I think supporting them is not so important anymore; at least it should not be a priority for you. Because ANSI encodings have no BOM, it is really hard to guess the encoding of such files (in fact, it is impossible to do it perfectly).
Encodings are not interoperable, since some have characters that others don't have.
A Unicode internal representation is a good idea, since it has the widest character set, but I'd advise saving the document back in its original encoding if all of its characters, including any added ones, still fit in that encoding. If not, prompt the user that you'll save in a Unicode encoding in order to represent those characters correctly.
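That round-trip check is simple to implement: just try encoding the edited text back into the original charset and see whether it succeeds (a Python sketch; the helper name and the Windows-1252 example encoding are my own choices):

```python
def fits_in_encoding(text: str, encoding: str) -> bool:
    """True if every character of text is representable in the given charset."""
    try:
        text.encode(encoding)
        return True
    except UnicodeEncodeError:
        return False

# "café" still fits in the original Windows-1252, but a snowman does not,
# so the second document would need to be saved in a Unicode encoding.
print(fits_in_encoding("café", "windows-1252"))    # True
print(fits_in_encoding("café ☃", "windows-1252"))  # False
```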
Just decode all the documents to String. Strings in .NET are always Unicode (UTF-16). Only use encodings when you are reading or writing a file.
When you get ANSI files, you need to know the code page before converting to Unicode (i.e. before creating a UTF-16 string); otherwise the bytes from 128 to 255 could map to the wrong Unicode code points. You can also get into trouble storing a Unicode string to an ANSI file, because code points up to 0x10FFFF cannot fit into a single byte.
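Here is why the code page matters for bytes 128–255 (a Python sketch; the byte values are chosen to show Windows-1252 smart quotes):

```python
raw = bytes([0x93, 0x48, 0x69, 0x94])  # 0x93/0x94 are curly quotes in Windows-1252

as_cp1252 = raw.decode("windows-1252")  # '“Hi”'     - the intended smart quotes
as_latin1 = raw.decode("latin-1")       # '\x93Hi\x94' - C1 control characters

# Same bytes, two different Unicode code points for the first character:
print(hex(ord(as_cp1252[0])))  # 0x201c (LEFT DOUBLE QUOTATION MARK)
print(hex(ord(as_latin1[0])))  # 0x93   (a control character - wrong)
```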
There are only two reasons to ever use UTF-16 in an interchange format (that is, one that gets sent from A to B):
You didn't design the document type, and have to interoperate with something that already uses it.
Your content is such that UTF-16 is shorter for some languages. This is relatively rare: even with those languages, there is usually a high number of ASCII-range characters (markup, spaces, digits) in the mix, so UTF-8 often ends up more concise anyway.
Barring that case, there are only two reasons to ever use anything other than UTF-8 in an interchange format:
You didn't design the document type, and have to interoperate with something that already uses legacy character sets.
You hate people.
Number 2 is particularly pressing if you especially hate foreigners and people who don't use your own language, but if you just hate people generally, you'll cause enough headaches for enough people that you should find the exercise satisfying.
Now, extending from that, if a given document format designed by someone else allows UTF-8, and you can expect all modern software dealing with it to be able to handle UTF-8, then there are two reasons to not do this:
There is some sort of security checks done on the data to make sure it hasn't been changed (note, if you in any way edit or alter the document, this inherently doesn't apply).
You hate people. Again with a bonus for xenophobes.
For your internal storage, it's just a matter of whatever is most useful to you. As a rule, .NET tends to default to UTF-16 when in memory (char and string work with that) and UTF-8 when writing to and reading from streams. If your backing store is SQL Server, then UTF-16 is your friend (use the nchar, nvarchar, and ntext variants of char, varchar, and text to avoid code-page conversion issues), and other databases either have their own way of dealing with modern characters or can use UTF-8.
In general though, use UTF-8 unless someone forces you to do otherwise (because either they were forced to deal with code from the 1990s or earlier, or because they hate people).
Related
So, I saw this question here on Stack Overflow, and it says:
Update 2: This note on Roslyn confirms that the underlying platform defines the Unicode support for the compiler, and in the link to the code it explains that C# 6.0 supports Unicode 6.0 and up (with a breaking change for C# identifiers as a result).
So I am now wondering if I can, for example, read a file that contains unicode 13.0 characters, or am I missing something?
There are three things at play here:
The compiler, which is only relevant for source file handling. If you try to compile code that includes characters the compiler is unaware of, I would expect the compiler to treat those characters as "unknown" in terms of their Unicode category. (So you wouldn't be able to use them in identifiers, they wouldn't count as whitespace etc.)
The framework, which is relevant when you use methods that operate on strings, or things like char.GetUnicodeCategory() - but which will let you load data from files even if it doesn't "understand" some characters.
Whatever applications do with the data - often data is just propagated from system to system in an opaque way, but often there are also other operations and checks performed on it.
If you need to store some text in a database, and then display it on a user's screen, it's entirely possible for that text to go through various systems that don't understand some characters. That can be a problem, in terms of areas such as:
Equality and ordering: if two strings should be equal in a case-insensitive comparison, but the system doesn't know about some of the characters within those strings, it might get the wrong answer
Validation: if a string is only meant to contain characters within certain Unicode categories, but the system doesn't know what category a character is in, it logically doesn't know for sure whether the string is valid.
Combining and normalization: again in terms of validation, if your system is meant to validate that a string is only (say) 5 characters long, but that's in a particular normalization form, then you need to be able to perform that normalization in order to get the right answer.
(There are no doubt lots of similar other areas.)
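The normalization point is easy to demonstrate (a Python sketch; .NET exposes the analogous String.Normalize):

```python
import unicodedata

precomposed = "\u00e9"  # 'é' as a single code point
decomposed = "e\u0301"  # 'e' followed by COMBINING ACUTE ACCENT

print(precomposed == decomposed)          # False - different code point sequences
print(len(precomposed), len(decomposed))  # 1 2   - a naive length check disagrees

# After normalizing to NFC, the strings compare (and count) the same:
nfc = unicodedata.normalize("NFC", decomposed)
print(nfc == precomposed, len(nfc))       # True 1
```

A system that validates "at most 5 characters" without normalizing first would accept or reject these two visually identical strings inconsistently.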
The compiler is basically the least important part of this - it does matter what level of support the framework has, but whether it's actually a problem to be a bit out of date or not will depend on what's happening with the data.
I'm setting up a new server and want to support UTF-8 fully in my web application. I have tried this in the past on existing servers and always seem to end up having to fall back to ISO-8859-1.
Where exactly do I need to set the encoding/charsets? I'm aware that I need to configure Apache, MySQL, and PHP to do this — is there some standard checklist I can follow, or perhaps troubleshoot where the mismatches occur?
This is for a new Linux server, running MySQL 5, PHP 5, and Apache 2.
Data Storage:
Specify the utf8mb4 character set on all tables and text columns in your database. This makes MySQL physically store and retrieve values encoded natively in UTF-8. Note that MySQL will implicitly use utf8mb4 encoding if a utf8mb4_* collation is specified (without any explicit character set).
In older versions of MySQL (< 5.5.3), you'll unfortunately be forced to use simply utf8, which only supports a subset of Unicode characters. I wish I were kidding.
Data Access:
In your application code (e.g. PHP), in whatever DB access method you use, you'll need to set the connection charset to utf8mb4. This way, MySQL does no conversion from its native UTF-8 when it hands data off to your application and vice versa.
Some drivers provide their own mechanism for configuring the connection character set, which both updates its own internal state and informs MySQL of the encoding to be used on the connection—this is usually the preferred approach. In PHP:
If you're using the PDO abstraction layer with PHP ≥ 5.3.6, you can specify charset in the DSN:
$dbh = new PDO('mysql:charset=utf8mb4');
If you're using mysqli, you can call set_charset():
$mysqli->set_charset('utf8mb4'); // object oriented style
mysqli_set_charset($link, 'utf8mb4'); // procedural style
If you're stuck with plain mysql but happen to be running PHP ≥ 5.2.3, you can call mysql_set_charset.
If the driver does not provide its own mechanism for setting the connection character set, you may have to issue a query to tell MySQL how your application expects data on the connection to be encoded: SET NAMES 'utf8mb4'.
The same consideration regarding utf8mb4/utf8 applies as above.
Output:
UTF-8 should be set in the HTTP header, such as Content-Type: text/html; charset=utf-8. You can achieve that either by setting default_charset in php.ini (preferred), or manually using the header() function.
If your application transmits text to other systems, they will also need to be informed of the character encoding. With web applications, the browser must be informed of the encoding in which data is sent (through HTTP response headers or HTML metadata).
When encoding the output using json_encode(), add JSON_UNESCAPED_UNICODE as a second parameter.
Input:
Browsers will submit data in the character set specified for the document, hence nothing particular has to be done on the input.
If you have doubts about the request encoding (since it could be tampered with), you should verify every received string as valid UTF-8 before you try to store or use it anywhere. PHP's mb_check_encoding() does the trick, but you have to use it religiously. There's really no way around this: malicious clients can submit data in whatever encoding they want, and I haven't found a trick to get PHP to do this for you reliably.
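The "validate before you trust it" check is the same idea in any language; here is a Python sketch of what mb_check_encoding accomplishes (the helper name is made up):

```python
def is_valid_utf8(raw: bytes) -> bool:
    """Reject any incoming payload that is not well-formed UTF-8."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(is_valid_utf8("héllo".encode("utf-8")))  # True
print(is_valid_utf8(b"\xff\xfe broken"))       # False - 0xFF never occurs in UTF-8
```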
Other Code Considerations:
Obviously enough, all files you'll be serving (PHP, HTML, JavaScript, etc.) should be encoded in valid UTF-8.
You need to make sure that every time you process a UTF-8 string, you do so safely. This is, unfortunately, the hard part. You'll probably want to make extensive use of PHP's mbstring extension.
PHP's built-in string operations are not by default UTF-8 safe. There are some things you can safely do with normal PHP string operations (like concatenation), but for most things you should use the equivalent mbstring function.
To know what you're doing (read: not mess it up), you really need to know UTF-8 and how it works on the lowest possible level. Check out any of the links from utf8.com for some good resources to learn everything you need to know.
I'd like to add one thing to chazomaticus' excellent answer:
Don't forget the META tag either (like this, or the HTML4 or XHTML version of it):
<meta charset="utf-8">
That seems trivial, but IE7 has given me problems with that before.
I was doing everything right; the database, database connection and Content-Type HTTP header were all set to UTF-8, and it worked fine in all other browsers, but Internet Explorer still insisted on using the "Western European" encoding.
It turned out the page was missing the META tag. Adding that solved the problem.
Edit:
The W3C actually has a rather large section dedicated to I18N. They have a number of articles related to this issue – describing the HTTP, (X)HTML and CSS side of things:
FAQ: Changing (X)HTML page encoding to UTF-8
Declaring character encodings in HTML
Tutorial: Character sets & encodings in XHTML, HTML and CSS
Setting the HTTP charset parameter
They recommend using both the HTTP header and HTML meta tag (or XML declaration in case of XHTML served as XML).
In addition to setting default_charset in php.ini, you can send the correct charset using header() from within your code, before any output:
header('Content-Type: text/html; charset=utf-8');
Working with Unicode in PHP is easy as long as you realize that most of the string functions don't work with Unicode, and some might mangle strings completely. PHP considers "characters" to be one byte long. Sometimes this is okay: explode(), for example, only looks for a byte sequence and uses it as a separator, so it doesn't matter which actual characters you look for. But when a function is actually designed to work on characters, PHP has no idea that your text contains the multi-byte characters found in Unicode.
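The byte-versus-character confusion is the same in any byte-oriented API; a quick Python illustration of the failure mode PHP's single-byte functions exhibit:

```python
s = "naïve"

print(len(s))                  # 5 characters
print(len(s.encode("utf-8")))  # 6 bytes - 'ï' occupies two bytes in UTF-8

# A byte-oriented substring can cut a character in half, which is exactly
# what a non-multibyte string function does to UTF-8 text:
print(s.encode("utf-8")[:3])   # b'na\xc3' - ends with half of 'ï'
```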
A good library to check into is phputf8. It rewrites all of the "bad" functions so you can safely work on UTF-8 strings. There are extensions, like mbstring, that try to do this for you too, but I prefer using the library because it's more portable (I write mass-market products, so that's important for me). phputf8 can use mbstring behind the scenes anyway, to increase performance.
Warning: This answer applies to PHP 5.3.5 and lower. Do not use it for PHP version 5.3.6 (released in March 2011) or later.
Compare with Palec's answer to PDO + MySQL and broken UTF-8 encoding.
I found an issue with someone using PDO and the answer was to use this for the PDO connection string:
$pdo = new PDO(
    'mysql:host=mysql.example.com;dbname=example_db',
    'username',
    'password',
    array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8")
);
In my case, I was using mb_split, which uses regular expressions. Therefore I also had to manually make sure the regular expression encoding was UTF-8 by doing mb_regex_encoding('UTF-8');
As a side note, I also discovered by running mb_internal_encoding() that the internal encoding wasn't UTF-8, and I changed that by running mb_internal_encoding("UTF-8");.
First of all, if you are in PHP before 5.3 then no. You've got a ton of problems to tackle.
I am surprised that no one has mentioned the intl extension, which has good support for Unicode, graphemes, string operations, localisation, and much more; see below.
I will quote some information about Unicode support in PHP from Elizabeth Smith's slides at PHPBenelux'14:
INTL
Good:
Wrapper around ICU library
Standardised locales, set locale per script
Number formatting
Currency formatting
Message formatting (replaces gettext)
Calendars, dates, time zone and time
Transliterator
Spoofchecker
Resource bundles
Convertors
IDN support
Graphemes
Collation
Iterators
Bad:
Does not support zend_multibyte
Does not support HTTP input output conversion
Does not support function overloading
mb_string
Enables zend_multibyte support
Supports transparent HTTP in/out encoding
Provides some wrappers for functionality such as strtoupper
ICONV
Primarily for charset conversion
Output buffer handler
mime encoding functionality
conversion
some string helpers (len, substr, strpos, strrpos)
Stream Filter stream_filter_append($fp, 'convert.iconv.ISO-2022-JP/EUC-JP')
DATABASES
MySQL: set the character set and collation on tables and on the connection (setting the collation alone is not enough). Also, don't use the old mysql extension; use mysqli or PDO
postgresql: pg_set_client_encoding
sqlite(3): Make sure it was compiled with Unicode and intl support
Some other gotchas
You cannot use Unicode filenames with PHP on Windows unless you use a third-party extension.
Send everything in ASCII if you are using exec, proc_open and other command line calls
Plain text is not plain text, files have encodings
You can convert files on the fly with the iconv filter
The only thing I would add to these amazing answers is to emphasize saving your files in UTF-8 encoding; I have noticed that browsers honor the file's actual encoding over whatever charset you declare in your code. Any decent text editor will show you the encoding: Notepad++, for example, has a menu option that shows the current file encoding and lets you change it. For all my PHP files I use UTF-8 without a BOM.
Some time ago, someone asked me to add UTF-8 support to a PHP and MySQL application designed by someone else. I noticed that all files were encoded in ANSI, so I had to use iconv to convert all the files, change the database tables to use the UTF-8 character set and the utf8_general_ci collation, add 'SET NAMES utf8' to the database abstraction layer after connecting (if using a PHP version before 5.3.6; from 5.3.6 onward you can use charset=utf8 in the connection string instead), and change string functions to use their PHP multibyte equivalents.
I recently discovered that using strtolower() can cause issues where the data is truncated after a special character.
The solution was to use
mb_strtolower($string, 'UTF-8');
mb_ stands for "multibyte". These functions support more characters, but are in general a little slower.
In PHP, you'll need to either use the multibyte functions or turn on mbstring.func_overload (note that func_overload has since been deprecated and removed in newer PHP versions). That way things like strlen will work if you have characters that take more than one byte.
You'll also need to identify the character set of your responses. You can either use AddDefaultCharset, as above, or write PHP code that returns the header. (Or you can add a META tag to your HTML documents.)
I have just gone through the same issue and found a good solution at PHP manuals.
I changed all my files' encoding to UTF8 and then the default encoding on my connection. This solved all the problems.
if (!$mysqli->set_charset("utf8")) {
    printf("Error loading character set utf8: %s\n", $mysqli->error);
} else {
    printf("Current character set: %s\n", $mysqli->character_set_name());
}
View Source
Unicode support in PHP is still a huge mess. While it's capable of converting an ISO 8859-1 string to UTF-8, it lacks the capability to work with Unicode strings natively (PHP strings are just byte arrays), which means all the string-processing functions will mangle and corrupt your strings.
So you have to either use a separate library for proper UTF-8 support, or rewrite all the string handling functions yourself.
The easy part is just specifying the charset in HTTP headers and in the database and such, but none of that matters if your PHP code doesn't output valid UTF-8. That's the hard part, and PHP gives you virtually no help there. (I think PHP 6 is supposed to fix the worst of this, but that's still a while away.)
If you want a MySQL server to decide the character set, and not PHP as a client (old behaviour; preferred, in my opinion), try adding skip-character-set-client-handshake to your my.cnf, under [mysqld], and restart mysql.
This may cause trouble in case you're using anything other than UTF-8.
The top answer is excellent. Here is what I had to do on a regular Debian, PHP, and MySQL setup:
// Storage
// Debian. Apparently already UTF-8
// Retrieval
// The MySQL database was stored in UTF-8,
// but apparently PHP was requesting ISO 8859-1. This worked:
// ***notice "utf8", without dash, this is a MySQL encoding***
mysql_set_charset('utf8');
// Delivery
// File *php.ini* did not have a default charset,
// (it was commented out, shared host) and
// no HTTP encoding was specified in the Apache headers.
// This made Apache send out a UTF-8 header
// (and perhaps made PHP actually send out UTF-8)
// ***notice "utf-8", with dash, this is a php encoding***
ini_set('default_charset','utf-8');
// Submission
// This worked in all major browsers once Apache
// was sending out the UTF-8 header. I didn’t add
// the accept-charset attribute.
// Processing
// Changed a few commands in PHP, like substr(),
// to mb_substr()
That was all!
My question is simple: are strings in .NET encoding-agnostic?
I ask this because when I ingest an XML file that I know contains some Windows-1252 code-page characters (e.g. smart quotes), the debugger view of the string holding my XML resolves the single "smart quote" to a triangle with a question mark in it. This makes me wonder whether .NET is asserting that the string holding my XML is UTF-8 and therefore cannot represent the difference.
If so, this is a problem, because once the string gets converted, my web service that is meant to scrub the Windows smart quotes from my text will fail, since it doesn't recognize the triangle/question-mark thingy.
Please help.
Strings are always UTF-16. Any incoming or outgoing data must be converted to/from that encoding.
If you use a proper XML reading library, it will most likely handle it for you, as long as the XML has the appropriate XML prolog (but Windows-1252 support is not required for compliance with the XML specification).
.NET uses UTF-16 for all strings in memory (surrogate pairs are used where needed).
When loading some text file it either defaults to interpreting the file as UTF-8 or whatever encoding you tell it to use.
Since you don't show any source code I can only speculate how you read/load the XML and if the XML has the proper charset in its prolog... depending on the method .NET will default to UTF-8 and represent that as UTF16 in memory...
Please provide more details if the above didn't help...
No, strings in .NET are stored as sequences of 16-bit Unicode code units. Code points that overflow that range are represented with surrogate pairs.
Do not confuse the above-mentioned in memory representation with storage representation which highly depends on the chosen encoding scheme.
The string class is (mostly) encoding-agnostic. Your error comes from the process of decoding bytes to a string. That process is not working for you: you need to tell the decoder to use your specific encoding.
Why are strings only mostly agnostic? Because they encode Unicode characters as sequences of 16-bit values, and although a 16-bit value has only 64K possible values, a Unicode character can take over a million different values. Therefore an encoding step needs to happen here as well, and it happens through the use of surrogates. The string class is essentially UTF-16.
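The surrogate mechanism is easy to observe by encoding a character outside the 16-bit range (a Python sketch; a .NET string stores the same two code units):

```python
ch = "\U0001F600"  # GRINNING FACE, code point 0x1F600 - above the 16-bit range

utf16 = ch.encode("utf-16-be")
units = [hex(int.from_bytes(utf16[i:i + 2], "big")) for i in range(0, len(utf16), 2)]
print(units)  # ['0xd83d', '0xde00'] - one character, two 16-bit surrogate code units
```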
No. From MSDN:
A string is a sequential collection of Unicode characters . . .
Scenario
You have lots of XML files stored as UTF-16 in a database or on a server where space is not an issue. You need to deliver a large majority of these files to other systems as XML files, and it is critical that you use as little space as you can.
Issue
In reality only about 10% of the files stored as UTF-16 actually need to be; the rest can safely be stored as UTF-8. If the ones that need UTF-16 use it and the rest use UTF-8, we can use about 40% less space on the file system.
We have tried heavy compression of the data, and it is useful, but we find that we get the same compression ratio with UTF-8 as with UTF-16, and that UTF-8 also compresses faster. Therefore, if as much of the data as possible is stored as UTF-8, we not only save space when it is stored uncompressed; we still save space even when it is compressed, and we save time on the compression itself.
Goal
To figure out when there are Unicode characters in an XML file that require UTF-16, so that we use UTF-16 only when we have to.
Some Details about XML File and Data
While we control the schema for the XML itself, we do not control what kind of "strings" can go into the values, since the source is free to provide any Unicode data. However, this is rare, so we would like not to have to use UTF-16 every time just to support something that is only needed 10% of the time.
Development Environment
We are using C# with the .Net Framework 4.0.
EDIT: Solution
The solution is just to use UTF-8.
The question was based on my misunderstanding of UTF and I appreciate everyone helping set me straight. Thank you!
Edit: I didn’t realise that your question implies that you think that there are Unicode strings that cannot be safely encoded as UTF-8. This is not the case. The following answer assumes that what you really meant was that some strings will simply be longer (take more storage space) as UTF-8.
I would say even less than 10% of the files need to be stored as UTF-16. Even if your XML contains significant amounts of Chinese, Japanese, Korean, or another language that is larger in UTF-8 than UTF-16, it is still only an issue if there is more text in that language than there is XML syntax.
Therefore, my initial intuition is “use UTF-8 until it’s a problem”. It makes for consistency, too.
If you have serious reason to believe that a large proportion of the XML will be East Asian, only then do you need to worry about it. In that case, I would apply a simple heuristic: go through the XML and count the number of characters at or above U+0800 (those take three bytes in UTF-8), and only if this is greater than the number of characters below U+0080 (those take one byte in UTF-8), use UTF-16.
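That heuristic is only a few lines (a Python sketch; the threshold rule is the one suggested above, not a standard):

```python
def prefer_utf16(text: str) -> bool:
    """Pick UTF-16 only when characters costing 3 bytes in UTF-8 outnumber
    the 1-byte (ASCII) characters, which include all the XML syntax."""
    three_byte = sum(1 for c in text if 0x0800 <= ord(c) <= 0xFFFF)
    one_byte = sum(1 for c in text if ord(c) < 0x0080)
    return three_byte > one_byte

print(prefer_utf16("<name>John</name>"))           # False - pure ASCII
print(prefer_utf16("<n>日本語のテキストです</n>"))  # True  - CJK outweighs the markup
```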
Encode everything in UTF-8. UTF-8 can handle anything UTF-16 can, and is almost surely going to be smaller in the case of an XML document. The only case in which UTF-8 would be larger than UTF-16 is a file largely composed of characters in the U+0800 to U+FFFF range (three bytes in UTF-8 versus two in UTF-16, e.g. most CJK text); characters beyond the BMP take four bytes in both. And in the best case, a pure-ASCII file, a UTF-8 file is half the size of its UTF-16 equivalent.
UTF-8 requires two bytes or fewer per character for all code points at or below U+07FF, and one byte for any ASCII character; that means UTF-8 will be at most equal to UTF-16 in size (and usually far smaller) for any document in a modern language using the Latin, Greek, Cyrillic, Hebrew, or Arabic alphabets, including most of the common symbols used in algebra and the IPA. Those scripts cover more than 90% of all official national languages outside of Asia.
UTF-16, as a general rule, will give you a smaller file for documents written primarily in the Devanagari (Hindi), Japanese, Chinese, or Hangul (Korean) scripts, or in any ancient or "esoteric" script (Cherokee or Inuktitut, anyone?), and MAY be smaller for documents that heavily use specialized mathematical, scientific, engineering, or game symbols. If the XML you're working with is for localization files for India, China, and Japan, you MAY get a smaller file size with UTF-16, but you will have to make your program smart enough to know the localization file is encoded that way.
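Measuring a few sample strings bears this out (a Python sketch; the sample texts are arbitrary):

```python
samples = {
    "english": "hello world",
    "greek": "γειά σου κόσμε",
    "japanese": "こんにちは世界",
}

for name, text in samples.items():
    u8 = len(text.encode("utf-8"))
    u16 = len(text.encode("utf-16-le"))
    print(f"{name:9} utf-8: {u8:2} bytes, utf-16: {u16:2} bytes")
# English wins big with UTF-8, Greek is a near-tie, and only the
# pure-CJK sample comes out smaller in UTF-16.
```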
You never 'need' to use UTF-16 instead of UTF-8 and the choice is not about 'safety'. Both encodings have the same encodable character repertoire.
There is no such thing as a document that has to be UTF-16. Any UTF-16 document can also be encoded as UTF-8. It is theoretically possible to have a document which is larger as UTF-8 than as UTF-16, but this is vanishingly unlikely, and not worth stressing over.
Just encode everything as UTF-8 and stop worrying about it.
There are no characters that require UTF-16 rather than UTF-8. Both UTF-8 and UTF-16 (and for that matter, UTF-32 along with some other non-recommended formats) can encode the entire UCS (that's what UTF means).
There are some streams that will be smaller in UTF-16 than in UTF-8. In practice, such streams largely contain Asian ideographs, which are linguistically very concise. However, XML requires some characters in the 0x20-0x7F range with specific meanings, and quite often uses alphabet-based scripts for element and attribute names.
Because of the aforementioned concision of these ideographs, the ratio of XML tags (including element and attribute names, along with the less-thans and greater-thans) to human-targeted text will be much higher than in languages that use alphabets and syllabaries. For this reason, even in cases where plain text in UTF-16 would be appreciably smaller than the same text in UTF-8, with XML either this difference will shrink or the UTF-8 will still be smaller.
As a rule, use UTF-8 for transmission and storage.
Edit: Just noticed that you're compressing too. In which case, the balance is even less important, just use UTF-8 and be done with it.
I have a web application that allows users to upload their content for processing. The processing engine expects UTF8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files.
Since I'd be surprised if any of my users even knew their files had an encoding, I have very little hope that they'd be able to correctly specify which encoding (decoder) to use. So my application is left with the task of detecting the encoding before decoding.
This seems like such a universal problem, I'm surprised not to find either a framework capability or general recipe for the solution. Can it be I'm not searching with meaningful search terms?
I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate their encoding, and a BOM isn't useful for most non-UTF files.
My questions boil down to:
Is BOM-aware detection sufficient for the vast majority of files?
In the case where BOM-detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
Is there a repository anywhere that has a multitude of files with various encodings to use for testing?
While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this.
So far I've found:
A "valid" UTF-16 file containing Ctrl-S characters caused encoding to UTF-8 to throw an exception (illegal character?). (That turned out to be an XML encoding exception.)
Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?
Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible.
My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files.
Although the files I'm trying to decode are "text" I think they are often created w/methods that leave garbage characters in the files. Hence "valid" files may not be "pure". Oh joy.
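The null-character behaviour in the second finding above is easy to reproduce: ASCII text in UTF-16-LE interleaves a zero byte after every character, and each of those bytes happens to be individually valid UTF-8 (a Python sketch):

```python
raw = "Hi".encode("utf-16-le")
print(raw)  # b'H\x00i\x00' - a zero byte after each ASCII character

# Every byte is a valid single-byte UTF-8 sequence, so decoding "succeeds",
# but the result is riddled with U+0000 (NUL) characters.
decoded = raw.decode("utf-8")
print(decoded == "H\x00i\x00")  # True
```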
Thanks.
There won't be an absolutely reliable way, but you may be able to get a "pretty good" result with some heuristics.
If the data starts with a BOM, use it.
If the data contains zero bytes, it is likely UTF-16 or UTF-32. You can distinguish between these, and between their big-endian and little-endian variants, by looking at the positions of the zero bytes.
If the data can be decoded as UTF-8 without errors, then it is very likely UTF-8 (or US-ASCII, which is a subset of UTF-8)
Next, if you want to go international, map the browser's language setting to the most likely encoding for that language.
Finally, assume ISO-8859-1
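A sketch of those steps (Python; the zero-byte endianness test assumes mostly ASCII-range text, the browser-language step is omitted, and the fallback charset is just the suggestion above):

```python
import codecs

def guess_encoding(data: bytes, fallback: str = "iso-8859-1") -> str:
    # 1. Trust a BOM if one is present (check the longer UTF-32 BOMs first,
    #    since the UTF-16-LE BOM is a prefix of the UTF-32-LE one).
    boms = [(codecs.BOM_UTF8, "utf-8-sig"),
            (codecs.BOM_UTF32_LE, "utf-32-le"), (codecs.BOM_UTF32_BE, "utf-32-be"),
            (codecs.BOM_UTF16_LE, "utf-16-le"), (codecs.BOM_UTF16_BE, "utf-16-be")]
    for bom, name in boms:
        if data.startswith(bom):
            return name
    # 2. Zero bytes suggest a wide encoding; for ASCII-heavy text the first
    #    zero byte lands on an even offset in big-endian UTF-16.
    if b"\x00" in data:
        return "utf-16-be" if data.find(b"\x00") % 2 == 0 else "utf-16-le"
    # 3. If it decodes cleanly as UTF-8, it almost certainly is UTF-8.
    try:
        data.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        pass
    # 4. Otherwise fall back to the most likely legacy charset.
    return fallback

print(guess_encoding("héllo".encode("utf-8")))      # utf-8
print(guess_encoding("hello".encode("utf-16-be")))  # utf-16-be
print(guess_encoding(b"caf\xe9"))                   # iso-8859-1 (lone 0xE9 is invalid UTF-8)
```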
Whether "pretty good" is "good enough" depends on your application, of course. If you need to be sure, you might want to display the results as a preview, and let the user confirm that the data looks right. If it doesn't, try the next likely encoding, until the user is satisfied.
Note: this algorithm will not work if the data contains garbage characters. For example, a single garbage byte in otherwise valid UTF-8 will cause UTF-8 decoding to fail, sending the algorithm down the wrong path. You may need to take additional measures to handle this. For example, if you can identify possible garbage beforehand, strip it before trying to determine the encoding. (It doesn't matter if you strip too aggressively: once you have determined the encoding, you can decode the original unstripped data; just configure the decoders to replace invalid characters instead of throwing an exception.) Or count decoding errors and weight them appropriately. But much of this depends on the nature of your garbage, i.e. what assumptions you can make.
Have you tried reading a representative cross-section of your files from users, running them through your program, testing, correcting any errors, and moving on?
I've found File.ReadAllLines() pretty effective across a very wide range of applications without worrying about all of the encodings. It seems to handle it pretty well.
Xmlreader() has done fairly well once I figured out how to use it properly.
Maybe you could post some specific examples of data and get some better responses.
This is a well-known problem. You can try to do what Internet Explorer does. There is a nice article on CodeProject that describes Microsoft's solution to the problem. However, no solution is 100% accurate, as everything is based on heuristics. And it is also not safe to assume that a BOM will be present.
You may like to look at a Python-based solution called chardet. It's a Python port of Mozilla code. Although you may not be able to use it directly, its documentation is well worth reading, as is the original Mozilla article it references.
I ran into a similar issue. I needed a PowerShell script that figured out whether a file was text-encoded (in any common encoding) or not.
It's definitely not exhaustive, but here's my solution...
PowerShell search script that ignores binary files