I am trying to read a .dbf file through ADO using the FoxPro OLEDB driver. I can query fine; however, there are some special characters which do not seem to be coming through. They are not printable characters (they disappear when clicked on), but they are definitely not the same via OLEDB as they are in FoxPro.
For example, the following field through Visual FoxPro:
When this is accessed through OLEDB it displays as the following:
I've narrowed this down to the fact that the first string contains the ASCII code 0 (null) character as the 10th character. This is valid, however, so I do not wish to remove it, but whatever I try, the string ends after 9 characters when reading with ADO.
You don't show us any code, and the image links are broken, so we are left guessing. I have been using the VFPOLEDB driver from C# for years and do not have this problem. I believe you are describing a problem that exists on the C# side, not the VFP side. In VFP, even char(0) is a valid character. In C#, however (the docs are misleading, IMO: they say this is not the case, but it is), strings are ASCIIZ strings, where char(0) is treated as the end of the string. This should be your problem. You could simply read the field as a byte array instead, casting it to a blob. Something like:
Instead of plain SQL like this:
select myField from myTable
Do like this and cast:
select cast(myField as w) as myField from myTable
EDIT: Images were not broken but blocked for me by my ISP, go figure why.
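The ASCIIZ truncation is easy to demonstrate outside C#. A Python sketch with a made-up 13-byte field value (the data here is hypothetical; only the embedded NUL matters):

```python
# Hypothetical field value with an embedded NUL as the 10th byte
raw = b"ABCDEFGHI\x00XYZ"

# ASCIIZ semantics (what the C# marshalling layer effectively does):
# the string ends at the first NUL, so only 9 characters survive.
asciiz = raw.split(b"\x00", 1)[0].decode("ascii")
print(asciiz)        # ABCDEFGHI
print(len(asciiz))   # 9

# Reading the field as a byte array (the cast-to-blob approach) keeps
# every byte, including the NUL.
print(len(raw))      # 13
print(raw[9])        # 0
```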
Related
I tried to use UTF-8 and ran into trouble.
I have tried so many things; here are the results I have gotten:
???? instead of Asian characters. Even for European text, I got Se?or for Señor.
Strange gibberish (Mojibake?), such as SeÃ±or for Señor, or æ–°æµªæ–°é—» for 新浪新闻.
Black diamonds, such as Se�or.
Finally, I got into a situation where the data was lost, or at least truncated: Se for Señor.
Even when I got text to look right, it did not sort correctly.
What am I doing wrong? How can I fix the code? Can I recover the data, if so, how?
This problem plagues the participants of this site, and many others.
You have listed the five main cases of CHARACTER SET troubles.
Best Practice
Going forward, it is best to use CHARACTER SET utf8mb4 and COLLATION utf8mb4_unicode_520_ci. (There is a newer version of the Unicode collation in the pipeline.)
utf8mb4 is a superset of utf8 in that it handles 4-byte utf8 codes, which are needed by Emoji and some Chinese characters.
Outside of MySQL, "UTF-8" refers to all sizes of encoding, hence effectively the same as MySQL's utf8mb4, not utf8.
I will try to use those spellings and capitalizations to distinguish inside versus outside MySQL in the following.
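To see which characters actually need utf8mb4, check how many bytes their UTF-8 encodings take; anything that needs 4 bytes cannot be stored in MySQL's old 3-byte utf8 (utf8mb3). A quick Python check (the sample characters are just examples):

```python
# Byte length of the UTF-8 encoding of each character.
for ch in ["A", "é", "中", "💩"]:
    b = ch.encode("utf-8")
    print(ch, len(b), b.hex().upper())
# A  1 41
# é  2 C3A9
# 中 3 E4B8AD
# 💩 4 F09F92A9   <- needs utf8mb4
```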
Overview of what you should do
Have your editor, etc. set to UTF-8.
HTML forms should start like <form accept-charset="UTF-8">.
Have your bytes encoded as UTF-8.
Establish UTF-8 as the encoding being used in the client.
Have the column/table declared CHARACTER SET utf8mb4. (Check with SHOW CREATE TABLE.)
<meta charset=UTF-8> at the beginning of HTML
Stored Routines acquire the current charset/collation. They may need rebuilding.
UTF-8 all the way through
More details for computer languages (and the sections that follow it)
Test the data
Viewing the data with a tool or with SELECT cannot be trusted.
Too many such clients, especially browsers, try to compensate for incorrect encodings, and show you correct text even if the database is mangled.
So, pick a table and column that has some non-English text and do
SELECT col, HEX(col) FROM tbl WHERE ...
The HEX for correctly stored UTF-8 will be
For a blank space (in any language): 20
For English: 4x, 5x, 6x, or 7x
For most of Western Europe, accented letters should be Cxyy
Cyrillic, Hebrew, and Farsi/Arabic: Dxyy
Most of Asia: Exyyzz
Emoji and some of Chinese: F0yyzzww
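Those byte ranges can be sanity-checked by hex-dumping the UTF-8 encodings the same way HEX() would:

```python
# Hex of the UTF-8 bytes, as MySQL's HEX() would show them for a
# correctly stored utf8mb4 column.
for ch in [" ", "A", "é", "Ж", "中", "👽"]:
    print(repr(ch), ch.encode("utf-8").hex().upper())
# ' '  20        (blank space)
# 'A'  41        (English: 4x-7x)
# 'é'  C3A9      (Western Europe: Cxyy)
# 'Ж'  D096      (Cyrillic: Dxyy)
# '中' E4B8AD    (most of Asia: Exyyzz)
# '👽' F09F91BD  (Emoji: F0yyzzww)
```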
More details
Specific causes and fixes of the problems seen
Truncated text (Se for Señor):
The bytes to be stored are not encoded as utf8mb4. Fix this.
Also, check that the connection during reading is UTF-8.
Black Diamonds with question marks (Se�or for Señor);
one of these cases exists:
Case 1 (original bytes were not UTF-8):
The bytes to be stored are not encoded as utf8. Fix this.
The connection (or SET NAMES) for the INSERT and the SELECT was not utf8/utf8mb4. Fix this.
Also, check that the column in the database is CHARACTER SET utf8 (or utf8mb4).
Case 2 (original bytes were UTF-8):
The connection (or SET NAMES) for the SELECT was not utf8/utf8mb4. Fix this.
Also, check that the column in the database is CHARACTER SET utf8 (or utf8mb4).
Black diamonds occur only when the browser is set to <meta charset=UTF-8>.
Question Marks (regular ones, not black diamonds) (Se?or for Señor):
The bytes to be stored are not encoded as utf8/utf8mb4. Fix this.
The column in the database is not CHARACTER SET utf8 (or utf8mb4). Fix this. (Use SHOW CREATE TABLE.)
Also, check that the connection during reading is UTF-8.
Mojibake (SeÃ±or for Señor):
(This discussion also applies to Double Encoding, which is not necessarily visible.)
The bytes to be stored need to be UTF-8-encoded. Fix this.
The connection when INSERTing and SELECTing text needs to specify utf8 or utf8mb4. Fix this.
The column needs to be declared CHARACTER SET utf8 (or utf8mb4). Fix this.
HTML should start with <meta charset=UTF-8>.
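This particular Mojibake can be reproduced (and, if nothing has re-encoded it yet, undone) in a couple of lines of Python:

```python
good = "Señor"
# UTF-8 bytes read back through the wrong (latin1) character set:
mojibake = good.encode("utf-8").decode("latin-1")
print(mojibake)   # SeÃ±or
# Undoing it, while the damage is still only one layer deep:
print(mojibake.encode("latin-1").decode("utf-8"))   # Señor
```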
If the data looks correct, but won't sort correctly, then
either you have picked the wrong collation,
or there is no collation that suits your need,
or you have Double Encoding.
Double Encoding can be confirmed by doing the SELECT .. HEX .. described above.
é should come back C3A9, but instead shows C383C2A9
The Emoji 👽 should come back F09F91BD, but comes back C3B0C5B8E28098C2BD
That is, the hex is about twice as long as it should be.
This is caused by converting from latin1 (or whatever) to utf8, then treating those bytes as if they were latin1 and repeating the conversion.
The sorting (and comparing) does not work correctly because it is, for example, sorting as if the string were SeÃ±or.
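A short Python sketch of the Double Encoding round trip described above, using the é example:

```python
s = "é"
# First (correct) encoding: C3A9
once = s.encode("utf-8")
print(once.hex().upper())    # C3A9

# Treat those bytes as if they were latin1 characters and encode again:
twice = once.decode("latin-1").encode("utf-8")
print(twice.hex().upper())   # C383C2A9 -- about twice as long
```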
Fixing the Data, where possible
For Truncation and Question Marks, the data is lost.
For Mojibake / Double Encoding, ...
For Black Diamonds, ...
The Fixes are listed here. (5 different fixes for 5 different situations; pick carefully): http://mysql.rjweb.org/doc.php/charcoll#fixes_for_various_cases
I had similar issues with two of my projects after a server migration. After searching and trying a lot of solutions, I came across this one:
mysqli_set_charset($con,"utf8mb4");
After adding this line to my configuration file, everything works fine!
I found this solution for MySQLi—PHP mysqli set_charset() Function—when I was looking to solve an insert from an HTML query.
I was also searching for the same issue. It took me nearly a month to find the appropriate solution.
First of all, you will have to update your database with the recent CHARACTER SET and COLLATION to utf8mb4, or at least one which supports UTF-8 data.
For Java:
While making a JDBC connection, add useUnicode=yes&characterEncoding=UTF-8 as parameters to the connection URL, and it will work.
For Python:
Before querying into the database, try enforcing this over the cursor
cursor.execute('SET NAMES utf8mb4')
cursor.execute("SET CHARACTER SET utf8mb4")
cursor.execute("SET character_set_connection=utf8mb4")
If it does not work, happy hunting for the right solution.
Set your code IDE language to UTF-8
Add <meta charset="utf-8"> to the header of the web page where you collect the form data.
Check that your MySQL table definition looks like this:
CREATE TABLE your_table (
...
) ENGINE=InnoDB DEFAULT CHARSET=utf8
If you are using PDO, make sure
$options = array(PDO::MYSQL_ATTR_INIT_COMMAND=>'SET NAMES utf8');
$dbL = new PDO($pdo, $user, $pass, $options);
If you already have a large database with the above problem, you can try SIDU to export with the correct charset, and import back with UTF-8.
Depending on how the server is set up, you have to change the encoding accordingly. From what you said, utf8 should work best. However, if you're getting weird characters, it might help to change the webpage encoding to ANSI.
This helped me when I was setting up a PHP MySQLi. This might help you understand more: ANSI to UTF-8 in Notepad++
OK, I will try to explain with images... This is my SQL Server and my query:
As you see, I am getting the result. But then I start my app in VS2013, put a break point where I want to call my stored procedure, and copy the text from VS:
And paste the name into the query:
But I don't get the result! The names are ABSOLUTELY THE SAME!
This query doesn't work:
SELECT TOP 1 [Employee].[EmployeeID]
FROM [Employee]
WHERE [Employee].[FullName] = 'Brad Oelmann'
I agree the initial suspect is a "special character" that shows up as whitespace when pasted into SSMS.
It has happened to me filtering client data with t-sql.
To replace special characters, there is a good starting point here:
.NET replace non-printable ASCII with string representation of hex code
In that case, they're looking for "control characters" in particular and doing a fancy replacement, but the idea of finding the special characters with a RegEx is the same.
You can look at all kinds of special sets of characters here:
http://msdn.microsoft.com/en-us/library/20bw873z(v=vs.110).aspx
But it might be easier to define what you do want if you are doing something specific like a name.
For example, you can replace anything that isn't an English letter (for one example) with a space:
str = System.Text.RegularExpressions.Regex.Replace( _
str, _
"[^a-zA-Z]", _
" ")
It's really stupid, but I've got a simple solution. Since my DB table contains only ~50 records, I retyped all the names and now it works. So the problem was not in VS but on the SQL Server side.
If somebody has a similar problem, first of all try to update the data in your table somehow. You can try to select all the data, copy-paste it into Notepad, and put it back into SQL Server.
I'm trying to make a C# project that reads from a MySQL database.
The data is inserted from a PHP page with UTF-8 encoding; both the page and the data are UTF-8.
The data itself is Greek words like "Λεπτομέρεια 3".
When fetching the data, it looks like "Î›ÎµÏ€Ï„Î¿Î¼Î­ÏÎµÎ¹Î± 3".
I have set 'charset=utf8' in the connection string and have also tried the 'set session character_set_results=latin1;' query.
When doing the same with mysql (linux), MySQL Workbench, MySQL native connector for OpenOffice with OpenOffice Base, the data are displayed correctly.
Am I doing something wrong, or what else can I do?
Running the query 'SELECT value, HEX(value), LENGTH(value), CHAR_LENGTH(value) FROM call_attribute;' from inside my program.
It returns :
Value:
Î›ÎµÏ€Ï„Î¿Î¼Î­ÏÎµÎ¹Î± 3
HEX(value) :
C38EE280BAC38EC2B5C38FE282ACC38FE2809EC38EC2BFC38EC2BCC38EC2ADC38FC281C38EC2B5C38EC2B9C38EC2B12033
LENGTH(value) :
49
CHAR_LENGTH(value) :
24
Any ideas???
You state that the first character of your data is capital lambda, Λ.
The UTF-8 representation of this character is 0xCE 0x9B, whereas the HEX() value starts with C38E, which is indeed capital I with circumflex, as displayed in your question.
So I guess the original bug was not in the PHP configuration, and your impression that "data are displayed correctly" was wrong and due to an encoding problem.
Also note that the Greek alphabet only requires Latin-7, rather than Latin-1, when storing Greek data as single-byte characters rather than in Unicode.
Most likely, you have an encoding problem here, meaning different applications interpret the binary data as different character sets or encodings. (But lacking PHP and MySQL knowledge, I cannot really help you how to configure correctly).
You should try SET NAMES 'utf8' and have a look at this link
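The HEX() dump in the question actually confirms double encoding, and the original text can be recovered from it. A Python 3 sketch (hedged: it assumes the intended text was "Λεπτομέρεια 3" and that the mis-decode used MySQL's latin1, which is really cp1252):

```python
# HEX(value) from the question:
hex_value = (
    "C38EE280BAC38EC2B5C38FE282ACC38FE2809EC38EC2BFC38EC2BC"
    "C38EC2ADC38FC281C38EC2B5C38EC2B9C38EC2B12033"
)
raw = bytes.fromhex(hex_value)

# Undo one UTF-8 layer: 49 bytes become 24 characters, matching the
# LENGTH() and CHAR_LENGTH() values in the question.
once = raw.decode("utf-8")

def mysql_latin1_encode(s):
    # MySQL's "latin1" is really cp1252, with the five positions cp1252
    # leaves undefined (0x81, 0x8D, 0x8F, 0x90, 0x9D) mapped to the
    # matching control codes; Python's cp1252 codec rejects those, so
    # fall back to the raw code point.
    out = bytearray()
    for ch in s:
        try:
            out += ch.encode("cp1252")
        except UnicodeEncodeError:
            out.append(ord(ch))
    return bytes(out)

# Undo the second layer: the 24 characters are themselves mojibake of
# UTF-8 bytes that were mis-decoded as MySQL latin1.
print(mysql_latin1_encode(once).decode("utf-8"))   # Λεπτομέρεια 3
```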
I've managed to solve my problem by setting 'skip-character-set-client-handshake' in /etc/my.cnf. After that everything was OK: the encoding of the Greek words was correct and the display was perfect.
One drawback was that I had to re-enter all the data into the database again.
I'm trying to add values to a Paradox table with C#.
The point is that this table contains localized strings, for which the Langdriver ANSII850 is required by the BDE.
I tried to use both the OleDb and Odbc drivers in .NET, but I cannot write correct values to my database. I always get encoding issues.
Example:
// ODBC connection string (using string.Format for setting the path)
string connectionBase = @"Driver={{Microsoft Paradox Driver (*.db )}};DriverID=538;Fil=Paradox 5.X;DefaultDir={0};CollatingSequence=ASCII;";
// I tried to put the langdriver in the CollatingSequence parameter
string connectionBase = @"Driver={{Microsoft Paradox Driver (*.db )}};DriverID=538;Fil=Paradox 5.X;DefaultDir={0};CollatingSequence=ANSII850;";
// I tried the OleDb driver
string connectionBase = @"Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=Paradox 5.x;Data Source={0};";
Then, I'm trying to insert the value "çã á çõ" in order to test. Depending on the driver I'm using, I get different results but the final string is never encoded correctly.
Edited:
Finally, I found a solution, but it is not ideal:
I'm able to switch from one langdriver to another by calling an external executable written in Delphi. In this case, I'm using ANSII850.
Then I'm able to read data from my Paradox tables, but I still don't get my data in a good format.
Strings from the tables are not encoded with code page 850 either; trying to decode them with .NET tools just does not work.
Instead, I'm manually tracking the special chars (that are not read correctly) and replacing them with the correct UTF-8 chars.
For writing, I'm doing the exact opposite.
It works, but it's still not ideal.
Are you sure you're using the BDE? Your examples refer to a lot of Microsoft components.
The BDE used the higher codes for these "special characters" and a code page to interpret them. It looks like 850 is what you think would be correct. If you can just send a string to the BDE with the hex or decimal values of the characters you want, you may be able to see whether it prints them correctly.
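One way to test this is to compute what code page 850 bytes the test string should produce, and compare them with what actually lands in the table. A Python sketch (assuming the ANSII850 langdriver corresponds to DOS code page 850, which Python exposes as "cp850"):

```python
s = "çã á çõ"   # the test string from the question

# What the bytes on disk should be if the table really uses cp850;
# compare this dump against a hex view of the .db file.
raw = s.encode("cp850")
print(raw.hex(" ").upper())

# cp850 round-trips the string cleanly, so any mismatch on disk means
# a different code page is being applied somewhere in the chain.
print(raw.decode("cp850") == s)   # True
```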
I have a huge MySQL table which has its rows encoded in UTF-8 twice.
For example, "Újratárgyalja" is stored as "Ãšjratárgyalja".
The MySQL .Net connector downloads them this way. I tried lots of combinations with System.Text.Encoding.Convert() but none of them worked.
Sending set names 'utf8' (or other charset) won't solve it.
How can I decode them from double UTF-8 to UTF-8?
Peculiar problem, but I think I can reproduce it by a suitably-unholy mix of UTF-8 and Latin-1 (not by just two uses of UTF-8 without an interspersed mis-step in Latin-1 though). Here's the whole weird round trip, "there and back again" (Python 2.* or IronPython should both be able to reproduce this):
# -*- coding: utf-8 -*-
uni = u'Újratárgyalja'
enc1 = uni.encode('utf-8')
enc2 = enc1.decode('latin-1').encode('utf-8')
dec3 = enc2.decode('utf-8')
dec4 = dec3.encode('latin-1').decode('utf-8')
for x in (uni, enc1, enc2, dec3, dec4):
    print repr(x), x
This is the interesting output...:
u'\xdajrat\xe1rgyalja' Újratárgyalja
'\xc3\x9ajrat\xc3\xa1rgyalja' Újratárgyalja
'\xc3\x83\xc2\x9ajrat\xc3\x83\xc2\xa1rgyalja' Ãjratárgyalja
u'\xc3\x9ajrat\xc3\xa1rgyalja' Ãjratárgyalja
u'\xdajrat\xe1rgyalja' Újratárgyalja
The weird string starting with à appears as enc2, i.e. two utf-8 encodings WITH an interspersed latin-1 decoding thrown into the mix. And as you can see it can be undone by the exactly-converse sequence of operations: decode as utf-8, re-encode as latin-1, re-decode as utf-8 again -- and the original string is back (yay!).
I believe that the normal round-trip properties of both Latin-1 (aka ISO-8859-1) and UTF-8 should guarantee that this sequence will work (sorry, no C# around to try in that language right now, but I would expect that the encoding/decoding sequences should not depend on the specific programming language in use).
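In Python 3 terms (and, by the round-trip argument above, the same should hold in C#), the repair is: encode the double-encoded text back to bytes as latin-1, then decode those bytes as the UTF-8 they really are. A sketch, simulating the damage first:

```python
original = "Újratárgyalja"

# Simulate the damage: UTF-8 bytes mis-decoded as latin-1, so the
# string the connector hands back carries one spurious layer.
damaged = original.encode("utf-8").decode("latin-1")

# The repair: latin-1 gives back the raw bytes unchanged, and decoding
# them as UTF-8 recovers the original text.
repaired = damaged.encode("latin-1").decode("utf-8")
print(repaired)   # Újratárgyalja
```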
When you write "The MySQL .Net connector downloads them this way." there's a good chance this means the MySQL .Net connector believes it is speaking Latin-1 to MySQL, while MySQL believes the conversation is in UTF-8. There's also a chance the column is declared as Latin-1, but actually contains UTF-8 data.
If it's the latter (column labelled Latin-1 but data is actually UTF-8) you will get mysterious collation problems and other bugs if you make use of MySQL's text processing functions, ORDER BY on the column, or other situations where the text "means something" rather than just being bytes sent over the wire.
In either case you should try to fix the underlying problem, not least because it is going to be a complete headache for whoever has to maintain the system otherwise.
You could try using
SELECT CONVERT(`your_column` USING ascii)
FROM `your_table`
at the MySQL query level. This is a stab in the dark, though.