How to input Unicode characters via Summernote into a database - C#

I am trying to input Unicode characters like "សួស្តី" into the database via Summernote, but in the database it displays ??? instead of the Unicode characters. For the database collation I use latin_general_bin2; before I used Summernote it could store Unicode correctly. May I know the correct way to do this?
The image above shows the data as it was entered in the database.
And this is the data that I submit before it reaches the database.
And this is the collation that I use.

Related

Transform SQL Server text from French to Arabic

I have an existing SQL Server database where text is stored in Arabic. The default database collation is FRENCH_CI_AS, but the application uses Arabic. Displaying the data with ASP is not a problem, but I want to create a new database using UTF-8!
A sample of the text as it is stored in the database:
ترأس وزير السكن والعمران ووزير الأشغال العمومية للجنة التقنية لمراقبة البناء
How can I transform the text to get clear Arabic text in the database?
Is there a solution using Excel? http://en.file-upload.net/download-10245297/test.xls.html
First of all, use nvarchar() as the data type in your tables. Then, when inserting data into your table, insert like this:
string Query="insert into tablename(columnName) values(N'value')...";
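The same idea as a parameterized command from C#: an NVarChar parameter is sent to SQL Server as Unicode, so you do not have to build the N'...' literal by hand. This is only a sketch; the connection string and the tablename/columnName placeholders are taken from the answer above.
using System.Data;
using System.Data.SqlClient;

string connectionString = "...";  // your SQL Server connection string
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("INSERT INTO tablename (columnName) VALUES (@value)", conn))
{
    // SqlDbType.NVarChar keeps the Arabic text as UTF-16 end to end.
    cmd.Parameters.Add("@value", SqlDbType.NVarChar).Value = "ترأس وزير السكن والعمران";
    conn.Open();
    cmd.ExecuteNonQuery();
}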
The strings need to be stored in the database as NVARCHAR instead of VARCHAR. This allows the database to store UTF-16 encoded strings instead of ASCII with a code page. Of course this will double the amount of storage needed for the database.
From the screenshot it looks like the string is UTF-8 being displayed as if it were ASCII, and there does not appear to be a way to tell SQL Server this detail.
I am sharing a little Java project (with dependencies).
This project loads the table data first and formats the strings. The generated Excel worksheet can then be imported using SSMS.
Java solution:
import java.io.UnsupportedEncodingException;
String charabia = "ترأس وزير السكن والعمر";
try {
    // Re-interpret the garbled string's bytes as UTF-8.
    String utf8String = new String(charabia.getBytes(), "UTF-8");
    System.out.println(utf8String);
} catch (UnsupportedEncodingException e) {
    e.printStackTrace();
}
My project download link: here

How to search in Arabic Text

I am storing Arabic text in an SQLite database in my WP8 app. Below is an example of the Arabic text stored in the SQLite file.
ٱلْحَمْدُ لِلَّهِ رَبِّ ٱلْعَٰلَمِينَ
The user should be able to search like below:
الحمد
This should return the above text, but as it stands it doesn't.
I am using a very simple query in SQLite:
select * from tbl where ArabicText like '%الحمد%'
and in C#:
query = "select * from tbl where ArabicText like '%"+textToSearch+"%'"
The result is zero records. How should I search to retrieve the above single record?
Thanks!
The easy answer is to just Base64-encode values going into the database, and decode information coming out (both while saving and while searching). It's not elegant, but it allows UTF characters into a non-UTF database.
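A small C# sketch of the Base64 round trip this answer describes: encode before saving (and encode the search term the same way before searching), then decode what comes back out. The column name is the ArabicText column from the question.
using System;
using System.Text;

// Encode the Arabic text before storing it in the ArabicText column.
string original = "ٱلْحَمْدُ لِلَّهِ رَبِّ ٱلْعَٰلَمِينَ";
string encoded = Convert.ToBase64String(Encoding.UTF8.GetBytes(original));

// ... store 'encoded', and encode the search term the same way before the query ...

// Decode when reading a value back out of the database.
string decoded = Encoding.UTF8.GetString(Convert.FromBase64String(encoded));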

How to get Bytea Data in string from Postgres Database

Is there a way to get the actual encoded string saved in the database for a column with data type bytea? When the record is fetched in C# it is returned as System.Byte[], which I don't want. I want the data as it is saved in that column,
E'\\\142\\\247\\\ and so on until the data ends.
I would appreciate your support.
When I query the data with
SELECT tpl::TEXT from Cards where ecode="xyz";
I get the following error:
Error: Cannot cast type bytea to text
Line1: Select tpl::TEXT from cards
Thank you
Like this:
As you can see, the bytea column is showing System.Byte[]. It was overwritten by my application, because the C# code stores the data in the DataTable column as System.Byte[], and while updating the data I didn't decode it before writing it back.
I am using Navicat Premium. When I query the data it shows me the result; when I right-click on the grid result and copy it as an INSERT statement, it shows me two results for different rows,
like this:
INSERT INTO "public"."cards" ("ecode", "tpl") VALUES ('4210149888659', E'System.Byte[]');
INSERT INTO "public"."cards" ("ecode", "tpl") VALUES('3650257637661',E '\\247\\377\\001\\021\\340\\000\\230\\000\\002U\\000e\\000\\362\\000\\002-\\000\\253\\000p\\000\\002\\207\\000~\\000g\\000\\002\\215\\000{\\000\\317\\000\\002\\334\\000h\\000\\222\\000\\001|\\000\\004\\001U\\000\\002\\202\\000K\\000\\201\\000\\001\\000\\000\\204\\000\\241\\000\\001w\\000\\213\\000\\305\\000\\002\\021\\000V\\000\\237\\000\\002L\\001=\\001\\364\\000\\001X\\001"\\001\\313\\000\\002J\\000\\010\\001\\324\\000\\001\\370\\000\\037\\001J\\000\\002;\\0017\\000\\202\\000\\002\\300\\000\\317\\0007\\000\\002\\215\\000[\\000\\004\\011\\017\\007\\012\\005\\015\\014\\006\\016\\012\\007\\010\\005\\005\\007\\011\\010\\001\\004\\012\\017\\002\\003\\010\\012\\004\\010\\005\\003\\013\\014\\005\\017\\007\\003\\010\\003\\001\\011\\004\\012\\006\\020\\011\\005\\013\\015\\010\\002\\004\\005\\010\\007\\011\\012\\000\\002\\002\\020\\012\\003\\015\\000\\005\\002\\017\\003\\000\\006\\016\\020\\010\\017\\014\\000\\001\\012\\001\\010\\011\\002\\004\\007\\010\\000\\002\\006\\011\\007\\003\\020\\011\\003\\001\\005\\011\\000\\007\\002\\012\\002\\000\\020\\000\\016\\004\\017\\004\\003\\011\\017\\000\\003\\004\\000\\001\\007\\017\\002\\001\\017\\014\\006\\002\\016\\015\\011\\015\\006\\014\\016\\010\\020\\013\\000\\003\\006\\015\\002\\005\\020\\015\\016\\015\\004\\001\\003\\015\\010\\010\\006\\014\\002\\007\\020\\014\\011\\001\\000\\014\\010\\003\\016\\001\\015\\017\\020\\013\\006\\013\\016\\013\\011\\001\\014\\013\\004\\013\\002\\013\\001\\000'
);
You can't just convert it because PostgreSQL can't guarantee it can be converted safely. The best you can do is to convert the escaped form into a string and that's not what you probably want. Keep in mind that since bytea is binary data there is no way that PostgreSQL can be sure that the string that comes out will be legit. You could have embedded nulls in a UTF-8 string, for example, which could cause some fun with buffer overruns if you aren't careful.
This is the sort of thing that should be done on the client side, and you should assume that the data is binary and not necessarily a valid string. If you want to store strings, store text fields, not bytea.
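A small client-side sketch along those lines, assuming the Npgsql provider and the cards/ecode/tpl names from the question: read the bytea value as bytes and render it (here as hex) instead of expecting the server to cast it to text. The connection string is a placeholder.
using System;
using Npgsql;

using (var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me;Password=secret"))
using (var cmd = new NpgsqlCommand("SELECT tpl FROM cards WHERE ecode = @ecode", conn))
{
    cmd.Parameters.AddWithValue("ecode", "3650257637661");
    conn.Open();
    // bytea always comes back as byte[]; presenting it as a string is a client decision.
    var bytes = (byte[])cmd.ExecuteScalar();
    Console.WriteLine(BitConverter.ToString(bytes)); // e.g. "A7-FF-01-11-..."
}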

Easiest Way to Store Multi-line Data from a Text Box

I have a multi-line text box into which users can input social media addresses. My question is: what is the best way to store these so that I can retrieve them and link to each of them? Wouldn't each one need to be stored in its own row? (Basically I would create a UserSocialMedia table, use a foreign key to link it to the user table, select the rows for that user when displaying them, and then link to each of them.) Or is there a way I can store them all in one row, retrieve them, and then link them?
Split out the text into individual addresses and store each one individually in a table, or better yet, modify your UI to accept URLs via multiple single-line inputs.
If you stored them as a single entity you would need to do additional parsing in your client to add markup; they would be horrible/inefficient to search and again would need additional parsing (for example to pull a Facebook URL from a clump of others).
Even better would be to identify the site (Facebook/Twitter/Bebo etc.) from the submitted URL, store a siteID, and retain just the profile-specific part.
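A rough C# sketch of the split-it-into-rows suggestion: split the multi-line input and insert one row per address. The UserSocialMedia table comes from the question; txtSocialMedia, connectionString, currentUserId and the UserId/Url columns are assumed names for illustration.
using System;
using System.Data;
using System.Data.SqlClient;

// Split the textbox value into individual addresses, dropping blank lines.
var addresses = txtSocialMedia.Text
    .Split(new[] { Environment.NewLine, "\n" }, StringSplitOptions.RemoveEmptyEntries);

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    foreach (var address in addresses)
    {
        // One row per address, linked back to the user by a foreign key.
        using (var cmd = new SqlCommand(
            "INSERT INTO UserSocialMedia (UserId, Url) VALUES (@userId, @url)", conn))
        {
            cmd.Parameters.Add("@userId", SqlDbType.Int).Value = currentUserId;
            cmd.Parameters.Add("@url", SqlDbType.NVarChar).Value = address.Trim();
            cmd.ExecuteNonQuery();
        }
    }
}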
Before saving the value to your DB, add something like this line to preserve line breaks:
var textToSave = txtMultiLine1.Text.Replace(Environment.NewLine, "<br />");
Then save this value to your DB as usual; when retrieving, you can just show it and the entries will appear one after another on separate lines :)
cheers!

read/write unicode data in MySql

I am using MySql DB and want to be able to read & write unicode data values. For example, French/Greek/Hebrew values.
My client program is C# (.NET framework 3.5).
How do I configure my DB to allow Unicode? And how do I use C# to read/write values as Unicode from MySQL?
Update: 7 Sep. 09
OK, so my schema, table & columns are set to 'utf8' + collation 'utf8_general_ci'. I run 'set names utf8' when the connection is opened. So far so good... but values are still saved as '???????'.
Any ideas?
The Solution!
OK, so for the C# client to read & write Unicode values, you must include charset=utf8 in the connection string,
for example: server=my_sql_server;user id=my_user;password=my_password;database=some_db123;charset=utf8;
Of course you should also define the relevant table as utf8 + collation utf8_bin.
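A short sketch of what that looks like from C# with Connector/NET (MySql.Data); the server, credentials, and the messages/body table and column are placeholders.
using MySql.Data.MySqlClient;

// charset=utf8 in the connection string makes the driver exchange UTF-8 with the server.
var connectionString =
    "server=my_sql_server;user id=my_user;password=my_password;database=some_db123;charset=utf8;";

using (var conn = new MySqlConnection(connectionString))
using (var cmd = new MySqlCommand("INSERT INTO messages (body) VALUES (@body)", conn))
{
    // French/Greek/Hebrew values go through unchanged once the charset matches.
    cmd.Parameters.AddWithValue("@body", "Héllo Κόσμε שלום");
    conn.Open();
    cmd.ExecuteNonQuery();
}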
You have to set the collation for your MySQL schema, tables or even columns.
Most of the time, the utf8_general_ci collation is used because it provides case-insensitive and accent-insensitive comparisons.
On the other hand, utf8_unicode_ci is also case insensitive but uses more advanced sorting techniques (like sorting the eszett ('ß') near 'ss'). This collation is a tiny bit slower than the other two.
Finally, utf8_bin compares strings using their binary values, and is therefore case sensitive.
If you're using MySQL's Connector/NET (which I recommend), everything should go smoothly.
Try to execute this query before any other fetch or send:
SET NAMES UTF8
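If you prefer to issue that statement from the C# client, a hedged sketch (again assuming Connector/NET and an existing connectionString) would run it right after opening the connection:
using MySql.Data.MySqlClient;

using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    // Tell the server which character set this session's client uses.
    using (var setNames = new MySqlCommand("SET NAMES utf8", conn))
    {
        setNames.ExecuteNonQuery();
    }
    // ... run the actual queries on the same open connection ...
}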
You need to set the DB charset to UTF-8 (if you are using UTF-8), set the collation for the relevant tables/fields to a UTF-8 collation, execute SET NAMES 'utf8' before doing queries, and of course make sure you set the proper encoding in the HTML that shows the output.
