How do I insert Azerbaijani characters into MSSQL? - C#

I have a program written in C# and WPF. I want to insert Azerbaijani characters into MSSQL from this program, but a '?' character shows up in the row. The MSSQL data type is nvarchar(50). How can I do this?
public void YeniLiderEkle(string LiderAdi, string sifre, int Admin)
{
    BakuBusCenterDBDataContext db = new BakuBusCenterDBDataContext();
    Liderler liderler = new Liderler();
    liderler.AdiSoyadi = LiderAdi;
    string a1 = StringReplace(LiderAdi.Replace(" ", " ").Replace(" ", " ").Trim().ToLower()).Split(' ')[0];
    string l = a1.Split('_')[1].Substring(0, 1);
    liderler.KullaniciAdi = a1.Split('_')[0] + l;
    db.Liderlers.InsertOnSubmit(liderler);
    db.SubmitChanges();
}
The correct word is "sdəhr", but MSSQL shows "sd?hr".

For inserting varchar values you should wrap them with ' like 'Hello'; for nvarchar values you should also put N before the wrapping ' characters: N'Yaşasın Azerbaycan'.
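If you build the INSERT yourself with ADO.NET rather than through the DataContext, a parameterized command sends the text as Unicode and avoids the N-prefix question entirely. A minimal sketch, where connectionString and the question's table/column names are placeholders:
using System.Data;
using System.Data.SqlClient;

// Sketch: parameterized insert into an nvarchar(50) column.
// connectionString, "Liderler" and "AdiSoyadi" are placeholders.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("INSERT INTO Liderler (AdiSoyadi) VALUES (@adiSoyadi)", conn))
{
    cmd.Parameters.Add("@adiSoyadi", SqlDbType.NVarChar, 50).Value = "sdəhr";
    conn.Open();
    cmd.ExecuteNonQuery(); // the value travels as Unicode, so 'ə' is preserved
}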

Related

How to convert a null value to the default float value (0.0) and save it into MySQL (6.0.11-alpha-community)

I am dynamically reading Excel data (downloaded via FileZilla) and saving it into MySQL version 6.0.11-alpha-community.
I'm using a stored procedure to capture the Excel .csv data and save it into MySQL,
but while executing and saving the data into MySQL I'm getting this error:
(screenshot) 2021070701.csv and 2021070801.csv read and save into MySQL fine, because those files have no null data in the relevant cells.
When execution reaches the file 2021070201.csv I get a "Format exception" error.
The 2021070201.csv (Excel) data looks like this:
(2021070201.csv Excel data; the blue highlighted cells are float values and contain null data)
This is the logic I wrote to read the Excel data and save it into MySQL:
else if (mpType.Equals("Indices", StringComparison.InvariantCultureIgnoreCase))
{
    MAE_Repo.InsertNseIndicesPrice("Usp_tbl_Intraday_NseIndicesPrice", 0, Convert.ToInt32(res[0]), res[1], Convert.ToSingle(res[2]),
        Convert.ToSingle(res[3]), Convert.ToSingle(res[4]), Convert.ToSingle(res[5]), Convert.ToSingle(res[6]),
        Convert.ToSingle(res[7]), Convert.ToSingle(res[8]), Convert.ToSingle(res[9]), Convert.ToSingle(res[10]),
        Convert.ToDateTime(res[11]), res[12], timeStamp, filename);
}
And this is the method I wrote that calls the stored procedure:
public static void InsertNseIndicesPrice(string ProcName, int ipID, int ipIndextoken, string ipSymbol, float ipCLOSE,
    float ipOPEN, float ipHigh, float ipLow, float ipVolume, float ipVALUE, float ipprev_close, float ipCHANGE, float ipPer_Change,
    DateTime ipUpdtime, string ipFlag, DateTime ipCreatedOn, string ipFileName)
{
    con.Open();
    MySqlCommand cmdSel = new MySqlCommand(ProcName, con);
    cmdSel.CommandType = System.Data.CommandType.StoredProcedure;
    cmdSel.Parameters.AddWithValue("@IPID", 0);
    cmdSel.Parameters.AddWithValue("@IPIndextoken", ipIndextoken);
    cmdSel.Parameters.AddWithValue("@IPSymbol", ipSymbol);
    cmdSel.Parameters.AddWithValue("@IPCLOSE", ipCLOSE);
    cmdSel.Parameters.AddWithValue("@IPOPEN", ipOPEN);
    cmdSel.Parameters.AddWithValue("@IPHigh", ipHigh);
    cmdSel.Parameters.AddWithValue("@IPLow", ipLow);
    cmdSel.Parameters.AddWithValue("@IPVolume", ipVolume);
    cmdSel.Parameters.AddWithValue("@IPVALUE", ipVALUE);
    cmdSel.Parameters.AddWithValue("@IPprev_close", ipprev_close);
    cmdSel.Parameters.AddWithValue("@IPCHANGE", ipCHANGE);
    cmdSel.Parameters.AddWithValue("@IPPer_Change", ipPer_Change);
    cmdSel.Parameters.AddWithValue("@IPUpdtime", ipUpdtime);
    cmdSel.Parameters.AddWithValue("@IPFlag", ipFlag);
    cmdSel.Parameters.AddWithValue("@IPCreatedOn", ipCreatedOn);
    cmdSel.Parameters.AddWithValue("@IPFileName", ipFileName);
    cmdSel.Parameters.AddWithValue("@OPType", 1);
    MySqlDataAdapter da = new MySqlDataAdapter(cmdSel);
    DataSet ds = new DataSet();
    da.Fill(ds);
    con.Close();
}
And this is my stored procedure in MySQL:
DELIMITER $$
USE `marketprice_nse`$$
DROP PROCEDURE IF EXISTS `Usp_tbl_Intraday_NseIndicesPrice`$$
CREATE DEFINER=`root`@`localhost` PROCEDURE `Usp_tbl_Intraday_NseIndicesPrice`(IN IPID BIGINT, IN IPIndextoken INT, IN IPSymbol VARCHAR(20), IN IPCLOSE FLOAT,
IN IPOPEN FLOAT, IN IPHigh FLOAT,IN IPLow FLOAT, IN IPVolume FLOAT, IN IPVALUE FLOAT,IN IPprev_close FLOAT, IN IPCHANGE FLOAT, IN IPPer_Change FLOAT,
IN IPUpdtime DATETIME, IN IPFlag VARCHAR(1), IN IPCreatedOn DATETIME, IN IPFileName VARCHAR(30),IN OPType INT)
BEGIN
IF (OPType = 1)-- Insert
THEN
-- IF (SELECT 1 FROM tbl_intraday_nseindicesprice WHERE FileName = IPFileName) = 0
-- THEN
-- BEGIN
INSERT INTO tbl_Intraday_NseIndicesPrice (ID, Indextoken, Symbol, `CLOSE`, `OPEN`, High, Low, Volume, `VALUE`, prev_close, `CHANGE`,
Per_Change, Updtime, Flag, CreatedOn, FileName) VALUES (IPID, IPIndextoken, IPSymbol, IPCLOSE, IPOPEN, IPHigh, IPLow, IPVolume,
IPVALUE, IPprev_close, IPCHANGE, IPPer_Change, IPUpdtime, IPFlag, IPCreatedOn, IPFileName);
-- END;
-- END IF;
ELSEIF (OPType = 2)-- Check FileName availability
THEN
SELECT * FROM tbl_intraday_nseindicesprice WHERE FileName=IPFileName;
END IF;
END$$
DELIMITER ;
I tried different approaches:
Single.Parse()
float.Parse()
Convert.ToSingle()
but none of these resolved the format exception.
How can I save the default value (0.0) into MySQL without a format exception?
OK based on the comments this is the answer:
Instead of writing Convert.ToSingle(res[6]) you can write Convert.ToSingle(res[6] ?? "0.0"). Then, if the value of res[6] is null, the string "0.0" will be converted to a single instead.
If you need to worry about various locales having a period or comma as the decimal separator, you could simply use "0" as the default (which will also convert to a 0.0 float), or create a string containing the local decimal separator
string defaultFloatValue = (0.0F).ToString();
and use it instead of the "0.0" literal.
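If the CSV cells can also arrive as empty strings rather than true nulls, a small helper based on float.TryParse covers both cases. This is only a sketch (the invariant-culture assumption and the helper name are mine, not part of the original answer):
using System.Globalization;

// Sketch: null/empty-safe conversion; falls back to 0.0f when the cell is
// null, empty, or not a valid number. Assumes invariant-culture CSV data.
static float ToSingleOrDefault(string value)
{
    return float.TryParse(value, NumberStyles.Float, CultureInfo.InvariantCulture, out float result)
        ? result
        : 0.0f;
}
// Usage, e.g. instead of Convert.ToSingle(res[2]):
// float close = ToSingleOrDefault(res[2]);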

Creating SQL Statements from a Text File - C#

Good morning,
I'm trying to write a program that creates statements in a .sql document, but I'm having some trouble.
This is my code so far:
string[] filas = File.ReadAllLines("c:\\temp\\Statements.txt");
StreamWriter sw = new StreamWriter("c:\\temp\\Statements.sql");
foreach (string fila in filas)
{
    string sql = "INSERT ";
    string[] campos = fila.Split(' ');
    if (campos[0] == "1A")
    {
        sql += " INTO TABLE1 (field1) VALUES (" + campos[1] + ");";
    }
    else
    {
        sql += " INTO TABLE2 (field1,field2,field3) VALUES (" + campos[1] + "," + campos[2] + "," + campos[3] + ");";
    }
    sw.WriteLine(sql);
}
sw.Close();
The thing is: I need to read a txt document (the length will change) and then transform it into a sql document with all the statements. There are only two types of lines, starting with "1A" or "2B", for example:
1A123456 456,67
2B123456 mr awesome great strt germany
1A123456 456,67
2B123456 mr awesome great strt germany
2B123456 mr awesome great strt germany
1A123456 456,67
1A123456 456,67
Then I'm trying to "transform" that information into "inserts":
INSERT INTO TABLE1 (REF,MONEY) VALUES (A123456,456,67);
INSERT INTO TABLE2 (REF,NAME,ADR) VALUES (B123456,mr awesome,great strt);
INSERT INTO TABLE1 (REF,MONEY) VALUES (A123456,456,67);
INSERT INTO TABLE2 (REF,NAME,ADR) VALUES (B123456,mr awesome,great strt);
INSERT INTO TABLE2 (REF,NAME,ADR) VALUES (B123456,mr awesome,great strt);
INSERT INTO TABLE1 (REF,MONEY) VALUES (A123456,456,67);
INSERT INTO TABLE1 (REF,MONEY) VALUES (A123456,456,67);
My code is not working so well... I hope someone can help me a little :).
Regards.
Firstly, I could not see a space between 1A and 123456, so if (campos[0] == "1A") will not work. Use the Contains method for this check - if (campos[0].Contains("1A")) - or alternatively evaluate it with StartsWith.
Secondly, you need to split 1A123456 to get A123456; you can use Substring or a similar function for that (same for 2B).
Thirdly, you are splitting the string on ' ', which can produce more substrings than you anticipate. In 2B123456 mr awesome great strt germany the words mr, awesome, great and strt all land in different elements, so you need logic to concatenate campos[1] & campos[2] and campos[3] & campos[4] in the 2B case.
Fourthly, for the 1A case you need to split campos[1] using , as the delimiter to get the two values you want.
Hope this provides enough guidance to solve your issue; a rough sketch applying these points follows.
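Putting those points together, the loop might look like the sketch below. It is a sketch only: the substring positions, the quoting of values, and how the 456,67 field should be handled are assumptions to adjust to the real file layout:
using System;
using System.IO;

// Sketch: parse "1A..." and "2B..." lines and emit INSERT statements.
string[] filas = File.ReadAllLines(@"c:\temp\Statements.txt");
using (var sw = new StreamWriter(@"c:\temp\Statements.sql"))
{
    foreach (string fila in filas)
    {
        string[] campos = fila.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        if (campos[0].StartsWith("1A"))
        {
            string referencia = campos[0].Substring(1);       // "A123456"
            string dinero = campos[1];                        // "456,67"
            sw.WriteLine("INSERT INTO TABLE1 (REF,MONEY) VALUES ('" + referencia + "','" + dinero + "');");
        }
        else if (campos[0].StartsWith("2B"))
        {
            string referencia = campos[0].Substring(1);       // "B123456"
            string nombre = campos[1] + " " + campos[2];      // "mr awesome"
            string direccion = campos[3] + " " + campos[4];   // "great strt"
            sw.WriteLine("INSERT INTO TABLE2 (REF,NAME,ADR) VALUES ('" + referencia + "','" + nombre + "','" + direccion + "');");
        }
    }
}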
After some research, and with help from anil and Pikoh, I found a good solution:
string mydate = DateTime.Now.ToString("yyyyMMdd");
string AÑO = DateTime.Now.ToString("yyyy");
string MES = DateTime.Now.ToString("MM");
string DIA = DateTime.Now.ToString("dd");
string sql = "INSERT ";
string[] campos = fila.Split(' ');
if (campos[0].StartsWith("1H"))
{
sql += "INTO TABLE (VALUES,VALUES,VALUES) VALUES (" + "'" + mydate + "'" + "," + "'" + campos[0].Substring(1, 8) + "'" + "," + "'" + campos[0].Substring(9, 7) + "'" + "," + "'" + campos[8] + "'" + ");";
Inserting data and manipulating strings went well, but now I have one last problem:
what happens if I need to "backspace" to a specific string because my logic can't pick out the correct information? Regards.

How to calculate hashbyte SHA1 using C#?

In a table I have a column URL which I am using to save urls. I am calculating the hash in another column by using formula (CONVERT([varbinary](20),hashbytes('SHA1',[URL]))). It's working fine.
Now I need a similar function in C# to compute the hash, so that I can compare and check that a similar row doesn't exist before I insert a new one. I tried a few links but had no luck.
Here are the links:
http://weblogs.sqlteam.com/mladenp/archive/2009/04/28/Comparing-SQL-Server-HASHBYTES-function-and-.Net-hashing.aspx
How do I calculate the equivalent to SQL Server (hashbytes('SHA1',[ColumnName])) in C#?
** I found this link working. All I need to do is change the formula in the db, but is it possible to do it in one line? **
http://forums.asp.net/t/1782626.aspx
DECLARE @HashThis nvarchar(4000);
DECLARE @BinHash varbinary(4000);
SELECT @HashThis = CONVERT(nvarchar(4000),'Password#Test');
SELECT @BinHash = HashBytes('SHA1', @HashThis);
SELECT cast(N'' as xml).value('xs:base64Binary(xs:hexBinary(sql:variable("@BinHash")))', 'nvarchar(4000)');
In C#:
string pwd = "Password#Test";
var sha1Provider = HashAlgorithm.Create("SHA1");
var binHash = sha1Provider.ComputeHash(Encoding.Unicode.GetBytes(pwd));
Console.WriteLine(Convert.ToBase64String(binHash));
I am using SQL Server 2012. The collation for the database is SQL_Latin1_General_CP1_CI_AS.
Thanks,
Paraminder
It's an encoding issue:
C#/.Net/CLR strings are, internally, UTF-16 encoded strings. That means each character is at least two bytes.
Sql Server is different:
char and varchar represent each character as a single byte using the code page tied to the collation used by that column
nchar and nvarchar represent each character as 2 bytes using the [old and obsolete] UCS-2 encoding for Unicode — something which was deprecated in 1996 with the release of Unicode 2.0 and UTF-16.
The big difference between UTF-16 and UCS-2 is that UCS-2 can only represent characters within the Unicode BMP (Basic Multilingual Plane); UTF-16 can represent any Unicode character. Within the BMP, as I understand it, UCS-2 and UTF-16 representations are identical.
That means that to compute a hash that is identical to the one that SQL Server computes, you're going to have to get a byte representation that is identical to the one that SQL Server has. Since it sounds like you're using char or varchar with the collation SQL_Latin1_General_CP1_CI_AS, per the documentation, the CP1 part means code page 1252 and the rest means case-insensitive, accent-sensitive. So...
You can get the encoding for code page 1252 by:
Encoding enc = Encoding.GetEncoding(1252);
Using that information, and given this table:
create table dbo.hash_test
(
  id          int           not null identity(1,1) primary key clustered ,
  source_text varchar(2000) collate SQL_Latin1_General_CP1_CI_AS not null ,
  hash        as ( hashbytes( 'SHA1' , source_text ) )
)
go
insert dbo.hash_test ( source_text ) values ( 'the quick brown fox jumped over the lazy dog.' )
insert dbo.hash_test ( source_text ) values ( 'She looked like something that might have occured to Ibsen in one of his less frivolous moments.' )
go
You'll get this output
1: the quick brown fox jumped over the lazy dog.
sql: 6039D100 3323D483 47DDFDB5 CE2842DF 758FAB5F
c#: 6039D100 3323D483 47DDFDB5 CE2842DF 758FAB5F
2: She looked like something that might have occured to Ibsen in one of his less frivolous moments.
sql: D92501ED C462E331 B0E129BF 5B4A854E 8DBC490C
c#: D92501ED C462E331 B0E129BF 5B4A854E 8DBC490C
from this program:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class Program
{
static byte[] Sha1Hash( string s )
{
SHA1 sha1 = SHA1.Create() ;
Encoding windows1252 = Encoding.GetEncoding(1252) ;
byte[] octets = windows1252.GetBytes(s) ;
byte[] hash = sha1.ComputeHash( octets ) ;
return hash ;
}
static string HashToString( byte[] bytes )
{
StringBuilder sb = new StringBuilder() ;
for ( int i = 0 ; i < bytes.Length ; ++i )
{
byte b = bytes[i] ;
if ( i > 0 && 0 == i % 4 ) sb.Append( ' ' ) ;
sb.AppendFormat( b.ToString("X2") ) ;
}
string s = sb.ToString() ;
return s ;
}
private static DataTable ReadDataFromSqlServer()
{
DataTable dt = new DataTable();
using ( SqlConnection conn = new SqlConnection( "Server=localhost;Database=sandbox;Trusted_Connection=True;"))
using ( SqlCommand cmd = conn.CreateCommand() )
using ( SqlDataAdapter sda = new SqlDataAdapter(cmd) )
{
cmd.CommandText = "select * from dbo.hash_test" ;
cmd.CommandType = CommandType.Text;
conn.Open();
sda.Fill( dt ) ;
conn.Close() ;
}
return dt ;
}
static void Main()
{
DataTable dt = ReadDataFromSqlServer() ;
foreach ( DataRow row in dt.Rows )
{
int id = (int) row[ "id" ] ;
string sourceText = (string) row[ "source_text" ] ;
byte[] sqlServerHash = (byte[]) row[ "hash" ] ;
byte[] myHash = Sha1Hash( sourceText ) ;
Console.WriteLine();
Console.WriteLine( "{0:##0}: {1}" , id , sourceText ) ;
Console.WriteLine( " sql: {0}" , HashToString( sqlServerHash ) ) ;
Console.WriteLine( " c#: {0}" , HashToString( myHash ) ) ;
Debug.Assert( sqlServerHash.SequenceEqual(myHash) ) ;
}
return ;
}
}
Easy!
I would suggest that any time a hash is created it be done in a single place, either in code or in the database. It will make your life easier in the long run. That would mean either changing your C# code to create the hash before inserting the record, or doing the duplicate check within a stored procedure instead.
Regardless, the duplicate check and insert should be synchronized so that no other inserts can occur between the time you check for duplicates and when the record is actually inserted. The easiest way to do that is to perform both within the same transaction.
If you insist on leaving the logic as it stands, I would suggest that you create the hash in the database but expose it via a stored procedure or user-defined function that can be called from your C# code.
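To illustrate that advice, here is a minimal sketch of a check-and-insert done in a single transaction, reusing the hash_test table from the answer above. The serializable isolation level and the InsertIfNew name are assumptions, not anything from the original answers:
using System.Data;
using System.Data.SqlClient;

// Sketch: duplicate check and insert in one statement and one transaction,
// letting the computed "hash" column do the hashing on the database side.
static bool InsertIfNew(string connectionString, string sourceText)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction(IsolationLevel.Serializable))
        using (var cmd = conn.CreateCommand())
        {
            cmd.Transaction = tx;
            cmd.CommandText =
                @"insert dbo.hash_test ( source_text )
                  select @source_text
                  where not exists ( select 1 from dbo.hash_test
                                     where hash = hashbytes('SHA1', @source_text) );";
            cmd.Parameters.Add("@source_text", SqlDbType.VarChar, 2000).Value = sourceText;
            int rows = cmd.ExecuteNonQuery();
            tx.Commit();
            return rows > 0;   // true if a new row was inserted
        }
    }
}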

Didn't find Error: Can't update MDB Tablerow with Cyrillic Chars

Information that could be important up front:
Access 2003 database (*.mdb)
Table is linked to a SQL Server 2005 database --> table
When linked to another Access database --> table, it works
Program I use for the update: C#, based on .NET 2.0
Database language: German?
OleDbConnection used:
Connection = new OleDbConnection(@"Provider=Microsoft.Jet.OLEDB.4.0;" +
    "Data Source=" + PathToDatabase + ";" +
    "Jet OLEDB:System Database=" + PathToSystemFile + ";" +
    "User ID=" + SignedUserID + ";" +
    "Password=" + SignedLoginKey + ";");
Problem:
I would like to update a string, which I have successfully turned into a SQL UPDATE statement like:
UPDATE [Artikel] SET [Artikelbeschreibung]='УБИТЬ ДРОЗДА 4 СЕРИИ' WHERE products_id=32501;
My table [Artikel] contains a row that meets the condition (products_id=32501).
When I run the update, no errors or exceptions are thrown.
When I check what has arrived in the database I only see this:
????? ?????? 4 ?????
The file's encoding is UTF-8, and I've already tried this, but with no luck: Convert ANSI (Windows 1252) to UTF8 in C#
Here are the steps my program performs:
1. Load a file containing the SQL statement with a placeholder describing which file, which section and which key hold the right information.
EXAMPLE: UPDATE [Artikel] SET [Artikelbeschreibung]='<<C:\myfile.ini::MySection::MyKey>>' WHERE products_id=32501;
2. Grab the placeholder / information.
NOW I HAVE: <<C:\myfile.ini::MySection::MyKey>>
3. Parse it, open the file, search for the section, search for the key, and return the key's value as a string.
RESPONSE = УБИТЬ ДРОЗДА 4 СЕРИИ
4. Replace <<C:\myfile.ini::MySection::MyKey>> with УБИТЬ ДРОЗДА 4 СЕРИИ in the original SQL statement.
RESULT: UPDATE [Artikel] SET [Artikelbeschreibung]='УБИТЬ ДРОЗДА 4 СЕРИИ' WHERE products_id=32501;
5. Take the resulting string, open the OleDbConnection described above, and do this:
Connection.Open();
if (Connection != null)
{
    Command = null;
    Command = Connection.CreateCommand();
    Command.CommandText = SQL;
    Command.ExecuteNonQuery();
    Connection.Close();
    Connection = null;
}
6. Looking into my database there is only '????? ?????? 4 ?????' instead of 'УБИТЬ ДРОЗДА 4 СЕРИИ'.
Additional information: this only occurs when my table is linked to a SQL Server; when the table is linked to another Access database, or lives in the database directly, it works fine.
Maybe someone can help me with this; I don't know where the error might be.
If more information is required, please say so and I'll add it as soon as possible via "Edit".
You have two options:
Go to Regional Options and set the locale for non-Unicode programs to a locale that uses Cyrillic characters.
Convert the Cyrillic string to Unicode.
UnicodeFromVal should be set to the value of the first Unicode character in the Cyrillic alphabet.
Public Function AsUnicode(ByVal Source As String, UnicodeFromVal As Long) As String
    Dim c As Long
    Dim ByteString() As Byte
    Dim AnsiValue As Integer
    Dim Length As Long
    Length = Len(Source)
    ByteString = StrConv(Source, vbFromUnicode, GetSystemDefaultLCID())
    For c = LBound(ByteString) To UBound(ByteString)
        AnsiValue = ByteString(c)
        If AnsiValue >= 224 And AnsiValue <= 250 Then
            AsUnicode = AsUnicode & ChrW(CLng(AnsiValue - 224 + UnicodeFromVal))
        Else
            AsUnicode = AsUnicode & ChrW(AnsiValue)
        End If
    Next
End Function
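Since the question is about a C# program, a rough C# port of that VBA function might look like the following. It is a sketch under the assumption that the text was read with an ANSI single-byte code page (for example Windows-1251 for Cyrillic); ansiCodePage and unicodeFromVal are values you would supply:
using System.Text;

// Rough C# equivalent of the VBA AsUnicode function above (sketch only).
static string AsUnicode(string source, int ansiCodePage, int unicodeFromVal)
{
    Encoding ansi = Encoding.GetEncoding(ansiCodePage);
    byte[] bytes = ansi.GetBytes(source);
    var sb = new StringBuilder(bytes.Length);
    foreach (byte b in bytes)
    {
        if (b >= 224 && b <= 250)
            sb.Append((char)(b - 224 + unicodeFromVal));   // remap into the Cyrillic block
        else
            sb.Append((char)b);
    }
    return sb.ToString();
}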

Convert a file full of "INSERT INTO xxx VALUES" in to something Bulk Insert can parse

This is a followup to my first question "Porting “SQL” export to T-SQL".
I am working with a 3rd-party program that I have no control over and cannot change. This program exports its internal database into a set of .sql files, each with a format of:
INSERT INTO [ExampleDB] ( [IntField] , [VarcharField], [BinaryField])
VALUES
(1 , 'Some Text' , 0x123456),
(2 , 'B' , NULL),
--(SNIP, it does this for 1000 records)
(999, 'E' , null);
(1000 , 'F' , null);
INSERT INTO [ExampleDB] ( [IntField] , [VarcharField] , BinaryField)
VALUES
(1001 , 'asdg', null),
(1002 , 'asdf' , 0xdeadbeef),
(1003 , 'dfghdfhg' , null),
(1004 , 'sfdhsdhdshd' , null),
--(SNIP 1000 more lines)
This pattern continues until the .sql file reaches a file size set during the export. The export files are grouped as EXPORT_PATH\%Table_Name%\Export#.sql, where the # is a counter starting at 1.
Currently I have about 1.3GB data and I have it exporting in 1MB chunks (1407 files across 26 tables, All but 5 tables only have one file, the largest table has 207 files).
Right now I just have a simple C# program that reads each file into RAM and then calls ExecuteNonQuery. The issue is that I am averaging 60 sec/file, which means it will take about 23 hrs to process the entire export.
I assume that if I could somehow format the files to be loaded with a BULK INSERT instead of an INSERT INTO it could go much faster. Is there any easy way to do this, or do I have to write some kind of find & replace and keep my fingers crossed that it does not fail on some corner case and blow up my data?
Any other suggestions on how to speed up the insert into would also be appreciated.
UPDATE:
I ended up going with the parse and do a SqlBulkCopy method. It went from 1 file/min. to 1 file/sec.
Well, here is my "solution" for helping convert the data into a DataTable or otherwise (run it in LINQPad):
var i = "(null, 1 , 'Some''\n Text' , 0x123.456)";
var pat = @",?\s*(?:(?<n>null)|(?<w>[\w.]+)|'(?<s>.*)'(?!'))";
Regex.Matches(i, pat,
    RegexOptions.IgnoreCase | RegexOptions.Singleline).Dump();
The match should be run once per value group (e.g. (a,b,etc)). Parsing of the results (e.g. conversion) is left to the caller and I have not tested it [much]. I would recommend creating the correctly-typed DataTable first -- although it may be possible to pass everything "as a string" to the database? -- and then use the information in the columns to help with the extraction process (possibly using type converters). For the captures: n is null, w is word (e.g. number), s is string.
Happy coding.
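Since the conversion is left to the caller, here is one way the captures could be turned into values for a DataTable row. It is a sketch reusing the answer's pattern; the quote unescaping and the decision to leave numbers as strings are assumptions:
using System;
using System.Text.RegularExpressions;

// Sketch: turn one "(a, b, ...)" value group into an object[] for DataTable.Rows.Add.
static object[] ParseValueGroup(string valueGroup)
{
    var pat = @",?\s*(?:(?<n>null)|(?<w>[\w.]+)|'(?<s>.*)'(?!'))";
    var matches = Regex.Matches(valueGroup.Trim('(', ')'), pat,
        RegexOptions.IgnoreCase | RegexOptions.Singleline);
    var values = new object[matches.Count];
    for (int i = 0; i < matches.Count; i++)
    {
        Match m = matches[i];
        if (m.Groups["n"].Success)
            values[i] = DBNull.Value;                            // null literal
        else if (m.Groups["s"].Success)
            values[i] = m.Groups["s"].Value.Replace("''", "'");  // unescape quotes
        else
            values[i] = m.Groups["w"].Value;                     // number / hex literal as string
    }
    return values;
}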
Apparently your data is always wrapped in parentheses and starts with a left parenthesis. You might want to use this rule to split(RemoveEmptyEntries) each of those lines and load it into a DataTable. Then you can use SqlBulkCopy to copy all at once into the database.
This approach would not necessarily be fail-safe, but it would be certainly faster.
Edit: Here's the way how you could get the schema for every table:
private static DataTable extractSchemaTable(IEnumerable<String> lines)
{
    DataTable schema = null;
    var insertLine = lines.SkipWhile(l => !l.StartsWith("INSERT INTO [")).Take(1).First();
    var startIndex = insertLine.IndexOf("INSERT INTO [") + "INSERT INTO [".Length;
    var endIndex = insertLine.IndexOf("]", startIndex);
    var tableName = insertLine.Substring(startIndex, endIndex - startIndex);
    using (var con = new SqlConnection("CONNECTION"))
    {
        using (var schemaCommand = new SqlCommand("SELECT * FROM " + tableName, con))
        {
            con.Open();
            using (var reader = schemaCommand.ExecuteReader(CommandBehavior.SchemaOnly))
            {
                schema = reader.GetSchemaTable();
            }
        }
    }
    return schema;
}
Then you simply need to iterate each line in the file, check if it starts with ( and split that line by Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries). Then you could add the resulting array into the created schema-table.
Something like this:
var allLines = System.IO.File.ReadAllLines(path);
DataTable result = extractSchemaTable(allLines);
for (int i = 0; i < allLines.Length; i++)
{
    String line = allLines[i];
    if (line.StartsWith("("))
    {
        String data = line.Substring(1, line.Length - (line.Length - line.LastIndexOf(")")) - 1);
        var fields = data.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
        // you might need to parse it to correct DataColumn.DataType
        result.Rows.Add(fields);
    }
}
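To complete the picture, the filled DataTable can then be pushed to the server in one go with SqlBulkCopy, roughly like this (a sketch; "CONNECTION" and the destination table name are placeholders, as in the answer's code):
using System.Data.SqlClient;

// Sketch: bulk-copy the parsed rows in a single round trip.
using (var bulk = new SqlBulkCopy("CONNECTION"))
{
    bulk.DestinationTableName = "ExampleDB";   // placeholder table name from the question
    bulk.BatchSize = 1000;
    bulk.WriteToServer(result);
}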
