The following is a function I am using to do a bulk insert into DB2. The Errors property of the DB2BulkCopy class returns a DB2ErrorCollection (see my local variable errorCollection). The problem is that the only errors I care about are the ones that would prevent a row from being inserted into the table.
An example of an error I don't care about is 01517: "A character that could not be converted was replaced with a substitute character." This is returned in the DB2ErrorCollection object, but I don't need to act on it.
Is there a way to filter out only errors that would have prevented a row from being inserted?
public static DB2ErrorCollection RunDb2BulkCopy(DB2Connection conn, DataTable table, string tableName)
{
    DB2ErrorCollection errorCollection;
    using (var bc = new DB2BulkCopy(conn))
    {
        // Have to provide column mappings below - requirement. This code assumes that
        // the DataTable and the DB2 table have the same column names.
        foreach (DataColumn c in table.Columns)
        {
            bc.ColumnMappings.Add(new DB2BulkCopyColumnMapping(c.ColumnName, c.ColumnName));
        }
        bc.DestinationTableName = tableName;
        bc.WriteToServer(table);
        errorCollection = bc.Errors;
    }
    return errorCollection;
}
The first two characters of a SQLSTATE value (for example 01517) indicate the classification, or class code:
'00' is success.
'01' (as in your case) is a warning.
Other class-code values are errors you probably should not ignore.
The full list of class codes is here, and it has links to tables that enumerate each code per class code.
There may also be language-level classes/subclasses to handle such matters.
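In practice, then, you can keep only the errors whose SQLSTATE class code is neither '00' nor '01'. A minimal sketch, assuming each DB2Error item in the collection exposes its SQLSTATE through a SQLState property:

using System.Collections.Generic;
using IBM.Data.DB2;

public static List<DB2Error> GetBlockingErrors(DB2ErrorCollection errors)
{
    var blocking = new List<DB2Error>();
    foreach (DB2Error error in errors)
    {
        // Class code = first two characters of the SQLSTATE.
        // '00' (success) and '01' (warning) do not block the row.
        string sqlState = error.SQLState ?? string.Empty; // SQLState property assumed
        bool isIgnorable = sqlState.StartsWith("00") || sqlState.StartsWith("01");
        if (!isIgnorable)
            blocking.Add(error);
    }
    return blocking;
}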
I'm trying to use SqlBulkCopy to copy data into a SQL database table; however, it is (wrongly) saying that the columns don't match. They do match: if I use a breakpoint to see the names of the columns being mapped, they're correct, and the error message shows the name of the column, which is also correct.
This is my method. I have an identical method that does work and the only difference is where it gets the column names from. The strings containing the column names, however, are EXACTLY identical.
public static bool ManualMapImport(DataTable dataTable, string table)
{
    if (dataTable != null)
    {
        SqlConnection connection = new SqlConnection(connectionString);
        SqlBulkCopy import = new SqlBulkCopy(connection);
        import.DestinationTableName = "[" + table + "]";
        foreach (string s in Global.SelectedColumns)
        {
            /* The s string variable here is the EXACT same as
               the c.ToString() in the other method below */
            if (ColumnExists(table, s))
                import.ColumnMappings.Add(s, s);
            else
                return false;
        }
        connection.Open();
        import.WriteToServer(dataTable); // Error happens on this line
        connection.Close();
        return true;
    }
    else
    {
        return false;
    }
}
This is the almost identical, working method:
public static bool AutoMapImport(DataTable dataTable, string table)
{
    if (dataTable != null)
    {
        SqlConnection connection = new SqlConnection(connectionString);
        SqlBulkCopy import = new SqlBulkCopy(connection);
        import.DestinationTableName = "[" + table + "]";
        foreach (DataColumn c in dataTable.Columns)
        {
            if (ColumnExists(table, c.ToString()))
                import.ColumnMappings.Add(c.ToString(), c.ToString());
            else
                return false;
        }
        connection.Open();
        import.WriteToServer(dataTable);
        connection.Close();
        return true;
    }
    else
    {
        return false;
    }
}
If it helps, the column names are: ACT_Code, ACT_Paid, ACT_Name, ACT_Terminal_Code, ACT_TCustom1, ACT_TCustom2. These are exactly the same in the database itself. I'm aware that SqlBulkCopy mappings are case sensitive, and the column names are indeed correct.
This is the error message:
An unhandled exception of type 'System.InvalidOperationException'
occurred in System.Data.dll
Additional information: The given ColumnName 'ACT_Code' does not match
up with any column in data source.
Hopefully I'm just missing something obvious here, but I am well and truly lost.
Many thanks.
EDIT: For anyone happening to have the same problem as me, here's how I fixed it: instead of having the ManualMapImport() method be a near-clone of AutoMapImport(), I had it loop through the columns of the DataTable and change the names, then called AutoMapImport() with the amended DataTable, eliminating the need to map with plain strings at all.
According to MSDN (here), the DataColumn.ToString() method returns "the Expression value, if the property is set; otherwise, the ColumnName property."
I've always found ToString() to be unreliable for this anyway (the result can change based on current state/conditions), so I'd recommend using the ColumnName property instead, as that's what you are actually trying to get out of ToString().
OK, failing that, I'd guess this is a problem with case sensitivity in the names of the columns in the source DataTable, as SqlBulkCopy is very case-sensitive even if the SQL DB is not. To address this, when you check whether the column exists, return/use the actual string from the DataTable's own column list rather than whatever string was passed in. This should fix up any case or accent differences that your ColumnExists routine might be ignoring.
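A minimal sketch of that fix inside ManualMapImport, reusing the question's Global.SelectedColumns and ColumnExists: the DataColumnCollection lookup by name is case-insensitive, so it can resolve the exact ColumnName the DataTable uses.

foreach (string s in Global.SelectedColumns)
{
    // Case-insensitive lookup against the source DataTable's own columns
    DataColumn source = dataTable.Columns[s];
    if (source == null || !ColumnExists(table, source.ColumnName))
        return false;
    // Map with the DataTable's exact casing so the case-sensitive
    // SqlBulkCopy mapping always matches the source column
    import.ColumnMappings.Add(source.ColumnName, source.ColumnName);
}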
I had the same problem... The message might seem a bit misleading, as it suggests you didn't perform the correct mapping.
To find the root of the problem, I decided to go step by step, adding table columns and calling the WriteToServer method.
Assuming you have a valid column mapping, you will have to ensure the following between the source DataTable and the destination table:
The column types and lengths (!) do match
You have provided a valid value for each non-nullable (NOT NULL) destination column
If you don't control your identity column values and would like SQL Server to do this job for you, please make sure not to specify the SqlBulkCopyOptions.KeepIdentity option. In this case you don't add the identity column to your source either (see the sketch below).
This should be all for your bulk insert to work. Hope it helps.
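On the identity point, a minimal sketch with hypothetical table and column names: leave SqlBulkCopyOptions.KeepIdentity unset (the default) and don't map the identity column at all, so SQL Server generates the values.

using (var bulk = new SqlBulkCopy(connectionString)) // default options: KeepIdentity not set
{
    bulk.DestinationTableName = "[MyTable]";     // hypothetical destination table
    bulk.ColumnMappings.Add("Name", "Name");     // map the data columns only...
    bulk.ColumnMappings.Add("Amount", "Amount");
    // ...the identity column (e.g. Id) is neither in the source nor mapped,
    // so SQL Server assigns its values
    bulk.WriteToServer(dataTable);
}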
I'm getting this exception when trying to do an SqlBulkCopy from a DataTable.
Error Message: The given value of type String from the data source cannot be converted to type money of the specified target column.
Target Site: System.Object ConvertValue(System.Object, System.Data.SqlClient._SqlMetaData, Boolean, Boolean ByRef, Boolean ByRef)
I understand what the error is saying, but how can I get more information, such as the row/field where it is happening? The DataTable is populated by a third party and can contain up to 200 columns and up to 10k rows. The columns that are returned depend on the request sent to the third party. All of the DataTable columns are of string type. The columns in my database are not all varchar, so, prior to executing the insert, I format the DataTable values using the following code (non-important code removed):
//--- create lists to hold the special data type columns
List<DataColumn> IntColumns = new List<DataColumn>();
List<DataColumn> DecimalColumns = new List<DataColumn>();
List<DataColumn> BoolColumns = new List<DataColumn>();
List<DataColumn> DateColumns = new List<DataColumn>();

foreach (DataColumn Column in dtData.Columns)
{
    //--- find the field map that tells the system where to put this piece of data from the 3rd party
    FieldMap ColumnMap = AllFieldMaps.Find(a => a.SourceFieldID.ToLower() == Column.ColumnName.ToLower());

    //--- get the data type for this field in our system
    Type FieldDataType = Nullable.GetUnderlyingType(DestinationType.Property(ColumnMap.DestinationFieldName).PropertyType);

    //--- find the field data type and add to the respective list
    switch (Type.GetTypeCode(FieldDataType))
    {
        case TypeCode.Int16:
        case TypeCode.Int32:
        case TypeCode.Int64: { IntColumns.Add(Column); break; }
        case TypeCode.Boolean: { BoolColumns.Add(Column); break; }
        case TypeCode.Double:
        case TypeCode.Decimal: { DecimalColumns.Add(Column); break; }
        case TypeCode.DateTime: { DateColumns.Add(Column); break; }
    }

    //--- add the mapping for the column on the BulkCopy object
    BulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping(Column.ColumnName, ColumnMap.DestinationFieldName));
}
//--- loop through all rows and convert the values to data types that match our database's data type for that field
foreach (DataRow dr in dtData.Rows)
{
    //--- convert int values
    foreach (DataColumn IntCol in IntColumns)
        dr[IntCol] = Helpers.CleanNum(dr[IntCol].ToString());

    //--- convert decimal values
    foreach (DataColumn DecCol in DecimalColumns)
        dr[DecCol] = Helpers.CleanDecimal(dr[DecCol].ToString());

    //--- convert bool values
    foreach (DataColumn BoolCol in BoolColumns)
        dr[BoolCol] = Helpers.ConvertStringToBool(dr[BoolCol].ToString());

    //--- convert date values
    foreach (DataColumn DateCol in DateColumns)
        dr[DateCol] = dr[DateCol].ToString().Replace("T", " ");
}
try
{
    //--- do bulk insert
    BulkCopy.WriteToServer(dtData);
    transaction.Commit();
}
catch (Exception ex)
{
    transaction.Rollback();
    //--- handles the error
    //--- this is where I need to find the row & column having an issue
}
This code should format all values for their destination fields. In the case of this error (the decimal), the cleaning function removes any character that is not 0-9 or a decimal point. The field that is throwing the error is nullable in the database.
The level 2 exception has this error:
Error Message: Failed to convert parameter value from a String to a Decimal.
Target Site: System.Object CoerceValue(System.Object, System.Data.SqlClient.MetaType, Boolean ByRef, Boolean ByRef, Boolean)
and the level 3 exception has this error:
Error Message: Input string was not in a correct format
Target Site: Void StringToNumber(System.String, System.Globalization.NumberStyles, NumberBuffer ByRef, System.Globalization.NumberFormatInfo, Boolean)
Does anyone have any ideas on how to fix this, or how to get more information?
For the people stumbling across this question and getting a similar error message in regards to an nvarchar instead of money:
The given value of type String from the data source cannot be converted to type nvarchar of the specified target column.
This could be caused by a too-short column.
For example, if your column is defined as nvarchar(20) and you have a 40 character string, you may get this error.
Please use SqlBulkCopyColumnMapping.
Example:
private void SaveFileToDatabase(string filePath)
{
    string strConnection = System.Configuration.ConfigurationManager.ConnectionStrings["MHMRA_TexMedEvsConnectionString"].ConnectionString;
    String excelConnString = String.Format("Provider=Microsoft.ACE.OLEDB.12.0;Data Source={0};Extended Properties=\"Excel 12.0\"", filePath);

    //Create Connection to Excel work book
    using (OleDbConnection excelConnection = new OleDbConnection(excelConnString))
    {
        //Create OleDbCommand to fetch data from Excel
        using (OleDbCommand cmd = new OleDbCommand("Select * from [Crosswalk$]", excelConnection))
        {
            excelConnection.Open();
            using (OleDbDataReader dReader = cmd.ExecuteReader())
            {
                using (SqlBulkCopy sqlBulk = new SqlBulkCopy(strConnection))
                {
                    //Give your Destination table name
                    sqlBulk.DestinationTableName = "PaySrcCrosswalk";

                    //Map each source column to the destination column of the same name,
                    //built from the reader's schema since there is no DataTable here
                    for (int i = 0; i < dReader.FieldCount; i++)
                    {
                        string columnName = dReader.GetName(i);
                        sqlBulk.ColumnMappings.Add(new SqlBulkCopyColumnMapping(columnName, columnName));
                    }
                    sqlBulk.WriteToServer(dReader);
                }
            }
        }
    }
}
Since I don't believe "Please use..." plus some random code that is unrelated to the question is a good answer, but I do believe the spirit was correct, I decided to answer this correctly.
When you are using Sql Bulk Copy, it attempts to align your input data directly with the data on the server. So, it takes the Server Table and performs a SQL statement similar to this:
INSERT INTO [schema].[table] (col1, col2, col3) VALUES
Therefore, if you give it columns 1, 3, and 2, EVEN THOUGH your names may match (e.g. col1, col3, col2), it will insert like so:
INSERT INTO [schema].[table] (col1, col2, col3) VALUES
('col1', 'col3', 'col2')
It would be extra work and overhead for the Sql Bulk Insert to have to determine a Column Mapping. So it instead allows you to choose... Either ensure your Code and your SQL Table columns are in the same order, or explicitly state to align by Column Name.
Therefore, if your issue is mis-alignment of the columns, which is probably the majority of the cause of this error, this answer is for you.
TLDR
using System.Data;
using System.Linq; // needed for the Cast<DataColumn>() extension
//...
myDataTable.Columns.Cast<DataColumn>().ToList().ForEach(x =>
    bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping(x.ColumnName, x.ColumnName)));
This will take your existing DataTable, which you are attempting to insert into your created BulkCopy object, and it will explicitly map name to name. Of course if, for some reason, you decided to name your DataTable columns differently than your SQL Server columns... that's on you.
@Corey - It simply strips out all invalid characters. However, your comment made me think of the answer.
The problem was that many of the fields in my database are nullable. When using SqlBulkCopy, an empty string is not inserted as a null value. So in the case of my fields that are not varchar (bit, int, decimal, datetime, etc) it was trying to insert an empty string, which obviously is not valid for that data type.
The solution was to modify my loop where I validate the values to this (repeated for each datatype that is not string)
//--- convert decimal values
foreach (DataColumn DecCol in DecimalColumns)
{
    if (string.IsNullOrEmpty(dr[DecCol].ToString()))
        dr[DecCol] = DBNull.Value; //--- has to be a database null, not an empty string
    else
        dr[DecCol] = Helpers.CleanDecimal(dr[DecCol].ToString());
}
After making the adjustments above, everything inserts without issues.
Make sure that the properties you added in the entity class (with get/set accessors) are declared in the same order as the columns in the target table.
Not going to be everyone's fix, but it was for me:
I ran across this exact issue. The problem seemed to be that my DataTable didn't have an ID column, but the target destination had one with a primary key.
When I adapted my DataTable to have an ID, the copy worked perfectly.
In my scenario, the ID column isn't very important as a primary key, so I deleted this column from the target destination table and the SqlBulkCopy is working without issue.
There is another issue to take care of when mapping columns, which is string length: for example, if the destination field is TK_NO nvarchar(50), the source values must fit within that same length.
I got the same error "occasionally". Note: the column mapping was all correct, which is why the code worked most of the time.
I found the root cause to be a string length issue: the target table column had datatype nvarchar(255), whereas the value being sent was more than 255 characters long.
Fixed it by increasing the column length in the DB.
p.s.: Sadly the error message won't tell you which column of the table is causing the error; that you have to guess/figure out manually. A sketch of one way to automate that guess follows.
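A minimal sketch of such a check, assuming you can query the destination's column sizes first (the table name and an open connection are placeholders): compare every string value in the source DataTable against the destination column's declared length and report the offenders.

// Look up destination column max lengths from INFORMATION_SCHEMA
var maxLengths = new Dictionary<string, int>();
using (var cmd = new SqlCommand(
    "SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @table",
    connection))
{
    cmd.Parameters.AddWithValue("@table", "MyTable"); // placeholder destination table
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            if (!reader.IsDBNull(1)) // NULL for non-character columns
                maxLengths[reader.GetString(0)] = reader.GetInt32(1); // -1 means nvarchar(MAX)
        }
    }
}

// Report every value longer than its destination column allows
for (int r = 0; r < dataTable.Rows.Count; r++)
{
    foreach (DataColumn col in dataTable.Columns)
    {
        string value = dataTable.Rows[r][col].ToString();
        if (maxLengths.TryGetValue(col.ColumnName, out int max) && max != -1 && value.Length > max)
            Console.WriteLine($"Row {r}, column '{col.ColumnName}': {value.Length} chars exceeds nvarchar({max}).");
    }
}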
My issue was with the column mapping rather than the values. I was doing an extract from a dev system, creating the destination table, and bulk copying the content; then extracting from a prod system, adjusting the destination table, and bulk copying the content. So the column order from the two bulk copies wasn't matching.
// explicitly setting the column mapping even though the source & destination column names
// are the same, since the column orders differ and that affects the bulk copy, giving data conversion errors
foreach (DataColumn column in p_dataTable.Columns)
{
    bulkCopy.ColumnMappings.Add(
        new()
        {
            SourceColumn = column.ColumnName,
            DestinationColumn = column.ColumnName
        }
    );
}
bulkCopy.WriteToServer(p_dataTable);
Short answer: change the type from nvarchar(size) to nvarchar(MAX).
It is an issue of string length.
Note: all the above suggestions led me to write this short answer.
Check the data you are writing to the server; it may contain a delimiter where one is not expected, like
045|2272575|0.000|0.000|2013-10-07
045|2272585|0.000|0.000;2013-10-07
Here the delimiter is '|', but the second row contains a ';' where a '|' should be. That is why you are getting the error; a quick pre-load check like the one sketched below can catch this.
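A minimal sketch of such a check, assuming a pipe-delimited input file (the file name and field count are placeholders): verify each line splits into the expected number of fields before loading.

const int expectedFields = 5; // fields per row in the example above
int lineNumber = 0;
foreach (string line in System.IO.File.ReadLines("data.txt")) // hypothetical input file
{
    lineNumber++;
    if (line.Split('|').Length != expectedFields)
        Console.WriteLine($"Line {lineNumber} has an unexpected field count: {line}");
}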
I have come across a problem in using the DataAdapter, which I hope someone can help with. Basically I am creating a system, which is as follows:
Data is read in from a data source (MS-Access, SQL Server or Excel), converted to data tables and inserted into a local SQL Server database, using DataAdapters. This bit works fine. The SQL server table has a PK, which is an identity field with auto increment set to on.
Subsequent data loads read in the data from the source and compare it to what we already have. If the record is missing then it is added (this works fine). If the record is different then it needs to be updated (this doesn't work).
When doing the differential data load I create a data table which reads in the schema from the destination table (SQL server) and ensures it has the same columns etc.
The PK in the destination table is column 0, so when a record is inserted all of the values from column 1 onwards are set (as mentioned, this works perfectly). I don't change the row status for items I am adding. The PK in the data table is set correctly and I can confirm this.
When updating data I set column 0 (the PK column) to be the value of the record I am updating and set all of the columns to be the same as the source data.
For updated records I call AcceptChanges and SetModified on the row to ensure (I thought) that the application calls the correct method.
The DataAdapter is set with SelectCommand and UpdateCommand using the command builder.
When I run it, I have traced it using SQL Profiler and can see that the insert command is being run correctly, but the update command isn't being run at all, which is the crux of the problem. For reference, the data table will look something like the following:
PK   Value1  Value2  RowState
===  ======  ======  ========
124  Test1   Test2   Added
123  Test3   Test4   Updated
A couple of things to be aware of....
I have tested this by loading the row to be changed into the datatable, changing some column fields and running Update, and this works. However, this is impractical for my solution because the data is HUGE (>1 GB), so I can't simply load it into a datatable without taking a huge performance hit. What I am doing instead is creating the data table with a max of 500 rows and then running the Update. Testing during the initial data load showed this to be the most efficient in terms of memory usage and performance. The data table is cleared after each batch is run.
Anyone any ideas on where I am going wrong here?
Thanks in advance
Andrew
==========Update==============
Following is the code to create the insert/update rows
private static void AddNewRecordToDataTable(DbDataReader pReader, ref DataTable pUpdateDataTable)
{
    // create a new row in the table
    DataRow pUpdateRow = pUpdateDataTable.NewRow();

    // loop through each item in the data reader - setting all the columns apart from the PK
    for (int addCount = 0; addCount < pReader.FieldCount; addCount++)
    {
        pUpdateRow[addCount + 1] = pReader[addCount];
    }

    // add the row to the update table
    pUpdateDataTable.Rows.Add(pUpdateRow);
}
private static void AddUpdateRecordToDataTable(DbDataReader pReader, int pKeyValue,
                                               ref DataTable pUpdateDataTable)
{
    DataRow pUpdateRow = pUpdateDataTable.NewRow();

    // set the first column (PK) to the value passed in
    pUpdateRow[0] = pKeyValue;

    // loop for each column apart from the PK column
    for (int addCount = 0; addCount < pReader.FieldCount; addCount++)
    {
        pUpdateRow[addCount + 1] = pReader[addCount];
    }

    // add the row to the table and then mark it as modified
    pUpdateDataTable.Rows.Add(pUpdateRow);
    pUpdateRow.AcceptChanges();
    pUpdateRow.SetModified();
}
The following code is used to actually do the update:
updateAdapter.Fill(UpdateTable);
updateAdapter.Update(UpdateTable);
UpdateTable.AcceptChanges();
The following is used to create the data table to ensure it has the same fields/data types as the source data
private static DataTable CreateDataTable(DbDataReader pReader)
{
    DataTable schemaTable = pReader.GetSchemaTable();
    DataTable resultTable = new DataTable(<tableName>); // edited out personal info

    // loop for each row in the schema table
    try
    {
        foreach (DataRow dataRow in schemaTable.Rows)
        {
            // create a new DataColumn object and set values depending
            // on the current DataRow's values
            DataColumn dataColumn = new DataColumn();
            dataColumn.ColumnName = dataRow["ColumnName"].ToString();
            dataColumn.DataType = Type.GetType(dataRow["DataType"].ToString());
            dataColumn.ReadOnly = (bool)dataRow["IsReadOnly"];
            dataColumn.AutoIncrement = (bool)dataRow["IsAutoIncrement"];
            dataColumn.Unique = (bool)dataRow["IsUnique"];
            resultTable.Columns.Add(dataColumn);
        }
    }
    catch (Exception ex)
    {
        message = "Unable to create data table " + ex.Message;
        throw new Exception(message, ex);
    }
    return resultTable;
}
In case anyone is interested, I did manage to get around the problem, but never managed to get the data adapter to work. Basically what I did was as follows:
Create a list of objects, each with an index and a list of field values as members.
Read in the rows that have changed and store the values from the source data (i.e. the values that will overwrite the current ones in the object). In addition, create a comma-separated list of the indexes.
When finished, use the comma-separated list in a SQL IN statement to return the rows and load them into the data adapter.
For each one, run a LINQ query against the index and extract the new values, updating the data set. This sets the row status to Modified.
Then run the Update and the rows are updated correctly.
This isn't the quickest or neatest solution, but it does work and allows me to run the changes in batches; a rough sketch follows below.
Thanks
Andrew
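In case it helps anyone following the same route, here is a rough sketch of that workaround. The table name (MyTable), the key column (Id), and the ChangedRecord holder are hypothetical stand-ins for the real schema, and the LINQ step is condensed into a dictionary copy:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

// Hypothetical holder for step 1: the row's index (PK) plus its new values
class ChangedRecord
{
    public int Index;
    public Dictionary<string, object> NewValues = new Dictionary<string, object>();
}

static void ApplyChanges(SqlConnection connection, List<ChangedRecord> changed)
{
    // Steps 2-3: build the comma-separated index list and load only the affected rows
    string keyList = string.Join(",", changed.Select(c => c.Index));
    var adapter = new SqlDataAdapter(
        "SELECT * FROM MyTable WHERE Id IN (" + keyList + ")", connection); // hypothetical table/PK
    var builder = new SqlCommandBuilder(adapter); // generates the UPDATE command

    var updateTable = new DataTable();
    adapter.Fill(updateTable);
    updateTable.PrimaryKey = new[] { updateTable.Columns["Id"] };

    // Step 4: overwrite the loaded values; assigning marks each row as Modified
    foreach (ChangedRecord record in changed)
    {
        DataRow row = updateTable.Rows.Find(record.Index);
        foreach (KeyValuePair<string, object> kv in record.NewValues)
            row[kv.Key] = kv.Value;
    }

    // Step 5: push only the Modified rows back to the server
    adapter.Update(updateTable);
}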
Using C#
I have a data reader that returns a list of records from a MySQL database.
I am trying to write code that checks whether a field returned by the data reader is null. The logic behind this is: if the data reader has data for the field, display the info; otherwise hide the field.
I have tried:
cmd1 = new OdbcCommand("Select * from tb_car where vehicleno = '" + textbox2.Text + "';", dbcon);
dr1 = cmd1.ExecuteReader();
if (dr1["tb_car"] != DBNull.Value)
{
    textbox1.Text = "contains data";
}
else
{
    textbox1.Text = "is null";
}
The above code gives me this error:
Exception Details: System.IndexOutOfRangeException: Additional
Any help would be greatly appreciated...
I see a few problems here... First, it looks like you're trying to access the table name in the line:
if(dr1["tb_car"] != DBNull.Value
You should be passing a FIELD NAME instead of the table name. So if the table named "tb_car" had a field called CarId, you would want to have your code look like:
if(dr1["CarId"] != DBNull.Value)
If I'm right, then there is probably no field named "tb_car", and the Index is Out of Range error is because the DataReader is looking for an item in the column collection named "tb_car" and not finding it. That's pretty much what the error means.
Second, before you can even check a field, you have to call the DataReader's Read() method to read a row from the database.
so really your code should look like...

while (dr1.Read())
{
    if (dr1["CarId"] != DBNull.Value)
    {
        ....

and so on.
See here for the proper use of a DataReader: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader.read.aspx
Finally, if you're just checking to see if there are any rows in the table, you can ignore all of the above and use the HasRows property, as in

if (dr1.HasRows)
{
    ....

although if you're using the while (dr1.Read()) syntax, the code in the while loop will only execute if there are rows in the first place, so HasRows could potentially be unnecessary if you don't want to do anything with no results. You would still want to use it if you want to return a message like "no results found", of course.
Edit - Added
I think there's a problem also with the line

if (dr1["CarId"] != DBNull.Value)

You should be using the DataReader's IsDBNull() method, which takes the column's ordinal, as in

if (dr1.IsDBNull(dr1.GetOrdinal("CarId")))

Sorry I missed that the first time around.
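Putting those pieces together, a minimal sketch of the corrected pattern, reusing the question's dbcon and textboxes (the field name CarId is assumed; the parameterized query also avoids the SQL injection risk of concatenating textbox2.Text):

using (var cmd1 = new OdbcCommand("SELECT * FROM tb_car WHERE vehicleno = ?", dbcon))
{
    cmd1.Parameters.AddWithValue("@vehicleno", textbox2.Text); // ODBC binds parameters by position
    using (OdbcDataReader dr1 = cmd1.ExecuteReader())
    {
        textbox1.Text = dr1.HasRows ? "contains data" : "is null";
        while (dr1.Read())
        {
            if (!dr1.IsDBNull(dr1.GetOrdinal("CarId"))) // assumed field name
            {
                // the field has a value - display it
            }
        }
    }
}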
Use dr1.Read() to check that there is a row before attempting to read values. Read fetches the first row initially and then each subsequent row, returning true while a row is available and false at the end of the set.
e.g.

// for reading one row
if (dr1.Read())
{
    // do something with the first row
}

// for reading through multiple rows
while (dr1.Read())
{
    // do something with the current row
}