On my PC (Win7) the statement runs without error. If I copy the C# .exe to the server (Win2012) where the program will eventually run, I get the error
ORA-01843: not a valid month
I read a CSV file and insert it into an Oracle DB with the statement
command.CommandText = "INSERT INTO table (DATUM, ...) VALUES ('" + dr[0].ToString() + "', ...)";
dr[0].ToString() has the value "01.06.2016"
The column DATUM is of type DATE in the Oracle DB.
I debugged the code with a message box on both machines and compared the resulting statements.
I can't see any difference between the two statements, yet the one from the server raises the error when I execute int rowsupdated = command.ExecuteNonQuery();
I already compared the region settings and they are the same (German) on both systems. What else could cause the problem? Thanks
The part that fills the DataTable (the source of dr):
StreamReader oStreamReader = new StreamReader(Zielverzeichnis + Dateiname, System.Text.Encoding.UTF8); // UTF8 for umlauts
DataTable dtCSV_Import = null;
int RowCount = 0;
string[] ColumnNames = null;
string[] oStreamDataValues = null;
//read the stream data line by line until the end
while (!oStreamReader.EndOfStream)
{
String oStreamRowData = oStreamReader.ReadLine().Trim();
if (oStreamRowData.Length > 0)
{
oStreamDataValues = oStreamRowData.Split(';');
//Because the first row contains the column names, we populate
//the column names from the first row;
//RowCount == 0 will be true only once
if (RowCount == 0)
{
RowCount = 1;
ColumnNames = oStreamRowData.Split(';');
dtCSV_Import = new DataTable();
//using foreach looping through all the column names
foreach (string csvcolumn in ColumnNames)
{
DataColumn oDataColumn = new DataColumn(csvcolumn.ToUpper(), typeof(string));
//setting the default value of the newly created column to string.Empty
oDataColumn.DefaultValue = string.Empty;
//adding the newly created column to the table
dtCSV_Import.Columns.Add(oDataColumn);
}
}
else
{
//create a new DataRow with the same schema as the DataTable
DataRow oDataRow = dtCSV_Import.NewRow();
//using foreach looping through all the column names
//Check which is smaller, the columns from the XML or the actual columns in the CSV -> otherwise error at [i]
if (oStreamDataValues.Length < ColumnNames.Length)
{
for (int i = 0; i < oStreamDataValues.Length; i++)
{
oDataRow[ColumnNames[i]] = oStreamDataValues[i] == null ? string.Empty : oStreamDataValues[i].ToString();
}
}
else
{
for (int i = 0; i < ColumnNames.Length; i++)
{
oDataRow[ColumnNames[i]] = oStreamDataValues[i] == null ? string.Empty : oStreamDataValues[i].ToString();
}
}
//adding the newly created row with data to the oDataTable
dtCSV_Import.Rows.Add(oDataRow);
}
}
}
//close the oStreamReader object
oStreamReader.Close();
//release all the resources used by the oStreamReader object
oStreamReader.Dispose();
If you are inserting values into a date column and try to insert a string value then Oracle will implicitly call TO_DATE() using the NLS_DATE_FORMAT session parameter as the format mask. If this format mask does not match then you will get an exception.
Session parameters can be set by individual users within their sessions - so if user Alice has the expected parameters, this does not mean that user Bob will have the same parameters, and the identical query will not work because you are relying on implicit casting of values. Or even worse: Bob has the expected parameters today, then tomorrow decides he would prefer his dates formatted as DD-MON-YYYY and changes his NLS_DATE_FORMAT, and suddenly, without any change to your code, everything breaks and you are going to have a very bad time debugging the error.
If you want to insert a date then either:
Pass it as a bind variable (the best option) without converting it to a string; or
Use date literals (i.e. DATE '2016-06-01'); or
Use TO_DATE() with a specified format mask (i.e. TO_DATE( '" + dr[0].ToString() + "', 'DD.MM.YYYY' )).
You can read about bind variables in the Oracle Documentation or in this SO question.
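For example, a minimal sketch of the bind-variable approach applied to the original question's code, assuming command is an OracleCommand from the Oracle managed provider (Oracle.ManagedDataAccess.Client); the table name is a placeholder and dr[0] holds the "01.06.2016" string:
// Sketch only: parse the CSV value once in .NET, then bind it as a DateTime -
// no dependency on the session's NLS_DATE_FORMAT.
DateTime datum = DateTime.ParseExact(dr[0].ToString(), "dd.MM.yyyy",
    System.Globalization.CultureInfo.InvariantCulture);

command.CommandText = "INSERT INTO my_table (DATUM) VALUES (:datum)";
command.Parameters.Add(new OracleParameter("datum", datum)); // assumes Oracle.ManagedDataAccess.Client
int rowsupdated = command.ExecuteNonQuery();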
I got data from a text file into a DataTable, and now when I try to insert that data into a SQL Server 2008 database, I get the following error:
InvalidOperationException: String or binary data would be truncated
I cannot find the source of the error, i.e. which record is throwing it.
The code is as below
for (int i = 0; i < dt.Columns.Count; i++)
{
if (i == 159)
{
}
bulkCopy.ColumnMappings.Add(dt.Columns[i].ColumnName,DestTable.Columns[i].ColumnName);
}
bulkCopy.BulkCopyTimeout = 600;
bulkCopy.DestinationTableName = "dbo.TxtFileInfo";
bulkCopy.WriteToServer(dt);
I have the DataTable in the dt variable, and the columns match between the DataTable created from the text file and the empty table created in the database that the values are added to.
I have copied all records from the text file into the DataTable using the code below.
while (reader.Read())
{
int count1 = reader.FieldCount;
for (int i = 0; i < count1; i++)
{
string value = reader[i].ToString();
list.Add(value);
}
dt.Rows.Add(list.ToArray());
list.Clear();
}
I get proper records from the text file, and the number of columns is equal. My database table TextToTable has all columns of data type nvarchar(50), and I am fetching each record as a string from the text file. But during the bulk insert the error shown is
Cannot convert string to nvarchar(50)
It seems you are trying to insert data that is longer than the column allows in the DB (for example, your data has length 20 but the DB column only accepts 10).
Check the data coming from the text file: add an if condition in your code that checks the length of each value, and set a breakpoint to troubleshoot (see the sketch after the ALTER example below).
If that is the case, increase the length of the column in the DB.
ALTER TABLE tablename
ALTER COLUMN columnname VARCHAR(xxx)
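A minimal sketch of such a length check, assuming the destination columns are nvarchar(50) and the DataTable is in dt as in the question:
// Sketch: report every value in the DataTable that would not fit into an nvarchar(50) column.
const int maxLength = 50;
foreach (DataRow row in dt.Rows)
{
    foreach (DataColumn col in dt.Columns)
    {
        string value = row[col] == DBNull.Value ? string.Empty : row[col].ToString();
        if (value.Length > maxLength)
            Console.WriteLine("Column '" + col.ColumnName + "' has a value of length " + value.Length + ": " + value);
    }
}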
I'm trying to fetch data from multiple tables from an Oracle db and insert it into a sql db. The problem that I am running into is that I am fetching almost 50 columns of data all of different datatypes. I then proceed to insert these individual column values into a SQL statement which then inserts the data into the sql db. So the algo looks something like this:
Fetch row data{
create a variable for each individual column value ( int value = reader.getInt32(0); )
add a sqlparameter for it (command.Parameters.Add(new SqlParameter("value", value)); )
once all the 50 or so variables have been created make a sql statement
Insert into asdf values (value,........)
}
Doing it this way for a table with <10 columns seems OK, but beyond that this process becomes tedious and verbose. I was wondering if there is a simpler way of doing this, like fetching the row data, automatically determining the column data type, automatically creating the variable, and automatically inserting it into the SQL statement. I would appreciate it if anyone could direct me to the right way of doing this.
The data reader has a neutral GetValue method returning an object, and the SqlCommand's Parameters collection has an AddWithValue method that does not require you to specify a parameter type.
for (int i = 0; i < reader.VisibleFieldCount; i++) {
object value = reader.GetValue(i);
command.Parameters.AddWithValue("@" + i, value);
}
You could also create the SQL command automatically
var columns = new StringBuilder();
var values = new StringBuilder();
for (int i = 0; i < reader.VisibleFieldCount; i++) {
values.Append("#").Append(i).Append(", ");
columns.Append("[").Append(reader.GetName(i)).Append("], ");
}
values.Length -= 2; // Remove last ", "
columns.Length -= 2;
string insert = String.Format("INSERT INTO myTable ({0}) VALUES ({1})",
columns.ToString(), values.ToString());
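A minimal sketch of how the two pieces could fit together, assuming an open data reader named reader (e.g. an OracleDataReader), an open SqlConnection named sqlConnection, and that myTable is a placeholder for the real destination table:
// Sketch: build the INSERT once (as above), then execute it for every row from the reader.
using (var command = new SqlCommand(insert, sqlConnection))
{
    while (reader.Read())
    {
        command.Parameters.Clear();
        for (int i = 0; i < reader.VisibleFieldCount; i++)
        {
            command.Parameters.AddWithValue("@" + i, reader.GetValue(i));
        }
        command.ExecuteNonQuery();
    }
}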
I am attempting to loop through a dataset's rows and columns in search for a match between the dataset's name column -- and the ColumnName from a DataReader object.
I have a new table called RECORDS which is empty at program startup. I also have a pre-populated table called ColumnPositions with a sub-set of column names found in the RECORDS table. This routine is intended to show a subset of all the available columns -- as a default display style.
My code works...except for the line of code that gets the dr["type"] value. I get the error:
The name 'colType' does not exist in the current context.
As you can clearly see, my string variables are declared outside the WHILE and FOREACH loops. The colName = line works just fine, but colType fails every time. If I check ? dr["type"] in the Immediate Window in VS2010, I get the result integer. But when I check ? colType, I get the above noted error message.
The IntelliSense for the DataRow object dr reveals an array of 6 items. Index 1 in the array maps to name. Index 2 maps to type. When I check the value of ? dr[2] in the Immediate Window, the same result comes back: integer. This is correct. But whenever this value is assigned to colType, VS2010 complains.
I'm no newbie to C#, so I did a lot of testing and Googling before posting here. I'm hoping that this is a matter of me not seeing the forest for the trees.
Here's my code:
// get table information for RECORDS
SQLiteCommand tableInfo = new SQLiteCommand("PRAGMA table_info(Records)", m_cnCaseFile);
SQLiteDataAdapter adapter = new SQLiteDataAdapter(tableInfo);
DataSet ds = new DataSet();
adapter.Fill(ds);
DataTable dt = ds.Tables[0];
SQLiteCommand cmd = new SQLiteCommand("SELECT * FROM ColumnPositions WHERE ColumnStyle_ID = " + styleID + " ORDER BY ColumnPosition_ID ASC", m_cnCaseFile);
SQLiteDataReader colReader = cmd.ExecuteReader();
string colName = "";
string colType = "";
if (dt != null && colReader.HasRows)
{
while (colReader.Read())
{
foreach(DataRow dr in dt.Rows)
{
colType = Convert.ToString(dr["type"]);
colName = dr["name"].ToString();
if (colReader["ColumnName"].ToString() == colName)
{
DataGridViewColumn dgvCol = new DataGridViewColumn();
}
}
}
}
dt.Dispose();
colReader.Close();
Instead of using dr["name"].ToString(), it is better to use Convert.ToString(dr["name"]), since Convert.ToString handles null values without throwing.
Try using the array position instead of the column name:
colType = Convert.ToString(dr[2]);
and
colName = dr[1].ToString();
You probably don't need this, but here is the documentation for values returned by the SQLite PRAGMA table_info() command. LINK
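For reference, a short sketch of the standard column layout PRAGMA table_info() returns (cid, name, type, notnull, dflt_value, pk), which is why the positions above line up with the question's observations:
// Sketch: the six columns returned by PRAGMA table_info(), accessed by position.
foreach (DataRow dr in dt.Rows)
{
    int cid = Convert.ToInt32(dr[0]);       // 0 = cid (column id)
    string name = Convert.ToString(dr[1]);  // 1 = name
    string type = Convert.ToString(dr[2]);  // 2 = declared type, e.g. "integer"
    // 3 = notnull, 4 = dflt_value, 5 = pk
}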
I want to bulk upload CSV file data to SQL Server 2005 from C# code, but I am encountering the error below -
Received an invalid column length from the bcp client for colid 6.
when the bulk copy writes to the database server
I know this post is old, but I ran into this same issue and finally figured out a solution to determine which column was causing the problem and report it back as needed. I determined that the colid returned in the SqlException is not zero-based, so you need to subtract 1 from it to get the value. After that it is used as the index into the _sortedColumnMappings ArrayList of the SqlBulkCopy instance, not the index of the column mappings that were added to the SqlBulkCopy instance. One thing to note is that SqlBulkCopy will stop on the first error received, so this may not be the only issue, but it at least helps to figure it out.
try
{
bulkCopy.WriteToServer(importTable);
sqlTran.Commit();
}
catch (SqlException ex)
{
if (ex.Message.Contains("Received an invalid column length from the bcp client for colid"))
{
string pattern = @"\d+";
Match match = Regex.Match(ex.Message.ToString(), pattern);
var index = Convert.ToInt32(match.Value) -1;
FieldInfo fi = typeof(SqlBulkCopy).GetField("_sortedColumnMappings", BindingFlags.NonPublic | BindingFlags.Instance);
var sortedColumns = fi.GetValue(bulkCopy);
var items = (Object[])sortedColumns.GetType().GetField("_items", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(sortedColumns);
FieldInfo itemdata = items[index].GetType().GetField("_metadata", BindingFlags.NonPublic | BindingFlags.Instance);
var metadata = itemdata.GetValue(items[index]);
var column = metadata.GetType().GetField("column", BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).GetValue(metadata);
var length = metadata.GetType().GetField("length", BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).GetValue(metadata);
throw new DataFormatException(String.Format("Column: {0} contains data with a length greater than: {1}", column, length));
}
throw;
}
One of the data columns in the Excel file (column id 6) has one or more cells whose data exceeds the length of the corresponding column's data type in the database.
Verify the data in the Excel file. Also check that its format complies with the database table schema.
To avoid this, increase the length of the string data type column in the database table.
Hope this helps.
I faced a similar kind of issue while passing a string to a database table using the SqlBulkCopy option. The string I was passing was 3 characters long, whereas the destination column length was varchar(20). I tried trimming the string before inserting into the DB using the Trim() function to check if the issue was due to any space (leading or trailing) in the string. After trimming the string, it worked fine.
You can try text.Trim()
Check the size of the columns in the table you are doing the bulk insert/copy into. The varchar or other string columns might need to be extended, or the values you are inserting need to be trimmed. The column order should also be the same as in the table.
e.g., increase the size of a varchar column from 30 to 50 =>
ALTER TABLE [dbo].[TableName]
ALTER COLUMN [ColumnName] Varchar(50)
I have been using SqlBulkCopy to transfer an Access database to SQL. Simply put, I create each table in SQL, load the corresponding Access table into a DataTable, then SqlBulkCopy the DataTable to the corresponding SQL table. Because I used a standardized function to do this, I did not set up any ColumnMappings. When there are no ColumnMappings, SqlBulkCopy seems to simply map column 0 to column 0, column 1 to column 1, etc.
In my case, I had to make sure that the column order in my Access table matched the column order in my new SQL table EXACTLY. Once I did that, it ran as expected.
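If matching the column order is not practical, explicit name-based mappings are an alternative; a minimal sketch, assuming the Access data is in a DataTable named accessTable and both sides share column names:
// Sketch: map columns by name instead of relying on ordinal position.
foreach (DataColumn col in accessTable.Columns)
{
    bulkCopy.ColumnMappings.Add(col.ColumnName, col.ColumnName);
}
bulkCopy.DestinationTableName = "dbo.MyTable"; // placeholder table name
bulkCopy.WriteToServer(accessTable);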
Great piece of code, thanks for sharing!
I ended up using reflection to get the actual DataMemberName to throw back to a client on an error (I'm using bulk save in a WCF service). Hopefully someone else will find how I did it useful.
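// Note: the snippet below assumes using System.Linq, System.Reflection and
// System.Runtime.Serialization (for DataMemberAttribute) are in scope.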
static string GetDataMemberName(string colName, object t) {
foreach(PropertyInfo propertyInfo in t.GetType().GetProperties()) {
if (propertyInfo.CanRead) {
if (propertyInfo.Name == colName) {
var attributes = propertyInfo.GetCustomAttributes(typeof(DataMemberAttribute), false).FirstOrDefault() as DataMemberAttribute;
if (attributes != null && !string.IsNullOrEmpty(attributes.Name))
return attributes.Name;
return colName;
}
}
}
return colName;
}
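A hypothetical usage, assuming column comes from the reflection lookup in the earlier answer and myDto is an instance of the data contract being saved:
// Hypothetical usage: translate the failing column name before reporting it to the WCF client.
// FaultException comes from System.ServiceModel.
string clientFacingName = GetDataMemberName(column.ToString(), myDto);
throw new FaultException("Value too long in column: " + clientFacingName);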
I got this error message with a much more recent SSIS version (VS 2015 Enterprise; I think it's SSIS 2016). I will comment here because this is the first reference that comes up when you google this error message. I think it happens mostly with character columns when the source character size is larger than the target character size. I got this message when I was using an ADO.NET input to MS SQL from a Teradata database. Funny, because the prior OLEDB writes to MS SQL handled all the character conversion perfectly with no coding overrides.
The colid number and the corresponding Destination Input column # you sometimes get with the colid message are worthless. It's not the column when you count down from the top of the mapping or anything like that. If I were Microsoft, I'd be embarrassed to give an error message that looks like it's pointing at the problem column when it isn't. I found the problem colid by making an educated guess, then changing the input to the mapping to "Ignore", rerunning, and seeing if the message went away.
In my case and in my environment I fixed it by substringing the Teradata input down to the character size of the MS SQL declaration for the output column. Check and make sure your input substring propagates through all your data conversions and mappings. In my case it didn't, and I had to delete all my Data Conversions and Mappings and start over again. Again, funny that OLEDB just handled it while ADO.NET threw the error and needed all this intervention to make it work. In general you should use OLEDB when your target is MS SQL.
I just stumbled upon this and, using @b_stil's snippet, I was able to figure out the culprit column. On further investigation, I figured I needed to trim the column just like @Liji Chandran suggested, but I was using IExcelDataReader and I couldn't figure out an easy way to validate and trim each of my 160 columns.
Then I stumbled upon the ValidatingDataReader class from CSVReader.
The interesting thing about this class is that it gives you the source and destination column data lengths, the culprit row, and even the column value that's causing the error.
All I did was just trim all (nvarchar, varchar, char and nchar) columns.
I just changed my GetValue method to this:
object IDataRecord.GetValue(int i)
{
object columnValue = reader.GetValue(i);
if (i > -1 && i < lookup.Length)
{
DataRow columnDef = lookup[i];
if
(
(
(string)columnDef["DataTypeName"] == "varchar" ||
(string)columnDef["DataTypeName"] == "nvarchar" ||
(string)columnDef["DataTypeName"] == "char" ||
(string)columnDef["DataTypeName"] == "nchar"
) &&
(
columnValue != null &&
columnValue != DBNull.Value
)
)
{
string stringValue = columnValue.ToString().Trim();
columnValue = stringValue;
if (stringValue.Length > (int)columnDef["ColumnSize"])
{
string message =
"Column value \"" + stringValue.Replace("\"", "\\\"") + "\"" +
" with length " + stringValue.Length.ToString("###,##0") +
" from source column " + (this as IDataRecord).GetName(i) +
" in record " + currentRecord.ToString("###,##0") +
" does not fit in destination column " + columnDef["ColumnName"] +
" with length " + ((int)columnDef["ColumnSize"]).ToString("###,##0") +
" in table " + tableName +
" in database " + databaseName +
" on server " + serverName + ".";
if (ColumnException == null)
{
throw new Exception(message);
}
else
{
ColumnExceptionEventArgs args = new ColumnExceptionEventArgs();
args.DataTypeName = (string)columnDef["DataTypeName"];
args.DataType = Type.GetType((string)columnDef["DataType"]);
args.Value = columnValue;
args.SourceIndex = i;
args.SourceColumn = reader.GetName(i);
args.DestIndex = (int)columnDef["ColumnOrdinal"];
args.DestColumn = (string)columnDef["ColumnName"];
args.ColumnSize = (int)columnDef["ColumnSize"];
args.RecordIndex = currentRecord;
args.TableName = tableName;
args.DatabaseName = databaseName;
args.ServerName = serverName;
args.Message = message;
ColumnException(args);
columnValue = args.Value;
}
}
}
}
return columnValue;
}
Hope this helps someone
I have a SQL data reader that reads 2 columns from a sql db table.
Once it has done its bit, it then starts again, selecting another 2 columns.
I would pull the whole lot in one go but that presents a whole other set of challenges.
My problem is that the table contains a large amount of data (some 3 million rows or so) which makes working with the entire set a bit of a problem.
I'm trying to validate the field values so i'm pulling the ID column then one of the other cols and running each value in the column through a validation pipeline where the results are stored in another database.
My problem is that when the reader hits the end of handling one column, I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through.
Without a full garbage collect I will definitely run out of RAM.
Anyone got any ideas how I can do this?
I'm using lots of small reusable objects, my thought was that I could just call GC.Collect() on the end of each read cycle and that would flush everything out, unfortunately that isn't happening for some reason.
OK, I hope this fits, but here's the method in question ...
void AnalyseTable(string ObjectName, string TableName)
{
Console.WriteLine("Initialising analysis process for SF object \"" + ObjectName + "\"");
Console.WriteLine(" The data being used is in table [" + TableName + "]");
// get some helpful stuff from the databases
SQLcols = Target.GetData("SELECT Column_Name, Is_Nullable, Data_Type, Character_Maximum_Length FROM information_schema.columns WHERE table_name = '" + TableName + "'");
SFcols = SchemaSource.GetData("SELECT * FROM [" + ObjectName + "Fields]");
PickLists = SchemaSource.GetData("SELECT * FROM [" + ObjectName + "PickLists]");
// get the table definition
DataTable resultBatch = new DataTable();
resultBatch.TableName = TableName;
int counter = 0;
foreach (DataRow Column in SQLcols.Rows)
{
if (Column["Column_Name"].ToString().ToLower() != "id")
resultBatch.Columns.Add(new DataColumn(Column["Column_Name"].ToString(), typeof(bool)));
else
resultBatch.Columns.Add(new DataColumn("ID", typeof(string)));
}
// create the validation results table
//SchemaSource.CreateTable(resultBatch, "ValidationResults_");
// cache the id's from the source table in the validation table
//CacheIDColumn(TableName);
// validate the source table
// iterate through each sql column
foreach (DataRow Column in SQLcols.Rows)
{
// we do this here to save making this call a lot more later
string colName = Column["Column_Name"].ToString().ToLower();
// id col is only used to identify records not in validation
if (colName != "id")
{
// prepare to process
counter = 0;
resultBatch.Rows.Clear();
resultBatch.Columns.Clear();
resultBatch.Columns.Add(new DataColumn("ID", typeof(string)));
resultBatch.Columns.Add(new DataColumn(colName, typeof(bool)));
// identify matching SF col
foreach (DataRow SFDefinition in SFcols.Rows)
{
// case insensitive compare on the col name to ensure we have a match ...
if (SFDefinition["Name"].ToString().ToLower() == colName)
{
// select the id column and the column data to validate (current column data)
using (SqlCommand com = new SqlCommand("SELECT ID, [" + colName + "] FROM [" + TableName + "]", new SqlConnection(ConfigurationManager.ConnectionStrings["AnalysisTarget"].ConnectionString)))
{
com.Connection.Open();
SqlDataReader reader = com.ExecuteReader();
Console.WriteLine(" Validating column \"" + colName + "\"");
// foreach row in the given object dataset
while (reader.Read())
{
// create a new validation result row
DataRow result = resultBatch.NewRow();
bool hasFailed = false;
// validate it
object vResult = ValidateFieldValue(SFDefinition, reader[Column["Column_Name"].ToString()]);
// if we have the relevant col definition lets decide how to validate this value ...
result[colName] = vResult;
if (vResult is bool)
{
// if it's deemed to have failed validation mark it as such
if (!(bool)vResult)
hasFailed = true;
}
// no point in adding rows we can't trace
if (reader["id"] != DBNull.Value && reader["id"] != null)
{
// add the failed row to the result set
if (hasFailed)
{
result["id"] = reader["id"];
resultBatch.Rows.Add(result);
}
}
// submit to db in batches of 200
if (resultBatch.Rows.Count > 199)
{
counter += resultBatch.Rows.Count;
Console.Write(" Result batch completed,");
SchemaSource.Update(resultBatch, "ValidationResults_");
Console.WriteLine(" committed " + counter.ToString() + " fails to the database so far.");
Console.SetCursorPosition(0, Console.CursorTop-1);
resultBatch.Rows.Clear();
}
}
// get rid of these likely very heavy objects
reader.Close();
reader.Dispose();
com.Connection.Close();
com.Dispose();
// ensure .Net does a full cleanup because we will need the resources.
GC.Collect();
if (resultBatch.Rows.Count > 0)
{
counter += resultBatch.Rows.Count;
Console.WriteLine(" All batches for column complete,");
SchemaSource.Update(resultBatch, "ValidationResults_");
Console.WriteLine(" committed " + counter.ToString() + " fails to the database.");
}
}
}
}
}
Console.WriteLine(" Completed processing column \"" + colName + "\"");
Console.WriteLine("");
}
Console.WriteLine("Object processing complete.");
}
Could you post some code? .NET's data reader is supposed to be a 'fire-hose' that is stingy on RAM unless, as Freddy suggests, your column-data values are large. How long does this validation+DB write take?
In general, if a GC is needed and can be done, it will be done. I may sound like a broken record but if you have to GC.Collect() something else is wrong.
Open the reader with SequentialAccess; it might give you the behavior you need. Also, assuming that's a blob, you might be better off reading it in chunks.
Provides a way for the DataReader to handle rows that contain columns with large binary values. Rather than loading the entire row, SequentialAccess enables the DataReader to load data as a stream. You can then use the GetBytes or GetChars method to specify a byte location to start the read operation, and a limited buffer size for the data being returned.
When you specify SequentialAccess, you are required to read from the columns in the order they are returned, although you are not required to read each column. Once you have read past a location in the returned stream of data, data at or before that location can no longer be read from the DataReader. When using the OleDbDataReader, you can reread the current column value until reading past it. When using the SqlDataReader, you can read a column value only once.
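A minimal sketch of that approach, assuming System.Data.SqlClient, an open SqlConnection named connection, and a placeholder query that returns an ID plus one large binary column:
// Sketch: read with SequentialAccess and pull the large column in chunks via GetBytes.
using (SqlCommand cmd = new SqlCommand("SELECT ID, [SomeColumn] FROM [SomeTable]", connection))
using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
    byte[] buffer = new byte[8192];
    while (reader.Read())
    {
        // with SequentialAccess the columns must be read in the order they are returned
        string id = reader.GetValue(0).ToString();
        long offset = 0;
        long read;
        while ((read = reader.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
        {
            offset += read;
            // process the chunk in 'buffer' here instead of holding the whole value in memory
        }
    }
}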