Is there a way to determine which column causes the error? - c#

I am working on a C# project that gathers data from different sources and stores it in a SQL Server database. I sometimes get the String or binary data would be truncated error, which is very annoying. I want to determine which column causes this error and log it. Is there another way than checking parameter lengths?
What I do now is, if the column is varchar(50), check whether the data length is greater than 50. This feels like a workaround, and I wonder if there is a neater solution.
Edit
if (data1.Length > 50) logIt("col1, data1, condition");
else if (data2.Length > 80) logIt("col2, data2, condition");
else
{
    // Note: use SqlDbType here; the (string, object) SqlParameter overload
    // would otherwise treat DbType.String as the parameter *value*.
    SqlParameter p1 = new SqlParameter("@p1", SqlDbType.NVarChar);
    p1.Value = string.IsNullOrEmpty(data1) ? SqlString.Null : (object)data1;
    s1.Parameters.Add(p1);
    SqlParameter p2 = new SqlParameter("@p2", SqlDbType.NVarChar);
    p2.Value = string.IsNullOrEmpty(data2) ? SqlString.Null : (object)data2;
    s1.Parameters.Add(p2);
    s1.CommandText = "UPDATE mytable SET col1=@p1, col2=@p2 WHERE condition=@condition";
    s1.ExecuteNonQuery();
}

void logIt(string p)
{
    // Append, so earlier entries are not overwritten on every call.
    using (StreamWriter writer = new StreamWriter("log.txt", true))
    {
        writer.WriteLine("Caused by: " + p);
        writer.WriteLine(DateTime.Now);
        writer.WriteLine("--------------------------------------------");
    }
}

Personally I would always check all of my parameters against the table that contains the data they will be used in. If the table says varchar(50), my parameter should be no longer than 50. I never write a stored proc without doing this, and I presume you can similarly limit the parameter definition on the C# side. Don't allow a user to type in more information than your table field can accept.
If you are pulling data from other databases or Excel spreadsheets, then you may have a different set of problems.
First you need to examine the incoming data carefully and determine whether the column size you currently have is too small for the needed content. I find that often when the data is too large, the information in the field is garbage and should be thrown out (things like notes about a contact being placed in the email field instead of an email, for instance). You may even want to consider whether you should be validating the input data in some fields.
If you need to keep the size and you aren't concerned about losing the extra characters, then you need to cast the data to the correct size before inserting it into your database, or you need to create an exception process that pulls the records which cannot be sent to the database into an exception table and returns them to the people who own the original data to be fixed.
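If you automate that check on the C# side, the limits don't have to be hard-coded; you can read them from INFORMATION_SCHEMA once and validate every string before executing the command. A minimal sketch, assuming System.Data.SqlClient; the table name is the question's "mytable", and logIt is the question's logger:
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch: read the varchar/nvarchar max lengths for a table once, then
// check every string value against them before executing the command.
// (max) columns report CHARACTER_MAXIMUM_LENGTH = -1, so they are skipped.
static Dictionary<string, int> GetMaxLengths(SqlConnection cn, string table)
{
    var lengths = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
    using (var cmd = new SqlCommand(
        @"SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
          FROM INFORMATION_SCHEMA.COLUMNS
          WHERE TABLE_NAME = @table AND CHARACTER_MAXIMUM_LENGTH > 0", cn))
    {
        cmd.Parameters.AddWithValue("@table", table);
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                lengths[reader.GetString(0)] = reader.GetInt32(1);
    }
    return lengths;
}

// Usage: log the offending column instead of guessing.
var maxLengths = GetMaxLengths(connection, "mytable");
if (data1 != null && data1.Length > maxLengths["col1"])
    logIt("col1 would be truncated: " + data1.Length + " > " + maxLengths["col1"]);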

As you mentioned, in one of the INSERT statements you are attempting to insert a string that is too long into a varchar(50) column.
You can increase the column size (to 50+, or 100) to fix the issue. To find which column is causing it, add some logging, run SQL Profiler, or enable all exceptions in VS under Debug -> Exceptions (check all exceptions).

Related

C# & SQL Server Insert ExecuteNonQuery: which column is causing the error "String or Binary Data Would Be Truncated?"

I'm trying to insert into a table in a C# program. I have this insert command:
var insertSql = @"INSERT INTO dbo.[Case]
VALUES (@Id, @IsDeleted, @CaseNumber, @ContactId, @AccountId, @ParentId, @SuppliedName, @SuppliedEmail, @SuppliedPhone, @SuppliedCompany, @Type, @RecordTypeId, @Status, @Reason, @Origin...
And then I've got many lines adding in the parameters like so:
var command = new SqlCommand(insertSql, easySoftConn);
if (case2.Id != null)
    command.Parameters.AddWithValue("@Id", case2.Id);
else
    command.Parameters.AddWithValue("@Id", DBNull.Value);
if (case2.IsDeleted != null)
{
    if (case2.IsDeleted == "true")
        command.Parameters.AddWithValue("@IsDeleted", 1);
    else
        command.Parameters.AddWithValue("@IsDeleted", 0);
}
else
    command.Parameters.AddWithValue("@IsDeleted", DBNull.Value);
if (case2.CaseNumber != null)
    command.Parameters.AddWithValue("@CaseNumber", case2.CaseNumber);
else
    command.Parameters.AddWithValue("@CaseNumber", DBNull.Value);
if (case2.ContactId != null)
    command.Parameters.AddWithValue("@ContactId", case2.ContactId);
else
    command.Parameters.AddWithValue("@ContactId", DBNull.Value);
...
When I finally execute the insert:
try
{
    command.ExecuteNonQuery();
}
catch (System.Data.SqlClient.SqlException e)
{
    CLog.Write(e.Message, CLog.ErrLvl.Error);...
}
I get the error:
String or binary data would be truncated
My issue is, the error doesn't tell me which column would be truncated. I've got 80 columns I'm inserting into, and I'd rather not go through them one-by-one. Is there a way to get the error handling to tell me exactly which field is throwing the error?
EDIT: I have a full stack trace in my log file but it still doesn't tell me which column, I just shortened it to the actual error here.
Switching to using strongly typed data access would head this one off sooner:
Add a dataset file to your project
Open it, right click the surface, add tableadapter, set connection parameters, add a query of SELECT * FROM [Case]
Finish the wizard; a datatable and adapter are generated. The DB is used to drive the creation, so all the string columns have a MaxLength property in the dataset that comes from the DB
Attempting to add a row to this table will now cause an error like "unable to set column XYZ, the value violates the MaxLength limit for the column"
Data access code looks like:
var dt = new YourDataSetNameHere.CaseDataTable();
dt.AddCaseRow(put, your, values, here, you, dont, need, to, worry, about, null, this, or, data, type, that, because, VS, handles, it, all, for, you, in, the, DataSet.Designer.cs, file);
new YourDataSetNameHereTableAdapters.CaseTableAdapter().Update(dt); //save the new row;
So it'll save you a boatload of time writing boring data access code too
Depending on your SQL version you can apply a KB to get this to show more data as stated here - Link
It effectively starts to show messages like the following
Msg 2628, Level 16, State 6, Procedure ProcedureName, Line Linenumber
String or binary data would be truncated in table '%.*ls', column
'%.*ls'. Truncated value: '%.*ls'.
This came from this great post (Link), which goes much further and explains how you can search for the column should applying the KB not be possible. The post also talks about how you can do manual searching, although I'd imagine that if the list of columns is too large, that may be something you want to avoid.
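On a patched server you can also turn the richer message on for your own session with trace flag 460 (the flag that KB introduces). A hedged sketch reusing the connection, command, and CLog logger from the question; it assumes a SQL Server 2016/2017 build with the KB applied and permission to run DBCC TRACEON:
// Switch on trace flag 460 for this session so the truncation exception
// names the table, column, and offending value.
using (var dbcc = easySoftConn.CreateCommand())
{
    dbcc.CommandText = "DBCC TRACEON (460);";
    dbcc.ExecuteNonQuery();
}

try
{
    command.ExecuteNonQuery();
}
catch (System.Data.SqlClient.SqlException e)
{
    // Now reads like: "String or binary data would be truncated in table
    // 'MyDb.dbo.Case', column 'SuppliedName'. Truncated value: '...'."
    CLog.Write(e.Message, CLog.ErrLvl.Error);
}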
It looks like the value of one or more of your parameters is longer than the table column can hold.
You should look at the table column definitions.

SSIS: Column Size Not Changing Based on Query

I have a package. It has a query that feeds into a Script Component.
In the query I am selecting a varchar(8) column from a table and then I CAST(myDateCol AS varchar(10)).
SELECT
myPK,
CAST(myDateCol AS varchar(10)), --myDateCol defined as varchar(8)
myOtherCol
FROM
MyServer.MySchema.MyTable
In my script, I am trying to add two characters to the Row.myDateCol in Input0 but I get a Buffer Error and it is in the property setter for myDateCol. You can see that it sets the property to 8 characters but errors out after that.
What I've done is add an output column with Length = 10, set it, and mapped that to the next component in the package but that seems a little silly.
Is there a way to force the size of your input columns based off of the query OR is there a way that I can manually force a refresh in case the package is just stuck thinking that I'm dealing with a varchar(8) as the CAST operation was added later?
Additional Info:
Row.myDateCol = "20170404"
Row.myDateCol = "2017-04-04" // Errors out
This is normal behavior for SSIS. When you create a data source which uses a SQL query, SSIS looks at your query and builds the metadata for the dataflow. The data source will only recalculate that metadata if you change the structure of your query, for example the number of columns or their names.
The easiest way to force a refresh of the data types, without resorting to renaming columns, is to go to the Columns page of the data source editor and untick, then re-tick, the top tick box of the Available External Columns. This deselects and re-selects all columns and refreshes the metadata at the same time. You can easily confirm this by hovering your mouse over the External\Output column names listed in the lower section.
Your problem is the result of dealing with date(time) values as text instead of the numbers they are. I really cannot tell from your question whether you want the extra characters added in at the data layer (SQL) or at the application (C#) layer.
Casting varchar(8) => varchar(10) will still just return a varchar(8)-length value if you don't pad it. You could try casting varchar(8) to char(10).
Another option would be a double conversion of your column value to Date and then back to your desired varchar(10):
SELECT myPK,
Convert(VarChar(10), Convert(Date, myDateCol, 112), 120),
myOtherCol
FROM
MyServer.MySchema.MyTable
So, after some playing around, I found that renaming the column changed the size to varchar(10) per below:
SELECT
myPK,
CAST(myDateCol AS varchar(10)) AS DATECOL,
myOtherCol
FROM
MyServer.MySchema.MyTable
I then changed it back
SELECT
myPK,
CAST(myDateCol AS varchar(10)),
myOtherCol
FROM
MyServer.MySchema.MyTable
And the change stuck. I don't know why or how but VS/SSIS somehow never refreshed itself to change to a different type. I assume it has no handling for query changes after the initial query is input unless names/aliases change.
This wasn't just my machine either. Weird.

Cannot insert sdo_geometry with more than 500 vertices

I have the following table
CREATE TABLE MYTABLE (MYID VARCHAR2(5), MYGEOM MDSYS.SDO_GEOMETRY );
And the SQL statement below:
INSERT INTO MYTABLE (MYID,MYGEOM) VALUES
( 255, SDO_GEOMETRY(2003, 2554, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
SDO_ORDINATE_ARRAY(-34.921816571,-8.00119170599993,
...,-34.921816571,-8.00119170599993)));
Even after reading several articles about possible solutions, I couldn't find out how to insert this sdo_geometry object.
Oracle complains with this message:
ORA-00939 - "too many arguments for function"
I know that it's not possible to insert more than 999 values at once.
I tried stored procedure solutions, but I'm not an Oracle expert, and maybe I missed something.
Could someone give me an example of code in C# or PL/SQL (or both), with or without a stored procedure, to insert that row?
I'm using Oracle 11g and OracleDotNetProvider v12.1.400 on VS2015, and my source of spatial data comes from external JSON (so, no database-to-database); I can only use solutions using this provider, without datafiles or direct database handling.
I'm using SQLDeveloper to test the queries.
Please don't point me to articles if you are not sure they work with this row/value.
I finally found an effective solution. Here: Constructing large sdo_geometry objects in Sql Developer and SqlPlus. Pls-00306 Error
The limitation you see is old. It is based on the idea that no-one would ever write a function with more than 1000 parameters (actually 999 input parameters and 1 return value).
However, with the advent of multi-valued attributes (VARRAYs) and objects, this is no longer true. In particular for spatial types, the SDO_ORDINATE attribute is really an object type (implemented as a VARRAY), and the reference to SDO_ORDINATE is the constructor of that object type. Its input can be an array (if used from some programming language) or a list of numbers, each one being considered a parameter to a function - hence the limit of 999 numbers.
That happens only if you hard-code the numbers in your SQL statement. But that is bad practice generally. The better practice is to use bind variables, and object types are no exception. The proper way is to construct an array with the coordinates you want to insert and pass it to the insert statement, or to construct the entire SDO_GEOMETRY object as a bind variable.
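For example, with ODP.NET you can bind the ordinates as a PL/SQL associative array and let a small helper procedure build the geometry server-side. This is only a sketch under those assumptions: the geom_pkg package is made up, the table name and SRID mirror the question, and the binding follows ODP.NET's documented PLSQLAssociativeArray feature:
// PL/SQL helper, created once (e.g. in SQLDeveloper). It copies the bound
// NUMBER array into an SDO_ORDINATE_ARRAY, so the SQL statement itself
// never carries thousands of literal arguments:
//
//   CREATE OR REPLACE PACKAGE geom_pkg AS
//     TYPE num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
//     PROCEDURE insert_geom(p_id IN VARCHAR2, p_ords IN num_tab);
//   END geom_pkg;
//   /
//   CREATE OR REPLACE PACKAGE BODY geom_pkg AS
//     PROCEDURE insert_geom(p_id IN VARCHAR2, p_ords IN num_tab) IS
//       v_ords SDO_ORDINATE_ARRAY := SDO_ORDINATE_ARRAY();
//     BEGIN
//       v_ords.EXTEND(p_ords.COUNT);
//       FOR i IN 1 .. p_ords.COUNT LOOP
//         v_ords(i) := p_ords(i);
//       END LOOP;
//       INSERT INTO MYTABLE (MYID, MYGEOM)
//       VALUES (p_id, SDO_GEOMETRY(2003, 2554, NULL,
//                                  SDO_ELEM_INFO_ARRAY(1, 1003, 1), v_ords));
//     END insert_geom;
//   END geom_pkg;
//   /

using Oracle.DataAccess.Client; // or Oracle.ManagedDataAccess.Client

static void InsertGeometry(OracleConnection cn, string id, double[] ordinates)
{
    using (OracleCommand cmd = cn.CreateCommand())
    {
        cmd.CommandText = "geom_pkg.insert_geom";
        cmd.CommandType = System.Data.CommandType.StoredProcedure;

        cmd.Parameters.Add("p_id", OracleDbType.Varchar2).Value = id;

        // Bind all ordinates in one go as a PL/SQL associative array.
        OracleParameter pOrds = cmd.Parameters.Add("p_ords", OracleDbType.Double);
        pOrds.CollectionType = OracleCollectionType.PLSQLAssociativeArray;
        pOrds.Value = ordinates;
        pOrds.Size = ordinates.Length; // number of elements being bound

        cmd.ExecuteNonQuery(); // one round trip, no 999-argument limit
    }
}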
And of course, the very idea of constructing a complex geometry entirely manually by hardcoding the coordinates is absurd. That shape will either be loaded from a file (and a loading tool will take care of that), or captured by someone drawing a shape over a map - and then your GIS/capture tool will pass the coordinates to your application for insertion into your database.
In other words, that limitation of 999 attributes/numbers is rarely seen in real life. When it is, it reflects a misunderstanding of how those things work.

Store values in separate, C# type-specific columns or all in one column?

I'm building a C# project configuration system that will store configuration values in a SQL Server db.
I was originally going to set the table up as such:
KeyId int
FieldName varchar
DataType varchar
StringValue varchar
IntValue int
DecimalValue decimal
...
Values would be stored and retrieved with the value in the DataType column determining which Value column to use, but I really don't like that design. So I thought I'd go this route:
KeyId int
FieldName varchar
DataType varchar
Value varbinary
Here the value in DataType would still determine the type of Value brought back, but it would all be in one column and I wouldn't have to write a ton of overloads to accommodate the different types like I would have with the previous solution. I would just pull the Value in as a byte array and use DataType to perform whatever conversion(s) necessary to get my Value.
Is the varbinary approach going to cause any performance issues or is it just bad practice to drop all these different types of data into a varbinary? I've been searching around for about an hour and I can't get to a definitive answer.
Also, if there is a more preferred method anyone can think of to reach the same conclusion, I'm all ears (or eyes).
You could serialize your settings as JSON and just store that as a string. Then you have all the settings within one row and your clients can deserialize as needed. This is also a safe way to add additional settings at any time without any modifications to your database.
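For example, with Json.NET (the AppSettings class and the column layout here are made up for illustration):
using Newtonsoft.Json;

// All settings live in one class and one nvarchar(max) column.
public class AppSettings
{
    public int RetryCount { get; set; }
    public string Theme { get; set; }
    public decimal PriceLimit { get; set; }
}

// Write: serialize once, store the string, no per-type columns needed.
string json = JsonConvert.SerializeObject(
    new AppSettings { RetryCount = 3, Theme = "dark", PriceLimit = 99.5m });
// ... INSERT INTO Settings (KeyId, Value) VALUES (@id, @json) ...

// Read: deserialize as needed; properties added later simply take their
// defaults when older rows don't contain them.
AppSettings settings = JsonConvert.DeserializeObject<AppSettings>(json);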
We are using the second solution and it works well. Remember that disk access is orders of magnitude slower than, for example, a casting operation (milliseconds vs. nanoseconds, see ref), so do not look for a bottleneck here.
The solution could be to implement polymorphic association (1, 2). But I don't think there is a need for that, or that you should do this. The second solution is close to a non-SQL db - you can dump anything in as a value, possibly even the entire HTML markup for a page. It should be the caller's responsibility to know what to do with the data.
Also, see threads on how to store settings in a DB: 1, 2 and 3 for critique.
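If you do go with the varbinary design, those caller-side conversions can be centralized in one helper pair keyed off the DataType column. A sketch only: the type tags are made up, and strings are stored as UTF-8:
using System;
using System.Text;

// Sketch: encode a typed value into bytes for the varbinary column, and
// decode it back using the DataType discriminator stored alongside it.
static byte[] Encode(string dataType, object value)
{
    switch (dataType)
    {
        case "int":    return BitConverter.GetBytes((int)value);
        case "double": return BitConverter.GetBytes((double)value);
        case "bool":   return BitConverter.GetBytes((bool)value);
        case "string": return Encoding.UTF8.GetBytes((string)value);
        default: throw new ArgumentException("Unknown DataType: " + dataType);
    }
}

static object Decode(string dataType, byte[] bytes)
{
    switch (dataType)
    {
        case "int":    return BitConverter.ToInt32(bytes, 0);
        case "double": return BitConverter.ToDouble(bytes, 0);
        case "bool":   return BitConverter.ToBoolean(bytes, 0);
        case "string": return Encoding.UTF8.GetString(bytes);
        default: throw new ArgumentException("Unknown DataType: " + dataType);
    }
}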

I Want to know the exact datatype(dbtype) of the ms access database table column in c#

I need to know the exact info of a database and its tables using C#. The database is MS Access. I want the full info of the tables in it, like the primary key, max length, and not-null settings of the columns in the tables, etc.
So what's the best way of doing it?
Thanks in advance for any kind of help.
Another issue is that GetSchema gives me data types numerically, like 130, 131... So how can I use them in a CREATE TABLE query? They give an error.
Let me explain what I am trying to do. I want to recreate a database about which I have no information. I don't know its size, tables, data or anything.
Actually, I have succeeded to an extent. What I have done is get the db name and create it with CatalogClass, and with GetSchema("Tables") I get all the table names and create them with CREATE TABLE from C#, then the column names with ALTER TABLE. Now I have to give it the constraints which are in the DB that has been provided.
So, other than this method, is there anything I am missing? Is any easier or better way available to do this, so it can go faster?
The question is still open.
I believe everything is documented at the link below; try to run it step by step in the debugger, and then you can inspect the element and display every value you want.
http://msdn.microsoft.com/en-us/library/system.data.datatable.aspx
Primary Key:
DataTable.PrimaryKey
Max Length, of what? Records?
DataTable.Rows.Count
Columns?
DataTable.Columns.Count
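If you are reading the schema through OleDb rather than a filled DataTable, the schema rowsets expose max length, nullability, and primary keys directly; something like this (a sketch; the path and table name are placeholders):
using System;
using System.Data;
using System.Data.OleDb;

// Sketch: pull column metadata and primary keys straight from the Access
// OLE DB schema rowsets.
using (var cn = new OleDbConnection(
    @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Z:\Docs\Test.accdb"))
{
    cn.Open();

    // Restrictions: catalog, schema, table, column
    DataTable cols = cn.GetOleDbSchemaTable(OleDbSchemaGuid.Columns,
        new object[] { null, null, "Table1", null });
    foreach (DataRow r in cols.Rows)
        Console.WriteLine("{0}: type={1}, maxLen={2}, nullable={3}",
            r["COLUMN_NAME"], r["DATA_TYPE"],
            r["CHARACTER_MAXIMUM_LENGTH"], r["IS_NULLABLE"]);

    DataTable pks = cn.GetOleDbSchemaTable(OleDbSchemaGuid.Primary_Keys,
        new object[] { null, null, "Table1" });
    foreach (DataRow r in pks.Rows)
        Console.WriteLine("PK: {0}", r["COLUMN_NAME"]);
}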
It appears that you are using a schema to return the field types. I have been testing, and something along these lines appears to return what you want.
ADODB.Connection cn = new ADODB.Connection();
ADODB.Recordset rs = new ADODB.Recordset();
string cnStr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Z:\\Docs\\Test.accdb";
// Empty result set - we only need the field metadata, not the data
string ssql = "Select * From Table1 where 1=2";
cn.Open(cnStr, null, null, 0);
rs.Open(ssql, cn, ADODB.CursorTypeEnum.adOpenKeyset,
        ADODB.LockTypeEnum.adLockOptimistic, -1);
foreach (ADODB.Field fld in rs.Fields)
{
    Console.WriteLine(fld.Type);
}
Console.Read();
rs.Close();
cn.Close();
For various types this returns:
adInteger
adVarWChar = Text
adDate
adInteger
adLongVarWChar = Memo
adVarWChar
adDate
adBoolean
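The numeric types GetSchema returns (130, 131, ...) are these same OLE DB type codes (130 = adWChar, 131 = adNumeric). To use them in a CREATE TABLE query you have to map them to Access DDL type names. A sketch covering the common cases; the DDL names follow Access SQL, and the mapping is not exhaustive:
// Sketch: map ADO/OLE DB type codes to Access DDL type names for use in
// CREATE TABLE / ALTER TABLE statements. For adNumeric, precision and
// scale can be taken from fld.Precision / fld.NumericScale.
static string ToAccessDdl(ADODB.DataTypeEnum type, int definedSize)
{
    switch (type)
    {
        case ADODB.DataTypeEnum.adSmallInt:     return "SHORT";    // 2
        case ADODB.DataTypeEnum.adInteger:      return "LONG";     // 3
        case ADODB.DataTypeEnum.adDouble:       return "DOUBLE";   // 5
        case ADODB.DataTypeEnum.adCurrency:     return "CURRENCY"; // 6
        case ADODB.DataTypeEnum.adDate:         return "DATETIME"; // 7
        case ADODB.DataTypeEnum.adBoolean:      return "YESNO";    // 11
        case ADODB.DataTypeEnum.adWChar:        return "TEXT(" + definedSize + ")"; // 130
        case ADODB.DataTypeEnum.adVarWChar:     return "TEXT(" + definedSize + ")"; // 202
        case ADODB.DataTypeEnum.adLongVarWChar: return "MEMO";     // 203
        default: throw new NotSupportedException(type.ToString());
    }
}

// Usage with the Recordset loop above:
// foreach (ADODB.Field fld in rs.Fields)
//     Console.WriteLine("{0} {1}", fld.Name, ToAccessDdl(fld.Type, fld.DefinedSize));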
