I have a .NET WinForms application that uses a SQLite database through the System.Data.SQLite DLL.
This is how I load a Datatable:
DataTable table = new DataTable();
// Create fresh objects; the command builder attaches to the adapter
// and generates the INSERT/UPDATE/DELETE commands used by Update().
SQLiteDataAdapter m_readingsDataTableDataAdapter = new SQLiteDataAdapter(sql, Database.getInstance().connectionObj);
SQLiteCommandBuilder m_readingsDataTableCommandBuilder = new SQLiteCommandBuilder(m_readingsDataTableDataAdapter);
m_readingsDataTableDataAdapter.Fill(table);
return table;
This DataTable has one primary key and no other constraints.
I set it as the data source of a DataGridView and, after all the edits, I update the DataTable like this:
m_readingsDataTableDataAdapter.Update(table);
Occasionally, the update throws an error. At some point it stops throwing, probably after a system restart (not sure), and the updates go fine until some situation triggers the error again. Once the error occurs, it happens for every update from then on, even after an application restart.
Error:
An unhandled exception of type 'System.Data.DBConcurrencyException' occurred in System.Data.dll
Additional information: Concurrency violation: the UpdateCommand affected 0 of the expected 1 records.
I would appreciate any help or questions as this is quite a critical section of my project.
Thanks.
Update:
Based on the suggestions, to ensure no other part of the program was editing the row, I loaded the DataTable, changed a row immediately, and called Update, as in the following code. I still got the error. Nothing else is running on my machine, so the row is not being updated by any program other than my application:
DataTable table = new DataTable();
// Create fresh objects
SQLiteDataAdapter m_readingsDataTableDataAdapter = new SQLiteDataAdapter(sql, Database.getInstance().connectionObj);
SQLiteCommandBuilder m_readingsDataTableCommandBuilder = new SQLiteCommandBuilder(m_readingsDataTableDataAdapter);
m_readingsDataTableDataAdapter.Fill(table);
// Update immediately
table.Rows[0]["RS485_ADDRESS"] = "400";
m_readingsDataTableDataAdapter.Update(table); // still throws the error
return table;
I had a similar issue that drove me nearly mad. I even debugged into System.Data.SQLite. My issue was caused by SQLite's dynamic typing:
Dynamic typing means the declared column type is only a hint for the type of the stored value, not an enforcement. Here is what happened in my application: an integer column contained an empty string. Fetching the data triggered an implicit conversion to the column type, so the empty string was converted to zero. The DbDataAdapter then used this zero in the WHERE clause of an UPDATE or DELETE statement, which failed miserably because converting zero back to a string does not yield an empty string. I changed the affected integer columns to contain NULL instead, and everything is fine now.
The sad fact about this: it is not a bug in SQLite or ADO.NET. DbDataAdapter and DbCommandBuilder expect strong typing, and SQLite by design doesn't provide it.
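A minimal sketch of that failure mode, with made-up table and column names (READINGS, COUNTER); this illustrates the coercion, it is not code from the question:

// SQLite's INTEGER affinity is only a hint, so it happily stores text:
using (var cmd = new SQLiteCommand(
    "INSERT INTO READINGS (ID, COUNTER) VALUES (1, '')", connection))
{
    cmd.ExecuteNonQuery();
}
// Fill() coerces the stored '' to 0 when reading COUNTER as an integer.
// The command builder's generated UPDATE then filters on the coerced value:
//   UPDATE READINGS SET ... WHERE ID = 1 AND COUNTER = 0
// The stored value is '' and '' <> 0, so 0 rows are affected and
// Update() throws DBConcurrencyException.
// The fix described above, applied once to the bad data:
//   UPDATE READINGS SET COUNTER = NULL WHERE COUNTER = ''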
I've seen this problem; it seemed to appear in some cases and not others, and I resolved it a different way. Since this is primarily a data issue, the fix involves changing the UPDATE and DELETE statements. There may be cases where your code needs to be validated; I'm not referring to those. This is for when everything else seems to be in order and the "Concurrency Violation" persists. A good starting place, before you spend hours debugging, is to look at how the updates are handled:
Open the Data Set Designer.
Click on the Table Adapter.
In the properties, expand the UpdateCommand.
Edit the SQL so the WHERE clause refers only to the primary key.
Example: suppose the column EmployeeID is the primary key in our table.
Change it to:
UPDATE ..... WHERE EmployeeID=@EmployeeID
By default it's usually constructed as:
UPDATE ... WHERE ID=@ID AND col1=@col1 AND col2=@col2 AND col3=@col3
The WHERE clause is formed from all the columns in order to handle multi-user updates, enforcing implicit optimistic concurrency: if one user has a copy of a record and another user updates that record before the first user saves, the first user's version is no longer current and the update is rejected. Otherwise, the primary key alone is enough to refer to a specific record.
But this is an update/delete policy decision that the development and database teams need to coordinate (for a single developer/DBA-in-one, have a little meeting with yourself). Don't take Microsoft's default WHERE clause without reviewing, understanding, and explicitly choosing it. You need to decide, at both the application and database levels, what happens when two users hold a copy of the same record. Transaction isolation levels in SQL Server are one way to address it; ADO, ADO.NET, OLE DB, and ODBC all have isolation-level settings. More on SQL Server transactions and isolation levels.
In some cases multiple users are not involved, or the application updates internally and concurrency is fully controlled; there the WHERE clause can be formed from the primary key alone.
In my case the update is done without user involvement, so it makes sense to use the primary key. Once you understand the reasoning and the cause of the behavior, it's a matter of adjusting it to fit your environment.
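If you build your commands with a command builder instead of the DataSet Designer, the same primary-key-only policy can be set in code. A sketch, assuming SQLiteCommandBuilder honors the ConflictOption setting it inherits from DbCommandBuilder (sql, connection, and table as in the question):

var adapter = new SQLiteDataAdapter(sql, connection);
var builder = new SQLiteCommandBuilder(adapter);
// The default, ConflictOption.CompareAllSearchableValues, puts every column
// into the WHERE clause; OverwriteChanges keys on the primary key only.
builder.ConflictOption = ConflictOption.OverwriteChanges;
adapter.Fill(table);
// ... edit rows ...
adapter.Update(table);

Note that OverwriteChanges deliberately gives up the lost-update detection, which is exactly the policy trade-off discussed above.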
The error occurs when I try to populate a dataset's DataTable.
ds = dsMyDataset;
sqlAdapter.Fill(ds, dsMyDataset.MyTable.TableName); // Uses the SqlAdapter's SelectCommand
On fill, I get this error:
Additional information: 'Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.'
I run this command in the command window:
dsMyDataset.MyTable.GetErrors()[0].RowError
And it informs me that "Column 'XMLHistory' exceeds the MaxLength limit."
Odd, because the dataset's length property specifies a length of '2147483647' for that field.
So in my SqlDataAdapter I specify a size:
sqlComm.Parameters.Add("@XMLHistory", SqlDbType.Text, 2147483647, "XMLHistory");
But this apparently does not help my issue. I also noticed that the size reported in the DB for that column is '16'.
Now here's an odd thing I noticed. This program lets me archive data from one DB to another, so I have a list of tables the user can select for archiving. What's odd is that if I select a single table, it archives just fine. The issue I've described doesn't actually fire until after the first table archives and the system moves on to the second table. The moment it tries to fill the DS: booooom. Dead.
Any suggestions/thoughts appreciated. Cheers.
I just had the same error when reading a database in an old project. It was weird because it reported that column B could not exceed length 110, but in the DB design that was the size of its adjacent field A. I tried reconfiguring the TableAdapter in the dataset designer and it still acted cross-wired on the field max sizes. I then simply dropped the table in the designer and re-added it, and it worked. The database table design had been modified in the database since the XSD was created.
So I figured out the issue. One of the XML files that backs the dataset itself had a node specifying a max length, set to 7700. This 7700 was overriding the property setting in the designer. I'm not sure how it got set in the first place, but it was super frustrating.
Thanks all for the help.
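For anyone who cannot immediately fix the XSD, a hedged runtime workaround (assuming the column is XMLHistory, as in the question): reset MaxLength on the typed table before filling, so the stale value baked into the XSD no longer applies.

// -1 means "no maximum length" for a DataColumn.
dsMyDataset.MyTable.Columns["XMLHistory"].MaxLength = -1;
sqlAdapter.Fill(dsMyDataset, dsMyDataset.MyTable.TableName);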
I'm trying to populate a DataTable, to build a LocalReport, using the following:
MySqlCommand cmd = new MySqlCommand();
cmd.Connection = new MySqlConnection(Properties.Settings.Default.dbConnectionString);
cmd.CommandType = CommandType.Text;
cmd.CommandText = "SELECT ... LEFT JOIN ... WHERE ..."; /* query snipped */
// prepare data
dt.Clear();
cmd.Connection.Open();
// fill datatable
dt.Load(cmd.ExecuteReader());
// fill report
rds = new ReportDataSource("InvoicesDataSet_InvoiceTable", dt);
reportViewerLocal.LocalReport.DataSources.Clear();
reportViewerLocal.LocalReport.DataSources.Add(rds);
At one point I noticed that the report was incomplete: it was missing one record. I changed a few conditions so that the query would return exactly two rows and... surprise: the report shows only one row instead of two. I tried to debug it to find where the problem is and got stuck at
dt.Load(cmd.ExecuteReader());
That's where I noticed that the DataReader contained two records but the DataTable contained only one. By accident, I added an ORDER BY clause to the query and noticed that this time the report showed correctly.
Apparently, the DataReader contains two rows but the DataTable only reads both of them if the SQL query string contains an ORDER BY (otherwise it only reads the last one). Can anyone explain why this is happening and how it can be fixed?
Edit:
When I first posted the question, I said it was skipping the first row; later I realized that it actually only reads the last row, and I've edited the text accordingly. (At the time, all the records were grouped into two rows, so it appeared to skip the first when it actually only showed the last.) This may be caused by the DataTable having no unique identifier by which to distinguish the rows returned by MySQL, so adding the ORDER BY clause caused it to create a unique identifier for each row.
This is just a theory and I have nothing to support it, but all my tests seem to lead to the same result.
After fiddling around quite a bit, I found that the DataTable.Load method expects a primary-key column in the underlying data. If you read the documentation carefully this becomes obvious, although it is not stated very explicitly.
If you have a column named "id", it seems to use that (which fixed it for me). Otherwise it seems to use the first column, whether it is unique or not, and overwrites rows sharing the same value in that column as they are read. If you don't have a column named "id" and your first column isn't unique, I'd suggest explicitly setting the primary-key column(s) of the DataTable before loading the DataReader.
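A sketch of that suggestion; the column names are illustrative, not from the original query. Declaring the schema and the key up front stops Load from inferring one:

var dt = new DataTable();
dt.Columns.Add("InvoiceId", typeof(int));
dt.Columns.Add("Amount", typeof(decimal));
// With the key declared, Load merges only rows that genuinely share an
// InvoiceId; distinct rows are no longer collapsed by an inferred key.
dt.PrimaryKey = new[] { dt.Columns["InvoiceId"] };
using (var reader = cmd.ExecuteReader())
{
    dt.Load(reader);
}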
Just in case anyone is having a similar problem as canceriens: I was using if (dataReader.Read()) ... instead of if (dataReader.HasRows) to check for existence before calling dt.Load(dataReader). Doh!
I had the same issue. I took the hint from your post and put an ORDER BY clause in the query so that the ordered columns together form a unique key for all the records returned. It solved the problem. Kinda weird.
Don't use
dr.Read()
before the Load call, because it moves the pointer to the next row and that row is lost.
Remove this line and I hope it will work.
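A minimal sketch of the difference (reader and dt named as in the code above):

// HasRows checks for data without advancing the cursor.
if (reader.HasRows)
{
    dt.Load(reader);                      // all rows land in the table
}
// if (reader.Read()) dt.Load(reader);   // would silently drop the first row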
Had the same issue. It happens because the primary key on all the rows is the same. The key is probably what's being used to index the results, so the same row gets overwritten over and over again.
The documentation for DataTable.Load points to the Fill method to explain how it works, and that page states that the operation is primary-key aware. Since primary keys must be unique and are used as the keys for the rows...
"The Fill operation then adds the rows to destination DataTable objects in the DataSet, creating the DataTable objects if they do not already exist. When creating DataTable objects, the Fill operation normally creates only column name metadata. However, if the MissingSchemaAction property is set to AddWithKey, appropriate primary keys and constraints are also created." (http://msdn.microsoft.com/en-us/library/zxkb3c3d.aspx)
Came across this problem today.
Nothing in this thread fixed it for me, unfortunately, but wrapping my SQL query in another SELECT statement made it work!
E.g.:
SELECT * FROM (
SELECT ..... < YOUR NORMAL SQL STATEMENT HERE />
) allrecords
Strange....
Can you grab the actual query that is being run from SQL Profiler and try running it yourself? It may not be what you expected.
Do you get the same result when using a SqlDataAdapter.Fill(dataTable)?
Have you tried different command behaviors on the reader? MSDN Docs
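For instance (a guess worth trying, not a confirmed fix): CommandBehavior.KeyInfo asks the provider to return key and schema metadata along with the result set.

using (var reader = cmd.ExecuteReader(CommandBehavior.KeyInfo))
{
    dt.Load(reader);
}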
I know this is an old question, but for me the thing that worked, whilst querying an Access database and noticing one row was missing from the query, was to change the following:
if (reader.Read()) - misses a row.
if (reader.HasRows) - missing row appears.
For anyone else who comes across this thread as I have: the answer regarding the DataTable being keyed by a unique ID from MySQL is correct.
However, if a table contains multiple unique IDs but only a single ID column is returned by the MySQL command (instead of receiving all columns via '*'), the DataTable will organize by the single ID it was given and act as if a GROUP BY had been used in the query.
So, in short, the DataReader pulls all the records, while DataTable.Load() only sees the one unique ID retrieved and uses it to populate the DataTable, thus skipping rows of information.
Not sure why you're missing the row in the DataTable; is it possible you need to close the reader? In any case, here is how I normally load reports, and it works every time...
Dim deals As New DealsProvider()
Dim adapter As New ReportingDataTableAdapters.ReportDealsAdapter
Dim report As ReportingData.ReportDealsDataTable = deals.GetActiveDealsReport()
rptReports.LocalReport.DataSources.Add(New ReportDataSource("ActiveDeals_Data", report))
Curious to see if it still happens.
In my case neither ORDER BY nor dt.AcceptChanges() works, and I don't know what the problem is. I have 50 records in the database but the DataTable shows only 49, skipping the first row; and if there is only one record in the DataReader, it shows nothing at all.
What a bizarre issue...
Have you tried calling dt.AcceptChanges() after the dt.Load(cmd.ExecuteReader()) call to see if that helps?
I know this is an old question, but I was experiencing the same problem and none of the workarounds mentioned here helped.
In my case, using an alias on the column that is used as the PrimaryKey solved the issue.
So, instead of
SELECT a
, b
FROM table
I used
SELECT a as gurgleurp
, b
FROM table
and it worked.
I had the same problem. Do not use dataReader.Read() at all; it moves the pointer to the next row. Instead, call dataTable.Load(dataReader) directly.
I encountered the same problem. I also tried selecting a unique first column, but the DataTable was still missing a row.
Selecting the first column (which is also unique) in a GROUP BY solved the problem, i.e.:
select uniqueData,.....
from mytable
group by uniqueData;
This solves the problem.
I am in the process of moving a database from one server to another, but now I am getting the error 'Invalid column name msrepl_tran_version'. This is a column I deleted from the new database, as it was related to replication, which I no longer need.
I have recreated the datasets and searched the entire solution for anything containing msrepl_tran_version: nothing. I can't see where it is referencing this column from; it doesn't exist!
Any ideas would be much appreciated.
Thanks.
It is a transactional-replication column and not straightforward to remove; it seems you haven't totally removed it. See:
how to remove msrepltranversion column
I'm hoping you guys can help me figure out why this is happening; I've been tearing my hair out trying to work it out.
Here's an example directly from my code (with the boring bits cut out):
...(Set up the connection and command, initialize a datatable "dataTable")...
using (SqlDataReader reader = cmd.ExecuteReader())
{
    // Query storage object
    object[] buffer = new object[reader.FieldCount];
    // Set the datatable schema to the schema from the query
    dataTable = reader.GetSchemaTable();
    // Read the query
    while (reader.Read())
    {
        reader.GetValues(buffer);
        dataTable.Rows.Add(buffer);
    }
}
The error is:
Input string was not in a correct format. Couldn't store in NumericScale Column. Expected type is Int16.
The specific column data types, as returned by the schema, are (ordered by column):
System.Data.SqlTypes.SqlInt32
System.Data.SqlTypes.SqlInt32
System.Data.SqlTypes.SqlByte
System.Data.SqlTypes.SqlMoney
System.Data.SqlTypes.SqlString
System.Data.SqlTypes.SqlGuid
System.Data.SqlTypes.SqlDateTime
It would appear that the data that should be in column #5 is actually appearing in column #3, but that is pure speculation.
What I know is that, in order to use a DataTable "dynamically" with a query that can contain any number of different types of data, the best route is to use GetSchemaTable() to retrieve it.
What I Saw In The Debugger
When I dropped into the debugger, I compared the types in dataTable (built from the schema) with the types returned into the buffer by reader.GetValues(). They are exactly the same.
It seems like dataTable.Rows.Add(buffer) is adding the values a few columns off from where they should be. But that shouldn't be possible, especially with the schema built directly from the reader. I've played with options such as CommandBehavior.KeyInfo on ExecuteReader() and the same error still occurs.
Note: I need to run the query this way to enable the end-user to halt the query mid-read. Please do not suggest I scrap this and use an SqlDataAdapter or DataTable.Load() solution.
I'd really appreciate any help. Thank you!
The DbDataReader.GetSchemaTable() method returns a metadata table containing the column names and types. It is not the empty result-shaped table one might expect. For more details see MSDN.
I'm sorry, but GetSchemaTable retrieves the schema for a given query, while GetValues retrieves the actual row data. That is, your dataTable will contain columns like the column name, column type, etc. (see the MSDN reference), while your buffer will contain the actual data, whose layout differs from what the dataTable describes.
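A sketch of the pattern the question seems to be after: build an empty DataTable whose columns mirror the reader's schema, then copy the rows into it. ColumnName and DataType are standard columns of the table GetSchemaTable returns.

var dataTable = new DataTable();
using (SqlDataReader reader = cmd.ExecuteReader())
{
    // One schema-table row per result column; use it to create real columns.
    DataTable schema = reader.GetSchemaTable();
    foreach (DataRow col in schema.Rows)
        dataTable.Columns.Add((string)col["ColumnName"], (Type)col["DataType"]);

    object[] buffer = new object[reader.FieldCount];
    while (reader.Read())
    {
        reader.GetValues(buffer);
        dataTable.Rows.Add(buffer); // column count and types now line up
    }
}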
Why don't you just use:
new SqlDataAdapter(cmd).Fill(dataTable);
Why are you manually loading it one row at a time?
Please check what you are inserting into the column that has the GUID data type. I got the same error and found that, while inserting records read from a CSV file, there were a couple of empty spaces in the CSV file.