The error occurs when I try to populate a dataset's DataTable.
ds = dsMyDataset;
sqlAdapter.Fill(ds, dsMyDataset.MyTable.TableName); // Uses the SqlDataAdapter's SelectCommand
On fill, I get this error:
Additional information: 'Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.'
I run this command in the command window:
dsMyDataset.MyTable.GetErrors()[0].RowError
And it informs me that "Column 'XMLHistory' exceeds the MaxLength limit."
Odd, because the dataset designer's MaxLength property specifies a length of 2147483647 for that field.
So in my SqlDataAdapter I specify a size:
sqlComm.Parameters.Add("@XMLHistory", SqlDbType.Text, 2147483647, "XMLHistory");
But this apparently does not help my issue. I also noticed that the size reported in the DB for that column is 16 (for a text column, SQL Server reports the size of the 16-byte pointer rather than the data itself).
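A quick way to confirm it's the MaxLength constraint (rather than the data) is to relax the column's MaxLength in code before filling; this is a diagnostic sketch using the names from my snippet above, not a fix for the underlying schema mismatch:
// Hypothetical workaround: -1 means "no limit" for DataColumn.MaxLength
dsMyDataset.MyTable.Columns["XMLHistory"].MaxLength = -1;
sqlAdapter.Fill(ds, dsMyDataset.MyTable.TableName);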
Now here's an odd thing I noticed. This program archives data from one DB to another, so I have a list of tables the user can select from to archive. What's odd is that if I select one table, it archives just fine. The issue above doesn't actually fire until after the first table archives and the system attempts to move on to the second table. The moment it tries to fill the DS, booooom. Dead.
Any suggestions/thoughts appreciated. Cheers.
I just had the same error when reading a database in an old project. It was weird because it was reporting cannot exceed length 110 on column B, but in the DB design that was the size of its adjacent field A. I tried a reconfigure on the TableAdapter in the dataset designer and it still acted cross-wired on the field max sizes. I then just dropped the table in the designer and re-added it. Then it worked. The database table design had been modified in the database since the .xsd was created.
So I figured out the issue. One of the XML files behind the dataset (the .xsd schema itself) had a node specifying a max length (an xs:maxLength facet on the column). It was set to 7700, and this 7700 was overriding the property setting in the designer. Not sure how it got set to begin with, but it was super frustrating.
Thanks all for the help.
Related
The more I read on this, the more confused I get, so I hope someone can help. I have a complex database setup, which sometimes produces this error on update:
"Concurrency violation: the UpdateCommand affected 0 of the expected 1 records"
I say sometimes, because I cannot recreate the conditions that trigger it consistently. I have a remote MySQL database connected to my app through the DataSource Wizard, which produces the dataset, tables and linked DataTableAdapters.
My reading suggests that this error occurs when more than one open connection to the database tries to update the same record. That shouldn't happen in my instance, as the only updates are sequential, from my app.
I am wondering whether it has something to do with running the update from a background worker? I have my table updates in one, for example, thusly:
Gi_gamethemeTableAdapter.Update(dbDS.gi_gametheme)
Gi_gameplaystyleTableAdapter.Update(dbDS.gi_gameplaystyle)
Gi_gameTableAdapter.Update(dbDS.gi_game)
These run serially in the BackgroundWorker, however, so I'm unsure about this. The main thread also waits for it to finish, and there are no other db operations going on before or after this is started.
I did read about going into the dataset designer view, choosing "Configure" on the DataTableAdapter > Advanced Options and setting "Use optimistic concurrency" to false. This might have worked (hard to say, because of the seemingly random nature of the error); however, there are drawbacks to this that I want to avoid:
I have around 60 tables. I don't want to do this for each one.
I sometimes have to re-import the mysql schema into the dataset designer, or delete a table and re-add it. This would obviously lose this setting and I would have to remember to do it on all of them again, potentially. I also can't find a way to do this automatically in code.
I'm afraid I'm not at code level in terms of the database updates etc, relying on the Visual Studio wizards. It's a bit late to change the stack as well (e.g. can't change to Entity Framework etc).
So my question is:
What is causing the error, and how can I find it?
What can I do about it?
thanks
When you have TableAdapters that download data into DataTables, they can be configured for optimistic concurrency.
This means that for a table like:
Person
ID Name
1 John
They might generate an UPDATE query like:
UPDATE Person SET Name = @newName WHERE ID = @oldID AND Name = @oldName
(In reality they are more complex than this but this will suffice)
DataTables track original values and current values; you download 1/"John", then change the name to "Jane"; you (or the TableAdapter) can ask the DataTable what the original value was and it will say "John".
The DataTable can also feed this value into the UPDATE query, and that's how we detect "did something else change the row in the time we had it", i.e. a concurrency violation.
Row was "John" when we downloaded it, we edited to "Jane", and went to save.. But someone else had been in and changed it to "Joe". Our update will fail because Name is no longer "John" that it was (and we still think it is) when we downloaded it. By dint of the tableadapter having an update query that said AND Name = #oldName, and setting #oldName parameter to the original value somedatarow["Name", DataRowVersion.Original].Value (i.e. "John") we cause the update to fail. This is a useful thing; mostly they will succeed so we can opportunistically hope our users can update our db without needing to get into locking rows while they have them open in some UI
Resolving the cases where it doesn't work is usually a case of coding up some strategy:
My changes win - don't use an optimistic query that features old values, just UPDATE and erase their changes
Their changes win - cancel your attempts
Re-download the latest DB state and choose what to do - auto merge it somehow (maybe the other person changed fields you didn't), or show the user so they can pick and choose what to keep etc (if both people edited the same fields)
Now you're probably sat there saying "but no one else changes my DB". We can still get this, though, if the database has changed some values upon one save and you don't have the latest ones in your dataset...
There's another option in the TableAdapter wizard - "refresh the dataset" - which is supposed to run a select after a modification to import any latest database-calculated values (like auto-increment primary keys or triggers/defaults/etc). A query like INSERT INTO Person(Name) VALUES(@name) is supposed to silently have a SELECT * FROM PERSON WHERE ID = last_insert_id() tagged on the end of it to retrieve the latest values.
Except "refresh the dataset" doesn't work :/
So, while I can't tell you exactly why you're getting your CV exception, I hope that explaining why they occur, and pointing out that there are sometimes bugs that cause them (insert new record, calculated ID is not retrieved, edit this recent record, update fails because the data wasn't fresh), will arm you with what you need to find the problem. When you get one, keep the app stopped on the breakpoint and inspect the DataRow: take a look at the query being run and what original/current values are being put in as parameters, inspect the original and current values held by the row using the overload of the Item indexer that lets you state the version you want, and look in the DB.
Somewhere in all of that there will be the mismatch that explains why 0 records were updated: the db has "Joe" as the name or 174354325 as the ID, your DataRow has "John" as the original name or -1 as the ID (it never refreshed), and the WHERE clause is finding 0 records as a result.
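A hedged sketch of that inspection while you're stopped at the breakpoint (adapter stands for whichever DataAdapter failed; for a designer-generated TableAdapter the underlying command may not be publicly exposed, so treat the property path as an assumption):
// Dump what the UpdateCommand was about to send, parameter by parameter
foreach (System.Data.IDataParameter p in adapter.UpdateCommand.Parameters)
    Console.WriteLine("{0} = {1}", p.ParameterName, p.Value);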
Some of your tables may contain a field that is marked as a [ConcurrencyCheck] or [Timestamp] concurrency token.
When you update a record, the SQL generated will include a WHERE [ConcurrencyField]='Whatever the value was when the record was retrieved'.
If that record was updated by another thread or process or something other than the current thread, then your UPDATE will return 0 records updated, rather than the 1 (or more) that was expected.
What can you do about it? Firstly, put a try/catch (DBConcurrencyException) around your code. Then you can re-read the offending record and try to update it again.
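A rough C# sketch of that retry, using the adapter names from the question (the reapply step depends on your app's rules, so it's left as a comment):
try
{
    Gi_gamethemeTableAdapter.Update(dbDS.gi_gametheme);
}
catch (DBConcurrencyException)
{
    // Refresh so the rows' original values match the database again
    dbDS.gi_gametheme.Clear();
    Gi_gamethemeTableAdapter.Fill(dbDS.gi_gametheme);
    // ...reapply the edit here, then call Update again...
}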
I have a .NET WinForms application which is using a SQLite DB, and I am using the System.Data.SQLite dll.
This is how I load a DataTable:
DataTable table = new DataTable();
// Create fresh adapter and command builder
SQLiteDataAdapter m_readingsDataTableDataAdapter = new SQLiteDataAdapter(sql, Database.getInstance().connectionObj);
SQLiteCommandBuilder m_readingsDataTableCommandBuilder = new SQLiteCommandBuilder(m_readingsDataTableDataAdapter);
m_readingsDataTableDataAdapter.Fill(table);
return table;
This data table has one primary key and no other constraints.
I set it as a data source for DataGridView and after all the edits, I update the DataTable like this:
m_readingsDataTableDataAdapter.Update(table);
Occasionally, the update throws an error, and at some point it stops throwing it, probably after a system restart (not sure). Then the updates go fine until another situation where the update throws the error again. When the error occurs, it happens for all updates from then on, even after an application restart.
Error:
An unhandled exception of type 'System.Data.DBConcurrencyException' occurred in System.Data.dll
Additional information: Concurrency violation: the UpdateCommand affected 0 of the expected 1 records.
I would appreciate any help or questions as this is quite a critical section of my project.
Thanks.
Update:
Based on suggestions, to ensure no other part of the program is editing the row, I loaded the DataTable and updated a row immediately, as per the following code. I still got the error. There is no other program running on my machine, so the row is not updated by anything except my application:
DataTable table = new DataTable();
// Create fresh adapter and command builder
SQLiteDataAdapter m_readingsDataTableDataAdapter = new SQLiteDataAdapter(sql, Database.getInstance().connectionObj);
SQLiteCommandBuilder m_readingsDataTableCommandBuilder = new SQLiteCommandBuilder(m_readingsDataTableDataAdapter);
m_readingsDataTableDataAdapter.Fill(table);
// Update immediately
table.Rows[0]["RS485_ADDRESS"] = "400";
m_readingsDataTableDataAdapter.Update(table); // - Still throws the error
return table;
I had a similar issue that drove me nearly mad. I even debugged into System.Data.SQLite. My issue was caused by the dynamic typing of SQLite:
Dynamic typing means that the column type is just a hint for the type of the stored value but not an enforcement. Here is what happened in my application: An integer column contained an empty string. Fetching the data triggered an implicit conversion to the column type so the empty string was converted to zero. The DbDataAdapter used this zero value in the WHERE clause of an UPDATE or DELETE statement which failed miserably because the conversion of zero to a string is not an empty string. I changed the affected integer columns to contain NULL and everything is fine now.
The sad fact about it: it is not a bug in SQLite or ADO.NET. DbDataAdapter and DbCommandBuilder expect strong typing, and SQLite doesn't provide that by design.
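If you suspect the same cause, here's a hedged diagnostic sketch; the column name is taken from the question above and the table name READINGS is an assumption, so adjust both to your schema (typeof() is a built-in SQLite function):
// List rows whose stored type does not match the declared INTEGER affinity
using (var cmd = new SQLiteCommand(
    "SELECT rowid, typeof(RS485_ADDRESS) FROM READINGS WHERE typeof(RS485_ADDRESS) NOT IN ('integer', 'null')",
    Database.getInstance().connectionObj))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
        Console.WriteLine("rowid {0} stored as {1}", reader.GetInt64(0), reader.GetString(1));
}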
I've seen this problem; it seemed to appear in some cases and not others. I resolved it a different way. Since this is primarily a data issue, the fix involves changing the update and delete statements. There may be cases where your code needs to be validated; I'm not referring to those. This is for when all else seems to be in order and the "Concurrency Violation" persists. A good starting place, before you spend hours debugging, is to address how the updates are handled.
Open the DataSet Designer.
Click on the TableAdapter.
In the properties, expand the UpdateCommand.
Edit the SQL so the WHERE clause refers only to the primary key.
Example: suppose the column EmployeeID is the primary key in our table.
Change to:
update ..... WHERE EmployeeID=@EmployeeID
By default it's usually constructed as:
update ... where ID=@ID AND col1=@col1 AND col2=@col2 AND col3=@col3
The WHERE clause is formed with all the columns to handle multi-user updates, enforcing an implicit concurrency check: if one user has a copy of the record and another user updates that version by the time the first user saves, the first user's version is no longer current. Otherwise, the primary key alone is enough to refer to a specific record.
But this is an update/delete policy decision that the development and database teams need to coordinate (for a single developer/DBA in one, have a little meeting with yourself). Don't take the Microsoft WHERE clause without reviewing and understanding it and implementing it explicitly. You need to specifically address what happens when two users have a copy of the same record, at both the application and database levels. Database "Transaction Isolation" levels in SQL Server are one way to address it; ADO, ADO.NET, OLE DB and ODBC all have isolation level settings. More on SQL Server transactions and isolation levels.
In some cases multiple users are not involved, or the application updates internally and concurrency is fully controlled; then the WHERE clause can be formed with the primary key alone.
In my case, the update is done without a user involvement, so it makes sense to use the primary key. If you understand the reasoning and the cause of the behavior, it's a matter of adjusting to fit your environment.
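If you'd rather not hand-edit dozens of adapters in the designer, here is a hedged runtime sketch of the same idea; the adapter, connection, table and column names are illustrative (matching the EmployeeID example above), not generated designer code:
// Replace the generated UpdateCommand with a primary-key-only statement
adapter.UpdateCommand = new SqlCommand(
    "UPDATE Employees SET Name = @Name WHERE EmployeeID = @EmployeeID", connection);
adapter.UpdateCommand.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name");
adapter.UpdateCommand.Parameters.Add("@EmployeeID", SqlDbType.Int, 4, "EmployeeID");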
I have a LONG field in my table. When I make copies with bulk copy, the following message appears:
ORA-26089: LONG column "CORPO_EMAIL" must be specified last
I do not understand the reason for this error. Is there any special configuration I need to do for this column in a DataTable?
To resolve the issue you need to specify the LONG column, i.e. CORPO_EMAIL, last.
The cause of this error is that a client of the direct path API specified a LONG column to be loaded, but the LONG column was not the last column specified.
Also check How to solve Error Code ORA-26089.
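If you're bulk copying from a DataTable, one hedged sketch is to move the LONG column to the last ordinal before the copy. CORPO_EMAIL comes from the question; sourceTable and bulkCopy are assumed names for your DataTable and ODP.NET OracleBulkCopy instance:
// Make the LONG column the last one so the direct path API receives it last
sourceTable.Columns["CORPO_EMAIL"].SetOrdinal(sourceTable.Columns.Count - 1);
bulkCopy.WriteToServer(sourceTable);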
I'm trying to populate a DataTable, to build a LocalReport, using the following:
MySqlCommand cmd = new MySqlCommand();
cmd.Connection = new MySqlConnection(Properties.Settings.Default.dbConnectionString);
cmd.CommandType = CommandType.Text;
cmd.CommandText = "SELECT ... LEFT JOIN ... WHERE ..."; /* query snipped */
// prepare data
dt.Clear();
cmd.Connection.Open();
// fill datatable
dt.Load(cmd.ExecuteReader());
// fill report
rds = new ReportDataSource("InvoicesDataSet_InvoiceTable", dt);
reportViewerLocal.LocalReport.DataSources.Clear();
reportViewerLocal.LocalReport.DataSources.Add(rds);
At one point I noticed that the report was incomplete and it was missing one record. I've changed a few conditions so that the query would return exactly two rows and... surprise: The report shows only one row instead of two. I've tried to debug it to find where the problem is and I got stuck at
dt.Load(cmd.ExecuteReader());
When I've noticed that the DataReader contains two records but the DataTable contains only one. By accident, I've added an ORDER BY clause to the query and noticed that this time the report showed correctly.
Apparently, the DataReader contains two rows but the DataTable only reads both of them if the SQL query string contains an ORDER BY (otherwise it only reads the last one). Can anyone explain why this is happening and how it can be fixed?
Edit:
When I first posted the question, I said it was skipping the first row; later I realized that it actually only read the last row and I've edited the text accordingly (at that time all the records were grouped in two rows and it appeared to skip the first when it actually only showed the last). This may be caused by the fact that it didn't have a unique identifier by which to distinguish between the rows returned by MySQL so adding the ORDER BY statement caused it to create a unique identifier for each row.
This is just a theory and I have nothing to support it, but all my tests seem to lead to the same result.
After fiddling around quite a bit I found that the DataTable.Load method expects a primary key column in the underlying data. If you read the documentation carefully, this becomes obvious, although it is not stated very explicitly.
If you have a column named "id" it seems to use that (which fixed it for me). Otherwise, it just seems to use the first column, whether it is unique or not, and overwrites rows with the same value in that column as they are being read. If you don't have a column named "id" and your first column isn't unique, I'd suggest trying to explicitly set the primary key column(s) of the datatable before loading the datareader.
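A minimal sketch of that suggestion, assuming an integer key column named id (adjust the name and type to your query):
// Declare the key before Load so distinct rows aren't merged into one
DataTable dt = new DataTable();
dt.Columns.Add("id", typeof(int));
dt.PrimaryKey = new DataColumn[] { dt.Columns["id"] };
dt.Load(cmd.ExecuteReader()); // remaining columns are added from the reader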
Just in case anyone is having a similar problem as canceriens: I was using If DataReader.Read ... instead of If DataReader.HasRows to check existence before calling dt.Load(DataReader). Doh!
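In other words, a minimal sketch of the safe pattern (HasRows checks without consuming a row, whereas Read() advances the cursor and eats row one before Load ever sees it):
using (var reader = cmd.ExecuteReader())
{
    if (reader.HasRows) // check existence without advancing the cursor
        dt.Load(reader);
}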
I had the same issue. I took a hint from your post and put the ORDER BY clause in the query so that the columns could together form a unique key for all the records returned by the query. It solved the problem. Kinda weird.
Don't use
dr.Read()
because it moves the pointer to the next row. Remove this line and hopefully it will work.
Had the same issue. It is because the primary key on all the rows is the same. It's probably what's being used to key the results, and therefore it's just overwriting the same row over and over again.
The DataTable.Load documentation points to the Fill method to explain how it works. That page states that it is primary-key aware. Since primary keys can only occur once and are used as the keys for the rows ...
"The Fill operation then adds the rows to destination DataTable objects in the DataSet, creating the DataTable objects if they do not already exist. When creating DataTable objects, the Fill operation normally creates only column name metadata. However, if the MissingSchemaAction property is set to AddWithKey, appropriate primary keys and constraints are also created." (http://msdn.microsoft.com/en-us/library/zxkb3c3d.aspx)
Came across this problem today.
Nothing in this thread fixed it unfortunately, but then I wrapped my SQL query in another SELECT statement and it worked!
Eg:
SELECT * FROM (
SELECT ..... < YOUR NORMAL SQL STATEMENT HERE />
) allrecords
Strange....
Can you grab the actual query that is running from SQL profiler and try running it? It may not be what you expected.
Do you get the same result when using a SqlDataAdapter.Fill(dataTable)?
Have you tried different command behaviors on the reader? MSDN Docs
I know this is an old question, but for me the thing that worked, whilst querying an Access database and noticing it was missing one row from the query, was to change the following:
if (dataReader.Read()) - misses a row.
if (dataReader.HasRows) - missing row appears.
For anyone else who comes across this thread as I have: the answer regarding the DataTable being populated by a unique ID from MySQL is correct.
However, if a table contains multiple unique IDs but only a single ID column is returned from a MySQL command (instead of receiving all columns by using '*'), then the DataTable will only organize by the single ID that was given and act as if a GROUP BY was used in your query.
So in short, the DataReader will pull all records, while DataTable.Load() will only see the unique ID retrieved and use that to populate the DataTable, thus skipping rows of information.
Not sure why you're missing the row in the DataTable; is it possible you need to close the reader? In any case, here is how I normally load reports, and it works every time...
Dim deals As New DealsProvider()
Dim adapter As New ReportingDataTableAdapters.ReportDealsAdapter
Dim report As ReportingData.ReportDealsDataTable = deals.GetActiveDealsReport()
rptReports.LocalReport.DataSources.Add(New ReportDataSource("ActiveDeals_Data", report))
Curious to see if it still happens.
In my case neither ORDER BY nor dt.AcceptChanges() is working. I don't know what the problem is. I have 50 records in the database but it only shows 49 in the DataTable, skipping the first row; and if there is only one record in the DataReader, it shows nothing at all.
What a bizarre one...
Have you tried calling dt.AcceptChanges() after the dt.Load(cmd.ExecuteReader()) call to see if that helps?
I know this is an old question, but I was experiencing the same problem and none of the workarounds mentioned here did help.
In my case, using an alias on the column that is used as the PrimaryKey solved the issue.
So, instead of
SELECT a
, b
FROM table
I used
SELECT a as gurgleurp
, b
FROM table
and it worked.
I had the same problem. Do not use dataReader.Read() at all; it takes the pointer to the next row. Instead, use dataTable.Load(dataReader) directly.
Encountered the same problem. I also tried selecting a unique first column, but the DataTable was still missing a row.
But selecting the first column (which is also unique) in a GROUP BY solved the problem,
i.e.:
select uniqueData,.....
from mytable
group by uniqueData;
This solves the problem.
I'm asking about a problem with C# and Visual Studio 2012: I'm trying to resize a column of a table in my database, and the effect of that on the DataTableAdapter in a Dataset.xsd.
I'm using the DataTableAdapter, fed from a stored procedure with a SELECT statement, to populate a DataGridView, reports and more.
I created the table a long time ago, but now there is a problem with it.
I had to increase the length of a column, and I changed the appropriate column length in the DataTable as well. But that didn't give me the solution: whenever I Fill or Get data through that DataTableAdapter, it still responds with the previous (original) size of the column.
But when I create a new DataTable and redirect my code to the new DataTableAdapter, it works.
Why is this happening?
Redirecting code to the new DataTableAdapter is a little difficult, because I don't know all the places it is used in the entire solution.
And also, if you can, please tell me how to add a new column to the table and deal with the DataTableAdapter for it as well.
Thanks and waiting for your reply.
After doing some research with my friends, I found the proper way to fix this error.
Not only resizing, but also any other change to a database table or stored procedure requires you to reconfigure the DataTableAdapter; otherwise it just keeps working as it did when it was created.
It will continue with major errors, or sometimes it will function incorrectly in ways you can't even tell are errors.
So whenever you make any change to a database table or stored procedure, go to the DataSet.xsd where the DataTable is located, right-click the DataTable, and configure it again.
This saved me and worked.