I perform a set of operations on a dataset table:
MyDataSet sharedDS = new MyDataSet();
MyDataSet referenceDS = new MyDataSet();
sharedDS.Table1.Reset();
sharedDS.Merge(referenceDS);
I get a System.ArgumentException: Column_X does not exist in Table1 if I try to access the column this way:
MyDataSet.Table1.FindByKey().Column_X
However, this way everything's fine:
MyDataSet.Table1.FindByKey()["Column_X"]
Can anyone explain what the issue is here?
Reference (originally meant for another problem): Reset primary key
I think this line:
sharedDS.Table1.Reset();
is causing you trouble.
I think .Reset() is clearing the schema. Use .Clear() instead!
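For illustration, here is a minimal sketch of the difference (assuming a typed dataset with a Table1, as in your snippet):

// Reset() drops the rows AND the schema (columns, constraints, relations),
// which is why Column_X no longer exists afterwards.
sharedDS.Table1.Reset();

// Clear() only removes the rows; the typed schema, including Column_X, survives.
sharedDS.Table1.Clear();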
Related
I'm trying to populate a DataTable, to build a LocalReport, using the following:
MySqlCommand cmd = new MySqlCommand();
cmd.Connection = new MySqlConnection(Properties.Settings.Default.dbConnectionString);
cmd.CommandType = CommandType.Text;
cmd.CommandText = "SELECT ... LEFT JOIN ... WHERE ..."; /* query snipped */
// prepare data
dt.Clear();
cmd.Connection.Open();
// fill datatable
dt.Load(cmd.ExecuteReader());
// fill report
rds = new ReportDataSource("InvoicesDataSet_InvoiceTable",dt);
reportViewerLocal.LocalReport.DataSources.Clear();
reportViewerLocal.LocalReport.DataSources.Add(rds);
At one point I noticed that the report was incomplete and it was missing one record. I've changed a few conditions so that the query would return exactly two rows and... surprise: The report shows only one row instead of two. I've tried to debug it to find where the problem is and I got stuck at
dt.Load(cmd.ExecuteReader());
That is where I noticed that the DataReader contains two records but the DataTable contains only one. By accident, I added an ORDER BY clause to the query and noticed that this time the report showed correctly.
Apparently, the DataReader contains two rows but the DataTable only reads both of them if the SQL query string contains an ORDER BY (otherwise it only reads the last one). Can anyone explain why this is happening and how it can be fixed?
Edit:
When I first posted the question, I said it was skipping the first row; later I realized that it actually only read the last row and I've edited the text accordingly (at that time all the records were grouped in two rows and it appeared to skip the first when it actually only showed the last). This may be caused by the fact that it didn't have a unique identifier by which to distinguish between the rows returned by MySQL so adding the ORDER BY statement caused it to create a unique identifier for each row.
This is just a theory and I have nothing to support it, but all my tests seem to lead to the same result.
After fiddling around quite a bit I found that the DataTable.Load method expects a primary key column in the underlying data. If you read the documentation carefully, this becomes obvious, although it is not stated very explicitly.
If you have a column named "id", it seems to use that (which fixed it for me). Otherwise, it just seems to use the first column, whether it is unique or not, and overwrites rows with the same value in that column as they are being read. If you don't have a column named "id" and your first column isn't unique, I'd suggest explicitly setting the primary key column(s) of the DataTable before loading the DataReader.
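For example, a hedged sketch of setting the key up front (the column name "id" here is illustrative; use whatever uniquely identifies your rows):

DataTable dt = new DataTable();
// Define the key column before loading so Load can match incoming rows correctly.
dt.Columns.Add("id", typeof(int));
dt.PrimaryKey = new DataColumn[] { dt.Columns["id"] };
// Load matches columns by name and adds any remaining columns from the reader.
dt.Load(cmd.ExecuteReader());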
Just in case anyone is having a similar problem as canceriens, I was using if (dataReader.Read()) ... instead of if (dataReader.HasRows) to check existence before calling dt.Load(dataReader). Doh!
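In other words (a quick sketch; the reader variable name is illustrative):

// Wrong: Read() already consumes the first row, so Load never sees it.
// if (dataReader.Read()) { dt.Load(dataReader); }

// Better: HasRows does not advance the reader.
if (dataReader.HasRows)
{
    dt.Load(dataReader);
}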
I had the same issue. I took a hint from your blog and put an ORDER BY clause in the query so that the ordered columns together form a unique key for all the records returned by the query. It solved the problem. Kinda weird.
Don't use
dr.Read()
because it moves the pointer to the next row.
Remove this line and hopefully it will work.
Had the same issue. It is because the primary key on all the rows is the same. It's probably what's being used to key the results, and therefore it's just overwriting the same row over and over again.
The DataTable.Load documentation points to the Fill method to explain how it works. That page states that it is primary-key aware. Since primary keys can only occur once and are used as the keys for the rows ...
"The Fill operation then adds the rows to destination DataTable objects in the DataSet, creating the DataTable objects if they do not already exist. When creating DataTable objects, the Fill operation normally creates only column name metadata. However, if the MissingSchemaAction property is set to AddWithKey, appropriate primary keys and constraints are also created." (http://msdn.microsoft.com/en-us/library/zxkb3c3d.aspx)
Came across this problem today.
Nothing in this thread fixed it unfortunately, but then I wrapped my SQL query in another SELECT statement and it worked!
Eg:
SELECT * FROM (
SELECT ..... < YOUR NORMAL SQL STATEMENT HERE />
) allrecords
Strange....
Can you grab the actual query that is running from SQL profiler and try running it? It may not be what you expected.
Do you get the same result when using a SqlDataAdapter.Fill(dataTable)?
Have you tried different command behaviors on the reader? MSDN Docs
I know this is an old question, but for me the thing that worked, while querying an Access database and noticing one row was missing from the query, was to change the following:
if (dataReader.Read()) - misses a row.
if (dataReader.HasRows) - the missing row appears.
For anyone else that comes across this thread as I have, the answer regarding the DataTable being populated by a unique ID from MySql is correct.
However, if a table contains multiple unique IDs but only a single ID is returned from a MySql command (instead of receiving all Columns by using '*') then that DataTable will only organize by the single ID that was given and act as if a 'GROUP BY' was used in your query.
So in short, the DataReader will pull all records, while DataTable.Load() will only see the unique ID retrieved and use that to populate the DataTable, thus skipping rows of information.
Not sure why you're missing the row in the DataTable; is it possible you need to close the reader? In any case, here is how I normally load reports, and it works every time...
Dim deals As New DealsProvider()
Dim adapter As New ReportingDataTableAdapters.ReportDealsAdapter
Dim report As ReportingData.ReportDealsDataTable = deals.GetActiveDealsReport()
rptReports.LocalReport.DataSources.Add(New ReportDataSource("ActiveDeals_Data", report))
Curious to see if it still happens.
In my case neither ORDER BY nor dt.AcceptChanges() is working. I don't know what the problem is. I have 50 records in the database but it only shows 49 in the DataTable, skipping the first row, and if there is only one record in the DataReader it shows nothing at all.
What a bizarre problem...
Have you tried calling dt.AcceptChanges() after the dt.Load(cmd.ExecuteReader()) call to see if that helps?
I know this is an old question, but I was experiencing the same problem and none of the workarounds mentioned here did help.
In my case, using an alias on the column that is used as the PrimaryKey solved the issue.
So, instead of
SELECT a
, b
FROM table
I used
SELECT a as gurgleurp
, b
FROM table
and it worked.
I had the same problem. Do not use dataReader.Read() at all; it takes the pointer to the next row. Instead, pass the reader directly to dataTable.Load(dataReader).
Encountered the same problem. I also tried selecting a unique first column, but the DataTable was still missing a row.
However, selecting the first column (which is also unique) in a GROUP BY solved the problem, i.e.:
select uniqueData,.....
from mytable
group by uniqueData;
This solves the problem.
I have to copy rows from one table to another. In the source table I can have RowError set on rows.
When I do this:
targetTable.BeginLoadData();
targetTable.Load( new DataTableReader( sourceTable ) );
targetTable.EndLoadData();
The target table does not get row errors copied on its rows from source table.
Can anyone tell me what I am supposed to do to make it work?
Thanks.
EDIT: I do not want to lose the data already present in the target table. Nor do I want to change the reference.
Try this:
targetTable = sourceTable.Copy();
Creating a reader won't give you the expected result in this case, because the reader's goal is to extract the values of each row, not the associated row properties.
Update:
In this case, you should:
foreach (DataRow drImport in sourceTable.Rows) {
targetTable.ImportRow(drImport);
}
Sorry, just before posting I saw your other observation about the reference. I'm afraid you can't have the same row (same reference) assigned to two or more tables. See this.
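If the error text still does not come across with ImportRow, one hedged fallback (a sketch only, not verified against every scenario) is to copy it by hand right after each import:

foreach (DataRow drImport in sourceTable.Rows)
{
    targetTable.ImportRow(drImport);
    // Copy the error text explicitly onto the freshly imported row.
    targetTable.Rows[targetTable.Rows.Count - 1].RowError = drImport.RowError;
}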
What I'm trying to do with the code is to export a dataset to XML.
This is what I'm currently using:
dataSet.WriteXml(fileDialog.FileName, XmlWriteMode.WriteSchema);
My dataSet is a typed dataset properly formed (by this I mean, all tables have PK, and FK relations are set between all existing tables in the dataSet). Some relationships are nested relationships. The table "TABLE" has two FK and at the same time is parent to other 8 tables.
I get the following error: "Cannot proceed with serializing DataTable 'TABLE'. It contains a DataRow which has multiple parent rows on the same Foreign Key."
Can anyone give me some pointers on what I'm doing wrong and why I'm getting this error message?
Thanks in advance.
I know it's a bit late, but I have found a workaround.
I ran into the same problem while trying to read a schema into a dataset that has the relations. The error you will get in that case is:
'The same table '{0}' cannot be the child table in two nested relations'
I will share what I have learned.
The DataSet operates in two modes, though you CANNOT tell this from the outside:
(a) I'm a strict/manually created dataset, I don't like nested relations.
(b) I'm a container for a serialized object, everything goes.
The dataset you have created is currently an 'a'; we want to make it a 'b'.
Which mode it operates in is decided when a DataSet is 'loaded' (from XML) and/or by some other considerations.
I spent feverish hours reading the code of the DataSet to figure out a way to fool it, and I found that MS could fix the problem with just the addition of a property on the dataset and a few additional checks. (Check out the source code for the DataRelation: http://referencesource.microsoft.com/#System.Data/System/Data/DataRelation.cs,d2d504fafd36cd26,references ; the only method we need to fool is the 'ValidateMultipleNestedRelations' method.)
The trick is to fool the dataset into thinking it built all the relationships itself. The only way I found to do that is to actually make the dataset create them, by using serialization.
(We are using this solution in the part of our system where we're creating output with a DataSet oriented 3rd party product.)
In meta, what you want to do is:
1. Create your dataset in code, including relationships. Try to mimic the MS naming convention (though I'm not sure if that is required).
2. Serialize your dataset (best to have no rows in it).
3. Make the serialized dataset look like MS serialized it (I'll expand on this below).
4. Read the modified dataset into a new instance.
5. Now you can import your rows; MS does not check the relationships, and things should work.
Some experimentation taught me that in this situation, less is more.
If a DataSet reads a schema and finds NO relationships or Key-Columns, it will operate in mode 'b'; otherwise it will work in mode 'a'.
It COULD be possible that we can still get a 'b' mode dataset with SOME relationships or Key-Columns, but this was not pertinent for our problem.
So, here we go, this code assumes you have an extension method 'Serialize' that knows how to handle a dataset.
Assume sourceDataSet is the DataSet with the schema only.
Target will be the actually usable dataset:
var sourceDataSet = new DataSet();
// todo: create the structure of your dataset (tables, columns, keys, relations) here.
var source = sourceDataSet.Serialize();
var endTagKeyColumn = " msdata:AutoIncrement=\"true\" type=\"xs:int\" msdata:AllowDBNull=\"false\" use=\"prohibited\" /";
var endTagKeyColumnLength = endTagKeyColumn.Length - 1;
var startTagConstraint = "<xs:unique ";
var endTagConstraint = "</xs:unique>";
var endTagConstraintLength = endTagConstraint.Length - 1;
var cleanedUp = new StringBuilder();
var subStringStart = 0;
var subStringEnd = source.IndexOf(endTagKeyColumn);
while (subStringEnd > 0)
{
// throw away unused key columns.
while (source[subStringEnd] != '<') subStringEnd--;
if (subStringEnd - subStringStart > 5)
{
cleanedUp.Append(source.Substring(subStringStart, subStringEnd - subStringStart));
}
subStringStart = source.IndexOf('>', subStringEnd + endTagKeyColumnLength) + 1;
subStringEnd = source.IndexOf(endTagKeyColumn, subStringStart);
}
subStringEnd = source.IndexOf(startTagConstraint, subStringStart);
while (subStringEnd > 0)
{
// throw away relationships.
if (subStringEnd - subStringStart > 5)
{
cleanedUp.Append(source.Substring(subStringStart, subStringEnd - subStringStart));
}
subStringStart = source.IndexOf(endTagConstraint, subStringEnd) + endTagConstraintLength;
subStringEnd = source.IndexOf(startTagConstraint, subStringStart);
}
cleanedUp.Append(source.Substring(subStringStart + 1));
var target = new DataSet();
using (var reader = new StringReader(cleanedUp.ToString()))
{
target.EnforceConstraints = false;
target.ReadXml(reader, XmlReadMode.Auto);
}
Note: as I said at the start, I had to fix this problem when we are loading the dataset; although you are saving the dataset, the workaround should be the same.
The two foreign keys are causing the problem. The other end of the keys are considered to be parents, so you've got two of them. In writing the XML, an element can have only one parent (unless the element appears twice, once under each parent). Possible solutions include removing one of the foreign keys (which I appreciate may break your app in other ways), or, depending on how your dataSet is initialised, try setting EnforceConstraints to false.
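For what it's worth, a minimal hedged sketch of those two suggestions (the relation name below is hypothetical; pick whichever relation you can live without):

// Suggestion 1: depending on how the dataSet is initialised, relax constraint checking first.
dataSet.EnforceConstraints = false;
// Suggestion 2: remove one of the two parent relations so each row has a single XML parent.
// dataSet.Relations.Remove("FK_TABLE_Parent2");
dataSet.WriteXml(fileDialog.FileName, XmlWriteMode.WriteSchema);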
During an update command I received the following error:
Operation is not valid due to the current state of the object
I tried to remove one column from the update command and it works fine.
This column is a FK that is similar to the other FK that works fine.
This is the code that executes the update:
ti.NumeroTitolo = titolo.Numero;
ti.RKTipoTitoloGenereTitolo = titolo.RkTipoTitoloGenereTitolo;
ti.RKBanca = titolo.RkBanca;
ti.DataScadenza = titolo.DataScadenza;
ti.RKTipoEsito = titolo.RkTipoEsito;
ti.ImportoTitolo = titolo.ImportoTitolo;
_dc.SubmitChanges();
Grenade's answer actually helped me because I was coming across this exception when attempting to reassign a foreign key. The relationship/constraint was preventing the key from being reassigned.
However, I was able to access the relationship item directly and reassign it, thereby reassigning the foreign key.
product.manufacturer_id = manufacturerID; //This caused the above exception
product.Manufacturer = new Manufacturer(manufacturerID);
//or
product.Manufacturer = OtherManufacturer;
The problem may be caused by a relationship or other constraint, for example if you are attempting to drop a row whose Id is referenced by some other table through a relationship. Perhaps if you post the SQL or LINQ query that is giving the error we can help you more.
I was getting the same error trying to create a subscription. Found this post:
I was also having this issue. I found a good post that told how to add a key to the web.config file in the folder C:\Program Files\Microsoft SQL Server\MSSQL.2\Reporting Services\ReportManager.
<add key="aspnet:MaxHttpCollectionKeys" value="10000" />
http://social.msdn.microsoft.com/Forums/ar-SA/sqlreportingservices/thread/c9d4431a-0d71-4782-8f38-4fb4d5b7a829
I've tried to find an answer to do this online but apparently it can't be done (I mean at the app level, not database). I have a need to clear out my dataset completely and reset the primary key at the same time. Any ideas?
Alternatively, one hack I can use is to reinitialize the dataset, but that doesn't seem possible either, since the dataset is shared between different classes in the app (I'm creating the shared dataset in Program.cs).
Thanks
Farooq
Update:
OK, I tried this:
MyDataSet sharedDS = new MyDataSet();
.
.
.
void CleanDS()
{
MyDataSet referenceDS = new MyDataSet();
sharedDS.Table1.Reset();
sharedDS.Merge(referenceDS);
}
My original problem is solved, but now I get a System.ArgumentException saying Column1 does not belong to Table1, even though I can see the columns in the DataSet Viewer as well as the populated rows. Also note that I can manually re-create the entire DataSet and I still get the same error. Any ideas?
I tried it with AutoIncrementSeed and AutoIncrementStep and it finally works. Here it is for the reference of others:
sharedDS.Clear();
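// Flipping AutoIncrementStep/AutoIncrementSeed to -1 and back to 1 resets the auto-increment counter.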
sharedDS.Table1.Columns[0].AutoIncrementStep = -1;
sharedDS.Table1.Columns[0].AutoIncrementSeed = -1;
sharedDS.Table1.Columns[0].AutoIncrementStep = 1;
sharedDS.Table1.Columns[0].AutoIncrementSeed = 1;
Please see the reasoning in this thread:
http://www.eggheadcafe.com/community/aspnet/10/25407/autoincrementseed.aspx
and:
http://msdn.microsoft.com/en-us/library/system.data.datacolumn.autoincrementseed(VS.85).aspx
Thanks all for your help!
Does the DataSet.Clear() method do what you want?
Edit
Because you say you want to reset the primary key, you must be talking about the DataTables the DataSet holds. Have you tried calling DataSet.Tables["MyTable"].Reset(), which should reset the table to its original state, rather than resetting the whole DataSet?
Edit 2
If this is an autoincrementing column, have you tried resetting the DataColumn's AutoIncrementSeed property to 0?