I have a winforms app that uses a strongly typed custom DataSet for holding data for processing. It gets populated with data from a database.
I have a user control that takes any custom dataset and displays the contents in a data grid. This is used for testing and debugging. To make the control reusable, I treat the custom dataset as a normal System.Data.DataSet.
I have extended the control to allow the dataset to be saved to an XML file and also load previously saved XML files.
What I'm now trying to do is take the loaded data file, which is treated as a standard DataSet, and cast it back to the Custom Dataset. This shouldn't be difficult but I am getting the following System.InvalidCastException message:
Unable to cast object of type 'System.Data.DataSet' to type
'CostingDataSet'.
Here is an example of the problem code (it's the last of the three lines that generates the exception):
DataSet selected = debugDisplay.SelectedDataSet;
CostingDataSet tempDS = new CostingDataSet();
tempDS = (CostingDataSet)selected.Copy();
Can anyone give me a steer on how to fix this?
Edit:
Following the comments from nEM, I implemented this and all was good.
foreach (System.Data.DataTable basicDT in selected.Tables)
{
    // Merge the rows of each loaded table into the matching
    // table of the typed DataSet.
    tempDS.Tables[basicDT.TableName].Merge(basicDT);
}
In addition, the code suggested by SSarma also works.
From what I have gathered from this website, you can't cast a regular dataset to a typed one, which makes sense: the typed one is strongly typed and has certain specifications. If you have saved it as a regular dataset, then when you deserialise it, the XML has no recollection of it ever having been created as a typed dataset. In the XML file you only ever saved a regular dataset, so this is equivalent to trying to convert a standard dataset into a typed one by explicit casting, which isn't allowed.
You could create a populate method that takes a regular dataset as an argument and copies all the data into your typed dataset.
This is assuming that you are serialising it as a standard dataset.
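A minimal sketch of such a populate method, assuming the plain dataset's table and column names match the typed schema (CostingDataSet is the type from the question):

using System.Data;

// Hypothetical helper: copies all rows from a plain DataSet into the
// typed one. The table-name matching works here because the file was
// originally written from a CostingDataSet.
public static void Populate(CostingDataSet target, DataSet source)
{
    foreach (DataTable sourceTable in source.Tables)
    {
        DataTable targetTable = target.Tables[sourceTable.TableName];
        foreach (DataRow row in sourceTable.Rows)
        {
            targetTable.ImportRow(row); // preserves values and row state
        }
    }
}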
How about using streams? (Sorry, the following code is not tested, but you get the idea.)
DataSet selected = debugDisplay.SelectedDataSet;
CostingDataSet tempDS = new CostingDataSet();

// Round-trip the XML through a memory stream so the typed
// DataSet can read the untyped one's contents.
using (var ms = new System.IO.MemoryStream())
{
    selected.WriteXml(ms);
    ms.Position = 0;
    tempDS.ReadXml(ms);
}
Related
If I understand this right, do I have to use the table adapters to get data into my typed dataset? I can't just create my strongly typed dataset and have the data load automatically? (I'm also using a .NET 3.5 project in VS2012.)
For example, I have to do this to get data (if I do it this way, I get data):
var a = new EvtDataSetTableAdapters.tblFileTableAdapter();
a.GetData();
versus just doing this (if I do it this way, I get nothing... and I could understand it if it's lazy loading?):
EvtDataSet o = new EvtDataSet();
var r = o.tblFile.Select();
All DataSets (typed and untyped) are database-agnostic, i.e. any DataTable can be filled from Oracle just as easily as from MS SQL. The DataSet itself has no knowledge of database schemas or connection strings.
You need an Adapter to read from / write to a backing store.
(And DataTable.Select() only filters the rows already in the table; it never touches the database, so on a freshly created, empty dataset it returns nothing.)
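For example, with the names from the question, something along these lines fills the typed table through its adapter before the in-memory Select is useful (a hedged sketch; the generated adapter's default Fill method is assumed):

// Fill the typed table from the database via its generated adapter,
// then query the rows that are now in memory.
var ds = new EvtDataSet();
using (var adapter = new EvtDataSetTableAdapters.tblFileTableAdapter())
{
    adapter.Fill(ds.tblFile);   // pulls rows from the backing store
}
var rows = ds.tblFile.Select(); // filters rows already in the table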
The much maligned, much misunderstood strongly typed dataset!
Typically, yes, you would use a TableAdapter to load the data and to perform updates.
Using the designer, you would add parameter queries to the table adapter to support the operations your program requires, e.g. SELECT * FROM Customers WHERE CustomerID = @CustomerID.
Call this FillByCustomerId.
Then you would pull the data for the selected customer using the TableAdapter, with something along the lines of:
Dim ta As New dsCustomersTableAdapters.CustomerTableAdapter
Dim ds As New dsCustomers
ta.FillByCustomerId(ds.Customers, ourId)
BEST usage of typed datasets: ignore them, never use them. Go with an ORM instead. Use LINQ. Datasets, typed and untyped, were bad even when they arrived in .NET 1.0, and since then even MS has released alternatives. I have not used one in 10 years and am not going to.
Exception: reporting applications where the SQL is "entered externally", so you basically just need a generic data container.
Use Entity Framework or one of the tons of alternatives.
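For illustration, the customer lookup from the typed-dataset answer above might look like this with Entity Framework (entity, context, and property names here are hypothetical):

using System.Data.Entity;
using System.Linq;

// Hypothetical EF model replacing the dsCustomers typed dataset.
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

public class CustomersContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

// Usage: the FillByCustomerId call becomes a LINQ query.
// using (var db = new CustomersContext())
// {
//     var customer = db.Customers
//         .SingleOrDefault(c => c.CustomerId == ourId);
// }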
We have created a custom dataset and are populating it with some data.
Before adding data, we add the columns to the dataset as follows:
DataSet archiveDataset = new DataSet("Archive");
DataTable dsTable = archiveDataset.Tables.Add("Data");
dsTable.Columns.Add("Id", typeof(int));
dsTable.Columns.Add("Name", typeof(string));
dsTable.Columns.Add("LastOperationBy", typeof(int));
dsTable.Columns.Add("Time", typeof(DateTime))
Once the dataset is created, we fill in the values as follows:
DataRow dataRow = dsTable.NewRow();
dataRow["Id"] = source.Id;
dataRow["Name"] = source.Name;
dataRow["LastOperationBy"] = source.LastOperationBy;
dataRow["Time"] = source.LaunchTime;
dsTable.Rows.Add(dataRow); // without this, the new row never lands in the table
Is there a better, more managed way of doing this? Can I make the code easier to write, using an enum or anything else to reduce the effort?
You could try using a Typed Dataset.
This should get rid of the ["<column_name>"] ugliness.
If the dataset has a structure similar to tables in a database, then Visual Studio makes it really easy to create one: just click Add -> New Item somewhere in the solution and choose DataSet. VS will show a designer where you can drag tables from your server explorer.
Update (after response to Simon's comment):
A typed dataset is in fact defined by an XSD (XML Schema Definition).
What I did in a similar case was:
created an empty DataSet (using Add -> New Item -> DataSet)
opened the newly created file with a text editor (by default, VS opens it in the XSD designer)
pasted in the XSD that I had created manually
You could also choose to use the designer to create the schema.
Considering your comment "I am using Dataset to export data to a XML file", I recommend using a different technology, such as:
Linq to XML http://msdn.microsoft.com/en-us/library/bb387061.aspx or
XML Serialization http://msdn.microsoft.com/en-us/library/system.xml.serialization.xmlserializer.aspx
Or better yet, if it doesn't have to be XML and you only want hierarchical, readable text, consider JSON instead: http://james.newtonking.com/pages/json-net.aspx
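For instance, a rough LINQ to XML sketch of the same export (the "Archive"/"Data" element names mirror the question's DataSet, and source is the question's object):

using System.Xml.Linq;

// Build the same Archive/Data shape directly, without a DataSet.
var doc = new XDocument(
    new XElement("Archive",
        new XElement("Data",
            new XElement("Id", source.Id),
            new XElement("Name", source.Name),
            new XElement("LastOperationBy", source.LastOperationBy),
            new XElement("Time", source.LaunchTime))));
doc.Save("archive.xml"); // the file name is illustrative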
You can populate a dataset in two ways: the first is from a database, the second is adding rows manually.
After creating the columns of the dataset, you can add rows using loops; that works even if you have 10,000 entries.
You can use reflection. Another option is to use Entity Framework or NHibernate to map the column names to your data structure and avoid this code that fills each field manually, but they will add more complexity. Also, performance-wise, your current code is better.
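A minimal sketch of the reflection idea: copy same-named public properties of any source object into a DataRow (the name-matching rule is an assumption; adapt it to your own mapping):

using System;
using System.Data;

static class DataRowFiller
{
    // Copies every public property whose name matches a column.
    public static void FillRow(DataRow row, object source)
    {
        foreach (var prop in source.GetType().GetProperties())
        {
            if (row.Table.Columns.Contains(prop.Name))
            {
                row[prop.Name] = prop.GetValue(source, null) ?? DBNull.Value;
            }
        }
    }
}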
What I'm trying to do with the code is to export a dataset to XML.
This is what I'm currently using:
dataSet.WriteXml(fileDialog.FileName, XmlWriteMode.WriteSchema);
My dataSet is a properly formed typed dataset (by this I mean all tables have a PK, and FK relations are set between all existing tables in the dataSet). Some relationships are nested. The table "TABLE" has two FKs and is at the same time parent to eight other tables.
I get the following error: "Cannot proceed with serializing DataTable 'TABLE'. It contains a DataRow which has multiple parent rows on the same Foreign Key."
Can anyone give me some pointers on what I'm doing wrong and why I'm getting this error message?
Thanks in advance.
I know it's a bit late, but I have found a workaround.
I ran into the same problem while trying to read a schema into a dataset that has the relations. The error you will get in that case is:
'The same table '{0}' cannot be the child table in two nested relations'
I will share what I have learned.
The dataset operates in two modes, though you cannot tell this from the outside:
(a) "I'm a strict, manually created dataset; I don't like nested relations."
(b) "I'm a container for a serialized object; anything goes."
The dataset you have created is currently an (a); we want to make it a (b).
Which mode it operates in is decided when a DataSet is 'loaded' (from XML), among some other considerations.
I spent feverish hours reading the code of the DataSet to figure out a way to fool it, and I found that MS could fix the problem with just the addition of a property on the dataset and a few additional checks. (Check out the source code for DataRelation at http://referencesource.microsoft.com/#System.Data/System/Data/DataRelation.cs,d2d504fafd36cd26,references; the only method we need to fool is 'ValidateMultipleNestedRelations'.)
The trick is to fool the dataset into thinking it built all the relationships itself. The only way I found to do that is to actually make the dataset create them, by using serialization.
(We are using this solution in the part of our system where we're creating output with a DataSet-oriented 3rd-party product.)
In meta, what you want to do is:
1. Create your dataset in code, including relationships. Try to mimic the MS naming convention (though I'm not sure that is required).
2. Serialize your dataset (best to have no rows in it yet).
3. Make the serialized dataset look like MS serialized it (I'll expand on this below).
4. Read the modified dataset into a new instance.
5. Now you can import your rows; MS does not check the relationships, and things should work.
Some experimentation taught me that in this situation, less is more.
If a DataSet reads a schema and finds NO relationships or key columns, it will operate in mode (b); otherwise it will work in mode (a).
It COULD be possible to still get a mode-(b) dataset with SOME relationships or key columns, but this was not pertinent to our problem.
So, here we go. This code assumes you have an extension method 'Serialize' that knows how to handle a dataset.
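For completeness, a minimal version of that assumed 'Serialize' helper might look like this (the author's actual implementation may differ):

using System.Data;
using System.IO;

static class DataSetSerializationExtensions
{
    // Writes the DataSet, schema included, to an XML string.
    public static string Serialize(this DataSet ds)
    {
        using (var writer = new StringWriter())
        {
            ds.WriteXml(writer, XmlWriteMode.WriteSchema);
            return writer.ToString();
        }
    }
}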
Assume sourceDataSet is the DataSet with the schema only.
Target will be the actually usable dataset:
var sourceDataSet = new DataSet();
// todo: create the structure of your dataset here, before serializing.
var source = sourceDataSet.Serialize();

var endTagKeyColumn = " msdata:AutoIncrement=\"true\" type=\"xs:int\" msdata:AllowDBNull=\"false\" use=\"prohibited\" /";
var endTagKeyColumnLength = endTagKeyColumn.Length - 1;
var startTagConstraint = "<xs:unique ";
var endTagConstraint = "</xs:unique>";
var endTagConstraintLength = endTagConstraint.Length - 1;

var cleanedUp = new StringBuilder();
var subStringStart = 0;
var subStringEnd = source.IndexOf(endTagKeyColumn);
while (subStringEnd > 0)
{
    // throw away unused key columns.
    while (source[subStringEnd] != '<') subStringEnd--;
    if (subStringEnd - subStringStart > 5)
    {
        cleanedUp.Append(source.Substring(subStringStart, subStringEnd - subStringStart));
    }
    subStringStart = source.IndexOf('>', subStringEnd + endTagKeyColumnLength) + 1;
    subStringEnd = source.IndexOf(endTagKeyColumn, subStringStart);
}

subStringEnd = source.IndexOf(startTagConstraint, subStringStart);
while (subStringEnd > 0)
{
    // throw away relationships.
    if (subStringEnd - subStringStart > 5)
    {
        cleanedUp.Append(source.Substring(subStringStart, subStringEnd - subStringStart));
    }
    subStringStart = source.IndexOf(endTagConstraint, subStringEnd) + endTagConstraintLength;
    subStringEnd = source.IndexOf(startTagConstraint, subStringStart);
}
cleanedUp.Append(source.Substring(subStringStart + 1));

var target = new DataSet();
using (var reader = new StringReader(cleanedUp.ToString()))
{
    target.EnforceConstraints = false;
    target.ReadXml(reader, XmlReadMode.Auto);
}
Note: as I said at the start, I had to fix this problem when loading the dataset; though you are saving the dataset, the workaround will be the same.
The two foreign keys are causing the problem. The other ends of the keys are considered to be parents, so you've got two of them. When writing the XML, an element can have only one parent (unless the element appears twice, once under each parent). Possible solutions include removing one of the foreign keys (which I appreciate may break your app in other ways) or, depending on how your dataSet is initialised, setting EnforceConstraints to false.
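A hedged sketch of both options against the question's code (the relation name is hypothetical; look yours up in dataSet.Relations):

// Option 1: leave at most one of TABLE's two parent relations nested,
// so each row serializes under a single parent element.
dataSet.Relations["FK_Parent2_TABLE"].Nested = false;

// Option 2: relax constraint checking before writing.
dataSet.EnforceConstraints = false;

dataSet.WriteXml(fileDialog.FileName, XmlWriteMode.WriteSchema);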
Background
Here is my issue. Earlier in the course of my program a System.Data.DataSet was serialized out to a file. Then sometime later the data set schema was changed in the program, specifically a column was added to one of the tables.
This data set was created using VS and C#, so it has all the properties for accessing the rows and columns by name (through the Microsoft-generated code). It has all the files (.xsd, .cs, etc.) that VS needs to know what the data set looks like and the names therein.
The file is loaded and saved through XML Serialization. This causes an issue now because when I deserialize the old file it loads in the data related to the old schema. This works for the most part, but the object that is created (the data set) has everything but the column that was added later. So, when trying to access the new column it fails because the deserialization did not know about it and the entire column winds up being null.
This now causes more issues because it throws an exception when trying to access that column (because it's null) through the properties of the data set.
Question
My question is, can I somehow add in the column after deserialization? I apparently need to add it so that it complies with the Microsoft-generated code, because doing this:
myDataSet.myTable.Columns.Add("MyMissingColumn");
...does not add the column it needs. It may add a column, but the row property myDataRow.MyMissingColumn returns null and errors out.
Do I need to somehow copy the new schema into this object? Again, the only reason this is failing is because the old file was serialized using the old schema.
Any suggestions are appreciated.
Why don't you load the schema from the new schema file, and then load the old data? Provided your column allows nulls, it should be fine.
DataSet data = new DataSet();
data.ReadXmlSchema(schemaFile);                   // new schema, including the added column
data.ReadXml(dataFile, XmlReadMode.IgnoreSchema); // old data only
Otherwise just add it on the fly:
if (!data.Tables[0].Columns.Contains("SomeId"))
{
var column = new DataColumn("SomeId", typeof(int));
// give it a default value if you don't want null
column.DefaultValue = 1;
// should it support null values?
column.AllowDBNull = false;
data.Tables[0].Columns.Add(column);
}
You are adding a new column without specifying its data type, which is strange; I would specify typeof(string) using another overload of Add.
Besides this, it's understandable that you cannot do myDataRow.MyMissingColumn, because there was no typed column mapping in the initial version of the XSD. But can you access the column by name or index anyway?
Try something like this:
myDataRow["MyMissingColumn"] = "Test";
var s = myDataRow["MyMissingColumn"].ToString();
I am using VS2008 with Crystal Reports and I'm wondering how I can dynamically add rows of data to a *.rpt file (using C#).
In more detail, I want to create a function that populates a *.rpt file with data that might contain lists (for example "FirstName", "LastName", List<Friend>; Friend being an object that contains multiple fields like "FriendNr", "Address", ...).
The code that I have used so far is:
ReportDocument rpt = new ReportDocument();
MemoryStream stream = new MemoryStream();
string filename = filepath + "/myRpt.rpt";
rpt.Load(filename);
rpt.SetParameterValue(0, myObject.FirstName);
rpt.SetParameterValue(1, myObject.LastName);
Inside the rpt file I have placed FieldObjects (Parameter Fields), and I populate the file with data by assigning the desired values to these objects (rpt.SetParameterValue(0, myObject.FirstName);).
Please help me find a way to also populate the report with the rows of data contained in the List.
Thanks a lot for your time.
I don't think it is possible to add data rows to a report this way. I suggest using a Typed DataSet as your report data source. The report can then display as many Friend objects as you require.
Dynamic 'rows' approaches:
1). You could add more items to the parameter's CurrentValues collection. Not sure how you are using this in the report, but it may work for your purposes. Look at the ParameterFieldDefinition class for more information.
2). Create a DataSet, modify as necessary, then assign it to the report. Use the ReportDocument.SetDataSource() method to bind a report to data programmatically (a sketch follows after this list).
3). Another approach is to build a report that uses XML data, then programmatically modify the XML, then refresh the report.
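A sketch of approach 2, assuming table and column names that match whatever the report was designed against (myObject and its Friends list come from the question; the FriendNr type is a guess):

using System.Data;
using CrystalDecisions.CrystalReports.Engine;

// Build an in-memory DataSet holding one row per Friend.
var ds = new DataSet();
var friends = ds.Tables.Add("Friends");
friends.Columns.Add("FriendNr", typeof(int));
friends.Columns.Add("Address", typeof(string));
foreach (var friend in myObject.Friends)
{
    friends.Rows.Add(friend.FriendNr, friend.Address);
}

var rpt = new ReportDocument();
rpt.Load(filepath + "/myRpt.rpt");
rpt.SetDataSource(ds); // bind the report to the in-memory data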