I have code like below (I've generified and reduced it to represent just the issue at hand). The code works: it takes a DataGridView.DataSource and ultimately, using EPPlus, outputs the data to an Excel file. My question relates to covariance and how to use it, I think.
So you see it builds newList based on the type that it has found in the DataSource. Then a little further down it adds the data using the properties someClassObject.Name, .Address and .Phone that are unique to this type.
My problem is that there are about 75 different classes that could be passed in through the DataGridView parameter. Each class has its own unique properties (i.e., not necessarily Name, Address, Phone), though all of the objects in a given DataGridView.DataSource are of the same class.
I could have a giant switch statement based on type.FullName and then each would have its own for loop to assign the Property values to the cell. That would work but would be incredibly cumbersome. Is there a better way to do this?
public void openExcelReport(ref DataGridView dataGridView, bool bolSave = false, bool bolOpen = true, string pageTitle = "EXPORTED DATA")
{
// put dataGridView.DataSource into a List
object firstItem = null;
var myDataSource = dataGridView.DataSource;
var myList = ((System.Windows.Forms.BindingSource)dataGridView.DataSource).List;
firstItem = ((System.Collections.IList)myList)[0];
var type = firstItem.GetType();
Type PROJECT1_TYPE = typeof(Project1.SomeClass);
Type PROJECT2_TYPE = typeof(Project2.SomeOtherClass); // many more of these
dynamic newList = null;
if (type.FullName.Equals(PROJECT1_TYPE.FullName))
{
newList = new List<Project1.SomeClass>();
foreach (Project1.SomeClass someClassObject in myList)
{
newList.Add(someClassObject);
}
}
ExcelPackage package = new ExcelPackage();
using (package) // use EPPlus
{
// Create the worksheet
ExcelWorksheet worksheet = package.Workbook.Worksheets.Add("Worksheet 1");
// Load the datatable into the sheet, starting from cell A1. Print the column names on row 1
System.Data.DataTable dataTable = new System.Data.DataTable();
dataTable.Columns.Add("Id");
dataTable.Columns.Add("FirstColumn", typeof(string));
dataTable.Columns.Add("SecondColumn", typeof(string));
dataTable.Columns.Add("ThirdColumn", typeof(string));
dataTable.Columns[0].AutoIncrement = true;
var column_id = 0;
foreach (Project1.SomeClass someClassObject in newList)
{
DataRow dataRow = dataTable.NewRow();
dataRow["FirstColumn"] = someClassObject.Name;
dataRow["SecondColumn"] = someClassObject.Address;
dataRow["ThirdColumn"] = someClassObject.Phone;
dataTable.Rows.Add(dataRow);
column_id += 1;
}
// worksheet is now populated, so save Excel File
...
}
}
Instead of doing the DataRow creation within this function, you could move it out to the class implementations using a common interface to enforce it, for instance:
public interface DataRowConvertable
{
DataRow GetDataRow();
}
public class SomeClass : DataRowConvertable
{
public SomeClass() { }
public SomeClass(string name, string address, string phone)
{
Name = name;
Address = address;
Phone = phone;
}
public string Name { get; set; }
public string Address { get; set; }
public string Phone { get; set; }
public DataRow GetDataRow()
{
DataRow row = GetDataTable().NewRow();
row["Name"] = this.Name;
row["Address"] = this.Address;
row["Phone"] = this.Phone;
return row;
}
public static DataTable GetDataTable()
{
DataTable table = new DataTable("SomeClassTable");
table.Columns.Add("Name", typeof(string));
table.Columns.Add("Address", typeof(string));
table.Columns.Add("Phone", typeof(string));
return table;
}
}
You could take it further, but this should give you a good alternative and a place to start. You can either leave the GetDataTable function public, and use that as well to create your table instance, or make it private and only use it internally. I would opt for the former and use that in your function to initialize the table before filling it. You could even get rid of the static modifier and add it to your interface, but I prefer the static usage of it in this instance since it is not reliant on the instance of the class and the data involved, only on the structure.
Either way, you could then change the code you have above to look like this:
ExcelWorksheet worksheet = package.Workbook.Worksheets.Add("Worksheet 1");
System.Data.DataTable dataTable = Project1.SomeClass.GetDataTable();
foreach (Project1.SomeClass someClassObject in myList)
{
dataTable.Rows.Add(someClassObject.GetDataRow());
}
If you need an incremented id column, you could easily add that in the GetDataTable/GetDataRow functions and update them just as you were above.
This is just a quick example, it could very likely be cleaned up and optimized some, but it still conveys the idea. Hope it helps you out some.
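If writing a GetDataRow/GetDataTable pair for all ~75 classes is still too much, a reflection-based helper can build the table from any element type's public properties. This is a sketch under the assumption that every property maps one-to-one to a column (the DataTableBuilder name is mine, not from any library):

```csharp
using System;
using System.Collections;
using System.Data;
using System.Reflection;

public static class DataTableBuilder
{
    // Builds a DataTable whose columns mirror the public instance
    // properties of the element type, then fills one row per item.
    public static DataTable FromList(IList items, Type itemType)
    {
        PropertyInfo[] props = itemType.GetProperties(
            BindingFlags.Public | BindingFlags.Instance);

        var table = new DataTable(itemType.Name);
        foreach (PropertyInfo prop in props)
        {
            // Unwrap Nullable<T> so the column gets the underlying type;
            // DataTable columns represent missing values as DBNull instead.
            Type colType = Nullable.GetUnderlyingType(prop.PropertyType)
                           ?? prop.PropertyType;
            table.Columns.Add(prop.Name, colType);
        }

        foreach (object item in items)
        {
            DataRow row = table.NewRow();
            foreach (PropertyInfo prop in props)
                row[prop.Name] = prop.GetValue(item, null) ?? DBNull.Value;
            table.Rows.Add(row);
        }
        return table;
    }
}
```

With something like this in place, openExcelReport would not need a type switch at all: `var table = DataTableBuilder.FromList((IList)myList, firstItem.GetType());`.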
Related
I have DataTable object with test data:
DataTable testData = new DataTable();
I'd like to assign data to variables which are the same as column names. I can do it like this:
string foo = testData.Rows[1]["foo"].ToString();
string bar = testData.Rows[1]["bar"].ToString();
or:
string foo = testData.Rows[1][nameof(foo)].ToString();
string bar = testData.Rows[1][nameof(bar)].ToString();
But I don't want to spell out the variable name every time; I'd like to use something like this:
string foo = testData.Rows[1][nameof(this)].ToString();
string bar = testData.Rows[1][nameof(this)].ToString();
Is it possible?
Why do you consider
string foo = testData.Rows[1][nameof(foo)].ToString();
to be more elegant than
string anyName = testData.Rows[1]["foo"].ToString();
You'd have to provide the name anyway. However, variable names don't mean anything to the compiler; they are just arbitrary labels chosen to make the code more readable.
Instead of relying on variable names, why not create a list of names and access the rows by the elements within that list?
var myList = new List<string> {"foo", "bar", ... };
Now you can just loop your list and get the rows value:
foreach(var name in myList)
{
var a = testData.Rows[1][name].ToString();
// do something with a
}
You probably want a class to represent the data in the row. You could then populate an object with those properties, either using reflection or serialization. Here's a possibility using simple reflection:
class MyRow
{
public string foo { get; set; }
public string bar { get; set; }
}
var row = testData.Rows[1];
var myRow = new MyRow();
foreach (DataColumn col in testData.Columns)
{
var prop = typeof(MyRow).GetProperty(col.ColumnName);
prop.SetValue(myRow, row[col] == DBNull.Value ? string.Empty : (string)row[col], null);
}
You now have an object that has properties of foo and bar.
Or, using serialization instead, it looks like the DataTable serializes nicely into a collection of objects so you can serialize the whole table, then grab the record you want after you deserialize the table:
var tableJson = JsonConvert.SerializeObject(testData);
myRow = JsonConvert.DeserializeObject<MyRow[]>(tableJson)[1];
Alternatively, loop through the columns:
foreach(DataColumn col in testData.Columns)
{
var colValue = testData.Rows[1][col.ColumnName];
}
I'm using FastMember.ObjectReader to copy a list of structs to a DataTable, which I then use as the DataSource of a gridview:
struct Foo {
[DisplayName("title1")]
public string Bar { get; set; }
}
...
var rows = new List<Foo>();
rows.Add(new Foo { Bar = "somethingsomething" });
DataTable table = new DataTable();
using (var reader = ObjectReader.Create(rows)) {
table.Load(reader);
}
grid.DataSource = table.DefaultView;
If I select the list itself as the DataSource, the DisplayNames are used as column titles instead of the struct member name:
How can I recreate that when using FastMember.ObjectReader?
Oh, I see what you mean; you want the IDataReader to expose the [DisplayName] in the metadata; however, the primary way that is exposed is via GetSchemaTable(), and AFAIK there is no recognised key to represent [DisplayName]. It would be incorrect to pass that as the name, IMO.
Running a quick test:
var table = new DataTable();
table.Columns.Add("foo").Caption = "bar";
var schema = table.CreateDataReader().GetSchemaTable();
foreach(DataRow row in schema.Rows)
{
foreach(DataColumn col in schema.Columns)
{
Console.WriteLine($"{col.ColumnName}={row[col]}");
}
Console.WriteLine();
}
shows that there is indeed no recognised slot for it:
ColumnName=foo
ColumnOrdinal=0
ColumnSize=-1
NumericPrecision=
NumericScale=
DataType=System.String
ProviderType=
IsLong=False
AllowDBNull=True
IsReadOnly=False
IsRowVersion=False
IsUnique=False
IsKey=False
IsAutoIncrement=False
BaseCatalogName=
BaseSchemaName=
BaseTableName=
BaseColumnName=foo
AutoIncrementSeed=0
AutoIncrementStep=1
DefaultValue=
Expression=
ColumnMapping=1
BaseTableNamespace=
BaseColumnNamespace=
This means that there isn't really anything I can suggest other than manually populating the .Caption, perhaps using fast-member to get the data.
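Building on that suggestion, one workaround (a sketch of my own, not a FastMember feature) is to load the table and then rename the columns from the [DisplayName] attributes via reflection. Note that DataGridView shows ColumnName rather than Caption, so renaming is what actually changes the header text:

```csharp
using System.ComponentModel;
using System.Data;
using System.Reflection;

public static class DisplayNameHelper
{
    // After DataTable.Load(reader), rename each column whose matching
    // property on T carries [DisplayName]; columns without the
    // attribute keep the member name.
    public static void ApplyDisplayNames<T>(DataTable table)
    {
        foreach (DataColumn col in table.Columns)
        {
            PropertyInfo prop = typeof(T).GetProperty(col.ColumnName);
            var attr = prop?.GetCustomAttribute<DisplayNameAttribute>();
            if (attr != null)
                col.ColumnName = attr.DisplayName;
        }
    }
}
```

Usage after loading: `table.Load(reader); DisplayNameHelper.ApplyDisplayNames<Foo>(table); grid.DataSource = table.DefaultView;`.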
Is it possible to cast a type from a variable? I'm extracting data from a spreadsheet into a class, but because some columns are strings, and others DateTime, I really want a do-all command so that I don't need to map everything manually.
What I have so far:
foreach (PropertyInfo pinfo in asset.GetType().GetProperties())
{
string columnType = (from DataColumn column in data.Columns where column.ColumnName == pinfo.Name select column).First().DataType.ToString();
pinfo.SetValue(asset, row.Field<columnType>(pinfo.Name));
}
At the moment, it doesn't like row.Field<columnType>(), because columnType is a string variable, not an actual type.
Is it possible to do something like the above, where I'm getting the type contained in a column, and casting this to retrieve the data for that column? I'm in this situation, as I want to retrieve anything using the following statement, regardless of whether it's a string, int or DateTime.
var foo = row.Field<string>("Column Name");
Is there any generic command I can use? Thanks
PropertyInfo.SetValue accepts value as object. If your DataTable has column of appropriate type, then just get column value with row[columnName] which also will be returned as object, and use it to set property value:
foreach (PropertyInfo pinfo in asset.GetType().GetProperties())
{
pinfo.SetValue(asset, row[pinfo.Name]);
}
Sample: consider you have class
public class User
{
public int Id { get; set; }
public string Name { get; set; }
}
And you have DataTable which has columns with appropriate types:
DataTable dt = new DataTable();
dt.Columns.Add("Id", typeof(int));
dt.Columns.Add("Name", typeof(string));
dt.Rows.Add(1, "Bob");
dt.Rows.Add(2, "John");
Then filling user from DataTable will look like:
var user = new User();
var row = dt.Rows[0];
foreach (PropertyInfo pinfo in user.GetType().GetProperties())
pinfo.SetValue(user, row[pinfo.Name]);
NOTE: You can skip properties which don't have appropriate type, also you can handle case when there is no column with property name:
foreach (PropertyInfo pinfo in user.GetType().GetProperties())
{
if (!dt.Columns.Contains(pinfo.Name) ||
dt.Columns[pinfo.Name].DataType != pinfo.PropertyType)
continue;
pinfo.SetValue(user, row[pinfo.Name]);
}
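One more edge case worth guarding (my addition, not part of the answer above): a cell can hold DBNull.Value, which SetValue will reject for value-type properties. A sketch that wraps the same loop in a helper and skips those cells:

```csharp
using System;
using System.Data;
using System.Reflection;

public static class RowMapper
{
    // Fills matching properties from a DataRow, skipping missing
    // columns, type mismatches, and DBNull cells (which SetValue
    // would reject for value-type properties).
    public static void Fill(object target, DataRow row)
    {
        DataTable dt = row.Table;
        foreach (PropertyInfo pinfo in target.GetType().GetProperties())
        {
            if (!dt.Columns.Contains(pinfo.Name) ||
                dt.Columns[pinfo.Name].DataType != pinfo.PropertyType)
                continue;
            object value = row[pinfo.Name];
            if (value == DBNull.Value)
                continue; // leave the property at its default
            pinfo.SetValue(target, value, null);
        }
    }
}
```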
I have a collection of objects with no key or order or other obvious index.
I wish to store data regarding each object in a DataTable. I thought an elegant way of doing this would be to store a reference in the owner column, and make that column's type typeof(MyClass).
However, when I try to do this in practice, it doesn't work (it says the primary keys collide). It turns out that putting the instances into a row field just writes "MyProgram.MyClass" into the field - presumably the output of ToString(), even though that column's type was supposed to be MyClass, not string.
Here is some sample code which works in LINQPad:
void Main()
{
// Create a table
var table = new DataTable();
var ownerColumn = new DataColumn("Owner", typeof(MyClass));
var primaryKey = new[] { ownerColumn };
table.Columns.AddRange(primaryKey);
table.PrimaryKey = primaryKey;
table.Columns.Add(new DataColumn("Some Data", typeof(int)) { DefaultValue = 0 });
// Create 2 objects
var c1 = new MyClass();
var c2 = new MyClass();
// Store their data in the table
var row = table.NewRow();
row["Owner"] = c1;
row["Some Data"] = 1;
table.Rows.Add(row);
row = table.NewRow();
row["Owner"] = c2;
row["Some Data"] = 2;
table.Rows.Add(row);
}
// Define other methods and classes here
class MyClass {
}
What do I do to solve this? Do I have to make an id field in MyClass, then use id to fill in the owner column, and then make sure each object receives a unique id at creation myself?
You have to implement the System.IComparable (non-generic) interface on MyClass so that DataTable knows how to compare the values in the column. If this interface is not implemented, the code falls back on comparing object.ToString() results.
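For example, here is a minimal sketch of MyClass implementing the non-generic interface; the auto-assigned id field is an assumption added purely to give each instance a stable, unique comparison key:

```csharp
using System;
using System.Threading;

class MyClass : IComparable
{
    // Hypothetical per-instance id (the original class has none);
    // it gives DataTable a distinct value to compare and key on.
    private static int nextId;
    private readonly int id = Interlocked.Increment(ref nextId);

    public int CompareTo(object obj)
    {
        if (obj is MyClass other)
            return id.CompareTo(other.id);
        throw new ArgumentException("Not a MyClass", nameof(obj));
    }
}
```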
You can use an auto-increment column:
DataTable dTable = new DataTable();
DataColumn auto = new DataColumn("AutoID", typeof(System.Int32));
dTable.Columns.Add(auto);
auto.AutoIncrement = true;
auto.AutoIncrementSeed = 1;
auto.ReadOnly = true;
I'm running into what appears to be unexpected behavior when utilizing the FieldNotInFile attribute in my mapping file. Please see below, abbreviated examples of what I have configured.
The mapping for the header record is defined separately to keep the option open for MasterDetail engine:
public class HeaderMapping
{
public string ID;
public DateTime RptDateFrom;
public DateTime RptDateTo;
public DateTime GenerationDate;
....
}
I would like to combine the values retrieved from the header into the final record result so they are specified with the FieldNotInFile attribute to be added later.
public class RecordMapping
{
// Values not in source
[FieldNotInFile()]
public string ID;
[FieldNotInFile()]
public DateTime RptDateFrom;
[FieldNotInFile()]
public DateTime RptDateTo;
[FieldNotInFile()]
public DateTime GenerationDate;
// Start values from source
public string RowPrefix;
public string Field1;
public string Field2;
public string Field3;
....
}
In the engine execution I have defined two instances. The first to capture the single header record and parse out its values. The AfterReadRecord event is used to stop the engine after the first line.
static void Main(string[] args)
{
// Extract the header
FileHelperEngine<HeaderMapping> headerEngine = new FileHelperEngine<HeaderMapping>();
headerEngine.AfterReadRecord +=
new FileHelpers.Events.AfterReadHandler<HeaderMapping>(AfterHeaderRead);
HeaderMapping[] headerRecord = headerEngine.ReadFile(source.FullName);
// Capture Values
companyId = headerRecord[0].ID;
rptDateFrom = headerRecord[0].RptDateFrom;
rptDateTo = headerRecord[0].RptDateTo;
generationDate = headerRecord[0].GenerationDate;
....
Next the record engine is created. The BeforeReadRecord event is used to insert the previously captured values into the placeholders signified in the RecordMapping with FieldNotInFile attributes.
....
// Extract the Records
FileHelperEngine<RecordMapping> recordEngine = new FileHelperEngine<RecordMapping>();
recordEngine.BeforeReadRecord +=
new FileHelpers.Events.BeforeReadHandler<RecordMapping>(BeforeThisRecord);
DataTable outputTable = recordEngine.ReadFileAsDT(source.FullName);
}
....
private static void BeforeThisRecord(EngineBase engine, BeforeReadEventArgs<RecordMapping> e)
{
e.Record.ID = companyId;
e.Record.RptDateFrom = rptDateFrom;
e.Record.RptDateTo = rptDateTo;
e.Record.GenerationDate = generationDate;
}
The outputTable result is not as expected. The fields marked as FieldNotInFile are completely omitted from the DataTable result. When debugging the process, the BeforeThisRecord method executes correctly and assigns the appropriate values but this is not reflected in the output. The DataTable columns are output as RowPrefix, Field1, Field2, etc. and not ID, RptDateFrom, RptDateTo, GenerationDate, RowPrefix, etc.
Strangely, when I use the alternate method...
List <RecordMapping> recordList = recordEngine.ReadFileAsList(source.FullName);
The list items contain the RecordMapping objects with ALL of the correct values. It seems as though the DataTable translation of FieldNotInFile attributes is the culprit. Am I doing this wrong? Is this a bug?
You are correct that ReadFileAsDT() does not include the FieldNotInFile fields in the DataTable. It might be a bug, but honestly, I'm not sure how FieldNotInFile is supposed to be used - it's not in the documentation here.
I think you're better off using the Master Detail engine or alternatively just doing
RecordMapping[] recordMappings = recordEngine.ReadFile(source.FullName);
and then if you really need a DataTable, populate it yourself with something like:
DataTable outputTable = new DataTable(); // New data table.
outputTable.Columns.Add("ID", typeof(String)); // Add all columns.
outputTable.Columns.Add("RptDateFrom", typeof(DateTime));
outputTable.Columns.Add("RptDateTo", typeof(DateTime));
outputTable.Columns.Add("GenerationDate", typeof(DateTime));
outputTable.Columns.Add("RowPrefix", typeof(String));
outputTable.Columns.Add("Field1", typeof(String));
outputTable.Columns.Add("Field2", typeof(String));
outputTable.Columns.Add("Field3", typeof(String));
foreach (RecordMapping recordMapping in recordMappings)
{
outputTable.Rows.Add(
companyId,
rptDateFrom,
rptDateTo,
generationDate,
recordMapping.RowPrefix,
recordMapping.Field1,
recordMapping.Field2,
recordMapping.Field3);
}