I'm struggling to find the best way to store and represent the data I have in SQL (a MySQL DB) and a C# Windows Forms app.
My data, when mapped to classes, looks like this:
public class Parent
{
public string UniqueID { get; set; } //Key
public DateTime LoadTime { get; set; }
public string Reference { get; set; }
private List<Child> Elements { get; set; }
}
public class Child
{
public int MemberCode { get; set; } //Composite key
public int ElementCode { get; set; } //Composite key
public Object Data { get; set; }
}
My data is very dynamic, so a parent record can have any number of child records.
In the child record, the MemberCode and ElementCode are actually foreign keys to other tables/classes, which, when a look-up is performed, give me details of what the data actually is. For example:
MemberCode = 1 & ElementCode = 1 means data is a Date
MemberCode = 1 & ElementCode = 3 means data is a telephone number
MemberCode = 2 & ElementCode = 11 means data is a Product Code
MemberCode = 2 & ElementCode = 12 means data is a Service Code
etc
These effectively combine to indicate what the column name is, and these are always the same (so MemberCode = 1 & ElementCode = 1 will always be a Date no matter which child object it is associated with).
At the moment these are references/lookups, but I could also put the data in a variable in the class, as that might make it easier. Then it would be more like a key-value pair, something like the sketch below.
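For illustration only, the lookup could be held in memory like this (a rough sketch; the dictionary contents mirror the examples above, and child stands for any child record):
var columnNames = new Dictionary<Tuple<int, int>, string>
{
    { Tuple.Create(1, 1), "Date" },
    { Tuple.Create(1, 3), "Telephone Number" },
    { Tuple.Create(2, 11), "Product Code" },
    { Tuple.Create(2, 12), "Service Code" }
};
// resolve what a child's Data field actually represents
string columnName = columnNames[Tuple.Create(child.MemberCode, child.ElementCode)];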
At the moment in my DB I have these stored as two tables, with the child record also containing the UniqueID from the parent. But I'm not sure that this is the best way, as I will explain.
My tables are created as such
CREATE TABLE `PARENT` (
`ID` INT(11) NOT NULL AUTO_INCREMENT,
`LOADTIME` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`REFERENCE` VARCHAR(100) NOT NULL,
PRIMARY KEY (`ID`)
)
CREATE TABLE `CHILD` (
`ID` INT(11) NOT NULL,
`MEMBER_CODE` INT(11) NOT NULL,
`ELEMENT_CODE` INT(11) NOT NULL,
`DATA` VARCHAR(4000) NULL DEFAULT NULL,
PRIMARY KEY (`ID`, `MEMBER_CODE`, `ELEMENT_CODE`),
CONSTRAINT `fk_ID` FOREIGN KEY (`ID`) REFERENCES `PARENT` (`ID`)
)
Now what I want to do is to flatten out this data so that I can display a single parent record with all its child records as a single row. Ideally I want to display it in an ObjectListView (http://objectlistview.sourceforge.net/cs/index.html) but I can consider a datagrid if it makes life easier.
Because my data is dynamic, I'm struggling to flatten this out: if I select 10 parent records then each can have a different number of child elements, and each can have different MemberCodes and ElementCodes, which means that they are effectively different columns.
So my data could look like the following (but on a larger scale):
Because of the dynamic nature of the data I am struggling to do this, either in SQL or in objects in my code. Maybe there is even another way to store my data which would suit it better.
After many days working on this I have managed to resolve the issue myself. What I did was the following.
In my original child class the MemberCode and ElementCode make a unique key that basically states what the column name is. So I took this a step further and added a Column_Name, so that I had:
public class Child
{
public int MemberCode { get; set; } //Composite key
public int ElementCode { get; set; } //Composite key
public string Column_Name { get; set; } //Unique. Alternative Key
public Object Data { get; set; }
}
This was obviously reflected in my database table as well.
My SQL to extract the data then looked like this;
select p.UniqueID, p.LoadTime, p.reference, c.MemberCode, c.ElementCode , c.column_name, c.Data
from parent as p, child as c
where p.UniqueID = c.UniqueID
//aditional filter criteria
ORDER BY p.UniqueID, MemberCode, ElementCode
Ordering by the UniqueID first is critical to ensure the records are in the right order for later processing.
Then I use a dynamic and an ExpandoObject() to store the data.
So I iterate over the result and convert the SQL rows into this structure, as follows:
List<dynamic> allRecords = new List<dynamic>(); //A list of all my records
List<dynamic> singleRecord = null; //A list representing just a single record
bool first = true; //Needed for execution of the first iteration only
int lastId = 0; //id of the last unique record
foreach (DataRow row in args.GetDataSet.Tables[0].Rows)
{
int newID = Convert.ToInt32(row["UniqueID"]); //get the current record unique id
if (newID != lastId) //If new record then get header/parent information
{
if (!first)
allRecords.Add(singleRecord); //store the last record
else
first = false;
//new object
singleRecord = new List<dynamic>();
//get parent information and store it
dynamic head = new ExpandoObject();
head.Column_name = "UniqueID";
head.UDS_Data = row["UniqueID"].ToString();
singleRecord.Add(head);
head = new ExpandoObject();
head.Column_name = "LoadTime";
head.UDS_Data = row["LoadTime"].ToString();
singleRecord.Add(head);
head = new ExpandoObject();
head.Column_name = "reference";
head.UDS_Data = row["reference"].ToString();
singleRecord.Add(head);
}
//get child information and store it. One row at a time
dynamic record = new ExpandoObject();
record.Column_name = row["column_name"].ToString();
record.UDS_Data = row["data"].ToString();
singleRecord.Add(record);
lastId = newID; //store the last id
}
if (singleRecord != null)
allRecords.Add(singleRecord); //stores the last complete record (and guards against an empty result set)
Then I have my information stored dynamically in the flat manner that I required.
Now the next problem was the ObjectListView I wanted to use: it could not accept such dynamic types.
So I had the information stored within my code as I wanted, but I could still not display it as required.
The solution was to use a variant of the ObjectListView known as the DataListView. This is effectively the same control, but it can be data-bound.
Another alternative would be to use a DataGridView, but I wanted to stick to the ObjectListView for other reasons.
So now I had to convert my dynamic data into a data source. This I did as follows:
DataTable dt = new DataTable();
foreach (dynamic record in allRecords)
{
DataRow dr = dt.NewRow();
foreach (dynamic item in record)
{
var prop = (IDictionary<String, Object>)item;
if (!dt.Columns.Contains(prop["Column_name"].ToString()))
{
dt.Columns.Add(new DataColumn(prop["Column_name"].ToString()));
}
dr[prop["Column_name"].ToString()] = prop["UDS_Data"];
}
dt.Rows.Add(dr);
}
Then I simply assign my data source to the DataListView, generate the columns, and hey presto, I now have my dynamic data extracted, flattened and displayed how I require.
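For completeness, the final binding step looks roughly like this (a sketch; the property names are from memory, so check the DataListView documentation):
// dataListView1 is the DataListView control on the form
dataListView1.AutoGenerateColumns = true; // build one column per DataTable column
dataListView1.DataSource = dt;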
Related
Sometimes we need to change order details, adding, removing, and editing orders at the customer's request or depending on stock quantity.
So I want to fetch a list, update it (remove, edit, and add rows), and then save the changes to the database.
What's the most efficient way to do this in C# with Entity Framework?
public class OrderDetail
{
public int Id { get; set; }
public int OrderId { get; set; }
public int Qty { get; set; }
public string ItemName { get; set; }
}
/// Dummy db, OrderDetail Table
{
{1, 1000, 24,"A"},
{2, 1000, 12,"B"}
}
public void Update()
{
using(var db = new xxEntities())
{
// Get All orders, OrderId==1000, total 2rows
List<OrderDetail> list = db.OrderDetails.Where(x => x.OrderId == 1000).ToList();
// remove some row or rows
var temp1 = list.First(x => x.Id == 1);
list.Remove(temp1);
// edit some row or rows
var temp2 = list.First(x=> x.Id==2);
temp2.Qty=100;
// add some row or rows
list.Add(new OrderDetail{ Id=3, OrderId=1000, Qty=2, ItemName="C"});
list.Add(new OrderDetail{ Id=4, OrderId=1000, Qty=2, ItemName="D"});
// Apply all changes
db.SaveChanges();
}
}
Additional Question
public void UpdateOrder(int orderId, List<OrderDetail> newOrders)
{
var result = db.OrderDetails.Where(x=>x.OrderId==orderId).ToList();
result = newOrders;
// it does not work
//db.OrderDetails.Update(result);
db.OrderDetails.RemoveRange(result);
db.OrderDetails.AddRange(newOrders);
db.SaveChanges();
}
Is this the right approach to updating multiple rows?
As mentioned in another answer, EF will create individual statements for each of the changes that are detected (i.e. updates, inserts, deletes) and submit them inside a single transaction. That gets the job done but is potentially very "chatty". The benefit is that you don't need to worry about the details of how it gets done: it's pretty easy to just modify the data objects and call SaveChanges.
If you can consider not using EF for updates such as this, one way we do this kind of update is by creating a System.Data.DataTable and using it as a table-valued parameter to a stored procedure (if your datastore supports it).
Meta-code:
var dt = new DataTable();
var newRow = dt.NewRow();
newRow["column1"] = newdata;
dt.Rows.Add(newRow);
Then just use dt as your input parameter and let the stored proc determine the insert/update/delete operations.
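For example, with SQL Server the call site might look like this (a sketch; dbo.OrderDetailType and dbo.UpsertOrderDetails are hypothetical names for the user-defined table type and stored procedure, and System.Data / System.Data.SqlClient are assumed):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.UpsertOrderDetails", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var tvp = cmd.Parameters.AddWithValue("@OrderDetails", dt);
    tvp.SqlDbType = SqlDbType.Structured; // pass the DataTable as a table-valued parameter
    tvp.TypeName = "dbo.OrderDetailType"; // the user-defined table type on the server
    conn.Open();
    cmd.ExecuteNonQuery();                // the proc decides the insert/update/delete operations
}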
If you want to Add / Remove / Update rows from your tables in Entity Framework, you have to Add / Remove / Update the items in your DbSet, not in fetched data.
using (var dbContext = new OrderContext())
{
// Add one Order
Order orderToAdd = new Order
{
// fill required properties; don't fill primary key
};
var addedOrder = dbContext.Orders.Add(orderToAdd);
// note: addedOrder has no Id yet.
// Add several Orders
IEnumerable<Order> orders = ...
dbContext.Orders.AddRange(orders);
dbContext.SaveChanges();
// now they've got their id:
Debug.Assert(addedOrder.Id != 0);
Debug.Assert(orders.All(order => order.Id != 0));
}
To Remove, you'll first have to fetch the complete Order:
int orderIdToDelete = ...
using (var dbContext = new OrderContext())
{
Order orderToDelete = dbContext.Orders.Find(orderIdToDelete);
dbContext.Orders.Remove(orderToDelete);
var ordersToDelete = dbContext.Orders
.Where(order => order.Date.Year < 2000)
.ToList();
dbContext.Orders.RemoveRange(ordersToDelete);
// the orders are not deleted yet.
dbContext.SaveChanges();
}
To Update, you first have to get the value:
int orderIdToUpdate = ...
Order orderToUpdate = dbContext.Orders.Find(orderIdToUpdate);
orderToUpdate.Date = DateTime.Today;
var today = DateTime.Today;
var dateLimit = today.AddDays(-28);
var nonPaidOrders = dbContext.Orders
.Where(order => !order.Paid && order.Date < dateLimit)
.ToList();
foreach (var order in nonPaidOrders)
{
this.SendReminder(order);
order.ReminderDate = today;
}
dbContext.SaveChanges();
There is no "most efficient" way outside of making all changes then calling SaveChanges. upon which Ef will issue a lot of SQL Statements (one per operation).
There is most efficient way because there is no way to change the way Ef works and there is exactly one way Ef does its updates. They do NOT happen at the same time. Period. They happen in one transaction, one after the other, when you call SaveChanges.
I'm bulk inserting rows into a table (which has an identity column that auto-increments every time a new row is inserted) based on the following post:
https://stackoverflow.com/a/5942176/3861992
After all rows are inserted, how do I get the list of ids of the rows that are recently inserted?
Thanks
After Entity Framework (EF) inserts an entity and SaveChanges() is called, it sets the value of Id.
Suppose the entity you want to insert into the database is as follows:
public class EntityToInsert
{
public int Id { get; set; }
public string Name { get; set; }
public int Age { get; set; }
}
And you want to insert a list of entities:
var list = new List<EntityToInsert>()
{
new EntityToInsert() {Name = "A", Age = 15},
new EntityToInsert() {Name = "B", Age = 25},
new EntityToInsert() {Name = "C", Age = 35}
};
foreach (var item in list)
{
context.Set<EntityToInsert>().Add(item);
}
context.SaveChanges();
// get the list of ids of the rows that are recently inserted
var listOfIds=list.Select(x => x.Id).ToList();
I hope this helps.
When all the rows are really inserted into the database (after calling SaveChanges() in Entity Framework), the real IDs of those rows are populated.
So after SaveChanges() you will have the IDs in the inserted objects without doing any query.
Try this:
dbcontext.Entry( [object] ).GetDatabaseValues();
This is for a single row. If my internet connection at the moment wasn't so slow I'd look up the documentation to see if it's easy to get multiple rows. At the very least you can iterate through your list of database objects and get each entry's values. That, however, may not be the fastest solution.
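For the multiple-row case, one option (a sketch from memory; verify against the EF documentation) is to walk the context's change tracker and re-read each tracked entity:
foreach (var entry in dbcontext.ChangeTracker.Entries())
{
    // issues one query per tracked entity and returns a snapshot
    // of the row's current database values
    var databaseValues = entry.GetDatabaseValues();
}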
I have a table where I store data mappings. My model for that looks like this:
public class MapperTable
{
public string EE_First_Name {get; set;}
public string EE_Last_Name {get; set;}
public string EE_MI {get; set;}
}
The purpose of this table is to store mappings from a CSV so that I can then create an object with those fields.
So if the first-name field in a CSV is FirstName*, it is matched to my table, and I then create a new object with the value from FirstName* and set the field name to EE_First_Name.
I'm saving the mappings by Id, and when the user selects an Id I use that particular mapping to map the data from the CSV.
public MapperTable ConvertCsvUsingMap(DataTable csv)
{
    var namesFromColumnCsvMap = DataAccess.ExportCsvMaps.FindByExp(x => x.ConfigId == idINt).FirstOrDefault();
    foreach (DataRow row in csv.Rows)
    {
        var csvMapped = new MapperTable
        {
            EE_First_Name = row[namesFromColumnCsvMap.EE_First_Name.TrimEnd()].ToString(),
            EE_Last_Name = row[namesFromColumnCsvMap.EE_Last_Name.TrimEnd()].ToString(),
            EE_MI = row[namesFromColumnCsvMap.EE_MI.TrimEnd()].ToString(),
        };
        //...
    }
}
This works only if all the columns are mapped correctly; if they don't match exactly, it blows up. What would be a better way to match the column headers from the CSV to the column-header definitions stored in my table, so that I can create a new object?
I have a CSV that looks like this:
It needs to look like this:
The column mapping needs to be stored in the database so that the process can be repeated. The incoming CSV may be different from the one shown, but it should still be possible to map it to the final one using the stored values in the database.
Check first that the column exists in the table/CSV to be mapped before trying to access it. You can then access the column if it exists, or return some default value such as the empty string "" or something else.
public MapperTable[] ConvertCsvUsingMap(DataTable csv) {
var namesFromColumnCsvMap = DataAccess.ExportCsvMaps.FindByExp(x => x.ConfigId == idINt).FirstOrDefault();
foreach (DataRow row in csv.Rows) {
var csvMapped = new MapperTable {
EE_First_Name = namesFromColumnCsvMap.EE_First_Name != null && csv.Columns.Contains(namesFromColumnCsvMap.EE_First_Name.TrimEnd()) ? row[namesFromColumnCsvMap.EE_First_Name.TrimEnd()].ToString() : "",
EE_Last_Name = namesFromColumnCsvMap.EE_Last_Name != null && csv.Columns.Contains(namesFromColumnCsvMap.EE_Last_Name.TrimEnd()) ? row[namesFromColumnCsvMap.EE_Last_Name.TrimEnd()].ToString() : "",
EE_MI = namesFromColumnCsvMap.EE_MI != null && csv.Columns.Contains(namesFromColumnCsvMap.EE_MI.TrimEnd()) ? row[namesFromColumnCsvMap.EE_MI.TrimEnd()].ToString() : "",
};
//...
}
}
The code above first checks that the mapped column name exists in the table before trying to get the value from the row. If it does not find a match, it sets the mapped object property to an empty string.
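You could also pull the repeated pattern into a small helper, something like this (a sketch; the method name is invented):
static string GetMappedValue(DataRow row, DataTable csv, string mappedColumnName)
{
    if (string.IsNullOrEmpty(mappedColumnName))
        return "";
    var column = mappedColumnName.TrimEnd();
    // only index into the row when the mapped column really exists in the csv
    return csv.Columns.Contains(column) ? row[column].ToString() : "";
}
Each property assignment then becomes a one-liner, e.g. EE_First_Name = GetMappedValue(row, csv, namesFromColumnCsvMap.EE_First_Name).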
We are using an extractor application that exports data from the database to CSV files. Based on some condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. So to satisfy the UNION ALL condition we use NULLs to match the number of columns.
Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in a table's projection (i.e. a new column added, an existing column modified, a column dropped) we have to manually change the code in the application.
Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require code changes?
My concern is the condition that decides which table to query. The condition variable works like this:
if the condition is A, then load from TableX
if the condition is B, then load from TableA and TableY
We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more condition: some columns need to be excluded, and these columns are different for each table.
I am trying to solve the problem only for dynamically generating the list of columns, but my manager told me to produce a solution at the conceptual level rather than just a fix. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be general.
So what is the best way of storing the condition, table name, and excluded columns? One way is storing them in the database. Are there any other ways? If so, which is best? I have to give at least a couple of ideas before finalizing.
Thanks,
A simple query like this tells you each column name of a table in Oracle:
Select COLUMN_NAME from user_tab_columns where table_name='EMP'
Use it in your code :)
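If you need those names inside your extractor code, here is a rough sketch using the managed ODP.NET client (Oracle.ManagedDataAccess.Client is my assumption; adjust to whatever data access library you actually use):
var columns = new List<string>();
using (var conn = new OracleConnection(connectionString))
using (var cmd = new OracleCommand(
    "select COLUMN_NAME from user_tab_columns where table_name = :tab order by column_id", conn))
{
    cmd.Parameters.Add(new OracleParameter("tab", "EMP"));
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            columns.Add(reader.GetString(0));
}
// e.g. build the projection for the extract query
var selectList = string.Join(", ", columns);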
Ok, MNC, try this for size (paste it into a new console app):
using System;
using System.Collections.Generic;
using System.Linq;
using Test.Api;
using Test.Api.Classes;
using Test.Api.Interfaces;
using Test.Api.Models;
namespace Test.Api.Interfaces
{
public interface ITable
{
int Id { get; set; }
string Name { get; set; }
}
}
namespace Test.Api.Models
{
public class MemberTable : ITable
{
public int Id { get; set; }
public string Name { get; set; }
}
public class TableWithRelations
{
public MemberTable Member { get; set; }
// list to contain partnered tables
public IList<ITable> Partner { get; set; }
public TableWithRelations()
{
Member = new MemberTable();
Partner = new List<ITable>();
}
}
}
namespace Test.Api.Classes
{
public class MyClass
{
private readonly IList<TableWithRelations> _tables;
public MyClass()
{
// tableA stuff
var tableA = new TableWithRelations { Member = { Id = 1, Name = "A" } };
var relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 2,
Name = "B"
}
};
tableA.Partner = relatedclasses;
// tableB stuff
var tableB = new TableWithRelations { Member = { Id = 2, Name = "B" } };
relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 3,
Name = "C"
}
};
tableB.Partner = relatedclasses;
// tableC stuff
var tableC = new TableWithRelations { Member = { Id = 3, Name = "C" } };
relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 2,
Name = "D"
}
};
tableC.Partner = relatedclasses;
// tableD stuff
var tableD = new TableWithRelations { Member = { Id = 3, Name = "D" } };
relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 1,
Name = "A"
},
new MemberTable
{
Id = 2,
Name = "B"
},
};
tableD.Partner = relatedclasses;
// add tables to the base tables collection
_tables = new List<TableWithRelations> { tableA, tableB, tableC, tableD };
}
public IList<ITable> Compare(int tableId, string tableName)
{
return _tables.Where(table => table.Member.Id == tableId
&& table.Member.Name == tableName)
.SelectMany(table => table.Partner).ToList();
}
}
}
namespace Test.Api
{
public class TestClass
{
private readonly MyClass _myclass;
private readonly IList<ITable> _relatedMembers;
public IList<ITable> RelatedMembers
{
get { return _relatedMembers; }
}
public TestClass(int id, string name)
{
this._myclass = new MyClass();
// the Compare method would take your two parameters and return
// a matching set of related tables
_relatedMembers = _myclass.Compare(id, name);
// now do something with the resulting list
}
}
}
class Program
{
static void Main(string[] args)
{
// change these values to suit, along with rules in MyClass
var id = 3;
var name = "D";
var testClass = new TestClass(id, name);
Console.Write(string.Format("For Table{0} on Id{1}\r\n", name, id));
Console.Write("----------------------\r\n");
foreach (var relatedTable in testClass.RelatedMembers)
{
Console.Write(string.Format("Related Table{0} on Id{1}\r\n",
relatedTable.Name, relatedTable.Id));
}
Console.Read();
}
}
I'll get back in a bit to see if it fits or not.
So what you are really after is designing a rule engine for building dynamic queries. This is no small undertaking. The requirements you have provided are:
Store rules (what you call a "condition variable")
Each rule selects from one or more tables
Additionally some rules specify columns to be excluded from a table
Rules which select from multiple tables are satisfied with the UNION ALL operator; tables whose projections do not match must be brought into alignment with null columns.
Some possible requirements you don't mention:
Format masking e.g. including or excluding the time element of DATE columns
Changing the order of columns in the query's projection
The previous requirement is particularly significant when it comes to the multi-table rules, because the projections of the tables need to match by datatype as well as number of columns.
Following on from that, the padding NULL columns may not necessarily be tacked on to the end of the projection e.g. a three column table may be mapped to a four column table as col1, col2, null, col3.
Some multi-table queries may need to be satisfied by joins rather than set operations.
Rules for adding WHERE clauses.
A mechanism for defining default sets of excluded columns (i.e. ones which are applied every time a table is queried).
I would store these rules in database tables, because they are data, and storing data is what databases are for. (Unless you already have a rules engine to hand.)
Taking the first set of requirements, you need three tables:
RULES
-----
RuleID
Description
primary key (RuleID)
RULE_TABLES
-----------
RuleID
Table_Name
Table_Query_Order
All_Columns_YN
No_of_padding_cols
primary key (RuleID, Table_Name)
RULE_EXCLUDED_COLUMNS
---------------------
RuleID
Table_Name
Column_Name
primary key (RuleID, Table_Name, Column_Name)
I've used compound primary keys just because it's easier to work with them in this context e.g. running impact analyses; I wouldn't recommend it for regular applications.
I think all of these are self-explanatory except the additional columns on RULE_TABLES.
Table_Query_Order specifies the order in which the tables appear in UNION ALL queries; this matters only if you want to use the column_names of the leading table as headings in the CSV file.
All_Columns_YN indicates whether the query can be written as SELECT * or whether you need to query the column names from the data dictionary and the RULE_EXCLUDED_COLUMNS table.
No_of_padding_cols is a simplistic implementation for matching projections in those UNION ALL columns, by specifying how many NULLs to add to the end of the column list.
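To make that concrete, here is a hedged C# sketch (names invented) of how one branch of a UNION ALL query could be assembled once the column list has been fetched and filtered:
// columns: the table's column names after removing any RULE_EXCLUDED_COLUMNS entries
// padding: the No_of_padding_cols value for this RULE_TABLES row
static string BuildSelect(string tableName, IList<string> columns, int padding)
{
    var projection = new List<string>(columns);
    for (var i = 0; i < padding; i++)
        projection.Add("NULL"); // pad so the UNION ALL branches line up
    return "SELECT " + string.Join(", ", projection) + " FROM " + tableName;
}
// the per-table fragments are then glued together with UNION ALL, in Table_Query_Order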
I'm not going to tackle the requirements you didn't specify, because I don't know whether you care about them. The basic point is that what your boss is asking for is an application in its own right. Remember that as well as an application for generating queries, you're going to need an interface for maintaining the rules.
MNC,
How about creating, up front, a dictionary of all the known tables involved in the application process (irrespective of the combinations; just a dictionary of the tables), keyed on table name? The members of this dictionary would be an IList<string> of the column names. This would allow you to compare two tables both on the number of columns present (dicTable[myVarTableName].Count) and by iterating round dicTable[myVarTableName].Value to pull out the column names.
At the end of the piece, you could use a little LINQ to determine the table with the greatest number of columns and create the structure with NULLs accordingly, as in the sketch below.
Hope this gives food for thought.
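A rough sketch of that shape (table and column names invented):
// one entry per known table, keyed on table name
var dicTable = new Dictionary<string, IList<string>>
{
    { "TableX", new List<string> { "Col1", "Col2", "Col3" } },
    { "TableY", new List<string> { "Col1", "Col2" } }
};
// compare two tables on column count
var difference = dicTable["TableX"].Count - dicTable["TableY"].Count;
// find the widest table so the narrower ones can be padded with NULLs (needs System.Linq)
var widest = dicTable.OrderByDescending(t => t.Value.Count).First().Key;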
I need to select a number of 'master' rows from a table, also returning for each result a number of detail rows from another table. What is a good way of achieving this without multiple queries (one for the master rows plus one per master row to get the detail rows)?
For example, with a database structure like below:
MasterTable:
- MasterId BIGINT
- Name NVARCHAR(100)
DetailTable:
- DetailId BIGINT
- MasterId BIGINT
- Amount MONEY
How would I most efficiently populate the data object below?
IList<Master> data;
public class Master
{
private readonly List<Detail> _details = new List<Detail>();
public long MasterId
{
get; set;
}
public string Name
{
get; set;
}
public IList<Detail> Details
{
get
{
return _details;
}
}
}
public class Detail
{
public long DetailId
{
get; set;
}
public decimal Amount
{
get; set;
}
}
Normally, I'd go for the two-grids approach; however, you might also want to look at FOR XML. It is fairly easy (in SQL Server 2005 and above) to shape the parent/child data as XML and load it from there:
SELECT parent.*,
(SELECT * FROM child
WHERE child.parentid = parent.id FOR XML PATH('child'), TYPE)
FROM parent
FOR XML PATH('parent')
Also, LINQ-to-SQL supports this type of model, but you need to tell it which data you want ahead of time, via DataLoadOptions.LoadWith:
// sample from MSDN
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
DataLoadOptions dlo = new DataLoadOptions();
dlo.LoadWith<Customer>(c => c.Orders);
db.LoadOptions = dlo;
var londonCustomers =
from cust in db.Customers
where cust.City == "London"
select cust;
foreach (var custObj in londonCustomers)
{
Console.WriteLine(custObj.CustomerID);
}
If you don't use LoadWith, you will get n+1 queries - one master, and one child list per master row.
It can be done with a single query like this:
select MasterTable.MasterId,
MasterTable.Name,
DetailTable.DetailId,
DetailTable.Amount
from MasterTable
inner join
DetailTable
on MasterTable.MasterId = DetailTable.MasterId
order by MasterTable.MasterId
Then in pseudocode:
foreach(row in result)
{
if (row.MasterId != currentMaster.MasterId)
{
list.Add(currentMaster);
currentMaster = new Master { MasterId = row.MasterId, Name = row.Name };
}
currentMaster.Details.Add(new Detail { DetailId = row.DetailId, Amount = row.Amount});
}
list.Add(currentMaster);
There are a few edges to knock off that (the very first iteration, and an empty result set) but it should give you the general idea; see the sketch below.
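Knocking those edges off, a C# version might look like this (a sketch; result stands for your materialised rows, each carrying MasterId, Name, DetailId and Amount):
var list = new List<Master>();
Master currentMaster = null;
foreach (var row in result) // rows are ordered by MasterId
{
    if (currentMaster == null || row.MasterId != currentMaster.MasterId)
    {
        if (currentMaster != null)
            list.Add(currentMaster); // store the completed master
        currentMaster = new Master { MasterId = row.MasterId, Name = row.Name };
    }
    currentMaster.Details.Add(new Detail { DetailId = row.DetailId, Amount = row.Amount });
}
if (currentMaster != null)
    list.Add(currentMaster); // keep the last master, and cope with an empty result set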
select <columns> from master
select <columns> from master M join Child C on M.Id = C.MasterID
You can do it with two queries and one pass over each result set:
query for all masters ordered by MasterId, then query for all details also ordered by MasterId. Then, with two nested loops, iterate the master data, creating a new Master object for each row of the main loop; in the nested loop, iterate the details while they have the same MasterId as the current Master object and populate its _details collection. A sketch follows.
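As a sketch (assuming masters and details are the two ordered lists, and noting, as the next answer also does, that Detail would need a MasterId property for this):
var d = 0;
foreach (var m in masters)
{
    // both lists are ordered by MasterId, so a single forward pass suffices;
    // consume the details that belong to the current master
    while (d < details.Count && details[d].MasterId == m.MasterId)
        m.Details.Add(details[d++]);
}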
Depending on the size of your dataset, you can pull all of the data into memory with two queries (one for all masters and one for all details) and then programmatically create the sublists for each of your objects, giving something like:
List<Master> allMasters = GetAllMasters();
List<Detail> allDetail = GetAllDetail();
foreach (Master m in allMasters)
    foreach (Detail d in allDetail.FindAll(x => x.MasterId == m.MasterId))
        m.Details.Add(d);
You're essentially trading memory footprint for speed with this approach. You can easily adapt it so that GetAllMasters and GetAllDetail only return the master and detail items you're interested in. Also note that for this to work you need to add the MasterId to the Detail class.
This is an alternative you might consider. It does cost $150 per developer, but time is money too...
We use an object-persistence layer called Entity Spaces that generates the code for you to do exactly what you want, and you can regenerate it whenever your schema changes. Populating the objects with data is transparent. Using the objects you described above would look like this (excuse my VB, but it works in C# too):
Dim master As New BusinessObjects.Master
master.LoadByPrimaryKey(43)
Console.WriteLine(master.Name)
For Each detail As BusinessObjects.Detail In master.DetailCollectionByMasterId
    Console.WriteLine(detail.Amount)
    detail.Amount *= 1.15D
Next
With master.DetailCollectionByMasterId.AddNew
.Amount = 13
End With
master.Save()