Get IDs of recently inserted rows in Entity Framework - C#

I'm bulk inserting rows into a table (which has an identity column that auto-increments every time a new row is inserted) based on the following post:
https://stackoverflow.com/a/5942176/3861992
After all rows are inserted, how do I get the list of ids of the rows that are recently inserted?
Thanks

Entity Framework (EF) sets the value of Id after you add the entity and call SaveChanges().
Suppose that the entity you want to insert into the database is as follows:
public class EntityToInsert
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}
And you want to insert a list of entities:
var list = new List<EntityToInsert>()
{
    new EntityToInsert() { Name = "A", Age = 15 },
    new EntityToInsert() { Name = "B", Age = 25 },
    new EntityToInsert() { Name = "C", Age = 35 }
};
foreach (var item in list)
{
    context.Set<EntityToInsert>().Add(item);
}
context.SaveChanges();
// get the list of ids of the rows that are recently inserted
var listOfIds = list.Select(x => x.Id).ToList();
I hope this helps.

When all rows are actually inserted into the database (after calling SaveChanges() in Entity Framework), the real IDs of those rows are populated.
So after SaveChanges() the inserted objects will already have their IDs, without any extra query.

Try this:
dbcontext.Entry( [object] ).GetDatabaseValues();
This is for a single row. If my internet connection weren't so slow at the moment, I'd look up the documentation to see whether it's easy to get multiple rows. At the very least you can iterate through your list of database objects and get each entry's values. That, however, may not be the fastest solution.
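For example, a minimal sketch of that per-entity loop, assuming an EF6-style DbContext and that the entities have already been saved (the dbContext and insertedEntities names are illustrative):
// Iterate the tracked entities and read their current database values one by one.
// Note: this issues one query per entity, so it is slower than simply reading entity.Id
// after SaveChanges(), which EF populates automatically for identity columns.
foreach (var entity in insertedEntities)
{
    var dbValues = dbContext.Entry(entity).GetDatabaseValues();
    var id = (int)dbValues["Id"]; // "Id" must match the key property name
    Console.WriteLine(id);
}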

Related

C#, Entity Framework: how to update (delete, add, edit) multiple rows at the same time?

Sometimes we need to change order details by adding, removing, or editing orders at the customer's request or depending on stock quantity.
So I want to get a list, update it (remove, edit, and add rows), and then save the changes to the database.
What's the most efficient way to do this with C# and Entity Framework?
public class OrderDetail
{
    public int Id { get; set; }
    public int OrderId { get; set; }
    public int Qty { get; set; }
    public string ItemName { get; set; }
}
/// Dummy db, OrderDetail Table
{
    {1, 1000, 24, "A"},
    {2, 1000, 12, "B"}
}
public void Update()
{
    using (var db = new xxEntities())
    {
        // Get all order details for OrderId == 1000 (2 rows in total)
        List<OrderDetail> list = db.OrderDetails.Where(x => x.OrderId == 1000).ToList();
        // remove some row or rows
        var temp1 = list.First(x => x.Id == 1);
        list.Remove(temp1);
        // edit some row or rows
        var temp2 = list.First(x => x.Id == 2);
        temp2.Qty = 100;
        // add some row or rows
        list.Add(new OrderDetail { Id = 3, OrderId = 1000, Qty = 2, ItemName = "C" });
        list.Add(new OrderDetail { Id = 4, OrderId = 1000, Qty = 2, ItemName = "D" });
        // Apply all changes
        db.SaveChanges();
    }
}
Additional Question
public void UpdateOrder(int orderId, List<OrderDetail> newOrders)
{
    var result = db.OrderDetails.Where(x => x.OrderId == orderId).ToList();
    result = newOrders;
    // it does not work
    //db.OrderDetails.Update(result);
    db.OrderDetails.RemoveRange(result);
    db.OrderDetails.AddRange(newOrders);
    db.SaveChanges();
}
Is this the right approach to updating multiple rows?
As mentioned in another answer... EF will create individual statements for each of the changes that are detected (i.e., updates, inserts, deletes) and submit them inside a single transaction. Gets the job done but is potentially very "chatty". Benefit is that you don't need to worry about the details of how it's getting done. Pretty easy to just modify the data object and call SaveChanges.
If you can consider not using EF for updates such as this... one way we do this kind of update is by creating a System.Data.DataTable and using that as a table-valued parameter into a stored procedure (if your datastore supports it).
Meta-code:
var dt = new DataTable();
var newRow = dt.NewRow();
newRow["column1"] = newdata;
dt.Rows.Add(newRow);
Then just use dt as your input parameter and let the stored proc determine the insert/update/delete operations.
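A minimal sketch of passing that DataTable to a stored procedure as a table-valued parameter with plain ADO.NET (SQL Server syntax; the procedure name, table type name, and connection string are illustrative and must already exist in your database):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.MergeOrderDetails", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // Pass the DataTable as a structured (table-valued) parameter.
    var p = cmd.Parameters.AddWithValue("@OrderDetails", dt);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.OrderDetailTableType";
    conn.Open();
    cmd.ExecuteNonQuery();
}
The stored procedure can then MERGE (or insert/update/delete) against the real table in a single round trip.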
If you want to Add / Remove / Update rows from your tables in Entity Framework, you have to Add / Remove / Update the items in your DbSet, not in fetched data.
using (var dbContext = new OrderContext())
{
    // Add one Order
    Order orderToAdd = new Order
    {
        // fill required properties; don't fill primary key
    };
    var addedOrder = dbContext.Orders.Add(orderToAdd);
    // note: addedOrder has no Id yet.

    // Add several Orders
    IEnumerable<Order> orders = ...
    dbContext.Orders.AddRange(orders);

    dbContext.SaveChanges();
    // now they've got their ids:
    Debug.Assert(addedOrder.Id != 0);
    Debug.Assert(orders.All(order => order.Id != 0));
}
To Remove, you'll first have to fetch the complete Order
int orderIdToDelete = ...
using (var dbContext = new OrderContext())
{
    Order orderToDelete = dbContext.Orders.Find(orderIdToDelete);
    dbContext.Orders.Remove(orderToDelete);

    var ordersToDelete = dbContext.Orders
        .Where(order => order.Date.Year < 2000)
        .ToList();
    dbContext.Orders.RemoveRange(ordersToDelete);

    // the orders are not deleted yet.
    dbContext.SaveChanges();
}
To Update, you first have to get the value:
int orderIdToUpdate = ...
Order orderToUpdate = dbContext.Orders.Find(orderIdToUpdate);
orderToUpdate.Date = DateTime.Today;

var today = DateTime.Today;
var dateLimit = today.AddDays(-28);
var nonPaidOrders = dbContext.Orders
    .Where(order => !order.Paid && order.Date < dateLimit)
    .ToList();
foreach (var order in nonPaidOrders)
{
    this.SendReminder(order);
    order.ReminderDate = today;
}
dbContext.SaveChanges();
There is no "most efficient" way outside of making all changes then calling SaveChanges. upon which Ef will issue a lot of SQL Statements (one per operation).
There is most efficient way because there is no way to change the way Ef works and there is exactly one way Ef does its updates. They do NOT happen at the same time. Period. They happen in one transaction, one after the other, when you call SaveChanges.

Update collection from DbSet object via Linq

I know it is not complicated, but I'm struggling with it.
I have an IList<Material> collection:
public class Material
{
    public string Number { get; set; }
    public decimal? Value { get; set; }
}
materials = new List<Material>();
materials.Add(new Material { Number = "111" });
materials.Add(new Material { Number = "222" });
And I have a DbSet<Material> collection
with columns Number and ValueColumn.
I need to update the Value property of the IList<Material> collection based on the DbSet<Material> collection, but with the following conditions:
Only one query request to the database
The returned data from the database has to be limited by the Number identifier (do not load the whole database table into memory)
I tried the following (based on my previous question).
Working solution 1, but it downloads the whole table into memory (monitored in SQL Server Profiler).
var result = (
from db_m in db.Material
join m in model.Materials
on db_m.Number.ToString() equals m.Number
select new
{
db_m.Number,
db_m.Value
}
).ToList();
model.Materials.ToList().ForEach(m => m.Value= result.SingleOrDefault(db_m => db_m.Number.ToString() == m.Number).Value);
Working solution 2, but it executes a query for each item in the collection.
model.Materials.ToList().ForEach(m => m.Value= db.Material.FirstOrDefault(db_m => db_m.Number.ToString() == m.Number).Value);
Incomplete solution, where I tried to use the Contains method.
// I am trying to get new filtered collection from database, which i will iterate after.
var result = db.Material
.Where(x=>
// here is the reasonable error: cannot convert int into Material class, but i do not know how to solve this.
model.Materials.Contains(x.Number)
)
.Select(material => new Material { Number = material.Number.ToString(), Value = material.Value});
Any idea? For me it would be much easier to execute a stored procedure with comma-separated id values as a parameter and get the data directly, but I want to master LINQ too.
I'd do something like this without trying to get too cute:
var numbersToFilterBy = model.Materials.Select(m => m.Number).ToArray();
...
var result = from db_m in db.Material where numbersToFilterBy.Contains(db_m.Number) select new { ... }
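A slightly fuller sketch along those lines, assuming the database column is numeric while the model's Number is a string (as in the question), so the local values are converted before the query; names are illustrative:
// One round trip: fetch only the rows whose Number is in the local list.
var numbersToFilterBy = model.Materials
    .Select(m => int.Parse(m.Number))
    .ToArray();
var matching = db.Material
    .Where(db_m => numbersToFilterBy.Contains(db_m.Number))
    .Select(db_m => new { db_m.Number, db_m.Value })
    .ToList();
// Update the in-memory collection from the single result set.
foreach (var m in model.Materials)
{
    var match = matching.FirstOrDefault(x => x.Number.ToString() == m.Number);
    if (match != null)
        m.Value = match.Value;
}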

Flatten child/parent data with an unknown number of columns

I'm struggling to find the best way to store and represent the data I have in a SQL database (MySQL) and a C# Windows Forms app.
My data, when mapped to classes, looks like this:
public class Parent
{
    public string UniqueID { get; set; } //Key
    public DateTime LoadTime { get; set; }
    public string Reference { get; set; }
    private List<Child> Elements { get; set; }
}
public class Child
{
    public int MemberCode { get; set; } //Composite key
    public int ElementCode { get; set; } //Composite key
    public Object Data { get; set; }
}
My data is very dynamic. So a parent record can have any number of child records.
In the child record, MemberCode and ElementCode are actually foreign keys to other tables/classes which, when a look-up is performed, give me details of what the data actually is. For example:
MemberCode = 1 & ElementCode = 1 means data is a Date
MemberCode = 1 & ElementCode = 3 means data is a telephone number
MemberCode = 2 & ElementCode = 11 means data is a Product Code
MemberCode = 2 & ElementCode = 12 means data is a Service Code
etc
These effectively combine to indicate what the column name is, and these are always the same (so MemberCode = 1 & ElementCode = 1 will always be a Date no matter which child object it is associated with).
At the moment these are references/lookups, but I could also put the data in a variable in the class, as that might make it easier. Then it would be more like a key-value pair.
At the moment in my DB I have these stored as two tables, with the child record also containing the UniqueID from the parent. But I'm not sure that this is the best way, as I will explain.
My tables are created as such
CREATE TABLE `PARENT` (
`ID` INT(11) NOT NULL AUTO_INCREMENT,
`LOADTIME` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`REFERENCE` VARCHAR(100) NOT NULL,
PRIMARY KEY (`ID`)
)
CREATE TABLE `CHILD` (
`ID` INT(11) NOT NULL,
`MEMBER_CODE` INT(11) NOT NULL,
`ELEMENT_CODE` INT(11) NOT NULL,
`DATA` VARCHAR(4000) NULL DEFAULT NULL,
PRIMARY KEY (`ID`, `MEMBER_CODE`, `ELEMENT_CODE`),
CONSTRAINT `fk_ID` FOREIGN KEY (`ID`) REFERENCES `Parent` (`ID`)
)
Now what I want to do is to flatten out this data so that I can display a single parent record with all its child records as a single row. I ideally want to display it in an ObjectListView (http://objectlistview.sourceforge.net/cs/index.html) but can consider a datagrid if it makes life easier.
Because my data is dynamic, I'm struggling to flatten this out: if I select 10 parent records, each can have a different number of child elements, and each can have different MemberCodes and ElementCodes, which means that they are effectively different columns.
So my data could look like the following (but on a larger scale):
Because of the dynamic nature of the data, I'm struggling to do this, either in SQL or with objects in my code. Maybe there is even another way to store my data which would suit it better.
After many days working on this, I managed to resolve this issue myself. What I did was the following.
In my original child class, MemberCode and ElementCode make a unique key that basically states what the column name is. So I took this a step further and added a "Column_Name", so that I had:
public class Child
{
    public int MemberCode { get; set; } //Composite key
    public int ElementCode { get; set; } //Composite key
    public string Column_Name { get; set; } //Unique. Alternative Key
    public Object Data { get; set; }
}
This was obviously reflected in my database table as well.
My SQL to extract the data then looked like this:
select p.UniqueID, p.LoadTime, p.reference, c.MemberCode, c.ElementCode, c.column_name, c.Data
from parent as p, child as c
where p.UniqueID = c.UniqueID
-- additional filter criteria
ORDER BY p.UniqueID, MemberCode, ElementCode
Ordering by the UniqueID first is critical to ensure the records are in the right order for later processing.
Then I used a dynamic and an ExpandoObject() to store the data.
So I iterate over the result to convert the SQL result into this structure as follows:
List<dynamic> allRecords = new List<dynamic>(); //A list of all my records
List<dynamic> singleRecord = null;              //A list representing just a single record
bool first = true;                              //Needed for execution of the first iteration only
int lastId = 0;                                 //id of the last unique record
foreach (DataRow row in args.GetDataSet.Tables[0].Rows)
{
    int newID = Convert.ToInt32(row["UniqueID"]); //get the current record unique id
    if (newID != lastId) //If new record then get header/parent information
    {
        if (!first)
            allRecords.Add(singleRecord); //store the last record
        else
            first = false;
        //new object
        singleRecord = new List<dynamic>();
        //get parent information and store it
        dynamic head = new ExpandoObject();
        head.Column_name = "UniqueID";
        head.UDS_Data = row["UniqueID"].ToString();
        singleRecord.Add(head);
        head = new ExpandoObject();
        head.Column_name = "LoadTime";
        head.UDS_Data = row["LoadTime"].ToString();
        singleRecord.Add(head);
        head = new ExpandoObject();
        head.Column_name = "reference";
        head.UDS_Data = row["reference"].ToString();
        singleRecord.Add(head);
    }
    //get child information and store it. One row at a time
    dynamic record = new ExpandoObject();
    record.Column_name = row["column_name"].ToString();
    record.UDS_Data = row["data"].ToString();
    singleRecord.Add(record);
    lastId = newID; //store the last id
}
allRecords.Add(singleRecord); //stores the last complete record
Then I have my information stored dynamically in the flat manner that I required.
Now the next problem was the ObjectListView I wanted to use. This could not accept such dynamic types.
So I had the information stored within my code as I wanted, but I could still not display it as was required.
The solution was to use a variant of the ObjectListView known as the DataListView. This is effectively the same control, but it can be data bound.
Another alternative would be to use a DataGridView, but I wanted to stick with the ObjectListView for other reasons.
So now I had to convert my dynamic data into a data source. I did this as follows:
DataTable dt = new DataTable();
foreach (dynamic record in allRecords)
{
    DataRow dr = dt.NewRow();
    foreach (dynamic item in record)
    {
        var prop = (IDictionary<String, Object>)item;
        if (!dt.Columns.Contains(prop["Column_name"].ToString()))
        {
            dt.Columns.Add(new DataColumn(prop["Column_name"].ToString()));
        }
        dr[prop["Column_name"].ToString()] = prop["UDS_Data"];
    }
    dt.Rows.Add(dr);
}
Then I simply assign the data source to the DataListView, generate the columns, and hey presto, I now have my dynamic data extracted, flattened, and displayed how I require.
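That last binding step is roughly the following sketch; dataListView1 is an assumed DataListView instance on the form, and the exact column-generation call may differ between ObjectListView versions:
// Bind the flattened DataTable; the data-bound DataListView builds its columns from it.
dataListView1.DataSource = dt;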

SQLite exception deleting row using sqlite-net

I got tired of searching, so here goes my first SO question, hoping someone has had the same problem and can help me.
Goal
I am trying to store my application data in a SQLite database.
Application description
A Windows 8 C#/XAML app with a local SQLite database, using the SQLite for Windows Runtime extension and the sqlite-net library.
Table definition
public class Product {
    private int _id;
    [SQLite.PrimaryKey, SQLite.AutoIncrement]
    public int ID
    {
        get { return _id; }
        set { _id = value; }
    }

    private string _date;
    public string DATE
    {
        get { return _date; }
        set { _date = value; }
    }

    private string _desc;
    public string DESC
    {
        get { return _desc; }
        set { _desc = value; }
    }
}
Problem1
The description of public int Insert(object obj) says the following:
Inserts the given object and retrieves its auto incremented primary key if it has one.
However, every time I insert a row it returns 1. I can successfully insert rows with an auto-increment ID, but somehow it does not return the ID. Why?
Problem 2
I can insert new rows but not delete them
Working around problem 1 to get the last generated row ID, I try to delete rows, but with no success.
See this example test that always fails:
using (var db = new SQLiteConnection(Path.Combine(_path, _dbname)))
{
    var p1 = new Product() { DESC = "insert1", DATE = DateTime.Now.ToString() };
    db.Insert(p1);
    p1.ID = 1;
    var p2 = new Product() { DESC = "insert2", DATE = DateTime.Now.ToString() };
    // I am sure that this row is getting ID 2, so it will not have a wrong ID
    p2.ID = 2;
    db.Insert(p2);
    var p3 = new Product() { DESC = "insert3", DATE = DateTime.Now.ToString() };
    db.Insert(p3);
    p3.ID = 3;
    db.Delete<Product>(p2);
}
As you can see, I try to insert 3 rows and delete the second one. The rows are inserted, but I get the following SQLite.SQLiteException:
unable to close due to unfinalized statements or unfinished backups
Why? I don't open any other connections before or after that.
Thanks in advance
Solved
Problem 1
+1 and thanks to @Bridgey for pointing out that the function does not match its description, and for the relevant search.
The function does not return the ID as its description says; instead it sets the object's ID. So when I insert a new Product, Product.ID will contain the last inserted ID.
Problem 2
I changed db.Delete<Product>(p2); to db.Delete(p2); and now it works. sqlite-net correctly identifies the row as a Product. I still don't know why the "unable to close due to unfinalized statements or unfinished backups" exception was happening. If someone knows why, please tell me.
I think for problem 2, the issue is that you are passing the Product object as the parameter for the Delete method. The documentation says: Deletes the object with the specified primary key. I think the following should work:
db.Delete<Product>(p1.ID);
Regarding problem 1, the code of the Insert method of the sqlite-net package ends:
var count = insertCmd.ExecuteNonQuery (vals);
if (map.HasAutoIncPK) {
    var id = SQLite3.LastInsertRowid (Handle);
    map.SetAutoIncPK (obj, id);
}
return count;
As you can see, count is returned, even if id is set.
EDIT: Actually, according to the author this is deliberate.
"Insert returns the number of rows modified. The auto incremented columns are stored in the object. Please see the doc comments."
https://github.com/praeclarum/sqlite-net/issues/37

Best way to dynamically get column names from Oracle tables

We are using an extractor application that exports data from the database to CSV files. Based on a condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. So to satisfy the UNION ALL condition we are using nulls to match the number of columns.
Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in the table projection (i.e., a new column added, an existing column modified, or a column dropped) we have to manually change the code in the application.
Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require changes in the code?
My concern is the condition that decides which table to query. The variable condition is like:
if the condition is A, then load from TableX
if the condition is B, then load from TableA and TableY.
We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more condition: some columns need to be excluded, and these columns are different for each table.
I am trying to solve the problem only for dynamically generating the list of columns, but my manager told me to design a solution at the conceptual level rather than just fixing this case. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be generalized.
So what is the best way to store the condition, table name, and excluded columns? One way is to store them in the database. Are there any other ways? If yes, which is the best? I have to give at least a couple of ideas before finalizing.
Thanks,
A simple query like this helps you find each column name of a table in Oracle:
Select COLUMN_NAME from user_tab_columns where table_name='EMP'
Use it in your code :)
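For instance, a small sketch of reading those column names from C#, assuming the Oracle.ManagedDataAccess ADO.NET provider (the connection string is a placeholder; 'EMP' is the sample table from the query above):
// Read the column names of one table from the Oracle data dictionary.
var columns = new List<string>();
using (var conn = new OracleConnection(connectionString))
using (var cmd = new OracleCommand(
    "select COLUMN_NAME from user_tab_columns where table_name = :tab order by column_id", conn))
{
    // Oracle stores unquoted identifiers in upper case, so pass the name in upper case.
    cmd.Parameters.Add(new OracleParameter("tab", "EMP"));
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            columns.Add(reader.GetString(0));
    }
}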
Ok, MNC, try this for size (paste it into a new console app):
using System;
using System.Collections.Generic;
using System.Linq;
using Test.Api;
using Test.Api.Classes;
using Test.Api.Interfaces;
using Test.Api.Models;
namespace Test.Api.Interfaces
{
public interface ITable
{
int Id { get; set; }
string Name { get; set; }
}
}
namespace Test.Api.Models
{
public class MemberTable : ITable
{
public int Id { get; set; }
public string Name { get; set; }
}
public class TableWithRelations
{
public MemberTable Member { get; set; }
// list to contain partnered tables
public IList<ITable> Partner { get; set; }
public TableWithRelations()
{
Member = new MemberTable();
Partner = new List<ITable>();
}
}
}
namespace Test.Api.Classes
{
public class MyClass
{
private readonly IList<TableWithRelations> _tables;
public MyClass()
{
// tableA stuff
var tableA = new TableWithRelations { Member = { Id = 1, Name = "A" } };
var relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 2,
Name = "B"
}
};
tableA.Partner = relatedclasses;
// tableB stuff
var tableB = new TableWithRelations { Member = { Id = 2, Name = "B" } };
relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 3,
Name = "C"
}
};
tableB.Partner = relatedclasses;
// tableC stuff
var tableC = new TableWithRelations { Member = { Id = 3, Name = "C" } };
relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 2,
Name = "D"
}
};
tableC.Partner = relatedclasses;
// tableD stuff
var tableD = new TableWithRelations { Member = { Id = 3, Name = "D" } };
relatedclasses = new List<ITable>
{
new MemberTable
{
Id = 1,
Name = "A"
},
new MemberTable
{
Id = 2,
Name = "B"
},
};
tableD.Partner = relatedclasses;
// add tables to the base tables collection
_tables = new List<TableWithRelations> { tableA, tableB, tableC, tableD };
}
public IList<ITable> Compare(int tableId, string tableName)
{
return _tables.Where(table => table.Member.Id == tableId
&& table.Member.Name == tableName)
.SelectMany(table => table.Partner).ToList();
}
}
}
namespace Test.Api
{
public class TestClass
{
private readonly MyClass _myclass;
private readonly IList<ITable> _relatedMembers;
public IList<ITable> RelatedMembers
{
get { return _relatedMembers; }
}
public TestClass(int id, string name)
{
this._myclass = new MyClass();
// the Compare method would take your two parameters and return
// a matching set of related tables
_relatedMembers = _myclass.Compare(id, name);
// now do something with the resulting list
}
}
}
class Program
{
static void Main(string[] args)
{
// change these values to suit, along with rules in MyClass
var id = 3;
var name = "D";
var testClass = new TestClass(id, name);
Console.Write(string.Format("For Table{0} on Id{1}\r\n", name, id));
Console.Write("----------------------\r\n");
foreach (var relatedTable in testClass.RelatedMembers)
{
Console.Write(string.Format("Related Table{0} on Id{1}\r\n",
relatedTable.Name, relatedTable.Id));
}
Console.Read();
}
}
I'll get back in a bit to see if it fits or not.
So what you are really after is designing a rule engine for building dynamic queries. This is no small undertaking. The requirements you have provided are:
Store rules (what you call a "condition variable")
Each rule selects from one or more tables
Additionally some rules specify columns to be excluded from a table
Rules which select from multiple tables are satisfied with the UNION ALL operator; tables whose projections do not match must be brought into alignment with null columns.
Some possible requirements you don't mention:
Format masking e.g. including or excluding the time element of DATE columns
Changing the order of columns in the query's projection
The previous requirement is particularly significant when it comes to the multi-table rules, because the projections of the tables need to match by datatype as well as number of columns.
Following on from that, the padding NULL columns may not necessarily be tacked on to the end of the projection e.g. a three column table may be mapped to a four column table as col1, col2, null, col3.
Some multi-table queries may need to be satisfied by joins rather than set operations.
Rules for adding WHERE clauses.
A mechanism for defining default sets of excluded columns (i.e. ones which are applied every time a table is queried).
I would store these rules in database tables. Because they are data and storing data is what databases are for. (Unless you already have a rules engine to hand.)
Taking the first set of requirements you need three tables:
RULES
-----
RuleID
Description
primary key (RuleID)
RULE_TABLES
-----------
RuleID
Table_Name
Table_Query_Order
All_Columns_YN
No_of_padding_cols
primary key (RuleID, Table_Name)
RULE_EXCLUDED_COLUMNS
---------------------
RuleID
Table_Name
Column_Name
primary key (RuleID, Table_Name, Column_Name)
I've used compound primary keys just because it's easier to work with them in this context e.g. running impact analyses; I wouldn't recommend it for regular applications.
I think all of these are self-explanatory except the additional columns on RULE_TABLES.
Table_Query_Order specifies the order in which the tables appear in UNION ALL queries; this matters only if you want to use the column_names of the leading table as headings in the CSV file.
All_Columns_YN indicates whether the query can be written as SELECT * or whether you need to query the column names from the data dictionary and the RULE_EXCLUDED_COLUMNS table.
No_of_padding_cols is a simplistic implementation for matching projections in those UNION ALL columns, by specifying how many NULLs to add to the end of the column list.
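Purely as an illustration of how these rule tables could drive query generation, here is a sketch in C# (using the same assumed Oracle provider as above; RULE_EXCLUDED_COLUMNS is the table sketched here, and ruleId, tableName, and connectionString are placeholders):
// Fetch the non-excluded column names for one rule/table pair.
// NULL padding (No_of_padding_cols) would be appended by the calling code.
string sql =
    "select utc.column_name " +
    "from user_tab_columns utc " +
    "where utc.table_name = :tab " +
    "and not exists (select 1 from rule_excluded_columns rec " +
    "                where rec.ruleid = :rule " +
    "                and rec.table_name = utc.table_name " +
    "                and rec.column_name = utc.column_name) " +
    "order by utc.column_id";
var columnList = new List<string>();
using (var conn = new OracleConnection(connectionString))
using (var cmd = new OracleCommand(sql, conn))
{
    cmd.BindByName = true;
    cmd.Parameters.Add(new OracleParameter("tab", tableName));
    cmd.Parameters.Add(new OracleParameter("rule", ruleId));
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            columnList.Add(reader.GetString(0));
}
// The projection for this table is then string.Join(", ", columnList), plus any NULL padding.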
I'm not going to tackle those requirements you didn't specify because I don't know whether you care about them. The basic thing is, what your boss is asking for is an application in its own right. Remember that as well as an application for generating queries you're going to need an interface for maintaining the rules.
MNC,
How about creating a dictionary of all the known tables involved in the application process up front (irrespective of the combinations - just a dictionary of the tables), keyed on table name? The members of this dictionary would be an IList<string> of the column names. This would allow you to compare two tables both on the number of columns present (dicTable[myVarTableName].Count) and by iterating round dicTable[myVarTableName] to pull out the column names.
At the end of the piece, you could do a little LINQ function to determine the table with the greatest number of columns and create the structure with nulls accordingly.
Hope this gives you food for thought.
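A rough sketch of that idea (purely illustrative; the table and column names are made up):
// Dictionary of known tables, keyed on table name, each holding its column names.
var dicTable = new Dictionary<string, IList<string>>
{
    { "TABLEX", new List<string> { "COL1", "COL2", "COL3" } },
    { "TABLEY", new List<string> { "COL1", "COL2" } }
};

// The widest table determines how many columns every SELECT must produce.
int maxCols = dicTable.Values.Max(cols => cols.Count);

// Pad the narrower tables with NULLs so the UNION ALL projections line up.
var selects = dicTable.Select(kvp =>
    "select " + string.Join(", ", kvp.Value) +
    string.Concat(Enumerable.Repeat(", null", maxCols - kvp.Value.Count)) +
    " from " + kvp.Key);

string unionAllQuery = string.Join(" union all ", selects);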
