I have a table Rules in my database. I insert rules like this:
Rule[] rulesToInsert = // some array of rules to insert

using (var db = new MyEntities())
{
    foreach (var rule in rulesToInsert)
        db.Rules.Add(rule);

    db.SaveChanges();
}
When I later retrieve the rules I just added, I notice they come back in a different order. What is the best way to retrieve them in the order I added them? Should I call db.SaveChanges() every time I add a new rule? Or should I add a new column called SortOrder? Why are the items not stored in the order I added them?
Edit
The id is a guid (string) because one rule can have other rules; in other words, I am creating a tree structure (the Rules table has a foreign key to itself). It was crashing when I used an auto-incrementing integer primary key, so I just used a guid instead. I guess I will add a separate column called SortOrder.
Tables have no inherent sort order (new rows are not guaranteed to be added at the end or in any other particular place). The only safe way to retrieve rows in a particular order is to query with ORDER BY.
So yes, you will need to add a SortOrder column. (You can simply make it an identity column.)
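For example, a minimal sketch of the read side, assuming the Rule entity gains an int SortOrder property mapped to that identity column (names are illustrative):

using (var db = new MyEntities())
{
    // Identity values are assigned in insert order, so ordering by SortOrder
    // returns the rules in the order they were saved.
    var rulesInInsertOrder = db.Rules
        .OrderBy(r => r.SortOrder)
        .ToList();
}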
If you want your items to be inserted in the exact order you add them in the foreach statement, you have to make a big compromise: call db.SaveChanges() in each iteration.
foreach (var rule in rulesToInsert)
{
    db.Rules.Add(rule);
    db.SaveChanges();
}
I say that's a big compromise because for each rule you insert you make a round trip to the database, instead of the single round trip in your original code.
One possible workaround would be to add an extra column to the corresponding table that holds the ordering information. If you do so, you can add one more property to the rule object, refactor your code a bit, and get the expected result.
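A minimal sketch of that workaround, assuming a hypothetical int Order property added to the Rule entity and a matching column in the table:

using (var db = new MyEntities())
{
    int position = 0;
    foreach (var rule in rulesToInsert)
    {
        rule.Order = position++;   // record the intended position explicitly
        db.Rules.Add(rule);
    }
    db.SaveChanges();              // still a single round trip
}

// Later, read them back in the same order:
// var ordered = db.Rules.OrderBy(r => r.Order).ToList();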
Related
I have a table that contains a non-primary-key column RequestID. When I do a bulk insert, all the records must have the same RequestID. But if I do another bulk insert, the next inserted rows must have the RequestID incremented:
NewRequestID = PreviousRequestID + 1
The only solution I have found so far (and I don't like it, by the way) is to get the last record every time before inserting the new records.
Why don't I like this approach? Because the database is supposed to be relational, which means there is no specific order. Besides, I don't have primary keys or dates to order by.
What is the best way to implement this?
(I've added the c# tag because I am using EF, in case there is an easy solution with EF.)
You could take a number of different approaches:
- Are you guaranteed that your RequestIDs are always incremented? If so, you could query the table for the largest RequestID, which should represent the "last one inserted" (a sketch of this follows below).
- You could track state somewhere in your application, but this is likely dangerous in scenarios where the service fails or restarts (unless state is tracked externally).
- Assuming you have control over the schema, and if you don't want to touch the particular table you are speaking of, you could create another table to track the last RequestID used and retrieve it from there (which would protect you against service restarts/failures).
Those are a few that come to mind.
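A hedged sketch of the first approach with EF6; Requests, RequestID, rowsToInsert, and MyDbContext are placeholder names, not taken from your code:

using (var db = new MyDbContext())
using (var tx = db.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
{
    // Read the current maximum inside the transaction so a concurrent batch
    // cannot pick the same value.
    int lastRequestId = db.Requests.Select(r => (int?)r.RequestID).Max() ?? 0;
    int nextRequestId = lastRequestId + 1;

    foreach (var row in rowsToInsert)
    {
        row.RequestID = nextRequestId;   // every row in this batch shares the RequestID
        db.Requests.Add(row);
    }

    db.SaveChanges();
    tx.Commit();
}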
UPDATE:
Assuming RequestID isn't a particular type of identifier, you could use a timestamp, which will always increase when you do a new batch. However, I'm not sure whether you need it to always be incremented by exactly 1, which would rule this approach out.
I have been using LINQ to SQL for a little while and often come up against this type of problem.
For example, I have 2 db tables:
-Table: Invoice ("Id" int auto-increment, "InvoiceDate" datetime)
-Table: InvoiceItems ("Id" int auto-increment, "InvoiceId" int (FK), "SomeReference" varchar(50))
The "SomeReference" field holds a value that is a combination of the Id from the parent Invoice record and some random characters. eg. "145AHTL"
Before i can set the value of SomeReference I need to know the value of the Invoice Id, but this only gets populated when it is saved to the DB. I have both parent and child records in the same Linq to SQl DB Context but I only want to perform "SubmitChanges" to the parent Invoice record only, so that i can then populate the SomeReference in the child record. I dont want to have the child InvoiceItem record saved to the DB before SomeReference is set.
How can I achieve this using Linq to Sql?
I understand that linq to sql uses the "Unit of Work" idea for saving to db, but I dont understand how I can avoid unnecessarily saving records to the db when they are not ready to be saved just yet. If there is no way around this, then why do developers bother with linq to sql, as this seems like such a huge drawback?
edit: should note that this example is just something i came up with to help describe my problem.
You can not. Not this way. And this is the only way (LINQ to SQL does not support sequences). Brutally speaking, you have to fix your logic: the Id of an invoice is not a reference field and should never be that number. The reference is a logical field and should be handled by your logic, outside the Id.
Your example can be done, but you need to forget about SQL and the database and think in an ORM way.
Two issues need to be addressed in your example.
First: inserting the master and detail at the same time.
Pseudo code for how it works:
using (var dc = new datacontext())
{
    var master = new masterentity();
    master.somedata = "data";
    dc.tb_master.InsertOnSubmit(master);

    var detail = new detailentity();
    detail.tb_master = master;          // assign the entity, not the key
    dc.tb_detail.InsertOnSubmit(detail);

    dc.SubmitChanges();
}
So you assign the entities to each other, not the keys.
Second: the SomeReference
This first part, however, does not give you the somereference field; it only sets the foreign key properly.
Your somereference field contains redundant data (it does not need to be stored), so that needs to be solved.
The somereference is a string + the Id.
So you store only the string part in a column in the database, and you implement somereference as a custom property using a partial class:
public partial class tb_detail
{
    public string somereference
    {
        get
        {
            return _id.ToString() + _somestring;
        }
    }
}
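A short usage sketch under the same assumptions; SomeString stands in for whatever column backs _somestring, and master/dc come from the earlier snippet (names are illustrative):

var detail = new tb_detail();
detail.SomeString = "AHTL";          // only the string part is persisted
detail.tb_master = master;           // link to the parent entity as above
dc.tb_detail.InsertOnSubmit(detail);
dc.SubmitChanges();                  // _id is assigned here

// somereference is composed on read (id + string); nothing redundant is stored
string reference = detail.somereference;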
I have a table in a SQL database which has an ID column (auto-incrementing) that is set as the primary key. The table consists of this ID and an account name.
I then have a bit of code which reads this table and populates a listview with the data. The problem is, if I order by the account name, I get duplicates listed in the listview. If I order by the ID, I don't see any duplicates.
The original data in the SQL database contains no duplicate account names, so obviously that's what I'd like to see in the listview.
This is the LINQ I'm using to grab the data...
public static IEnumerable<Client2> GetClientList()
{
    return (IEnumerable<Client2>)from c in entity.Client2s
                                 orderby c.AccountName
                                 select c;
}
And this is the code which is being used to create the listview...
// Clear the listview
listViewClient.Items.Clear();

// Get imported client list from database
foreach (Client2 c in SQLHandler.GetClientList())
{
    ListViewItemClient lvi = new ListViewItemClient(c.AccountName, c);
    listViewClient.Items.Add(lvi);
}
As I say, if I change this to orderby c.ID then it returns data as expected. I've also tried adding an index to AccountName. I do use a custom listview item subclass, but all that does is store a reference to the Client object.
Any idea how I can resolve this?
Thanks,
Just to clarify for anyone else reading this: it was programmer error. My data did indeed contain duplicates, but because of the sort order they weren't listed together, so I didn't see them when manually checking the data. It was only when I started displaying the ID that I realised they weren't sequential.
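For anyone who hits the same thing, a quick way to surface such duplicates with the entity model from the question (assuming the same entity.Client2s set) is a group-by query:

var duplicateNames = entity.Client2s
    .GroupBy(c => c.AccountName)
    .Where(g => g.Count() > 1)    // names that occur more than once
    .Select(g => g.Key)
    .ToList();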
Given the following code (which is mostly irrelevant except for the last two lines), what would your method be for getting the value of the identity field for the new record that was just created? Would you make a second call to the database to retrieve it based on the primary key of the object (which could be problematic if there isn't one), or based on the last inserted record (which could be problematic in multithreaded apps), or is there maybe a cleverer way to get the new value back at the same time you make the insert?
It seems like there should be a way to get the identity back based on the insert operation that was just made, rather than having to query for it by other means.
public void Insert(O obj)
{
    var sqlCmd = new SqlCommand() { Connection = con.Conn };
    var sqlParams = new SqlParameters(sqlCmd.Parameters, obj);
    var props = obj.Properties.Where(o => !o.IsIdentity);
    InsertQuery qry = new InsertQuery(this.TableAlias);
    qry.FieldValuePairs = props.Select(o => new SqlValuePair(o.Alias, sqlParams.Add(o))).ToList();
    sqlCmd.CommandText = qry.ToString();
    sqlCmd.ExecuteNonQuery();
}
EDIT: While this question isn't a duplicate in the strictest manner, it's almost identical to this one which has some really good answers: Best way to get identity of inserted row?
It strongly depends on your database server. For example, on Microsoft SQL Server you can read the value of the @@IDENTITY variable, which contains the last identity value assigned.
To prevent race conditions, keep the insert query and the variable read inside the same transaction, or send them in the same batch as sketched below.
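A minimal sketch built on the question's Insert method, reusing its sqlCmd and qry variables; SCOPE_IDENTITY() would be a safer substitute for @@IDENTITY if triggers on the table insert into other identity tables:

// Append the identity read to the same command so the insert and the read
// happen in one round trip and one session.
sqlCmd.CommandText = qry.ToString() + "; SELECT @@IDENTITY;";
object raw = sqlCmd.ExecuteScalar();   // first result set is the SELECT
int newId = Convert.ToInt32(raw);      // @@IDENTITY comes back as numeric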
Another solution would be to create a stored procedure for each type of insert you have to do, have it accept the insert arguments, and make it return the identity value.
Otherwise, inside a transaction you can implement whatever ID-assignment logic you want and stay protected from concurrency problems.
As far as I know there is no ready-made way.
I solved it by using client-generated ids (guids), so my method generates the id and returns it to the caller.
Perhaps you could analyse some SQL Server system tables to see what changed last, but you would run into concurrency issues (what if someone else inserts a very similar record?).
So I would recommend a change of strategy: generate the ids on the client.
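A minimal sketch of that strategy; the Items table and its columns are only illustrative:

public Guid Insert(SqlConnection con, string name)
{
    var id = Guid.NewGuid();   // generated on the client, known before the INSERT
    using (var cmd = new SqlCommand(
        "INSERT INTO Items (Id, Name) VALUES (@id, @name)", con))
    {
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@name", name);
        cmd.ExecuteNonQuery();
    }
    return id;                 // nothing to read back from the database
}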
You can take a look at this link.
I may add that, to avoid the problem of multiple rows having been inserted concurrently, you can use transactions: put the insert and the select inside the same transaction.
Good luck.
The proper approach is to learn SQL.
You can run a SQL command followed by a SELECT in one batch, so the assigned identity comes back from the same call.
The Story
I'm going to write some code to manage the deleted items in my application, but I'm going to soft-delete them so I can bring them back when needed. I have a hierarchy to respect in my application's logic when it comes to hiding or deleting items.
I logically place my items in containers: country, city, district and brand.
Each item should belong to a country, a city, a district and a brand.
Now, if I delete a country it should delete the cities, districts, brands and items that belong to that country, and if I delete a city it should likewise delete everything under it (districts, brands, etc.).
A Note
When I delete a country and delete the associated brands, I should take care that a brand might have items in more than one country.
The Question
Do you suggest that I:
1. Flag the items (whether country, city, item, etc.) as deleted? This will require a lot of code to check, every time an item is loaded from the database, whether it is deleted or not, plus some extra fields to mark whether the city it belongs to is deleted, whether the country it belongs to is deleted, and so on.
2. Move the deleted stuff to dedicated tables (DeletedCountries, DeletedCities, etc.) and save the IDs of the items each row was associated with, so I can insert them back into their original tables later? This would of course save my application all the code needed to check deleted items and make sure the whole hierarchy is deleted.
Maybe you have a better approach/advice/idea about achieving such a thing!
For argument's sake, one advantage of solution #2 (moving deleted items to their own tables) is that if you have lots and lots of records, you would not have to worry about indexing records with respect to their "deleted" state.
With that said, if I were going to "move" data from table to table (via a delete followed by an insert), I would make sure to do it in one transaction.
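A hedged sketch of that idea with EF; Countries, DeletedCountries, countryId and the context are placeholder names, and the point is only that the copy and the delete commit together:

using (var db = new MyEntities())
{
    var country = db.Countries.Find(countryId);

    // Copy the row into its "deleted" table, then remove the original.
    db.DeletedCountries.Add(new DeletedCountry { Id = country.Id, Name = country.Name });
    db.Countries.Remove(country);

    // SaveChanges wraps both operations in a single transaction,
    // so they succeed or fail as one unit.
    db.SaveChanges();
}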
I'm using a technique right now where we store a 'DeleteDate' on every user-maintained table in our database. The DeleteDate field is a smalldatetime with a default value of 6/1/2079.
Coupled with an index on the DeleteDate field, we can use a standard view or user-defined function to return only the 'current' records (that is, those with a delete date in the future). All queries route through this index when looking for current data, and deletes become a trivial UPDATE.
There are some additional logic checks that need to be done for related tables, but that is part of the price of never having to worry about a user 'accidentally' deleting valuable data.
In the future, when these tables are excessively large and there are a lot of deleted records present, we can partition the table on DeleteDate, which will move all the 'deleted' records away from the 'live' records.
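If you query through an ORM such as EF or LINQ to SQL, the 'current records' filter and the soft delete described above might look like this hedged sketch (db, Orders and orderId are illustrative):

// Only rows whose DeleteDate is still in the future are "current".
var current = db.Orders.Where(o => o.DeleteDate > DateTime.Now).ToList();

// A delete becomes a trivial update of that column.
var order = db.Orders.First(o => o.Id == orderId);
order.DeleteDate = DateTime.Now;
db.SaveChanges();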
Flagging an item as deleted really complicates information retrieval, and you also need to handle cascading removal yourself.
I would choose the "mail box" approach, that is, move deleted records to a different table. I did a project that used soft delete, and I ended up putting all delete calls into stored procedures and handling the copy and removal there.
You should manage your hierarchy by tagging all sub-items as deleted. That way, if e.g. a product belongs to a brand, you only have to check whether the brand is deleted. You should also put this logic on the data-retrieval side, to avoid unnecessarily fetching deleted information.
SELECT
*
FROM
products p,
category c
WHERE
p.catId = c.Id
AND NOT c.Deleted
And above all, the Deleted flag on category should be indexed:
CREATE PRIMARY INDEX ON category (Id)
CREATE INDEX ON category (Deleted)
or
CREATE INDEX ON category (Id, Deleted)
I think flagging the item is the best approach, and I use the same approach myself for soft deletes.
Yes, it requires a lot to take into account and manage, but I haven't found any other way. I just add one extra column, Status, of type bit, to each and every table.
Thanks
How complex a delete technique are you asking for?
With just one date field and no audit log, you can have an instant deleted flag: if the date field is null, the row has not been deleted. You can then use that date field in the index (if the index allows nulls).
If you want something more complex, you could use extra tables. Will you allow an item to be deleted, undeleted, re-deleted, and keep a record of each of those actions? If so, keep a separate table for action logging and keep only the one record with a boolean field (actually a join on that table might be faster; it depends on the data).
If you often reconstitute the items, flagging is the preferable approach, but you end up having to alter your data access to avoid showing the flagged items, which can be rather painful if you have already written a lot of code accessing the data; so moving may be better if you have a lot of "legacy" code accessing it. If reconstituting is rare, and you are also interested in a history log, moving to another table works well.
One easy way to achieve either is to use a trigger that intercepts the deleted row and performs the operation. If you actually do need to delete items for real, however, the flag option becomes a royal PITA compared with moving them. The reason a trigger is easier in many cases is that you capture every delete, not just those initiated by code.