On every page of my website a token is passed in as a querystring parameter. The server-side code checks whether the token already exists in the database (the token is a uniqueidentifier field in the database). If the token exists then it uses the existing one; if not, it creates a new row with the new token.
The problem is that once in a while I see a duplicate record in the database (two rows with the same uniqueidentifier). I have noticed the insertion times of those records were about half a second apart. My only guess is that when the site is visited for the first time the aspx pages aren't fully compiled yet, so it takes a few seconds, the user goes to another page of the site by typing in a different url, and the two requests end up executing almost at the same time.
Is there a way to prevent this duplicate record problem from happening? (on the server-side or in the database??...)
This is the code in question; it's part of every page of the website.
// Look up an existing row for this token.
var record = (from x in db.Items
              where x.Token == token
              select x).FirstOrDefault();

if (record == null)
{
    // No row for this token yet, so insert one.
    var x = new Item();
    x.Id = Guid.NewGuid();
    x.Token = token;
    db.Items.InsertOnSubmit(x);
    db.SubmitChanges();
}
Yes, create a unique index on your token field.
create unique index tab_token on your_table(token);
This way the database will make sure you never store two records with the same token value. Keep in mind that your code might now fail on the insert because of the index constraint, so make sure you catch that exception in your code and handle it accordingly.
What is probably happening is that two requests are served at almost exactly the same time, and a race condition lets both of them insert the same token: the existence check and the insert in the code above are not atomic.
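A minimal sketch of catching that duplicate-key exception, assuming the LINQ to SQL code from the question and SQL Server (error numbers 2601/2627 are SQL Server's duplicate-key errors for unique indexes and constraints):

// Requires: using System.Data.SqlClient; using System.Linq;
try
{
    var x = new Item();
    x.Id = Guid.NewGuid();
    x.Token = token;
    db.Items.InsertOnSubmit(x);
    db.SubmitChanges();
}
catch (SqlException ex)
{
    if (ex.Number == 2601 || ex.Number == 2627)
    {
        // Another request inserted the same token first; re-query and reuse that row.
        // (In practice you may want a fresh DataContext here, since this one still
        // holds the failed pending insert.)
        var existing = db.Items.First(i => i.Token == token);
    }
    else
    {
        throw;
    }
}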
Related
I have a C# Web API hosted in IIS which has a POST method that takes a list of document ids to insert into a Lotus Notes database.
The POST method can be called multiple times and I want to prevent insertion of duplicate documents.
This is the code (in a static class) that is called from the POST:
lock (thisLock)
{
var id = "some unique id";
NotesDocument doc = vw.GetDocumentByKey(id, false);
if (doc == null)
{
NotesDocument docNew = db.CreateDocument();
//some more processing
docNew.Save(true, false, false);
}
}
Even with the lock in place, I am running into scenarios where duplicate documents are inserted. Is it because a request can be executed on a new process? What is the best way to prevent this from happening?
Your problem is that GetDocumentByKey depends on the view index being up to date. On a busy server there is no guarantee that this is true. You can TRY to call vw.Update, but unfortunately this does not trigger an update of the view index, so it might have no effect (it just updates the vw object to reflect what has changed in the backend; if the backend did not update, it does nothing).
You could use db.Search("IdField = \"" + id + "\"", null, 0) instead, as the search does not rely on an index being rebuilt. This will be slightly slower, but should be far more accurate.
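For illustration, a hedged sketch of what that swap could look like inside the existing lock (Domino COM interop assumed; "IdField" is a placeholder for whatever item holds your key):

lock (thisLock)
{
    var id = "some unique id";

    // Formula-based database search; does not depend on the view index being current.
    NotesDocumentCollection matches = db.Search("IdField = \"" + id + "\"", null, 0);
    if (matches.Count == 0)
    {
        NotesDocument docNew = db.CreateDocument();
        docNew.ReplaceItemValue("IdField", id);   // so the next Search can find it
        // ...some more processing...
        docNew.Save(true, false, false);
    }
}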
You might also want to store the inserted ids in some singleton object, or even simply in a static list, and lock on this list: whoever obtains the lock verifies that the ids it wants to insert are not already present and then adds them to the list itself.
You only need to keep them for a short time, just long enough that two concurrent posts with the same content don't both insert, and that the normal view index has had time to update. So store a timestamp along with each id, so you can clean out older records if the list grows long.
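A rough sketch of that idea, assuming a static dictionary of id-to-timestamp entries (all names are illustrative):

// Requires: using System; using System.Collections.Generic; using System.Linq;
static readonly Dictionary<string, DateTime> recentIds = new Dictionary<string, DateTime>();
static readonly object idLock = new object();

static bool TryClaimId(string id)
{
    lock (idLock)
    {
        // Drop entries older than a few minutes; by then the view index should have caught up.
        var cutoff = DateTime.UtcNow.AddMinutes(-5);
        var stale = recentIds.Where(kv => kv.Value < cutoff).Select(kv => kv.Key).ToList();
        foreach (var key in stale) recentIds.Remove(key);

        if (recentIds.ContainsKey(id))
            return false;               // a concurrent request already claimed this id

        recentIds[id] = DateTime.UtcNow;
        return true;
    }
}

The insert code would then only create the document when TryClaimId(id) returns true, in addition to the database check.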
I have stored the user profile in another table, called Agent. The Agent table does not contain any navigation property since I couldn't find a way to store the user id in the Agents table (in the examples from MSDN, they didn't do it either).
An Agent has many Packages, and I now have a Create Package view. I want the logged-in Agent's ID to be stored in the Package. So in the Package controller, in the POST Create action result, I'm doing something like this:
var user = adb.Users.Where(p => p.Id == User.Identity.GetUserId()).Single();
var agent = db.Agents.Where(p => p.ID == user.Agent.ID).Single();
(I have two data contexts: one from the application user, and one I got from Database First EF. I tried merging them, but I got errors.)
I am getting this error:
System.InvalidOperationException
Enumerable.Single returns the only element of a sequence and throws an exception if there is not exactly one element. It throws InvalidOperationException either because the input sequence contains more than one element or because the input sequence is empty.
Instead of using Enumerable.Single, I would recommend using SingleOrDefault. This method returns null if there are no elements in the sequence rather than throwing InvalidOperationException. You can then check for a null value and take appropriate action if there is no entry.
Ref https://msdn.microsoft.com/en-us/library/vstudio/bb342451(v=vs.100).aspx
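As an illustration, a hedged sketch of the SingleOrDefault approach applied to the two queries from the question (the contexts adb/db, the Agent navigation property and the Id/ID columns come from the question; HttpNotFound() is just one possible way to bail out of an MVC action):

var userId = User.Identity.GetUserId();

var user = adb.Users.SingleOrDefault(p => p.Id == userId);
if (user == null)
{
    // No matching user row: handle it instead of letting Single throw.
    return HttpNotFound();
}

var agent = db.Agents.SingleOrDefault(p => p.ID == user.Agent.ID);
if (agent == null)
{
    // No Agent row for this user: wrong data in Agents or a bad user.Agent.ID.
    return HttpNotFound();
}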
I think the problem is in the data of the tables (Users, Agents).
You search by Id, which I assume is the key of each table, so duplicate rows within one table can be ruled out. Since you use User.Identity.GetUserId(), the user account already exists, so the data in the Users table is fine. InvalidOperationException in these places means either duplicates or missing data; with duplicates ruled out, your problem is most likely that there is no row in the Agents table with the ID you get from user.Agent.ID. That leaves two possibilities: wrong data in Agents, or you are getting an incorrect Id from user.Agent.ID (maybe a mapping problem). Try creating a new User and a new Agent for that user and run your code again. Are you sure every user is supposed to have an agent?
I am using the DynamoDB service for data insertion, but it behaves erratically: sometimes it inserts values and most of the time it skips them, even though I am sending a different primary key each time. I am using the following code. Please advise. Thank you.
var attributes = new Dictionary<string, AttributeValue>();

foreach (KeyValuePair<string, object> entry in paramDictioonary)
{
    // entry.Value is an object, so compare its string form rather than the reference.
    if (entry.Value == null || entry.Value.ToString() == "")
    {
        attributes[entry.Key] = new AttributeValue { S = "Empty Value" };
    }
    else
    {
        attributes[entry.Key] = new AttributeValue { S = entry.Value.ToString() };
    }
}

using (AmazonDynamoDBClient client = new AmazonDynamoDBClient())
{
    PutItemRequest request = new PutItemRequest
    {
        TableName = "tableNamehere",
        Item = attributes
    };
    client.PutItem(request);
}
Please help. Thanks in advance. Kind regards.
We fought with this problem for 48 hours until we finally re-read the description of the Put operation.
We had created a time-based key and had 6 instances inserting 3-4 records per second. The result we saw was that of 1200 records inserted, only 600-700 made it into DynamoDB and CloudSearch.
What we realised, and maybe it is also affecting you, is that the Put operation will overwrite records with the same key without returning an exception. It therefore looked in our case as if DynamoDB was dropping records on insert, where in reality we must have been creating duplicate keys, so records were overwriting each other.
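If silent overwrites turn out to be the cause, a conditional put makes the write fail instead of replacing the existing item. A hedged sketch (requires an SDK version that supports ConditionExpression; "Id" is a placeholder for your table's hash key attribute name):

// Requires: using Amazon.DynamoDBv2; using Amazon.DynamoDBv2.Model;
var request = new PutItemRequest
{
    TableName = "tableNamehere",
    Item = attributes,
    // Only write if no item with this key exists yet.
    ConditionExpression = "attribute_not_exists(Id)"
};

try
{
    client.PutItem(request);
}
catch (ConditionalCheckFailedException)
{
    // An item with this key already exists; decide whether to skip, merge, or log it.
}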
I hope this helps.
What you're describing shouldn't happen. If you are looking at a table very quickly after data is inserted (less than a second) you might not see it, because Dynamo allows inconsistent reads. If you're not seeing data after minutes (or ever), then either your PUTs are not successful or Dynamo is having problems.
To prove that your bug is really happening, you can look at wire logs of the DynamoDB client (I'm not sure how to enable this in C#, I'm a Java guy), find a request where you PUT data to Dynamo, and then try to read it minutes later and confirm that you can't. If you take the RequestId that AWS returns in the response to both of these requests (the PUT that stored the data and the GET that reads it), you can give these to AWS and have them look into it.
However, my guess is that if you go through the work of getting this logging running and look into it, you might find a bug where you aren't successfully storing the data.
I have a simple client registration system that runs over a network. The system is supposed to generate a unique three-digit ID (the primary key) with the current year concatenated (e.g. 001-2013). However, I've encountered the problem that the same primary key is generated when two users on different computers (over a LAN) try to register different clients at the same time.
What if the user cancels the registration after an ID has already been generated? I have to reuse that ID for another client. I've read about static variables but that didn't solve my problem. I really appreciate your ideas.
Unique and sequential IDs are hard to implement. To achieve this completely you would have to serialize committing the creation of client information, so that the ID is generated only when the data is actually stored; otherwise you'll end up with holes whenever something goes wrong during submission.
If you don't need strictly sequential numbers, giving out ranges of IDs (1-22, 23-44, ...) to each system is a common approach. Instead of ranges you can give out lists of IDs to use ({1, 3, 233, 234}, {235, 236, 237}) if you need to use as many IDs as possible.
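A minimal sketch of the range idea, assuming each workstation reserves a block of IDs from some shared counter and then hands them out locally (the shared counter is faked here with a static field; in practice it would be a row in the database updated inside a transaction):

using System;

class IdRangeAllocator
{
    // Stand-in for the shared counter; in a real system this lives in the database.
    private static int centralCounter = 0;
    private static readonly object centralLock = new object();

    private int next = 1;
    private int last = 0;

    public void ReserveBlock(int size)
    {
        lock (centralLock)
        {
            next = centralCounter + 1;
            last = centralCounter + size;
            centralCounter += size;
        }
    }

    public string NextId()
    {
        if (next > last)
            ReserveBlock(20);                     // fetch another block when exhausted
        return string.Format("{0:000}-{1}", next++, DateTime.Now.Year);
    }
}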
Issue:
New item -001 is created, but not saved yet
New item -002 is created, but not saved yet
Item -001 is cancelled
What to do with ID -001?
The easiest solution is to simply not assign an ID until an item is definitely stored.
An alternative is, when finally saving an item, to look up the first free ID. If the item from step 2 (#2) is saved before the one from step 1, #2 gets ID -001. When #1 then gets saved, the saving logic sees that its claimed ID (-001) is in use, so it assigns -002. So IDs get reassigned.
Finally, you can simply find the next free ID when creating a new item. In the three steps described above, this means you initially have a gap where -001 is supposed to be. If you now create a new item, your code will see that -001 is unused and will assign it to the new item.
But, and that depends entirely on requirements you didn't specify, -001 was then created later in time than -002, and I don't know if that is allowed. Furthermore, at any given moment you can have a gap in your numbering where an item has been cancelled. If that happens at the end of a reporting period, it will cause errors (-033, -034, -036).
You might also want to use an auto-incrementing primary key instead of this invoice number or whatever it is.
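For the "assign the ID only when the item is actually stored" option, a hedged sketch of how the number could be computed inside a transaction (SQL Server assumed; the Clients table and its columns are hypothetical; the UPDLOCK/HOLDLOCK hints serialize concurrent registrations so two machines cannot compute the same next number):

using System;
using System.Data.SqlClient;

static string RegisterClient(string connectionString, string clientName)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            string year = DateTime.Now.Year.ToString();

            // Highest sequence used this year, locked so concurrent callers wait.
            var next = new SqlCommand(
                @"SELECT ISNULL(MAX(CAST(LEFT(ClientId, 3) AS int)), 0) + 1
                  FROM Clients WITH (UPDLOCK, HOLDLOCK)
                  WHERE ClientId LIKE '%-' + @year", conn, tx);
            next.Parameters.AddWithValue("@year", year);
            int seq = (int)next.ExecuteScalar();

            string id = seq.ToString("000") + "-" + year;   // e.g. "001-2013"

            var insert = new SqlCommand(
                "INSERT INTO Clients (ClientId, Name) VALUES (@id, @name)", conn, tx);
            insert.Parameters.AddWithValue("@id", id);
            insert.Parameters.AddWithValue("@name", clientName);
            insert.ExecuteNonQuery();

            tx.Commit();   // nothing is claimed if the user cancels before this point
            return id;
        }
    }
}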
I'm creating maker-checker functionality where a maker creates a record and it is reviewed by a checker. The record creation does not take effect until the checker approves it. If the checker rejects it, it should be rolled back.
The approach I know of is to create a temporary table that holds the records to be created until the checker approves them. But this method requires twice the number of tables to cover all functionality.
What are the best practices to achieve this?
Is it something that can be done using Windows Workflow Foundation (WF)?
Just add a few more columns to your db:
Status: current status of the record (Waiting for approval, Approved, Declined, etc.)
Maker: username or id
Checker: username or id
Checker approval/rejection time
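For illustration only, those columns expressed as an entity (the class name, enum values and CheckedAt property are just one possible naming):

using System;

public enum RecordStatus
{
    WaitingForApproval,
    Approved,
    Declined
}

public class BusinessRecord
{
    public int Id { get; set; }
    // ...the existing business columns...

    public RecordStatus Status { get; set; }
    public string Maker { get; set; }            // username or id of the creator
    public string Checker { get; set; }          // username or id of the reviewer
    public DateTime? CheckedAt { get; set; }     // approval/rejection time
}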
Alternatively, if you have lots of tables that need this maker/checker workflow, you can add another table and reference all other records to it.
Windows Workflow Foundation could also work for you, but I personally find it difficult to use.
If you want revisions for the record you also need more columns, such as RevisionNumber and IsLastRevision.
I don't know what you are using to access the db and modify records. If you are using an OR/M you might override Update and create a revision on every save. For example:
void Update(Entity e)
{
    // Keep the previous state as a historical revision:
    // copy e into n (with AutoMapper for example, or manually) and mark the copy
    // as no longer current.
    Entity n = new Entity();
    // ...copy fields from e to n...
    n.Current = false;

    e.Current = true;
    e.RevisionNumber += 1;

    base.Update(e);   // the underlying ORM update, not this override
    Insert(n);
}
In this case you will have two options:
Create two identical tables and use one table for approved data and one for requested data.
OR
Create two rows per record with a TypeID (Requested/Approved). In this case the user creates a request row with TypeID = Requested; when the approver approves that request you simply overwrite the current approved record with the new one, otherwise you mark the requested row with a rejected status.