detect duplicate insert - c#

Is there an easier way to prevent a duplicate insert after a refresh? The way I do it now is to run a SELECT with all fields except the ID as parameters; if a record exists, I don't insert. Is there a way to possibly detect the refresh itself?

Assuming it's a database, you could put a unique constraint on the combination of "all fields except ID" and catch the exception on an insert or update.
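With SQL Server and ADO.NET, that insert-and-catch approach might look like the following sketch (table, column, and variable names are illustrative, not from the question):

using System.Data.SqlClient;

// Sketch: let the unique constraint reject the duplicate, then swallow
// only that specific error. 2627 (unique constraint) and 2601 (unique
// index) are SQL Server's duplicate-key violation numbers.
try
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO Orders (CustomerId, ProductId, Quantity) " +
        "VALUES (@customerId, @productId, @quantity)", conn))
    {
        cmd.Parameters.AddWithValue("@customerId", customerId);
        cmd.Parameters.AddWithValue("@productId", productId);
        cmd.Parameters.AddWithValue("@quantity", quantity);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}
catch (SqlException ex)
{
    if (ex.Number != 2627 && ex.Number != 2601)
        throw; // not a duplicate-key problem, let it bubble up
    // Duplicate insert detected; the record is already there, so do nothing.
}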

I agree with @Austin Salonen that you should start by protecting the DB with primary keys, unique constraints and foreign keys.
That done, many websites also include some JS behind submit buttons to disable the button immediately before sending the request. This way, users who double-click don't send two requests.

I think you may want to use the EXISTS function.
Here's a simple explanation of EXISTS I found through Google.
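As a hedged sketch (assuming an already-open SqlConnection called conn, and illustrative table/column names), the existence check can even be folded into the insert itself so there is no separate round trip. Note this is still racy under concurrency without a transaction or the unique constraint suggested above:

using (var cmd = new SqlCommand(
    @"IF NOT EXISTS (SELECT 1 FROM Orders
                     WHERE CustomerId = @customerId AND ProductId = @productId)
          INSERT INTO Orders (CustomerId, ProductId)
          VALUES (@customerId, @productId)", conn))
{
    cmd.Parameters.AddWithValue("@customerId", customerId);
    cmd.Parameters.AddWithValue("@productId", productId);
    cmd.ExecuteNonQuery(); // silently does nothing when the row already exists
}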

Like Dereleased said, use a 303-based redirect: make the form submission use POST, and after saving, send a 303 status with a Location header pointing to the post-submit URL. That URL will be fetched via GET, so a refresh will not re-post the data.
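In WebForms terms, that Post/Redirect/Get flow could look roughly like this (the save routine and confirmation page are placeholders, not from the question):

protected void SubmitButton_Click(object sender, EventArgs e)
{
    SaveRecord(); // hypothetical: insert the record here

    // Answer the POST with 303 See Other so the browser re-fetches the
    // confirmation page with GET; hitting refresh then repeats the GET,
    // not the POST.
    Response.StatusCode = 303;
    Response.AddHeader("Location", "Confirmation.aspx");
    Response.End();
}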

It has been a long time since I have done any real web work, but back in the 1.1 days I remember using IDs associated with a postback to determine whether a refresh had occurred.
After a quick search I think this is the article I based my solution from:
http://msdn.microsoft.com/en-us/library/ms379557(VS.80).aspx
It basically shows you how to build a new page class that you can inherit from. The base class exposes a method that you call when you are doing something that shouldn't be repeated on a refresh, and an IsPageRefresh method to track whether a refresh has occurred.
That article was the basis for a lot of variations with similar goals, so it should be a good place to start. Unfortunately I can't remember enough about how it went to give any more help, but from memory the pattern boils down to something like the sketch below.
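A rough reconstruction (not the article's exact code): keep a "ticket" counter in both Session and ViewState. A genuine postback carries the most recently issued ticket; a refresh re-sends an old one.

using System;

public class RefreshAwarePage : System.Web.UI.Page
{
    protected bool IsPageRefresh { get; private set; }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (IsPostBack)
        {
            // A refresh re-posts the ViewState of an earlier response,
            // so its ticket no longer matches the session's latest one.
            int posted = (int)(ViewState["Ticket"] ?? 0);
            int current = (int)(Session["Ticket"] ?? 0);
            IsPageRefresh = posted != current;
        }
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        // Issue a fresh ticket for the page we are about to render.
        int next = (int)(Session["Ticket"] ?? 0) + 1;
        Session["Ticket"] = next;
        ViewState["Ticket"] = next;
    }
}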

I second the option to redirect a user to another (confirmation) page after the request has been submitted (a record inserted into the database). That way they will not be able to do a refresh.
You could also have a flag that indicates whether the insert request has been submitted and store it either on the page (with javascript) or in the session. You could also go further and store it somewhere else but that's an architectural decision on the part of your web application.
If you're using an AJAX request to insert a record then it's a bit harder to prevent this on the client side.
I'd rather use an indicator/flag than compare the fields. Whether that works depends on your records: for example, if it is a simple shop and the user wants to place a second, identical order, a field comparison would treat it as a duplicate and effectively block legitimate functionality.

What DB are you using? If it's MySQL, and certain other factors of your implementation align, you could always use INSERT IGNORE INTO .... EDIT: Struck out, as this doesn't apply to SQL Server.
Alternatively, you could create "handler" pages, e.g. your process looks like this:
User attempts to perform "action"
User is sent to "doAction.xxx"
"doAction.xxx" completes, and redirects to "actionDone.xxx"
???
Profit!
EDIT: After re-reading your question, I'm going to lean more towards the second solution; by creating an intermediate page with a redirect (usually an HTTP/1.1 303 See Other) you can usually prevent this kind of confusion. Checking uniques on the database is always a good idea, but for the simple case of not wanting a refresh to repeat the last action, this is an elegant and well-established solution.

Related

How to add a constant querystring parameter in ASP.NET

I have a large ASP.NET/C# application that we're currently expanding on. I need to add a querystring parameter to a page and have that parameter be added automatically to every request thereafter.
For example, let's say the user chooses mode=1. I need to add &mode=1 to the querystring of every single link that is clicked from that point forward. At any point, the user can switch to mode=2, and I need that change to be reflected on each subsequent request.
And yes, I realize that what I'm basically looking for is to store a flag in either a session variable or a cookie. However, we've done that and we're having issues with it not persisting correctly. If nothing else, I'd like to put this in the querystring if only for testing purposes to see if the issue is simply with session/cookie state, or if somewhere in our code it's getting improperly reset.
I would personally try and rework this approach, as it sounds messy and is likely to cause issues down the track - sessions are perfect for this type of thing.
If you really need to do this, I would suggest writing an HTTP module to intercept all HTTP requests and make sure the parameter is included in each one (a sketch follows the linked article).
See http://support.microsoft.com/kb/308001
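As a rough sketch (my own, not from the KB article), an IHttpModule could carry the parameter forward from the referring page. Names are illustrative; the module still has to be registered in web.config, and a real version would skip non-page requests such as images:

using System;
using System.Web;

public class ModeQueryStringModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        var context = ((HttpApplication)sender).Context;
        HttpRequest request = context.Request;

        // Nothing to do if the request already carries the parameter.
        if (request.QueryString["mode"] != null)
            return;

        // Carry "mode" forward from the page the user came from.
        Uri referrer = request.UrlReferrer;
        if (referrer == null)
            return;

        string previousMode = HttpUtility.ParseQueryString(referrer.Query)["mode"];
        if (string.IsNullOrEmpty(previousMode))
            return;

        string separator = string.IsNullOrEmpty(request.Url.Query) ? "?" : "&";
        context.Response.Redirect(request.Url + separator + "mode=" +
                                  HttpUtility.UrlEncode(previousMode));
    }

    public void Dispose() { }
}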

Saving values server side vs. using Sessions and Request.QueryString in ASP.net

I'm making a simple asp.net app that displays data which can be filtered based on a few different parameters. As such, the different filters that are currently selected need to be saved somewhere. I'm fairly new to .NET, and am wondering the best way to save this information. I noticed a coworker used Request.QueryString in conjunction with the Sessions dictionary. Something like this on page load:
protected void Page_Load(object sender, EventArgs e)
{
    if (Request.QueryString["Category"] != null &&
        Request.QueryString["Value"] != null)
    {
        string Category = Request.QueryString["Category"];
        string CategoryValue = Request.QueryString["Value"];

        // selectedFacets is the server-side dictionary
        selectedFacets.Add(Category, CategoryValue);
    }
}
The QueryString here is changed when the user presses a button on the webpage, updating the URL.
My question is: why even bother with the QueryString at all when all we're using it for is saving a value server side anyway? Wouldn't just making the button an ASP.NET server control be easier, something like:
protected void exampleCatexampleVal_Button_onClick(object sender, EventArgs e)
{
    selectedFacets.Add(exampleCat, exampleVal);
}
Similar business goes on with the Session dictionary: it's just used to save a bunch of values to variables on the server, so why use it in the first place? I'm sure there's a good reason, but right now they both seem overly complicated for what they do. Thank you!
Based on your code examples, I understand that you're talking about ASP.NET WebForms.
Your use case is not complete, but I'll show here some alternatives to achieve your goal. If you give further information, I'll gladly update my answer.
Before we get to it, let me just put things clear: HTTP is stateless. Understanding this basic rule is very important. It means that your server will receive a request, send it to your app (and the .NET process), get the resulting page (and assets) and send it back to the client (mostly, a browser). End of story.
(Almost) Everything that you've created to respond to the request will be lost. And that's why we have options on where to store objects/values across requests.
Server-side Session
This is one of the easiest options. You simply call this.Session.Add("key", object) and it's done. Let's dig into it:
It will use server resources. That is, the more you use the session, the more memory (and other resources, as needed) your app will consume.
It will be harder to scale, because data will live in your server's memory. Vertical scaling may be an option, depending on your hardware, but horizontal scaling will be limited. You can use a session server or store sessions in a SQL Server database, but it won't be as efficient anymore.
It's attached to your client session. It will be lost if the user opens another browser or sends a link to his friend.
It's relatively safe. I say relatively because of the options below. At least it's server side.
GET arguments (AKA QueryString)
That's another option, and you know it already. You can send data back and forth using the querystring (?that=stuff&on=the&URL=youKnow).
It's limited to about 2,000 characters, and whatever you store must be representable as text. That's why you probably won't put a DataGrid there.
The user may change it. Be aware! Always sanitize data from the QueryString.
User is free to bookmark the link or send it to a friend and stuff will be the same. That's nice, mind you.
ViewState
You may have heard about it; it's the engine that makes WebForms so lovely (and so hateful). By default, each control on your page has its state serialized to the ViewState, which is a huge hidden field of base64-encoded (and optionally encrypted) data on your page. Go on, click "View source" and look for it. Don't scream, please. You may add arbitrary data to the ViewState just like the Session.
It's on the client side. Don't trust it.
It will be sent back and forth on each request, so it will consume extra bandwidth.
It will take time to be deserialized/serialized on each request/response.
Data must be serializable (you know what I mean).
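For completeness, stashing your own value in ViewState looks like this (the key name is just an example):

protected void Page_Load(object sender, EventArgs e)
{
    // ViewState stores object, so values must be cast on the way out.
    int visits = ViewState["VisitCount"] == null ? 0 : (int)ViewState["VisitCount"];

    // Round-trips to the client in the hidden __VIEWSTATE field and
    // comes back on the next postback of this same page.
    ViewState["VisitCount"] = visits + 1;
}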
So, by now I hope that you have enough information to make your own decision.
If I missed anything, please let me know.
Have a look at this MSDN Article first. I read through it, and it may answer your question for you.
http://msdn.microsoft.com/en-us/magazine/cc300437.aspx
What you're missing, is how the asp.net page lifecycle works:
http://msdn.microsoft.com/en-us/library/ms178472(v=vs.100).aspx
The thing is, that 'server variable' won't persist between postbacks (AFAIK). It's only useful inside that page, right then. As soon as the page is disposed at the end of the cycle, that dictionary is gone. So you have to store it somewhere if you want to use it later. The article I referenced shows you all the places that you can store information to persist it and where you store it depends on how long you need it to persist, and how many users should have access to it.
Now, certainly, if you DON'T want to persist that dictionary, then sure, just store it in a page variable. That's just fine. There's no reason to persist data that you never need again.
It's always good to keep in mind that there is a slight performance hit when storing and retrieving session state from a database or from a separate process service (StateServer). If session state is stored in-memory, you cannot scale your application to a web farm, and it consumes valuable memory on the web server.
On the other hand, using query string values won't waste your memory and resources, it is fast and you don't have to think about managing session state. It gives SEO benefit and allows bookmarks/favorites.
However, you can store only limited amount of data using query string (WWW FAQs: What is the maximum length of a URL). It can also pose a security risk, if sensitive data is exposed or a malicious user tries to find a bug in your code that mishandles URL values. SQL injection attack is one scenario. What you can do is encrypt sensitive values.
Overall there are many aspects to consider here.
One of the benefits of using query string values is if you need bookmarks/favorites stored in your application that would allow a user to directly navigate to a page and have certain values be passed into the page without the assistance of an in-memory cache. Take the following for example:
I have a product page that displays a grid view of products that can be filtered by a category name. If I use the categoryId value in the query string, then I can make a bookmark to this page and later click on the bookmark, and the page will work as I left it. Relying on an in-memory cache, which may or may not still be there, would not guarantee that my bookmark works every time.

Alter URL parameter using C#, MVC 3 and KO

Here is my situation:
I have a search page that pulls data from a database. Each record shown has a key attached to it in order to pull data from the database for that record. When a link to a document for a record is clicked, this key is added on to the URL using KO data-bind and control is passed to the corresponding MVC Controller.
Here is my problem:
That key is displayed in the URL. I cannot allow that. A user of this website is only allowed access to certain records. It is unacceptable if the user is able to see any record simply by changing the last number or two of the key in the URL. The best solution I've come up with so far is to use AES256 encryption to encrypt each key as the search results are processed, then decrypt the key after it is passed to another controller. This works great until I get to the environment where HTTPS is used; there I get 400 errors.
Am I over-thinking this? Is there a way, using MVC and KO, to mask the key from the URL entirely? Or should the encryption be allowed in the URL even when using HTTPS?
Here are some examples for clarification:
Without any changes to my code, here is how a URL would look:
https://www.website.com/Controller/Method/1234
Using encryption, I come up with something like this:
https://www.website.com/Controller/Method/dshfiuij823o==
This would work fine as long as it works with HTTPS.
One way or another, I need to scramble the key in the URL or get rid of it. Or determine a way to not run a search with the key every time the controller is called.
Thank you all for any help.
Unless I'm missing something really obvious here, can't you, on the web service side of things, check whether the logged-in user has the correct permissions for the record and, if not, simply not show it?
This should ideally be done at the searching level so the user doesn't see any of the files they can't get access to anyway. And even if they change the keys in the browser, they still won't have access.
If there is no membership system, then one is going to need to be implemented if you really want to make your site secure; otherwise you're playing with fire. Failing that, you're going to need to mark your documents as "public" or "private", which will still require a database-level change.
Edit
If you really need to make your IDs unguessable, don't encrypt them; go for something a lot simpler and create GUIDs for them at the database level. Then your URL would contain the GUID instead of an encrypted key. This would be a lot more efficient because you're not encrypting/decrypting your record IDs on every call.
This, however, is still not 100% secure and I doubt would pass PCI Data Security checks as people can still look at (and copy/paste) GUIDs from the query string, just as easy as they could with encrypted strings. Realistically, you need a membership system to be fully compliant.
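A minimal sketch of that idea, with illustrative names (the repository is a hypothetical data-access object): keep the int key internal and expose a separately stored GUID in URLs.

public class DocumentRecord
{
    public int Id { get; set; }          // internal primary key, never in a URL
    public Guid PublicKey { get; set; }  // Guid.NewGuid() on creation, or NEWID() in the DB
}

// Look records up by the GUID from the route, then still enforce permissions.
public ActionResult Method(Guid publicKey)
{
    DocumentRecord record = repository.GetByPublicKey(publicKey);
    if (record == null)
        return HttpNotFound();
    return View(record);
}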
I agree with thedixon. You should be checking that a user has permission to view any of the items anyway.
I also agree that using GUIDs is a good idea.
However, if you're stuck with ints as IDs, here's a simple approach: when creating the URL, multiply the ID by a large integer, such as 12345. Then, when processing a request, divide the number in the URL by your "secret" number. It isn't fool-proof, but a person guessing would have only a tiny chance of hitting a real ID: specifically, a 1 in 12345 chance.
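A minimal sketch of that scheme (the factor is the illustrative 12345 from above; keep the real one server-side only):

const long SecretFactor = 12345;

static long ObfuscateId(int id)
{
    return id * SecretFactor; // this is the value that goes into the URL
}

static bool TryRecoverId(long urlValue, out int id)
{
    id = (int)(urlValue / SecretFactor);
    // Anything that isn't an exact multiple cannot be a real ID.
    return urlValue % SecretFactor == 0;
}

As noted above, this is obfuscation rather than security; a server-side permission check is still the real fix.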

Stop users overwriting each other

I want to stop two users accidentally overwriting each other when updating a record. That is to say, two users load a page with record A on it. User one updates the record to AB and user two updates it to AC.
I don't just want the last write to the database to win. I need a mechanism that says the record has been updated, so yours can't be saved.
Now the two ideas I have is to time stamp the records and check that. If it doesn't match up don't allow the update. The second method is to GUID the record each time an update is performed, check the GUID and if it doesn't match don't update.
Are either of these methods valid? If so, which is best? If not, what do you suggest? This is in C#, if it makes a difference.
Thanks
The two methods you've mentioned are effectively equivalent - either way you've got a unique identifier for "the record at the checkout time" effectively. I don't have any particular view on which is better, although a timestamp obviously gives you the added benefit of data about when the record was valid.
An alternative is to remember what the previous values were, and only allow an update if they match - that way if user A started editing a record, then user B goes in and changes something, then changes it back, user A's edits are still valid.
The term for these techniques is optimistic locking (or optimistic concurrency control).
There is actually a third method. To do the update, issue an update statement of this form:
UPDATE table
   SET update_field = new_value
 WHERE db_pk = my_pk                        -- assume the primary key is immutable
   AND update_field = original_field_value
where original_field_value is the value of the field before the update was attempted. This update will fail if someone else has modified update_field, unless they have changed it to the same value that you have.
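In C# with ADO.NET, that pattern amounts to checking the affected-row count (table and column names are illustrative):

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"UPDATE Contacts
         SET Phone = @newPhone
       WHERE Id = @id
         AND Phone = @originalPhone", conn))
{
    cmd.Parameters.AddWithValue("@newPhone", newPhone);
    cmd.Parameters.AddWithValue("@id", id);
    cmd.Parameters.AddWithValue("@originalPhone", originalPhone);

    conn.Open();
    if (cmd.ExecuteNonQuery() == 0)
    {
        // No row matched: someone else changed it since we read it.
        throw new DBConcurrencyException("The record was modified by another user.");
    }
}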
You're describing Optimistic Locking, a valid and useful technique.
See references here.
Either method is valid for checking.
As to which is best, you have to look at the size of your app and how long each will take to implement. If conflicts will only ever happen occasionally, I'd probably go for the quicker solution and implement the timestamp option.
If you want something more detailed, google concurrency - here's an article to start with - concurrency
I am using the first option: update the timestamp on each write, and at update time check that the timestamps are equal.
Do the terms optimistic and pessimistic locking ring a bell? These are the two recognised approaches to the problem you are describing. Since it sounds like you are working in a web environment, the former (optimistic locking) is more appropriate. You have gone on to describe how this would generally be implemented: it is common to use a timestamp or a version number to check whether the record has been updated since it was retrieved. One other thing to consider is letting your users know that the underlying data has changed, and potentially giving them the option to choose between what they attempted to save and what was saved by the other user. That choice really depends on the business rules.

ASP.NET MVC: Verify that editing record is allowed (ownership)

I have a multi-user ASP.NET MVC application. The users are not supposed to see or do anything with each other's data.
One of my controller actions is the obligatory POST to /Edit to edit a record (e.g. a contact). Now here is my problem: What if somebody forges a simple POST to /Edit (which automatically model-binds to my contact class) and edits somebody else's information? As each record is identified by Id, all that would have to be done is make a fake POST with Id XXX and then record # XXX would be overwritten with whatever the attacker supplied. How can I stop this?
The only thing I thought of is fetching the original instance every time first from the DB, check that it is in fact within the user's scope of editable objects (the ones he'd usually see to edit) and only if that check passes to proceed with UpdateModel and committing the update changes.
Is there a better way?
Edit: This is not a Cross Site/CSRF attack. Another logged in user can do this.
Authorization for the view/page and authorization for the particular object are really two separate concepts. The best approach is probably to use an Authorize attribute in conjunction with the ASP.NET roles system to either grant or deny access to a given page. Once you have verified that the user has access to the page, then you can verify whether he has the permission he is requesting for the object on which he is requesting it. I use this approach in my application, and it works great. Using the Authorize filter first significantly improves performance, since the actual object permission checking is a much heavier operation.
Also, I use a home-brewed rules system to actually set and determine whether the user has access to the object. For example, in my system, administrators have full access to every object. (That's a rule.) The user who creates an object has full access to it (also specified by a rule). Additionally, a user's manager has full access to everything his employees have access to (again specified by a rule). My application then evaluates the object to see if any of the rules apply, starting with the least complex rules first and moving on to the more complex rules last. If any rule is positive, I discontinue rule evaluation and exit the function.
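In code, that layering could look roughly like this (the role name, repository, rules object, and CurrentUserId are illustrative, not from my actual system):

[Authorize(Roles = "ContactEditors")]   // page-level gate: cheap and declarative
[HttpPost]
public ActionResult Edit(int id, FormCollection form)
{
    Contact contact = repository.GetById(id);

    // Object-level check: only run the heavier rules once the user is in.
    if (contact == null || !rules.UserMayEdit(CurrentUserId, contact))
        return new HttpUnauthorizedResult();

    UpdateModel(contact, form.ToValueProvider());
    repository.Save(contact);
    return RedirectToAction("Index");
}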
What you could do is exclude the ID in the model binding with this syntax:
public ActionResult Edit([Bind(Exclude="Id")] User userToEdit)
and then fetch the ID from the currently logged-in user instead, so that only the logged-in user can edit his own items and no one else's.
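Filled out slightly, with a hypothetical repository, that looks like:

[HttpPost]
public ActionResult Edit([Bind(Exclude = "Id")] User userToEdit)
{
    // Ignore whatever Id was posted; resolve it from the authenticated user.
    userToEdit.Id = userRepository.GetIdByName(User.Identity.Name);

    userRepository.Update(userToEdit);
    return RedirectToAction("Index");
}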
Loading the original record first and checking the owner sounds like a good approach to me. Alternatively, you could add a hidden field containing the record ID and cryptographically sign that field to make sure it can't be changed; or take the record ID, hash it using the user ID as a salt, and check that (if you're using the membership providers, use the provider's unique ID, not the login name).
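The signing idea might be sketched like this (the key handling and names are illustrative; you would emit the signature as a second hidden field and verify it on post):

using System;
using System.Security.Cryptography;
using System.Text;

public static class RecordIdSigner
{
    // Assumption: in a real app the key comes from configuration, not source code.
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-with-a-secret-key");

    public static string Sign(int recordId, string providerUserKey)
    {
        using (var hmac = new HMACSHA256(Key))
        {
            // Salting with the user's provider key ties the token to this user.
            byte[] data = Encoding.UTF8.GetBytes(recordId + ":" + providerUserKey);
            return Convert.ToBase64String(hmac.ComputeHash(data));
        }
    }

    public static bool Verify(int recordId, string providerUserKey, string signature)
    {
        return Sign(recordId, providerUserKey) == signature;
    }
}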
This question reminded me of an article that covers a similar issue (in light of URL manipulation attacks) that i had bookmarked. They deal with an authenticated user messing with the data of another user. You might find it useful:
link text
Edit: This link should be correct:
Prevent URL manipulation attacks
