Here is my situation:
I have a search page that pulls data from a database. Each record shown has a key attached to it that is used to pull that record's data from the database. When a link to a document for a record is clicked, this key is appended to the URL via a KO data-bind and control passes to the corresponding MVC controller.
Here is my problem:
That key is displayed in the URL. I cannot allow that. A user of this website is only allowed access to certain records, and it is unacceptable if the user can see any record simply by changing the last digit or two of the key in the URL. The best solution I've come up with so far is to use AES-256 encryption to encrypt each key as the search results are processed, then decrypt it after the encrypted value is passed to the other controller. This works great until I get to the environment where HTTPS is used; there I get 400 errors.
Am I over-thinking this? Is there a way, using MVC and KO, to mask the key from the URL entirely? Or is it acceptable to leave the encrypted key in the URL even when using HTTPS?
Here are some examples for clarification:
Without any changes to my code, here is how a URL would look:
https://www.website.com/Controller/Method/1234
Using encryption, I come up with something like this:
https://www.website.com/Controller/Method/dshfiuij823o==
This would be fine, as long as it also works over HTTPS.
One way or another, I need to scramble the key in the URL, get rid of it entirely, or find a way to avoid running a search with the key every time the controller is called.
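For reference, here is a simplified sketch of the encrypt/decrypt round-trip I mean (the key and IV are placeholders, not real key management; note the re-encoding at the end, since raw Base64 contains '/', '+', and '=', which don't travel well in a URL path):

using System;
using System.Security.Cryptography;

// Illustrative only: a real implementation would manage the key and IV securely.
static class KeyScrambler
{
    static readonly byte[] Key = new byte[32]; // placeholder 256-bit key
    static readonly byte[] IV = new byte[16];  // placeholder IV

    public static string Encrypt(int recordKey)
    {
        using (var aes = Aes.Create())
        using (var enc = aes.CreateEncryptor(Key, IV))
        {
            byte[] plain = BitConverter.GetBytes(recordKey);
            byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
            // Make the Base64 URL-safe: '/' and '+' break URL paths,
            // and the trailing '=' padding is re-added on decode.
            return Convert.ToBase64String(cipher)
                          .Replace('+', '-').Replace('/', '_').TrimEnd('=');
        }
    }

    public static int Decrypt(string token)
    {
        string b64 = token.Replace('-', '+').Replace('_', '/')
                          .PadRight(token.Length + (4 - token.Length % 4) % 4, '=');
        byte[] cipher = Convert.FromBase64String(b64);
        using (var aes = Aes.Create())
        using (var dec = aes.CreateDecryptor(Key, IV))
        {
            byte[] plain = dec.TransformFinalBlock(cipher, 0, cipher.Length);
            return BitConverter.ToInt32(plain, 0);
        }
    }
}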
Thank you all for any help.
Unless I'm missing something really obvious here, can't you check, on the web service side of things, whether the logged-in user has the correct permissions for the record and, if not, simply not show it?
This should ideally be done at the search level, so the user never even sees the files they can't access (see the sketch below). And even if they change the keys in the browser, they still won't have access.
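For instance, a rough sketch with made-up table and column names, where the user's ID is part of the search query itself:

using System.Data.SqlClient;

// Rows the user doesn't own never come back from the search at all.
static class RecordSearch
{
    public static SqlDataReader Search(SqlConnection conn, int userId, string term)
    {
        var cmd = new SqlCommand(
            "SELECT Id, Title FROM Records WHERE OwnerId = @userId AND Title LIKE @term",
            conn);
        cmd.Parameters.AddWithValue("@userId", userId);
        cmd.Parameters.AddWithValue("@term", "%" + term + "%");
        return cmd.ExecuteReader();
    }
}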
If there is no membership system, you'll need to implement one if you really want to make your site secure; otherwise you're playing with fire. Failing that, you'll have to mark your documents as "public" or "private", which will still require a database-level change.
Edit
If you really need to make your ID's unguessable, don't encrypt them, go for something a lot more simple and create GUIDs for them at your database level. Then your URL would contain the GUID instead of an encrypted key. This would be a lot more efficient due to you not having to encrypt/decrypt your record IDs on every call.
This, however, is still not 100% secure, and I doubt it would pass PCI data security checks, as people can still look at (and copy/paste) GUIDs from the query string just as easily as they could encrypted strings. Realistically, you need a membership system to be fully compliant.
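A minimal sketch of the GUID idea (names are illustrative):

using System;

// Keep the int primary key internal and expose a separate,
// non-guessable GUID in URLs.
public class Record
{
    public int Id { get; set; }          // internal primary key, never leaves the server
    public Guid PublicId { get; set; }   // what goes into the URL

    public Record()
    {
        PublicId = Guid.NewGuid();       // or default the column to NEWID() at the database level
    }
}
// The URL becomes /Controller/Method/ab1cbabe-42e2-4d15-ab11-17534b829381
// and the action looks the record up by PublicId instead of Id.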
I agree with thedixon. You should be checking that a user has permission to view any of the items anyway.
I also agree that using GUIDs is a good idea.
However, if you're stuck with ints as IDs, here's a simple approach: when creating the URL, multiply the ID by a large integer such as 12345; when processing a request, divide the number in the URL by your "secret" number. It isn't fool-proof, but someone guessing URLs has only a 1 in 12345 chance of landing on a real ID.
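A minimal sketch of that trick (the multiplier is the illustrative 12345 from above):

using System;

static class IdScrambler
{
    const long Secret = 12345;

    public static long Obfuscate(int id)
    {
        return id * Secret;               // goes into the URL, e.g. 42 -> 518490
    }

    public static bool TryReveal(long fromUrl, out int id)
    {
        id = (int)(fromUrl / Secret);
        return fromUrl % Secret == 0;     // false => almost certainly a guessed/tampered value
    }
}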
Related
Let me explain the problem by giving an example:
Imagine that we are creating a blog application in which users can leave comments, so we need to verify that a commenter is human. Keep in mind that, due to some limitations, I can't use Google reCAPTCHA, so I have to create my own captcha code.
So far there is no problem, since there are plenty of samples on the internet; most of them use the Session to keep the last created code, and some use an encrypted key in the URL or in a hidden form input.
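Roughly, those samples boil down to something like this (a sketch; GenerateRandomCode is a hypothetical helper, and Session/userInput come from the surrounding page or controller):

// When rendering the captcha image:
string code = GenerateRandomCode(6);               // hypothetical helper
Session["CaptchaCode"] = code;                     // each new captcha overwrites the last

// When validating the submitted form:
string expected = Session["CaptchaCode"] as string;
bool isHuman = expected != null &&
               string.Equals(expected, userInput, StringComparison.OrdinalIgnoreCase);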
Firstly, if I use Sessions, I can't display several blog posts that each contain a unique captcha code and validate them after user submission, because I only keep the last generated random code.
Secondly, if I use an encrypted key and expose it in the output HTML or in the query string, it would be easy for robots to reuse a generated key several times, unless I store the keys after submission. But if I choose disposable keys, I have to search my database on every request to avoid generating duplicate random codes.
The question is: what's the best approach to let users open several posts and leave a comment on each one, with the least code and I/O complexity?
I have a number of locations in a number of applications I have built where a page accepts a QueryString in the following format: http://localhost/MySite.aspx?ID=ab1cbabe-42e2-4d15-ab11-17534b829381
These pages then take the query string, attempt to parse it, and display the data matching the GUID using a database call with strongly typed values.
Example:
Guid value;
// Only hit the database when the query string parses as a valid GUID.
if (Guid.TryParse(Request.QueryString["ID"], out value))
{
    SomeControl.DataSource = DatabaseCall(value);  // fetch the matching record
    SomeControl.DataBind();
}
What this obviously means is that any user (provided they have the GUID for the data) can technically access any other user's data. Predicting GUIDs is next to impossible, but I'm still uneasy about it.
How does everyone else deal with this problem? Is there a "correct" way? Is it even worth worrying about?
In various circumstances it absolutely is worth worrying about.
People tend to post or email URIs without stripping away the query strings.
Most browsers store the whole URI, including the query string, in the history.
Most browsers even offer autocomplete in the address bar, which lets you cycle through already-visited resources.
The HTTP request can be intercepted pretty much anywhere on its way from client to server, exposing the query string.
I'd recommend some kind of user-based authentication mechanism, like ASP.NET's membership provider.
In case you are already using some authentication, linking resource GUIDs to their respective user IDs in an association table might do the trick.
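For example, building on the code in the question (UserOwnsResource and currentUserId are hypothetical placeholders for the association-table check):

Guid value;
if (Guid.TryParse(Request.QueryString["ID"], out value))
{
    if (!UserOwnsResource(currentUserId, value))
    {
        // Authenticated but not authorized for this record.
        Response.StatusCode = 403;
        return;
    }
    SomeControl.DataSource = DatabaseCall(value);
    SomeControl.DataBind();
}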
You answered your own question: "predicting GUIDs is next to impossible".
However, the proper way to implement user access, is to build and manage an ACL. You can't simply rely on a unique string for that, because even if users don't guess the string, an attacker can still sniff the traffic and reuse the GUIDs they found.
I agree with @Laurent.
But it depends on your type of business. For extreme security-related contexts such as banking, money transactions, or sensitive personal data, I would go with an encrypted cookie, or simply a unique key passed in the query string (as you asked about): not a GUID, but something far longer whose randomness is hard to predict, along with a background task on the server that invalidates "tokens" older than X minutes, to mitigate the risk of stolen URLs.
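A sketch of generating such a key (storage and the expiry task are summarized in the trailing comments):

using System;
using System.Security.Cryptography;

static class AccessToken
{
    public static string NewToken()
    {
        byte[] bytes = new byte[32];                 // 256 random bits, far longer than a GUID
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        // URL-safe encoding so the token can sit in a query string.
        return Convert.ToBase64String(bytes)
                      .Replace('+', '-').Replace('/', '_').TrimEnd('=');
    }
}
// Server side: persist (token, userId, issuedAt) and run a background task
// that deletes tokens older than X minutes.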
Consider resorting to some standard mechanism such as ASP.NET Membership.
Project type: ASP.NET MVC 2 / NHibernate / C#
Problem
If you have an edit page in a web application, you run into the problem of having to send and then receive the ID of the entity you're editing, the IDs of sub-entities, the IDs of entities selectable from drop-down menus, and so on.
As it is possible to modify a form post, a malicious user could try to send back a different ID, which might grant him more rights (e.g. if that ID referred to a security entity).
My approach
Create a GUID and associate it with the ID
Save the association in the http session
Wait for the response and extract the real ID out of the received GUID.
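In code, the idea is roughly this (a sketch, assuming these helpers live inside a controller with access to Session; names are illustrative):

// When rendering the edit form:
Guid TokenFor(int entityId)
{
    var map = Session["IdMap"] as Dictionary<Guid, int>
              ?? new Dictionary<Guid, int>();
    Guid token = Guid.NewGuid();
    map[token] = entityId;
    Session["IdMap"] = map;
    return token;                        // emit this into the form instead of the real ID
}

// When handling the post-back:
int IdFor(Guid token)
{
    var map = Session["IdMap"] as Dictionary<Guid, int>;
    int id;
    if (map == null || !map.TryGetValue(token, out id))
        throw new HttpException(400, "Unknown or tampered token");
    return id;
}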
Question:
What techniques do you use to obfuscate an entity ID?
If you're doing that much work with GUIDs, why not just use a GUID as the identity of the entity itself, as actually stored in the database (though I'd advise against it)?
Or you could have a server-side encryption scheme that encrypts and subsequently decrypts the ID. That's along the same lines as what you're doing, except you're not storing anything random in the session (yuck :)).
You could even forget trying to do this at all, since a lot of sites are "affected" by this issue and it's obviously not a problem (Stack Overflow, for example). The overhead is just too much.
Also, if you're worried about security, why not set granular permissions at the individual action or even entity level? That would solve some problems as well.
EDIT:
Another problem with your solution is inconsistent identifiers. If a user says "ID as23423he423fsda has invalid data", how do you know which ID it belongs to when it changes on every request (assuming you're changing the ID in the URL as well)? You'd be much better off with a scheme that always maps an ID to the same value; that way you can easily perform a lookup (if you need one), and the user sees consistent identifiers.
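One way to get consistent identifiers is a keyed hash stored alongside the row; it isn't reversible, so you look the token up rather than decrypt it. A sketch (the key bytes are a placeholder for a real server-side secret):

using System;
using System.Security.Cryptography;

// The same ID always produces the same token, so it can be stored
// in an indexed column and looked up directly.
static class IdToken
{
    static readonly byte[] Key = new byte[32];

    public static string ForId(int id)
    {
        using (var hmac = new HMACSHA256(Key))
        {
            byte[] mac = hmac.ComputeHash(BitConverter.GetBytes(id));
            return BitConverter.ToString(mac, 0, 12).Replace("-", "").ToLowerInvariant();
        }
    }
}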
Your controllers should be immune to modified POST data. Before displaying or modifying records belonging to a user, you should always check whether the records in question belong to the authenticated user.
So I have never had to use cookies before, but now I am making a shopping cart that users can leave and come back to, and I want it to remember what they added.
What I am wondering:
How do I check whether a cookie exists and then create or update it? Is that the best way to think about using a cookie?
How exactly would I store the data? In particular, I want to store a list of IDs like "5,6,7,8". Should I just use one string for this, or is there a faster/better way than reading, parsing, and rewriting something like that? I suppose I could just keep appending new_value + ',' to the end; is there an append operation for cookie values?
Does the cookie have some unique identifier I can use to be sure I don't write duplicates?
Note: it's easy to look up the 'HOW' (the syntax), but I'm really trying to grasp the 'BEST WAY': the most ideal approach, how cookies were meant to be used, or what you programmers have found to be the most fruitful way to use them in this type of scenario.
The winning answer to this similar question suggests that you only store the user ID in the cookie. The rest goes in the database.
If you can consider other approaches besides cookies, many folks prefer using session over using cookies. For one thing, you don't always have a lot of room in a cookie.
Storing the shopping cart in a cookie means that you will have no record of what people were shopping for but didn't purchase.
OTOH, using the cookie is using the shoppers' storage space and preserving your own. That could be significant over time and a lot of shoppers.
I solved this in the past by creating a class to manage the cookies (e.g. CookieManager) with static methods that I passed an HttpRequest object to.
I was trying to solve a very similar problem, so I created a Count cookie and then cookies that stored the information I wanted to save (in your case, ID numbers). I only wanted to save the last five items a user viewed, so I managed this in my CookieManager class, dequeuing the oldest cookie and queuing up the latest; the Count cookie kept track of how many cookies I had. Obviously this isn't very high-tech or secure, but for that project security was completely unnecessary. Anything you want to be robust should be saved in a database or elsewhere server-side.
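A simplified sketch along those lines, collapsing the queue into a single comma-separated cookie (names are illustrative):

using System;
using System.Collections.Generic;
using System.Web;

// Keeps the last N viewed IDs in one comma-separated cookie,
// dropping the oldest entry when the list is full.
static class CookieManager
{
    const int MaxItems = 5;
    const string CookieName = "ViewedIds";

    public static void PushId(HttpRequest request, HttpResponse response, string id)
    {
        var ids = new List<string>();
        HttpCookie existing = request.Cookies[CookieName];
        if (existing != null && !string.IsNullOrEmpty(existing.Value))
            ids.AddRange(existing.Value.Split(','));

        ids.Remove(id);                    // no duplicates: re-adding moves it to the most recent slot
        ids.Add(id);                       // newest at the end
        while (ids.Count > MaxItems)
            ids.RemoveAt(0);               // dequeue the oldest

        var cookie = new HttpCookie(CookieName, string.Join(",", ids.ToArray()));
        cookie.Expires = DateTime.Now.AddDays(30);
        response.Cookies.Add(cookie);
    }
}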
I want to further explain why you should only store a GUID that maps to a user ID in a cookie. There are two main reasons:
Performance. As slow as it may seem to pull data from a database, you have to remember that cookie data is not free. It has to be uploaded from the user's browser to your web server, and even high-speed broadband connections tend to have much slower upload speeds. By contrast, your database likely has a gigabit link (sometimes even faster) directly to the web server. So what you really want in your cookie, for best performance, is a GUID that maps directly to the primary key of your database table.
Security. Data in cookies is stored in a plain text file on the user's computer. You never know where a user will access your site from; it could be a very public place that's not appropriate to keep such data.
So is there any data you can use cookies for directly? As it happens, there is. Cookies have the nice property of sticking with a particular machine and browser. These days a lot of people will access the web from more than one place. Perhaps a work computer, a home computer, a smart phone, a netbook... all of which may have different screen sizes and other peculiarities. So what you can do with a cookie is store information specific to that combination of user+location.
Is there an easier way to prevent a duplicate insert after a refresh? The way I do it now is to select using all fields except the ID as parameters; if a matching record exists, I don't insert. Is there a way to detect the refresh itself?
Assuming it's a database, you could put a unique constraint on the combination of "all fields except ID" and catch the exception on an insert or update.
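A sketch of that pattern for SQL Server (InsertRecord is a hypothetical data-access call; 2627 and 2601 are SQL Server's unique constraint/index violation numbers):

using System.Data.SqlClient;

try
{
    InsertRecord(record);
}
catch (SqlException ex)
{
    if (ex.Number != 2627 && ex.Number != 2601)
        throw;                 // some other database error: don't swallow it
    // Duplicate key: the row already exists, so ignore the repeated submit.
}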
I agree with @Austin Salonen that you should start by protecting the DB with primary keys, unique constraints and foreign keys.
That done, many websites include some JS behind submit buttons to disable the button immediately before sending the request. That way, users who double-click don't send two requests.
I think you may want to use the EXISTS function.
Here's a simple explanation of EXISTS I found through Google.
Like Dereleased said, use a 303-based redirect: make the form submission use POST, and after saving, send a 303 status with a Location header pointing at the post-submit URL. That URL is then fetched via GET, so a refresh will not re-post the data.
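A sketch in ASP.NET MVC terms (RedirectToAction sends a 302 by default, so the 303 is set explicitly here; OrderForm and SaveOrder are hypothetical):

using System.Web.Mvc;

public class OrderController : Controller
{
    [HttpPost]
    public ActionResult Create(OrderForm form)
    {
        SaveOrder(form);                                  // persist the submission
        Response.StatusCode = 303;                        // "See Other"
        Response.RedirectLocation = Url.Action("Confirmation");
        return new EmptyResult();
    }

    [HttpGet]
    public ActionResult Confirmation()
    {
        return View();       // refreshing here just re-GETs the confirmation page
    }
}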
It has been a long time since I have done any real web work, but back in the 1.1 days I remember using IDs associated with a postback to determine whether a refresh had occurred.
After a quick search I think this is the article I based my solution from:
http://msdn.microsoft.com/en-us/library/ms379557(VS.80).aspx
It basically shows you how to build a new page class that you can inherit from. The base class exposes a method that you call when you are doing something that shouldn't be repeated on a refresh, and an IsPageRefresh method to track whether a refresh has occurred.
That article was the basis for a lot of variations with similar goals, so it should be a good place to start. Unfortunately, I can't remember enough about how it went to give any more help.
I second the option of redirecting the user to another (confirmation) page after the request has been submitted (the record inserted into the database). That way, a refresh will not repeat the insert.
You could also keep a flag indicating whether the insert request has been submitted, and store it either on the page (with JavaScript) or in the session. You could go further and store it somewhere else, but that's an architectural decision for your web application.
If you're using an AJAX request to insert a record then it's a bit harder to prevent this on the client side.
I'd rather use an indicator/flag than compare the fields. Comparing fields depends on your records: for example, if it is a simple shop and the user genuinely wants to place an identical order, you would treat it as a duplicate and effectively block legitimate functionality.
What DB are you using? If it's MySQL, and certain other factors of your implementation align, you could always use INSERT IGNORE INTO .... EDIT: struck, as this doesn't apply to SQL Server.
Alternatively, you could create "handler" pages, e.g. your process looks like this:
User attempts to perform "action"
User is sent to "doAction.xxx"
"doAction.xxx" completes, and redirects to "actionDone.xxx"
???
Profit!
EDIT: After re-reading your question, I'm going to lean more towards the second solution: by creating an intermediate page with a redirect (usually an HTTP/1.1 303 See Other), you can prevent this kind of confusion. Checking uniqueness in the database is always a good idea, but for the simple case of not wanting a refresh to repeat the last action, this is an elegant and well-established solution.