I've used ASP.NET MVC with Entity Framework (the most recent versions of both) to extend an existing website's data model, but without changing user accounts and user authentication. Feel free to answer this by pointing me to other documentation or learning resources.
My question is: How does ASP.NET handle the passing of user credentials over the wire, the authentication of those credentials, and storing user account credentials? And, how would I do that myself 'manually', in terms of securing their information on front end, in transit, processing, and storage?
Security at the various stages:
Front End - No idea, but make sure forms are validated, I'm guessing? Is this the user's problem?
Transit - Use an encrypted protocol (HTTPS?), but I'm not sure how to set that up in terms of appropriate controller methods, views, and certificates.
Processing - Decrypt username/password to plaintext, hash both and find matching record in the user account table, overwrite or make sure plaintext variables aren't hanging around in code.
Storage - Only store hashes of username/password on the database.
Then once authenticated, create a user session/key that will expire at some point. Again, I'm not sure how to do this 'manually' with ASP.NET, but I know that it happens with the built-in/default login setup.
Front End
Data is not secure. Passwords are entered in an input with type "password", which obfuscates the entered characters (preventing over-the-shoulder style attacks). However, the plain-text value is exposed via JavaScript and can be read by keyloggers or other client-side malware. There's not much you can do about any of this. Ultimately, the end-user is responsible for the security of their machine.
Transit
Always, always, always use HTTPS. It's not foolproof, as was seen with the recent Heartbleed attack, but it's better than just sending everything over the wire in plain text with HTTP. Setting aside fundamental flaws like Heartbleed, with HTTPS you need only worry about protecting your certificate's private key. HTTPS utilizes asymmetric encryption with a private and a public key. The public key is sent to the client, allowing it to encrypt what it sends but not to decrypt, while the private key allows the ciphertext sent by the client to be decrypted server-side. Hence the need to protect your private key.
As far as your controller actions go, if you want to enforce HTTPS on an action, such that the user can only access your login page at, for example, https://domain.com/login rather than http://domain.com/login, you'd add the [RequireHttps] attribute. This attribute can be added at the action level to protect just that action, at the controller level to protect all actions within that controller, or globally to force your entire application to be HTTPS-only.
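For example, a minimal sketch (the controller and action names here are illustrative):

    using System.Web.Mvc;

    // Controller-level: every action in this controller is HTTPS-only.
    [RequireHttps]
    public class AccountController : Controller
    {
        // GET /account/login is now reachable only via https://
        public ActionResult Login()
        {
            return View();
        }
    }

    // Or globally (e.g. in FilterConfig / Global.asax):
    // GlobalFilters.Filters.Add(new RequireHttpsAttribute());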
Processing
You do not manually decrypt the username/password. If you're using HTTPS, your application will be handed the already decrypted values by the web server. Dealing with plain text in your application code is not problematic and is pretty much necessary. I suppose that if some malware running on your server could gain access to the IIS process in memory and decompile the machine code at runtime into something usable where they could get at the plain-text password, etc., it would be possible to exploit this, but it's a non-trivial hack and would require your server to be severely compromised already.
Storage
Of course you only store hashes in persistent storage. These are created with a one-way hash function, generally combined with a random salt value per user. As long as the salt is random and unique, an attacker cannot use precomputed tables to recover the original string from the stored value. The only avenue of attack is brute force: hashing millions of different candidate strings and checking for a match against the target hash. Modern, deliberately slow hashing algorithms make this sort of attack impractical, requiring even a supercomputing platform hundreds or even thousands of years to find a viable collision. Just stay away from MD5, which does regularly emit collisions and has entire blackhat databases devoted to matching hashed values back up to plain-text values.
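A minimal sketch of salted, deliberately slow hashing using the built-in Rfc2898DeriveBytes (PBKDF2); the salt size, hash size, and iteration count here are illustrative:

    using System.Security.Cryptography;

    static class PasswordHasher
    {
        // Illustrative parameters; raise the iteration count as hardware allows.
        const int SaltSize = 16, HashSize = 32, Iterations = 100000;

        public static void Hash(string password, out byte[] salt, out byte[] hash)
        {
            salt = new byte[SaltSize];
            using (var rng = new RNGCryptoServiceProvider())
                rng.GetBytes(salt);
            using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                hash = kdf.GetBytes(HashSize);   // store (salt, hash); never the password
        }

        public static bool Verify(string attempt, byte[] salt, byte[] storedHash)
        {
            using (var kdf = new Rfc2898DeriveBytes(attempt, salt, Iterations))
            {
                byte[] hash = kdf.GetBytes(HashSize);
                int diff = 0;                    // constant-time comparison
                for (int i = 0; i < hash.Length; i++) diff |= hash[i] ^ storedHash[i];
                return diff == 0;
            }
        }
    }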
Is it possible to safely include a password in a query string for a C# ASP.NET site?
A few assumptions and things I know:
The site does not and will not have links/images/javascript/analytics to/from other sites. So no referrer links to worry about.
ALL communication with the web browser will be over https.
I know that the query string will remain in the history of the computer.
More than just the password/username is needed to log in. So simply pasting the URL back into the browser will not result in a login.
I know the site may be susceptible to cross-site scripting and replay attacks. How do I mitigate these?
Given the above scenario, how should I include a password in a query string?
Please don't ask me 'why', I know this is not a good idea, but it is what the client wants.
SSL
You can safely send the password to a web server using an SSL connection. This encrypts all the communication between client and server.
Hide In The Header
Basic authentication protocols place the user/password information in the HTTP request header. C# and many other web server languages can access this information, and use it to authenticate the request. When mixed with SSL this is very safe.
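As a rough sketch of the client side (the credentials and URL are placeholders), the header is just the Base64-encoded "user:password" pair, which is why SSL is essential:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    var client = new HttpClient();
    // Basic auth: Base64("user:password") in the Authorization header.
    // Base64 is encoding, not encryption -- only ever do this over HTTPS.
    byte[] raw = Encoding.UTF8.GetBytes("alice:s3cret");
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Basic", Convert.ToBase64String(raw));
    var response = client.GetAsync("https://example.com/api/account").Result; // blocking for brevity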
Register An Application Key
If none of the above is possible, then it's recommended that you create a unique key for each user. Rather than sending their password, this key is used. The advantage is that the key is stored in the database and can be revoked. The user's password remains unchanged, but they must register again to get a new key. This is good if there is a chance someone could abuse their key.
Perform Hand Shaking
Handshaking is where the client makes a request to the server, and the server sends back a randomly generated key. The client then generates a hash from that key using a secret and sends it back to the server. The server can then check whether the client knew the correct secret. The same thing can be done with the password as the secret, with the client including username details in the request. This can authenticate without ever sending the password.
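A minimal sketch of that exchange, using HMAC-SHA256 with the password as the secret (all names are illustrative):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Server: issue a fresh random challenge for each login attempt.
    static byte[] NewChallenge()
    {
        byte[] challenge = new byte[32];
        using (var rng = new RNGCryptoServiceProvider()) rng.GetBytes(challenge);
        return challenge;
    }

    // Client: prove knowledge of the secret without transmitting it.
    static byte[] Respond(byte[] challenge, string secret)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
            return hmac.ComputeHash(challenge);
    }

    // Server: recompute with its stored copy of the secret and compare.
    static bool VerifyResponse(byte[] challenge, string storedSecret, byte[] response)
    {
        byte[] expected = Respond(challenge, storedSecret);
        int diff = expected.Length ^ response.Length;   // constant-time comparison
        for (int i = 0; i < expected.Length && i < response.Length; i++)
            diff |= expected[i] ^ response[i];
        return diff == 0;
    }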
Encrypt Password
If none of the above are possible options, then you could attempt to use JavaScript to encrypt the password before it's sent via an open URL. I found an open source implementation of the AES block cipher: the project is called JSAES and supports 128- to 256-bit encryption. There might be other JS libraries that do the same thing.
It is generally not advisable to put secrets in a query string, which can then be bookmarked and copied, exposing the password at rest in history files, cookies, etc.
To safeguard just the password in this use case, one option would be to hash the password (one-way, not reversible). That way the actual password is not known in transit or at rest, but note that an attacker can still use the hashed value itself to log in, since the server would presumably compare that hash against its store for authentication.
Update: Switching to stateless (JWT) sessions
In the olden days, when buggies were a thing (okay, they are still a thing with some fringe groups), we used "sessions".
A "session ID" (see JSESSIONID, for example, in Java/J2EE/Servlet-based systems) was stored as a cookie. That value, being a random number, was hard to guess, but it had problems ranging from hijacking to memory and lookup overhead on the server.
In 2020 times (as of this writing) ... JSON Web Tokens (JWT) can be used to securely encapsulate the user-session information and be pushed back down in an immutable cookie without ever exposing the password and with very little server overhead.
In this model, after login, the server issues a token (using OAuth2 or related), which has an expiration timestamp.
This data and possibly other session information can then be encrypted, hashed, signed and wrapped up in a JWT (token) - as a cookie back to the web-browser.
Ref: https://oauth.net/2/jwt/
At this point, the client cannot do anything to compromise (or even view) the cookie, because any sensitive data should have been encrypted (AES-256 or better), the contents hashed, and the hash signed. What this means is that when the server gets the token back, it looks at the timestamp and may throw it out, forcing re-authentication, and then...
Otherwise, it can verify that it signed the content, re-hash the contents to verify the hash, and decrypt the data if needed (which would not include the password, but rather just the ID of the user, which is verified and not necessarily a secret per se).
This can include already-looked-up scopes (authorization) for what the user can do etc - avoiding round trips to the authentication server until the token times out.
Thus the above (using JWTs, hashing, signing, encrypting - into a cookie) is the recommended way to both go stateless and avoid passing around a secret between the client and server.
Ref: https://auth0.com/blog/stateless-auth-for-stateful-minds/
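A minimal sketch of issuing and validating such a token, using the System.IdentityModel.Tokens.Jwt NuGet package (the issuer, claims, and key below are illustrative):

    using System;
    using System.IdentityModel.Tokens.Jwt;
    using System.Security.Claims;
    using System.Text;
    using Microsoft.IdentityModel.Tokens;

    // The signing key must stay server-side; this literal is a placeholder.
    var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("a-256-bit-server-side-secret-key!!"));
    var handler = new JwtSecurityTokenHandler();

    // Issue: identity + scopes + expiration timestamp, signed by the server.
    string token = handler.WriteToken(new JwtSecurityToken(
        issuer: "https://example.com",
        claims: new[] { new Claim("sub", "user-42"), new Claim("scope", "read:orders") },
        expires: DateTime.UtcNow.AddMinutes(30),
        signingCredentials: new SigningCredentials(key, SecurityAlgorithms.HmacSha256)));

    // Validate on each request: signature and expiry are checked; no server session needed.
    var principal = handler.ValidateToken(token, new TokenValidationParameters
    {
        ValidIssuer = "https://example.com",
        IssuerSigningKey = key,
        ValidateAudience = false
    }, out _);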
Additionally, consider that multi-factor authentication schemes (see Google authenticator) and related systems are a much stronger security posture (stealing password is not enough and the keys auto-rotate on external systems) but do require semi-frequent access to the rotating key system, which inhibits a smooth user experience to some extent.
Update: Multi-Factor auth by Google and others has gotten much better.
Older companies still use SMS one-time passwords (OTP), which can be compromised by walking into a wireless carrier's store and claiming SIM card loss (given a known phone number).
Google and other more advanced companies comparatively use rotating tokens that can be embedded in a smartphone app that then are used for many services.
Google even has push notification, where the user just confirms with a button press: "Yes, it is me".
I'm doing some basic security work for an application. When a user logs in, their credentials are validated via Active Directory. Sometimes, a user requests changes that cause the program to restart. Since this application is not a single instance program, I simply launch another instance and close the current one. Everything within the application is fine.
However, users aren't happy that they have to re-login every time it restarts. So I pieced together some basic security using SecureString to store the password in the application. If the application restarts, the password gets decrypted then re-encrypted using a basic implementation of Rijndael's algorithm from CodeProject. The application passes the username and encrypted password as command line parameters to the new instance. The application needs to encrypt the password, because any further calls of wmic process would show the password. The new instance decrypts the password, validates it against Active Directory silently, and stores it as a SecureString as usual.
I'm not too familiar with general security practices. I'm a little nervous about returning the password from the decryption method. It's not being stored in any variable per se, because the call is made right in the Active Directory validation request. I'm still not sure if the returned password is accessible somewhere in memory or if it's stored in a register.
This process doesn't need to be the greatest security ever. It just needs to discourage people from getting a password in cleartext if they can easily access the memory contents.
Many thanks!
What is your attack model?
If you assume the server is owned by an attacker, you have lost in all cases.
If you assume the server is safe, you don't need any encryption at all.
So I guess my recommendation is to drop all the server-side encryption you are doing, because you assume that the server is safe anyway. Nobody can access the memory of your server, and if someone could, you'd be owned anyway.
Function return values are, in .NET apps, pushed onto an "evaluation stack", which resides in protected memory within the process. However, you're talking about a string, and that's a reference type, so what's on the evaluation stack is a pointer to that string's location on the heap. Heap memory is relatively insecure because it can be shared, and because it lives as long as the GC doesn't think it needs to be collected, unlike the evaluation or call stacks which are highly volatile. But, to access heap memory, that memory must be shared, and your attacker must have an app with permission from the OS and CLR to access that memory, and that knows where to look.
There are much easier ways to get a plaintext password from a computer, if an attacker has that kind of access. A keylogger can watch the password being typed in, or another snooper could watch the actual handle on the unmanaged side of the GDI UI and see the textbox that's actually displayed in the Windows GUI get the plaintext value (it's only obfuscated on the display). All that without even trying to crack .NET's code access security or protected memory.
If your attacker has this kind of control, you have lost. Therefore, that should be the first line of defense; make sure there is no such malware on the client computer, and that the instance of your client app that the user is attempting to log into has not been replaced with a cracked lookalike.
As far as obfuscated password storage between instances, if you're worried about mem-snooping, a symmetric algorithm like Rijndael is no defense. If your attacker can see the client computer's memory, he knows the key that was used to encrypt it because your application will need to know it in order to decrypt it; it will thus either be hard-coded into the client app or it will be stored near the secure string. Again, if your attacker has this kind of control, you have lost if you do your authentication client-side.
I would, instead, use a service layer on a physically and electronically secured machine to provide any functionality of your app that would be harmful to you if misused by an attacker (primarily data retrieval/modification). That service layer could be used both to authenticate and to authorize the user to perform whatever the client app would allow.
Consider the following:
The user enters their credentials into your client app. These credentials can be the same as the AD credentials, but they will not be used as such. The only way to prevent a keylogger or other malware from seeing this is to ensure that no such malware exists on the computer, through enforcement of good AV software.
The client app connects to your service endpoint through WCF. The endpoint can be signed with an X.509 certificate; not NSA-level security, but at least you can be confident you're talking to the server under your control.
The client app then hashes your user's password with something that produces a large digest, like SHA-512. This in itself is not secure; it's too fast, and the entropy of your user's password is too low, to prevent an attacker from cracking the hash. However, again, they would have to have control of the computer to see the hash, and we're going to further obfuscate it.
The client app transmits the username, password and the Hardware ID of the client computer over the WCF channel.
The server gets these credentials. Notice that the server doesn't get a plaintext password; this is for a reason.
The server cuts the hashed password into 256-bit halves. The first half is then BCrypted (using an implementation configured to be suitably slow; a work factor of 10 or 11 "rounds" will usually do it) and compared with a hashed value in a user database. If they match, the DB returns the user's AD credentials, which have been symmetrically encrypted with the other half of the password hash. This is why a plaintext password is never sent; the server doesn't have to know it, but an attacker would need it in order to get anything meaningful out of a stolen copy of the user database. (A minimal sketch of this step appears below, after these steps.)
The server decrypts the AD credentials, submits them to AD, and receives the IPrincipal representing that user's identity and security context. The IPrincipal implementation will contain zero information that could be used to crack the user's account.
The server generates a cryptographically random 128-bit value, concatenates the 128-bit hardware GUID, and hashes it with SHA-512. It uses half of that hash to symmetrically encrypt the key value that was used to decrypt the AD credentials. It then BCrypts the other half and stores that hash beside the encrypted key.
The server then transmits back three pieces of information over the secure WCF channel; the IPrincipal that AD produced, the unhashed 128-bit random value (the "transfer token"), and another cryptographically-random value of arbitrary length (the "session token").
The client app is now authenticated on the client side, meaning you can control user access to code by interrogating the IPrincipal for AD role membership, and the server is now also confident that the user who has the session token is a real user. When making any further calls to the service (data retrieval/persistence), the client should use the WCF channel that was negotiated, AND pass its session token. The combination of WCF channel and session token is one-time and unique; using an old token on a new channel, or passing the wrong token on the same channel, indicates the session has been compromised. Above all, none of the persistent data stored anywhere at anytime in either client or server can be used to get the AD credentials and authenticate.
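As promised, a minimal sketch of the hash-splitting verification step, assuming the BCrypt.Net library (any BCrypt port would do) and AES for the symmetric part; all names and parameters are illustrative:

    using System;
    using System.Security.Cryptography;
    using BCryptNet = BCrypt.Net.BCrypt;   // NuGet: BCrypt.Net-Next (an assumed choice)

    static byte[] DecryptAdCredentials(byte[] clientHash512, string storedBcryptHash,
                                       byte[] encryptedAdCreds, byte[] iv)
    {
        // Split the 512-bit client-side hash into two 256-bit halves.
        byte[] verifyHalf = new byte[32], keyHalf = new byte[32];
        Buffer.BlockCopy(clientHash512, 0, verifyHalf, 0, 32);
        Buffer.BlockCopy(clientHash512, 32, keyHalf, 0, 32);

        // Half one: checked against the slow BCrypt hash stored in the user DB.
        if (!BCryptNet.Verify(Convert.ToBase64String(verifyHalf), storedBcryptHash))
            throw new UnauthorizedAccessException("Bad credentials.");

        // Half two: the AES key that decrypts the stored AD credentials.
        using (var aes = Aes.Create())
        {
            aes.Key = keyHalf;
            aes.IV = iv;
            using (var dec = aes.CreateDecryptor())
                return dec.TransformFinalBlock(encryptedAdCreds, 0, encryptedAdCreds.Length);
        }
    }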
Now, when your client application closes, all "session state" is lost between client and server; the session token is not valid for any other negotiated channel. So, you've lost authentication; the next client who connects could be anyone regardless of who they say they are. This is where the "transfer token" comes in:
The "transfer token" is a free pass back into the system. It is one-time, and expires if unused 18 hours after it was issued.
The client application, when closing, passes two pieces of information to the new instance (however it chooses to do so); the user name of the person who logged in, and the "transfer token".
The new instance of the client application takes these two pieces of information, and also gets the Hardware ID of the client machine. It negotiates a secure connection with the WCF service, and passes these three pieces of information.
If the user last logged in more than 18 hours ago (not 24 hours, so they can't show up a minute before they did yesterday and restart the app), or, if you want to be really paranoid, more than 8 hours ago, the service immediately returns an error that the transfer token for that account is out of date.
The service takes the transfer token, concatenates the Hardware ID, SHA-512s it, BCrypts half, and compares the result to the stored second verification value. Only the proper combination of the transfer token and the machine that last logged in will produce the correct hash. If it matches, the other half of the hash is used to decrypt the key that will then decrypt the AD info.
The service then proceeds as if the user had provided the application password hash, decrypting the AD info, retrieving the IPrincipal, generating a new transfer token, session token, and re-encrypting the key for the AD data.
If any part of this process fails (trying to use an incorrect token including using the same token twice, or using the token from a different machine or for a different user), the service reports back that the credentials are invalid. The client app will then fall back to the standard user-password verification.
Here's the rub; this system, by relying on a secret password that is not persisted anywhere except the user's mind, has no back doors; administrators cannot retrieve a lost password to the client app. Also, the AD credentials, when they have to change, can only be changed from within the client app; the user can't be forced to change their password by AD itself on a Windows login, because doing so will destroy the authentication scheme they need to get into the client app (the encrypted credentials will no longer work, and the client app credentials are needed to re-encrypt the new ones). If you were somehow able to intercept this validation inside AD, and the client's app credentials were the AD credentials, you could change the credentials in the user app automatically, but now you're using one set of credentials to obfuscate the same set of credentials, and if that secret were known you're hosed.
Lastly, this variant of this security system functions solely on one principle; that the server is not currently being compromised by an attacker. Someone can get in, download offline data, and they're stuck; but if they can install something to monitor memory or traffic, you're hosed, because when the credentials (either username/password hash or transfer token/hardware ID) come in and are verified, the attacker now has the key to decrypt the user's AD credentials. Usually, what happens is that the client never sends the decryption key, only the verification half of the hashed password, and then the server sends back the encrypted credentials; but, you are considering the client to be a bigger security risk than the server, so as long as that is true, it's best to keep as little plaintext as possible on the client for any length of time.
Take a look at this:
Password Salt (Wikipedia)
To summarize the approach:
When you want to store password P to disk, generate a random string S (called the salt) and then store the tuple (S, SHA1Hash(P + S)).
When you want to check a password attempt P' against the stored password, compare SHA1Hash(P' + S) to the stored hash.
When you pass around this password attempt you have, pass around only the hashed version, i.e., SHA1Hash(P' + S).
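A minimal sketch of that scheme as written (SHA-1, as described above, though a deliberately slow KDF such as PBKDF2 or bcrypt is preferable in practice); names are illustrative:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Store (salt, hash) on disk; never the password itself.
    static (string Salt, string Hash) Store(string password)
    {
        byte[] saltBytes = new byte[16];
        using (var rng = new RNGCryptoServiceProvider()) rng.GetBytes(saltBytes);
        string salt = Convert.ToBase64String(saltBytes);
        return (salt, Sha1(password + salt));   // SHA1Hash(P + S)
    }

    // Check an attempt P' by comparing SHA1Hash(P' + S) to the stored hash.
    static bool Check(string attempt, string salt, string storedHash) =>
        Sha1(attempt + salt) == storedHash;

    static string Sha1(string s)
    {
        using (var sha1 = SHA1.Create())
            return Convert.ToBase64String(sha1.ComputeHash(Encoding.UTF8.GetBytes(s)));
    }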
I am creating client-server communication based on asynchronous sockets. My client will send the username and password to the server, and the server will reply with whether the account is valid. I want to secure these steps so no one can record the conversation and keep replaying it to my server to achieve illegal entry to the secret data.
[The Question {Simplified}] How do I securely authenticate the client to the server?
[NOTE] I know about SSL, but I can't afford to pay for a certificate, so I need a free alternative to provide secure communication between my client and server.
As always, the most secure password is the one, that the server doesn't know, and that is never transmitted. So what you could do is:
On the server, store the username, a random salt ("account salt") and a secure hash of the salted password ("server shared secret").
On login, in a first step let the client transmit only the username (not secret)
The server should reply with the account salt (not secret) and a randomly generated session salt (not secret). It is important, that the server generates the session salt.
On the client, salt the password with the account salt and hash it (keep this as the "client shared secret"), then salt the result with the session salt and hash it again. Transmit this as an authentication token (not secret).
On the server, take the salted hash from your DB, salt it with the session salt and hash it - if this matches the authentication token, the connection is authenticated. (Client is authenticated to server)
If you want to additionally authenticate the server to the client, you repeat the procedure in reverse: the client generates a salt, and the server creates a token from it by salting/hashing the stored secret.
If you want to authenticate the single requests (not only the connection), salt them with the shared secret and hash them, and send this as a per-request authentication field. Since in a valid login the server shared secret and client shared secret are identical, both sides should come to the same result, thus verifying the authentication field.
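A minimal sketch of the token computation above, with SHA-256 standing in for "a secure hash" (all names are illustrative):

    using System;
    using System.Linq;
    using System.Security.Cryptography;

    static byte[] Hash(byte[] data, byte[] salt)
    {
        using (var sha = SHA256.Create())
            return sha.ComputeHash(data.Concat(salt).ToArray());
    }

    // Enrollment (server DB):  serverSharedSecret = Hash(passwordBytes, accountSalt)
    // Login, client side:      clientSharedSecret = Hash(passwordBytes, accountSalt)
    //                          authToken          = Hash(clientSharedSecret, sessionSalt)
    // Login, server side:      the same computation, from the stored secret.
    static bool Authenticate(byte[] serverSharedSecret, byte[] sessionSalt, byte[] authToken) =>
        Hash(serverSharedSecret, sessionSalt).SequenceEqual(authToken);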
I typically tell people that if they find themselves doing crypto themselves they are inventing security problems. :) The odds are good you're missing edge cases. I would suggest relying on something that exists already and has been heavily secured.
If you're using managed sockets, there is a version of the stream class that does crypto for you (NegotiateStream). I would suggest starting there and seeing if it can do what you need w/o you having to invent your own.
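A rough sketch of what that looks like over a plain TCP connection (the host, port, and names are placeholders):

    using System.Net.Security;
    using System.Net.Sockets;

    // Client side: wrap the socket's stream and authenticate with Windows credentials.
    var client = new TcpClient("server.example.com", 9000);
    var secure = new NegotiateStream(client.GetStream(), leaveInnerStreamOpen: false);
    secure.AuthenticateAsClient();   // Negotiate (Kerberos/NTLM) handshake

    // Server side (inside the accept loop):
    //   var auth = new NegotiateStream(tcpClient.GetStream());
    //   auth.AuthenticateAsServer();
    //   string who = auth.RemoteIdentity.Name;   // the authenticated Windows identity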
You could use a combination of public and symmetric keys in order to secure authentication.
First send a public key for the client to encrypt his authentication data with. If the data is valid, you could then have the client generate his own public key, and have both sides send symmetric keys to each other via each other's public keys.
Something like that should work.
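As noted in the answer above, rolling your own protocol is risky, but a minimal sketch of the hybrid idea (RSA to bootstrap, a random AES key for the session) looks like this; key sizes and padding are illustrative:

    using System;
    using System.Security.Cryptography;

    using (var rsa = new RSACryptoServiceProvider(2048))   // server's key pair
    {
        // Server -> client: the public half only (sent over the socket in practice).
        var clientSideRsa = RSA.Create();
        clientSideRsa.ImportParameters(rsa.ExportParameters(false));

        // Client: generate a random AES session key and send it RSA-encrypted.
        byte[] sessionKey;
        using (var aes = Aes.Create()) sessionKey = aes.Key;
        byte[] wrapped = clientSideRsa.Encrypt(sessionKey, RSAEncryptionPadding.OaepSHA1);

        // Server: unwrap with the private key; both sides now share an AES key
        // with which to encrypt the credentials and the rest of the session.
        byte[] unwrapped = rsa.Decrypt(wrapped, RSAEncryptionPadding.OaepSHA1);
    }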
I know that this was posted a few years ago, but I thought that I would add my own two cents here now. Things have changed in the last couple of years. This might help someone else.
I do not want to take anything away from Eugen, excellent work.
The best way to encrypt the traffic between your client and your server is still using SSL/TLS. You can now get free licenses from https://letsencrypt.org/.
It sounds like you already had SSL figured out, so I would plug in the free certs that you get from the link above.
Good luck,
- Andrew
I am using ASP.NET MVC 2.0 and I am wondering how secure it is to put information in a cookie?
Like, I put a forms authentication ticket in my cookie that is encrypted, so can I put information that could be sensitive in there?
    // Encrypt the forms-auth ticket, then wrap it in the standard auth cookie.
    string encryptedTicket = FormsAuthentication.Encrypt(authTicket);
    HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
I'm not storing the password or anything like that, but I want to store the UserId, because currently every time the user makes a request to my site I have to do a query to get that user's UserId, since every table in my db requires the userId to get the right row back.
These queries start to add up fast, so I'd rather have it that once a user is authenticated, that's it until they need to be re-authenticated again. If I stored this userId I could save many requests to the database.
Yet I don't want it floating around in clear text, since someone could potentially use it to get a row out of the database when they really should not be able to.
So how good is the encryption that forms authentication uses?
The encryption is good enough, that's not the weak link.
The weak link is that the cookie value could be intercepted, and someone else could impersonate the user.
So, the information in the cookie is safe enough, but you can't protect the cookie itself.
The title of your question doesn't really match what you are asking. There are two different things you are asking here.
1. Is there a secure way to store data in a cookie?
The answer is yes. To safely store data in a cookie, you have to encrypt the data you want to store and then sign the encrypted data. Encrypting it prevents attackers from being able to read the data; signing it prevents attackers from modifying the data. This ensures the integrity of the data. You could use this method to store data about the user you want to keep private (their email address, date of birth, etc.).
Note: Securely storing data in a cookie is not a safe way to authenticate a user! If you stored the user id of the user in the signed and encrypted cookie, nothing prevents an attacker from stealing the entire cookie and sending it back to the server. There's no way of knowing whether the authentication cookie came from the same browser where the user entered their user name and password. Which leads us to the second question (the one you were actually asking)...
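A minimal sketch of the encrypt-then-sign approach, assuming server-side AES and HMAC keys kept in configuration (names are illustrative):

    using System;
    using System.Linq;
    using System.Security.Cryptography;

    // Encrypt the payload, then sign IV+ciphertext; reject the cookie unless the MAC verifies.
    static string Protect(byte[] payload, byte[] aesKey, byte[] macKey)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = aesKey;
            aes.GenerateIV();
            byte[] cipher;
            using (var enc = aes.CreateEncryptor())
                cipher = enc.TransformFinalBlock(payload, 0, payload.Length);
            byte[] body = aes.IV.Concat(cipher).ToArray();
            using (var hmac = new HMACSHA256(macKey))
                return Convert.ToBase64String(hmac.ComputeHash(body).Concat(body).ToArray());
        }
    }

    static byte[] Unprotect(string cookieValue, byte[] aesKey, byte[] macKey)
    {
        byte[] blob = Convert.FromBase64String(cookieValue);
        byte[] mac = blob.Take(32).ToArray(), body = blob.Skip(32).ToArray();
        using (var hmac = new HMACSHA256(macKey))
            if (!hmac.ComputeHash(body).SequenceEqual(mac))
                return null;   // tampered or forged: treat as unauthenticated
        using (var aes = Aes.Create())
        {
            aes.Key = aesKey;
            aes.IV = body.Take(16).ToArray();   // AES block size: 16-byte IV
            using (var dec = aes.CreateDecryptor())
                return dec.TransformFinalBlock(body, 16, body.Length - 16);
        }
    }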
2. Is there a secure way to authenticate a user with a cookie?
Yes. To securely authenticate a user you must combine the techniques from question 1 with SSL (HTTPS). SSL ensures that only the browser will be able to access the authentication cookie (the signed, encrypted cookie with the user id in it). This means that your login process (accepting the user's name and password, as well as setting the authentication cookie) must happen over SSL. You must also set the HttpCookie.Secure property to true when you set the authentication cookie on the server. This tells the browser to only include this cookie when making requests to your website over SSL. You should also include an expiration time in the encrypted auth cookie to protect against someone forgetting to log out of your site while they are at the library. A side effect of this approach is that only pages on your site that are served over SSL will be able to authenticate the user. Which brings up a third question...
3. How do you securely authenticate a user without using SSL?
You don't. But you do have options. One strategy is to create two auth cookies at login, one regular cookie and one that is ssl-only (both encrypted and signed though). When performing sensitive operations on the users behalf, require the page be in SSL and use the SSL-only cookie. When doing non-sensitive operations (like browsing a store that is customized based on the country their account is in) you can use the regular auth cookie. Another option is to split the page so that information that requires knowing who the user is is retrieved async via AJAX or json. For example: You return the entire page of the blog over http and then you make an SSL AJAX request to get the current users name, email, profile pic, etc. We use both of these techniques on the website I work on.
I know this question was asked almost a year ago. I'm writing this for posterity's sake. :-)
Along with cookie encryption, you should also implement a rotating token to prevent replay attacks.
The idea being that the encrypted cookie contains some value which can be compared to a known value on the server. If the data matches, then the request succeeds. If the data doesn't match then you are experiencing a replay attack and need to kill the session.
UPDATE
One of the comments asked if I meant to store the value in the cookie. The answer is yes. The ENTIRE cookie should be encrypted, which can be automatically done through the use of an HttpModule. Inside the encrypted cookie is any of your normal information + the changing token.
On each post back, check the token. If it's valid, allow the transaction, create a new random token, store in the cookie, and send that back to the browser. Again, in an encrypted form.
The result is that your cookie is secure (you are using 3DES?) and any attacker would have an extremely limited window of opportunity to even attempt a replay attack. If a token didn't pass muster, you could simply sound the alarm and take appropriate measures.
All that's needed server side is to keep track of the user and their current token. Which is usually a much smaller db hit than having to look up little things like the users name on each page load.
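A minimal sketch of that rotation, with an in-memory map standing in for whatever per-user store you keep server-side (all names are illustrative):

    using System;
    using System.Collections.Concurrent;
    using System.Security.Cryptography;

    static readonly ConcurrentDictionary<string, string> CurrentTokens =
        new ConcurrentDictionary<string, string>();

    static string NewToken()
    {
        byte[] bytes = new byte[32];
        using (var rng = new RNGCryptoServiceProvider()) rng.GetBytes(bytes);
        return Convert.ToBase64String(bytes);
    }

    // On each postback: the token from the decrypted cookie must match the stored one.
    static string ValidateAndRotate(string userId, string tokenFromCookie)
    {
        string expected;
        if (!CurrentTokens.TryGetValue(userId, out expected) || expected != tokenFromCookie)
            return null;                  // replay detected: kill the session, sound the alarm
        string next = NewToken();         // rotate: the old token is now useless to an attacker
        CurrentTokens[userId] = next;
        return next;                      // re-encrypt into the cookie sent back to the browser
    }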
UPDATE 2
I've been trying to figure out whether this is better or worse than keeping the changing value stored in session. The conclusion I've come to is that storing a rotating value in session on the web server does absolutely nothing to prevent replay attacks and is therefore less secure than putting that value in a cookie.
Consider this scenario. Browser makes request. Server looks at the session id and pulls up the session objects, work is then performed, and the response is sent back to the browser. In the meantime, BlackHat Bob recorded the transaction.
Bob then sends the exact same request (including session id) to the server. At this point there is absolutely no way for the server to know that this is a request from an attacker. You can't use IP as those might change due to proxy use, you can't use browser fingerprinting as all of that information would have been recorded in the initial exchange. Also, given that sessions are usually good for at least 30 minutes and sometimes much longer, the attacker has a pretty good sized window to work in.
So, no matter what, to prevent replay you have to send a changing token to the browser after each request.
Now this leaves us with the question about whether to also store values such as the user id in an encrypted cookie or store it server side in a session variable. With session you have concerns such as higher memory and cpu utilization as well as potential issues with load balancing etc. With cookies you have some amount of data that is less than 4kb, and, properly done, in the 1kb or less range that gets added to each request. I guess it will boil down to whether you would rather add more / larger servers and internal networking equipment to handle the requests (session) or pay for a slightly larger internet pipe (cookie).
As you've stated, a good practice for storing any data in cookies is to encrypt the data. Encrypt before putting into the cookie, and decrypt after reading it.
In the example of storing a user identifier, choose something that's not likely to be used against your system. For the user id, use a guid rather than the likely incrementing integer that's the PK on the database table. The guid won't be easily changed to successfully guess another user during an attack on your system.
Once the user has been identified or authenticated, go ahead and store the user object, or key properties in Session.
In an ideal world with an ideal cipher this wouldn't be a problem. Unfortunately, in the real world nothing is ideal, and there never will be an ideal cipher. Security is about solving these real-world threats. Cryptographic systems are always vulnerable to attack, whether it be a trivial (brute force) attack or a flaw in the primitive itself. Furthermore, it is most likely that you will botch the implementation of the primitive; common mistakes include a non-random or null IV, poor key management, and an incorrect block cipher mode.
In short, this is a gross misuse of cryptography. This problem is best solved by avoiding it altogether and using a session variable. This is why sessions exist; the whole point is to link a browser to state data stored on the server.
edit: Encrypting cookies has led to the ASP.NET padding oracle attack. This should have been avoided altogether by using a cryptographic nonce. Like I said, this is a gross misuse of cryptography.
For your very specific scenario (user id), the short answer is NO!
For the long answer, imagine this hypothetical scenario:
You navigate to stackoverflow.com;
Fill your username/password and submit the form;
The server sends you a cookie containing your user ID, which is going to be used to identify you on the next requests;
Since your connection was NOT secure (HTTPS), a bad guy sniffed it, and captured the cookie.
The bad guy gains access to your account because the server didn't store, let's say, your IP address, and thus, can't distinguish between your machine and the bad guy's.
Still in this scenario, imagine the server stored your IP address, but you're on a corporate LAN, and your external IP is the same of another 100 machines. Imagine that someone that has access to one of these machines copied your cookie. You already got it, right? :-)
My advice is: put sensitive information on a HTTP session.
UPDATE: having a session implies a cookie (or at least an ugly URL), thus leading us back to the very same problem: it can be captured. The only way to avoid that is adding end-to-end encryption: HTTP+SSL = HTTPS.
And if someone says "oh, but cookies/sessions should be enough to discourage most people", check out Firesheep.
It's okay (not great, but not wrong) from a security standpoint. From a performance standpoint, however, it's something you want to avoid.
All cookies are transmitted from client to server on every request. Most users may have fast broadband connections these days, but those connections are asymmetric: the upstream bandwidth used for transmitting cookie data is often still very limited. If you put too much information in your cookies, it can make your site appear sluggish, even if your web server is performing with capacity to spare. It can also push your bandwidth bill up. These are points that won't show up in your testing, which most likely happens all on your corporate network where upstream bandwidth from client to server is plentiful.
A better (general) approach is to keep just a separate token in the cookie that you use as a key for a database lookup of the information. Database calls are also relatively slow (compared to having the information already in memory or in the request), but primary key lookups like this aren't bad, and it's still better than sending the data potentially a quarter of the way around the world on every request. This is better for security as well, because it keeps the data off the user's machine and off the wire as much as possible. This token should not be something like the userid from your question, but rather something more short-lived: a key used to index and hide away larger blocks of data, of which your userid is perhaps one part.
For your userID, which is likely only a single integer, as well as other small and important data, keep it in memory on the web server. Put it in the session.
The use you are looking at is the exact intended purpose of being able to store information in the Forms Auth Ticket.
No. It has been shown with the padding oracle attack that receiving encrypted data (CBC mode) can be dangerous because of error leakage.
I'm definitely not a crypto expert, but I recently saw a demo where encrypted ViewState was decrypted using this attack.
Encrypting the userid value in the cookie only prevents the user from knowing what the value is. It does not:
prevent cookie replay (use SSL to prevent an attacker from intercepting a victim's cookie)
prevent tampering (an attacker can still blindly flip bits in the encoded cookie with a chance that it will decode to a valid userid; use an HMAC to prevent this)
completely prevent a user from getting the decrypted value (the user can brute force the value offline; use a strong encryption key to make success less probable)
Encrypting the cookie also introduces a key management problem. For example, when you rotate the encryption key, you have to make sure "live" sessions with the old key won't immediately fail. You thought about managing the encryption key, right? What happens when admins leave? When the key is compromised? Etc.
Does your architecture (load balancers, server distribution, ...) preclude using server-side session objects to track this information? If not, tie the userid to the session object and leave the "sensitive" data on the server -- the user only needs to have a session cookie.
A session object would probably be a lot easier and more secure for your stated concern.
To ensure proper auth cookie protection, you should make sure that you specify a secure encryption/hashing scheme (usually in the web.config) by setting the machineKey validation="SHA1" property (I use SHA1 but you can replace that with a different provider if desired). Then, make sure that your forms auth has the protection="All" attribute set so that the cookie is both hashed AND encrypted. Non-repudiation and data security, all in one :)
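A sketch of what that looks like in web.config (the key values here are illustrative; generate your own rather than copying these):

    <system.web>
      <machineKey validation="SHA1" decryption="AES"
                  validationKey="AutoGenerate,IsolateApps"
                  decryptionKey="AutoGenerate,IsolateApps" />
      <authentication mode="Forms">
        <forms loginUrl="~/Account/Login" protection="All" requireSSL="true" />
      </authentication>
    </system.web>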
ASP.NET will handle encrypting/decrypting [EDIT: only the auth cookie!] the cookie for you, although for custom cookies you can use the FormsAuthentication.Encrypt(...) method, so just make sure that you're not sending the UserId via other cleartext means like the querystring.
    // Mark the cookie so the browser sends it only over SSL connections.
    HttpCookie c = new HttpCookie("name");   // "name" is a placeholder
    c.Secure = true;
Obviously this only works when transmitting via SSL, but this setting tells the browser not to send the cookie unless the connection is SSL, thus preventing interception via man-in-the-middle attacks. If you are not using a secure connection, the data in the cookie is totally visible to anyone passively sniffing the connection. This is, incidentally, not as unlikely as you'd think, considering the popularity of public wifi.
The first thing to address is whether the connections involved are secure. If they are not, assume the cookie or anything else between you and the client will be intercepted.
A cookie can be a liability once it is on the client machine. Review cookie theft, cross-site request forgery, confused deputy problem.
Cookies have limitations: size, they may be disabled, and they are a security risk (tampering). And of course, if you encrypt the cookie, there could be a performance hit. Session and ViewState would be good alternatives.
If you want it stored client-side, ViewState would be better: you can encrypt the userid string and store it in ViewState. Session would be the best option.
If your database calls are slow, consider caching.