Offline security in a .NET application: storing an obfuscated key inside the code - C#

I am developing a C# app that can NOT connect to the internet.
This app will produce a configuration file for my hardware.
What I want to be sure of is that when I give the configuration file to my assistant, he cannot change it and put a file with a different configuration into the hardware.
To avoid this I am currently using a simple method.
In the code I have a key: abcd123
My application produces a configuration file and then:
hashes the configuration file and encrypts the HASH with the KEY abcd123, which is stored in the string variable KEY.
I give the configuration file (plus the encrypted hash) to my assistant, who loads it into the hardware.
Now, the hardware also has the KEY abcd123: it decrypts the encrypted hash and hashes the payload itself. IF the two hashes are THE SAME, I assume that my assistant did not change the configuration file.
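In code, the idea looks roughly like this (a simplified sketch - here HMACSHA256 plays the role of my separate hash-plus-encrypt steps, and the key and file names are placeholders, not my real values):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

class ConfigMac
{
    // Placeholder key: in the real application this is the value I am trying to hide.
    private const string Key = "abcd123";

    // Compute a keyed hash (MAC) over the configuration file.
    static byte[] ComputeMac(string configPath)
    {
        byte[] keyBytes = Encoding.UTF8.GetBytes(Key);
        using (var hmac = new HMACSHA256(keyBytes))
        {
            return hmac.ComputeHash(File.ReadAllBytes(configPath));
        }
    }

    static void Main()
    {
        byte[] mac = ComputeMac("config.xml");              // placeholder file name
        File.WriteAllBytes("config.xml.mac", mac);          // shipped alongside the config
        Console.WriteLine(Convert.ToBase64String(mac));
    }
}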
What I am concerned about is that a KEY stored in .NET code is very easy to recover without obfuscation.
I've therefore bought Crypto Obfuscator, but I don't know how secure my key really is.
I am not skilled enough to decompile my program and check whether the key is still in the clear or not.
What methods can you suggest to make me REASONABLY confident that my key is "safe"?
I understand that there is no way to secure it completely, but I just want some reasonable additional security on top of the automated obfuscation, which I don't know much about.
I hope I've been clear; please ask if you need clarification.

What I want to be sure of is that when I give the configuration file to my assistant, he cannot change it and put a file with a different configuration into the hardware.
You are looking for a digital signature algorithm. Broadly, there are two ways to do this:
Use an algorithm that computes a cryptographically strong hash (SHA-2, for example). You publish the hash together with the corresponding file. The hardware gets the file, computes the hash with the same algorithm and checks that it is the same as the published hash. See how Apache log4net provides its library downloads together with the matching signatures.
Another approach is to use a public/private key signature, for example with RSA. A sample is available from MSDN showing how to sign an XML document using RSACryptoServiceProvider. You create a private/public key pair. The private key is used to create the digital signature and you keep it to yourself. With the public key you can only verify the signature, and that is what you deploy to the device.
Another safety tip from the link above:
Never store or transfer the private key of an asymmetric key pair in plaintext. Never embed a private key directly into your source code. Embedded keys can be easily read from an assembly using the Ildasm.exe (MSIL Disassembler) or by opening the assembly in a text editor such as Notepad.
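To make the RSA option concrete, here is a minimal sketch of the signing side, along the lines of the MSDN sample mentioned above (file names are placeholders; as the quote stresses, the private key must never ship with the application):

using System.IO;
using System.Security.Cryptography;

class SignConfig
{
    static void Main()
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            // One-time setup: export both halves of the key pair.
            File.WriteAllText("private-key.xml", rsa.ToXmlString(true));    // keep this to yourself
            File.WriteAllText("public-key.xml", rsa.ToXmlString(false));    // deploy this to the device

            // Sign the configuration file with the private key.
            byte[] config = File.ReadAllBytes("config.xml");                // placeholder name
            byte[] signature = rsa.SignData(config, "SHA256");              // very old CSPs may only accept "SHA1"
            File.WriteAllBytes("config.xml.sig", signature);                // ship next to the config file
        }
    }
}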

You can only get real security if you don't put the key in your code.
One way to do this is to use public key cryptography (for example RSA).
You create a public and a private key. The private key never leaves your system.
You use the private key to sign the config file.
The software running on the hardware contains the public key and uses it to verify the signature of the config file without any need for a network connection.
Even if the whole world knows your public key nobody can create a valid signature without the private key (which is only available to you).
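A matching sketch of the device side, using the same RSACryptoServiceProvider API as in the sketch above; the device carries only the public key (exported earlier with rsa.ToXmlString(false)), and the paths are placeholders:

using System;
using System.IO;
using System.Security.Cryptography;

class VerifyConfig
{
    // The device only carries the public key (exported with rsa.ToXmlString(false)).
    static bool IsConfigValid(string configPath, string signaturePath, string publicKeyXml)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(publicKeyXml);
            byte[] config = File.ReadAllBytes(configPath);
            byte[] signature = File.ReadAllBytes(signaturePath);
            return rsa.VerifyData(config, "SHA256", signature);
        }
    }

    static void Main()
    {
        string publicKeyXml = File.ReadAllText("public-key.xml");   // placeholder path
        Console.WriteLine(IsConfigValid("config.xml", "config.xml.sig", publicKeyXml)
            ? "Configuration accepted"
            : "Configuration was modified or the signature is invalid");
    }
}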

Related

External authenticate with tachograph smartcard

I'm using C# with a C# library to read out the identification data from tachograph cards.
I only need to read out the unique card ID, and I can do this with a driver card by sending the correct APDUs. I'm now trying to read out the ID of a company card by doing the same things, as described in the documentation I could find (ECE/TRANS/SC.1/2006/2 and its sub-appendices). This doesn't work.
If I understand the documentation correctly, the problem is that after selecting a DF and then an EF, the authentication has to be redone (on a company card only) before reading out the unique identification data from the EF. Reading the documentation, I understand that I have to use a "manage security environment" to set/request a public key, then use an "internal authenticate", "get challenge", run an "external authenticate" and finally use a "read binary" to read out the data, but only after selecting the correct EF. Am I correct in this?
If I'm correct, does anyone know where/how I can request the public key from the card, what algorithm is used to decrypt the challenge using the public key, and eventually what to send back to the card?
If I did not understand it correctly, can anyone explain the steps in authenticating with a tachograph smartcard? Explaining it in simple terms would be appreciated, as I'm completely new to this line of work and still trying to learn.
I haven't had a detailed look at the tachograph specification for years, but this might help you get started:
A challenge is just a random number; you have to encrypt it (symmetric algorithm) or sign it (asymmetric algorithm) yourself. The appropriate algorithm has to be agreed beforehand, since the card has to follow the same rules when checking.
External authenticate far more likely uses symmetric cryptography (nobody likes to sign something unknown, which could also be the hash of a message).
There are two standardized modes for retrieving the public key:
either as the response to a special Generate Asymmetric Key Pair command (there is a variant which just reads an existing key and does not generate a new one),
or from a file to be read using standard READ commands (e.g. a certificate, which has the advantage that you can check its signature). Which case applies has to be stated in the specification.
Manage Security Environment just informs the card which key to use for a subsequent operation like PSO, giving its Control Reference Template, ID and usage qualifier.
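Not an authoritative answer on the tachograph specifics, but for orientation the ISO 7816-4 command skeletons involved look roughly like the sketch below. The P1/P2 values, key references and data fields are placeholders; the exact coding has to come from the tachograph card specification:

// Rough ISO 7816-4 command skeletons; the P1/P2 values, key references and data
// fields are placeholders and must be taken from the tachograph card specification.
class TachographApdus
{
    // MANAGE SECURITY ENVIRONMENT: tell the card which key to use (CRT data omitted).
    public static readonly byte[] ManageSecurityEnvironment = { 0x00, 0x22, 0xC1, 0xA4 /* , Lc, CRT bytes... */ };

    // GET CHALLENGE: ask the card for an 8-byte random number.
    public static readonly byte[] GetChallenge = { 0x00, 0x84, 0x00, 0x00, 0x08 };

    // EXTERNAL AUTHENTICATE: send back the cryptogram computed over that challenge.
    public static readonly byte[] ExternalAuthenticateHeader = { 0x00, 0x82, 0x00, 0x00 /* , Lc, cryptogram... */ };

    // READ BINARY: read from the currently selected EF (P1/P2 = offset, trailing byte = Le).
    public static readonly byte[] ReadBinary = { 0x00, 0xB0, 0x00, 0x00, 0x00 };
}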

Explicit or implicit key of encryption in C#

I am planning to write a C# class library which will include my encryption algorithms, like RC4 and DES. These are single-key encryption algorithms.
Now I want to make the most secure decision to protect my key. Should I hardcode the key inside the DLL, or should I set the key from my external application which uses the DLL? Which one do you think is more secure when you consider decompiling tools?
Thinking out loud:
If the key is hardcoded in my security library and someone finds the DLL and imports it into his C# application, can he easily decode my cipher data?
If the key is not hardcoded in my security library but is set from my external application, would someone also need to decompile my external application to find my key?
Setting the key values from my external application which will use the security DLL seems more secure to me. What do you think?
Thanks.
Hard Coded Key
If you include the shared/private key in the DLL, then anyone who has a copy of the DLL will have a copy of the key. If your model is to share your application with multiple users, all users will have the same encryption key, and can decrypt anything encrypted by another user. If your application is easily available, then you have to assume the attacker has the application (and therefore the key) as well.
It also means that all developers will have access to the production encryption key, since they have the source code. QE will also have access, as they probably have access to the binary. Either of these two insider groups will be able to decrypt anything that your application protects for your customers.
Is this what you want? It's generally a bad practice, but it's worse in some environments than others. For example, if you're writing code to learn how to write crypto and nothing more, it probably doesn't matter - just make sure nobody else can use it :) If you're writing a service, it's a bad practice and introduces risk, but it's not the worst thing you could do. If you're writing something that will be shared to multiple customers, then you defeat the purpose of encrypting by including the key in the binary.
And it's not really that hard to generate random data (using a cryptographically strong random number generator), store it in a file, and use that file as your encryption key. My recommendation is to go with the separate key.
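For example (a minimal sketch; the key size and file name are arbitrary):

using System.IO;
using System.Security.Cryptography;

class KeyFileGenerator
{
    static void Main()
    {
        byte[] key = new byte[32];                      // 256-bit key, e.g. for AES
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(key);                          // cryptographically strong random bytes
        }
        File.WriteAllBytes("encryption.key", key);      // protect this file with ACLs
    }
}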
Separate Key
If you ship the key in a separate file, you eliminate all the risks introduced by shipping the key in the binary but you introduce others. Or, stated differently, now that your crypto can do some good, you need to make sure you do it right or it'll still be useless.
The key needs to be generated using a cryptographically strong random number generator, so that it's not predictable. The key needs to be stored securely - the whole path to the key file needs to be protected, and you should consider using a password-protected store (like a keystore) to ensure that only users with the right password can access the key. Of course, that last one depends upon your deployment model and whether you need unattended restart. And the key needs to be used securely - constant-time operations, don't act as an encryption or decryption oracle, verify the integrity of data before semantically parsing it, etc.

How can I securely embed a static string (key) in C#?

I'm looking for a way to securely store an API key in a WP7 application. The key is a string and is currently hard coded into the code (see below). I know that someone with a reflector program could easily view this. Is there a better way to package this key as part of my app? Would a resource be more secure?
string key = "DSVvjankjnersnkaecjnDFSD44VDS23423423rcsedzcadERVSDRFWESDVTsdt";
Thank you in advance.
Have a look at Safeguard Database Connection Strings and Other Sensitive Settings in Your Code; it is a good read. Your question is covered in the "Hiding Keys in the Application Source Code" section.
Excerpt:
If you define the key in the application, in addition to obfuscating the assembly, try not to store the actual key bytes in the source code. Instead, implement key-generation logic using persistent characteristics, such as the encryption algorithm, key size, pass phrase, initialization vector, and salt (see an example at Encrypt and Decrypt Data Using a Symmetric (Rijndael) Key). This will introduce an extra layer of indirection, so the key will not be accessible by simply dumping the symbols from the application binary. As long as you do not change key-generation logic and key characteristics, the resulting key is guaranteed to be the same. It may also be a good idea not to use static strings as key-generation characteristics, but rather build them on the fly. Another suggestion would be to treat the assembly the same way as the data store should be treated, that is, by applying the appropriate ACLs. And only use this option as a last resort, when none of the other data protection techniques work and your only alternative is leaving sensitive data unencrypted.
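To illustrate the key-generation-logic idea from the excerpt, the key can be derived at runtime from several characteristics instead of being stored as a literal. The pass phrase, salt and iteration count below are placeholders; note this only adds a layer of indirection, it does not make the key unrecoverable:

using System.Security.Cryptography;
using System.Text;

static class KeyFactory
{
    // Derive the key on the fly instead of embedding the key bytes themselves.
    // All inputs below are example values, not recommendations.
    public static byte[] DeriveKey()
    {
        string passPhrase = "built-" + "up-" + "at-" + "runtime";            // assembled at runtime
        byte[] salt = Encoding.UTF8.GetBytes("application-specific-salt");   // placeholder salt
        using (var pbkdf2 = new Rfc2898DeriveBytes(passPhrase, salt, 10000))
        {
            return pbkdf2.GetBytes(32);                                      // 256-bit key
        }
    }
}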
I've read through all these answers, and I don't think there is any way you can securely embed this - regardless of where you put it, or how you obfuscate it. As long as it's in your XAP and decoded within the application, it will always be open to hacking.
If you need to ship the key inside the xap with a reasonable degree of protection, then I think #maka's answer yields your best bet - obfuscate it as best you can - but don't think this will make you secure - i.e. don't do this for your mobile banking apps!
Alternatively, if you really need security then don't operate solely within the app - use a web server as well. For example, if you were doing a Facebook app and needed to somehow protect your facebook secret key, then you would need to redirect the user from your app to a web page on your server for authentication. That web page would then need to guide the user through the process of getting an access token - and then just that access token (along with the public appid) would need to go back to your app. And for those webservices which require knowledge of the secret key to accompany every call, then I'm afraid every single call will probably need to go via your server.
You can encrypt the API key with ProtectedData and then decrypt it at runtime. This is a good tutorial on how to encrypt data on Windows Phone: Encryption in Mango
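On the desktop framework the same idea looks roughly like the sketch below (DPAPI via ProtectedData in System.Security.dll; on Windows Phone 7.1 the Protect/Unprotect overloads take only the data and optional entropy, without the scope parameter):

using System.IO;
using System.Security.Cryptography;
using System.Text;

class ApiKeyStore
{
    // Encrypt the API key with DPAPI so it is never stored in plain text.
    public static void Save(string apiKey, string path)
    {
        byte[] plain = Encoding.UTF8.GetBytes(apiKey);
        byte[] cipher = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);
        File.WriteAllBytes(path, cipher);
    }

    public static string Load(string path)
    {
        byte[] cipher = File.ReadAllBytes(path);
        byte[] plain = ProtectedData.Unprotect(cipher, null, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(plain);
    }
}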
Maybe you can encrypt it beforehand and save it in app.config, and when reading it back, decrypt it using the same algorithm.
You could use Dotfuscator to make decompiling with Reflector much harder.
But this will not allow you to change the key without recompiling.
In the past I've used the following method in other (web/winform-based) software:
http://weblogs.asp.net/jgalloway/archive/2008/04/13/encrypting-passwords-in-a-net-app-config-file.aspx
It's maybe not an answer, but it is a suggestion:
Store the encrypted key in a DB, and store the encrypted "DB password" in app.config.
Use two proper string encryption/decryption algorithms, let's say algorithms x and y.
Put the encrypted DB password in app.config before publishing.
Decrypt the app.config password (algorithm y) to connect to the DB and fetch the new encrypted string (the real one).
Close the connection and decrypt the new string with algorithm x if Reflector etc. is not running.
Use it.
Dispose the object that holds the string.

PRIVATE key encryption in .NET

I am looking for a way to do private key encryption in C#.
I thought I could use the RSACryptoServiceProvider, but it only supports public key encryption.
The only thing I found on the subject was this project, but I would rather use something I can find in .net: http://www.codeproject.com/KB/security/PrivateEncryption.aspx
Please note I am not looking for signing.
Please note I require asymmetric encryption.
Any ideas?
Background story:
I am sending an encrypted file to another system which is running an application. The encryption is making sure the file cannot be altered (more or less) or viewed by anyone. The application is able to decrypt the file using the public key and do something with it.
I know pretty much anyone is able to get the public key from the application, this is not a problem in this case.
The encryption is making sure the file cannot be altered or viewed by anyone
Public key encryption - when done "by the book" - separates encryption and signing ("cannot be altered"). In your case, use two key pairs, one within the application and one at your site. Encrypt the file using the public application key and sign the file using your private one.
This is really widespread usage; I even like to call it "best practice". As for the downvote, I can only guess that ruling out signing in your question triggered it.
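A minimal sketch of that two-key-pair idea for a small payload. RSA can only encrypt a limited number of bytes directly, so a real file would normally be encrypted with a symmetric key that is itself RSA-encrypted; key handling is also collapsed into one process here purely for illustration:

using System;
using System.Security.Cryptography;
using System.Text;

class EncryptAndSign
{
    static void Main()
    {
        // Two key pairs: the application's (for encryption) and yours (for signing).
        using (var appKey = new RSACryptoServiceProvider(2048))     // public half ships with the app
        using (var myKey = new RSACryptoServiceProvider(2048))      // private half stays with you
        {
            byte[] payload = Encoding.UTF8.GetBytes("small configuration payload");

            // Encrypt with the application's PUBLIC key, sign with your PRIVATE key.
            byte[] cipher = appKey.Encrypt(payload, true);          // OAEP padding
            byte[] signature = myKey.SignData(cipher, "SHA256");

            // Application side: verify with your public key, decrypt with its own private key.
            bool authentic = myKey.VerifyData(cipher, "SHA256", signature);
            byte[] roundTrip = appKey.Decrypt(cipher, true);

            Console.WriteLine(authentic && Encoding.UTF8.GetString(roundTrip).Length > 0);
        }
    }
}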
It should not be done. Sign instead and use symmetric encryption.

How can I secure an "enabled functions" license file for my program?

My Application can perform 5 business functions. I now have a requirement to build this into the licensing model for the application.
My idea is to ship a "keyfile" with the application. The file should contain some encrypted data about which functions are enabled in the application and which are not. I want it semi hack-proof too, so that not just any idiot can figure out the logic and "crack" it.
The decrypted version of this file should contain for example:
BUSINESS FUNCTION 1 = ENABLED
BUSINESS FUNCTION 2 = DISABLED.... etc
Please can you give me some ideas on how to do this?
While it could definitely be done using Rijndael, you could also try an asymmetric approach to the problem. Require the application to decrypt the small settings file on startup using a public key, and only send out new configuration files encrypted using the private key.
Depending on the size of your configuration file, this will cause a performance hit on startup compared to the Rijndael algorithm, but even if the client decompiles the program and gets your public key, it's not going to matter with regard to the config file, since they won't have the private key to make a new one.
Of course none of this considers the especially rogue client who decompiles your program and removes all the checking whatsoever ... but chances are this client won't pay for your product no matter what you do thus putting you in a position of diminishing returns and a whole new question altogether.
Probably the easiest secure solution is to actually use online activation of the product. The client would install your product and enter his key (or other purchase identification -- if the purchase is online this could all be integrated; if you are selling a box, the key is more convenient).
You then use this identification to determine which features are available and send back an encrypted "keyfile" (as you term it), but also a custom key (it can be randomly generated; both the key and the key file would be stored on your server, associated with that identification).
You then need to make sure the key file doesn't work on other computers; you can do this by having the computer send back its machine ID and using that as added salt.
I've been pondering using custom built assemblies for the purpose of application licensing. The key file approach is inherently flawed. Effectively, it's a bunch of flags saying "Feature X is enabled, Feature Y is not". Even if we encrypt it, the application will have all the functionality built in - along with the method to decrypt the file. Any determined hacker is unlikely to find it terribly hard to break this protection (though it may be enough to keep the honest people honest, which is really what we want).
Let's assume this approach of encrypted "Yay/Nay" feature flags is not enough. What would be better is to actually not ship the restricted functionality at all. Using dynamic assembly loading, we can easily put just one or two core functions from each restricted feature into another assembly and pull them in when needed. These extra "enablement" assemblies become the keyfiles. For maximum security, you can sign them with your private key, and not load them unless they're well signed.
Moreover, for each customer, your build and licensing process could include some hard to find customer specific data, that effectively ties each enablement assembly to that customer. If they choose to distribute them, you can track them back easily.
The advantage of this approach over simple Yay/Nay key files is that the application itself does not include the functionality of the restricted modes. It cannot be hacked without at least a strong idea of what these extra assemblies do - if the hacker removes their loading (as they would remove the keyfile), the code just can't function.
Disadvantages of this approach include patch release, which is somewhat mitigated by having the code in the keyfile assemblies be simple and compact (yet critical). Custom construction of an assembly for each customer may be tricky, depending on your distribution scenario.
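A rough sketch of the loading side of this idea. The assembly path, type name and token bytes are hypothetical, and checking the public key token alone is only a sanity check - strong-name verification (or Authenticode) is what actually ties the assembly to your signing key:

using System;
using System.Linq;
using System.Reflection;

class FeatureLoader
{
    // Public key token of your signing key (placeholder bytes).
    static readonly byte[] ExpectedToken = { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF };

    public static object LoadRestrictedFeature(string path)
    {
        Assembly assembly = Assembly.LoadFrom(path);

        byte[] token = assembly.GetName().GetPublicKeyToken();
        if (token == null || !token.SequenceEqual(ExpectedToken))
            throw new InvalidOperationException("Enablement assembly is not signed with the expected key.");

        // Hypothetical entry point inside the enablement assembly.
        Type entry = assembly.GetType("Enablement.Feature1", throwOnError: true);
        return Activator.CreateInstance(entry);
    }
}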
You could achieve this fairly easily using Rijndael; however, the problem is that with your current design the code will contain your key. This basically means someone can disassemble your code to find the key and boom, goodbye protection. You could slow this process down by also obfuscating your code, but again, if they want to get it, they will get it.
However, this aside, to answer your question, this code should work for you:
http://www.dotnetspark.com/kb/810-encryptdecrypt-text-files-using-rijndael.aspx
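For reference, a minimal symmetric sketch along those lines using the framework's AES (Rijndael) implementation; key and IV handling are left to the caller, and, as noted above, anything shipped inside the binary can ultimately be recovered:

using System.IO;
using System.Security.Cryptography;

static class KeyFileCrypto
{
    // Encrypt the feature-flag data with AES (Rijndael with a 128-bit block).
    public static byte[] Encrypt(byte[] plain, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(key, iv))
        using (var ms = new MemoryStream())
        {
            using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
            {
                cs.Write(plain, 0, plain.Length);
            }
            return ms.ToArray();
        }
    }

    public static byte[] Decrypt(byte[] cipher, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var decryptor = aes.CreateDecryptor(key, iv))
        using (var ms = new MemoryStream())
        {
            using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Write))
            {
                cs.Write(cipher, 0, cipher.Length);
            }
            return ms.ToArray();
        }
    }
}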
I find a Perforce-style protection scheme the easiest to implement and use, while at the same time being quite hack-proof. The technique uses a plain-text file with a validation signature attached as the last line. For example:
----(file begin)
key1: value1
key2: value2
expires: 2010-09-25
...
keyN: valueN
checksum: (base64-encoded blob)
---- (file end)
You would choose an asymmetric (public/private key) encryption algorithm plus a hashing algorithm of your choice. Generate your reference public/private key pair and include the public key in your program. Then write a small utility program that takes an unsigned settings file and signs it - it computes the digital signature for the contents of the file (read the settings file, compute the hash, encrypt this hash using the private key) and attaches it (e.g. base64-encoded) as "checksum" in the last line.
Now, when your program loads the settings file, it reads the embedded public key and validates the digital signature (read the file contents, strip the last line, compute the hash; compare this value against the checksum from the last line, base64-decoded and run through the asymmetric decryption using the embedded public key). If the validation succeeds, you know the settings file has not been tampered with.
I find the advantages to be that the settings are in plain text (so, for example, the customer can see when the license expires or which features they paid for), yet changing even a single character in the file will result in the digital signature check failing. Also, keep in mind that you are now not shipping any private knowledge with your program. Yes, hackers can reverse-engineer your program, but they will only find the public key. To be able to sign an altered settings file, they would have to find the private key. Good luck doing that unless you're a three-letter agency... :-)
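A sketch of the signing utility and the in-program check described above, using RSACryptoServiceProvider for the asymmetric part, with the signature carried on a trailing "checksum:" line as in the example (the begin/end markers are omitted here):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class SignedSettings
{
    // Signing utility side: append "checksum: <base64 signature>" as the last line.
    public static void Sign(string path, string privateKeyXml)
    {
        string body = File.ReadAllText(path);
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(privateKeyXml);
            byte[] signature = rsa.SignData(Encoding.UTF8.GetBytes(body), "SHA256");
            File.WriteAllText(path, body + "checksum: " + Convert.ToBase64String(signature) + Environment.NewLine);
        }
    }

    // Program side: strip the last line and verify the signature with the embedded public key.
    public static bool Verify(string path, string publicKeyXml)
    {
        string text = File.ReadAllText(path);
        int marker = text.LastIndexOf("checksum: ", StringComparison.Ordinal);
        if (marker < 0) return false;

        string body = text.Substring(0, marker);
        string encoded = text.Substring(marker + "checksum: ".Length).Trim();

        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(publicKeyXml);
            return rsa.VerifyData(Encoding.UTF8.GetBytes(body), "SHA256", Convert.FromBase64String(encoded));
        }
    }
}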
You can use any cryptography method to implement this.
Just check out the namespace 'System.Security.Cryptography'.
This namespace provides many encryption and decryption functions to protect secret data.
Another way to implement this is to use the registry.
You can store the data in the Windows registry.
It is better to encrypt the data before storing it in the registry.
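A small sketch of the registry variant; the subkey and value names are placeholders, and the bytes stored should already be encrypted (for example with DPAPI, as shown earlier):

using Microsoft.Win32;

static class LicenseRegistryStore
{
    const string SubKey = @"Software\MyCompany\MyApp";   // placeholder path

    public static void Save(byte[] encryptedLicense)
    {
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(SubKey))
        {
            key.SetValue("License", encryptedLicense, RegistryValueKind.Binary);
        }
    }

    public static byte[] Load()
    {
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(SubKey))
        {
            return key == null ? null : (byte[])key.GetValue("License");
        }
    }
}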
ROT-13!
Edit:
ROT-13 is a simple substitution cipher in which each letter is replaced by the letter 13 places before it in the alphabet. (NOTE: alternatively, you can use the ASCII value 13 less than the given char to support more characters than [ A-Z0-9].)
For more info, see Wikipedia.
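For completeness, a standard ROT-13 implementation (letters only; everything else passes through unchanged):

using System.Text;

static class Rot13
{
    public static string Apply(string input)
    {
        var sb = new StringBuilder(input.Length);
        foreach (char c in input)
        {
            if (c >= 'a' && c <= 'z')
                sb.Append((char)('a' + (c - 'a' + 13) % 26));
            else if (c >= 'A' && c <= 'Z')
                sb.Append((char)('A' + (c - 'A' + 13) % 26));
            else
                sb.Append(c);   // digits and punctuation are left as-is
        }
        return sb.ToString();
    }
}

// Rot13.Apply("BUSINESS FUNCTION 1 = ENABLED") == "OHFVARFF SHAPGVBA 1 = RANOYRQ"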
