The metadata of a .NET EXE shows that SHA1 is used for some internal purpose.
The property navigation path is: Metadata->Headers->FileInfo->SHA1
Steps to reproduce:
1. Create any console app with .NET Framework or .NET Core.
2. Generate the EXE.
3. Use any .NET decompiler, e.g. dotPeek, to view the metadata.
4. Load the EXE and navigate to the above path: Metadata->Headers->FileInfo->SHA1.
It shows SHA1 as a key with some value associated with it.
Questions:
It is well known that SHA1 is not secure and that SHA256 should be used everywhere.
What is this property about, and where is it used internally?
Do we have the option to change it to SHA256 for security reasons?
Docs: PE Format#Certificate Data
A PE image hash (or file hash) is similar to a file checksum in that the hash algorithm produces a message digest that is related to the integrity of a file. However, a checksum is produced by a simple algorithm and is used primarily to detect whether a block of memory on disk has gone bad and the values stored there have become corrupted. A file hash is similar to a checksum in that it also detects file corruption. However, unlike most checksum algorithms, it is very difficult to modify a file without changing the file hash from its original unmodified value. A file hash can thus be used to detect intentional and even subtle modifications to a file, such as those introduced by viruses, hackers, or Trojan horse programs.
Emphasis mine.
Modifying (or recreating) an executable and making it have the same hash is still not trivial, not even for SHA-1. See also Cryptography.SE: How secure is SHA1? What are the chances of a real exploit?
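If, as the quoted docs suggest, the value dotPeek shows is simply the file hash, it can be reproduced with a few lines of C# (a sketch under that assumption; the path is a placeholder):

using System;
using System.IO;
using System.Security.Cryptography;

// Compute the SHA-1 file hash of an assembly on disk. If dotPeek's
// Metadata->Headers->FileInfo->SHA1 value is the plain file hash (an
// assumption here), this should print the same hex string.
using (var sha1 = SHA1.Create())
using (var stream = File.OpenRead(@"MyConsoleApp.exe")) // placeholder path
{
    byte[] digest = sha1.ComputeHash(stream);
    Console.WriteLine(BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant());
}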
Related
My situation is as follows:
I have to deploy a series of .NET desktop applications, each consisting of a file with encrypted data and an executable that will access that data and decrypt some parts of it at runtime.
What I need to achieve is that each data container should only be decryptable by that specific .exe it is provided with.
The first idea was to encrypt the data using, say, the hash value of the .exe file as a symmetric key, and during decryption to calculate the hash value of the .exe file at runtime and decrypt the parts of the data container with it.
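A sketch of that first idea (assuming AES as the symmetric cipher; DeriveKeyFromSelf and DecryptContainer are illustrative names, not an existing API):

using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;

// Derive a symmetric key from the hash of the running executable.
static byte[] DeriveKeyFromSelf()
{
    string exePath = Assembly.GetEntryAssembly().Location;
    using (var sha256 = SHA256.Create())
    using (var stream = File.OpenRead(exePath))
    {
        // The 32-byte digest doubles as a 256-bit AES key.
        return sha256.ComputeHash(stream);
    }
}

static byte[] DecryptContainer(byte[] cipherText, byte[] iv)
{
    using (var aes = Aes.Create())
    {
        aes.Key = DeriveKeyFromSelf();
        aes.IV = iv;
        using (var decryptor = aes.CreateDecryptor())
            return decryptor.TransformFinalBlock(cipherText, 0, cipherText.Length);
    }
}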
However, the problem with that approach is that the user can easily look into the .NET assembly with ILSpy or any other decompiler and discover the whole encryption algorithm which will enable the user to decrypt all the data containers in my series of applications.
Another solution that comes to my mind is to make a small native C library (which is less easy to decompile) that will perform some manipulations with the .exe assembly information and generate a key for decryption based on it (let's consider the user lazy enough that he will not try to intercept the key in memory).
But ideally I wouldn't like to resort to any languages other than C#, because porting the application to other platforms with Mono would require additional effort (P/Invoke and so on).
So my question is: is there a way I can encrypt the data so that only a certain application would be able to decrypt it?
Of course I understand that in the case of a local application it is impossible to keep the data absolutely secure, but I need to make the 'hacking' at least not worth the effort. Are there any reasonable solutions, or will I have to stick to one of the ideas I described above?
Thank you in advance!
The simple answer is no.
To encrypt and decrypt data, you need an algorithm and, optionally, a secret or key. If a computer can execute the algorithm, someone else can learn what it is. Ignoring decompilation and disassembly, a user could just look at the instructions executed by the CPU and piece together the algorithm.
This leaves the secret. Unfortunately, if the computer or program can access or derive a secret, so can someone with root or administrator rights on that computer for the same reasons above.
However, maybe you are just thinking about the problem the wrong way. If you want the program to access data that no one else can, consider making that data available from a server that users must authenticate to access. Use SSL so data is protected in transit and encrypt the data locally using a key that only the local user and local administrators can access. It is not perfect but it is about the best you are going to get in the general case.
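On Windows, one concrete way to get that last part (a sketch assuming DPAPI via ProtectedData fits your case; on .NET Core this needs the System.Security.Cryptography.ProtectedData package):

using System.Security.Cryptography;
using System.Text;

byte[] plaintext = Encoding.UTF8.GetBytes("sensitive data fetched from the server");

// Encrypt under the current user's profile; only this user (or an
// administrator acting as them) can decrypt it.
byte[] protectedBytes = ProtectedData.Protect(
    plaintext,
    optionalEntropy: null,
    scope: DataProtectionScope.CurrentUser);

byte[] roundTripped = ProtectedData.Unprotect(
    protectedBytes,
    optionalEntropy: null,
    scope: DataProtectionScope.CurrentUser);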
If you need more protection than that, you may want to consider hardware dongles but this gets expensive and complex quite quickly.
I am wondering if I can make the MD5 of a DLL/EXE consistent after a new build.
Every time I rebuild my project, I get a different MD5 from the tool "Microsoft File Checksum Integrity Verifier".
I found some articles about the issue; someone said it was due to the timestamp in the header of the PE32 file. I have no knowledge about it. Could anyone help? Thank you in advance!
Below is how I get the MD5 sum. The two MD5Compare.exe files are exactly the same except that they were not created by the same build.
C:\Users\Administrator>fciv.exe D:\Lab\MD5Compare\MD5Compare\bin\Debug\2 -wp MD5Compare.exe
//
// File Checksum Integrity Verifier version 2.05.
//
5cdca6373aca0e588e1e3df92a1d5d0a MD5Compare.exe
C:\Users\Administrator>fciv.exe D:\Lab\MD5Compare\MD5Compare\bin\Debug\2 -wp MD5Compare.exe
//
// File Checksum Integrity Verifier version 2.05.
//
cf5caace5481edc79fd7bf3e99b48a5b MD5Compare.exe
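For reference, here is a minimal C# equivalent of this check (a sketch; the path is the one from above):

using System;
using System.IO;
using System.Security.Cryptography;

// Compute the MD5 digest of a file, printed in the same lowercase hex
// format fciv uses.
using (var md5 = MD5.Create())
using (var stream = File.OpenRead(@"D:\Lab\MD5Compare\MD5Compare\bin\Debug\2\MD5Compare.exe"))
{
    byte[] digest = md5.ComputeHash(stream);
    Console.WriteLine(BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant());
}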
No, the checksum has to be different, because the data in the file has actually changed even if no code has: no functional difference in compilation, no new features added to the assembly, yet the timestamp of the build, for one, will be different.
So you need to take metadata into account here: how it is stored inside the file, and therefore how it affects integrity checks.
Please consider what MD5 is supposed to do: It's supposed to ensure that nobody has changed your files on a binary level. It's supposed to ensure that your file is exactly the same. Having multiple builds (different files) have the same MD5-checksum would defeat the purpose of having MD5.
If you can change the files while the checksum stays the same, so could hackers.
I am working on transferring files over the network. There is zero tolerance for data loss during the transfers. I've been asked to compute the SHA256 values for the original and the copied file to verify the contents are the same. So far I have made comparisons based on copying and pasting the file and letting Windows rename the copy with "-copy" appended to the filename. I have also tried renaming the file again after that rename, as well as removing the file extension. So far they all produce the same hash. I've also tried altering file attributes in code (just lastWrittenTime and fileCreationTime), and this does not seem to have an effect on the hash.
Checksum result of copying and pasting a file (Explorer appends "-copy" to the name):
E7273D248F191A0F914837A21BE39D229D790CA242D38651BAA06DAC9EBB63F7
E7273D248F191A0F914837A21BE39D229D790CA242D38651BAA06DAC9EBB63F7
Checksum result of renaming the -copy in explorer:
E7273D248F191A0F914837A21BE39D229D790CA242D38651BAA06DAC9EBB63F7
E7273D248F191A0F914837A21BE39D229D790CA242D38651BAA06DAC9EBB63F7
Checksum result of changing file extension:
E7273D248F191A0F914837A21BE39D229D790CA242D38651BAA06DAC9EBB63F7
E7273D248F191A0F914837A21BE39D229D790CA242D38651BAA06DAC9EBB63F7
What part/s of the file are used when the hash is created?
OK, zero tolerance was a bit much; if the hash doesn't match, the file will have to be resent.
The entire binary file contents are streamed through the hashing algorithm. File metadata (such as name, date etc) doesn't play a part.
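In C#, that streaming looks roughly like this (a sketch):

using System;
using System.IO;
using System.Security.Cryptography;

// Stream a file's bytes through SHA256; name, timestamps, and other
// metadata never enter the computation.
static string Sha256OfFile(string path)
{
    using (var sha256 = SHA256.Create())
    using (var stream = File.OpenRead(path))
        return BitConverter.ToString(sha256.ComputeHash(stream)).Replace("-", "");
}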
First, a general recommendation: don't do this. Use rsync or something similar to do bulk file transfers. Rsync has years of optimisations and debugging behind it, has countless options to control how (and whether) the copying happens, and is available on Windows. Don't waste time building something that has already been built.
But if you must…
Hashing algorithms generally care about bytes, not files. When applying SHA256 to a file, you are simply reading the bytes and passing them through the algorithm.
If you want to hash paths, permissions, etc., you should do this at the directory level, because these things constitute the "contents" of a directory. There is no standard byte-level representation of directories, so you'll have to make one up yourself; something that looks like a directory listing in sorted order usually suffices (see the sketch below). Make sure that each entry contains the hash of the corresponding thing, be it a file or another directory. This way, the hash of the directory uniquely specifies not only the name and attributes of each child but, recursively, the entire contents of the subdirectory.
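For instance, here is a sketch of one such made-up representation (attributes and permissions omitted for brevity):

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Hash a directory by hashing a sorted listing in which each entry
// carries the hash of the corresponding child.
static byte[] HashDirectory(string dir)
{
    var listing = new StringBuilder();
    foreach (var entry in Directory.EnumerateFileSystemEntries(dir)
                                   .OrderBy(e => e, StringComparer.Ordinal))
    {
        byte[] childHash = Directory.Exists(entry)
            ? HashDirectory(entry)   // recurse into subdirectories
            : HashFile(entry);       // leaf: hash file contents
        listing.AppendLine($"{Path.GetFileName(entry)} {Convert.ToBase64String(childHash)}");
    }
    using (var sha256 = SHA256.Create())
        return sha256.ComputeHash(Encoding.UTF8.GetBytes(listing.ToString()));
}

static byte[] HashFile(string path)
{
    using (var sha256 = SHA256.Create())
    using (var stream = File.OpenRead(path))
        return sha256.ComputeHash(stream);
}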
Note: the fact that identical files have the same hash can actually work in your favour, by avoiding transmission of the second file once the system realises that a file with the same hash is already present at the destination. Of course, you would have to code for this explicitly. But also note that doing so can allow super-cheap syncing when files have been moved or copied, since they will have the same hash as before. Only affected directories (from the immediate parent(s) to the root) will have different hash values.
Finally, a minor quibble: there is no such thing as zero tolerance. Forget whether SHA256 collisions will happen in the lifetime of the Universe. A gamma ray can flip the bit that says, "These two files don't match!" Such flippings happen exceedingly rarely, but more often than you might think. In a noisy quantum universe, we should avoid talking in absolutes.
I am developing a C# app that can NOT connect to the internet.
This app will produce a configuration file for my hardware.
What I want to be sure of is that when I give the configuration file to my assistant, he doesn't change it and put a file with a different configuration in the hardware.
To avoid this, I am currently using a simple method.
In the code I have a key: abcd123
My application produces a configuration file and then:
1. hashes the configuration file and encrypts the hash with the key abcd123, stored in the string variable KEY;
2. I give the configuration file to my assistant, who loads it into the hardware;
3. the hardware, which also has the key abcd123, decrypts the stored hash and hashes the payload. IF the two hashes are THE SAME, I assume that my assistant did not change the configuration file.
What I am concerned about is that a key stored in .NET code is very easy to recover without obfuscation.
I've thus bought Crypto Obfuscator, but I don't know how secure my key is.
I am not skilled enough to decompile my program and see whether the key is still in the clear or not.
What methods can you suggest to make me REASONABLY sure that my key is "safe"?
I understand that there is no way to secure it completely, but I just want reasonable additional security on top of my automated obfuscation, which I don't know much about.
I hope I've been clear, but do ask for clarification.
What I want to be sure of is that when I give the configuration file to my assistant, he doesn't change it and put a file with a different configuration in the hardware.
You are looking for a digital signature algorithm. Broadly, there are two ways to do this:
Use a symmetric scheme built on a cryptographically reliable hash (SHA-2, for example). You publish the hash and the corresponding file. The hardware gets the file, computes the hash with the same algorithm, and asserts that it is the same as the publicly published hash. See how Apache log4net publishes its libraries together with matching signatures.
Another approach would be to use a public/private key signature, for example with RSA. A sample is available from MSDN, where it shows how to sign an XML document using RSACryptoServiceProvider. You create a private/public key pair. The private key is used to create a digital signature, and you keep it to yourself. With the public key you can only verify signatures, and this is what you deploy to the device.
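A simplified sketch of that second approach, using RSA.SignData directly instead of the SignedXml sample (assumes .NET Core 3.0+ for the key export/import helpers):

using System;
using System.IO;
using System.Security.Cryptography;

using (var rsa = RSA.Create(2048))
{
    byte[] config = File.ReadAllBytes("config.bin");

    // Done on your machine with the private key:
    byte[] signature = rsa.SignData(config, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

    // Done on the hardware, which only holds the public key:
    byte[] publicKey = rsa.ExportSubjectPublicKeyInfo();
    using (var verifier = RSA.Create())
    {
        verifier.ImportSubjectPublicKeyInfo(publicKey, out _);
        bool ok = verifier.VerifyData(config, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        Console.WriteLine(ok ? "Configuration accepted" : "Configuration was tampered with");
    }
}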
Another safety tip from the link above:
Never store or transfer the private key of an asymmetric key pair in plaintext. Never embed a private key directly into your source code. Embedded keys can be easily read from an assembly using the Ildasm.exe (MSIL Disassembler) or by opening the assembly in a text editor such as Notepad.
You can only get real security if you don't put the key in your code.
One way to do this is to use public key cryptography (for example RSA).
You create a public and a private key. The private key never leaves your system.
You use the private key to sign the config file.
The software running on the hardware contains the public key and uses it to verify the signature of the config file without any need for a network connection.
Even if the whole world knows your public key nobody can create a valid signature without the private key (which is only available to you).
My Application can perform 5 business functions. I now have a requirement to build this into the licensing model for the application.
My idea is to ship a "keyfile" with the application. The file should contain some encrypted data about which functions are enabled in the application and which are not. I want it semi hack-proof too, so that not just any idiot can figure out the logic and "crack" it.
The decrypted version of this file should contain for example:
BUSINESS FUNCTION 1 = ENABLED
BUSINESS FUNCTION 2 = DISABLED.... etc
Please can you give me some ideas on how to do this?
While it could definitely be done using Rijndael, you could also try an asymmetric approach to the problem. Require the application to decrypt the small settings file on startup using a public key, and only send out new configuration files encrypted using the private key.
Depending on the size of your configuration file, this will cause a performance hit on startup compared to the Rijndael algorithm, but even if the client decompiles the program and gets your public key, it's not going to matter with regard to the config file, since they won't have the private key to make a new one.
Of course none of this considers the especially rogue client who decompiles your program and removes all the checking whatsoever ... but chances are this client won't pay for your product no matter what you do thus putting you in a position of diminishing returns and a whole new question altogether.
Probably the easiest secure solution is to actually use online activation of the product. The client would install your product and enter his key (or other purchase identification -- if the purchase is online this could all be integrated, while if you are selling a box, a key is more convenient).
You then use this identification to determine what features are available and send back an encrypted "keyfile" (as you term it), but also a custom key (it can be randomly generated, both the key and key file would be stored on your server -- associated with that identification).
You then need to make sure the keyfile doesn't work on other computers. You can do this by having the computer send back its machine ID and using that as added salt.
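A sketch of what "added salt" could look like in code (machineId and the iteration count are assumptions here; obtaining a stable machine ID is a separate problem):

using System;
using System.Security.Cryptography;
using System.Text;

// Tie the keyfile to one machine by mixing a machine identifier into
// key derivation as salt.
static byte[] DeriveKeyForMachine(string activationKey, string machineId)
{
    byte[] salt;
    using (var sha256 = SHA256.Create())
        salt = sha256.ComputeHash(Encoding.UTF8.GetBytes(machineId));

    using (var kdf = new Rfc2898DeriveBytes(activationKey, salt, 100000, HashAlgorithmName.SHA256))
        return kdf.GetBytes(32); // 256-bit key for encrypting the keyfile
}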
I've been pondering using custom built assemblies for the purpose of application licensing. The key file approach is inherently flawed. Effectively, it's a bunch of flags saying "Feature X is enabled, Feature Y is not". Even if we encrypt it, the application will have all the functionality built in - along with the method to decrypt the file. Any determined hacker is unlikely to find it terribly hard to break this protection (though it may be enough to keep the honest people honest, which is really what we want).
Let's assume this approach of encrypted "Yay/Nay" feature flags is not enough. What would be better is to actually not ship the restricted functionality at all. Using dynamic assembly loading, we can easily put just one or two core functions from each restricted feature into another assembly and pull them in when needed. These extra "enablement" assemblies become the keyfiles. For maximum security, you can sign them with your private key, and not load them unless they're well signed.
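A sketch of that loading step (file and type names are invented for the example, and real signature verification is more involved than the token check shown):

using System;
using System.Reflection;

// The restricted feature lives in an "enablement" assembly that only
// licensed customers receive.
const string keyfileAssembly = "FeatureX.Enablement.dll";

Assembly enablement = Assembly.LoadFrom(keyfileAssembly);

// Refuse to load anything that is not signed at all; a thorough check
// would validate the full strong-name signature against your key.
byte[] token = enablement.GetName().GetPublicKeyToken();
if (token == null || token.Length == 0)
    throw new InvalidOperationException("Enablement assembly is not signed.");

// Pull in the core function the main application deliberately lacks.
Type entryPoint = enablement.GetType("FeatureX.Core");
MethodInfo run = entryPoint.GetMethod("Run");
run.Invoke(null, null);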
Moreover, for each customer, your build and licensing process could include some hard to find customer specific data, that effectively ties each enablement assembly to that customer. If they choose to distribute them, you can track them back easily.
The advantage of this approach over simple Yay/Nay key files is that the application itself does not include the functionality of the restricted modes. It cannot be hacked without at least a strong idea of what these extra assemblies do - if the hacker removes their loading (as they would remove the keyfile), the code just can't function.
Disadvantages of this approach include patch release, which is somewhat mitigated by having the code in the keyfile assemblies be simple and compact (yet critical). Custom construction of an assembly for each customer may be tricky, depending on your distribution scenario.
You could achieve this fairly easily using Rijndael; however, the problem is that in your current design the code will contain your key. This basically means someone will disassemble your code to find the key and boom, goodbye protection. You could slow this process down by also obfuscating your code, but again, if they want to get it, they will get it.
However, this aside, to answer your question, this code should work for you:
http://www.dotnetspark.com/kb/810-encryptdecrypt-text-files-using-rijndael.aspx
I find Perforce-style protection scheme easiest to implement and use, while at the same time being quite hack-proof. The technique uses a plain text file with a validation signature attached at the last line. For example:
----(file begin)
key1: value1
key2: value2
expires: 2010-09-25
...
keyN: valueN
checksum: (base64-encoded blob)
---- (file end)
You would choose an asymmetric (public/private key) encryption algorithm plus a hashing algorithm of your choice. Generate your reference public/private key pair. Include the public key in your program. Then write a small utility program that takes an unsigned settings file and signs it: it computes the digital signature for the contents of the file (read the settings file, compute the hash, encrypt this hash using the private key) and attaches it (e.g. base64-encoded) as the "checksum" in the last line.
Now when your program loads the settings file, you validate the digital signature using the embedded public key: read the file contents, strip the last line, and compute the hash; then compare this value against the checksum from the last line, base64-decoded and run through asymmetric decryption with the embedded public key. If the validation succeeds, you know the settings file has not been tampered with.
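A sketch of that validation in C# (publicKeyPem is assumed to be compiled into the program; RSA.ImportFromPem needs .NET 5+):

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Strip the trailing "checksum:" line, hash the rest, and check the
// signature with the embedded public key.
static bool ValidateSettings(string fileText, string publicKeyPem)
{
    string[] lines = fileText.TrimEnd().Split('\n').Select(l => l.TrimEnd('\r')).ToArray();
    string checksumLine = lines.Last();
    byte[] signature = Convert.FromBase64String(checksumLine.Substring("checksum: ".Length));
    byte[] body = Encoding.UTF8.GetBytes(string.Join("\n", lines.Take(lines.Length - 1)));

    using (var rsa = RSA.Create())
    {
        rsa.ImportFromPem(publicKeyPem);
        return rsa.VerifyData(body, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }
}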
I find the advantages to be that the settings are in plain text (so, for example, the customer can see when the license expires or what features they paid for), yet changing even a single character in the file will result in the digital signature check failing. Also, keep in mind that you are now not shipping any private knowledge with your program. Yes, hackers can reverse-engineer your program, but they will only find the public key. To be able to sign an altered settings file, they would have to find the private key. Good luck doing that unless you're a three-letter agency... :-).
Use any cryptographic method to implement this.
Just check out the namespace 'System.Security.Cryptography'.
That namespace provides many encryption and decryption functions to protect secret data.
Another method is to use the registry: you can store the data in the Windows registry.
It is better to encrypt the data before storing it in the registry.
ROT-13!
Edit:
ROT-13 is a simple substitution cipher in which each letter is replaced by the letter 13 places before it in the alphabet. (NOTE: alternatively, you can use the ASCII value 13 less than the given char to support more than [A-Z0-9].)
For more info see wikipedia.
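Tongue firmly in cheek, a minimal sketch:

using System;
using System.Linq;

// ROT-13: rotate letters 13 places, leave everything else untouched.
// Encoding and decoding are the same operation.
static string Rot13(string input) =>
    new string(input.Select(c =>
        char.IsUpper(c) ? (char)('A' + (c - 'A' + 13) % 26) :
        char.IsLower(c) ? (char)('a' + (c - 'a' + 13) % 26) :
        c).ToArray());

// Rot13("BUSINESS FUNCTION 1 = ENABLED") -> "OHFVARFF SHAPGVBA 1 = RANOYRQ"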