I am creating an application whose purpose is to receive an encrypted string, decrypt it, and pass the cleartext string as arguments to a PowerShell script.
The executable has to be self-contained; it cannot connect to anything like a SQL database. The cipher will always be the same, which means that the password/salt can't really be random either.
I know that hardcoding the password/salt is not really a good idea, but I'm struggling with how to store an unchanging password/salt securely in a self-contained executable.
Right now, rather than having a static string as the password/salt, I create a password and salt based on the modified date of the executable itself (with a few more things done to it). If the executable changes I'll have to recreate the cipher, as the previous one can no longer be decrypted, but at least I'm not literally hardcoding a password and/or salt.
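In sketch form, the derivation looks roughly like this (C#; simplified, with the "few more things" left out and all names illustrative):

using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;
using System.Text;

static class TimestampKey
{
    // Derive a 256-bit key from the executable's last-write time.
    public static byte[] Derive()
    {
        string exePath = Assembly.GetEntryAssembly().Location;
        DateTime modified = File.GetLastWriteTimeUtc(exePath);
        byte[] password = Encoding.UTF8.GetBytes(modified.Ticks.ToString());

        // Salt derived from the same value, so both change with the exe.
        byte[] salt;
        using (var sha = SHA256.Create())
            salt = sha.ComputeHash(password);

        // Stretch with PBKDF2 into an AES-256 key.
        using var kdf = new Rfc2898DeriveBytes(
            password, salt, 100_000, HashAlgorithmName.SHA256);
        return kdf.GetBytes(32);
    }
}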
Still, I'm not sure just how secure this is, and I'm sure there has to be a better way.
Any suggestions?
EDIT
The only place where this will be used is inside a task sequence running inside SCCM, which means that users won't be able to interact with the computer at all during the time that the task sequence is running (assuming that debug mode is not enabled, else there's also far worse things to worry about).
So I could potentially pass it in clear text to the script, since no one would be able to read it while the task sequence runs, but then SCCM would automatically write it to its logs, which I obviously don't want. I could hardcode it in the script, which would keep it out of the logs, but if someone got hold of the script (bearing in mind it's a script, not compiled code) they'd know the password.
Remember, the password/salt are not actually hardcoded strings as it is; they are generated at runtime, so they will not be visible in a disassembler.
This article may help you decide how to design your password storage:
http://flagdefenders.blogspot.in/2012/12/how-to-save-password-securely.html
Related
I'm looking for a Unity function to determine whether my game has been decompiled, recompiled, or modified in any way.
Yes, there is a Unity function for this, but it can still be circumvented.
This can be done with Application.genuine which returns false when the application is altered in any way after it was built.
if (Application.genuineCheckAvailable)
{
    if (Application.genuine)
    {
        Debug.Log("Not tampered");
    }
}
The problem is that if a person is smart enough to decompile, modify, and recompile the game, he/she can also remove the check above, so the check becomes useless. Any kind of genuineness or authenticity check can be removed as long as the program is running on the player's machine.
EDIT
You can make it harder to be circumvented by doing the following:
1. Go to File --> Build Settings..., then select your platform.
2. Click on Player Settings --> Other Settings, then change the Scripting Backend from Mono to IL2CPP (C++).
This will make it harder to circumvent, but it is still possible.
TL;DR: That's frankly not possible.
You can never determine whether your program was decompiled, because there is no measure to detect that it happened. And every executable can be disassembled into at least assembly, even if you scramble and screw up your data. You can, however, make your source code hard to understand by using obfuscation software.
The ultimate obfuscator would be the M/o/Vfuscator, which compiles everything down to mov instructions, making it a pain in the butt to understand anything. But it is also slow as heck and probably not what you want (by the way, this works because the mov instruction is Turing-complete in the x86 instruction set; there is a great talk about it here). Following this trend further down the rabbit hole, you can use the exact same assembly code (around 10-20 instructions) to create every possible program, which makes it effectively impossible to get to your source code by simply disassembling the binary.
Staying in the realm of the possible, though: no, you are not able to prevent people from disassembling or decompiling your code. But you can make it harder (not impossible) to understand.
Detecting a change in the executable is on the possible side, though - although probably not feasible for you.
The main problem is that any code you build into the app to detect changes can itself be patched away. So you'd need to prevent that, but there is no practical way of preventing it...
If you try to detect changes by comparing a signature of the original executable to the actual signature, for example, the recompiled version can simply exclude that check. You can try to verify the signature against a server, but that can still be circumvented by removing the server check. You can force a server check for multiplayer games, but then we'll just send a fake signature. If you then calculate the signature on your server to prevent tampering, we'll just give you the original file and run the recompiled one.
There is one way (although not feasible, as mentioned above) to actually protect parts of your code against decompiling absolutely. The mechanism is called "BlurryBox" and was developed at KIT in Germany. As I can't seem to find a proper document to reference, here is what it does to achieve this.
The mechanism uses a stick with encrypted storage and a microcontroller that does the encryption. You put the parts of your code you want to protect (something that is called regularly and is necessary, but not very time-critical) into the encrypted storage. As it is impossible to retrieve the key [citation needed], you cannot access the code. The microcontroller takes commands from your program to call one of the encrypted functions in the storage with given parameters and returns the result. Because it is not possible to read the code, you would have to analyze its behaviour instead. Here comes the "blurry" part of the box: each function you store needs a small, well-defined set of allowed parameters, and every other set of parameters leads into a trap that kills your device. As the attacker has no spec for what the valid parameters are, this method gives you provable security against tampering with the code (as they state). There might be some mistakes in how exactly this works, though, as I'm writing it down from memory.
You could try mimicking that behaviour with a server you control (keep the sensitive code on the server, and hand out IP bans for attempts to probe it).
My situation is as follows:
I have to deploy a series of .NET desktop applications, each consisting of a file with encrypted data and an executable that will access that data and decrypt some parts of it at runtime.
What I need to achieve is that each data container should only be decryptable by that specific .exe it is provided with.
The first idea was to encrypt the data using, say, the hash value of the .exe file as a symmetric key, then at runtime calculate the hash of the .exe and use it to decrypt the parts of the data container.
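As a sketch, that first idea would look something like this (simplified; IV handling and error cases glossed over, names illustrative):

using System.IO;
using System.Reflection;
using System.Security.Cryptography;

static class ExeKeyedDecryption
{
    // Hash the running executable and use the digest as an AES-256 key.
    public static byte[] Decrypt(byte[] cipherText, byte[] iv)
    {
        string exePath = Assembly.GetEntryAssembly().Location;
        byte[] key;
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(exePath))
            key = sha.ComputeHash(stream);   // 32 bytes

        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = iv;
        using var decryptor = aes.CreateDecryptor();
        return decryptor.TransformFinalBlock(cipherText, 0, cipherText.Length);
    }
}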
However, the problem with that approach is that the user can easily look into the .NET assembly with ILSpy or any other decompiler and discover the whole encryption algorithm which will enable the user to decrypt all the data containers in my series of applications.
Another solution that comes to mind is a small native C library (which is less easy to decompile) that performs some manipulations on the .exe assembly information and generates a decryption key based on it (let's consider the user lazy enough that he won't try to intercept the key in memory).
But ideally I wouldn't like to resort to any languages other than C#, because porting the application to other platforms with Mono would require additional effort (P/Invokes and so on).
So my question is: is there a way I can encrypt the data so that only a certain application would be able to decrypt it?
Of course I understand that in the case of a local application it is impossible to keep the data absolutely secure, but I need to make the 'hacking' at least not worth the effort. Are there any reasonable solutions, or will I have to stick to one of the ideas I described above?
Thank you in advance!
The simple answer is no.
To encrypt and decrypt data, you need an algorithm and, optionally, a secret or key. If a computer can execute the algorithm, someone else can learn what it is. Ignoring decompilation and disassembly, a user could just look at the instructions executed by the CPU and piece together the algorithm.
This leaves the secret. Unfortunately, if the computer or program can access or derive a secret, so can someone with root or administrator rights on that computer for the same reasons above.
However, maybe you are just thinking about the problem the wrong way. If you want the program to access data that no one else can, consider making that data available from a server that users must authenticate to access. Use SSL so data is protected in transit and encrypt the data locally using a key that only the local user and local administrators can access. It is not perfect but it is about the best you are going to get in the general case.
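On Windows, one way to approximate "a key that only the local user and local administrators can access" is DPAPI. A minimal sketch, assuming .NET with the ProtectedData API available:

using System.Security.Cryptography;

static class LocalProtection
{
    // Round-trip data through DPAPI, scoped to the current user.
    public static byte[] Protect(byte[] data) =>
        ProtectedData.Protect(data, null, DataProtectionScope.CurrentUser);

    public static byte[] Unprotect(byte[] blob) =>
        ProtectedData.Unprotect(blob, null, DataProtectionScope.CurrentUser);
}

With DataProtectionScope.CurrentUser, only processes running as that user (or an administrator impersonating them) can decrypt the blob; the key management is handled by the OS.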
If you need more protection than that, you may want to consider hardware dongles but this gets expensive and complex quite quickly.
Let's say my program is an anti-virus.
Let's also say I have a file called "Signatures.dat" that contains a list of virus signatures to scan for.
I would like to encrypt that file in such a way that it can be opened by my anti-virus on any computer, but users won't be able to see its contents.
How would I accomplish that task?
I was looking at things like DPAPI, but I don't think that would work in my case because it's based on a user's settings. I need my solution to be universal.
I've got a method to encrypt it, but then I am not sure how to store the keys.
I know that storing it in my code is really insecure, so I am really not sure what to do at this point.
You want the computers of the users to be able to read the file, and you want the computers of the users to be unable to read the file. As you see, this is a contradiction, and it cannot be solved.
What you are implementing is basically a DRM scheme. Short of using TPM (no, that doesn't work in reality, don't even think about it), you simply cannot make it secure. You can just use obfuscation to make it as difficult as possible to reverse-engineer it and retrieve the key. You can store parts of the key on a server and retrieve it online (basically doing what EA did with their games) etc., but you probably will only make your product difficult to use for legitimate users, and anyone who really wants to will still be able to get the key, and thus the file.
In your example are you trying to verify the integrity of the file (to ensure it hasn't been modified), or hide the contents?
If you are trying to hide the contents then as has been stated ultimately you can't.
If you want to verify the file hasn't been modified, you can do this via hashes. You don't appear to have confused the two use-cases, but sometimes people assume you use encryption to ensure a file hasn't been tampered with.
Your best bet might be to use both methods - encrypt the file to deter casual browsers, knowing that this is not really going to deter anyone with enough time. Then verify the hash of the file with your server (use HTTPS, and validate the certificate thumbprints). This will ensure the file hasn't been modified even if someone has cracked your encryption.
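A rough sketch of that hash check (the endpoint and its response format are placeholder assumptions; assume it returns the expected hex digest over HTTPS):

using System;
using System.IO;
using System.Net.Http;
using System.Security.Cryptography;

static class SignatureFileCheck
{
    // Compare the local file's SHA-256 against the value the server expects.
    public static bool IsUnmodified(string path, string hashUrl)
    {
        byte[] digest;
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
            digest = sha.ComputeHash(stream);
        string localHash = BitConverter.ToString(digest).Replace("-", "");

        // hashUrl is a hypothetical HTTPS endpoint returning the expected digest.
        using var client = new HttpClient();
        string expected = client.GetStringAsync(hashUrl).GetAwaiter().GetResult().Trim();

        return string.Equals(localHash, expected, StringComparison.OrdinalIgnoreCase);
    }
}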
Does anybody know the solution for this? I create an exe file of my software, and after the first installation I have to disable the exe so it cannot be run again, because someone who purchases the software from me may install it only once.
To do this you'll need to store something somewhere. That something could be:
A file
A registry entry (see the sketch at the end of this answer)
A call to a web service you own that stores a unique identifier for the machine and is checked on subsequent installation attempts. (Note: if you choose this method, you must be clear and up-front with your users that this is what you're doing.)
Bear in mind that a determined user will be able to circumvent the file and registry methods, and quite possibly the web service method too. The former two by using something such as Process Monitor to identify the files/registry entries you're writing to and clearing them; the latter by using something like Fiddler to identify the web service calls you're making and replacing the responses with ones that bypass your protection.
Remember, ultimately the user can disassemble your code and remove the protection mechanisms you've put in place, so don't rely on them being 100% unbreakable.
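For the registry option, a minimal sketch (the key path and value name are made-up examples):

using Microsoft.Win32;

static class InstallMarker
{
    // Hypothetical key path; pick your own, and remember users can find it.
    const string KeyPath = @"Software\MyCompany\MyProduct";

    public static bool AlreadyInstalled()
    {
        using var key = Registry.CurrentUser.OpenSubKey(KeyPath);
        return key?.GetValue("Installed") != null;
    }

    public static void MarkInstalled()
    {
        using var key = Registry.CurrentUser.CreateSubKey(KeyPath);
        key.SetValue("Installed", 1);
    }
}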
Forget it, mate. It's software - you absolutely cannot enforce something like that because the user has complete control over the environment where the binary runs, including reverse engineering, virtualization, backups etc. etc. And the ones who you want to foil are precisely the ones who will go to any length to thwart any protection measure you could invent.
No, the only thing that works is to force an online connection and register, on your system, the fact that a particular binary was installed once, then forbid it the next time. That requires you to make each installer different and have a cryptographically strong key generator, and it's still susceptible to replay attacks - but it's the only thing that is not useless by definition.
(Well, either that, or make your software so insanely great that people will fall in love with you and want to give you the money. That solution is probably even harder.)
You could store the installation path in the registry or some secret location and have your .exe check on startup whether it was started from a location different from the one stored; if so, simply exit, as you probably don't want to tell the user what you are doing.
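A minimal sketch of that idea (the registry path and value name are made-up examples):

using System;
using System.Reflection;
using Microsoft.Win32;

static class LocationCheck
{
    const string KeyPath = @"Software\MyCompany\MyProduct";  // hypothetical

    public static void ExitIfMoved()
    {
        string current = Assembly.GetEntryAssembly().Location;
        using var key = Registry.CurrentUser.CreateSubKey(KeyPath);
        var stored = key.GetValue("InstallPath") as string;

        if (stored == null)
            key.SetValue("InstallPath", current);   // first run: remember location
        else if (!string.Equals(stored, current, StringComparison.OrdinalIgnoreCase))
            Environment.Exit(0);                    // moved: exit silently
    }
}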
So, to start off, I want to point out that I know that these things are never fool-proof and if enough effort is applied anything can be broken.
But: say I hand a piece of software (that I have written) to someone and get them to run it. I want to verify the result that they get. I was thinking of using some sort of encryption/hash that I can use to verify that they've run it and obtained a satisfactory result.
I also don't want the result to be "fakeable" (though again, I know that if enough effort is applied, etc., etc.). This means, therefore, that if I use a hash, I can't just have a hash for "yes" and a hash for "no", as the hash would then be one of only two options - easily fakeable.
I want the user of the tool to hand something back to me (possibly in an email, for example), something as small as possible (so, for example, I don't want to be trawling through lines and lines of logs).
How would you go about implementing this? I possibly haven't explained things the greatest, but hopefully you get the gist of what I want to do.
If anyone has implemented this sort of thing before, any pointers would be much appreciated.
This question is more about "how to implement" rather than specifically asking about code, so if I've missed an important tag please feel free to edit!
I think what you're looking for is non-repudiation. You're right, a hash won't suffice here - you'd have to look into some kind of encryption and digital signature on the "work done", probably PKI. This is a pretty wide field, I'd say you'll need both authentication and integrity verification (e.g. Piskvor did that, and he did it this way at that time).
To take a bird's eye view, the main flow would be something like this:
On user's computer:
run process
get result, add timestamp etc.
encrypt, using your public key
sign, using the user's private key (you may need some way to identify the user here - passphrases, smart cards, biometrics, ...)
send to your server
On your server:
verify signature using the user's public key
decrypt using your private key
process as needed
Of course, this gets you into the complicated and wonderful world that is Public Key Infrastructure; but done correctly, you'll have a rather good assurance that the events actually happened the way your logs show.
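To make the flow concrete, here is a condensed sketch of the client and server steps (raw RSA for both operations, which limits the payload to a small message; certificates, key distribution, and hybrid encryption are all omitted):

using System.Security.Cryptography;
using System.Text;

static class SignedResult
{
    // Client side: encrypt for the server, then sign as the user.
    public static (byte[] CipherText, byte[] Signature) Package(
        string result, RSA serverPublicKey, RSA userPrivateKey)
    {
        byte[] payload = Encoding.UTF8.GetBytes(result);

        // Encrypt with your (the server's) public key: only you can read it.
        byte[] cipherText = serverPublicKey.Encrypt(payload, RSAEncryptionPadding.OaepSHA256);

        // Sign the ciphertext with the user's private key: proves who sent it.
        byte[] signature = userPrivateKey.SignData(
            cipherText, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

        return (cipherText, signature);
    }

    // Server side: verify with the user's public key, then decrypt with yours.
    public static string Unpack(
        byte[] cipherText, byte[] signature, RSA userPublicKey, RSA serverPrivateKey)
    {
        if (!userPublicKey.VerifyData(cipherText, signature,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1))
            throw new CryptographicException("Signature check failed.");

        byte[] payload = serverPrivateKey.Decrypt(cipherText, RSAEncryptionPadding.OaepSHA256);
        return Encoding.UTF8.GetString(payload);
    }
}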
I'm pasting in one of your comments here, because it goes to the heart of the matter:
Hi Eric. I should have pointed out that the tool isn't going out publicly, it will go to a select list of trusted users. The tool being disassembled isn't an issue. I'm not really bothered about encryption; all I need to do is be able to verify that they ran a specific process and got a legitimate result. The tool verifies stuff, so I don't want them to just assume that something works fine and not run the tool.
So basically, the threat we're protecting against is lazy users, who will fail to run the process and simply say "Yes Andy, I ran it!". This isn't too hard to solve, because it means we don't need a cryptographically unbreakable system (which is lucky, because that isn't possible in this case, anyway!) - all we need is a system where breaking it is more effort for the user than just following the rules and running the process.
The easiest way to do this is to take a couple of items that aren't constant and are easy for you to verify, and hash them. For example, your response message could be:
System Date / Time
Hostname
Username
Test Result
HASH(System Date / Time | Hostname | Username | Test Result)
Again, this isn't cryptographically secure - anyone who knows the algorithm can fake the answer - but as long as doing so is more trouble than actually running the process, you should be fine. The inclusion of the system date/time protects against a naive replay attack (just sending the same answer as last time), which should be enough.
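For illustration, a sketch of building such a response (the field order and the '|' separator are arbitrary choices):

using System;
using System.Security.Cryptography;
using System.Text;

static class ResponseBuilder
{
    // Build "fields + hash" as in the list above.
    public static string Build(string testResult)
    {
        string fields = string.Join("|",
            DateTime.UtcNow.ToString("o"),   // system date/time
            Environment.MachineName,         // hostname
            Environment.UserName,            // username
            testResult);                     // test result

        using var sha = SHA256.Create();
        byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(fields));
        string hex = BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
        return fields + Environment.NewLine + hex;
    }
}

On your side, you re-run the same concatenation over the reported fields and check that the hash matches.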
How about you take the output of your program (either "yes" or "no"?), and concatenate it with a random number, then include the hash of that string?
So you end up with the user sending you something like:
YES-3456234
b23603f87c54800bef63746c34aa9195
This means there will be plenty of unique hashes, despite only two possible outputs.
Then you can verify that md5("YES-3456234") == "b23603f87c54800bef63746c34aa9195".
If the user is not technical enough to figure out how to generate an md5 hash, this should be enough.
A slightly better solution would be to concatenate another (hard-coded, "secret") salt when generating the hash, but leave this salt out of the output.
Now you have:
YES-3456234
01428dd9267d485e8f5440ab5d6b75bd
And you can verify that
md5("YES-3456234" + "secretsalt") == "01428dd9267d485e8f5440ab5d6b75bd"
This means that even if the user is clever enough to generate his own md5 hash, he can't fake the output without knowing the secret salt as well.
Of course, if he is clever enough, he can extract the salt from your program.
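For completeness, a sketch of the salted variant (MD5 to match the example above; the salt literal is obviously a placeholder):

using System;
using System.Security.Cryptography;
using System.Text;

static class SaltedToken
{
    // Placeholder salt; in real code this would be embedded/obfuscated.
    const string Salt = "secretsalt";

    public static string Hash(string output)   // e.g. "YES-3456234"
    {
        using var md5 = MD5.Create();
        byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(output + Salt));
        return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
    }

    public static bool Verify(string output, string hash) =>
        Hash(output) == hash;
}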
If something more bullet-proof is needed, then you're looking at proper cryptographic signature generation, and I'll just refer you to Piskvor's answer, since I have nothing useful to add to that :)
In theory this is possible by using some sort of private salt and a hashing algorithm - basically a digital signature. The program has a private salt that it adds to the input before hashing. Private means the user does not have access to it; you, however, do know the salt.
The user sends you his result and the signature generated by the program. So you can now confirm the result by checking if hash(result + private_salt) == signature. If it is not, the result is forged.
In practice this is almost impossible, because you cannot hide the salt from the user. It's basically the same problem that is discussed in this question: How do you hide secret keys in code?
You could make the application a web app to which they have no source code access or access to the server on which it runs. Then your application can log its activity, and you can trust those logs.
Once a user has an executable program in their hands, then anything can be faked.
It's worth noting that you aren't really looking for encryption.
The "non-repudiation" answer is almost on the money, but all you really need to guarantee where your message has come from is to securely sign the message. It doesn't matter if someone can intercept and read your message, as long as they can't tamper with it without you knowing.
I've implemented something similar before: the information was sent in plaintext - because it wasn't confidential - but an obfuscated signing mechanism meant that we could be (reasonably) confident that the message was generated by our client software.
Note that you can basically never guarantee security if the app is on someone else's hardware - but security is never about "certainty", it's about "confidence" - are you confident enough for your business needs that the message hasn't been tampered with?