Verify Currently Running Executable - C#

I'm looking for the right approach to verify a currently running executable from within that executable.
I've already found a way to compute a (SHA256) hash for the file that is currently running.
The problem is: Where do I safely store this hash? If I store it in a config file, a malicious user can just calculate his own hash and replace it. If I store it in the executable itself, it can probably be overwritten with a hex editor.
A suggestion I read was to do an asymmetrical en- (or was it de-) cryption, but how would I go about this?
A requirement is that the executable code hashes and en/decrypts exactly the same on different computers, otherwise I can't verify correctly. The computers will all be running the same OS which is Windows XP (Embedded).
I'm already signing all of my assemblies, but I need some added security to successfully pass our Security Target.
For those who know, it concerns FPT_TST.1.3: The TSF shall provide authorised users with the capability to verify the integrity of stored TSF executable code.
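For reference, the hash computation the asker describes might look like this minimal sketch (hashing the file backing the entry assembly):

using System.IO;
using System.Reflection;
using System.Security.Cryptography;

// Minimal sketch: SHA-256 hash of the executable that is currently running.
private static byte[] HashCurrentExecutable()
{
    string path = Assembly.GetEntryAssembly().Location;
    using (SHA256 sha = SHA256.Create())
    using (FileStream fs = File.OpenRead(path))
        return sha.ComputeHash(fs);
}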

All the comments, especially the one from Marc, are valid.
I think your best bet is to look at Authenticode signatures - that's kind of what they're meant for. The point being that the exe or dll is signed with a certificate (stamping your organisation's information into it, much like an SSL request) and a modified version cannot (in theory, and with all the normal security caveats) be re-signed with the same certificate.
Depending upon the requirement (I say this because this 'security target' is a bit woolly - the ability to verify the integrity of the code can just as easily be a walkthrough on how to check a file in Windows Explorer), this is either enough in itself (Windows has built-in capability to display the publisher information from the certificate) or you can write a routine to verify the Authenticode certificate.
See this SO question: Verify whether an executable is signed or not (signtool used to sign that exe); the top answer links to an (admittedly old) article about how to programmatically check the Authenticode certificate.
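If a self-programmatic check is what you need, a minimal sketch P/Invoking WinVerifyTrust from wintrust.dll (the API behind Windows' own Authenticode checks) might look like the following; the constants, GUID and struct layouts follow the wintrust.h declarations:

using System;
using System.Runtime.InteropServices;

internal static class Authenticode
{
    private const uint WTD_UI_NONE = 2;
    private const uint WTD_REVOKE_NONE = 0;
    private const uint WTD_CHOICE_FILE = 1;

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct WINTRUST_FILE_INFO
    {
        public uint cbStruct;
        public string pcwszFilePath;
        public IntPtr hFile;
        public IntPtr pgKnownSubject;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct WINTRUST_DATA
    {
        public uint cbStruct;
        public IntPtr pPolicyCallbackData;
        public IntPtr pSIPClientData;
        public uint dwUIChoice;
        public uint fdwRevocationChecks;
        public uint dwUnionChoice;
        public IntPtr pFile;
        public uint dwStateAction;
        public IntPtr hWVTStateData;
        public IntPtr pwszURLReference;
        public uint dwProvFlags;
        public uint dwUIContext;
    }

    [DllImport("wintrust.dll")]
    private static extern int WinVerifyTrust(IntPtr hwnd, ref Guid pgActionID, ref WINTRUST_DATA pWVTData);

    // Returns true when the file carries a valid Authenticode signature.
    public static bool IsTrusted(string filePath)
    {
        // WINTRUST_ACTION_GENERIC_VERIFY_V2
        Guid action = new Guid("00AAC56B-CD44-11D0-8CC2-00C04FC295EE");
        var fileInfo = new WINTRUST_FILE_INFO
        {
            cbStruct = (uint)Marshal.SizeOf(typeof(WINTRUST_FILE_INFO)),
            pcwszFilePath = filePath
        };
        IntPtr pFile = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(WINTRUST_FILE_INFO)));
        try
        {
            Marshal.StructureToPtr(fileInfo, pFile, false);
            var data = new WINTRUST_DATA
            {
                cbStruct = (uint)Marshal.SizeOf(typeof(WINTRUST_DATA)),
                dwUIChoice = WTD_UI_NONE,
                fdwRevocationChecks = WTD_REVOKE_NONE,
                dwUnionChoice = WTD_CHOICE_FILE,
                pFile = pFile
            };
            return WinVerifyTrust(IntPtr.Zero, ref action, ref data) == 0; // 0 == trusted
        }
        finally
        {
            Marshal.FreeHGlobal(pFile);
        }
    }
}

Bear in mind the caveat in the update below: the check itself can be patched out of a modified binary.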
Update
To follow on from what Marc suggested - even this won't be enough if a self-programmatic check is required. The executable can be modified to remove the check and then deployed without the certificate. Thus killing it.
To be honest, the host application/environment really should have its own checks in place (for example, requiring a valid Authenticode certificate) - if it's so important that code isn't modified, then the host should have its own steps for verifying that. I think you might actually be on a wild goose chase.
Just put in whatever check takes the least amount of effort on your behalf, without worrying too much about the actual security it apparently provides - because I think you're starting from an impossible position. If there is actually any genuine reason why someone would want to hack the code you've written, then it won't just be a schoolboy who tries to hack it, and any solution available to you (those mentioned in the comments etc.) will be subverted easily.
Rent-a-quote final sentence explaining my 'wild goose chase' comment
Following the weakest link principle - the integrity of an executable file is only as valid as the security requirements of the host that runs that executable.
Thus, on a modern Windows machine that has UAC and all security features switched on, it's quite difficult to install or run code that isn't signed, for example. The user must really want to run it. If you turn all that stuff down to zero, then it's relatively simple. On a rooted Android phone it's easy to run stuff that can kill your phone. There are many other examples of this.
So if the XP Embedded environment your code will be deployed into has no runtime security checks on what it actually runs in the first place (e.g. a policy requiring Authenticode certs for all applications), then you're starting from a point where you've inherited a lower level of security than you're actually supposed to be providing. No amount of security primitives and routines can restore that.

Since .NET 3.5 SP1, the runtime skips strong name signature verification for assemblies loaded into a fully trusted AppDomain (the strong name bypass feature).
So if your assemblies are strong-named, I suggest verifying the signature in code.
Use the native mscoree.dll via P/Invoke:
// Requires: using System.Runtime.InteropServices;
private static class NativeMethods
{
    // BOOLEAN StrongNameSignatureVerificationEx(LPCWSTR wszFilePath, BOOLEAN fForceVerification, BOOLEAN *pfWasVerified)
    [DllImport("mscoree.dll")]
    public static extern bool StrongNameSignatureVerificationEx(
        [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath,
        byte fForceVerification,
        ref byte pfWasVerified);
}
Then you can hook the assembly-load event and check every assembly that is loaded into your (current) app domain:
AppDomain.CurrentDomain.AssemblyLoad += CurrentDomain_AssemblyLoad;

private static void CurrentDomain_AssemblyLoad(object sender, AssemblyLoadEventArgs args)
{
    Assembly loadedAssembly = args.LoadedAssembly;
    if (!VerifyStrongNameSignature(loadedAssembly))
    {
        // Do whatever you want when the signature is broken.
    }
}

private static bool VerifyStrongNameSignature(Assembly assembly)
{
    byte wasVerified = 0;
    // The second argument (1) forces verification even for assemblies
    // on the skip-verification list.
    return NativeMethods.StrongNameSignatureVerificationEx(assembly.Location, 1, ref wasVerified);
}
Of course, someone with enough experience can patch the check code out of your assembly, or simply strip the strong name from your assembly altogether.

Where can I store (and manage) Application license information? [closed]

I am developing a Windows application that requires users to register in order to use it...
Now I am storing my license info as a file in AppData, but deleting that file resets the trial version date. So I am now planning to save it in the registry.
However, most users will not have administrative privileges (Limited Users) in Windows to access the registry.
What can I do? Where can I save my serial number and date?
In my opinion the point is you have to change how you manage your license.
Where
If they delete the license data file, the trial restarts? Then don't start the application if the file doesn't exist, and create it with an install action the first time the application is installed.
Now you face a second problem: what if they uninstall and reinstall the application? The second step is to move this file to the common application data folder (for example Environment.SpecialFolder.CommonApplicationData). This is just a little bit safer (because application data won't be deleted on uninstall), but it's still possible for them to manually find and delete it. If the application will be installed by low-privilege users there isn't much you can do (you can't try to hide the license somewhere in the Registry).
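A minimal sketch of that second step (folder and file names are illustrative; the installer would create the file and grant limited users write access if needed):

using System;
using System.IO;

// License data under CommonApplicationData survives uninstall/reinstall.
string dir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
    "MyApp"); // placeholder vendor/app folder
string licensePath = Path.Combine(dir, "license.dat");

if (!File.Exists(licensePath))
{
    // The install action created this file; if it's missing now,
    // refuse to start instead of silently restarting the trial.
}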
Now it's a game between you and the crackers. They'll win, always. You'll only make the lives of legitimate users harder, so read all of this cum grano salis. Where you may store license data:
Registry. Pro: easy to do. Cons: easy to crack, and for low-privilege users it's valid for only one user at a time. A registry key (on a per-user basis) can be somewhat hidden if it has \0 in its name. Take a look at this nice post.
File. Pro: easy to do and IMO a little bit safer than the Registry. Cons: easy to crack (but you can hide it better, see later).
Application itself (appending data to your executable; a few words about that in this post). Pro: harder to detect. Cons: an antivirus may see this as... a virus, and an application update may delete the license too (of course, unless you handle this situation properly), so it'll make your code and deployment more complicated.
How to hide the license in a file?
If you're going with a file (no matter where it's located) you may consider making crackers' lives a (little bit) harder. Two solutions come to my mind now:
Alternate Data Streams. The file is attached to another file and they won't see it with just a search in Windows Explorer. Of course there are utilities to manage them, but at least they have to explicitly search for it (see the sketch after this list).
Hide it inside application data (a bitmap, for example, using steganography). They just don't know it's license data; what's safer? The problem is they can easily decompile your C# program to see what you do (see the paragraph about Code Obfuscation).
Probably many others (imagination is the only limit here), but don't forget... crackers will find it (if they really want to), so you have to balance your effort.
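To illustrate the Alternate Data Streams option: the .NET Framework's path validation rejects the colon stream syntax, so the stream has to be opened through CreateFile directly. A sketch, with illustrative names:

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

internal static class AlternateDataStream
{
    private const uint GENERIC_WRITE = 0x40000000;
    private const uint CREATE_ALWAYS = 2;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    // Writes data into hostFile:streamName, e.g. "notes.txt:license".
    public static void Write(string hostFile, string streamName, byte[] data)
    {
        using (SafeFileHandle h = CreateFile(hostFile + ":" + streamName,
            GENERIC_WRITE, 0, IntPtr.Zero, CREATE_ALWAYS, 0, IntPtr.Zero))
        {
            if (h.IsInvalid)
                throw new IOException("Could not open stream " + streamName);
            using (var fs = new FileStream(h, FileAccess.Write))
                fs.Write(data, 0, data.Length);
        }
    }
}

Reading is symmetric (GENERIC_READ and OPEN_EXISTING instead of GENERIC_WRITE and CREATE_ALWAYS).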
How
Keeping your current license schema, you're on a dead-end path. The decision you have to make is whether the risk of users running the trial longer than allowed is higher than the risk of them abandoning your application because of annoying protection.
Validation
If you can assume they have a network connection then you may validate the license online (only the first time they run your application) using some unique ID (even though it's about Windows 8, you may take a look at this post here on SO). Server-side validation can be pretty tricky (if you want to do it the right way); this post explains an example of program flow to manage it properly.
Data Obfuscation/Encryption
Your license file/data is now in a safe place; crackers will have a hard time finding it. Now you need another step: obfuscation. If your license data is in plain text, then once they find your file it's too easy to change it. You have some options (ordered by increasing security and complexity):
Obfuscate your files. If they can't understand what's inside a file with a simple text editor (or even a hex editor), then they'll need more time and effort to crack it. For example you may compress them: see this post about XML file obfuscation with compression. Note that even a simple Base64 encoding will obfuscate your text files.
Encrypt them with a symmetric algorithm. Even a very simple one will work well; here you're just trying to hide data. See this post for an example. I don't see a reason to prefer this method over simpler obfuscation.
Encrypt them with an asymmetric algorithm. This kind of encryption is a big step up in complexity and security, and it'll be (very) useful only if the license token is provided by a server/external entity. In this case the server signs the license with its private key. The client application validates the signature with the public key, and even if crackers find the file (and decompile your code to read the public key) they still won't be able to change it, because they don't have the private key (a sketch follows below).
Please note that data obfuscation/encryption can be used in conjunction with the above-mentioned steganography (for example, to hide an encrypted license file inside an image).
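A sketch of the asymmetric variant; the class and the XML key format are just one way to do it with RSACryptoServiceProvider (provider type 24, PROV_RSA_AES, is needed for SHA-256):

using System;
using System.Security.Cryptography;

internal static class LicenseSignature
{
    private static RSACryptoServiceProvider CreateRsa(string keyXml)
    {
        var rsa = new RSACryptoServiceProvider(new CspParameters(24));
        rsa.PersistKeyInCsp = false; // don't leave key containers behind
        rsa.FromXmlString(keyXml);
        return rsa;
    }

    // Vendor/server side: sign the license bytes with the private key.
    public static byte[] Sign(byte[] license, string privateKeyXml)
    {
        using (var rsa = CreateRsa(privateKeyXml))
            return rsa.SignData(license, "SHA256");
    }

    // Client side: only the public key ships with the application.
    public static bool Verify(byte[] license, byte[] signature, string publicKeyXml)
    {
        using (var rsa = CreateRsa(publicKeyXml))
            return rsa.VerifyData(license, "SHA256", signature);
    }
}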
Code Obfuscation
If you're not using license signing with asymmetric encryption, then the last step is to obfuscate your code. Whatever you do they'll be able to see your code, check your algorithm and work around it. So sad: you're deploying the instruction manual! Obfuscate with an obfuscator if you want, but what I strongly suggest is to move your license check to a less obvious place.
Put all your license-related code in a separate DLL. Sign it (be aware that signed assemblies may be decompiled and recompiled to remove the signing; there are even tools to do it almost automatically).
Pack it inside your executable resources (with a not so obvious name) and do not deploy the DLL.
Handle the AppDomain.AssemblyResolve event; when your DLL is needed at run-time you'll unpack it in memory and return its bytes (see the sketch below). See more about this technique in this Jeffrey Richter post.
I like this method because they'll see there is a license check but... they won't find the license code. Of course any good cracker will solve this in 10 minutes, but you'll be (a little bit more) safe from casual ones.
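A minimal sketch of that resolve hook, wired up early in Main (the assembly and resource names are illustrative):

using System;
using System.IO;
using System.Reflection;

AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
    if (!args.Name.StartsWith("LicenseEngine,", StringComparison.Ordinal))
        return null; // not our assembly: let normal resolution continue

    using (Stream s = Assembly.GetExecutingAssembly()
        .GetManifestResourceStream("MyApp.Resources.licdata.bin"))
    {
        byte[] raw = new byte[s.Length];
        s.Read(raw, 0, raw.Length);
        return Assembly.Load(raw); // loaded from memory, no DLL on disk
    }
};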
Conclusions
To summarize a little, this is a list of what you may do to provide a stronger license check (you can skip one or more steps, of course, but that will reduce safety):
Split your license check code in two assemblies (one to perform the check and manage license and the other to provide a public interface to that engine).
Strong-name all your assemblies.
Embed your License Engine assembly inside your License Interface assembly (see Code Obfuscation section).
Create a License server that will manage your licenses. Be careful to make it secure, to have secure connection and secure authentication (see Validation section).
Save license file locally in a safe location (see Where section) and encrypted with an asymmetric encryption algorithm (see Data Obfuscation section).
Sometimes validate license with your License Server (see Validation section).
Addendum: Software Protection Dongles
A small addendum about hardware keys (Software protection dongles). They're an invaluable tool to protect your software but you have to design your protection even more carefully. You can assume hardware itself is highly secure but weak points are its connection with computer and communication with your software.
Imagine simply storing your license in the key: a cracker may use an external USB sharing device (assuming your SPD is USB) to share the same key among multiple computers. You should also store some hardware-unique ID within the key, but in this case the weak point is the connection (the hardware can be emulated by a software driver). It's a pretty easy crack, and this false sense of security ("I'm using a Software Protection Dongle, my software is safe now") will make your application even more vulnerable (because you risk forgetting other basic protections to simplify license management).
The cost/benefit ratio of a poorly designed SPD protection should make you consider a plain USB pen drive instead. It costs $1 instead of the $15-20 (or much more) of an SPD, and you get the same level of protection against casual crackers. Of course it won't stop a serious cracker, but neither will a poorly designed SPD.
True protection (assuming you're not running on a DRM-enabled device) is a dongle that can also execute your code. If you can move some basic algorithms (at least enough to decrypt vital - and dynamic - support files) into the key, then to crack your software they will need to crack the hardware. For a half-decent dongle this is a very, very hard task. The more carefully you design this, and the more code you move into the key, the safer you'll be.
In any case you should be skeptical of marketing campaigns: software protection with a dongle isn't easy. It can be (much) safer, but it isn't as easy as vendors say. In my opinion the cost of plug-and-play dongle protection is too high compared to its benefits (benefits = how much harder it makes crackers' lives).
Unfortunately wherever you store licence information on a client's machine it's open to abuse (because it's their machine!).
The only secure way to do this is to have your program check in with a remote service, obviously this requires a lot of overhead.
My own approach is that if customers mess with their licence key then they should expect issues and you are under no obligation to assist. I would make sure your key contains information about the machine it's running on (to prevent simply copying the key) but otherwise keep it very simple.
When researching licencing myself I found a philosophy I tend to stick by - you drive away more potential customers with convoluted and difficult licencing setups than you lose through piracy.
My suggestion would be to reverse your logic: instead of allowing the removal of a licence key to restart the free trial, why not require a licence key to unlock the full application?
If you are going to write to HKEY_CURRENT_USER you won't need Administrative rights.
On the other hand, writing to HKEY_LOCAL_MACHINE does require administrative rights.
Be sure, when you open the key for writing, to open it writable like this:
RegistryKey key = Registry.CurrentUser.OpenSubKey(@"Software\YourAppPath", true); // @ verbatim string; true = writable (use CreateSubKey if the key may not exist)
If that doesn't work for you, there is a trick of writing to the end of the executable file itself, but that's another story.

How does .NET digital signing work?

I'm rather confused about how strong-name signing for a .NET assembly works... I signed my .NET assembly with a strong name by entering a password and having a .pfx file generated...
How is this supposed to protect my DLL? Can someone explain this to me in simple terms? I'm a newbie with digital signing and you'll have to take a BIG step back to have me understand this
I'm a newbie with digital signing and you'll have to take a BIG step back to have me understand this
Ok, let's take a big step back and ask some more basic questions then. I'll boldface every word that has a precise meaning.
What is the purpose of a security system?
To protect a resource (a pile of gold doubloons) against the threat of successful attack (theft) by a hostile party (a thief) who seeks to take advantage of a vulnerability (an unlocked window). (*)
How does .NET's Code Access Security work in general?
This is a sketch of the .NET 1.0 security system; it is rather complicated and has been more or less replaced by a somewhat simpler system, but the basic features are the same.
Every assembly presents evidence to the runtime. A domain administrator, machine administrator, user and appdomain creator each may create a security policy. A policy is a statement of what permissions are granted when a certain piece of evidence is present. When an assembly attempts to perform a potentially dangerous operation -- that is, an operation that might be a threat to a resource -- the runtime demands that the permission be granted. If the evidence is insufficient to grant that permission then the operation fails with an exception.
So for example, suppose an assembly presents the evidence "I was just downloaded from the internet", and the policy says "code downloaded from the internet gets permission to run and access the printer" and that code then attempts to write to C:\Windows\System32. The permission was not granted because of insufficient evidence, and so the operation fails. The resource -- the contents of the system directory -- are protected from tampering.
What is the purpose of signing an assembly with a digital certificate that I got from VeriSign?
An assembly signed with a digital certificate presents evidence to the runtime describing the certificate that was used to sign the assembly. An administrator, user or application may modify security policy to state that this evidence can grant a particular permission.
The evidence presented by an assembly signed with a digital certificate is: this assembly was signed by someone who possessed the private key associated with this certificate, and moreover, the identity of the certificate holder has been verified by VeriSign.
Digital certificates enable the user of your software to make a trust decision on the basis of your identity being verified by a trusted third party.
So how does that protect my DLL?
It doesn't. Your DLL is the crowbar that is going to be used to jimmy the window, not the pile of gold coins! Your DLL isn't a resource to be protected in the first place. The user's data is the resource to be protected. Digital signatures are there to facilitate an existing trust relationship. Your customer trusts you to write code that does what it says on the label. The signature enables them to know that the code they are running really came from you because the identity of the author of the code was verified by a trusted third party.
Isn't strong naming the same thing then?
No.
Strong naming is similar, in that a strong-named DLL presents cryptographically strong evidence to the runtime that the assembly was signed by a particular private key associated with a particular public key. But the purpose of strong naming is different. As the term implies, strong naming is about creating a name for an assembly that can only be associated with the real assembly. Anyone can make a DLL named foo.dll, and if you load foo.dll into memory by its weak name, you'll get whatever DLL is on the machine of that name, regardless of who created it. But only the owner of the private key corresponding to the public key can make a dll with the strong name foo, Version=1.2.3.4, Culture=en, PublicKeyToken=03689116d3a4ae33.
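To make that concrete, a short sketch using the example identity above:

using System.Reflection;

// Weak name: binds to whatever foo.dll the loader finds first.
Assembly weak = Assembly.Load("foo");

// Strong name: binds only to the assembly matching this exact identity.
Assembly strong = Assembly.Load(
    "foo, Version=1.2.3.4, Culture=en, PublicKeyToken=03689116d3a4ae33");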
So again, the purpose of strong naming is not directly to facilitate a trust relationship between a software provider and a user. The purpose of strong naming is to ensure that a developer who uses your library is really using the specific version of that library that you actually released.
I notice that VeriSign wasn't a factor in strong naming. Is there no trusted third party?
That's right; with a strong name there is no trusted third party that verifies that the public key associated with a given strong name is actually associated with a particular organization or individual.
This mechanism in digital certificates facilitates a trust relationship because the trusted third party can vouch that the public key really is associated with the trusted organization. Lacking that mechanism, somehow the consumer of a strong name needs to know what the public key of your organization is. How you communicate that to them securely is up to you.
Are there other implications to the fact that there is no trusted third party when strong naming?
Yes. Suppose for example that someone breaks into your office and steals the computer with the digital certificate private key on it. That attacker can now produce software signed with that key. But certifying authorities such as VeriSign publish "revocation lists" of known-to-be-compromised certificates. If your customers are up-to-date on downloading revocation lists from their certifying authorities then once you revoke your certificate, they can detect that your software might be from a hostile third party. You then have the difficult task of getting a new cert, re-signing all your code, and distributing it to customers, but at least there is some mechanism in place for dealing with the situation.
Not so with strong names. There is no central certifying authority to appeal to for a list of compromised strong names. If your strong name private key is stolen, you are out of luck. There is no revocation mechanism.
I took a look at my default security policy and it says that (1) any code on the local machine is fully trusted, and (2) any code on the local machine that is strong-named by Microsoft is fully trusted. Isn't that redundant?
Yes. This way if the first policy is made more restrictive then the second policy still applies. It seems reasonable that an administrator might want to lower the trust level of installed software without lowering the trust level of the assemblies that must be fully trusted because they keep the security system itself working.
But wait a moment, that still seems redundant. Why not set the default policy to "(1) any code on the local machine is trusted (2) any code strong-named by Microsoft is fully trusted"?
Suppose a disaster strikes and the Microsoft private key is compromised. It is stored deep in a vault under building 11, protected by sharks with laser beams, but still, suppose that happened. This would be a disaster of epic proportions because like I just said, there's no revocation system. If that happened AND the security policy was as you describe then the attacker who has the key can put hostile software on the internet that is then fully trusted by the default security policy! With the security policy as it actually is stated -- requiring both a strong name and a local machine location -- the attacker who has the private key now has to trick the user into downloading and installing it.
This is an example of "defense in depth". Always assume that every other security system has failed, and still do the best to stop the attacker.
As a best practice you should always set a strong name or digital signing policy to include a location.
So again, strong naming isn't to protect my DLL.
Right. The purpose of the security system is never to protect you, the software vendor, or any artifact you produce. It is to protect your customers from attackers who seek to take advantage of the trust relationship between your customers and you. A strong name ensures that code which uses your libraries is really using your libraries. It does this by making an extremely convenient mechanism for identifying a particular version of a particular DLL.
Where can I read more?
I've written an entire short book on the .NET 1.0 security system but it is now out of print, and superseded by the new simplified system anyways.
Here are some more articles I've written on this subject:
http://blogs.msdn.com/b/ericlippert/archive/2009/09/03/what-s-the-difference-part-five-certificate-signing-vs-strong-naming.aspx
http://ericlippert.com/2009/06/04/alas-smith-and-jones/
(*) Security systems have other goals than preventing a successful attack; a good security system will also provide non-repudiable evidence of a successful attack, so that the attacker can be tracked down and prosecuted after the attack. These features are outside the scope of this discussion.
In brief, one of the main purposes is to protect against the CLR loading an assembly after it has been changed. For example, you ship software with a DLL; someone could reflect it, change the code, build it again, and let your program load it thinking it is your DLL. To protect against that, a hash is computed when you build your DLL and stored with it. At runtime, when the CLR loads the DLL, it computes the hash again and compares it against what is stored with the DLL; if the DLL is still intact the hashes match and the CLR loads it, otherwise it refuses to.
This should give you a very good explanation with more details:
http://resources.infosecinstitute.com/dot-net-assemblies-and-strong-name-signature/

The ClickOnce dilemma: Code Identity versus Security - What to do?

We have quite some room for improvement in our application lifecycle management. We don't use TFS or any other suite (shame on us, I know).
Anyway, one of the aspects we currently ponder is code identity.
In other words: auditors ask us how we ensure that the software tested and accepted by the departments is exactly, and with no alterations, the assembly we deploy for productive use and not some tinkered-with version. "That's easy," we say, "it isn't!" Because some configuration variables are set between approval and release, the ClickOnce hashes differ, and thus so does the whole identity.
That's the story so far, but we want (and have) to get better at what we do, so there's no way around creating our assemblies stateless and oblivious to their environment. But then we will have to set the environment at runtime, our options here are:
Using Application settings and excluding the application configuration from the ClickOnce hash. This sucks because we can't sign the ClickOnce manifest that way, so users will always see a "watch out, you don't know this guy" kind of message.
Passing Query-String parameters to the application file and using those to distinguish between test and productive environment. I don't like this because it's too open and enables any user to control the important bits (like dbhost or whatever).
Passing in something like "is_test=1" means there's a lot of inline switching going on, and that in turn could mean the assembly behaves differently in production than in test, which brings us back to the start, even though we've ensured assembly identity along the way.
I think all that is rather unsatisfying and there must be a better way to do it. How can this be done by little means (meaning without TFS or similar monstrosities)?
I just messed around with the ApplicationDeployment class a little. I think what I have now is pretty close to what I was looking for.
// Requires: using System.Deployment.Application; using System.IO;
private static void GetDeploymentEnvironment()
{
    if (ApplicationDeployment.IsNetworkDeployed)
    {
        ApplicationDeployment dep = ApplicationDeployment.CurrentDeployment;
        FileInfo f = new FileInfo(dep.UpdateLocation.AbsolutePath + ".env");
        if (f.Exists)
        {
            // read file content and apply settings
        }
    }
}
This enables me to put a file in the deployment folder (where the .application file resides) that I can use to override settings. If there is no such file, well... nothing gets overridden. Whatever I do with the content of this file, the assembly identity is preserved.
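For the "read file content and apply settings" part, one possible (purely illustrative) format is plain key=value lines:

using System;
using System.Collections.Generic;
using System.IO;

private static Dictionary<string, string> ReadOverrides(string path)
{
    var overrides = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
    foreach (string line in File.ReadAllLines(path))
    {
        int eq = line.IndexOf('=');
        if (eq > 0)
            overrides[line.Substring(0, eq).Trim()] = line.Substring(eq + 1).Trim();
    }
    return overrides;
}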
EDIT: Just a hint: as you can see, this is useful only for applications deployed as Online Only. You cannot start the same ClickOnce .application file from different locations in the Available Offline scenario.

Should Open-Source Libraries be Digitally Signed

It is good practice to always sign executable files (exe, dll, ocx, etc.). On the other hand, with an open-source project it may be considered disregarding the contributions to the project from all the other developers.
This is quite an ethical dilemma for me and I would like to hear more opinions on this from either people who have been in a similar situation or people who contributed to an open source project.
I would like to note that this question is about an open-source project written in C# using .NET 4, so when a user clicks the executable, he or she will be prompted with a warning stating that the file is from an untrusted publisher if it is not digitally signed.
By the way, the assemblies all have strong naming (signatures) already, but they are not digitally signed yet (e.g. with a VeriSign code-signing certificate).
.NET is a different beast, as many features (especially for libraries) require the file to be signed with a strong name key, but those can be self-signed with no complaint from the final product (it uses the program's cert, not the libraries', to pop up that message box you refer to in your original question).
However, in the general case I see nothing wrong with a group signing the official distro with a private key. If you do something to the source and recompile, technically "the file is from an untrusted publisher": I may trust Canonical, but I do not trust you. As long as the executable's not being signed by a specific publisher does not stop it from being used in the manner it was intended (the tivoization clause in the GPL), I see no reason NOT to sign your executables.
Saying that this is "quite an ethical dilemma" is probably blowing it out of proportion. You definitely want to code sign your executables, and I don't really see the problem with you signing it. For example, TortoiseSVN is signed by "Stefan Kueng, Open Source Developer".
That said, it is probably a good idea to form some kind of legal entity for your project, and then get the code-signing certificate in the name of your project's entity. That way, rather than you personally signing the executable (and thus "taking all the credit"), your project's name shows up as the publisher.
If you were in the US, I would suggest either forming an LLC or possibly a 501(c)(3) organization, which is exempt from income tax and allows individuals to make tax-deductible donations to the project. (Many open source projects organize as 501(c)(3) entities, including WordPress and jQuery.) I see you're in Turkey, so you'll have to research your local requirements for forming some kind of legal entity; once formed, you'll be able to get a certificate from a CA in the name of your project's entity rather than your own.

How to disable an exe file after first installation?

Does anybody know a solution for this? I create an exe file of my software. After the first installation I have to disable the exe so it cannot be run again, because when someone purchases the software from me they can install it only once.
To do this you'll need to store something somewhere; that something could be:
A file
A registry entry
A call to a web service you own that stores a unique identifier for the machine, and is checked on subsequent installation attempts (Note: If you choose this method you must be clear and up-front with your users that it's what you're doing).
Bear in mind that a determined user will be able to circumvent file and registry methods and also quite possibly the web service method. The former two by using something such as Process Monitor to identify the files/registry entries you're writing to and clear them. For the latter, by using something like Fiddler to identify the web service calls you're making and replacing the responses with ones that allow them to bypass your protection.
Remember, ultimately the user can disassemble your code and remove the protection mechanisms you've put in place, so don't rely on them being 100% unbreakable.
Forget it, mate. It's software - you absolutely cannot enforce something like that because the user has complete control over the environment where the binary runs, including reverse engineering, virtualization, backups etc. etc. And the ones who you want to foil are precisely the ones who will go to any length to thwart any protection measure you could invent.
No, the only thing that works is to force an online connection and register, on your system, the fact that a particular binary was installed once, then forbid it the next time. That requires you to make each installer different and have a cryptographically strong key generator, and it's still susceptible to replay attacks - but it's the only thing that is not useless by definition.
(Well, either that, or make your software so insanely great that people will fall in love with you and want to give you the money. That solution is probably even harder.)
You could store the installation path in the registry or some other secret location and have your .exe check whether it was started from a location different from the one stored; if so, simply exit, as you probably don't want to tell the user what you are doing.
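A sketch of that idea (the registry key name is a placeholder, and per the answers above this is trivially circumvented by a determined user):

using System;
using System.Reflection;
using Microsoft.Win32;

string current = Assembly.GetEntryAssembly().Location;
string recorded = (string)Registry.GetValue(
    @"HKEY_CURRENT_USER\Software\MyApp", "InstallPath", null);

if (recorded == null)
    Registry.SetValue(@"HKEY_CURRENT_USER\Software\MyApp", "InstallPath", current);
else if (!string.Equals(recorded, current, StringComparison.OrdinalIgnoreCase))
    Environment.Exit(1); // silently refuse to run from a different location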
