It is good practice to always sign executable files (exe, dll, ocx, etc.). On the other hand, with an open-source project, signing under one name may be seen as disregarding the contributions of all the other developers on the project.
This is quite an ethical dilemma for me and I would like to hear more opinions on this from either people who have been in a similar situation or people who contributed to an open source project.
I would like to note that this question concerns an open-source project written in C# on .NET 4, so when a user runs the executable, he or she is shown a warning that the file is from an untrusted publisher if it is not digitally signed.
By the way, the assemblies are all strong-named already, but they are not yet digitally signed (e.g. with a VeriSign code-signing certificate).
.NET is a different beast, as many features (especially for libraries) require the file to be signed with a strong-name key, but those can be self-signed with no complaint in the final product (it is the program's certificate, not the libraries', that drives the message box you refer to in your original question).
However, in the general case I see nothing wrong with a group signing the official distro with a private key. If you modify the source and recompile, then technically "the file is from an untrusted publisher": I may trust Canonical, but I do not trust you. As long as not being signed by a specific publisher does not stop the executable from being used in the manner it was intended (the tivoization clause in the GPL), I see no reason NOT to sign your executables.
Saying that this is "quite an ethical dilemma" is probably blowing it out of proportion. You definitely want to code sign your executables, and I don't really see the problem with you signing it. For example, TortoiseSVN is signed by "Stefan Kueng, Open Source Developer".
That said, it is probably a good idea to form some kind of legal entity for your project, and then get the code-signing certificate in the name of your project's entity. That way, rather than you personally signing the executable (and thus "taking all the credit"), your project's name shows up as the publisher.
If you were in the US, I would suggest either forming an LLC or possibly a 501(c)(3) organization, which is exempt from income tax and allows individuals to make tax-deductible donations to the project. (Many open source projects organize as 501(c)(3) entities, including WordPress and jQuery.) I see you're in Turkey, so you'll have to research your local requirements for forming some kind of legal entity; once formed, you'll be able to get a certificate from a CA in the name of your project's entity rather than your own.
We use SNK key files to sign our assemblies, then we use WiX to create an MSI install file. When we download the MSI file, we get that SmartScreen "Windows protected your PC" warning. I read about all the certificate stuff and told the team we should get a certificate and so on, but they said no, we just use SNK files and add them in the .csproj file. Is that correct, or do I have to do it?
Update: it is not a duplicate, and yes, you can sign code just with an SNK; I did that.
You will need to look into Code Signing for your app.
but they said no we just use snk files
I think your team is confusing Strong Naming with Code Signing. Though both rely on cryptographic signatures, the key used for strong naming is not sufficient for Code Signing, which is what you need here.
Strong Naming is somewhat of a poor man's way to identify something (filename, culture, public key). Its identification method is not objective, as there is no trusted third party vouching for the key, so it cannot tell a user who actually published the assembly. It is purely a .NET beast.
Code Signing (Authenticode) identifies something by way of a trusted third party and can show whether something has been tampered with. Code Signing can be used with both .NET and native apps.
Both are too complex to discuss in full here, particularly the latter.
I'm rather confused about how strong-name signing for a .NET assembly works. I strong-named my .NET assembly by entering a password and having a .pfx file generated...
How is this supposed to protect my DLL? Can someone explain this to me in simple terms? I'm a newbie with digital signing and you'll have to take a BIG step back to have me understand this
I'm a newbie with digital signing and you'll have to take a BIG step back to have me understand this
Ok, let's take a big step back and ask some more basic questions then. I'll boldface every word that has a precise meaning.
What is the purpose of a security system?
To protect a resource (a pile of gold doubloons) against the threat of successful attack (theft) by a hostile party (a thief) who seeks to take advantage of a vulnerability (an unlocked window). (*)
How does .NET's Code Access Security work in general?
This is a sketch of the .NET 1.0 security system; it is rather complicated and has been more or less replaced by a somewhat simpler system, but the basic features are the same.
Every assembly presents evidence to the runtime. A domain administrator, machine administrator, user and appdomain creator each may create a security policy. A policy is a statement of what permissions are granted when a certain piece of evidence is present. When an assembly attempts to perform a potentially dangerous operation -- that is, an operation that might be a threat to a resource -- the runtime demands that the permission be granted. If the evidence is insufficient to grant that permission then the operation fails with an exception.
So for example, suppose an assembly presents the evidence "I was just downloaded from the internet", and the policy says "code downloaded from the internet gets permission to run and access the printer" and that code then attempts to write to C:\Windows\System32. The permission was not granted because of insufficient evidence, and so the operation fails. The resource -- the contents of the system directory -- are protected from tampering.
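To make the "demand" concrete, here is a minimal sketch of an imperative permission demand as it looks in the classic .NET Framework (Code Access Security is legacy and not present in modern .NET; the path and scenario are illustrative assumptions only):

using System.Security;
using System.Security.Permissions;

static void TryWriteToSystemDirectory()
{
    try
    {
        // The runtime demands write permission for this path. In a partially
        // trusted context (e.g. code whose evidence says "downloaded from the
        // internet"), the demand fails and a SecurityException is thrown
        // before any file is touched.
        var permission = new FileIOPermission(FileIOPermissionAccess.Write, @"C:\Windows\System32\example.dat");
        permission.Demand();
        System.IO.File.WriteAllText(@"C:\Windows\System32\example.dat", "data");
    }
    catch (SecurityException)
    {
        // Insufficient evidence -> permission not granted -> operation fails.
    }
}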
What is the purpose of signing an assembly with a digital certificate that I got from VeriSign?
An assembly signed with a digital certificate presents evidence to the runtime describing the certificate that was used to sign the assembly. An administrator, user or application may modify security policy to state that this evidence can grant a particular permission.
The evidence presented by an assembly signed with a digital certificate is: this assembly was signed by someone who possessed the private key associated with this certificate, and moreover, the identity of the certificate holder has been verified by VeriSign.
Digital certificates enable the user of your software to make a trust decision on the basis of your identity being verified by a trusted third party.
So how does that protect my DLL?
It doesn't. Your DLL is the crowbar that is going to be used to jimmy the window, not the pile of gold coins! Your DLL isn't a resource to be protected in the first place. The user's data is the resource to be protected. Digital signatures are there to facilitate an existing trust relationship. Your customer trusts you to write code that does what it says on the label. The signature enables them to know that the code they are running really came from you because the identity of the author of the code was verified by a trusted third party.
Isn't strong naming the same thing then?
No.
Strong naming is similar, in that a strong-named DLL presents cryptographically strong evidence to the runtime that the assembly was signed by a particular private key associated with a particular public key. But the purpose of strong naming is different. As the term implies, strong naming is about creating a name for an assembly that can only be associated with the real assembly. Anyone can make a DLL named foo.dll, and if you load foo.dll into memory by its weak name, you'll get whatever DLL is on the machine of that name, regardless of who created it. But only the owner of the private key corresponding to the public key can make a dll with the strong name foo, Version=1.2.3.4, Culture=en, PublicKeyToken=03689116d3a4ae33.
So again, the purpose of strong naming is not directly to facilitate a trust relationship between a software provider and a user. The purpose of strong naming is to ensure that a developer who uses your library is really using the specific version of that library that you actually released.
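To see the difference in code, here is a minimal sketch; the assembly name and public key token are the illustrative values from above, not a real library:

using System.Reflection;

static void LoadByName()
{
    // Weak name: the binder will happily give you whatever foo.dll it finds.
    Assembly byWeakName = Assembly.Load("foo");

    // Strong name: this load only succeeds if the resolved assembly was signed
    // with the private key matching this public key token, at exactly this
    // version and culture.
    Assembly byStrongName = Assembly.Load(
        "foo, Version=1.2.3.4, Culture=en, PublicKeyToken=03689116d3a4ae33");
}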
I notice that VeriSign wasn't a factor in strong naming. Is there no trusted third party?
That's right; with a strong name there is no trusted third party that verifies that the public key associated with a given strong name is actually associated with a particular organization or individual.
This mechanism in digital certificates facilitates a trust relationship because the trusted third party can vouch that the public key really is associated with the trusted organization. Lacking that mechanism, somehow the consumer of a strong name needs to know what the public key of your organization is. How you communicate that to them securely is up to you.
Are there other implications to the fact that there is no trusted third party when strong naming?
Yes. Suppose for example that someone breaks into your office and steals the computer with the digital certificate private key on it. That attacker can now produce software signed with that key. But certifying authorities such as VeriSign publish "revocation lists" of known-to-be-compromised certificates. If your customers are up-to-date on downloading revocation lists from their certifying authorities then once you revoke your certificate, they can detect that your software might be from a hostile third party. You then have the difficult task of getting a new cert, re-signing all your code, and distributing it to customers, but at least there is some mechanism in place for dealing with the situation.
Not so with strong names. There is no central certifying authority to appeal to for a list of compromised strong names. If your strong name private key is stolen, you are out of luck. There is no revocation mechanism.
I took a look at my default security policy and it says that (1) any code on the local machine is fully trusted, and (2) any code on the local machine that is strong-named by Microsoft is fully trusted. Isn't that redundant?
Yes. This way if the first policy is made more restrictive then the second policy still applies. It seems reasonable that an administrator might want to lower the trust level of installed software without lowering the trust level of the assemblies that must be fully trusted because they keep the security system itself working.
But wait a moment, that still seems redundant. Why not set the default policy to "(1) any code on the local machine is trusted (2) any code strong-named by Microsoft is fully trusted"?
Suppose a disaster strikes and the Microsoft private key is compromised. It is stored deep in a vault under building 11, protected by sharks with laser beams, but still, suppose that happened. This would be a disaster of epic proportions because like I just said, there's no revocation system. If that happened AND the security policy was as you describe then the attacker who has the key can put hostile software on the internet that is then fully trusted by the default security policy! With the security policy as it actually is stated -- requiring both a strong name and a local machine location -- the attacker who has the private key now has to trick the user into downloading and installing it.
This is an example of "defense in depth". Always assume that every other security system has failed, and still do the best to stop the attacker.
As a best practice you should always set a strong name or digital signing policy to include a location.
So again, strong naming isn't to protect my DLL.
Right. The purpose of the security system is never to protect you, the software vendor, or any artifact you produce. It is to protect your customers from attackers who seek to take advantage of the trust relationship between your customers and you. A strong name ensures that code which uses your libraries is really using your libraries. It does this by making an extremely convenient mechanism for identifying a particular version of a particular DLL.
Where can I read more?
I've written an entire short book on the .NET 1.0 security system but it is now out of print, and superseded by the new simplified system anyways.
Here are some more articles I've written on this subject:
http://blogs.msdn.com/b/ericlippert/archive/2009/09/03/what-s-the-difference-part-five-certificate-signing-vs-strong-naming.aspx
http://ericlippert.com/2009/06/04/alas-smith-and-jones/
(*) Security systems have other goals than preventing a successful attack; a good security system will also provide non-repudiable evidence of a successful attack, so that the attacker can be tracked down and prosecuted after the attack. These features are outside the scope of this discussion.
In brief, one of the main purposes is to prevent the CLR from loading an assembly after it has been changed. For example, you ship software with a DLL; someone could decompile it, change the code, rebuild it, and let your program load it thinking it is your DLL. To protect against that, a hash is computed when you build your DLL and stored with it (signed with your private key). At runtime, when the CLR loads the DLL, it computes the hash again and compares it against what is stored with the DLL; if the DLL is still intact the hashes match and the CLR loads it, otherwise it refuses to load it.
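As a small illustration (the file path is just an example), you can read the strong-name identity, including the public key token, that this check is tied to:

using System;
using System.Reflection;

static void PrintStrongNameIdentity()
{
    // Reads the identity baked into the assembly's metadata without loading it
    // for execution. The CLR's integrity check happens when the assembly is
    // actually loaded (and, as noted elsewhere on this page, is skipped for
    // fully trusted assemblies since .NET 3.5 SP1).
    AssemblyName name = AssemblyName.GetAssemblyName(@"MyLibrary.dll");
    Console.WriteLine(name.FullName);
    byte[] token = name.GetPublicKeyToken() ?? new byte[0];
    Console.WriteLine(BitConverter.ToString(token));
}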
This should give you a very good explanation with more details:
http://resources.infosecinstitute.com/dot-net-assemblies-and-strong-name-signature/
I'm looking for the right approach to verify a currently running executable from within that executable.
I've already found a way to compute a (SHA256) hash for the file that is currently running.
The problem is: Where do I safely store this hash? If I store it in a config file, a malicious user can just calculate his own hash and replace it. If I store it in the executable itself, it can be overridden with a hex editor probably.
A suggestion I read was to do an asymmetrical en- (or was it de-) cryption, but how would I go about this?
A requirement is that the executable code hashes and en/decrypts exactly the same on different computers, otherwise I can't verify correctly. The computers will all be running the same OS which is Windows XP (Embedded).
I'm already signing all of my assemblies, but I need some added security to successfully pass our Security Target.
For those who know, it concerns FPT_TST.1.3: The TSF shall provide authorised users with the capability to verify the integrity of stored TSF executable code.
All the comments, especially the one from Marc, are valid.
I think your best bet is to look at Authenticode signatures - that's kind of what they're meant for. The point being that the exe or dll is signed with a certificate (stamping your organisation's information into it, much like an SSL request) and a modified version cannot (in theory, with all the normal security caveats) be re-signed with the same certificate.
Depending upon the requirement (I say this because this 'security target' is a bit woolly - the ability to verify the integrity of the code can just as easily be a walkthrough on how to check a file in Windows Explorer), this is either enough in itself (Windows has built-in capability to display the publisher information from the certificate) or you can write a routine to verify the Authenticode certificate.
See this SO question, "Verify whether an executable is signed or not (signtool used to sign that exe)"; the top answer links to an (admittedly old) article about how to programmatically check the Authenticode certificate.
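If you do want a programmatic routine, a minimal sketch looks something like the following. It only checks that the file carries a signature whose certificate chains to a trusted root; a thorough check would also call WinVerifyTrust to validate the file hash against the signature:

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static bool HasTrustedAuthenticodeCertificate(string path)
{
    try
    {
        // Throws CryptographicException if the file has no embedded signature.
        var signer = new X509Certificate2(X509Certificate.CreateFromSignedFile(path));

        var chain = new X509Chain();
        chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
        return chain.Build(signer);
    }
    catch (CryptographicException)
    {
        return false; // unsigned, or the certificate could not be read
    }
}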
Update
To follow on from what Marc suggested - even this won't be enough if a self-programmatic check is required. The executable can be modified, removing the check and then deployed without the certificate. Thus killing it.
To be honest - the host application/environment really should have its own checks in place (for example, requiring a valid Authenticode certificate) - if it's so important that code isn't modified, then it should have its own steps for verifying that. I think you might actually be on a wild goose chase.
Just put in whatever check will take the least amount of effort on your behalf, without worrying too much about the actual security it apparently provides - because I think you're starting from an impossible point. If there is actually any genuine reason why someone would want to hack the code you've written, then it won't just be a schoolboy who tries to hack it. Therefore any solution available to you (those mentioned in comments etc.) will be subverted easily.
Rent-a-quote final sentence explaining my 'wild goose chase' comment
Following the weakest link principle - the integrity of an executable file is only as valid as the security requirements of the host that runs that executable.
Thus, on a modern Windows machine that has UAC switched on and all security features enabled, it's quite difficult to install or run code that isn't signed, for example. The user must really want to run it. If you turn all that stuff down to zero, then it's relatively simple. On a rooted Android phone it's easy to run stuff that can kill your phone. There are many other examples of this.
So if the XP Embedded environment your code will be deployed into has no runtime security checks on what it actually runs in the first place (e.g. a policy requiring Authenticode certs for all applications), then you're starting from a point where you've inherited a lower level of security than you are actually supposed to be providing. No amount of security primitives and routines can restore that.
Since .NET 3.5 SP1, the runtime no longer verifies the strong-name signature when it loads fully trusted assemblies.
If your assemblies are strong-named, I therefore suggest checking the signature in code.
Use the native mscoree.dll via P/Invoke:
private static class NativeMethods
{
    // Returns true when the strong-name signature of the file at wszFilePath
    // verifies; pdwOutFlags reports whether verification was actually performed.
    [DllImport("mscoree.dll")]
    public static extern bool StrongNameSignatureVerificationEx([MarshalAs(UnmanagedType.LPWStr)] string wszFilePath, byte dwInFlags, ref byte pdwOutFlags);
}
Then you can use the assembly load event and check every assembly that is loaded into your (current) app domain:
AppDomain.CurrentDomain.AssemblyLoad += CurrentDomain_AssemblyLoad;
private static void CurrentDomain_AssemblyLoad(object sender, AssemblyLoadEventArgs args)
{
    Assembly loadedAssembly = args.LoadedAssembly;
    if (!VerifyStrongNameSignature(loadedAssembly))
    {
        // Do whatever you want when the signature is broken or missing.
    }
}
private static bool VerifyStrongNameSignature(Assembly assembly)
{
    // dwInFlags = 1 forces verification even if the assembly is on the
    // skip-verification list; wasVerified reports whether the check ran.
    byte wasVerified = 0;
    return NativeMethods.StrongNameSignatureVerificationEx(assembly.Location, 1, ref wasVerified);
}
Of course, someone with enough experience can patch the check code out of your assembly, or simply strip the strong name from your assembly.
I need some ideas on how to create an activation algorithm. For example, I have a demo certificate, and providing it makes the application run in demo mode. When the full-version certificate is provided, the application runs in full mode.
Is this even possible, and what would be a good way to create such a system?
One simple way I was thinking of would be to just have two encrypted strings; when decryption succeeds with the demo public key certificate, the application will run in demo mode, and so on.
You could do something like:
Generate a public/private key pair
As the owner of the private key, you can sign those "activation certificates" (called ACs from now on)
In your app, with the public key, you can check whether the signature is correct (a minimal sketch follows this list)
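A minimal sketch of that split, assuming RSA via the .NET RSACryptoServiceProvider; the payload strings ("mode=FULL" etc.), the key handling and the SHA-256/provider-type choice are illustrative assumptions, not a prescription:

using System;
using System.Security.Cryptography;
using System.Text;

// Runs only on your machine: the private key is never shipped with the app.
static byte[] SignActivation(string activationData, RSAParameters privateKey)
{
    var csp = new CspParameters { ProviderType = 24 }; // PROV_RSA_AES, needed for SHA-256 signatures
    using (var rsa = new RSACryptoServiceProvider(csp))
    {
        rsa.PersistKeyInCsp = false;
        rsa.ImportParameters(privateKey);
        return rsa.SignData(Encoding.UTF8.GetBytes(activationData), "SHA256");
    }
}

// Runs inside the app: only the public key is embedded, so the app can verify
// an AC but cannot mint new ones.
static bool VerifyActivation(string activationData, byte[] signature, RSAParameters publicKey)
{
    var csp = new CspParameters { ProviderType = 24 };
    using (var rsa = new RSACryptoServiceProvider(csp))
    {
        rsa.PersistKeyInCsp = false;
        rsa.ImportParameters(publicKey);
        return rsa.VerifyData(Encoding.UTF8.GetBytes(activationData), "SHA256", signature);
    }
}

// Example: if VerifyActivation("mode=FULL;licensee=Jane Doe", sig, embeddedPublicKey)
// returns true, run the full version; a "mode=DEMO" certificate runs the demo.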
As Overbose mentioned -- you can't prevent reverse engineering. In general, someone could take the functionality and put it in his or her own app, eliminating any possible activation algorithm. So you can only assume (or make it so) that this is hard enough not to be worth the effort (this is the same as in cryptography -- when the cost of breaking a message is greater than the profit of obtaining it, you can say it is well secured).
So you could:
Make the executable self-verifying (signed by you, self-checking based on a hard-coded public key; one catch: you must skip over that value when self-checking).
Do some tricks with pointers (point to the activation function, go to its 7th bit and change its value based on the value of another pointer; in some odd places change hard-coded values to ones based on the occurrence of certain bits elsewhere in the code; generally -- make it more difficult to break than by simply changing bits in the executable with a hex editor)
Try to design a protocol that your server uses to ask questions about the app ("give me the value of the 293rd byte of yourself") and check the answers.
Use imagination and think of some weird self-checking method nobody used before :)
As mentioned -- none of this is secure against the authentication part simply being cut out. But nothing is, and this can at least make it harder for crackers.
Background: I've deployed an activation based system built on top of a third-party license system, i.e. server, database, e-commerce integrations. I've also separately written a C# activation system using RSA keys, but never deployed it.
Product Activation commonly means that the software must be activated on a given machine. I assume that's what you mean. If all you want to do is have two strings that mean "demo" and "purchased", then they will be decrypted and distributed within hours (assuming your product is valuable). There is just no point.
So, assuming you want "activation", when the user purchases your software the following process needs to happen:
Order-fulfillment software tells Server to generate "Purchase Key" and send to user
User enters "Purchase Key" into software
Software sends the Purchase Key and a unique Machine ID to the server (a sketch of deriving such an ID follows this list).
Server combines the Purchase Key and Machine ID into a string, signs it with its certificate, and returns it to the user.
Software checks that the signature is valid using the server's public key.
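For step 3, the "unique Machine ID" has to be derived from something; the inputs chosen below (machine name plus the first non-loopback MAC address) are only an assumption for illustration, and hashing keeps the ID a fixed length so no raw hardware details travel to the server:

using System;
using System.Linq;
using System.Net.NetworkInformation;
using System.Security.Cryptography;
using System.Text;

static string GetMachineId()
{
    // Pick one stable-ish hardware identifier; which ones to use is a design choice.
    string firstMac = NetworkInterface.GetAllNetworkInterfaces()
        .Where(n => n.NetworkInterfaceType != NetworkInterfaceType.Loopback)
        .Select(n => n.GetPhysicalAddress().ToString())
        .FirstOrDefault() ?? "";

    string raw = Environment.MachineName + "|" + firstMac;
    using (var sha = SHA256.Create())
    {
        byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
        return BitConverter.ToString(hash).Replace("-", "");
    }
}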
The software could check the signature in lots of places: loading it in some places, verifying it in others.
When generating Purchase Keys, the server can store not only what product was purchased, but what level of product. You can also have "free" products that are time limited, so the user can try the full version of the software for 30 days.
You are using C#, so make sure you obfuscate the binaries, using Dotfuscator or equivalent. However, even with that there is nothing you can do against a determined hacker. Your goal, I assume, is to force non-paying users to either be hackers themselves, or to have to risk using a cracked version: kids won't care, corporations might. YMMV.
The code that does the checking needs to be in every assembly that needs protecting, otherwise an attacker can trivially remove protection by replacing the assembly that does the checking. Cut and paste the code if you have to.
Or just buy something.
Another option is to have the server pre-generate "Purchase Keys" and give them to the order-fulfillment service, but then you don't get to link the key to the customer's details (at least not until they register). Better to have the e-commerce server hit your server when a purchase has been made, and have your server send it out.
The hard part isn't so much the generation of activation keys as it is the creation of the server, the database, and the integration with e-commerce software, and most of all, human issues: do you allow unlimited installs per Purchase Key? Only one? If only one, then you have to have customer support and a way to allow a user to install it on a new machine. That's just one issue. All sorts of fun.
This guy wrote a blog post about a similar idea, explaining what he did with his own commercial software. He also wrote a list of recommendations about the most obvious cracking techniques. Hope it helps.
One simple way I was thinking of would be to just have two encrypted
strings; when decryption succeeds with the demo public key certificate,
the application will run in demo mode, and so on.
That could be a simple solution, but it won't prevent someone from reverse engineering your binaries and making execution jump to the correct branch. Everyone who has your program has a complete version of it, so it's only a matter of finding how to break this simple mechanism.
Maybe a better solution is to encrypt the part of the binaries needed for the full application version, instead of a simple string. That way, to run the complete version, someone needs to be able to decrypt those binaries before they can be executed.
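A rough sketch of that idea, assuming the full-version features live in a separate assembly that ships AES-encrypted and the key/IV are only delivered on purchase (the file name, key delivery and entry point are invented for illustration):

using System.IO;
using System.Reflection;
using System.Security.Cryptography;

static Assembly LoadEncryptedFeatureAssembly(string encryptedPath, byte[] key, byte[] iv)
{
    byte[] encrypted = File.ReadAllBytes(encryptedPath);
    using (var aes = Aes.Create())
    using (var decryptor = aes.CreateDecryptor(key, iv))
    {
        // Decrypt the assembly image in memory and load it without ever
        // writing the plaintext DLL to disk.
        byte[] raw = decryptor.TransformFinalBlock(encrypted, 0, encrypted.Length);
        return Assembly.Load(raw);
    }
}

// e.g. var full = LoadEncryptedFeatureAssembly("FullFeatures.dll.enc", purchasedKey, purchasedIv);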
Please take into consideration that even that solution isn't enough. There are other problems with it:
Will all the versions of your tool share the same encryption key? Break one of them and you break them all.
Even if you use a different key for each released binary, are the encrypted binaries identical? Once one is cracked, the unencrypted binaries can be reused for all distributed copies.
How do you solve these problems? There's no simple solution. Most major commercial software, even with sophisticated protection systems, is broken within a few hours or days of release.
Product activation is not a problem that asymmetric cryptography can solve. Asymmetric cryptography is about keeping secrets from your adversary. The problem is that you can't keep a secret that is stored on your adversary's machine; that would be security through obscurity.
The correct way to do product activation is to generate a cryptographic nonce that is stored in a database on your server. You give this nonce to the customer when they buy the product, and then they activate it online. This activation process could download new material, which would make it more difficult for the attacker to modify their copy to "unlock" new features.
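Generating such a nonce (the purchase key the customer receives) is straightforward; the format below, five groups of five characters from a confusion-resistant alphabet, is just an example:

using System.Security.Cryptography;
using System.Text;

static string GeneratePurchaseKey()
{
    const string alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // avoids 0/O and 1/I
    byte[] random = new byte[25];
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(random);

    var key = new StringBuilder();
    for (int i = 0; i < random.Length; i++)
    {
        if (i > 0 && i % 5 == 0) key.Append('-');
        // The slight modulo bias is acceptable here: the key is only a lookup
        // token for the record stored in the server's database.
        key.Append(alphabet[random[i] % alphabet.Length]);
    }
    return key.ToString();
}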
But even DRM systems that require you to be online while using the product, like the ones found in newer games such as "From Dust", are still broken within hours of their release.
One of the benefits of public key encryption is that you can verify the origin of a given piece of data. So if you store the public key in your assembly, then sign a given piece of data (say an authorization code or serial number) your assembly can verifiably determine that you were the one that created that data - and not a hacker. The actual data itself isn't all that important - it can be a simple pass/fail value.
This is actually pretty easy to do with .NET. You can use X.509 certificates or, as we do in DeployLX Licensing, the RSACryptoServiceProvider.
I would highly recommend buying a commercial product (it doesn't really matter which one, though DeployLX is excellent) and not doing this yourself, for two reasons:
Even if you're a great developer, you'll probably get it wrong the first time. And any savings you might have enjoyed by rolling your own will be lost to recovering from that mistake.
You'll spend far more time working on your own system - time that you should spend making your product great.
The second phase in protecting the software is to make sure that it runs the way you created it - and hasn't been modified by a hacker. It really doesn't matter what encryption you use if hackers can simply change if( licensed ) to if( true ).
You can use ASProtect to solve this problem. It is a good starting point.
I've got a question: is it possible to identify the creator of a .NET assembly just from traces that Visual Studio leaves within the assembly?
Or can you even get a kind of unique ID of the creator out of it?
I don't mean the application information like the company or description; those can be edited too easily.
Given that the code is not strong-named or signed, the answer is no. Ultimately, the only way would be to use some kind of code-signing approach based on a certificate issued by a public authority. And even that says unequivocally (theft aside) only that a particular certificate owner signed the code, not that a particular person wrote it.
Moving into the realm of conjecture: if the code was built with a unique compiler, one could possibly work this out. However, I cannot see even this being unequivocal, since you still don't know who ran the compiler, etc.