We have quite a bit of room for improvement in our application lifecycle management. We don't use TFS or any other suite (shame on us, I know).
Anyway, one of the aspects we currently ponder about is code identity.
In other words: auditors ask us how we ensure that the software tested and accepted by the departments is exactly, with no alterations, the assembly we deploy for productive use and not some tinkered-with version. "That's easy", we say, "it isn't!". Because some configuration variables are set between approval and release, the ClickOnce hashes differ, and with them the identity of the whole deployment.
That's the story so far, but we want (and have) to get better at what we do, so there's no way around making our assemblies stateless and oblivious to their environment. But then we have to set the environment at runtime, and our options here are:
Using application settings and excluding the application configuration file from the ClickOnce hash. This sucks because we can't sign the ClickOnce manifest that way, so users will always be prompted with a "watch out, you don't know this guy" kind of message.
Passing query-string parameters to the .application file and using those to distinguish between the test and productive environments. I don't like this because it's too open and enables any user to control the important bits (like dbhost or whatever).
Passing in something like "is_test=1" means there's a lot of inline switching going on, which in turn could mean that the assembly behaves differently in production than in test. That brings us back to the start, although we've ensured assembly identity on the way.
I find all that rather unsatisfying, and there must be a better way to do it. How can this be done by little means (that is, without TFS or similar monstrosities)?
I just messed around with the ApplicationDeployment class a little. I think what I have now is pretty close to what I was looking for.
// Needs a reference to System.Deployment and these usings:
// using System.Deployment.Application;
// using System.IO;
private static void GetDeploymentEnvironment()
{
    if (ApplicationDeployment.IsNetworkDeployed)
    {
        ApplicationDeployment dep = ApplicationDeployment.CurrentDeployment;

        // Look for a ".env" file next to the deployed .application file.
        FileInfo f = new FileInfo(dep.UpdateLocation.AbsolutePath + ".env");
        if (f.Exists)
        {
            // read file content and apply settings
        }
    }
}
This enables me to put a file in the deployment folder (where the .application file resides) that I can use to override settings. If there is no such file, well... nothing gets overridden. Whatever I do with the content of this file, the assembly identity is preserved.
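For illustration, a minimal sketch of what "read file content and apply settings" could look like, assuming a simple key=value format for the .env file (the format and the settings store are entirely up to you; nothing here is mandated by ClickOnce):

// using System.Collections.Generic;
var overrides = new Dictionary<string, string>();
foreach (string line in File.ReadAllLines(f.FullName))
{
    string[] parts = line.Split(new[] { '=' }, 2);
    if (parts.Length == 2)
        overrides[parts[0].Trim()] = parts[1].Trim();
}
// e.g. overrides["dbhost"] could now point the app at the right database.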
EDIT: Just a hint: as you can see, this is useful only for applications deployed as Online Only. You cannot start the same ClickOnce .application file from different locations in the Available Offline scenario.
This is my first project in ASP.NET Core and I can't seem to figure out where to store the credentials for a Postgres database.
I've read that they should be stored in appsettings.json, but if I do so, do I need to add that file to .gitignore, and can I still add the credentials when pushing to production, or should I use a .env file instead?
PS: I still don't know where the project will be hosted, so do all hosts support appsettings.json configuration?
Answer to the second question: appsettings.json is just a file, part of the dotnet publish output, and it should work on any host that supports uploading files.
This is somewhat of a pet issue of mine, so the answer to the first question will be longer. You definitely don't want your passwords to go to git; you are right on that. If you want to stay with the official solution, you are supposed to use application secrets for local development, and appsettings.{Development,Staging,Production}.json on the servers where you deploy the application. They stack: whatever is in appsettings.json will be overridden by anything with the same key that you put in one of the environment-specific files. I myself have several issues with this approach.
App secrets only work in the Development environment (unless you rewrite your default Startup class). If you are doing both development and maintenance, you have to mentally switch between two approaches to one thing.
App secrets have their own special utility that you have to learn, while the appsettings.* family are files at a well-known path inside the project that you can edit with anything (secrets are a file too, with the same syntax, but it lives somewhere deep in %APPDATA%, out of normal reach; you have to search for it).
appsettings.{env}.json: should these be in git or not? There is no official answer on that that I know of, and most projects I dealt with had some kind of problem with these files. If they live in git, you have the same problem as with appsettings.json: you can't put sensitive info into them. And you have to exclude them from publishing, because your git version won't have the correct passwords. I have seen a few horrible TeamCity scripts trying to ensure this during the build/publish step, and then randomly failing when devs changed things that should have had no impact on it (like the target dotnet version). And if they are not in git and you DO need to update the server version? You have to make sure to merge the changes into the live version manually, which is easy to forget.
Neither of those points is a dealbreaker, but they leave a bad taste for me.
After some meditation on those issues, I introduced an appsettings.local.json file as a standard part of all our projects. It's in .gitignore; it never leaves the machine.
// Requires Microsoft.Extensions.Hosting and Microsoft.Extensions.Configuration.Json:
// using System.IO;
// using Microsoft.Extensions.Configuration.Json;
// using Microsoft.Extensions.Hosting;
return Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((hostBuilderContext, configurationBuilder) =>
    {
        string contentRootPath = hostBuilderContext.HostingEnvironment.ContentRootPath;

        // Walk the default sources from the end and insert appsettings.local.json
        // right after the last JsonConfigurationSource, so it overrides every
        // other .json file but not environment variables or command-line args.
        for (int pos = configurationBuilder.Sources.Count - 1; pos >= 0; --pos)
        {
            if (configurationBuilder.Sources[pos] is JsonConfigurationSource)
            {
                var source = new JsonConfigurationSource()
                {
                    Path = Path.Join(contentRootPath, "appsettings.local.json"),
                    Optional = true,
                    ReloadOnChange = true,
                };
                source.ResolveFileProvider();
                configurationBuilder.Sources.Insert(pos + 1, source);
                break; // insert once, after the last JSON source
            }
        }
    })
    // ...continue the usual builder chain here, e.g. .ConfigureWebHostDefaults(...)
This belongs in Program.cs (old style; it needs slight modification if you use top-level statements) and places the local file into the chain of "stock" configuration providers, just after all the other configuration files. They continue to work as before, but if the .local file is present, values from it override the other .json files.
With this, appsettings.json serves just as a template. It's tracked by git, and if you have some discipline it also serves as a structured overview of everything that can be configured, with default values. If the project has different needs for the staging/prod environments, the corresponding environment files still work and can be tracked by git. Passwords go into the .local file, everywhere.
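To make the layering concrete, here is a made-up example (the keys are hypothetical): the tracked appsettings.json ships a placeholder, while the untracked appsettings.local.json carries the real secret and wins at runtime.

appsettings.json (in git, template with placeholder):
{
  "ConnectionStrings": {
    "Default": "Host=localhost;Database=app;Username=app;Password=CHANGE_ME"
  }
}

appsettings.local.json (in .gitignore, overrides the template):
{
  "ConnectionStrings": {
    "Default": "Host=db.internal;Database=app;Username=app;Password=the-real-one"
  }
}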
I have a Windows service written in C# running on a machine, and it creates and uses a number of files. Is there a way, from code, to prevent a user on the machine, administrators included, from messing with these files (moving, editing, renaming, deleting)?
I know that keeping the files open with a StreamWriter can achieve this, but I don't want to keep the files open all the time when I don't actually need to access the data in them, and I can't seem to find any other way.
EDIT: Let me rephrase the question based on the comments below. Is there a way to set up ACLs so that only my service can access the files? I would also accept it if only services could access the files (I have seen mention of an "All Services" security group in the Microsoft docs, but I can't seem to actually find it on the system or in .NET).
You can do it by changing access privileges, BUT I strongly suggest simply keeping the files open (just be careful to flush the stream after each batch write).
In the first part I address your question directly ("How to prevent..."), but in the second part I outline a different approach: make your application resilient by keeping a backup.
How to prevent...
Assuming that you're running on Windows, to prevent other users from messing with the files you should:
Set the hidden attribute. Hidden files are not shown by default, and many users won't even see them. If you can do it at the directory level, even better.
Change the ACL to deny Full Control to the Users and Administrators groups; better if you cherry-pick and just leave Read permissions in place (see the sketch after this list). Windows picks the most restrictive policy, even when a user belongs to two groups, so this will effectively stop everyone from writing to the file (if you also deny Read permissions, they won't even be able to see its content, but see later).
Create a special group (with the required permissions, and only those) with one single user. Be sure that user isn't automatically added to the Users group.
Change your application to impersonate that user when writing those files. If you left the Read permissions in place, the code for reading isn't affected.
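A hedged sketch of the ACL step, using .NET's System.Security.AccessControl (the path and the exact rights are examples, not a recipe; deny rules take precedence over allow rules):

using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

var file = new FileInfo(@"C:\ProgramData\MyService\data.bin");  // example path
FileSecurity acl = file.GetAccessControl();

foreach (WellKnownSidType group in new[] { WellKnownSidType.BuiltinUsersSid,
                                           WellKnownSidType.BuiltinAdministratorsSid })
{
    var sid = new SecurityIdentifier(group, null);
    // Deny write/delete but leave read alone, as suggested above.
    acl.AddAccessRule(new FileSystemAccessRule(
        sid,
        FileSystemRights.Write | FileSystemRights.Delete,
        AccessControlType.Deny));
}

file.SetAccessControl(acl);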
Don't forget to check with different versions and editions of Windows (HomeUsers keeps bouncing around in my mind). If your application is a Windows service then things may be slightly easier; see eryksun's comment.
You can experiment with all these things simply using Windows Explorer; just find the right balance, but don't forget that each single installation is a different world and only God knows what the environment is (but he doesn't know why).
A few obvious drawbacks:
An administrator can ALWAYS do what he wants, so they may find those files and revert the permissions. I think that the System Installer has some special privileges to prevent this, but I'm not sure (and I can't imagine how to do it).
Installation is way more complicated (and you will need an installer if you don't have one). You could do this setup when the application is executed for the first time, but then you will need administrative privileges (just once, but that's arguably worse).
Your code is more complex.
More setup means more things that may go wrong; balance this against the effort of your technical support team.
Updates (and the tech support job) will be more complicated.
Users with certain privileges won't be affected (see another comment), but this is really a good thing and you shouldn't ever try to circumvent it.
Backup is the key!
Don't forget that if they really want to break your application then they will just delete the application directory...
I think (though I don't know your specific use case) that maybe you're approaching the problem from the wrong angle. If what you want is to prevent the user from corrupting your data files (intentionally or not), then what you need is a BACKUP. Save a copy in a different location each time you write them, mark it as hidden, and live happy. If they're not too big, you may even save the content directly inside the Windows Registry. For encrypted/hashed/checksummed files, your application can easily detect when they're broken or missing: just restore the backup and you're done.
I don't want to keep the files open all the time
But keeping them open is a good way that closely follows your intent and requirements.
As long as it's not about hundreds of files or more, this seems the best option.
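A minimal sketch of the keep-it-open approach: FileShare.None means no other process (Explorer included) can open, rename, or delete the file while the handle is held. The path is just an example.

using System.IO;

FileStream stream = new FileStream(
    @"C:\ProgramData\MyService\data.bin",  // example path
    FileMode.OpenOrCreate,
    FileAccess.ReadWrite,
    FileShare.None);                       // deny read/write/delete to every other process

// ... write a batch of data ...
stream.Flush(true);                        // as noted above, flush after each batch write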
The other way is to set the security properties (ACLs), but that is messy and requires higher privileges.
Excluding the admin is not totally possible, and you should not really want that. Avoiding accidental deletes or renames is doable; total control is not.
Two other options are:
Set permissions on the locations where the files are so that no one else can access them.
If all of the files in question will be created by your application, you could check the options in CreateFile, where you can set the sharing options to 0x00000000 to "Prevent other processes from opening a file or device if they request delete, read, or write access."
If you want to use CreateFile, I guess you will have to P/Invoke it.
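If you do go the CreateFile route, here is a hedged P/Invoke sketch (constants from winbase.h, error handling trimmed). Note that a plain FileStream opened with FileShare.None achieves the same exclusive open without P/Invoke.

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

internal static class Kernel32
{
    private const uint GENERIC_READ = 0x80000000;
    private const uint GENERIC_WRITE = 0x40000000;
    private const uint OPEN_ALWAYS = 4;
    private const uint FILE_ATTRIBUTE_NORMAL = 0x80;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    public static SafeFileHandle OpenExclusive(string path)
    {
        // dwShareMode = 0x00000000: no other process may open the file
        // for read, write or delete while this handle stays open.
        return CreateFile(path, GENERIC_READ | GENERIC_WRITE, 0x00000000,
            IntPtr.Zero, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, IntPtr.Zero);
    }
}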
I'm looking for the right approach to verify a currently running executable from within that executable.
I've already found a way to compute a (SHA256) hash for the file that is currently running.
The problem is: where do I safely store this hash? If I store it in a config file, a malicious user can just calculate his own hash and replace it. If I store it in the executable itself, it can probably be overwritten with a hex editor.
A suggestion I read was to use asymmetric encryption (or was it decryption?), but how would I go about this?
A requirement is that the executable code hashes and en/decrypts exactly the same on different computers, otherwise I can't verify correctly. The computers will all be running the same OS which is Windows XP (Embedded).
I'm already signing all of my assemblies, but I need some added security to successfully pass our Security Target.
For those who know, it concerns FPT_TST.1.3: The TSF shall provide authorised users with the capability to verify the integrity of stored TSF executable code.
All the comments, especially the one from Marc, are valid.
I think your best bet is to look at Authenticode signatures; that's kind of what they're meant for. The point being that the exe or dll is signed with a certificate (stamping your organisation's information into it, much like an SSL request) and a modified version cannot (in theory, and with all the normal security caveats) be re-signed with the same certificate.
Depending upon the requirement (I say this because this 'security target' is a bit woolly: the ability to verify the integrity of the code can just as easily be a walkthrough on how to check a file in Windows Explorer), this is either enough in itself (Windows has the built-in capability to display the publisher information from the certificate) or you can write a routine to verify the Authenticode certificate.
See this SO question: Verify whether an executable is signed or not (signtool used to sign that exe); the top answer links to an (admittedly old) article about how to programmatically check the Authenticode certificate.
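For completeness, a hedged sketch of such a routine using WinVerifyTrust, which is what that route boils down to (struct layout follows wintrust.h; treat this as a starting point rather than production code):

using System;
using System.Runtime.InteropServices;

internal static class Authenticode
{
    // WINTRUST_ACTION_GENERIC_VERIFY_V2 from softpub.h
    private static readonly Guid ActionGenericVerifyV2 =
        new Guid("00AAC56B-CD44-11d0-8CC2-00C04FC295EE");

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct WINTRUST_FILE_INFO
    {
        public uint cbStruct;
        [MarshalAs(UnmanagedType.LPWStr)] public string pcwszFilePath;
        public IntPtr hFile;
        public IntPtr pgKnownSubject;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct WINTRUST_DATA
    {
        public uint cbStruct;
        public IntPtr pPolicyCallbackData;
        public IntPtr pSIPClientData;
        public uint dwUIChoice;           // 2 = WTD_UI_NONE
        public uint fdwRevocationChecks;  // 0 = WTD_REVOKE_NONE
        public uint dwUnionChoice;        // 1 = WTD_CHOICE_FILE
        public IntPtr pFile;              // -> WINTRUST_FILE_INFO
        public uint dwStateAction;
        public IntPtr hWVTStateData;
        public IntPtr pwszURLReference;
        public uint dwProvFlags;
        public uint dwUIContext;
    }

    [DllImport("wintrust.dll")]
    private static extern int WinVerifyTrust(
        IntPtr hwnd, ref Guid pgActionID, ref WINTRUST_DATA pWVTData);

    // Returns true when the file carries a valid Authenticode signature
    // chaining to a trusted root (return value 0 == success).
    public static bool IsTrusted(string filePath)
    {
        var fileInfo = new WINTRUST_FILE_INFO
        {
            cbStruct = (uint)Marshal.SizeOf(typeof(WINTRUST_FILE_INFO)),
            pcwszFilePath = filePath,
        };
        IntPtr pFile = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(WINTRUST_FILE_INFO)));
        try
        {
            Marshal.StructureToPtr(fileInfo, pFile, false);
            var data = new WINTRUST_DATA
            {
                cbStruct = (uint)Marshal.SizeOf(typeof(WINTRUST_DATA)),
                dwUIChoice = 2,
                fdwRevocationChecks = 0,
                dwUnionChoice = 1,
                pFile = pFile,
            };
            Guid action = ActionGenericVerifyV2;
            return WinVerifyTrust(IntPtr.Zero, ref action, ref data) == 0;
        }
        finally
        {
            Marshal.FreeHGlobal(pFile);
        }
    }
}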
Update
To follow on from what Marc suggested: even this won't be enough if a self-programmatic check is required. The executable can be modified, removing the check, and then deployed without the certificate. Thus killing it.
To be honest, the host application/environment really should have its own checks in place (for example, requiring a valid Authenticode certificate). If it's so important that code isn't modified, then the host should have its own steps for ensuring that. I think you might actually be on a wild goose chase.
Just put in whatever check will take the least amount of effort on your behalf, without worrying too much about the actual security it apparently provides, because I think you're starting from an impossible point. If there is actually any genuine reason why someone would want to hack the code you've written, then it won't just be a schoolboy who tries to hack it. Therefore any solution available to you (those mentioned in the comments etc.) will be subverted easily.
Rent-a-quote final sentence explaining my 'wild goose chase' comment
Following the weakest-link principle: the integrity of an executable file is only as valid as the security requirements of the host that runs that executable.
Thus, on a modern Windows machine with UAC switched on and all security features enabled, it's quite difficult to install or run code that isn't signed, for example; the user must really want to run it. If you turn all that stuff down to zero, then it's relatively simple. On a rooted Android phone, it's easy to run stuff that can kill your phone. There are many other examples of this.
So if the XP Embedded environment your code will be deployed into has no runtime security checks on what it actually runs in the first place (e.g. a policy requiring Authenticode certs for all applications), then you're starting from a point where you've inherited a lower level of security than you're actually supposed to be providing. No amount of security primitives and routines can restore that.
Since .NET 3.5 SP1, the runtime does not verify the strong name signature when loading assemblies.
So if your assemblies are strong-named, I suggest checking the signature in code.
Use the native mscoree.dll with P/Invoke:
// using System.Runtime.InteropServices;
private static class NativeMethods
{
    // Native signature: BOOLEAN StrongNameSignatureVerificationEx(
    //     LPCWSTR wszFilePath, BOOLEAN fForceVerification, BOOLEAN *pfWasVerified)
    // BOOLEAN is one byte, hence the byte parameters and the U1 return marshalling.
    [DllImport("mscoree.dll", CharSet = CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.U1)]
    public static extern bool StrongNameSignatureVerificationEx(
        [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath,
        byte fForceVerification,
        ref byte pfWasVerified);
}
Then you can use the assembly-load event and check every assembly that is loaded into your (current) AppDomain:
// using System.Reflection;
AppDomain.CurrentDomain.AssemblyLoad += CurrentDomain_AssemblyLoad;

private static void CurrentDomain_AssemblyLoad(object sender, AssemblyLoadEventArgs args)
{
    Assembly loadedAssembly = args.LoadedAssembly;
    if (!VerifyStrongNameSignature(loadedAssembly))
    {
        // Do whatever you want when the signature is broken.
    }
}
private static bool VerifyStrongNameSignature(Assembly assembly)
{
    byte wasVerified = 0;
    // 1 forces verification even if the assembly is on the skip-verification list.
    return NativeMethods.StrongNameSignatureVerificationEx(assembly.Location, 1, ref wasVerified);
}
Of course, someone with enough experience can patch the check code out of your assembly, or simply strip the strong name from your assembly...
Apps downloaded from the Windows Store are installed in this location:
C:\Program Files\WindowsApps
If you look inside this folder, you can access each application's .exe and decompile it with Reflector.
Currently, my Windows RT application sends a password over SSL to a WCF service to ensure that only people using my app can access my database (via the service).
If my code can be read by anybody, how can I ensure that only people using my Windows 8 app are accessing the service?
Thanks!
In the most general sense, it is impossible. If you create anything that is placed on the customer's computer, eventually you will stumble upon someone who manages to decipher your code and understand how to call your service. You may obfuscate it to insane levels, but it still has to be executable by the processor, so the processor has to understand it; and if it does, then potentially anyone who knows assembly can understand it too. You may obfuscate it smartly, so that cleaning the unimportant trash out of the code becomes very time-consuming, but still, at some point in time someone will read it.
One of the common defenses is trying to detect *who* is actually trying to use your service. This is why all the "portals" require you to "register": the application's identity is marginalized, and it is the user, who provides a login, password, PGP keys, etc., that is checked and verified as allowed to actually use your service.
Also, on the OS/framework layer, there are several ways to selectively provide "licenses" to your customers; your application can then use keys/hashes from those licenses to authenticate against your service. This may partially relieve the user of the burden of remembering passwords, it may provide an additional authentication factor, or it may simply be a yes-no flag that allows the app to run or not. Still, it will not guard your code against being read. Licenses just help in verifying that the software copy is legit and that it belongs to that specific user/computer.
You can act selectively only against 'reflectoring' (or dotPeeking, or ildasming, or ...). Those tools really make decompilation easy (although the original Reflector is now paid software). The simplest counter is to use an obfuscator that makes decompilation impossible or harder; that cuts off some percentage of the potential code-readers, and you can assume the script kiddies are gone. Or you can skip obfuscators and write the service connector in native code (C++, not C++/CLI). That makes the code completely un-reflectorable and un-ildasmable, which cuts off another large percentage of people, but some will still be left (me and thousands of others, but that's much less than millions).
While this does not give you a definitive answer, I wanted to show you that you can only reach some "level of hardness"; you cannot make the code totally safe from being read. This is why you should design the service access in such a way that showing your code to a stranger on the street does not compromise your security.
Now, getting to your problem: the core issue seems to lie not in the fact that your app uses some secret algorithms, but rather that you have hardcoded the password into it. With this approach, they do not need to read your code at all; they just need to listen to what data your app sends over the sockets.
Another issue is that everyone uses the same keyphrase.
A hardcoded magic string may be some sort of validation, but never authentication. If you want the app to be register-free, why not make the registration silent and automatic at first run? Of course, you will just move the problem: anyone could read the code and learn how to auto-register, and then they can make a clone. But, again, like I've said: you never know who's on the other side. Is it your app, or is it an ideal clone of it? Or maybe a clone that uses your own hacked-a-bit libraries to connect to you? If it looks like a duck and quacks like a duck, it is a duck...
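To illustrate the silent-registration idea (the endpoint and the storage location here are hypothetical): on first run the app asks the service for a per-install token and authenticates with that token from then on, so no shared password ships in the binary.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class InstallIdentity
{
    private static readonly string TokenPath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "myapp.token");

    public static async Task<string> GetTokenAsync(HttpClient client)
    {
        // Already registered: reuse the stored per-install token.
        if (File.Exists(TokenPath))
            return File.ReadAllText(TokenPath);

        // First run: the service issues a unique token and records it server-side.
        string token = await client.GetStringAsync("https://example.com/api/register");
        File.WriteAllText(TokenPath, token);
        return token;
    }
}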
Does anybody know a solution for this? I create an exe file of my software. After the first installation I have to disable the exe so it cannot be run again, because when someone purchases the software from me, they should be able to install it only once.
To do this you'll need to store something somewhere. That something could be:
A file
A registry entry (see the sketch after this list)
A call to a web service you own that stores a unique identifier for the machine and is checked on subsequent installation attempts. (Note: if you choose this method, you must be clear and up-front with your users that this is what you're doing.)
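A minimal sketch of the registry-entry option (the key path and value name are made up; a real product would pick something less obvious and likely obfuscate the value):

using System;
using Microsoft.Win32;

public static class InstallGuard
{
    private const string KeyPath = @"Software\MyCompany\MyProduct";

    public static bool AlreadyInstalled()
    {
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(KeyPath))
        {
            return key != null && key.GetValue("Installed") != null;
        }
    }

    public static void MarkInstalled()
    {
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(KeyPath))
        {
            key.SetValue("Installed", DateTime.UtcNow.ToString("o"));
        }
    }
}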
Bear in mind that a determined user will be able to circumvent the file and registry methods, and quite possibly the web service method too: the former two by using something such as Process Monitor to identify the files/registry entries you're writing to and clearing them; the latter by using something like Fiddler to identify the web service calls you're making and replacing the responses with ones that bypass your protection.
Remember, ultimately the user can disassemble your code and remove the protection mechanisms you've put in place, so don't rely on them being 100% unbreakable.
Forget it, mate. It's software: you absolutely cannot enforce something like that, because the user has complete control over the environment where the binary runs, including reverse engineering, virtualization, backups, etc. And the ones you want to foil are precisely the ones who will go to any length to thwart any protection measure you could invent.
No, the only thing that works is to force an online connection and register, on your system, the fact that a particular binary was installed once, then forbid it the next time. That requires you to make each installer different and to have a cryptographically strong key generator, and it's still susceptible to replay attacks, but it's the only approach that is not useless by definition.
(Well, either that, or make your software so insanely great that people will fall in love with you and want to give you the money. That solution is probably even harder.)
You could store the installation path in the registry or some other secret location and have your .exe check whether it was started from a location different from the stored one and, if so, simply exit, since you probably don't want to tell the user what you are doing.
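A short sketch of that idea (the registry location is hypothetical): remember where the exe was first started from and bail out silently when it runs from anywhere else.

using System;
using System.Reflection;
using Microsoft.Win32;

string running = Assembly.GetEntryAssembly().Location;
using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyCompany\MyProduct"))
{
    string recorded = key.GetValue("InstallPath") as string;
    if (recorded == null)
        key.SetValue("InstallPath", running);    // first run: remember the location
    else if (!string.Equals(recorded, running, StringComparison.OrdinalIgnoreCase))
        Environment.Exit(0);                     // started elsewhere: exit without explanation
}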