Determine if a game has been decompiled/altered - C#

I'm looking for a Unity function to determine whether my game has been decompiled, recompiled, or modified in any way.

Yes, there is a Unity function for this, but it can still be circumvented.
This can be done with Application.genuine, which returns false when the application was altered in any way after it was built.
if (Application.genuineCheckAvailable)
{
    // genuineCheckAvailable: integrity checking is supported on this platform
    if (Application.genuine)
    {
        Debug.Log("Not tampered");
    }
}
The problem is that anyone skilled enough to decompile, modify, and recompile the game can also remove the check above, which makes it useless. Any kind of genuineness or authenticity check can be removed as long as it runs on the player's machine.
EDIT
You can make it harder to circumvent by doing the following:
1. Go to File --> Build Settings..., then select your platform.
2. Click Player Settings --> Other Settings, then change the Scripting Backend from Mono to IL2CPP (C++).
This makes the check harder to circumvent, but circumvention is still possible.

TL;DR: That's frankly not possible.
You can never determine whether your program was decompiled, because no measure exists to detect that. And every executable can be disassembled at least into assembler, even if you scramble and screw up your data. You can make your source code hard to understand, though, using obfuscation software. The ultimate obfuscator would be the M/o/Vfuscator, which compiles everything into mov instructions, making it a pain in the butt to understand anything. But it is also slow as heck and probably not what you want (by the way, this works because the mov instruction is Turing-complete within the x86 instruction set. There is a great talk about it here). When you follow this trend further down the rabbit hole, you can also use the exact same assembler code (around 10-20 instructions) to create every possible program, which will make it practically impossible to recover your source code by simply disassembling it.
Staying in the realm of the possible though: No, you are not able to prevent people from disassembling or decompiling your code. But you can make it harder (not impossible) to understand.
Detecting a change in the executable is on the possible side, though. Although it's probably not feasible for you.
The main problem being that any code you build into the app to detect changes can be patched away. So you'd need to prevent that, but there is no practical way of doing so...
If you try to detect changes by computing a signature of the original and comparing it to the actual signature, for example, the attacker can simply strip that check out of the recompiled version. You can try to verify the signature against a server, but that can still be circumvented by removing the server check. You can force a server check for multiplayer games, but then we'll just send a fake signature. If you then calculate the signature on your server to prevent tampering, we'll just hand you the original file while running the recompiled one.
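For illustration, here is a minimal sketch of such a self-check (the class name and hash constant are mine; note that the expected hash can't be embedded in the hashed binary itself without changing it, so in practice it has to live elsewhere, e.g. on a server):

using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;

static class IntegrityCheck
{
    // Placeholder: the SHA-256 of the original binary. Embedding it in the binary
    // itself would change the hash, so in practice store it outside the hashed bytes.
    const string ExpectedHash = "PUT-BUILD-TIME-HASH-HERE";

    public static bool LooksGenuine()
    {
        string path = Assembly.GetExecutingAssembly().Location;
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            string actual = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
            return string.Equals(actual, ExpectedHash, StringComparison.OrdinalIgnoreCase);
        }
    }
}

And as explained above, the attacker simply patches LooksGenuine to return true.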
There is one way (although not feasible, as mentioned above) to actually, absolutely protect parts of your code against decompiling. The mechanism is called "BlurryBox" and was developed at KIT in Germany. As I can't seem to find a proper document as a reference, here is what it does to achieve this.
The mechanism uses a stick with encrypted storage and a microcontroller that does the encryption. You put the parts of your code you want to protect (something that is called regularly and is necessary, but not too time-critical) into the encrypted storage. As it is impossible to retrieve the key [citation needed], you cannot access the code. The microcontroller then takes commands from your program to call one of the encrypted functions in the storage with given parameters and returns the result. Because it is not possible to read the code, you would need to analyze its behaviour. Here comes the "Blurry" part of the box: each function you store needs to have a small, well-defined set of allowed parameters. Every other set of parameters leads into a trap that kills your device. As the attacker has no specs as to what the valid parameters are, this method gives you provable security against tampering with the code (as they state). There might be some mistakes in how this exactly works, though, as I'm writing this down from memory.
You could try mimicking that behaviour with a server you control (keep the code on the server, and hand out IP bans for attempts to probe it).
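A rough sketch of that server-based mimicry, assuming a hypothetical endpoint you control (the URL and payload shape are made up):

using System;
using System.Net.Http;
using System.Threading.Tasks;

static class RemoteLogic
{
    static readonly HttpClient Client = new HttpClient();

    // The sensitive function never ships with the client; only parameters and
    // results cross the wire. Rate-limit and ban abusive callers server-side.
    public static async Task<string> CallProtectedFunctionAsync(int input)
    {
        var response = await Client.PostAsync(
            "https://example.com/api/protected-function",   // hypothetical endpoint
            new StringContent(input.ToString()));
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}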

Related

Best approach for storing sensitive data without using a database or an .xml file

I'm developing an application in C# which uses an .xml file to obtain information that is used inside the logic of the app. When a new version of the app is launched, a setup is created (using Inno Setup Compiler), and after the setup installs successfully, the .xml file is placed among the app's setup files. This .xml file contains about 200 objects with 4 properties each of sensitive data.
I got asked to launch a customer version of the app, and a requirement for this version is to remove the access or manipulation of this .xml file, since it contains sensitive data that the customer should not be able to see or manipulate.
My senior engineer told me to simply embed the information in a list in the source code, so that the .xml file is no longer used and the customer can't manipulate this info once the app is installed, as it would be hidden inside the source code. But this seems really inefficient to me, and I would need to change a lot of logic around the use of the .xml file inside the app for this to work.
Is there a way to create a setup for the app that hides this file in the setup files so it can't be manipulated by the customer?
If there isn't, what approach would you suggest? Or do I have no option but to do it the hard way?
If you want to make it harder for the end user to modify the information, while still keeping a separate configuration file that won't require a code change in the application itself, you can sign the file and have the application verify the signature.
The simplest way is to calculate a hash over the file plus a "secret" value. Of course, it is hardly tamper-proof, but in the end there's no tamper-proof way to prevent users from manipulating data on their own computer. It's only a question of how hard you want to make it.
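A minimal sketch of that hash-plus-secret idea, using an HMAC over the file contents (the secret is a placeholder; since it ships inside the binary, this is obfuscation rather than real security):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class ConfigSigner
{
    // Computes a keyed hash of the file; verify by recomputing at startup and
    // comparing against the tag shipped alongside the file.
    public static string ComputeTag(string path, string secret)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        using (var stream = File.OpenRead(path))
        {
            return Convert.ToBase64String(hmac.ComputeHash(stream));
        }
    }
}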
A better way would be to use a proper certificate for the signature. The application knows only the public key and uses it to verify a signature created with the private key, which never leaves the development team.
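A sketch of that certificate-based variant (names are mine; the signature file is produced at release time with the private key, and the app ships only the public key):

using System;
using System.IO;
using System.Security.Cryptography;

static class ConfigVerifier
{
    // Returns true if signaturePath contains a valid signature of xmlPath
    // made with the private key matching publicKey.
    public static bool Verify(string xmlPath, string signaturePath, RSA publicKey)
    {
        byte[] data = File.ReadAllBytes(xmlPath);
        byte[] signature = File.ReadAllBytes(signaturePath);
        return publicKey.VerifyData(data, signature,
            HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }
}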
From a theoretical standpoint, if your program can get hold of the data, then a user with full control of the computer on which the program is running can also get hold of the data if they try hard enough. That means you can make it difficult for them, but you can't make it impossible. So if that's what you're trying to achieve, you need to be quite clear about the limitations of the approach.
How difficult should you make it? Well, if it's purely a commercial risk, you should make it hard enough that the cost of getting the data is greater than the benefit. If the risk can't be measured in that way, for example if there are legal requirements for you to protect the data, then that isn't going to be good enough.
For some situations, it's probably enough to encrypt the XML file, and bury the decryption key deep within the logic of a compiled program written in a language that doesn't allow easy decompilation. That's likely to be better than simply burying the XML data within the compiled program, which is what your senior engineer is suggesting. But her suggestion may be OK too. It really shouldn't be too difficult to change the program logic from reading an external file to reading a string constant within the program.
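A hedged sketch of the encrypt-the-XML approach (key and IV handling is up to you; whatever you bury in the binary can still be dug out eventually):

using System;
using System.IO;
using System.Security.Cryptography;

static class ProtectedConfig
{
    // Decrypts an AES-encrypted XML file; the plaintext exists only in memory.
    public static string LoadXml(string encryptedPath, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var decryptor = aes.CreateDecryptor(key, iv))
        using (var file = File.OpenRead(encryptedPath))
        using (var crypto = new CryptoStream(file, decryptor, CryptoStreamMode.Read))
        using (var reader = new StreamReader(crypto))
        {
            return reader.ReadToEnd();
        }
    }
}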

C# encryption - how to secure password

I am creating an application with the purpose of receiving an encrypted string, decrypting it and passing the clear text string as arguments to a PowerShell script.
The executable has to be self-contained and cannot connect to things like a SQL DB or anything similar. The cipher will always be the same, which means the password/salt can't really be random either.
I know that hardcoding the password/salt is not really a good idea, but I'm struggling with how to securely store a password/salt that doesn't change in a self-contained executable.
Right now, rather than having a static string as the password/hash, I create a password and salt based on the modified date of the executable itself (with a few more things done to it). If the executable changes I'll have to recreate the cipher, as the previous one can no longer be decoded, but at least I'm not literally hardcoding a password and/or salt.
Still, I'm not sure just how secure this is and am sure there has to be a better way.
Any suggestions?
EDIT
The only place where this will be used is inside a task sequence running inside SCCM, which means that users won't be able to interact with the computer at all during the time that the task sequence is running (assuming that debug mode is not enabled, else there's also far worse things to worry about).
So I could potentially pass it in clear text to the script, as no one would be able to read it since they can't interact with the PC, but then SCCM would automatically write it to its logs, which I obviously don't want. I could put it in the script, which would keep it out of the logs, but if someone got hold of the script, bearing in mind it's a script and not compiled code, they'd know the password.
Remember, the password/salt are not actually hardcoded strings; they are generated at runtime, so they will not show up as literals in a disassembler.
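For reference, a minimal sketch of the kind of derivation described above (the salt, iteration count, and "few more things" are placeholder assumptions; the derivation logic itself is of course still visible to anyone disassembling the binary):

using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;
using System.Text;

static class DerivedKey
{
    public static byte[] Derive(int lengthBytes)
    {
        string exe = Assembly.GetExecutingAssembly().Location;
        // Placeholder recipe: last-write ticks as the password, fixed salt, PBKDF2.
        string password = File.GetLastWriteTimeUtc(exe).Ticks.ToString();
        byte[] salt = Encoding.UTF8.GetBytes("example-salt-1234"); // assumption
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
        {
            return kdf.GetBytes(lengthBytes);
        }
    }
}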
This article can help you decide how to design your password storage:
http://flagdefenders.blogspot.in/2012/12/how-to-save-password-securely.html

What can I do to stop other people running my Windows RT code?

Apps downloaded from the Windows Store are installed in this location:
C:\Program Files\WindowsApps
If you look inside this folder you can access each application's .exe and use Reflector to decompile it.
Currently, my Windows RT application sends a password over SSL to a WCF service to ensure that only people using my app can access my database (via the service).
If my code can be read by anybody, how can I ensure that only people using my Windows 8 app are accessing the service?
Thanks!
In the very general sense, it is impossible. Whenever you create anything that is placed on the customer's computer, you will eventually stumble upon someone who manages to decipher your code and understand how to call your service. You may obfuscate it to insane levels, but it still has to be executable by the processor, so the processor has to understand it. And if it does, then anyone who knows assembly can potentially understand it too. You may obfuscate it cleverly so that cleaning the unimportant trash out of the code is very time-consuming, but still, at some point someone will read it.
One common defense is trying to detect who is actually trying to use your service. This is why all the "portals" require you to "register". This way, the application's identity is marginalized, and it is the user, who provides a login, password, PGP keys, etc., that is checked and verified as allowed to actually use your service.
Also, on the OS/framework layer, there are several ways to selectively provide "licenses" to your customers, and in your application you may then use keys/hashes from the licenses to authenticate against your service. This may partially relieve the user of the burden of remembering passwords, etc., or it may provide an additional authentication factor, or it may simply be a yes-no flag that allows the app to run or not. Still, it will not guard your code against being read. Licenses just help in verifying whether the software copy is legit and whether it belongs to that specific user/computer.
You may act selectively only against 'reflectoring' (or dotpeeking, or ildasming, or ...). Those tools really make decompilation easy (although the original Reflector is now paid software). So, the simplest form would be to use an obfuscator that makes decompilation impossible or harder - that cuts off some percentage of the potential code-readers, and you can assume the script kiddies are gone. Or you may skip obfuscators and write the service connector in native code (C++, not C++/CLI). That will make the code completely un-reflectorable and un-ildasmable, and that will cut off another large percentage of people, but some will still be left (me and thousands of others, but that's much less than millions).
While this does not give you a definitive answer, I wanted to show you that you can only reach some "level of hardness"; you cannot make the code totally safe from being read. This is why you should design the service access such that showing your code to a stranger on the street does not compromise your security.
Now getting to your problem: the core issue seems to lie not in the fact that your app uses some secret algorithm, but rather that you have hardcoded the password in. You see, with this approach they do not need to read your code at all. They just need to listen to what data your app sends over the socket.
Another issue is that everyone uses the same keyphrase.
A hardcoded magic string may be some sort of validation, but never authentication. If you want the app to be register-free, make the registration silent and automatic at first run. Of course, you will just bounce the problem: anyone could read the code and learn how to auto-register, and then they will make a clone. But, again, like I've said: you never know who's on the other side. Is it your app, or an ideal clone of it? Or maybe a clone that uses your own hacked-a-bit libraries to connect to you? If it looks like a duck and quacks like a duck, it is a duck.
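A rough sketch of that silent first-run registration, with the endpoint and token storage location invented for illustration:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

static class InstallIdentity
{
    static readonly string TokenPath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "MyApp", "install.token");                       // assumed location
    static readonly HttpClient Client = new HttpClient();

    // Returns the per-install token, registering silently on first run.
    public static async Task<string> GetTokenAsync()
    {
        if (File.Exists(TokenPath))
            return File.ReadAllText(TokenPath);

        // Hypothetical endpoint that issues a unique token per installation.
        string token = await Client.GetStringAsync("https://example.com/api/register");
        Directory.CreateDirectory(Path.GetDirectoryName(TokenPath));
        File.WriteAllText(TokenPath, token);
        return token;
    }
}

As the answer says, this only raises the bar: a clone can auto-register too, so treat the token as identification, not proof.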

Embedded IronPython Security

I am embedding IronPython into my game engine, where you can attach scripts to objects. I don't want scripts to be able to just access the CLR whenever they want, because then they could pretty much do anything.
Having random scripts, especially if downloaded from the internet, being able to open internet connections, access the user's HDD, or modify the internal game state is a very bad thing.
Normally people would just suggest, "Use a separate AppDomain". However, unless I am severely mistaken, cross-AppDomain calls are slow. Very slow. Too slow for a game engine. So I am looking at alternatives.
I thought about compiling a custom version of IronPython that stops you from being able to import clr or any namespace, thus limiting it to the standard library.
The option I would rather go with goes along the following lines:
__builtins__.__import__ = None # Stops imports working
reload = None # Stops reloading working (specifically, stops them reloading
              # builtins to get back an unbroken __import__!)
I read this in another Stack Overflow post.
Assume that instead of setting __builtins__.__import__ to None, I instead set it to a custom function that only lets you load the standard library.
The question is: using the method outlined above, would there be any way for a script to get access to the clr module, the .NET BCL, or anything else that could potentially do bad things? Or should I go with modifying the source? A third option?
The only way to guarantee it is to use an AppDomain. I don't know what the performance hit is; it depends on your use case, so you should measure it first to make sure that it actually is too slow.
If you only need a best-effort system, and if the scripts don't need to import anything, ever, and you supply all of the objects they need from the host, then your scheme should be acceptable. You can also avoid shipping the Python standard library, which will save some space.
You'll want to check the rest of the builtins for anything that might talk to the outside world; open, file, input, raw_input, and execfile come to mind, but there may be others. exec might be an issue as well, and as it's a keyword it might be trickier to turn off if there are openings there. Never underestimate the ability of a determined attacker!
I have embedded IronPython in apps before and shared similar security concerns. What I did to help mitigate the risk was to create special objects just for the scripting runtime that were essentially wrappers around my core objects and exposed only "safe" functionality.
Another benefit from creating objects just for scripting is that you can optimize them for scripting with helper functions that make your scripts more terse and tidy.
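A sketch of that wrapper pattern using the IronPython hosting API (GameObject here stands in for a real engine type; only the wrapper is handed to the script):

using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

// Stand-in for your real engine object.
public class GameObject
{
    public string Name = "player";
    public void Move(float dx, float dy) { /* engine logic */ }
}

// Scripts see only this surface: no file, network, or engine internals.
public class SafeGameObject
{
    private readonly GameObject inner;
    public SafeGameObject(GameObject inner) { this.inner = inner; }

    public string Name { get { return inner.Name; } }
    public void Move(float dx, float dy) { inner.Move(dx, dy); }
}

static class Scripting
{
    public static void Run(string code, SafeGameObject target)
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        scope.SetVariable("obj", target);   // the only door into the engine
        engine.Execute(code, scope);
    }
}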
AppDomain or not, there is nothing stopping somebody from loading an external .py module in their script... It's the price you pay for the flexibility.

How to disable an exe file after first installation?

Does anybody know a solution for this? I create an .exe file for my software. After the first installation I have to disable the .exe so it cannot be run again, because when someone purchases the software from me, they should be able to install it only once.
To do this you'll need to store something somewhere; that something could be one of the following (see the registry sketch after this list):
A file
A registry entry
A call to a web service you own that stores a unique identifier for the machine, and is checked on subsequent installation attempts (Note: If you choose this method you must be clear and up-front with your users that it's what you're doing).
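A minimal sketch of the registry option (Windows-only; the key path is illustrative, and as noted below a determined user can simply delete it):

using Microsoft.Win32;

static class InstallGuard
{
    const string KeyPath = @"Software\MyCompany\MyApp";   // assumed key path

    public static bool AlreadyInstalled()
    {
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(KeyPath))
        {
            return key != null && key.GetValue("Installed") != null;
        }
    }

    public static void MarkInstalled()
    {
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(KeyPath))
        {
            key.SetValue("Installed", 1);
        }
    }
}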
Bear in mind that a determined user will be able to circumvent the file and registry methods, and quite possibly the web service method too. The former two by using something such as Process Monitor to identify the files/registry entries you're writing and clearing them; the latter by using something like Fiddler to identify the web service calls you're making and replacing the responses with ones that bypass your protection.
Remember, ultimately the user can disassemble your code and remove the protection mechanisms you've put in place, so don't rely on them being 100% unbreakable.
Forget it, mate. It's software - you absolutely cannot enforce something like that, because the user has complete control over the environment where the binary runs, including reverse engineering, virtualization, backups, etc. And the ones you want to foil are precisely the ones who will go to any length to thwart any protection measure you could invent.
No, the only thing that works is to force an online connection and register, on your system, the fact that a particular binary was installed once, then forbid it the next time. That requires you to make each installer different and to have a cryptographically strong key generator, and it's still susceptible to replay attacks - but it's the only approach that is not useless by definition.
(Well, either that, or make your software so insanely great that people will fall in love with you and want to give you the money. That solution is probably even harder.)
You could store the installation path in the registry or some secret location and have your .exe check whether it was started from a location different from the stored one, and if so simply exit, as you probably don't want to tell the user what you are doing.
