I am embedding IronPython into my game engine, where you can attach scripts to objects. I don't want scripts to be able to just access the CLR whenever they want, because then they could pretty much do anything.
Having random scripts, especially ones downloaded from the internet, be able to open internet connections, access the user's HDD, or modify the internal game state is a very bad thing.
Normally people would just suggest, "Use a separate AppDomain". However, unless I am severely mistaken, cross-AppDomain calls are slow. Very slow. Too slow for a game engine. So I am looking at alternatives.
I thought about compiling a custom version of IronPython that stops you from being able to import clr or any .NET namespace, thus limiting scripts to the standard library.
The option I would rather go with goes along the following lines:
__builtins__.__import__ = None  # stops imports working
reload = None  # stops reloading working (specifically, stops them reloading builtins
               # and getting back an unbroken __import__!)
I read this in another Stack Overflow post.
Assume that instead of setting __builtins__.__import__ to None, I instead set it to a custom function that only lets you load the standard library.
The question is: using the method outlined above, would there be any way for a script to get access to the clr module, the .NET BCL, or anything else that could potentially do bad things? Or should I go with modifying the source? A third option?
The only way to guarantee it is to use an AppDomain. I don't know what the performance hit is; it depends on your use case, so you should measure it first to make sure that it actually is too slow.
If you only need a best-effort system, and if the scripts don't need to import anything, ever, and you supply all of the objects they need from the host, then your scheme should be acceptable. You can also avoid shipping the Python standard library, which will save some space.
You'll want to check the rest of the builtins for anything that might talk to the outside world; open, file, input, raw_input, and execfile come to mind, but there may be others. exec might be an issue as well, and since it's a keyword rather than a builtin it will be trickier to turn off if there are openings there. Never underestimate the ability of a determined attacker!
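If it helps, here is roughly what closing those doors looks like from the hosting side. This is only a sketch, not the poster's code, and it assumes the IronPython 2.x hosting API (Python.CreateEngine and Python.GetBuiltinModule):

using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

static class SandboxedEngine
{
    public static ScriptEngine Create()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope builtins = Python.GetBuiltinModule(engine);

        // Break importing entirely; a whitelisting __import__ delegate could be
        // installed here instead if parts of the standard library should stay usable.
        builtins.SetVariable("__import__", null);
        builtins.SetVariable("reload", null);

        // Builtins that can reach the file system or console.
        foreach (string name in new[] { "open", "file", "input", "raw_input", "execfile" })
        {
            if (builtins.ContainsVariable(name))
            {
                builtins.RemoveVariable(name);
            }
        }

        return engine;
    }
}

None of this is a hard security boundary -- as said above, only a separate AppDomain gives you that -- but it removes the obvious escape hatches.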
I have embedded IronPython in apps before and shared similar security concerns. What I did to help mitigate the risk was to create special objects just for the scripting runtime that were essentially wrappers around my core objects and only exposed "safe" functionality.
Another benefit from creating objects just for scripting is that you can optimize them for scripting with helper functions that make your scripts more terse and tidy.
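For what it's worth, the wrappers can stay very thin. A rough sketch, where GameObject and its members stand in for your real engine types and the scope.SetVariable call assumes an IronPython-style host:

public class ScriptedObject
{
    private readonly GameObject _inner;   // the real engine object never reaches the script

    public ScriptedObject(GameObject inner)
    {
        _inner = inner;
    }

    // Only the members considered safe are exposed, with script-friendly names.
    public string Name { get { return _inner.Name; } }

    public void MoveTo(float x, float y)
    {
        _inner.SetPosition(x, y);
    }
}

// Handed to the script, e.g.: scope.SetVariable("self", new ScriptedObject(obj));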
AppDomain or not, there is nothing stopping somebody from loading an external .py module in their script... It's the price you pay for the flexibility.
I'm looking for a Unity function to determine if my game has been de-compiled/recompiled or modified in any way.
Yes, there is a Unity function for this but it can still be circumvented.
This can be done with Application.genuine which returns false when the application is altered in any way after it was built.
// Application.genuine is only meaningful when genuineCheckAvailable is true
if (Application.genuineCheckAvailable)
{
    if (Application.genuine)
    {
        Debug.Log("Not tampered with");
    }
}
The problem is that if the person is smart enough to de-compile, modify and recompile the game, he/she can also remove the check above, which makes it useless. Any type of genuineness or authenticity check can be removed as long as it is running on the player's machine.
EDIT
You can make it harder to be circumvented by doing the following:
1. Go to File --> Build Settings..., then select your platform.
2. Click Player Settings --> Other Settings, then change the Scripting Backend from Mono to IL2CPP (C++).
This will make it harder to circumvent, but it is still possible.
TL;DR: That's frankly not possible.
You can never determine whether your program was decompiled, because there exists no measure to detect that it happened, and every executable can be disassembled into at least assembler even if you scramble and screw up your data.
You can make your source code hard to understand, though, using obfuscation software. The ultimate obfuscator would be the M/o/Vfuscator, which changes all assembler commands into mov instructions, making it a pain in the butt to understand anything. But it is also slow as heck and probably not what you want (by the way, this works because the mov instruction is Turing-complete in the x86 instruction set; there is a great talk about it here). If you follow this trend further down the rabbit hole, you can also use the exact same assembler code (around 10-20ish instructions) to create every possible program, which will make it impossible to get to your source code by simply disassembling it.
Staying in the realm of the possible though: No, you are not able to prevent people from disassembling or decompiling your code. But you can make it harder (not impossible) to understand.
Detecting a change in the executable is on the possible side, though. Although probably not feasible for you.
The main problem being that any code you build into the app to detect changes can be patched away. So you'll need to prevent that. But there is no practical way of preventing that...
If, for example, you try to detect changes in your app by comparing a signature of the original against the actual signature, the attacker can just exclude that check in the recompiled version. You can try to verify the signature against a server, but that can still be circumvented by removing the server check. You can force a server check for multiplayer games, but then we'll just use a fake signature. If you then want to calculate the signature on your server to prevent tampering, we'll just give you the original file and run the recompiled one.
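To make the point concrete, the kind of check being talked about is roughly this (a sketch only; the expected hash is a placeholder, and with IL2CPP builds you would hash the built binaries rather than a managed assembly):

using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;

static class IntegrityCheck
{
    // Compares a SHA-256 of the running assembly against a known-good value.
    public static bool LooksUnmodified(string expectedHexHash)
    {
        string path = Assembly.GetExecutingAssembly().Location;
        using (SHA256 sha = SHA256.Create())
        using (FileStream stream = File.OpenRead(path))
        {
            string actual = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
            return string.Equals(actual, expectedHexHash, StringComparison.OrdinalIgnoreCase);
        }
    }
}

And exactly as described above, whoever recompiles the game can simply delete the call to LooksUnmodified, so it only raises the bar slightly.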
There is one way (although not feasible, as mentioned above) to actually, absolutely protect parts of your code against decompiling. The mechanism is called "BlurryBox" and was developed at KIT in Germany. As I can't seem to find a proper document to reference, here is what it does to achieve this.
The mechanism uses a stick with encrypted storage and a microcontroller to do the encryption. You put the parts of your code you want to protect (something that is called regularly and is necessary, but not that time-critical) into the encrypted storage. As it is impossible to retrieve the key [citation needed], you cannot access the code. The microcontroller then takes commands from your program to call one of the encrypted functions in the storage with given parameters and to return the result. Because it is not possible to read the code, you need to analyze its behaviour. Here comes the "Blurry" part of the box: each function you store needs to have a small and well-defined set of allowed parameters. Every other set of parameters leads into a trap that kills your device. As the attacker has no specs as to what the valid parameters are, this method gives you provable security against tampering with the code (as they state). There might be some mistakes in how this exactly works, though, as I'm writing this down from memory.
You could try mimicking that behaviour with a server you control (code on the server and IP bans for trying to understand the code).
I'm trying to write a visualiser for some code which generates graphics for barcodes and labels. The way I want to do this is by recording the methods + parameters being run to a file, so I can play them back and see the visual output generated at each stage (a kind of visual debugger to help me fix issues with measurements in the drawing).
I have access to the methods, and I can put anything I like in them - but I'm stuck on the best way to record the method signature being called and the parameters, especially since a lot of them are overloads etc.
Is there anything simple that will help me serialize/record the actual method call information? (With a view to replaying it back, so I need to programmatically load the information and call it.) Perhaps something reflection-related?
Note: I'm an intern on the project I'm working on, and I'm probably not allowed to introduce new assemblies etc. into the build, so I think aspect-based things requiring libraries are out. (At the same time, I'm not just asking a question I should be figuring out myself - this is more an additional thing I'm doing during my lunch break to help my main task.)
It might be a good idea to start from an existing profiler as a base - e.g. from http://code.google.com/p/slimtune/
Note that profilers themselves are quite complicated - for .Net they require some C++/COM knowledge - but if you start from a base like slimtune, then hopefully you'll be able to avoid this core code and will instead be able to focus on your own visualisation requirements.
Recording the method name itself is easy, parameters will be more difficult. I think the only way to generically retrieve the parameters is to use reflection--the alternative is to have an ungodly amount of logging code where you explicitly log every parameter.
Also consider that you'll need all parameters to be serializable, and depending on how you want the file to be used (by a program vs. human readable) you might have to implement quite a bit of boilerplate serialization code.
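If reflection plus hand-rolled logging is acceptable, a minimal version needs nothing outside the framework. This is only a sketch; the Record helper and the log format are made up for illustration:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

static class CallRecorder
{
    private static readonly object Sync = new object();

    // Call as: CallRecorder.Record(MethodBase.GetCurrentMethod(), width, height, text);
    public static void Record(MethodBase method, params object[] args)
    {
        // Parameter types are included so overloads can be told apart on playback.
        string signature = method.DeclaringType.FullName + "." + method.Name + "(" +
            string.Join(", ", method.GetParameters().Select(p => p.ParameterType.FullName).ToArray()) + ")";
        string values = string.Join("|", args.Select(a => a == null ? "<null>" : a.ToString()).ToArray());

        lock (Sync)
        {
            File.AppendAllText("calls.log", signature + ";" + values + Environment.NewLine);
        }
    }
}

Anything more complex than primitives will need real serialization, as noted above.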
You should really consider existing profiling tools and testing tools rather than thinking of inventing something new. It sounds like performance tests or integration tests may be more valuable than a "playback" utility.
Quite a few people have really taken an interest in the DLLs I've sent them, and they're not the type that should be given away for free too often...
I was just wondering, if I were to sell my components, user controls etc., how would I go about protecting them, in terms of ownership/encrypting the code (if possible), etc.? What steps have you taken to help prevent people using yours without paying for them?
You can use any commercial obfuscator, which encrypts your functionality and causes an error if someone tries to decompile it.
Here I have the whole list of what is available on the market.
I have used many of them; some just encrypt strings, public methods, private methods, properties and so on.
Just go through it.
See the whole list and article.
The only truly secure way to protect your dll is not to give it to them. Expose it instead via a web-service etc (obviously this doesn't work in all cases). Every obfuscator can be broken with patience. Think how much the games industry spends on this, and things are broken / reverse-engineered within days, sometimes hours.
"Lawyers" may serve as a layer of protection, and obfuscation will certainly discourage idle browsing. But a determined hacker (for example, for commercial illegal spying) will be able to get at your code eventually.
I guess you simply need to weigh the costs and benefits...
Well, I will definitely put my copyright, company name and product name information into my DLL. Whenever anybody uses it, that information still appears in my DLL. And if possible, I will try to use the Dotfuscator tool from Visual Studio, which helps to obfuscate my DLL.
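For reference, that copyright/company/product information usually goes into the assembly metadata via the standard attributes, typically in Properties/AssemblyInfo.cs (the values below are placeholders):

using System.Reflection;

[assembly: AssemblyCompany("Example Company")]                 // placeholder
[assembly: AssemblyProduct("Example Product")]                 // placeholder
[assembly: AssemblyCopyright("Copyright © Example Company")]   // placeholder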
We are about to use code protectors (obfuscation as well as native compilation). I assume ORMs depend at least a little on reflection, and I am worried that obfuscation and native-compilation protection techniques may create problems.
Has anyone successfully combined an ORM and code protection in a good desktop application? We have a WPF desktop application.
Our primary language for development is C# and we are using our custom ORM but I want to evaluate any commercial ORM or ADO.NET EF etc as well.
The question is not about what code protection is or which one I should use; I am asking about the effect of protection on the ORM.
If your code is using reflection, most probably the obfuscated assembly will not work. You will need to exclude from obfuscation those entities referenced by their original name. Take a look at Crypto Obfuscator, which will analyze your code during obfuscation and show all methods and line numbers where potentially breaking methods (such as reflection) are called. This is a huge time-saver, since it pinpoints the exact location and helps determine the properties/classes you need to exclude from renaming.
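If your obfuscator honours the standard ObfuscationAttribute (many do, but check its documentation), the exclusions can also be expressed directly in code. A sketch, with Customer as a stand-in for one of your mapped entities:

using System.Reflection;

// Keep the original type and member names so the ORM's reflection-based
// mapping continues to resolve them after obfuscation.
[Obfuscation(Exclude = true, ApplyToMembers = true)]
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}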
Try .Net Reactor. Available at http://www.eziriz.com/
It's a lot cheaper than some of the others around, and it can do a lot more. You can also disable certain options (like obfuscation, to preserve the use of reflection) and only have certain options enabled, like ILDASM suppression, which will still protect the code.
Cheers
Redgate acquired Smart Assembly not too long ago, which is what I'd look at if I had a need to do this.
A while ago I trialled CodeVeil to look at obfuscating/encrypting code, with some degree of success. I think if you're serious about doing this it's not as simple as dropping an assembly in one end and having a protected assembly pop out the other. You will have to consider which portions of your code (i.e. namespaces, classes, methods, fields, properties, structures, events, and resources) are only to be used internally, and which need to be exposed to other resources and libraries. In the case I was looking at, I was able to encrypt (or use native compilation) to hide some method implementations, but left the class definitions (names, methods, properties) untouched. In some cases I left whole namespaces untouched, as they contained only simple POCO objects required by other libraries.
It really seems to be a careful case-by-case decision as to which strategy you use where: some internals you could obfuscate to make decompilation/reverse engineering hard, and that would be enough; in other cases you could use encryption/native compilation simply to hide a method implementation; and you will also get cases where you exclude portions of an assembly from being touched at all. Most of these programs will give you some recommended defaults and options that you can start from, but you will need to tweak and change these until you can produce results that protect your core IP but don't restrict your end users.
How do I protect the dlls of my project in such a way that they cannot be referenced and used by other people?
Thanks
The short answer is that beyond the obvious things, there is not much you can do.
The obvious things that you might want to consider (roughly in order of increasing difficulty and decreasing plausibility) include:
Static link so there is no DLL to attack.
Strip all symbols.
Use a .DEF file and an import library to have only anonymous exports known only by their export ids.
Keep the DLL in a resource and expose it in the file system (under a suitably obscure name, perhaps even generated at run time) only when running.
Hide all real functions behind a factory method that exchanges a secret (better, proof of knowledge of a secret) for a table of function pointers to the real methods.
Use anti-debugging techniques borrowed from the malware world to prevent reverse engineering. (Note that this will likely get you false positives from AV tools.)
Regardless, a sufficiently determined user can still figure out ways to use it. A decent disassembler will quickly provide all the information needed.
Note that if your DLL is really a COM object, or worse yet a CLR Assembly, then there is a huge amount of runtime type information that you can't strip off without breaking its intended use.
EDIT: Since you've retagged to imply that C# and .NET are the environment rather than a pure Win32 DLL written in C, then I really should revise the above to "You Can't, But..."
There has been a market for obfuscation tools for a long time to deal with environments where delivery of compilable source is mandatory, but you don't want to deliver useful source. There are C# products that play in that market, and it looks like at least one has chimed in.
Because loading an Assembly requires so much effort from the framework, it is likely that there are permission bits that exert some control for honest providers and consumers of Assemblies. I have not seen any discussion of the real security provided by these methods and simply don't know how effective they are against a determined attack.
A lot is going to depend on your use case. If you merely want to prevent casual use, you can probably find a solution that works for you. If you want to protect valuable trade secrets from reverse engineering and reuse, you may not be so happy.
You're facing the same issue as proponents of DRM.
If your program (which you wish to be able to load and run the DLL) is runnable by some user account, then there is nothing that can stop a sufficiently determined programmer who can log on as that user from isolating the code that performs the decryption and using it to decrypt your DLL and run it.
You can of course make it inconvenient to perform this reverse engineering, and that may well be enough.
Take a look at the StrongNameIdentityPermissionAttribute. It will allow you to declare which callers may access your assembly. Combined with a good code protection tool (like CodeVeil; disclaimer: I sell CodeVeil), you'll be quite happy.
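A sketch of what that declarative check can look like; the public key is a placeholder, and note that these CAS link demands are only enforced on older .NET Framework runtimes:

using System.Security.Permissions;

// Only callers signed with the matching strong-name key can link against this type.
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
    PublicKey = "002400000480000094000000060200000024000052534131...")]  // placeholder key
public class LicensedComponent
{
    public void DoWork()
    {
        // real functionality here
    }
}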
You could embed it into your executable, then extract it and LoadLibrary it at runtime and call into it. Or you could use some kind of shared key to encrypt/decrypt the accompanying file and do the same as above.
I'm assuming you've already considered solutions like compiling it in if you really don't want it shared. If someone really wants to get to it though, there are many ways to do it.
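A sketch of the embed-and-load idea for a managed DLL (the resource name is a placeholder, and an encryption step over the bytes would slot in before Assembly.Load):

using System.IO;
using System.Reflection;

static class EmbeddedLoader
{
    public static Assembly LoadProtected()
    {
        Assembly host = Assembly.GetExecutingAssembly();
        using (Stream resource = host.GetManifestResourceStream("MyApp.Protected.dll"))
        using (MemoryStream buffer = new MemoryStream())
        {
            resource.CopyTo(buffer);
            // The dependency is loaded from memory, so no plain DLL sits on disk.
            return Assembly.Load(buffer.ToArray());
        }
    }
}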
Have you tried .NET Reactor? I recently came across it. Some people say it's great, but I am still testing it out.
Well, you could mark all of your "public" classes as "internal" or "protected internal", then mark your assemblies with the [assembly:InternalsVisibleTo("")] attribute, and no one but the named friend assemblies can see the contents.
You may be interested in the following information about Friend assemblies:
http://msdn.microsoft.com/en-us/library/0tke9fxk(VS.80).aspx
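A small sketch of that friend-assembly approach ("Customer.App" is a placeholder for the assembly you actually want to grant access to; for real protection the friend should be strong-named and listed with its public key):

using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("Customer.App")]   // placeholder friend assembly name

namespace MyLibrary
{
    // internal instead of public: invisible to everything except the friends above
    internal class LabelRenderer
    {
        internal void Render()
        {
            // drawing code here
        }
    }
}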