I'm trying to use the Kerbal Space Program (KSP) binaries to read in one of the KSP part models. I've added the KSP binaries and UnityEngine.dll under References, and then I'm doing the following:
Planet p = new Planet();
Running this gives me the following security exception:
ECall methods must be packaged into a system module.
Can I bypass or fix this?
Edit:
I found out that it's not from the KSP DLLs but from the Unity ones; they use:
[WrapperlessIcall]
[MethodImpl (MethodImplOptions.InternalCall)]
public extern void StopAllCoroutines ();
I need a way to get past this.
MethodImplOptions.InternalCall merely indicates that StopAllCoroutines is implemented directly in the CLR rather than the BCL (or some other library). Here's MSDN.
As for revising Unity3D so that it doesn't call into this: you won't be able to bypass it.
Related
I was wondering how I could write something like an API. I have written an algorithm that I want to publish so people can use it, but I don't want people to see the code and steal it. Paranoid, I know, but still.
How is that done, so that in a C# script (the API would also be written in C#) I can include it (with "using ApiName") and use the functions inside? For instance, if the API has a function declared as "void Calculate(float x, float y)", then from a script they could call "Calculate(100, 200)". I know it's somehow possible because of the Windows API, etc. Also, is creating a Class Library the same thing?
Before any code runs, it is either compiled or interpreted into binary. This is highly simplified but that is the general idea. As long as a library or API provides an interface like names of functions, the implementation itself can be compiled and still work.
For C#, NuGet is a good example: you can create a NuGet package of your code (see https://learn.microsoft.com/en-us/nuget/create-packages/creating-a-package) where the public function and method signatures will be visible and usable, but the implementations will be compiled. DLLs work in a similar way: you can reference them and call their public members, but not see the code unless you use a tool to decompile them.
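As a minimal sketch of what that looks like in practice (the MathApi class, the namespace, and the method body here are hypothetical stand-ins for your own API), a class library exposes only public signatures; the bodies ship as compiled IL:

```csharp
using System;

namespace MyApiName
{
    // Consumers of the compiled DLL see this class and its signatures,
    // but not this source; the body is readable only via a decompiler.
    public class MathApi
    {
        public float Calculate(float x, float y)
        {
            // Stand-in for your proprietary algorithm.
            return x * y + x;
        }
    }
}
```

A consumer then writes "using MyApiName;" and calls new MathApi().Calculate(100, 200) without ever seeing this source. Obfuscation tools can further hinder decompilation, but nothing makes compiled code impossible to reverse.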
I have an SSIS (2012) package which has 16 (C#) script tasks so far, and there will be more. I am frustrated at having to basically copy code in for each component. I have a .cs file which I add to each script task, but as this is copied into the dtsx, if I update the 'common' code I have to modify every other component which contains it.
I have tried to create an external assembly so I can put it into the GAC and avoid this stupidity but I cannot get it to work - the problem seems to be that my common code requires a reference to the InheritedVariableDispenser to work. I have tried adding references to Microsoft.SqlServer.Dts.Runtime in the external assembly in order to handle this but as I say, this fails.
A partial solution would be to create a customised script task with my common routines pre-loaded but this doesn't seem possible either.
My last alternative option is to write a program which acts directly upon the dtsx file and modifies the XML to update all the sections which incorporate the common code. While this appeals to the hacker in me, it does seem like a little too much work, and a bit 'iffy'; it would only solve the update problem while still not allowing re-use in another package.
If anybody knows how to re-use code which references SSIS objects I would be most interested!
Does your common code really require access to the full VariableDispenser object, or just to the variables?
If you only need the variables themselves, then I'd suggest putting the common code in the external assembly. Use script components to consume the common code functionality; e.g.:
var foo = new MyLogic();
Dts.Variables["TransmogrifyResult"].Value = foo.Transmogrify(Dts.Variables["AnInput"].Value, Dts.Variables["AnotherInput"].Value);
If you absolutely must have access to the VariableDispenser itself, the MSDN VariableDispenser documentation warns that you may actually need to use an IDTSVariableDispenser100 instead.
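A sketch of what the external assembly could look like under that approach. The MyLogic and Transmogrify names come from the snippet above; the class deliberately takes plain values, so it needs no reference to Microsoft.SqlServer.Dts.Runtime and can go in the GAC without issue:

```csharp
using System;

// Lives in the external, GAC-deployable assembly. It never touches SSIS
// types: the script task reads Dts.Variables[...].Value, passes the plain
// values in, and writes the return value back to an SSIS variable.
public class MyLogic
{
    public object Transmogrify(object anInput, object anotherInput)
    {
        // Hypothetical body; your real common logic goes here.
        return string.Concat(anInput, "|", anotherInput);
    }
}
```

Keeping SSIS-specific code (variable access, logging via Dts.Events) in the thin script task and everything else in the shared assembly is what makes the assembly reusable across packages.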
Here's a rather unpleasant pickle that we got into on a client site. The client has about 100 workstations, on which we deployed version 1.0.0 of our product "MyApp".
Now, one of the things the product does is load up an add-in (call it "MyPlugIn"), which it first looks for on a central server to see if there's a newer version; if there is, it copies that file locally, then loads the add-in using Assembly.Load and invokes a certain known interface. This has been working well for several months.
Then the client wanted to install v1.0.1 of our product on some machines (but not all). That came with a new and updated version of MyPlugIn.
But then came the problem. There's a shared DLL, which is referenced by both MyApp and MyPlugIn, called MyDLL, which has a method MyClass.MyMethod. Between v1.0.0 and v1.0.1, the signature of MyClass.MyMethod changed (a parameter was added). And now the new version of MyPlugIn causes the v1.0.0 client apps to crash:
Method not found: MyClass.MyMethod(System.String)
The client pointedly does not want to deploy v1.0.1 on all client stations, being that the fix that was included in v1.0.1 was necessary only for a few workstations, and there is no need to roll it out to all clients. Sadly, we are not (yet) using ClickOnce or other mass-deployment utilities, so rolling out v1.0.1 will be a painful and otherwise unnecessary exercise.
Is there some way of writing the code in MyPlugin so that it will work equally well, irrespective of whether it's dealing with MyDLL v1.0.0 or v1.0.1? Perhaps there's some way of probing for an expected interface using reflection to see if it exists, before actually calling it?
EDIT: I should also mention - we have some pretty tight QA procedures. Since v1.0.1 has been officially released by QA, we are not allowed to make any changes to MyApp or MyDLL. The only freedom of movement we have is to change MyPlugin, which is custom code written specifically for this customer.
The point is that your changes have to be additions, not modifications. If you want to stay backward compatible in your deployment (and as far as I understand your current deployment strategy, that is your only option), you should never change the existing interface; instead, add new methods to it, and avoid tightly linking your plugin to the shared DLL by loading it dynamically. In this case:
you add new functionality without disturbing the old one;
you can choose which version of the DLL to load at runtime.
I have extracted this code from an application I wrote some time ago and removed some parts.
Many things are assumed here:
Location of MyDll.dll is the current directory
The Namespace to get reflection info is "MyDll.MyClass"
The class has a constructor without parameters.
You don't expect a return value
using System;
using System.IO;
using System.Reflection;

private void CallPluginMethod(string param)
{
    // Is MyDLL.dll in the current directory?
    // It is probably better to use Assembly.GetExecutingAssembly().Location, but...
    string libToCheck = Path.Combine(Environment.CurrentDirectory, "MyDLL.dll");
    Assembly a = Assembly.LoadFile(libToCheck);
    string typeAssembly = "MyDll.MyClass"; // Is this namespace correct?
    Type c = a.GetType(typeAssembly);

    // Get all public, non-static methods declared on the type
    MethodInfo[] miList = c.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

    // Search for the required method (could be optimized with LINQ)
    foreach (MethodInfo mi in miList)
    {
        if (mi.Name == "MyMethod")
        {
            // Create a MyClass object, assuming it has a parameterless constructor
            ConstructorInfo clsConstructor = c.GetConstructor(Type.EmptyTypes);
            object myClass = clsConstructor.Invoke(new object[] { });

            // Check how many parameters are required
            if (mi.GetParameters().Length == 1)
                // single-parameter (v1.0.0) signature
                mi.Invoke(myClass, new object[] { param });
            else
                // the other signature: build the right argument array here,
                // or throw if the overload found is not one you can satisfy
                mi.Invoke(myClass, null);
            break;
        }
    }
}
What we do here:
Dynamically load the library and extract the Type of MyClass.
Using the Type, ask the reflection subsystem for the list of MethodInfo entries present in that type.
Check every method name to find the required one.
When the method is found, build an instance of the type.
Get the number of parameters expected by the method.
Depending on the number of parameters, call the right version using Invoke.
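The loop above can be tightened by asking for a specific overload with Type.GetMethod and explicit parameter types, probing for the newer signature first and falling back to the older one. A self-contained sketch; MyClass here is a local stand-in for the type you would really obtain from Assembly.LoadFile, and the second argument's value is hypothetical:

```csharp
using System;
using System.Reflection;

// Local stand-in for the class inside MyDLL.
public class MyClass
{
    public string MyMethod(string a) { return "old:" + a; }
    public string MyMethod(string a, string b) { return "new:" + a + b; }
}

public static class VersionProbe
{
    public static object CallBestOverload(Type t, object instance, string param)
    {
        // Prefer the two-parameter (newer) overload; fall back to the old one.
        MethodInfo mi = t.GetMethod("MyMethod", new[] { typeof(string), typeof(string) })
                     ?? t.GetMethod("MyMethod", new[] { typeof(string) });
        if (mi == null)
            throw new MissingMethodException(t.FullName, "MyMethod");

        object[] args = mi.GetParameters().Length == 2
            ? new object[] { param, "default" }  // hypothetical value for the added parameter
            : new object[] { param };
        return mi.Invoke(instance, args);
    }
}
```

Because GetMethod returns null rather than throwing when no matching overload exists, the probe itself never triggers a MissingMethodException; you decide explicitly how to handle each case.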
My team has made the same mistake you have, more than once. We have a similar plugin architecture, and the best advice I can give you in the long run is to change this architecture as soon as possible. It is a maintainability nightmare: the backwards-compatibility matrix grows non-linearly with each release. Strict code reviews can provide some relief, but the problem is that you always need to know when methods were added or changed in order to call them in the appropriate way. Unless both the developer and the reviewer know exactly when a method was last changed, you run the risk of a runtime exception when the method is not found. You can NEVER safely call a new MyDLL method from the plugin, because you may be running on an older client that does not have the newest MyDLL version with that method.
For the time being, you can do something like this in MyPlugin:
static class MyClassWrapper
{
    internal static void MyMethodWrapper(string name)
    {
        try
        {
            MyMethodWrapperImpl(name);
        }
        catch (MissingMethodException)
        {
            // do whatever you need to do to make it work without the method;
            // this may go as far as re-implementing the method.
        }
    }

    private static void MyMethodWrapperImpl(string name)
    {
        MyClass.MyMethod(name);
    }
}
If MyMethod is not static you can make a similar non-static wrapper.
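For completeness, a sketch of the instance-method variant. MyClass is again a stand-in for the type in MyDLL; the key point is unchanged: the call to the possibly-missing method lives in its own small method, so the binding failure surfaces as a MissingMethodException when that method is JIT-compiled, and the wrapper's try/catch handles it:

```csharp
using System;

// Stand-in for the type in MyDLL whose signature changed between versions.
public class MyClass
{
    public void MyMethod(string name) { Console.WriteLine("MyMethod: " + name); }
}

class MyClassWrapper
{
    private readonly MyClass _inner = new MyClass();

    internal void MyMethodWrapper(string name)
    {
        try
        {
            MyMethodWrapperImpl(name);
        }
        catch (MissingMethodException)
        {
            // Fallback path for the old MyDLL, exactly as in the static version.
        }
    }

    // Isolated so that MyClass.MyMethod is only bound when this method is
    // JIT-compiled, i.e. when the wrapper is actually called.
    private void MyMethodWrapperImpl(string name)
    {
        _inner.MyMethod(name);
    }
}
```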
As for long term changes, one thing you can do on your end is to give your plugins interfaces to communicate through. You cannot change the interfaces after release, but you can define new interfaces that the later versions of the plugin will use. Also, you cannot call static methods in MyDLL from MyPlugIn. If you can change things at the server level (I realize this may be outside your control), another option is to provide some sort of versioning support so that a new plugin can declare it doesn't work with an old client. Then the old client will only download the old version from the server, while newer clients download the new version.
Actually, it sounds like a bad idea to change the contract between releases. Being in an object-oriented environment, you should rather create a new contract, possibly inheriting from the old one.
public interface MyServiceV1 { }
public interface MyServiceV2 { }
Internally you make your engine to use the new interface and you provide an adapter to translate old objects to the new interface.
public class V1ToV2Adapter : MyServiceV2 {
    public V1ToV2Adapter( MyServiceV1 svc ) { ... }
}
Upon loading an assembly, you scan it and:
when you find a class implementing the new interface, you use it directly
when you find a class implementing the old interface, you use the adapter over it
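That scan could be sketched like this. The contract and adapter names mirror the answer's snippets; implementations without a parameterless constructor are skipped to keep the sketch simple:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public interface MyServiceV1 { }
public interface MyServiceV2 { }

public class V1ToV2Adapter : MyServiceV2
{
    private readonly MyServiceV1 _inner;
    public V1ToV2Adapter(MyServiceV1 inner) { _inner = inner; }
}

public static class ContractLoader
{
    // Returns every service in the assembly as a MyServiceV2, wrapping
    // old-contract implementations in the adapter.
    public static List<MyServiceV2> Load(Assembly asm)
    {
        var result = new List<MyServiceV2>();
        foreach (Type t in asm.GetTypes())
        {
            if (t.IsInterface || t.IsAbstract) continue;
            if (t.GetConstructor(Type.EmptyTypes) == null) continue;

            if (typeof(MyServiceV2).IsAssignableFrom(t))
                result.Add((MyServiceV2)Activator.CreateInstance(t));
            else if (typeof(MyServiceV1).IsAssignableFrom(t))
                result.Add(new V1ToV2Adapter((MyServiceV1)Activator.CreateInstance(t)));
        }
        return result;
    }
}
```

The caller only ever sees MyServiceV2, so the rest of the engine stays free of version checks.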
Using hacks (like testing for the interface) will sooner or later bite you or anyone else using the contract: details of the hack have to be known to anyone relying on the interface, which sounds terrible from an object-oriented perspective.
In MyDLL 1.0.1, deprecate the old MyClass.MyMethod(System.String) and overload it with the new version.
Could you overload MyMethod to accept MyMethod(string) (version 1.0.0 compatible) and MyMethod(string, string) (the v1.0.1 version)?
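A sketch of what that overload pair could look like inside MyDLL v1.0.1. The second parameter, its default, and the method bodies are hypothetical; the point is that v1.0.0 callers keep binding to the one-argument signature, so no MissingMethodException is thrown:

```csharp
using System;

public class MyClass
{
    // v1.0.0-compatible signature, kept so old callers still bind.
    [Obsolete("Use MyMethod(string, string) from v1.0.1 onward.")]
    public string MyMethod(string input)
    {
        // Forward to the new overload with a default for the added parameter.
        return MyMethod(input, null);
    }

    // v1.0.1 signature with the added parameter.
    public string MyMethod(string input, string extra)
    {
        return extra == null ? input : input + ":" + extra;
    }
}
```

Note that this requires shipping the updated MyDLL, which the question's QA constraints may rule out; it is the right fix for the next release cycle.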
Given the circumstances, I think the only thing you can really do is have two versions of MyDLL running 'side by side', and that means something like what Tigran suggested: loading MyDLL dynamically.
As a side example, not directly related but possibly helpful, take a look at the RedemptionLoader (http://www.dimastr.com/redemption/security.htm#redemptionloader). That's for Outlook plugins, which often crash one another by referencing different versions of a helper DLL. It's a bit more complex because of the COM involved, but the idea is the same.
So that's what you can do: load the DLL dynamically by its location and name. You can specify the location internally (hard-coded), or set it up from config (or check first, and only do this if you see that MyDll is not the right version). Then 'wrap' the objects and calls from the dynamically loaded DLL to match what you normally have, or do some similar trick (you'd have to wrap something, or 'fork' the implementation) to make everything work in both cases.
Also, to add to the 'no-nos' and your QA sorrows :) they should not have broken backward compatibility from 1.0.0 to 1.0.1. Those are (usually) minor changes and fixes, not breaking changes; a major version number bump is needed for that.
Since my game, which I'd really like to be Mono-usable, does not seem to run under Linux because LuaInterface is being a jerk (see the relevant SO thread for more on that), I've decided to do what's suggested there. I wrote my own Lua511.LuaDLL class to mirror the one used by LuaInterface, replacing every single public function with its respective DllImport from lua51:
//For example, like this:
[DllImport("lua51")]
public static extern IntPtr luaL_newstate();
With the edited LuaInterface.dll (which now hosts its own Lua511.LuaDLL) and a pure, native Win32 lua51.dll in my game's startup folder, I somehow get a DllNotFoundException when LuaInterface tries to initialize:
public Lua()
{
luaState = LuaDLL.luaL_newstate(); //right there, right then.
...
Of course, with the DLL right there it shouldn't do that, right? Strangely, putting back the messed-up .NET version of lua51.dll gives an EntryPointNotFoundException instead. The mind boggles.
So what's up with that?
Relevant source code: Lua511.cs, dropped it in the LuaInterface project, then removed the reference so it'd be replaced.
Edit: Screw this, I'm gonna look for alternatives. Or roll my own. Or just stop caring about Linux compatibility.
You referenced my question. I went the other way to solve the problem and started to develop a new Lua .NET interface. I called it Lua4Net.
You can find the sources on Google hosting, and the unit tests as well.
Currently implemented features: Execute code with exception handling and provide the return values; register global functions with parameter handling.
Features that will follow: Get/set global variables; debugging support, ...
You can find the used native windows DLL here (it is the renamed VC++ 9.0 DLL from here).
AND: Today I ran my first Linux/Mono tests, and all my unit tests worked!!!
AFAIK, Mono uses the .so extension for native libraries under Linux by default.
Try renaming your lua51.dll to lua51.so, or change the DLL name in the DllImport attribute. Or use a dllmap.
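A dllmap is a small XML file that Mono reads next to the managed assembly; a sketch for this case (the target name assumes your distro installs the library as liblua5.1.so, so adjust it to whatever your system actually provides):

```xml
<!-- Save as LuaInterface.dll.config, next to LuaInterface.dll.
     Mono redirects every [DllImport("lua51")] to the mapped target. -->
<configuration>
  <dllmap dll="lua51" target="liblua5.1.so" />
</configuration>
```

This keeps the Windows build untouched: Windows ignores the .config mapping and loads lua51.dll as before.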
I was curious if anyone knew of a way of monitoring a .NET application's runtime info (what method is being called and such) and injecting extra code to be run on certain methods, from a separate running process.
Say I have two applications.
app1.exe, which for simplicity's sake could be:
class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            Somefunc();
        }
    }

    static void Somefunc()
    {
        Console.WriteLine("Hello World");
    }
}
And I have a second application that I wish to be able to detect when Somefunc() from application 1 is running, and inject its own code:
class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            if (App1.SomeFuncIsCalled)
                InjectCode();
        }
    }

    static void InjectCode()
    {
        App1.Console.WriteLine("Hello World Injected");
    }
}
So the result would be that application one shows:
Hello World
Hello World Injected
I understand it's not going to be this simple (by a long shot),
but I have no idea whether it's even possible and, if it is, where to even start.
Any suggestions?
I've seen similar things done in Java, but never in C#.
EDIT:
To clarify, the usage of this would be to add a plugin system to a .Net based game that I do not have access to the source code of.
It might be worth looking into Mono.Cecil for code injection. It won't work online the way you described in your question, but I believe it may do what you want offline (be able to find a given method, add code to it, and write out the modified assembly).
http://www.codeproject.com/KB/dotnet/MonoCecilChapter1.aspx
Well, to do that without App1's permission is difficult. But assuming you actually want to create an extension point in App1, it's relatively straightforward to do what you're suggesting with some kind of extensibility framework. I suggest:
SharpDevelop's add-in architecture
Mono.Addins
MS's Managed Extensibility Framework
Since I am more familiar with MEF, here's how it would look:
class Program
{
    [ImportMany("AddinContractName", typeof(IRunMe))]
    public IEnumerable<IRunMe> ThingsToRun { get; set; }

    void SomeFunc()
    {
        foreach (IRunMe thing in ThingsToRun)
        {
            thing.Run();
        }
        /* do whatever else ... */
    }
}
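To complete the MEF sketch, the [ImportMany] above still needs to be composed. A minimal, self-contained wiring; IRunMe, the "AddinContractName" contract string, and the HelloAddin class are hypothetical names carried over from the snippet:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IRunMe
{
    void Run();
}

// An add-in exporting itself under the agreed contract name.
[Export("AddinContractName", typeof(IRunMe))]
public class HelloAddin : IRunMe
{
    public void Run() { Console.WriteLine("Hello World Injected"); }
}

public class Host
{
    [ImportMany("AddinContractName", typeof(IRunMe))]
    public IEnumerable<IRunMe> ThingsToRun { get; set; }

    public void Compose()
    {
        // A real game host would scan a plugin folder with DirectoryCatalog;
        // AssemblyCatalog keeps this sketch self-contained.
        var catalog = new AssemblyCatalog(typeof(Host).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(this);
        }
    }
}
```

After Compose() runs, ThingsToRun holds one instance of every exported add-in, and the host can invoke them at its extension point.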
You need to use the Profiling API to make the second program profile the first one. You can then be notified of any method calls.
Another idea is to write an app that will change the exe you want to monitor. It would do things similar to what profiling tools do when they "instrument" your app. Basically, you use reflection to browse the app, then you re-create the exe (with a different file name) using the Emit features of .NET and insert your code at the same time.
Of course, if the app attempted to do things securely, this new version may not be allowed to communicate with its other assemblies.
With the clarifications you made in a comment, it seems to me you would be better off disassembling and reassembling using ildasm and ilasm.
Depending on how the author packaged/structured his application, you might be able to add a reference to his EXE file from your own project and just execute his code from within your own application. If that works, it's just a matter of using Reflection at that point to get at useful interfaces/events/etc that you can plug into (if they are marked private...otherwise just use them directly :). You can add a reference to any EXE built with the default settings for debug or release builds, and use it just like a DLL.
If that doesn't work there are advanced tricks you can use as well to swap out method calls and such, but that is beyond the scope of this reply...I suggest trying the reference thing first.
Injecting code into a running app is possible. Here is a working example, though it involves injection into an unmanaged DLL:
Inject code in the assembly with Mono Cecil
You can try out CInject to inject your C# or VB.NET code into assemblies.