OpenTK creates bindings to OpenGL by first defining a delegate whose signature matches the target C function:
[System.Security.SuppressUnmanagedCodeSecurity()]
internal delegate void Uniform1f(Int32 location, Single v0);
internal static Uniform1f glUniform1f;
It then assigns glUniform1f a value returned from a platform-specific OpenGL GetProcAddress function.
If I don't use OpenTK's approach and instead just P/Invoke the function using DllImport, will my code perform more slowly? (In other words, is there any performance benefit to using a delegate?)
No; if anything, there will be a performance hit (although an incredibly insignificant one in most cases) because you are using a delegate.
Remember, a delegate is a reference to a method. Every time it's called, that reference has to be dereferenced. Compare this to a method call that's compiled into your code: the runtime knows exactly where it has to go, as the method reference is baked into the IL.
Note that delegate performance has improved significantly since .NET 3.5, the release that introduced LINQ. With LINQ, delegates were going to be used extremely heavily, and with them being that ubiquitous, they had to be fast.
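The difference in call mechanics is easy to poke at with a crude micro-benchmark. This is only a sketch; the absolute numbers vary wildly by machine and JIT, and the only takeaway is that both call forms are cheap:

```csharp
using System;
using System.Diagnostics;

public static class DelegateCostDemo
{
    public static int Twice(int x) => 2 * x;

    // Times n direct calls vs n delegate calls. The delegate call pays one
    // extra indirection through the delegate object; the direct call target
    // is baked into the IL at the call site.
    public static (long direct, long viaDelegate) Measure(int n)
    {
        Func<int, int> d = Twice;
        int sink = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) sink += Twice(i);   // direct call
        long direct = sw.ElapsedTicks;

        sw.Restart();
        for (int i = 0; i < n; i++) sink += d(i);       // delegate call
        long viaDelegate = sw.ElapsedTicks;

        if (sink == int.MinValue) Console.WriteLine();  // keep sink observable
        return (direct, viaDelegate);
    }

    public static void Main()
    {
        var (direct, viaDelegate) = Measure(10_000_000);
        Console.WriteLine($"direct: {direct} ticks, delegate: {viaDelegate} ticks");
    }
}
```

On any modern runtime the two figures come out within the same order of magnitude, which is the point: the indirection exists but rarely matters.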
A possible reason you are seeing delegates being used is because the DLL that contains the unmanaged code needs to be determined at runtime (perhaps because of naming issues, processor-specific builds distributed together under different names, etc.).
In this case, a call is made to the unmanaged LoadLibrary Windows API function, followed by a call to the unmanaged GetProcAddress Windows API function.
Once the function pointer has been retrieved, it is passed to the GetDelegateForFunctionPointer method on the Marshal class in order to get the delegate.
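A minimal sketch of that pattern, using the cross-platform NativeLibrary class (available since .NET Core 3.0) in place of direct LoadLibrary/GetProcAddress P/Invokes. The "libc.so.6" library name assumes a glibc-based Linux system; on Windows you would load your OpenGL or other native DLL instead:

```csharp
using System;
using System.Runtime.InteropServices;

public static class ProcAddressDemo
{
    // Delegate whose signature matches the target C function: int abs(int)
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    public delegate int AbsFn(int value);

    public static AbsFn LoadAbs()
    {
        // NativeLibrary wraps LoadLibrary/GetProcAddress on Windows and
        // dlopen/dlsym elsewhere.
        IntPtr lib = NativeLibrary.Load("libc.so.6");
        IntPtr proc = NativeLibrary.GetExport(lib, "abs");

        // Convert the raw function pointer into a callable delegate.
        return Marshal.GetDelegateForFunctionPointer<AbsFn>(proc);
    }

    public static void Main()
    {
        AbsFn abs = LoadAbs();
        Console.WriteLine(abs(-42)); // prints 42
    }
}
```

This is exactly the shape of OpenTK's approach: the delegate type pins down the signature, and the address is resolved at runtime rather than at link time.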
Related
I have read this and this and was wondering: if I use, in C#, functions from an unmanaged C++ library via a C# wrapper of that library, is there going to be any difference in performance compared with the same program written fully in unmanaged C++ against the C++ library? I am asking about a crucial performance difference, bigger than 1.5 times. Notice I am asking about the performance of the C++ library's functions only (in the two ways: with and without the C# wrapper), isolating the other code!
After edit:
I was just wondering: if I want to use a C++ dynamic unmanaged library (.dll) in C# and I am using a wrapper, which part is going to be compiled to intermediate CIL code and which is not? I guess only the wrapper is compiled to CIL, and when C# wants to use a C++ function from the library it just parses and passes the arguments to the C++ function through the wrapper, so there will maybe be some delay, but not as much as if I had written the whole library in C#. Correct me if I am mistaken, please.
Of course, there is overhead involved in switching from managed to unmanaged code execution. It is very modest, taking about 12 CPU cycles. All that needs to be done is write a "cookie" on the stack so that the garbage collector can recognize that subsequent stack frames belong to unmanaged code and therefore should not be inspected for valid object references.
These cookies are strung together like a linked-list, supporting the scenario where C# code calls native code which in turn calls back into managed code. Traversed by the GC when it collects. Not as uncommon as it sounds, it happens in any GUI app for example. The Click event is a good example, triggered when the UI thread pinvokes GetMessage().
That's not the only thing that needs to happen, however; in any practical scenario you also pass arguments to the native function, and they can require a lot more work to get marshaled into a format that the native code can understand. Arrays in particular: if the array elements are blittable they only need to be pinned, which is still pretty cheap. It gets expensive when the entire array needs to be converted because the element type is not blittable. That is not always easy to recognize; a profiler is forever the proper tool to detect inefficient code.
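Whether an array only needs pinning can be checked directly: GCHandle refuses to pin arrays of non-blittable elements, which is exactly the case where the marshaller must fall back to converting and copying. A small sketch:

```csharp
using System;
using System.Runtime.InteropServices;

public static class BlittableDemo
{
    // Returns true if the runtime can pin the array in place, i.e. the
    // marshaller could hand its raw address to native code without copying.
    public static bool CanPin(Array array)
    {
        try
        {
            GCHandle h = GCHandle.Alloc(array, GCHandleType.Pinned);
            h.Free();
            return true;
        }
        catch (ArgumentException)
        {
            // Non-blittable element type: the marshaller must convert/copy.
            return false;
        }
    }

    public static void Main()
    {
        Console.WriteLine(CanPin(new double[] { 1, 2, 3 }));  // True  (blittable)
        Console.WriteLine(CanPin(new[] { "a", "b" }));        // False (not blittable)
    }
}
```

double, int, and other primitive element types are blittable and stay cheap; string, bool, and struct types containing references force the expensive conversion path.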
I used a native method call in C# with the DllImport feature. I want to know whether I should release memory for the method parameters manually on the native side.
Currently, I send a double[] array to the native method, and the native method receives the parameter as a double* type. Should I release the double* in the native method?
No, you should let .NET handle the memory management itself. The native code marshaller follows the basic rules for COM interop, which also happen to work most of the time for P/Invoke, since Win32 also follows the rules. (There are exceptions, but they'd be called out in the Windows API documentation).
Since you wrote both ends of the P/Invoke call, you should follow those same rules to make your life easy. As far as memory allocations are concerned, most of the time the caller is responsible for freeing any memory that crosses the P/Invoke boundary, since the callee doesn't know if/when it's safe to do so. This includes:
If you allocate memory for a parameter and pass it in
If there is an out parameter or return value that is allocated by the callee and returned
In both cases, only the caller knows when the memory is no longer needed and is safe to be freed. In the case of a P/Invoke call, the run-time marshaller knows this, and it will allocate memory to marshal your double[] to a double * before it makes the call, then free that memory when the call returns. Depending on the combination of ref, out, [In] or [Out] attributes, it may or may not try to copy the data back into your double[], but it will always free that memory.
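To make those rules concrete, here is a small sketch in which the run-time marshaller does all of the memory work. It borrows glibc's memcpy purely as a stand-in native function, so it assumes a Linux system with glibc; the same declaration shape applies to any native DLL of your own:

```csharp
using System;
using System.Runtime.InteropServices;

public static class ArrayMarshalDemo
{
    // glibc memcpy, used here only to demonstrate array marshaling. Both
    // double[] parameters are blittable, so the marshaller pins them and
    // passes raw addresses for the duration of the call. Neither side
    // allocates or frees anything by hand.
    [DllImport("libc.so.6", EntryPoint = "memcpy")]
    static extern IntPtr MemCpy([Out] double[] dest, [In] double[] src, UIntPtr count);

    public static double[] Copy(double[] src)
    {
        var dest = new double[src.Length];
        MemCpy(dest, src, (UIntPtr)(ulong)(src.Length * sizeof(double)));
        return dest; // dest was filled in by native code; GC owns both arrays
    }

    public static void Main()
    {
        double[] copy = Copy(new[] { 1.5, 2.5 });
        Console.WriteLine(copy[1]); // prints 2.5
    }
}
```

When the call returns, the pins are released and the GC remains the owner of both arrays, which is precisely why the native side must not free the double* it receives.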
I'm trying to familiarize myself with delegates and on http://msdn.microsoft.com/en-us/library/aa288459(v=vs.71).aspx, I'm reading:
"Unlike function pointers in C or C++, delegates are object-oriented, type-safe, and secure."
I mean, I do have a C++ background, and can't quite see how to interpret the word "unlike" there. What do they mean by saying delegates are object-oriented and C++ function pointers are not? The same goes for type-safe and secure.
Anyone could show few examples and contra-examples?
Thanks.
A delegate does quite a bit more than a function pointer. It not only stores the function address, it also stores a reference to the target object. Unlike a C++ method pointer. So it is simple to use to call an instance method. That takes care of the "object-oriented" claim.
It goes a bit downhill from there, but type safety is ensured by the compiler verifying that the function signature exactly matches the delegate type when you assign the delegate. That's no different from C++, except that here there is no way to cast a mismatch away. Another security aspect is that the object reference held by the delegate object is visible to the garbage collector, so you can never call an instance method on a deleted object.
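Both claims are visible in a few lines; a minimal sketch:

```csharp
using System;

public class Counter
{
    private int _count;
    public int Increment() => ++_count;
}

public static class DelegateDemo
{
    public static void Main()
    {
        var counter = new Counter();

        // The delegate stores BOTH the method address and the target object,
        // so invoking it calls the instance method on that instance. A plain
        // C function pointer has nowhere to put the "this" reference.
        Func<int> bump = counter.Increment;
        bump();
        Console.WriteLine(bump()); // prints 2

        // Type safety: the signature must match exactly, and unlike a
        // C-style function pointer there is no cast that can silence it:
        // Func<string> bad = counter.Increment;  // compile-time error
    }
}
```

As long as the delegate is reachable, so is `counter`, which is what makes the "call a method on a deleted object" failure mode impossible.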
I have two questions, stemming from observed behavior of C# static methods (which I may be misinterpretting):
First:
Would a recursive static method be tail call optimized in a sense by the way the static method is implemented under the covers?
Second:
Would it be equivalent to functional programming to write an entire application with static methods and no variables beyond local scope? I am wondering because I still haven't wrapped my head around this "no side effects" term I keep hearing about functional programming.
Edit:
Let me mention, I do use and understand why and when to use static methods in the normal C# OO methodology, and I do understand that tail call optimization will not be explicitly done to a recursive static method. That said, I understand tail call optimization to be an attempt at stopping the creation of a new stack frame with each pass, and I had at a couple of points observed what appeared to be a static method executing within the frame of its calling method, though I may have misinterpreted my observation.
Would a recursive static method be tail call optimized in a sense by the way the static method is implemented under the covers?
Static methods have nothing to do with tail recursion optimization. All the rules apply equally to instance and static methods, but personally I would never rely on the JIT optimizing away my tail calls. Moreover, the C# compiler doesn't emit the tail call IL instruction, yet the optimization is sometimes performed anyway. In short, you never know.
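Since you never know, the portable approach is to do the recursion-to-loop transformation yourself. A minimal sketch (the recursive version may or may not be optimized by the JIT; the iterative one needs no luck):

```csharp
using System;

public static class TailDemo
{
    // Tail-recursive sum of 1..n with an accumulator. C# makes no guarantee
    // this becomes a loop; for large enough n it can overflow the stack.
    public static long SumRecursive(int n, long acc = 0) =>
        n == 0 ? acc : SumRecursive(n - 1, acc + n);

    // The safe, portable alternative: write the loop transformation by hand.
    public static long SumIterative(int n)
    {
        long acc = 0;
        for (; n > 0; n--) acc += n;
        return acc;
    }

    public static void Main()
    {
        Console.WriteLine(SumRecursive(1000)); // prints 500500
        Console.WriteLine(SumIterative(1000)); // prints 500500
    }
}
```

This is exactly the rewrite the F# compiler performs automatically for tail-recursive functions.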
F# compiler supports tail recursion optimization and, when possible, compiles recursion to loops.
See more details on C# vs F# behavior in this question.
Would it be equivalent to functional programming to write an entire application with static methods and no variables beyond local scope?
It's both no and yes.
Technically, nothing prevents you from calling Console.WriteLine from a static method (which is a static method itself!) which obviously has side-effects. Nothing also prevents you from writing a class (with instance methods) that does not change any state (i.e. instance methods don't access instance fields). However from the design point of view, such methods don't really make sense as instance methods, right?
If you Add an item to a .NET List<T> (which has side effects), you will modify its state.
If you append an item to an F# list, you will get another list, and the original will not be modified.
Note that append indeed is a static method on List module. Writing “transformation” methods in separate modules encourages side-effect free design, as no internal storage is available by definition, even if the language allows it (F# does, LISP doesn't). However nothing really prevents you from writing a side-effect free non-static method.
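The same distinction can be imitated in C#; a small sketch contrasting an in-place Add with a side-effect-free append (the Append helper here is hypothetical, not a BCL method):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class AppendDemo
{
    // Side-effect-free "append" in the F# module style: a static method in a
    // separate class that returns a new list and leaves its input untouched.
    public static List<int> Append(List<int> source, int item) =>
        source.Concat(new[] { item }).ToList();

    public static void Main()
    {
        var original = new List<int> { 1, 2 };

        var appended = Append(original, 3);  // functional style: new list
        Console.WriteLine(original.Count);   // prints 2 - original unchanged
        Console.WriteLine(appended.Count);   // prints 3

        original.Add(3);                     // imperative style: mutation
        Console.WriteLine(original.Count);   // prints 3 - state modified in place
    }
}
```

Nothing stops you from mutating inside Append, of course; the module style merely encourages the pure design by giving the method no instance state to touch.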
Finally, if you want to grok functional language concepts, use one! It's so much more natural to write F# modules that operate immutable F# data structures than imitate the same in C# with or without static methods.
The CLR does do some tail call optimisations but only in 64-bit CLR processes. See the following for where it is done: David Broman's CLR Profiling API Blog: Tail call JIT conditions.
As for building software with just static methods and local scope, I've done this a lot and it's actually fine. It's just another way of doing things that is as valid as OO. In fact, because there is no state outside the function/closure, it's safer and easier to test.
I read the entire SICP book from cover to cover first however: http://mitpress.mit.edu/sicp/
No side effects simply means that the function can be called with the same arguments as many times as you like and always return the same value. That simply defines that the result of the function is always consistent therefore does not depend on any external state. Due to this, it's trivial to parallelize the function, cache it, test it, modify it, decorate it etc.
However, a system without side effects is typically useless, so things that do IO will always have side effects. It allows you to neatly encapsulate everything else though which is the point.
Objects are not always the best way, despite what people say. In fact, if you've ever used a LISP variant, you will no doubt determine that typical OO does sometimes get in the way.
There's a pretty good book written on this subject, http://www.amazon.com/Real-World-Functional-Programming-Examples/dp/1933988924.
And in the real world, using F# unfortunately isn't always an option due to team skills or existing codebases, which is another reason I love this book: it shows many ways to implement F# features in the code you use day to day. And to me at least, the vast reduction in state bugs, which take far longer to debug than simple logic errors, is worth the slight reduction in OOP orthodoxy.
For the most part, having no static state and operating in a static method only on the parameters given will eliminate side effects, since you're limiting yourself to pure functions. One point to watch out for, though, is retrieving data to be acted on, or saving data to a database, inside such a function. Combining OOP and static methods can help here: have your static methods delegate commands to lower-level objects that manipulate state.
Also a great help in enforcing function purity is to keep objects immutable whenever possible. Any object acted on should return a new modified instance, and the original copy discarded.
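A minimal sketch of that pattern (the Point type and its WithX method are illustrative, not from any library):

```csharp
using System;

// An immutable point: no setters, so no code can change an instance after
// construction. "Modification" returns a new instance instead.
public sealed class Point
{
    public double X { get; }
    public double Y { get; }
    public Point(double x, double y) { X = x; Y = y; }

    // Acting on the object yields a new modified instance;
    // the caller discards (or keeps) the original.
    public Point WithX(double x) => new Point(x, Y);
}

public static class ImmutableDemo
{
    public static void Main()
    {
        var p1 = new Point(1, 2);
        var p2 = p1.WithX(5);
        Console.WriteLine(p1.X); // prints 1 - the original is untouched
        Console.WriteLine(p2.X); // prints 5
    }
}
```

Because no instance can ever change, any function that takes a Point is automatically free of that particular class of side effect.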
Regarding the second question: I believe you mean the "side effects" of mutable data structures, and obviously this is not a problem for (I believe) most functional languages. For instance, Haskell mostly (or even entirely!?) uses immutable data structures, so this has nothing to do with "static" behaviour.
A C# main program needs to call a C program, GA.c. This C code executes many functions, and one function, initialize(), calls an objective() function. But this objective function needs to be written in C#. The call is in a loop in the C code, and the C code needs to continue execution after the return from objective() until its main is over and control returns to the C# main program.
C# main()
{
//code
call to GA in C;
//remaining code;
}
GA in C:
Ga Main()
{
//code
call to initialize function();
//remaining code
}
initialize function() in GA
{
for(some condition)
{
//code
call to objective(parameter) function in C#;
//code
}
}
How do we do this?
Your unmanaged C code needs to be in a library, not an executable. When a program "calls another program", that means it executes another executable, and any communication between the two processes is either in the form of command-line arguments to the callee coupled with an integer return value to the caller, or via some sort of IPC*. Neither of which allows the passing of a callback function (although equivalent functionality can be built with IPC, it's a lot of trouble).
From this C library, you'll need to export the function(s) you wish to be entry points from the C# code. You can then call this/these exported function(s) with platform invoke in C#.
C library (example for MSVC):
#include <windows.h>
BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved){
switch(ul_reason_for_call){
case DLL_PROCESS_ATTACH:
case DLL_THREAD_ATTACH:
case DLL_THREAD_DETACH:
case DLL_PROCESS_DETACH:
break;
}
return TRUE;
}
#ifdef __cplusplus
extern "C"
#endif
__declspec(dllexport)
void WINAPI Foo(int start, int end, void (CALLBACK *callback)(int i)){
for(int i = start; i <= end; i++)
callback(i);
}
C# program:
using System;
using System.Runtime.InteropServices;
static class Program{
delegate void FooCallback(int i);
[DllImport(@"C:\Path\To\Unmanaged\C.dll")]
static extern void Foo(int start, int end, FooCallback callback);
static void Main(){
FooCallback callback = i=>Console.WriteLine(i);
Foo(0, 10, callback);
GC.KeepAlive(callback); // to keep the GC from collecting the delegate
}
}
This is working example code. Expand it to your needs.
A note about P/Invoke
Not that you asked, but there are two typical cases where platform invoke is used:
To leverage "legacy" code. A couple of good uses here:
To make use of existing code from your own code base. For instance, your company might want a brand new GUI for their accounting software, but choose to P/Invoke to the old business layer so as to avoid the time and expense of rewriting and testing a new implementation.
To interface with third-party C code. For instance, a lot of .NET applications use P/Invoke to access native Windows API functionality not exposed through the BCL.
To optimize performance-critical sections of code. Finding a bottleneck in a certain routine, a developer might decide to drop down to native code for this routine in an attempt to get more speed.
It is in this second case that there is usually a misjudgment. A number of considerations usually prove this to be a bad idea:
There is rarely a significant speed benefit to be obtained by using unmanaged code. This is a hard one for a lot of developers to swallow, but well-written managed code usually (though not always) performs nearly as fast as well-written unmanaged code. In a few cases, it can perform faster. There are some good discussions on this topic here on SO and elsewhere on the Net, if you're interested in searching for them.
Some of the techniques that can make unmanaged code more performant can also be done in C#. Primarily, I'm referring here to unsafe code blocks in C#, which allow one to use pointers, bypassing array boundary checking. In addition, straight C code is usually written in a procedural fashion, eliminating the slight overhead that comes from object-oriented code. C# can also be written procedurally, using static methods and static fields. While unsafe code and gratuitous use of static members are generally best avoided, I'd say that they are preferable to mixing managed and unmanaged code.
Managed code is garbage-collected, while unmanaged code usually is not. While this is mostly a benefit for development speed, it is sometimes a speed benefit at runtime, too. When one has to manage one's own memory, there is often a bit of overhead involved, such as passing an additional parameter to functions denoting the size of a block of memory. There is also eager destruction and deallocation, a necessity in most unmanaged code, whereas managed code can offload these tasks to the lazy collector, where they can be performed later, perhaps when the CPU isn't so busy doing real work. From what I've read, garbage collection also means that allocations can be faster than in unmanaged code. Lastly, some amount of manual memory management is possible in C#, using Marshal.AllocHGlobal and unsafe pointers, and this might allow one to make fewer, larger allocations instead of many smaller ones. Another technique is to convert types used in large arrays to value types instead of reference types, so that the memory for the entire array is allocated in one block.
Often overlooked is the cost of the platform invoke layer itself. This can outweigh small native-code performance gains, especially when many transitions from managed to unmanaged (or vice versa, such as with your callback function) must occur. And this cost grows sharply when marshaling must take place.
There's a maintenance hassle when splitting your code between managed and unmanaged components. It means maintaining logic in two different projects, possibly using two different development environments, and possibly even requiring two different developers with different skill sets. The typical C# developer is not a good C developer, and vice versa. At minimum, having the code split this way will be a mental stumbling block for any new maintainers of the project.
Oftentimes, a better performance gain can be had by just rethinking the existing implementation, and rewriting it with a new approach. In fact, I'd say that most real performance gains that are achieved when bottleneck code is rewritten for a "faster" platform are probably directly due to the developer being forced to rethink the problem.
Sometimes, the code that is chosen to be dropped into unmanaged territory is not the real bottleneck. People too often make assumptions about what is slowing them down without doing actual profiling to verify them. Profiling can often reveal inefficiencies that can be corrected fairly easily without dropping down to a lower-level platform.
If you find yourself faced with a temptation to mix platforms to increase performance, keep these pitfalls in mind.
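The Marshal.AllocHGlobal technique mentioned in the list above can be sketched in a few lines. This is a minimal illustration of manual allocation outside the GC heap, not a pattern to reach for unless profiling justifies it:

```csharp
using System;
using System.Runtime.InteropServices;

public static class ManualAllocDemo
{
    public static int RoundTrip(int value)
    {
        // One manual allocation from the process heap, outside the GC's view.
        IntPtr block = Marshal.AllocHGlobal(sizeof(int));
        try
        {
            Marshal.WriteInt32(block, value);   // write through the raw pointer
            return Marshal.ReadInt32(block);    // read it back
        }
        finally
        {
            // We must free it ourselves: no collector will ever reclaim it.
            Marshal.FreeHGlobal(block);
        }
    }

    public static void Main() => Console.WriteLine(RoundTrip(7)); // prints 7
}
```

The try/finally is the price of leaving the GC behind: forget the FreeHGlobal and the block leaks for the life of the process.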
* There is one more thing, sort of. The parent process can redirect the stdin and stdout streams of the child process, implementing character-based message passing via stdio. This is really just an IPC mechanism, it's just one that's been around longer than the term "IPC" (AFAIK).
This is known as a callback. When you create an instance of GA, pass it your C# objective() method as a delegate (a delegate is a reference to a class method). Look for the MSDN help topic on delegates in C#.
I don't know the proper syntax for the C side of this. And there are sure to be some special considerations for calling out to unmanaged code. Someone else is bound to provide the whole answer. :)