Use C# properties in unmanaged C++ code

We have a large project mainly written in C# (services, multithreading, etc.). However, the core number-crunching algorithms are written in unmanaged C++ to be fast (OpenMP, etc.).
Unfortunately, at the moment it takes a lot of effort to exchange data between these two worlds. That is, we have to write wrapper classes in C++/CLI for each of the C++ classes. For (virtually) every required setting (property) in the C# world there is a copy in the C++ world (a header file) and an explicit conversion back and forth in the wrapper class. This architecture seems very inefficient and quite error-prone.
Primary question:
Is there a way to automatically share a C# class with properties with unmanaged C++? (We need both read and write access!)
Secondary question:
Could you give any advice on how to improve the architecture in a case like the one described above? One consideration of ours was to switch completely to C++, but having to find appropriate libraries and write clean code for all the (system) things we currently do in .NET does not feel good.
Many thanks for your help and best regards,
Jakob

I am dealing with similar issues at my work, where my primary task is to write managed interfaces to certain high-performance, low-latency DLLs. This involves simple cases where I have to wrap the native classes in a thin C++/CLI class containing a raw pointer to the native class, and more complex cases where the native code is a server-side publisher and the managed code has to subscribe to it using delegates, i.e. they have to be converted to native callbacks.
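As a rough illustration of the simple wrapping case, the pattern usually looks something like the sketch below. The class and member names are made up; in a real project the native class would of course come from its own unmanaged header:

    // Hedged sketch: "NativeCruncher" stands in for one of your native classes.
    class NativeCruncher {
    public:
        double GetThreshold() const { return threshold_; }
        void   SetThreshold(double value) { threshold_ = value; }
    private:
        double threshold_ = 0.0;
    };

    // The C++/CLI wrapper owns a raw pointer to the native object and exposes
    // an ordinary .NET property to C# callers.
    public ref class CruncherWrapper {
    public:
        CruncherWrapper() : native_(new NativeCruncher()) {}
        ~CruncherWrapper() { this->!CruncherWrapper(); }            // Dispose
        !CruncherWrapper() { delete native_; native_ = nullptr; }   // finalizer

        property double Threshold {
            double get() { return native_->GetThreshold(); }
            void set(double value) { native_->SetThreshold(value); }
        }

    private:
        NativeCruncher* native_;
    };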
.NET under the hood is a sophisticated COM server, as far as I know. It is possible to write .NET assemblies with the ComVisible attribute set to true; the assembly then acts as a classic COM component and can be used from native C++ code as one. The reverse, that is, using native code from managed code, can be achieved with the DllImport attribute, and all the marshalling can be fine-tuned with various attributes such as StructLayoutAttribute (http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.structlayoutattribute.aspx) and MarshalAsAttribute (http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshalasattribute.aspx).
I also sometimes use the unsafe keyword. I have to deal with high-performance code, so in some cases it is only after profiling that I know which is the best solution: the wrapper-class approach you mentioned, the classic COM way, or some kind of hybrid with caching, object pooling, etc.
Hope that helps. :)
Apologies if that looks a bit disorganised. It is very late here. :)

Related

Managed C++ (C++/CLI) vs C#/VB.NET

I have worked extensively with C#; however, I am starting a project where our client wishes all code to be written in C++ rather than C#. This project will be a mix of managed (.NET 4.0) and native C++. Given that I have always preferred C# to C++ for my .NET needs, I am wondering if there are any important differences I may not be aware of between using C# and managed C++.
Any insight into this is greatly appreciated.
EDIT Looking at Wikipedia for managed C++ code shows that the new specification is C++/CLI, and that "managed C++" is deprecated. Updated the title to reflect this.
C++/CLI is a full-fledged .NET language, and just like other .NET languages it works very well in a managed context. Just as working with native calls in C# can be a pain, interleaving native C++ and managed C++ can lead to some issues. With that said, if you are working with a lot of native C++ code, I would prefer C++/CLI over C#. There are quite a few gotchas, most of which can be summed up as: do not write C++/CLI as if you were writing C#, nor as if you were writing native C++. It is its own thing.
I have worked on several C++/CLI projects, and the approach I would take really depends on how much of the application is exposed to native C++ code. If the majority of the core of the application is native and the integration point between the native and managed code is a little fuzzy, then I would use C++/CLI throughout. The benefit of the control C++/CLI gives you will outweigh its problems. If you do have clear interaction points that could be adapted or abstracted, then I would strongly suggest creating a C++/CLI bridging layer with C# above and C++ below. The main reason is that the tools for C# are simply more mature and more ubiquitous than the corresponding tools for C++/CLI. With that said, the project I have been working on has been successful and was not the nightmare that others have pointed to.
I would also make sure you understand why the client is headed in this direction. If the idea is that they have a bunch of C++ developers and they want to make it simpler for them to move to writing managed code, I would suggest to the client that learning C# may be less challenging than learning C++/CLI.
If the client believes that C++/CLI is faster, that is just incorrect, as they all compile down to IL. However, if the client has a lot of existing or ongoing native C++ development, then the current path may in fact be best.
I've done a project with C++/CLI and I have to say it was an abomination. Basically it was a WinForms application to manage employees, hockey games, trades between teams, calendars, etc.
So you can imagine the number of managed controls I had on my forms: calendars / date-time pickers, combo boxes, grids, etc.
The worst part was having to use only C++ types for my back-end and the managed types for the front-end. First off, you can't assign a std::string to a managed string. You'll need to convert everything. Obviously, you'll have to convert it all back too...
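For what it's worth, the individual string conversions are at least short if you use the msclr marshalling helpers that ship with Visual C++; a minimal sketch, assuming a C++/CLI project so the header below is available (the helper function names are made up):

    #include <string>
    #include <msclr/marshal_cppstd.h>

    // Back-and-forth conversions of the kind described above.
    System::String^ ToManaged(const std::string& s) {
        return msclr::interop::marshal_as<System::String^>(s);
    }

    std::string ToNative(System::String^ s) {
        return msclr::interop::marshal_as<std::string>(s);
    }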
Every time I needed to fill a grid, I serialized my C++ collections to something like a vector<std::string>, retrieved that in my UI library, then looped through it and created new DataGridRows to add to the grid. Which obviously can be done in 3 minutes with C# and some LINQ to SQL.
I ended up with an A+ for that application, but let's be honest, it absolutely sucked. I just can't imagine how pathetic the other apps were for me to get that.
I think it would've been easier if I had used List<Customer>^ (a managed List of some object) in my C++ instead of always converting everything between vectors of strings. But I needed to keep the C++ clean of managed stuff.
/pissedof
Having used all three (.NET, C++/CLI and native C++), I can say that in every way I prefer .NET (through C# or VB.NET). For applications you can use either WinForms or WPF (the latter of which I find far better, especially for applications that need to look more polished and user friendly).
A major issue with C++/CLI is that you don't get all the nice language features you have in C#, for example the yield keyword and lambdas (I don't think those are supported in C++/CLI - don't hold me to that).
There is, however, one big advantage of C++/CLI: you can create a bridge that allows C# and C++ to communicate. I am currently working on a project where a lot of math calculations and algorithms have already been written (over many years) in C++, but the company wants to move to a .NET-based user interface. After researching various solutions, I came to the conclusion that C++/CLI was by far the best fit. One benefit is that it allowed me to build an API that, for a .NET developer, looks and works just like a .NET type.
For developing an application's front end, however, I would really not recommend C++/CLI. From a usability point of view (in terms of developer time), it just isn't worth it. One big issue is that VS2010 dropped IntelliSense support for C++/CLI in order to "improve general IntelliSense" (I think specifically for C++). If you haven't already tried it, I would definitely advise checking out WPF for applications.

Using COM dll in C#

We have a COM DLL which was written in C++ and has been used by apps written in VB 6.0. My company plans to write the newer versions of the apps on the .NET platform.
As far as performance is concerned, when using a COM DLL in a C# project, what should I choose from the 3 options listed below?
Just adding the DLL as a COM reference
Writing a wrapper DLL with C++/CLI
Generating a wrapper DLL using TlbImp.exe
Or are there any other options?
Thanks.
Writing a wrapper in C++/CLI isn't that likely to be faster; the COM interop marshaller in the CLR is heavily optimized. It auto-generates machine-code stubs from the interop library that you create when you add a reference to the COM server. It also does a lot of work that's pretty invisible and very hard to do yourself, related to exceptions.
It makes sure that failure HRESULTs are properly converted to managed exceptions and that managed exceptions cannot leak into the COM server code. The "make it fast" resolve you'll have when writing your own wrapper will tempt you to cut corners exactly like this. Then you've got something that's fast but unreliable, and getting a managed exception in unmanaged code is brutally hard to diagnose; all the context is gone.
Options 1 and 3 are the same thing. Both generate the interop library; the IDE simply runs the equivalent of Tlbimp for you.
The usual guidance applies here. Do the simple thing first; the interop library is incredibly simple to use. Only contemplate doing the really hard thing when you can actually measure perf problems and have a realistic idea of what to do about them. I've never once seen anybody decide that a C++/CLI wrapper was necessary.
Option 2 is more performant, but not by much, especially considering the DLL itself is in VB6.
Not sure if option 3 works at all.
I would personally use option 1, but keep the generated interop assembly somewhere safe so that I keep reusing the same interop rather than regenerating it every time I add the reference.
Another option is to use the new dynamic features and late binding (using Activator to create the object), but that is definitely the least performant of all.
Since the component uses COM, it will be easiest to add it as a reference and let Visual Studio build the proxies. This is very straightforward and transparent to the .NET code. It will not be quite as performant, but most likely it will suit your needs. I would do this first, since it is so easy, and then see how it performs.
If the component were not a COM component, but just a standard C++ DLL, then the other two methods would probably be a better choice.
A call to COM is slow because of the marshalling of the data. By slow I mean compared to a call that does not cross a managed or COM boundary.
If you need to make a lot of small calls to your COM component in a performance-critical piece of your application, you could wrap (and combine) them with C++.
If the number of calls is minimal, or they are not performance-critical (but aren't all calls performance-critical?), I would simply add a reference to the COM DLL.
Summary: go for the reference to the COM DLL and test the performance. Since you are migrating from VB6, you will get an enormous performance boost already (string handling in .NET is so much faster).

What are the situations or pros and cons to use of C++/CLI over C#

I have been keeping up with the .NET CLR for a while now, and my language of choice is C#.
Up until recently, I was unaware that C++/CLI could produce "mixed mode" executables capable of running native and managed code.
Now knowing this, a developer friend of mine and I were discussing this capability, trying to determine when and how it would be useful.
I take it as a given that native code has the capability to be more efficient and powerful than managed code, at the expense of additional development time.
In the past, I resorted to strictly native C++ code libraries and used Interop to make use of the functionality I wrote into the native library.
I can see the benefit of not requiring an additional library, but I'm curious about the pros/cons of using C++/CLI over a solely managed executable created in C#, or such an executable using Interop to call a purely native C++ library.
(Sidenote: are the terms Interop/PInvoke interchangeable? I don't understand the difference between them; I've simply seen them used the same way.)
With C++/CLI you can create, broadly speaking, three types of objects:
Managed types. These will compile down to essentially the same IL as the equivalent C#. There is no performance opportunity here.
Native types. These compile down to native code as if you'd used straight C++.
Mixed mode types. These compile down to managed code, but allow you to refer to native types too.
You might think of (3) as being like writing C# code with P/Invoke declarations to access the native stuff - except that all the P/Invoke plumbing is generated for you.
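Here is a small sketch of case (3), assuming a file compiled with /clr. FastSum is a made-up native function; the compiler generates the managed-to-native transition for the call inside the ref class:

    #include <vector>

    #pragma managed(push, off)
    // Plain native function; the pragma keeps it compiled to machine code
    // even in a /clr translation unit.
    static double FastSum(const std::vector<double>& values) {
        double total = 0.0;
        for (double v : values) total += v;
        return total;
    }
    #pragma managed(pop)

    // Managed type in the same file; the call to FastSum below crosses the
    // managed/native boundary without any hand-written P/Invoke signature.
    public ref class Statistics {
    public:
        static double Sum(array<double>^ values) {
            std::vector<double> native(values->Length);
            for (int i = 0; i < values->Length; ++i) native[i] = values[i];
            return FastSum(native);
        }
    };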
There's more to it than that, of course, as well as some caveats - but that should give you an idea of how it's useful.
In other words, it's really a glue language. While you can write fully fledged apps in C++/CLI, it's more usual to keep the managed and native parts separate and use C++/CLI to bridge the two more cleanly than you could with P/Invoke.
Another common use is to extend an existing, native C++ code base with .NET library calls.
Just be careful to partition your code well, as the compiler can quite subtly compile your pure C++ code down to IL without you noticing!
As to your sidenote: PInvoke is a particular type of Interop. Interop comes in other forms too, such as COM Interop. In fact, more accurately, PInvoke is a set of language features that make Interop with native code easier.
I've used Managed C++ (the .NET 1.1 precursor to C++/CLI) effectively in the past. I find it works best when you have a native C or C++ library you wish to use in managed code. You could go the whole Interop/PInvoke route, which makes for some ugly C# code and frequently has marshalling issues, or you could write a managed C++ wrapper, which is where C++/CLI really shines.
Because C++/CLI is managed code, you can call it from C# (or VB.NET if you lean that way) in the normal way, by adding a reference to the DLL. No marshalling, no DllImport, nothing goofy like that. Just a normal project reference. Additionally, you get the benefit of statically linked libraries if your native library is designed that way, which is a Good Thing (tm).
Phil Nash really hit the big things. Here's one more that I've hit more than once, and it is the primary reason I've used C++/CLI in the past:
Some applications are extended by checking all DLLs in some location for exported functions with a particular name. In C#, there's no way to declare a native C-style export, but you can in C++/CLI. I create a "wrapper" in C++/CLI that exports the method, handles any translation of C structs to managed objects, and passes the call on to an assembly written in C#.
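Sketched out, the pattern looks roughly like this. The names are invented, and the managed logic would normally live in a referenced C# assembly rather than inline:

    // Managed side: stands in for the code that would normally live in C#.
    public ref class Handler {
    public:
        static int Process(int recordId) { return recordId * 2; }  // placeholder
    };

    // Native, C-style export that a host application can find with
    // GetProcAddress; C++/CLI lets it forward straight into managed code.
    extern "C" __declspec(dllexport) int __stdcall ProcessRecord(int recordId)
    {
        return Handler::Process(recordId);
    }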
There are certain types that are not available in other languages, such as templates, const, and tracking handles of boxed value types.
Templates are specialized at compile time; generics are specialized at runtime. Although the CLR caches generic specializations for future use (so you get the same specialization each time you use it), there is still a performance hit each time a generic specialization is requested.
I know other languages discard the const attribute, but having compile-time checking in your C++ code is better than nothing.
Having a type like int^ allows you to access the memory on the managed heap directly without unnecessary unboxing. This can help performance when passing tracking handles of boxed values to functions that expect a tracking handle, such as Console::WriteLine(Object^). Of course, the initial boxing cannot be avoided. In other languages you can store the reference in an Object variable and pass it around to avoid unboxing, but you lose the compile-time type check.
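A tiny sketch of what that looks like in practice:

    using namespace System;

    int main()
    {
        int^ boxed = 42;             // boxes the value once on the managed heap
        Console::WriteLine(boxed);   // matches WriteLine(Object^), no re-boxing
        int copy = *boxed;           // dereference the tracking handle to unbox
        Console::WriteLine(copy);
        return 0;
    }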

Shared common definitions across C/C++ (unmanaged) and managed C# code

I have a set of struct definitions that are used by both C# managed components and unmanaged C/C++ components. Right now, identical struct definitions exist separately in the C/C++ and the C# code, causing duplication and related chaos. What's the best way to maintain single definitions that can be used from both C# and C/C++?
Thanks!
Amit
P.S.: I'm a C/C++ guy so if there's an obvious way to do this, I might be missing it completely!
I'm not familiar with your project(s), obviously, but have you considered building a managed bridge for your library in C++/CLI? With the "It Just Works" plumbing the C++/CLI compiler does for you, you'll often be able to marshal and share managed types with native code, back and forth.
Again, I don't know if it's right for you without more specifics, but it might be worth looking into.
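I can't say whether it fits your structs, but the kind of thing such a bridge can centralize looks roughly like this (all names invented): the native definition stays in the shared C/C++ header, the managed mirror and the conversions live in exactly one C++/CLI file.

    // Native definition, as it would appear in the shared C/C++ header.
    struct SensorSampleNative {
        int    id;
        double value;
    };

    using namespace System::Runtime::InteropServices;

    // Managed mirror exposed to the C# components; sequential layout keeps
    // the fields lined up with the native struct.
    [StructLayout(LayoutKind::Sequential)]
    public value struct SensorSample {
        int    Id;
        double Value;
    };

    // Conversions live in one place (the bridge) instead of being
    // re-implemented in every component that touches the data.
    inline SensorSample ToManagedSample(const SensorSampleNative& n) {
        SensorSample m; m.Id = n.id; m.Value = n.value; return m;
    }

    inline SensorSampleNative ToNativeSample(SensorSample m) {
        SensorSampleNative n; n.id = m.Id; n.value = m.Value; return n;
    }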
You need an IDL (interface definition language). Try googling:
protocol buffers
ICE (Internet Communications Engine)
perhaps Microsoft COM?
--edit: new entry -- it appears Microsoft has an IDL compiler.
It all depends on what you want. All the above technologies have an IDL element to them and come with their own set of baggage. I personally would stay low-level C/C++ :D. So I would Google "Imatix GSL" and use that technology to model the problem in XML and generate the data structures in any programming language -- this technology is very simple and subtle and requires an experienced programmer, so if it doesn't make sense you should stick with an IDL.
-- edit: programming technique --
You can solve the problem by pure technique if you like. Chaos ensues when the rigor of engineering breaks down. If you decide to firewall and encapsulate the problem in pure C/C++ code, you won't have to worry about the interface falling apart in your dependent code -- this is because any useful language can interface with the ABI of your platform (simple C functions :P). The crux is not to expose internals, but only an interface with opaque types, such as numeric handles that represent objects, plus functions that can be performed on your types.
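To make that concrete, the public header in that style might look something like this (everything here is hypothetical):

    /* engine.h - the only thing dependent code ever sees */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct Engine Engine;      /* opaque handle; internals stay hidden */

    Engine* engine_create(void);
    int     engine_run(Engine* engine, int iterations);
    void    engine_destroy(Engine* engine);

    #ifdef __cplusplus
    }
    #endif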
I once wanted to do this in one of my projects, which had a hard separation between C# code and C code. Ideally, the C# code would have borrowed header files from the C code, but:
since C# doesn't support #include, I don't see how you could share the definition of a structure and include it in both your C# and your C/C++ code
having my structure definitions in separate headers wasn't convenient anyway
I didn't want to rely on IDL or a custom "parse the C headers and only extract the structure definitions" preprocessing step

Mixing C# code and unmanaged C++ code on Windows with Visual Studio

I would like to call my unmanaged C++ libraries from my C# code. What are the potential pitfalls and precautions that need to be taken? Thank you for your time.
There are a couple of routes you can go with this. One, you can update your unmanaged C++ libraries to have a Managed C++ Extensions wrapper around them and have C# use those classes directly. This is a bit time-consuming, but it provides a nice bridge to legacy unmanaged code. Be aware, though, that Managed C++ Extensions can be a bit hard to navigate themselves, as the syntax is similar to unmanaged C++, close enough that only a trained eye will spot the differences.
The other route is to have your unmanaged C++ implement COM classes and have C# use it via an auto-generated interop assembly. This way is easier if you know your way around COM well enough.
Hope this helps.
This question is too broad. The only reasonable answer is P/Invoke, but that's kind of like saying that if you want to program for Windows you need to know the Win32 API.
Pretty much entire books have been written about P/Invoke (http://www.amazon.com/NET-COM-Complete-Interoperability-Guide/dp/067232170X), and of course entire websites have been made: http://www.pinvoke.net/.
You're describing P/Invoke. That means your C++ library will need to expose itself via a DLL interface, and the interface will need to be simple enough to describe to P/Invoke via the call attributes. When the managed code calls into the unmanaged world, the parameters have to be marshalled, so there could be a slight performance hit, but you'd have to do some testing to see whether the marshalling is significant or not.
The easiest way to start is to make sure that all the C++ functionality is exposed as 'C'-style functions. Make sure to declare the functions as __stdcall:
extern "C" __declspec(dllexport) int _stdcall Foo(int a)
Make sure you get the marshalling right, especially for things like pointers and wchar_t*. If you get it wrong, it can be difficult to debug.
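One pattern that keeps the wchar_t* case manageable is to let the caller own the buffer, so the export just fills it. A hedged sketch of the native side, with made-up names and values (on the C# side this would typically be declared with CharSet.Unicode and a StringBuilder):

    #include <stdlib.h>
    #include <wchar.h>

    // Caller-allocated buffer: the managed side passes the buffer and its
    // capacity, and the native side writes into it.
    extern "C" __declspec(dllexport) int __stdcall GetDeviceName(
        wchar_t* buffer, int bufferLength)
    {
        if (buffer == nullptr || bufferLength <= 0) return -1;
        wcsncpy_s(buffer, static_cast<size_t>(bufferLength),
                  L"Example device", _TRUNCATE);
        return 0;
    }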
Debug from either side, but not both at once. When debugging mixed native and managed code, the debugger can get very slow, so debugging one side at a time saves a lot of time.
Getting more specific would require a more specific question.
You can also call into unmanaged code via P/Invoke. This may be easier if your code doesn't currently use COM. I guess you would probably need to write some specific export points in your code using "C" bindings if you went this route.
Probably the biggest thing you have to watch out for, in my experience, is that the lack of deterministic garbage collection means your destructors will not run when you might previously have expected them to. You need to keep this in mind and use IDisposable or some other mechanism to make sure your managed code cleans up when you want it to.
Of course, there is always P/Invoke out there too, if you package your code as DLLs with external entry points. None of the options are pain-free; they depend on either a) your skill at writing COM or managed C wrappers, or b) chancing your arm at P/Invoke.
I would take a look at SWIG; we use it to good effect on our project to expose our C++ API to other language platforms.
It's a well-maintained project that effectively builds a thin wrapper around your C++ library, allowing languages such as C# to communicate directly with your native code - saving you the trouble of having to implement (and debug) the glue code yourself.
If you want good P/Invoke examples, you can look at PInvoke.net. It has examples of how to call most of the Win32 API functions.
You can also use the tool from the article CLR Inside Out: PInvoke, which will translate your .h file into C# wrappers.
