C# 4.0 DLR and COM

A common example used to illustrate the benefits of the DLR is calling into legacy components such as COM, where you gain the ability to call methods that are not visible at compile time.
Is the point of this to skip the COM interop step? Because once the component has been Regasm'd, the compiler will have the metadata.
And let's say we do skip the interop step: any changes to a COM signature (that is used by .NET) will still require recompilation.
If I had to make another guess, the DLR also provides call-site caching, so any subsequent calls should be faster than those made using regular reflection. So maybe that's one benefit of the DLR here, but syntactically I'm just not seeing it.

Is the point of this to skip the COM interop step?
No, it just simplifies calling the methods, which could be quite a pain with reflection. Whether you use reflection or C# 4.0 dynamic, both use COM interop to communicate with the unmanaged code.
And let's say we do skip the interop step: any changes to a COM signature (that is used by .NET) will still require recompilation.
It depends. If you rename a COM method, you will still need to fix and recompile the .NET calling code; otherwise it will break at runtime when you attempt to call a non-existent method.

It is specifically useful for interop with late-bound COM, the kind where you don't add a reference to a COM type library or PIA. That was very painful to do in C# before the dynamic keyword became available; it required reflection code. If you have a type library, you almost always want to use it, since it is a lot faster, catches mistakes at compile time, and supports IntelliSense. The only reason you would not want to use one is when you try to make your code flexible enough to handle different versions of the COM server. That is, however, quite risky, with a runtime exception always around the corner to ruin your day. It is in fact rare not to have a type library; almost all COM component authors provide one, since the advantages are so great.
Regasm is the exact opposite: you only use that when you write your own [ComVisible] server in C#. There's no benefit to dynamic then.
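
To make the late-bound case concrete, here is a minimal sketch of the same COM call made with dynamic and with reflection. It assumes the Windows Script Host Scripting.FileSystemObject, which ships with Windows, so no type library reference is needed:

    using System;
    using System.Reflection;

    class LateBoundComDemo
    {
        static void Main()
        {
            // Look up the COM class by its ProgID and create an instance;
            // no interop assembly or compile-time type information involved.
            Type comType = Type.GetTypeFromProgID("Scripting.FileSystemObject");
            object comObject = Activator.CreateInstance(comType);

            // With dynamic, the call reads like ordinary C#; the DLR
            // dispatches through IDispatch at runtime and caches the call site.
            dynamic fso = comObject;
            bool exists = fso.FileExists(@"C:\Windows\notepad.exe");
            Console.WriteLine(exists);

            // The equivalent call through plain reflection, for comparison.
            object result = comType.InvokeMember("FileExists",
                BindingFlags.InvokeMethod, null, comObject,
                new object[] { @"C:\Windows\notepad.exe" });
            Console.WriteLine(result);
        }
    }

The syntactic win is visible in the contrast: one natural method call instead of an InvokeMember with a string, and the DLR's call-site caching makes repeated calls cheaper than repeated reflection.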

Related

Using COM dll in C#

We have a COM DLL which was written in C++ and has been used by apps written in VB 6.0. My company plans to write the newer versions of the apps on the .NET platform.
As far as performance is concerned, when using a COM DLL in a C# project, which of the three options listed below should I choose?
Just adding the DLL as a COM reference
Writing a wrapper DLL with C++/CLI
Generating a wrapper DLL using TlbImp.exe
Or are there any other options?
Thanks.
Writing a wrapper in C++/CLI isn't that likely to be faster; the COM interop marshaller in the CLR is heavily optimized. It auto-generates machine-code stubs from the interop library that you create when you add a reference to the COM server. It also does a lot of work that's pretty invisible and very hard to do yourself, related to exceptions.
It makes sure that failure HRESULTs are properly converted to managed exceptions, and that managed exceptions cannot leak into the COM server code. The "make it fast" resolve you'll have when writing your own wrapper will tempt you to cut corners exactly there. Then you've got something that's fast but unreliable. A managed exception that escapes into unmanaged code is brutally hard to diagnose; all the context is gone.
Options 1 and 3 are the same thing. Both generate the interop library; the IDE simply runs the equivalent of Tlbimp.exe for you.
The usual guidance applies here. Do the simple thing first; the interop library is incredibly simple. Only contemplate doing the really hard thing when you can actually measure a performance problem and have a realistic idea of what to do about it. I've never once seen anybody decide that a C++/CLI wrapper was necessary.
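
To see that exception translation from the C# side, here is a hedged sketch; MyComServer.Widget and DoWork are hypothetical names standing in for a type from your generated interop assembly:

    using System;
    using System.Runtime.InteropServices;

    class InteropExceptionDemo
    {
        static void Main()
        {
            try
            {
                // Hypothetical RCW type from the generated interop library.
                var widget = new MyComServer.Widget();
                widget.DoWork();
            }
            catch (COMException ex)
            {
                // A failure HRESULT returned by the COM method surfaces here
                // as a managed exception; the stubs do the conversion for you.
                Console.WriteLine("COM call failed with HRESULT 0x{0:X8}", ex.ErrorCode);
            }
        }
    }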
Option 2 is more performant, but not by much, especially considering the code you are migrating from is VB 6.0.
Not sure if option 3 works at all.
I would personally use option 1, but keep the generated interop assembly somewhere safe so that I keep reusing the same one instead of creating it every time I add the reference.
Another option is to use the new dynamic features and late binding (using Activator to create the object), but that is the least performant of all.
Since the component is using COM, it will be easiest to add it as a reference and let Visual Studio build the proxies. This will be very straightforward and transparent to the .NET code. It will not be quite as performant, but most likely it will suit your needs. I would do this first, since it is so easy, and then see how it performs.
If the component were not a COM component, but just a standard C++ DLL, then the other two methods would probably be a better choice.
A call into COM is slow because of the marshalling of the data. By slow I mean compared to a call where you do not cross a managed/COM boundary.
If you need to make a lot of small calls to your COM component in a performance-critical piece of your application, you could wrap (and combine) them in C++.
If the number of calls is minimal, or they are not performance-critical (but aren't all calls performance-critical?), I would simply add a reference to the COM DLL.
Summary: go for the reference to the COM DLL, and test the performance. Since you are migrating from VB6, you will get an enormous performance boost already (string handling in .NET is so much faster).

What are the situations or pros and cons of using C++/CLI over C#

I have been keeping up with the .NET CLR for a while now, and my language of choice is C#.
Up until recently, I was unaware that C++/CLI could produce "mixed-mode" executables capable of running both native and managed code.
Now knowing this, another developer friend of mine and I were discussing this capability, trying to determine when and how it would be useful.
I take it as a given that native code has the capability to be more efficient and powerful than managed code, at the expense of additional development time.
In the past, I resorted to strictly native C++ code libraries and used interop to make use of the functionality I wrote into the native library.
I can see the benefit of not requiring an additional library, but I'm curious as to the pros/cons of using C++/CLI over a solely managed executable created in C#, or such an executable using interop to call a purely native C++ library.
(Sidenote: are the terms Interop/PInvoke interchangeable? I don't understand the difference between the terms; I've simply seen them used the same way.)
With C++/CLI you can create, broadly speaking, three types of objects:
Managed types. These will compile down to essentially the same IL as the equivalent C#. There is no performance opportunity here.
Native types. Compiles down to native code as if you'd used straight C++.
Mixed mode types. These compile down to managed code, but allow you to refer to native types too.
You might think of (3) as being like writing C# code with P/Invoke declarations to access the native stuff, except all the P/Invoke plumbing is generated for you.
There's more to it than that, of course, as well as some caveats - but that should give you an idea of how it's useful.
In other words, it's really a glue language. While you can write fully fledged apps in C++/CLI, it's more normal to keep the managed and native parts separate and use C++/CLI to bridge the two more cleanly than with P/Invoke.
Another common use is to extend an existing native C++ code base with .NET library calls.
Just be careful to partition your code well, as the compiler can be quite subtle about transparently compiling your pure C++ code down to IL!
As to your sidenote: P/Invoke is a particular type of interop. Interop comes in other forms too, such as COM interop. In fact, more accurately, P/Invoke is a set of language features that makes interop with native code easier.
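
To make the distinction concrete, this is what P/Invoke looks like on the C# side: a declaration that tells the CLR which native DLL and function to bind to, with the marshalling handled for you (MessageBox in user32.dll is a real Win32 API):

    using System;
    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // P/Invoke declaration: the CLR loads user32.dll at runtime and
        // marshals the managed strings to native UTF-16 strings.
        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        public static extern int MessageBox(IntPtr hWnd, string text,
                                            string caption, uint type);
    }

    class Program
    {
        static void Main()
        {
            NativeMethods.MessageBox(IntPtr.Zero, "Hello from user32.dll", "P/Invoke demo", 0);
        }
    }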
I've used Managed C++ (the .NET 1.1 precursor to C++/CLI) effectively in the past. I find it works best when you have a native C or C++ library you wish to use in managed code. You could go the whole Interop/PInvoke route, which makes for some ugly C# code and frequently has marshalling issues, or you could write a managed C++ wrapper, which is where C++/CLI really shines.
Because C++/CLI is managed code, you can call it from C# (or VB.NET if you lean that way) in the normal way, by adding a reference to the DLL. No marshalling, no DllImport, nothing goofy like that. Just normal project references. Additionally, you get the benefit of statically linked libraries if your native library is so designed, which is a Good Thing (tm).
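
For instance, the C# side of such a wrapper is just ordinary .NET code; ImageLib.Wrapper and Sharpen are hypothetical names for a public ref class exported by the C++/CLI assembly:

    using System;

    class WrapperClientDemo
    {
        static void Main()
        {
            // Hypothetical: ImageLib.Wrapper is a managed class compiled in a
            // referenced C++/CLI assembly that wraps a native C++ library.
            // No DllImport, no marshalling attributes, no COM registration.
            var wrapper = new ImageLib.Wrapper();
            wrapper.Sharpen("photo.raw");
        }
    }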
Phil Nash really hit the big things. Here's one more that I've hit more than once and is the primary reason I've used C++/CLI in the past:
Some applications are extended by checking all DLLs in some location for exported functions with a particular name. In C#, there's no way to declare a native C-style export, but you can in C++/CLI. I create a "wrapper" in C++/CLI that exports the method, handles any translation of C structs to managed objects, and passes the call on to an assembly written in C#.
There are certain features that are not available in other languages, such as templates, const, and tracking handles to boxed value types.
Templates are specialized at compile time; generics are specialized at runtime. Although the CLR caches each generic specialization for future use (so you get the same constructed List<T> each time you use it), there is still a performance hit each time a generic specialization is requested.
I know other languages discard the const attribute, but having compile-time checking in your C++ code is better than nothing.
Having a type like int^ allows you to access the memory on the managed heap directly without unnecessary unboxing. This can help performance when passing tracking handles of boxed values to functions that expect a tracking handle, such as Console::WriteLine(Object^). Of course, the initial boxing cannot be avoided. In other languages you can store the reference in an Object variable and pass it around to avoid unboxing, but you lose the compile-time type check.

Keeping C# and C++ classes in sync at runtime?

My application is built in two sections: a C# executable which is the front-end UI, and a C++ DLL which handles the low-level stuff. My application creates and manages many instances of objects, where every C++ object instance has a corresponding C# object instance. What techniques or libraries can I use to ensure that objects in the C# and C++ sections, and the data in those objects, are always in sync at runtime? A change to a member of one object instance should update the corresponding object instance.
Thanks!
Edit: clarified a little what I meant by keeping the objects in "sync"
Perhaps code-generation would work well. E.g. define the properties/methods for these classes in one spot (maybe XML or something) and generate C# and C++ classes from that. Perhaps use something like CodeSmith (http://www.codesmithtools.com/) to generate your code.
It's not quite clear whether it would solve the problem, but have you considered Managed C++? I have had pretty good success simply compiling my C++ code as Managed C++, then using the managed extensions to create .NET classes that use the underlying C++ data directly. That way, there's only one copy of the data.
Probably not suitable for every situation (and I haven't tested its limits by any means) but I found it quite a timesaver. And since Managed C++ is a proper .NET language, the result was clean to use from the C# side, with none of the usual oddities or quirks one often has to work around when trying this sort of thing.
(Another similar approach would be to use SWIG (http://www.swig.org/) to generate wrappers for you. I hear it is easy to use and works well, but I haven't used it myself.)

C++ COM C# Mixed Mode Interoperation

I'm trying to understand my options for calling a C# library implementation from unmanaged C++.
My top-level module is an unmanaged C++ COM/ATL DLL. I would like to integrate the functionality of an existing managed C# DLL. I have, and can recompile, the source for both libraries.
I understand from reading articles like this overview on MSDN and this SO question that it might be possible to create a "mixed-mode" DLL which allows the native C++ code to call into the C# library.
I have a couple of questions about this approach:
How do I go about setting this up? Can I simply change some properties on the existing COM/ATL project to allow use of the C# modules?
How will these mixed-mode calls differ in performance from COM interop calls? Is there a common string format that may be used to prevent conversion or deep copies between the modules?
If this DLL is created mixed-mode, can it still be interfaced/used in the same way by its COM clients, or do they need to be mixed-mode aware?
Will inclusion of the CLR impose substantial overhead when loading this COM object?
I'm new to Windows development, so please comment if anything in the question statement needs clarification or correction.
Thanks in advance.
How do I go about setting this up? Can I simply change some properties on the existing COM/ATL project to allow use of the C# modules?
If you fully control that project, so that changing such settings isn't an issue, then sure. All you need is to enable /clr for that project (in Project Properties, open the "General" page and look for "Common Language Runtime support"). Now you can use managed handles (^) and other C++/CLI constructs in your project as needed. All existing code written in plain C++ should just keep working (it will now be compiled to MSIL, inasmuch as possible, but its semantics will remain unchanged).
How will these mixed-mode calls differ in performance from COM interop calls? Is there a common string format that may be used to prevent conversion or deep copies between the modules?
A mixed-mode call will be faster, because it uses faster calling conventions, and doesn't do any marshaling the way COM interop does (you either use types that are inherently compatible, or do your own explicit conversions).
There's no common string format - the problem is that System::String both allocates and owns its buffer, and also requires it to be immutable; so you can't create a buffer yourself and then wrap it as String, or create a String and then use it as a buffer to output text to.
If this dll is created mixed-mode, can it still be interfaced/used in the same way by its COM clients, or do they need to be mixed mode aware?
It can be interfaced the same way, but if it's entered via a native entry point, it will try to load the CLR into the process, unless one is already loaded. If the calling client had already loaded the CLR prior to the call (or the client was itself called from managed code), then you'll get the CLR that is already loaded, which may be different from the CLR that your code requires (e.g. the client may have loaded 1.1, and your code needs 2.0).
Will inclusion of the CLR impose substantial overhead when loading this COM object?
It depends on what you mean by overhead. Code size? Runtime penalties? Memory footprint?
In any case, loading the CLR means that you get all the GC and JIT machinery. Those aren't cheap. That said, if you ultimately need to call managed code anyway, there's no way around this: you will have to load the CLR into some process to do it. The penalties aren't going to differ between COM interop and mixed-mode C++/CLI assemblies.
I can't say much about the details like e.g. the string issues, since I never actively used this approach.
But you can easily consume any COM interface from any C# code by simply letting a VS wizard create a proxy for you; there is no performance overhead to it except the one that you always have when bridging COM and .NET.
For the other direction, you just have to set your C# assembly's ComVisibleAttribute to true (in VS it's a simple checkbox in the project properties), and then the compiler will automatically create COM interfaces for you. Again, there's no additional performance penalty.
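
A minimal sketch of what that looks like in the C# source; the interface, class, and GUIDs here are placeholders, but the attributes are the actual mechanism:

    using System;
    using System.Runtime.InteropServices;

    // An explicit interface plus fixed GUIDs give COM clients a stable
    // identity to bind to across rebuilds.
    [ComVisible(true)]
    [Guid("11111111-2222-3333-4444-555555555555")]
    public interface ICalculator
    {
        int Add(int a, int b);
    }

    [ComVisible(true)]
    [Guid("66666666-7777-8888-9999-AAAAAAAAAAAA")]
    [ClassInterface(ClassInterfaceType.None)]
    public class Calculator : ICalculator
    {
        public int Add(int a, int b) { return a + b; }
    }

After registering the assembly with Regasm (regasm MyAssembly.dll /codebase), a native client can CoCreateInstance the Calculator through its CLSID like any other COM object.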
HTH!

C++ calling C# options

We have native Win32 C++ code and a set of C# assemblies which we wish to call from the C++ code. I summarize our options as:
Use COM. The C# code would need to be decorated with additional attributes (Guid, ComVisible). The C# assemblies would need to be registered with Regasm and would then be available to the native C++ code via COM.
Use a C++/CLI (formerly Managed C++) wrapper class. A C++ class could be added to the native C++ project. That class would be compiled with /clr. The native C++ code would call the C++/CLI class, which would then call the .NET code. No COM involved. The CLR is started automatically as required, with marshalling handled by the C++/CLI extensions.
Host an instance of the CLR in the native C++ code.
I'm going to discount option 3, as I don't see any benefit over option 2 other than losing the need for a wrapper class. So the question is: what are the pros/cons of option 1 versus option 2?
Thanks in advance.
Option 2 will perform the best, and be the most seamless and maintainable, IMO.
There is really no advantage to option 1 that I've found. Using C++/CLI seems to function much better, work faster, and be much simpler in general.
You also can, by the way, just use the C# assembly directly without having a wrapper class. This does require compiling any files that want to use it with /clr, but it works quite well.
For option 1, your main pro would be not having to write a wrapper class, which can get hairy depending on your project.
For option 2, you won't have to modify your managed library to facilitate unmanaged use, which is sometimes not an option.
For me it comes down to where you want to make your code changes.
With option 2 you also have a pretty straightforward way of subsequently converting your whole application to C++/CLI, to avoid the managed/unmanaged transitions you would otherwise get. The transitions could be an issue, i.e. a performance hit, depending on how you use your referenced assemblies.
So far I have had only positive experiences with C++/CLI and can recommend going that route.
