Without using P/Invoke, I have succeeded in integrating, from C++/CLI, various methods of a third-party DLL library written in C.
One of these methods retrieves information from a database and stores it in several structures. The C++/CLI program I wrote reads those structures and stores them in a List<>, which is then returned to an application written entirely in C# that reads and uses it. I understand that this double handling of the data (first filling several structures, then copying all of them into a List<>) may generate unnecessary overhead, which is where I wish C++/CLI had the "yield" keyword.
Given this scenario, do you have any recommendations to avoid or reduce this overhead?
Thanks.
You do not need the yield keyword to create iterators. Just create one class implementing IEnumerator<T> and another class implementing IEnumerable<T>.
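For illustration, here is a minimal sketch of that pattern in C# (in C++/CLI the same two interfaces are implemented on ref classes). Record and INativeRecordReader are made-up stand-ins for whatever wraps the third-party C API; the point is that MoveNext pulls one structure at a time instead of buffering everything into a List<> first.

using System;
using System.Collections;
using System.Collections.Generic;

// Hypothetical stand-ins for the third-party C API wrapper:
// the real reader would call into the native DLL and fill one structure per call.
public class Record { public int Id; public string Name; }

public interface INativeRecordReader
{
    bool ReadNext(out Record record);   // false when there is nothing left
    void Close();
}

// Hand-rolled enumerator: pulls one record at a time, no intermediate List<>.
public sealed class RecordEnumerator : IEnumerator<Record>
{
    private readonly INativeRecordReader reader;
    private Record current;

    public RecordEnumerator(INativeRecordReader reader) { this.reader = reader; }

    public Record Current { get { return current; } }
    object IEnumerator.Current { get { return current; } }

    public bool MoveNext() { return reader.ReadNext(out current); }
    public void Reset() { throw new NotSupportedException(); }
    public void Dispose() { reader.Close(); }
}

// The enumerable simply hands out enumerators over the native reader.
public sealed class RecordCollection : IEnumerable<Record>
{
    private readonly Func<INativeRecordReader> openReader;

    public RecordCollection(Func<INativeRecordReader> openReader) { this.openReader = openReader; }

    public IEnumerator<Record> GetEnumerator() { return new RecordEnumerator(openReader()); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

Handing GetEnumerator a factory function lets each enumeration open its own reader, which is close to what a yield-based iterator would have generated for you.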
Related
I am developing a .NET 4.0 client that will utilize a C Library for data processing. The user will be able to specify the DLL file they wish to load for processing.
I am doing late binding / assembly loading as described here: http://blogs.msdn.com/b/jonathanswift/archive/2006/10/03/dynamically-calling-an-unmanaged-dll-from-.net-_2800_c_23002900_.aspx
For each DLL, the method call sequence in my client will be the same, but the method signatures or the data structs passed in will change. The data populated in the structures will differ depending on the version of the DLL and other factors. For example, the definition of MyStruct changes depending on the version of the DLL:
public delegate int INTF_my_method(ref MyStruct pDataStruct);
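In case it helps, the late-binding pattern I'm following looks roughly like this (the struct fields and the "my_method" entry point are placeholders for my real definitions; the declarations above are repeated so the sketch is self-contained):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct MyStruct
{
    public int Version;
    public double Value;   // placeholder fields; the real layout is DLL-specific
}

public delegate int INTF_my_method(ref MyStruct pDataStruct);

public static class NativeLoader
{
    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
    private static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
    private static extern IntPtr GetProcAddress(IntPtr module, string procName);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool FreeLibrary(IntPtr module);

    public static int CallMyMethod(string dllPath, ref MyStruct data)
    {
        // Load whichever DLL the user picked at runtime.
        IntPtr module = LoadLibrary(dllPath);
        if (module == IntPtr.Zero) throw new DllNotFoundException(dllPath);
        try
        {
            IntPtr proc = GetProcAddress(module, "my_method");
            if (proc == IntPtr.Zero) throw new EntryPointNotFoundException("my_method");

            // Bind the exported C function to the delegate and call it.
            var call = (INTF_my_method)Marshal.GetDelegateForFunctionPointer(proc, typeof(INTF_my_method));
            return call(ref data);
        }
        finally { FreeLibrary(module); }
    }
}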
What design patterns or design decisions are recommended for this approach? I need to load the appropriate C method delegates and data definitions based on the version of the DLL that the user has specified, and populate the structures appropriately. Has anyone done something like this before?
There is no clean approach to this, in either managed or native code. The best you can possibly do is declare an interface type that tries to cover all possible versions, and then write a concrete wrapper class for each individual version of the API. If there's at least some common functionality then you can shovel that into a base class.
Notable too is that you cannot just let the user pick a DLL, you have to pair the DLL with the concrete wrapper class instance.
Building this kind of flexibility in your program is obviously very expensive.
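A rough sketch of that shape, with invented type and member names, might be:

using System;

// One interface that covers the operations the client needs,
// independent of which DLL version is loaded. (Made-up members.)
public interface IDataProcessor : IDisposable
{
    void Open(string connection);
    int Process(byte[] input);
}

// Shared plumbing (loading the DLL, error translation, logging, ...).
public abstract class DataProcessorBase : IDataProcessor
{
    protected readonly string DllPath;
    protected DataProcessorBase(string dllPath) { DllPath = dllPath; }

    public abstract void Open(string connection);
    public abstract int Process(byte[] input);
    public virtual void Dispose() { }
}

// One concrete wrapper per DLL version; each one knows that version's
// delegate signatures and struct layouts.
public sealed class DataProcessorV1 : DataProcessorBase
{
    public DataProcessorV1(string dllPath) : base(dllPath) { }
    public override void Open(string connection) { /* bind V1 delegates here */ }
    public override int Process(byte[] input) { /* call V1 entry points */ return 0; }
}

public sealed class DataProcessorV2 : DataProcessorBase
{
    public DataProcessorV2(string dllPath) : base(dllPath) { }
    public override void Open(string connection) { /* bind V2 delegates here */ }
    public override int Process(byte[] input) { /* call V2 entry points */ return 0; }
}

// Pair the user's DLL choice with the matching wrapper.
public static class DataProcessorFactory
{
    public static IDataProcessor Create(string dllPath, Version apiVersion)
    {
        if (apiVersion.Major >= 2) return new DataProcessorV2(dllPath);
        return new DataProcessorV1(dllPath);
    }
}

The factory is where you pair the user's DLL choice with the concrete wrapper class instance.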
You can load different versions of your DLLs, but only from separate AppDomains. That is, for each DLL you want to load, you will have to create a new AppDomain.
I am writing an application that allows the user to create custom algorithms for computing values over a collection of objects. Simply put, I will have a string containing the source code of a class with one method.
The solution I have implemented is to compile the string source code into a separate DLL for each such custom algorithm, then load them using Assembly.Load and instantiate the class saved in the DLL. From a maintainability point of view, this means that I have to store the source code in the DB (for example) and also manage the existence of the compiled DLLs (recreating them by compiling the source code again if they are missing).
Is there a better way to do this, considering the new features of .Net 4.0?
EDIT:
The input source code is C# and I am using CSharpCodeProvider to compile it. The custom classes all derive from a base class and override the method that holds the actual computation logic. What I would really like is to get rid of the DLL management and not lose (too much) performance from compiling all the classes every time my application starts up.
I would look at scripting languages; IronPython is easy to embed, or there are JavaScript engines for .NET. Simple, and usually fast enough.
If (comments) you need to use C#, I would:
- build all the current methods at the same time into one assembly; that solves a lot of problems
- if the data changes during execution, make use of AppDomains so that I can unload them
I've done something similar where the model/rules were XML, running it through a transform to get C#, compiling with CSharpCodeProvider (or whatever), and simply polling every minute or so to see if a new build is required.
The CSharpCodeProvider has been around for a while and should fit the bill. It can be used to generate separate libraries as you have been doing (perhaps you are already using the CSharpCodeProvider for that), but it can also be used to generate class objects dynamically in memory. If they all implement an interface you can cast the objects to that interface, or you can use reflection to invoke your logic. Here is a CodeProject article that achieves something similar:
http://www.codeproject.com/KB/dotnet/dynacodgen.aspx
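For instance, an in-memory compile along those lines could look roughly like this (ICalculator, the type name, and the error handling are placeholders for your own base class or interface and stored source):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

// Placeholder contract that the user-supplied class is expected to implement.
public interface ICalculator
{
    double Compute(double[] values);
}

public static class ScriptCompiler
{
    public static ICalculator CompileCalculator(string sourceCode, string typeName)
    {
        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters
        {
            GenerateInMemory = true,      // no DLL on disk to manage
            GenerateExecutable = false
        };
        // The compiled code needs a reference to the assembly that defines ICalculator.
        options.ReferencedAssemblies.Add(typeof(ICalculator).Assembly.Location);

        CompilerResults results = provider.CompileAssemblyFromSource(options, sourceCode);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException(results.Errors[0].ToString());

        // Instantiate the user's class and hand it back through the shared interface.
        return (ICalculator)Activator.CreateInstance(results.CompiledAssembly.GetType(typeName));
    }
}

Compiling all the stored sources in a single CompileAssemblyFromSource call at startup gives you one assembly and no DLL files to manage.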
How are elements stored in containers in .Net?
For example, a C++ vector stores its elements sequentially, while a C++ list doesn't.
How are they implemented for .Net containers (Array, ArrayList, ...)?
Thanks.
It depends on the container. A C++ vector is equivalent to a C# List<T>, and a C++ list<T> is equivalent to a C# LinkedList<T>.
The C# ArrayList is pretty much a C# List<object>.
Wikipedia lists many data structures, and I suggest you have a look there, to see how the different ones are implemented.
So:
C++       C#                    How stored
vector    ArrayList / List<T>   Array (sequential)
list      LinkedList<T>         Linked list (non-sequential, i.e. linked)
It varies by container. Because the implementation is proprietary, there are no public specs mandating what the underlying data structure must be.
If you're interested in taking the time, I've used the following tools before to understand how MS implemented this class or that:
Debugging with the Microsoft .NET Framework debugging symbols.
Inspecting assemblies with Reflector.
In .NET, containers (even arrays) handle all access to them. You don't use pointers to work with them (except in extremely rare cases you probably won't ever get into), so it often doesn't matter how they store their contents. In many cases, it's not even specified how things work behind the scenes, so the implementation can be changed to something "better" without breaking stuff that relies on those details for some stupid reason.
Last I heard, though, arrays store their entries sequentially -- with the caveat that for reference-type objects (everything that's not a struct), the "entries" are the references as opposed to the objects themselves. The data could be anywhere in memory. Think of it more like an array of references than an array of objects.
ArrayLists, being based on arrays, should store their stuff the same way.
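To make that reference-versus-value distinction concrete, a tiny sketch:

using System;

struct PointValue { public int X, Y; }   // value type: stored inline in the array
class PointRef   { public int X, Y; }    // reference type: the array stores references

static class ArrayStorageDemo
{
    static void Main()
    {
        // The array block itself holds the two ints of each element.
        PointValue[] values = new PointValue[3];
        values[0].X = 10;                     // writes directly into the array's memory

        // The array block holds three references; the objects live elsewhere on the heap.
        PointRef[] refs = new PointRef[3];
        refs[0] = new PointRef { X = 10 };    // allocate the object, store its reference

        Console.WriteLine(values[0].X);       // 10
        Console.WriteLine(refs[0].X);         // 10
        Console.WriteLine(refs[1] == null);   // True: unassigned slots are null references
    }
}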
I'm after help on how to use complex objects, either as return values or as parameters passed to C# class methods, where those methods are exposed to unmanaged C++ as COM components.
Here's why:
I'm working on a project where we have half a dozen unmanaged C++ applications that each directly access the same Microsoft SQL Server database. We want to be able to use MS-Sql/Oracle/MySql with minimum changes and we've decided to implement a business logic plus data layer exposed via WCF services to get the required flexibility.
This strategy hinges on being able to get the unmanaged C++ to interoperate with the WCF service. There are a number of ways to do this, but the strategy I want to follow is to create a C# assembly exposed as a COM component, which will act as a bridge between C++ and the WCF layer. This C# assembly will be loaded into the unmanaged C++ process as a COM component.
The C# bridge assembly will contain a helper class which has a number of methods that describe the operations that were formerly expressed as direct sql or stored proc calls in the C++ code.
I have two problems to solve:
1) For an INSERT, I need to pass an object representing the entity to be inserted. On the unmanaged C++ side, I already know that one of the entities has about 40 properties which have to make it into SQL. I don't want a C# method with 40 parameters, I want to pass an object; I don't know how to marshal a C++ object via COM into C#, so I thought about defining a struct on the C# side and then making that struct COM-visible.
2) How to return the result of a "SELECT this, that, other, ... ". I've seen two examples. One returns a struct[] and another returns a single struct containing a string[] for each column field and an int count member describing the length of the other member arrays.
On the C# side, I think it will be a case of defining and exposing a number of request/response structs which will be used to pass data in/out. These structs will need to be decorated with attributes that cause their members not to "change position" as a result of optimization. And the struct members may need to be decorated with the attribute that hints to the marshaller how the member should be exposed in COM.
Then of course I'll have to work out how to instantiate and populate these structs, seen as COM objects from the unmanaged C++ side, and then pass them in method calls and process them as return values.
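What I have in mind on the C# side is roughly the following sketch (the names, fields, and GUIDs are invented, and I haven't verified the marshalling of the struct array yet):

using System;
using System.Runtime.InteropServices;

// Request struct passed from C++ through COM; Sequential layout keeps the
// field order fixed for the marshaller. (Invented fields.)
[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555555")]
[StructLayout(LayoutKind.Sequential)]
public struct CustomerRecord
{
    [MarshalAs(UnmanagedType.BStr)] public string Name;
    public int AccountId;
    public double Balance;
}

// The bridge interface the unmanaged C++ code will see.
[ComVisible(true)]
[Guid("66666666-7777-8888-9999-000000000000")]
[InterfaceType(ComInterfaceType.InterfaceIsDual)]
public interface ICustomerBridge
{
    int InsertCustomer(ref CustomerRecord record);
    // SELECT-style call: returns rows as an array of structs.
    CustomerRecord[] GetCustomers([MarshalAs(UnmanagedType.BStr)] string filter);
}

[ComVisible(true)]
[Guid("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")]
[ClassInterface(ClassInterfaceType.None)]
public class CustomerBridge : ICustomerBridge
{
    public int InsertCustomer(ref CustomerRecord record)
    {
        // forward to the WCF data layer here
        return 0;
    }

    public CustomerRecord[] GetCustomers(string filter)
    {
        // forward to the WCF data layer here
        return new CustomerRecord[0];
    }
}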
This is the most difficult part for me; I grok C++ and some MFC/ATL but COM under C++ is a whole extra level of complexity. Any recommended books, blogs, tutorials on the subject of parameter passing and return value processing as I've described would be very helpful indeed.
If possible, I'd avoid bringing COM into the picture. If you control the C++ code (it sounds like you do), it should be easier to add a single C++/CLI .cpp file that calls into your C# code. C++/CLI can directly access and create both managed and unmanaged types and copy between them.
I'm looking into adding scripting to my C# application. I've been debating between Lua and C# (through CSharpCodeProvider).
Regardless of which language I use, I need the script to be able to access/manipulate objects/arrays in my main application. With C# I should be able to expose my objects and interface functions without too many issues.
However, with Lua it seems like I'll only be able to access the application objects through exposed functions. I can't see how I could have a non-chunky interface to, for example, arrays. I'd either need Array1Set(index, value)/Array1Get(index) functions or ArraySet(array_no, index, value)/.... Is there an elegant way to implement this? I don't want to copy the arrays to the Lua machine, manipulate it, then pull it back into my application.
Thanks
You should take a look at the LuaInterface project, which supports full integration between Lua and .NET. Ask Google for more information about LuaInterface to find lots of useful pages of discussion, samples, and ideas.
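As a rough idea of what the hosting side can look like with LuaInterface (the exact API surface may differ between versions, so treat this as a sketch; Simulation is an invented example type):

using System;
using LuaInterface;   // LuaInterface assembly

public class Simulation
{
    public double[] Temperatures = new double[] { 20.0, 21.5, 19.8 };
    public double Scale { get; set; }
    public void Reset() { Scale = 1.0; }
}

static class ScriptHost
{
    static void Main()
    {
        var sim = new Simulation();
        Lua lua = new Lua();

        // Expose the live object; the script manipulates it in place, no copying.
        lua["sim"] = sim;

        lua.DoString("sim.Scale = 2.5; sim:Reset()");

        Console.WriteLine(sim.Scale);   // prints 1 because the script called Reset()
    }
}

Because the script works on the wrapped object directly, arrays and lists exposed the same way are modified in place rather than copied into Lua tables, which avoids the chunky Get/Set helper functions; check the LuaInterface documentation for the exact element-access syntax.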
The general method of sharing objects between Lua and any application in any language is to define the __index() and __newindex() metamethods (and possibly others) of a userdata containing either the object instance itself (letting Lua's GC manage the object's lifetime) or a pointer to the instance (which requires careful cooperation with the GC). The metamethods allow Lua code to manipulate fields of the object as if they were stored in a Lua table.