In classic C, I may have a library at version 1.0, which defines a constant in its .h file like:
#define LIBRARY_API_VERSION_1_0
And I can do things like this in my application code:
#include "LibraryApi.h"
// ...
int success;
#ifdef LIBRARY_API_VERSION_1_0
int param = 42;
success = UseThisMethodSignature(param);
#endif
#ifdef LIBRARY_API_VERSION_2_0
float param = 42.0f;
success = UseOtherMethodSignature(param);
#endif
Now I'm working in C#. Apparently #define symbols are scoped only to the file they're defined in, so I looked into the solution described here of using a static class with constants. But that solution requires the check to happen at runtime, which introduces a number of problems:
It's potentially inefficient if I'm running the same code over and over and checking an extra conditional each time (though if it's a const, perhaps the compiler or the .NET runtime is smart enough to avoid this?).
You can't write code that would otherwise cause compiler errors. In my example above, I've defined param twice with two different types. Also, UseOtherMethodSignature may not exist in the version being compiled against, and the code won't compile if both blocks are present and separated only by if/else.
So, what is the accepted solution for this type of problem? My scenario is that I have multiple versions of a web service API (with varying degrees of differences depending on what you're doing with it) and I want to be able to compile against either without commenting/uncommenting a bunch of code or some other equally silly manual process.
Edit
For what it's worth, I'd prefer a compile-time solution--in my scenario I know when I compile which version I'm going to use; I don't need to figure out which version of the library is available on the system at runtime. Yes, a runtime check will work, but it seems like overkill.
I would aim to abstract this into different wrapper libraries. They would be separate projects in Visual Studio and reference different versions of your framework.
// Shazaam contract.
public interface IShazaamInvoker {
Boolean Shazaam();
}
// ShazaamWrapper.v1.dll implementation
public class ShazaamInvoker : IShazaamInvoker {
public Boolean Shazaam() {
Int32 param = 42;
return UseThisMethodSignature(param);
}
}
// ShazaamWrapper.v2.dll implementation
public class ShazaamInvoker : IShazaamInvoker {
public Boolean Shazaam() {
Single param = 42f;
return UseOtherMethodSignature(param);
}
}
// Determine, at runtime, which wrapper to use.
var invoker = (IShazaamInvoker)(/*HereBeMagicResolving*/);
invoker.Shazaam();
I suggest using a DI framework to load the appropriate class/DLL. If you can refactor your code to use interfaces, then you can create an abstraction layer across the different versions. See this link for the different frameworks available.
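As a minimal sketch of the resolving step without a full DI container (the assembly name passed in is hypothetical; a real setup would read it from configuration):

// Minimal resolving sketch, no DI container. The assembly-qualified type
// name is an assumption based on the sample classes above, which live in
// the global namespace of each wrapper assembly.
using System;

public static class ShazaamResolver {
    public static IShazaamInvoker Create(string wrapperAssembly) {
        Type type = Type.GetType("ShazaamInvoker, " + wrapperAssembly, throwOnError: true);
        return (IShazaamInvoker)Activator.CreateInstance(type);
    }
}

// Usage:
// var invoker = ShazaamResolver.Create("ShazaamWrapper.v2");
// invoker.Shazaam();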
Perhaps another solution, in keeping with the compile-time nature of your question, is to use generated code with T4.
You must define a compilation symbol at the project level. You do that in the project properties. These symbols can be referenced with the #if directive.
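For example, with a symbol like LIBRARY_API_VERSION_1_0 or LIBRARY_API_VERSION_2_0 defined in the project properties, the C-style example from the question carries over almost unchanged, and only one branch is ever compiled (so the duplicate param definition is no longer a problem):

int success;
#if LIBRARY_API_VERSION_1_0
int param = 42;
success = UseThisMethodSignature(param);
#elif LIBRARY_API_VERSION_2_0
float param = 42.0f;
success = UseOtherMethodSignature(param);
#endif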
You could also create project build configurations that each define one of the compilation symbols, and additionally condition the project file on the configuration to include one or the other .dll reference. That way you can properly build and debug both versions just by choosing the configuration from the dropdown in the toolbar.
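A sketch of how that might look in the .csproj (the configuration name, assembly name, and paths here are hypothetical):

<PropertyGroup Condition=" '$(Configuration)' == 'ReleaseV2' ">
  <DefineConstants>LIBRARY_API_VERSION_2_0</DefineConstants>
</PropertyGroup>
<ItemGroup Condition=" '$(Configuration)' == 'ReleaseV2' ">
  <Reference Include="LibraryApi">
    <HintPath>libs\v2\LibraryApi.dll</HintPath>
  </Reference>
</ItemGroup>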
Related
Ok, basically there is a large C++ project (Recast) that I want to wrap so that I can use it in my C# project.
I've been trying to do this for a while now, and this is what I have so far. I'm using C++/CLI to wrap the classes that I need so that I can use them in C#.
However, there are a ton of structs and enums that I will also need in my C# project. So how do I wrap these?
The basic method I'm using right now is: add dllexport declarations to the native C++ code, compile it to a DLL/LIB, add the LIB to my C++/CLI project and include the C++ headers, compile the C++/CLI project into a DLL, and finally add that DLL as a reference in my C# project. I appreciate any help.
Here is some code. I need a manageable way of doing this, since the C++ project is so large.
//**Native unmanaged C++ code
//**Recast.h
enum rcTimerLabel
{
A,
B,
C
};
extern "C" {
class __declspec(dllexport) rcContext
{
public:
    inline rcContext(bool state) : m_logEnabled(state) {}
    virtual ~rcContext() {}

    inline void enableLog(bool state) { m_logEnabled = state; }
    inline void resetLog() { if (m_logEnabled) doResetLog(); }

protected:
    // Hook for subclasses; declared so resetLog() compiles.
    virtual void doResetLog() {}

    bool m_logEnabled;
};

struct rcConfig
{
    int width;
    int height;
};
} // end of extern
// **Managed CLI code
// **MyWrappers.h
#include "Recast.h"
namespace Wrappers
{
public ref class MyWrapper
{
private:
rcContext* _NativeClass;
public:
MyWrapper(bool state);
~MyWrapper();
void resetLog();
void enableLog(bool state) {_NativeClass->enableLog(state); }
};
}
//**MyWrapper.cpp
#include "MyWrappers.h"
namespace Wrappers
{
MyWrapper::MyWrapper(bool state)
{
_NativeClass = new rcContext(state);
}
MyWrapper::~MyWrapper()
{
delete _NativeClass;
}
void MyWrapper::resetLog()
{
_NativeClass->resetLog();
}
}
// **C# code
// **Program.cs
namespace recast_cs_test
{
public class Program
{
static void Main()
{
MyWrapper myWrapperTest = new MyWrapper(true);
myWrapperTest.resetLog();
myWrapperTest.enableLog(true);
}
}
}
As a rule, the C/C++ structs are used for communicating with the native code, while you create CLI classes for communicating with the .NET code. C structs are "dumb" in that they can only store data. .NET programmers, on the other hand, expect their data-structures to be "smart". For example:
If I change the "height" parameter in a struct, I know that the height of the object won't actually change until I pass that struct to an update function. However, in C#, the common idiom is that values are represented as Properties, and updating the property will immediately make those changes "live".
That way I can do things like: myshape.dimensions.height = 15 and just expect it to "work".
To a certain extent, the structures you expose to the .NET developer (as classes) actually ARE the API, with the behaviors being mapped to properties and methods on those classes. While in C, the structures are simply used as variables passed to and from the functions that do the work. In other words, .NET is usually an object-oriented paradigm, while C is not. And a lot of C++ code is actually C with a few fancy bits thrown in for spice.
If you're writing a translation layer between C and .NET, then a big part of your job is to devise the objects that will make up your new API and provide the translation to your underlying functionality. The structs in the C code aren't necessarily part of your new object hierarchy; they're just part of the C API.
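As a hypothetical illustration of the difference (all names here are invented): the C-style struct is plain data, while the .NET-facing class makes a property assignment take effect immediately:

// "Dumb" data, as a C API would see it.
public struct RawDimensions {
    public int Width;
    public int Height;
}

// "Smart" .NET-facing class: setting a property applies the change at once.
public class Shape {
    private RawDimensions _dims;   // backing data for the native side

    public int Height {
        get { return _dims.Height; }
        set {
            _dims.Height = value;
            ApplyToNative();       // push the change right away
        }
    }

    private void ApplyToNative() {
        // In a real wrapper, this would call the native update function.
    }
}

// Usage: myShape.Height = 15;  // and just expect it to "work"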
edit to add:
Also to Consider
Also, you may want to reconsider your choice of C++/CLI and consider C# and p/invoke instead. For various reasons, I once wrote a wrapper for OpenSSL using C++/CLI, and while it was impressive how easy it was to build and how seamlessly it worked, there were a few annoyances. Specifically, the bindings were tight, so every time the parent project (OpenSSL) revved their library, I had to recompile my wrapper to match. Also, my wrapper was forever tied to a specific architecture (either 64-bit or 32-bit), which also had to match the build architecture of the underlying library. You still get architecture issues with p/invoke, but they're a bit easier to handle. Also, C++/CLI doesn't play well with introspection tools like Reflector. And finally, the library you build isn't portable to Mono. I didn't think that would end up being an issue, but in the end I had to start over from scratch and redo the entire project in C# using p/invoke instead.
On the one hand, I'm glad I did the C++/CLI project because I learned a lot about working with managed and unmanaged code and memory all in one project. But on the other hand, it sure was a lot of time I could have spent on other things.
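For comparison, a p/invoke binding is a sketch like the following. It assumes the native DLL exports a flat C API; the entry-point names here are invented (Recast itself would need such a C shim, since p/invoke cannot call C++ classes directly):

// Hypothetical p/invoke sketch; entry-point names are invented.
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods {
    [DllImport("Recast.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern IntPtr rcContext_Create(bool state);

    [DllImport("Recast.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern void rcContext_Destroy(IntPtr context);

    [DllImport("Recast.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern void rcContext_ResetLog(IntPtr context);
}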
I would look at creating a COM server using ATL. It won't be a simple port, though. You'll have to create COM compatible interfaces that expose the functionality of the library you're trying to wrap. In the end, you will have more control and a fully supported COM Interop interface.
If you are prepared to use P/Invoke, the SWIG software could maybe help you out: http://www.swig.org/
I know that if I mark code as DEBUG code it won't run in RELEASE mode, but does it still get compiled into the assembly? I just want to make sure my assembly isn't bloated by extra methods.
[Conditional("DEBUG")]
private void DoSomeLocalDebugging()
{
//debugging
}
Yes, the method itself is still built, however you compile.
This is entirely logical - because the point of Conditional is to depend on the preprocessor symbols defined when the caller is built, not when the callee is built.
Simple test - build this:
using System;
using System.Diagnostics;
class Test
{
[Conditional("FOO")]
static void CallMe()
{
Console.WriteLine("Called");
}
static void Main()
{
CallMe();
}
}
Run the code (without defining FOO) and you'll see there's no output, but if you look in Reflector you'll see the method is still there.
To put it another way: do you think the .NET released assemblies (the ones we compile against) are built with the DEBUG symbol defined? If they're not (and I strongly suspect they're not!) how would we be able to call Debug.Assert etc?
Admittedly, for private methods it would make sense not to include them - but as you can see, the method is still built, which is reasonable for simplicity and consistency.
Last Updated: 2009-08-11 2:30pm EDT
A few days ago I posted this question about some very strange problems. Well, I figured out what specifically was causing a build on one machine to not run on others and even came up with a work-around, but now it leaves me with a nice, specific question: Why?
To reproduce the problem, I create a new InteropUserControl and do the following:
Add a new public struct MyStruct:
Give it a GUID and ComVisible attributes
Add a GetMyStruct member to the _InteropUserControl interface and implement it in InteropUserControl.
MyStruct:
[Guid("49E803EC-BED9-4a08-B42B-E0499864A169")]
[ComVisible(true)]
public struct MyStruct {
public int mynumber;
}
_InteropUserControl.GetMyStruct():
[DispId(7)]
void getMyStruct( int num, ref MyStruct data );
(I have tried returning MyStruct instead of passing by reference, as well.)
InteropUserControl.GetMyStruct() implementation:
public void getMyStruct( int num, ref MyStruct data ) {
data = new MyStruct();
data.mynumber = num * 2;
}
I also sign the assembly, install it to the GAC, and register it with Regasm. After adding it to a new VB6 project, adding a call to GetMyStruct(), and compiling on our build machine, the result refuses to run on other machines.
To get around this, I had to expose a class to COM instead of the struct, and basically change GetMyStruct to this:
public void GetMyData( int num, MyClass data ) {
data.mynumber = num * 2;
}
In my actual project, I retrieve the struct internally, and then copy all the field values from the struct to the matching members on the instance of the class passed to the method by the client.
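In code, that copy step is a sketch like this (reusing MyStruct and getMyStruct from above; MyClass is the ComVisible class from the workaround, with the same mynumber member):

// Sketch of the workaround: fill the struct internally, then copy its
// fields onto the COM-visible class instance supplied by the VB6 caller.
public void GetMyData(int num, MyClass data) {
    MyStruct s = new MyStruct();
    getMyStruct(num, ref s);       // internal call still uses the struct
    data.mynumber = s.mynumber;    // copy each field across
}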
So why did a struct cause this behavior and a class worked fine? Is there some magic to exposing a struct to COM for using in VB6?
I think it may have something to do with OLE Automation.
Note: I also tried returning the struct rather than using a ref parameter, but that did not change the behavior.
Edit to add link to project template:
Interop Forms Toolkit 2.0 is the original VB.NET project template and dll. I don't reference the dll, so you may not need to install this.
C# Translations of templates on CodeProject is what I used to create mine (the project template, not the item template). The VB.NET version generates the __InteropUserControl event interface, the _InteropUserControl interface, and a few relevant attributes automagically. Those are explicitly coded in the C# version, and that's about all that's different between the two.
I think I found a solution to this problem.
I had the same exact problem: VB6 breaks when calling a method of an interop library and passing a structure. This was a project I created for testing a DLL interop, so all I had in the project was a form. But I had another project (the main application) with the same reference, and it works fine.
After reading Joel's post, I wanted to test his solution, and in fact it did work (using a class instead of a structure). But I have other interops where I'm using structures, so I was quite worried that at any point my application might fail. Additionally, I didn't want to do the extra work of creating and exposing an interface and a class to replace each structure.
So I took the code from my form and moved it to a public sub in a module. It worked immediately. By the way, that's how I had implemented the call in the main application, which was working OK.
I hope this helps others.
Is there some magic to exposing a struct to COM for using in VB6?
The article COM Data Types on MSDN says that structs are supported. Specifically, the MSDN article says that COM structures are defined as:
ByRef VALUETYPE< MyStruct >
There are also a couple of articles on customizing your COM-callable wrappers at the bottom of the page; you may wish to review those.
Edit (2016): The original link was broken, so I fixed it to point at version 3.5 of the .NET Framework.
I have a C# library with the following Namespace/Class:
namespace Helper
{
public static class Util
{
/*static methods*/
}
}
I have referenced said library in an F# project, and when I try to call one of the methods I get:
error FS0039: The namespace or module 'Helper' is not defined.
This is an example of the method call not working:
#light
let a = Seq.skip 1000 (Helper.Util.GetPrimes 200000);;
Am I missing something obvious? Using open Helper doesn't work either, and the weird thing is that IntelliSense does work: it lists every method in the Util class.
Also, what is the standard practice for calling functions in some of my files from other files in the same project? I don't want to create full objects just to access a few functions.
Regarding multiple files, see the first portion of "Using multiple F# source files, and a useful debugging technique", as well as the final portion of "Sneak peeks into the F# project system, part three". The former discusses how top-level code in a file implicitly goes in a module of the same name as the filename, whereas the latter discusses how to order files in the project (since you can only see stuff declared above/before you).
What does your GetPrimes method look like? It works for me...
I have a solution with a C# library including this code:
namespace Scratch
{
public static class Util
{
public static IEnumerable<int> GetNumbers(int upto)
{
int i = 0;
while (i++<upto) yield return i;
}
}
}
And I call it from an F# project that references the C# project like this:
#light
let p = Seq.skip 1000 ( Scratch.Util.GetNumbers 2000000);;
I am writing a (very small) framework for checking pre- and postconditions of methods. The entry points are (they could easily be methods; that doesn't matter):
public static class Ensures {
public static Validation That {
get { ... }
}
}
public static class Requires {
public static Validation That {
get { ... }
}
}
Obviously, checking the postconditions may be expensive and isn't actually necessary when the method isn't buggy. So I want a method which works like this:
public static class Ensures {
[ConditionalCallingCode("DEBUG")]
public static Validation WhileDebuggingThat {
get { ... }
}
}
where ConditionalCallingCodeAttribute means that this method should only run when the calling code is compiled with the DEBUG symbol defined. Is this possible?
I want client code to look like this:
public class Foo {
public int Bar() {
... // do some work
Ensures.That // do these checks always
.IsNotNull(result)
.IsInRange(result, 0, 100);
Ensures.WhileDebuggingThat // only do these checks in debug mode
.IsPositive(ExpensiveCalculation(result));
return result;
}
}
Of course, I can simply not provide WhileDebuggingThat. Then the client code would look like this:
public class Foo {
public int Bar() {
... // do some work
Ensures.That // do these checks always
.IsNotNull(result)
.IsInRange(result, 0, 100);
#if DEBUG
Ensures.That // only do these checks in debug mode
.IsPositive(ExpensiveCalculation(result));
#endif
return result;
}
}
This is the fallback plan if nothing else works out, but it breaks DRY really badly.
As I understand it, marking WhileDebuggingThat with [Conditional("DEBUG")] will emit (or not) this method depending on whether DEBUG is defined during the compilation of the library, not of the assemblies which reference this library. So I could do this and then write documentation telling the library users to link debug builds of their code with the debug build of the library, and release builds with release builds. This doesn't strike me as the best solution.
Finally, I could tell the library users to define this class inside their projects:
using ValidationLibrary;
public static class EnsuresWhileDebugging {
[Conditional("DEBUG")]
public static Validation That() {
return Ensures.That;
}
}
This should work as well, as far as I see, but still requires breaking the DRY principle, if only slightly.
Is this anything that the normal ConditionalAttribute doesn't do for you, aside from working on a property instead of a method? You may well need to change the way things are called so that you've got methods instead of properties - and the fact that it returns a value may cause issues.
It would help a lot if you'd show how your framework is used - currently we've not got a lot to work with.
Another thing to consider would be supplying a variety of binary builds of your library - so that the caller can just supply a different version which doesn't actually do any checking. Again though, it's hard to tell with only the code you've provided.
Any solution found here would be slower than the actual checks. Also, since it would not be built into the compiler the way ConditionalAttribute is, the arguments would still be evaluated. And the postconditions could be very expensive, such as:
Ensures.That.IsPositive(ExpensiveCalculation(result));
You might consider using icelava's suggestion to reflect on the calling assembly to find out whether it was built in debug or release - but then you must use some sort of delegate to delay the calculation, to ensure that it is only done when needed. E.g.:
Ensures.WhileDebugging.That.IsPositive(() => ExpensiveCalculation(result));
The IsPositive function should invoke the lambda and check its result only after reflecting to find out whether the check should run at all.
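A sketch of such an overload, as a member of the Validation type (ShouldValidate() is a hypothetical stand-in for the debug-build test; the exception type is an assumption):

// Hypothetical sketch: taking a Func<int> delays the expensive calculation
// until we know the check should actually run.
public Validation IsPositive(Func<int> calculation) {
    if (ShouldValidate()) {            // e.g. the reflection test sketched below
        int value = calculation();     // expensive work happens only here
        if (value <= 0)
            throw new InvalidOperationException("Postcondition failed: expected a positive value.");
    }
    return this;                       // keep the fluent chain intact
}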
I have not tried this since I am gonna bathe and leave the house.
Call Assembly.GetCallingAssembly() to get the assembly containing the method that is calling your currently executing method.
Run a check on that Assembly object to see whether it is a release or debug build.
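One way to implement that check (a heuristic, not an exact test): debug builds normally carry a DebuggableAttribute with JIT tracking enabled. Note that Assembly.GetCallingAssembly can be fooled by inlining in release builds.

using System.Diagnostics;
using System.Reflection;

// Heuristic: compilers emit DebuggableAttribute with JIT tracking enabled
// for debug builds.
static bool IsDebugBuild(Assembly assembly) {
    object[] attrs = assembly.GetCustomAttributes(typeof(DebuggableAttribute), false);
    return attrs.Length > 0 && ((DebuggableAttribute)attrs[0]).IsJITTrackingEnabled;
}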
It sounds like most of what you're doing is already covered using Debug.Assert().
For that matter, this code would only ever run in debug mode (but you have to put up with catch-block slowness):
try
{
Debug.Assert(false);
}
catch (Exception e)
{
// will only and always run in debug mode
}
It appears that what I want is just not available. I will probably settle for providing an implicit conversion from Validation to bool, so that validation checking may be wrapped in Debug.Assert().
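A sketch of that conversion (assuming Validation tracks a pass/fail result internally; the Passed property is invented for illustration):

// Hypothetical sketch: an implicit conversion lets a Validation chain be
// passed to Debug.Assert, whose entire call (argument evaluation included)
// is stripped from release builds by ConditionalAttribute.
public class Validation {
    public bool Passed { get; private set; }   // assumed internal state

    public static implicit operator bool(Validation v) {
        return v != null && v.Passed;
    }
}

// Usage:
// Debug.Assert(Ensures.That.IsPositive(ExpensiveCalculation(result)));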
The Debug.Assert check can be enabled or disabled using a bool even after the program is compiled, for example if the value is taken from a project user setting:
Debug.Assert(!Properties.Settings.Default.UseAutoDebug);
I'm not sure, but I think you could use ConditionalAttribute for this: whether the call is emitted or not depends on the build of the user's code, not of your library. You can check this with Reflector or ILDasm: compile your samples, and in Reflector (or ILDasm) look at whether the call is emitted in the sample project.
I have run into this:
Project A calls a function of B.
B includes this function:
Assembly.GetCallingAssembly().FullName
If B is built in debug mode and run, this function returns the name of Project A; if built in release mode, it returns the name of Project B.
I don't know the reason for this.
Please help.
Thanks