Should I strong name my assembly for remoting? - c#

I have read this excellent blog entry on the woes of strong naming and remoting, all of which I am experiencing.
Basically, the client app will always need to load the same version of the strong named common assembly the server is using when the server returns a custom type from the common assembly to the client. I.e.:
The problem one encounters is this: as soon as there is a difference
between the strong name of the common type library on the client, and
the strong name of the common type library on the server, everything
breaks. Remoting throws exceptions as soon as any notable
client/server communication starts.
This is rather annoying since we update the version number when we build, even if no changes have been made to the common assembly. The implementation may not have changed, just the version number.
I am currently getting round this by applying binding redirection/publisher policy - however, this seems like a lot of engineering just to conform to the strong name rules regarding assembly resolution.
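For illustration, such a redirect in the client's app.config looks roughly like this (the assembly name, public key token, and version numbers below are placeholders):
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="MyCompany.Common"
                          publicKeyToken="0123456789abcdef"
                          culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.0-1.9.9.9"
                         newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>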
I have tried not strong naming the common assembly, as recommended in the blog, and it works fine - I don't get any remoting exceptions. However, is this recommended?
I am not adding the common assembly (which just contains interfaces) to the GAC, nor am I too worried about tampering. As long as I update the interfaces carefully to maintain backward compatibility and not break existing clients, is that enough, so there is no need for strong naming?
Thanks in advance.
PS: I'm aware of WCF, but I still need to maintain a remoting interface.

Avoid strong naming if at all possible! Strong naming is painful.
As you probably already know, as soon as you strong name an assembly, everything it references has to be strong named as well. In a simple application, that's no big deal. If you have to deal with COM interop libraries, other projects, etc., the problem becomes a maintenance nightmare.

Related

How does assembly version matching work, exactly?

Let's say I have assemblies in the GAC with versions 1.1.1.5, 1.1.5.1, 1.1.6.2, 1.2.1.1 and 2.1.2.1. My application has a reference to version 1.1.3.0. Which assembly will be matched at runtime? And what are the actual rules for assembly matching?
If your reference requires a specific version, by default, it will fail on assembly load, as that version doesn't exist.
This can be configured, however, via Assembly Binding Redirection. There are various options of what will happen here, including:
The reference can say that it doesn't care about versioning, in which case the newest is loaded.
You can configure your application in a way that you specify how to redirect the binding.
The assembly in the GAC can be set up with a publisher policy that specifies how to handle this.
Which assembly will be matched at runtime?
None will be matched, your program will bomb.
The documentation for the Version class talks generically about how you pick version numbers. And yes, you normally consider a change in the build number to be a non-breaking change, and a change in the revision to be low risk. These are things you consider when you pick an [AssemblyFileVersion].
However, the default CLR policy does not implement this kind of interpretation of the [AssemblyVersion]; it insists on an exact match. It is only happy when it finds the exact same DLL that you compiled your program with. This is not normally difficult to ensure. You can override this policy and make it weaker, although you should always think twice about that. There is a very long history of well-intended minor changes in source code that just did not pan out that well in practice. Something that Microsoft knows too well, having to maintain code that lasts for decades. The default counter-measures against DLL Hell in the CLR are hard as a rock. As they should be. Ratcheting it down is up to you.
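To make the distinction concrete, here is a minimal sketch of the two attributes in an AssemblyInfo.cs (the version numbers are arbitrary):
using System.Reflection;

// [AssemblyVersion] is part of the assembly's binding identity; the default
// CLR policy demands an exact match on it, so bump it only for breaking changes.
[assembly: AssemblyVersion("1.0.0.0")]

// [AssemblyFileVersion] is informational only (visible in file properties)
// and can change on every build without affecting assembly resolution.
[assembly: AssemblyFileVersion("1.0.1234.5678")]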

Is an assembly that contains a single class overkill?

I'm writing an app which plays host to a series of plug-ins. Those plug-ins generally use two libraries, .Common and .UI, which contain the interfaces that the plug-ins need to implement etc.
I am now at the point where I'm adding the capability for plug-ins to be subject to licensing. I have modified my host application such that it will only load plug-ins that implement a specific interface (ILicenseInfoProvider) and export it through MEF. That bit is fine.
We have a selected provider of licensing code, and their licensing system involves the use of a library. Now, I don't want to force each plug-in to be licensed through that system and, by extension, require a reference to that system's assembly. So I am planning on putting the code that references the third-party library in its own assembly (something like .Licensing.Vendor). That way plug-ins can simply add a reference to that assembly and include a class that looks somewhat like this:
[Export(typeof(ILicenseInfoProvider))]
class MyAssemblyLicenseInfoProvider : BaseVendorLicenseInfoProvider
{
    public MyAssemblyLicenseInfoProvider()
        : base("My Assembly's Product Name") { }
}
I'm reasonably happy with that set-up, apart from one niggling thing: the .Licensing.Vendor assembly will only contain a single class, the BaseVendorLicenseInfoProvider relating to the specific licensing system in use.
So, after all that, my question is pretty simple:
Does it seem overkill to put that class in its own assembly, or is the benefit of not forcing all plug-ins to hold a reference to the third-party library worth it?
At the moment there's a suitable purpose for the assembly - a publicly visible assembly for third parties to provide a means to interact via licensing. Seems perfectly reasonable to me:
even if there is only the one class currently, there may be more in the future
it's publicly visible, so you want to provide only that which is necessary
it encapsulates a reasonable level of responsibility, namely licensing, without forcing specific implementations
I vote no, it's not overkill - some plug-ins may not need a license, and some may.
It depends on what you are trying to achieve. Assemblies are a way of physically separating code whereas namespaces are a way of logically separating code.
Given that there can be a slight performance hit from loading too many assemblies (by which I mean a significant number, not just a few), I suppose you could consider whether it is possible to group as much as you can into one assembly and separate it by namespaces. But if you feel that it really does make sense to keep BaseVendorLicenseInfoProvider completely separate from everything else then I also do not see that as an issue.
At the end of the day it is all about what you feel is right. Everyone has their own opinion, of course, but as long as what you have works for you, I don't see a problem.

Using a type, without knowing about the dll

Is it possible to use an interface type that is defined in a huge external dll, without referencing that dll?
In other words, there will be one core or global dll that references the external dll, and all the projects reference this global one, so the external dlls are hidden from the other projects.
I want to use the type in my code, while knowing only about the global AllInterfaces project.
Can that work? And if so, what needs to be done for such a scenario?
Is it possible to use an interface type that is defined in a huge external dll, without referencing that dll at compile time?
Not really, no. The compiler has the reasonable expectation that the types it needs are available.
Is it possible to use an interface type that is defined in a huge external dll, without referencing that dll at runtime?
Yes. We added that feature to C# 4. The "proper" name for the feature is something like "Type Embedding with Type Equivalence", but everyone just calls it "No PIA".
The motivation for the feature is the one faced most obviously by Visual Studio Tools For Office developers. VSTO developers write C# code that customizes, say, an Excel spreadsheet with some managed code. They communicate with Excel via a managed interface, but of course Excel actually exposes a set of COM interfaces. To bridge that gap, the Office team supplies a Primary Interop Assembly, or PIA. The PIA is a huge external library that contains nothing but metadata that describes how the managed interfaces correspond to the unmanaged interfaces of the COM objects.
The problem is that the Office team does not by default install the PIA when your customer buys Office! Therefore you have to ship the PIA with your customization. And the PIA is so large, it is often many times the size of the customization, which makes your download longer. And so on; it's not an ideal situation by any means.
The No-PIA feature allows the compiler to link only the portions of the PIA you actually use into your library, so that you do not have to ship the PIA with it.
Now, you might ask "what if I have two customizations that communicate with each other, and both use the IFoo interface from a PIA that I am not shipping?" The runtime identifies types by the assembly they came from, and so the two IFoo interfaces would be considered different types, and therefore not compatible.
The "No PIA" feature takes this into account as well. It does the same trick you use in COM to solve this problem: the assembly instructs the runtime to unify all interfaces that have the same GUID into the same logical type even if they come from different assemblies. This thereby explains the requirement that every interface that you use with "no PIA" has to be marked as though it were a COM interop interface with a GUID.
On the command line, use /L instead of /R to reference an assembly as a "no PIA" assembly.
Do a web search on "no PIA" and you'll find more information on this feature.
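To give a feel for what that marking looks like, here is a minimal sketch of an interface eligible for embedding (the GUID is made up); the consumer references the defining assembly with /L ("Embed Interop Types" in Visual Studio) instead of /R:
using System;
using System.Runtime.InteropServices;

// Marked as though it were a COM interop interface, with a GUID. The runtime
// unifies all embedded copies of this interface that share the same GUID
// into one logical type, even across assemblies.
[ComImport]
[Guid("3FA7D4A2-1C55-4B8E-9D0A-0123456789AB")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IFoo
{
    void DoWork();
}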
If you want to use that interface type in your code, the interface has to be visible to your code; otherwise your code won't compile.
You can write an adapter interface in your global dll for the original interface and use that everywhere.
It cannot be done statically but you can do it using reflection.
With C# 4 you can use the dynamic keyword.
However, I fail to see how not knowing the interface in advance is going to help you - how are you going to know which methods to call?
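For what it's worth, both routes look roughly like this (the type and assembly names are hypothetical):
using System;

class Demo
{
    static void Main()
    {
        // Load the type by its string name, so the huge dll is needed only
        // at runtime, never at compile time.
        Type t = Type.GetType("ExternalLib.SomeService, ExternalLib");
        object instance = Activator.CreateInstance(t);

        // Reflection: look the method up by name and invoke it.
        t.GetMethod("DoWork").Invoke(instance, null);

        // C# 4 alternative: dynamic defers member lookup to runtime.
        dynamic d = instance;
        d.DoWork();
    }
}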
You are trying to fool type identity. The CLR identifies a type by these properties:
Assembly display name
[AssemblyVersion]
[AssemblyCulture]
The assembly's PublicKeyToken value
The assembly's processor architecture (implicit)
The type's namespace name
The type's name.
Faking the type's namespace and name isn't difficult; the hard thing is faking the assembly properties. You are dead in the water if the assembly is strong-named (non-null PublicKeyToken) or if it is stored in the GAC - you can't get the substitute loaded. Faking the culture and architecture isn't hard to do; you'll have to get the display name and version right.
And of course, you'll have to get the interface declaration exactly right. Intentionally invoking DLL Hell like this is otherwise an Extremely Bad Idea. Not least because you can now never get the real assembly loaded.
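For reference, you can see most of these identity components spelled out in an assembly-qualified type name:
using System;

class Demo
{
    static void Main()
    {
        // Prints something like:
        // System.String, mscorlib, Version=4.0.0.0, Culture=neutral,
        // PublicKeyToken=b77a5c561934e089
        Console.WriteLine(typeof(string).AssemblyQualifiedName);
    }
}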

Dynamic Assembly Resolution/Management

Short Version
I have an application which utilizes a plug-in infrastructure. The plug-ins have configurable properties that help them know how to do their job. The plug-ins are grouped into profiles to define how to complete a task, and the profiles are stored in XML files serialized by the DataContractSerializer. The problem is that when reading the configuration files, the deserializing application has to have knowledge of all of the plug-ins defined in them. I'm looking for a way to handle the resolution of unknown plug-ins. See the proposed solution section below for a couple of the ideas I've looked into implementing, but I am open to just about anything (though I'd rather not have to reinvent the application).
Detail
Background
I've developed a sort of Business Process Automation System in C# 4 for internal use at the company I'm currently working for. It makes exhaustive use of 'plug-ins' to define everything (from the tasks that are to be performed to the definition of units of work) and relies heavily on a dynamic configuration model, which in turn relies on C# 4/DLR dynamic objects to fulfill jobs. It's a little heavy while executing because of its dynamic nature, but it works consistently and performs well enough for our needs.
It includes a WinForms configuration UI that uses Reflection extensively to determine the configurable properties/fields of the plug-ins, as well as the properties/fields that define each unit of work to be processed. The UI is also built on top of the BPA engine, so it has a thorough understanding of the (loose) object model put in place that allows the engine to do its job, which, coincidentally, has led to several user experience improvements, such as ad-hoc job execution and configure-time validation of user input. Again there is room for improvement; however, it seems to do its job.
The configuration UI utilizes the DataContractSerializer to serialize/deserialize the settings specified, so any plug-ins referenced by the configuration must be loaded before (or at the time of) configuration load.
Structure
The BPA engine is implemented as a shared assembly (DLL) which is referenced by the BPA service (a Windows Service), the Configuration UI (WinForms app), and a plug-in tester (Console application version of the Windows Service). Each of the three applications that reference the shared assembly only include the minimum amount of code necessary to perform their specific purpose. Additionally, all plug-ins must reference a very thin assembly which basically just defines the interface(s) that the plugin must implement.
Problem
Because of the extensibility model used in the application, there has always been a requirement that the config UI be run from the same directory (on the same PC) as the Service application. That way the UI always knows about all of the assemblies the Service knows about, so they can be deserialized without running into missing assemblies. Now that we are getting close to roll-out of the system, our network admins have demanded, for security purposes, that the Configuration UI be allowed to run remotely on any PC in our network. Typically this wouldn't be a problem if there were always a known set of assemblies to deploy; however, with the ability to extend the application using user-built assemblies, there has to be a way to resolve the assemblies from which the plug-ins can be instantiated/used.
Proposed (potentially obvious) Solution
Add a WCF service to the Service application to allow the typical CRUD operations against the configurations that instance of the service is aware of, and rework the configuration UI to act more like SSMS with a Connect/Disconnect model. This doesn't really solve the problem on its own, so we would also need to expose some sort of ServiceContract from the Service application to allow querying of the assemblies it knows about/has access to. That's fine and fairly straightforward; however, the question arises, "When should the UI find out about the assemblies that the Service is aware of?" On connect we could send all of the assemblies from the Service to the UI to ensure that it always knows about all of the assemblies the service does, but that gets messy with AppDomain management (potentially unnecessarily) and assembly version conflicts. So I suggested hooking into the AppDomain.AssemblyResolve/AppDomain.TypeResolve events to download only the assemblies that the client isn't aware of yet, and only as needed. This doesn't necessarily clean up the AppDomain management issues, but it definitely helps address the version conflicts and related issues.
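As a rough sketch of what I mean by that hook (the service call below is just a placeholder for the proposed WCF query):
using System;
using System.Reflection;

static class RemoteAssemblyResolver
{
    public static void Install()
    {
        // Fires only when normal resolution fails, so we download just the
        // assemblies the UI doesn't already know about, and only as needed.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            byte[] raw = DownloadFromService(new AssemblyName(args.Name));
            return raw == null ? null : Assembly.Load(raw);
        };
    }

    static byte[] DownloadFromService(AssemblyName name)
    {
        // Placeholder for the proposed WCF call to the Service application;
        // returning null lets the normal resolution failure proceed.
        return null;
    }
}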
Question
If you've stuck with me this long I applaud and thank you, but now I'm finally getting to the actual question here. After months of research and finally coming to a conclusion I am wondering if anyone here has had to deal with a similar issue and how you dealt with the pitfalls and shortcomings? Is there a standard way of handling this that I have missed completely, or do you have any recommendations based on how you have seen this successfully handled in the past? Do you see any problems with the proposed approaches or can you offer an alternative?
I'm aware that not everyone lives in my head so please let me know if you need further clarification/explanation. Thanks!
Update
I've given MEF a fair shake and feel that it is too simplistic for my purposes. It's not that it couldn't be bent to handle the plug-in requirements of my application; the problem is that doing so would be too cumbersome and dirty to be feasible. It is a nice suggestion and it has a lot of potential, but in its current state it just isn't there yet.
Any other ideas or feedback on my proposed solutions?
Update
I don't know if the issue I'm encountering is just too localized, if I failed to properly describe what I am trying to achieve, or if this question is just too unreasonably long to be read in its entirety; but the few answers I've received have been subtly helpful enough to help me think through the problem differently and identify some shortcomings in what I am after.
In short, what I'm trying to do is take three applications which in their current state share information (configuration/assemblies) using a common directory structure, and try to make those applications work across a network with minimal impact on usability and architecture.
File shares seem like the obvious answer to this problem (as @SimonMourier proposed in the comments), but using them translates into a lack of control and debuggability when something goes wrong. I can see them as a viable short-term solution, but long term they just don't seem feasible.
tl;dr, but I'm 90% sure you should take a look into MEF.
When I first saw it I was like "aah, another acronym", but you'll see it's very simple, and it's built into .NET 4. Best of all, it even runs seamlessly on Mono, and it's a matter of less than an hour (including coffee break) between hearing about it and compiling hello worlds to get used to the features. It's really that simple.
Basically, you "export" something in an assembly and "import" it into another (all via simple attribute decorations), and you choose where to search for it (example, on the applications directory, plug-ins folder, etc).
Edit: what if you try to download and load (and possibly cache) plugins on-the-fly on configuration load?
I think that you could be overlooking a relatively simple solution that derives somewhat from the Microsoft web.config approach:
Have two sections in the config file:
Section 1 contains enough information about the plugin (e.g. name, version) to allow you to load it into an app domain.
Section 2 contains the information serialized by the plugin.
On loading the plugin, pass the information in section 2 and let the plugin deserialize it according to its needs.
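Sketched roughly (all type and member names here are made up):
using System;
using System.Reflection;

// Section 1 of the config: enough identity to load the plug-in.
public class PluginEntry
{
    public string AssemblyName { get; set; }
    public string TypeName { get; set; }
    // Section 2: an opaque payload only the plug-in understands.
    public string SettingsXml { get; set; }
}

public interface IConfigurablePlugin
{
    // The plug-in deserializes its own settings according to its needs.
    void LoadSettings(string settingsXml);
}

static class PluginLoader
{
    public static IConfigurablePlugin Load(PluginEntry entry)
    {
        Assembly assembly = Assembly.Load(entry.AssemblyName);
        var plugin = (IConfigurablePlugin)Activator.CreateInstance(
            assembly.GetType(entry.TypeName));
        plugin.LoadSettings(entry.SettingsXml);
        return plugin;
    }
}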
Maybe you can divide this problem into two parts:
the administrator allows users to download one of the predefined configurations (sets of libraries), and MEF helps to inject the required dependencies
each activity from the user should pass through a security proxy; plug-in modules are not allowed to call the BL directly. The proxy could match a custom security attribute against the allowed activities.
E.g.:
[MyRole(Name = new[] { "Security.Action" })]
void BlockAccount(string accountId){}
[MyRole(Name = new[] { "Manager.Action" })]
void CreateAccount(string userName){}
[MyRole(Name = new[] { "Security.View", "Manager.View" })]
List<> AcountList(Predicate p){}
and allow for AD groups (some abstract description):
corp\securityOperators = "Security.*" //allow calls to all security manipulation
corp\HQmanager = "Manager.View" //allow only view access
corp\Operator = "Manager.*"
I'm not sure I completely understand the problem but I think this situation calls for "type-preserving serialization" - that is, the serialized file contains enough type information to deserialize back to the original object graph without any hints from the calling application as to what types are involved.
I've used Json.NET to do this and I can highly recommend the library for type-preserving serialization of object graphs. It looks like the NetDataContractSerializer can also do this, per the MSDN Remarks:
The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.
I chose Json.NET because it can serialize POCOs without any special attributes or interfaces. Both Json.NET and the NetDataContractSerializer allow you to use a custom SerializationBinder - in here you could put any logic regarding loading assemblies that may not yet be loaded.
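For illustration, a minimal sketch of the Json.NET route (PluginConfig is a stand-in for whatever the plug-ins serialize):
using Newtonsoft.Json;

// Stand-in for a plug-in's configuration object.
public class PluginConfig
{
    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        var settings = new JsonSerializerSettings
        {
            // Embeds CLR type names in the JSON, so deserialization can
            // rebuild the original graph without hints from the caller.
            TypeNameHandling = TypeNameHandling.All
            // A custom SerializationBinder could also be assigned here to
            // load plug-in assemblies that aren't resolved yet.
        };

        string json = JsonConvert.SerializeObject(
            new PluginConfig { Name = "demo" }, settings);
        var graph = (PluginConfig)JsonConvert.DeserializeObject(json, settings);
    }
}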
Unfortunately, changing serialization schemes might be the "breaking-est" change to suggest because all your existing files will become incompatible. You might be able to write a conversion utility that deserializes a file using the old method and serializes the resulting object graph using the new method.

C# .net - Dll Naming Conventions in Interface / Implementation Scenarios

I need some suggestions for clever naming of dlls, and/or a hint as to whether any naming conventions exist for the following scenario.
I have an interface definition and several types used by that interface definition encapsulated in one dll. Then I have an implementation of this interface in another dll.
The "special" thing about this situation is that I do not develop an application but rather a collection of functionalities (a framework) that is used by multiple applications of my company. These functionalities are accessed through their interface definitions via MEF, so the user of this framework usually does not know, nor is it important to him, which dll contains the implementation (since he only needs to know and reference the dll containing the interface definition). Only in really uncommon cases might he want to know how the dll containing the implementation is named, because he wants to replace the implementation with his own.
I created some requirements for my dll naming:
The dll with the interface definition needs to be well named because this is the dll the user is referencing.
The namespace of the interface definition dll needs to be very well named (and very intuitive), so the user really expects this definition in this namespace; the optimum would be for the namespace to equal the solution structure.
The implementation dll must be named very clearly, so the user can identify the dll in the working directory to remove it and install his own implementation.
The namespace of the implementation does not really matter since its only used internally.
The dll names should not be too long.
First, I came up with the idea to group all interface definitions of a specific type in one dll. That would create a very well named namespace, since I can group, for example, all "services" in a dll called MyCompany.Services.dll, put all definitions and types in its root (which creates the namespace MyCompany.Services), and therefore keep the solution structure equal to the namespaces (whether that is useful or not might also be discussed here).
But that generates a big problem:
If I sign the dlls and change something in my MyCompany.Services.dll, I have to recompile all implementation dlls even if this change only affects one of these n dlls. At that point I thought about putting each interface definition and its types in its own dll (as described at the beginning of this post).
My 2 cents worth:
Use a common top-level namespace so everything that's part of your framework can be easily identified. You might not "need" it but it just seems silly not to.
Use descriptive names. Things like Basti.SpecialFramework.Interfaces.DataAccess.Customer would make a lot of sense to me.
Structuring the namespace around the structure / architecture of your system makes lots of sense.
Having a well structured namespace tree will help keep the interpretation of key words / terms consistent, e.g.: Basti.SpecialFramework.Interfaces.DataAccess.Customer vs Basti.SpecialFramework.BaseImplementations.DataAccess.Customer
Treat it a bit like developing Information Architecture or doing usability testing: come up with a draft set of names and see if your friends can figure it out. Do the equivalent of a Card Sorting exercise - do you structure it [Layer].[Interface / BaseImplementation] or [Interface / BaseImplementation].[Layer]? (I'm not sure exactly how you would do the card sorting exercise, but I can see some strong parallels.)
Descriptive names tend to be long, which goes against your last point; I agree long names might not be "easy" and "convenient", but if they clearly convey what I need to know I would be okay with that.
By the way: I'm sure naming conventions exist for DLLs and assemblies - I just don't know them off the top of my head. I guess I could Google / Bing them, but you've probably done that already.
