Runtime throws method not found - referenced assembly version mismatch - c#

I have a class library that references a specific third-party DLL.
This DLL changes versions very often, but is always backwards compatible within the same major version. My class library uses specific types from the DLL and executes various methods contained in these third-party DLLs.
I have to rethink the design of the app here: I have one issue now, but will have a bigger one once there are multiple major versions of the third-party DLL (the set of major versions will be limited, three to be specific).
How can I make sure I can use a different version of the referenced assembly than the one that was originally used at compile time? At runtime a DLL with a higher minor version is now loaded, but it throws a 'Method not found' exception. I've removed the tag, and I've also tried Assembly.Load to simulate the behavior when specifying the newer DLL, but both yield the same result: method not found.
What is the best approach to supporting three major (!) versions of a referenced DLL within a single class library? Because of how the class library is used, it's not possible either to let the user choose the correct version or to build three different DLLs.

If there is a chance of your vendor breaking binary compatibility and you cannot influence what they are doing, there is no simple solution to this problem. Late binding is one workaround in C#, using reflection or dynamic, but it comes at the cost of run-time performance and greatly increased complexity in your code.
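For illustration, a minimal late-binding sketch; the assembly path, type name and method name below are placeholders, not the vendor's real API:

    using System;
    using System.Reflection;

    class LateBindingSketch
    {
        static void Main()
        {
            // Load whatever version of the vendor DLL is deployed next to the app.
            Assembly vendor = Assembly.LoadFrom(@"libs\Vendor.Sdk.dll");
            Type reportType = vendor.GetType("Vendor.Sdk.ReportGenerator");
            object generator = Activator.CreateInstance(reportType);

            // 'dynamic' defers member resolution to run time, so this compiles
            // even though the compiler never saw this version of the assembly.
            dynamic d = generator;
            string viaDynamic = d.CreateReport("2024-01");

            // The pure-reflection equivalent, useful when the signature itself varies.
            MethodInfo create = reportType.GetMethod("CreateReport", new[] { typeof(string) });
            string viaReflection = (string)create.Invoke(generator, new object[] { "2024-01" });

            Console.WriteLine(viaDynamic == viaReflection);
        }
    }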
If you are set on building this integration layer anyway, you would have to code against all three versions to cover the known permutations between them, and Adapter might be a good design pattern to start looking into. You would make sure that the differences and the entities from the external library do not spill out of the integration layer into your own business logic, so a fair amount of conversion logic is required to isolate the fragile library from the rest of the code. Variations such as differences in types, methods, signatures, behavior and exceptions thrown would have to be encapsulated and confined.
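A rough sketch of what such an adapter layer could look like; the IReportService interface, the vendor namespaces (VendorV2, VendorV3) and their members are invented here purely for illustration:

    using System;

    // Your own abstraction - the rest of the code base depends only on this.
    public interface IReportService
    {
        string CreateReport(string period);
    }

    // One adapter per supported major version of the vendor library.
    public class ReportServiceV2 : IReportService
    {
        private readonly VendorV2.ReportEngine _engine = new VendorV2.ReportEngine();

        public string CreateReport(string period)
        {
            try
            {
                return _engine.Generate(period);
            }
            catch (VendorV2.EngineException ex)
            {
                // Translate vendor exceptions at the boundary so they never
                // leak into your business logic.
                throw new InvalidOperationException("Report generation failed.", ex);
            }
        }
    }

    public class ReportServiceV3 : IReportService
    {
        private readonly VendorV3.Reports _reports = new VendorV3.Reports();

        public string CreateReport(string period)
        {
            // Same contract, different underlying signature in this major version.
            return _reports.Build(new VendorV3.ReportRequest { Period = period });
        }
    }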
You would also have to redesign your application or presentation layer, whichever depends on this hostile library, to deal with the differences accordingly and make it rely only on your own wrappers.
Your integration tests would have to be executed against all three versions of the vendor library constantly, possibly every single time you check code in to the repository, so you have enough protection and agility to move forward. And since the vendor keeps working on the library, you have to allocate enough time for the ongoing maintenance and support of the compatibility layer.

Related

determine minimum compatible API version

Given that the THINC API is written to be backwards compatible, and lower versions enable a greater number of potential machines to run a given application, everyone should strive to use the minimum version necessary.
Does anyone know if there is an easy way determine what the minimum version necessary for a given application would be?
For example, I've got an application that only uses 3 API functions:
GetHourMeterCount, GetActiveProgramName, and GetMachiningReport
How do I know what API version I can use?
I can think of several possibilities:
For your situation, the easiest solution I can think of is just to check the .chm documentation for your earliest THINC API version to see if it supports GetHourMeterCount, GetActiveProgramName, and GetMachiningReport. If not, continue checking later versions until you find one that does.
If you had a more complicated solution that used more THINC API functionality, a quick check would be:
1. Make sure the project builds cleanly.
2. Go into the project references and remove the reference to THINC API. Now you will have a compilation error everywhere THINC API is referenced.
3. Add a reference to your earliest version of THINC API.
4. Rebuild. If there are still compiler errors, then your code references one or more THINC methods that do not exist in this version. Move on to the next version and rebuild.
5. Once your project builds cleanly again, you have found the version of THINC API to reference.
You could also code a tool that examines your code (via code analysis) or your compiled assembly (via reflection) to find all THINC API functionality, and then looks at multiple versions of THINC API to find the earliest that implements all of the functionality. That shouldn't be difficult, but still seems like overkill.
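A hedged sketch of such a check using plain reflection; the three method names come from the question above, but the folder of candidate THINC API DLLs and the program structure are assumptions:

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    class MinimumApiVersionFinder
    {
        // Method names taken from the question above.
        static readonly string[] RequiredMethods =
            { "GetHourMeterCount", "GetActiveProgramName", "GetMachiningReport" };

        static void Main()
        {
            // Assumed layout: one copy of each THINC API version in this folder.
            foreach (var dll in Directory.GetFiles(@"C:\ThincApiVersions", "*.dll").OrderBy(p => p))
            {
                var assembly = Assembly.LoadFrom(dll);

                // Does any public type in this version expose all required methods?
                bool supportsAll = assembly.GetExportedTypes().Any(t =>
                    RequiredMethods.All(name => t.GetMethods().Any(m => m.Name == name)));

                Console.WriteLine($"{Path.GetFileName(dll)}: {(supportsAll ? "OK" : "missing methods")}");
            }
        }
    }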
For your purposes, it would also be convenient to have a table of all THINC API methods, vs. the versions in which those methods are supported. I don't have such a table, but someone conceivably might.
All of these methods just check whether certain functions exist in a given version of THINC API. They won't warn you about any breaking changes or different behavior between different versions. That requires knowing the API, checking the release notes, and/or testing.

c# how to organize dll tools?

I would like to know the best way to organize our tool DLLs.
For example, I could have one project that contains all the utility classes the company has implemented: classes for working with strings, classes for working with files, and so on. That is, a generic DLL with tools that I can use in many projects - a generic myCompany.Utils.dll, for example.
The other way is to have many DLLs, one for each type of work. For example, I could have myCompany.Utils.Files, myCompany.Utils.Strings, etc.
With the first option I would have only one DLL, but if two people need to add or fix something, only one of them can work on it at a time; if both work at the same time, when one person compiles the new DLL the other person's work is lost.
If I have many DLLs, one for each kind of work, it is less likely that two people will need to modify the same DLL, because each person can be responsible for one of them. However, the problem with this approach is that when I deploy the application, I end up with a lot of DLLs in the program directory.
So I would like to know what the best practice is when creating DLLs.
Thanks.
From your question it is clear that you are not using a version control system. Try checking out something like TortoiseSVN - then you will have no problems with several people working on the same piece of software.
Regarding DLLs - I would go with multiple DLLs, each containing only a specific type of utility methods. It will make deployment simpler. If you did the opposite, that is, had a single DLL for all your utility methods, you would need to redeploy it every time anything in it changed - if you change the code responsible for working with files, you have to ship the whole DLL, which contains unrelated code too. Whereas with multiple DLLs, you only need to redeploy the one that has actually changed.
Basically it's going to depend on the number of classes, interfaces and delegates that your library is going to own.
Imagine you have 3000 classes in your Company.Shared.dll and you're developing a Web application. 600 of the 3000 classes are for mobile development. What's the chance of using them in your Web application? Zero.
So why would you deploy a 3000-class assembly for Web application development if you only need the Web-related classes? Such a library is larger than a Web-specific one, since it contains code for a lot of things that would never be used in Web development.
For that reason you'd have a Web-specific shared library called Company.Shared.Web.dll and one common to all development scenarios called Company.Shared.dll.
You can apply the same logic to other cases and scenarios.
Apart from the version control system (which should be a must when more than one developer works on a project), it's really crazy that your organization allows everyone to change the base library (or libraries) on which every other project depends. This will evolve into a mess very quickly.
In my shop only one/two people are allowed to change anything there. And these guys are the most skilled and valuable colleagues.
As for subdividing the functionality in the library, I am not concerned about having one big DLL. It's true that I need to redistribute everything even when we change a little bit of code (and when your code is mature and well tested this happens very rarely), but keeping track of every DLL shipped for this project or that project outweighs the cost of the single DLL.

How can I test the backward compatibility of API between .net assemblies

I have an assembly that provides an API and is used by some other assemblies. I need to verify that a newer version of API dll is still compatible with the older assemblies that were using the older version of API.
I've found a couple of questions that ask the same, but there are no answers that resolve my problem:
Tool to verify compatibility of a public APIs
Tool for backwards compatibility for the C#/.NET API
Suggested tools can only compare two assemblies and say whether there are possible breaking changes in the API, but not whether the newest API really breaks an older assembly that uses it.
I'd like to find a tool or write a test that will be able to check whether each of the older dlls can work with my new API dll.
As for the API changes, most likely I will only extend it, but even so that can still break code in older assemblies. Some examples of such changes can be found here:
A definite guide to API-breaking changes in .NET
.NET: with respect to AssemblyVersion, what defines binary compatibility?
For now the only solution I see is to compile the source code of the older assemblies against the newest API, but I would like to work only with the assemblies and make the check part of my unit tests. Is there a better way to handle that?
edit:
I'm looking for a tool that can automate the process of verifying backward compatibility between .NET assemblies (command line, or with some API too).
What you want is to do a diff and generate the list of breaking changes. Then you want to check whether any of your assemblies uses any of the broken APIs. You can do this with the ApiChange tool, which does the diff and finds any affected users.
To make it more concrete: if you have removed a method from an interface, then you need to find all classes that implement this method as well as all code that calls it, either through the interface or through an implementing class.
ApiChange can search for implementers and users of specific methods on the command line with the commands -whoimplementsinterface and -whousesmethod. This is not automated at the command line, but you can use ApiChange.Api.dll directly to automate these queries.
Edit1:
I just forgot: the ApiChange tool actually already has the functionality you are interested in. It is the option
-ShowrebuildTargets -new -old [-old2 ] -searchin
We used it in our department with good results. The only gotcha is the XML IntelliSense files. If another target does not use the removed method but references it inside the XML doc comments, the compiler will emit a warning that a non-existent method was referenced. This is quite hard to catch and would involve parsing the IntelliSense documentation files as well. But this is quite an edge case.
I've spent the day looking around for an answer to this. It seems the tools referenced in the related (unhelpfully closed) questions are now dead, or as good as. But I've just taken a look at Telerik's assembly diff tool JustAssembly, and it looks much better than rolling your own, which, judging by their library, would be a whole heap of work and likely to go wrong.
They have a UI, which isn't much help from the point of view of integrating into your CI build and is pretty basic, but you can build the library from source, which I've just done, and it looks like it has everything you need to get up and running pretty quickly.
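For a taste of what these tools do under the hood, here is a minimal sketch of a public-surface diff between two versions of an assembly. It assumes the System.Reflection.MetadataLoadContext package; the file paths are placeholders, and a real check would also cover types, properties, events and parameter changes:

    using System;
    using System.Linq;
    using System.Reflection;

    class ApiSurfaceDiff
    {
        static void Main()
        {
            // Any public method present in the old version but not the new one
            // is a candidate breaking change.
            var removed = PublicMethods(@"old\MyApi.dll").Except(PublicMethods(@"new\MyApi.dll"));

            foreach (var signature in removed)
                Console.WriteLine("Removed or changed: " + signature);
        }

        static string[] PublicMethods(string assemblyPath)
        {
            // Each version gets its own load context so identical assembly
            // identities do not collide; dependencies of the API assembly may
            // need to be added to the resolver paths as well.
            var resolver = new PathAssemblyResolver(new[] { assemblyPath, typeof(object).Assembly.Location });
            using var context = new MetadataLoadContext(resolver);

            return context.LoadFromAssemblyPath(assemblyPath)
                          .GetExportedTypes()
                          .SelectMany(t => t.GetMethods().Select(m => t.FullName + "." + m))
                          .ToArray();
        }
    }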

Link seams in .NET

I just recently finished Michael Feathers' book Working Effectively with Legacy Code. It was a great book on how to effectively create test seams and exploit them to get existing code under test.
One of the techniques he talks about is using "link seams". Basically the idea is that if you have code that depends on another library, you can use the linker to insert a different library for testing than for production. This would allow you to sense test conditions through a mock library, or avoid calling into libraries that have real-world effects (databases, emails, etc.).
The example he gave was in C++. I'm curious if this technique (or something similar) is possible in .NET / C#?
Yes it is possible in .Net. In the simplest case, you can just replace an assembly with another one of the same name.
With a strongly named assembly, you should change the version number and then configure assembly bindings to override the compile time "linked" version. This can be done on an enterprise, machine, user or directory level.
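As a sketch, here is what such a binding redirect looks like in app.config, forcing older compile-time references onto a newer build of the library; the assembly name, public key token and version numbers are placeholders:

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <!-- Placeholder identity: substitute the real name and token. -->
            <assemblyIdentity name="SomeLibrary"
                              publicKeyToken="1234567890abcdef"
                              culture="neutral" />
            <!-- Requests for any 1.x version are redirected to 2.0.0.0. -->
            <bindingRedirect oldVersion="1.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>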
There are some caveats, related to security. If the assembly you wish to substitute has been strongly named, then you will need to recreate the same public key in signing the assembly.
In other words, if you as the application developer do not want your libraries "mocked" (or perhaps replaced with malicious code) then you must ensure that the assembly is signed and the private key is not publicly available.
That is why you cannot mock DateTime -- because Microsoft has strongly named the core libraries of .Net.
That sounds a bit like what Typemock Isolator offers, in particular its claimed ability to rip out and mock existing types. But I've never used it ;-(
As an example, DateTime.Now is something that shouldn't be mockable, right?
(Image: Typemock example of mocking DateTime - http://site.typemock.com/storage/feature-images/dateTime.png?__SQUARESPACE_CACHEVERSION=1252774490561)

What are the advantages of loading DLLs dynamically?

Looking for the advantages of loading DLLs dynamically as opposed to letting your application load the DLLs by default.
One advantage is for supporting a plugin architecture.
Suppose, for example, you want to write a service that performs different types of tasks on a scheduled basis. What those tasks do isn't actually relevant to your core service, which is just there to kick them off at the right time. And it's more than likely you'll want to add support for other types of tasks in the future (or another developer might). In that scenario, a plugin approach lets you drop in more (interface-compatible) DLLs, which can be coded independently of the core service. So adding support for a new task does not require a new build/deployment of the whole service. If a particular task needs to change, just that DLL needs to be redeployed, and it is then picked up automatically.
It also means other developers don't need to be concerned with the service itself; they just need to know which interface to implement so the plugin can be picked up.
We use this architecture in our processing applications to handle differences that our different customers require. Each DLL has a similar structure and implements the same interface and entry method "Process()". We have an XML file that defines which class to load based on the customer, and whether there are more methods besides Process that need to be called. Performance should not be an issue until your transaction count gets very high.
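A minimal sketch of that pattern; the ITask interface, the Process() signature and the plugin folder are assumptions rather than a standard API, and the plugin DLLs must reference the same assembly that defines the interface:

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    // The contract every plugin implements.
    public interface ITask
    {
        void Process();
    }

    public static class PluginLoader
    {
        public static void RunAll(string pluginDirectory)
        {
            foreach (var dll in Directory.GetFiles(pluginDirectory, "*.dll"))
            {
                var assembly = Assembly.LoadFrom(dll);

                // Find every concrete type in the plugin that implements ITask.
                var taskTypes = assembly.GetTypes()
                    .Where(t => typeof(ITask).IsAssignableFrom(t) && !t.IsAbstract);

                foreach (var type in taskTypes)
                {
                    var task = (ITask)Activator.CreateInstance(type);
                    task.Process();
                }
            }
        }
    }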
Loading shared objects dynamically is the mechanism that allows plugins to be added ad hoc to running applications. Without plugins, a modular application would have to be put together at link time or compile time (look at the code of nginx).
Your question is about C#/.NET, and in this world dynamic DLL loading requires advanced programming skills. That can cancel out many of the potential benefits of dynamic DLL loading: you simply end up writing a lot of 'low-level' code.
In C++/Win32 I often have to load a DLL dynamically when that DLL has some new API function which is not available on older operating systems. In that case I need to check the availability of the API at runtime; I cannot just link against the DLL, because that would cause application loading errors on legacy operating systems.
As mentioned, there are also benefits in a plugin-based environment: you have more control over your resources when loading DLLs dynamically. COM is essentially a good example of dynamic DLL handling.
If you only load the DLLs you need, then the startup time of the application should be faster.
Another reason to load DLLs dynamically is robustness.
It is possible to load a DLL into what is known as an AppDomain. An AppDomain is basically a sandbox container that you can put things into (either individual DLLs or whole EXEs) to run in isolation, but within your application.
Unless you call into a type contained within an AppDomain, it has no way to interact with your application.
So, if you have a dodgy third party DLL, or a DLL that you don't otherwise have the source code for, you can load it into an AppDomain to keep it isolated from your main application flow.
The end result is that if the third-party DLL throws a wobbly, only the AppDomain, and not your entire application, is affected.
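A small sketch of that idea on the .NET Framework (AppDomain-based isolation is not available on .NET Core/.NET 5+); the file path and type name are placeholders, the plugin type must derive from MarshalByRefObject so calls cross the boundary via a proxy, and the ITask interface is the hypothetical one from the sketch further up:

    using System;

    class SandboxExample
    {
        static void Main()
        {
            AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox");
            try
            {
                var task = (ITask)sandbox.CreateInstanceFromAndUnwrap(
                    @"plugins\ThirdParty.dll", "ThirdParty.SuspectTask");
                task.Process();
            }
            catch (Exception ex)
            {
                // A failure here leaves the host application intact.
                Console.WriteLine("Plugin failed: " + ex.Message);
            }
            finally
            {
                AppDomain.Unload(sandbox);
            }
        }
    }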
