Check if all required DLLs and their dependencies exist on target - C#

Is there a good way in C# to check if all referenced DLLs are installed on the target machine?
What I want to achieve is that my software doesn't start if one of the referenced DLLs is missing on the target machine. That way I can be sure that my features won't break at runtime because a DLL can't be found.
My idea: create a separate AppDomain during startup and load ALL referenced DLLs into it. If loading fails, I'd shut down my software; if it succeeds, I'd unload the separate AppDomain and proceed to start the software. Is it a good idea to perform the check in such a manner, or are there known problems/pitfalls around it?
I know I haven't posted a full implementation (a rough sketch of the idea is below); I mainly want to gather feedback on whether the approach is a good one or not.
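A rough sketch of what I have in mind (assuming the .NET Framework, where secondary AppDomains can be unloaded; note that GetReferencedAssemblies() only lists direct managed references the compiler kept, and native DLLs used via P/Invoke are not covered at all):

    using System;
    using System.Collections.Generic;
    using System.Reflection;

    static class StartupDependencyCheck
    {
        // Load every directly referenced assembly inside a throwaway
        // AppDomain and return the names that could not be resolved.
        public static string[] FindMissingAssemblies()
        {
            AppDomain probe = AppDomain.CreateDomain("DependencyProbe");
            try
            {
                probe.SetData("entry", Assembly.GetEntryAssembly().FullName);
                probe.DoCallBack(ProbeReferences); // runs in the probe domain
                return (string[])probe.GetData("missing");
            }
            finally
            {
                AppDomain.Unload(probe); // discard everything that was loaded
            }
        }

        private static void ProbeReferences()
        {
            var missing = new List<string>();
            var entry = Assembly.Load((string)AppDomain.CurrentDomain.GetData("entry"));
            foreach (AssemblyName reference in entry.GetReferencedAssemblies())
            {
                try { Assembly.Load(reference); }
                catch (Exception) { missing.Add(reference.FullName); }
            }
            AppDomain.CurrentDomain.SetData("missing", missing.ToArray());
        }
    }

The idea would be to call FindMissingAssemblies() first thing in Main() and exit if the returned array is non-empty.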
Thx

Related

How to call tool DLLs in C# when the DLL-path is different on the target PC?

I might be a bit stupid, but I want to create a tool in Visual Studio in C# and call third-party tools via their API DLLs. The only topics I found here deal with one of the two methods I already know:
Compile time: add a reference to "C:\FooTool\foo.dll" in my project + "using fooToolNamespace.fooToolClass" in my code --> I can "naturally" use the classes of the DLL and will even get full IntelliSense support if a suitable XML file ships with the DLL. Compile-time checks are also done for my usage of the DLL.
Dynamic (run time): calling e.g. Assembly.LoadFile(@"C:\FooTool\foo.dll") and then using reflection on it to find functions, fields and so on --> no IntelliSense, no compile-time checks
So I actually have the DLL at hand and thus option 1) would be nice during development. But if my tool is used on a different PC, the third-party DLL might be in a different path there, e.g. "C:\foo\foo.dll" and "C:\bar\foo.dll".
In my understanding, using a copy of "foo.dll" will not work, because "foo.dll" might have dependencies, e.g. requiring other files from the FooTool directory. Thus, in my understanding, I have to call the DLL that is "installed" on the target PC and not a local copy of it.
So can I somehow change the path where my tool accesses the "foo.dll" at runtime and still use method 1) during development?
Or is there another way of doing things?
Or am I just dumb and there is a simple solution for all this?
Thanks a lot for the help and have a great day
Janis
To be able to use option 1 (a referenced DLL), you need to put the DLL somewhere "where your EXE (or, more precisely, the Assembly Resolver) can find it" on the customer's PC.
So where does the assembly resolver look for your DLL?
In the directory where the EXE resides (for desktop/console applications) or the bin subdirectory (for web applications). Since you mention that your DLL requires other dependencies, you'd need to copy them to that location too.
The Global Assembly Cache (GAC). If your dependency supports this, installing it to the GAC ensures that it can be found by your application.
These two are the "supported" scenarios. There is also the possibility to tweak the assembly resolver to look into other directories as well, but that should be reserved for special cases where the other two options failed. (We had such a case and solved it with a custom AssemblyResolve handler on the application domain.)
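A minimal sketch of such a handler (the C:\FooTool path is a placeholder; in a real installation you would read the tool's install directory from the registry or a config file). It must be registered before the runtime first needs a type from the DLL, so keep foo types out of the method that wires it up:

    using System;
    using System.IO;
    using System.Reflection;

    static class Program
    {
        static void Main()
        {
            // Wire this up before anything touches a type from foo.dll.
            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                string fileName = new AssemblyName(args.Name).Name + ".dll";
                string candidate = Path.Combine(@"C:\FooTool", fileName);
                return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
            };

            // ... rest of the application. If Main itself used foo types,
            // the JIT could need foo.dll before the handler exists.
        }
    }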

How to avoid 'Unable to load DLL' scenarios

I'm trying to build a game using MonoGame. MonoGame ships with its own set of DLLs, but inside one of those DLLs is a dependency on another DLL (which might not exist because it was never installed).
More specifically, when trying to use a method/class from MonoGame's DLLs
GamePad.GetState(PlayerIndex.One).Buttons.Back
This error is produced:
Unable to load DLL 'xinput1_3.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)
The fix for this error isn't difficult. The user needs to install the correct system requirements.
How do I tell this to the user from the front end? My aim is to inform the user that 'Gamepad controllers are disabled for this because xxx is not installed on your system', while carrying on running the game - as gamepads are optional.
Is there a way I could have caught this issue (i.e. is there a way I could have checked to see if this DLL exists - without being very specific of its location/filepath) before attempting to utilize the method/class?
Is having a try-catch block around the whole application the best approach?
Are there better ways to handle the (common?) 'Unable to load DLL' cases?
My main goal is to ignore any DLL dependency issues as I plan to distribute this game without installers. Plus any missing non-game-breaking DLLs should just be ignored.
One solution could be to programmatically check all system folders for the required DLLs during startup and alert the user if any are missing.
EDIT: There is a better solution: check with the LoadLibrary() function (see Check if a DLL is present in the system).
You can check the list of all the DLLs you need before running the app.
Of course, you can always rely on try/catch for DLLs you didn't know about.
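A minimal sketch of the LoadLibrary() check mentioned above, done via P/Invoke (the xinput1_3.dll probe at the end matches the error in the question):

    using System;
    using System.Runtime.InteropServices;

    static class NativeDllCheck
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        private static extern IntPtr LoadLibrary(string fileName);

        [DllImport("kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        private static extern bool FreeLibrary(IntPtr module);

        // True if the native DLL can be found via the normal Windows
        // search order, so no hard-coded file path is needed.
        public static bool IsAvailable(string dllName)
        {
            IntPtr handle = LoadLibrary(dllName);
            if (handle == IntPtr.Zero) return false;
            FreeLibrary(handle);
            return true;
        }
    }

    // At startup, e.g.:
    // bool gamepadSupport = NativeDllCheck.IsAvailable("xinput1_3.dll");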

What are my options in a .NET DLL hell scenario involving multiple private assemblies with different versions?

I'm writing a plug-in for a 3rd party application (.NET). This application lets me choose the plug-in (as a .dll library file) to load. However, if I have two versions of the same library (same name, different directories) and try to load one after the other, it only loads the first plug-in and treats the second as if it were the first. In other words, if the first plug-in is supposed to show a message box saying "First plug-in" and the second plug-in is supposed to show a message box saying "Second plug-in," then loading the second plug-in after the first will actually show a message saying "First plug-in" (i.e., the second plug-in was never actually loaded).
After searching and reading online, I believe that the problem is that the 3rd party application loads its plug-ins into its primary AppDomain. Therefore, plug-in libraries are never unloaded (become locked?) and subsequent attempts to load a plug-in with the same name simply uses the library that's already been loaded. I thought perhaps signing my plug-in libraries would fix the problem, but unfortunately, I'm unable to sign them because I depend on a .dll provided by the 3rd party application and it is not signed. Also, I cannot change the 3rd party application's config file, so I cannot play around with probing.
Our current solution is to re-name the assembly for every version of the plug-in library we have (for example, "PlugIn-1.0.dll" and "PlugIn-2.0.dll"), including re-naming all their dependent assemblies. I don't mean just changing their file name, but changing the AssemblyName property and re-compiling. This works, but I'd like to see if there's a cleaner solution. It wouldn't be so bad if it was just the plug-in assembly name we had to change, but we are also forced to change all their dependent .dll's (because different plug-ins may use different versions of these .dll's as well). I tried creating a config file for the plug-in library to change the probing directory, but this doesn't work. It looks like it is the application itself that does the probing, not the library that depends on the .dll's (am I correct in inferring this?)
Finally, I tried having my plug-in create an AppDomain and load its dependent .dll's into it, but unfortunately my plug-in directory location (and dependent .dll's) must be on a remote location relative to the 3rd party app. There are security/permission issues with loading assemblies over network locations in .NET 4.0 (which I'm using) that I haven't been able to solve.
What are my options? Thanks in advance.
I think you're knocking on the right door with spinning up your own AppDomain. That is the only way I know of to have different versions of the same assembly loaded into the same process.
And yes - as soon as a .NET appdomain has loaded an assembly, it will not load another version of that assembly. Whoever is first wins. And of course, there is no way to unload an assembly once it has been loaded.
As for your permission issues.. Can you copy the assemblies from that network location to a user folder (where the user has write permissions) and load them from there? I've been able to do this successfully for an auto-updating application.
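A rough sketch of that copy-then-load workaround (all paths and names here are placeholders):

    using System;
    using System.IO;
    using System.Reflection;

    static class PlugInCache
    {
        // Copy the plug-in and its dependencies from the network share to a
        // per-user folder, then load the local copies instead.
        public static Assembly LoadPlugInLocally()
        {
            string networkDir = @"\\server\share\PlugIns\PlugIn-2.0";
            string localDir = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                "PlugInCache", "PlugIn-2.0");
            Directory.CreateDirectory(localDir);

            foreach (string file in Directory.GetFiles(networkDir, "*.dll"))
            {
                // Overwriting fails if an old copy is currently loaded (locked);
                // using a versioned cache folder per release avoids that.
                File.Copy(file, Path.Combine(localDir, Path.GetFileName(file)), true);
            }

            return Assembly.LoadFrom(Path.Combine(localDir, "PlugIn-2.0.dll"));
        }
    }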

Reload a reference on the fly (C# .NET)

OK, here's the rundown...
I have a service that is running, and it references a DLL that changes a lot. This service will hopefully have multiple clients hitting it at once, so it would be inefficient to shut it down and recompile/reload with a new reference. I was wondering if there is any way the program could auto-detect a DLL with a later version and just drop the old one and load the new one without having to be shut down.
This can be difficult to achieve in a .NET application. Once a DLL is loaded into a particular AppDomain, there is no way to unload the DLL from that AppDomain. The only way to get the DLL out of the process is to unload the AppDomain itself.
You could achieve what you're trying to accomplish by having the DLL loaded into a secondary AppDomain, and restarting that AppDomain with the new DLL when you detect a change. This also involves using some advanced shadow copy features though to allow the DLL to be deleted while being used by the process.
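A minimal sketch of that secondary-AppDomain approach (the domain name and the proxy wiring are illustrative):

    using System;

    static class PluginHostControl
    {
        // Recreate the host domain to pick up a newer DLL. Shadow copying
        // makes the CLR run from a cached copy, so the file on disk stays
        // replaceable while the old version is still loaded.
        public static AppDomain RestartHost(AppDomain current)
        {
            if (current != null)
                AppDomain.Unload(current); // drops the old assembly version

            var setup = new AppDomainSetup
            {
                ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
                ShadowCopyFiles = "true" // the property really is a string
            };
            return AppDomain.CreateDomain("PluginHost", null, setup);
            // Calls into the DLL then go through a MarshalByRefObject proxy
            // created inside the returned domain (not shown here).
        }
    }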
The only way to do this is to move the code that uses the assembly to its own AppDomain. You can shut down the AppDomain and restart it with the new assembly.
See this question: How to reload an assembly in C# for a .NET Application Domain?
The Managed Extensibility Framework will allow you to do this.

How do I work with shared assemblies and projects?

To preface, I've been working with C# for a few months, but I'm completely unfamiliar with concepts like deployment and assemblies, etc. My questions are many and varied, although I'm furiously Googling and reading about them to no avail (I currently have Pro C# 2008 and the .NET 3.5 Platform in front of me).
We have this process and it's composed of three components: an engine, a filter, and logic for the process. We love this process so much we want it reused in other projects. So now I'm starting to explore the space beyond one solution, one project.
Does this sound correct? One huge Solution:
Process A, exe
Process B, exe
Process C, exe
Filter, dll
Engine, dll
The engine is shared code for all of the processes, so I'm assuming that can be a shared assembly? If a shared assembly is in the same solution as a project that consumes it, how does it get consumed if it's supposed to be in the GAC? I've read something about a post-build event. Does that mean the engine.dll has to be redeployed on every build?
Also, the principal reason we separated the filter from the process (only one process uses it) is so that we can deploy the filter independently from the process, so that the process executable doesn't need to be updated. Regardless of whether that's best practice, let's just roll with it. Is this possible? I've read that assemblies link to specific versions of other assemblies, so if I update the DLL only, it's actually considered tampering. How can I update the DLL without changing the EXE? Is that what a publisher policy is for?
By the way, is any of this stuff Google-able or Amazon-able? What should I look for? I see lots of books about C# and .NET, but none about deployment or building or testing or things not related to the language itself.
I agree with Aequitarum's analysis. Just a couple additional points:
The engine is shared code for all of the processes, so I'm assuming that can be a shared assembly?
That seems reasonable.
If a shared assembly is in the same solution as a project that consumes it, how does it get consumed if it's supposed to be in the GAC?
Magic.
OK, it's not magic. Let's suppose that in your solution your process project has a reference to the engine project. When you build the solution, you'll produce a project assembly that has a reference to the engine assembly. Visual Studio then copies the various files to the right directories. When you execute the process assembly, the runtime loader knows to look in the current directory for the engine assembly. If it cannot find it there, it looks in the global assembly cache. (This is a highly simplified view of loading policy; the real policy is considerably more complex than that.)
Stuff in the GAC should be truly global code; code that you reasonably expect large numbers of disparate projects to use.
Does that mean the engine.dll has to be redeployed on every build?
I'm not sure what you mean by "redeployed". Like I said, if you have a project-to-project reference, the build system will automatically copy the files around to the right places.
the principal reason we separated the filter from the process (only one process uses it) is so that we can deploy the filter independently from the process so that the process executable doesn't need to be updated
I question whether that's actually valuable. Scenario one: no filter assembly, all filter code is in project.exe. You wish to update the filter code; you update project.exe. Scenario two: filter.dll, project.exe. You wish to update the filter code; you update filter.dll. How is scenario two cheaper or easier than scenario one? In both scenarios you're updating a file; why does it matter what the name of the file is?
However, perhaps it really is cheaper and easier for your particular scenario. The key thing to understand about assemblies is assemblies are the smallest unit of independently versionable and redistributable code. If you have two things and it makes sense to version and ship them independently of each other, then they should be in different assemblies; if it does not make sense to do that, then they should be in the same assembly.
I've read that assemblies link to specific versions of other assemblies, so if I update the DLL only, it's actually considered tampering. How can I update the DLL without changing the EXE? Is that what a publisher policy is for?
An assembly may be given a "strong name". When you name your assembly Foo.DLL, and you write Bar.EXE to say "Bar.EXE depends on Foo.DLL", then the runtime will load anything that happens to be named Foo.DLL; file names are not strong. If an evil hacker gets their own version of Foo.DLL onto the client machine, the loader will load it. A strong name lets Bar.EXE say "Bar.exe version 1.2 written by Bar Corporation depends on Foo.DLL version 1.4 written by Foo Corporation", and all the verifications are done against the cryptographically strong keys associated with Foo Corp and Bar Corp.
So yes, an assembly may be configured to bind only against a specific version from a specific company, to prevent tampering. What you can do to update an assembly to use a newer version is create a little XML file that tells the loader "you know how I said I wanted Foo.DLL v1.4? Well, actually if 1.5 is available, it's OK to use that too."
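One common form of that XML (the app-side equivalent of a publisher policy) is a bindingRedirect in the application's config file; a sketch with placeholder names and versions (the publicKeyToken must be the dependency's real token):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <!-- placeholder identity; use the dependency's actual name and token -->
            <assemblyIdentity name="Foo" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
            <bindingRedirect oldVersion="1.4.0.0" newVersion="1.5.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>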
What should I look for? I see lots of books about C# and .NET, but none about deployment or building or testing or things not related to the language itself.
Deployment is frequently neglected in books, I agree.
I would start by searching for "ClickOnce" if you're interested in deployment of managed Windows applications.
Projects can reference assemblies or projects.
When you reference another assembly/project, you are allowed to use all the public classes/enums/structs etc in the referenced assembly.
You do not need to have all of them in one solution. You can have three solutions, one for each Process, and all three solutions can load Engine and Filter.
Also, you could have Process B and Process C reference the compiled assemblies (the .dll's) of the Engine and Filter to similar effect.
As long as you don't set the property in the reference to an assembly to require a specific version, you can freely update DLLs without much concern, providing the only code changes were to the DLL.
Also, the principal reason we separated the filter from the process (only one process uses it) is so that we can deploy the filter independently from the process so that the process executable doesn't need to be updated. Regardless of whether that's best practice, let's just roll with it. Is this possible?
I actually prefer this method of updating. There's less overhead updating only the files that changed rather than everything every time.
As for using the GAC, whole other level of complexity I won't get into.
Tamper proofing your assemblies can be done by signing them, which is required to use the GAC in the first place, but you should still be fine so long as a specific version is not required.
My recommendation is to read a book about the .NET framework. This will really help you understand the CLR and what you're doing.
Applied Microsoft .NET Framework Programming was a book I really enjoyed reading.
You mention the engine is shared code, which is why you put it in a separate project under your solution. There's nothing wrong with doing it this way, and it's not necessary to add this DLL to the GAC. During your development phase, you can just add a reference to your engine project, and you'll be able to call the code from that assembly. When you want to deploy this application, you can either deploy the engine DLL with it, or you can add the engine DLL to the GAC (which is another ball of wax in and of itself). I tend to lean against GAC deployments unless it's truly necessary. One of the best features of .NET is the ability to deploy everything you need to run your application in one folder without having to copy stuff to system folders (i.e. the GAC).
If you want to achieve something like dynamically loading DLL's and calling member methods from your processor without caring about specific version, you can go a couple of routes. The easiest route is to just set the Specific Version property to False when you add the reference. This will give you the liberty of changing the DLL later, and as long as you don't mess with method signatures, it shouldn't be a problem. The second option is the MEF (which uses Reflection and will be part of the framework in .NET 4.0). The idea with the MEF is that you can scan a "plugins" style folder for DLL's that implement specific functionality and then call them dynamically. This gives you some additional flexibility in that you can add new assemblies later without the need to modify your references.
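A minimal sketch of that MEF plugins-folder pattern (the IFilter contract and the folder name are illustrative; this uses the System.ComponentModel.Composition APIs):

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    // Illustrative contract that plug-in assemblies implement and export.
    public interface IFilter { string Apply(string input); }

    public class FilterHost
    {
        [ImportMany]
        public IEnumerable<IFilter> Filters { get; set; }

        public void LoadPlugIns()
        {
            // Scan a "PlugIns" folder (relative path) for exported IFilter parts.
            var catalog = new DirectoryCatalog("PlugIns");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this); // fills Filters with discovered exports
        }
    }

    // In a plug-in assembly dropped into PlugIns\:
    // [Export(typeof(IFilter))]
    // public class UpperCaseFilter : IFilter
    // {
    //     public string Apply(string input) { return input.ToUpper(); }
    // }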
Another thing to note is that there are Setup and Deployment project templates built into Visual Studio that you can use to generate MSI packages for deploying your projects. MSDN has lots of documentation related to this subject that you can check out, here:
http://msdn.microsoft.com/en-us/library/ybshs20f%28VS.80%29.aspx
Do not use the GAC on your build machine, it is a deployment detail. Visual Studio automatically copies the DLL into build directory of your application when you reference the DLL. That ensures that you'll run and debug with the expected version of the DLL.
When you deploy, you've got a choice. You can ship the DLL along with the application that uses it, stored in the EXE installation folder. Nothing special is needed, the CLR can always find the DLL and you don't have to worry about strong names or versions. A bug fix update is deployed simply by copying the new DLL into the EXE folder.
When you have several installed apps with a dependency on the DLL, deploying bug-fix updates can start to get awkward, since you have to copy the DLL repeatedly, once for each app. And you can get into trouble when you update some apps but not others, especially when there's a breaking change in the DLL interface that requires the app to be recompiled. That's DLL Hell knocking; the GAC can solve that.
We found some guidance on this issue at MSDN. We started with two separate solutions with no shared code, and then abstracted the commonalities into shared assemblies. We struggled to isolate changes in the shared code so they impacted only the projects that were ready for them. We were terrible at Open/Closed.
We tried
branching the shared code for each project that used it and including it in the solution
copying the shared assembly from the shared solution when we made changes
coding pre-build events to build the shared code solution and copy the assembly
Everything was a real pain. We ended up using one large solution with all the projects in it. We branch each project as we want to stage features closer to production. This branches the shared code as well. It's simplified things a lot and we get a better idea of what tests fail across all projects, as the common code changes.
As far as deployment, our build scripts are setup to build the code and copy only the files that have changed, including the assemblies, to our environments.
By default, you have a hardcoded version number in your project (1.0.0.0). As long as you don't change it, you can use any Filter build with the Process assembly (it only knows it should use the 1.0.0.0 version). This is not the best solution, however, because then how do you distinguish between the various builds yourself?
Another option is use different versions of the Filter by the same Process. You should add an app.config file to the Process project, and include a bindingRedirect element (see the docs). Whenever the Runtime looks for a particular version of the Filter, it's "redirected" to a version indicated in the config. Unfortunately, this means that although you don't have to update the Process assembly, you'll have to update the config file with the new version.
Whenever you encounter versioning problems, you can use Fuslogvw.exe (fusion log viewer) to troubleshoot these.
Have fun!
ulu
