Ninject fails to bind when running from NUnit - C#

I have come across a problem I am not sure how best to solve, ideally without redoing the code.
Prologue:
I was handed an existing application to maintain and, as necessary, upgrade. It's a C# application. Now, I am not a pro when it comes to C#, but I am good enough to get things done while not making anyone's eyes bleed when reading my code.
The problem is, the application uses NUnit (not that big of a problem, I have experience with that and I understand the basics) and Ninject (that's a different story).
Issue:
The application uses Ninject to bind several classes to the kernel. My goal was to modify the application a bit to add support for some different objects.
Now, when I debug the application, or build and deploy it, it works.
However, running under NUnit, I get a Ninject.ActivationException: Error activating IDatabaseConnection exception. (IDatabaseConnection is an interface from an internal library.)
I tried to recreate the failing test in a console application and it works there as well; it fails only when run through NUnit:
Ninject.ActivationException: Error activating IDatabaseConnection
No matching bindings are available, and the type is not self-bindable.
Activation path:
3) Injection of dependency IDatabaseConnection into parameter databaseConnection of constructor of type OrderMailDefinitionSource
2) Injection of dependency IMailDefinitionSource into parameter mailDefinitionSource of constructor of type DocumentMailService
1) Request for DocumentMailService
Suggestions:
1) Ensure that you have defined a binding for IDatabaseConnection.
2) If the binding was defined in a module, ensure that the module has been loaded into the kernel.
3) Ensure you have not accidentally created more than one kernel.
4) If you are using constructor arguments, ensure that the parameter name matches the constructors parameter name.
5) If you are using automatic module loading, ensure the search path and filters are correct.
So as far as I can tell, for some reason the binding fails when running through NUnit. Run any other way, it works.
If you have any ideas, I would be thankful.
Have a nice day.
Answer
This was the problem (originally the code didn't use the CodeBase property). NUnit didn't copy the DLLs, so about 21 different DLLs failed to load. After this was fixed, it worked:
kernel.Bind(x => x.FromAssembliesInPath(Path.GetDirectoryName(new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath))
                  .SelectAllClasses()
                  .BindDefaultInterface());
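For context, the difference that matters here is roughly the following (a small sketch, assuming the test runner is shadow-copying assemblies, which would explain the missing DLLs; uses System and System.Reflection):

// Under shadow copying, Location points at the shadow-copy folder (which need not
// contain the dependent DLLs), while CodeBase still points at the original build output.
var asm = Assembly.GetExecutingAssembly();
Console.WriteLine(asm.Location);                    // e.g. the runner's shadow-copy path
Console.WriteLine(new Uri(asm.CodeBase).LocalPath); // the original bin folder

That is why resolving the conventions scan path through CodeBase, as above, finds the assemblies the runner did not copy.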

Related

How to reference an assembly in MVC at runtime

In my ASP.NET MVC application, I have some view files (.cshtml) that reference an external library which is loaded at runtime. After the app has started, I load the assembly with Assembly.Load and register the controllers with my own custom ControllerFactory, and everything is OK.
But some views that reference the dynamically loaded assembly throw the following:
Compiler Error Message: CS0234: The type or namespace name 'MyDynamicNamespace' does not exist in the namespace 'MyApp' (are you missing an assembly reference?)
exception, which indicates that the Razor compiler cannot resolve the related assembly.
My question is: is there a way to register the assembly at runtime so that the Razor compiler can access and resolve it?
Note that I can't use the BuildManager.AddReferencedAssembly method, because my assembly has to be loaded after app start, and BuildManager does not support that.
1) I wouldn't recommend having your views directly use external references or dynamically loaded external references. Abstract this by having your view interact with a controller. Make your controller feed your view a data object that is known to your web application at build time. This completely isolates (abstracts) plugin-specific business from your view. Then make your controller interact with the "plugin".
2) I don't know how your "custom factory" works, but nowadays we don't really build "custom factories" anymore. Instead we leverage dependency injection containers such as Microsoft Unity (or Ninject, Castle Windsor, etc.). Creating "custom factories" is very old-fashioned, and you're basically reinventing a wheel that dependency injection has already solved.
3) As for dynamically loading external assemblies, I don't know if you have it right, but here's a link:
Dynamically load a type from an external assembly
4) Typically, a plugin design exposes interfaces that are known to your main web application at build time. What the plugin design hides is the implementation, which can change from one plugin to another. The important thing is that each plugin implements the same public interfaces, those expected by your main web app. Usually, you will have those interfaces in a separate "Common" project that is referenced by both your main web application and your plugin that implements those interfaces. Therefore, from your main web app, you know what the public interfaces of your plugins are; you can dynamically load the external assembly and use C# reflection to find the classes that implement those interfaces and load them into your dependency injection container (see the sketch below). Likewise, anyone who wants to develop a plugin for your web app will have to implement the interfaces defined in your "Common" project.
Note: "Common" is just a random name I gave to the project. You can name it "PluginInterface" or whatever you want.
After that, having your controller grab whatever it needs from the dependency injection container is trivial.
Note: Your plugin interfaces will probably have input and output entities. These entities are shared between your main web app and your plugin. In such case, since these entities are part of your interfaces they need to be in the "Common" project. You may be tempted to have your controller return those entities directly to your view but then you won't have a perfect abstraction between your view and your plugin. Not having perfect abstractions is for another discussion.
Hope it helps!
As a sys admin, I would recommend a maintenance period, especially in case the file you replace messes something else up. Even if your maintenance period is only half an hour, it is good practice.
As for the DLL and recompile: typically the IIS worker process (the service running your application pool) will recycle at regular intervals based on the IIS configuration and memory usage. When this happens, the application will recompile if anything requires the JIT. It also terminates all open user sessions as it physically stops and then restarts. The worker process also monitors the root directory (as you mentioned) for any file changes; if any are found, a recompile is forced. A change to a dependency alone does not force a recompile.
If you pre-compile your DLL, the only thing left to compile is any code inside your actual ASPX file, and that uses the JIT, which compiles each time. From what you described, IIS should not need to recompile or restart; it sounds like another problem where IIS is hanging when you swap out the file. You might need to get a sys admin involved to look at the IIS logs.
Good Luck!
http://msdn.microsoft.com/en-us/library/ms366723.aspx
http://msdn.microsoft.com/en-us/library/bb398860.aspx
Here is a note that may help: if you are not loading your assemblies from the /bin directory, you need to ensure that the path to the assemblies is discoverable:
AppDomain.CurrentDomain.AppendPrivatePath(pathToYourDynamicAssembly);
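If AppendPrivatePath is not an option (it has long been marked obsolete), an alternative sketch, assuming you know the folder your dynamic assemblies live in, is to hook AppDomain.AssemblyResolve (uses System, System.IO and System.Reflection; pluginFolder is a placeholder path):

AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
    // Try to satisfy the failed load from the known plugin folder.
    var fileName = new AssemblyName(args.Name).Name + ".dll";
    var candidate = Path.Combine(pluginFolder, fileName);
    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
};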

Self-Hosted Web API - Referenced Controllers Don't Work with "Optimize Code" Option

I have an ASP.NET Web API project hosted in a Windows Service, using OWIN. I'm using a Startup class that configures some things, and am using the IAppBuilder.UseWebApi() option. Everything works perfectly in the debugger and from the command line (I use a command line argument of -e to run in console, or it can run as a Windows Service).
Everything is working great, BUT, when I build in Release mode with the build option enabled for "Optimize Code", my service controllers don't seem to work.
I have my controller in a separate class library, and I'm using this line to probe the controller on application start, as suggested here: Self-hosting WebAPI application referencing controller from different assembly
var controllerType = typeof(MetricsController);
I have a feeling that the Optimize Code option causes the compiler to ignore this line. Does anyone have any insight or ideas about how I can make this work?
Thanks!
After working with this for a bit, I implemented the following approach which the Optimize Code option seems to be happy with.
Class-level member:
private readonly List<Type> _controllers = new List<Type>();
Then in my Startup.Configuration method, I replaced this:
// Hack: This forces a manual probe of the controller assembly
var controllerType = typeof(MyController);
With this:
// Better Hack: This forces a manual probe of the controller assembly
_controllers.Add(typeof(MyController));
What seems to be happening is that the Optimize Code option is stripping out logic that is declared but never used. In this case, I was using the original hack to probe the assembly so the application knows about its existence. Since it was so tightly scoped and the variable controllerType was never used, the compiler ignores it. The new approach is probably just enough of a hint that it may be used that the compiler keeps it.
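Putting the pieces together, a minimal sketch of the resulting Startup class (MyController is the placeholder name from above; the HttpConfiguration wiring is assumed boilerplate and may differ from the actual project):

using System;
using System.Collections.Generic;
using System.Web.Http;
using Owin;

public class Startup
{
    // Holding the controller types in a field gives the optimizer a reason to keep the reference.
    private readonly List<Type> _controllers = new List<Type>();

    public void Configuration(IAppBuilder appBuilder)
    {
        // Better Hack: force a manual probe of the controller assembly.
        _controllers.Add(typeof(MyController));

        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        appBuilder.UseWebApi(config);
    }
}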
I tried a reflection-based approach but could not get it to work. I even manually loaded the assembly having the controllers, and I could see it loaded in the AppDomain when debugging, but for some reason it still wouldn't work. I could even verify that the List was populated with the controller types, but strangely no luck. Definitely open to any suggestions on this since I will be using a similar approach in the future on another project.

Error activating IInterceptor... only through COM?

TL;DR: Kernel.Get<T> works when called from a .net/WPF app, but blows up with an inner ActivationException (inside a TargetInvocationException) when called from a VB6 app. WTH?
This is a bit of a follow-up on this question where I configured an abstract factory with Ninject conventions, in such a way that I never need to actually implement one, the Ninject factory extension takes care of generating one on-the-fly.
This worked beautifully... until I needed to run my library from VB6 code.
_kernel.Bind(t => t.FromAssemblyContaining(typeof(ViewModelBase))
                   .SelectAllInterfaces()
                   .EndingWith("ViewFactory")
                   .BindToFactory());
As soon as I call anything on the app from VB6 code, if I wrap the resolving of dependencies inside a try/catch block, I'm trapping a TargetInvocationException with an inner ActivationException:
Error activating IInterceptor using conditional implicit self-binding
of IInterceptor Provider returned null. Activation path:
3) Injection of dependency IInterceptor into parameter of constructor of type IViewFactoryProxy
2) Injection of dependency IViewFactory into parameter viewFactory of constructor of type MsgBox
1) Request for IMsgBox
Suggestions:
1) Ensure that the provider handles creation requests properly.
I have no reference to the Ninject.Interception extension (at this point).
Oddly if instead of launching VB6 I launch a sandbox WPF test app when I debug, I don't get any ActivationException and everything works like a charm.
The VB6 code dies with automation error -2146232828 (80131604) which yields nothing on Google, but I'm guessing it has to do with the TargetInvocationException being thrown.
As far as .net code is concerned it just works: if I compose the app from a WPF client I can break in the MsgBox class constructor and see that the IViewFactory parameter is happy with a Castle.Proxy.IViewFactoryProxy; if I compose the app from a VB6 ActiveX DLL (I also created a VB6 EXE to test and same as the DLL), it blows up.
UPDATE
I removed the generic abstract factories, and I no longer get this error. And because I don't want to be writing factories, I went for a bit of tighter coupling that I can live with. Now I'd like to know why this was happening!
I ran into this exception today in a completely different context from yours. I was trying to use a kernel configured with a custom module inside a design-time view model in the VS WPF designer. The module had a number of interfaces configured using the ToFactory() extension method.
The problem was that for some reason the Ninject.Extensions.Factory.FuncModule was not loaded automatically when I was initialising my kernel, possibly due to some trickery in the way the VS designer handles creating design time classes (maybe it didn't load the appropriate assembly or something, who knows).
I had a unit test that was creating one of these design time view models, and it worked perfectly, so it was definitely something related to the designer.
I fixed the issue by creating a special kernel for my design time view models.
public class DT_Kernel : StandardKernel
{
    public DT_Kernel()
        : base(new MyModule())
    {
        if (!HasModule(typeof(FuncModule).FullName))
        {
            Load(new[] { new FuncModule() });
        }
    }
}
The important part of this code is the bit that loads the FuncModule if it isn't already loaded.
You might be able to leverage that code to fix your issue.
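Applied to the original question, the equivalent workaround would look something like this (a sketch, assuming Ninject.Extensions.Factory is referenced and the kernel is created before the BindToFactory() conventions are resolved):

using Ninject;
using Ninject.Extensions.Factory;

var kernel = new StandardKernel();

// Make sure the factory extension's module is loaded; without it, ToFactory()/BindToFactory()
// bindings can fail with "Error activating IInterceptor ... Provider returned null".
if (!kernel.HasModule(typeof(FuncModule).FullName))
{
    kernel.Load(new[] { new FuncModule() });
}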

Modular application architecture and Castle Windsor

I'm developing a .Net desktop app that interacts with scientific instruments. There are a number of variations of this instrument, each with different features, components, etc, so I've come up with a plugin/modular architecture where a "module assembly" contains all of the necessary business logic, UI, etc. to interact with that hardware component/feature.
Currently I have one solution that contains everything - the "core" application project, common libraries, plus the "module" projects. The idea is that we install the whole lot to a customer site (rather than cherry-picking which DLLs they need), and "activate" the relevant modules using a config file that contains a list of required modules.
The main application project loads the modules using Castle Windsor, using an AssemblyFilter and a custom InstallerFactory. It searches each module assembly looking for a class implementing IWindsorInstaller and decorated with a particular custom attribute (which has a property containing the module name). The module's installer will only be run if the attribute's module name is one of those requested. These installer classes are responsible for registering everything needed by that module with Windsor (business logic, views, view models, etc.).
This solution works fine in my proof of concept; however, I can see a scenario where two or more modules are functionally very similar and will therefore need to share common code. Let's say I have projects "ModuleA" and "ModuleB", and their Windsor installers register the same IFooService class from project "ClassLibraryX". The app will fall over because IFooService has been registered twice, and Windsor won't know which one to resolve when requested by a constructor.
What's the best way to handle this? Thoughts so far:-
Find out if a particular component has already been registered with Windsor. This feels hacky (if possible at all)
Register components using a name, but how do I request a named instance with constructor injection?
In each module project create a new interface, such as public interface IModuleAFooService : IFooService, and register/use this throughout the project (rather than IFooService).
Any thoughts?
Edit: in fact Windsor won't fall over when it tries to resolve IFooService. It will fall over when the second module attempts to register the same interface/concrete implementation!
The way I see it, you have a couple options. I think you have two main issues. The first is that you are installing the shared interface twice (or more than that). The second is that you could have two different versions of the shared interface.
For the first issue, I would separate out the shared interfaces into their own assembly. Inside that assembly, I would have an installer that is scoped to that assembly. Then, you can tell Windsor to install that shared component and it knows how to wire itself up.
For the second issue, you have two options (as I see it). The first option is to keep your shared components backwards compatible. The second option is to isolate your runtime (through app domains or processes).
Can you not provide some metadata for the plugin, i.e. give each plugin implementation a name attribute which Windsor can use to identify which of the implementations you want?
I have not used Castle much recently, but I am sure it has the notion of named bindings/registrations, so you could use that as a way to distinguish things. If that is not possible and there is no other metadata you can think of that would make it less ambiguous for Windsor, then I would just go with your third option.
Having just read your second option again (after writing the above), that seems the best one. I cannot remember the exact syntax, but in most DI frameworks you do something like:
var instance = Get<IMyInterface>("Named This");
There will be loads of syntax examples in their documentation, but you will need to know the name both on the Windsor side, to register it, AND on the client side, to request it.
Named instances are OK. You can define a dependency on a specific named service via the DependsOn(Dependency.OnComponent("paramName", "serviceName")) method in the fluent configuration.
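A rough sketch of what that looks like with Windsor's fluent registration API (ModuleAFooService and OrderProcessor are hypothetical types standing in for the question's classes):

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

public class ModuleAInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            // Each module registers the shared service under its own name.
            Component.For<IFooService>().ImplementedBy<ModuleAFooService>().Named("moduleA.fooService"),

            // Pin the constructor parameter "fooService" to module A's named component.
            Component.For<OrderProcessor>()
                     .DependsOn(Dependency.OnComponent("fooService", "moduleA.fooService")));
    }
}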

Unit testing private members using *_Accessor

I am currently working on a C# solution in VS 2010.
In order to write sufficient unit tests for my business processes I am using the Accessor approach to access and change the internals of my business objects.
The issue that has arisen on my TFS build server, now that I have added Accessors for my object assembly in a number of other test assemblies, is that when my tests run, not all of them pass; some fail with a warning along the lines of:
... <Test failed message> ....
... Could not load file 'ObjectLibrary_Accessor, Version=0.0.0.0,
Culture=neutral, PublicKeyToken=ab391acd9394' or one of its dependencies.
...
...
I believe the issue is that as each test assembly is compiled, an ObjectLibrary_Accessor.dll is created with a different strong name. Therefore, when some of the tests are run, the strong-name check fails with the above error even though the DLL is in the expected location.
I see a number of options, none of which are particularly attractive, these include:
Not using the _Accessor approach.
Set a different XX_Accessor.dll for each test assembly - Is it possible to change the name of the generated assembly to avoid the clash?
Change my integration build to use a different binaries folder for each test project (how?)
Other options I do not know about?
I would be interested in any advice or experience people have had with this issue, including solutions and workarounds (although I do not have time to change my code, so option 1 is not a favorite).
The Accessor approach is a bit fragile, as you've seen.
You can make internal items visible to your test code by using the InternalsVisibleTo assembly attribute.
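For example (a minimal sketch; the assembly name is a placeholder, and if your test assemblies are strong-named you also need to supply their public key):

// In the business object library's AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyBusinessObjects.Tests")]
// For a strong-named test assembly:
// [assembly: InternalsVisibleTo("MyBusinessObjects.Tests, PublicKey=<full public key>")]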
If you want to get at private methods and you're using .NET 4.0 then consider using something like Trespasser to make this easier.
For more options see the answers for this question: How do you unit test private methods?
