TL;DR: Kernel.Get&lt;T&gt; works when called from a .NET/WPF app, but blows up with an inner ActivationException (inside a TargetInvocationException) when called from a VB6 app. WTH?
This is a bit of a follow-up on this question where I configured an abstract factory with Ninject conventions, in such a way that I never need to actually implement one, the Ninject factory extension takes care of generating one on-the-fly.
This worked beautifully... until I needed to run my library from VB6 code.
_kernel.Bind(t => t.FromAssemblyContaining(typeof(ViewModelBase))
.SelectAllInterfaces()
.EndingWith("ViewFactory")
.BindToFactory());
As soon as I call anything on the app from VB6 code, the dependency resolution fails. If I wrap it in a try/catch block, I trap a TargetInvocationException with an inner ActivationException:
Error activating IInterceptor using conditional implicit self-binding
of IInterceptor Provider returned null. Activation path:
3) Injection of dependency IInterceptor into parameter of constructor of type IViewFactoryProxy
2) Injection of dependency IViewFactory into parameter viewFactory of constructor of type MsgBox
1) Request for IMsgBox
Suggestions:
1) Ensure that the provider handles creation requests properly.
I have no reference to the Ninject.Interception extension (at this point).
Oddly, if I debug with a sandbox WPF test app instead of launching VB6, I don't get any ActivationException and everything works like a charm.
The VB6 code dies with automation error -2146232828 (80131604) which yields nothing on Google, but I'm guessing it has to do with the TargetInvocationException being thrown.
As far as .NET code is concerned, it just works: if I compose the app from a WPF client, I can break in the MsgBox class constructor and see that the IViewFactory parameter is happily receiving a Castle.Proxy.IViewFactoryProxy. If I compose the app from a VB6 ActiveX DLL (I also created a VB6 EXE to test, with the same result), it blows up.
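For anyone debugging a similar interop failure: it can help to unwrap the TargetInvocationException at the COM-visible boundary so the underlying Ninject error gets logged instead of being swallowed into the opaque automation error on the VB6 side. This is only a rough sketch; ComServer, ComVisibleEntryPoint and ComposeApp are hypothetical stand-ins for your own COM-exposed class and composition call:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public class ComServer
{
    public void ComVisibleEntryPoint()
    {
        try
        {
            ComposeApp();
        }
        catch (TargetInvocationException ex)
        {
            // VB6 only sees HRESULT 80131604 (COR_E_TARGETINVOCATION);
            // the real ActivationException is in InnerException.
            Trace.TraceError(ex.InnerException?.ToString() ?? ex.ToString());
            throw;
        }
    }

    private void ComposeApp()
    {
        // hypothetical: _kernel.Get<IMsgBox>() and the rest of the composition
    }
}
```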
UPDATE
I removed the generic abstract factories, and I no longer get this error. And because I don't want to be writing factories, I went for a bit of tighter coupling that I can live with. Now I'd like to know why this was happening!
I ran into this exception today in a completely different context from yours. I was trying to use a kernel configured with a custom module inside a design-time view model in the VS WPF Designer. The module had a number of interfaces configured using the ToFactory() extension method.
The problem was that for some reason the Ninject.Extensions.Factory.FuncModule was not loaded automatically when I was initialising my kernel, possibly due to some trickery in the way the VS designer handles creating design time classes (maybe it didn't load the appropriate assembly or something, who knows).
I had a unit test that was creating one of these design time view models, and it worked perfectly, so it was definitely something related to the designer.
I fixed the issue by creating a special kernel for my design time view models.
public class DT_Kernel : StandardKernel
{
    public DT_Kernel()
        : base(new MyModule())
    {
        if (!HasModule(typeof(FuncModule).FullName))
        {
            Load(new[] { new FuncModule() });
        }
    }
}
The important part of this code is the bit that loads the FuncModule if it isn't already loaded.
You might be able to leverage that code to fix your issue.
Related
Basic layout is currently a simple app. Nothing fancy, I've got Views in the App.Views namespace and my ViewModels in the App.ViewModels namespace. The ViewModels are autowired to the Views through the XAML directive:
xmlns:prismMvvm="using:Prism.Windows.Mvvm"
prismMvvm:ViewModelLocator.AutoWireViewModel="True"
So, basically, this works. Then I wanted to leverage Unity's IoC / DependencyInjection for some unit tests.
The usual way would be to simply add a Windows 10 Unit Test App to the existing app and reference the latter in the Unit Test App.
This will crash upon executing the unit tests, because it seems that you may not derive the App class from anything other than Application.
I.e. this works:
public sealed partial class App : Application
This does not:
public sealed partial class App : PrismUnityApplication
It's probably not Prism's fault, but rather something Microsoft has to fix on their end.
Now, the proposed workaround is to simply put whatever you want to unit test into a class library and reference this library from the unit test app. This works for unit testing. It also works for the Models.
My naive approach does not work, however, for the ViewModels. The ViewModel classes are still found under the App.ViewModels namespace, as before, just that they're now in a Class Library. I can access them programmatically in the main app just fine. But upon running the program, the AutoWiring silently fails without an error.
Even when I do something like this, it still does not work:
ViewModelLocationProvider.Register(typeof(MainPage).ToString(), () => new ViewModels.MainPageViewModel());
I'm not that experienced with the technologies involved yet so without an actual error I'm a bit at a loss.
edit: Just to add to the mystery - this code does work, regardless of whether the ViewModel resides in the main app or the class library:
var x = Container.Resolve(typeof(ExamplePageViewModel)) as ExamplePageViewModel;
You are correct: this is a limitation of UWP application testability in general, which is imperfect right now. To fix the first issue, you need to add the BindableAttribute to your application class:
[Bindable]
sealed partial class App : PrismUnityApplication
{
}
As far as pulling your Views/ViewModels out into separate assemblies, this issue is due to how UWP handles loading types from separate assemblies. Nonetheless, this shouldn't stop you from testing your ViewModels. You can use a regular class library to test your ViewModel logic, mocking your dependencies. You shouldn't be creating instances of your Views to test your ViewModels, so the ViewModelLocator becomes a non-issue.
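To illustrate that point, a ViewModel test in a plain class library needs neither the locator nor Unity; you constructor-inject a mock directly. The sketch below assumes hypothetical IDataService and ExamplePageViewModel types (not from the question) and uses Moq with NUnit:

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class ExamplePageViewModelTests
{
    [Test]
    public void LoadCommand_PopulatesItems()
    {
        // IDataService and ExamplePageViewModel are placeholders for your own types.
        var dataService = new Mock<IDataService>();
        dataService.Setup(s => s.GetItems()).Returns(new[] { "a", "b" });

        var vm = new ExamplePageViewModel(dataService.Object);
        vm.LoadCommand.Execute(null);

        // The ViewModel is exercised without any View or locator involved.
        Assert.AreEqual(2, vm.Items.Count);
    }
}
```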
We got the same issue as you: tests silently failing.
When we looked at the Test output window, we saw this:
App activation failed.
Failed to initialize client proxy: could not connect to test process.
Inside our UnitTestApp, which inherits PrismUnityApplication, we override the ConfigureContainer method and set up mocks on the container instead of real classes, i.e.
UnitTestApp.Current.Container.RegisterInstance(myMock.Object, new ContainerControlledLifetimeManager());
You can also set up the container in each test class's constructor.
This way we got our tests running without a silent failure.
It seems the silent failure was due to the container of the actual App class being called in the context of the UnitTestApp class, but I cannot confirm that 100%.
I'm building an application (Xamarin.Forms, PCL and iOS, in case it's relevant) that uses Simple Injector for dependency injection throughout. So far, it's been great to work with, and with the recent release of Simple Injector 3 I'm updating my code to work with the new API. Unfortunately, I'm having a little trouble with one particular registration.
Here's the basic flow:
register an ISQLitePlatform (Windows, iOS, Android, etc) to handle the platform-specific bits (SQLite API, file I/O etc.)
register a bootstrapper which will be responsible for creating the database file, setting up all the table/type mappings etc.
register a lambda that will create our data access provider, use the bootstrapper to set up the database, and return the access provider
The app can then use the data access provider to open transactions on the database - the IDataAccessProvider interface just has one method, NewTransaction(), which returns a unit-of-work object with generic CRUD methods. The IDataAccessProvider also implements IDisposable, and the Dispose() methods handle the cleanup, closing of open connections/transactions, etc.
The issue is with these registrations:
container.RegisterSingleton<ISQLitePlatform, SQLitePlatformIOS>();
container.RegisterSingleton<ILocalDataBootstrapper, SQLiteDataBootstrapper>();
container.RegisterSingleton<IDataAccessProvider>(
() => container.GetInstance<ILocalDataBootstrapper>().DataAccessProvider);
For some reason, on startup I get the diagnostic warning that SQLiteDataAccessProvider is IDisposable and has been registered as transient. I can handle this - the disposal is being handled in the correct place - but what's confusing me is that it's definitely not being registered as transient, given that I'm using RegisterSingleton().
Is this a quirk of using RegisterSingleton() with a creation lambda instead of an instance? Is it a bug? Or am I missing something I'm supposed to do?
The most likely reason why this is happening is that somewhere in your code you make the following call:
container.GetInstance<SQLiteDataAccessProvider>();
Or you have a constructor that depends on this concrete type instead of the abstraction (in which case you can expect other warnings about this as well).
Since SQLiteDataAccessProvider is not registered 'as itself' (but merely by its IDataAccessProvider abstraction), Simple Injector will resolve this type for you and by default Simple Injector will make this type transient.
By doing so, a new registration is added to Simple Injector; you will be able to see that in the debugger. There is a singleton registration for IDataAccessProvider and a transient registration for SQLiteDataAccessProvider.
Now because there is a transient registration for SQLiteDataAccessProvider, Simple Injector will warn you that this instance will not get disposed automatically.
The solution is to remove the GetInstance&lt;SQLiteDataAccessProvider&gt;() call (or change the constructor argument to use the abstraction) and replace it with a call to GetInstance&lt;IDataAccessProvider&gt;().
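If some code genuinely needs to resolve the concrete type, an alternative sketch is to register the concrete type explicitly as a singleton and let the abstraction forward to it, so both resolve to the same instance and Simple Injector never falls back to an implicit transient registration. This assumes the bootstrapper's DataAccessProvider really is a SQLiteDataAccessProvider (the cast below would throw otherwise):

```csharp
container.RegisterSingleton<ISQLitePlatform, SQLitePlatformIOS>();
container.RegisterSingleton<ILocalDataBootstrapper, SQLiteDataBootstrapper>();

// Explicit singleton registration for the concrete type, so Simple Injector
// never auto-registers it as transient when someone resolves it directly.
container.RegisterSingleton<SQLiteDataAccessProvider>(
    () => (SQLiteDataAccessProvider)container
        .GetInstance<ILocalDataBootstrapper>().DataAccessProvider);

// The abstraction forwards to the same singleton instance.
container.RegisterSingleton<IDataAccessProvider>(
    () => container.GetInstance<SQLiteDataAccessProvider>());
```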
I have come across a problem I am not sure how to best solve, ideally without redoing the code.
Prologue:
I was handed an existing application to maintain and, as necessary, upgrade. It's a C# application. Now, I am not a pro when it comes to C#, but I am good enough to get things done while not making anyone's eyes bleed when reading my code.
The problem is, the application uses NUnit (not that big of a problem, I have experience with that and I understand the basics) and Ninject (that's a different story).
Issue:
The application uses Ninject to bind several classes to the kernel. My goal was to modify the application a bit to add support for some different objects.
Now, when I debug the application, or when I build and deploy it, it works.
However, running NUnit, I get a Ninject.ActivationException: Error activating IDatabaseConnection exception. (IDatabaseConnection is an interface from an internal library.)
I tried to recreate the failing test in a console application and it works there as well; it fails only when run through NUnit.
Ninject.ActivationException: Error activating IDatabaseConnection
No matching bindings are available, and the type is not self-bindable.
Activation path:
3) Injection of dependency IDatabaseConnection into parameter databaseConnection of constructor of type OrderMailDefinitionSource
2) Injection of dependency IMailDefinitionSource into parameter mailDefinitionSource of constructor of type DocumentMailService
1) Request for DocumentMailService
Suggestions:
1) Ensure that you have defined a binding for IDatabaseConnection.
2) If the binding was defined in a module, ensure that the module has been loaded into the kernel.
3) Ensure you have not accidentally created more than one kernel.
4) If you are using constructor arguments, ensure that the parameter name matches the constructors parameter name.
5) If you are using automatic module loading, ensure the search path and filters are correct.
So as far as I can tell, for some reason the binding fails when running through NUnit. Run any other way, it works.
If you have any ideas, I would be thankful.
Have a nice day.
Answer
This was the problem (originally I didn't use the CodeBase property). NUnit didn't copy the DLLs and so failed to load about 21 different DLLs. After this was fixed, it worked.
kernel.Bind(x => x.FromAssembliesInPath(Path.GetDirectoryName(new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath))
    .SelectAllClasses()
    .BindDefaultInterface());
I have an ASP.NET Web API project hosted in a Windows Service, using OWIN. I'm using a Startup class that configures some things, and am using the IAppBuilder.UseWebApi() option. Everything works perfectly in the debugger and from the command line (I use a command line argument of -e to run in console, or it can run as a Windows Service).
Everything is working great, BUT, when I build in Release mode with the build option enabled for "Optimize Code", my service controllers don't seem to work.
I have my controller in a separate class library, and I'm using this line to probe the controller on application start, as suggested here: Self-hosting WebAPI application referencing controller from different assembly
var controllerType = typeof(MetricsController);
I have a feeling that the Optimize Code option causes the compiler to ignore this line. Does anyone have any insight or ideas about how I can make this work?
Thanks!
After working with this for a bit, I implemented the following approach which the Optimize Code option seems to be happy with.
Class-level member:
private readonly List<Type> _controllers = new List<Type>();
Then in my Startup.Configuration method, I replaced this:
// Hack: This forces a manual probe of the controller assembly
var controllerType = typeof(MyController);
With this:
// Better Hack: This forces a manual probe of the controller assembly
_controllers.Add(typeof(MyController));
What seems to be happening is that the Optimize Code option is stripping out logic that is declared but never used. In this case, I was using the original hack to probe the assembly so the application knows about its existence. Since it was so tightly scoped and the variable controllerType was never used, the compiler ignores it. The new approach is probably just enough of a hint that it may be used that the compiler keeps it.
I tried a reflection-based approach but could not get it to work. I even manually loaded the assembly having the controllers, and I could see it loaded in the AppDomain when debugging, but for some reason it still wouldn't work. I could even verify that the List was populated with the controller types, but strangely no luck. Definitely open to any suggestions on this since I will be using a similar approach in the future on another project.
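One suggestion for making discovery independent of the optimizer entirely: instead of relying on an incidental type reference surviving, you can tell Web API explicitly which assemblies to probe through its IAssembliesResolver extension point. A sketch, assuming OWIN self-hosting with an HttpConfiguration (MetricsController stands in for your own controller type):

```csharp
using System.Collections.Generic;
using System.Reflection;
using System.Web.Http.Dispatcher;

// Adds the controller assembly to the default probe list, so controller
// discovery no longer depends on a stray type reference being kept.
public class ControllerAssembliesResolver : DefaultAssembliesResolver
{
    public override ICollection<Assembly> GetAssemblies()
    {
        var assemblies = base.GetAssemblies();
        var controllerAssembly = typeof(MetricsController).Assembly;
        if (!assemblies.Contains(controllerAssembly))
            assemblies.Add(controllerAssembly);
        return assemblies;
    }
}

// In Startup.Configuration:
// config.Services.Replace(typeof(IAssembliesResolver), new ControllerAssembliesResolver());
```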
I currently use NInject to bind interfaces to concrete types and inject those into my classes. However, it is my understanding that this is a run-time affair. To me, it seems like a point of attack if someone wanted to change the behavior of my application.
Is there anything that would allow me to migrate my dependency injection IoC to compile time (Read: post-build IL weaving/replacement)?
To Elaborate
In my code I setup a binding
Bind<IFoo>().To<Foo>()
Bind<Bar>().ToSelf().InSingletonScope();
with ctor Foo(Bar dependency)
At the root of my application (on start-up) I resolve the graph
var foo = kernel.Get<IFoo>();
Assume I have no service locators (an anti-pattern anyway, right?), so I have no further use for the kernel.
Now I want a post-build, release-compile step that replaces the kernel's resolution engine with instantiators, or references to constants/singletons, etc., such that while my code looks like this:
var foo = kernel.Get<IFoo>();
In reality, after IL replacement in my final build stage, it looks like this:
var bar = new Bar();
var foo = new Foo(bar);
And there is no reference to NInject anymore.
My rationale for this question is that I'm using Fody to IL-weave all my PropertyChanged raisers, and I'm wondering whether it would be possible to do something similar for dependency injection.
From a security perspective in general, the use of a DI container does not pose any extra threats to your application.
When you write a service application (such as a web service or web site), an attacker can only change the DI-configured behavior of the application when that application or server has already been compromised. When that happens, the server should be considered lost (you will have to reformat it or throw it away completely). DI doesn't make this worse, since a DI container typically does not allow the behavior to be changed from the outside. You would have to do something very weird to make that happen.
For an application that runs on the user's machine, on the other hand, you should always consider that application to be compromised, since an attacker can decompile your code, change the behavior at runtime, etc. Again, DI doesn't make this worse, since you can only protect yourself against attacks at the service boundary. The client app must communicate with the server, and the place to store your valuable assets is within the service boundary. For instance, you should never store an account's password inside a DLL on the client, no matter whether it is encrypted or not.
The use of DI however, can make it somewhat easier for an attacker to change the behavior of a client application, especially when you configure everything in XML. But that holds for everything you store in the configuration file. And if that's your only line of defense (either with or without DI) you're screwed anyway.
it seems like a point of attack if someone wanted to change the behavior of my application
Please note that any application can be decompiled, changed, and recompiled. It doesn't matter whether it's managed (.NET, Java) or not (C++), or obfuscated or not. So again, from a security perspective it doesn't matter whether you do runtime DI or compile-time DI. If this is an issue, don't deploy that code on machines that you have no control over.
As discussed, your cited reasons for doing this don't add up. However, Philip Laureano (Linfu author) did a Hiro project some time back which does pre-deployment DI. No idea if it went anywhere...
I am working on a compile-time IoC container for .NET utilizing source generators:
https://github.com/YairHalberstadt/stronginject
https://www.nuget.org/packages/StrongInject/
With it you can do the following:
using StrongInject;

[Registration(typeof(Foo), typeof(IFoo))]
[Registration(typeof(Bar), Scope.SingleInstance)]
public class Container : IContainer<IFoo> {}

public static class Program
{
    public static void Main()
    {
        new Container().Resolve(foo => { /* use foo here */ });
    }
}
This will give you compile time errors and warnings if it can't resolve IFoo and avoids use of reflection.