I have a Windows service that contains a file watcher which raises events when a file arrives. When an event is raised I will be using Ninject to create business layer objects, each of which holds a reference to an Entity Framework context that is also injected via Ninject. In my web applications I have always used InRequestScope for the context, so that within one request all business layer objects work with the same Entity Framework context. In my current Windows service scenario, would it be sufficient to switch the Entity Framework context binding to an InThreadScope binding?
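In code, the change I'm considering looks roughly like this (MyEntitiesContext stands in for my actual context type):

using Ninject;

var kernel = new StandardKernel();

// Web application binding: one context per HTTP request.
// kernel.Bind<MyEntitiesContext>().ToSelf().InRequestScope();

// Proposed Windows service binding: one context per thread.
kernel.Bind<MyEntitiesContext>().ToSelf().InThreadScope();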
In theory, when an event handler in the service fires it executes on some thread; if another file arrives simultaneously, its handler executes on a different thread. Therefore the two events will not share an Entity Framework context, in essence just like two different HTTP requests on the web.
One thing that bothers me is the destruction of these thread-scoped objects. Looking at the Ninject wiki:
.InThreadScope() - One instance of the type will be created per thread.
.InRequestScope() - One instance of the type will be created per web request, and will be destroyed when the request ends.
Based on this I understand that InRequestScope objects will be destroyed (garbage collected?) when, or at some point after, the request ends. It says nothing, however, about how InThreadScope objects are destroyed. To get back to my example: when the file watcher's event handler method completes and the thread goes away (back to the thread pool?), what happens to the InThreadScope-d objects that were injected?
EDIT:
One thing is clear now: when using InThreadScope(), the object is not destroyed when the file watcher's handler exits. I was able to reproduce this by dropping many files into the folder; eventually I got the same thread id, which resulted in the exact same Entity Framework context as before, so it's definitely not sufficient for my application. In this case a file that came in 5 minutes later could be using a stale context that was assigned to the same thread earlier.
Objects that are thread-static could possibly live for a very long time, which means that at some time that ObjectContext will get stale and work with old (cached) values, which will result in hard-to-find bugs.
I normally create an ObjectContext with the same scope as a database transaction (I often even wrap an ObjectContext in a database transaction and dispose them right after each other). One (web) request could have multiple database transactions, but will usually have one 'business transaction' that executes the business logic. Other transactions could be started for things such as logging (before, after, and sometimes during the business transaction). When you reuse the ObjectContext for the complete request you can end up with a mess, because when a business transaction fails, the ObjectContext could be left in an invalid state, which might affect operations (such as logging) that reuse that same ObjectContext.
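As a rough sketch of that scoping (the names here are illustrative, not from a real code base):

using System.Transactions;

// One business transaction: the transaction and the context share a scope
// and are disposed right after each other.
using (var transaction = new TransactionScope())
using (var context = new OrdersObjectContext())
{
    RunBusinessTransaction(context);
    context.SaveChanges();
    transaction.Complete();
}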
With your Windows Service, I think every event raised by the file watcher possibly triggers a new business transaction. In that case I would create a new ObjectContext per event.
Long story short, I would not inject an ObjectContext whose lifetime is managed by your IoC framework. I would inject a factory that allows your code to create a new ObjectContext and dispose of it. I just answered another question about this a few hours back; look at the solution I propose in that answer.
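A minimal sketch of that factory approach in the file-watcher scenario (all names are illustrative):

public interface IObjectContextFactory
{
    OrdersObjectContext CreateContext();
}

public class ObjectContextFactory : IObjectContextFactory
{
    public OrdersObjectContext CreateContext()
    {
        return new OrdersObjectContext();
    }
}

// The event handler, not the container, controls the context's lifetime:
private void OnFileCreated(object sender, FileSystemEventArgs e)
{
    using (var context = this.contextFactory.CreateContext())
    {
        // One business transaction per file event.
        ProcessFile(e.FullPath, context);
        context.SaveChanges();
    }
}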
Good luck.
Related
I have a web API where one of my classes in the service layer has an instance of AutoMapper injected via DI.
This service has a method that, when called, starts a task in the background and then immediately returns a result to the caller, while the background task continues to execute. This works fine for our use case, since all of our dependencies are initialized via DI when the service is constructed, and they are thread-safe too.
There is one exception, however: if the background task tries to use the injected AutoMapper instance after the request has returned (and therefore the DI scope has already been disposed), we get an ObjectDisposedException. It seems to be trying to somehow access the DI scope, even though AutoMapper is already instantiated and shouldn't need to.
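Roughly, the pattern looks like this (a simplified sketch; the names are made up):

public class ReportService
{
    private readonly IMapper mapper; // resolved from the request's DI scope

    public ReportService(IMapper mapper)
    {
        this.mapper = mapper;
    }

    public ReportStarted Start(ReportRequest request)
    {
        // Fire-and-forget: this task outlives the request scope.
        Task.Run(() =>
        {
            // Throws ObjectDisposedException once the request's scope is
            // disposed - the mapper apparently still reaches back into it.
            var model = this.mapper.Map<ReportModel>(request);
            Process(model);
        });

        return new ReportStarted();
    }
}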
How can I prevent this behavior? I really don't want or need AutoMapper to access our DI scope at all; once the instance is constructed I just want it to work without needing further access to DI.
I've built a WPF application using Entity Framework. My data store consists of hierarchical data (one project, with multiple different children entities).
To date I've been using a singleton pattern for my Context, as this allows me to have a global navigation tree in my UI (which then lazy loads as the user chooses to expand a specific parent to show its children). This has been working great up until now, but I'm now running into the dreaded exception:
A second operation started on this context before a previous asynchronous operation completed. Use 'await' to ensure that any asynchronous operations have completed before calling another method on this context. Any instance members are not guaranteed to be thread safe.
I understand that I'm seeing this exception because actions are being performed on some entities while simultaneous requests are made to load other entities, all from the same singleton context.
I further understand that it's best practice to keep a context as short-lived as possible. However, how will this be possible if I want the user to be able to see the whole project and make changes to some entities at a time? I'm at a complete loss as to how to have this global navigation tree in place with a short-lived context (as I keep running into the 'context has been disposed' problem).
Should I implement some locking mechanism around the context, or worse still, have this locking mechanism check each property before requesting it from the context? What is the recommended best practice for this scenario?
Correct, DbContext instances are cheap (they're just wrappers around pooled database connections).
If you want to maintain entities between persistence operations then you can detach them and reattach them to a new DbContext instance:
See https://msdn.microsoft.com/en-us/data/jj592676.aspx
FooEntity fromPreviousContext = ... // an entity loaded by an earlier, now-disposed context

using (var context = new MyDbContext()) // MyDbContext: your DbContext-derived class
{
    // Attach the detached entity, then mark it Modified so that
    // SaveChanges actually writes the pending changes to the database.
    context.Foos.Attach(fromPreviousContext);
    context.Entry(fromPreviousContext).State = EntityState.Modified;
    context.SaveChanges();
}
A side-note: the Singleton pattern is considered by many to be an anti-pattern, as it is easy to misuse, especially when a singleton instance is used to store contextual data - it then becomes just a slightly more polite approach to global variables. You might want to consider the Context pattern instead (unrelated to DbContext).
I have a situation where an object C is required by two types of classes. One of these classes runs in a separate thread; the other one creates multiple threads with the help of a timer elapsed event.
So there are basically two lifetimes of object C.
Object C is created along with A and B by a factory. For the first class I create the instance through the master factory, but for the second one I will have to pass in the entire factory. The second class will then decide at run time (based on the timer tick) when and how to create object C.
My question is regarding the second case: I am passing the entire factory, which besides knowing how to create object C also knows how to create A and B. Is this considered bad design?
I am attaching a snapshot of what I am doing:
[Composition snapshot]
When working with multiple threads, each thread should get its own object graph. This means that every time you spin off some operation to a new thread (or a thread from the thread pool), you should ask the container again for the root object to work with. Prevent passing services from one thread to another, because this scatters knowledge about the thread-safety of your services throughout the code base, whereas with dependency injection you try to centralize this knowledge in a single place (the composition root). When this knowledge is scattered throughout the application, it becomes much harder to change the behavior of components where thread-safety is concerned.
When you do this, there is probably no need to even have two different configurations for that class. That class might simply be registered as transient: because you resolve it at each pulse of the timer, each thread gets its own instance. Or the lifetime might be scoped, in which case the class's lifetime will probably end when the timed operation ends.
The code that the timer calls, which calls back into the container, should be part of the composition root. Since the service is resolved on a background thread, you will often have to wrap that call in some sort of scope (lifetime scope, child container, etc.). This allows that instance (or any other registered service) to live for the duration of that scope.
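Sketched with a generic container API (the exact scope and resolve methods depend on your container):

// Part of the composition root: each timer tick resolves a fresh graph.
private void OnTimerElapsed(object sender, ElapsedEventArgs e)
{
    using (var scope = this.container.BeginScope())
    {
        // Each tick (and therefore each thread) gets its own instance of C
        // and of everything C depends on.
        var processor = scope.Resolve<IProcessor>();
        processor.Process();
    } // scoped services are disposed when the timed operation ends
}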
At the moment, I'm doing something similar to this to integration test a library that communicates with our API controllers, and so far so good, but I've run into a snag. In all of our other integration tests, we run each test inside an MSDTC transaction at isolation level ReadCommitted, so that each one gets its own little private session with the databases, and at the end of each test the transaction is rolled back. But that doesn't work for these tests: the transactions are per-thread, and all of the HttpClient/HttpServer methods are asynchronous, so the work is done on a different thread than the test's main thread, has no ambient transaction to subscribe to, and goes right along and commits.
I've come across a few posts about how to open a TransactionScope on one thread and then create a dependent transaction to be passed to a new task via a closure, but I have no idea how to apply that to an HttpClient that's connected to an in-memory HttpServer. I suspect I'm just not thinking about it the right way, but that's about all I have to go on.
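For reference, the dependent-transaction pattern those posts describe looks roughly like this:

using System.Transactions;

using (var scope = new TransactionScope())
{
    // Clone the ambient transaction so a worker thread can enlist in it.
    DependentTransaction dependent = Transaction.Current.DependentClone(
        DependentCloneOption.BlockCommitUntilComplete);

    Task work = Task.Factory.StartNew(() =>
    {
        using (var inner = new TransactionScope(dependent))
        {
            // Work done here enlists in the test's transaction.
            inner.Complete();
        }
        dependent.Complete(); // unblocks the outer commit/rollback
    });

    work.Wait();
    // No scope.Complete() in a test, so everything rolls back on Dispose.
}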
What would make sense/work/etc? I have full control over the creation of the HttpServer and the HttpClient that will connect to it, but I'm at a loss as to what to do with them.
UPDATE:
Some progress has been made: I wrote a message handler that can create a dependent transaction on the worker thread if Transaction.Current is populated when it gets there. For some of my calls it is, but for others it isn't, and I'm wondering if I may be chasing shadows - there's a lot of ContinueWith around, and I believe a continuation executes on the calling thread (which would naturally have a transaction) if the antecedent task is already complete.
Would it be possible just to run the whole thing synchronously and carry the test's thread all the way through? I've experimented some with ContinueWith'ing synchronously, without much success.
If you aren't dead set on using a real HTTP connection, you could call the interfaces directly via code (using an assembly reference) from a test framework that allows per-session or per-test start-up and shut-down work (such as MSTest's class and test initialize functions). In that case, you would open a TransactionScope that is shared across the class in a member variable and dispose it in the class or test shut-down function. Because you didn't call Complete(), the operations that occurred during the transaction will be rolled back.
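A rough sketch of that setup with MSTest (class and member names are made up):

[TestClass]
public class ApiIntegrationTests
{
    private TransactionScope transaction;

    [TestInitialize]
    public void TestInitialize()
    {
        transaction = new TransactionScope(
            TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted });
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // Complete() was never called, so Dispose rolls everything back.
        transaction.Dispose();
    }
}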
As it turns out, the HttpClient and HttpServer weren't spinning up background threads - rather, I had some errant Task.Factory.StartNew calls in my code that were causing the problem. Removing those got me going.
My team and I are working on an application that accesses a "huge" database, roughly 32M rows in 8 months. The application is a RIA Domain Service application. We have optimized the application and the database design in such a way that even on a box with very limited resources the response time is never more than few seconds.
However, there are certain tasks that need to be performed on a large record set (at least 2-3M records per operation); an example is the generation of a monthly report. We definitely cannot keep the application waiting for the result, because it would hit the 30-second timeout.
After reading this post, I thought I could create an [Invoke] method that spawns a new thread and consequently frees the client up. The thread would be in charge of extracting data from the DB and writing it nicely into a PDF. I've tried to implement this scenario, but I get an exception saying that the underlying connection has already been disposed...
Is this approach correct? Can I achieve what I am trying to do or there is some issue I cannot overcome? And is there any better way to do this?
Cheers,
Gianluca.
Ok, I've realized my question is a bit silly.
As far as I understood, the ObjectContext exists as long as the client is connected; otherwise it gets disposed. Because I was writing an Invoke method that does not require any change tracking, I resolved this by:
- spawning a new thread from within the Invoke method
- instantiating a new EF context inside the worker thread
- disposing the new EF context as soon as the separate thread's operation terminates (see the sketch below).
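In outline, it looks something like this (the names are made up; the real worker writes the PDF):

[Invoke]
public void GenerateMonthlyReport()
{
    // Return to the client immediately; do the heavy work on a new thread.
    var worker = new Thread(() =>
    {
        // A fresh context owned by this thread, instead of the domain
        // service's context (which is disposed once the call returns).
        using (var context = new ReportingEntities())
        {
            var records = context.Orders.ToList(); // the multi-million-row query
            WritePdfReport(records);
        } // disposed as soon as the thread's operation terminates
    });
    worker.IsBackground = true;
    worker.Start();
}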
Cheers,
Gianluca.