I've built a WPF application using Entity Framework. My data store consists of hierarchical data (one project, with multiple different children entities).
To date I've been using a singleton pattern for my Context, as this allows me to have a global navigation tree in my UI (which then lazy loads as the user chooses to expand a specific parent to show its children). This has been working great up until now, but I'm now running into the dreaded exception:
A second operation started on this context before a previous asynchronous operation completed. Use 'await' to ensure that any asynchronous operations have completed before calling another method on this context. Any instance members are not guaranteed to be thread safe.
I understand that I'm seeing this exception due to some actions being performed on some entities and simultaneous requests being made to load other entities, all from the same singleton context.
I further understand that it's best practice to keep a context as short-lived as possible. However, how is that possible if I want the user to be able to see the whole project and make changes to a few entities at a time? I'm at a complete loss as to how to keep this global navigation tree in place with a short-lived context (I keep running into the 'context has been disposed' problem).
Should I implement some locking mechanism around the context, or worse still, have this locking mechanism check each property before requesting it from the context? What is the recommended best practice for this scenario?
Correct: DbContext instances are cheap to create (they're essentially thin wrappers around pooled database connections), so keep them short-lived.
If you want to keep entities around between persistence operations, you can detach them and reattach them to a new DbContext instance:
See https://msdn.microsoft.com/en-us/data/jj592676.aspx
FooEntity fromPreviousContext = ...;
// MyDbContext is your derived context type that exposes a Foos DbSet
using (var context = new MyDbContext())
{
    // Attach marks the entity as Unchanged; flag it Modified so the changes are persisted
    context.Foos.Attach(fromPreviousContext);
    context.Entry(fromPreviousContext).State = EntityState.Modified;
    context.SaveChanges();
}
A side note: the Singleton pattern is generally considered by many to be an anti-pattern, as it is easy to misuse, especially when a singleton instance is used to store contextual data; it becomes just a slightly more polite approach to global variables. You might want to consider the Context pattern instead (unrelated to DbContext).
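As a rough illustration of that idea (hypothetical names, not a prescribed implementation), the contextual data travels with the call instead of living in a global singleton:

public class OperationContext
{
    public Guid CorrelationId { get; } = Guid.NewGuid();
    public string CurrentUser { get; set; }
}

public class ProjectService
{
    // The caller decides the lifetime of the context and passes it in explicitly
    public void RenameProject(OperationContext ctx, int projectId, string newName)
    {
        // ... use ctx.CurrentUser for auditing and ctx.CorrelationId for logging ...
    }
}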
I have a service that uses several repositories. I want them all to use the same transaction so that, if anything goes wrong, I can roll back the transaction and nothing is left in an invalid state in the database.
I've created a connection factory that returns the same connection to all clients.
public IDbConnection Connection => _db ?? (_db = _factory.OpenDbConnection());
Repositories take the class holding this property as a constructor argument. This seemingly works and enables them to use the same connection, and I can manage the transaction at the outer level. Both the connection factory and its clients are registered in IoC with ReuseScope.Request.
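A minimal sketch of the setup being described (assuming ServiceStack.OrmLite, since OpenDbConnection suggests it; the class names are illustrative):

// One holder per request lazily opens a single connection and hands it to every repository
public class DbConnectionHolder : IDisposable
{
    private readonly IDbConnectionFactory _factory;
    private IDbConnection _db;

    public DbConnectionHolder(IDbConnectionFactory factory)
    {
        _factory = factory;
    }

    public IDbConnection Connection => _db ?? (_db = _factory.OpenDbConnection());

    public void Dispose() => _db?.Dispose();
}

public class OrderRepository
{
    private readonly DbConnectionHolder _holder;

    public OrderRepository(DbConnectionHolder holder)
    {
        _holder = holder;
    }

    // Every repository resolved within the same request reuses _holder.Connection
}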
I am wondering though, are there any pitfalls to this?
What happens if someone starts using async/await with this shared connection? Do I have to ensure that the connection is never shared across threads? (I was thinking about storing it in a ThreadLocal inside the connection factory.)
Anything else?
I'm sorry for this kind of vague question, but I believe this must be a quite common use case.
ADO.NET's IDbConnection is not thread-safe and should never be used across multiple threads. Typically each thread would retrieve its own pooled DB connection from the DB connection factory.
Whilst async/await can continue on a different thread, the continuations are not executed concurrently, so the same DB connection can be used: it doesn't get used by multiple threads at the same time.
When I do use repositories, they're responsible for logical units of functionality (never per-table, which I consider an anti-pattern that forces unnecessary friction and abstraction penalties), so I'd rarely have transactions spanning multiple repositories. If I did, I'd make it explicit and pass the same DB connection to each repository within an explicit DB transaction scope, e.g.:
public object Any(MyRequest request)
{
    using (var dbTrans = Db.OpenTransaction())
    {
        // Both repositories share the same Db connection and enlist in the same transaction
        MyRepository1.Something(Db, request.Id);
        MyRepository2.Something(Db, request.Id);
        //....
        dbTrans.Commit();
    }
}
I am trying to understand the scope of repository classes in an ASP.NET application. I assume they are thread-safe in request scope, since each request runs on a separate thread. But what about registering them as singletons; is that a valid scenario?
Because these classes don't hold state, only methods that manipulate data, different threads executing those methods would each have their own stack frames. Is my understanding right? Could anyone provide more insight?
public interface ICustomerRepository
{
    List<Customer> GetAll();
    Customer GetById(int id);
}

public class CustomerRepository : ICustomerRepository
{
    // implement methods
}
Exposing a repository as a singleton for a concurrent environment is a bad idea.
You could possibly implement the repository in a way that is safe for concurrent use, but that means the only guarantee of concurrency consistency lies somewhere in the implementation. There is no other mechanism to enforce that concurrent calls will not fail; the contract at the programming-language level (the repository interface) is just too weak to express such a requirement.
On the other hand, if each http context gets its own instance, the implementation details do not matter.
I suggest you read more on object lifetime. A singleton is just a specific example of the more general idea of controlling the lifetime. You can have objects with transient lifetime, objects that are shared as a single instance for the lifetime of your application, and also objects that live in the context of a thread or an http context (which possibly spans multiple threads). And one of the ways to control the lifetime is to have a factory that creates instances.
If you need something simple, you could have something that looks like a singleton but controls the lifetime in an arbitrary way:
http://netpl.blogspot.com/2010/12/container-based-pseudosingletons-in.html
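A rough sketch of that idea (hypothetical names, not the blog's actual code): callers see what looks like a singleton, but a replaceable provider decides the actual lifetime, so the instance can just as easily be per-request or per-thread:

public static class RepositoryContext
{
    private static readonly Lazy<ICustomerRepository> SharedInstance =
        new Lazy<ICustomerRepository>(() => new CustomerRepository());

    // Default provider: one shared instance for the whole application
    private static Func<ICustomerRepository> _provider = () => SharedInstance.Value;

    public static ICustomerRepository Current => _provider();

    // The composition root (or a test) can swap in a per-request or per-thread provider
    public static void SetProvider(Func<ICustomerRepository> provider) => _provider = provider;
}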
I have a Windows service that contains a file watcher that raises events when a file arrives. When an event is raised I will be using Ninject to create business layer objects that internally hold a reference to an Entity Framework context, which is also injected via Ninject. In my web applications I always used InRequestScope for the context; that way, within one request, all business layer objects work with the same Entity Framework context. In my current Windows service scenario, would it be sufficient to switch the Entity Framework context binding to an InThreadScope binding?
In theory, when an event handler in the service triggers, it executes on some thread; if another file arrives simultaneously, it will be handled on a different thread. Therefore the two events will not share an Entity Framework context, in essence just like two different HTTP requests on the web.
One thing that bothers me is the destruction of these thread scoped objects, when you look at the Ninject wiki:
.InThreadScope() - One instance of the type will be created per thread.
.InRequestScope() - One instance of the type will be created per web request, and will be destroyed when the request ends.
Based on this I understand that InRequestScope objects will be destroyed (garbage collected?) when (or at some point after) the request ends. This says nothing, however, about how InThreadScope objects are destroyed. To get back to my example: when the file watcher event handler method completes and the thread goes away (back to the thread pool?), what happens to the InThreadScope-d objects that were injected?
EDIT:
One thing is clear now: when using InThreadScope(), the object is not destroyed when the handler for the file watcher exits. I was able to reproduce this by dropping many files in the folder; eventually I got the same thread id, which resulted in the exact same Entity Framework context as before, so it's definitely not sufficient for my application. In this case a file that came in 5 minutes later could be using a stale context that was assigned to the same thread before.
Objects that are thread-static could possibly live for a very long time, which means that at some time that ObjectContext will get stale and work with old (cached) values, which will result in hard-to-find bugs.
I normally create an ObjectContext with the same scope as I create a database transaction (I often even wrap an ObjectContext in a database transaction and dispose them right after each other). One (web) request could possibly have multiple database transactions, but will usually have one 'business transaction', which executes the business logic. Other transactions could be started for things such as logging (before, after, and sometimes during the business transaction). When you reuse the ObjectContext for the complete request you could end up with a mess, because when a business transaction fails, the ObjectContext could be in an invalid state, which might have an effect on operations (such as logging) that reuse that same ObjectContext.
With your Windows Service, I think every event raised by the file watcher possibly triggers a new business transaction. In that case I would create a new ObjectContext per event.
Long story short, I would not inject an ObjectContext that has a lifetime that is managed by your IoC framework. I would inject a factory that allows your code to create a new ObjectContext and dispose it. I just answered another question about this a few hours back. Look at the solution I propose in that answer.
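As a rough sketch of that factory approach (hypothetical names; adapt to your container and context type):

// The code, not the container, owns the context's lifetime
public interface IObjectContextFactory
{
    MyObjectContext Create();
}

public class FileArrivedHandler
{
    private readonly IObjectContextFactory _contextFactory;

    public FileArrivedHandler(IObjectContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    // One fresh context per business transaction (here: per file event), disposed when done
    public void Handle(string filePath)
    {
        using (var context = _contextFactory.Create())
        {
            // ... process the file and persist changes ...
            context.SaveChanges();
        }
    }
}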
Good luck.
I've got a problem with a multi-threaded desktop application using Castle ActiveRecord in C#:
To keep the GUI alive while searching for objects based on user input, I'm using a BackgroundWorker for the search function. Some of the properties of the objects, especially some HasMany relations, are marked as Lazy.
Now, when the search is finished and the user selects a resulting object, some of the properties of this object should be displayed. But as the search was done by the BackgroundWorker on a different thread, accessing those properties fails because the session needed for lazy loading is no longer available.
What would be the best way to do the search in an extra thread to keep the GUI alive, and still be able to access all properties correctly, including those marked as lazy?
Thanks for any advice!
Regards
sc911
A couple of options:
When querying, do an eager load of whatever you will need later in the main thread, thus avoiding lazy loading.
Use ISession.Lock() to reattach the entities to the ISession in the main thread.
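For example, a minimal sketch of the second option at the NHibernate level (Castle ActiveRecord sits on top of NHibernate; how you obtain the ISession here depends on your session scope setup, and selectedResult/Children are placeholder names):

// selectedResult was loaded on the BackgroundWorker thread and is now detached;
// session is an open NHibernate ISession on the main/UI thread.
// LockMode.None reassociates the entity with this session without hitting the database.
session.Lock(selectedResult, LockMode.None);

// Lazy properties and collections can now initialize through the new session
var children = selectedResult.Children;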
Solved it with this nice blog post here:
http://www.darkside.co.za/archive/2008/09/09/castle-activerecord-lazy-loading-session-scopes-again.aspx
My current project is a WPF application with an SQL Server back end.
In WPF, the UI can only be modified by the UI thread. If a UI modification needs to be done on another thread, then the dispatcher object can be called and given an action. Effectively, this is mapping my Delegate to a WM_ message.
Since the LINQ to SQL DataContexts are also single-threaded, how could I copy this "Dispatcher" idea from WPF and create a similar object that I can use to marshal requests so that my shared DataContext is always accessed from the "public SQL thread"?
I'm guessing I'd need to create a thread at startup which initialises the data contexts and then sleeps until woken by a SqlThread.Invoke() method.
Does anyone know of anything similar to this idea or any materials that may help me do this?
If you mean a LINQ-to-SQL DataContext, I would advise against this; use each DataContext as a short-lived unit of work, then Dispose() it. Don't keep one around for lots of different purposes (there are issues with stale data, cache growth, threading, concurrency, plus (importantly) how to handle failure / rollback).
Re the bigger picture:
Essentially you are describing a work queue, such as a producer/consumer queue. There are plenty of these around, and they are relatively easy to write (for example, see here or here; just add a loop to dequeue and process items). IIRC .NET 4.0 also includes such constructs pre-canned in the parallel extensions.
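A minimal sketch of the dedicated-thread idea using BlockingCollection<T> (names are illustrative; whether you actually keep a long-lived DataContext alive on that thread is a separate question, per the caveats above):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Funnels all database work onto one dedicated thread via a producer/consumer queue
public class SqlDispatcher : IDisposable
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly Thread _worker;

    public SqlDispatcher()
    {
        _worker = new Thread(() =>
        {
            // Blocks until work arrives; ends when CompleteAdding() is called
            foreach (var work in _queue.GetConsumingEnumerable())
                work();
        });
        _worker.IsBackground = true;
        _worker.Start();
    }

    // Queues work for the SQL thread and returns a Task the caller can wait on or continue from
    public Task Invoke(Action work)
    {
        var tcs = new TaskCompletionSource<object>();
        _queue.Add(() =>
        {
            try { work(); tcs.SetResult(null); }
            catch (Exception ex) { tcs.SetException(ex); }
        });
        return tcs.Task;
    }

    public void Dispose()
    {
        _queue.CompleteAdding();
        _worker.Join();
    }
}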