In all the examples I see for Entity Framework data access, every method has its own using block, as shown below.
Is there an alternative to this approach? For example, can the context object just be a class member, such as:
MyModelContext context = new MyModelContext();
Is there a reason why a new context object has to be created for each method in the DAO class?
public class DaoClass
{
public void DoSomething()
{
using (var context = new MyModelContext())
{
// Perform data access using the context
}
}
public void DoAnotherThing()
{
using (var context = new MyModelContext())
{
// Perform data access using the context
}
}
public void DoSomethingElse()
{
using (var context = new MyModelContext())
{
// Perform data access using the context
}
}
}
You could have the DaoClass implement IDisposable and make the context a member of the class. Just make sure to wrap the DaoClass in a using statement, or call Dispose() on the DaoClass instance when you are done with it.
public class DaoClass : IDisposable
{
MyModelContext context = new MyModelContext();
public void DoSomething()
{
// use the context here
}
public void DoAnotherThing()
{
// use the context here
}
public void DoSomethingElse()
{
// use the context here
}
public void Dispose()
{
context.Dispose();
}
}
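For example, calling code would then look something like this (a minimal sketch):
using (var dao = new DaoClass())
{
    dao.DoSomething();
    dao.DoAnotherThing();
    dao.DoSomethingElse();
} // Dispose() runs here and disposes the underlying context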
Please note that the context object is effectively the equivalent of a database transaction.
It implements the IDisposable interface because a transaction must be closed when you are done with it, so you either have to use a using statement or implement IDisposable yourself, as Lews Therin demonstrated.
We use multiple instances of the context to separate different transactions. There will be cases where you want all changes to go through as a single transaction and either commit together or roll back together; then you put them all in one context instance. But there will also be cases where you want to save one set of data independently of another; then you use different transactions, i.e. different context objects.
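For example (a sketch; Orders, OrderLines and AuditEntries are just hypothetical entity sets, and the variables stand in for entities created elsewhere):
// One logical transaction: both inserts are saved together or not at all
using (var context = new MyModelContext())
{
    context.Orders.Add(order);
    context.OrderLines.Add(orderLine);
    context.SaveChanges();
}

// An unrelated change gets its own context, i.e. its own transaction
using (var auditContext = new MyModelContext())
{
    auditContext.AuditEntries.Add(auditEntry);
    auditContext.SaveChanges();
}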
To get a better grasp of this topic, take a look at the unit of work pattern.
Hope I could help, merry coding!
The way you show it is how I've seen it recommended everywhere. I've seen weird issues with class-level declarations, such as stale or incorrect data being returned.
To get around duplicating all the code, I like to write an Execute method that I can reuse, or change in one place without having to touch every using block.
private T Execute<T>(Func<MyModelContext, T> function)
{
using (MyModelContext ctx = new MyModelContext())
{
var result = function(ctx);
ctx.SaveChanges();
return result;
}
}
public List<Type> GetTypes()
{
return Execute((ctx) =>
{
return ctx.Types.ToList();
});
}
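For methods that don't return anything, the same idea works with an Action-based overload (a sketch following the same pattern; AddType is just for illustration, and Type is the same entity used in GetTypes above):
private void Execute(Action<MyModelContext> action)
{
    using (MyModelContext ctx = new MyModelContext())
    {
        action(ctx);
        ctx.SaveChanges();
    }
}

public void AddType(Type newType)
{
    Execute(ctx =>
    {
        ctx.Types.Add(newType); // reuse the same context lifecycle and SaveChanges call
    });
}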
I have a class that I want to use only in one thread. If I create an object of the class in one thread and use it in another, it will cause lots of problems. Currently, I resolve this problem like this:
I have a Context class and I want to use it only in one thread:
public class Context
{
public Thread CreatedThread { get; }
public Context()
{
CreatedThread = Thread.CurrentThread;
}
public void AssertThread()
{
if (CreatedThread != Thread.CurrentThread)
{
throw new InvalidOperationException("Use only one thread!");
}
}
//Lot of properties and methods here
}
And here is the usage of the Context class in a Student class:
public class Student
{
Context context;
public Context Context
{
get
{
if (context == null)
context = new Context();
context.AssertThread();
return context;
}
}
}
And when I use context in a different thread, it will throw an error:
var student = new Student();
var context = student.Context;
Task.Run(() =>
{
var context2 = student.Context; // InvalidOperationException
});
But this solution is not reliable. For example, when another class uses the context, I have to remember to call AssertThread in its context property getter as well. And if I store the context in a variable and use that variable from a different thread, my exception will not be thrown. So, is there any way to enforce that a class is used only from one thread?
AFAIK, neither the C# language nor the standard .NET APIs provide a way to automatically embed the AssertThread() check at the start of every public method and property of a class. If you don't like the idea of adding the check manually, and you are serious about it, you might want to look for a tool/add-on that can inject the check for you at compile time, such as PostSharp, Fody, or their alternatives.
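If you do stick with manual checks, one way to cut down on the repetition (just a sketch, not something from the original answer; SingleThreadBound and DoSomething are made-up names) is to keep the ownership check in a small base class and call it from every public member:
public abstract class SingleThreadBound
{
    private readonly Thread owner = Thread.CurrentThread;

    // Call this at the start of every public method/property of a derived class
    protected void GuardThread()
    {
        if (owner != Thread.CurrentThread)
            throw new InvalidOperationException("Use only one thread!");
    }
}

public class Context : SingleThreadBound
{
    public void DoSomething()
    {
        GuardThread(); // the call still has to be added manually in each member
        // ...
    }
}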
There is an extended implementation of the command pattern that supports multi-commands (groups) in C#:
var ctx= //the context object I am sharing...
var commandGroup1 = new MultiItemCommand(ctx, new List<ICommand>
{
new Command1(ctx),
new Command2(ctx)
});
var commandGroup2 = new MultiItemCommand(ctx, new List<ICommand>
{
new Command3(ctx),
new Command4(ctx)
});
var groups = new MultiCommand(new List<ICommand>
{
commandGroup1 ,
commandGroup2
}, null);
Now, the execution is simply:
groups.Execute();
I am sharing the same context (ctx) object.
The execution plan of the web app needs to run the commandGroup1 and commandGroup2 groups on different threads. Specifically, commandGroup2 will be executed on a new thread and commandGroup1 on the main thread.
Execution now looks like:
//In Main Thread
commandGroup1.Execute();
//In the new Thread
commandGroup2.Execute();
How can I share the same context object (ctx) in a thread-safe way, so that I am able to roll back commandGroup1 from the new thread?
Is t.Start(ctx); enough or do I have to use lock or something?
Some code implementation example is here
The provided sample code certainly leaves open a large number of questions about your particular use-case; however, I will attempt to answer the general strategy to implementing this type of problem for a multi-threaded environment.
Does the context or its data get modified in a coupled, non-atomic way?
For example, would any of your commands do something like:
Context.Data.Item1 = "Hello"; // Setting both values is required, only
Context.Data.Item2 = "World"; // setting one would result in invalid state
If so, then you absolutely need to use lock(...) statements somewhere in your code. The question is where.
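For instance, the coupled assignments above would need to be guarded by a dedicated lock object so no other thread can observe the half-updated state (a sketch; dataSyncRoot is the kind of field introduced further down in this answer):
lock (Context.dataSyncRoot)
{
    Context.Data.Item1 = "Hello";
    Context.Data.Item2 = "World"; // both values are set before the lock is released
}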
What is the thread-safety behavior of your nested controllers?
In the linked GIST sample code, the CommandContext class has properties ServerController and ServiceController. If you are not the owner of these classes, then you must carefully check the documentation on the thread-safety of these classes as well.
For example, if your commands running on two different threads perform calls such as:
Context.ServiceController.Commit(); // On thread A
Context.ServiceController.Rollback(); // On thread B
There is a strong possibility that these two actions cannot be invoked concurrently if the creator of the controller class was not expecting multi-threaded usage.
When to lock and what to lock on
Take the lock whenever you need to perform multiple actions that must happen completely or not at all, or when invoking long-running operations that do not expect concurrent access. Release the lock as soon as possible.
Also, locks should only be taken on read-only or constant properties or fields. So before you do something like:
lock(Context.Data)
{
// Manipulate data sub-properties here
}
Remember that it is possible to swap out the object that Data is pointing to. The safest implementation is to provide dedicated locking objects:
internal readonly object dataSyncRoot = new object();
internal readonly object serviceSyncRoot = new object();
internal readonly object serverSyncRoot = new object();
for each sub-object that requires exclusive access and use:
lock(Context.dataSyncRoot)
{
// Manipulate data sub-properties here
}
There is no magic bullet for when and where to take the locks, but in general, the higher up in the call stack you put them, the simpler and safer your code will probably be, at the expense of performance - since both threads can no longer execute simultaneously. The further down you place them, the more concurrent your code will be, but the harder it becomes to get the locking right.
Aside: there is almost no performance penalty for the actual taking and releasing of the lock, so no need to worry about that.
Assume we have a MultiCommand class that aggregates a list of ICommands and at some point must execute all of the commands asynchronously. All commands must share the context. Each command could change the context's state, but there is no set order!
The first step is to kick off every ICommand's Execute method, passing in the CTX. The next step is to set up an event listener for new CTX changes.
public class MultiCommand
{
private readonly List<ICommand> list = new List<ICommand>();
public List<ICommand> Commands { get { return list; } }
public CommandContext SharedContext { get; set; }
public MultiCommand()
{
//Hook up listener for new Command CTX published from other tasks
XEvents.CommandCTX += OnCommandCTX;
}
public MultiCommand(List<ICommand> commands) : this()
{
list.AddRange(commands);
}
private void OnCommandCTX(object sender, CommandContext e)
{
//Some other task finished, update SharedContext
SharedContext = e;
}
public MultiCommand Add(ICommand cc)
{
list.Add(cc);
return this;
}
internal void Execute()
{
list.ForEach(cmd =>
{
cmd.Execute(SharedContext);
});
}
public static MultiCommand New()
{
return new MultiCommand();
}
}
Each command handles the asynchronous part similar to this:
internal class Command1 : ICommand
{
public event EventHandler CanExecuteChanged;
public bool CanExecute(object parameter)
{
throw new NotImplementedException();
}
public async void Execute(object parameter)
{
var ctx = (CommandContext)parameter;
var newCTX = await Task.Run(() => {
//This lambda runs on a thread pool thread; it closes over ctx,
//so the changes it makes are returned through the task's result
ctx.Data = GetNewData();
return ctx;
});
//Back on the calling context: publish the updated context to the other commands/groups
newCTX.NotifyNewCommandContext();
}
private RequiredData GetNewData()
{
throw new NotImplementedException();
}
}
Finally we set up a common event handler and notification system.
public static class XEvents
{
public static EventHandler<CommandContext> CommandCTX { get; set; }
public static void NotifyNewCommandContext(this CommandContext ctx, [CallerMemberName] string caller = "")
{
if (CommandCTX != null) CommandCTX(caller, ctx);
}
}
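Putting it together, the calling code could look roughly like this (a sketch from within the same assembly, since Execute is internal; Command2 and initialContext are assumed to exist):
var multi = new MultiCommand()
    .Add(new Command1())
    .Add(new Command2());

multi.SharedContext = initialContext; // seed the shared context
multi.Execute();                      // each command receives SharedContext and publishes updates via XEvents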
Further abstractions are possible in each Command's execute function. But we won't discuss that now.
Here's what this design does and doesn't do:
It allows any finished task to push its updated context back to the MultiCommand class where it was first set.
This assumes there is no workflow-based state necessary. The post merely indicated that a bunch of tasks only had to run asynchronously, rather than in an ordered asynchronous manner.
No concurrency manager is necessary, because we rely on each command's completion of its asynchronous task to return the new context via the notification event.
If you need true concurrency, then that implies the context state matters; that design is similar to this one but different, and is easily implemented using functions and callbacks for the closure.
As long as each context is only used from a single thread at a time, there is no problem with using it from multiple threads.
Can we use the using statement in a constructor to declare an instance of an object for later use? For example:
public class TestClass {
private DataClassesDataContext _dataContext;
public TestClass(string connString){
using (this._dataContext = DataClassesDataContext(connString));
}
private bool someMethod(){
_dataContext.instanceMethod(); // i want to use instance methods wherever needed and define once
}
}
You must implement IDisposable yourself and call Dispose on the data context from your Dispose method.
public class TestClass : IDisposable {
private DataClassesDataContext _dataContext;
public TestClass(string connString){
_dataContext = new DataClassesDataContext(connString);
}
private bool someMethod(){
_dataContext.instanceMethod(); // i want to use instance methods wherever needed and define once
}
public void Dispose(){
_dataContext.Dispose();
}
}
It's not clear what you expect the using statement to do here. All it does is make sure that Dispose is called at the end of the block.
So basically you'd be creating a DataClassesDataContext (I assume you missed the new keyword...), storing a reference in a field, and then immediately disposing of it. That's not going to work well - you should get rid of the using statement, but quite possibly make your class implement IDisposable so that when the instance of TestClass is disposed, you dispose of the data context.
According to MSDN:
The using statement calls the Dispose method on the object in the correct way, and (when you use it as shown earlier) it also causes the object itself to go out of scope as soon as Dispose is called.
The using statement is basically syntactic sugar for try/finally.
_dataContext = new DataClassesDataContext(connString);
try
{
// ... the body of the using statement (empty in your example) ...
}
finally
{
if (_dataContext != null)
((IDisposable)_dataContext).Dispose();
}
Looking at it this way, it should become obvious that the data context has already been disposed by the time the constructor returns, and therefore can't be used by other methods as you intend. To solve the problem, you should make the class implement IDisposable.
using (this._dataContext = DataClassesDataContext(connString));
is the same as
try
{
this._dataContext = DataClassesDataContext(connString);
}
finally
{
if (this._dataContext != null)
((IDisposable)this._dataContext).Dispose();
}
So you'll end up with _dataContext disposed inside your constructor, and it will no longer be available afterwards. You should implement the IDisposable interface, and then you'll be able to use the using statement wherever you want, like this:
using (TestClass test = new TestClass("conn"))
{
//your code
}
Just a quick sanity check here!
If I have a static method in a non-static class, e.g.:
public class myClass
{
public static void AMethod()
{
// do something
}
}
Would it cause problems if I was making reference to IDisposable resources in the method body, an object context for example?
public static void AMethod()
{
ObjectContext context = new ObjectContext();
// do something
}
By "would it cause problems", I mean: would the object context be retained behind the scenes after the end of the method body, due to the fact that it was a static method?
As can be seen, the class isn't static, and the variable is local to the method.
I'm aware that I should be using 'using' here; I'm just curious whether this particular combination of circumstances could or would cause memory leaks.
To avoid problems, it is recommended to dispose of IDisposable resources as soon as you have finished using them. You can do that by wrapping them in a using statement:
public static void AMethod()
{
using (ObjectContext context = new ObjectContext())
{
// do something
}
}
After leaving the scope of your method AMethod, your context object can't be used anymore so it'll be garbage collected eventually.
But as it implements IDisposable, you should use the using statement:
using (ObjectContext context = new ...)
{
// Use the context here
}
I'm not sure exactly how to describe this question, but here goes. I've got a class hierarchy of objects that are mapped in a SQLite database. I've already got all the non-trivial code written that communicates between the .NET objects and the database.
I've got a base interface as follows:
public interface IBackendObject
{
void Read(int id);
void Refresh();
void Save();
void Delete();
}
These are the basic CRUD operations on any object. I've then implemented a base class that encapsulates much of the functionality.
public abstract class ABackendObject : IBackendObject
{
protected ABackendObject() { } // constructor used to instantiate new objects
protected ABackendObject(int id) { Read(id); } // constructor used to load object
public void Read(int id) { ... } // implemented here is the DB code
}
Now, finally, I have my concrete child objects, each of which have their own tables in the database:
public class ChildObject : ABackendObject
{
public ChildObject() : base() { }
public ChildObject(int id) : base(id) { }
}
This works fine for all my purposes so far. The child has several callback methods that are used by the base class to instantiate the data properly.
I now want to make this slightly more efficient. For example, in the following code:
public void SomeFunction1()
{
ChildObject obj = new ChildObject(1);
obj.Property1 = "blah!";
obj.Save();
}
public void SomeFunction2()
{
ChildObject obj = new ChildObject(1);
obj.Property2 = "blah!";
obj.Save();
}
In this case, I'll be constructing two completely separate in-memory instances, and depending on the order in which SomeFunction1 and SomeFunction2 are called, either Property1 or Property2 may not be saved. What I want is a way for both of these instantiations to somehow point to the same object in memory. I don't think that's possible if I'm using the "new" keyword, so I was looking for hints on how to proceed.
Ideally, I'd want to store a cache of all loaded objects in my ABackendObject class and return memory references to the already loaded objects when requested, or load the object from memory if it doesn't already exist and add it to the cache. I've got a lot of code that is already using this framework, so I'm of course going to have to change a lot of stuff to get this working, but I just wanted some tips as to how to proceed.
Thanks!
If you want to store a "cache" of loaded objects, you could easily just have each type maintain a Dictionary<int, IBackendObject> which holds loaded objects, keyed by their ID.
Instead of using a constructor, build a factory method that checks the cache:
public abstract class ABackendObject<T> where T : class
{
public T LoadFromDB(int id) {
T obj = this.CheckCache(id);
if (obj == null)
{
obj = this.Read(id); // Load the object
this.SaveToCache(id, obj);
}
return obj;
}
}
If you make your base class generic, and Read virtual, you should be able to provide most of this functionality without much code duplication.
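Fleshed out a little, the cache pieces might look like this (a sketch only; CheckCache and SaveToCache are assumed names, and Read is assumed here to return the loaded entity rather than being void):
public abstract class ABackendObject<T> where T : class
{
    // One cache per concrete type (one dictionary per closed generic type)
    private static readonly Dictionary<int, T> cache = new Dictionary<int, T>();

    public T LoadFromDB(int id)
    {
        T obj = CheckCache(id);
        if (obj == null)
        {
            obj = Read(id);        // hit the database only on a cache miss
            SaveToCache(id, obj);
        }
        return obj;
    }

    protected T CheckCache(int id)
    {
        T obj;
        cache.TryGetValue(id, out obj);
        return obj;
    }

    protected void SaveToCache(int id, T obj)
    {
        cache[id] = obj;
    }

    protected abstract T Read(int id); // each concrete type loads its own table
}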
What you want is an object factory. Make the ChildObject constructor private, then write a static method ChildObject.Create(int index) which returns a ChildObject, but which internally ensures that different calls with the same index return the same object. For simple cases, a simple static hash of index => object will be sufficient.
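For example, a minimal sketch of that factory (the dictionary and the private constructor are assumptions layered on top of the question's classes):
public class ChildObject : ABackendObject
{
    private static readonly Dictionary<int, ChildObject> instances = new Dictionary<int, ChildObject>();

    private ChildObject(int id) : base(id) { }

    public static ChildObject Create(int id)
    {
        ChildObject obj;
        if (!instances.TryGetValue(id, out obj))
        {
            obj = new ChildObject(id); // loads from the database via the base constructor
            instances[id] = obj;
        }
        return obj;
    }
}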
If you're using .NET Framework 4, you may want to have a look at the System.Runtime.Caching namespace, which gives you a pretty powerful cache architecture.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx
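For instance, a rough sketch using MemoryCache (the cache key format and the sliding expiration here are just assumptions):
using System;
using System.Runtime.Caching;

public static class ChildObjectCache
{
    private static readonly MemoryCache cache = MemoryCache.Default;

    public static ChildObject Get(int id)
    {
        string key = "ChildObject:" + id;
        var cached = cache.Get(key) as ChildObject;
        if (cached != null)
            return cached;

        var obj = new ChildObject(id); // load from the database
        cache.Set(key, obj, new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });
        return obj;
    }
}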
Sounds perfect for a reference count like this...
#region Begin/End Update
int refcount = 0;
ChildObject record;
protected ChildObject ActiveRecord
{
get
{
return record;
}
set
{
record = value;
}
}
public void BeginUpdate()
{
if (refcount == 0)
{
ActiveRecord = new ChildObject(1);
}
Interlocked.Increment(ref refcount);
}
public void EndUpdate()
{
int count = Interlocked.Decrement(ref refcount);
if (count == 0)
{
ActiveRecord.Save();
}
}
#endregion
#region operations
public void SomeFunction1()
{
BeginUpdate();
try
{
ActiveRecord.Property1 = "blah!";
}
finally
{
EndUpdate();
}
}
public void SomeFunction2()
{
BeginUpdate();
try
{
ActiveRecord.Property2 = "blah!";
}
finally
{
EndUpdate();
}
}
public void SomeFunction3()
{
BeginUpdate();
try
{
SomeFunction1();
SomeFunction2();
}
finally
{
EndUpdate();
}
}
#endregion
I think you're on the right track, more or less. You can either create a factory which creates your child objects (and can track "live" instances), or you can keep track of instances which have been saved, so that when you call your Save method it recognizes that your first instance of ChildObject is the same as your second instance of ChildObject and does a deep copy of the data from the second instance over to the first. Both of these are fairly non-trivial from a coding standpoint, and both probably involve overriding the equality methods on your entities. I tend to think that using the first approach would be less likely to cause errors.
One additional option would be to use an existing object-relational mapping (ORM) package like NHibernate or Entity Framework to do your mapping between objects and your database. I know NHibernate supports SQLite, and in my experience it tends to be the one that requires the least amount of change to your entity structures. Going that route, you get the benefit of the ORM layer tracking instances for you (and generating SQL for you), plus you would probably get some more advanced features your current data access code may not have. The downside is that these frameworks tend to have a learning curve, and depending on which one you go with, there could be a not insignificant impact on the rest of your code. So it would be worth weighing the benefits against the cost of learning the framework and converting your code to use its API.