Trying to find a correct implementation of EF6 - C#

Enclosed below is an example of one of the methods that I created.
Am I correctly implementing EF6? As you can see in my commented code, I first tried to create a repository class. I scrapped the repository class in favor of this implementation. However, I am now getting errors because I return an IQueryable object and then dispose the DbContext.
So this led me to the question: Am I implementing EF6 correctly?
I could change the IQueryable to return a List<obj>, or I could remove the using (_myContext) statement. I'm just trying to understand the correct way to implement my methods.
public IQueryable<MY_USERS> GetAllUsers()
{
    using (_myContext)
    {
        return _myContext.MY_USERS;
    }
    //MyRepository<MY_USERS> users = new MyRepository<MY_USERS>(_myContext);
    //return users.GetAll();
}
Updated example:
public void DeleteUser(string userName)
{
    using (var context = new MyEFConn())
    {
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                context.MY_USER_GROUPS.RemoveRange(GetUserGroups(userName));

                // Attach a stub entity so it is tracked before removal;
                // removing an untracked entity throws.
                var user = new MY_USERS { USER_NAME = userName };
                context.MY_USERS.Attach(user);
                context.MY_USERS.Remove(user);

                context.SaveChanges(); // nothing is sent to the database without this
                transaction.Commit();
            }
            catch (Exception)
            {
                transaction.Rollback();
                throw; // rethrow without resetting the stack trace
            }
        }
    }
}

There is no wrong or right; it just depends.
When you use IQueryable, it means you can keep common queries in your repository class and append additional filters outside of it:
GetAllUsers().Where(u => u.UserType == "Test").ToList()
You can also use Include, depending on your needs, say in an MVC controller:
GetAllUsers().Include(u => u.Roles).Take(10).ToList();
An important thing to note: EF doesn't connect to the database until you call ToList() or iterate over the query.
Lastly, as you mentioned in a comment, when you return IQueryable you always need to remember that the context could already be disposed, so this should be taken into consideration too.
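Deferred execution is easy to demonstrate outside EF with plain LINQ to Objects (a self-contained sketch; the names here are illustrative, not from the original code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExecutionDemo
{
    static void Main()
    {
        var users = new List<string> { "alice", "bob" };

        // No work happens here: the query is only a description of the filter.
        IEnumerable<string> query = users.Where(u => u.StartsWith("a"));

        // Mutating the source after building the query still affects the result,
        // because the filter runs only when the query is enumerated.
        users.Add("anna");

        List<string> result = query.ToList(); // execution happens here
        Console.WriteLine(string.Join(",", result)); // alice,anna
    }
}
```

With EF the same principle applies, except that enumeration also opens the connection and runs the SQL; if the context was disposed before that point, you get the disposed-context errors described in the question.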
On the other side, it can be a good option to return IEnumerable from repositories: if you want to load users, you could have a method that takes input parameters for paging, filtering, or other things. This is also useful for testing, because you will be able to mock the data.
About Delete: it always depends on your requirements. If you need to remove everything together or nothing at all, then you need to use transactions. The same applies to all the other CRUD operations.
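A sketch of the IEnumerable-returning, paged repository shape mentioned above, using an in-memory IQueryable stand-in so it runs without EF (User, GetUsersPage, and the field names are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class User
{
    public string UserName { get; set; }
    public string UserType { get; set; }
}

static class UserRepository
{
    // Stand-in for a DbSet<User>; with EF this would be _context.MY_USERS.
    static readonly IQueryable<User> Source = new List<User>
    {
        new User { UserName = "alice", UserType = "Test" },
        new User { UserName = "bob",   UserType = "Admin" },
        new User { UserName = "carol", UserType = "Test" },
    }.AsQueryable();

    // Materializes with ToList() before returning, so the caller never
    // touches a disposed context.
    public static IEnumerable<User> GetUsersPage(string userType, int page, int pageSize)
    {
        return Source
            .Where(u => u.UserType == userType)
            .OrderBy(u => u.UserName)
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();
    }
}

class Program
{
    static void Main()
    {
        foreach (var u in UserRepository.GetUsersPage("Test", 0, 10))
            Console.WriteLine(u.UserName);
    }
}
```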


Entity Framework 6 ability to reference nested objects on closed context

I have a class which loads all of the data I want on screen.
I am loading all the data within a using statement and returning the resultant records in a higher class.
I am able to loop through the objects, but any nested objects are unavailable and I get the error "The function evaluation requires all threads to run." when I try to inspect the objects.
The returned error to the web page is "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection."
Is there a way in EF6 for me to load all the objects and nested objects and make them available outside of the context?
The using statement will automatically dispose the object.
You can use Include while fetching the main entities to also fetch the related entities.
https://msdn.microsoft.com/en-us/data/jj574232.aspx
A DbContext isn't supposed to live for a long time. It's better to instantiate the context, copy all the data you need from it into some array/collection, and dispose it right after that. Then you can access your data through that array/collection.
Example:
In the Controller class:
Person[] people = Repo.GetAllPeople();
And in the Repository class you have something like:
public Person[] GetAllPeople()
{
    MyDbContext cont = null;
    try
    {
        cont = new MyDbContext();
        return cont.People.ToArray(); // materialize before the context is disposed
    }
    catch { return null; }
    finally { if (cont != null) cont.Dispose(); }
}
P.S.
And yes, a using statement is nothing else than:
try
{
    // ...instantiate some_resource that implements IDisposable
    // ...do something with this resource
}
finally { some_resource.Dispose(); }
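The equivalence can be checked with a minimal IDisposable (a self-contained sketch, not EF-specific):

```csharp
using System;

class Resource : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() { Disposed = true; }
}

class Program
{
    static void Main()
    {
        var r = new Resource();
        try
        {
            // ...do something with the resource
        }
        finally { r.Dispose(); } // this is what 'using (r) { ... }' expands to

        Console.WriteLine(r.Disposed); // True
    }
}
```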

WebApi Controller: Accessing database more than once, using Controller's own method?

I am writing a, let's say, NodeController for a project of mine. It is linked up to a database, and all of its methods take an ID for which Node to access in the EF database.
A node can have a dependency on another Node, meaning that, if I want to, say, "execute" one Node, I must check whether any dependency-node has already been executed or not, and fail, if not all dependencies have been executed.
I will present partial versions of two of my methods, where one uses the other. But first, here is our Controller's constructor, as it will be mentioned later on:
private readonly NodeDBContext _context;

public NodeController()
{
    _context = new NodeDBContext();
}
Is this a correct way to use a DbContext, or should we limit ourselves to using "using" statements whenever we perform something actually database-related (Getting or Putting, in our case)?
[ResponseType(typeof(bool))]
[HttpGet]
[Route("node/{id}/executed")]
public async Task<IHttpActionResult> Executed(int id)
{
    var node = await NodeControllerHelper.GetNode(_context, id);
    if (node == null) return BadRequest("There is no such Node");
    return Ok(node.Executed);
}
As you can see, the method above uses a "_context", which is a NodeDbContext, inheriting from DbContext. Is it correct for me to pass this field-ified DbContext to the method that will retrieve the Node object from the database?
Below is the method which will take use of both the "NodeControllerHelper.GetNode" method as well as the Executed(int id) method:
[HttpPut]
[Route("node/{id}/executed")]
public async Task<IHttpActionResult> Executed(int id, PutMessage message)
{
    //Interpret input
    var node = await NodeControllerHelper.GetNode(_context, id);
    if (node == null) return BadRequest("There is no such Node");
    foreach (var condition in node.Conditions)
    {
        bool executed = false;
        var response = await Executed(condition.Id).Result.ExecuteAsync(new CancellationToken(false));
        executed = JsonConvert.DeserializeObject<bool>(await response.Content.ReadAsStringAsync());
        if (!executed)
        { // Precondition has not been executed
            await NodeControllerHelper.UnlockNodeIfSelfLockedAndSave(_context, node);
            return BadRequest("One or more preconditions are not currently fulfilled.");
        }
    }
As can be seen, first the .GetNode call is made, meaning there is one call to the DbContext in there, and also, there is the "Executed(condition.Id)" call, which of course ends up calling the .GetNode method once more.
This is where we seem to have hit some sort of rock, as the PUT /executed method appears to fail on more than half of occasions, yet sometimes manages to squeeze by, beyond my understanding.
I have a more general question regarding how we can best ensure happiness on all ranks of our Controller code:
What is the best practice for managing our DbContext? Currently, it is instantiated in the constructor of the controller, set to a private readonly field, and passed down into "deeper" method calls. This seems to cause some trouble. When debugging, once we reach the second call of the .GetNode method and step past the call 'get'ting from the database's table, debugging mysteriously stops, some Exception is thrown, and the client we use to call the Api doesn't receive any HttpResponseMessage. An alternative is to have "using" statements which create their own short-lived DbContext; however, we tried this implementation earlier and it seemed to have the same issue at the second .GetNode call.
Below is the GetNode method, for reference:
public static async Task<Node> GetNode(NodeDBContext context, int id)
{
    var node = await context.Nodes.FindAsync(id);
    return node;
}
Well damn, this post ended up being longer than I had initially wanted to present, although code-cutting has occurred as well. I hope that somebody out there sees something that is wrongly done by us and can correct us in our perhaps slightly fundamental mistake of implementation...
Thank you a lot in advance,
Kirluu

Validate entities when adding to a navigation property

I have an entity with a collection property that looks something like this:
public class MyEntity
{
    public virtual ICollection<OtherEntity> Others { get; set; }
}
When I retrieve this entity via the data context or repository, I want to prevent others from adding items to this collection through MyEntity.Others.Add(entity), because I may want some validation code to run before the entity is added to the collection. I'd do this by providing a method on MyEntity like this:
public void AddOther(OtherEntity other)
{
    // perform validation code here
    this.Others.Add(other);
}
I've tested a few things so far, and what I've eventually arrived at is something like this. I create a private collection on my entity and expose a public ReadOnlyCollection<T> so MyEntity looks like this:
public class MyEntity
{
    private readonly ICollection<OtherEntity> _others = new Collection<OtherEntity>();

    public virtual IEnumerable<OtherEntity> Others
    {
        get
        {
            return _others.AsEnumerable();
        }
    }
}
This seems to be what I'm looking for and my unit tests pass fine, but I haven't yet started to do any integration testing so I'm wondering:
Is there a better way to achieve what I'm looking for?
What are the implications I'll face if I decide to go down this route (if feasible)?
Thanks always for any and all help.
Edit 1: I've changed from using a ReadOnlyCollection to an IEnumerable and am using return _others.AsEnumerable(); as my getter. Again, unit tests pass fine, but I'm unsure of the problems I'll face during integration when EF starts building these collections with related entities.
Edit 2: So, I decided to try out the suggestion of creating a derived collection (call it ValidatableCollection) implementing ICollection, where my .Add() method would perform validation on the entity before adding it to the internal collection. Unfortunately, Entity Framework invokes this method when building the navigation property, so it's not really suitable.
I would create a collection class exactly for this purpose:
public class OtherEntityCollection : Collection<OtherEntity>
{
    protected override void InsertItem(int index, OtherEntity item)
    {
        // do your validation here
        base.InsertItem(index, item);
    }

    // other overrides
}
This will make it much more rigid, because there will be no way to bypass the validation. You can check a more complex example in the documentation.
One thing I'm not sure about is how to make EF create this concrete type when it materializes data from the database. But it is probably doable, as seen here.
Edit:
If you want to keep the validation inside the entity, you could make it generic through a custom interface that the entity would implement, and your generic collection would call this interface.
As for problems with EF, I think the biggest problem is that when EF rematerializes the collection, it calls Add for each item. This then triggers the validation even when the item is not "added" as a business operation but as infrastructure behavior. This might result in weird behavior and bugs.
I suggest returning to ReadOnlyCollection<T>. I've used it in similar scenarios in the past, and I've had no problems.
Additionally, the AsEnumerable() approach will not work, as it only changes the type of the reference; it does not create a new, independent object, which means that this
MyEntity m = new MyEntity();
Console.WriteLine(m.Others.Count()); // 0
(m.Others as Collection<OtherEntity>).Add(new OtherEntity { ID = 1 });
Console.WriteLine(m.Others.Count()); // 1
will successfully insert into your private collection.
You shouldn't use AsEnumerable() on the HashSet, because the collection can easily be modified by casting it to ICollection<OtherEntity>:
var values = new MyEntity().Others;
((ICollection<OtherEntity>)values).Add(new OtherEntity());
Try returning a copy of the list instead, like:
return new ReadOnlyCollection<OtherEntity>(_others.ToList()).AsEnumerable();
This makes sure that users will receive an exception if they try to modify it. You can expose ReadOnlyCollection as the return type instead of IEnumerable for clarity and the convenience of users. In .NET 4.5 a new interface was added: IReadOnlyCollection<T>.
You won't have big integration issues unless some component depends on list mutation. If users call ToList or ToArray, they will get a copy.
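The difference in behavior can be checked directly: ReadOnlyCollection<T> still implements ICollection<T>, so the cast compiles, but Add throws instead of silently mutating the backing list (a self-contained sketch using string in place of OtherEntity):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

class MyEntity
{
    private readonly List<string> _others = new List<string> { "a" };

    // Wrapping a copy: casting the result back to ICollection<string>
    // and calling Add throws NotSupportedException.
    public IEnumerable<string> Others
    {
        get { return new ReadOnlyCollection<string>(_others.ToList()); }
    }
}

class Program
{
    static void Main()
    {
        var entity = new MyEntity();
        var asCollection = (ICollection<string>)entity.Others; // cast succeeds
        try
        {
            asCollection.Add("b");
            Console.WriteLine("added");
        }
        catch (NotSupportedException)
        {
            Console.WriteLine("NotSupportedException"); // the wrapper is read-only
        }
    }
}
```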
You have two options here:
1) The way you are currently using: expose the collection as a ReadOnlyCollection<OtherEntity> and add methods in the MyEntity class to modify that collection. This is perfectly fine, but take into account that you are adding the validation logic for a collection of OtherEntity in a class that just uses that collection, so if you use collections of OtherEntity elsewhere in the project, you will probably need to replicate the validation code, and that's a code smell (DRY) :P
2) To solve that, the best way is to create a custom OtherEntityCollection class implementing ICollection<OtherEntity> so you can add the validation logic there. It's really simple, because your OtherEntityCollection object can contain a List<OtherEntity> instance which really implements the collection operations, so you just need to validate the insertions:
Edit: If you need custom validation for multiple entities, you should create a custom collection which receives some other object that performs the validation. I've modified the example below, but it shouldn't be difficult to create a generic class:
class OtherEntityCollection : ICollection<OtherEntity>
{
    public OtherEntityCollection(Predicate<OtherEntity> validator)
    {
        _validator = validator;
    }

    private List<OtherEntity> _list = new List<OtherEntity>();
    private Predicate<OtherEntity> _validator;

    public void Add(OtherEntity entity)
    {
        // Validation logic
        if (_validator(entity))
            _list.Add(entity);
    }

    // remaining ICollection<OtherEntity> members delegate to _list
}
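A generic version of the same idea (a sketch; the class and member names are mine, and the remaining ICollection<T> members simply delegate to the inner list):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class ValidatingCollection<T> : ICollection<T>
{
    private readonly List<T> _list = new List<T>();
    private readonly Predicate<T> _validator;

    public ValidatingCollection(Predicate<T> validator) { _validator = validator; }

    // The only interesting member: validate before inserting.
    public void Add(T item)
    {
        if (!_validator(item))
            throw new ArgumentException("Item failed validation.");
        _list.Add(item);
    }

    // Everything else delegates to the inner list.
    public int Count { get { return _list.Count; } }
    public bool IsReadOnly { get { return false; } }
    public void Clear() { _list.Clear(); }
    public bool Contains(T item) { return _list.Contains(item); }
    public void CopyTo(T[] array, int index) { _list.CopyTo(array, index); }
    public bool Remove(T item) { return _list.Remove(item); }
    public IEnumerator<T> GetEnumerator() { return _list.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class Program
{
    static void Main()
    {
        var numbers = new ValidatingCollection<int>(n => n > 0);
        numbers.Add(5);
        try { numbers.Add(-1); }
        catch (ArgumentException) { Console.WriteLine("rejected"); }
        Console.WriteLine(numbers.Count); // 1
    }
}
```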
EF can't map a property without a setter, and even a private set { } requires some configuration. Keep your models as POCOs, plain old classes like DTOs.
The common approach is to create a separate service layer that contains the validation logic against your model before saving.
For example:
public void AddOtherToMyEntity(MyEntity myEntity, OtherEntity otherEntity)
{
    if (myService.Validate(otherEntity))
    {
        myEntity.Others.Add(otherEntity);
    }
    //else ...
}
P.S. You can make the compiler prevent some things, but you can't stop other coders. Just make your code say explicitly: "don't modify the entity collection directly until it has passed validation".
I finally have a suitable working solution; here's what I did. I'll change MyEntity and OtherEntity to something more readable, like Teacher and Student, where I want to stop a teacher from teaching more students than they can handle.
First, I created an interface for all entities that I intend to validate in this way called IValidatableEntity that looks like this:
public interface IValidatableEntity
{
    void Validate();
}
Then I implement this interface on Student, because that is the entity being validated when adding to the Teacher's collection.
public class Student : IValidatableEntity
{
    public virtual Teacher Teacher { get; set; }

    public void Validate()
    {
        if (this.Teacher.Students.Count() > this.Teacher.MaxStudents)
        {
            throw new CustomException("Too many students!");
        }
    }
}
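The rule itself can be exercised outside EF (a self-contained sketch of the same shape, with InvalidOperationException standing in for CustomException):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Teacher
{
    public int MaxStudents { get; set; }
    public List<Student> Students { get; set; }
    public Teacher() { Students = new List<Student>(); }
}

class Student
{
    public Teacher Teacher { get; set; }

    public void Validate()
    {
        // Same check as the entity's Validate(): over capacity is an error.
        if (this.Teacher.Students.Count() > this.Teacher.MaxStudents)
            throw new InvalidOperationException("Too many students!");
    }
}

class Program
{
    static void Main()
    {
        var teacher = new Teacher { MaxStudents = 1 };

        var first = new Student { Teacher = teacher };
        teacher.Students.Add(first);
        first.Validate(); // 1 <= 1, passes

        var second = new Student { Teacher = teacher };
        teacher.Students.Add(second);
        try { second.Validate(); } // 2 > 1, fails
        catch (InvalidOperationException) { Console.WriteLine("validation failed"); }
    }
}
```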
Now onto how I invoke Validate. I override .SaveChanges() on my entity context to get a list of all added entities, and for each one I invoke Validate; if it fails, I simply set its state to detached to prevent it from being added to the collection. Because I'm using exceptions (something I'm still unsure of at this point) as my error messages, I rethrow them to preserve the stack trace.
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries())
    {
        if (entry.State == EntityState.Added)
        {
            if (entry.Entity is IValidatableEntity)
            {
                try
                {
                    (entry.Entity as IValidatableEntity).Validate();
                }
                catch
                {
                    entry.State = EntityState.Detached;
                    throw; // preserve the stack trace
                }
            }
        }
    }
    return base.SaveChanges();
}
This means I keep my validation code nicely tucked away within my entity which will make my life a whole lot easier when mocking my POCOs during unit testing.

Linq to SQL Repository ~theory~ - Generic but now uses Linq to Objects?

The project I am currently working on uses LINQ to SQL as its ORM data access technology. It's an MVC3 web app. The problem I faced was primarily due to the inability to mock (for testing) the DataContext which gets auto-generated by the DBML designer.
So to solve this issue (after much reading) I refactored the repository system that was in place, a single repository with separate and duplicated access methods for each table (which ended up with something like 300 methods, only 10 of which were unique), into a single repository with generic methods taking the table and returning more generic types to the upper reaches of the application. The DataContext is now wrapped and easily mocked.
[Edit: To achieve this I used the link provided by Jacob below, coincidentally!]
My question revolves more around the design I've used to get thus far and the differences I'm noticing in the structure of the app.
1) Having refactored the code which used classic Linq to SQL queries:
public Billing GetBilling(int id)
{
    var result = (
        from bil in _bicDc.Billings
        where bil.BillingId == id
        select bil).SingleOrDefault();
    return (result);
}
it now looks like:
public T GetRecordWhere<T>(Expression<Func<T, bool>> predicate) where T : class
{
    T result;
    try
    {
        result = _dataContext.GetTable<T>().Where(predicate).SingleOrDefault();
    }
    catch (Exception ex)
    {
        throw ex;
    }
    return result;
}
and is used by the controller with a query along the lines of:
_repository.GetRecordWhere<Billing>(x => x.BillingId == 1);
which is fine, and precisely what I wanted to achieve.
...however... I'm also having to do the following to get precisely the result set I require in the controller class (the highest point of the app, in essence)...
viewModel.RecentRequests = _model.GetAllRecordsWhere<Billing>(x => x.BillingId == 1)
    .Where(x => x.BillingId == Convert.ToInt32(BillingType.Submitted))
    .OrderByDescending(x => x.DateCreated)
    .Take(5).ToList();
This, as far as my understanding goes, is now using LINQ to Objects rather than the LINQ to SQL queries I was using previously? Is this okay practice? It feels wrong to me, but I don't know why. Probably because the logic of the queries is in the very highest tier of the app rather than the lowest, but... I defer to you good people for advice. One of the issues I considered was bringing the entire table into memory, but I understand that with the IQueryable return type the where clause is taken to the database and evaluated there, thus returning only the result set I require... I may be wrong.
And if you've made it this far, well done. Thank you, and if you have any advice it is very much appreciated!!
Update: Inclusion of GetAllRecordsWhere method as requested
public IQueryable<T> GetAllRecordsWhere<T>(Expression<Func<T, bool>> predicate) where T : class
{
    return _dataContext.GetTable<T>().Where(predicate);
}
which uses:
public IQueryable<TName> GetTable<TName>() where TName : class
{
    return _db.GetTable<TName>().AsQueryable();
}
If _model.GetAllRecordsWhere returns an IQueryable, then your subsequent querying is still just building up an expression tree (which is what I think you mean by "using LINQ to SQL"); it only gets turned into SQL and executed when you enumerate the collection by iterating over it or calling ToList() or ToArray().
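This can be seen with an in-memory IQueryable (a self-contained sketch): composing Where/OrderBy/Take only grows the expression tree, and nothing executes until the query is enumerated.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        IQueryable<int> source = new List<int> { 5, 3, 1, 4, 2 }.AsQueryable();

        // Each call just wraps the previous expression; no filtering happens yet.
        IQueryable<int> query = source.Where(n => n > 1).OrderBy(n => n).Take(2);

        // The composed operators are visible in the expression tree.
        Console.WriteLine(query.Expression.ToString().Contains("Take")); // True

        // Enumeration (ToList) is the point where the tree is executed;
        // with LINQ to SQL this is where it would be translated into SQL.
        Console.WriteLine(string.Join(",", query.ToList())); // 2,3
    }
}
```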
As an aside, don't do this:
catch (Exception ex)
{
    throw ex;
}
All you are doing is swallowing the stack trace. If you want to rethrow an exception, just call throw; never throw ex. And if you don't do anything in your catch other than rethrow, then don't catch at all. The normal pattern for this would be: catch, do some logging, rethrow.
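The difference can be observed directly (a self-contained sketch; NoInlining keeps the original frame visible in the trace):

```csharp
using System;
using System.Runtime.CompilerServices;

class Program
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void Thrower() { throw new InvalidOperationException("boom"); }

    static void RethrowGood()
    {
        try { Thrower(); }
        catch (Exception) { throw; } // keeps the original stack trace
    }

    static void RethrowBad()
    {
        try { Thrower(); }
        catch (Exception ex) { throw ex; } // resets the trace to this line
    }

    static void Main()
    {
        try { RethrowGood(); }
        catch (Exception e) { Console.WriteLine(e.StackTrace.Contains("Thrower")); } // True

        try { RethrowBad(); }
        catch (Exception e) { Console.WriteLine(e.StackTrace.Contains("Thrower")); } // False
    }
}
```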
If you want to mock database context see this:
http://andrewtokeley.net/archive/2008/07/06/mocking-linq-to-sql-datacontext.aspx
Here is a good article that explains how to mock out your DataContext:
Faking your LINQ provider part 1

Data Access Layer - LINQ-To-SQL and generics. Can I optimize this?

We are working on improving our DAL, which is written in LINQ and talks to the MS SQL database. Our goal is to achieve good re-usability with as little code as possible.
The LINQ-generated files make use of generics and reflection to map the LINQ-generated classes to the SQL objects (tables and views in our case).
Please see the example of the existing accessor below. This method resides in the partial class that contains custom constructors, accessors, and mutators:
public clsDVD getDVD(int dvdId)
{
    try
    {
        using (DataContext dvdDC = new DataContext(ConnectionStringManager.getLiveConnStr()))
        {
            // Deferred loading
            dvdDC.DeferredLoadingEnabled = false;
            var tDVD = dvdDC.GetTable<DVD>();
            return (from t in tDVD
                    // Filter on DVD Id
                    where t.DVDId == dvdId
                    select t).Single();
        }
    }
    catch (Exception e)
    {
        Logger.Log("Can't get requested DVD.", e);
        throw;
    }
}
I believe that this is very easy to maintain, since most of the work is done after var tDVD.
It has been suggested not to declare tDVD at all and to use dataContext.TableName instead, but behind the scenes that still calls GetTable<>.
The only way I can see of improving this is breaking this one partial class into 4 (CRUD) partial classes, e.g.
clsDVD_Select, clsDVD_Update, clsDVD_Insert, clsDVD_Delete
In this case each class would represent a set of behaviours.
The idea we are discussing is to see whether it's possible to use generics on top of LINQ's generics.
For example, instead of having the partial classes, we would figure out the properties of the class on the fly by using reflection against the SQL database. My first concern here is the performance impact: how significant will it be?
Instead of ClsDVD.getDVD(1231) we'd have something along the lines of: GenericDC.Select<DVD>(1231)
The .Select method would figure out the primary key and run a select query on that table. I'm struggling to understand how this can work. Let's say we can get this to work for the simple select, i.e. a select with a filter on the primary key, but what is going to happen when we start doing complex joins and group-by selects? What happens when we want to have multiple selects per DVD class?
My final concern is to do with good practices. I have been told before that it's good to have consistent code. For example, if I decide to use datatables, then I should stick to datatables throughout the project. It's a bad idea to have half of the project with datatables and the other half with user-defined classes. Do you agree with this?
I'm in a position where I think the existing implementation is quite good, but maybe I'm missing something very obvious and there is a much easier, more OO way of achieving the same results?
Thank you
Here is one way to make this situation a little more generic. Rinse and repeat for the other CRUD operations. For some situations the performance may be unacceptable; in those cases I would restructure that part of the program to call a non-generic version.
public T GetSingleItem<T>(Expression<Func<T, bool>> idSelector) where T : class // GetTable<T> requires a reference type
{
    try
    {
        using (DataContext context = new DataContext(ConnectionStringManager.getLiveConnStr()))
        {
            context.DeferredLoadingEnabled = false;
            // Expression<Func<...>> (not Func<...>) so the predicate is
            // translated to SQL instead of filtering in memory.
            return context.GetTable<T>().Single(idSelector);
        }
    }
    catch (Exception e)
    {
        Logger.Log("Can't get requested item.", e);
        throw;
    }
}
This is how you would have to get the item. Not quite as elegant, because you have to tell the generic function which column you are going to use.
GenericDC.GetSingleItem<DVD>(dvd => dvd.ID == 1231)
To make this even more generic than limiting it to a single item with an ID...
public IEnumerable<T> GetItems<T>(Expression<Func<T, bool>> selectFunction) where T : class // GetTable<T> requires a reference type
{
    try
    {
        using (DataContext context = new DataContext(ConnectionStringManager.getLiveConnStr()))
        {
            context.DeferredLoadingEnabled = false;
            // Filter with Where (not Select) and materialize with ToList()
            // before the context is disposed.
            return context.GetTable<T>().Where(selectFunction).ToList();
        }
    }
    catch (Exception e)
    {
        Logger.Log("Can't get requested item.", e);
        throw;
    }
}
Then you can call it like:
GenericDC.GetItems<DVD>(dvd => dvd.Title == "Title" && dvd.Cast.Contains("Actor"));
Another possible solution would be to create a custom code generator that you could modify in one place to create similar routines for all the other types. This would probably be a good solution if you are running into performance problems. You would want to limit the changes to the template piece of code that you use.
