Testing Data Model in project that uses NHibernate for persistence - c#

I'm still new with NHibernate, so correct me if I'm wrong on any of this.
When you are using NHibernate to build collections of objects from database records, NH takes care of instantiating your collections and populating them.
How do you instantiate your ISet collections when you are writing tests that don't actually use NH?

You can initialize the field in the constructor or directly at its declaration. Classes mapped with NHibernate can stay persistence ignorant.
public class MyEntity
{
    private readonly ISet&lt;ChildEntity&gt; children;

    public MyEntity()
    {
        // Works with or without NHibernate; NH replaces the set with its
        // own implementation when it hydrates the entity from the database.
        children = new HashedSet&lt;ChildEntity&gt;();
    }

    public IEnumerable&lt;ChildEntity&gt; Children
    {
        get { return children; }
    }

    public void AddChild(ChildEntity child)
    {
        children.Add(child);
    }
}

Assuming you're testing something other than the data layer itself (like your domain logic, for example), you can simply create the objects yourself and populate them with test case data. Or you could be really sneaky and create a database with all your test cases.
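For example, a plain unit test can exercise such an entity with no ISession or mappings involved. A self-contained sketch (the classes here are stand-ins mirroring the entity above; in a real suite you would reference the production types, and the BCL's HashSet&lt;T&gt; stands in for Iesi's HashedSet, which newer NHibernate versions accept via System.Collections.Generic.ISet&lt;T&gt;):

```csharp
using System.Collections.Generic;
using System.Linq;

// Stand-ins mirroring the entity above, for illustration only.
public class ChildEntity { }

public class MyEntity
{
    private readonly ISet<ChildEntity> children = new HashSet<ChildEntity>();
    public IEnumerable<ChildEntity> Children { get { return children; } }
    public void AddChild(ChildEntity child) { children.Add(child); }
}

public static class MyEntityTests
{
    public static void AddChild_AppearsInChildren()
    {
        // Plain constructor call -- NHibernate never enters the picture.
        var entity = new MyEntity();
        var child = new ChildEntity();

        entity.AddChild(child);

        System.Diagnostics.Debug.Assert(entity.Children.Single() == child);
    }
}
```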


Autofac Dependency resolution in constructor vs resolution in the method of usage

Before I begin my question, here is a background of my application:
Application: Web Application using ASP.NET MVC and C#
ORM: Entity Framework
Dependency injection container: Autofac
Consider I have a class that only serves as a lookup class for database tables. My database contains about 350 tables. The following is a simplified example that shows exactly the structure in my application.
(if you think you've got the scenario, you can directly jump to the Question at the bottom.)
Here is the lookup class
public class Lookup
{
    private readonly IRepository&lt;Table001&gt; _table001Repository;
    // repositories Table002 to Table349
    private readonly IRepository&lt;Table350&gt; _table350Repository;

    public Lookup(
        IRepository&lt;Table001&gt; table001Repository,
        // parameters table002Repository to table349Repository
        IRepository&lt;Table350&gt; table350Repository)
    {
        _table001Repository = table001Repository;
        // assignments for table002Repository to table349Repository
        _table350Repository = table350Repository;
    }

    public Table001 GetTable001(/* fields for looking up Table001 */)
    {
        // lookup logic using the repository for Table001, namely _table001Repository
    }

    // other lookup methods for Table002 to Table349

    public Table350 GetTable350(/* fields for looking up Table350 */)
    {
        // lookup logic using the repository for Table350, namely _table350Repository
    }
}
Now, an example class that uses this Lookup class:
public class MyClass
{
    private readonly ILookup _lookup;

    public MyClass(ILookup lookup)
    {
        _lookup = lookup;
    }

    public void Method()
    {
        Table001 table001 = _lookup.GetTable001(/* fields */);
        Table002 table002 = _lookup.GetTable002(/* fields */);
        Table003 table003 = _lookup.GetTable003(/* fields */);
        ...
    }
}
Registrations to Autofac are done this way:
// Lookup class registration
builder.RegisterType<Lookup>().As<ILookup>().InstancePerLifetimeScope();
// generic registration for Repositories
builder.RegisterGeneric(typeof(Repository<>)).As(typeof(IRepository<>)).InstancePerLifetimeScope();
Question:
As you can see, the class that actually makes use of the Lookup class uses only three of its 350 lookup methods. But when the Lookup class is instantiated for MyClass (the _lookup field), all 350 repositories are resolved and instantiated along with it. My concern is performance/efficiency. Which of the following two options is better performance-wise, and why?
1. Resolving dependencies as is done now, via auto-injection in the constructor.
2. Resolving each repository only in the method that actually uses it, instead of injecting it in the constructor. For example, GetTable001() would become the following (the field _table001Repository would be removed from the Lookup class, as would the table001Repository parameter in its constructor):
public Table001 GetTable001(/* fields for looking up Table001 */)
{
    IRepository&lt;Table001&gt; table001Repository = container.Resolve&lt;IRepository&lt;Table001&gt;&gt;();
    // lookup logic
}
In option 1, all 350 repositories are instantiated while resolving _lookup, even though, for example, the class MyClass uses only three lookup methods. Is the second approach any better than the first? Or, is the first option actually better than the second performance-wise? Also, does the scope of resolution (InstancePerLifetimeScope, InstancePerDependency, etc.) make any difference?
Thanks.
PS: please comment if you think this question should better be posted in SoftwareEngineering. The reason of asking it here is the broad audience SO has.
There are a few different things to consider here:
First, is anything else using the ILookup instance? If so, you already have an instance of those 350 repositories created, so it doesn't really matter in that respect.
Second, non-constructor resolution of dependencies (e.g. via a service locator) is considered an anti-pattern (http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/), so I would avoid it if possible. The biggest reason (as outlined in the article) is that it hides a class's dependencies from its consumers.
Third, is there any reason you cannot create a new service that takes only the necessary repositories as dependencies? If you can create an IMyClassLookup service, implementing ILookup, that has only the repositories you actually need, that's definitely the easiest and cleanest way to go. Then you would just register the new service and use it in your constructor:
New Interface
public interface IMyClassLookup : ILookup
{
}
New Class
public class MyClassLookup : IMyClassLookup
{
    private readonly IWhateverRepository _whateverRepository;

    public MyClassLookup(IWhateverRepository whateverRepository)
    {
        _whateverRepository = whateverRepository;
    }

    // IMyClassLookup implementations here that use _whateverRepository
}
Dependency registration
builder.RegisterType<MyClassLookup>().As<IMyClassLookup>().InstancePerLifetimeScope();
Your class
public class MyClass
{
    private readonly IMyClassLookup _myClassLookup;

    public MyClass(IMyClassLookup myClassLookup)
    {
        _myClassLookup = myClassLookup;
    }

    public void Method()
    {
        Table001 table001 = _myClassLookup.GetTable001(/* fields */);
        Table002 table002 = _myClassLookup.GetTable002(/* fields */);
        Table003 table003 = _myClassLookup.GetTable003(/* fields */);
        ...
    }
}
It seems like the Lookup class itself is an instance of the Service Locator pattern:
IRepository<Table001> table001Repository = container.Resolve<IRepository<Table001>>();
Service Locator has a few problems. One of them is that it doesn't follow the "Don't call us, we'll call you" principle: it directly asks for the things it needs rather than having them handed to it.
With the service locator, we have to examine the code, searching for capricious calls that retrieve a required service. Constructor injection allowed us to view dependencies—all of them—with a glance at the constructor, or at a distance, via IntelliSense.
Solution
Ideally, you want to inject IRepository&lt;Table001&gt; into MyClass. I use a similar approach in my own projects.
public class MyClass
{
    private readonly IRepository&lt;Table001&gt; _table001;

    public MyClass(IRepository&lt;Table001&gt; table001)
    {
        _table001 = table001;
    }
}
If you see yourself injecting a lot of dependencies into a single class, it might be that the class violates Single Responsibility Principle. You might want to separate it to different classes.
"creating an object instance is something the .NET Framework does extremely fast. Any performance bottleneck your application may have
will appear in other places." - Mark Seemann author of Dependency
Injection in .Net
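If the construction cost ever did become measurable, Autofac's implicit relationship types offer a middle ground that still avoids a service locator: take a dependency on Lazy&lt;IRepository&lt;T&gt;&gt; and the repository is not constructed until first use, while the dependency remains visible in the constructor. A sketch with stand-in types (the GetById method and Id field are made up for illustration; Autofac resolves Lazy&lt;T&gt; automatically whenever T is registered, with no extra registration needed):

```csharp
using System;

// Minimal stand-ins for the question's types, for illustration only.
public class Table001 { public int Id; }
public interface IRepository<T> { T GetById(int id); }

public class Lookup
{
    // Autofac injects a Lazy<T> whose underlying repository is not
    // constructed until .Value is first touched.
    private readonly Lazy<IRepository<Table001>> _table001Repository;

    public Lookup(Lazy<IRepository<Table001>> table001Repository)
    {
        _table001Repository = table001Repository;
    }

    public Table001 GetTable001(int id)
    {
        // Instantiation of the repository happens here, on first use.
        return _table001Repository.Value.GetById(id);
    }
}
```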

Update derived entity with One-to-Many children

I have these POCO classes, they're mapped using Fluent API with a TPT (Table per Type) strategy.
public class Base
{
    ...
}

public class Derived : Base
{
    ...
    public virtual ICollection&lt;Foo&gt; Foos { get; set; } // One-to-Many
}

public class Foo
{
    ...
    public virtual ICollection&lt;Bar&gt; Bars { get; set; } // One-to-Many
}

public class Bar
{
    ...
}
My repository looks like this.
public class Repo
{
    public void Update(Base item)
    {
        using (var ctx = new DbContext())
        {
            ctx.Entry(item).State = EntityState.Modified;
            ctx.SaveChanges();
        }
    }
}
Action:
public void DoStuff()
{
    Derived item = repo.GetById(1);
    item.SomeProp = "xyz";              // new value
    item.Foos = GenerateFoosWithBars(); // change children
    repo.Update(item);
}
To my surprise, Update actually works if I'm only updating the Base or Derived classes. However, things turn ugly when I try to update the one-to-many relations. I found a tutorial on how to update one-to-many entities in EF4. I was really expecting EF to be way smarter than this; I mean, I have to do it manually... that's so unlike everything else in EF.
So I started out trying to use Entry because I wanted it to be generic (able to update any Base-derived class), using Entry.OriginalValues to avoid having to write a query myself. But now shit really hits the fan! Entry.OriginalValues fails with an exception saying that DbSet&lt;Derived&gt; doesn't exist. It's totally right, it doesn't. But it shouldn't need to, as Derived is mapped to DbSet&lt;Base&gt; via inheritance.
Clearly I must be doing something wrong, or something so different from everyone else, as I'm unable to find anything useful on the matter. Hasn't EF5 improved on this in any way?
Any suggestions on how I could approach this problem?
Firstly, I think an Update method is not necessary in the repository, since EF tracks changes and applies them when you call SaveChanges() on the context.
Secondly, the problem might be that you're assigning a new collection to the Foos property when you do item.Foos = GenerateFoosWithBars();. You shouldn't do that: when EF materializes an object of the Derived type, it actually returns a proxy that overrides the virtual Foos collection with a special kind of lazy-loaded collection that it tracks. If you assign a different collection of your own, it will not be bound to the context (I don't think EF will handle that very well). What you should do is modify the collection's items, not the collection itself. Hope it helps!
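The mutate-don't-replace idea can be captured in a small helper. This is only a sketch (the class and method names are made up), and note that with EF, clearing a required one-to-many may additionally require deleting the orphaned children:

```csharp
using System.Collections.Generic;

public static class CollectionSync
{
    // Mutate the existing (potentially EF-tracked) collection instead of
    // assigning a new one, so the change tracker sees each add and remove.
    public static void ReplaceContents<T>(ICollection<T> tracked, IEnumerable<T> newItems)
    {
        tracked.Clear();
        foreach (T item in newItems)
        {
            tracked.Add(item);
        }
    }
}
```

DoStuff would then call CollectionSync.ReplaceContents(item.Foos, GenerateFoosWithBars()); rather than assigning to item.Foos.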

One DataContext For Linq To SQL While Using Repository Pattern

I am fairly new to .NET C# development and recently started using LINQ to SQL for the majority of my data access layer. Unfortunately, I have been struggling with how to manage my DataContext so that I don't get the dreaded exceptions that result from entities either not being attached to a context or attempting to attach an entity to one context while it is attached to another. After much reading, I believe that the best solution for my application is to leave the DataContext open for the entire duration of my application.
In short, I am using the Repository pattern to implement the basic CRUD operations (i.e. Create, Read, Update, and Delete) for all entities managed by my context. My implementation of the Repository is included below. In addition to the Repository, I have DAOs (Data Access Objects) for each entity with more specific data access methods (e.g. CustomerDAO.getCustomerByName(string name)). Each of my Windows Forms has its own instance of one or more DAOs (which extend Repository), and the DataContext in my Repository is static. The problem I am running into is that even though the DataContext in the repository class below is declared static, each distinct DAO actually gets a different instance of the DataContext. For example, if I have 8 references to the CustomerDAO, they all share the same DataContext. But if I create one WorkOrderDAO, I get another instance of the DataContext, and all future instances of WorkOrderDAO get this same DataContext. More specifically, I discovered this in the following scenario:
1) Use an instance of WorkOrderDAO to load all WorkOrders into ListView - Has one DataContext
2) Use an instance of WorkOrderJobsDAO to attempt to delete one of the jobs on the WorkOrder. This is a collection on the WorkOrder. Has a different DataContext so I can't attach
Is this a problem with how I have implemented the Repository below? The only thing I can think of to solve this issue is to create a singleton that the Repository uses to get its DataContext. Can anyone recommend how I should manage the context?
public class Repository&lt;T&gt; : IRepository&lt;T&gt;
    where T : class
{
    private static NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
    protected static DomainClassesDataContext db = new DomainClassesDataContext();
    private static bool dataContextOptionsInitialized = false;

    public Repository()
    {
        if (!dataContextOptionsInitialized)
        {
            db.DeferredLoadingEnabled = true;
            dataContextOptionsInitialized = true;
        }
    }

    public void AddEntity(T entity)
    {
        GetTable.InsertOnSubmit(entity);
        SaveAll();
    }

    public void DeleteEntity(T entity, bool attach)
    {
        if (attach)
            GetTable.Attach(entity);
        GetTable.DeleteOnSubmit(entity);
        SaveAll();
    }

    public void UpdateEntity(T entity, bool attach)
    {
        if (attach)
            GetTable.Attach(entity, true);
        SaveAll();
    }

    public System.Data.Linq.Table&lt;T&gt; GetTable
    {
        get { return db.GetTable&lt;T&gt;(); }
    }

    public IEnumerable&lt;T&gt; All()
    {
        return GetTable;
    }

    public void SaveAll()
    {
        db.SubmitChanges();
    }
}
In C#, each closed constructed generic type (e.g. Repository&lt;Customer&gt; vs. Repository&lt;WorkOrder&gt;) gets its own copy of any static fields, which is why your DAOs end up with different DataContext instances. So yes, you probably want to keep the context somewhere else, such as in a singleton class.
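A sketch of that singleton follows. The stub class stands in for the generated DomainClassesDataContext, and note that one application-wide DataContext is not thread-safe and accumulates tracked objects over time, so this only addresses the "one shared instance" requirement from the question:

```csharp
// Stand-in for the generated LINQ to SQL context from the question.
public class DomainClassesDataContext
{
    public bool DeferredLoadingEnabled { get; set; }
}

public static class DataContextProvider
{
    private static readonly object SyncRoot = new object();
    private static DomainClassesDataContext _context;

    // Every Repository<T>, regardless of T, calls through here and
    // receives the same instance -- unlike a static field, which is
    // duplicated per closed generic type.
    public static DomainClassesDataContext Current
    {
        get
        {
            lock (SyncRoot)
            {
                if (_context == null)
                {
                    _context = new DomainClassesDataContext();
                    _context.DeferredLoadingEnabled = true;
                }
                return _context;
            }
        }
    }
}
```

Repository&lt;T&gt; would then read its db from DataContextProvider.Current instead of declaring its own static field.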

Performance effects with instances of entities in EntityFramework

What behavior should I use for a provider pattern with entity framework?
public class TestProvider : IDisposable
{
    public Entities entities = new Entities();

    public IEnumerable&lt;Tag&gt; GetAll()
    {
        return entities.Tag.ToList();
    }

    public ...

    #region IDisposable Members
    public void Dispose()
    {
        entities.Dispose();
    }
    #endregion
}
Or is it ok to use:
public class TestProvider
{
    public IEnumerable&lt;Tag&gt; GetAll()
    {
        using (var entities = new Entities())
        {
            return entities.Tag.ToList();
        }
    }

    public ...
}
Does it have performance implications? What are the pros and cons of each?
It depends on how long the TestProvider should exist and what operations you want to perform on the retrieved entities. Generally, an ObjectContext instance should live for as short a time as possible, but it should also represent a single unit of work. An ObjectContext instance should not be shared. I answered a related question here.
This means both of your approaches are correct for some scenarios. The first approach is fine if you expect to retrieve entities, modify them, and then save them with the same provider instance. The second approach is fine if you just want to retrieve entities, don't need to modify them immediately, and don't need to select anything else.
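Under the second style, each public operation forms its own unit of work: load, modify, and save against one short-lived context. A self-contained sketch (the Entities and Tag types here are simplified stand-ins for the generated EF classes, and RenameTag is an invented example operation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-ins for the generated EF types, for illustration only.
public class Tag { public int Id; public string Name; }

public class Entities : IDisposable
{
    // In EF this would be an ObjectSet<Tag> backed by the database.
    public static List<Tag> Store = new List<Tag> { new Tag { Id = 1, Name = "old" } };
    public List<Tag> Tag { get { return Store; } }
    public void SaveChanges() { /* EF would persist tracked changes here */ }
    public void Dispose() { }
}

public class TestProvider
{
    // One unit of work: the same short-lived context loads and saves the entity,
    // so there is no detached-entity juggling between contexts.
    public void RenameTag(int tagId, string newName)
    {
        using (var entities = new Entities())
        {
            Tag tag = entities.Tag.Single(t => t.Id == tagId);
            tag.Name = newName;
            entities.SaveChanges();
        }
    }
}
```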

Designing database interaction while following Single Responsibility Principle

I'm trying to adhere to Single Responsibility Principle better, and I'm having issues grasping how to structure the general class design for communicating with a database. In a simplified version, I essentially have a database containing:
Manufacturer <== Probes <==> ProbeSettings
A probe has a manufacturer. A probe has 1 set of settings. The related objects are accessed all over the application, and quite frankly, the current implementation is a mess.
Currently, here's a general view of how communication and objects are implemented:
public class Manufacturer
{
    public int ID; // Primary key, auto-incrementing on insert
    public string Name;
}

public class Probe
{
    public int ID; // Primary key, auto-incrementing on insert
    public int ManufacturerID;
    public string Name;
    public int Elements;
}

public class ProbeSettings
{
    public int ProbeID; // Primary key, since it is unique.
    public int RandomSetting;
}

// This class is a mess...
public static class Database
{
    public static string ConnectionString;

    public static void InsertManufacturer(Manufacturer manuf); // ID would be ignored here, since it's auto-incrementing.
    public static void InsertProbe(Probe probe); // Again, ID generally ignored.
    public static void InsertProbeSettings(ProbeSettings probeSet);

    public static Manufacturer[] GetAllManufacturer();
    public static Probe[] GetProbesFromManufacturer(int manufacturerID);
    public static Probe[] GetProbesFromManufacturer(Manufacturer manuf);
}
I see many issues here.
The Database class does far too much.
These objects could really be immutable once read; the only issue is that after inserting, I don't know what ID was assigned, and the inserted object becomes stale.
Any time a class needs information from the database, I have to add another Get method to handle that specific query.
I'm really at a loss here on what a correct implementation would be. My only real idea for improvement is some kind of base interface for database objects, although it might only help for inserts...
public interface IDatabaseObject
{
    void Insert(Database db);
    bool Delete(Database db);
}
What is a good way to actually implement this?
Well, the best solution for working with a database while maintaining SRP (or any other sane pattern) is to use some kind of ORM (for example, NHibernate).
This lets you work with classes as they are, instead of manually shuttling them to and from the database.
For example, with NH your classes can look like this:
public class Manufacturer
{
    public string Name { ... }
    public IList&lt;Probe&gt; Probes { ... }
}

public class Probe
{
    public string Name { ... }
    public int Elements { ... }
    public ProbeSettings Settings { ... }
}

public class ProbeSettings
{
    public int RandomSetting;
}
As you can see, you no longer need GetProbesFromManufacturer, since you can just navigate the Probes collection on Manufacturer.
Also, the ORM will manage object IDs and saving for you. So all you will need is a small, fixed number of general methods like LoadById/LoadAll, which fit nicely into a class whose single responsibility is data access. You would probably also need a class per complex, configurable query against the DB.
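For example, the old Database.GetProbesFromManufacturer query methods reduce to plain object navigation over the mapped graph. A self-contained sketch using the classes from this answer (the HighElementProbeNames helper and the 64-element threshold are invented for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

public class ProbeSettings { public int RandomSetting; }

public class Probe
{
    public string Name { get; set; }
    public int Elements { get; set; }
    public ProbeSettings Settings { get; set; }
}

public class Manufacturer
{
    public string Name { get; set; }
    public IList<Probe> Probes { get; set; }
}

public static class Example
{
    public static IEnumerable<string> HighElementProbeNames(Manufacturer m)
    {
        // Replaces a Database.GetProbesFromManufacturer call:
        // with an ORM, you just walk the already-mapped collection.
        return m.Probes.Where(p => p.Elements > 64).Select(p => p.Name);
    }
}
```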
It sounds like you are looking for an ORM. Since you're working in C#, I'll assume you have access to LINQ to SQL as part of the .NET Framework. LINQ to SQL can do what you're looking for as far as managing your basic CRUD operations. Similar projects, also worth checking out, are Castle ActiveRecord and NHibernate.
