In my previous applications, when I used LINQ to SQL, I would always put my LINQ to SQL code in one class, so I had only one DataContext.
My current application, though, is getting too big, so I started splitting my code up into different classes (one for Customer, one for Location, one for Supplier...), and they all have their own DataContext:
DatabaseDesignDataContext dc = new DatabaseDesignDataContext();
Now when I try to save a contact with a location (which I got from a different DataContext) I get the following error:
"An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported."
I assume this is because I create a DataContext in every class, but I don't know how to do this differently.
I'm looking for any ideas, thanks.
My classes look like the following:
public class LocatieManagement
{
    private static DatabaseDesignDataContext dc = new DatabaseDesignDataContext();

    public static void addLocatie(locatie nieuweLocatie)
    {
        dc.locaties.InsertOnSubmit(nieuweLocatie);
        dc.SubmitChanges();
    }

    public static IEnumerable<locatie> getLocaties()
    {
        var query = (from l in dc.locaties
                     select l);
        IEnumerable<locatie> locaties = query;
        return locaties;
    }

    public static locatie getLocatie(int locatie_id)
    {
        var query = (from l in dc.locaties
                     where l.locatie_id == locatie_id
                     select l).Single();
        locatie locatie = query;
        return locatie;
    }
}
That happens if the entity is still attached to the original DataContext. Turn off deferred loading (dc.DeferredLoadingEnabled = false):
partial class SomeDataContext
{
    partial void OnCreated()
    {
        this.DeferredLoadingEnabled = false;
    }
}
You may also need to serialize/deserialize it once (e.g. using the DataContractSerializer) to disconnect it from the original DataContext; here's a clone method that uses the DataContractSerializer:
internal static T CloneEntity<T>(T originalEntity) where T : someentitybaseclass
{
    Type entityType = typeof(T);
    DataContractSerializer ser = new DataContractSerializer(entityType);
    using (MemoryStream ms = new MemoryStream())
    {
        ser.WriteObject(ms, originalEntity);
        ms.Position = 0;
        return (T)ser.ReadObject(ms);
    }
}
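For example, a sketch of how the clone might be used (it assumes your locatie entity derives from the serializable base class, and that the table has a version/timestamp column, which LINQ to SQL requires for attach-as-modified):

locatie detached = CloneEntity(originalLocatie);

using (var dc2 = new DatabaseDesignDataContext())
{
    // attach the disconnected copy as modified so the update is persisted
    dc2.locaties.Attach(detached, true);
    dc2.SubmitChanges();
}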
This happens because you're trying to manage data from differing contexts - you would need to properly detach and attach your objects to proceed. However, I would suggest preventing the need to do so in the first place.
So, first things first: remove the data context instances from your entity classes.
From there, create 'operational' classes that expose CRUD operations for one specific entity class, with each method using a dedicated data context for that unit of work - perhaps overloaded to accept a current context for when a unit of work spans several operations.
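A rough sketch of that shape, reusing the question's entities (LocatieService is a hypothetical name):

public class LocatieService
{
    // each call owns its context: one unit of work per operation
    public void AddLocatie(locatie nieuweLocatie)
    {
        using (var dc = new DatabaseDesignDataContext())
        {
            dc.locaties.InsertOnSubmit(nieuweLocatie);
            dc.SubmitChanges();
        }
    }

    // overload for when a unit of work is already in flight;
    // the caller owns the context and decides when to SubmitChanges
    public void AddLocatie(locatie nieuweLocatie, DatabaseDesignDataContext dc)
    {
        dc.locaties.InsertOnSubmit(nieuweLocatie);
    }
}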
I know everybody probably gets tired of hearing this, but you really should look at using Repositories for Data Access (and using the Unit of Work pattern to ensure that all of the repositories that are sharing a unit of work are using the same DataContext).
You can read up on how to do things here: Revisiting the Repository and Unit of Work Patterns with Entity Framework (the same concepts apply to LINQ to SQL as well).
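The core idea, as a minimal sketch (the repository types here are hypothetical):

public class UnitOfWork : IDisposable
{
    // one DataContext shared by every repository in this unit of work
    private readonly DatabaseDesignDataContext _dc = new DatabaseDesignDataContext();

    public LocatieRepository Locaties { get { return new LocatieRepository(_dc); } }
    public ContactRepository Contacten { get { return new ContactRepository(_dc); } }

    public void Commit() { _dc.SubmitChanges(); }
    public void Dispose() { _dc.Dispose(); }
}

Entities loaded through one UnitOfWork can then be saved through that same UnitOfWork, which avoids the "loaded from another DataContext" error entirely.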
Another solution I found is to create one parent DataContext class:
public class DataContext
{
    public static DatabaseDesignDataContext dc = new DatabaseDesignDataContext();
}
And let all my other classes inherit from it:
public class LocatieManagement : DataContext
{
    public static void addLocatie(locatie nieuweLocatie)
    {
        dc.locaties.InsertOnSubmit(nieuweLocatie);
        dc.SubmitChanges();
    }
}
Then all the classes use the same DataContext.
Related
I am trying to create a new core framework (web, mostly) with the Repository and Unit of Work patterns for my applications, so that I can change my ORM to NHibernate or Dapper later on.
Right now my Unit of Work interface looks like this:
public interface IUnitOfWork : IDisposable
{
    void Commit();
    void Rollback();
}
And the Entity Framework implementation looks like this (trimmed for readability):
public class EfUnitOfWork : IUnitOfWork
{
    ....

    public EfUnitOfWork(ApplicationDbContext context)
    {
        this._context = context;
        this._transaction = new EfTransaction(_context.Database.BeginTransaction());
    }

    public void Commit()
    {
        this._context.SaveChanges(true);
        this._transaction.Commit();
        ...
    }

    public void Rollback()
    {
        ...
    }
}
The problem is that in my service layer, which contains the business logic, I can do something like this with navigation properties:
public bool CreateCity(CityCreateModel model)
{
    using (var uow = _unitOfWorkFactory.Create())
    {
        var city = new City();
        city.Name = model.Name;
        city.State = new State() { Country = new Country() { Name = "SomeCountry" }, Name = "SomeCity" };
        _cityRepository.Create(city);
        try
        {
            uow.Commit();
            return true;
        }
        catch (Exception)
        {
            uow.Rollback();
            throw;
        }
    }
}
The repository's Create method is pretty straightforward, as I use Entity Framework:
public void Create(City entity)
{
    _set.Add(entity);
}
The problem begins here: when a team member writes code like the service example above - using the new keyword on navigation properties, or adding items to collection navigation properties - Entity Framework detects those changes, and when I save, they are persisted to the database as well.
If I try to switch the existing code to Dapper.NET or to a REST service later on, there could be a LOT of problems: I would have to go through every navigation property, track whether it changed, and write a lot of (possibly throwaway) code for it, because I wouldn't really know what was inserted into the tables via Entity Framework and what wasn't (the navigation properties get inserted too, while my repository was called only once - for the single City insert in my example above).
Is there a way to prevent this behavior, or is there a known pattern I can adopt early on so I won't have problems later?
How did you overcome this?
Before I begin, I want to give some notes on your code:
public EfUnitOfWork(ApplicationDbContext context)
{
    this._context = context;
    this._transaction = new EfTransaction(_context.Database.BeginTransaction());
}
1) From your example I can see that you are sharing the same DbContext (passed as a constructor parameter) across the whole application. I do not think this is a good idea: the entities will be cached in the first-level cache and the change tracker will track them all. With this approach you will soon run into performance problems as the database grows.
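One way to avoid that (a sketch - it assumes an IUnitOfWorkFactory matching the _unitOfWorkFactory.Create() call in your service) is to have the factory hand each unit of work its own context:

public class EfUnitOfWorkFactory : IUnitOfWorkFactory
{
    public IUnitOfWork Create()
    {
        // a fresh context per unit of work keeps the change tracker
        // small and avoids a stale app-wide first-level cache
        return new EfUnitOfWork(new ApplicationDbContext());
    }
}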
_cityRepository.Create(city);

public void Create(City entity)
{
    _set.Add(entity);
}
2) The base repository should be generic over a type T, where T is an entity! Then you can create a city:
var city = _cityRepository.Create();
Fill the city, or provide the data as parameters to the Create method.
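For illustration, such a generic contract might look like this (a sketch; the exact members are up to you):

public interface IGenericRepository<T> where T : class
{
    // factory method: the repository controls how entities come to life,
    // so stray 'new' instances don't sneak into the change tracker
    T Create();
    void Add(T entity);
    T Find(int id);
    void Remove(T entity);
}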
Back to your question:
Is there a way to prevent this behavior, or is there a known pattern I can adopt early on so I won't have problems later?
Each ORM has its own design concept, and it is not easy to find a generic way that fits them all, so I would do the following:
1) Separate the repository contracts into one assembly (a contracts DLL).
2) For each ORM framework, use a separate assembly that implements the repository contracts.
Example:
public interface ICityRepository : IGenericRepository<City>
{
    City Create();
    City Find(int id);
    ....
}
Entity Framework assembly:

public class CityRepositoryEF : ICityRepository
{
    ..

Dapper assembly:

public class CityRepositoryDapper : ICityRepository
{
    ..
You can find a brilliant walk-through if you follow the URL below. It is authored by Julie Lerman, who is an Entity Framework evangelist.
http://thedatafarm.com/data-access/agile-entity-framework-4-repository-part-1-model-and-poco-classes/
I have:
Category class
public partial class Category : BaseEntity
{
    ...
    public string Name { get; set; }

    private ICollection<Discount> _appliedDiscounts;
    public virtual ICollection<Discount> AppliedDiscounts
    {
        get { return _appliedDiscounts ?? (_appliedDiscounts = new List<Discount>()); }
        protected set { _appliedDiscounts = value; }
    }
}
Service:
public IList<Category> GetCategories()
{
    // ado.net to return category entities.
}

public ICollection<Discount> GetDiscount(int categoryId)
{
    // ado.net .
}
I don't want to use an ORM like EF - just plain ADO.NET - and I don't want to put an ugly Lazy<T> in my domain definition, e.g. public Lazy....
So how, in this case, could I get AppliedDiscounts to bind lazily to GetDiscount without explicitly declaring Lazy<T> on the Category class?
I don't know why you don't like the Lazy<T> type - it is simple and useful, and you don't have to worry about anything.
And no one forces you to use public Lazy<IEnumerable<Discount>> GetDiscounts();
You could use it internally:
Lazy<IEnumerable<Discount>> discounts =
    new Lazy<IEnumerable<Discount>>(() => new DiscountCollection());

public IEnumerable<Discount> GetDiscounts()
{
    return discounts.Value;
}
It operates as intended - the collection isn't created until someone actually asks for the discounts.
If you really want, you could create your own implementation - something like the Singleton class in Richter's "CLR via C#" book (Lazy has all the 'properties' of a proper singleton container: thread safety, and the inner 'singleton' value is evaluated only once...).
But do you really want to write and test it? You would just be replacing a well-designed standard component with a fishy custom one.
AFTER ACTUALLY READING YOUR QUESTION WITH ATTENTION
1) If your lazy loading does not need any thread safety, you can accomplish similar behaviour without Lazy or any complex constructs - just use a Func delegate:
public partial class Category : BaseEntity
{
    ...
    private Func<ICollection<Discount>> getDiscounts;

    public Category(Func<ICollection<Discount>> getDiscounts)
    {
        this.getDiscounts = getDiscounts;
    }

    public string Name { get; set; }

    private ICollection<Discount> _appliedDiscounts;
    public virtual ICollection<Discount> AppliedDiscounts
    {
        get
        {
            return _appliedDiscounts ??
                (_appliedDiscounts = new List<Discount>(this.getDiscounts()));
        }
        protected set { _appliedDiscounts = value; }
    }
}
public IList<Category> GetCategories()
{
    // ado.net to return category entities.
    ... = new Category(() => this.GetDiscount((Int32)curDataObject["categoryId"]))
}

public ICollection<Discount> GetDiscount(int categoryId)
{
    // ado.net .
}
If you inject your service, it is even simpler:
public virtual ICollection<Discount> AppliedDiscounts
{
    get
    {
        return _appliedDiscounts ??
            (_appliedDiscounts = new List<Discount>(this.service.GetDiscounts(this.CategoryId)));
    }
    protected set { _appliedDiscounts = value; }
}
2) If you need to use these objects from multiple threads, then you will have to redesign your classes - they don't look thread-safe.
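For what it's worth, the Func-based version above becomes thread-safe for creation almost for free if the delegate is wrapped in a Lazy<T> internally (a sketch; note that the returned collection itself is still not thread-safe):

public partial class Category : BaseEntity
{
    private readonly Lazy<ICollection<Discount>> _appliedDiscounts;

    public Category(Func<ICollection<Discount>> getDiscounts)
    {
        // Lazy<T> defaults to ExecutionAndPublication: only one thread
        // runs the loader, and every thread sees the same collection
        _appliedDiscounts = new Lazy<ICollection<Discount>>(getDiscounts);
    }

    public virtual ICollection<Discount> AppliedDiscounts
    {
        get { return _appliedDiscounts.Value; }
    }
}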
AFTER THE COMMENT
What I want to do is exactly like this guy: stackoverflow.com/questions/8188546/… . I want to know how an ORM like EF handles the domain - keeping it clean and separated from injected service classes, yet still able to handle lazy loading. I know I can use reflection to get all the object's properties and its object variables (like AppliedDiscounts), but I don't know how to transform these dynamically to a lazy type so that they can be loaded later when needed.
It is a universal principle that you can't get something for nothing. You can't make your entities both completely clean and separated from any services (even through some proxy) and still have them load lazily - if they don't know anything about services, and services don't know anything about them, how would the lazy loading work? There is no way to achieve such absolute decoupling: for two components to interact, they have to know about each other, know about some third module-communicator, or some module has to know about them. But such coupling can be partially or completely hidden.
Technologies that provide entity object models usually use some of the following techniques:
Code generation to create wrappers (or proxies) over your plain data objects, or concrete instances of your interfaces. This could be C# code or IL weaving; it could even be an in-memory assembly created dynamically at runtime using something like Reflection.Emit. This is not the easiest or most direct approach, but it gives you enormous code-tuning capabilities. A lot of modern frameworks use it.
Implementation of all those capabilities in Context classes - you won't have lazy loading on your end objects; you will have to invoke it explicitly through the Context classes: context.Orders.With("OrderDetails"). The upside is that the entities stay clean.
Injection of the service (or only the needed subset of its operations) - that's what you'd prefer to avoid.
Use of events or some other observer-like pattern - your entities will be clean of service logic and dependencies (at least in some sense), but will contain some hookup infrastructure that won't be very straightforward or easy to manage.
For your custom object model, 2 or 3 are the best bets. But you could try 1 with Roslyn.
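To give a flavor of option 1, here is roughly what a generated proxy boils down to, written by hand (illustrative only; it assumes the question's original Category with its parameterless constructor, and that whatever code materializes the entity wires up the loader delegate):

public class CategoryProxy : Category
{
    private readonly Func<ICollection<Discount>> _load;
    private bool _loaded;

    public CategoryProxy(Func<ICollection<Discount>> load)
    {
        _load = load;
    }

    public override ICollection<Discount> AppliedDiscounts
    {
        get
        {
            if (!_loaded)
            {
                // first access: pull the discounts through the delegate
                base.AppliedDiscounts = new List<Discount>(_load());
                _loaded = true;
            }
            return base.AppliedDiscounts;
        }
        protected set { base.AppliedDiscounts = value; }
    }
}

The data layer returns CategoryProxy instances while callers only ever see Category - which is exactly why ORMs require such properties to be virtual.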
This is a very weird architecture. Please bear with me.
We have an existing tiered application (data, logic/service, client).
The latest requirement is that the service layer should access two data sources! (There is no way around it.)
These two data sources have the same DB schema.
As with most tiered architectures, we have read and write methods like:
IEnumerable<Product> GetAllProducts(),
Product GetProductById(ProductKey id),
IEnumerable<Product> FindProductsByName(string name)
the product DTOs are:
class Product
{
    public ProductKey Key { get; set; }
    ...
}

class ProductKey
{
    public long ID { get; }
}
We narrowed it down to two possible solutions:
Alternative 1:
Add a parameter to the read methods so that the service knows which DB to use, like so:
Product GetProductById(ProductKey id, DataSource dataSource)
DataSource is an enumeration.
Alternative 2 (my solution):
Add a DataSource property to the key classes. This will be set by Entity Framework when the object is retrieved, and it will not be persisted into the DB.
class ProductKey
{
    public long ID { get; }
    public DataSource Source { get; } // enum
}
The advantage is that the change will have minimal impact to the client.
However, people don't like this solution because:
the DataSource doesn't add business value. (My response: the ID doesn't add business value either. It's a surrogate key; its purpose is to track persistence.)
the children in the object graph will also contain a DataSource, which is redundant.
Which solution is more sound? Do you have other alternatives?
Note: these services are used everywhere.
What I would suggest is door number 3:
[||||||||||||||]
[|||||||||s! ]
[||||nerics! ]
[ Generics! ]
I use a "dynamic repository" (or at least that is what I have called it). It is setup to be able to connect to any datacontext or dbset while still being in the same using block (i.e. without re-instantiation).
Here is a snippet of how I use it:
using (var dr = new DynamicRepo())
{
    dr.Add<House>(model.House);
    foreach (var rs in model.Rooms)
    {
        rs.HouseId = model.House.HouseId;
        dr.Add<Room>(rs);
    }
}
This uses the "default" dbcontext that is defined. Each one must be defined in the repository, but not instantiated. Here is the constructor I use:
public DynamicRepo(bool Main = true, bool Archive = false)
{
    if (Main)
    {
        this.context = new MainDbContext();
    }
    if (Archive)
    {
        this.context = new ArchiveDbContext();
    }
}
This is a simplified version with only two contexts. A more in-depth selection method can be implemented to choose which context to use.
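For instance, the selection could be driven by an enum instead of two booleans (a sketch, reusing the DataSource enum from the question):

public DynamicRepo(DataSource source = DataSource.Main)
{
    switch (source)
    {
        case DataSource.Archive:
            this.context = new ArchiveDbContext();
            break;
        default:
            this.context = new MainDbContext();
            break;
    }
}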
And then, once initialized, here is how Add works:
public void Add<T>(T te) where T : class
{
    DbSet<T> dbSet = context.Set<T>();
    dbSet.Add(te);
    context.SaveChanges();
}
A nice advantage of this is that there is only one spot to maintain the code for interacting with the database. All the other logic can be abstracted away into different classes. Using a generic repository in this fashion definitely saved me a lot of time - even if I spent some time modifying it at first.
I hope I didn't misunderstand what you were looking for, but if you are trying to have one repository for multiple data sources, I believe this is a good approach.
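For example, pointing the same repository at the second data source is just a constructor argument:

using (var dr = new DynamicRepo(Main: false, Archive: true))
{
    // same Add<T> as before, now writing to the archive database
    dr.Add<House>(model.House);
}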
How do I avoid requiring code like this:
public static class BusinessLogicAutomapper
{
    public static bool _configured;

    public static void Configure()
    {
        if (_configured)
            return;

        Mapper.CreateMap<Post, PostModel>();
        _configured = true;
    }
}
in my BL assembly, and having to call Configure() from my Global.asax in my MVC application?
I mean, I expect a call like this:
public PostModel GetPostById(long id)
{
    EntityDataModelContext context = DataContext.GetDataContext();
    Post post = context.Posts.FirstOrDefault(p => p.PostId == id);
    PostModel mapped = Mapper.Map<Post, PostModel>(post);
    return mapped;
}
to have Mapper.Map<TIn,TOut> produce the mapping if it doesn't already exist, instead of having to create it manually myself (I shouldn't even have to know about this inner working). How can I get around having to declare mappers for AutoMapper explicitly?
A solution that's natural to AutoMapper would be desired, but an extension or some architectural change in order to avoid this initialization would work too.
I'm using MVC 3, .NET 4, and no IoC/DI (yet, at least)
I completely misunderstood what you were trying to do in my original answer. You can accomplish what you want by implementing part of AutoMapper's functionality using reflection. It will be of very limited utility, though, and the more you extend it, the more like AutoMapper it will become, so I'm not sure there's any long-term value to it.
I use a small utility like the one you're describing to automate my auditing framework, copying data from an entity model to its associated audit model. I created it before I started using AutoMapper and haven't replaced it. I call it ReflectionHelper; the code below is a modification of it (from memory) - it only handles simple properties, but it can be adapted to support nested models and collections if need be. It's convention-based, assuming that properties with the same name correspond and have the same type. Target properties that don't exist on the type being copied from are simply ignored.
public static class ReflectionHelper
{
    public static T CreateFrom<T, U>(U from)
        where T : class, new()
        where U : class
    {
        var to = Activator.CreateInstance<T>();
        var toType = typeof(T);
        var fromType = typeof(U);
        foreach (var toProperty in toType.GetProperties())
        {
            var fromProperty = fromType.GetProperty(toProperty.Name);
            if (fromProperty != null)
            {
                toProperty.SetValue(to, fromProperty.GetValue(from, null), null);
            }
        }
        return to;
    }
}
Used as:

var model = ReflectionHelper.CreateFrom<ViewModel, Model>(entity);
var entity = ReflectionHelper.CreateFrom<Model, ViewModel>(model);
Original
I do my mapping in a static constructor. The mapper is initialized the first time the class is referenced without having to call any methods. I don't make the logic class static, however, to enhance its testability and the testability of classes using it as a dependency.
public class BusinessLogicAutomapper
{
    static BusinessLogicAutomapper()
    {
        Mapper.CreateMap<Post, PostModel>();
        Mapper.AssertConfigurationIsValid();
    }
}
Check out AutoMapper profiles.
I have this set up in my Global.asax - it runs once at startup, so everything is ready to go at runtime.
I also have one unit test which covers all the maps, checking that they are correct.
A good example of this is Ayende's RaccoonBlog:
https://github.com/ayende/RaccoonBlog
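A minimal sketch of the profile approach (names are illustrative, and this targets the classic static Mapper API of that era):

public class DomainToViewModelProfile : Profile
{
    protected override void Configure()
    {
        // all related maps declared in one place
        CreateMap<Post, PostModel>();
    }
}

// in Global.asax Application_Start:
Mapper.Initialize(cfg => cfg.AddProfile<DomainToViewModelProfile>());

The single unit test then just calls Mapper.AssertConfigurationIsValid() to verify every declared map.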
I'm using NHibernate + Fluent to handle my database, and I've got a problem querying for data which references other data. My simple question: do I need to define a "BelongsTo" or similar in the mappings, or is it sufficient to define references on one side (see the mapping sample below)? If so, how? If not, please keep reading. Have a look at this simplified example - starting with two model classes:
public class Foo
{
    private IList<Bar> _bars = new List<Bar>();

    public int Id { get; set; }
    public string Name { get; set; }

    public IList<Bar> Bars
    {
        get { return _bars; }
        set { _bars = value; }
    }
}

public class Bar
{
    public int Id { get; set; }
    public string Name { get; set; }
}
I have created mappings for these classes. This is really where I'm wondering whether I got it right. Do I need to define a binding back to Foo from Bar ("BelongsTo" etc.), or is one way sufficient? And do I need to define the relation from Foo to Bar in the model class too? Here are the mappings:
public class FooMapping : ClassMap<Foo>
{
    public FooMapping()
    {
        Not.LazyLoad();
        Id(c => c.Id).GeneratedBy.HiLo("1");
        Map(c => c.Name).Not.Nullable().Length(100);
        HasMany(x => x.Bars).Cascade.All();
    }
}

public class BarMapping : ClassMap<Bar>
{
    public BarMapping()
    {
        Not.LazyLoad();
        Id(c => c.Id).GeneratedBy.HiLo("1");
        Map(c => c.Name).Not.Nullable().Length(100);
    }
}
And I have a function for querying for Foo's, as follows:
public IList<Foo> SearchForFoos(string name)
{
    using (var session = _sessionFactory.OpenSession())
    {
        using (var tx = session.BeginTransaction())
        {
            var result = session.CreateQuery("from Foo where Name=:name").SetString("name", name).List<Foo>();
            tx.Commit();
            return result;
        }
    }
}
Now, this is where it fails. The return value from this function initially looks fine, with the result found and all. But there is a problem - the list of Bar's shows the following exception in the debugger:
base {NHibernate.HibernateException} = {"Initializing[MyNamespace.Foo#14]-failed to lazily initialize a collection of role: MyNamespace.Foo.Bars, no session or session was closed"}
What went wrong? I'm not using lazy loading, so how could there be something wrong in the lazy loading? Shouldn't the Bar's be loaded together with the Foo's? What's interesting to me is that the generated query doesn't ask for Bar's at all:
select foo0_.Id as Id4_, foo0_.Name as Name4_ from "Foo" foo0_ where foo0_.Name=#p0;#p0 = 'one'
What's even more odd to me is that if I debug the code - stepping through each line - I don't get the error. My theory is that it somehow gets time to fetch the Bar's during the same session because things are moving more slowly, but I don't know. Do I need to tell it to fetch the Bar's explicitly? I've tried various solutions now, but it feels like I'm missing something basic here.
This is a typical problem. When you use NHibernate or Fluent NHibernate, every class that maps to your data is decorated with a lot of extra machinery (which is why the members need to be virtual). This all happens at runtime.
Your code clearly shows an opening and closing of a session in a using statement. When debugging, the debugger is so nice (or not) as to keep the session open past the end of the using statement (the clean-up code runs after you stop stepping through). When running normally (not stepping through), your session is closed right away.
The session is vital in NHibernate. When you pass the result set on, the session must still be open. A normal programming pattern with NH is to open a session at the beginning of the request and close it at the end (with ASP.NET), or to keep it open for a longer period.
To fix your code, either move the open/close of the session to a singleton or to a wrapper that takes care of it, or move the open/close to the calling code (but after a while this gets messy). To fix it more generally, several patterns exist; you can look up the NHibernate Best Practices article, which covers them all.
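For the ASP.NET case, the session-per-request version of that pattern looks roughly like this (a sketch; it assumes a sessionFactory field in Global.asax and that current_session_context_class is configured, e.g. to "web"):

// Global.asax.cs, using NHibernate.Context
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // open a session and bind it for the duration of this request
    CurrentSessionContext.Bind(sessionFactory.OpenSession());
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    // unbind and dispose the request's session
    var session = CurrentSessionContext.Unbind(sessionFactory);
    if (session != null)
        session.Dispose();
}

Data-access code then calls sessionFactory.GetCurrentSession() instead of opening its own session.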
EDIT: Taken to another extreme, the S#arp architecture (download) takes care of these best practices and many other NH issues for you, totally hiding the NH intricacies from the end user/programmer. It has a bit of a steep learning curve (it includes MVC etc.), but once you get the hang of it... you cannot do without it anymore. I'm not sure whether it mixes easily with FluentNH, though.
Using FluentNH and a simple Dao wrapper
See the comments for why I added this extra "chapter". Here's an example of a very simple, but reusable and expandable, Dao wrapper for your DAL classes. I assume you have set up your FluentNH configuration and your typical POCOs and relations.
The following wrapper is what I use for simple projects. It uses some of the patterns described above, but obviously not all of them, to keep it simple. The approach is usable with other ORMs too, in case you're wondering. The idea is to create a singleton for the session, but still keep the ability to close the session (to save resources) without worrying about having to reopen it. I left out the code for closing the session, but that'll only be a couple of lines. The idea is as follows:
// the thread-safe singleton
public sealed class SessionManager
{
    ISession session;

    SessionManager()
    {
        ISessionFactory factory = Setup.CreateSessionFactory();
        session = factory.OpenSession();
    }

    internal ISession GetSession()
    {
        return session;
    }

    public static SessionManager Instance
    {
        get
        {
            return Nested.instance;
        }
    }

    class Nested
    {
        // Explicit static constructor to tell C# compiler
        // not to mark type as beforefieldinit
        static Nested()
        {
        }

        internal static readonly SessionManager instance = new SessionManager();
    }
}
// the generic Dao that works with your POCOs
public class Dao<T>
    where T : class
{
    ISession m_session = null;

    private ISession Session
    {
        get
        {
            // lazy init, only create when needed
            return m_session ?? (m_session = SessionManager.Instance.GetSession());
        }
    }

    public Dao() { }

    // retrieve by Id
    public T Get(int Id)
    {
        return Session.Get<T>(Id);
    }

    // get all of your POCO type T
    public IList<T> GetAll()
    {
        return Session.CreateCriteria<T>().List<T>();
    }

    // get all with the given Ids
    public IList<T> GetAll(int[] Ids)
    {
        return Session.CreateCriteria<T>().
            Add(Expression.In("Id", Ids)).
            List<T>();
    }

    // save your POCO changes
    public T Save(T entity)
    {
        using (var tran = Session.BeginTransaction())
        {
            Session.SaveOrUpdate(entity);
            tran.Commit();
            Session.Refresh(entity);
            return entity;
        }
    }

    public void Delete(T entity)
    {
        using (var tran = Session.BeginTransaction())
        {
            Session.Delete(entity);
            tran.Commit();
        }
    }

    // if you have caching enabled, but want to ignore it
    public IList<T> ListUncached()
    {
        return Session.CreateCriteria<T>()
            .SetCacheMode(CacheMode.Ignore)
            .SetCacheable(false)
            .List<T>();
    }

    // etc, like:
    public T Renew(T entity);
    public T GetByName(T entity, string name);
    public T GetByCriteria(T entity, ICriteria criteria);
}
Then, in your calling code, it looks something like this:
Dao<Foo> daoFoo = new Dao<Foo>();
Foo newFoo = new Foo();
newFoo.Name = "Johnson";
daoFoo.Save(newFoo); // if no session exists yet, it is created here (lazy init)

// or:
Dao<Bar> barDao = new Dao<Bar>();
IList<Bar> allBars = barDao.GetAll();
Pretty simple, isn't it? The next step up from this idea is to create a specific Dao for each POCO, inheriting from the general Dao class above, and use an accessor class to get them. That makes it easier to add tasks that are specific to each POCO, and it's basically what the NH Best Practices article is about (in a nutshell - I left out interfaces, inheritance relations, and static vs. dynamic tables).
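As a final illustration, such a specific Dao might look like this (a sketch; it assumes the Session property on Dao<T> is made protected so subclasses can query):

public class FooDao : Dao<Foo>
{
    // Foo-specific queries live here, on top of the generic CRUD
    public IList<Foo> GetByName(string name)
    {
        return Session.CreateCriteria<Foo>()
            .Add(Expression.Eq("Name", name))
            .List<Foo>();
    }
}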