How can we create a generic data access layer that can be used by any ASP.NET application, regardless of whether the data comes from different data source providers or from web services?
Can we create a data access layer for an application that consumes a web service?
You might look into the Repository pattern. A Repository is a facade that presents persisted objects as though they were an in-memory collection; whichever provider you use to get the data is hidden behind the Repository interface.
IRepository with LINQ to SQL
Code Project Tutorial
A sample by Fredrik Kalseth
You have plenty of options! :-)
You mention you want to use the data access layer (DAL) from asp.net and web services. No problem.
Basically, what you need to figure out is a basic design you want to follow, and encapsulate the DAL into its own assembly which can be used / referenced from various consumers.
There are numerous ways of doing this:
create a Linq-to-SQL mapping for your tables, if you're using SQL Server as your backend, and use the Linq-to-SQL entities and methods
create a repository pattern (for each "entity", you have an "EntityRepository" class which can be used to retrieve entities, e.g. EntityRepository.GetByID(int id), EntityRepository.GetByForeignKey(string fk), or whatever you need); see the sketch after this list
use some other means of accessing the data (NHibernate, your own ADO.NET based mapper)
you could actually also use webservice calls as your data providers
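As a rough illustration of the repository option above, here is a minimal sketch of a DAL assembly that both an ASP.NET site and a web service could reference; Customer, ICustomerRepository and SqlCustomerRepository are made-up names:

using System;
using System.Collections.Generic;

// Illustrative entity living in the DAL (or a shared domain) assembly.
public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
}

// The contract every consumer (ASP.NET site, web service, ...) programs against.
public interface ICustomerRepository
{
    Customer GetByID(int id);
    IEnumerable<Customer> GetByForeignKey(string fk);
}

// One possible implementation; a LINQ-to-SQL, NHibernate or
// web-service-backed version would implement the same interface.
public class SqlCustomerRepository : ICustomerRepository
{
    private readonly string _connectionString;

    public SqlCustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Customer GetByID(int id)
    {
        // ... query SQL Server here ...
        throw new NotImplementedException();
    }

    public IEnumerable<Customer> GetByForeignKey(string fk)
    {
        // ... query SQL Server here ...
        throw new NotImplementedException();
    }
}

Because consumers only see ICustomerRepository, swapping the SQL implementation for a web-service-backed one is a change confined to the DAL assembly.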
Your biggest challenge is to define a standard way of doing things, and sticking to it.
See some articles - maybe they'll give you an idea:
Creating a Data Access Layer in .NET - Part 1
Building a DAL using Strongly Typed TableAdapters and DataTables in VS 2005 and ASP.NET 2.0
Try the tutorials at www.asp.net:
DataAccess
Whoa, there are a ton of resources out there. The best advice to start with is to find a pattern that you feel comfortable with and stick to it for the project; there is nothing worse than changing your mind three-quarters of the way in.
One that I have found and like to use is the Repository (or Provider) pattern. The Repository pattern just makes sure you have standard access to your repositories, like your store catalog or CMS system. You create an interface that, in my case, exposes sets of IQueryable; the objects in the data model are just standard C# classes with no extra fluff - POCOs (Plain Old CLR Objects).
public interface ICMSRepository {
    IQueryable<ContentSection> GetContentSections();
    void SaveContentSection(ContentSection obj);
}
Then just implement the interface for your different providers, like a LINQ to SQL context, making sure to return the POCO objects as queryable. The nice thing about this is that you can then make extension methods off of the IQueryable to get what you need easily. Like:
public static IQueryable<ContentSection> WithID(this IQueryable<ContentSection> qry, int ID) {
    // Filter by ID; execution is still deferred until the query is enumerated
    return from c in qry where c.ID == ID select c;
}
//Allows you to chain repository and filter to delay SQL execution
ICMSRepository _rep = new SqlCMSRepository();
var sec = _rep.GetContentSections().WithID(1).SingleOrDefault();
The nice thing about using the Interface approach is the ability to test and mock it up or dependency inject your preferred storage at run time.
Another method I have used, and which is used in the ASP.NET framework a lot, is the Provider model. This is similar, except that instead of an interface you create a singleton abstract class; your implementations on top of the abstract class then define the storage access means (XML, flat file, SQL, MySQL, etc.). The base abstract class is also responsible for creating its singleton based on configuration. Take a look at this for more info on the provider model.
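A rough sketch of that shape, reusing the ContentSection class from above; the cmsProvider appSettings key is an assumption, and the real ASP.NET provider model builds on ProviderBase and a provider configuration section, so this only shows the gist:

using System;
using System.Collections.Generic;
using System.Configuration;

// Consumers call CMSProvider.Instance; the concrete storage
// (SQL, XML, flat file, ...) is chosen once from configuration.
public abstract class CMSProvider
{
    private static readonly Lazy<CMSProvider> _instance = new Lazy<CMSProvider>(() =>
    {
        // e.g. <add key="cmsProvider" value="MyApp.SqlCMSProvider, MyApp" />
        string typeName = ConfigurationManager.AppSettings["cmsProvider"];
        return (CMSProvider)Activator.CreateInstance(Type.GetType(typeName));
    });

    public static CMSProvider Instance
    {
        get { return _instance.Value; }
    }

    public abstract IEnumerable<ContentSection> GetContentSections();
    public abstract void SaveContentSection(ContentSection obj);
}

// One concrete provider; an XmlCMSProvider would look the same.
public class SqlCMSProvider : CMSProvider
{
    public override IEnumerable<ContentSection> GetContentSections()
    {
        // ... query the SQL database here ...
        return new List<ContentSection>();
    }

    public override void SaveContentSection(ContentSection obj)
    {
        // ... persist to the SQL database here ...
    }
}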
Otherwise you can look at this similar question.
IRepository<T> is generally a good approach. This interface should expose GetAll, GetByID, Insert, Delete and Submit methods.
But it's important to keep your business logic out of the Repository class, and put it in a custom logic/service class.
Also, if you return IQueryable from your GetAll method, as you can often see in various implementations of IRepository, the UI layer can also query against that IQueryable interface. But querying of the object graph should remain in the Repository layer. There's a lot of debate around this issue.
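A minimal sketch of that split, with Order and OrderService as illustrative names:

using System;
using System.Collections.Generic;

public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    T GetByID(int id);
    void Insert(T entity);
    void Delete(T entity);
    void Submit();   // persists pending changes (unit-of-work style)
}

// Illustrative entity.
public class Order
{
    public int ID { get; set; }
    public bool Shipped { get; set; }
}

// Business rules live in a service class, not in the repository.
public class OrderService
{
    private readonly IRepository<Order> _orders;

    public OrderService(IRepository<Order> orders)
    {
        _orders = orders;
    }

    public void CancelOrder(int orderId)
    {
        Order order = _orders.GetByID(orderId);
        if (order == null || order.Shipped)
            throw new InvalidOperationException("Only existing, unshipped orders can be cancelled.");

        _orders.Delete(order);
        _orders.Submit();
    }
}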
Look at using the Interfaces:
IDbConnection
IDbCommand
IDbDataAdapter
IDataReader
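For example, a small helper coded purely against those interfaces; the Users table, the provider name and the connection string are assumptions, and DbProviderFactory is what lets you switch SQL Server for another ADO.NET provider through configuration:

using System.Collections.Generic;
using System.Data;
using System.Data.Common;

public static class UserQueries
{
    // Works with any ADO.NET provider because only the interfaces are used.
    public static IList<string> GetUserNames(string providerName, string connectionString)
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
        var names = new List<string>();

        using (IDbConnection connection = factory.CreateConnection())
        {
            connection.ConnectionString = connectionString;
            connection.Open();

            using (IDbCommand command = connection.CreateCommand())
            {
                command.CommandText = "SELECT Name FROM Users";
                using (IDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        names.Add(reader.GetString(0));
                }
            }
        }

        return names;
    }
}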
Related
I've implemented my Data layer using the IRepository pattern. One of these methods returns an IQueryable:
public virtual IQueryable<TEntity> Queryable()
{
    return dbSet;
}
I also have a Service layer that calls this method and supplies filtering, e.g.:
public List<User> GetByName(string name)
{
    return _repository.Queryable()
        .Where(a => a.Name == name)
        .ToList();
}
So far so good: Entity Framework is only referenced in the data layer, and the Service layer knows nothing about it.
Now, however I want to do two additional things:
Use the Include method
Use ToListAsync() method
So now my service method looks like this:
public async Task<List<User>> GetByName(string name)
{
    return await _repository.Queryable()
        .Where(a => a.Name == name)
        .Include(x => x.Pets)
        .ToListAsync();
}
The issue now is that Include() and ToListAsync() are both in the System.Data.Entity namespace, which is in EntityFramework.dll.
I didn't want to reference EF from within my Service layer (it shouldn't care or know about this), but I can't see how else to solve this while keeping the architecture clean. The only solutions I can see are:
Add Entity Framework to Service layer
Add a new class to the Data layer (UserData) that wraps IRepository<User> and handles the two additional methods required. The service will then use an instance of this rather than IRepository<User>.
Any suggestions about how I solve this while still adhering to best practices?
I understand that you want to abstract away EF in the repository layer, based on best practices. But your use case (your main question here) contradicts that: you want to expose everything that EF offers to your service layer.
It sounds like you need to define a strict line as to what the responsibility of this repository is. If it's to return an IQueryable, then what is the benefit of wrapping EF at all? Is there some other repository implementation you plan to actually use in production? If so, it's going to be very hard to ensure both repo types are correct, when you expose IQueryable. This is the pain point that you are seeing right now.
In my opinion, a repository should define how you get data. It should expose simple calls like GetAllUsers that returns actual models. When you let the business/service layer define its own way to get the data, it makes the repository feel redundant.
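For instance, under that philosophy the EF-specific calls stay inside the repository and the service only sees finished models. A sketch, where MyDbContext and GetUsersWithPetsAsync are illustrative names, not from the question:

using System.Collections.Generic;
using System.Data.Entity;   // only this assembly references EntityFramework.dll
using System.Linq;
using System.Threading.Tasks;

public interface IUserRepository
{
    Task<List<User>> GetUsersWithPetsAsync(string name);
}

public class EfUserRepository : IUserRepository
{
    private readonly MyDbContext _context;   // hypothetical DbContext with a Users DbSet

    public EfUserRepository(MyDbContext context)
    {
        _context = context;
    }

    public Task<List<User>> GetUsersWithPetsAsync(string name)
    {
        // Include and ToListAsync are called here, behind the interface.
        return _context.Users
            .Where(u => u.Name == name)
            .Include(u => u.Pets)
            .ToListAsync();
    }
}

The service then depends on IUserRepository alone and never sees System.Data.Entity.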
Include and ToListAsync live in EF, therefore they should be kept in the EF service.
The only way to access these without having to provide a reference to System.Data.Entity is to create your own proxy methods. At the end of the day you should code to what your application requires at a high level, rather than making sure it can access everything EF can.
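One shape those proxy methods can take is a generic repository method that accepts include expressions and does the async materialization internally; this is a sketch assuming EF6's System.Data.Entity extensions and a made-up IRepository<TEntity>:

using System;
using System.Collections.Generic;
using System.Data.Entity;            // EF6: Include(...) and ToListAsync(...)
using System.Linq;
using System.Linq.Expressions;
using System.Threading.Tasks;

public interface IRepository<TEntity> where TEntity : class
{
    Task<List<TEntity>> FindAsync(
        Expression<Func<TEntity, bool>> predicate,
        params Expression<Func<TEntity, object>>[] includes);
}

public class EfRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private readonly DbContext _context;

    public EfRepository(DbContext context)
    {
        _context = context;
    }

    public Task<List<TEntity>> FindAsync(
        Expression<Func<TEntity, bool>> predicate,
        params Expression<Func<TEntity, object>>[] includes)
    {
        IQueryable<TEntity> query = _context.Set<TEntity>().Where(predicate);

        // Eager loading and async materialization happen here,
        // so the EF-specific namespaces never leak past the data layer.
        foreach (var include in includes)
            query = query.Include(include);

        return query.ToListAsync();
    }
}

The service layer could then call _repository.FindAsync(u => u.Name == name, u => u.Pets) without referencing EF at all.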
When I deal with EF, I create a Service/Context layer, a repository layer and then a client layer which abstracts away the EF technicalities. This is then used by other devs/users so they aren't exposed too much to the EF side of things. I just work out what is required by the app and implement methods to do just that and nothing more. There is little point bringing the functions of System.Data.Entity beyond the DAL; they should be abstracted away at the client level.
One thing I do hate though is having to duplicate models if I want to keep my service/context layer away from other parts of the application. To get around this I have a model factory which uses DI to find the EF models and deliver them via a DI container. It just means I have to add an interface to each model I create. No biggie.
I have just started to design a .NET library that is going to be used by either an application or a service (but this should not matter with a 3-tier architecture?), and I'm struggling to find a proper separation of concerns while at the same time linking the DAL with the BL in a proper way.
I was looking for tutorials, etc., but they all point to ASP.NET and Entity Framework, but I'd like to use ADO.NET (DataSets, DataTables) to build a library for desktop application / windows service usage.
Would anyone point me in the right direction by providing a sample/example implementation or a tutorial/guide?
EDIT:
I was thinking about something like that:
DbManager - abstract class
XDbManager - X being a provider, SQL, etc., deriving from DbManager and being a singleton class (I'd prefer static, but static classes can't implement interfaces or derive from classes)
DbConnection - an object returned by DbManager method, containing methods for querying
BaseDbo - abstract class for Database Object
XDbo - X being the name of the DBO, using DbManager => DbConnection to query (save, retrieve, retrieve sets, save sets? This is where I'm a bit confused; I need a few persistent DataSets to save, update and retrieve data from tables - should they be implemented as Database Objects deriving from DataSets?)
BaseBo - abstract class for Business Object
XBo - business object class to handle and process data, etc.
Having said the above, I can't find a proper way to "link" both layers.
I also need to make use of a SOAP web service here; should that be implemented in the business layer? Or should I introduce a new sub-layer?
If your application is properly designed, using EF or ADO.Net shouldn't matter much: that's the point of the DAL abstraction. I.e. you'll still have a method such as:
public IEnumerable<User> GetAllUsers()
{
....
}
that will return all of your users. How it's implemented (i.e. Entity Framework or ADO.NET) is not important. The only difference is that your DAL won't be able to return IQueryable<T> (i.e. deferred execution won't work, or you'll need another abstraction layer on top of ADO.NET).
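For illustration, the same method implemented with plain ADO.NET might look like this; the Users table, its columns and a System.Data.SqlClient connection are assumptions:

public IEnumerable<User> GetAllUsers()
{
    var users = new List<User>();

    using (var connection = new SqlConnection(_connectionString))
    using (var command = new SqlCommand("SELECT Id, Name FROM Users", connection))
    {
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                users.Add(new User
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                });
            }
        }
    }

    // Everything is materialized here: no deferred execution,
    // which is exactly the IQueryable<T> difference noted above.
    return users;
}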
Same as for ASP.Net vs Desktop application: this shouldn't vary much either. You should use WCF services instead of ASP.Net MVC controllers. Instead of directly calling the BL from your controller, you would call a method of a WCF generated client proxy.
Everything that you could read about 3-tier application should apply for your use case too.
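As a sketch of that wiring (IUserService, IUserBusinessLayer and User are hypothetical names), the WCF service just forwards to the BL, and the desktop application or Windows service calls it through the generated client proxy:

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IUserService
{
    [OperationContract]
    IList<User> GetAllUsers();
}

public class UserService : IUserService
{
    // Constructor injection of the BL abstraction; note that WCF needs an
    // IoC-aware ServiceHostFactory (or a parameterless constructor) to build this.
    private readonly IUserBusinessLayer _businessLayer;

    public UserService(IUserBusinessLayer businessLayer)
    {
        _businessLayer = businessLayer;
    }

    public IList<User> GetAllUsers()
    {
        // Plays the same role an MVC controller action would in a web app.
        return _businessLayer.GetAllUsers();
    }
}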
I would like an opinion about how you see the Repository pattern.
In the "old" Domain conception (for example from P of EAA) the repository should be like an "in-memory collection", so it should always return the same type; so if you need a projection you have to do it outside, which means the projection will be made in the service layer, for example, right? Or is it possible to make it directly in the "Domain" project?
E.g.
public class CustomerRepository
{
    //Constructor accepts an IRepository<Customer>
    public IQueryable<Customer> GetAllDebtors()
    {
        //Use IRepository<Customer> here to make the query
    }
}
Instead, in DDD, the repository, especially combined with CQRS, can return directly the projected type because the repository becomes a denormalization service, right?
E.g.
public class CustomerDenormalizer
{
    //Constructor *could* accept an IRepository<Customer>
    public IQueryable<Debtor> GetAllDebtors()
    {
        //Use IRepository<Customer> here to make the query and the projection
    }
}
IMO, the correspondence with an "in-memory" collection is overemphasized. A repository shouldn't hide the fact that it encapsulates some heavy IO - that would be a leaky abstraction. Furthermore, IQueryable<T> is also a leaky abstraction since hardly any provider will support all the operations. I'd suggest delegating as much of the projecting as possible to the database, because it is quite good at it.
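For example, a reporting-side query can let the database do the work by shaping the result before materializing it; DebtorSummary, MyDbContext and the Balance rule are illustrative:

using System.Collections.Generic;
using System.Linq;

public class DebtorSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Balance { get; set; }
}

public class CustomerReadService
{
    private readonly MyDbContext _context;   // hypothetical LINQ-capable context

    public CustomerReadService(MyDbContext context)
    {
        _context = context;
    }

    public IList<DebtorSummary> GetAllDebtors()
    {
        // Where and Select are translated to SQL, so only the three
        // summary columns of the matching rows are pulled from the database.
        return _context.Customers
            .Where(c => c.Balance < 0)
            .Select(c => new DebtorSummary { Id = c.Id, Name = c.Name, Balance = c.Balance })
            .ToList();
    }
}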
A projection in CQRS is somewhat different. It is usually implemented as an event consumer which updates a data structure stored in some underlying storage mechanism, which could itself be SQL Server or a key-value store. The central difference is that, in this case, a projection responds to events from a message queue. These events could be coming from external systems.
Saying that the Repository in its "original form" has to return only objects of the same entity type is somewhat exaggerated. At all times people have been including Count() methods or other calculations in their Repositories, for instance - and that's something that was documented even back in Evans' blue book.
On the other hand, I'm not sure what you mean by "denormalized types", but if those are types that borrow from several different entities to expose their data as-is or consolidated, or that expose partial views of single domain entities, I tend to think they're not the Domain any more. Oftentimes it turns out they serve application-specific purposes such as generating Excel reports or displaying statistics or summary screens.
If this is the case, and for performance reasons I want to leverage the DB directly instead of relying on the Domain, I prefer to create a separate project where I place all this "reporting" related logic. Never mind if you still name your data access objects Repositories in there (after all, they are also illusions of in-memory collections, only collections of other types).
In the "old" Domain conception (for example from P of EAA) the repository should be like an "in-memory collection", so it should always return the same type; so if you need a projection you have to do it outside, which means the projection will be made in the service layer, for example, right?
In my own solutions, I keep the domain model (that is, a C# expression of the language I learned from the domain expert, almost an internal DSL) distinct from the applicative concerns related to the domain (such as repositories, which cope with the application's need for persistence). This means that they are coded in different projects.
In such a structure, I've tried two different approaches: queryable repositories and custom ones.
Repositories that implement IQueryable have worked very well for developers using the domain to build UIs or services exposed to third parties, but they required a lot of work on the infrastructure side. We used different approaches here, from Linq2NHibernate to re-linq, each with pros and cons, but each one quite expensive. If you plan to use this technique, define good metrics to ensure that the time you save during application development is worth the time you have to spend on custom infrastructure.
Custom repositories (those that expose methods returning IEnumerables) are much easier to design and implement, but they require more effort from UI and service developers. We also had a case where using a custom repository was required by domain rules, since the query objects used to obtain results were specifications that were also part of the ubiquitous language, and we were (legally) required to guarantee that the method used to query was the same one used to express such value and predicate.
However, in custom repositories, we often expose projection methods too.
Or is possible to make it directly into the "Domain" project?
It's possible, but when you have many different (and very complex) bounded contexts to work with, it becomes a pain. This is why I use different projects for the domain classes expressing the ubiquitous language and the classes that serve applicative purposes.
Instead, in DDD, the repository, especially combined with CQRS, can return directly the projected type because the repository becomes a denormalization service, right?
Yes, they can.
That's what we do with custom repositories, even without CQRS. Furthermore, even with some repository implementing IQueryable, we occasionally exposed methods that produce projections directly.
Don't have generic Repository.
Don't have IQueryable.
Have ICustomerRepository.
Have your specific implementation of CustomerRepository leverage all the bells and whistles of the underlying storage system.
Have repository return projections.
public interface ICustomerRepository
{
    IEnumerable<Customer> GetAllCustomers();
    IEnumerable<Debtor> GetAllDebtors();
    IEnumerable<CustomerSummary> GetCustomerSummaryByName(string name);
    IEnumerable<CustomerSummary> GetCustomerSummaryById(string id);
}
I'm pretty new to IoC, Dependency Injection and Unit Testing. I'm starting a new pet project and I'm trying to do it right.
I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a Linq to entities data context (EF4).
I'm reading in "Dependency Injection in .NET" by Mark Seemann that doing so creates an important dependency and will definitely complicate testing (that's why he uses POCO objects in a library project).
I'm not understanding why. Although the objects are created by a LINQ to Entities context, I can create them simply by calling the constructor as if they were normal objects. So I assume that it is possible to create fake repositories that deliver these objects to the caller.
I'm also concerned about the automatic generation of POCO classes, which is not very easy.
Can somebody shed some light on this? Are POCO objects truly necessary for a decoupled and testable project?
EDIT: Thanks to Yuck, I understand that it's better to avoid auto-generation with templates, which brings me to a design question. If I come from a big legacy database whose tables assume a variety of responsibilities (which doesn't fit well with the concept of a class with a single responsibility), what's the best way to deal with that?
Deleting the database is not an option ;-)
No, they're not necessary; they just make things easier and cleaner.
The POCO library won't have any knowledge that it's being used by Entity Framework. This allows it to be used in other ways - in place of a view model, for instance. It also allows you to use the same project on both sides of a WCF service which eliminates the need to create data transfer objects (DTO).
Just two examples from personal experience, but there are surely more. In general, the less a particular object or piece of code knows about who is using it or how it's being used, the more adaptable and generic it will be for other situations.
You also mention automatic generation of POCO classes. I don't recommend doing this. Were you planning to generate the class definitions from your database structure?
I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a LINQ to Entities data context (EF4).
The default classes (not the POCOs) EF generates contain proxies for lazy loading and are tied at the hip to Entity Framework. That means any other class that wants to use those classes will have to reference the required EF assemblies.
This is the dependency Mark Seemann is talking about. Since you are now dependent on these non-abstract types, which in turn are dependent on EF, you cannot simply change the implementation of your repository to something different (i.e. just using your own persistence store) without addressing this change in the classes that depend on these types.
If you are truly only interested in the public properties of the EF generated types then you can have the partial classes generated by EF implement a base interface. Put all the properties you need in that base interface and pass the dependency in as the base interface - now you only depend on the base interface and not EF anymore.
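A small sketch of that idea, with an assumed IUser interface and Id/Name properties:

// In an assembly with no EF reference:
public interface IUser
{
    int Id { get; set; }
    string Name { get; set; }
}

// In the data project: the EF-generated 'partial class User' gets a second
// part that only declares the interface; no generated code is modified.
public partial class User : IUser
{
}

// Consumers depend on IUser, not on the EF-generated type:
public class GreetingService
{
    public string Greet(IUser user)
    {
        return "Hello, " + user.Name;
    }
}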
I recently began reading Pro ASP.NET MVC Framework.
The author talks about creating repositories, and using interfaces to set up quick automated tests, which sounds awesome.
But it carries the problem of having to declare all the fields for each table in the database yourself, twice: once in the actual database, and once in the C# code, instead of auto-generating the C# data access classes with an ORM.
I do understand that this is a great practice, and enables TDD which also looks awesome. But my question is:
Isn't there any workaround for having to declare fields twice, both in the database and in the C# code? Can't I use something that auto-generates the C# code but still allows me to do TDD, without having to manually create all the business logic in C# and create a repository (and a fake one too) for each table?
I understand what you mean: most of the POCO classes that you're declaring to be retrieved by repositories look a whole lot like the classes that get auto-generated by your ORM framework. It is tempting, therefore, to reuse those data-access classes as if they were business objects.
But in my experience, it is rare for the data I need in a piece of business logic to be exactly like the data-access classes. Usually I either need some specific subset of data from a data object, or some combination of data produced by joining a few data objects together. If I'm willing to spend another two minutes actually constructing the POCO that I have in mind, and creating an interface to represent the repository methods I plan to use, I find that the code ends up being much easier to refactor when I need to change business logic.
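A sketch of what those two minutes buy, with OverdueInvoice, IOverdueInvoiceRepository and DunningService as made-up names: the POCO carries exactly the fields the business rule needs, and the interface exposes only the call the logic plans to use, which keeps a fake implementation trivial for testing:

using System;
using System.Collections.Generic;
using System.Linq;

// Only the data this piece of business logic actually needs.
public class OverdueInvoice
{
    public int InvoiceId { get; set; }
    public string CustomerName { get; set; }
    public decimal AmountOutstanding { get; set; }
}

public interface IOverdueInvoiceRepository
{
    IEnumerable<OverdueInvoice> GetOverdueInvoices(DateTime asOf);
}

// The business logic depends on the small interface, not on the ORM classes.
public class DunningService
{
    private readonly IOverdueInvoiceRepository _repository;

    public DunningService(IOverdueInvoiceRepository repository)
    {
        _repository = repository;
    }

    public decimal TotalOutstanding(DateTime asOf)
    {
        return _repository.GetOverdueInvoices(asOf).Sum(i => i.AmountOutstanding);
    }
}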
If you use Entity Framework 4, you can generate POCO objects automatically from the database (Link).
Then you can implement a generic IRepository and a generic SqlRepository; this will allow you to have a repository for all your objects. This is explained here: http://msdn.microsoft.com/en-us/ff714955.aspx
This is a clean way to achieve what you want: you declare your objects only once, in your database, generate them automatically, and can easily access them with your repository (in addition you can do IoC and unit testing :) )
I recommend reading the second edition of this book, which is pure gold and updated with the new features introduced in MVC 2:
http://www.amazon.com/ASP-NET-Framework-Second-Experts-Voice/dp/1430228865/ref=sr_1_1?s=books&ie=UTF8&qid=1289851862&sr=1-1
You should also read about the new features introduced in MVC 3, which is now in RC (there is a new view engine that is really useful): http://weblogs.asp.net/scottgu/archive/2010/11/09/announcing-the-asp-net-mvc-3-release-candidate.aspx
You are not declaring the business logic twice. It's just that this business logic is abstracted behind an interface, and in the implementation of this interface you can do whatever you want: hit the database, read from the filesystem, aggregate information from web addresses, ... This interface allows weaker coupling between the controller and the implementation of the repository and, among other things, simplifies TDD. Think of it as a contract between the controller and the business.
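For instance, a fake repository behind such an interface makes the controller testable without any database; Product, IProductRepository and ProductsController are illustrative, in the spirit of the book's examples:

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Product
{
    public string Name { get; set; }
}

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

// The controller depends only on the interface (the "contract").
public class ProductsController : Controller
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ViewResult Index()
    {
        return View(_repository.GetAll());
    }
}

// Fake used only by tests: no database, no web service.
public class FakeProductRepository : IProductRepository
{
    private readonly List<Product> _products;

    public FakeProductRepository(params Product[] products)
    {
        _products = new List<Product>(products);
    }

    public IEnumerable<Product> GetAll()
    {
        return _products;
    }
}

[TestClass]
public class ProductsControllerTests
{
    [TestMethod]
    public void Index_Lists_All_Products()
    {
        var controller = new ProductsController(new FakeProductRepository(
            new Product { Name = "Apple" },
            new Product { Name = "Pear" }));

        var model = (IEnumerable<Product>)controller.Index().ViewData.Model;

        Assert.AreEqual(2, model.Count());
    }
}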