I am using a pattern where a concrete ViewModel implementing an interface is passed to a repository, which then populates the ViewModel object, but only through the interface. This makes for a slightly heavier repository, but has allowed the repository to be reused in different scenarios. For example, the concrete implementation could be an MVC ViewModel, or it could be an ASP.NET page that implements the interface, where the set accessor for each property actually puts the value into the GUI, such as a textbox. The implementation of the interface serves as the mapping and eliminates an extra copying step. Having used AutoMapper extensively, and now being exposed to this pattern, I prefer this.
public interface IPerson
{
    int Id { set; }
    string Name { set; }
    string Address { set; }
}
public class PersonRepository
{
    public void GetPerson(int id, IPerson person)
    {
        // query...
        person.Id = result.Id;
        person.Name = result.Name;
        person.Address = result.Address;
    }
}
//...controller action
PersonViewModel person = new PersonViewModel();
rep.GetPerson(5, person);
Here comes the tricky part though. Sometimes the ViewModel needs a collection of items, either for an Index page, for something like a drop-down, or to display a nested set of child objects. The repository can't instantiate an interface, so we provide it with a factory. After fighting with covariance for a while, I gave up on exposing any type of collection and ended up with a method that both creates and adds the collection item:
public interface IPerson
{
//...
IJobRole CreateAndAddJobRole();
}
public class PersonViewModel : IPerson
{
    // collection of concrete items, not part of the interface
    public ICollection<JobRole> JobRoles { get; set; } // = new List<JobRole>() in the constructor
    public IJobRole CreateAndAddJobRole()
    {
        var role = new JobRole();
        JobRoles.Add(role);
        return role;
    }
}
public class PersonRepository
{
    public void GetPerson(int id, IPerson person)
    {
        // ... query produces results
        foreach (var result in results)
        {
            IJobRole role = person.CreateAndAddJobRole();
            role.SomeProperty = result.SomeProperty; // ...
        }
    }
}
Obviously I'd probably have the repository that handles job roles actually be the one to populate the collection. I'd probably actually have more granular interfaces, so that different repositories would be responsible for populating the data they deal with; the ViewModel would simply implement multiple interfaces. That is to say, I realize there's room for improvement, but I am here specifically because I don't have any good ideas for dealing with the collection problem.
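For example, a minimal sketch of that split (the interface names here are hypothetical):
public interface IPersonBasics
{
    int Id { set; }
    string Name { set; }
    string Address { set; }
}
public interface IPersonJobRoles
{
    IJobRole CreateAndAddJobRole();
}
// The ViewModel implements both; each repository only asks for the slice it populates.
public class PersonViewModel : IPersonBasics, IPersonJobRoles
{
    // ...
}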
The one benefit of this design is that there is no exposed collection which could be misused by the repository. There is never a guess about who is responsible for instantiating the collection itself or who populates it; and if you exposed just a getter, the repository could get the collection and modify it in an invalid way. I think these would be rare occurrences because the team would know the pattern, but it's always nice to have no pitfalls at all, instead of having pitfalls that everyone has to remember not to step in.
As it is, it feels a little mucky.
How would you design/expose the ability for concrete types to be instantiated and added to a collection, when the method doing so only has knowledge of the interfaces?
It sounds like your best bet is to make each interface generic, and pass in the types of the collections. For example:
public interface IPerson<TJob> where TJob : IJobRole
{
ICollection<TJob> JobRoles {get;set;}
void AddJobRole(TJob role);
}
public class JobRole : IJobRole
{
}
public class PersonViewModel : IPerson<JobRole>
{
    // collection is now part of the interface
    public ICollection<JobRole> JobRoles { get; set; } // = new List<JobRole>() in the constructor
    public void AddJobRole(JobRole role)
    {
        JobRoles.Add(role);
    }
}
public class PersonRepository
{
    public void GetPerson(int id, IPerson<JobRole> person)
    {
        // ...
        foreach (var result in results)
        {
            person.AddJobRole(new JobRole
            {
                SomeProperty = result.SomeProperty, // ...
                SomeOther = result.SomeOther        // ...
            });
        }
    }
}
Of course, this assumes that you know which type of IPerson<> you want when you call GetPerson(). If you need it to handle any IPerson there, though, it becomes more problematic.
I have two data entities which are almost similar; the design is something like this:
public class Entity1 : Base
{
public int layerId;
public List<int> Groups;
}
The difference is that Entity1 has an extra collection of integers, Groups.
public class Entity2 : Base
{
public int layerId;
}
These entities are filled as input from the UI using JSON, and I need to pass them to a processing method which produces the same output entity. The method has logic to handle the case where List<int> Groups is null. I need to create a method which is capable of handling each of the inputs in an elegant manner. I cannot just use Entity1, since they are two different functional inputs for different business processes, so using Entity1 as a direct replacement would be a misrepresentation.
Instead of creating overloads of the function, I can think of the following options:
Use object type as input and typecast in the function internally
We could similarly use dynamic types, but the solution would be much the same as above; it will not be clean in either case, and it comes with a switch-case mess.
What I am currently doing: the processing method looks like this:
public OutputEntity ProcessMethod(Entity1 entity)
{
// Data Processing
}
I have created a constructor of Entity1 that takes Entity2 as input.
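Roughly, that constructor just copies the shared fields and leaves Groups null, since ProcessMethod already handles that case (a sketch):
public Entity1(Entity2 other)
{
    layerId = other.layerId;
    Groups = null; // ProcessMethod already handles a null Groups list
}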
Any suggestions for an elegant solution which can handle multiple such entities? Maybe using generics, where we use a Func delegate to create a common type out of two or more entities, which is close to what I have currently done. Something like:
Func<T, Entity1>
Then use the Entity1 output for further processing in the logic.
I need to create a method which is capable of handling each of the input in an elegant manner
Create an interface, a contract so to speak, that each entity adheres to. That way the common functionality can be processed in the same manner. Each difference is then expressed in other interfaces; test for those interfaces and handle the differences accordingly.
Maybe using generics,
Generic types can be constrained to interfaces, and a clean method of operation then follows.
For example, say we have two entities that both have a string Name property, but only one of them has order information. So we define the common interfaces:
public interface IName
{
string Name { get; set; }
string FullName { get; }
}
public interface IOrder
{
decimal Amount { get; set; }
}
So once we have our two entities, EntityName and EntityOrder, we can add the interfaces to them, usually via a partial class definition, such as when EF generates the classes on the fly:
public partial class EntityName : IName
{
// Nothing to do EntityName already defines public string Name { get; set; }
public string FullName { get { return "Person: " + Name; }}
}
public partial class EntityOrder : IName, IOrder
{
// Nothing to do Entity Order already defines public string Name { get; set; }
// and Amount.
public string FullName { get { return "Order: " + Name; } }
}
Then we can process each of them together in the same method
public void Process(IName entity)
{
LogOperation( entity.FullName );
// If we have an order process it uniquely
var order = entity as IOrder;
if (order != null)
{
LogOperation( "Order: " + order.Amount.ToString() );
}
}
Generic methods can enforce one or more interfaces, such as:
public void Process<T>(T entity) where T : IName
{
// Same as before but we are ensured that only elements of IName
// are used as enforced by the compiler.
}
Just create a generic method that will do this work for you:
List<OutputEntity> MyMethod<T>(T value) where T : Base
// adding this constraint ensures that T is of type that is derived from Base type
{
List<OutputEntity> result = new List<OutputEntity>();
// some processing logic here like ...
return result;
}
var resultForEntity1 = MyMethod(entity1);
var resultForEntity2 = MyMethod(entity2);
P.S. check my answer for this question as you may find it useful too:
map string to entity for using with generic method
You probably want to implement an interface or an abstract class.
From MSDN
If you anticipate creating multiple versions of your component, create
an abstract class. Abstract classes provide a simple and easy way to
version your components. By updating the base class, all inheriting
classes are automatically updated with the change. Interfaces, on the
other hand, cannot be changed once created. If a new version of an
interface is required, you must create a whole new interface.
If the functionality you are creating will be useful across a wide range of
disparate objects, use an interface. Abstract classes should be used
primarily for objects that are closely related, whereas interfaces are
best suited for providing common functionality to unrelated classes.
If you are designing small, concise bits of functionality, use
interfaces. If you are designing large functional units, use an
abstract class.
If you want to provide common, implemented
functionality among all implementations of your component, use an
abstract class. Abstract classes allow you to partially implement your
class, whereas interfaces contain no implementation for any members.
Abstract Class Example
Cat and Dog can both inherit from an abstract class Animal, and this abstract base class will implement a method void Breathe() which all animals will thus perform in exactly the same fashion. (You might make this method virtual so that you can override it for certain animals, like Fish, which does not breathe the same way as most animals.)
Interface Example
All animals can be fed, so you'll create an interface called IFeedable and have Animal implement that. Only Dog and Horse are nice enough though to implement ILikeable - You'll not implement this on the base class, since this does not apply to Cat.
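A minimal sketch of both examples, using the names from the description above:
public interface IFeedable
{
    void Feed();
}
public interface ILikeable
{
    void Like();
}
public abstract class Animal : IFeedable
{
    // Every animal breathes the same way by default; virtual so Fish can override it.
    public virtual void Breathe() { /* inhale, exhale */ }
    // All animals can be fed, so the base class implements IFeedable once.
    public void Feed() { /* common feeding logic */ }
}
public class Cat : Animal { }
public class Dog : Animal, ILikeable
{
    public void Like() { /* ... */ }
}
public class Horse : Animal, ILikeable
{
    public void Like() { /* ... */ }
}
public class Fish : Animal
{
    public override void Breathe() { /* extracts oxygen from water instead */ }
}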
I'm working on a quite large application. The domain has about 20-30 types, implemented as ORM classes (for example EF Code First or XPO, doesn't matter for the question). I've read several articles and suggestions about a generic implementation of the repository pattern and combining it with the unit of work pattern, resulting a code something like this:
public interface IRepository<T> {
IQueryable<T> AsQueryable();
IEnumerable<T> GetAll(Expression<Func<T, bool>> filter);
T GetByID(int id);
T Create();
void Save(T entity);
void Delete(T entity);
}
public interface IMyUnitOfWork : IDisposable {
void CommitChanges();
void DropChanges();
IRepository<Product> Products { get; }
IRepository<Customer> Customers { get; }
}
Is this pattern suitable for really large applications? Every example has about 2, maximum 3 repositories in the unit of work. As far as I understand the pattern, at the end of the day the number of repository references (lazily initialized in the implementation) is equal (or nearly equal) to the number of domain entity classes, so that one can use the unit of work for complex business logic implementations. So, for example, let's extend the above code like this:
public interface IMyUnitOfWork : IDisposable {
...
IRepository<Customer> Customers { get; }
IRepository<Product> Products { get; }
IRepository<Orders> Orders { get; }
IRepository<ProductCategory> ProductCategories { get; }
IRepository<Tag> Tags { get; }
IRepository<CustomerStatistics> CustomerStatistics { get; }
IRepository<User> Users { get; }
IRepository<UserGroup> UserGroups { get; }
IRepository<Event> Events { get; }
...
}
How many repositories can be referenced before one starts to think about code smell? Or is it totally normal for this pattern? I could probably separate this interface into 2 or 3 different interfaces all implementing IUnitOfWork, but then the usage would be less comfortable.
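Such a split might look roughly like this (the grouping and interface names are purely illustrative):
// The base unit of work interface would keep only CommitChanges/DropChanges and IDisposable.
public interface ISalesUnitOfWork : IMyUnitOfWork
{
    IRepository<Customer> Customers { get; }
    IRepository<Product> Products { get; }
    IRepository<Orders> Orders { get; }
}
public interface ISecurityUnitOfWork : IMyUnitOfWork
{
    IRepository<User> Users { get; }
    IRepository<UserGroup> UserGroups { get; }
}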
UPDATE
I've checked the basically nice solution here recommended by @qujck. My problem with the dynamic repository registration and "dictionary-based" approach is that I would like to keep direct references to my repositories, because some of the repositories will have special behaviour. So when I write my business code, I would like to be able to use it like this, for example:
using (var uow = new MyUnitOfWork()) {
var allowedUsers = uow.Users.GetUsersInRole("myRole");
// ... or
var clothes = uow.Products.GetInCategories("scarf", "hat", "trousers");
}
So here I'm benefiting from having strongly typed IRepository<User> and IRepository<Product> references, hence I can use the special methods (implemented as extension methods or by inheriting from the base interface). If I use a dynamic repository registration and retrieval method, I think I'm going to lose this, or at least have to do some ugly casting all the time.
As for DI, I would try to inject a repository factory into my real unit of work, so it can lazily instantiate the repositories.
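A rough sketch of that factory-based lazy instantiation (the factory interface and names here are assumptions, not part of the original design):
public interface IRepositoryFactory
{
    IRepository<T> Create<T>();
}
public class MyUnitOfWork : IMyUnitOfWork
{
    private readonly IRepositoryFactory _factory;
    private IRepository<User> _users;
    public MyUnitOfWork(IRepositoryFactory factory)
    {
        _factory = factory;
    }
    // Lazily created on first access; unused repositories are never instantiated.
    public IRepository<User> Users
    {
        get { return _users ?? (_users = _factory.Create<User>()); }
    }
    // ... the other repository properties follow the same pattern
    public void CommitChanges() { /* ... */ }
    public void DropChanges() { /* ... */ }
    public void Dispose() { /* ... */ }
}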
Building on my comments above and on top of the answer here.
With a slightly modified unit of work abstraction
public interface IMyUnitOfWork
{
void CommitChanges();
void DropChanges();
IRepository<T> Repository<T>();
}
You can expose named repositories and specific repository methods with extension methods
public static class MyRepositories
{
public static IRepository<User> Users(this IMyUnitOfWork uow)
{
return uow.Repository<User>();
}
public static IRepository<Product> Products(this IMyUnitOfWork uow)
{
return uow.Repository<Product>();
}
public static IEnumerable<User> GetUsersInRole(
this IRepository<User> users, string role)
{
return users.AsQueryable().Where(x => true).ToList();
}
public static IEnumerable<Product> GetInCategories(
this IRepository<Product> products, params string[] categories)
{
return products.AsQueryable().Where(x => true).ToList();
}
}
That provides access to the data as required:
using(var uow = new MyUnitOfWork())
{
var allowedUsers = uow.Users().GetUsersInRole("myRole");
var result = uow.Products().GetInCategories("scarf", "hat", "trousers");
}
The way I tend to approach this is to move the type constraint from the repository class to the methods inside it. That means that instead of this:
public interface IMyUnitOfWork : IDisposable
{
IRepository<Customer> Customers { get; }
IRepository<Product> Products { get; }
IRepository<Orders> Orders { get; }
...
}
I have something like this:
public interface IMyUnitOfWork : IDisposable
{
T Get<T>(/* some kind of filter expression in T */);
void Add<T>(T item);
void Update<T>(T item);
void Delete<T>(/* some kind of filter expression in T */);
...
}
The main benefit of this is that you only need one data access object on your unit of work. The downside is that you don't have type-specific methods like Products.GetInCategories() any more. This can be problematic, so my solution to this is usually one of two things.
Separation of concerns
First, you can rethink where the separation between "data access" and "business logic" lies, so that you have a logic-layer class ProductService that has a method GetInCategory() that can do this:
using (var uow = new MyUnitOfWork())
{
var productsInCategory = uow.GetAll<Product>(p => new[] { "scarf", "hat", "trousers" }.Contains(p.Category));
}
Your data access and business logic code is still separate.
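Wrapped up, the logic-layer class might look something like this (a sketch; the exact shape of ProductService is an assumption):
public class ProductService
{
    public IEnumerable<Product> GetInCategory(params string[] categories)
    {
        using (var uow = new MyUnitOfWork())
        {
            // The unit of work only sees a filter expression; the business knowledge
            // of what a category search means lives here in the service.
            return uow.GetAll<Product>(p => categories.Contains(p.Category)).ToList();
        }
    }
}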
Encapsulation of queries
Alternatively, you can implement a specification pattern, so you can have a namespace MyProject.Specifications in which there is a base class Specification<T> that has a filter expression somewhere internally, so that you can pass it to the unit of work object and that UoW can use the filter expression. This lets you have derived specifications, which you can pass around, and now you can write this:
using (var uow = new MyUnitOfWork())
{
var searchCategories = new Specifications.Products.GetInCategories("scarf", "hat", "trousers");
var productsInCategories = uow.GetAll<Product>(searchCategories);
}
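The specification base class itself can stay very small, for example (a sketch; the member names, and a GetAll overload on the unit of work that accepts a specification, are assumptions):
namespace MyProject.Specifications
{
    public abstract class Specification<T>
    {
        // The unit of work would accept a Specification<T> and apply this expression to its query.
        public abstract Expression<Func<T, bool>> ToExpression();
    }
}
namespace MyProject.Specifications.Products
{
    public class GetInCategories : Specification<Product>
    {
        private readonly string[] _categories;
        public GetInCategories(params string[] categories)
        {
            _categories = categories;
        }
        public override Expression<Func<Product, bool>> ToExpression()
        {
            return p => _categories.Contains(p.Category);
        }
    }
}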
If you want a central place to keep commonly used logic like "get user by role" or "get products in category", then instead of keeping it in your repository (which should be pure data access, strictly speaking) you could have those methods on the objects themselves. For example, Product could have a static method or an extension method InCategories(params string[]) that returns a Specification<Product> or even just a filter such as Expression<Func<Product, bool>>, allowing you to write the query like this:
using (var uow = new MyUnitOfWork())
{
var productsInCategory = uow.GetAll(Product.InCategories("scarf", "hat", "trousers"));
}
(Note that this is still a generic method, but type inference will take care of it for you.)
This keeps all the query logic on the object being queried (or on an extensions class for that object), which still keeps your data and logic code nicely separated by class and by file, whilst allowing you to share it as you have been sharing your IRepository<T> extensions previously.
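For example, the InCategories filter used above could live on a partial Product class (a sketch; the Category property is assumed):
public partial class Product
{
    // Returns a filter expression that the unit of work's GetAll can consume directly.
    public static Expression<Func<Product, bool>> InCategories(params string[] categories)
    {
        return p => categories.Contains(p.Category);
    }
}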
Example
To give a more specific example, I'm using this pattern with EF. I didn't bother with specifications; I just have service classes in the logic layer that use a single unit of work for each logical operation ("add a new user", "get a category of products", "save changes to a product" etc). The core of it looks like this (implementations omitted for brevity and because they're pretty trivial):
public class EFUnitOfWork: IUnitOfWork
{
private DbContext _db;
public EFUnitOfWork(DbContext context) {...}
public void Add<T>(T item) where T : class, new() {...}
public void AddAll<T>(IEnumerable<T> items) where T : class, new() {...}
public T Get<T>(Expression<Func<T, bool>> filter) where T : class, new() {...}
public IQueryable<T> GetAll<T>(Expression<Func<T, bool>> filter = null) where T : class, new() {...}
public void Update<T>(T item) where T : class, new() {...}
public void Remove<T>(Expression<Func<T, bool>> filter) where T : class, new() {...}
public void Commit() {...}
public void Dispose() {...}
}
Most of those methods use _db.Set<T>() to get the relevant DbSet, and then just query it with LINQ using the provided Expression<Func<T, bool>>.
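For instance, GetAll is typically little more than this (a sketch of such an implementation):
public IQueryable<T> GetAll<T>(Expression<Func<T, bool>> filter = null) where T : class, new()
{
    // _db.Set<T>() gives the DbSet for the entity type; apply the filter only if one was supplied.
    IQueryable<T> set = _db.Set<T>();
    return filter == null ? set : set.Where(filter);
}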
Well, I've had to rewrite this as I've been down voted five times for giving too much detail... Go figure!
class BaseModel
{
public T[] Get<T>()
{
// return array of T's
}
public T Find<T>(object param)
{
// return T based on param
}
public T New<T>()
{
// return a new instance of T
}
}
class BaseRow
{
private BaseModel _model;
public BaseRow(SqlDataReader rdr, BaseModel model)
{
// populate properties of inheriting type using rdr column values
}
public void Save()
{
// calls _model.Save(this);
}
}
I currently have a number of classes that inherit the BaseModel class. Each of the methods exposed by BaseModel will return an instance, or an array of instances of a type that inherits the BaseRow class.
At the moment, when calling the exposed methods on the BaseModel via an inheriting class, i.e.
using(DeviceModel model = new DeviceModel())
{
DeviceRow row = model.Find<DeviceRow>(1);
DeviceRow[] rows = model.Get<DeviceRow>();
DeviceRow newRow = model.New<DeviceRow>();
}
I have to specify the type (a class that inherits the BaseRow class), as the methods in BaseModel/BaseRow do not know/care what type they are, other than they inherit from BaseRow.
What I would like to do is find a way to remove the need to specify the type parameter, without having to replicate code in every class that inherits BaseModel, i.e.
class DeviceModel : BaseModel
{
public DeviceRow Find(object param)
{
return this.Find<DeviceRow>(param);
}
}
Note: Unfortunately I am unable to implement or use any third party solutions. That said, I have tried using Castle Active Record/nHibernate and to be honest, they are very big and heavy for what should be a very simple system.
Hopefully I haven't provided "too much" detail. If I have, please let me know.
Thanks
If I were you, I'd suggest making BaseModel a generic class. In a situation of "can't win either way", the code you've removed to make others happy might have told me more about what you're doing (not a criticism by any stretch - I appreciate your position).
class BaseModel<T>
{
public virtual T[] Get()
{
// return array of T's
}
public virtual T Find(object param)
{
// return T based on param
}
public virtual T New()
{
// return a new instance of T
}
}
That's your base, and then you have inheritors like:
class DeviceModel : BaseModel<Device>
{
public override Device New()
{
return new Device();
}
}
Now, any generic operations you define in DeviceModel will default to returning or using strongly typed Device. Notice the virtual methods in the BaseModel class. In the base class methods, you might provide some basic operations predicated upon using T's or something. In sub-classes, you can define more specific, strongly typed behavior.
I'd also comment that you might want to pull back a little and consider the relationship of BaseModel and BaseRow. It appears that you're defining a parallel inheritance hierarchy, which can tend to be a code smell (this is where more of your code might have come in handy -- I could be wrong about how you're using this). If your ongoing development prospects are that you're going to need to add a FooRow every time you add a FooModel, that's often a bad sign.
I have 2 classes, Customer and Person, which share exactly the same properties; these properties should be filled in from a request.
I would like to write a generic method like
//Usage: Customer myCustomer = FillPropertiesOfCustomerOrPerson<Customer>(myRequest);
//Usage: Person myPerson = FillPropertiesOfCustomerOrPerson<Person>(myRequest);
public static T FillPropertiesOfCustomerOrPerson<T>(Request request)
{
//not sure how I would I do it to fill the properties.
// T a = default(T);
//a.Name=request.Name;
//a.Surname=request.Surname;
// if (a is Customer)
//{
//?
/// }
return (T)a;
}
How would you write this generic method to avoid having 2 methods (one for customer and one for person)?
Edit
I have no control over these classes. I just need to fill the properties and I was wondering if I could write a generic method rather than 2 specific ones.
Given the requirements, your options are somewhat limited. If you don't want to make use of the dynamic keyword (due to .NET version or whatever), you could do this old-style and use reflection. A possible implementation of that follows:
private const string PROP_NAME = "Name";
private const string PROP_SURNAME = "Surname";
public static T FillPropertiesOfCustomerOrPerson<T>(Request request)
where T : new()
{
if (typeof(T) != typeof(Person) && typeof(T) != typeof(Customer))
{
throw new Exception(
string.Format("{0} is not a supported type.", typeof(T).Name)
);
}
PropertyInfo name = typeof(T).GetProperty(PROP_NAME);
PropertyInfo surname = typeof(T).GetProperty(PROP_SURNAME);
T t = new T();
name.SetValue(t, request.Name, null);
surname.SetValue(t, request.Surname, null);
return t;
}
Optionally, you could remove the where T : new() and replace the instantiation code with something like this:
T t = (T)Activator.CreateInstance(typeof(T));
+1 for CodeInChaos.
You should put the responsibility for reading properties from Request with the classes that have the properties.
The best way would be to have either a base class or an interface that provides you with, let's say, a FillProperties method that takes a Request. Then put a restriction on T for your method that specifies the base class or interface, and call FillProperties(request) on the instance.
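That might look roughly like this (a sketch; IFillable is a hypothetical name for the contract):
public interface IFillable
{
    void FillProperties(Request request);
}
public static T FillPropertiesOfCustomerOrPerson<T>(Request request)
    where T : IFillable, new()
{
    // Each class owns the knowledge of how to fill itself from a Request.
    var result = new T();
    result.FillProperties(request);
    return result;
}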
If you can't fix the design, I'd use duck typing instead. C# 4 has the dynamic keyword. You'll lose type safety, and refactoring support, but at least you don't repeat yourself.
public static void FillPropertiesOfCustomerOrPerson(dynamic person, Request request)
{
person.Name=request.Name;
person.Surname=request.Surname;
}
Have Person and Customer either inherit from a base class (e.g. Customer : Person) or implement a common interface. You can then just have your method accept the base type instead:
public static Person FillPropertiesOfPerson(Request request)
{
Person returnValue = new Person();
returnValue.Name = request.Name;
// etc...
return returnValue;
}
Note that if Person and Customer are partial classes (for example, the proxy classes generated when you consume a web service) then you can use the partial nature of these classes to do this:
// Define your interface to match the properties which are common to both types
interface IPerson
{
string Name
{
get;
set;
}
}
// Then use partial declarations like this (must be in the same namespace as the generated classes)
public partial class Person : IPerson { }
public partial class Customer : IPerson { }
This will only work if Person and Customer declare exactly the same properties (and are partial classes obviously!), however if there are some slight mismatches then you can use your partial definition to do some "fudging."
Failing that, the only method I'm aware of is to use reflection to set the properties; however, this isn't type-safe, would incur some performance penalty, and is all-round not a great idea (I'd probably rather write two identical methods than resort to reflection for something like this).
I have a fairly simple system, and for the purposes of this question there are essentially three parts: Models, Repositories, Application Code.
At the core are the models. Let's use a simple contrived example:
public class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
}
In that same project is a generic repository interface. At its simplest:
public interface IRepository<T>
{
T Save(T model);
}
Implementations of that interface are in a separate project and injected with StructureMap. For simplicity:
public class PersonRepository : IRepository<Person>
{
public Person Save(Person model)
{
throw new NotImplementedException("I got to the save method!");
// In the repository methods I would interact with the database, or
// potentially with some other service for data persistence. For
// now I'm just using LINQ to SQL to a single database, but in the
// future there will be more databases, external services, etc. all
// abstracted behind here.
}
}
So, in application code, if I wanted to save a model I would do this:
var rep = IoCFactory.Current.Container.GetInstance<IRepository<Person>>();
myPerson = rep.Save(myPerson);
Simple enough, but it feels like it could be automated a lot. That pattern holds throughout the application code, so what I'm looking to do is create a single generic Save() on all models which would just be a shorthand call to the above application code. That way one would need only call:
myPerson.Save();
But I can't seem to figure out a way to do it. Maybe it's deceptively simple and I'm just not looking at it from the correct angle. At first I tried creating an empty ISaveableModel<T> interface and intended to have each "save-able" model implement it, then for the single generic Save() method I would have an extension on the interface:
public static void Save<T>(this ISaveableModel<T> model)
{
var rep = IoCFactory.Current.Container.GetInstance<IRepository<T>>();
model = rep.Save(model);
}
But it tells me that rep.Save(model) has invalid arguments. It seems that it's not wiring up the type inference as I'd hoped it would. I tried a similar approach with a BaseModel<T> class from which models would inherit:
public class BaseModel<T>
{
public void Save()
{
this = IoCFactory.Current.Container.GetInstance<IRepository<T>>().Save(this);
}
}
But the compiler error is the same. Is there a way to achieve what I'm trying to achieve? I'm very flexible on the design, so if I'm going about something all wrong on an architectural level then I have room to step back and change the big picture.
Would a generic extension method solve it?
public static T Save<T>(this T current)
{
    var rep = IoCFactory.Current.Container.GetInstance<IRepository<T>>();
    return rep.Save(current);
}
You can then constrain it to your ISaveableModel<T> interface. The version above simply returns whatever the repository's Save gives back, but you could change it to a boolean or a status flag, whatever suits.
In both approaches, the parameter to the Save() function is not of type T. In the first one it is ISaveableModel<T>, and in the second it is BaseModel<T>. Since the repository is generic in T, its Save method expects a variable of type T. You can add a simple cast to T before you call Save to fix it.
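Concretely, the first extension method compiles once the argument is cast back to T; for example (a sketch, assuming a self-referencing constraint so the cast is safe by construction):
public static void Save<T>(this ISaveableModel<T> model) where T : ISaveableModel<T>
{
    var rep = IoCFactory.Current.Container.GetInstance<IRepository<T>>();
    rep.Save((T)model); // explicit cast from the interface back to the concrete model type
}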
Alternatively, your IRepostory<T> can be changed to
public interface IRepository<T>
{
T Save(ISaveableModel<T> model);
}
which makes more sense.