We have a data layer that contains classes generated from database outputs (tables/views/procs/functions). The tables in the database are normalized and designed in an OOP-like way (the table for "invoice" has a 1:1 relation to the table for "document", the table for "invoice-item" has a 1:1 relation to the table for "document-item", and so on). All access to and from the database goes through stored procedures (for simple tables too).
A typical class looks like this (shortened):
public class DocumentItem {
    public Guid? ItemID { get; set; }
    public Guid? IDDocument { get; set; }
    public DateTime? LastChange { get; set; }
}

public class InvoiceItem : DocumentItem {
    public Guid? IDProduct { get; set; }
    public decimal? Price { get; set; }
}
The problem is that the database tables have relations similar to multiple inheritance in OOP. Right now we create a new class for every database output, but every database output is a combination of the "pure" tables in the database.
The ideal solution would be (IMHO) to transform the classes into interfaces, use multiple interface implementation, and then automatically implement the members (these "table classes" have only properties, and the property bodies are always the same).
For example:
public interface IItem {
    Guid? ItemID { get; set; }
    DateTime? LastChange { get; set; }
}

public interface IDocumentItem : IItem {
    Guid? IDDocument { get; set; }
}

public interface IItemWithProduct : IItem {
    Guid? IDProduct { get; set; }
}

public interface IItemWithRank : IItem {
    string Rank { get; set; }
}

public interface IItemWithPrice : IItem {
    decimal? Price { get; set; }
}

// example of a "final" item interface
public interface IStorageItem : IDocumentItem, IItemWithProduct, IItemWithRank { }

// example of a "final" item interface
public interface IInvoiceItem : IDocumentItem, IItemWithProduct, IItemWithPrice { }

// the result should be an object of a class which implements "IInvoiceItem"
object myInvoiceItem = SomeMagicClass.CreateClassFromInterface( typeof( IInvoiceItem ) );
The database contains hundreds of tables, and the whole solution is composed of dynamically loaded modules (100+ modules).
What do you think is the best way to deal with this?
EDIT:
Using partial classes is a good tip, but it cannot be used in our solution, because "IDocumentItem" and "IItemWithPrice" (for example) live in different assemblies.
Now, if we make a change to the "DocumentItem" table, we must re-generate the source code in all dependent assemblies. There is almost no reuse (because we cannot use multiple inheritance). It's quite time consuming when there are dozens of dependent assemblies.
I think it is a bad idea to automatically generate your domain model from your database schema.
So, you're really looking for some kind of mix-in technology. Of course, I have to ask why you aren't using Entity Framework (LINQ to Entities) or NHibernate. O/RMs handle these problems by mapping the relational model into usable data structures that have APIs to support all of the transactions you'll need to manipulate data in the database. But I digress.
If you are really looking for a mix-in technology to do dynamic code generation, check out Cecil at the Mono Project. It's a way better place to start than trying to use Reflection.Emit to build dynamic classes. There are other dynamic code generators out there but you may want to start with Cecil since the documentation is pretty good.
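To give a feel for what the dynamic generation involves, here is a minimal sketch using plain Reflection.Emit (as noted above, Cecil may be a more comfortable starting point). It assumes the interfaces declare only read/write properties; InterfaceImplementer is a made-up name standing in for the "SomeMagicClass" from the question:

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Reflection.Emit;

public static class InterfaceImplementer
{
    // Builds a concrete class that auto-implements every get/set property
    // declared on the given interface and its inherited interfaces.
    public static object CreateInstance(Type interfaceType)
    {
        var assemblyName = new AssemblyName("DynamicEntities");
        var assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.Run);
        var module = assembly.DefineDynamicModule("MainModule");
        var typeBuilder = module.DefineType("Dynamic_" + interfaceType.Name,
            TypeAttributes.Public | TypeAttributes.Class);
        typeBuilder.AddInterfaceImplementation(interfaceType);

        var interfaces = new List<Type> { interfaceType };
        interfaces.AddRange(interfaceType.GetInterfaces());

        foreach (var itf in interfaces)
        {
            foreach (var prop in itf.GetProperties())
            {
                var field = typeBuilder.DefineField("_" + prop.Name, prop.PropertyType, FieldAttributes.Private);
                var property = typeBuilder.DefineProperty(prop.Name, PropertyAttributes.None, prop.PropertyType, null);

                var attrs = MethodAttributes.Public | MethodAttributes.Virtual |
                            MethodAttributes.SpecialName | MethodAttributes.HideBySig;

                // getter: return the backing field
                var getter = typeBuilder.DefineMethod("get_" + prop.Name, attrs, prop.PropertyType, Type.EmptyTypes);
                var getIl = getter.GetILGenerator();
                getIl.Emit(OpCodes.Ldarg_0);
                getIl.Emit(OpCodes.Ldfld, field);
                getIl.Emit(OpCodes.Ret);

                // setter: store the value into the backing field
                var setter = typeBuilder.DefineMethod("set_" + prop.Name, attrs, null, new[] { prop.PropertyType });
                var setIl = setter.GetILGenerator();
                setIl.Emit(OpCodes.Ldarg_0);
                setIl.Emit(OpCodes.Ldarg_1);
                setIl.Emit(OpCodes.Stfld, field);
                setIl.Emit(OpCodes.Ret);

                property.SetGetMethod(getter);
                property.SetSetMethod(setter);
                typeBuilder.DefineMethodOverride(getter, prop.GetGetMethod());
                typeBuilder.DefineMethodOverride(setter, prop.GetSetMethod());
            }
        }

        return Activator.CreateInstance(typeBuilder.CreateType());
    }
}

// usage:
// IInvoiceItem item = (IInvoiceItem)InterfaceImplementer.CreateInstance(typeof(IInvoiceItem));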
If you wish to continue auto-generating from the database and want to model multiple inheritance, then I think you have the right idea: alter the tool to spit out interfaces with multiple inheritance, plus the N concrete implementations.
You indicated elsewhere that a convention for inheritance vs. aggregation is enforced, and (as I understand) you know exactly how the resulting interfaces and classes should look. I understand that business rules are implemented elsewhere (maybe in a business rules engine?), so regenerating the classes should not require changes to dependent code, unless you want to take advantage of those changes, or existing properties have been altered or removed.
But you won't be done. Your classes will still have IDs of related entities. If you want to make things easier for client code, you should have references to related entities (not caring about the related entity's ID), like this:
public class Person {
    public Guid? PersonID { get; set; }
    public Person Parent { get; set; }
}
That would make things easier on the client. When you think about it, going from IDs to references is work you have to do anyway; it's better to do it once in the middle tier than to let the client do it N times. Plus, this makes your code less database-dependent.
So above all else, I recommend writing an OO wrapper for the auto-generated classes. You would program against this OO wrapper for almost everything; let only the data access layer interact with the auto-generated classes. Sure, you can't reuse inheritance metadata in the database (specified via conventions, I assume?), but at least you won't be carving a new path.
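To illustrate the wrapper idea, here is a rough sketch only; InvoiceItemModel, IProductRepository, Product and GetByID are made-up names, not part of the original code:

// Wraps the auto-generated InvoiceItem and resolves foreign keys to references.
public class InvoiceItemModel
{
    private readonly InvoiceItem _data;            // the auto-generated class
    private readonly IProductRepository _products; // hypothetical lookup abstraction

    public InvoiceItemModel(InvoiceItem data, IProductRepository products)
    {
        _data = data;
        _products = products;
    }

    public Guid? ItemID
    {
        get { return _data.ItemID; }
    }

    public decimal? Price
    {
        get { return _data.Price; }
        set { _data.Price = value; }
    }

    // Clients get a Product reference instead of a raw foreign key.
    public Product Product
    {
        get
        {
            return _data.IDProduct.HasValue
                ? _products.GetByID(_data.IDProduct.Value)
                : null;
        }
    }
}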
By contrast, what you have now looks like an anemic data model or worse.
The scenario is unclear to me.
If the code is generated, you don't need any magic: add some metadata to your database objects (e.g. Extended Properties in SQL Server) that flags the "basic" interfaces, and modify your generating template/tool to consider the flags.
If the question is about multiple inheritance, you are out of luck with .Net.
If the code is generated, you may also take advantage of partial classes and methods (are you using .Net 3.5?) to produce code in different source files.
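For example, a minimal sketch of the partial-class split (the file names and the IsNew member are only illustrative):

using System;

// DocumentItem.generated.cs -- regenerated by the tool whenever the table changes
public partial class DocumentItem
{
    public Guid? ItemID { get; set; }
    public Guid? IDDocument { get; set; }
    public DateTime? LastChange { get; set; }
}

// DocumentItem.cs -- hand-written, never touched by the generator
public partial class DocumentItem
{
    public bool IsNew
    {
        get { return ItemID == null; }
    }
}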
If you need to generate code at run-time there are many techniques, not least ORM tools.
Now, could you be a bit more explicit about your design context?
So, I've got an aggregate (Project) that has a collection of entities (ProjectVariables) in it. The variables do not have IDs on them because they have no identity outside of the Project Aggregate Root.
public class Project
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<ProjectVariable> ProjectVariables { get; set; }
}

public class ProjectVariable
{
    public string Key { get; set; }
    public string Value { get; set; }
    public List<string> Scopes { get; set; }
}
The user interface for the project is an Angular web app. A user visits the details for the project and can add/remove/edit the project variables. He can change the name. No changes persist to the database until the user clicks save and the web app posts some JSON to the backend, which in turn passes it down to the domain.
In accordance with DDD, it's proper practice to have small, succinct methods on the Aggregate roots that make atomic changes to them. An example in this domain could be a method Project.AddProjectVariable(projectVariable).
To keep to this practice, the front-end app needs to track changes and submit them, something like this:
public class SaveProjectCommand
{
    public string NewName { get; set; }
    public List<ProjectVariable> AddedProjectVariables { get; set; }
    public List<ProjectVariable> RemovedProjectVariables { get; set; }
    public List<ProjectVariable> EditedProjectVariables { get; set; }
}
I suppose it's also possible to post the now edited Project, retrieve the original Project from the repo, and diff them, but that seems a little ridiculous.
This object would get translated into Service Layer methods, which would call methods on the Aggregate root to accomplish the intended behaviors.
So, here's where my questions come in...
ProjectVariables have no Id. They are transient objects. If I need to remove them, as passed in from the UI change tracking, how do I identify the ones that need to be removed on the Aggregate? Again, they have no identity. I could add surrogate IDs to the ProjectVariables entity, but that seems wrong and dirty.
Does change tracking in my UI seem like it's making the UI do too much?
Are there alternative mechanisms? One thought was to just replace all of the ProjectVariables in the Project Aggregate Root every time it's saved. Wouldn't that have me adding a Project.ClearVariables() and then using Project.AddProjectVariable() to replace them? Project.ReplaceProjectVariables(List) seems very "CRUDish".
Am I missing a key component? It seems to me that DDD atomic methods don't mesh well with a pattern where you can make a number of different changes to an entity before committing it.
In accordance with DDD, it's proper practice to have small, succinct methods on the Aggregate roots that make atomic changes to them.
I wouldn't phrase it that way. The methods should, as much as possible, reflect cohesive operations that have a domain meaning and correspond with a verb or noun in the ubiquitous language. But the state transitions that happen as a consequence are not necessarily small, they can change vast swaths of Aggregate data.
I agree that it is not always feasible though. Sometimes, you'll just want to change some entities field by field. If it happens too much, maybe it's time to consider changing from a rich domain model approach to a CRUD one.
ProjectVariables have no Id. They are transient objects.
So they are probably Value Objects instead of Entities.
You usually don't modify Value Objects but replace them (especially if they're immutable). Project.ReplaceProjectVariables(List) or some equivalent is probably your best option here. I don't see it as being too CRUDish. Pure CRUD here would mean that you only have a setter on the Variables property and are not even allowed to create a method and name it as you want.
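A minimal sketch of that direction, treating ProjectVariable as an immutable value object (the exact shape below is an assumption, not taken from the question):

using System;
using System.Collections.Generic;

public class Project
{
    private readonly List<ProjectVariable> _projectVariables = new List<ProjectVariable>();

    public Guid Id { get; private set; }
    public string Name { get; private set; }

    public IReadOnlyList<ProjectVariable> ProjectVariables
    {
        get { return _projectVariables; }
    }

    // One intention-revealing operation instead of Clear() plus repeated Add() calls.
    public void ReplaceProjectVariables(IEnumerable<ProjectVariable> variables)
    {
        _projectVariables.Clear();
        _projectVariables.AddRange(variables);
    }
}

// Value object: no identity, immutable after construction.
public class ProjectVariable
{
    public ProjectVariable(string key, string value, IEnumerable<string> scopes)
    {
        Key = key;
        Value = value;
        Scopes = new List<string>(scopes).AsReadOnly();
    }

    public string Key { get; private set; }
    public string Value { get; private set; }
    public IReadOnlyList<string> Scopes { get; private set; }
}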
I'm a novice trying to wrap my head around MVVM. I'm trying to build something and have not found an answer on how to deal with this:
I have several models/entities, some of which have logical connections and I am wondering where/when to bring it all together nicely.
Assume we have a PersonModel:
public class PersonModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    ...
}
And a ClubModel:
public class ClubModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    ...
}
And we have MembershipModel (a Person can have several Club memberships):
public class MembershipModel
{
    public int Id { get; set; }
    public int PersonId { get; set; }
    public int ClubId { get; set; }
}
All these models are stored somewhere, and the models are persisted "as is" in that data storage.
Assume we have separate repositories in place for each of these models that supplies the standard CRUD operations.
Now I want to create a view model to manage all Persons, e.g. renaming, adding memberships, etc. -> PersonManagementViewModel.
In order to nicely bind a Person with all its properties and memberships, I would also create a PersonView(?)Model that can be used in the PersonManagementViewModel. It could contain, e.g., view-relevant properties and also the memberships:
public class PersonViewModel : PersonModel
{
    public Color BkgnColor { get { return SomeLogic(); } }
    public IEnumerable<MembershipModel> Memberships { get; set; }
    ...
}
My question here is, how would I smartly go about getting the membership info into the PersonViewModel? I could of course create an instance of the MembershipRepo directly in the PersonViewModel, but that seems not nice, especially if you have a lot of Persons. I could also create all repositories in the PersonManagementViewModel and then pass references into the PersonViewModel.
Or does it make more sense to create another layer (e.g. "service" layer) that returns primarily the PersonViewModel, therefore uses the individual repositories and is called from the PersonManagementViewModel (thus removing the burden from it and allowing for re-use of the service elsewhere)?
Happy to have conceptual mistakes pointed out, or to get pointers for further reading.
Thanks
You are creating a separate model for each table, I guess. That in itself does not matter, but your models are fragmented. You could consider putting related data together using an Aggregate Root, with a repository per aggregate root instead of per model. This concept is discussed under DDD. But as you said, you are new to MVVM; there is already a lot to learn, and involving DDD at this stage will only complicate things.
If you decide to keep things as they are, the best and quickest option I can suggest is what you are doing now: get the model instance from the data store in the view model (or wherever) and map it somehow. Tools like AutoMapper are good, but they do not fit every situation. Do not hesitate to map by hand if needed. You can also use a mixed approach (AutoMapper + mapping by hand) to simplify things.
About a service layer: sure, why not. It totally depends on you. If used, this layer typically contains your business logic, mapping, formatting of data, validations, etc. Again, each of those things is up to you.
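As a rough sketch of the service-layer option (the repository interfaces and their methods here are assumptions, not part of the question):

// Hypothetical service that composes the view model from the individual repositories.
public class PersonService
{
    private readonly IPersonRepository _persons;          // assumed abstraction over the Person store
    private readonly IMembershipRepository _memberships;  // assumed abstraction over the Membership store

    public PersonService(IPersonRepository persons, IMembershipRepository memberships)
    {
        _persons = persons;
        _memberships = memberships;
    }

    public PersonViewModel GetPerson(int personId)
    {
        PersonModel person = _persons.GetById(personId);

        // Map by hand (or with AutoMapper for the flat part) and attach the memberships.
        return new PersonViewModel
        {
            Id = person.Id,
            Name = person.Name,
            Memberships = _memberships.GetByPersonId(personId) // assumed to return IEnumerable<MembershipModel>
        };
    }
}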
My suggestions:
Focus on your business objectives first.
Design patterns are good and helpful. They are the distilled experience of many exceptionally capable developers solving specific problems. Do use them, but do not stick to them unnecessarily (see the suggestion above). In short, avoid over-engineering. Design patterns are created to solve specific problems; if you do not have that problem, do not mess up your code with an unnecessary pattern.
Read about Aggregate Root, DDD, Repository etc.
Try your best to avoid Generic Repository.
I am currently at the beginning of developing a large web application mainly containing an Angular SPA and an OData WebAPI that has access to a backend layer.
We're at an early stage and have begun to implement the first classes including a Model.dll that is in a common namespace so that it can be accessed by all layers.
We are now discussing those DTOs within the model. Some say that using interfaces is absolutely necessary, so the code would look like this:
namespace MySolution.Common.Model
{
    public interface IPerson
    {
        int Id { get; set; }
        string Name { get; set; }
        ...
    }
}

namespace MySolution.Common.Model
{
    public class PersonDTO : IPerson
    {
        public int Id { get; set; }
        public string Name { get; set; }
        ...
    }
}
So that's it. Just simple DTOs with no more intelligence.
I am now asking myself if this is really a good approach, because I don't see the necessity of using the interface here.
What are the advantages of this? Testability was mentioned, but is it even necessary to test DTOs? Dependency injection should also not be the point here.
Any enlightenment would be very helpful. In the end, learning new stuff and approaches is always good...
DTOs transfer state - that's it. Injecting them via a container or mocking them for testing seems pointless (if that's the motivation) and totally unnecessary. Don't do it.
As an example, further to my comment above:
interface IRepo
{
    Person GetPerson(int id);
}

public class DbRepo : IRepo
{
    public Person GetPerson(int id) { /* get person from the database */ }
}

public class FakeRepo : IRepo
{
    public Person GetPerson(int id)
    {
        return new Person { Id = id, Name = "TestName" };
    }
}
You would use a FakeRepo with some mock objects for testing purposes.
I am in a situation where I'm writing an API that should be loosely coupled, because I may adapt any of its parts to behave differently, such as changing the storage or altering a number of parameters of a request, so it can take on new behavior without affecting what already exists.
With this in mind, it is valid to have an interface for the DTO, because later on it could change its properties to carry more data, and you only have to implement the abstraction where the new DTO implementation is used, whether that is to map the new parameters or to use it in a service to register a record.
Then you bind the interface (abstraction) to the new DTO implementations and to the places that need modification.
That way you don't change the behavior of your program and don't make alterations to what already exists.
So you also have to think about how your API will be used.
DTOs may inherit properties from multiple interfaces, and using interfaces can reduce the casting of data between components and modules, especially at the boundaries of a single solution.
Also, rules are often applied to interfaces, so DTOs should use them.
I'm new to coding for web, so this may be going the wrong direction, but I've got a DTO from a database, and I want to expose different bits of it for different views. I've encoded this using interfaces on the single DTO (using conditional serialization to ensure only the bits I want are exposed).
I'm also using interfaces on incoming data structures so I can use the same DTO, but mock it in my unit tests.
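As a rough illustration of that approach (all names below are made up), the idea is a single DTO exposed to different views through progressively narrower interfaces:

using System;

public interface IPersonSummary
{
    int Id { get; }
    string Name { get; }
}

public interface IPersonDetails : IPersonSummary
{
    DateTime BirthDate { get; }
    string Address { get; }
}

// One DTO; a view (or serializer contract) that only knows IPersonSummary
// never sees BirthDate or Address.
public class PersonDto : IPersonDetails
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime BirthDate { get; set; }
    public string Address { get; set; }
}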
I've got a library that handles the typical add/edit/update methods for an application. I'm wondering what design pattern covers naming the POCO classes that bundle the data sent back and forth. For example, one class might be similar to another, but needs to include a few extra members when being sent back to the application versus the data that is sent in to be saved.
For example, this might be a POCO class that I would populate in a library method before sending it back to the app to be displayed/consumed.
public class CorporateDeptAssignmentInfo
{
    public int Id { get; set; }
    public int DivisionKey { get; set; }
    public int DeptKey { get; set; }
    public int Count { get; set; }
    public string DeptName { get; set; }
    public DateTime Corp_dept_from_date { get; set; }
    public DateTime Corp_dept_to_date { get; set; }
}
On the other hand, if I'm adding a new record, I might not want to populate all members.
I could either (a) make some members nullable or (b) create a new POCO class with a slightly different name for use with calling an update/add library method.
Are there any design patterns that mention the use of POCO classes in either of the above ways?
It's either an Adapter, Decorator, or Facade. That's where I think it's heading anyway. You are looking for a way to present something, with modifications/simplification.
I don't know any specific design pattern for this scenario except the Data Transfer Object, but if your domain object actually does allow nullable values, why are your POCOs not designed in the same way?
Personally, I would make two POCO classes if the loading and the adding/updating processes take different data. Both of these classes usually have an ID property that refers back to the same domain object. Sometimes it's also useful if one of these classes encapsulates the other POCO, but I don't have this kind of situation often in my code.
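For instance, a minimal sketch of the two-class option, reusing the property names from the question (the second class name is only illustrative):

using System;

// The read/display shape is the CorporateDeptAssignmentInfo class from the question.
// A separate write shape carries only what an add/update call needs:
public class CorporateDeptAssignmentRequest
{
    public int DivisionKey { get; set; }
    public int DeptKey { get; set; }
    public DateTime Corp_dept_from_date { get; set; }
    public DateTime Corp_dept_to_date { get; set; }
}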
If you have further questions, feel free to ask.
I am looking for a solution to the following problem:
I have many tables that differ from one another in only a few columns, or none at all (why this happens is not the point, as I did not design the database, nor can I change it).
Example:
Table User: columns {name, surname, age}
Table OldUser: columns {name, surname, age, lastDateSeen}
etc.
Is there any way to tell Entity Framework 4.0 in Visual Studio to have these entities extend a base class consisting of _name, _surname and _age fields and their corresponding properties, so that I can use that class for batch jobs in the code?
My current solution is to create this class myself and use converters to pass the values from the persistent objects to its objects. It works, but it is not elegant and I don't like it.
I come from a Java/Hibernate environment where this is basic functionality.
(For future reference: can the same thing be done if I want the classes to implement an interface?)
Thanks in advance.
Since your RDBMS (at least SQL Server 2008 and older) doesn't allow for table inheritance, I would recommend that you do not use inheritance in the DB model in C#. This is especially recommended when you cannot control the design of the tables.
Instead, use an interface if you actually have clients of those classes who will benefit from the abstraction. Not being able to control the design of the DB makes this less valuable, though, because the DB designer could change the tables and thereby make your EF classes no longer implement the interface:
public interface IUser {
    string Name { get; }
    // etc...
}

public class User : IUser {
    public string Name { get; set; }
    // etc...
}

public class OldUser : IUser {
    public string Name { get; set; }
    // rest of IUser
    public DateTime? LastSeenOn { get; set; }
}